Copyright is the only thing protecting us from getting absolutely fucked even harder by the rich than we already are, yes.
Do you want corps just stealing every new idea and product, cloning it, and muscling out the original inventor without paying them a dime? Because abolishing copyright entirely would be an excellent way to do that.
I’m pretty sure he was agreeing with you…?
The problem is that as far as I’m aware there’s literally zero evidence of this doomsday scenario you’re describing ever happening, despite publicity rights being a thing for over 50 years. Companies have zero interest in monetizing publicity rights to this extent because of the near-certain public backlash, and even if they did, courts have zero interest in enforcing publicity rights against random individuals to avoid inviting a flood of frivolous lawsuits. They’re almost exclusively used by individuals to defend against businesses using their likeness without permission.
Holy fuck, how do you not see the difference between “random nobody does an impression for free while hanging out with their pals” and “multi-billion-dollar startup, backed and funded by one of the richest companies on earth, uses an impression as a key selling point for their new flagship product that they are charging access for and intend to profit from”
There’s something primal about making something with your own hands that you just can’t get with IT. Sure, you can deploy and maintain an app, but you can’t reach out and touch it, smell it, or move it. You can’t look at the fruits of your labor and see it as a complete work instead of a reminder that you need to fix this bug, and you have that feature request to triage, oh and you need to update this library to address that zero day vulnerability…
Plus, your brain is a muscle, too. When you’ve spent decades primarily thinking with your brain in one specific way, that muscle starts to get fatigued. Changing your routine becomes very alluring, and it lets you exercise new muscles, and challenge yourself to think in new ways.
In what world is OpenAI open source?
After reading this article that got posted on Lemmy a few days ago, I honestly think we’re approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding it more data, but that’s not really feasible. We’ve already scraped pretty much the entire internet to get to where we are now, and it’s nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.
We also can’t ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don’t have AI explicitly curate its own dataset, it’s highly likely going to be a problem in the near future anyway, thanks to the tide of AI-generated spam. I have a feeling the AI companies signing licensing deals with sites like Reddit are going to find they mostly want data from 2022 and earlier, similar to manufacturers looking for low-background steel to make particle detectors.
We also can’t just throw more processing power at it, because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it’s just being masked by VC money subsidizing the cost). And even if cost weren’t an issue, we’re starting to run into hard physical limits, like waste heat, on how much faster we can run current technology.
So we already have a pretty good idea what the answer to “how good AI will get” is, and it’s “not very.” At best, it’ll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It’s marginally better than the old memes about “I trained an AI on X episodes of this show and asked it to make a script,” but not by much.
As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough–something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI, 2) The AI companies hit a brick wall, and the flow of VC money gradually slows down, forcing the companies to raise prices and cut costs, resulting in a product that’s even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general–the only people still using it at large will be the ones trying to push disinfo (either in politics or in Google rankings) along with the odd person playing with image generation.
In the meantime, the people I’m most worried for are the ones working for idiot CEOs who buy into the hype, but most of all I’m worried for artists doing professional graphic design or video production–they’re going to have their lunch eaten by Stable Diffusion and Midjourney taking all the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I’ve heard that pays well~
The problem is that there’s no incentive for employees to stay beyond a few years. Why spend months or years training someone if they leave after the second year?
But then you have to question why employees aren’t loyal any longer, and that’s because pensions and benefits have eroded, and your pay doesn’t keep up as you stay longer at a company. Why stay at a company for 20, 30, or 40 years when you can come out way ahead financially by hopping jobs every 2-4 years?
It makes sense to judge how closely LLMs mimic human learning when people use that comparison as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.
But when it’s pointed out that LLMs don’t learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.
Who even knows? For whatever reason the board decided to keep quiet, didn’t elaborate on its reasoning, let Altman and his allies control the narrative, and rolled over when the employees inevitably revolted. All we have is speculation and unnamed “sources close to the matter,” which you may or may not find credible.
Even if the actual reasoning was absolutely justified–and knowing how much of a techbro Altman is (especially with his insanely creepy project to combine cryptocurrency with retina scans), I absolutely believe the speculation that the board felt Altman wasn’t trustworthy–they didn’t bother to actually tell anyone that reasoning, and clearly felt they could just weather the firestorm up until they realized it was too late and they’d already shot themselves in the foot.
…So your metric of “too much AI safety” is that it won’t let you fuck the fish…?
The speculation I heard in the Ars Technica article is that the board was unhappy with how quickly he was pushing to commercialize OpenAI, and they were wary about all the AI side hustles he was starting, including an AI chip company to compete with Nvidia.
Not OP, but I think the point they’re making is that LTT screwed up the video, and that the drama sparked from LTT’s screwup gave Billet a lot of publicity they wouldn’t have had otherwise.
Personally, I’d trade the publicity for my only working prototype and $2,000 GPU back and a video that didn’t shit on me, but if you believe any publicity is good publicity…
It’s very user friendly in terms of tooltips, and if you don’t make deliberately bad choices during level up (e.g. taking a feat that gives you a cantrip from the Wizard class… that scales off your INT score… while playing a Barbarian with 8 intelligence that can’t cast spells while raging) it’s fairly difficult to make an unplayably bad character.
There are a few cases where some general knowledge of D&D is helpful, such as knowing never to take True Strike (it’s literally worse than just attacking twice), and some knowledge of good builds is useful since it helps guide what you take when you level up. That said, there are also entire categories of actions in BG3 that don’t really have an equivalent rule in the 5e TTRPG, such as weapon proficiency attacks, so online cookie-cutter builds don’t capture the full extent of what you can do.
A moment of silence in honor of the sacrifice of Kent Shocknek, taken from this world too early while trying to jump his car o7
Maybe consider digitizing that cassette, or at least listening to it to make sure it’s still usable (assuming you can stomach it). Cassette media degrade over time, and it’s quite possible that microcassette could be reaching the end of its usable life: https://en.m.wikipedia.org/wiki/Preservation_of_magnetic_audiotape
Yeah, that’s what happens when the LLM they use to summarize these articles strips all nuance and comedy.