

Really took the wind out of my satirical comment that Musk wanted to bring back the Pinto.
That’s kind of inevitable right now. The most important thing is making sure Trump and the GOP get the blame. Well, them or Russia.
Actually, the best target isn’t the government. It’s big business.
Do harm to others before others harm you
I definitely would’ve considered Tesla as my first EV, but as of now they’re dead to me. If he were completely gone, that would actually become a selling point for me.
“Merit-based” hiring.
I imagine an ad blocker could prevent this data from going out, unless the hosts are generic and the game/app simply won’t work without allowing those connections. I’ve never seen an app be [obviously] broken by my ad blocker, but I am interested in running a similar experiment to see just how much data is going out.
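Something like this would be a starting point for that experiment: a rough sketch, assuming Python with scapy installed and root privileges to sniff. It only watches plain DNS, so anything using DNS-over-HTTPS or hardcoded IPs would slip past it.

```python
# Log every DNS query leaving the machine so you can see which
# hosts a game/app phones home to. Needs root and `pip install scapy`.
from collections import Counter

from scapy.all import DNSQR, sniff

queries = Counter()

def log_query(pkt):
    # Only count packets that actually carry a DNS question.
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        queries[name] += 1

# Capture DNS traffic for five minutes, then dump a summary.
sniff(filter="udp port 53", prn=log_query, timeout=300)

for name, count in queries.most_common():
    print(f"{count:5d}  {name}")
```

From the query counts alone you can usually tell the ad/telemetry hosts apart from the ones the app actually needs.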
So rather than build off their successes, we’re just gonna put on blinders and hope they don’t completely leave us in the dust?
“Waaaaah” you say?
Does Lemmy have the equivalent of Reddit flair? That made some posts easier to avoid or focus on than others.
It’s up to the user to understand it’s a fantasy and not reality.
I believe even non-AI media could be held liable if it encouraged suicide. It doesn’t seem like much of a leap to say, “This is for entertainment purposes only,” and follow with a long series of insults and calls to commit suicide. If two characters are talking to each other and one encourages self-harm, then that’s different: the encouragement is directed at another fictional character, not the viewer.
Many video games let you do violent things to innocent NPCs.
NPCs, exactly. Do bad things to this collection of pixels, not people in general. The immersion factor would also play in favor of the developer. In a game like Postal you kill innocent people but you’re given a setting and a persona. “Here’s your sandbox. Go nuts!” The chat system in question is meant to mimic real chatting with real people. It wasn’t sending messages within a GoT MMO or whatnot.
LLMs are quickly going to be included in video games, and I would rather not have safeguards (censorship) just because a very small percentage of people with clear mental issues can’t deal with them.
There are lots of ways to include AI in games without it generating voice or text. Even so, that’s going to be much more than a chat system. If Character AI had their act together, I bet they’d even offer this same service as voice chat. This service was making the real world the sandbox!
The failure is the reasonable scenarios where the fantasy needs to end. AFAIK the only other ways this could’ve ended without harm would be if the kid had just decided to stop chatting (highly unlikely) or if someone had looked over his shoulder at what was being typed (almost as unlikely). As others have said, it’s hard to know what the AI’s thought process is, or to predict how it would react to a situation, without testing it. So for all they know, the bot could have been the one to say, “Let’s die together,” in the first place.
I haven’t used it nearly as much as VirtualBox, but Boxes (the Flatpak) is definitely a breeze to use. It uses KVM under the hood, I think. If your use cases are complicated it might abstract away too much, though.
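If you want to check the KVM part, here’s one rough way, assuming the libvirt Python bindings are installed on the host. Note the Flatpak bundles its own sandboxed libvirt session, so this may not see Boxes’ actual VMs; it just shows whether KVM is available to a session like the one Boxes uses.

```python
# Ask the local per-user libvirt session which hypervisor it exposes.
# Needs `pip install libvirt-python` and a session daemon available.
import libvirt

conn = libvirt.open("qemu:///session")  # per-user session, like Boxes uses
print("Driver:", conn.getType())        # typically "QEMU"

# The capabilities XML advertises supported domain types;
# a 'kvm' entry means hardware-accelerated virtualization is available.
print("KVM available:", "domain type='kvm'" in conn.getCapabilities())
conn.close()
```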
I’m not saying flaws make them useless; I’m saying the flaws mean they shouldn’t be a single point of failure.
The context size wouldn’t really have mattered because the bot was invested in the fantasy. I could just as easily see someone pouring their heart out to a bot about how they want to kill people, phrased tactfully enough that the bot just goes along with it and essentially encourages violence. Again, the bot won’t break character or make the connection that this isn’t just make-believe and could lead to real harm.
This whole “It wasn’t me, it was the bot” excuse is a variation on one many capitalists have used before. They put out a product they know little about, but they don’t think too hard about it because it sells. Then hundreds of people get cancer or get poisoned, and at worst there’s a fine but no real blame or jail time.
Character AI absolutely could create safeguards that would avoid harm, but instead it seems they’re putting maximum effort into doing nothing about it.
Folks should agree and understand that no one can be held responsible for what the AI outputs.
That would be a dangerous precedent. I think a lot of us have seen examples of AI not just making stuff up but having logical flaws. I could easily see an AI being put in charge of creating food recipes and saying something like, “This recipe does not contain peanuts, so no warning label is required,” while not understanding that peanut butter is made from peanuts and putting it into the recipe. Shit like this has been tried before, where companies wanted to cut corners by letting software perform all safety checks with no hardware or human safeguards.
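A toy illustration of that failure mode (all names hypothetical; the point is the missing mapping, not any real system):

```python
# A naive automated "safety check" with no human backstop:
# flag allergens by looking up each ingredient in a fixed table.
ALLERGEN_TABLE = {
    "peanuts": {"peanut"},
    "milk": {"dairy"},
    "eggs": {"egg"},
    "wheat flour": {"gluten"},
    # "peanut butter" never got an entry -- the system doesn't know
    # it's made from peanuts, so the gap is invisible to the check.
}

def required_warnings(ingredients):
    warnings = set()
    for item in ingredients:
        warnings |= ALLERGEN_TABLE.get(item, set())
    return warnings

recipe = ["wheat flour", "sugar", "peanut butter", "eggs"]
print(required_warnings(recipe))  # gluten and egg, but no peanut warning
```

Without a human or an independent second check in the loop, that one gap goes straight onto the label.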
It doesn’t even have to be a logical error. Companies will probably just tell the AI models that their primary function is to generate revenue, and that will lead to decisions that maximize profits but also cause harm.
It told him not to, but it also failed to drop the fantasy and recognize “come home” as a euphemism. Almost any human would have put a full stop to the interaction, and if they didn’t, they should be charged too.
I don’t think this should be legal (philosophically speaking). I think if a non-profit wants to sell IP or equipment, they should be required to auction it off. If they’re really lucky, the for-profit company would still get everything it needs.
If anyone knows of a player I can mount vertically on the wall, let me know and I’ll buy it today. I see that kind of thing for CDs. It’s still just a spinning disc!
Yes, the scraper is going to mindlessly gobble up information. At best they’d expend more resources later trying to determine the value of the content, but how do you really do that? Mostly I think they’re hoping the good will outweigh the bad.
Wow! Pretty much the “who’s who” of companies I don’t trust!