Also, the rise of containerised software makes it easier to self-host, cross-platform, what would once have been cloud-only.
Depending on private companies for free speech is bad for free speech in and of itself. So either course has negatives, which means the course with the lesser negative outcomes is best. If they over-moderate, they lose users. If they under-moderate, they face fines. I’m sure market forces will mean they do whatever is most profitable.
Holding social media companies responsible for the content they host is a better solution in my view. We hold newspapers responsible; why not social media? Yes, moderation is expensive, but they are wildly profitable, Musk aside.
They don’t need to moderate everything, as the content volume is high, but they certainly could manually moderate all content that reaches a certain threshold. They choose not to, and hide behind their users’ sharing as a reason.
I agree, but I think it is more complex than that. There are limits to free speech already. I agree that no one country should be able to censor others, but what about content that is illegally produced in that country?
So if terrorist training videos were made in Australia, could banning them from distribution mean Australia could prosecute Twitter for distributing them? What about CSAM? What if China prosecutes over information about Tiananmen?
So objectively there are things some countries would want banned, but not all, and some that all might agree to ban. Classifying it might help, but might that be more of an invasion of privacy? The web is built on lots of open protocols that assume good actors and no malicious intent. We are now adding protocols that increase privacy and security on top. Even something like the fediverse is a good example of the trade-off between being public, being anonymous, and being private. You can’t have it all.
Nine times out of ten, password creation is one click with no prompt or indication that it’s for any particular vault. Not intuitive at all. I manage it, but less techy family or work colleagues? No, they don’t.
I also don’t see an option to save to both at once, so it’s hard to share between users that have different access levels when there is crossover.
Perhaps I’m missing something. My personal use case is:

- Personal passes
- Family passes
- Family passes, kids’ access
- Work passes, all
- Work passes, personal
- Work passes, admin (higher security)
- Work passes, customer facing
- Work passes, clinical
So if I use a service at work but also on my kids’ iPad, I need to create two separate entries manually. I don’t want my work to have access to the kids’ vault, and likewise I don’t want my kid to have access to the work vault. That’s just an easy example. There are many more cases like that for different work users not having cross access with other users. So it defaults to their personal account, but they need access to joint accounts or department accounts. When they save something new, it saves to their personal vault.
Yes, I use them, but it doesn’t work smoothly. I cannot easily add a password to my organisations from my personal account within a browser, even when setting it up for the first time. If someone shares an organisation vault with me, it can easily be accessed.
I find password sharing between family or others poor on Bitwarden. It segments all the password vaults and then defaults all new entries into one, which is very hard to change. It would be better to be able to choose zones or similar for sharing, so I could have a personal vault, a family vault, and a work vault and be able to access all of them seamlessly. I would own them all but be able to share as appropriate.
While this is possible to do, it’s not seamless.
Broken clock.
Same as the links they used to use as a signal for trust. Again, it was about popularity.
Trust here does not mean trustworthy, but rather a higher likelihood of being the answer that is sought. Calling it trust or authority is just marketing.
I wonder if that’s why Reddit data was chosen. Upvotes could be used as a signal for trust. What they forget is that joke comments often get upvotes.
Because the community response was negative. It didn’t end up there by mistake. It was put there.
Employees do testing, already covered by an NDA. Content creators do publicity. If they are restricted from any negative publicity, then they are not reliable, and it’s dishonest.
So, even at full release, there could be bugs. That makes the suppression of honest opinions worse. If people didn’t call out unfinished projects, they would not get fixed. If they want preorders, they should stop making buggy messes.
If it’s not ready to play, don’t do a playtest. They want to have their cake and eat it. The playtest is for publicity, not testing.
Their motivation is clear: they are worried that AI, including LLM processing, will mainly run on Linux and they’ll be left behind. They are just following where they think the money will be. It just happens to be good for Linux and consumer choice, but that’s a side effect, not the reason.
It’s community based, so his opinion could be ignored, unlike when he was on a board and his opinion was ignored.
I’m surprised they made 440m. However, investing in R&D is not unusual, and this amount is not a huge investment for them based on overall revenue.
Which leads to less money. I’d prefer a few failed games so the industry learns. Fun games sell, not microtransactions or half-baked shovelware. Some strike it lucky with microtransactions, but only if the game is good.
They don’t need to fine them in every country, just in Germany. If they pull out of Germany, they need to pull out of the EU, and they are not doing that. They will make their document open, for real.
Lol, if you mean Facebook’s AI will skew more towards the views of American Facebook users, I’d say that’s a win for Europe. It will make the AI less valuable, creating a gap in the market for a better AI that can reflect European values, or American, or both.
AI does not need infinite data. They can easily licence that amount of content; they are just trying to do it cheaply with user content.
I fully expect use for AI training to become a standardised part of licencing for media and content going forwards. For a band, singer, or author, it may be that they only get a small amount for the use of their content, but it won’t be stolen. There is minimal value in any one piece of content; the value is in the aggregate of lots of data.
Digitised books out of copyright have more archaic language, but I expect we will see lots of out-of-copyright media being used as well. Media organisations that make movies and TV shows and publish newspapers and magazines also have a trove of content.