• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • If you are at the point where you are having to worry about government or corporate entities setting traps at the local library? You… kind of already lost.

    What about just a blackmailer assuming that anyone booting an OS from a public computer has something to hide? Then they have write access and there’s no defense, and the trap doesn’t have to be everywhere, because people seeking privacy this way have to pick new locations each time. An attack like that wouldn’t need to target any particular person.


  • this will force us humans to go actually outside, make friends, form deep social relationship, and build lasting, resilient communities

    There is no chance it goes that way. How is talking to people outside even an option for someone used to just being on the internet? Even if the content gets worse, the basic mechanisms that keep people scrolling still function, while the physical and social infrastructure necessary for in-person community building is nonexistent.


  • Privacy means personal agency: freedom from people (individuals, companies, or the government) controlling you with direct or implied threats, or with subtler manipulation, which they can do because they have your dox and because information is power.

    A lack of privacy adds fuel to the polycrisis: if we can’t act in relative secrecy, we effectively can’t act freely at all, and nothing can challenge whoever runs the panopticon.


  • The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.

    The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that is a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used precisely to make RNG deterministically repeatable. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and likely can’t if you don’t, but that doesn’t affect how deterministic the system is.
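
    A minimal sketch of that distinction, using a toy stand-in for the model (the distribution, probabilities, and function names are made up for illustration, not any real LLM API): the distribution for a given input is fixed, greedy decoding off it involves no randomness at all, and even sampling becomes repeatable once the seed is fixed.

    ```python
    import random

    # Toy stand-in for an LLM's next-token step. In a real model these
    # numbers come from fixed weights, so the same input always yields
    # the same distribution; the values here are invented.
    def next_token_distribution(prompt: str) -> dict:
        return {"yes": 0.7, "no": 0.2, "maybe": 0.1}

    def greedy_decode(prompt: str) -> str:
        # Greedy decoding: always take the most probable token.
        # No randomness anywhere, so the output is fully deterministic.
        dist = next_token_distribution(prompt)
        return max(dist, key=dist.get)

    def sampled_decode(prompt: str, seed: int) -> str:
        # Sampling adds randomness on top of the distribution, but a
        # seeded RNG makes even that deterministically repeatable:
        # same prompt + same seed -> same token, every run.
        rng = random.Random(seed)
        dist = next_token_distribution(prompt)
        tokens = list(dist)
        weights = list(dist.values())
        return rng.choices(tokens, weights=weights, k=1)[0]

    prompt = "Is the model deterministic?"
    assert greedy_decode(prompt) == greedy_decode(prompt)
    assert sampled_decode(prompt, seed=42) == sampled_decode(prompt, seed=42)
    ```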

    The ‘so what’ is that trying to prevent certain outputs based on moral judgments isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, and it’s doubly impossible given that you can’t.

    The impossibility of defining morality in precise terms, or even agreeing on what correct moral judgment is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that successfully reduces how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and that’s a valid complaint, because these systems are not equally dangerous no matter how they are made or used.
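
    To make the cable-insulation analogy concrete, here is a deliberately crude sketch of that kind of partial measure (the phrase list and names are hypothetical, not any real moderation API): no complete theory of harm is required for it to reduce some harm, just as insulation doesn’t prevent every shock.

    ```python
    # Hypothetical, deliberately crude output filter. The blocked
    # phrases are made-up stand-ins for one narrow class of bad advice.
    BLOCKED_PHRASES = (
        "touch the live wire",
        "bypass the ground wire",
    )

    def moderate(model_output: str) -> str:
        # Withhold the output if it contains a blocked phrase,
        # otherwise pass it through unchanged.
        lowered = model_output.lower()
        if any(phrase in lowered for phrase in BLOCKED_PHRASES):
            return "[output withheld by safety filter]"
        return model_output

    print(moderate("To test the outlet, touch the live wire."))  # withheld
    print(moderate("Turn off the breaker before working."))      # passes
    ```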


  • I did all my transportation and shopping with a mountain bike for a year, and it’s kind of difficult on snow and ice; I fell over a few times. The trick is to never turn at all on that stuff, but it’s still hard. The cold also makes the oil in the mechanisms work worse, so you need special oil. My hands got very cold holding the handlebars; you have to find a balance between gloves that hold warmth and block the wind, and gloves that leave you enough dexterity for the brakes and shifters.