• 3 Posts
  • 94 Comments
Joined 1 year ago
Cake day: June 4th, 2023

  • If you buy your phone unlocked, you can get Red Pocket, which is extremely cheap compared to most postpaid plans: ~5 GB of data and unlimited everything else for $20 a month on AT&T’s network. And if you go to Europe, you can just buy a cheap SIM there and pop it in.

    If you’re not picky about the phone, my last two phones were under 300 USD; the first lasted 4 years, and I’m about 6 months into the second. Honestly, there’s not much I feel I’m missing, except spending way more money.




  • I have always felt that kids get out of education what they put in and how interested they are in actually learning. I also think there are some benefits to learning how to manage technology, since it’s likely to come up after high school too.

    I somewhat disagree with the points about learning more by just talking to an AI, both because I tend to get wrong answers or missing context in my own AI testing, and because I think I needed to learn some things I wasn’t personally interested in.

    Today I don’t have much opportunity to take classes beyond The Great Courses and LinkedIn Learning, and unfortunately much of the newer content is more like a curated YouTube playlist than a traditional course: mostly superficial overviews intended more for entertainment than for learning details.

    YouTube on the other hand is all over the map and you have to know what to search for.

    I think part of the experiment’s value was getting the kids to review their notification settings and suppress things they weren’t interested in. Personally, I think having phones in airplane mode or off during class is probably the best plan. Save the notifications for study hall, lunch, the bus ride, and other free time.



  • Ehh. That’s like accident billboards. I maintain that most people don’t know they can block ads, and a large share of those who have heard of it think it’s too complicated for them.

    With ad blocking I have a small tension: if I know a sort of thing exists, I’ll presumably find it when I search for it, so I don’t want another vacuum ad.

    If I don’t know something exists then I have to stumble on it somehow.

    The bigger problem would be if they didn’t block their own ads. I honestly didn’t even know they ran ads, so my blocking, which they’re apparently part of, is working.


  • I think I’ve mostly moved to Kagi, because someone needs to be incentivized to actually focus on search, not ads. That said, the annual Ultimate plan is also good bang for the buck because you get access to multiple AI models.

    That said, I so far continue to be mostly underwhelmed by AI except for basic starting points on scripts or for games like D&D.




  • It’s also the anti-commodity stuff IP law has been enabling. If Hershey makes crap chocolate, there’s little stopping you from buying Lindt, say. But if Microsoft makes a bad OS, there’s a lot stopping you from switching to Linux or whatever.

    What’s worse is stuff like DRM and computers getting into equipment where you could otherwise use any of a bevy of products. Think ink cartridges.

    Then there are the secret formulas, like transmission fluid now, where Honda, say, states in the manual that you have to use Honda fluid for the transmission to keep working. I don’t know if it’s actually true, but I’m loath to run the 8,000 USD experiment on my transmission.

    You’d think the government could mandate standards, but we don’t have anything like that.



  • > Yes definitely. Many of my fellow NLP researchers would disagree with those researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).

    I’m not sure what you’re saying here - do you mean you do or don’t think LLMs are “stochastic parrot”s?

    In any case, the reason I would care about philosophers’ opinions on LLMs is mostly that LLMs are already making “the masses” think they’re potentially sentient and/or deserving of personhood. What’s more concerning is that the academics who sort of define what thinking even is seem confused by LLMs, if you take the “stochastic parrot” POV. This eventually has real-world effects; it might take a decade or two, but these things spread.

    I think this is a crazy idea right now, but going into the future I think we’ll eventually need something like a TNG “Measure of a Man” trial for some AI, and I’d want to get that sort of thing right.



  • I think it’s very clear that this “stochastic parrot” idea is less and less accepted by researchers and philosophers, though maybe that’s only in the podcasts I listen to…

    > It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt

    I think we need to be careful assuming we understand what human knowledge is and the connotations of the word “sense” there. If you mean GPT-4 doesn’t have knowledge the way humans do, like a car doesn’t have motion the way a human does, then I think we agree. But if you mean that GPT-4 cannot reason and access and present information, that’s just false on the face of simply using the tool, IMO.

    It’s also untrue that it’s predicting words: it’s predicting tokens, which are more like concepts than words, so I’d argue it’s already closer to humans. To the extent it is just predicting stuff, that really calls into question the value of most of the school essays it writes so well now…
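    To make the token point concrete, here’s a minimal sketch of greedy subword tokenization with a toy, made-up vocabulary (real tokenizers like BPE are learned from data; the `TOY_VOCAB` set and `tokenize` helper below are purely illustrative, not any actual model’s tokenizer): a single word gets split into several sub-word tokens rather than treated as one unit.

```python
# Toy illustration: LLM inputs are tokens (often word fragments), not words.
# TOY_VOCAB is a made-up vocabulary; real tokenizers learn theirs from data.
TOY_VOCAB = {"un", "believ", "able", "token", "ize", "rs", "!", " "}

def tokenize(text, vocab=TOY_VOCAB):
    """Greedy longest-match segmentation into known subword tokens."""
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest substring starting at i that is in the vocabulary.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character falls back to itself
            i += 1
    return tokens

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
```

    One word, three tokens; that granularity is part of why “it just predicts the next word” is an oversimplification.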


  • Well, LLMs can and do express confidence in colloquial terms. One thing we could do is build in some idea of how good the training data is in a given situation; LLMs already seem to know they aren’t up to date and only know things up to a certain date. I don’t see why this couldn’t be expanded so they’d say something much like many humans would, i.e., “I think bla bla, but I only know very little about this topic,” or “I haven’t actually heard about this topic; my hunch would be bla bla.”

    Presumably, as was said, other models with different data might have a stronger sense of certainty if their data covers the topic better, and the multi-cycle approach would be useful there.



  • I actually always thought there was a possibility that what happened to AOL might happen to Google, Facebook, etc.; people inherently don’t like extreme walled gardens and will splinter off into more open, more random, more innovative spaces. I think the pendulum had swung back to an early-AOL-like, very limited set of five or so big “platforms”, and the issues with that were seen again, just like in the late ’90s when people ditched AOL for “the real Internet” en masse.