• 3 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 15th, 2023





  • There are alternatives to Lemmy. Kbin, I’d argue, is superior in most respects. (Kbin is still obviously young and rough around the edges at times, though.)

    I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest. It’s easier to read stuff from other instances on kbin, and that gives it the appearance of more frequent and more current activity; Lemmy, even on “All/Active” or “All/Hot”, frequently drops 30 threads from one dude at the top of my feed, or gives me three pages of threads with no comments and 6 upvotes. So even though I hate how kbin handles viewing picture thumbnails (click on the post, wait for everything to load, click on the thumbnail, wait for it to load, chuckle, then x out of the picture to read the comments), I end up spending more time there.





  • after around five generations or so God would have to appear and kill a bunch of people once again, because apparently your descendants don’t believe in him anymore.

    Well, yeah. Dude vanishes for a thousand years, and I’m supposed to believe the stories of the people who did see his work (people who all died before my most distant traceable ancestor was even born) that were written down by obvious agenda-posters? Seriously?

    The quickest way to get more believers is just to show up and do a party trick every once in a while, but for some reason, God hasn’t done anything public and indisputable since cameras were invented. Weird for a guy who wants the whole world to worship him. All he’d have to do is have a booming voice, audible everywhere on the planet, say “By the way, I’m God, I exist, and [insert holy book] is the correct one, so y’all better get on that.” Only the hardcore contrarians would still be non-believers.






  • The quickest way I’ve found to separate the articles that are going to be meaningless waste-of-time fluff pieces from ones that might be informative is to find the verb in the headline.

    Is it something like “claims”, “calls for”, “praises”, “criticizes”, or “expects”? Fluff. If something deserving of a more concrete, direct verb had happened, the headline would have said so. Verbs like “slams” or “attacks” or “demands” are even worse; they’re aggressive and enthusiastic about their content but still can’t make the claim something actually happened or changed.

    If the verb is preceded by “could”, “might”, “maybe”, or similar, especially with regard to tech news, it’s also probably an empty slow-news-day article, but those words aren’t necessarily as hollow as the ones mentioned above. Sometimes they’ll contain interesting information about the current state of things, even if they’re just going to lead you on a merry speculation romp about the optimistic/horrifying future.
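    A rough sketch of that heuristic as an actual filter, just to make it concrete (the verb and hedge lists here are my own illustrative picks, and the sample headlines are made up):

    ```python
    # Toy headline triage based on the verb heuristic above.
    # The word lists are illustrative guesses, not a vetted taxonomy.
    FLUFF_VERBS = {"claims", "calls for", "praises", "criticizes", "expects",
                   "slams", "attacks", "demands"}
    HEDGE_WORDS = {"could", "might", "may", "maybe", "reportedly"}

    def classify_headline(headline: str) -> str:
        """Very rough guess at whether a headline is fluff, speculation, or news."""
        lowered = headline.lower()
        if any(verb in lowered for verb in FLUFF_VERBS):
            return "fluff: a reaction or opinion, nothing actually happened"
        if any(word in lowered.split() for word in HEDGE_WORDS):
            return "speculation: maybe useful background, probably a slow news day"
        return "worth a look: at least asserts a concrete event"

    # Made-up headlines:
    for h in ["Senator slams new broadband bill",
              "New chip could double battery life",
              "Vendor ships firmware update fixing overheating bug"]:
        print(h, "->", classify_headline(h))
    ```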


  • I think one of the big problems is that we, as humans, are very easily fooled by something that can look or sound “alive”. ChatGPT gets a lot of hype, but it’s primarily coming from a form of textual pareidolia.

    It’s hard to convince people that ChatGPT has absolutely no idea what it’s saying. It puts words together in a human-enough way that we assume it has to be thinking and has to know things, but it can’t do either. It’s not even intended to try to do either; that’s not what it’s for. It takes the rules of speech and a massive amount of data on which word is most likely to follow which other word, and runs with it. It’s a super-advanced version of a cell phone keyboard’s automatic word suggestions. Even correcting the punctuation of a complex sentence is too much to ask (in my experiment on this matter, it gave me incorrect answers four times, until I explicitly told it how it was supposed to treat coordinating conjunctions).
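    To illustrate what “which word is most likely to follow which other word” means, here’s a toy next-word suggester built from bigram counts; it’s the phone-keyboard version of the idea, nothing like ChatGPT’s actual scale or architecture, and the training text is obviously made up:

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny corpus, then suggest the most
    # frequent follower -- next-word statistics in their crudest form.
    training_text = ("the cat sat on the mat and the cat slept and the cat purred "
                     "while the dog sat on the rug")

    followers = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def suggest(word):
        """Return the word seen most often after `word`, or None if unseen."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("the"))  # 'cat' -- it follows 'the' three times in the toy corpus
    print(suggest("sat"))  # 'on'
    ```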

    And for most uses, that’s good enough. Tell it to include a few extra rules, depending on what you’re trying to create, and watch it spin out a yarn. But I’ve had several conversations with ChatGPT, and I’ve found it incredibly easy to “break”, in the sense of making it produce things that sound somewhat less human and significantly less rational. What concerns me about ChatGPT isn’t necessarily that it’s going to take my job, but that people believe it’s a rational, thinking, calculating thing. It may be that some part of us is biologically hard-wired to believe that; it’s probably the same part that keeps seeing Jesus on burnt toast.