• 4 Posts
  • 36 Comments
Joined 1 year ago
Cake day: June 18th, 2023






  • Refurbished ThinkPads are available in countries where Framework, System76, and Pine64 do not ship.

    Besides, ThinkPads are really well-built machines that perform well for everyday tasks at a fraction of their (or the aforementioned competition’s) original price.

    I love my two machines, both from before Lenovo took over completely. Their keyboards, port selection, and repairability are almost unmatched by today’s competition.



    • Windows 95, 98, 2000, XP, 7 spanning a decade and a half.
    • Ubuntu 10.04, up to the release where Unity became the default DE (11.04, I think). Went back to 10.04, as it was an LTS release.
    • Linux Mint Maya, chosen for Cinnamon, and it was terrible.
    • Fedora 16 to 25 or 26.
    • Linux Mint 19

    Been with Linux Mint ever since. It just works. LM19 was also around the time when I stepped into Apple’s walled garden with iOS and macOS.



  • I do not agree with @FiniteBanjo@lemmy.today’s take. LLMs, as they are used today, at the very least reduce the number of steps required to consume previously documented information. So they are solving at least one problem, especially on today’s Internet, where one has to navigate a cruft of irrelevant paragraphs and annoying pop-ups to reach the actual nugget of information.

    Having said that, since you have shared an anecdote, I would like to share a counter(?) anecdote.

    Ever since our workplace allowed the use of LLM-based chatbots, I have never seen them actually help debug any undocumented error or non-traditional environment/configuration. They have always hallucinated whenever I used them to debug such errors.

    In fact, I am now so sceptical of the responses that I just avoid these chatbots entirely and debug errors the “old school” way, with traditional search engines.

    Similarly, while using them to learn new programming languages or technologies, I always got incorrect responses to indirect questions, and I learn that they have hallucinated only after verifying the response through implementation. This defeats the entire purpose.

    I do try out the latest launches and improvements, as I expect the responses to get better eventually. Most recently, I tried GPT-4o when it was announced. But I still don’t find these chatbots useful for the purposes mentioned above.