• 1 Post
  • 24 Comments
Joined 1 year ago
Cake day: September 27th, 2023



  • Absolutely this. Phones are the primary device for Gen Z. Phone use doesn’t develop tech skills because there’s barely anything you can do with them. This is particularly true with iOS, but still applies to Android.

    Even as an IT administrator, there’s hardly anything I can do when troubleshooting phone problems. Oh, push notifications aren’t going through? Well, there are no useful logs or anything for me to look at, so…cool. It makes me crazy how little visibility I have into anything on iPhones or iPads. And nobody manages “Android” in general; at best they manage like two specific models of one specific brand (usually Samsung or Google). It’s impossible to manage arbitrary Android phones because there’s so little standardization and so little control over the software in the general case.


  • I posted some of my experience with Kagi’s LLM features a few months ago here: https://literature.cafe/comment/6674957 . TL;DR: the summarizer and document discussion is fantastic, because it does not hallucinate. The search integration is as good as anyone else’s, but still nothing to write home about.

    The Kagi assistant isn’t new, by the way; I’ve been using it for almost a year now. It’s now out of beta and has an improved UI, but the core functionality seems mostly the same.

    As far as actual search goes, I don’t find it especially useful. It’s better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That’s not useless, but it’s certainly not a game-changer. I typically want to check its references anyway, so it doesn’t really save me time in practice.

    Kagi’s search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google’s decay in recent years. I subscribed to the “Ultimate” Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you’re interested in playing around with the latest premium models, I still think Kagi’s Ultimate plan is a good deal.

    That said, I’ve been disappointed with the development of LLMs this year across the board, and I’m not convinced any of them are worth the money at this point. This isn’t so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don’t quite understand why; I guess they are optimizing for benchmarks that simply don’t align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now it always seems to give me wrong or incomplete answers.
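
    To give a sense of the kind of thing I mean, here’s a made-up example of the sort of trivial zsh one-liner these models used to nail on the first try (not an actual prompt from back then, just an illustration):

      # rename every .JPG in the current directory to .jpg
      autoload -U zmv; zmv -v '(*).JPG' '$1.jpg'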

    My biggest piece of advice when dealing with any LLM-based tools, including Kagi’s, is: don’t use it for anything you’re not able to validate and correct on your own. It’s just a time-saver, not a substitute for your own skills and knowledge.



  • DuckDuckGo is an easy first step. It’s free, publicly available, and familiar to anyone who is used to Google. Results are sourced largely from Bing, so there is second-hand rot, but IMHO there was a tipping point in 2023 where DDG’s results became generally more useful than Google’s or Bing’s. (That’s my personal experience; YMMV.) And they’re not putting half-assed AI implementations front and center (though they have some experimental features you can play with if you want).

    If you want something AI-driven, Perplexity.ai is pretty good. Bing Chat is worth looking at, but last I checked it was still too hallucinatory to use for general search, and the UI is awful.

    I’ve been using Kagi for a while now and I find its quick summaries (which are not displayed by default for web searches) much, much better than this. For example, here’s what Kagi’s “quick answer” feature gives me with this search term:

    Room for improvement, sure, but it’s not hallucinating anything, and it cites its sources. That’s the bare minimum anyone should tolerate, and yet most of the stuff out there falls wayyyyy short.



  • Ah, somehow I didn’t see 18 there and only looked at 17. Thanks!

    I tried pulling just the one package from the sid repo, but that created a cascade of dependencies, including all of llvm. I was able to get those files installed but not able to get clinfo to succeed. I also tried installing llvm-19 from the repo at https://apt.llvm.org/, with similar results. clinfo didn’t throw the fatal errors anymore, but it didn’t work, either. It still reported “Number of devices 0”, and OpenCL-based tools crashed anyway. Not with the same error, but with something generic about not finding a device or possibly having corrupt drivers.

    Should I bite the bullet and do a full upgrade to sid, or is there some way to do this more precisely that won’t muck up Bookworm?
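
    To be concrete about what I mean by “more precisely”: I’m wondering if something like apt pinning is the right tool here. A rough sketch of what I’m imagining (the file names and priority value are guesses, and the package name is just a placeholder; I haven’t verified any of this, which is partly why I’m asking):

      # keep sid available but deprioritized, so Bookworm packages stay the default
      echo 'deb http://deb.debian.org/debian sid main' | sudo tee /etc/apt/sources.list.d/sid.list
      printf 'Package: *\nPin: release a=unstable\nPin-Priority: 100\n' | sudo tee /etc/apt/preferences.d/99-sid
      sudo apt update
      # then pull only the specific package from sid (SOME-PACKAGE is a placeholder)
      sudo apt install -t unstable SOME-PACKAGE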



  • Thanks! I didn’t see that. Relevant bit for convenience:

    we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests that includes not using Prompts and Outputs to develop or improve their models as well as deleting all information received within 30 days.

    Pretty standard stuff for such services in my experience.


  • If you click the Chat button on a DDG search page, it says:

    DuckDuckGo AI Chat is a private AI-powered chat service that currently supports OpenAI’s GPT-3.5 and Anthropic’s Claude chat models.

    So at minimum they are sharing data with one additional third party, either OpenAI or Anthropic depending on which model you choose.

    OpenAI and Anthropic have similar terms and conditions for enterprise customers. They are not completely transparent, and any given enterprise could have its own custom license terms, but my understanding is that they generally will not store queries or use them for training purposes. For specifics you’d have to ask DDG directly; I was not able to find anything about this in DDG’s privacy policy.

    Obviously, this is not legal advice, and I do not speak for any of these companies. This is just my understanding based on the last time I looked over the OpenAI and Anthropic privacy policies, which was a few months ago.



  • This is correct, albeit not universal.

    KDE has a predefined schedule for “release candidates”, which includes RC2 later this month. So “RC1” is clearly not going to be the final version. See: https://community.kde.org/Schedules/February_2024_MegaRelease

    This is at least somewhat common. In fact, it’s how the Linux kernel development cycle works: typically seven or eight release candidates, issued weekly between the close of the merge window and the final release. See: https://www.kernel.org/category/releases.html

    In the world of proprietary corporate software, I more often see release candidates presented as potentially final; i.e. literal candidates for release. The idea of scheduling multiple RCs in advance doesn’t make sense in that context, since each one is intended to be the last (with fingers crossed).

    It’s kind of splitting hairs, honestly, and I suspect this distinction has more to do with the transparency of open-source projects than anything else. Apple, for example, may indeed have a schedule for multiple macOS RCs right from the start and simply choose not to share that information. They present every “release candidate” as being potentially the final version (and indeed, the final version will be the same build as the final RC), but in practice there’s always more than one.

    Also, Apple is hardly an ideal example to follow, since they’ve apparently never even heard of semantic version numbering. Major compatibility-breaking changes are often introduced in minor point releases. It’s infuriating. But I digress.