So what exactly is linktree?
Ollama and Open WebUI, as far as I know, are just open-source software projects created to run pre-trained models, with the same business model as many other open-source projects on GitHub.
The models themselves come from Google, Meta and others. Have a look at all the models available on Hugging Face. The models themselves are just binary files. They’ve already been trained, and there are no ongoing costs to use them apart from the energy your computer uses to run them.
I run Ollama with Open WebUI at home.
A) The containers they run in can’t access the Internet by default; we only grant access when we turn on web search or want to download new models. Ollama and Open WebUI are fairly popular products and I haven’t seen any evidence of nefarious activity so far.
B) By design, they build a profile of me and the family members who use them. We can add sensitive documents for the models to draw on.
C) They only see what we type and the documents we provide.
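A setup like the one described above can be sketched with Docker Compose. This is only an illustrative sketch, not the commenter's actual config: the image names are the upstream defaults, but the port, volume names and network layout are assumptions you'd adjust for your own machine.

```yaml
# Hypothetical sketch: Open WebUI fronting Ollama, with Ollama on an
# internal-only network so it has no outbound Internet access by default.
services:
  ollama:
    image: ollama/ollama
    networks: [backend]          # internal network only: no Internet
    volumes:
      - ollama-data:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    networks: [backend, default] # can reach both Ollama and the Internet
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"              # browse to http://localhost:3000
    volumes:
      - webui-data:/app/backend/data
    depends_on:
      - ollama

networks:
  backend:
    internal: true  # remove temporarily when Ollama needs to pull models

volumes:
  ollama-data:
  webui-data:
```

With `internal: true`, Docker gives the `backend` network no route out, which matches the "no Internet unless we allow it" behaviour described above.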
You might instead just install the Alpaca flatpak. I found it a very easy and quick way to get started.
Something like what Jeff Geerling does with this display perhaps.
Is that what happened when things went Tapo? I’ve avoided Tapo so far
Welcome to Lemmy, where the comments are made up and the points don’t matter
Dumb fucks.
I’m on an XR and have the option.
Let the dunking begin.
Please add a summary of the video content.
Interesting. Active users in decline, posts and comments on the up.
Fedora Kinoite is KDE but also atomic, so you can easily roll back from bad upgrades in future.
I would bet that far more efficient hardware, combined with corporate IT infrastructure being consolidated in data centers, is much more energy efficient than the alternative. The fact that we are running much more layered and compute-intensive systems doesn’t really change that.
Are you pirating shit? No? Guess what, use a VPN!
a nice feature 👀
Seems rational, and I assume they almost always use them since they are overwhelmingly beneficial.
Verify the model output. Running your own, you still need to worry about that.
You still need to pick your models and verify.
I’m still trying out combinations of hardware and models, but even my old Intel 8500T CPU generates text at around reading speed with a stock version of Meta’s Llama 3.2 3B (maybe the one you tried), with mostly good output. That’s fine for rewriting content, answering questions about uploaded document stores, etc.
There are thousands of models tuned for various purposes, so one of the key questions is what you want to use it for. If you have a specific use case in mind (e.g., coding SQL), you’ll be able to find a much more efficient model for it.
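Once a model like Llama 3.2 3B is pulled, Ollama exposes it over a local REST API, so scripting the kind of rewriting tasks mentioned above is straightforward. A minimal sketch, assuming Ollama is running on its default port 11434 and the `llama3.2:3b` model has been pulled (the model name and prompt here are just examples):

```python
# Minimal sketch of calling a local Ollama server's /api/generate endpoint.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama for one complete JSON response
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running Ollama server):
#   print(generate("llama3.2:3b", "Rewrite this more formally: gonna be late"))
```

The same endpoint works for any model you’ve pulled, so swapping in a task-specific model is just a change of the `model` string.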