• 0 Posts
  • 3 Comments
Joined 10 months ago
Cake day: January 10th, 2024

  • Read a bit of the court filing, though not the whole thing since you get the gist pretty early on. Journos put spin on everything, so here’s my understanding of the argument:

    1. Musk, who has given money to OpenAI in the past and thus has standing to file a complaint, states that
    2. OpenAI, which is registered as an LLC and is legally a nonprofit with the stated goal of benefiting all of humanity, has
    3. Been operating outside of its legally allowed purpose, and in effect
    4. Used its donors, resources, tax status, and expertise to create closed source algorithms and models that currently exclusively benefit for-profit concerns (Musk’s attorney points out that Microsoft Bing’s AI is just ChatGPT) and thus
    5. OpenAI has committed a civil tort (a legally recognized civil wrong) wherein
    6. Money given by contributors would not have been given had the contributors been made aware of this deviation from OpenAI’s mission statement, and
    7. The public at large has not benefited from any of OpenAI’s research, and thus OpenAI has abused its preferential tax status and harmed the public

    It’s honestly not the worst argument.



  • I use machine learning/AI pretty much daily, and I run it at home locally when I do. What you’re asking is possible, but it might require some experimentation on your side, and you’ll have to really consider what’s important in your project, because there will be some serious trade-offs.

    If you’re adamant about running locally on a Raspberry Pi, then you’ll want an RPi 4 or 5, preferably an RPi 5. You’ll also want as much RAM as you can get (I think 8 GB is the current max). You’re not going to have much VRAM since RPis don’t have a dedicated graphics card, so you’ll have to use the CPU and normal RAM to do the work. This will be slow, but if you don’t mind waiting a couple of minutes per paragraph of text, it may work for your use case. Because of the limited memory of Pis in general, you’ll want to limit what size LLM models you use. Something specialized like a 7b storytelling LLM, or a really good general-purpose model like Mistral Open Orca 7b, is a good place to start. You aren’t going to be able to run much larger models than that, however, and that could be a bit creatively limiting. As good as I think Mistral Open Orca 7b is, it lacks a lot of content that would make it interesting as a storyteller.
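
    If you go that route, one way to drive a quantized 7b model on the Pi’s CPU is llama-cpp-python. Rough sketch only; the file name, context size, and thread count below are placeholder assumptions, not a tested config:

    ```python
    # pip install llama-cpp-python
    # Assumes you've already downloaded a quantized GGUF build of the model;
    # the path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="mistral-7b-openorca.Q4_K_M.gguf",  # placeholder file name
        n_ctx=2048,    # keep the context small so it fits in 8 GB of RAM
        n_threads=4,   # RPi 5 has 4 cores
    )

    out = llm(
        "Write the opening paragraph of a mystery set in a lighthouse.",
        max_tokens=256,
        temperature=0.8,
    )
    print(out["choices"][0]["text"])
    ```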

    Alternatively, you could run your LLM on a desktop and then use an RPi to connect to it over a local network. If you’ve got a decent graphics card with like 24gb of VRAM you could run a 30b model locally, and get decent results fairly fast.
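
    If you split it that way, the Pi only needs a tiny client script. Sketch below, assuming the desktop is running something that exposes an OpenAI-compatible API (llama.cpp’s llama-server, text-generation-webui, LM Studio, etc.); the IP address, port, and model name are placeholders:

    ```python
    import requests

    # Desktop on the local network serving an OpenAI-compatible API
    # (address, port, and model name are placeholders).
    resp = requests.post(
        "http://192.168.1.50:8080/v1/chat/completions",
        json={
            "model": "local-model",
            "messages": [
                {"role": "user", "content": "Continue the story from this outline: ..."}
            ],
            "max_tokens": 512,
        },
        timeout=300,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```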

    As for the 10k-word prompt, that’s going to be tricky. Most LLMs have a limited context window, a cap on how many tokens of prompt and output they can handle in one go before they lose track of the rest. I think some of the 30b models I use have a context length of 4096 tokens… so no matter what you do, you’ll have to split the work and tell your LLM to do multiple jobs.
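
    One naive way to handle that is to chop the long prompt into chunks that fit the context window and carry a running summary forward between calls. Sketch only; generate() here is a stand-in for whatever call you end up using (llama-cpp-python, an HTTP request to a local server, etc.):

    ```python
    def chunk_words(text, words_per_chunk=1500):
        """Yield the prompt in word-count chunks small enough to fit the context."""
        words = text.split()
        for i in range(0, len(words), words_per_chunk):
            yield " ".join(words[i:i + words_per_chunk])

    def process_long_prompt(long_prompt, generate):
        story_so_far = ""
        for chunk in chunk_words(long_prompt):
            prompt = (
                f"Story so far (summary): {story_so_far}\n\n"
                f"New material:\n{chunk}\n\n"
                "Continue the story, then end with a one-paragraph summary of everything so far."
            )
            # Keep only the model's latest output as the state for the next chunk.
            story_so_far = generate(prompt)
        return story_so_far
    ```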

    Personally, I’d use LM Studio (not open source) to see if the results you get from running locally are acceptable. If you decide it’s not performing as well as you had hoped, LM Studio can also generate Python code so you can send commands to an LLM over a local network.
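
    For that networked setup, LM Studio’s local server speaks an OpenAI-compatible API, so a minimal client along these lines should work against it; the address is just the usual default (double-check it in the app), and the model name is a placeholder:

    ```python
    # pip install openai
    from openai import OpenAI

    # LM Studio's local server usually listens on localhost:1234 and
    # ignores the API key, but the client requires some string.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    response = client.chat.completions.create(
        model="local-model",  # whatever model you have loaded in LM Studio
        messages=[{"role": "user", "content": "Give me a three-sentence story hook."}],
        max_tokens=200,
    )
    print(response.choices[0].message.content)
    ```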