Unless I’m mistaken, and I probably am, the patents on Blu-ray should have expired by now. The software side might still be covered under copyright, though. Not sure how copyright applies to the decoder software, tbh.
I do both. I buy the media, usually a physical release, and then put it on my Jellyfin server to stream to my devices. Benefits of streaming, but with the peace of mind that my favorite music, movies, or TV shows won’t go away.
Not that I’m aware of.
The only time I heard anyone talking about it was on the podcast Self-Hosted. Supposedly it’s a NUC clone with performance similar to a then-current (2023) mid-range laptop that draws about the same amount of power. I think they said the N100 processor has Intel Quick Sync for hardware transcoding.
Sure, but “better” is massively subjective. For me, when I set up a Pi, I’m not usually making use of the GPIO or the camera inputs. I’m generally throwing together a headless server. To do that, in addition to the board itself, I need storage, power, heat sinks, a fan and usually some sort of case.
Using the prices at CanaKit as a rough guide, you can come up with this search on eBay.
The first entry I saw drew my attention. It’s a 7th gen i5 with 16GB RAM and a 120 GB SSD. Not sure the 500 GB HDD would survive shipping, but it’s $100 shipped. Biggest concern is that the seller only has 65 sales. Possible scam?
On the higher end of that bracket there is this. 6th gen and only 8GB RAM, but the seller does have a history.
With the prices on the Pi5, you’re potentially getting into the price range where it might make sense to look at Beelink’s mini PCs, based around 12th gen Intel chips.
Like I said, prices right now are at a spot where I can’t just say throw a Raspberry Pi at the problem. They are great boards but for someone self-hosting their own services they don’t necessarily always make sense anymore.
If you mean for ARM based systems (not just SBCs), I would agree, but the software and support ecosystems for amd64 systems far surpass even the rPi ecosystem because you have backwards compatibility with a lot of the legacy x86-64 and x86 code. And because they support UEFI, distributions don’t need to explicitly support your particular ARM processor, so you can run pretty much whatever OS you want.
Not long ago I saw one of those old small Dell Optiplex workstations with a 4th gen i3, 8GB RAM and a 256GB SSD on Amazon for $100 USD. There’s a new BeeLink with an N100, 16 GB RAM, and 500 GB SSD for $200. They’d both be great for any home lab project that doesn’t need the GPIO of the rPi. And they are both in the same price range.
Don’t get me wrong, if I needed to kitbash a desktop or small server together in a hurry, I would probably be using a Pi3 or Pi4, because I’ve got 6 of them collecting dust from when my self hosted services outgrew their available compute. I replaced them with a keyboard-damaged laptop with a 6th gen i5 and my old desktop with a 4th gen i5. But if I needed to buy something today, I’d be doing some price comparisons first.
If you like Pi’s, use them. They are great kit. But if price or (more recently) power consumption are your primary consideration, it’s no longer as simple a choice as it was pre-pandemic. It’s worth looking around now.
Of course, none of this applies if you need the GPIO. But then you’re looking for project boards, not desktop or home server systems. Different set of criteria. And a different set of headaches.
Been able to use rPis as a desktop for a while now. The 2s and 3s weren’t particularly pleasant, but it was doable. The Pi 4 8GB with a USB3 jump drive as the root partition was a lot more pleasant, at least until you hit thermal throttling.
Right now though, there are more powerful options at the same price point, once you account for power, storage and, optionally, a case. At least for desktop and home server use.
The Raspberry Pis just aren’t the go-to hardware for the home lab anymore. Probably won’t be again unless the price comes back down on the Pis or the price on new and used amd64 machines goes back up.
I didn’t use early generation smart phones and was completely bewildered when I discovered apps often used swiping left/right to interact. No app I had used before ever indicated that was an option. I suggested we should add indicators to our app to teach people but that was rejected because “everyone knows that”. It’s easy when you know how.
Oh it’s even worse when you did have experience with early smartphones. I’ve used Windows CE phones, Blackberrys, PalmOS phones, early Androids, and, since 2015, iOS. None of them did things the same way, but all navigated using clickable objects on the screen. I was shocked when I accidentally stumbled upon gestures. In 2017.
I’m still discovering new gestures, usually by accident. It’s becoming more intuitive, but only because I now know that it might be an option.
A cursory web search suggests your CPU may have been damaged. Can’t say for sure as I don’t know jack about modern hardware. My in-depth knowledge basically ends at x86.
I found this while searching the error codes in the photo. Maybe it will be of some help.
There is no option. There is too much variation in the various phone chips for the hardware hacking community to reverse engineer more than a bare handful. And as soon as the hardware has been reverse engineered, it will never be used again by a manufacturer making the exercise largely pointless.
Add to that, the fact that Qualcomm actively discourages long term support of their chips….
That’s a site I haven’t heard of in a while.
I’ve bounced between both over the last 20 years. The main difference is that KDE has always been far more customizable than Gnome. Gnome had better Wayland support than KDE 5, but that’s no longer true with KDE 6. I’m not sure which KDE version the Fedora KDE spin currently ships.
My personal take on the current Gnome DE is that it is a very different way of conceptualizing the desktop from what I’m used to, to the point that it puts me off. I had the same issue with the Unity DE on Ubuntu back when. While it’s not for me, a lot of folks do seem to like it. It’s quite usable, but I wind up spending time faffing around trying to figure out how to get things done rather than just doing what it is I’m trying to do. Muscle memory runs deep, and KDE keeps to the traditional Windows desktop feel (Win95 - Win 7) with a few nice upgrades. Gnome (at least current Gnome) does it their own way.
Which have their own issues. Namely, to my knowledge, upfront cost and lack of flexibility. I’m sure there are others.
Here in the US, you are unlikely to find enough people willing to think far enough ahead for that to happen. Too many emotions guiding actions.
As a truck driver, I would like to ask, how would you acquire all the “stuff” you have bought over the years? I am reasonably sure most of it was not produced locally to you. And the raw materials almost certainly aren’t locally sourced. Trucking and logistics generally has its issues, and you only have glimpsed a fraction of them, but it is absolutely necessary for modern society. Unless you’re proposing we kill off 2/3rds of humanity and go back to hunter-gatherer. Not a fan of that idea.
Good to know! I haven’t run a dual boot configuration in at least 15 years.
Dual booting has been a thing for as long as I have been using Linux, say 2004-ish, and it has only gotten easier over the last 20 years.
Some things to watch out for though. First, make sure that you have sufficient free space on your drive before beginning, and make sure that you have backups in case something goes sideways. Good practice anyways.
Second, Windows likes to hijack the bootloader, making it difficult to boot into Linux. I would make sure that Windows is installed first and keep a live Linux disk/jump drive available in case Windows decides to hijack the bootloader at a later date. That has only happened to me once, and while it wasn’t difficult to fix, it was a pain in the butt.
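For what it’s worth, the usual fix is reinstalling GRUB from the live session. A rough sketch, assuming a BIOS-boot system with the Linux root on /dev/sda2; the device names here are examples, yours will differ (check with lsblk):

```shell
# Boot the live USB, then mount the installed Linux root
sudo mount /dev/sda2 /mnt
# Bind-mount the virtual filesystems the chroot will need
for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
# Reinstall GRUB to the disk and regenerate the menu (it should pick up Windows too)
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub
```

On a UEFI system you’d also mount the EFI partition at /mnt/boot/efi before the chroot; the distro’s wiki will have the exact steps for your setup.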
As for which distro, dealer’s choice. I don’t think that there is a bad distro out there currently. Currently, I’m using NixOS but I think highly of Ubuntu, Fedora and all of their derivatives. Really, it’s whatever boats your float.
And blackbox has nothing in common with KDE? /s
I’m off to bed. Night.
It’s not as difficult as the length of my comment implies, and doing it in the terminal simplifies the explanation quite a bit.
The average user though might never need to use the terminal. Most of what they want can be done in the browser.
As for Linux mass adoption, that happened years ago. Just nobody noticed. Android, Chromebook, Steam Deck are all Linux based and MacOS (BSD derived) is a close relative. And Microsoft has even made it possible to run linux command line programs in Windows, with some caveats, using WSL. And that’s not counting the majority of servers, networking gear and space craft running linux or unix.
Linux is a slightly different way of thinking. There are any number of ways that you can solve any problem you have. In Windows there are usually only one or two that work. This is largely a result of the hacker mentality from which linux and Unix came from. “If you don’t like how it works, rewrite it your way” and “Read the F***ing Manual” were frequent refrains when I started playing with linux.
Mint is a fine distro, which is based on Ubuntu, if I remember correctly. Most documentation that applies to Ubuntu will also apply to you.
Not sure what exactly you installed, but I’m guessing you did something along the lines of sudo apt-get install docker.
If you did that without doing anything ahead of time, what you probably got was an out-of-date build from Mint’s own repositories (on Ubuntu-based distros the repo package for the engine is usually called docker.io, not docker). Follow the instructions here to uninstall whatever you installed and install Docker from Docker’s own repositories.
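From memory of Docker’s docs, the short version looks roughly like this (package names may differ slightly on Mint, so double-check against the linked instructions):

```shell
# Remove the distro-packaged versions first (names vary; apt will skip any not installed)
sudo apt-get remove docker docker.io docker-compose

# Docker's convenience script sets up their repo and installs the engine.
# Worth skimming the script before running it.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: run docker without sudo (log out and back in for it to take effect)
sudo usermod -aG docker "$USER"
```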
The Docker Desktop that you may be used to from Windows is available for Linux, but it usually isn’t part of the default install. You might look at this documentation.
I don’t use it, as I prefer ctop combined with docker-compose.
Towards that end, here is my docker-compose.yaml for my instance of Audiobookshelf. I have it connected to my Tailscale tailnet, but if you comment out the tailscale service stuff and uncomment the ports section in the audiobookshelf service, you can run it directly. Assuming you’re not making any changes:
Create a directory somewhere,
mkdir ~/docker
mkdir ~/docker/audiobookshelf
This creates a directory in your home directory called docker and then a directory within that one called audiobookshelf. Now we want to enter that directory.
cd ~/docker/audiobookshelf
Then create your docker compose file
touch docker-compose.yaml
You can edit this file with whatever text editor you like, but I prefer micro which you may not have installed.
micro docker-compose.yaml
and then paste the contents into the file, changing whatever settings you need to for your system. At a minimum you will need to change the volumes section so that the podcast and audiobook paths point to the correct locations on your system. It follows the format <system path>:<container path>.
Once you’ve made all the needed changes, save and exit the editor and start the instance by typing
sudo docker compose up -d
Now, add the service directly to your tailnet by opening a shell in the tailscale container
sudo docker exec -it audiobookshelf-tailscale /bin/sh
and then typing
tailscale up
Copy the link it gives you into your browser to authenticate the instance. Assuming that neither you nor I made any typos, you should now be able to access audiobookshelf at http://books. If you chose to comment out all the tailscale stuff, you’ll find it at http://localhost:13378 instead.
docker-compose.yaml
version: "3.7"
services:
  tailscale:
    container_name: audiobookshelf-tailscale
    hostname: books # This will become the tailscale device name
    image: ghcr.io/tailscale/tailscale:latest
    volumes:
      - "./tailscale_var_lib:/var/lib" # State data will be stored in this directory
      - "/dev/net/tun:/dev/net/tun" # Required for tailscale to work
    cap_add: # Required for tailscale to work
      - net_admin
      - sys_module
    command: tailscaled
    restart: unless-stopped
  audiobookshelf:
    container_name: audiobookshelf
    image: ghcr.io/advplyr/audiobookshelf:latest
    restart: unless-stopped
    # ports: # Not needed due to tailscale
    #   - 13378:80
    volumes:
      - '/mnt/nas/old_media_server/media/books/Audio Books:/audiobooks' # Quoted because the path contains a space
      - /mnt/nas/old_media_server/media/podcasts:/podcasts # No quotes needed here, though it doesn't hurt to add them
      - /opt/audiobookshelf/config:/config # I store my docker services in /opt. You may want './config' and './metadata' while you're playing around
      - /opt/audiobookshelf/metadata:/metadata
    network_mode: service:tailscale # This tells the audiobookshelf container to send all its traffic through the tailscale container
I’ve left my docker-compose file as-is so you can see how it works in my setup.
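If something doesn’t come up after all that, a couple of quick checks from the same directory (standard docker compose subcommands, nothing specific to my setup):

```shell
# Both containers should show as running
sudo docker compose ps

# Follow the logs for errors (Ctrl-C to stop following)
sudo docker compose logs -f
```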
About 2 years ago, I moved my music to Jellyfin and have been using their media players on every platform I use (iOS, FireTV, Ubuntu, and Windows). At this point my music library is close to 200 GB; kinda hard to store that much on every device I own.
This is news? To anyone?