I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)
Most modern web applications are designed to run standalone on a server, making integration into an existing environment a real challenge, if not outright impossible. They often come with their own set of requirements and dependencies that don’t easily align with established infrastructure.
“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”
Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.
Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they are not the only application being served.
My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.
I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
Containers really shine in today’s self-hosting world. Complete userspace isolation, basically no worries about dependencies or conflicts since it’s all internally shipped and pre-configured, easy port mapping, immutable “system” files and volume mounting for persistent data… and much more. If built properly, container images solve almost all the problems you’re grappling with.
I can’t imagine ever building another application myself without containerization again. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host itself needs to support containers or to improve its security posture.
In my (admittedly strong) opinion, it’s absolute madness, and dare I say reckless and incomprehensible, that anybody would create a brand-new product that doesn’t ship via container images in this day and age, if you have the required knowledge to make it happen, or the capacity to learn how to do it properly, following best practices, in time to meet a deadline.
I’m sure some would disagree or have special use-cases they could cite where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident that those would be really niche cases that would apply to barely anyone.
The thing that boils my blood is secret SQLite databases. I just want to store my volumes on a NAS over NFS and run the stacks on a server built for it. Having a container randomly blow up because an undocumented SQLite database failed to get a lock sucks ass.
secret sqlite databases
The thing is: “secret”. SQLite databases in general are awesome. Basically no configuration needed, they just work, they don’t even need their own server, and in 99% of all cases they’re absolutely enough for what they’re used for. I’d always choose a SQLite database over anything else – but it should be made clear that such a database is used.
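To illustrate that zero-config appeal: Python’s standard library can create and query a SQLite database with nothing but a file path. The file name here is purely illustrative; no server process, no config file, no credentials are involved.

```python
import sqlite3

# No server, no configuration: connecting to a path creates the
# database file if it doesn't exist yet.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)
conn.close()
```

The flip side, as the comment above notes, is exactly that this file can sit unannounced inside a container’s volume, where NFS file locking may silently break it.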
While PHP is still cool… join the dark side and start using containers 😏
Yeah, I can’t imagine going back to not using containers. Call me a script kiddy if you want, but I can copy-paste some environment variables into a Docker Compose file and stand up a new service in ten minutes.
I’m not going to say it’s always smooth sailing. I’ve definitely had containers with frustrating complications that took some sorting out. But man, if you want to just drop some files in a directory and go? Just get on board the Docker train and save yourself the headache.
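For illustration, that copy-paste-and-go Compose file might look something like this. The image name, ports, and environment variables are all placeholders, not any particular app’s real settings:

```yaml
services:
  someapp:
    image: example/someapp:latest   # placeholder image
    restart: unless-stopped
    ports:
      - "8080:80"                   # host:container port mapping
    environment:
      - APP_SECRET=changeme         # placeholder env vars
      - DB_PATH=/data/app.db
    volumes:
      - ./data:/data                # persistent data lives on the host
```

After that, `docker compose up -d` is the whole deployment.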
Yes, containers could be the way – if every application came in a container, or if it were super easy to containerize them without the application knowing it.
Can I run half a dozen applications in containers that all need port 443, and how annoying is it to set that up?
Docker containers do pretty much solve that: drop a docker-compose.yml file in place, maybe tweak a few lines, and that’s all.
retvrn to cgi-bin
And that’s why my rule is: if it doesn’t come in a container, it doesn’t go on my server. If I can’t get the application crammed into my Docker Compose stack, I look for an alternative. Hell, I run Pi-hole and OctoPrint inside containers.
Not sure what the problem is, though. Spin up a reverse proxy, give all the crappy shit a private IP and whatever port they want, and access them through the proxy – that way everything can be on 443. 127.42.1.123:443, whatever. Maybe use real containers, or that crappy Docker shit; both offer you independent namespaces with all the ports and whatnot.
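As a sketch, the reverse-proxy side of that setup could look like the nginx config below. The hostnames, upstream addresses, and certificate paths are made up for illustration; the point is that nginx routes by server name, so any number of apps can share port 443:

```nginx
# Two apps share port 443; nginx picks the backend by server_name.
server {
    listen 443 ssl;
    server_name app1.example.com;
    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;

    location / {
        proxy_pass http://127.42.1.123:8080;  # app 1 on its private IP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 443 ssl;
    server_name app2.example.com;
    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;

    location / {
        proxy_pass http://127.42.1.124:8080;  # app 2, different private IP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

TLS terminates at the proxy, which also sidesteps the “I ship my own certificates” problem from the original rant.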