Dropping by to express healthy interest in this project.
Rancher have a pretty good track record so far:
- the Rancher platform itself (https://rancher.com/) is a really powerful and user-friendly way to manage container clusters of all sorts: it gives you a self-hosted dashboard for both your cloud and on-prem clusters, across a variety of Kubernetes distributions, and you can even manage the available drivers and create deployments graphically
- the K3s distribution (https://k3s.io/) is in my eyes one of the best ways to run Kubernetes yourself, in both development and production environments. I benchmarked K3s alongside Docker Swarm as part of my Master's thesis, and its overhead turned out to be surprisingly close to that of Docker Swarm (a more lightweight orchestrator that ships with Docker and uses the Docker Compose format), exceeding it by only a few hundred MB with similar deployments active, which makes K3s perfectly passable for small nodes
As for this particular project, it's very positive to see that it supports all of the big OSes. The 0.6.0 version tag would still advise some caution for a while, but it can definitely be considered a replacement for Docker Desktop.
Admittedly, it's also nice to see that Docker and the ecosystem around it are still supported and alive and kicking, since for many projects out there it's a perfectly serviceable stack with tools that a lot of people are familiar with, as opposed to having to migrate to Podman, which keeps getting more stable but isn't quite there yet. Now, that may be a controversial take, and Docker Inc. also have their fair share of challenges, about which there was a very nice writeup here: https://www.infoworld.com/article/3632142/how-docker-broke-i...
They've also got Longhorn, a distributed container-attached storage solution that's very simple to understand and easy to deploy. Performance is another matter, but that's true of all the general-purpose networked storage solutions (Ceph included).
Rancher's got a well deserved good impression in my mind, though early on I avoided them somewhat.
Yes, but a different Longhorn. The early 00s Windows codenames all had a PNW skiing theme, and that Longhorn is a well-known bar at the base of Whistler mountain.
Also there's k3d (k3s in Docker). It lets you run all kinds of k3s clusters on the desktop with just a simple command. Great way to test your k3s deployments.
Looking over k3s, it really pushes the IoT/embedded aspect.
Any comments on what makes it different from a server-oriented k8s solution? Why wouldn't I run it on servers?
- by default it's packaged as a single binary, so it's extremely easy to install; it even uses SQLite for storage (though that can be swapped out for other datastores as well)
- it includes the functionality you'd expect, like local storage, load balancing and ingress, while getting rid of some of the unnecessary plugins that you'd get in other distros
- as a consequence of the above, it has a small runtime footprint, so running K8s clusters on VPSes with 2 to 4 GB of RAM is no longer a pipe dream, and it adds far less overhead on the actual nodes that you want to manage (think along the lines of a few hundred MB)
- it also bundles a variety of tools for managing your cluster more easily, so you don't need to install those separately
k3s, by default, uses SQLite for storage instead of etcd. This is one of the ways it gets a performance improvement.
If you are going to run a cluster, you'll likely want your datastore to be an HA cluster as well. While k3s can support other databases, like PostgreSQL, it doesn't have handling for database clusters in its config.
This is a limitation I see for k3s in clustered setups right now.
I would love to see cluster handling in k3s. etcd can be a scaling bottleneck and replacing it with PostgreSQL could help it scale much higher.
Disclaimer: I work on Rancher Desktop, which uses k3s.
At my job, we use k3s not for IoT or embedded, but for deploying services to back-office "servers" in field offices, where those services don't need HA.
I'm running it on an Oracle Cloud Free Tier server; it's nice because the 1 GB of RAM they include is less than what k8s requires but plenty for k3s.
I finished it about a year ago, but due to reasons outside of my control (administrative decisions), I had to write it in my native language, Latvian, so it's probably a tad useless to a wider audience.
In case anyone desires to mess around with PDFs and machine translation:
- research praxis: https://files.kronis.dev/s/AJRCs84D7WngLzD
- development praxis: https://files.kronis.dev/s/Xz6mtbAamoA7Pe8
- the full text: https://files.kronis.dev/s/ioiW96dpnD5YcLk
Here's a tl;dr of what I did: essentially, I set out to improve the way applications are run within the infrastructure of the company that currently employs me. To achieve that, I researched both how to use systemd services for everything and how to improve configuration management with Ansible, then introduced containers into the mix and compared their orchestrators (Docker Swarm and K3s in this case), as well as how to manage them, in this case with Portainer. After finishing that work for the company, I proceeded with some further research of my own: whether it's feasible to develop bespoke server configuration management tools and integrate them with container management technologies, plus further benchmarks on how well the orchestrators actually handle containers running under load, since that doesn't get tested very often.
There are probably a few reasons for those choices:
- at work, it was really disappointing to see environments where servers are started manually
- similarly, in this day and age it's not acceptable to keep domain knowledge about how to start services and where the configuration lives to yourself
- manual configuration management greatly increases the risk of human error, especially with turnover
- in contrast, containers are a surprisingly usable way to introduce "infrastructure as code"
- that said, their orchestrators still need a lot of work and it's probably a good idea to compare them
- I was also curious to see whether it would be hard to create my own automation tool, similar to Ansible but executing remote Bash scripts over SSH (there's a rough sketch of that idea after this list)
- lastly, I participated in the development of https://apturicovid.lv/, the Latvian COVID contact tracing solution; I was curious about the choice of Ruby, so I wanted to create my own mock system to see how it'd perform under load in a real-world scenario, with tools like k6 (https://k6.io/)
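To make the remote-Bash-over-SSH idea a bit more concrete, here's a minimal Paramiko sketch; it's not the actual thesis tool, and the host, user, key path and script are made up for the example:

```python
# Minimal sketch: run a Bash script on a remote host over SSH with Paramiko.
# The host, user, key path and script below are placeholders for the example.
import paramiko

HOST = "node-01.example.com"
USER = "deploy"
KEY_FILE = "/home/deploy/.ssh/id_ed25519"

SCRIPT = """
set -euo pipefail
sudo systemctl restart my-service
systemctl is-active my-service
"""

def run_remote_script(host: str, user: str, key_file: str, script: str) -> str:
    client = paramiko.SSHClient()
    # Fine for a throwaway tool; for anything serious, verify host keys properly.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, key_filename=key_file)
    try:
        # Feed the script to a remote bash process via stdin.
        stdin, stdout, stderr = client.exec_command("bash -s")
        stdin.write(script)
        stdin.channel.shutdown_write()
        output = stdout.read().decode()
        errors = stderr.read().decode()
        if stdout.channel.recv_exit_status() != 0:
            raise RuntimeError(f"Remote script failed: {errors}")
        return output
    finally:
        client.close()

if __name__ == "__main__":
    print(run_remote_script(HOST, USER, KEY_FILE, SCRIPT))
```

Wrap that in a loop over an inventory of hosts and you already have a crude Ansible stand-in, which is roughly the kind of thing the thesis explored.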
- if you're not using Ansible or another alternative for managing the configuration of your environments, you should definitely look into it
- if you can't or don't want to run containers, at least consider having systemd services with environment config files on your servers
- Docker Swarm and the lighter distributions of Kubernetes, like K3s, are largely comparable, even though Kubernetes takes a bit more resources
- tools like Portainer can make managing either or both of them a breeze, since it's one of the few solutions that supports Docker, Docker Swarm and Kubernetes clusters alike
- despite that, you'll still probably want to configure your applications (e.g. HTTP thread pool sizes, DB pool sizes) and introduce server monitoring or APM into the mix
- as for Python, it's pretty good for developing all sorts of solutions, even for remote server management: with Paramiko (for SSH), jsonpickle (serialization), Typer (CLI apps) and other libraries
- for my tool, I used JSON as the application data format, so one command could pipe its JSON output (e.g. tokens returned by the cluster leader server for the follower servers) as input for another; there are only so many cases where this works, but when it does, it's pretty nice (there's a small sketch of this pattern after this list)
- that said, any such tool that you write will probably just be a novelty, and in 90% of cases you should look at the established solutions out there instead
- load testing is actually pretty feasible with tools like k6, but only as long as you just want to test Web APIs; anything more complex than that might be better tested with something like a server farm running instances of Selenium
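To illustrate the JSON piping and the Python tooling points, here's a tiny sketch using Typer and jsonpickle; the command names and the "join token" payload are invented for the example and don't match the actual tool:

```python
# Minimal sketch of piping JSON between two CLI commands with Typer and jsonpickle.
# The command names and the join-token payload are invented for this example.
import sys

import jsonpickle
import typer

app = typer.Typer()

class JoinInfo:
    def __init__(self, leader: str, token: str):
        self.leader = leader
        self.token = token

@app.command()
def init_leader(leader: str = "10.0.0.1"):
    """Pretend to initialize a cluster leader and print its join info as JSON."""
    info = JoinInfo(leader=leader, token="s3cr3t-join-token")
    # jsonpickle serializes arbitrary Python objects to JSON.
    print(jsonpickle.encode(info))

@app.command()
def join_follower():
    """Read the leader's JSON from stdin and 'join' the follower to it."""
    info = jsonpickle.decode(sys.stdin.read())
    print(f"Joining follower to {info.leader} with token {info.token}")

if __name__ == "__main__":
    app()
```

One command can then feed the next, e.g. `python tool.py init-leader | python tool.py join-follower`.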
Here are a few Git repos of the projects:
- A very simple COVID infection tracking system which generates heatmaps to attempt to preserve anonymity: https://git.kronis.dev/rtu1/kvps5_masters_degree_covid_1984
- The K3s benchmarks for it: https://git.kronis.dev/rtu1/kvps5_masters_degree_covid_1984_load_test
- The Python configuration management tool: https://git.kronis.dev/rtu1/kvps5_masters_degree_astolfo_cloud_servant
Oh, and here's a test environment for the mock system that shows the generated heatmaps, you'll need to press the "Start" button to preview the data, though you can change the visualization parameters at runtime: https://covid1984.kronis.dev/
Shameless plug: I had the exact same problem of wanting to deploy some apps to a server (either at home, in production at work, or on IoT/Raspberry Pis), and I didn't like any of the options (Ansible is too dependent on the machine's state, Kubernetes is too complicated and heavy), so I wrote 200 lines of code and made this, which I love:
It basically pulls the repos you specify and runs `docker-compose up` on them, but does it in an opinionated way to make it easy for you to administer the machines.
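Not the project itself, but the core idea is roughly a loop like this (the repo URLs and paths below are made up, and it assumes each repo has a docker-compose.yml at its root):

```python
# Rough sketch of the idea only (not the linked project): pull a list of git repos
# and bring each one up with docker-compose. Repo URLs and paths are made up.
import subprocess
from pathlib import Path

REPOS = [
    "https://example.com/me/home-assistant-stack.git",
    "https://example.com/me/pihole-stack.git",
]
DEPLOY_DIR = Path("/opt/stacks")

def run(cmd, cwd=None):
    print(f"$ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy(repo_url: str) -> None:
    target = DEPLOY_DIR / Path(repo_url).stem
    if target.exists():
        run(["git", "-C", str(target), "pull", "--ff-only"])
    else:
        run(["git", "clone", repo_url, str(target)])
    # Assumes a docker-compose.yml at the repo root.
    run(["docker-compose", "up", "-d"], cwd=target)

if __name__ == "__main__":
    DEPLOY_DIR.mkdir(parents=True, exist_ok=True)
    for repo in REPOS:
        deploy(repo)
```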
Docker Compose doesn't get the love it deserves. The most recent versions even dropped the requirement to specify the schema version; it's now a beautifully compact way to describe services and their relationships.
I have used docker-compose for my one-man projects for years; it removes any need for me to remember project-specific deployment steps. One of the biggest time savers in my entire career.
They are finally bringing the functionality into the core docker CLI with a native "docker compose" (not docker-compose) command. Why that took until 2020 I have never understood; I can only assume internal politics. If Compose had been integrated sooner, maybe adoption would have been wider.
I've actually come across your project before and keep meaning to try it for my PiHole/HomeAssistant/WireGuard setup etc, will check it out!
I use Dokku and like it a lot, but the automatic ingress complicated things a bit, and I couldn't easily have more elaborate setups like with Docker Compose. However, if you need a Heroku alternative, I wholeheartedly recommend it.
> Admittedly, it's also nice to see that Docker and the ecosystem around it is still supported and is alive and kicking
When the controlling organization is starting to go down the user-hostile route (e.g. paid update opt-outs), it is rather sad to see that Docker is alive and kicking. Developers should run away as fast as possible.
I see it as a legitimate way to pay programmers.
You still have the core functionality, but the kind of features that enterprises like are paid and support the development of the open source parts.
Why should everyone work for free to enrich the Kleiner Perkins of the world?