I think the QML linter detects that, but it may require maintaining some extra files that describe which methods and properties the C++ classes expose. I'm not sure whether those can be auto-generated when using C++ and the more native tooling, since I've been using QML with Rust. Better type integration would be nice, though.
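If I remember right, the type info gets wired up through the module's qmldir file; something like this (module and file names are made up):

    # qmldir: point the tooling at the type descriptions
    module com.example.backend
    typeinfo backend.qmltypes

With Qt's own CMake tooling the .qmltypes file is generated automatically, I believe; with Rust bindings you'd likely end up maintaining it by hand.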
Whoa, I noticed something similar. I was updating my password or something a few years back and decided to test the backup codes too. They didn't work. I don't know what went wrong but that got me worried a bit.
After reading some comments: this probably goes without saying, but one should be very careful about what to expose to the internet. It sounds like the analytics service could perhaps have been made available only over a VPN (or something similar, like mTLS).
And for basic web sites, it's much better if they require no back-end at all.
Every exposed service increases risk and requires additional vigilance to maintain, which means more effort.
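For the mTLS route, the core of it in e.g. nginx is only a couple of directives; a minimal sketch, assuming nginx fronts the service (paths and the upstream address are made up):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/server.pem;     # example paths
        ssl_certificate_key    /etc/nginx/server.key;
        ssl_client_certificate /etc/nginx/client-ca.pem;  # CA that signs client certs
        ssl_verify_client      on;   # clients without a valid cert are rejected
        location / {
            proxy_pass http://127.0.0.1:8080;  # hypothetical internal service
        }
    }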
The IETF standardization was irrelevant, so I would cut them some slack. ISPs were already using CGNAT in a widespread fashion. The IETF just said, “if we’re gonna do this shit, at least stay out of the blocks used by private networks”.
It has been a non-existent problem for roughly 20 years now. Why do people still keep pulling out "uniquely identified down to the device" as an argument?
Windows, macOS, and most Linux distros rotate their SLAAC addresses every 24 hours by default (RFC 4941 privacy extensions).
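On Linux that's the use_tempaddr knob; a quick sketch for checking and enabling it (the interface name is just an example):

    # 2 = generate temporary addresses and prefer them
    sysctl net.ipv6.conf.eth0.use_tempaddr

    # persist it, e.g. in /etc/sysctl.d/40-ipv6-privacy.conf
    net.ipv6.conf.all.use_tempaddr = 2
    net.ipv6.conf.default.use_tempaddr = 2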
For learning vim, I recommend searching for a "vim cheat sheet" that shows a keyboard layout annotated with vim commands, and printing it out. That makes it easy to look things up and learn more, little by little.
Another option is online tutorials that have you practice interactively. I haven't used those much, but the little I did was helpful.
I'm running NixOS on some of my hosts, but I still haven't fully committed to configuring everything with nix: just the base system, with docker-compose for the actual services. I do the same with Debian hosts using cloud-init (nix is a lot better, though).
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
Ditto on having services expressed as more portable, cross-distro containers. With NixOS in particular, I've found the best of both worlds by using podman quadlets via this flake: https://github.com/SEIAROTg/quadlet-nix
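From memory, the config ends up looking roughly like this; treat it as a sketch, since the exact option names are my recollection of the flake's README:

    # quadlet-nix sketch: one podman container as a systemd quadlet
    virtualisation.quadlet.containers.nginx = {
      containerConfig = {
        image = "docker.io/library/nginx:latest";
        publishPorts = [ "127.0.0.1:8080:80" ];
      };
      serviceConfig.Restart = "always";
    };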
If you're the one building the image, rebuild with newer versions of the constituent software and re-create. If you're pulling the image from a public repository (or using a dynamic tag), bump the version you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
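For the pull-from-a-registry case that's e.g.:

    docker compose pull     # fetch newer images for the tags in the compose file
    docker compose up -d    # re-create only the containers whose image changed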
To me, that workflow is no more arduous than what one would do with apt/rpm: rebuild the package and install, or just install.
How does one do it on nix? Bump the version in a config and install? Seems similar.
Now do that for 30 services plus system config: firewall, routing (if you do that), DNS, and so on and so forth. Nix is a one-stop shop for getting everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
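For illustration, a sketch of what that looks like in a single NixOS config (the options are real; the service choice and image are made up):

    {
      networking.firewall.allowedTCPPorts = [ 443 ];
      services.unbound.enable = true;  # local DNS
      virtualisation.oci-containers.containers.myapp = {
        image = "ghcr.io/example/myapp:1.2.3";  # hypothetical image
        ports = [ "443:8443" ];
      };
    }

Updates are then one "nix flake update" plus a rebuild, all pinned by the lock file.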
Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports businesses in my metro area. I've never used docker, nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail, and brick-and-mortar retail. Virtualization here is Hyper-V.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be about the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing: using just one browser, and that one without containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote" without caring what language it's in, looking for rpms, etc.
If you're, e.g., a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course, there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
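For example, a typical compose file keeps database state on a named volume, so the container stays disposable (image and path are the stock postgres ones):

    services:
      db:
        image: postgres:16
        volumes:
          - db-data:/var/lib/postgresql/data  # state lives outside the container
    volumes:
      db-data: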
I'm a bit surprised this has to be explained in 2025, what field do you work in?
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild, and mess with all the potential issues of the software build ...
Yes, in the happy path it is just a "docker build" that updates packages from a distro repo and then builds only what is needed. But as soon as the happy path fails, this can become really tedious really quickly, since everyone writes their Dockerfiles differently, handles build steps differently, uses different base distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up to N images + N services.
Just that state _can_ be outside the container, and in most cases should be. It doesn't have to be. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that data is basically gone, which is why state usually does live outside, like you're saying.
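Easy to see for yourself (alpine used just as an example):

    # file written inside the container, not on any volume
    docker run --name demo alpine sh -c 'echo state > /srv/state.txt'
    docker rm demo    # the container, and /srv/state.txt with it, is gone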
Your understanding of not-containers is incorrect.
In non-containerized applications, the data & state live outside the application too, stored in files, a database, a cache, s3, etc.
In fact, that is the only way containers can decouple programs from state: the application has to have already done it itself. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
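Concretely, those extra steps are something like (all names made up):

    docker network create app-net    # virtual network
    docker volume create app-data    # named volume for state
    docker run -d --name app \
      --network app-net \
      -v app-data:/var/lib/app \
      -p 8080:80 \
      example/app:latest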
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
I don't think that makes 100/100 the most likely result if you flip a coin 200 times. It's not about 100/100 vs. another single possible result; it's about 100/100 vs. NOT 100/100, which includes every other possible result.
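For what it's worth, a quick check of the numbers (assuming a fair coin):

    from math import comb

    p = comb(200, 100) / 2**200   # P(exactly 100 heads in 200 flips)
    print(p)                      # ~0.0563
    print(1 - p)                  # ~0.9437: NOT 100/100 is far more likely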