At minimum, you'd need to wire in new 240V circuits, and you could only run one or two of these servers before you'd need a service upgrade.
Then you'd have to deal with noise from a literal wall of fans, or build a separate high capacity water cooling system (and still deal with dumping that heat somewhere).
A utility is most likely only going to offer a 240V 400A single-phase service at best for a residence in the US, of which 320A can be used continuously. If you need more, they'll usually offer multiple 400A services.
I’ve heard stories about people convincing their utility to install three-phase service drops in their homes, but usually it’s not an option.
Anyway, a 320A continuous load at 240V single-phase is 76.8kW. If you assume 25kW per server (20kW for the server, 5kW for cooling), you can run three servers and 15 tons of cooling and still have just enough left for one 120V 15A circuit to charge your phone and power a light.
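If you want to sanity-check that arithmetic, here's a quick back-of-the-envelope sketch (the 20kW/5kW split and the three-server count are just the assumptions above, not measurements):

    fn main() {
        let service_kw = 240.0 * 320.0 / 1000.0; // 320A continuous at 240V
        let per_server_kw = 20.0 + 5.0;          // 20kW server + 5kW for its cooling
        let servers = 3.0;

        let used_kw = servers * per_server_kw;
        let remaining_kw = service_kw - used_kw;
        let spare_circuit_kw = 120.0 * 15.0 / 1000.0; // one 120V 15A circuit

        println!("service:   {:.1} kW", service_kw);  // 76.8 kW
        println!("3 servers: {:.1} kW", used_kw);     // 75.0 kW
        println!("left over: {:.1} kW (vs {:.1} kW for the 120V 15A circuit)",
                 remaining_kw, spare_circuit_kw);     // 1.8 kW either way
    }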
As linked in the article, DDR4 and LPDDR4 are also 2-4x more expensive now, forcing smaller manufacturers to raise prices or cancel some products entirely.
I tried living with just the Apple Watch instead of the Touch ID for a few months to see how reliable it was...
Sometimes it would take a few minutes in the morning before my Mac would even recognize 'hey, there's a watch' (typing in my full password was usually much quicker than waiting for the watch unlock).
Sometimes whatever notification happens that triggers the watch to vibrate and allow the double-squeeze-to-accept action would just... not.
Other times the above notification would pop up about 8-15 seconds after the prompt on the screen.
It was inconsistent enough I got _really_ good at typing my password, since it was normally quicker than waiting on the Apple Watch.
Contrast that with Touch ID, which is always ready to go.
Possibly a similar process to going into an AWS account, finding dozens of orphaned VMs, a few thousand orphaned disk volumes, etc., and saving something like $10k/month just by deleting unused resources.
It's not a case of forgotten data, it's duplicated for access time reasons, like with optical media.
It follows in the footsteps of trading storage for less compute and/or better performance.
An opposite approach, in the form of a mod for Monster Hunter: Wilds, recently made it possible [0] for end users to decompress all the game textures ahead of time. This was beneficial there because GPU decompression was causing stalls, and trading compute for less storage resulted in significantly worse performance.
And the big ones don't even have typical PCIe sockets, they are useless outside of behemoth rackmount servers requiring massive power and cooling capacity that even well-equipped homelabs would have trouble providing!
Usually it's just "same thing but faster". CPU is 2-3x faster, and even boot speed is faster, so it can be handy to not have to wait so long to run updates, compile something, reboot, etc.
I was paying around $1200/month last year (a little under that with subsidy).
This year I'm paying $2100/month for a family of five, on a roughly equivalent plan. Except, none of the options in my state allow me to visit the PCP I switched to this year (since none of the plans last year covered my PCP from the year before).
So I guess I'm on a primary care physician merry go round :D
I am at least able to have my main specialty doctor and the drug I take to keep me in remission from Crohn's disease, and my kids' pediatrician is covered.
But I can't imagine what people have to sacrifice to keep any kind of coverage (with high deductible and horrible coinsurance and prescription drug coverage) for their families if they don't have a decent income :(
> But I can't imagine what people have to sacrifice to keep any kind of coverage (with high deductible and horrible coinsurance and prescription drug coverage) for their families if they don't have a decent income :(
These increases are specifically a lapse in subsidies for high earners -- those with a "decent income." People under 400% of Federal Poverty Level still qualify for the subsidies. And it's a relatively recent policy change to roll back; we didn't have this subsidy from 2010-2020.
This is not specifically just a lapse in subsidies for high earners; it affects everyone, which is telling of how little people actually understand what will happen when the subsidies expire.
The enhanced subsidies made people earning more than 400% FPL also eligible for subsidies, and, more importantly, capped how much of your income insurance could cost. In reality, most people would see their insurance costs double if the subsidies expired [1].
I take Remicade for UC on a monthly cadence. From $500 to now $1300/month for 2 in TX, with the added bonus of a 10% lab copay plus all kinds of fees.
I am blessed to be running a good startup, but I've always felt this deeply... "But I can't imagine what people have to sacrifice to keep any kind of coverage (with high deductible and horrible coinsurance and prescription drug coverage) for their families if they don't have a decent income :("
If you're getting health insurance through your employer, that's a pretty standard price (counting both your contribution and your employer's together).
I'm probably going to be self employed for 2026 and a cheap-ish (not the cheapest, but probably below the average) plan for my family is going to be a little under $1500 / month.
It's pre-tax money, which helps a wee bit, but it is definitely expensive. If I made less money, I'd qualify for subsidies, but I don't, so that's just something that needs to be paid in full unfortunately.
My employer-sponsored, supposedly nice insurance (I say supposedly because they keep being a pain in the ass about pretty much everything) is $200+ per paycheck for me and my spouse, i.e. ~$450/month. That is after my employer covers most of the cost. This stuff is ridiculous.
I'm in Germany, and for a family of four the public healthcare system, covering my wife and my two kids, costs us around 2,200€ per month. The company pays half.
A switch to private insurance would cut the cost roughly in half.
I was under the impression that German healthcare was essentially free (government funded) at the point of delivery, with additional top-up insurance carried by most people, similar to how it is here in France.
Here I am self-employed and pay about 100 euros a month in top-up insurance (mutuelle) for myself and a couple of kids. Of course, the healthcare costs more; that's why my taxes are high. But the insurance cost is about €1200 a year, not €2200 a month.
Free at point of delivery does not mean free at all. Netflix is also free when delivering you movies, but it costs a monthly fee.
I think it's time we all stop with the nonsense that government-funded healthcare is free. The ones who end up funding the government are us, the citizens, and that costs a lot of money.
Some governments, like the German one, still make the costs transparent to the citizen, something you can even see in your payslip. Other governments, after failed policies and extreme inefficiencies, hide that and just budget healthcare costs out of the rest of the taxes.
In your case you believe your cost is only 1200€ a year, because your government has not made at all clear to you how much you’re paying from your other taxes into the healthcare system. When governments hide that type of information, it is because they actually do have something they don't want the ordinary citizen to see. And that's worrying and not democratic at all.
>> In your case you believe your cost is only 1200€ a year, because your government has not made at all clear to you how much you’re paying from your other taxes into the healthcare system.
I absolutely do not believe my healthcare costs only €1200 a year. As I wrote, my top-up insurance costs about €1200 a year, and the healthcare costs more and that is why my taxes are so high.
However, it's still unclear how much you're paying. The problem with socialized services like healthcare is that you never know exactly how much you're paying, or whether you're overpaying or underpaying, since there's no free competition whatsoever.
There are of course also negative second-order consequences. In socialized healthcare systems, where doctors and hospitals are paid the same no matter their performance, the economic incentives to provide modern treatments or better services do not exist, so the best professionals end up leaving the public system if they believe they are being underpaid relative to their value.
I've seen that happen a lot in Germany and Spain. The best doctors I had left their public healthcare positions to open their own private practices, as that was the only way to be compensated in line with the level of service they were providing.
> The problem with socialized services like healthcare is that you never know exactly how much you're paying, or whether you're overpaying or underpaying, since there's no free competition whatsoever.
Also true in America, in which there is no socialized healthcare. (In any standard use of the term)
Hell, even Medicare ended up partially privatized. (At huge extra cost)
The worst thing about government-run monopoly services is there's little bottom-up incentive to optimize.
The worst thing about private-run services is there's little incentive for anything other than profit.
Given the fundamental realities of must-deliver services (e.g. healthcare, prison, etc.), I'd rather have them government-delivered than some bastardized free market without competition.
At least the former has a path to excellence. The latter just inevitably turns into a hostile hellscape for the end consumer.
Hot take - the private doctors tell you they are great but the public doctors can often be spectacular because their motives are not primarily economic.
For example, the absolute best diagnosticians in Houston are at the public hospital primarily serving Medicaid and Harris Health patients. Super evidence-based, ordering tests for differential diagnosis, not to make $$. Passionate about what they do. In an unexplained emergency my doctor friends would go there to be diagnosed and then to the fancy privates to be treated.
Bro, try going to the doctor under America's non-socialized system. You have zero idea how much that specific visit is going to cost, on top of the tens of thousands in insurance premiums paid on your behalf yearly.
My understanding (British citizen living in Berlin) is that the German system looks and acts like a tax, but is actually mandatory payments to one of a handful of almost-but-not-quite-identical private insurance companies, with care being free-at-point-of-use.
It's possible to opt out if you're rich enough, but if you change your mind later it's very hard to return to the normal system.
I'm currently not working*; my monthly insurance cost is €257,78.
* thanks to my very cheap lifestyle, my passive income of only about €1k/month means I don't strictly speaking need to work ever again.
Nevertheless, I am treating this time as a learning opportunity with a view to being able to change career path, given that I think LLMs make the "write the code" skill I've been leaning on for the last two decades redundant in favour of, at a minimum, all the other aspects of "engineering", "product management", and "QA", and possibly quite a bit more than that.
Plus, y'know, get that B1 certificate so I can get dual citizenship.
This will be hard to explain in English, but you can find what you're paying into the French healthcare system by looking at your payslip (the document you receive every month that details your paycheck, rather). It's basically either 7% or 13% of your paycheck (plus 0.5% of non-work income via the CSG), and there's a hard cap on total contributions (about 4k/month, of which healthcare is a bit more than half, so a bit more than 2k/month). It covers universal healthcare of course, but also maternity/paternity leave and invalidity benefits.
Maternity/paternity also covers the pension credit parents get (half a year of contributions to the pension system per child if you take care of them until they are 13, plus half a month for giving birth). (It's so awkward explaining this in English, sorry.)
Thanks for this. I work independently (no CDI or CDD) so I don’t get the paycheck/payslip. I imagine it is broken down somewhere in the different taxes I pay, but unfortunately I don’t get the monthly reminder.
The maximum personal contribution to public health insurance (GKV) is capped at around €400/month for healthcare (plus an additional ~€200 towards long-term/elderly care). Spouse and children are free if they are unemployed.
If you are paying more than that then you are already paying for private health insurance (PKV) or private supplementation on top of GKV for some premium coverage.
I am not mistaken. I know how to read my own payslip.
Both my wife and I are employed. We both have GKV and we're basically paying the maximum rate. That's around €1100/month each, pre-tax. Half of it comes from my official gross salary and the other half comes from my unofficial gross salary (the employer contribution), which is how governments hide the costs of public healthcare. Ultimately it is all part of what my employer budgets for my compensation.
Kids are not free: kids' doctors don't work for free. They need to be paid, and they're paid from the contributions that I and other employed fellow citizens make every month.
Yeah, when a coworker and I showed my wife the first OS X preview, she was alarmed at how long it took to shut down (I mean System 7 shut down like you just kicked the cord out). "You'll have to find something else to like about it," was my coworker's response.
And to be sure, there was/is a lot to like about OS X.
But, probably because of the lack of a kernel, etc., System 7 sits somewhere in that nether/middle region on our personal computer journey. Its rich library of functions (the Toolbox) set it apart from machines before it that might have instead had a handful of assembly routines you could "CALL" from BASIC to switch display modes, clear the screen, etc. But, as Amiga owners often reminded the Mac community in the day, no "true" preemptive multitasking…
I should say too, regarding programming, that these days your ability to write safe, threaded code is probably the highest virtue to strive for and the hardest to perfect — at least for me (so hard to wrap my head around). It seems to separate the hacks (in the negative sense) from the programming gods. I think wistfully of those simpler times when managing memory well and handling errors returned from the system API gracefully were the only hurdles.
"You can’t simply add a lock here, because this function can be called while the lock is already held. Taking the same lock again would cause a deadlock…"
"The way you've implemented semaphores can still allow a potential race condition. No, I have no idea how we can test that scenario, with the unit tests it may still only happen once in a million runs—or only on certain hardware…"
(Since I have retired, I confess my memory of those hair-pulling days is getting fuzzier—thankfully.)
Threads and locks are fundamentally the wrong abstraction for most scenarios. This is explained in complementary ways in two of the finest technical books ever written, Joe Armstrong's "Programming Erlang" and Simon Marlow's "Parallel and Concurrent Programming in Haskell". I highly recommend both.
Thank you for many fond memories of playing Glider and Pararena.
There are plenty of ways to write multithreaded code these days, from actors to coroutines at the programming-interface level, to using green threads directly in Go or Java. There is very little reason to resort to using locks, mutexes, or semaphores outside of frameworks designed to make multithreading easier, or very specific high-performance code. (Where in the latter case it could be argued that multithreading probably adds unreasonable latency and context switching.)
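As a rough sketch of the message-passing style being described (plain std::sync::mpsc, nothing framework-specific): workers own their data and send results over a channel, and one receiver aggregates them, so there is no shared state to lock.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<u64>();

        let handles: Vec<_> = (0..4u64)
            .map(|worker_id| {
                let tx = tx.clone();
                thread::spawn(move || {
                    // Stand-in for real work; each worker owns its own data.
                    let partial: u64 = (0..1_000u64).map(|n| n * worker_id).sum();
                    tx.send(partial).expect("receiver still alive");
                })
            })
            .collect();

        drop(tx); // close the channel once only the workers hold senders

        // Results are combined in exactly one place, so there is nothing to
        // lock and no lock ordering to get wrong.
        let total: u64 = rx.iter().sum();

        for handle in handles {
            handle.join().unwrap();
        }
        println!("total = {total}");
    }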
Whoever is downvoting you for speaking the truth should go stand in a corner. Or try maining BeOS for a while, to experience first-hand what happens when application programmers are forced to use threads and locks.
MacOS 9 was awful, a product of a rather unpleasant era for Apple really. I wanna say through 9.2.1, maybe even through 9.2.2, the OS had a nasty habit of corrupting your disk. Hardware-wise, Apple used CMD64x-based IDE controllers, so when OS 9 wasn't screwing with your data the hardware itself would.
There absolutely were animations e.g. when closing a Finder window, but they were much lighter weight. As far as I'm concerned System 7 was probably the zenith.
I'd rather say the zenith was 8.1 which was not very widely used. 8.5 did add some nice gimmicks like the app switcher palette but for some reason it felt way slower than 8.1.
To me it's the opposite: System 7 crashed all the time and MacOS 9 was rock solid. System 7 was a mess until 7.6, at which point it was basically MacOS 8. And the UI was way more pleasing; the System 7 one had an '80s vibe to me.
Mac OS 9 was Apple's Windows ME: too many side ports of new features into the rickety legacy core OS (Win32 / Toolbox Mac OS) and not enough attention paid to detail, since the Next Big Thing was already cooking (XP / OS X).
Mac OS 9 was certainly not rock solid as far as crashes were concerned, but it was very much better than System 7; that was clear to me. Maybe it is my rose-tinted glasses colouring my memory, but I also remember there being far fewer small bugs, you know, the just-annoying kind, than I have today with macOS 15. There may be fewer hard crashes now, but the number of paper cuts has increased by many orders of magnitude.
Well, I got my B&W G3 because MacOS 9 lunched the filesystem as it was prone to doing. SCSI drive so it wasn't that other disk corruption fun (which I went through in PC land). As far as I'm concerned MacOS 9 was mostly a bunch of paper cuts glued together. Lots of stuff that would've demoed in OSX if Apple had the time and patience.
So yeah, Apple had tacked on rudimentary multi-user support, an automatic system update mechanism, USB support, etc., etc., but underneath it was still the same old single-user, cooperatively multitasked, no-memory-protection OS as its predecessors. Unlike OS X, MacOS 9 (like 7 and 8 before it) still relied on the Toolbox, which was a mishmash of m68k and PPC code.
I remember it crashing a lot but maybe that's because I came of age around the OS 8/9 era. IIUC OS 9 had no memory protection so it's not exactly a surprise it was fragile.
Yup. It feels like I have traded, on an average week, three hard crashes (enough to need a reboot) and five small bugs back then, with zero hard crashes and ninety minor bugs (some requiring restarting the app) today. Sometimes I feel like I would like to go back because many of the smaller bugs drive me mad in a way that never happened back then.
Win98 was head and shoulders above System 9, from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc, etc.
To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8-9 were so bad.
98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine for the era, 2k and XP were miles ahead of W98 under mid-to-high load.
Maybe not on a Pentium, but once you hit 192MB of RAM and some 500 MHz P3/AMD k7, NT based OSes were tons better.
You only realized that upon opening a dozen IE windows. W98, even SE, would sweat. 2k would fly.
On single tasks, such as near-realtime multimedia ones (emulators/games or video players with single-threaded decoders), W98 would be better. On multiprocessing/threading, W98 crawled against W2K even on P4s with 256MB of RAM.
Well, Win NT is an actual operating system, and Win 98 and Classic macOS are just horribly overgrown home computer shells in an environment they should never have been exposed to.
Ahem, W98 would BSOD if you sneezed hard near it. Installing a driver? BSOD. IE page fault? BSOD. 128k stack limit reached? Either a grind to a halt or a BSOD. And so on...
I worked at a company that was delivering a client-side app in Java launched from IE. I think we had an ActiveX plugin as the "launcher." This predated "Java Web Start." It was hysterically bad. We were targeting 32 meg Win 98 systems and they were comically slow and unstable. Most of our developers had 64 and 128 meg boxes with NT 4.0. I mostly worked on the server side stuff, and used them as a terminal into the Solaris and Linux systems.
It’s not everything, it’s just Chrome. Chrome is 1.6GB including all its dependencies. It’s going to be slow to start on any system if those dependencies aren’t preloaded.
Most Mac software I use (I don’t use Chrome) starts quickly because the dependencies (shared libraries) are already loaded. Chrome seems to have its own little universe of dependencies which aren’t shared and so have to be loaded on startup. This is the same reason Office 365 apps are so slow.
It's not just Chrome, it's everything, though apps that have a large number of dependencies (including Chrome and the myriad Electron apps most of us use these days) are for sure more noticeable.
My M4 MacBook Pro loads a wide range of apps - including many that have no Chromium code at all in them - noticeably slower than exactly the same app on a 4 year old Ryzen laptop running Linux, despite being approximately twice as fast at running single-threaded code, having a faster SSD, and maybe 5x the memory bandwidth.
Once they're loaded they're fine, so it's not a big deal for the day to day, but if you swap between systems regularly it does give macOS the impression of being slow and lumbering.
Disabling Gatekeeper helps but even then it's still slower. Is it APFS, the macOS I/O system, the dynamic linker, the virtual memory system, or something else? I dunno. One of these days it'll bother me enough to run some tests.
Somewhere around 2011, when I switched my MBP to an SSD (back when you could upgrade the drives, and memory, yourself), Chrome started opening in 1-2 bounces of the dock icon instead of 12-14 seconds.
People used to make YouTube videos of their Mac opening 15 different programs in 4-5 seconds.
Now, my Apple Silicon MacBook Air is very, very fast but at times it takes like 8-9 seconds to open a browser again.
I loved the MBPs from that era. That was my first (easy) upgrade as well, in addition to more memory. Those 5400 RPM hard drives were horrible. Another slick upgrade you could do back then was to swap out the SuperDrive for a caddy to add a second SSD/HDD.
It still works fine today, though I had to install Linux on it to keep it up to date.
I'm running the latest MacOS right now on a modest m4 Mini and it doesn't seem slow to me at all. I use Windows for gaming and Linux for several of my machines as well and I don't "feel" like MacOS is slow.
In any case, Chrome opens quickly on my Mac Mini, under a second when I launch it from clicking its icon in my task bar or from spotlight (which is my normal way of starting apps). When Chrome is idle with no windows, opening chrome seems even faster, almost instant.
This made me curious so I tried opening some Apple apps, and they appear to open about the same speed as Chrome.
GUI applications like Chrome or Keynote can be opened from a terminal command line using the open command, so I tried timing this:
$ time open /Applications/Google\ Chrome.app
which indicated that open finished in under 0.05 seconds total. So this wasn't useful, because it appears to measure only part of the time involved in getting the first window up.
It's always been that way. Even when I had a maxed out current-gen Mac Pro in 2008, it still launched and ran faster in Windows than MacOS.
I have seen people suggesting that it's because of app signature checks choking on Internet slowness, but 1. those are cached, so the second run should be faster, and in non-networked instances the speed is unchanged, and 2. I don't believe those were even implemented back in 2002 when I got my iMac G4, and it was likewise far quicker in Linux than in OS X.
At the time (2002), I joked that it was because the computer was running two operating systems at once: NeXTSTEP and FreeBSD.
System 6 had menu blinks, zoom animations (with rect XORs no less), and button blinks when you used keyboard completion. Mac was the original "wasteful animation" OS.
That XOR effect existed under FVWM too, for moving and resizing windows, and drawing an XOR wireframe was MUCH faster than a full repaint.
If you had no X11 acceleration (xvesa, for instance), that mode was orders of magnitude faster than watching your whole browser window repaint on a resize, which could last more than 3 seconds on a Pentium.
This is feedback. You press a shortcut; how do you know whether it worked? You know because the corresponding menu rapidly blinked. Or you double-click an icon and suddenly a rectangle appears in another part of the screen. Is this related? Here the animation shows that yes, the icon transformed into a window.
On the other hand, in mobile Firefox I wait what feels like half a second each time I long-press a link, because there is an animation that zooms in a context menu. It does not zoom from the link, which could maybe be justified, but always in the same place in the center of the screen. This animation is meaningless and thus wasteful.
I remember reading about this news back when that first message was posted on the mailing list, and didn't think much of it then (Rust has been worming its way into a lot of places over the past few years; just one more thing I tack on for some automation)...
But seeing the maintainer works for Canonical, it seems like the tail (Ubuntu) keeps trying to wag the dog (Debian ecosystem) without much regard for the wider non-Ubuntu community.
I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.
As an end user, it doesn't concern me too much, but someone choosing to add a new dependency chain to critical software plumbing does, at least slightly, if not done for very good reason.
Agreed. I think that announcement was unprofessional.
This was a unilateral decision affecting others' hard work, and the author didn't give them the opportunity to provide feedback on the change.
It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.
There's no clear cost-benefit analysis done for this change. Canonical or Debian should work on porting the Rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.
I love and use Rust; it is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in Rust" evangelism and the reputational damage it does to the Rust community.
> I love and use Rust; it is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in Rust" evangelism and the reputational damage it does to the Rust community.
Thanks for this.
I know intellectually, that there are sane/pragmatic people who appreciate Rust.
But often the vibe I’ve gotten is the evangelism, the clear “I’ve found a tribe to be part of and it makes me feel special”.
So it helps when the reasonable signal breaks through the noisy minority.
>I know intellectually, that there are sane/pragmatic people who appreciate Rust.
For the most part, that's almost everyone who works on Rust and writes Rust. The whole coreutils saga was pretty much entirely caused by Canonical; the coreutils rewrite was originally a hobby project, IIRC, and NOT ready for prod.
For the most part the coreutils rewrite is going well, all things considered: bugs are fixed quickly, and performance will probably exceed the original implementation in some cases since concurrency is a cakewalk.
The whole "rewrite it in Rust" idea largely stemmed from the notion that if you have a program in C and a program in Rust, then the program in Rust is "automatically" better, which is often the case. The exception is very large, battle-tested projects with custom tooling in place to ensure the issues that make C/C++ a nightmare are somewhat reduced. Rust ships with the borrow checker by default, so logically it's like for like.
In the real world that is not always the case: there are still plenty of opportunities for straight-up logic bugs and crashes (see the Cloudflare saga) that are entirely due to bad programming practices.
Rust is the nail and the hammer, but you can still hit your finger if you don't know how to swing it properly
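To make the borrow-checker point above concrete, here's a tiny made-up example of the kind of bug it rejects at compile time, the kind that C/C++ projects typically only catch with sanitizers, fuzzing, or review:

    fn main() {
        let mut names = vec![String::from("alpha"), String::from("beta")];

        let first = &names[0]; // immutable borrow into the vector's buffer

        // names.push(String::from("gamma"));
        // ^ compile error: cannot borrow `names` as mutable while `first` is
        //   still in use. The push could reallocate the buffer and leave
        //   `first` dangling, i.e. a classic use-after-free in C or C++.

        println!("{first}");

        names.push(String::from("gamma")); // fine: the borrow has ended
        println!("{} names", names.len());
    }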
FYI, for the purpose of disclosing bias: I am one of the few "Rust first" developers. I learned the language in 2021, and it was the first "real" programming language I learned how to use effectively. Any attempts I've made to dive into other languages have been short-lived and incredibly frustrating, because Rust is a first-class example of how to make a systems programming language.
It really makes me upset that we are throwing away decades of battle tested code just because some people are excited about the language du jour. Between the systemd folks and the rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.
That “battle-tested code” is often still an enduring and ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.
Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.
C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.
As long as LLVM (C++, but still) is not rewritten in Rust [0], I don't buy it. C is like JavaScript: it's not perfect, it's everywhere, and you cannot replace it without a lot of effort and bugfix/regression testing.
If I take SQLite for example (25 years old [3]), there are already two rewrites in Rust, [1] and [2], and each one has its own bugs.
And as an end user I'm more inclined to trust the battle-tested original for my prod than its copies. As long as I don't have proof that the rewrite is at least as good as the original, I'll stay with the original. Simple equals more maintainable. That's also why the SQLite maintainers won't rewrite it in any other language [4].
The trade-off of Rust is "you can lose features and have unexpected bugs like any other language, but don't worry, they will be memory-safe bugs".
I'm not saying rust is bad and you should not rewrite anything in it, but IMHO rust programmers tend to overestimate the quality of the features they deliver [5] or something along these lines.
systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.
Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.
> Just found out that the systemd developers don't understand DNS (or IPv6).
Just according to Github, systemd has over 2,300 contributors. Which ones are you referring to?
And more to the point, what is this supposed to mean? Did you encounter a bug or something? DNS on Linux is sort of famously a tire fire, see for example https://tailscale.com/blog/sisyphean-dns-client-linux ... IPv6 networking is also famously difficult on Linux, with many users still refusing to even leave it enabled, frustratingly for those of us who care about IPv6.
Systemd-resolved invents DNS records (not really something you want to see; it makes debugging DNS issues a nightmare). But worse, it populates those DNS records with IPv6 link-local addresses, which really have no place in DNS.
Then, after a nice debugging session into why your application behaves so strangely (all the data in DNS is correct, so why doesn't it work?), you find that this issue has been reported before and was rejected as won't-fix, works-as-intended.
Hm, but systemd-resolved mainly doesn't provide DNS services, it provides _name resolution_. Names can be resolved using more sources than just DNS, some of which do support link-locals properly, so it's normal for getaddrinfo() or the other standard name resolution functions to return addresses that aren't in DNS.
i.e. it's not inventing DNS records, because the things returned by getaddrinfo() aren't (exclusively) DNS records.
The debug tool for this is `getent ahosts`. `dig` is certainly useful, but it makes direct DNS queries rather than going via the system's name resolution setup, so it can't tell you what your programs are seeing.
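For anyone following along, here's a rough sketch of that distinction: resolving through the platform resolver (getaddrinfo / the NSS stack) can return answers that never came from DNS at all, including link-local IPv6 addresses carrying a scope ID, which a raw DNS record has no way to express. The hostname below is just a placeholder.

    use std::net::{SocketAddr, ToSocketAddrs};

    fn main() -> std::io::Result<()> {
        // Placeholder name; substitute a machine on your own LAN.
        let host = "somehost.local";

        // ToSocketAddrs goes through the system resolver (getaddrinfo on
        // Linux, i.e. whatever /etc/nsswitch.conf is configured to use), so
        // results may come from /etc/hosts, mDNS, systemd-resolved's
        // synthesized answers, etc., not just DNS.
        for addr in (host, 0u16).to_socket_addrs()? {
            match addr {
                SocketAddr::V4(v4) => println!("IPv4: {}", v4.ip()),
                // A link-local IPv6 answer is only usable together with a
                // scope id (the interface), which getaddrinfo can supply
                // but a plain AAAA record cannot.
                SocketAddr::V6(v6) => {
                    println!("IPv6: {} (scope id {})", v6.ip(), v6.scope_id())
                }
            }
        }
        Ok(())
    }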
systemd-resolved responds on port 53. It inserts itself in /etc/resolv.conf as the DNS resolver that is to be used by DNS stub resolvers.
It can do whatever it likes as long as it follows the DNS RFCs when replying to DNS requests.
Redefining recursive DNS resolution as general 'name resolution' is indeed exactly the kind of horror I expect from the systemd project. If systemd-resolved wants to do general name resolution, then just take a different transport protocol (dbus for example) and leave DNS alone.
It's not from systemd though. glibc's NSS stuff has been around since... 1996?, and it had support for lookups over NIS in the same year, so getaddrinfo() (or rather gethostbyname(), since this predates getaddrinfo()!) have never just been DNS.
systemd-resolved normally does use a separate protocol, specifically an NSS plugin (see /etc/nsswitch.conf). The DNS server part is mostly only there as a fallback/compatibility hack for software that tries to implement its own name resolution by reading /etc/hosts and /etc/resolv.conf and doing DNS queries.
I suppose "the DNS compatibility hack should follow DNS RFCs" is a reasonable argument... but applications normally go via the NSS plugin anyway, not via that fallback, so it probably wouldn't have helped you much.
I'm not sure what you are talking about. Our software has a stub resolver that is not the one in glibc. It directly issues DNS requests without going through /etc/nsswitch.conf.
It would have been fine if it were getaddrinfo (and it were done properly), because getaddrinfo gives back a socket address, which can carry the scope ID for the IPv6 link-local address. In DNS there is no scope ID, so it will never work on Linux (it would work on Windows, but that's a different story).
If you don't like those additional name resolution methods, then turn them off. Resolved gives you full control over that, usually on a per-interface basis.
If you don't like that systemd is broken, then you can turn it off. Yes, that's why people are avoiding systemd. Not so much that the software has bugs, but the attitude of the community.
It's not broken - it's a tradeoff. systemd-resolved is an optional component of systemd. It's not a part of the core. If you don't like the choices it took, you can use another resolver - there are plenty.
I don't think many people are avoiding systemd now - but those who do tend to do it because it non-optionally replaces so much of the system. OP is pointing out that's not the case of systemd-resolved.
It's not a trade-off. Use of /etc/resolv.conf and port 53 is defined by historical use and by a large number of IETF RFCs.
When you violate those, it is broken.
That's why systemd has such a bad reputation. Systemd almost always breaks existing use in unexpected ways. And in the case of DNS, it is a clearly defined protocol, which systemd-resolved breaks. Which you claim is a 'tradeoff'.
When a project ships an optional component that is broken, it is still a broken component.
The sad thing about systemd (including systemd-resolved) is that it is default on Linux distributions. So if you write software then you are forced to deal with it, because quite a few users will have it without being aware of the issues.
Yes, violating historical precedent is part of the tradeoff - I see no contradiction. Are you able to identify the positive benefits offered by this approach? If not, we're not really "engineering" so to speak. Just picking favorites.
> The sad thing about systemd (including systemd-resolved) is that it is default on Linux distributions. So if you write software then you are forced to deal with it, because quite a few users will have it without being aware of the issues.
I'm well aware - my day job is writing networking software.
That's the main problem with systemd: replacing services that don't need replacing and doing a bad job of it. Its DNS resolver is particularly infamous for its problems.
Sure, those authors chose that license because they did not really particularly care for the politics of licenses and chose the most common one in the Rust ecosystem, which is MIT/Apache 2.
If folks want more Rust projects under licenses they prefer, they should start those projects.
> If folks want more Rust projects under licenses they prefer, they should start those projects.
100% true, but it also hides a powerful fact: our choices aren't limited to doing it ourselves. Listening to others and discussing how to do things as a group is the essence of a community seeking long-term stability and fairness. It's how we got to the special place we are now.
Not everyone can or should start their own open source project. Maybe they're already doing another one. Maybe they don't know how to code. The viewpoint of others/users/customers is valid and should not only be listened to but asked for.
I agree that throwing away battle-tested code is wasteful and often not required. Most people are not of the mindset of just throwing things away, but there is a drive to make things better. There are some absolute monoliths, such as the Linux kernel, that will likely never break free of their C shackles, and that's completely okay and acceptable to me.
It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well established that the developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.
The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.
Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.
Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.
Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?
And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".
I agree with everything you've said here, except that the reality of speaking with a "rust first" developer is making me feel suddenly ancient. But that aside, the memory safety parts are a huge benefit, but far from the only one. Option and Result types are delightful. Exhaustive matching expressions that won't compile if you add a new variant that's not handled are huge. Types that make it impossible to accidentally pass a PngImage into a function expecting a str, even though they might both be defined as contiguous series of bytes down deep, makes lots of bugs impossible. A compiler that gives you freaking amazing error messages that tell you exactly what you did wrong and how you can fix it sets the standard, from my experience. And things like "cargo clippy" which tell you how you could improve your code, even if it's already working, to make it more efficient or more idiomatic, are icing on the cake.
People so often get hung up on Rust's memory safety features, and dismiss it as though that's all it brings to the table. Far from it! Even if Rust were unsafe by default, I'd still rather use it than, say, C or C++ to develop large, robust apps, because it has a long list of features that make it easy to write correct code, and really freaking challenging to write blatantly incorrect code.
Frankly, I envy you, except that I don't envy what it's going to be like when you have to hack on a non-Rust code base that lacks a lot of these features. "What do you mean, int overflow. Those are both constants! How come it didn't let me know I couldn't add them together?"
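A minimal, self-contained sketch of the non-memory-safety features mentioned above (newtypes, Option/Result, exhaustive matching); the types here are made up for illustration:

    struct PngImage(Vec<u8>); // newtype: not interchangeable with plain bytes,
    struct Caption(String);   // even though both are "just bytes/text" underneath

    enum Format {
        Png,
        Jpeg,
        // Adding a `Webp` variant here turns every match below that doesn't
        // handle it into a compile error until it is dealt with.
    }

    fn describe(format: Format) -> &'static str {
        match format {
            Format::Png => "lossless",
            Format::Jpeg => "lossy",
        }
    }

    fn first_byte(img: &PngImage) -> Option<u8> {
        img.0.first().copied() // no null pointers, no sentinel values
    }

    fn parse_caption(raw: &[u8]) -> Result<Caption, std::str::Utf8Error> {
        // The caller is forced to acknowledge that this can fail.
        Ok(Caption(std::str::from_utf8(raw)?.to_owned()))
    }

    fn main() {
        let img = PngImage(vec![0x89, b'P', b'N', b'G']);
        println!("{:?} / {}", first_byte(&img), describe(Format::Png));
        // describe(Caption("oops".into())); // rejected at compile time: wrong type
        match parse_caption(b"hello") {
            Ok(Caption(text)) => println!("caption: {text}"),
            Err(e) => eprintln!("not valid UTF-8: {e}"),
        }
    }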
Much of the drive to rewrite software in Rust is a reaction to the decades-long dependence on C and C++. Many people out there sit in the burning room like the dog in that meme, saying "this is fine". Most of them don't have to deal at all directly with the consequences involved.
Rust is the first language for a long time with a chance at improving this situation. A lot of the pushback against evangelism is from people who simply want to keep the status quo, because it's what they know. They have no concept of the systemic consequences.
I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.
If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada. The SPARK programming language, a subset of Ada, was used for the development of safety-critical software in the Eurofighter Typhoon, a British and European fighter jet. The software for mission computers and other systems was developed by BAE Systems using the GNAT Pro environment from AdaCore, which supports both Ada and SPARK. It's not just choosing the PL, but the whole environment including the managers.
Nvidia evaluated Rust and then chose SPARK/Ada for root of trust for GPU market segmentation licensing, which protects 50% profit margin and $4T market cap.
> If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada.
Not all code needs that level of assurance. But almost all code can benefit from better memory safety than C or C++ can reliably provide.
Re what people "should" be using, that's why I chose my words carefully and wrote, "Rust is the first language for a long time with a chance at improving this situation."
Part of the chance I'm referring to is the widespread industry interest. Despite the reaction of curmudgeons on HN, all the hype around Rust is a good thing for wider adoption.
We're always going to have people resistant to change. They're always going to use any excuse to complain, including "too much hype!" It's meaningless noise.
On the other hand, the presence of an alternative is the persuasion.
It's very easy to justify for yourself why you aren't addressing the hard problems in your codebase. Combine that with a captive audience, and you end up with everyone running the same steaming heap of technical debt and being unhappy about it.
But the second an alternative starts to get off the ground there's suddenly a reason to address those big issues: people are leaving, and it is clear that complacency is no longer an option. Either evolve, or accept that you'll perish.
That was probably a mischaracterization on my part. I wouldn't consider rewriting almost everything useful that's currently in C or C++ to be over the top. That would be a net good.
Posts that say "I rewrote X in Rust!" shouldn't actually be controversial. Every time you see one, you should think to yourself wow, the software world is moving towards being more stable and reliable, that's great!
But it is nonsense. Every time someone rewrites something (in Rust or anything else), I instead worry about what breaks again, what important feature is lost for the next decade, how much working knowledge is lost, what muscle memory is now useless, what documentation is outdated, etc.
I also doubt Rust brings as many advantages in terms of stability as people claim. The C code I rely on in my daily work basically never fails (e.g. I can't remember vim ever crashing on me in the 30 years I've used it). That this is all rotten C code that needs to be rewritten is just nonsense. IMHO it would be far more useful to invest in proper maintenance and incremental improvements.
Regarding VIM - it's not as risky as something that's exposed over a network, but it's had plenty of CVEs, and skimming them shows many if not most are related to memory safety. See:
Me neither; I just do not want to take steps backwards because of people rewriting stuff for stupid reasons. But your argument is just the old "you do not want to adapt" shaming attempt, which I think has no intellectual substance anyway.
Sometimes good things are ruined by people around. I think Rust is fine, although I doubt its constraints are universally true and sensible in all scenarios.
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
The problem is that those ports aren't supported and see basically zero use. Without continuous maintainer effort to keep software running on those platforms, subtle platform-specific bugs will creep in. Sometimes it's the application's fault, but just as often the blame will lie with the port itself.
The side-effect of ports being unsupported is that build failures or test failures - if they are even run at all - aren't considered blockers. Eventually their failure becomes normal, so their status will just be disregarded as noise: you can't rely on them to pass when your PR is bug-free, so you can't rely on their failure to indicate a genuine issue.
> Instead of just "builds with gcc" we would need to wait for Rust support.
There's always rustc_codegen_gcc (a gcc backend for rustc) and gccrs (a Rust frontend for gcc). They aren't quite production-ready yet, but there's a decent chance they're good enough for the handful of hobbyists wanting to run the latest applications on historical hardware.
As to adding new architectures: it just shifts the task from "write gcc backend" to "write llvm backend". I doubt it'll make much of a difference in practice.
> to rewrite some feature for a tiny security benefit
For what it's worth, the zero->one introduction of a new language into a big codebase always comes with a lot of build changes, downstream impact, debate, etc. It's good for that first feature to be some relatively trivial thing, so that it doesn't make the changes any bigger than they have to be, and so that it can be delayed or reverted as needed without causing extra trouble. Once everything lands, then you can add whatever bigger features you like without disrupting things.
> Canonical or Debian should work on porting the Rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.
They already have a Rust toolchain for every system Debian releases for.
The only architectures they're arguing about are non-official Debian ports for "Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4)", two of which are so obscure I've never even heard of them, and one of the others is most famous for powering retro video game systems like the Sega Genesis.
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.
Normally I’d agree, but the ports in question are really quite old and obscure. I don’t think anything would have changed with an even longer timeline.
I think the best move would have been to announce deprecation of those ports separately. As it was announced, people who will never be impacted by their deprecation are upset because the deprecation was tied to something else (Rust) that is a hot topic.
If the deprecation of those ports was announced separately I doubt it would have even been news. Instead we’ve got this situation where people are angry that Rust took something away from someone.
Those ports were never official, and so aren't being deprecated. Nothing changes about Debian's support policies with this change.
EDIT: okay so I was slightly too strong: some of them were official as of 2011, but haven't been since then. The main point that this isn't deprecating any supported ports is still accurate.
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
Imo this is true for going from one to a handful, but less true when going from a handful to more. Afaict there are 6 official ports and 12 unofficial ports (from https://www.debian.org/ports/).
It really comes down to which architectures you're porting to. The two biggest issues are big endian vs little endian, and memory consistency models. Little endian is the clear winner for actively-developed architectures, but there are still plenty of vintage big endian architectures to target, and it looks like IBM mainframes at least are still exclusively big endian.
For memory consistency, Alpha historically had value as the weakest and most likely to expose bugs. But nobody really wants to implement hardware like that anymore, almost everything falls somewhere on the spectrum of behavior bounded by x86 (strict) and Arm (weaker), and newer languages (eg. C++ 11) mean newer code can be explicit about its expectations rather than ambiguous or implicit.
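A small sketch of what "being explicit about expectations" looks like in a newer language (Rust here, mirroring the C++11 atomics model): byte order is spelled out instead of assumed from the host, and the release/acquire pair makes the publish work even on weakly ordered hardware rather than relying on what x86 happens to guarantee.

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        // Endianness: be explicit instead of assuming the host byte order.
        let value: u32 = 0xDEAD_BEEF;
        let big = value.to_be_bytes();    // e.g. network byte order, s390x
        let little = value.to_le_bytes(); // e.g. x86-64, typical aarch64
        assert_eq!(u32::from_be_bytes(big), u32::from_le_bytes(little));

        // Memory ordering: Release "publishes" DATA, Acquire observes it,
        // on strongly and weakly ordered CPUs alike.
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release);
        });
        let reader = thread::spawn(|| {
            while !READY.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });
        writer.join().unwrap();
        reader.join().unwrap();
    }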
> and the author didn't give them the opportunity to provide feedback on the change.
This is wrong: the author wrote a mail about _intended_ changes to the right Debian mailing list _half a year_ before shipping them. That is _exactly_ how giving people an opportunity to give feedback before making a change works...
Sure, they made it clear they don't want any discussion to be sidetracked by topics about things Debian doesn't officially support. That is not nice, but it is understandable; I have seen way too much time wasted on discussions being derailed.
The only problem here is people overthinking things and/or having issues with very direct language IMHO.
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit
It's not breaking anything supported.
The only things breaking are unsupported, and they're only niche-used anyway.
Nearly all projects have very limited capacity and have to draw boundaries, and the most basic boundary is that unsupported means unsupported. This doesn't mean you don't keep unsupported use cases in mind or avoid accidentally breaking them, but it does mean they don't majorly influence your decisions.
> And doing so on an unacceptably short timeline
Half a year for a change which only breaks unsupported things isn't "unacceptably short"; it's actually pretty long. If this weren't OSS you'd be lucky to get one month, and most likely less. People complain about how few resources OSS projects have, but the scary truth is most commercial projects have even fewer resources and must ship by a deadline. Hence it's very common for them to be far worse when it comes to code quality, technical debt, incorrectly handled niche error cases, etc.
> to every architecture they release for
The Rust toolchain has support for every architecture _they_ release for;
it breaks architectures that niche, unofficial third-party ports support.
Which is sad, sure, but unsupported is in the end unsupported.
> cost-benefit analysis done for this change.
Who says it wasn't done at all? People have done so over and over on the internet for all kinds of Linux distributions. But either way, you wouldn't include that in a mail announcing an intent to change (as you don't want discussions to be sidetracked). Also, the benefits are pretty clear:
- Using Sequoia for PGP seems to be the main driving force behind this decision. That project exists because of repeatedly running into issues (including security issues) with the existing PGP tooling. It happens to use Rust, but if there were no Rust it would still exist, just using a different language.
- Some file format parsing is in a pretty bad state, to the point that you will most likely rewrite it to fix it and make it robust. When doing so anyway, using Rust is preferable.
- And long term: due to the clear, proven(1) benefits of using Rust for _new_ projects/code, more and more of them use it. By not "allowing" Rust to be required, Debian bars itself from using any such project (like e.g. Sequoia, which seems to be the main driver behind this change).
> this "rewrite it in rust" evangilism
which isn't part of this discussion at all;
the main driving factor seems to be to use Sequoia, not because Sequoia is in Rust but because Sequoia is very well made and well tested.
Similarly, Sequoia isn't a "let's rewrite everything in Rust" project; rather, the state of PGP tooling is so painful for certain use cases (not all), in ways you can't fix by trying to contribute upstream, that some people needed new tooling, and Rust happened to be the choice for implementing it.
Command line utilities often handle not-fully-trusted data, and are often called from something besides an interactive terminal.
Take for example git: do you fully trust the content of every repository you clone? Sure, you'll of course compile and run it in a container, but how prepared are you for the possibility of the clone process itself resulting in arbitrary code execution?
The same applies to the other side of the git interaction: if you're hosting a git forge, it is basically a certainty that whatever application you use will call out to git behind the scenes. Your git forge is connected to the internet, so anyone can send data to it, so git will be processing attacker-controlled data.
There are dozens of similar scenarios involving tools like ffmpeg, gzip, wget, or imagemagick. The main power of command line utilities is their composability: you can't assume they'll only ever be used in isolation with trusted data!
Some people might complain about the startup cost of a language like Java, though: there are plenty of scripts around which are calling command-line utilities in a very tight loop. Not every memory-safe language is suitable for every command-line utility.
I totally agree. In reality, today, if you want to produce auditable high-integrity, high-assurance, mission-critical software, you should be looking at SPARK/Ada and even F* (fstar). SPARK has legacy real-world apps and a great ecosystem for this type of software. F* is being used on embedded systems and in other real-world apps where formal verification is necessary or highly advantageous. Whether I like Rust or not should not be the defining factor. AdaCore has a verified Rust compiler, but the tooling around it does not compare to that around SPARK/Ada. I've heard younger people complain about PLs being verbose, boring, or not their thing, and unless you're a diehard SPARK/Ada person, you probably feel that way about it too. But sometimes the tool doesn't have to be sexy or the latest thing to be the right thing to use. Name one Rust real-world app older than 5 years that is in this category.
> Name one Rust real-world app older than 5 years that is in this category.
Your "older than 5 years" requirement isn't really fair, is it? Rust itself had its first stable release barely 10 years ago, and mainstream adoption has only started happening in the last 5 years. You'll have trouble finding any "real-world" Rust apps older than 5 years!
As to your actual question: The users of Ferrocene[0] would be a good start. It's Rust but certified for ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 - clearly someone is interested in writing mission-critical software in Rust!
The point was: how would you justify choosing Rust based on any real-world proof? Maybe it will be ready in a few years, but even then it is far from achieving what you already have in SPARK, along with its proven legacy. I am very familiar with this, and I still chose SPARK/Ada instead of Rust. SPARK is already certified for all of this, and aerospace, railway, and other high-integrity industries are already familiar with the output of the SPARK tools, so there's less friction and time spent auditing them for certification. Aside from AdaCore collaborating with Ferrocene to get a compiler certified, I don't see much traction that would change our decision. We are creating show-control software for cyber-physical systems with potentially dire consequences, so we did a very in-depth study in Q1 2025, and Rust came up short.
> I love and use rust, it is my favorite language and I use it in several of my OSS projects but I'm tired of this "rewrite it in rust" evangelism and the reputational damage they do to the rust community.
This right here.
As a side note, I was reading one of Cloudflare's docs on how it implemented its firewall rules, and it's so utterly disappointing how the document stops being informative and suddenly starts to read like a parody of the whole cargo cult around Rust. Rust this, Rust that, while I was there trying to read up on how Cloudflare actually supports firewall rules. The way they focus on a specific and frankly irrelevant implementation detail conveys the idea that things are run by amateurs who are charmed by a shiny toy.
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.
The wording could have been better, but I don’t see it as a dig. When you look at the platforms that would be left behind, they’re really, really old.
It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog. Any project that lets that happen would be making a mistake.
It might have been better to leave out any mention of the old platforms in the Rust announcement and wait for someone to mention it in another post. As it was written, it became an unfortunate focal point of the announcement despite having such a small impact that it shouldn’t be a factor holding up progress.
Not just really, really old; they have in fact long since been deprecated and lost any semblance of official support.
I get the friction, especially for younger contributors, not that this is the case here. However, there are architectures that haven't even received a revision in their lifetime, and some old heads will take even the slightest inconvenience for their hobbyist port as a personal slight for which heads must roll.
I haven't seen any complaints from anyone who uses those ports personally. I would bet there's someone out there who uses Debian on those platforms, but 100% of the complaining I've seen online has been from people who don't use those ports.
It's the idea that's causing the backlash, not the impact.
> The wording could have been better, but I don’t see it as a dig.
He created (or at least re-activated) a dichotomy for zero gain, and he vastly increased the expectations for what a Rust rewrite can achieve. That is very, very bad in a software project.
The evidence for both is in your next paragraph. You immediately riff on his dichotomy:
> It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog.
(My emphasis.)
He wants to do a rewrite in Rust to replace old, craggy C++ that is so difficult to reason about that there's no chance of attracting new developers to the maintenance team with it. Porting to Rust therefore a) addresses memory safety, b) gives a chance to attract new developers to a core part of Debian, and c) gives the current maintainer a way to eventually leave gracefully in the future. I think he even made some of these points here on HN. Anyone who isn't a sociopath sympathizes with these points. More importantly, accidentally introducing some big, ugly bug in Rust apt isn't at odds with these goals. It's almost an expected part of the growing pains of a rewrite plus onboarding new devs.
Compare that to "holding up progress for everyone." Just reading that phrase makes me force-sensitive like a Jedi: I can feel the spite of dozens of HN'ers tingling at that and other phrases in these HN comments as they sharpen their hatred, ready to pounce on the Rust Evangelists the moment this project hits a snag. (And, like any project, it will hit snags.)
1. "I'm holding on for dear life here, I need help from others and this is the way I plan to get that help"
2. "Don't hold back everyone else's progress, please"
The kind of people who hear "key party" and imagine clothed adults reciting GPG fingerprints need to comprehend that #1 and #2 a) are completely different strings and b) have very different (let's just say magical) effects on the behavior of even small groups of humans.
> As an end user, it doesn't concern me too much ...
It doesn't concern me either, but there's an attitude here that makes me uneasy.
This could have been managed better. I can see a similar change happening in the future that could affect me, and now there will be precedent. With Canonical paying devs and all, it isn't a great way of influencing a community.
I agree. It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years. I've seen this play before, with different actors: gcc maintainers (regarding cross-bootstrapping ports), udev (regarding device naming, I think?), systemd (regarding systemd), and now with apt. Not all of them involved Canonical employees, and sometimes the Canonical employees were the voice of reason (e.g. that's how I remember Steve Langasek).
I'm sure some will point out that each example above was just an isolated incident, but I perceive a growing pattern of incidents. There was a time when Debian proudly called itself "The Universal Operating System", but I think that hasn't been true for a while now.
> It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years.
It's frankly the only way to maintain a distribution that relies almost completely on volunteer work! The more different options there are, the more expensive testing gets, in terms of human effort, engineering time, and hardware cost.
It's one thing if you're, say, Red Hat, with a serious number of commercial customers; they can and do pay for conformance testing and all the options. But for a fully FOSS project like Debian, it eventually becomes unmaintainable.
Additionally, the more "liberty" distributions take in how the system is set up, the more work software developers have to put in. Just look at autotools, an abomination that is sadly necessary.
> Canonical paying Devs and all, it isn't a great way of influencing a community.
That's kind of the point of modern open source organizations: let corporations fund the projects, in exchange they get a say in the direction, and hopefully everything works out. The bigger issue with Ubuntu is that they lack vision, and when they do try to ram things through, they give up at the slightest hint of opposition (and waste a tremendous amount of resources and time along the way). For example, Mir and Unity were perfectly fine technologies, but they were retired because Canonical didn't want to see things through. For such a successful company, it's surprising that their technical direction-setting is so unserious.
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers"
Yes, and more generally, as far as I am concerned, the antagonizing tone of the message, which is probably partly responsible for this micro-drama, is typical of some Rust zealots who never miss an occasion to remind C/C++ developers that (in their eyes) they are dinosaurs. When you promote your thing by belittling others, you are doing it wrong.
There are many high-profile DDs who work or have worked for Canonical and who are emphatically not the inverse: Canonical employees who are part of the Debian org.
The conclusion you drew is perfectly reasonable, but I’m not sure it is correct, especially given that, by comparison, Canonical is the newcomer. It could even be seen to impugn their integrity.
If you look at the article, it seems like the hard dependency on Rust is being added for parsing functionality that only Canonical uses:
> David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only "serious usage" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.
Mmm, apt-ftparchive is pretty useful for cooking up repos for "in-house" distros (which we certainly thought was serious...), but those tools are already in a separate binary package (apt-utils), so factoring them out at the source level wouldn't be particularly troublesome; a minimal example of that repo-building usage follows below. (I was going to add that nicer tools have turned up in the last 10 years, but the couple of examples I looked at depend on apt-utils, oops.)
I know you can make configure-time decisions based on the architecture and ship a leaner apt-utils on a legacy platform, but it's not as obvious as "oh yeah that thing is fully auxiliary and in a totally different codebase".
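For anyone who hasn't used apt-ftparchive, here is a minimal sketch of that in-house repo workflow. The /srv/myrepo layout, the "stable"/"main" names, and keeping the .debs under ./pool are placeholder assumptions; a real repo would also compress the Packages file and sign the Release.

    import subprocess
    from pathlib import Path

    repo = Path("/srv/myrepo")  # hypothetical repo root with .debs under ./pool
    packages_dir = repo / "dists/stable/main/binary-amd64"
    packages_dir.mkdir(parents=True, exist_ok=True)

    # apt-ftparchive scans a directory tree of .debs and emits Packages stanzas.
    packages = subprocess.run(
        ["apt-ftparchive", "packages", "pool"],
        cwd=repo, check=True, capture_output=True, text=True,
    ).stdout
    (packages_dir / "Packages").write_text(packages)

    # Generate a Release file with checksums over everything under dists/stable.
    release = subprocess.run(
        ["apt-ftparchive", "release", "dists/stable"],
        cwd=repo, check=True, capture_output=True, text=True,
    ).stdout
    (repo / "dists/stable/Release").write_text(release)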
I understand, but the comment to which I was replying implied that this keeps happening in general. That’s not fair to the N-1 other DDs who aren’t the subject of this LWN article (which I read!)