HP makes them, so does Dell. They cost a bit extra, but essentially the whole Federal government runs on nothing else.
The difference between the EU and the US is that it's possible to make all the components in the US, using US equipment, and some companies do because it commands a pretty decent premium. It's not even that hard, since most components (e.g. reference motherboard designs) are still designed and actually built in the US. China still mostly does what you might politely call "commercializing US tech". And let's not discuss too deeply whether they correctly pay licensing for all the components they make, because nobody enjoys that discussion.
And yep, as you might expect, only Intel chips, no Nvidia cards ... and that's not the end of the limitations. The previous version had no USB-C monitor support, never mind driving multiple monitors over one USB-C cable, but last year Intel pushed a bit harder. Even this year, though, I hope you're not planning to use these machines for gaming.
The EU can't even make the USB controller chip on a modern motherboard.
Oh and yes, there are cracks in the US version too. The phones used, for example, are iPhones. Radio designed in South Korea ...
The most technologically critical component of ASML's EUV lithography machines (the EUV light source) is designed, developed, and manufactured in California by Cymer.
> Is that an easy mistake to make and a hard one to recover from, in your experience?
If you're alone in a codebase? Probably not.
In a company with many contributors of varying degrees of competence (from your new grad to your incompetent senior staff), yes.
In large repositories, without extremely diligent reviewers, it's impossible to prevent developers from creating the most convoluted, anti-pattern-ridden spaghetti code, which then gets copy/pasted ad nauseam across your codebase.
Terraform as a tool and HCL as a programming language leave a lot to be desired (in hindsight only, because, let's be honest, they've been a boon for automation), but their constrained nature makes it easier to rein in the zealous junior developer who just discovered OOP and insists on trying it everywhere...
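For what it's worth, a rough sketch of what HCL nudges you toward (resource and variable names made up, assuming the AWS provider): flat, explicit declarations, where reuse happens through plain variables and modules rather than class hierarchies.

    # Hypothetical example: HCL has no inheritance or class hierarchies to abuse,
    # so the natural thing to write is an explicit, reviewable resource block.
    variable "environment" {
      type    = string
      default = "staging"
    }

    resource "aws_s3_bucket" "logs" {
      bucket = "example-logs-${var.environment}"

      tags = {
        Environment = var.environment
        ManagedBy   = "terraform"
      }
    }

The worst a junior can usually do here is over-parameterize a module; the language simply doesn't give them the machinery to build the deep abstraction towers you see in general-purpose languages.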
Isn't that largely nationalism and pressuring companies to use (initially) mediocre local tech solutions though? Once the market is there, quality catches up rapidly.
GUI apps are good for discoverability. They generally are not optimized for rapid use by power users though. That's of course not an inherent limitation of GUI apps, it's just that dominant paradigms hardly seem to consider power users.
I'm still annoyed and baffled by the fact that Ubuntu had searchable application menus 10 years ago, which were awesome and worked for just about any program, and then dropped them when they abandoned Unity. And neither KDE nor GNOME thought to bring them back. Instead, many apps have since dropped application menus entirely, in favour of... some mishmash of icons and ad hoc menu buttons?
Also, even in the post-LLM era, building a decent GUI app is a lot more work than building a decent terminal app.
Another factor is the lack of a good cross-platform GUI toolkit (by which I mean one that (a) doesn't feel out-of-place and jarring on any platform and (b) doesn't turn a 50K app into a 1GB download.)
Between that and the level of complexity of modern GUI toolkits - and the size of their dependency trees - distributing a GUI app is a much bigger headache than distributing a terminal app.
That hasn't been true for a while; it's easily the best of the bunch at this point. It's also always been trivial to change, which can't be said of the others.
On smartcards, yes, and maybe on Android, but certainly not on iPhones. On iOS, it's only been possible to implement alternatives to Apple Pay since 17.4 (2024), and only in Europe (EEA).
But they already had their own solutions that worked just fine. I can't see how switching to integrate a new system instead would save on dev costs. There surely must be some other reason?
You'd add a lot of technical complexity, especially if you need this to be instant. You'd lose the ability to effectively fight fraud, and because of this get a huge target on your back, attracting all sorts of unwanted behavior.
On the other hand, you'd gain... nothing? Especially since consumers cannot be expected to run their own blockchain stack, they'd need to fully trust their banks and intermediaries anyhow.