In my workplace, it's availability. We have to use US-only models for government-compliance reasons, so we have access to Opus 4.6 and GPT 5.4, but only Gemini 2.5, which isn't in the same class as the first two.
I complain about movie costs while I watch movies at home, drive a VW that was under $40k new, live in a state with a minimum wage over $17 an hour, and refuse to pay $14 plus tip to a food truck that doesn't provide seating when I can pay $12 and no tip at a fast food restaurant that does provide dine-in seating.
Some of us live our principles; we're not all just whinging hypocrites.
> when I have something to say about my day, there's nowhere to say it; no one on HN cares whether I fixed up the blinds or cooked pork steaks.
As someone who lives alone, two ways I address this: talking to yourself out loud (and to your pets like they're people), and writing these things down in a journal. Every night after I get into bed, but before I turn out the light and fall asleep, I write a journal entry. Sometimes they're quite mundane, exactly like your examples: "I cooked a steak for dinner that turned out better than I expected" or "Tomorrow I'm thinking about making some bread." There's no pressure on entry length; I fill anywhere from three sentences to a full page each night. But it helps fill the 'how do I communicate this minor accomplishment or discomfort that nobody else cares about' need for me.
My added 2 cents is to write in a journal and also to read it.
If it helps, be meta and write about what you would look forward to reading in your own journal - what kind of writing keeps you coming back to it.
Certainly, an awesome evergreen entry is your reflection on a previous entry.
Just like material on how to blog, there are self-help books on how to journal well.
Solitude doesn't have to be a curse if we learn how to treat it as a blessing.
For the 2D version, you might not need much custom hardware: use a regular pen plotter with a pen that has conductive ink. Both exist today. Though personally, as a hobbyist PCB designer, I can get 2-layer and 4-layer boards cheaply enough from JLCPCB or Oshpark or PCB Unlimited that I don't bother trying to make them myself.
I haven't tried it myself so you might be right, but I was thinking of the silver conductive pens from Chemtronics, with a sheet resistance of 0.02-0.05 ohms/sq/mil.
For attachment, I'd evaluate their conductive epoxy or maybe glue down the underside of the component and then smother the lead with the silver conductive ink. But again, just hypothetical since I have a quickturn shop make cheap prototype PCBs for me and either hand solder or use a stencil, paste, and a hot air gun for my hobbyist projects.
Yes, conductive ink has too high a resistance, at least the carbon-based kind; a "trace" can easily measure kilo-ohms, and the metal-ink interface makes things worse.
I remember reading some "Sputnik" magazines from the 1970s where Russian scientists were searching for the holy grail of a good conductive resin. I didn't understand at the time why they found the (concept of the) thing so valuable; but now I've got an inkling...
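To put rough numbers on that, here's a minimal sketch of the R = Rs x (L/W) estimate. It assumes ~1 mil of dried ink, so the Chemtronics spec above applies directly per square; the carbon sheet-resistance value is just a guess for illustration:

    // Trace resistance from sheet resistance: R = Rs * (L / W).
    // L/W is the number of "squares" along the trace; Rs is ohms per square
    // (the 0.02-0.05 ohm/sq/mil spec above, assuming ~1 mil of dried ink).
    fn trace_resistance(sheet_ohms_per_sq: f64, length_mm: f64, width_mm: f64) -> f64 {
        sheet_ohms_per_sq * (length_mm / width_mm)
    }

    fn main() {
        // A 50 mm long, 1 mm wide trace is 50 squares.
        println!("silver pen: {} ohm", trace_resistance(0.05, 50.0, 1.0)); // 2.5 ohm
        println!("carbon ink: {} ohm", trace_resistance(500.0, 50.0, 1.0)); // 25000 ohm
    }

So a silver trace stays usable for signals, while a carbon one lands in the kilo-ohm range even before the metal-ink interface is counted.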
I grant that SpaceX engineers are smart people and can figure out how to make Starship and Superheavy reliable and reusable.
But if they have to launch 10-14 times in order to get the propellant to the LEO depot in order to fuel the Lunar Starship, can we actually deliver that many launches' worth of LOX and LNG to the launch pads in the timeframe needed to prevent it all from boiling off once in orbit before Lunar Starship can get there, get refueled, and head to the moon? I don't know the answer to that, and to me that seems like the hard problem.
When Korolyov worked on the N-1 rocket in the 1960s, some plans included building a hydrogen upper stage. http://astronautix.com/n/n1blocksr.html Hydrogen is rather hard to keep cold, but that stage was designed to work for over 11 days.
Falcon-9 flies almost every other day, about 3 times per week. Methane is way more storable than hydrogen. Of course we'd like to compare numbers, but Starship is way bigger than that N-1 stage - about 15 times - and there is the square-cube law, which in our case says the bigger the tank, the lower the percentage of boiloff per unit of time; and it's methane; and we can afford to lose a little and top off with another tanker...
Now, how many tanker flights will we need? That's a favorite riddle in Musk's plans :) . Korolyov, again, had some early ideas for 5 tankers - https://graphicsnickstevens.substack.com/p/sever-the-bridge-... ... For Starship: if you have 1500 tons of fuel in the Starship and 150 tons of payload per tanker, you need 10 flights. You can probably optimize, or be set back by some obstacles - so, 8-12 flights? That many can fly in less than a month. We can also take additional measures to reduce boiloff - better protection from the Sun, active cooling, maybe a more permanent orbital refueling depot - but even at today's Falcon-9 flight rate we could consider refueling one Starship per month in LEO. And even if some refueling flights aren't successful, replacements can be sent.
I personally suspect Starship will fly much more often than Falcon-9. We're so much better at rendezvous and docking these days than we were during the Apollo flights, and reliability is so much higher - just look at how many Falcon-9 flights in a row have been successful - that I don't think LEO refuelling will present a significant operational problem. And I'm sure we only need a couple of years to see the first examples of that.
Space is hard, yes. But we're getting better, for sure.
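To put rough numbers on the tanker count above, a back-of-the-envelope sketch; the 15% boiloff fraction is pulled out of thin air for illustration, not from any SpaceX figure:

    // flights = ceil(propellant / (payload per tanker, less boiloff losses))
    fn tanker_flights(propellant_t: f64, per_flight_t: f64, boiloff_frac: f64) -> u32 {
        let delivered = per_flight_t * (1.0 - boiloff_frac);
        (propellant_t / delivered).ceil() as u32
    }

    fn main() {
        println!("{}", tanker_flights(1500.0, 150.0, 0.0));  // 10 flights, lossless
        println!("{}", tanker_flights(1500.0, 150.0, 0.15)); // 12 flights at 15% boiloff
    }

Which is how you land on the 8-12 range: the count is just 1500/150 plus however much the losses and margins push it up.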
There's a huge difference between sending up a stage full of H2 and transferring H2 from one stage to another with acceptable losses at cryo temperatures.
NASA is actually further ahead with space refuelling tech than SpaceX. But either way the tech is unlikely to work at scale this decade.
In 1992, before I went to school, I watched a car parallel park itself in NYC on Today on NBC. My mind was reeling: automated car technology is right around the corner! That technology did not ship for 20 years.
It is easy to say we are getting better; that doesn't mean we will see, in this case, Starship fly in the near future. And while I have the utmost confidence in Gwynne Shotwell, I am not holding my breath that we see Starship launch with any meaningful payload in this decade.
They are already past the point where they could have expended Starship, reused just Super Heavy, and launched payloads successfully. It is only their own goal of a fully reusable system that is preventing it.
SpaceX is the undisputed king of launch cadence. Falcon 9 just flies every other day nowadays.
If anyone can take "we need 14 launches per mission" and make it work, it's SpaceX.
Boiloff isn't somehow unsolvable. We know cryogenics can work in space, and SpaceX's approach is actually less aggressive than Blue Origin's requirement of zero boiloff on LH2.
> But if they have to launch 10-14 times in order to get the propellant to the LEO depot in order to fuel the Lunar Starship, can we actually deliver that many launches' worth of LOX and LNG to the launch pads in the timeframe needed
If only Starbase were located somewhere near abundant gas pipelines, within spitting distance of the Texas Shale Oil boom…
I'm an embedded software engineer by day, and like it or not I have to acknowledge that AI tooling is coming to our work. So I'm currently learning to interact with AI coding tools like Claude Code more effectively and efficiently by "vibe-coding" a game for a family member on my personal time: something inspired by a blend of 'Recettear' and 'Stardew Valley', with the twist that the player shopkeeper is an anthropomorphic cat.
Not OP, but one example I can think of: Jeff Bezos moved from Washington state to Florida two years after Washington enacted a 7% capital gains tax "on the sale or exchange of long-term capital assets such as stocks, bonds, business interests, or other investments and tangible assets"[1] which "reportedly helped him save $1 billion in taxes."[2]
I mean, yeah. When people say saving the planet, they mean saving humanity. That's exactly it. A barren rock does no one any good. I don't get why people cling to this expression; it's as if you heard that George Carlin bit and now that's your anchor to reality.
It's not like the dinosaurs had a save the earth campaign. Yet, before humans the rock had life forms that died out while the rock itself continued being a viable planet supporting life. If humans die off, the planet will continue on with life continuing in new ways.
For the past 50+ years there really has been a somewhat significant and quite influential body of people who genuinely want to preserve the planet’s ecosystem even at the expense of the people living on it.
I think it might be the organizational architecture that needs to change.
> However, we have never before applied a killswitch to a rule with an action of “execute”.
> This is a straightforward error in the code, which had existed undetected for many years
So they shipped an untested configuration change that triggered untested code straight to production. This is "tell me you have no tests without telling me you have no tests" level of facepalm. I work on safety-critical software where if we had this type of quality escape both internal auditors and external regulators would be breathing down our necks wondering how our engineering process failed and let this through. They need to rearchitect their org to put greater emphasis on verification and software quality assurance.
> In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages
> Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time.
But hasn't all that foundational code been stable and wrung out already over the last 30+ years? The .tar and .ar file formats are both from the 70s; what new benefits will users or developers gain from that thoroughly battle-tested code being thrown out and rewritten in a new language with a whole new set of compatibility issues and bugs?
I wish, but I get new security bugs in those components every year or so. Not all are tracked with security updates, to be fair; for some we say it's your own fault if you use the library to parse untrusted input.
After all, the library wasn't designed around safety; we assumed the .debs you pass to it are trusted in some way - because you published them to your repository, or you are about to install them and they have root maintainer scripts anyway.
But as stuff like hosting sites and PPAs came up, we got operators publishing debs from untrusted users, and hence suddenly there was a security boundary of sorts, and these bugs became problematic.
Of course, memory safety is only one concern here: if you have, say, one process publishing repos for multiple users, panics can also cause a denial of service. But it's a step forward from potential code-execution exploits.
I anticipate the rewrites being as close to 1:1 as possible to avoid introducing bugs, but then adding actual unit tests to them.
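For a flavor of what "close to 1:1, but bounds-checked" can look like, here's a minimal hypothetical sketch of reading one ar(1) member header from untrusted input. The 60-byte layout is the classic Unix ar format; the function and its error handling are illustrative, not APT's actual code:

    // ar(1) member header: name(16) mtime(12) uid(6) gid(6) mode(8) size(10) magic(2).
    // Returns the member name and body size, or an error; never reads out of bounds.
    fn parse_ar_header(input: &[u8]) -> Result<(String, usize), &'static str> {
        // Bounds-checked slice: a truncated file yields Err, not an overread.
        let header = input.get(..60).ok_or("truncated header")?;
        if &header[58..60] != b"`\n" {
            return Err("bad end-of-header magic");
        }
        let name = std::str::from_utf8(&header[..16])
            .map_err(|_| "name is not valid UTF-8")?
            .trim_end()
            .to_string();
        let size: usize = std::str::from_utf8(&header[48..58])
            .map_err(|_| "size is not valid UTF-8")?
            .trim_end()
            .parse()
            .map_err(|_| "size is not a number")?;
        // The declared body must actually fit in the remaining input.
        if input.len() - 60 < size {
            return Err("member body truncated");
        }
        Ok((name, size))
    }

A fuzzer can hammer that with garbage and the worst outcome is an Err, which is the whole point of the exercise.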
> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?
Not necessarily. The "HTTP signature verification code" sounds like it's invoking cryptography, and the sense I've had from watching the people who maintain cryptographic libraries is that the "foundational code" is the sort of stuff you should run away screaming from. In general, it seems to me to be the cryptography folks who have beaten the drum hardest for moving to Rust.
As for other kinds of parsing code, the various archive file formats aren't exactly evolving, so there's little reason to update them. On the other hand, this is exactly the kind of space where there's critical infrastructure that has probably had very little investment in adversarial testing, past or present, so it's not clear that age has actually shaken out the security-critical bugs. Much as OpenSSL had a trivially exploitable, high-criticality vulnerability for two years before anybody noticed.
For actual cryptography code, the best path is formally verified implementations of the crypto algorithms, with parsers for wrapper formats like OpenPGP or PKCS#7 implemented in a memory safe language.
You don't want the core cryptography implemented in Rust for Rust's sake when there's a formally verified assembly version next to it. Formally verified _always_ beats anything else.
I should have clarified that I was primarily referring to the stuff dealing with all the wrapper formats (like PKIX certificate verification), not the core cryptographic algorithms themselves.
The core cryptographic algorithms, IMHO, should be written in a dedicated language for writing cryptographic algorithms so that they can get formally-verified constant-time assembly out of it without having to complain to us compiler writers that we keep figuring out how to deobfuscate their branches.
Sure. But assembly implementations are by definition not portable. And I don't know what it takes to write a formally verified library like this, but I bet it's very expensive.
In contrast, a Rust implementation can be compiled for many architectures easily, and is intrinsically safer than a C version.
Plus, cryptography and PKI are constantly evolving, so they can't benefit from decades-old trusted implementations.
Formally verified in an obscure language where it's difficult to find maintainers does not beat something written in a more "popular" language, even if it hasn't been formally verified (yet?).
And these days I would (unfortunately) consider assembly as an "obscure language".
(At any rate, I assume Rust versions of cryptographic primitives will still have some inline assembly to optimize for different platforms, or, at the very least, make use of compile intrinsics, which are safer than assembly, but still not fully safe.)
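For what that looks like in practice, a minimal sketch of one AES round via the x86-64 AES-NI intrinsic; note the `unsafe`, since the compiler can't prove the CPU actually has the feature:

    #[cfg(target_arch = "x86_64")]
    use core::arch::x86_64::{__m128i, _mm_aesenc_si128, _mm_loadu_si128, _mm_storeu_si128};

    // One AES encryption round on a 16-byte block. Safer than hand-written
    // assembly (the compiler handles registers and the ABI), but still unsafe:
    // calling this on a CPU without AES-NI is undefined behavior.
    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "aes")]
    unsafe fn aes_round(block: &mut [u8; 16], round_key: &[u8; 16]) {
        let b = _mm_loadu_si128(block.as_ptr() as *const __m128i);
        let k = _mm_loadu_si128(round_key.as_ptr() as *const __m128i);
        let r = _mm_aesenc_si128(b, k);
        _mm_storeu_si128(block.as_mut_ptr() as *mut __m128i, r);
    }

So the unsafety shrinks to one well-marked spot (the feature check), rather than being spread across a whole .s file.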
It's insanely complex, particularly if you want _verified_ crypto. Last year (or two years ago?) I had to fix a tiny typo in OpenSSL's ARM assembly, for example; it was breaking APT and Postgres left and right, but only got triggered on AWS :D
You don't want to write the whole thing in assembly, just the parts that need to be constant time. Even those are better written as subroutines called from the main implementation.
Take BLAKE3 as an example. There's asm for the critical bits, but the structural parts that are going to be read most often are written in Rust, like the reference impl.
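The classic example of such a "critical bit" is constant-time comparison for MACs/tags. A sketch of the idea (real code would reach for something like the `subtle` crate, since an optimizer can in principle still outsmart this):

    // XOR-accumulate every byte pair, so the running time depends only on
    // the length, never on the position of the first mismatch.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false; // length is public, so an early exit here is fine
        }
        let mut diff: u8 = 0;
        for (x, y) in a.iter().zip(b) {
            diff |= x ^ y;
        }
        diff == 0
    }

A naive == on byte slices bails at the first mismatch, which is exactly the timing signal an attacker measures to forge a tag byte by byte.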
I would like a special-purpose language to exist precisely for writing cryptographic code where you always want the constant-time algorithm. In this niche language, "We found a 20% speed-up for Blemvich-Smith - oops, it actually isn't constant time on Arrow Lake microcode versions 18 through 46" wouldn't even get into a nightly, let alone be released for use.
It seems that, for reasons I don't understand, this idea isn't popular and people really like hand-rolling assembly.
I do think this is pretty much the one use case for a true "portable assembler", where it basically is assembly except the compiler will do the register allocation and instruction selection for you (so you don't have to deal with, e.g., the case that add32 y, x, 0xabcdef isn't an encodable instruction because the immediate is too large).
If you mean GnuPG, that is what Snowden used. It could be better than new software that may have new bugs. Memory safety is a very small part of cryptographic safety.
(New cryptographic software can also be developed by all sorts of people. In this case I'm not familiar, but we do know that GnuPG worked for the highest profile case imaginable.)
GPG works great if you use it to encrypt and decrypt emails manually as the authors intended. The PGP/GPG algorithms were never intended for use in APIs or web interfaces.
Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.
Isn't it also funny that all of these things are done by the same person?
In all seriousness though, let me assure you that I plan to take a very considerate approach to Rust in APT. A significant benefit of doing Rust in APT, rather than rewriting APT from scratch in Rust, is that we can avoid redoing all our past mistakes, because we can look at our own code and translate it directly.
Honestly having seen trainwreck after trainwreck after trainwreck come out of Canonical for the last decade, I'm sure I'm not the only one that has strong doubts about anyone associated being able to "avoid redoing past mistakes" or to make things not suck.