Nothing screams being infantilised by your platform more than having to wait 24 hours to be allowed to install software on your own purchased computing devices.
No judgement whatsoever, but almost everyone will think: no big deal, you only install software through stores anyway, right? Nothing changes for them; in fact, they can't conceive of an alternative anymore.
How can you judge if Google's plan is a good one? Add up the harms caused by the new rules, weigh them against the reduction in harm, and see where the balance lies?
I have a hard time believing the net outcome for the overall Android community would be negative.
It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.
How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.
So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.
AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.
It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.
If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?
Huh, I thought that the MS tutorial was older. The blurry screenshot in it is from 2023.
And there is another website with the same content (including the sloppy diagram). I had assumed that they just plagiarized the MS tutorials.
Maybe the vendor who did the MS tutorial just plagiarized (or re-published) this one?:
You are assuming: A) That everyone who saw this would go as far as to post publicly about it (and not just chuckle / send it to their peers privately) and B) Any post about this would reach you/HN and not potentially be lost in the sea of new content.
> So we end up shipping documentation that nobody really reads
I'd note that the documentation may well have been read and noticed as flawed, but a random person who notices is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it, because I'm busy trying to figure out how to actually accomplish the goal I was reading the documentation for, rather than stopping what I'm doing to complain about how bad the documentation is.
None of this absolves anyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.
I usually would post it in our dev slack chat and rant for a message or two how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it on here/BlueSky.
If you work in a medium to large company, you know most of the documentation is there for compliance reasons or for showing others that you did something at one point. You can probably just put slop at the end of documents, while you still keep headlines relevant and no one will ever read it or notice it.
You should be unbelievably proud of what you've achieved, and it's lovely to be reminded of the amazing things people can accomplish amongst the backdrop of almost deafeningly negative sentiment going around.
Thanks for doing what you do and for sharing your story!
Thank you :) Watsi is lucky to have an incredible team and medical partners, who work in some of the most challenging environments to provide care to patients.
The AI fatigue is real, and the cooling-off period is going to hurt. We’re deep into concept overload now. Every week it’s another tool (don’t get me started on Gas Town) confidently claiming to solve… something. “Faster development”, apparently.
Unless you’re already ideologically committed to this space, I don’t see how the average engineer has the energy or motivation to even understand these tools, never mind meaningfully compare them. That’s before you factor in that many of them actively remove the parts of engineering people enjoy, while piling on yet another layer of abstraction, configuration, and cognitive load.
I’m so tired of being told we’re in yet another “paradigm shift”. Tools like Codex can be useful in small doses, but the moment it turns into a sprawling ecosystem of prompts, agents, workflows, and magical thinking, it stops feeling like leverage and starts feeling like self-inflicted complexity.
> I don’t see how the average engineer has the energy or motivation to even understand these tools, never mind meaningfully compare them
This is why I use the Copilot extension in VS Code. They seem to just copy whatever useful thing climbs to the surface of the AI-tool slop pile. Last week I loaded it up and Opus 4.6 was there ready to use. Yesterday I found it has a new Claude tool built in, which I used to do some refactoring... it worked fine. It's like having an AI tool curator.
Your point about the overwhelming proliferation of AI tools, and not knowing which are worth any attention and which are trash, is very true; I feel that a lot today (my solution is basically to just lean into one or two and ask them for recommendations on other tools, with mixed success).
The “I’m so tired of being told we’re in another paradigm shift” comments are widely heard and upvoted on HN, and they're just so hard to comprehend today. Their authors aren't seeing the writing on the wall, or following where the ball is going to be even 6-12 months out. We have scaling laws, multiple METR benchmarks, and internal and external evals of a variety of flavors.
“Tools like Codex can be useful in small doses” — the best and most prestigious engineers I know inside and outside my company virtually do not code at all. I'm not one of them, but I also don't code at all whatsoever. Agents are sufficiently powerful to justify and explain themselves and walk you through as much of the code as you want them to.
Yeah, I’m not disputing that AI-assisted engineering is a real shift. It obviously is.
My issue is that we’ve now got a million secondary “paradigm shifts” layered on top: agent frameworks, orchestration patterns, prompt DSLs, eval harnesses, routing, memory, tool calling, “autonomous” workflows… all presented like you’re behind if you’re not constantly replatforming your brain.
Even if the end-state is “engineers code less”, the near-term reality for most engineers is still: deliver software, support customers, handle incidents, and now also become competent evaluators of rapidly changing bot stacks. That cognitive tax is brutal.
So yes, follow where the ball is going. I am. I’m just not pretending the current proliferation is anything other than noisy and expensive to keep up with.
This is a very good point. Years ago working in a LAMP stack, the term LAMP could fully describe your software engineering, database setup and infrastructure. I shudder to think of the acronyms for today's tech stacks.
And yet many of the same people who lament the tooling bloat of today will, in a heartbeat, make lame jokes about PHP. Most of them aren't even old enough to have ever done anything serious with it, or seen it in action beyond Wordpress or some spaghetti-code one-pager they had to refactor at their first job. Then they show up on HN with a vibe-coded side project or blog post about how they achieved a 15x performance boost by inventing server-side rendering.
As time goes on, I find myself increasingly worried about supply chain attacks—not from a “this could cost me my job” or “NixOS, CI/CD, Node, etc. are introducing new attack vectors” perspective, but from a more philosophical one.
The more I rely on, the more problems I’ll inevitably have to deal with.
I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.
Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.
Emacs itself is probably secure, and you can easily audit every extension, but if you blindly update every extension via a nicely composable Emacs Nix configuration, you would indeed have a problem.
I guess one could automate finding obvious exploits via an LLM, and abort the update if the LLM flags anything.
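A minimal sketch of that gate in Python, with a crude keyword scan standing in for the LLM call (the `SUSPICIOUS` patterns and the `review_diff`/`apply_update` helpers are hypothetical names; a real setup would ship the diff to a model and parse its verdict):

```python
import difflib

# Hypothetical stand-in for an LLM review: patterns that would make an
# Elisp extension update worth a closer look before applying.
SUSPICIOUS = ["url-retrieve", "shell-command", "base64-decode"]

def review_diff(old_src: str, new_src: str) -> list[str]:
    """Return the lines added by the update that match a suspicious pattern."""
    diff = difflib.unified_diff(old_src.splitlines(), new_src.splitlines(),
                                lineterm="")
    # Lines added by the update start with "+"; skip the "+++" file header.
    added = [line[1:] for line in diff
             if line.startswith("+") and not line.startswith("+++")]
    return [line for line in added
            if any(pat in line for pat in SUSPICIOUS)]

def apply_update(old_src: str, new_src: str) -> bool:
    """Abort (return False) if the review flags anything, else proceed."""
    findings = review_diff(old_src, new_src)
    if findings:
        print("update aborted, flagged lines:", findings)
        return False
    return True
```

The only load-bearing idea is reviewing the *diff* rather than the whole source, so each update only has to justify what it changed.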
The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.
Formal verification solves nothing. You can have a formally verified 100% secure backdoor exploit. (Ultimately it all depends on the semantics of "sysadmin" vs "hacker", who are really just two different roles of the same person.)
This is also why signing code commits isn't a solution, only a way to trace ends when something fucks up.
Formal verification would solve everything. It's just that whoever uses the software actually needs to understand the specification; but given some trusted base of I/O primitives (like "read a file"), such things become trivial to check. Even Haskell has this in a limited fashion via Safe Haskell, and to an even lesser extent via its IO monad.
The specification for a text editor would be much simpler than an implementation. For example, efficiently searching for a substring is non-trivial, but its specification is easy. So all I would be interested in is a proof that "eval(optimized_substring_search needle haystack) = eval(easy_substring needle haystack)", for example. Obviously, many thousands of such theorems would be needed to clone Emacs, but at least a new release would no longer contain bugs. (Wrongly specifying something would still happen, but it's much easier to write a specification of desired behavior than to find the exact bug in someone else's mess, because that mess conflates implementation and specification in the first place.)
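The spec/implementation split above can be sketched in a few lines of Python (the function names mirror the hypothetical `easy_substring` and `optimized_substring_search` from the comment; an exhaustive property check over tiny inputs stands in for the Coq proof):

```python
from itertools import product

def easy_substring(needle: str, haystack: str) -> bool:
    """The specification: the obvious, slow check at every offset."""
    return any(haystack[i:i + len(needle)] == needle
               for i in range(len(haystack) - len(needle) + 1))

def optimized_substring_search(needle: str, haystack: str) -> bool:
    """The implementation under test: Python's built-in search, standing in
    for a hand-rolled KMP or Boyer-Moore."""
    return needle in haystack

# In place of a proof: check the two agree on every pair of strings of
# length <= 3 over a two-letter alphabet (including the empty string).
strings = ["".join(p) for n in range(4) for p in product("ab", repeat=n)]
assert all(easy_substring(n, h) == optimized_substring_search(n, h)
           for n in strings for h in strings)
```

The point is exactly the one made above: `easy_substring` is trivially auditable, so proving (or here, merely checking) that the fast version agrees with it is all the trust you need in the fast version.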
I did understand; it's just that I am way ahead of you.
Your point is that users are too stupid/lazy to comprehend specifications. That is, they won't bother to read that the specification of their formally verified secure version of Google Maps really just copies their credit card data to a random server.