The default has been to pay $x/month for every service. I've seen startups that require a dozen service accounts just to run the software, and dozens more to get onboarded org-wide. One service for feature flags. One for logs. One for traces. One for error handling. Another for ticket tracking, which is completely separate from your planning, design, and CI services. Jesus. What do people hope to accomplish here besides deferring blame?
Replacing SaaS isn't about building replacement services 1:1. It's about figuring out what you actually needed in the first place! Often we use only a tiny fraction of what the full-blown SaaS offers. IOW, it's about eliminating the service entirely and building something that fits your actual needs, rather than following what some VC thinks your needs are.
AI or not, the "build vs buy" pendulum is now swinging hard toward build. And IMO that's a real opportunity to consolidate, trim some fat, and actually apply engineering practices rather than blindly signing up for every SaaS that crosses your path.
This fundamentally misunderstands a couple of things.
DIY software is "free" the way a free yacht is free. It looks appealing at first, but there are a lot of expensive hidden costs, upkeep, and pitfalls.
For one, this is a bad assumption:
> building something that fits your actual needs
Unless your business is very small and not growing, this is a moving target. Your needs are going to change as you grow and different groups in the org are going to have different needs.
You really don't want to be dicking around recreating software that already exists instead of doing the shit that actually makes you money. Spending a few hundred thousand on a bunch of software is nothing; that's what you spend on one engineer.
You buy a SaaS product because you have a problem and want to throw money at someone else to deal with it.
Try CodeCompanion if you're using neovim. I have a keybind set up that takes the highlighted region, prepends some context saying roughly "if you see a TODO comment, do it; if you see a WTF comment, try to explain it", and presents an inline diff to accept/reject the edits. It's great for tactical LLM use on small sections of code.
For strategic use on any larger codebase, though, it's more productive to use something like plan mode in Claude Code.
Considering LLMs are models of language, investing in the clarity of the written word pays off in spades.
I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern.
Going full "literate programming" may not be necessary. I'd maybe reframe it as a focus on communication. Notebooks, examples, scripts and such can go a long way to reinforcing the patterns.
Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.
Yeah, I think what is needed is somewhere between docstrings+strategic comments, and literate programming.
Basically, it's incredibly helpful to document the higher-level structure of the code, almost like extensive docstrings at the file level and subdirectory level and project level.
The problem is that major architectural concepts and decisions are often cross-cutting across files and directories, so those aren't always the right places. And there's also the question of what properly belongs in code files, vs. what belongs in design documents, and how to ensure they are kept in sync.
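To make the file-level documentation idea concrete, here's a sketch of a module-level docstring that carries higher-level structure; every filename and decision in it is invented for illustration:

```python
"""Order reconciliation pipeline.

WHY THIS EXISTS: payments and inventory settle on different schedules,
so we re-derive order state nightly instead of trusting event order.

STRUCTURE (start reading in reconcile.py):
  fetch.py     - pulls raw settlement files; no business logic here
  normalize.py - maps vendor-specific rows to the internal Order type
  reconcile.py - the actual matching rules

CROSS-CUTTING DECISION: all money is integer cents end to end.
The rationale lives in docs/adr/0007-money-as-cents.md, and that
document links back here, which is one way to keep the two in sync.
"""
```

The same pattern scales: a short README per subdirectory for structure that's local, and design documents (cross-linked from the docstrings) for the decisions that cut across directories.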
The question being - are LLMs 'good' at interpreting and making choices/decisions about data structures and relationships?
I do not write code for a living but I studied comp sci. My impression was always that the good software engineers did not worry about the code, not nearly as much as the data structures and so on.
The only use of code is to process data, aka information. And any knowledge worker knows that success in processing information relies mostly on how that information is organized (try operating a library without an index).
Most of the time is spent researching what data is available and learning what data should be returned after processing. Then you spend a bit of brain power connecting the two. The code is always trivial. I don't remember ever discussing code in the workplace since I started my career. It was always about plans (hypotheses), information (data inquiry), and specifications (especially when collaborating).
If the code is worrying you, it would be better to buy a book on whatever technology you're using and refresh your knowledge. I keep bookmarks in my web browser and have a few books on my shelf that I occasionally page through.
Wow, the world is getting much faster at exploiting CVEs
> 67.2% of exploited CVEs in 2026 are zero-days, up from 16.1% in 2018
But the exploit rate (the percentage of all published CVEs that are actually exploited in the wild) has dropped from a high of 2.11% in 2021 to 0.64% in 2026. Meaning we're either getting worse at exploitation (not likely) or reporting more obscure, pragmatically-not-really-an-issue issues that can't be replicated IRL.
So we're in a weird situation:
The vast majority of CVEs (99.4%) will never see the light of day as an actual attack. Lots of noise, and it's getting noisier.
But those that do will happen with increasing speed! So there are increased consequences for missing the signal.
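Rough arithmetic on those rates, with an assumed publication volume (the 0.64% and 67.2% figures come from the report quoted above; the 40,000 total is made up for illustration):

```python
published = 40_000        # assumed CVEs published in a year (invented figure)
exploit_rate = 0.0064     # 0.64% of published CVEs get exploited in the wild
zero_day_share = 0.672    # 67.2% of exploited CVEs are zero-days

exploited = published * exploit_rate
zero_days = exploited * zero_day_share

print(round(exploited))   # 256 exploited in the wild
print(round(zero_days))   # 172 of those via zero-days
print(f"{1 - exploit_rate:.1%} never exploited")  # 99.4% never exploited
```

So even at that volume, the actionable signal would be a few hundred CVEs a year, buried in tens of thousands of published ones.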
The entire zeitgeist of software technology revolves around the assumption that making things efficient, easy, and quick is inherently good. Most people who are "sitting in front of rectangles, moving tiny rectangles" sometimes have grandiose notions of their work's importance: we're making X work better for the good of Y to enable Z. Abstract shit like that.
No man, you're just making X easier. If the world needs more X, fine. If not, whoops.
The detachment from reality makes it all too easy to deceive yourself into thinking "hey this actually helps people".
> Most people who are "sitting in front of rectangles, moving tiny rectangles"
Hey dude these are my emotional support rectangles!
Truth is, anything can be meaningful. We make our own meaning and almost anything will do as long as you believe in it. If optimizing rectangles on the screen makes you happy, that’s great. If it doesn’t, find something else to do.
It’s really just because those of us choosing this profession are also very good at optimizing chosen metrics. But we don’t always ask whether they are good metrics, or whether they become counterproductive past some point.
This is one of the reasons why I'm so disgusted by the mainstream voices around AI. As if I'm going to be "left behind" because my only priority isn't increasing shareholder value or building a saas that makes the world a worse place.
Requirements handed down? Never seen it in 25 years. The requirements are always fluid, by definition. At best, you get a wish list which needs to be amended with reality. If you have completely static requirements, you don't need an engineer! You just do it. Engineering IS refining the requirements according to empirical data.
Once you have requirements that are correct (for all well-defined definitions of "correct"), the code implementation is so trivial that an LLM can do it :-)
Doing things "faster" and "easier" is an interesting way to put it. It places all of the value on one's personal experience of using the AI, and completely ignores the quality of the thing produced. Which explains why most stuff produced by LLMs is throwaway garbage! It only reinforces the parent comment - there is virtually no emphasis on making things "better".
There is a funny, deep observation made by The Good Place character Michael (a non-human) that has stuck with me since. He says that humans took ice cream, which was perfect, and "ruined it a little" to invent frozen yogurt, just so they could have more of it. There's supposedly a 'guilt' angle there somewhere but I never felt guilty for eating "too much" ice cream so can't relate.
Still, this "making something worse so you can have more of it" shows up pretty much everywhere in human experience. Sometimes it's depressing, other times amazing to see what was achieved with that mentality, and it seems AI is just accelerating it.
There won't even be a quality conversation if a thing isn't built in the first place, which is the tendency when the going is slow and hard. AI makes the highly improbable very probable.
I agree. I think this is the LLM superpower: making quick prototypes that allow us to speak concretely about technical tradeoffs.
My comment was pointed at people who use AI specifically with the goal of making anything easier and faster, no matter what it is. "Faster and easier is better", as though doing more of the same shit were a goal in itself.
If you're using AI to explore better technical decisions, you're doing it right! AI can be a catalyst for engineering and science. But not if we treat it like a mere productivity tool. The quality of the thing enabled by the AI very much matters.
Doing things faster/easier means I now do most of these things whereas I didn't before.
Because I have limited time and energy. Take learning as an example:
I couldn't afford to spend a weekend learning the tradeoffs made by the top 5 WebGL JavaScript game engines AND generating the same demos for all of them to compare DX and performance on my phone. And as I had more questions about their implementations, I would have to scavenge their code again for each question.
A sample of the questions I had (and as I asked, it would suggest new questions for things I didn't know I should ask):
- Do they perform sorting, or is their drawing immediate? Sort on z? z and y? z/y and layers? Immediate-ish + layers? Is frustum culling supported? What's their implementation for it, if any?
- What are their GPU atlas strategies? Fixed size? Multiple atlases, grouped by drawing frequency to reduce atlas switching? 2048? 4096? How many atlases? Does it build the atlas at boot, or does it support progressive atlas sprite loading? How does it deal with fragmentation? What packing algo does it use, skyline or something more advanced? What are its batch-splitting behaviour and performance characteristics?
- Does it help with ECS? How is its hierarchical-entity DX, if any? Does it use matrix math for transformations, or something simpler? Shader support? Does it use an über-shader for most things? And what about polygons? Also, how does it help with texture bleeding? What's its camera implementation? Does it support spatial audio?
...and so on. Multiply the number of questions by at least 10.
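For a flavor of what the draw-order questions are probing, here's a toy Python sketch (purely illustrative, not taken from any of the five engines) of one strategy: sort by layer, then z, then screen-y, and split draw batches wherever the atlas page changes:

```python
from dataclasses import dataclass

@dataclass
class Sprite:
    layer: int    # coarse grouping (background=0, actors=1, UI=2)
    z: float      # explicit depth within a layer
    y: float      # screen-y tiebreak so lower sprites draw on top
    atlas: int    # which atlas page the texture lives on

def draw_order(sprites):
    # layer first, then z, then y: one of the orderings asked about above
    return sorted(sprites, key=lambda s: (s.layer, s.z, s.y))

def split_batches(ordered):
    """Yield runs of sprites on the same atlas page; each run is one
    draw call, so atlas switches mid-layer cost you extra batches."""
    batch = []
    for s in ordered:
        if batch and s.atlas != batch[-1].atlas:
            yield batch
            batch = []
        batch.append(s)
    if batch:
        yield batch
```

Sorting and batching interact: an order that alternates atlas pages degenerates into one sprite per draw call, which is why the "grouping by drawing frequency" question matters.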
And I asked the LLM to show me the code behind each answer, in all 5 engines.
This kind of learning just wasn't feasible for me before with my busy life.
So when I say "easier" it often means "made possible".
Finally, let's not forget that most of us on HN are incredibly privileged and can afford to learn futile things on the weekend. For a great part of the less privileged population, having access to easier learning is LIFE CHANGING.
I agree with the sentiment, that most non-decisions are really implicit decisions in disguise. They have implications whether you thought about them up front or not. And if you need to revisit those non-decisions, it will cost you.
But I don't like calling this tech debt. The tech debt concept is about taking on debt explicitly, as in choosing the sub-optimal path on purpose to meet a deadline then promising a "payment plan" to remove the debt in the future. Tech debt implies that you've actually done your homework but picked door number 2 instead. A very explicit choice, and one where decision makers must have skin in the game.
A hurried, implicit choice has none of those characteristics - it's ignorance leading (inevitably?) to novel problems. That doesn't fit the debt metaphor at all. We need to distinguish tech debt from plain old sloppy decision making. Maybe management can even start taking responsibility for decisions instead of shrugging and saying "Tech debt, what can you do, amirite?"
> Succinctness, functionality and popularity of the language are now much more important factors.
Not my experience at all. The most important factor is simplicity and clarity. If an LLM can find the pattern, it can replicate that pattern.
Language matters to the extent it encourages/forces clear patterns. A language with more examples, shorter tokens, more popularity, etc. doesn't matter at all if the codebase is a mess.
Functional languages like Elixir make it very easy to build highly structured applications. Each fn takes in a thing and returns another. Side effects? What side effects? LLMs can follow this function composition pattern all day long. There's less complexity, objectively.
But take a less disciplined language, throw in arbitrary side effects, hidden control flow, and mutable state, and the LLM will fail to find an obviously correct pattern and guess wildly. In practice, this makes logical bugs much more likely. Millions of examples don't help if your codebase is a swamp. And languages without said discipline often end up in a swamp.
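The composition pattern being described, sketched in Python rather than Elixir (names and data are made up; the point is the shape):

```python
# Elixir-style pipeline: each step takes a value and returns a new one.
# No shared mutable state, so each function can be read (and generated)
# in isolation -- the "pattern" an LLM can safely replicate.
def parse(line: str) -> dict:
    user, amount = line.split(",")
    return {"user": user.strip(), "amount": int(amount)}

def apply_discount(order: dict) -> dict:
    # returns a NEW dict instead of mutating the input
    return {**order, "amount": order["amount"] - 5}

def summarize(order: dict) -> str:
    return f'{order["user"]} owes {order["amount"]}'

# Composition reads inside-out here; in Elixir it would be |> pipes:
result = summarize(apply_discount(parse("ada, 100")))
print(result)  # ada owes 95
```

Each step can be read, tested, and regenerated in isolation; the equivalent with shared mutable state forces the model (and the reader) to track hidden context across functions.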
This is the hidden superpower of LLMs: prototyping without attachment to the outcome.
Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then spend a few more weeks making it happen. Then if it didn't work out, it felt like failure and everyone got frustrated.
Now it's assumed you can make it work fast, so do it four different ways and test them empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and the high emotional attachment that used to dominate the decision process.
I basically just _accidentally_ added a major new feature to one of my projects this week.
In the sense that I was trying to explain what I wanted to do to a coworker and my manager, and we kept going back and forth trying to understand the shape of it, what value it would add, how much time it would be worth spending, and what priority we should put on it.
And I was like: let me just spend an hour putting together a partially working prototype for you. Claude got _so close_ to completely one-shotting the entire feature on my first prompt that I ended up spending 3 hours just putting on the finishing touches, and we shipped it before we even wrote a user story. We did all that work after it was already done. Claude even mocked up a fully interactive UI for our UI designer to work from.
It's literally easier and faster to just tell Claude to do something than to explain to a coworker why you want to do it.
That's only because no one understood agile or XP, and they've become a "no one actually does that stuff" joke to many. I have first-hand experience prototyping full features in a day or two and throwing the result away. It comes with the added benefit of getting your hands dirty and being able to make more informed decisions when doing the actual implementation. It has always been possible; most people just didn't want to do it.