I regularly use Graviton CPUs on AWS (even if it's Amazon that pays the cheaper ARM license). Why would people switch back from that? It's effectively better in terms of performance/price, and I expect these improvements to slowly but steadily reach the on-premises world as well.
Yeah, but everything on AWS is already way more expensive than it should be, so the slight discount on ARM instances is a gimmick from Amazon to diversify their servers or something. Actual ARM servers aren't cheaper or better.
Mac has always had horrible window management, made worse because applications and windows are separate concepts. That used to seem clever, but in a world of multiple workspaces it's a terrible decision. Now it's even worse when trying to manage multiple LLMs and projects.
Yes, but it's much worse than that, because it makes multiple workspaces essentially unusable. Try them on Windows or any Linux desktop: when a window is also an application, handling them is much more seamless. Not to mention the animation on macOS (slide or fade) takes multiple seconds, and when it completes it takes another 500ms to actually focus. That's if it even focuses the right window when switching, which is currently a bug. Been there for years.
Edited. I'm not strictly saying this was caused by AI; it's more of a general point that AI is really good at producing crap work, which would make the generator spin faster.
It is so important to use specific prompts for package upgrading.
Think about what a developer would do:
- check the latest version online;
- look at the changelog;
- evaluate whether it’s worth upgrading, or whether an intermediate version may be alright in case code updates are necessary;
Of course, you could keep these operations among the human ones, but if you really want to automate this part (and you are ready to pay the consequences) you need to mimic the same workflow.
I use Gemini and Codex to look for package version information online; they check the changelogs from the version I’m on to the one I’d like to upgrade. I spawn a Claude Opus subagent to check whether anything in the code needs to change. For major releases, I git clone the two package versions and another subagent checks whether the interfaces I use have changed. Finally, I run all my tests and verify everything’s alright.
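To make the first step concrete, a minimal sketch of the "how far behind are we" check (assuming a Python project on PyPI; the package name and pin are just examples) could look like:

    # Hypothetical helper: list PyPI releases newer than the pinned version.
    import requests
    from packaging.version import Version, InvalidVersion

    def versions_behind(package: str, pinned: str) -> list[str]:
        data = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10).json()
        pin = Version(pinned)
        newer = []
        for raw in data["releases"]:
            try:
                v = Version(raw)
            except InvalidVersion:
                continue  # skip legacy, non-PEP 440 version strings
            if v > pin and not v.is_prerelease:
                newer.append(v)
        # Oldest first, so the changelogs can be walked release by release.
        return [str(v) for v in sorted(newer)]

    print(versions_behind("requests", "2.28.0"))

The returned list is exactly the changelog range worth handing to the model.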
Yes, it still might not be perfect, but neither am I.
I wouldn't say that AI is killing B2B SaaS. Let's analyze the situation: AI is shifting the boundaries of the build-vs-buy decision.
If software is cheaper... building it seems a much better option but...
...the fact that software is cheaper means that SaaS companies will be able to be more competitive on price, offer more features, and integrate them much better with the newest agentic tools and frameworks that are being released.
Sure, the market will adjust, some will be pushed away, others will find businesses that weren't possible before... but we're far away (I hope) from the AGI revolution; don't forget to see this crisis as an opportunity too.
Hetzner is definitely an interesting option. I’m a bit scared of managing the services on my own (like Postgres, Site2Site VPN, …) but the price difference makes it so appealing. From our financial models, Hetzner can win over AWS when you spend over 10~15K per month on infrastructure and you’re hiring really well. It’s still a risk, but a risk that can definitely be worth it.
> I’m a bit scared of managing the services on my own
I see it from the other direction: if something fails, I have complete access to everything, meaning I have a chance of fixing it, right down to the hardware even. When I run stuff in the cloud, things get abstracted away, hidden behind APIs, and the data lives beyond my reach.
Security issues and regular mistakes are much the same in the cloud, but then I have to layer whatever complications the cloud provider comes with on top. The cost has to be much, much lower if I'm going to trust a cloud provider over running something in my own data center.
You sum it up very neatly. We've heard this from quite a few companies, and that's kind of why we started ours.
We figured, "Okay, if we can do this well, reliably, and de-risk it, then we can offer that as a service and just split the difference on the cost savings."
(plus we include engineering time proportional to cluster size, and also do the migration on our own dime as part of the de-risking)
I've just shifted my SWE infrastructure from AWS to Hetzner (literally in the last month). My current analysis suggests it will be about 15-20% of the cost - £240 vs 40-50 euros.
Expect a significant exit expense, though, especially if you are shifting large volumes of S3 data. That's been our biggest expense. I've moved this to Wasabi at about 8 euros a month (vs about $70-80 a month on S3), but I've paid transit fees of about $180 - and it was more expensive because I used DataSync.
Retrospectively, I should have just DIYed the transfer, but maybe others can benefit from my error...
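For anyone else DIYing it: Wasabi speaks the S3 API, so a minimal sketch with plain boto3 (bucket names, region and keys below are placeholders) is roughly:

    # Hypothetical one-off copy from an AWS S3 bucket to a Wasabi bucket.
    import boto3

    src = boto3.client("s3")  # AWS credentials from the default profile
    dst = boto3.client(
        "s3",
        endpoint_url="https://s3.eu-central-1.wasabisys.com",  # example region
        aws_access_key_id="WASABI_KEY",
        aws_secret_access_key="WASABI_SECRET",
    )

    SRC_BUCKET, DST_BUCKET = "my-aws-bucket", "my-wasabi-bucket"  # made-up names

    for page in src.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET):
        for obj in page.get("Contents", []):
            body = src.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"]
            # upload_fileobj streams, so large objects never sit fully in memory
            dst.upload_fileobj(body, DST_BUCKET, obj["Key"])

You pay AWS egress on every byte either way; the DIY route just skips DataSync's own per-gigabyte fee on top.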
Extremely useful information - unfortunately I just assumed this didn't apply to me because I am in the UK and not the EU. Another mistake, though given it's not huge amounts of money I will chalk it up to experience.
Hopefully someone else will benefit from this helpful advice.
I’m curious to know the answer, too. I used to deploy my software on-prem back in the day, and that always included an installation of Microsoft SQL Server. So, all of my clients had at least one database server they had to keep operational. Most of those clients didn’t have an IT staff at all, so if something went wrong (which was exceedingly rare), they’d call me and I’d walk them through diagnosing and fixing things, or I’d Remote Desktop into the server if their firewalls permitted and fix it myself. Backups were automated and would produce an alert if they failed to verify.
It’s not rocket science, especially when you’re talking about small amounts of data (small credit union systems in my example).
No it was not. 15 years ago Heroku was all the rage. Even the places that had bare metal usually had someone running something similar to devops, and at least the core infra was not being touched. I am sure such places existed, but 15 years ago, while a long time back, things were already pretty far along from what you describe. At least in SV.
Heroku was popular with startups who didn’t have infrastructure skills but the price was high enough that anyone who wasn’t in that triangle of “lavish budget, small team, limited app diversity” wasn’t using it. Things like AWS IaaS were far more popular due to the lower cost and greater flexibility but even that was far from a majority service class.
I am not sure if you are trying to refute my lived experience or what exactly the point is. Heroku was wildly popular with startups at the time, not just those with lavish budgets. I was already touching RDS at this point, and even before RDS came around no organization I worked at had me jumping on bare metal to provision services myself. There was always a system in place where someone helped engineering deploy systems. I know this was not always the case, but the person I was responding to made it sound like 15 years ago all engineers were provisioning their own databases and doing other kinds of dev/sysops work on a regular basis. It’s not true, at least in SV.
I have no doubt that was your experience. My point was that it wasn’t even common in SV as a whole, just the startup scene. Think about headcount: how many times fewer people worked at your startup than at any one of Apple, Oracle, HP, Salesforce, Intuit, eBay, Yahoo, etc.? Then think about how many other companies there are just in the Bay Area who have large IT investments even if they’re not tech companies.
Even at their peak, Heroku was a niche. If you’d gone to conferences like WWDC or Pycon at the time, they’d be well represented, yes, and plenty of people liked them, but it wasn’t a secret that they didn’t cover everyone’s needs or that pricing was off-putting for many people, and that tended to go up the bigger the company you talked to, because larger organizations have more complex needs and they use enough stuff that they already have teams of people with those skills.
I think we are talking past each other here. While your original language is a bit provocative, you're not wrong; as I have already agreed, startups, absolutely. Lavish budgets? No, that's just silly.
Again, 15 years ago, even in moderately large organizations, it was quite common for a product engineer not to be responsible for provisioning all the required services for whatever you were building. And again, it’s not the rule, but it is far from being an exception. Not sure what you’re trying to prove or disprove.
A tricky thing on this site is that there are lots of different people with very different kinds of experience, which often results in people talking past each other. A lot of people here have experience as zero-to-one early startup engineers, and yep, I share your experience that Heroku was very popular in that space. A lot of other people have experience at later growth and infrastructure focused startups, and they have totally different experiences. And other people have experience as SREs at big tech, or doing IT / infrastructure for non-tech fortune 500 businesses. All of these are very different experiences, and very different things have been popular over the last couple decades depending on which kind of experience you have.
Absolutely true, but I also think it’s a fair callout when the intent was to disprove the original post asking how old someone was, on the claim that 15 years ago everyone was stringing together their own services, which is absolutely not true. There were many shades of gray at that time: in my experience, either having a sysops/devops team to help or deploying to Heroku, as well as folks who were indeed stringing together services.
I find it equally disingenuous to suggest that Heroku was only for startups with lavish budgets. Absolutely not true. That’s my only purpose here. Everyone has different experiences but don’t go and push your own narrative as the only one especially when it’s not true.
I kind of thought the "15 years" was just one of those things where people kind of forget what year it is. Wow, 2010 was already over 15 years ago?? That kind of mistake. I think this person was thinking pre-2005. I graduated college just after that, and that's when all this cloud and managed services stuff was just starting to explode. I think it's true that before that, pretty much everyone was maintaining actual servers somewhere. (For instance, I helped out with the physical servers for our CS lab some when I was in college. Most of what we hosted on those would be easier to do on the cloud now, but that wasn't a thing then.)
Did you fail to finish reading the rest? At the same time, I was in touch with organizations that were still in data centers, but as an engineer I never touched the bare metal, and ticketing systems were in place to help provision the necessary services. I was not deploying my own Postgres database.
It's 2026 and banks are still running their mainframes, running Windows VMs on VMware, and building their enterprise software in Java.
The big boys still run datacenters they own.
Sure, they try dabbling with cloud services, and maybe they've pushed their edge out there, and some minor services they can afford to experiment with.
If you are working at a bank you are most likely not standing up your own Postgres and related services. Even 15 years ago. I am not saying it never happened; I am saying that even 15 years ago, even large orgs with data centers often had sysops and devops teams in place that helped with provisioning resources. Obviously not the rule, but also not an exception.
True. We had separate teams for Oracle and MSSQL management. We had 3 teams each for Windows, "midrange" (Unix) and mainframe server management. That doesn't include IAM.
Ahah I'm 31, but deciding if it makes sense to manage your own db doesn't depend on the age of the CTO.
See, turning up a VM, installing and running Postgres is easy.
The hard part is keeping it updated, keeping the OS updated, automating backups, deploying replicas, encrypting the volumes and the backups, demonstrating all of the above to a third-party auditor... and mind that there might be many other things I'm honestly not even aware of!
I'm not saying I won't go that path; it might be a good idea after a certain scale, but in the first and second year of a startup your mind should 100% be on "How can I make my customer happy?" rather than "We failed the audit again, we won't have the SOC 2 Type I certification in time to sign that new customer".
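To put a size on even the simplest item on that list, a minimal sketch of "automate backups" plus "encrypt the backups" (database name, paths and recipient key are made up; pg_dump and gpg assumed on PATH) would be something like:

    # Hypothetical nightly job: dump the database, then encrypt the dump.
    import datetime
    import subprocess

    stamp = datetime.date.today().isoformat()
    dump_path = f"/var/backups/appdb-{stamp}.dump"

    # Custom-format dump so pg_restore can do selective restores later.
    subprocess.run(["pg_dump", "--format=custom", "--file", dump_path, "appdb"], check=True)

    # Encrypt with a key whose private half lives off this host.
    subprocess.run(["gpg", "--encrypt", "--recipient", "backups@example.com", dump_path], check=True)

    # Shipping the .gpg off-site, pruning old dumps, testing restores and
    # producing audit evidence are all still on you.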
If deciding between Hetzner and AWS was so easy, one of them might not be pricing its services correctly.
I’m wondering if it makes sense to distribute your architecture so that the workers that do most of the heavy lifting are in Hetzner, while the other stuff is in costly AWS. On the other hand this means you don’t have easy access to S3, etc.
Depends on how data-heavy the work is. We run a bunch of GPU training jobs on other clouds with the data ending up in S3 - the extra transfer costs are small relative to what we save by getting the GPUs from the cheapest cloud available, so it makes a lot of sense.
Also, just the availability of these things on AWS has been a real pain - I think every startup got a lot of credits there, so there's a flood of people then trying to use them.
Is it though? This is a genuine question. My intuition is that the investment of time / stress / risk to become good at this is unlikely to have high ROI to either the person putting in that time or to the business paying them to do so. But maybe that's not right.
It's more nuanced for sure than my pithy comment suggests. I've done both self-managed and managed and felt it was a good use of my time to self-manage given the size of the organizations, the profile of the workloads and the cost differential. There is a whole universe of technology businesses that do not earn SV/FAANG levels of ROI - for them, self-managed is a reasonable allocation of effort.
One point to keep in mind is that the effort is not constant. Once you reach a certain level of competency and stability in your setup, there is not much difference in time spent.
I also felt that self-managed gave us more flexibility in terms of tuning.
My final point is that any investment in databases whether as a developer or as an ops person is long-lived and will pay dividends for a longer time than almost all other technologies.
Managing the PostgreSQL databases is a medium to low complexity task as I see it.
Take two equivalent machines, set up with streaming replication exactly as described in the documentation, add Bacula for backups to an off-site location for point-in-time recovery.
We haven't felt the need to set up auto fail-over to the hot spare; that would take some extra effort (and is included with AWS equivalents?) but nothing I'd be scared of.
Add monitoring that the DB servers are working, replication is up-to-date and the backups are working.
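For the replication part of that monitoring, a minimal sketch (assuming psycopg2, PostgreSQL 10 or newer, and a made-up DSN and threshold) is a single query against the primary:

    # Hypothetical check run from cron or the monitoring agent.
    import psycopg2

    MAX_LAG = 60  # seconds of replay lag before alerting (example threshold)

    conn = psycopg2.connect("host=db-primary dbname=postgres user=monitor")  # example DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT client_addr, state,
                   COALESCE(EXTRACT(EPOCH FROM replay_lag), 0) AS lag_seconds
            FROM pg_stat_replication
        """)
        rows = cur.fetchall()

    if not rows:
        print("ALERT: no standby connected")
    for addr, state, lag in rows:
        if state != "streaming" or lag > MAX_LAG:
            print(f"ALERT: standby {addr} state={state} lag={lag:.0f}s")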
> We haven't felt the need to set up auto fail-over to the hot spare; that would take some extra effort (and is included with AWS equivalents?) but nothing I'd be scared of.
this part is actually the scariest, since there are like 10 different 3rd-party solutions of unknown stability and maintainability.
> Managing the PostgreSQL databases is a medium to low complexity task as I see it.
Same here. But, I assume you have managed PostgreSQL in the past. I have. There are a large number of software devs who have not. For them, it is not a low-complexity task. And I can understand that.
I am a software dev for our small org and I run the servers and services we need. I use ansible and terraform to automate as much as I can. And recently I have added LLMs to the mix. If something goes wrong, I ask Claude to use the ansible and terraform skills that I created for it, to find out what is going on. It is surprisingly good at this. Similarly I use LLMs to create new services or change configuration on existing ones. I review the changes before they are applied, but this process greatly simplifies service management.
> Same here. But, I assume you have managed PostgreSQL in the past. I have. There are a large number of software devs who have not. For them, it is not a low-complexity task. And I can understand that.
I'd say needing to read the documentation for the first time is what bumps it up from low complexity to medium. And then at medium you should still do it if there's a significant cost difference.
You can ask an AI or StackOverflow or whatever for the correct way to start a standby replica, though I think the PostgreSQL documentation is very good.
But if you were in my team I'd expect you to have read at least some of the documentation for any service you provision (self-hosted or cloud) and be able to explain how it is configured, and to document any caveats, surprises or special concerns and where our setup differs / will differ from the documented default. That could be comments in a provisioning file, or in the internal wiki.
That probably increases our baseline complexity since "I pressed a button on AWS YOLO" isn't accepted. I think it increases our reliability and reduces our overall complexity by avoiding a proliferation of services.
I hope this is the correct service, "Amazon RDS for PostgreSQL"? [1]
The main pair of PostgreSQL servers we have at work each have two 32-core (64-vthread) CPUs, so I think that's 128 vCPU each in AWS terms. They also have 768GiB RAM. This is more than we need, and you'll see why at the end, but I'll be generous and leave this as the default the calculator suggests, which is db.m5.12xlarge with 48 vCPU and 192GiB RAM.
That would cost $6559/month, or less if reserved which I assume makes sense in this case — $106400 for 3 years.
Each server has 2TB of RAID disk, of which currently 1TB is used for database data.
That is an additional $245/month.
"CloudWatch Database Insights" looks to be more detailed than the monitoring tool we have, so I will exclude that ($438/month) and exclude the auto-failover proxy as ours is a manual failover.
With the 3-year upfront cost this is $115000, or $192000 for 5 years.
Alternatively, buying two of yesterday's [2] list-price [3] Dell servers which I think are close enough is $40k with five years warranty (including next-business-day replacement parts as necessary).
That leaves $150000 for hosting, which as you can see from [4] won't come anywhere close.
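The rough arithmetic behind those figures (the 5-year number assumes simply extending the 3-year reserved rate):

    # Back-of-the-envelope comparison using the figures quoted above.
    reserved_3yr = 106_400          # 3-year upfront reserved db.m5.12xlarge, USD
    storage_per_month = 245         # 2TB of RDS storage, USD/month

    rds_3yr = reserved_3yr + storage_per_month * 36             # ~= $115k
    rds_5yr = reserved_3yr / 36 * 60 + storage_per_month * 60   # ~= $192k

    dell_pair = 40_000              # two Dell servers, 5-year NBD warranty

    print(f"RDS, 3 years: ${rds_3yr:,.0f}")
    print(f"RDS, 5 years: ${rds_5yr:,.0f}")
    print(f"Left for 5 years of hosting: ${rds_5yr - dell_pair:,.0f}")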
We overprovision the DB server so it has the same CPU and RAM as our processing cluster nodes — that means we can swap things around in some emergency as we can easily handle one fewer cluster node, though this has never been necessary.
When the servers are out of warranty, depending on your business, you may be able to continue using them for a non-prod environment. Significant failures are still very unusual, but minor failures (HDDs) are more common and something we need to know how to handle anyway.
For what it's worth, I have also managed my own databases, but that's exactly why I don't think it's a good use of my time. Because it does take time! And managed database options are abundant, inexpensive, and perform well. So I just don't really see the appeal of putting time into this.
If you have a database, you still have work to do - optimizing, understanding indexes, etc. Managed services don't solve these problems for you magically and once you do them, just running the db itself isn't such a big deal and it's probably easier to tune for what you want to do.
Absolutely yes. But you have to do this either way. So it's just purely additive work to run the infrastructure as well.
I think if it were true that the tuning is easier if you run the infrastructure yourself, then this would be a good point. But in my experience, this isn't the case for a couple reasons. First of all, the majority of tuning wins (indexes, etc.) are not on the infrastructure side, so it's not a big win to run it yourself. But then also, the professionals working at a managed DB vendor are better at doing the kind of tuning that is useful on the infra side.
Maybe, but you're paying through the nose continually for something you could learn to do once - or someone on your team could easily learn with a little time and practice. Like, if this is a tradeoff you want to make, it's fine, but at some point that extra 10% of learning can halve your hosting costs, so it's well worth it.
It's not the learning, it's the ongoing commitment of time and energy into something that is not a differentiator for the business (unless it is actually a database hosting business).
I can see how the cost savings could justify that, but I think it makes sense to bias toward avoiding investing in things that are not related to the core competency of the business.
This sounds medium to high complexity to me. You need to do all those things, and also have multiple people who know how to do them, and also make sure that you don't lose all the people who know how to do them, and have one of those people on call to be able to troubleshoot and fix things if they go wrong, and have processes around all that. (At least if you are running in production with real customers depending on you, you should have all those things.)
With a managed solution, all of that is amortized into your monthly payment, and you're sharing the cost of it across all the customers of the provider of the managed offering.
Personally, I would rather focus on things that are in or at least closer to the core competency of our business, and hire out this kind of thing.
You are right. Are you actually seriously considering whether to go fully managed or self managed at this point? Pls go AWS route and thank me later :)
I ran through roughly our numbers here [1], it looks like self-hosted costs us about 25% of AWS.
I didn't include labour costs, but the self-hosted tasks (set up of hardware, OS, DB, backup, monitoring, replacing a failed component which would be really unusual) are small compared to the labour costs of the DB generally (optimizing indices, moving data around for testing etc, restoring from a backup).
Yes thank you for that. I always feel like these up front cost analyses miss (or underrate) the ongoing operational cost to monitor and be on call to fix infrastructure when problems occur. But I do find it plausible that the cost savings are such that this can be a wise choice nonetheless.
I really do not think so. Most startups should rather focus on their core competency and direct engineering resources to their edge. When you are $100 mln ARR then feel free to mess around with whatever db setup you want.
Or CDN, queues, log service, observability, distributed storage. I am not even sure what the people in the on-prem vs cloud argument think. If you need a highly specialised infra with one or two core services and a lower-tier network is OK, then on-prem is OK. Otherwise it is a never-ending quest to re-discover the millions of engineering hours that went into building something like AWS.
This is absurd: the fact that Windows has 70% of global desktop operating system market share makes it their most important moat, so why are they deliberately taking steps to make it worse?
They added ads, forced updates, mandatory Microsoft account activation, so much unwanted AI slop... Think about it: if it were another, more competitive company, they would be charging for the AI service and the onboarding experience would be totally different. It seems like the management is totally disconnected from their product.
I think the summary is that "product owner" types and other agile simulacrums wanted it simply because they viewed it as an easy win towards KPI and other performance metrics. The most damning proof of this is Copilot in Notepad, and that half-attempt at renaming the entire Office suite to simply "Copilot" (they seemed to reverse this a few days later).
The moat is still secure. So the strategy is to loot as much value as possible from within the moat. After all, where are the corporate customers going to go?