
Dario Amodei says software engineering will be fully automated by 2027. You might have the 0.01% of engineers left over, but that's it; the job is finished.

I think people need to start considering strongly what kind of career they can re-skill to.

https://darioamodei.com/machines-of-loving-grace



What does that even mean?

What exactly is the 0.01% of engineering work that this super intelligent AI couldn't handle?

I'm not worried about this future as a SWE, because if it does happen, the entire world will change.

If AI is doing all software engineering work, that means it will be able to solve hard problems in robotics, for example in manufacturing and self driving cars.

Wouldn't it be able to create a social network more addictive than TikTok, for anyone who might watch? This AI wouldn't even need human cooperation, why couldn't it just generate videos that were addictive?

I assume an AI that can do ultra complex AI work would also be able to do almost all creative work better than a human too.

And of course it could do the work of paper shuffling white collar workers. It would be a better lawyer than the best lawyer, a better accountant than the best accountant.

So, who exactly is going to have a job in that future world?


gee, I wonder why the guy with an enormous vested interest in pushing this narrative would say that?

in general, the people saying this sort of thing are not and have never been engineers, and thus have no clue what the job _actually_ involves. seems to be the case here with this person.


Don't you think software engineers have a vested interest in their jobs being relevant, just with less information?


> Don't you think software engineers have a vested interest in their jobs being relevant

virtually everyone has a vested interest in their jobs being relevant

> just with less information

i'm not sure how someone who has no relevant background / experience could possibly have more information on what it entails than folks _actively holding the job_ (and they're not the ones making outlandish claims)


Good counter-points!

That being said, I suspect Dario has very skilled engineers advising him.


Re-skill to what? Everything is going to be upturned and/or solved by the time I could even do a pivot. There's no point at all now, I can only hold onto Christ.


If you believe that everything will be solved by the time you can pivot, what will we need jobs for anyway? I mean, the bottleneck justifying most scarcity is that we don't have adequate software to ask the robots to do the thing, so if that's a solved problem, which things will remain that still need doing?

I don't personally think that's how it will go. AI will always need its hand held, if not due to a lack of capability then due to a lack of trust. But since you do, why the gloom?


The way I figure it, or what I worry about anyhow, is that most of the well-paying jobs involve an awful lot of typing: developing, writing memos or legal opinions.

And say LLMs get good enough to displace 30% of the people who do those jobs. That's enormous economic devastation for workers, enough that it might dent the supply side as well by inducing a demand collapse.

If it's 90% of all jobs (that can't be done by a robot or computer) gone, then how are all those folks, myself included, going to find money to feed ourselves? Are we going to start sewing up t-shirts in a sweatshop? I think there are a lot of unknowns, and I think the answers to a lot of them are potentially very ugly.

And not, mind you, because AI can necessarily do as good a job. If the perception among the C-suite types is that it can do a good enough job, that may be enough.


I'm a student, so all pivots have a minimum delta of 2 years, which is something like 100x on current capabilities given the seemingly steep S-curve we're on. That drives my "gloom" (in practice I've placed my hope in something eternal rather than a fickle thing like this).


What he meant is that if this really happens, and LLMs replace humans everywhere and everybody becomes unemployed, congratulations, you'll be fine.

Because at that point there are two scenarios:

- LLMs don't need humans anymore and we're either all dead or in a matrix-like farm

- Or companies realize they can't make LLMs buy the stuff their company is selling (with what money??) so they still need people to have disposable income and they enact some kind of Universal Basic Income. You can spend your days painting or volunteering at an animal shelter

Some people are rooting for the first option though, so while it's good that you've found faith, another thing that young people are historically good at is activism.


The scenario that is worrying is having to deal with the jagged frontier of intelligence prolonging the hurt, i.e.:

202X: SWE is solved

202X + Y; Y<3: All other fields solved.

In this case, I can't retrain before the second threshold but also can't idle. I just have to suffer. I'm prepared to, but it's hard to escape fleshy despair.


There's actually something you can do, that I don't think will become obsolete anytime soon.

Work on your soft skills. Join a theater club, debate club, volunteer to speak at events, ...

Not that it's easy, and certainly more difficult for some people than for others, but the truth is that soft skills already dominate engineering, and in a world where LLMs replace coders they would become more important. Companies have people at the top, and those people don't like talking to computers. That is not going to change until those people get replaced.


How about retraining for a field that would require robotics to replace?

Seems more anti-fragile.


That's the point.

EVERYTHING is upturned. "All other fields solved" includes robotics. It's a 10x everywhere.


Let's run with that number, 10x.

Say there used to be 100 jobs in some company, all executing on the vision of a small handful of people. And then this shift happens. Now there are only 10 jobs at that company, still executing on the vision of the same handful of people.

90 people are now unemployed, each with a 10x boost to whatever vision they've been neglecting since they've been too busy working at that company. Some fraction of those are going to start companies doing totally new things--things you couldn't get away with doing until you got that 10x boost--things for which there is no training data (yet).

And sure, maybe AI gets better and eats those jobs too, and we have to start chasing even more audacious dreams... but isn't that what technology is for? To handle the boring stuff so we can rethink what we're spending our time on?

Maybe there will have to be a bit of political upheaval, maybe we'll have to do something besides money, idk, but my point is that 10x everywhere opens far more doors than it shuts. I don't think this is that, but if this is that, then it's a very good thing.


Not everyone has "vision".

Most people are just drones, and that's fine, that's just not them.


So far it has seemed necessary to compel many to work in furtherance of the visions of few (otherwise there was not enough labor to make meaningful progress on anyone's vision). Probably at least a few of those you'd classify as drones aren't displaying any vision because the modern work environment has stifled it.

If AI can do the drone work, we may find more vision among us than we've come to expect.


nursing, electrician. maybe the humanoid robots will get to those soon, but we'll see


Seems inevitable once multi-modal reasoning 10x's everything. You don't even need robotics, just attach it to a headset Manna-style. All skilled blue collar work instantly deskilled. You see why I feel like I'm in a bind?


> CEO of an AI company says AI is the future


This isn't exactly a Scam Altman screed; you should read the link.


That's a huge wall of text. Ctrl+f 2027 or "years" doesn't turn up anything related to what you said. Maybe you can quote something more precise.

I mean, 99.99% of engineering disappearing by 2027 is the most unhinged take I've seen for LLMs, so it's actually a good thing for Dario that he hasn't said that.


I think your text search might be broken, or you missed the context.

Dario's vision of AI is "smarter than Nobel prize winners" in 2027.


Sorry, Dario's Claude itself disagrees with you

> The comment about software engineering being “fully automated by 2027” seems to be an oversimplification or misinterpretation of what Dario Amodei actually discusses in the essay. While Amodei envisions a future where powerful AI could drastically accelerate innovation and perform tasks autonomously—potentially outperforming humans in many fields—there are nuances to this idea that the comment does not fully capture.

> The comment’s suggestion that software engineering will be fully automated by 2027 and leave only the “0.01% engineers” is an extreme extrapolation. While AI will undoubtedly reshape the field, it is more likely to complement human engineers than entirely replace them in such a short timeframe. Instead of viewing this as an existential threat, the focus should be on adapting to the changing landscape and learning how to leverage AI as a powerful tool for innovation.


I did, and I don't really see where it says what you wrote it does


What happens when these people are wrong? They already got the clicks.

Can they be permanently embarrassed?


Dario isn't some hack that makes fake predictions.


No, but he does have quite the incentive to over-hype the capabilities of LLMs.


And he also has knowledge that isn't available to the public.

Combined with his generally measured approach, I would trust this over the observations of a layman with incentive to believe his career isn't 100% shot, because that sucks, of course you'd think that.

Unfortunately, it appears to be.


> Combined with his generally measured approach

People sang similar praises of Sam Bankman-Fried, and that story ended with billions going up in flames. People can put on very convincing masks, and they can even fool themselves.


Cool, let's see if in 2027 Anthropic still exists.


I didn't care for that article even while agreeing with some points.

"Fix all mental illness". Ok.. yes, this might happen but what exactly does it mean?

"Increased social justice". Look around you my guy! We are not a peaceful species nor have we ever been! More likely someone uses this to "fix the mental illness of not understanding I rule" than any kind of "social justice" is achieved.


So powerful ASI will arrive out of the blue sooner-than-I-thought, we've got to reg cap naow!

It will be greater than anyone, but it won't be able to solve THAT problem or any problem created after 2026, I can tell.

CEOs with little faith in their own products. Most likely it's widespread imperfect AI for a long while == unprofitable death for his company.


The code was never the hard part.


I don't know about that. Writing the code for these models seems pretty hard, or we'd have AGI already.


I'm talking about the typical CRUD biz app or startup codebase that the majority of us here slap out.


I fully believe this as well, and I have 15 years of SWE experience at top tech companies. It's over for this field.


[flagged]


We get it, you found Jesus. Now stop injecting that into every one of your comments.


Frankly, I can't agree with any of this. The majority of the current state of AI is far from really usable. We are nowhere near AI that emulates our intellect. Besides, the byproduct of AI research is much bigger than any pesky LLM: understanding how our brain works, and eventually building a human megamind that can persist through the hormonal changes humans go through, which make our lives so unstable and full of change.

Robotics is nowhere near the promise either. We are nowhere near biological entities (not made from metal) with synthetic brains, not to mention biological robotic arms that humans could use as prosthetics while regrowing natural limbs. So much to learn.

As for Jesus: that is not really a deep subject. What we know of Jesus as a human, his real life and his violent, human nature (as a military representative of a cult led by John the Baptist), has nothing to do with how he is portrayed by religion. The history of how Christianity started, including the history of Jesus, was one of the easiest problems I have encountered and wished to know, and I answered it just recently.


It's a shame that AI seems to be causing a lot of despair, even prior to its vision being complete.

I was forced to implement AI systems that toasted many of our employees.


What is this vision you hint at? Everyone seems to have a different opinion as to this "vision of AI". Is it good? Or is this vision one of "despair" as you mentioned and it is coming early?


Toasted? With, like, an oven? Or do you mean with champagne?


Are you one of those people who has a plaque hanging in his home with the "proper" definition of "literally"?


Are you one of those people who, when faced with someone who tells you they don't understand what you're saying, responds with snarky rhetorical questions?


Oh, you were serious?

To be "toast", or done for.


> Not enough ideas or interest to get into LLMs before they frankly left the station completely.

My dad was introduced to boolean algebra and the ideas of early computing in high school in the early 1960s and found it interesting but didn't pursue a career in it because he figured all the interesting problems had already been solved. He ended up having a successful career in something unrelated but he always tells the story as a cautionary tale.

I don't think it's too late for anyone to learn LLM internals, especially if they're young.


It doesn't fully replace us... You will always need someone speaking to it who is able to properly debug, etc.


>always


I think it's about time unpaid labor got on politicians' radar, if they don't want a 25% unemployment rate on their hands. As advocated by Glen Weyl and Eric Posner.


Dario is the CEO of Anthropic. I'm struggling to imagine a more blatant case of motivated reasoning.



