
For a Rust developer, neglecting their ability to debug cargo build issues puts their career at risk. For someone like that, letting AI handle it would be a really shortsighted move.

But Simon isn’t a Rust developer - he’s a motivated individual with a side project. He can now speedrun the part he’s not interested in. That doesn’t affect anyone else’s decisions; you can still choose to learn the details. The ability to skip it if you wish is a huge win for everyone.



> He can now speedrun the part he’s not interested in.

The reductio that people tend to be concerned about is: what if someone isn't interested in any aspect of software development, and just wants to earn money by doing it? The belief is that the consequences then become more problematic.


Those people are their own worst enemies.

Some people will always look for ways to "cheat". I don't want to hold back everyone else just because a few people will harm themselves by using this stuff as a replacement for learning and developing themselves.


Do you genuinely believe that this only applies to "a few people"?

This new post gets at the issue: https://news.ycombinator.com/item?id=45868271


I don't understand the argument that post is making.

I agree that it's bad when people use LLMs in a lazy way that has negative consequences - like posting slop on social media.

What's not clear to me is the scale of the problem. Is it 1/100 people who do this, or is it more like 1/4?

The bad behavior of just a few people on social media can be viewed by thousands or even millions more.

Does that mean we should discard the entire technology, or should we focus on teaching people how to use it more positively, or should we regulate its use?


>> He can now speedrun the part he’s not interested in

In this case it's more like slowrunning. Building a Rust project is one command (cargo build), and ChatGPT will tell you that command in five seconds.

Running an agent for that is 1000x less efficient.

At that point it's not optimizing or speeding things up; it's running an agent for the sake of running an agent.


The best thing about having an agent figure this out is that you don't even need to be at your computer while it works. I was cooking dinner.


You’re not properly accounting for the risk of getting blocked on one of these 5-second tasks. Do an expected value calculation and things look very different.

Across a day of doing these little “run one command” tasks, getting blocked by even one could waste an hour. That tilts the expected value of each single task much more in favor of a hands-off approach.
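
To make that concrete, here's a rough back-of-the-envelope sketch in Rust - every number is invented, purely to show the shape of the calculation:

  fn main() {
      // Invented numbers - plug in your own estimates.
      let quick_cost_s = 5.0;      // happy path: the one command just works
      let p_blocked = 0.02;        // assumed chance the "5 second" task stalls you
      let blocked_cost_s = 3600.0; // assumed hour lost untangling it by hand
      let ev_hands_on = quick_cost_s + p_blocked * blocked_cost_s;
      println!("expected hands-on cost: {ev_hands_on:.0} s per task"); // ~77 s
  }

Even a 2% chance of an hour-long stall makes the "5 second" task cost roughly 77 seconds in expectation, so the hands-off agent doesn't have to be fast to come out ahead.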

Secondly, you’re not valuing the ability to take yourself out of the loop - especially when the task to be done by AI isn’t on the critical path, so it doesn’t matter if it takes 5 minutes or 5 milliseconds. Let AI run a few short commands while you go do something else that’ll definitely take longer than the difference - maybe a code review - and you’ve doubled your parallelism.

These examples are situational and up to the individual to choose how they operate, and they don’t affect you or your decisions.



