Hacker News | andrewstuart2's comments

I was talking to some friends about this over drinks the other day. I feel it has the same effect as any drug (or behavior) that triggers dopamine. If I can get a dopamine hit from lower-effort AI output in 10 minutes, versus a maybe slightly better hit from doing it myself over a day, why would my brain go for anything but AI? Especially when my DIY muscles are a bit atrophied.

And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.


I'm curious whenever I hear takes with your perspective.

Entering the workforce happens at an age where people have built (some more rudimentary than others) a level of understanding and self control regarding delayed gratification and Type II fun.

Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?


I have ADHD and have done quite a bit of reading and study on it, so I'm pretty familiar with how dopamine and dopamine disorders work. I've also been in the workforce as a software engineer long enough to have done some really hard things. So my life and career have both been plenty challenging lol.

And I'm not alone here. Like I said, I was discussing this with a bunch of friends who are also quite senior and accomplished in their engineering careers, and the sentiment was familiar for us all.


> Did you have the kind of life where you were never really challenged to build that skillset,

Do you really think phrasing a question like this will ever induce a productive response?


I guess I could have phrased it better, but at some point I'm asking whether it's weak self-control or a drug that's just that strong. The life-experience angle was meant to lay down a face-saving reason that it's OK to say your willpower is weak: you just weren't forced to cultivate it. Plenty of things in life can lead to that.

I think it's pretty normal to reflect on the differences between my own life skills and the ones I see in others. There are things I've struggled with throughout adulthood because, through some happenstance, I was able to avoid that class of challenge as a child.

I didn't learn how to study until my 20s. I had no willpower over eating and exercise until my body changed around 30 and I suddenly got fat; then I talked with friends who teased me for being less skilled at something than a teenage version of themselves.

What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?


I understand what you're asking and why, but the phrasing reads very dismissively and that's what I was asking about. Generally a friendly tone will get you a lot further.

I'm on board with your gut that this feels more YOLO than careful, but to be fair, in the engineering world fly-by-wire is very much precedented. I'm specifically thinking of the B-2 bomber, which is essentially unflyable without a computer between the inputs and the outputs. Partly that's just keeping the plane from turning into a frisbee by reacting faster than a human possibly could, but it's also treating the control inputs as the intent and manipulating the control surfaces programmatically to make that work. It's not quite the same thing, of course, but I think there's some carryover.

Still. Not a huge fan of this announcement or the general ways the landscape is evolving these days.


This is a great analogy for what this moment in IT and tech feels like. We are moving up a layer of abstraction, and for those of us who cut our teeth in processes where we had lower-level understanding, it feels very destabilizing.

I've been telling my team for a few years now that becoming an "agent manager" is the path forward. That's even more true now, with the latest revelation that "managers are out of style" and every role will have some IC component. I've seen it in my own work: I can express intent clearly to Claude and get immediate feedback on any technical task, so my team needs to be able to form intent at a higher level and translate it to their agent team, rather than receiving directive task alignment.

We've been through these pendulum swings before; this should start to stabilize in a few years. That's the industry-vet perspective. I'd be lying if I said it didn't feel different this time, though...

> We are moving up a layer in terms of abstraction

I’ll believe it when I don’t need to understand the code. Until then it’s just autocompletion with (a lot of) extra steps.

Wake me up when I can actually let agents do the IC work without having to be the accountability sink that gets fired when things go wrong.


One thing you and the OP are not addressing is that most of these modern tactics are also necessitated by the fact that building an air force, navy, or cavalry that can beat modern superpowers is just a complete non-starter.

I'm not so sure the F-35 is built for the wrong war as much as the war would probably call for the F-35 if it didn't already exist.


Honestly, I sometimes feel like about the only thing they do successfully is hacking. Not just in the sense of breaking into systems assumed to be secure, although in that sense too. They're highly effective at fumbling around with a hatchet until something works. We just happen to have version control and automated testing, which generally makes that approach somewhat viable for programming. While I've been genuinely impressed at how often they can get a feature into a workable state, I've never looked at the output and been confident it's more than POC quality in the current state of things. But they're pretty dang effective at that, given enough time and a safe space to hack away and reset until the product looks close enough.


"Genius is but the capacity to take infinite pains."


You know, that's also true. I am where I am because I'm stubborn AF and just keep hacking on things until they work. Maybe one of the biggest differences is just ego, lol.


I mean, it's a post a week. I think that's pretty plausible. The worst part of this era is just not knowing if I'm reading generated output or genuine human thought.


The great irony is that now that Splunk audit trail will probably end up being consumed by LLMs on the lookout for threat actors who are probably also using LLMs to attempt intrusions.

It's a great time to be selling GPUs.


Yeah, gives me very similar vibes to the famous "pale blue dot."


I'm getting flashbacks to the 2018 hit:

    This is extremely dangerous to our democracy
We evolved to share information through text and media, and with the advent of printing and now the internet, we often derive our feelings of consensus and sureness from a preponderance of information that used to take real effort to produce. We're now at a point where a disproportionately small input can produce a massively proliferated, coherent-enough output that gives the appearance of consensus, and I'm not sure how we're going to deal with that.


This could have been written almost verbatim after the printing press came out and printed pamphlets became ubiquitous.


Finally, good efficient code is going to get its moment to shine! Which will totally happen because it's not like 80% of the industry is vibe coding everything, right?


just do vibe performance optimization (I am not even kidding)


Yep I’ve seen multiple instances of this so far.


Yeah, I got the AI to convert some code that ran at 30fps in Javascript to C, and it resulted in a program that generated 1 frame every 20 seconds. Then I told it to optimize it, and now it's running at 1 fps. After going back and forth with the AI for hours, it never got faster than 1 fps. I guess I'm "doing it wrong" as the hypesters like to tell me.


> Yeah, I got the AI to convert some code that ran at 30fps in Javascript to C, and it resulted in a program that generated 1 frame every 20 seconds. Then I told it to optimize it, and now it's running at 1 fps. After going back and forth with the AI for hours, it never got faster than 1 fps. I guess I'm "doing it wrong" as the hypesters like to tell me.

Remove the "I actually only want a slideshow" instruction from your prompt :-)


speedrunning super mario world with neural nets is weirdly effective though. i guess you need a genetic algorithm to refine different approaches rather than a neural net.


Try converting the Javascript into a slide deck and spam the next button.


Honestly speaking, it has started to look like AI coders could actually do a better job than 80% of app developers at writing efficient apps, just by being set to adhere to best-practice programming conventions by default (notwithstanding their general tendency to be too clever instead of writing clear, straightforward code).


They would do well just by letting the AI generate Rust code.


Vibe coding might be a positive here since there's no need to optimize for DX over perf when the clanker is the one reading/writing code.


This is my theory: we're going to see a lot of languages with straightforward, obvious semantics, high guardrails, terrible DX, and great memory-allocation and performance behavior out of the box. Assembler or worse, but with extremely strong typing bolted on in a way no human would ever tolerate. Something in that vibe.


So Pascal and Delphi are coming back? I'm actually cool with that.


I vibe coded a library in Nim the other day (a language I view very much as a spiritual continuation of the Pascal/Modula line), complete with a C ABI.

The language has well-defined syntax and strong types, and I turned the compiler strictness up to the max, treating all warnings as errors, etc. After a few hours I put the agent's work aside, committed it to git, then deleted everything and hand-coded some parts from scratch.

I then compared the results. I found one or two bugs in the AI code, but honestly, the rest of our differences were matters of taste (is a helper function actually justified here or not, that kind of thing).


Yeah, actually, I worked with Pascal early in my career and that's kind of the vibe I'm thinking of, though maybe with a stronger, more Ada-esque type system (composite, partial, and range-and-domain types, all that jazz).
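To make the idea concrete (my sketch, not from the thread; `Percent` is an invented example): an Ada-style range type can be approximated in a mainstream language like Rust with a newtype whose constructor enforces the domain, which is roughly the kind of machine-checkable guardrail being described:

```rust
// Hypothetical sketch: an Ada-style range type approximated with a
// Rust newtype whose constructor rejects out-of-domain values.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Percent(u8); // invariant: 0 <= value <= 100

impl Percent {
    // The only way to construct a Percent, so the invariant always holds.
    pub fn new(v: u8) -> Option<Self> {
        (v <= 100).then_some(Self(v))
    }

    pub fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    // In-range values construct; out-of-range values are rejected
    // at the boundary instead of propagating silently.
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(150).is_none());
}
```

Ada bakes this check into the type declaration itself (`range 0 .. 100`); the newtype pattern is the closest widely-used equivalent, and it's verbose enough that you can see why a machine-first language might make it the default.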


Have a look at Nim

Pascal inspired syntax

Ada inspired type system

Lisp inspired templating and Macros

Compiles to C


You can get more forward-thinking than that: AI will output machine code directly, acting as its own compiler.


My main complaint about the project changes we've seen lately is that these companies are happy to take all the code that previous contributors wrote for free in good faith, and profit off it without giving anything back. The whole reason I and many others have contributed to some of these projects is the premise that we've been given something great and useful for free, so we give back for free. If you want to create a project that's source-available (or whatever you want to call it) from the start, you'll probably even get my support.

Sure, it's totally legal for the company to change how they operate in the future. But it burns all that good faith of previous contributions in favor of profit. And so yeah, I hope the companies that pull this crash and burn in proportion to how much free code they accepted from contributors that they now wish to profit from.

