This thread is a great discussion, and I've kept coming back to it over the last couple of days to read more when I have a chance. I'm kind of disappointed that it was artificially ended; I think once a thread passes some threshold of useful comments you just have to let it run.
>But it shouldn't matter if he gave 5 bullets to Chat gpt that expanded it to a full page with a detailed plan.
The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."
I feel the same way. If all you're doing is feeding stuff into AI without doing any actual work yourself, just include the prompt and workflow you used to get the AI to spit out this content. It might be useful for others learning how to use these LLMs, and it shows your train of thought.
I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; it seemed he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward meeting I've ever attended. We were expected to weigh in on the options presented, but no one had opinions on any of it, not even the author. It was just too much of an AI document to even process.
If ChatGPT can make a good plan for you from 5 bullet points, why was there a ticket for making a plan in the first place? If it makes a bad plan, then the coworker submitted a bad plan, and there are already avenues for when coworkers do bad work.
How do you know the coworker didn't bully the LLM for 20 minutes to get the desired output? It isn't often trivial to one-shot a task unless it's very basic and you don't care about details.
Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.
Honestly if you have a working relationship/communication norms where that's expected, I agree just send the 5 bullets.
In most of my work contexts, people want more formal documents with clean headings and titles, and detailed risks, even if they're the same risks we've put on every project.
On this topic I think it's pretty off base to call HN a "well-insulated bubble". AI skepticism and outright hate are pretty common here, and AI-negative comments often get a lot of support. This thread itself offers plenty of examples.
> there are about 230 billion* links that need visiting
> * Thanks to arkiver on the Archive Team IRC for correcting this number.
Also, when the Warrior project was running you could see it iterating through the range. I don't have any logs handy since the project is finished, but they looked a bit like:
https://goo.gl/gEdpoS: 404 Not Found
https://goo.gl/gEdpoT: 404 Not Found
https://goo.gl/gEdpoU: 302 Found -> https://...
https://goo.gl/gEdpoV: 404 Not Found
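For a sense of what that iteration looks like, here's a minimal sketch in Python (using the third-party requests package) that enumerates a slice of the keyspace and checks each code for a redirect. It assumes case-sensitive base62 codes ([0-9A-Za-z]), as the excerpt above suggests; it is not Archive Team's actual Warrior code, and with goo.gl shutting down the requests may no longer resolve.

    import string
    import requests  # pip install requests

    # Base62 alphabet; any ordering works as long as it's used consistently.
    ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

    def to_base62(n: int) -> str:
        """Encode a non-negative integer as a base62 shortcode."""
        if n == 0:
            return ALPHABET[0]
        out = []
        while n:
            n, r = divmod(n, 62)
            out.append(ALPHABET[r])
        return "".join(reversed(out))

    def check(code: str) -> None:
        url = f"https://goo.gl/{code}"
        # allow_redirects=False so the 302 and its Location header stay visible
        resp = requests.head(url, allow_redirects=False, timeout=10)
        if resp.status_code in (301, 302):
            print(f"{url}: {resp.status_code} -> {resp.headers.get('Location')}")
        else:
            print(f"{url}: {resp.status_code} {resp.reason}")

    # Walk a few consecutive codes, the way the log excerpt above increments
    # gEdpoS -> gEdpoT -> gEdpoU. The real job covered ~230 billion of these.
    start = 2_000_000_000
    for n in range(start, start + 4):
        check(to_base62(n))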
They are useful for putting URLs in print materials like books, and for sharing very long links on IRC and other text-based chat apps (many Google Maps links would span multiple IRC lines if not shortened, for example). They are also good for making more easily scannable QR codes.
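On the QR point, here's a quick sketch using the third-party qrcode package showing how a shortened URL fits in a smaller (lower-version) symbol than the long URL it replaces. The long URL here is a made-up stand-in, not a real Maps link.

    import qrcode  # pip install qrcode

    def smallest_version(data: str) -> int:
        """Return the smallest QR version that fits the data at medium error correction."""
        qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
        qr.add_data(data)
        qr.make(fit=True)  # pick the smallest version that fits
        return qr.version

    short_url = "https://goo.gl/gEdpoU"
    long_url = ("https://www.google.com/maps/place/Some+Venue/"
                "@37.422,-122.084,17z/" + "x" * 120)

    # Fewer bytes -> lower version -> coarser module grid -> easier to scan,
    # especially from print.
    print(smallest_version(short_url))  # version 2 (25x25 modules) for this 21-byte URL
    print(smallest_version(long_url))   # around version 10 (57x57), much denser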
The major difference is that in the type of reading Joel Spolsky is talking about, you come in not knowing the code's intent. It was written by one or more other people at some point in the past, likely with many iterative changes over time. Figuring out the intent in that case is 90%+ of the work. With LLM-generated code, you know the intent: you just told the assistant exactly what it was. It's much, much easier to read code whose intent you already know.
It doesn't really matter what this or that person said six months ago or what they're saying today. This morning I used Cursor to write something in under an hour that would previously have taken me a couple of days. That is what matters to me. I gain nothing from posting about my experience here. I've got nothing to sell and nothing to prove.
You write like this is some grand debate you are engaging in and trying to win. But to people on what you see as the other side, there is no debate. The debate is over.
The thing about claims like "An LLM did something for me in an hour that would have taken me days" is that people conveniently leave out their own skill level.
I've definitely seen humans do stuff in an hour that takes others days. In fact, I see it all the time. And I know people who have the skills to do things very quickly but choose not to, because they'd rather procrastinate than get pressured into picking up even more work.
And some people waste even more time writing stuff from scratch when libraries exist for whatever they’re trying to do, which could get them up and running quickly.
So really I don’t think these bold claims of LLMs being so much faster than humans hit as hard as some people think they do.
And here’s the thing: unless you’re using the time you save to fill yourself up with even more work, you’re not really making productivity gains, you’re just using an LLM to acquire more free time on the company dime.
Again, implicit in this comment is the belief that I am out to or need to convince you of something. You would be the only person who would benefit from that. I don’t gain anything from it. All I get out of this is having insulting comments about my “skill level” posted by someone who knows nothing about me.
You don't know the harm you're inflicting. Some manager will read your comment and conclude that anyone who isn't reducing tasks that previously took hours or days to a brief one-hour LLM session is underperforming.
In reality, there is a limit to how quickly tasks can be done. Around here, PRs usually contain changes that most people could type out in under 30 minutes if they knew exactly what to type. However, getting to the point where you know exactly what to type takes days or even weeks: collaborating across many teams, thinking deeply about potential long-term impacts down the road, balancing company ROI and roadmap objectives, and perhaps even running experiments.
You cannot just throw LLMs at those problems and have them wrapped up in an hour. If that's what you're doing, you're not working on big problems; you're doing basic refactors and small features that don't require high-level skills, where the bottleneck is mostly how fast you can type.