Yes, it's often faster to do it yourself if you're just going to sit around waiting. What I do instead is prompt the AI to create various plans, do other stuff while it works, review and approve the plans, do other stuff while multiple plans are being implemented, and then review and revise the output.
And I have the AI deal with "knowing how to do it" as well. Often it's slower to have it do enough research to know how to do it, but my time is more expensive than Claude's time, and so as long as I'm not sitting around waiting it's a net win.
I do this too, but then you need some method to handle it, because now you have to read and test and verify multiple work streams. It can become overwhelming. In the past week I had the following problems from parallel agents:
Gemini running a benchmark: everything ran smoothly for an hour, but on verification it turned out it had hallucinated the model used for judging, invalidating the whole run.
Another task was supposed to use Opus; I manually specified the model, and it still used the wrong one.
This type of hallucination has happened to me at least 4-5 times in the past fortnight using opus 4.6 and gemini-3.1-pro. GLM-5 does not seem to hallucinate as much.
So if you are not actively monitoring your agent and making the corrections, you need something else that is.
You need a harness, yes, and you need quality gates the agent can't tamper with, ones that kick the work back with a stern message to fix the problems. Otherwise you're wasting your time reviewing incomplete work.
Your point being? A proper harness will mostly catch things like that. Even a low-end model can be employed to write test plans and do consistency checks that mostly weed out stuff like that. Hence: you need a harness, or you'll spend your time worrying about dumb stuff like this.
Glancing at what it's doing is part of your multitasking rounds.
Also, instead of just prompting, it helps to have the AI write a quick summary of exactly what it will do before I hit go: a plan including class names, branch names, file locations, specific tests, etc. The code outline is smaller and quicker to correct than finished code.
That takes more wall clock time per agent, but gets better results, so fewer redo steps.
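A plan like that can be as lightweight as a structured checklist. Here's a hypothetical sketch of the fields I mean; every name and value below is made up for illustration, not from any particular tool:

```python
# Hypothetical pre-flight plan an agent fills in before implementation.
# All field names and example values are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class WorkPlan:
    branch: str
    classes: list[str] = field(default_factory=list)
    files: list[str] = field(default_factory=list)
    tests: list[str] = field(default_factory=list)

    def review_summary(self) -> str:
        """A short outline that's quicker to correct than finished code."""
        return "\n".join([
            f"branch: {self.branch}",
            "classes: " + ", ".join(self.classes),
            "files: " + ", ".join(self.files),
            "tests: " + ", ".join(self.tests),
        ])


plan = WorkPlan(
    branch="feat/retry-backoff",
    classes=["RetryPolicy"],
    files=["src/retry.py", "tests/test_retry.py"],
    tests=["test_backoff_doubles", "test_gives_up_after_max"],
)
print(plan.review_summary())
```

Reviewing four lines of summary per agent is what keeps multiple parallel work streams from becoming overwhelming.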
>And I have the AI deal with "knowing how to do it" as well. Often it's slower to have it do enough research to know how to do it
This is exactly the sort of future I'm afraid of: the people who are ostensibly hired to know how stuff works outsource that understanding to their LLMs. If you don't know how the system works while building it, what are you going to do when it breaks? Continue to throw your LLM at it? At what point do you just outsource your entire brain?
There are many layers to "knowing how stuff works". What does your manager do when your code breaks?
> Continue to throw your LLM at it?
Increasingly, yes. If you have objective acceptance criteria, just putting the LLM in a loop with a quality gate tends to have it converge on a fix itself, the same way a human would. Not always, and not always optimally, but more and more often, and with cheaper and cheaper models.
I also tend to throw in an analysis stage where it will look at what went wrong and use that to add additional criteria for the next run.
If anything, it's the opposite. With a proper harness you stop having to spend so much energy reviewing every little intermediate step, and can focus on the higher level.
I'm actually working on a project now where the biggest problem I need to solve is that the verifier that reviews the test harness is too strict.
I keep being told that a proper harness makes agents better, but no one has shown me exactly what it is that gives them such amazing results.
Yesterday Gemini burned 40 minutes trying to diagnose a failed Expo build, looping between changing the Podfile and re-running the build, when the issue was simply that Xcode needed updating (a quick Google search turned it up).
But my comment on burnout stands. The lack of downtime, combined with constantly switching thinking modes (admin, planning, review, actual coding), seems like it would conspire to make you either cram out more work or disconnect from it. Both become dangerous after a while.
(Information workers were productive 4–6 hours a day, and the economy did just fine.)