One issue I often run into with this stuff is the tightly coupled nature of things in the real world. I’ll fashion an example:
Let’s say you break a job down into 3 tasks: A, B and C. Doing one of those tasks is too much for an LLM to accomplish in one turn (this is something you learn intuitively through experience), but an LLM could break each task into 3 subtasks. So you do that, and start by having the LLM break task A into subtasks A1, A2 and A3. And B into B1, B2 and B3. But when you break down task C, the LLM (which needs to start with a fresh context each time since each “breakdown” uses 60-70% of the context) doesn’t know the details of task A, and thus writes a prompt for C1 that is incompatible with “the world where A1 has been completed”.
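To make that concrete, here's a rough sketch in Python (the `call_model` helper is a hypothetical stand-in, not any particular SDK; the point is only that each breakdown prompt is built from the parent task alone):

```python
# Rough sketch of the failure mode described above. `call_model` is a
# hypothetical stand-in for whatever LLM API you use.

def call_model(prompt: str) -> list[str]:
    # Placeholder so the sketch runs: fakes three subtask prompts.
    # In practice this would be a real model call.
    task = prompt.splitlines()[-1]
    return [f"{task}{i}: ..." for i in (1, 2, 3)]

def decompose(tasks: list[str]) -> dict[str, list[str]]:
    plans: dict[str, list[str]] = {}
    for task in tasks:
        # Fresh context on every call: the prompt contains only `task`,
        # never the subtasks already written for earlier tasks. This is
        # how C1 ends up incompatible with "the world where A1 has been
        # completed".
        prompt = f"Break the following task into 3 subtasks:\n{task}"
        plans[task] = call_model(prompt)
    return plans

print(decompose(["A", "B", "C"]))
```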
This sort of “tunnel vision” is currently an issue with scaling 2025 agents. As useful context lengths get longer it’ll get easier, but figuring out how to pack exactly the right info into a context is tough, especially when the tool you’d reach for to automate it (an LLM) is the same tool that suffers from these context limitations.
None of this means big things aren’t possible, just that the fussiness of these systems increases with the size of the task, and that fussiness leads to more “human review” requirements in the process.
I've been experimenting with this via a custom /plan slash command for Claude Code, available here: https://github.com/atomCAD/agents
Planning is definitely still something that requires a human in the loop, but I have been able to avoid the problem you are describing. It does require some trickery (not yet represented in the /plan command) when the overall plan exceeds a reasonable context window size (~20k tokens). You basically have to have the AI compare combinatorially many batches of the plan against each other to discover and correct these dependency issues.
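The trickery looks roughly like this (a sketch only, assuming a hypothetical `review_pair` model call; this is not what the /plan command does today):

```python
# Rough sketch of the batch-comparison trick. Plan sections are reviewed
# pairwise, so cross-section dependency conflicts can surface even when the
# whole plan won't fit in one context window.

from itertools import combinations

def review_pair(section_a: str, section_b: str) -> list[str]:
    # Hypothetical model call: "do these two plan sections assume
    # incompatible states of the codebase?" Returns conflicts found
    # (empty placeholder so the sketch runs).
    return []

def find_conflicts(sections: list[str]) -> list[str]:
    conflicts: list[str] = []
    # O(n^2) pairings (the "combinatorially many batches" part), but each
    # call only needs two sections' worth of context, not the whole plan.
    for a, b in combinations(sections, 2):
        conflicts.extend(review_pair(a, b))
    return conflicts
```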
>the LLM (which needs to start with a fresh context each time since each “breakdown” uses 60-70% of the context) doesn’t know the details of task A, and thus writes a prompt for C1 that is incompatible with “the world where A1 has been completed”.
Can't that be solved with sub-agents? The main agent oversees and combines the code, and calls a sub-agent for each task.
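Roughly this kind of setup, I mean (a sketch only; `spawn_subagent` is a hypothetical call, not from any particular framework):

```python
# Sketch of the suggested pattern: a main agent farms each task out to a
# sub-agent with its own fresh context, then folds the result back into a
# running summary it passes along to later sub-agents.

def spawn_subagent(task: str, shared_summary: str) -> str:
    # Placeholder for launching a sub-agent; in practice this would be a
    # real agent invocation with its own context window.
    return f"result of {task!r} given: {shared_summary!r}"

def orchestrate(tasks: list[str]) -> list[str]:
    shared_summary = ""  # the main agent's compressed view of work so far
    results: list[str] = []
    for task in tasks:
        result = spawn_subagent(task, shared_summary)
        results.append(result)
        # Whatever the main agent keeps here is all a later sub-agent will
        # know about earlier tasks, so this summary has to carry the details
        # that keep C1 compatible with A1.
        shared_summary += f"\n{task}: {result}"
    return results

print(orchestrate(["A", "B", "C"]))
```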
Reasoning by analogy is great for intuition, but doesn’t guarantee real results hold. Consider “voltage is like water pressure in pipes, so if there’s a cut in my wire’s insulation, the device won’t get enough voltage” — clearly this is not true, even though it relies on an analogy that’s generally useful.
I really like that analogy, thank you for it. Also applies to “it’s overvoltage, so I just need to poke a little hole in it to let the excess bleed out”…
> "If there’s a cut in my wire’s insulation, the device won’t get enough voltage" doesn't follow from: "voltage is like water pressure in pipes"
I absolutely agree! In the same way, "an LLM can solve complex problems if it breaks them into subtasks" doesn't follow from "NASA breaks large projects into smaller parts"
This is a really good analogy, because according to Feynman, the complex interplay between multiple groups working independently while trying to fit into one collaborative hierarchy aimed at a single large goal was one of the things that hid a lot of the problems leading up to the Challenger disaster.
I'm pretty sure the problem with the shuttle was that it had too many (possibly conflicting) goals instead of one large goal.
It's manned, even though most launches probably could be done without a crew. The deadly Challenger launch risked a human crew for something as mundane as launching two satellites into space.
Because it's manned, it has to be able to land at airports, because retrieving astronauts at sea is an unreasonable complication for launching a satellite. Damage to the wings will cause loss of the entire aircraft, something that is unlikely to happen to a capsule.
Because it is a horizontal landing system, the aerodynamics favor putting the shuttle on the same level as the external fuel tank, which exposes the wing to debris from the top of the external fuel tank. If you try building a vertical shuttle in KSP, you will notice that the wings give you too much control authority during launch. Fins are best placed near the bottom of the rocket.
It's reusable, which means wear and tear can quietly accumulate between flights without anyone noticing. This significantly increases the design requirements for the critical components, like the SRB field joint with its poor tang-and-clevis design, which, as it turns out, was definitively not fit for reuse.
The space shuttle’s design was also deeply flawed, to the point that it failed at its core objective of significantly lowering launch costs. Instead the core mission was sacrificed to meet some arbitrary design goals, such as being able to de-orbit heavy objects.
That’s the core issue with decomposition of tasks: you aren’t communicating back up the chain and finding globally optimal solutions unless the task is simple enough to be completely understood.
IBM tried that with CMM (the Capability Maturity Model), and it didn't work. The problem is that NASA knows what they're building: rockets and satellites don't have any grey areas, and everything is specified. Other things are less well defined, and the people doing the specifying aren't rocket scientists.