He seems stuck in the GOFAI development philosophy: decide humans have something called a "world model" because you said so, then decide that if you build some random thing and call it a "world model" it'll create intelligence, because it has the same name as the thing you made up.
And of course it doesn't work. Humans don't have world models. There's no such thing as a world model!
I do agree humans don't have a world model, but it's more than that: we exist in the world, so we don't need a model of it.
It's like saying a fish has a water model. It makes no sense when the fish's existence is intertwined with water.
That's not to say a computer with a model of the world wouldn't most likely be extremely useful compared to something like an LLM, which has none. A world model would be the best we could do to build a machine that simulates being in the world.
I don't think the focus is really on world models as such, but on animal intelligence built around predicting the real world - and to predict the world you need to model it in some sense.
IMO the issue is that animals can't have a dedicated "world model" system: if you build a model ahead of time, you mostly waste energy, because most of the model never gets used.
And animals' main concern is energy conservation, so they must be doing something else.
There are many factors playing into "survival of the fittest", and energy conservation is only one. Animals build mental models to predict the world because this superpower of seeing into the future is critical to survival - predict where the water is in a drought, where the food is, and how to catch it, etc, etc.
The animal learns as it encounters learning signals - prediction failures - which is the only way to do it. Of course you need to learn/remember something before you can use it in the future, so in that sense it's "ahead of time", but the reason it's done that way is that evolution has found that learned patterns ultimately prove beneficial.
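For what it's worth, here's a minimal sketch of that idea in Python - just a plain delta rule, where the update is gated entirely by prediction error, with no model built ahead of time. All names and numbers are made up for illustration:

    # Prediction-error-driven learning: a simple delta rule.
    def delta_rule_update(weights, features, outcome, lr=0.1):
        prediction = sum(w * x for w, x in zip(weights, features))
        error = outcome - prediction  # the learning signal: prediction failure
        return [w + lr * error * x for w, x in zip(weights, features)]

    # Nothing is learned while predictions succeed (error ~ 0);
    # weights only move when the world surprises the learner.
    weights = [0.0, 0.0]
    for features, outcome in [([1, 0], 1.0), ([1, 0], 1.0), ([0, 1], 0.5)]:
        weights = delta_rule_update(weights, features, outcome)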
Right - I've no idea how LeCun thinks about it, but I don't see that an animal needs, or would have, any more of a "world model" than something like an LLM. I'm sure all the research into rats in mazes etc. has something to say about their representations of location, but given a goal of prediction it seems that all that's needed is a combination of pattern recognition and sequence prediction - not an actual explicit "declarative" model.
Things like place cells and grandmother cells seem to be part of the pattern recognition component, but recognizing landmarks and other prediction-relevant information doesn't mean we have a complete, coherent model of the environments we experience - more likely a fragmented one of task-relevant memories. Our subjective experience of driving is informative here: we don't have a mental road map so much as familiarity with specific routes and landmarks. We know to turn right at the gas station, etc.
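To make "fragmented, task-relevant memories" concrete, here's a toy Python sketch - route knowledge as landmark-to-action associations rather than a global map. Every name here is invented for illustration, not taken from any real system:

    # Fragmented route memory: recognized landmarks -> next action.
    route_memory = {
        ("home", "gas station"): "turn right",
        ("gas station", "red barn"): "continue straight",
        ("red barn", "school"): "turn left",
    }

    def next_action(prev_landmark, current_landmark):
        # Sequence prediction over recognized landmarks; no global map,
        # just pattern recognition plus a learned next-step association.
        return route_memory.get((prev_landmark, current_landmark),
                                "no memory - explore")

    print(next_action("home", "gas station"))  # "turn right"
    print(next_action("school", "lake"))       # "no memory - explore"

The point of the lookup is that the "model" is only ever as complete as the task demanded - off a familiar route it has nothing to say, which matches the fragmented experience described above.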