This seems obviously wrong? Any system whose name includes the word "forecast" was built to predict the future in some domain / over some time horizon / to some level of granularity.
It's an interesting thought, but isn't that still a statistical response to stimuli based on learned experience? Albeit a more advanced and subtle one.
It no more requires reasoning about the future as such than stopping does when someone or something is actually in the way (and the car would hit it in the future if it didn't stop).
I’m not sure about that, I mean this is something that client-side prediction in games is doing all the time, so why wouldn’t a self-driving car do it?
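For what it's worth, client-side prediction in games is often just dead reckoning: extrapolate the entity's last known state forward in time until the next authoritative update arrives. A minimal sketch of that idea (all names here are illustrative, not from any particular engine):

```python
# Dead-reckoning sketch: predict an entity's position from its last
# known state, as client-side prediction in games commonly does.
from dataclasses import dataclass

@dataclass
class EntityState:
    x: float   # last known position (m)
    vx: float  # last known velocity (m/s)
    t: float   # timestamp of that update (s)

def predict(state: EntityState, now: float) -> float:
    """Linearly extrapolate position forward to time `now`."""
    dt = now - state.t
    return state.x + state.vx * dt

# Last update: at t=1.0 s the entity was at x=10 m, moving at 5 m/s.
last = EntityState(x=10.0, vx=5.0, t=1.0)

# Render a frame at t=1.2 s using the predicted position.
print(predict(last, 1.2))  # → 11.0
```

The point being: this is "predicting the future" only in the thin sense of extrapolating observed dynamics, which is arguably all a self-driving car needs for obstacle avoidance too.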
The even scarier thing is, there are people I know who are well educated etc., and in my conversations with them I hear more and more about how they are relying on ChatGPT for information regarding surgery, illness, and so on. As if ChatGPT came up with the information itself, as opposed to being a superior interface over much the same data as Google Search - at least with Google Search you actively knew the source of the information.
I believe this speaks to something deeper about humans - only those with great discipline will be able to prevent themselves from being sucked in and losing their valuable human capital. It doesn't seem to matter whether one is dumb or smart.