That’s what they usually do. The assumption here is that, due to the blackout or some other related issue, the human drivers were unavailable.
However, even if that’s not true, if they have more cars than human drivers there’s gonna be a problem until they work through the queue. And the bigger that ratio, the longer it will take.
I guess that in a blackout they should just have the cars park somewhere safely. Maybe it'd be best to never have more cars on the road than assisting/available human drivers. As soon as no human drivers are available to take over, for outage/staffing/whatever reason, all cars should just pull over and stop.
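To make that rule concrete, here's a rough sketch of the logic I have in mind (hypothetical names only, nothing to do with how any operator actually implements it):

    # Hypothetical fallback rule: never run more cars than remote operators,
    # and have every car pull over the moment no operator can take over.
    def fleet_action(cars_on_road: int, operators_available: int) -> str:
        if operators_available == 0:
            return "PULL_OVER_ALL"      # nobody can take over: stop somewhere safe
        if cars_on_road >= operators_available:
            return "HOLD_NEW_DISPATCH"  # work through the queue before adding cars
        return "DISPATCH_OK"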
The demographics of who was online before the internet went mainstream matter a lot, here. It wasn't exactly a representative slice of the general population.
Forums were still going strong a decade after the Internet went mainstream. They only started to fade after smartphones took off, and many forums took years to introduce mobile themes. For sports teams, however, forums never faded; there are tens of millions of users on team-specific soccer forums, for example.
That's a good point. I think a lot of forums were less vulnerable for a number of reasons. They typically don't have a large audience (not all, but most), which makes them less of a target. They're also organized around niche interests that don't intersect much with politics and cultural issues, off-topic forums aside. And they're probably more heavily moderated than social media and blog comments.
I think the general point stands when considering large-scale platforms.
Have you considered the possibility that you're the one who's really committed to an outcome, and are desperately trying to discredit anything that contradicts it?
I have! But the lack of a testable procedure tells me that's not a question worth asking. Look, if "qualia" can tell me something practical about the behavior of AI, I am here for it. Lay it on me, man. Let's see some of that "science" being promised.
It can't, because it's a meaningless word. It's not "discrediting" an idea to point out that (by its own admission) it's unfalsifiable.
"Qualia" is not totally meaningless - it means the inner experience of something, and can bring up the real question say of is my inner experience of the colour green the same as your experience of the colour red? Probably not but hard to tell with current tech. I asked Google if it has qualia and got "No, as an AI, Google Search does not have qualia." So Google search seemed to know what it means.
Every time they contact me I tell Meta recruiters that I wouldn't stoop to work for a B-list chucklehead like Zuck, and that has been my policy for over 15 years, so no.
I am annoyed to no end by all the LIDAR wankery - in practice, LIDAR systems don't provide much of an advantage over camera-only systems, and they consistently hit the same limitations on the AI side of things.
Nonetheless, there is no shortage of people who, for some reason, think that LIDAR is somehow a silver bullet that solves literally everything.
So why do you think the only reliable FSD car out there is built around an expensive LIDAR system?
LIDAR may not solve everything, but the point is that it allows for greater safety margins. All the non-safety-critical parts can be done with AI, yes.
- Google is actively working to remove conflict cobalt from the supply chain, and has already reduced it.
That's good, but doesn't change the fact that the supply chain for tech exemplifies "the hub exploiting the periphery".
- No one I know in Silicon Valley "cleaves to Hobbesian myths" to "justify" their grip on anything. Everyone I know shows up to work to provide for their family, grow professionally, or self-actualize.
"Like much of the oligarchic class, the boy-gods of Silicon Valley" is likely referring to the CEO/founder/VC class.
- People who "dream of Singularity and interplanetary civilization" isn't a thing, no one dreams of this fantasy.
That's patently untrue. A bunch of them post here.
And when one of the parties is a group of men with guns who abuse their neighbors in order to produce the thing they're selling to the other party, it becomes exploitation in a quick hurry.
I use any number of professionals’ knowledge or skills or supplies just the same as I use natural gas to heat the home or water to hydrate myself or clean whatever.
Maybe something about the seller (or buyer) being under duress would be a start to defining exploitation.
> trying to prove to everyone that LLMs are not possible or at least that they're some kind of illusion
This is such poor phrasing I can't help but wonder if it was intentional. The argument is over whether LLMs are capable of AGI, not whether "LLMs are possible".
You also 100% do not have to buy into Chomsky's theory of the brain to believe LLMs won't achieve AGI.
Your reasoning seems clear enough to me. CMIIW, you’re saying Marcus says LLMs don’t really understand language and only present an illusion of that understanding, and that the illusion will noticeably break at a certain scale. And to be honest, when context windows get filled up past a certain point, they do become unintelligible and stupid.
In spite of this, I think LLMs display intelligence and for me that is more useful than their understanding of language. I haven’t read anything from Chomsky tbh.
The utility of LLMs comes from their intelligence and the price point at which it is achieved. Ideally, the discussion should focus on that. The deeper discussion of AGI should not worry policy makers or the general public. But unfortunately, business seems intent on diving into the philosophical arguments about how to achieve AGI, because that is the logic they have chosen to convince people to give them more and more capital. And that is what makes Marcus’ and his friends’ critique relevant.
One can’t really critique people like Marcus for being academic and pedantic about LLM capabilities (are they real, are they not, etc.) when the money is relentlessly chasing those unachieved capabilities.
So even though you’re saying we aren’t talking about AGI and this isn’t the topic, everything kind of circles back into AGI and the amount of money being poured into chasing that.
I would appreciate it if you and the GP did not personally insult me when you have a question, though. You may feel that you know Marcus to be into one particular thing, but some of us were familiar with his work long before he pivoted to AI.
I'm sorry, I didn't mean to insult you. To explain the reason: you seem to use some particular wordings that just seem strange to me, such as first saying that Marcus's position is that "LLMs are impossible", which is either false or an incredibly imprecise shortcut for "AGI using LLMs is impossible", and then claiming it was beautiful.
I didn't mean to attack you personally and I'm really sorry if it sounded that way. I appreciate the generally positive atmosphere on HN and I believe it's more important than the actual argument, whatever it may be.
The first is that your phrasing "that LLMs are not possible or at least that they're some kind of illusion" collapses the claim being made to the point where it looks as if you're saying Marcus believes people are just deluded that something called an "LLM" exists in the world. But even allowing for some inference as to what you actually meant, it remains ambiguous whether you are talking about language acquisition (which you are in your 2nd paragraph) or the genuine understanding and reasoning / robust world model induction necessary for AGI, which is the focus of Marcus' recent discussion on LLMs, and why we're even talking about Marcus here in the first place.
You seem more familiar with Marcus' thinking on language acquisition than I, so I can only assume that his thinking on language acquisition and LLMs is somewhat related to his thinking on understanding and reasoning / world model induction and LLMs. But it doesn't appear to me, based on what I've read of Marcus, that his claims about the latter really depend on Chomsky. Which brings me to the 2nd problem with your post, where you make the uncharitable claim that "he appears to me and others to be having a sort of internal crisis that's playing out publicly", as if it were simply impossible to believe that LLMs are not capable of genuine understanding / robust world model induction otherwise.
> but an explicit choice was made not to do so for safety.
You know this how?