Could Cruise Be the Theranos of AI? (garymarcus.substack.com)
36 points by rbanffy on Nov 4, 2023 | hide | past | favorite | 68 comments


I took a bunch of Cruise rides in SF when it was free, and I think I know what was going on.

The cars were clearly driving themselves. But what did happen fairly often was that they would get confused and stop.

For example, my Cruise car needed to make a right turn and a van had parked on the corner, sticking out into the perpendicular street. The Cruise car went up to the corner and then stopped. It was apparently not programmed to go further out into the street and also wasn't programmed to be allowed to turn if it couldn't see around the corner.

We sat there for a couple of minutes, then I called Cruise support. They did not notice the problem themselves until I called.

I suggested that they inch the car forward because I had the above hypothesis about why it wouldn't turn, but instead I was told to wait. After a couple of minutes, the car inched back a few feet. Then, after a pause in which I assume they were transferring control back to the car's computer, it went to the corner again and stopped.

I told them again that they should move it forward, not back. But after another couple of minutes they moved it back again. Again it went forward and stopped on the corner. They tried this at least three times before accepting my suggestion. As soon as the car was able to see around the corner, it took off again, clearly under its own power.

So I don't think it's a case of remote drivers operating the cars, it's a case of remote drivers having to intervene when the cars stopped, and even in that case they try to make the cars do it on their own.

This is not Theranos behavior. This is debugging.


It's interesting; from the videos I've seen, it seems like Waymo doesn't even entertain the idea of giving remote support staff that level of micromanaged control over a car.

It makes sense too, as you really don't want to depend on the cell network for realtime control from dispatch. Better to just send a car out.

Heck, it would even seem sensible to let customer support staff update the car's model of the world while still letting it drive autonomously.

Allowing direct remote control is wild.


We don’t know that it’s direct remote control. It might be that they sent a command to it to back up or move forward and it figures out how to do that.


> This is debugging.

On public roads. This shouldn't be allowed. If they want to do R&D, they need to build their own roads instead of endangering people going about their lives. None of these "automated" cars should be allowed on the roads, including Tesla cars with "self-driving" features enabled.


With requirements like that, there's no way such a product would ever get to the point of being usable.


Then it's not a viable product or business.


We don't have those requirements on any other product, so it seems absurdly burdensome to apply them only to this particular product category.


Yes we do. You don't get to install experimental electrical outlets in people's homes so you can iterate on a design known to be deficient. They are certified first, then sold. AVs should be no different.


> We don't have those requirements on any other product

Like what?

I mean, there are literally requirements even for these very products. It's just that they were given special permits to operate. California revoked those permits for Cruise. In my opinion they shouldn't have been granted in the first place for such nascent and unproven technology. But the government and taxpayers, the latter unwittingly, have been swindled into dumping money into these companies.


We do, actually. In the car industry they have to test airbag systems on closed roads and in special warehouses that the companies built themselves.


I don't see how this in any way contradicts the article. It seems to be confirming it.

The article just said that the cars needed a "remote human intervention" every few miles. Well, what you described was exactly that, an intervention.

The point is that if they need this kind of intervention every few miles, on average, then they're still not really at the level of driving themselves around town.


The implication of the article, and also some subsequent discussion here, was that there were remote drivers driving the cars around, and that the supposed self-driving was not really happening.

This is clearly not the case. When I was in Cruise cars they did all kinds of things that looked really difficult, like figuring out that a truck was stopped and double-parked and going around it, dodging cyclists that got in front of it, navigating around cars that were double parked on both sides of the street like a maze, etc.

Then, they would get stumped by unusual configurations like the one I recounted.

So whether they are 'at the level of driving themselves around town' or not depends on your definition. They drove around quite deftly although a bit erratically, and using sometimes very circuitous routes, but they definitely seemed to get all around town with only occasional glitches, none of which seemed at all unsafe (when I was in the car at least).


Tesla behavior without a driver but with remote assistance? Elon should’ve just thrown a call center at FSD if that’s all it takes (half joking).


I would have just got out and walked at that point


I was quite far away from where I was going and the ride had just started a couple of blocks earlier.


Or pushed.


"Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle."

Well remember that staff only work about 40 hours/week while Cruise is in operation day and night (168 hours/week). With sick days and vacations it probably takes about 5 people to provide one person always on duty. And a lot of these workers are doing other things than monitoring individual Cruise vehicles.

In summary a Cruise worker is monitoring several vehicles.
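The arithmetic behind that conclusion can be sketched quickly. All numbers here are the comment's own rough estimates, not Cruise figures:

```python
# Back-of-envelope staffing math from the comment above.
# Assumptions: 24/7 fleet operation, 40-hour work weeks,
# and the reported 1.5-workers-per-vehicle headcount ratio.

hours_per_week = 24 * 7   # fleet operates around the clock: 168
shift_hours = 40          # one worker's weekly hours

# Workers needed to keep one seat staffed at all times
# (before accounting for sick days and vacations -> "about 5"):
workers_per_seat = hours_per_week / shift_hours  # 4.2

# If headcount is 1.5 workers per vehicle, then at any given
# moment one on-duty worker covers roughly this many vehicles:
vehicles_per_on_duty_worker = workers_per_seat / 1.5  # 2.8

print(workers_per_seat, vehicles_per_on_duty_worker)
```

So even a 1.5-to-1 headcount ratio implies each on-duty worker is watching several cars at once, consistent with the comment's summary.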


> worker is monitoring several vehicles.

It does not otherwise seem viable to replace a cheap human driver with an expensive engineer one to one. It would have to be at least one to ten, probably even more.


Why does it need to be an expensive engineer? You can use a cheap remote driver person to manually control the car to get it past whatever obstacle it's encountered, then they give control back to the car.


You're assuming these are engineers. Even if that might actually be the case now, because they want to learn about the edge cases where the cars are failing, I doubt they'd need highly skilled staff long term.


Maybe the 1.5 people is the number taking into account hours?

That might work, because for, say, 10 cars that is 15 regular people, or about 4 "24-hour people".


The human-assisted aspect of AI development in general seems to be getting a lot more (well-deserved) attention recently. Google along with everyone else has depended on cheap human labor for data labeling for many years. The fact that Cruise is also relying on human labor to deal with the 2-3% of edge cases the AI can't handle is not surprising. I agree there may be a concern about how it's being represented, and the consequent implications for public perception.

It's also true that the cars are operating themselves a significant portion of the time. In my book there is a lot of "self-driving" happening here, just not 100%. But that seems like an unsurprising stage in this process. I have my doubts about whether we will ever get to 100% autonomy, i.e. no human intervention ever, but whether that is a viable goal will be up to the company implementing it to decide... is it more cost-effective to provide their service with mostly-autonomous cars that require occasional human intervention? Or is it cheaper just to rely on human drivers in the first place?


Every 2.5-5 miles in SF = about once a ride. The city is only 7x7 after all. I've taken 4 Cruise rides, all within that range, and had a message pop up saying a human was intervening during one of them when the car had gotten stuck in front of some street nonsense in the Tenderloin. I'm not sure I would classify this as a "major scoop" unless there was evidence that humans were also intervening during situations that weren't apparent to the rider.


Cruise is here in Houston. It's only available for booking between 9pm and 6am UTC-6. When they are operating in autonomous mode, they make very very weird routing decisions. They'll stop while traffic is flowing, drive on the wrong side of the road and more. During the day, they are mostly driven by human safety drivers.

My wife refuses to use the service, and after that horrible accident in SF, I hesitate to use it myself (despite being able to use it)


i always assumed they had an ops center with live drivers ready to step in. seems the only sensible way to get started.


"Nissan's Path to Self-Driving Cars? Humans in Call Centers. Remote operators could be the simple, scaleable answer to what Nissan says is an unsolvable problem: making robot drivers do everything humans can." https://www.wired.com/2017/01/nissans-self-driving-teleopera...


Call centers barely handle simple requests. Why do I want a foreign driver handling my life and a $70k chunk of metal that I insure?


Especially when what's being handled is not a stopped vehicle, but one at speeds of 30 mph or more, in an area with bad 4G/5G coverage and huge latency.

Such an intervention can (and will) simply kill passengers.

Insurance helps a bit, but you do not recover from death.


i suspect that these sorts of applications are driving the design of low latency, high bandwidth, line of sight 5g mmwave sidewalk cell towers.

that and their potential ability to do radio sensing to provide high resolution realtime maps of their environment.


As an engineer who worked for more than 10 years in the cellular telco area, it always feels a bit like magic to me when network data transfer works OK.

It's not rational, of course: it's just that knowing how much regularly goes wrong at the low levels of the network makes me feel that way.


it's always a shitshow underneath the abstraction, it seems...


Not a Cruise fanboy, but the mention of Theranos strikes me as overcooked. Theranos did not have anything at all; Cruise has cars that are working. Being propped up by humans when human life is at risk strikes me not as fraud but as caution. But are they leaning into "we make our own rules"? Seems like it.


I don’t think it’s that overcooked a comparison.

Theranos could test your blood, as long as they took it to another lab to be tested. Cruise can drive you autonomously, as long as there's someone to drive the autonomous car every few miles.


Theranos level would be if there were no self driving software in the car.


It kind of sounds like there isn't.


Perhaps the right analogy is with Amazon's Mechanical Turk (an API that has humans do work behind the scenes and return a result, as distinct from a traditional API that typically has the computer do the work).


My understanding is that the first Tesla was essentially a kit car with some polish? Not an expert. But it seems like all of these companies are playing very loose with people's lives. That might be okay when the lives at stake belong to people who are aware of the risks and agree to take them (say, astronauts/engineers at SpaceX). Less so when you falsely advertise full self-driving capabilities you never had.

Sorry, this got away from me. If Tesla is somehow the "most responsible" of the lot, then surely everyone else is as crooked as Holmes (and really, so is Musk).


The first Tesla was the Tesla Roadster. It was not a kit car, but the body/chassis was made by Lotus and the drivetrain by Tesla.

This isn’t uncommon at all. Toyota has two models currently that have drivetrains made by BMW and Subaru. Subaru sells the car under their own brand as well.

For limited run cars it is very common for the coach work to be done by a completely different company than the mechanicals


Toyota doesn't have models with drivetrains made by BMW and Subaru; they have complete cars OEM-manufactured by BMW and Subaru (GR86, Supra, bZ4X, etc.). Installing just the drivetrain is not common at all.


Fair enough. The Supra is at least an entirely different car than the Z4.

For a better example look at Ram’s trucks that you can order with a Cummins engine and Allison transmission.

Or look at commercial trucks where it is still very much the norm to have the chassis and driveline be completely different companies.

Or look at Pagani who use engines from AMG and Xtrac transmissions for something closer to Tesla’s situation.

Having a drivetrain sourced from outside the frame/body manufacturer isn't the norm, but it is common enough that calling the Elise-based Tesla a "kit car", or treating the arrangement as somehow unheard of, is inaccurate.


The first Tesla was a Lotus with an electric motor, drivetrain, etc. instead of the normal setup. They got partially assembled cars from Lotus.

(Not a kit car)


Ah I see. Disregard please.


Ah! Gary Marcus using catchy titles to stay relevant. I am not surprised.



It's hard to claim this is qualitatively worse than companies that burn money the normal way. Cruise had an "autonomous" fleet that required P human interventions per mile. At high levels of P, the cars aren't meaningfully autonomous and the company clearly isn't profitable. Management's goal is to drive down P - make the cars more autonomous - over time. At some point P is low enough, the company is profitable, and they win. If they can't get it low enough, fast enough, they lose. It's the same plan as many other companies who, when they began, lost money on every transaction, but could reasonably hope to become profitable as they became more efficient.
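That break-even logic can be sketched as a toy model. Every number below is hypothetical (made up for illustration); only the structure of the argument is taken from the comment:

```python
def cost_per_mile(p, base_cost=0.50, cost_per_intervention=5.00):
    """Toy model: autonomous operating cost per mile plus the
    expected cost of remote-human interventions.

    p: interventions per mile. All dollar figures are hypothetical
    placeholders, not real Cruise economics.
    """
    return base_cost + p * cost_per_intervention

# Hypothetical all-in cost per mile with a human driver instead:
human_driver_cost = 1.50

# Management's goal is to push P below the break-even point.
for p in (0.4, 0.2, 0.1):  # interventions per mile, high to low
    c = cost_per_mile(p)
    verdict = "beats human driver" if c < human_driver_cost else "not yet"
    print(f"P={p}: ${c:.2f}/mile -> {verdict}")
```

With these placeholder numbers, P = 0.4 yields $2.50/mile (losing money relative to a human driver) while P = 0.1 yields $1.00/mile (winning), which is exactly the "drive P down until profitable" plan described above.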


I have an idea, though it may not be a popular one. Why not outsource the remote driving part to drivers in, say, India? A call-center-like setup in India, with drivers trained on the roads of San Francisco. They pick up errant Cruise cars that are stuck, confused, or have hit a bug. To the drivers it is just a workstation, and they don't need to know which car is which. And trust me, Indian drivers can get out of any driving scenario.

So an AI self-driving car is assisted by human drivers anywhere in the world, and the cars learn more and get better over the years.

A few months back, I was walking in the neighborhood of the Townsend Building and Oracle Park in San Francisco. I saw a line of Cruise cars returning home. I stood on the side of the narrow road and thought that was the perfect spot to take a video of self-driving cars passing through on their way home. Then I realized every one of them had someone in the driver's seat. That didn't feel dramatic enough, and I just ignored it. I only shot and edited where they were already parked[1]. Wouldn't it be cool: the robotic vehicles, after a day of learning about the humans they plan to overthrow, now returning to their overlords?

1. https://www.instagram.com/p/Cv37yyiPzjx/


Could investors have been duped once again by the tech hypecycle? No way!


I thought that was Ghost


Tesla has the best approach to collect the petabytes of data required to eventually solve driverless transport (assuming it can be solved someday).

Manually intervening with your own employees does not scale.


Yeah regulations and pedestrians be damned. Progress uber alles!


I’m referencing collecting all the data from the human drivers that don’t use FSD.


Waymo was founded nearly 15 years ago. Cruise was founded 10 years ago. I think at this point we have to admit that driverless is not going to happen for several decades, if ever.


Human drivers kill 40,000 people a year in the US. It's a catastrophe, but we are all just used to it. Because of that, I'm willing to give self-driving cars some slack. People put cones on them. What would happen if you tried to put a cone on a drunk driver's car?


46% of those fatalities are from multi-vehicle crashes (versus single-vehicle crashes).


Self-driving cars can communicate more effectively. It's so hard to merge because we can't communicate well with the other drivers.


In a world with only self-driving cars on the road, I definitely agree.

I think things will be worse in a mixed human and self-driving environment, though. I say this as someone who lives in a zip code where Waymo has been testing for years. The only reason those cars are even able to integrate at all is because they are so incredibly conspicuous. If their sensors were entirely stealth, accidents would be incredibly common.


That is a good point that it is a challenge to mix robots and humans. That to me has been the most obvious failure so far.


Does that mean half are from single vehicles smashing into stationary objects?


I guess so! I got that number from the PDF linked here: https://www.nhtsa.gov/press-releases/early-estimate-2021-tra...


The point of the article is that there is no such thing as self driving cars. Cruise cars operate with regular, routine human intervention — albeit remote. You don’t need to cut them any slack.


To be fair, there is such a thing as self-driving cars - just not from Cruise. Waymo seems to be doing much better across the board.


Of course, you need to walk before you run.


This sounds perfect.

To be clear, every autonomous car company has human intervention.

It's a very good idea.

Yet we call it fraud. Perhaps it is when you claim not to use humans ...


In fifty years' time, we will still be having this conversation.

Driverless private vehicles on public roads will remain a pipe dream. The solutions to mass driverless transit already exist: we have trains, buses, planes, ferries, etc.

Not only is the technology borderline impossible in the general case, the arguments around safety are at best hopeful. At worst, they're criminally dishonest.


/Looks around/

Train infrastructure is horrible in the US and not a realistic competitor for the next many decades. Buses are not driverless, and the labor shortage will be rough on that industry. Planes don't scale to everyone in need.


I would like to see a thorough defense by CA DMV for allowing these services to operate, given that the technology is very clearly not remotely close to being at an acceptable level. I would expect that this might be another case of “public servants wishing to look innovative, and in the process skipping due diligence and protocol.”


My suspicion is that universally self driving cars are 'AGI complete' in the sense that you'd need something that was as competent as a human in all circumstances. Not least as driving is as much a social interaction problem as a terrain following problem.


"Clean roads easily drivable by an AI" is another midpoint on the continuum, somewhere between regular roads and tramways.



