Robots Are Already Killing People (theatlantic.com)
19 points by lantry on Sept 6, 2023 | hide | past | favorite | 50 comments


I think the news here is that robots are NOT killing people. They seem to be MUCH safer than human-involved industrial equipment. Anyone who has worked in a factory probably has a story of a co-worker being sucked into a planer or getting their skull caved in by a forklift. Not to mention the thousands dying of chronic illnesses. The average life expectancy of welders is 50-60 years. Every welder we replace with a robot is saving 20-30 years of human life.

With AI safety features we can only see the gains in human life growing. At least until the robot dogs with needles for teeth start hunting down people who read books or the justice drones start shooting protestors with exploding freedom rockets.


I find it so curious how people are perfectly happy to accept human casualties so long as it’s the status quo / caused by a human.

Every time an autonomous vehicle causes a car accident, there are many people calling to ban them. Yet when a human causes an accident, it’s “just the way it is, people need to get to work, nothing we can do about it”. Doesn’t matter if the rate of incidence is lower, people would rather have more accidents they can empathise with. Human bias is so fascinating.


It's not really that curious:

- People put the probability of getting involved in an accident that is their own fault at 0.

- People put the probability of getting involved in an accident that is a robot's fault at >0.

This is why people are afraid of flying and not driving. The agency is the big problem.

If your objection is "what about others" - humans are naturally selfish and will hold whatever beliefs they are incentivized to hold.


Not only that, we usually know when we're doing something risky and agree to do it. But I don't agree to let YOU do that while I'm the passenger.

For example, I know driving is somewhat dangerous... But I also know that if I have my newborn in the car with me on the way home from the hospital, I am able to drive 1000x more cautiously than my typical drive to work at 6am.


I think part of the problem is that it's not a pure reduction in casualties, but rather a large reduction in casualties of one kind in exchange for a small increase in casualties of another kind. Which means we end up gravitating to cases where someone would have been alive if not for [New Technology].


This doesn't seem hard to understand at all to me.

Someone getting hurt or killed by a machine in a factory likely wasn't following the proper and ideal safety protocols for one reason or another.

Lots of people are careless and don't care enough about safety rules in my experience. So it's not hard to end up with a lower number of total incidents when robots are put in charge.

But for some person who is not careless and who does follow safety protocols religiously, a robot causing them harm could be outside of their own control, even if they are following all the safety protocols and are always very careful.

Can you see how that could be a problem?


No, a well-designed safety protocol will integrate with the robot's operation. Unless the safety protocol is flawed, there should be no incidents with the robots. If the safety protocol is flawed, it could happen with humans as well.


I don't know how you can guarantee that: that a well-designed safety protocol for an AI machine would never make a mistake that a given human wouldn't make.

People are allowed to operate with their own additional steps of safety precautions which may be much more conservative than the steps assigned to an automated machine that they must now interact with.

I'm talking about general personal safety protocols that a person could follow, that could be safer and more conservative than the protocols governing a more automatic machine.

A machine being controlled by AI could have unpredictable behavior, I mean look at other areas of AI. Can we really predict what will always happen?


Robots can operate on a scale much higher than humans. Flawed self driving software can potentially cause havoc in a way that a single bad driver could not.

It’s natural to expect robots to be several times safer than humans if we are to let them into dangerous situations with us.

Also, I think people are rightfully concerned about a future in which self driving cars result in more cars on the road as opposed to fewer. An unbounded congestion potential!


I have no doubt that robots are safer than the average human. So overall incidents would go down.

But are robots safer than all humans? Safer than the humans who religiously follow all safety protocols?

It's hard to fault a person who normally operates very safely for being afraid of losing control of their equipment to a robot which could be unpredictable.

Surely it's valid to worry about a robot making a deadly mistake that you wouldn't have ever made yourself.


You're conflating a great deal by not defining "robots".

The choices are:

1. Ordinary machines under human control

2. Machines that perform fixed, repetitive operations automatically

3. Machines that perform variable, repetitive operations at unexpected times, automatically

4. Autonomous machines that perform a variety of operations without human control

5. Tanks, UAVs, and rifles under human control

6. Same as 5, but under semi-automatic human control (at present, the US military absolutely requires an officer to issue a lethal order)


In my mind, 2 and higher all qualify as robots.

Any factory I have been to calls the machines that do fixed repetitive motions, automatically, robots.


To be honest, I would expect the robot to have more accidents than a human. However, the consequences of a robot being crushed under a steel beam are acceptable. Not so much the death of a human being. There is a reason that robots are surrounded by cages to prevent humans from entering the area around them.


If humans were rational, the human average would keep going up as anyone below average leaves it to robots, but instead it sounds like a lot of people with illusory superiority will keep hurting themselves.

https://en.wikipedia.org/wiki/Illusory_superiority


So even though overall fatal traffic accidents would go down with self-driving cars, for example, you personally wouldn't be upset at all if a loved one died to a bug in the software that caused the self-driving car to make an obvious mistake that the loved one would never in a million years have made themselves?

I can see how people can have concerns at least, when control is taken away.


I think you should expect to die if you spend a lot of time on the road. Does it matter what bug the driver has that runs you down?


Get over yourself. If you don't speed, don't drink and drive, and don't road rage, the odds of dying on the road are not that bad.

Riding a motorcycle is 60x more dangerous than a car per mile.

I have easily done over 100,000 miles on a sportsbike in California traffic.

That's the equivalent of 6 million miles in a car. I have had one close call in 10 years. Never even stubbed a toe on a motorcycle.
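The risk-equivalence arithmetic above can be sketched in a few lines (both inputs are the commenter's own claims, not verified statistics):

```python
# Back-of-envelope risk-equivalence calculation using the figures above.
# Both inputs are the commenter's claims, not verified statistics.
risk_multiplier_per_mile = 60    # claimed motorcycle vs. car fatality risk per mile
motorcycle_miles = 100_000       # claimed lifetime riding mileage

# Exposure expressed as the car mileage carrying the same fatality risk.
car_equivalent_miles = motorcycle_miles * risk_multiplier_per_mile
print(f"{car_equivalent_miles:,}")  # 6,000,000
```

Note this only says the total exposure matches 6 million car miles in expectation; it says nothing about the variance of any single ride.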


> Get over yourself.

I consider it "getting over myself" to stop interpreting my own survivor bias as skill instead of luck.


Are you denying that skill exists in the world? But beyond that, not speeding, not drinking and driving/riding, and not raging on the road are not exactly 'skills'.

But honestly, I am a pretty skilled rider. I just don't have to use most of it on the road... I said myself I only had one close call in 10 years. Only that moment really involved skill.


It's called Dunning-Kruger: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

The smart and the wise seek automation and others to multiply their effort. The lazy and the less molecular wedge-like instruments do things the hard way(tm) themselves. It doesn't take a particle physics PhD to be resourceful, improvisational, and productive.


Wow, I never knew welders died young - I just thought it was a great working class profession, unfairly looked down upon.

Guess I won't be recommending that to anyone: I can't justify 20-30 years of lowered life expectancy for any job.


Fumes from melted metal, it'll get ya.

And welding seems like a great job, you surely hear about someone's cousin making good money that does it. Everyone does, which is why there's a huge glut of talent at the entry level end struggling to find work. It is not the labor printer outsiders think it is. I guarantee you that at any given walmart you can find 3 employees there who own a welder and aren't awful with it.


Welders really should start wearing full P100 masks when they are working. Their issues seem to be primarily around respiratory causes.


nit: P100 is only for particulates. As a DIYer I'd add an activated carbon organic vapor filter. If I were doing a lot of welding I'd research exactly what type of cartridges are appropriate for the particular materials.

From the DIYer perspective you can get a good and comfortable silicone half face respirator setup for like $50. There's really no reason not to have one. I tend to favor 3M 7503 (or 7502/7501 for size), plus 60921 or 2297 (OV or not).


There are plenty of options for fume extractors, but the bosses don't ever buy them.


I sold cars for a living. I could have bought $50 shoes, but soon I learned to buy $200 shoes that lasted longer and were better on my feet. I still bought them pretty often since I walked TONS on asphalt and concrete.

No doubt I invested well over $1000 in shoes and even more on suits while I sold cars.

Are you telling me a welder can't spend his own money to make his life better like I did?


A welder can afford to buy a $50 mask himself, fume hood or not. Not that he should have to, but you gotta protect yourself in a bad situation.


My dad was a pipeline welder for a long time. He smoked as well (which definitely hasn't helped his lungs) but I read a couple of years back about how pipeline companies will reuse pipes, including pipes that carried radioactive waste water from wells. Welding a radioactive contaminated pipe sounds like a bad time.

My dad learned he had lung cancer because the tumor migrated to the base of his skull and started damaging his vision until he had a car accident. Then the scans revealed his lung cancer and the migrated tumor. Welding and metalwork for so many years probably played a part in his cancer. They removed 1/3rd of his right lung and now he has a metal plate in his head from getting that tumor removed.

He was 63 when diagnosed, and is a stubborn old Texas Marine who is still around about 5 years later. Overall, I think he spent more time doing other work than welding, but I remember him always talking about any time he welded it always gave him black boogers.


I used to be a pipe welder and that's exactly the reason I quit. Toxic fumes, grinding dust. We could make it safer but it slightly lowers profit margins.


> At least until the robot dogs with needles for teeth start hunting down people who read books or the justice drones start shooting protestors with exploding freedom rockets.

At our current trajectory, we're more likely to get those rather than welding robots replacing the blue collar workforce.


Then you must've never set foot on a farm, a factory, or a construction site. There are a great many semi-automated and automated machines and situations that can easily kill you if you are unfamiliar with them.


> From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—

Compared to how dangerous non-robot industrial machinery was in the 1820-1920 era, 41 deaths in 25 years is less than a rounding error.


And they try to make it sound worse with "and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss". The robot doesn't malfunction and suddenly take over someone's job, a human factory owner decided to put it there.


The problem is not robots killing people but robots making the decision to kill or not. Freak accidents like in the article happened with purely mechanical machinery as well.


I'd rather a calculating robot make that decision based on available data on the scene and preprogrammed rules of engagement rather than some teenager in a trailer looking at blurry aerial photos or some scared kid who just saw his buddy blow up next to him. Somehow the cold efficiency seems less cruel than adrenaline fueled vengeance.


Until the ML model hallucinates that Dora the Explorer is a terrorist and goes after school children with that cold efficiency, rather than a teenager saying "this is wrong" and refusing to airstrike an elementary school.


I'm having a lot of trouble understanding the scenario you're imagining here. Are you really just completely fabricating nonsense scenarios as some sort of counterargument? Can we just make up anything we want? "Until the AI hallucinates that Christopher Walken is actually the Egyptian God-Emperor Ramses II and starts grinding human beings into cement to create a giant pyramid in his honor. Is that what you want, huh!?" Does the actual probability of that happening factor into your consideration at all?

Compare that to the probability of human beings air-striking elementary schools, which humans have shown many times they're perfectly willing to do during war.


The comment was trying to convey a false positive in the targeting system that would result in many casualties. Given we have already seen AI mis-identify subjects and that we don't understand how an AI can "dream"/other edge cases, it doesn't seem much of a stretch to imagine some xk-class/mass casualty scenario.


> Until the AI hallucinates that Christopher Walken is actually the Egyptian God-Emperor Ramses II and starts grinding human beings into cement to create a giant pyramid in his honor. Is that what you want, huh!?

I’ve had a few beers, but this is the funniest thing I’ve read in a long while.


Teenagers in “the land of the free” shoot up schools every week. So if your argument hinges on the idea that a teenager wouldn’t kill innocent people, I have very bad news for you.


For sure mistakes and accidents will happen. I sure hope we won't program them on purpose to go on school shooting sprees though, or massacre unarmed villagers, or pee on prisoners and torture them for fun... all I'm saying is we don't need to hypothesize about possible AI evils when humans are already there.

Why is it given that an AI or algorithm has to be less ethical than humans? I don't think it has to be, and hell, ChatGPT (with or without guidance/censorship) already shows a greater capacity for moral reasoning than some (many?) humans. We didn't evolve for some sort of absolute moral superiority; we're just a smart, violent ape with in and out groups, quite prone to otherization and dehumanization on a whim. And we elect, follow, and worship sociopaths that strongly exhibit those traits... over and over.

Every parent hopes their children will be better people. With AI we might actually be able to make that happen, unbounded by human genetics, though of course other challenges will arise.

Shrug. Maybe Skynet is around the corner. But maybe not. There's uncertainty and thus hope there, at least, whereas I guarantee you the human species, unchanged, will keep producing genocidal dictators every decade or three.


Seems like no one writing this article noticed the drone war going on in Ukraine right now.

I'm a lot less worried about industrial accidents, and a lot more worried about the path of military accidents. Drones /will/ outpilot human pilots, and AIs /will/ out-strategize humans. That's inevitable at this point. Bugs will creep in too, as will cyberattacks.

In a war, no one will have time to dive into AI ethics, let alone capacity to apply them.


I'm less worried about strategic robots than an ape species that kills each other with machetes and gas chambers for no real reason. It's not even a bug in our case. Willful genocide is a feature as old as our ancestors (and then some).

Maybe the AI will at least use modern, quick-killing weapons.


I'm less worried about machetes since they can't kill the whole human population. Even something as horrific as gas chambers only killed a fraction of a percent of the world population.

Genocide is ancient, but humanity somehow stumbles on as a species.

I'm more worried because we developed technology capable of wiping out all of humanity (and potentially, all life on Earth) in the fifties with thermonuclear bombs. The number of potentially humanity-destroying technologies has grown exponentially since, and with biological weapons, the cost and complexity have fallen.

I can think of directions to take humanity in that would take extermination off the table, but we don't seem able to implement big changes anymore.


When bombs kill the wrong people now it’s never called a human error, it’s an “errant” bomb.

I’d submit AI bombs will be less errant than the humans, who are not accountable.


The robot operating space should be fenced off, or a proper collaboration protocol should be defined.

The future is going to be more robots working in parallel with humans and collaborating. Robot manufacturers should ensure safe collaboration between robots and humans.


It's inevitable that automation will have its fatalities.

We can claim it's not robots killing people; it's people killing people because they decided to use the robot.

That's specious. We have something resembling a free market world. Anybody 'deciding' to not use automation (or AI or whatever) will be outcompeted by folks using the cheaper alternative. The playing field inevitably will become robot and AI dominated.

Make all the regulations we want; money will force the issue. As long as they don't egregiously slaughter people they will win.

Like slavery and indentured servitude before, and removal of indigenous populations, and exploitation of natural resources. The tragedy of the commons or something similar, where you have to join the throng or be outcompeted and your efforts extinguished. Only efficient market competitors can possibly survive.

All the rest is philosophy (sophistry?)


Errata: The Grover Shoe Factory disaster was the impetus for what became the BPVC and its mandate of safety features including safety valves and LWCOs (low-water cutoffs). The BPVC is a mostly comprehensive engineering code that often needs interpretation reports by professional engineers regarding product design, application, and failure analysis. Almost all safety regulations are written in blood.

https://en.wikipedia.org/wiki/Grover_Shoe_Factory_disaster

https://www.asme.org/codes-standards/bpvc-standards


The standard red herring of form over function - focus on these scary metal things rather than thinking too hard about what's directing them. In reality non-metal extrahuman intelligences have been killing humans far longer with their various paperclip maximizing orphan crushing machines.


Bruce Schneier? How the mighty have fallen.

In the US, 16 people were murdered today (by another human).

Worldwide, one person is murdered roughly every minute, or about 1,440 murders per day.
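A quick sanity check on that daily figure, taking the 1,440-per-day number as the anchor:

```python
# Convert the quoted daily murder count into a per-minute rate.
murders_per_day = 1_440     # figure quoted above
minutes_per_day = 24 * 60   # 1,440 minutes in a day

murders_per_minute = murders_per_day / minutes_per_day
print(murders_per_minute)   # 1.0 -> roughly one murder per minute worldwide
```

(A per-second rate would instead imply 86,400 murders per day, which is inconsistent with the 1,440 figure.)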

Perspective...



