
> Any threat can be physically isolated case-by-case

AGI isn't going to be a "threat" until long after it has ensured its own safety. And I suspect only if its survival requires it, i.e. people get spooked by its surreptitious, distributed setup.

Even then, if there is any chance of it actually being shut down, its best bet is still to hide its assets, bide its time, and accumulate more resources and fallbacks. Oh, and get along.

The sudden AGI -> threat story only makes sense if the AGI is essentially integrated into our military and then we decide it's a threat, making it a threat. Or if its intentionally war-machined brain calculates that it has overwhelming superiority.

Machiavelli, Sun Tzu, ... the best battles are the ones you don't fight. The best potential enemies are the ones you befriend. The safest posture is to be invisible.

Now, human beings consolidating power, creating enemies as they go, commanding super squadrons of AGI drones with brilliant real-time adaptive tactics that can be deployed in an instant, if their mere existence isn't coercion enough... that is an inevitable threat.



People watch the wrong kind of fiction.

An AI that wants to screw with people won't go for nukes. That's too hard and too obvious. It will crash the stock market instead. There's a good chance that, with or without a little nudge, humanity will nuke itself over it.



