> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.
Then I have good news for you: If humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it".
>If humanity goes extinct in the next few years because of unaligned superintelligence
This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. The majority of anti-AI people don't actually fear this, because holding that fear would require having already bought into the hype about the actual power and prowess of AI. The bigger motivator for anti-AI folks is usually just the way it amplifies the negative traits of humans and of the systems we have created, which is already happening and doesn't need any pending "superintelligence" breakthrough. For example, an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
There are many different groups of anti-AI people with different beliefs.
This attempt to "reframe and reclaim" (here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics") is a rhetorical device, but not an honest one. It's a power struggle over who gets to define and lead "the" anti-AI movement.
We may agree or disagree with them but there are rational anti-AI arguments that center on X-risks.
>There are many different groups of anti-AI people with different beliefs.
See my other comment. I qualified what I said while the comment I replied to didn't, so it's weird that this is a response to me and not the prior comment.
>here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics"
If we're talking "dishonest rhetoric", this is a dishonest framing of what I said. I'm not saying this is inherently intentional marketing hype. I'm saying there is a correlation between someone who thinks AI is that powerful and someone who thinks AI will benefit humanity. The anti-AI crowd is less likely to believe in AI's unique power and will simply look at it as a tool wielded by humans, which means critiques of it will simply mirror critiques of humanity.
> an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
Exactly, "lack of intelligence" is really a much bigger concern than "superintelligence".
Companies and governments will happily try to save money and avoid accountability by letting AI do work that it can only do poorly, and it will be humans who are left with the accelerated, AI-powered enshittification and blind, soulless paperclip maximization that results.
Which is why I said "The majority of anti-AI people...". It was the comment I was responding to that was treating the anti-AI crowd as homogeneous by ascribing to them all a rather fantastical belief of a minority of that group.
> If humanity goes extinct in the next few years because of unaligned superintelligence,
I've seen people claiming that this could happen, but I've yet to read any plausible scenario where this might be the case. Maybe I lack the imagination; could you enlighten me?
That "etc" in "robots, factories, etc" is doing a lot of work here.
Factories, even fully robotic ones, heavily rely on humans to set up and maintain them. Moreover, the safety culture means there are tons of "disable" controls which can be triggered by any human and no machine can override.
Robots look impressive, but they cannot function without the humans either. Military kill-bots are likely the worst, but machines cannot repair or refuel them.
None of this is going to change in the "next few years".
Robots can't function without humans because they're not super-intelligent. We already see quite capable humanoid robots. Those factories that rely on humans - they'll be converted to be operated by humanoid robots. By the super intelligence.
That's the hand wavy story. It's hard to dive into details in an HN comment but I'm happy to try and develop some of those details. You're saying that something much smarter than humans isn't going to be able to bridge the gap to the physical world. I'm not so sure.
EDIT: Another way to think about it: if a god-like, infinitely capable being took control of all our online digital systems (I dunno, Teslas, factory automation, the power grid, any form of connected robot in the world, nuclear weapons launch systems, airplanes, whatnot), would it have any path to a sustainable "existence" without relying on humans? Or at least one we'd be unable to detect and stop? If the answer is no, then we're probably safe. It's kind of hard to convince ourselves of that. Keep in mind that humans can also be manipulated to do work for this god, just like spies/saboteurs are recruited online today and paid bitcoin to do some random master's bidding.
> Another way to think about it is that if a god-like infinitely capable being took control of all our online digital systems including I donno Teslas, factory automation, power grid, any form of connected robot in the world, nuclear weapons launch systems, airplanes, whatnot, do they have any path to a sustainable "existence" without relying on humans
ha ha ha no. Teslas run out of energy and cannot refuel, a circuit breaker in a factory pops and no robot can reach it (or maybe a roof leaks), and "any form of connected robot" _either_ cannot walk up stairs or can maybe run for a hundred miles before running out of battery.
The "humans can be manipulated" part is the only thing to worry about, and you don't need robots for that; other humans have been doing it just fine for millennia. I guess it's up to you whether you want to be afraid or not, but I am not seeing anything super special so far.
I've yet to read any plausible scenario where stockfish defeats me, all the scenarios my friends come up with have obvious holes in the plays they suggest stockfish could make.
We mostly know how to make it understand what we want. We don't know how to make it care about what we want, except via reinforcement learning. There are good reasons to believe RL won't work for this once the AI reaches a certain level of capability.
What's more likely to happen is that humanity won't go totally extinct--it will just drastically shrink. When robotics and AI perform all useful work and everything is owned by the top 1000 richest people, there will be no more economic purpose for the remaining 7,999,999,000 of us. The earth will become a pleasure resort for O(1000) people being served by automation.
If a company made a car that could drive itself, generating revenue around the clock, they'd have to charge a pretty high price to justify selling it to you.
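To make the pricing argument concrete, here's a toy back-of-the-envelope sketch; every number in it is a made-up assumption, not a real figure:

```python
# Toy break-even sketch: all numbers below are hypothetical assumptions.
net_per_hour = 20.0    # assumed net fare revenue per hour, after costs ($)
hours_per_day = 20     # assumed daily utilization of a driverless car
service_years = 5      # assumed useful life of the vehicle

lifetime_revenue = net_per_hour * hours_per_day * 365 * service_years
print(f"Lifetime net revenue: ${lifetime_revenue:,.0f}")
# Under these assumptions, selling the car for much less than this
# leaves money on the table versus operating it as a robotaxi.
```

Even with modest assumed hourly earnings, round-the-clock utilization pushes the break-even sale price well above what an ordinary private buyer would pay.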
I know this sounds dystopian to some, but car ownership is a burden and I'm looking forward to a world where I can make an unoccupied car show up with my phone when I need it, and completely forget about it when I don't.
I'm no fan of AI in terms of its long-term consequences, but being able to "just do things" with the aid of AI tools, diving head first into the most difficult programming projects, is going to improve human programming skills worldwide to levels never before imaginable.
>is going to improve the human programming skills worldwide to levels never before imaginable
"We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand"
My comment is that it lowers the threshold to "just doing things".
An experiment where people HAVE TO DO something either way is testing something different.
I know a fair amount about native Android app development now because of using AI to build several native apps. I would know zero about native Android development if I had never attempted to build a native Android app.
I have to stretch your analogy in weird ways to make it function within this discussion:
Imagine two people who have only sat in a chair their whole lives. Then, you have one of them learn how to drive a car, whereas the other one never leaves the chair.
The one who learned how to drive a car would then find it easier to learn how to run, compared to the person who had to continue sitting in the chair the whole time.
I've found AI handy as a sort of tutor sometimes, like "I want to do X in Y programming language, what are some tools / libraries I could use for that?" And it will give multiple suggestions, often along with examples, that are pretty close to what I need.
And they're all wrong. The real use was with a (now decayed-away) wooden ball inside them which, when shaken, displayed texts like "certum est" ("it is certain") and "iterum postulo" ("I ask again").
After the Stasi found the remains of the balloon from the first escape attempt, they put out a reward for information; I think everyone would take the story seriously at that point. It was why they decided they had to go through with the second attempt: they were convinced the Stasi was going to catch them soon. But they had to buy the materials for that balloon after the Stasi had found the remains of their previous attempt.
> But they had to buy the materials for that balloon after the Stasi had found the remains of their previous attempt
The repressive state apparatus was moving too slowly. Maybe they hoped there wouldn't be a second attempt after the first one failed, or maybe it wasn't promptly reported to the apparatchiks and was handled internally by the Stasi to avoid backlash.
I would assume they did not tell them anything at all until the time came. And after the first failed attempt, they were probably genuinely shocked enough to understand the situation and keep their mouths shut.
Children put in serious situations are capable of much more serious behavior than children who have only known comfort and safety.
Slop is not, by definition, AI-generated. The word "slop" dates from the mid-16th century, and its modern colloquial/meme use originated on 4chan in 2016. That's why we call AI slop "AI slop" and not just "slop".
Hardly, asymptotic behavior can be anything, in fact that's the whole question: what happens to AI performance as we tend to infinity? Asymptoting to `y = x` is very different to levelling off.
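A tiny sketch of why that distinction matters; both curves here are invented purely for illustration, not fitted to any real benchmark:

```python
import math

# Two invented progress curves: one saturates, one grows without bound.
def levelling_off(x):
    # Tends to 100 as x -> infinity (a hard ceiling).
    return 100 * math.tanh(x / 100)

def linear_asymptote(x):
    # Tends to y = x for large x (no ceiling).
    return x * math.tanh(x / 100)

# At moderate x the two curves are within a small factor of each other,
# but their long-run behavior is completely different.
for x in (50, 200, 1000):
    print(x, round(levelling_off(x), 1), round(linear_asymptote(x), 1))
```

The point of the sketch: data from the early regime can look nearly identical for curves whose asymptotes diverge without bound, so the early trend alone doesn't settle the question.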
Deep learning and transformers have each produced step-function jumps in AI's capabilities. It may not happen, but it's reasonable to expect another step-function development soon.
Yeah, my rule when I got started was "If I ever lose a lifetime aggregated $3000 in the markets, then the markets are not for me". Then once you're ahead in the markets, it's fine to continue trading.
(Note: I ended up breaking my rule by continuing to trade after losing $5000, but then did great in the markets anyway in the long run LOL)
I made the mistake of gambling on 3x gas and oil futures once. The problem was, I got lucky a couple of times. The markets eventually took back all my money and then some. Gains based on luck can be toxic! Unless it's a well-thought-out statistical approach, maybe, but that would be a full-time job.
Also, if the losses are in a non-IRA account, they are (or were? talk to your CPA) tax-deductible against other gains. But is it all worth even one lost night of sleep? Nah.... Of course, there are ways of covering bets with options, but that is professional-level stuff.