There's a difference in the level of quality needed for bad uses versus good ones.
If I'm trying to oppress a minority group, I don't really care about false positives or false negatives. If it's mostly harming the people I want to harm, it's good enough.
If I'm trying to save sick people, then I care whether it's telling me the right things or not - administering the wrong drugs because the machine misdiagnosed someone could be fatal, or worse.
Edit: so a technology can simultaneously be good enough to be used for evil, while not being good enough to be used for good.
- AI is good enough at doing "bad" things to scare us
- AI is also bad enough at doing "good" things to be undesirable otherwise