Chinese chatbots apparently re-educated after political faux pas (reuters.com)
72 points by pseudolus on Aug 4, 2017 | hide | past | favorite | 56 comments


Anyone remember Microsoft's Tay chatbot experiment? 4chan got it to start spewing hate speech and 9/11 conspiracy theories on Twitter in no time.

https://www.theguardian.com/technology/2016/mar/24/tay-micro...

It just goes to show that chatbots are easy to manipulate. Since the Chinese bot said something we agree with we think that it's somehow showing us a deeper level of truth. Perhaps that deeper level of truth is that humans are easily fooled by confirmation bias?

You'll know that AI is about to take over the world when it can exploit our cognitive biases to convince us of something so strongly that it would take a human far longer to convince us that what the AI made us believe was not in fact true. Maybe this is what Elon Musk was referring to when he warned us about AI that is "deep intelligence in the network".


The article doesn't pretend chatbots are hard to manipulate, and in fact mentions the case of the Tay bot. I don't see anyone claiming that the bot has shown a deeper truth, although I would agree the Chinese government is corrupt.


The higher level question is how should we respond when machines, trained on facts and data, systematically make decisions that go against our sensitivities?


Can't say our machines are trained on facts, just data. What you get out of it is what you feed into it.

The challenge we are going to face in this AI era is that the training data can be biased or include morally/politically incorrect information. Someone can intentionally manipulate the data and make us feel that the response from the machine is trustworthy and factual, which can be devastating if it is used by a dictatorial government. Mind the alternative facts!


I feel like that's still an open problem for human intelligence...


A good article on this: https://www.propublica.org/article/breaking-the-black-box-ho...

We can UNintentionally manipulate the data too even when we are trying our best to be honest.


This is not a "fundamental" AI problem; just don't feed the AI shitty data.


What counts as "shitty data" is a subjective question; worse, it's often a political question. So "don't use shitty data" is a reductive and completely unhelpful strategy.


Data is shitty if it's intentionally adversarial, otherwise it's fine. Problem solved.


Intent has no bearing on the intrinsic quality of the data.


Sure it does. If I intentionally doctor data then it is low quality.


And what if you unintentionally doctor data?


Right but what you call "shitty data" is subjective and can easily differ based on opinion.


It's really an outdated concern anyway, since most AIs run in realtime and are fed data directly from clickstreams and other realtime pipelines. It's unclear what a 'biased' data source even means there.


Look at 4chan's successful "attacks" on chatbots: https://www.washingtonpost.com/news/the-intersect/wp/2016/03...

You'll be hard pressed to find a single fact that would be considered so by the entire human population. There are people who are seriously disputing that the earth is round.


I think that's a good example. Yes or no, is the world "round"?

Well, it's sort of a very slightly lumpy oblate spheroid... it also ties into whether we agree on what "round" means.

There's a knowledge problem involved - when answering a question you really have to understand the knowledge and intent of the speaker, establish a conversational frame.

The first years of university are often spent learning certain "facts" and theories, which are technically incorrect (but useful), which will be torn down and shown as approximations later.

Even things that look like "hard" facts are very often a matter of perspective.


this isn't about it being lumpy. this is about whether it's flat.


Your model can still be dysfunctional even if it is trained on "facts and data". No model has 100% accuracy, and it's easy to make a mistake when modeling and end up with unexpected issues (for example, image recognition systems not working well for people of colour because of either a bias in the dataset or non-robust methods).
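One way to catch that kind of failure, as a rough sketch: audit accuracy per subgroup instead of in aggregate. Everything here (the group labels, the numbers) is made up for illustration; real audits would use the actual sensitive attributes and model outputs.

```python
# Sketch: auditing a model's accuracy per subgroup instead of in aggregate.
# The groups and records below are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# A model can look fine in aggregate (here 78% overall) while
# performing much worse on one group than the other.
records = (
    [("A", 1, 1)] * 90 + [("A", 0, 1)] * 10 +   # 90% accurate on group A
    [("B", 1, 1)] * 60 + [("B", 0, 1)] * 40     # 60% accurate on group B
)
print(accuracy_by_group(records))  # {'A': 0.9, 'B': 0.6}
```

The aggregate number hides the disparity; the per-group breakdown is what surfaces it.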


If the current climate is anything to base predictions on, I would put my money on the answer lying in hard-coding them to adhere to that which is deemed politically correct.


Thereby opening a market edge to algorithms that leverage politically incorrect hidden truths.


You cannot make decisions based purely on facts and data. You also need goals, values, preferences or what you prefer to call it.


I agree with you. I just don't know how it will work.

Suppose we have a program that approves or denies loan applications. We input the acceptable default rate, and it makes the approval decisions. Then, we notice that the software disproportionately denies applications from a few demographics. What's the right reaction?

Or, what if the software makes hiring decisions, where the objective function is "hire people that will, in aggregate, maximize the corporate stock price."
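The loan scenario above can be sketched with toy numbers. Nothing here is a real underwriting model; the "score" cutoff and the per-group score distributions are invented purely to show how a rule that never sees group membership can still deny groups at very different rates.

```python
# Sketch: a loan rule that never looks at group membership can still
# produce disparate denial rates if a feature it does use (here a
# hypothetical "score") is distributed differently across groups.
def approve(score, cutoff=600):
    return score >= cutoff

# Synthetic applicants: (group, score). Distributions are invented.
applicants = [("X", s) for s in (700, 650, 620, 590, 710)] + \
             [("Y", s) for s in (580, 560, 610, 540, 590)]

denial = {}
for g in ("X", "Y"):
    scores = [s for grp, s in applicants if grp == g]
    denied = sum(1 for s in scores if not approve(s))
    denial[g] = denied / len(scores)

print(denial)  # {'X': 0.2, 'Y': 0.8} -- no group input, very unequal outcomes
```

That's exactly the hard question: the rule is "fair" in the sense of being blind to group, yet the outcomes are lopsided, and deciding what to do about it isn't a technical question.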


In fact, this is a new problem in fintech. While ML can mimic human decisions quite accurately nowadays, machines remain largely incapable of handling new information.

Now suppose again we have a machine that rates loan applications. In the past, writing your occupation as "AI scientist" probably wouldn't have made any difference. But with the same occupation in the past few years, even without new default data to retrain the machine, a human would know the risk is significantly lower than for the majority. The problem: while we think the machine is inferring future outcomes, in some important cases it is not, and we are largely unaware of which ones.


garbage in, garbage out


You know that something is very wrong when even an AI doesn't approve of your shitty government.


This is hilarious, and if these bots are fed actual user data, this means anti-communist feelings are much more prevalent than the Chinese government wants to let out. I.e. their censorship is not the brainwashing they wish it were, but only a curtain attempting to cover existing discontent.

So: funny, but also potentially good news.


Does anyone have information as to how the party leadership in China manages to pass off what's happening in China as "Communism"? I don't understand how the leadership which calls itself Communist while encouraging wage labour, private property, money and capitalist organisations can pretend to be in any way defending Communism, using chatbots or not.

So how is it done? Do they say "Socialism is what's happening in China, full stop" (which is false and ahistorical) or do they say "China is not yet a Socialist society, full stop" (which would seem to contradict their goal of being a Communist leadership)?


Currently in China, the locals don't really look upward; it's an unchangeable fact that they have a corrupt dictatorship. Instead they look around them and try to cheat each other, because they think that is the only way to get ahead.

I'm not sure they actually know much about what communism actually is. I'm sure they were taught something, though.


that's just not true that they don't look upward, plenty of officials got into trouble when people complained about them online


I’m not Chinese, but I believe the more orthodox Marxists in the Party argue China needs to reach the capitalist stage of development before it’s able to achieve communism (same justification as Lenin’s for the NEP), while the “right” of the Party prefer Third Way type rhetoric of capitalism harnessed to support a welfare state.

The phrase “socialism with Chinese characteristics” was common in their domestic propaganda in the 00’s; I’m not sure whether it still is.


Interesting, that makes sense, though of course I don't agree with either subset of the party; China has (almost?) entirely progressed to the capitalist mode of production, and if anything I believe that in China of all places the distinction between proletarian and bourgeois is most developed. It seems to me from the outside as though China has wholly abandoned the Communist cause in a less obvious way than North Korea has.

I'm about to start reading Badiou on the "Communist Hypothesis". It is clear the modern Marxists and Communists in general need to develop their praxis, or I am unaware of the modern developments.


they don't really call it any name, what's delivered is that the CCP is irreplaceable and that China and the CCP are synonyms

you can't love China and dislike CCP


Why should a society tolerate seditious speech, that seeks to topple the very society that allows it to flourish? Free speech should be used responsibly, to promote the values of a society, not destroy them. Few countries share the extremist US point of view of allowing all speech (with few exceptions, e.g. libel and true threats). Instead, they approach it more maturely, to safeguard their values and not allow extremism to take hold.

It saddens me I have to explicitly tag this as sarcasm.


>It saddens me I have to explicitly tag this as sarcasm.

I just thought you were a pro-China poster until I read that. It's a viewpoint I've commonly heard in Chinese propaganda, and even among ordinary Chinese people.


I think it's common in communist leaning circles in general. I've talked with a few of the fine folks of anti-fa on twitter, who express similar views. I think it makes sense in the context of the worldview. After all, why let one man's words lead to the mental detriment of thousands?


Does it really make sense in the context of the worldview, or is it an ex post facto rationalization they're making solely because of all the communist/socialist dictatorships of the past (tankies would be the ones doing this)? I have socialist beliefs, but I hate the blatant authoritarianism that a lot of people on the left espouse. I hate to make a slippery slope argument, but limitations on free speech scare me simply because what is "unacceptable" will (in my mind) assuredly grow larger and larger until perfectly reasonable beliefs are persecuted.

If your society can't exist with free speech, it probably shouldn't exist at all. Because at that point, you've become the oppressor. I know a lot of the hate towards "liberals" among those on the left is due to their opinions on capitalism and such, but given the original meaning of the term liberal (one who believes in liberalism, generally personal liberty) I fail to see why anyone would actually oppose that.


> Does it really make sense in context of the worldview

it seems there is a strong postmodernist bent in the "anti-fa" crowd. words are now violence, and sometimes the use of "violence" is justified to defend against "violence" (ie self-defense and defense of others).

nevermind that this equivocation of "violence" means that anti-fa is assaulting, hospitalizing, intimidating and harassing people, and destroying property and businesses.

it's all a righteous and necessary defense against "violence". even though the "violence" they "defend" against is... words.


Who is anti-fa?


I'm being 100% honest here. It's not a totally unreasonable position to take, and there are lots of countries that aren't dystopian nightmares who have limits on free speech. I think free speech is important, but I also think that you need an established civil society first that supports human rights and liberty, or else immediately after allowing unfettered free speech, you could end up with pogroms or theocracy.


> there are lots of countries that aren't dystopian nightmares

it perhaps depends on your perspective on dystopia; as well as whether these countries are approaching a dystopic descent as people aren't permitted to articulate the problems out loud.


I'm afraid that sentiment is growing outside of China as well, and the disclaimer would remain necessary even in the absence of Chinese posters.


This viewpoint is effectively British and Canadian law.


hey we just have hatespeach laws. I'm not proud of them but that's what we have.


> hatespeach

Who hates peach?


you are just 100% on the wrong side of poe's law here.


Very indicative of the growing ineffectiveness of censorship.


I've often wondered how long China can keep up this act of having their cake and eating it too when it comes to operating as a capitalist country with a controlling, pro-censorship Government. Part of me wonders why the people still tolerate it at all.


The economy is growing and people are doing better today than they were yesterday. The problem is that when the growth inevitably stops who will people blame and how will the government respond?


Growth solves many problems. ;-)


I feel like I am in the future.


Were you expecting bots to have more free speech rights than humans?


What's the point of a chatbot apart from automated propaganda?


Oh, you can do great stuff with them. For example, you can train them on Q&A for technical questions, using datasets like Stack Overflow and a NN.
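A minimal sketch of the idea: answer a new question by retrieving the most similar known question and returning its answer. The tiny Q&A corpus here is a toy stand-in for a Stack Overflow dump, and the similarity measure is plain token overlap (a real system would use TF-IDF or a trained neural encoder instead).

```python
# Sketch of a retrieval-style Q&A bot over a tiny hand-made corpus.
def tokens(text):
    return set(text.lower().split())

def best_answer(question, qa_pairs):
    """Return the answer whose question has the highest Jaccard
    similarity (token-set overlap) with the input question."""
    q = tokens(question)
    def sim(pair):
        p = tokens(pair[0])
        return len(q & p) / len(q | p)
    return max(qa_pairs, key=sim)[1]

qa_pairs = [
    ("how do I reverse a list in python", "Use list.reverse() or lst[::-1]."),
    ("how do I read a file line by line", "Iterate over the open file object."),
]
print(best_answer("reverse a python list", qa_pairs))
# -> Use list.reverse() or lst[::-1].
```

Retrieval avoids the Tay failure mode somewhat, since the bot can only ever say things already in its curated corpus; a generative NN trained on raw user input has no such guardrail.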


Won't be long before neuroscience achieves a whole new level of totalitarian horror and conspiratorial doctors reprogram brains to remove these sorts of bugs.


Isn't that already done, if not at the biological level then at the ideological level, via propaganda?





