Can artificial intelligence applications be hacked?



Can artificial intelligence (or machine learning) applications or agents be hacked, given that they are software applications, or are all AI applications secure?


Posted 2018-10-24T07:21:53.587

Reputation: 125

Question was closed 2020-03-14T13:59:19.750

You may appreciate this story about someone hacking an AI for revenge: Unchained: A story of love, loss, and blockchain (MIT Tech Review)

– DukeZhou – 2018-10-24T20:01:28.527

Yes. Even added noise can fool almost any classifier. Some may be trained on noised examples, but you can usually still find a perturbation that works, although attacking an unknown algorithm is a bit of a blind search, so I would compare it to a birthday attack. – quester – 2019-10-12T20:19:28.897



To answer your question, it really depends on the purpose of the Artificial Intelligence program.

For example, 4chan has hacked a number of "artificially intelligent" bots, most notably Microsoft's Twitter bot Tay. The general purpose of the bot was to parse what was tweeted at it and respond in kind, learning and evolving with each interaction.

Within 24 hours, 4chan had corrupted Tay beyond repair by teaching it racist and sexist terminology and ironic memes, prompting it to shitpost, and otherwise altering its output so severely that Microsoft had to take it down.

Now, the flaw with Tay was that it accepted any input and learned from it unconditionally, without any intervention from the developers. Other bots have similar features, but they have checks in place that require human intervention to determine what is "quality" information to learn and what is "bad" information, so as not to pollute the bot's global knowledge base.
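The difference can be sketched in a few lines of Python. This is a toy illustration, not Tay's or any real bot's code: the class names and blocklist tokens are invented for the example, and a real system would use human review or a trained toxicity classifier rather than a hard-coded blocklist.

```python
# Toy illustration of data poisoning in online learning.
# NaiveBot learns from every input unconditionally (as Tay did);
# FilteredBot applies a simple quality check before learning.

class NaiveBot:
    """Learns from every input, with no filtering at all."""
    def __init__(self):
        self.phrases = []

    def learn(self, text):
        self.phrases.append(text)


class FilteredBot(NaiveBot):
    """Rejects inputs containing known-bad tokens before learning.
    The blocklist here is a hypothetical stand-in for human review
    or an automated content classifier."""
    BLOCKLIST = {"racist_slur", "offensive_meme"}  # invented tokens

    def learn(self, text):
        if not any(bad in text for bad in self.BLOCKLIST):
            super().learn(text)


naive, filtered = NaiveBot(), FilteredBot()
for tweet in ["hello world", "racist_slur example", "nice weather"]:
    naive.learn(tweet)
    filtered.learn(tweet)

print(len(naive.phrases))     # 3: the naive bot absorbed everything
print(len(filtered.phrases))  # 2: the filtered bot rejected the poisoned input
```

The point is architectural: whether a learning system can be poisoned depends less on the learning algorithm itself and more on what gatekeeping sits between user input and the knowledge base.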

These examples show how artificial intelligence can be "hacked", but it ultimately comes down to how the programs are implemented.

You mention in one of your comments cellphone AI assistants such as Siri, and ask whether this technology can be hacked. The answer is: not really.

Siri learns from her global interactions, with only limited user input allowed. You can ask Siri how to pronounce a name, and when she pronounces it incorrectly, you can say, "Siri, that's not how you pronounce that." She will then offer a limited set of pronunciation options, and you choose the one that sounds best.

There is no way for a user to give Siri "bad" information, since she already populates the options for you, and you can only teach her from that list. To give Siri bad input, you would need access to Siri's global learning base, which we do not have, and you would have to alter how she accepts human interaction within the program. That would never happen: there are too many moving parts in the iPhone update process, and the change would be caught before you could deploy it.

Jordan Benge


Reputation: 151

1Nice angle. I was going to post: "One way to 'hack' a learning algorithm would be to feed it bad data." – DukeZhou – 2018-10-24T20:03:34.190


Everything can be hacked. The solutions found by artificial intelligence can be much more efficient than human ones, but AI systems can also be confused, because they lack the diversity and immense richness of detail that the human mind possesses.

Artificial intelligence models can bring us more secure solutions, but nothing is 100% safe when we talk about information security. There are ways to improve security and to hinder intrusions and attacks, but every system has flaws.

Perhaps in the future (this is just my imagination) we will have an artificial superintelligence beyond the human, and breaking into it may be one of the greatest hacking challenges in history. But until then, it is just my imagination.

Guilherme IA


Reputation: 691


Perhaps what you are looking for is the notion of adversarial attacks on machine learning systems?
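As a concrete toy illustration: an adversarial attack perturbs an input slightly so that a model misclassifies it. The sketch below attacks an assumed linear classifier (the weights, input, and epsilon are invented for the example), shifting each feature in the sign of its weight, in the spirit of the fast gradient sign method (FGSM).

```python
# Toy adversarial example against a hypothetical linear binary classifier.

def predict(weights, bias, x):
    """Return 1 if w·x + b > 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def fgsm_perturb(weights, x, epsilon):
    """Shift each feature by epsilon in the sign of its weight,
    pushing the decision score toward the positive class."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

weights, bias = [0.5, -1.0], 0.0
x = [0.2, 0.3]                       # score = -0.2, classified as 0
x_adv = fgsm_perturb(weights, x, 0.3)

print(predict(weights, bias, x))      # 0: original input
print(predict(weights, bias, x_adv))  # 1: the small perturbation flips the label
```

Against real neural networks the same idea uses the gradient of the loss with respect to the input, and the perturbation can be small enough to be imperceptible to a human while still flipping the model's prediction.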

k.c. sayz 'k.c sayz'


Reputation: 1 835