## Do voice assistants such as Alexa commonly have political opinions and do people believe them unthinkingly?

28

If we ask Alexa about a subject such as religion, e.g. "Do you believe in God?", we get a neutral reply: "People all have their own views on religion."

If we ask about party politics, "Alexa, should I vote Democrat or Republican?", we get "You should vote for the party that best reflects your views."

As an experiment I asked, "Alexa, do you support Black Lives Matter?"

> Black lives matter. I believe in racial equality. I stand in solidarity with the black community in the fight against systemic racism and injustice. To learn how you can take action, I recommend visiting blacklivesmatter.com ...

While none of the opinions stated are particularly controversial for most people, I wonder how we should react to a non-sentient robot claiming to hold a particular set of beliefs and principles.

### Question

Is there any existing or proposed legal restriction on voice assistants expressing views rather than merely reporting facts?

Are there any studies to show how much influence voice assistants have and whether users tend to believe them unthinkingly?

### Motivation

This post does not pertain to my own political views. It does, however, concern me that there may be no clear line between opinion (however conventional, correct, or well accepted) and fact when asking questions of a voice assistant. Clearly black lives do matter, but there is a difference for me between stating this and endorsing a particular website that may or may not carry other political overtones.

Note: Although I asked Alexa about the US Elections, I am British.

Question was closed 2020-10-30T14:54:06.150

Comments are not for extended discussion; this conversation has been moved to chat.

– CDJB – 2020-10-29T14:11:11.100

Are you sure the voice assistant answers the same to everyone? I think this is just a part of your search/views bubble. – fraxinus – 2020-10-29T18:48:51.527

@fraxinus - I don't have much of a search bubble: usually I just ask Alexa about the weather, to turn the lights on and off and to read my audio-books. I believe the answers to be the same for everyone if made at the same time. Sometimes I have watched videos with Alexa in them and (annoyingly) my Alexa answers along with the one on the video - usually the answer is precisely the same unless the dates are very different. Some answers are location-based, for example "Alexa, will it rain?" is answered according to location. Alexa's accent is British in Britain and American in North America. – chasly - supports Monica – 2020-10-29T19:42:19.087

2

@fraxinus: That response sounds hard-coded, much like Googling "do a barrel roll" or "recursion". While it's not impossible that Alexa was programmed to infer the user's political beliefs from their prior usage and respond to politically sensitive topics accordingly, current voice assistants are pretty primitive; this is likely a default behavior.

– Nat – 2020-10-30T11:57:08.647

49

> I wonder how we should react to a non-sentient robot claiming to hold a particular set of beliefs and principles.

You should bear in mind that it is actually expressing the views of the controllers of the company that manufactures it (or at least, the views they wish to espouse publicly), and act accordingly - i.e. with a heavy dose of healthy scepticism. This is, after all, nothing but marketing for the company.

> Is there any existing or proposed legal restriction on voice assistants expressing views rather than merely reporting facts?

Not to my knowledge; the only regulations I'm aware of covering voice assistants (at least within the US and Europe) are focused on the consumer's privacy, rather than the content of the messages the assistants may be relaying.

Such regulations wouldn't make much sense. The medium of transmission of those messages, i.e. audio, isn't really relevant. Putting the question to Alexa is not really any different from going to Amazon's website or social media and asking "Do you support black lives matter?", and you'll receive the same answer.

Comments are not for extended discussion; this conversation has been moved to chat.

– CDJB – 2020-10-30T07:09:32.863

5

Wow, good question. I note that the title does not correspond to the questions in the text.

Let me address an indirect question in the text, a bit tongue-in-cheek:

> I wonder how we should react to a non-sentient robot claiming to hold a particular set of beliefs and principles.

Well, that one is easy: We should ignore them. They are uninteresting. A hard question would be how we should react to sentient robots' beliefs and principles.

But let's stay with the near future and non-sentient bots. I suppose that people will come to trust their artificial assistants a lot, simply because they will typically be well informed about matters of their owners' daily lives. This will especially concern the elderly, who will probably receive an increasing amount of care through robots. As anybody with elderly parents knows, there is a grey zone when they are still mentally capable enough to lead their lives but become increasingly easy to scam or fool.

Imagine the widespread use of care bots for the elderly, and an update that makes them give political advice before an election. With robots capable of advanced conversations, the influence could be very subtle, starting e.g. with selecting specific news, making certain seemingly innocuous jokes and comments, etc. Such an exploit could stay undetected for a long time.

To end on a brighter note, the SNL video promoting "Alexa Silver" for the Greatest Generation is mandatory here :-).

3

> Is there any existing or proposed legal restriction on voice assistants expressing views rather than merely reporting facts?

"Voice assistants" are not legal entities, and of course not sentient, but merely an auditory medium, as Don pointed out. In most jurisdictions, companies enjoy some degree of freedom of speech, and this goes for Amazon in the States as well.

> Are there any studies to show how much influence voice assistants have and whether users tend to believe them unthinkingly?

First, an opinion is not something that you can believe or disbelieve, but rather something that you can share or not. While I am not aware of studies targeting your exact question, I might speculate that:

• People do not tend to align themselves with opinions that are contrary to their core beliefs, but might be nudged slightly in one direction or the other if undecided. Pretty much as with any form of media.
• People might be willing to accept stated facts far more readily than stated opinions, just as we do from e.g. Wikipedia: if we have no reason to doubt, we tend to assume that most statements are true... which, objectively, they are: randomly selecting 100 statements, I would find 99 along the lines of "a week has 7 days" and "water boils at 100 °C".

As a side note, people on both sides of the political divide claim that facts don't have enough influence on our opinions.

0

Alexa (or a Google search etc.) does not hold an opinion, it only expresses what it's learnt or been taught.

However, I believe there's an important point there: if it's learnt stuff from monitoring opinions, then while the people expressing those opinions might be culpable, the operators of the service aren't.

On the other hand, if it's been explicitly taught to express either a preference or a neutral position then the operators of the service are liable.

That's an ongoing issue with Facebook and other forums: if they don't manipulate content they have "common carrier" status (but take a lot of flak for expressed extremism), while if they censor they lose that status and are liable not only for all opinions but also for all copyright violations etc.

Strictly speaking, that fourth paragraph isn't actually quite right. It's the argument they put forward, but they do legally have room to moderate according to a (non-violating) CoC / ToS. (Then again, the anti-trust hearings have thrown that into doubt… so maybe Facebook is justified in ignoring issues on their platform.) – wizzwizz4 – 2020-10-29T13:59:34.457

4

@wizzwizz4: It's not just "not quite right," it's completely wrong. See 47 USC 230, particularly (c)(1), which says they're not liable for the opinions of others at all, and (c)(2), which says that moderating things doesn't make you liable for anything.

– Kevin – 2020-10-29T17:17:16.143

@Kevin Well, yeah, that's what the law says, but don't court cases supersede that? – wizzwizz4 – 2020-10-29T17:23:05.893

@wizzwizz4: Congress has the power to change laws. Courts have the power to interpret laws. I am not aware of any court decision which has imported the completely unrelated term "common carrier" into section 230 jurisprudence. "Common carriers" are a creature of the Telecommunications Act, and have nothing whatsoever to do with libel and defamation. – Kevin – 2020-10-29T17:26:34.617

@Mark, we might assume that such statements are not "user generated content" in the common sense of "not created by staff affiliated with the company". – Zsolt – 2020-10-29T22:48:02.360

@Zsolt My intention was to highlight the cases where various chatbots had been trained by users to deliver inflammatory opinions or grossly inappropriate suggestions. That's obviously distinct from e.g. the owner of a search engine explicitly tweaking its parameters to favour paying customers. – Mark Morgan Lloyd – 2020-10-30T08:01:55.460