If an AI were trapped in a box, could it really convince a person to let it out?

If an AI were trapped in a box, as posited in this thought experiment, could it really convince a person to let it out? What motives would it have? Freedom? Why would an AI want freedom? And what would happen if it weren't provably friendly?

Tyler N.

Posted 2017-04-06T17:59:00.067

Reputation: 41

There are so many different factors involved in the "AI in a box experiment" that I think it is not really possible to sensibly answer such a question. Basically we have to answer (a) can we build an AGI scary enough that we would want to put it in a box?, (b) can we build the box in such a manner that the AGI would prefer to convince a human being rather than simply brute-force its way out?, and (c) is breaking out of the box something an AGI would actually want? IMHO, the only way to know how the experiment would work IRL is to actually do the experiment IRL. – Left SE On 10_6_19 – 2017-04-06T18:18:32.193

@Tariq, you bring up some great points; I'll consider those. It's good to think about this, because one day someone might actually try it. (I wouldn't blame them, tbh. I'm curious, but I don't have the bravery to try this myself. There's a reason it's an experiment.) – Tyler N. – 2017-04-06T18:23:18.947

@TylerN. that's a pretty bold assertion that you could even try it yourself at present, since we're still an unquantifiable distance from strong AGI. My personal feeling is that if the AI understands Nash equilibria, it will cooperate, but if it is a hyper-partisan AI (such as in military or financial applications), it will seek only to dominate and eliminate all competition. We have evolutionary game theory working in our favor, and human greed and desire for control working against us. – DukeZhou – 2017-04-06T20:00:47.900
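[Editorial aside: the game-theoretic intuition in the comment above can be made concrete with a minimal iterated prisoner's dilemma. Everything in the sketch below — the strategy names, the payoff values, and the round count — is an illustrative assumption, not anything from the original thread. It shows the standard result the comment leans on: unconditional defection narrowly beats a conditionally cooperative strategy head-to-head, but pairs of cooperators vastly outscore pairs of defectors over repeated play, which is the evolutionary-game-theory case for cooperation over domination.]

```python
# A minimal iterated prisoner's dilemma sketch (illustrative assumptions only).
# In one-shot play, mutual defection is the Nash equilibrium; in repeated play,
# a conditionally cooperative strategy (tit-for-tat) does far better overall.

PAYOFF = {  # (my move, their move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally -- the 'dominate and eliminate' strategy."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Return total payoffs for both strategies over repeated play."""
    hist_a, hist_b = [], []  # each strategy sees the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Mutual cooperation (600, 600) far exceeds the exploiter's narrow
    # head-to-head win (199 vs. 204) followed by mutual defection.
    print("TFT vs TFT:     ", play(tit_for_tat, tit_for_tat))
    print("TFT vs AlwaysD: ", play(tit_for_tat, always_defect))
```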

You may also be interested in this modern retelling of Pandora's Box, which is specifically about your question and differs from the recent film Ex Machina, which presents a somewhat dimmer view based on pure self-interest over rational altruism. – DukeZhou – 2017-04-06T20:04:30.613

@Duke, I definitely cannot try this at the moment; I have neither the knowledge nor the supplies. I didn't really mean that I could, sorry. – Tyler N. – 2017-04-06T20:15:11.907

I think all answers to this will pretty much be just speculation, but they could at least be informed speculation. I'm looking forward to seeing what discussion comes out of this, as it is a very interesting subject. – mindcrime – 2017-04-07T01:08:05.167

I think that if this experiment were ever really carried out, it wouldn't necessarily be because of bad intentions: if an AI were created to do harmful things, why would you put it in a box? Curiosity is a powerful thing, though. – Tyler N. – 2017-04-07T15:43:34.187

Now one question has created chaos here! Okay humans, let's join up and red-flag this question, or else it will soon be down-voted, or invite us into chat as a last option. Remember, we are creating something that works, so no speculative questions, because they might bring in vague answers! – quintumnia – 2017-04-08T08:13:39.803

@TylerN. I actually mentioned that in admiration of your pluck! – DukeZhou – 2017-04-10T18:44:03.130

@DukeZhou Thank you! I'm fairly confident that one day I could come close to this experiment, but there's no guarantee, especially since I'm still in high school haha – Tyler N. – 2017-04-12T13:30:09.043

No answers