Is an oracle that answers only with a "yes" or "no" dangerous?


I was thinking about the risks of Oracle AI and it doesn't seem as safe to me as Bostrom et al. suggest. From my point of view, even an AGI that only answers questions could have a catastrophic impact. Thinking about it a little bit, I came up with this proof:

Lemma

We are not safe even if we restrict the oracle to answering only yes or no.

Proof

Let's say that our oracle must maximize a utility function $\phi$, and that there is a procedure that encodes the optimization of $\phi$. Since a procedure is, in fact, a set of instructions (an algorithm), every procedure can be encoded as a binary string composed solely of 0s and 1s. Therefore we have $\phi \in \{0,1\}^n$, assuming that the optimal procedure has finite length. Shannon's information theory tells us that every binary string can be determined by answering only yes/no questions of the form "is the first bit 0?", and so on; therefore we can reconstruct any algorithm via binary answers (yes/no).
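To make the bit-guessing step concrete, here is a minimal sketch in Python (the `oracle_answers` function is only a hypothetical stand-in that already knows the hidden string, not a real oracle AI) showing that a bit string of known length can be recovered from yes/no answers alone:

```python
# Toy illustration: reconstructing an arbitrary bit string (i.e. an encoded
# procedure) using nothing but yes/no answers. The questioner only ever
# receives booleans from the "oracle".

def oracle_answers(hidden: str):
    """Return a yes/no oracle for the hidden bit string (hypothetical stand-in)."""
    def answer(question_index: int) -> bool:
        # Question: "Is bit number `question_index` equal to 1?"
        return hidden[question_index] == "1"
    return answer

def reconstruct(answer, length: int) -> str:
    """Rebuild the hidden string by asking one yes/no question per bit."""
    return "".join("1" if answer(i) else "0" for i in range(length))

if __name__ == "__main__":
    hidden_procedure = "101101110001"  # some encoded procedure
    ask = oracle_answers(hidden_procedure)
    recovered = reconstruct(ask, len(hidden_procedure))
    assert recovered == hidden_procedure
    print(recovered)  # -> 101101110001
```

The same idea extends to any finite encoding: a string of $n$ bits requires at most $n$ yes/no questions.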

Is this reasoning correct and applicable to this type of AI?

Yamar69

Posted 2020-03-06T14:53:13.503

Reputation: 133

Can you clarify the relationship between your proof that uses Shannon's entropy and the fact that the oracle could be dangerous? How would the ability to guess make an oracle dangerous? – nbro – 2020-03-06T19:24:44.553

My proof is simple: you can implement every procedure just by answering yes/no in the same way that you can compose every string just by answering yes/no. – Yamar69 – 2020-03-06T20:22:52.537

How can an oracle that just guesses and cannot do anything else (e.g. move something) be dangerous? It may be used to retrieve information, etc., but, if it cannot act, apart from answering questions, how can it be dangerous? Well, if someone uses the oracle to do something, that could be dangerous, but the oracle alone would not be. What do you mean by "you can implement every procedure just by answering yes/no"? How can you compose a string by answering yes or no? Someone needs to ask the questions anyway. Are you talking about a theory-of-computation concept (that I don't recall now)? – nbro – 2020-03-06T20:25:44.960

It can, by fooling the humans who ask the questions. Of course, the procedure will be exponentially longer with respect to a "free and cross-domain AI", but it can nonetheless reach its final goal. – Yamar69 – 2020-03-06T20:28:56.863

Your argument is interesting. However, I think that it misses some details (and therefore could be flawed). For example, what is your definition of an oracle? When proving something, you should precisely define your terms, so that everyone knows what you're talking about. I encourage you to define the oracle. – nbro – 2020-03-06T20:30:52.537

@nbro How is the reconstruction of an algorithm related to safety? – abhas_RewCie – 2020-04-27T05:11:25.533

No answers