The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
The Wikipedia page then proposes several reasons why onlookers might "discount" AI programs. However, those reasons seem to imply that the humans are making a mistake in "discounting" the behavior of AI programs, and that these AI programs might actually be intelligent. I want to make an alternative argument in which the humans are still making a mistake, but the mistake is not in "discounting" the behavior of AI programs.
Consider the following situation. I want to build a machine that has X (where X is some trait, like intelligence). I can intuitively evaluate whether a machine has X, but I don't have a good definition of what X actually is. All I can do is recognize whether something has X or not.
However, I believe that people who have X can do Y. So if I build a machine that can do Y, then surely I will have built a machine that has X.
After building the machine that can do Y, I examine it to see whether it has X. It does not. And while a machine that can do Y is cool, what I really want is a machine that has X. So I go back to the drawing board and think of a new way to reach X.
After writing on the whiteboard for a couple of hours, I realize that people who have X can do Z. Of course! I set out to build a new machine that can do Z; surely, if it can do Z, it must have X.
After building the machine that can do Z, I check whether it has X. It does not. And so I return to the drawing board, and the cycle repeats and repeats...
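To make the structure of this cycle explicit, here is a minimal sketch in Python. Everything in it is hypothetical: `PROXIES`, `build_machine_for`, and `has_x` are stand-ins for the intuitions and engineering efforts described above, not real functions.

```python
# A minimal sketch of the proxy-measurement cycle described above.
# Every name here (PROXIES, build_machine_for, has_x) is a hypothetical
# stand-in, not a real API.

# Proxy abilities we believe only things with X can perform: Y, Z, ...
PROXIES = ["do Y", "do Z"]

def build_machine_for(proxy):
    """Stand-in for the engineering effort: a machine built to do `proxy`."""
    return {"can": proxy}

def has_x(machine):
    """Stand-in for the intuitive judgment 'does this machine have X?'.
    We can apply the judgment, but we cannot write down the definition
    it relies on, which is exactly the problem."""
    return False  # so far, the intuitive verdict has always been "no"

for proxy in PROXIES:
    machine = build_machine_for(proxy)
    if has_x(machine):
        print("Success: built a machine with X via", proxy)
        break
    # The proxy was faulty: doing the proxy task did not entail having X.
    # Back to the drawing board; pick the next proxy and repeat.
else:
    print("Out of proxies; still no machine with X.")
```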
Essentially, humans are attempting to determine whether an entity has intelligence via proxy measurements, but those proxy measurements are potentially faulty: it is possible to pass them without ever actually having intelligence. Until we know how to define intelligence and design a test that accurately measures it, it is very unlikely that we will build a machine that has it. So the AI effect occurs because humans don't know how to define "intelligence", not because onlookers are wrong to dismiss these programs as not "intelligent".
Is this argument valid or correct? And if not, why not?