How do we test if a model or algorithm is AI-complete?
According to the Wikipedia definition, a problem is said to be AI-complete if solving it requires generalized, human-level intelligence, that is, strong AI. The Turing test and its variants are the best measures we have of this. See, for example, Turing Test as a Defining Feature of AI-Completeness.
As that paper suggests, for the Turing test to be meaningful, the interrogator has a responsibility to ask questions that are both deep and meaningful.
It therefore seems likely that testing for strong AI is itself an AI-complete task.
One cannot judge any form of intelligence, artificial or natural, as complete or incomplete. Calling it complete would mean imposing limits on what it is capable of. The Turing test only tests whether a machine's intelligence is similar to a human's, so deciding whether that intelligence is complete would have to rest on the completeness of our own intelligence. Humans learn new things every day, so any algorithm that judged an AI for its completeness would have to run forever, and its results would vary at every moment of the existence of natural intelligence.