In October 2014, Dr. Mark Riedl published an approach to testing AI intelligence called the "Lovelace Test 2.0", inspired by the original Lovelace Test (published in 2001 by Bringsjord, Bello, and Ferrucci). Riedl believed that the original Lovelace Test would be impossible to pass, and therefore suggested a weaker, more practical version.
The Lovelace Test 2.0 assumes that for an AI to be intelligent, it must exhibit creativity. From the paper itself:
The Lovelace 2.0 Test is as follows: artificial agent $a$ is challenged as follows:
$a$ must create an artifact $o$ of type $t$;
$o$ must conform to a set of constraints $C$ where $c_i \in C$ is any criterion expressible in natural language;
a human evaluator $h$, having chosen $t$ and $C$, is satisfied that $o$ is a valid instance of $t$ and meets $C$; and
a human referee $r$ determines the combination of $t$ and $C$ to not be unrealistic for an average human.
Since a human evaluator could come up with constraints that are easy for an AI to beat, the evaluator is expected to keep proposing more and more complex constraints until the AI fails. The point of the Lovelace Test 2.0 is to compare the creativity of different AIs, not to draw a definite dividing line between 'intelligence' and 'non-intelligence' the way the Turing Test would.
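To make this concrete, here is how I picture the procedure as a minimal Python sketch. All of the names here (`lovelace_2_score`, `evaluator_accepts`, `referee_realistic`, and the constraint schedule) are my own invention for illustration, not from Riedl's paper:

```python
from typing import Callable, List

def lovelace_2_score(
    agent: Callable[[str, List[str]], object],                    # a: produces artifact o of type t under constraints C
    evaluator_accepts: Callable[[object, str, List[str]], bool],  # h: judges whether o is a valid t meeting C
    referee_realistic: Callable[[str, List[str]], bool],          # r: checks (t, C) is not unrealistic for an average human
    artifact_type: str,
    constraint_schedule: List[List[str]],                         # increasingly demanding constraint sets
) -> int:
    """Return how many constraint sets the agent satisfied before failing."""
    passed = 0
    for constraints in constraint_schedule:
        if not referee_realistic(artifact_type, constraints):
            continue  # skip challenges no average human could meet
        artifact = agent(artifact_type, constraints)
        if not evaluator_accepts(artifact, artifact_type, constraints):
            break  # the agent has hit its creative ceiling
        passed += 1
    return passed

if __name__ == "__main__":
    # Toy demo: an "agent" that can only satisfy up to two constraints at once.
    toy_agent = lambda t, C: f"a {t} meeting {len(C)} constraints" if len(C) <= 2 else None
    accepts = lambda o, t, C: o is not None
    realistic = lambda t, C: True
    schedule = [
        ["rhymes"],
        ["rhymes", "about winter"],
        ["rhymes", "about winter", "in iambic pentameter"],
    ]
    print(lovelace_2_score(toy_agent, accepts, realistic, "poem", schedule))  # -> 2
```

Running two agents against the same constraint schedule would then give the relative creativity ranking the test is after, rather than a pass/fail verdict.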
However, I am curious whether this test has actually been used in an academic setting, or whether it is so far only a thought experiment. The Lovelace Test seems easy to apply in academic settings (you only need to develop measurable constraints to test the artificial agent against), but it may also be too subjective (humans can disagree about the merits of particular constraints, and about whether a creative artifact produced by an AI actually satisfies them).