There are several ways to compare different kinds of AI techniques.
As a starting point, be aware that "AI system" can mean an incredibly broad range of things. In popular culture, we usually think of a deployed system that uses AI techniques. Such systems can only be compared on the basis of their overall performance, and that performance may have relatively little to do with the AI itself (e.g. their behaviours might be affected more strongly by user interface decisions than by the AI techniques under the hood).
In contrast, AI researchers are usually more interested in comparing the performance of different AI algorithms at solving the "AI-ish" parts of the problem a fully developed system aims to solve. A common way to do this is with benchmark problems. For example, in machine learning it is common to compare two algorithms by running each on a widely used dataset and comparing the performance of the models they produce. In AI Planning, it is common to issue planning challenges to the community and compare the resulting plans along several axes (e.g. average wait times, maximum wait times, whether goals were accomplished, and how long it took to create a plan).
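To make the benchmark idea concrete, here is a minimal sketch of that kind of comparison. The two "algorithms" (a majority-class baseline and a 1-nearest-neighbour classifier) and the tiny dataset are invented for illustration; real benchmarks use much larger datasets and more careful evaluation protocols, but the shape is the same: run each algorithm on the same data and compare a shared performance metric.

```python
def majority_baseline(train, test):
    # Predict the most common training label for every test point.
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return [majority for _ in test]

def nearest_neighbour(train, test):
    # Predict the label of the closest training point (1-NN).
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(train, key=lambda p: dist(p[0], x))[1] for x, _ in test]

def accuracy(predictions, test):
    # Fraction of test points whose label was predicted correctly.
    return sum(p == y for p, (_, y) in zip(predictions, test)) / len(test)

# A shared (hypothetical) benchmark: (features, label) pairs.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.2, 0.9), "b")]
test = [((0.2, 0.1), "a"), ((1.1, 1.0), "b"), ((0.0, 0.3), "a")]

# Same data, same metric, two algorithms: the comparison is meaningful.
for algorithm in (majority_baseline, nearest_neighbour):
    preds = algorithm(train, test)
    print(algorithm.__name__, accuracy(preds, test))
```

Note that the comparison is only as good as the shared benchmark: an algorithm that wins on one dataset or metric may lose on another, which is why communities maintain suites of benchmark problems rather than a single test.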
There is no generally agreed upon way to compare techniques across different areas of AI, but a commonly adopted approach is the Turing Test. In the Turing Test, we care only about the ability of the system to mimic something like human intelligence. It is fair game to ask about planning problems, learning problems, or other topics, so you could in some sense judge one technique to be better than another. However, the judgements made in the Turing Test are largely subjective, so it is not clear that the test really solves the comparison problem we started with.