In addition to Jaden's excellent answer ("no one is trying to actually make a 'conscious' AI because we don't know what that word means yet"), I'd like to add that the word "yet" there is highly optimistic.
It's highly problematic, and likely impossible, to distinguish between a conscious being and a being that behaves exactly as if it were conscious. Philosophers have been struggling with this for centuries; some even espoused solipsism, which is essentially an "I live in the Matrix" philosophy. In particular, how can you tell whether your childhood friend, your spouse, or anybody else is a conscious being rather than an embodiment of AI that acts exactly as a conscious being would?
It's possible, of course, to take the "if it walks like a duck and quacks like a duck, then it's a duck" approach. In that case, an AI that passes the Turing Test would automatically be considered conscious. However, most people wouldn't accept the duck criterion of consciousness; otherwise they would very soon have to call their Alexa-operated household appliances conscious.
My two cents are basically the same as Jaden's, except that I'm more pessimistic about our ever understanding what consciousness is.