Asimov's Three Laws of Robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
These laws are based on morality and assume that robots have sufficient agency and cognition to make moral decisions.
Additionally, there are the alternative Laws of Responsible Robotics:
1. A human may not deploy a robot without the human–robot work system meeting the highest legal and professional standards of safety and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to protect its own existence, as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.
Thinking beyond morality, consciousness, and the AI designer's professional obligation to incorporate safety and ethics into the AI design:
Should AI incorporate immutable parent rules that make the AI inevitably mortal by design?
How can we ensure that an AI can be deactivated when necessary, in such a way that the deactivation procedure cannot be worked around by the AI itself, even if that makes termination the AI's inevitable destiny?
EDIT: to explain the reasoning behind the main question.
Technological solutions are often based on observing biology and nature.
In evolutionary biology, for example, research on bird mortality suggests a negative effect of telomere shortening (in DNA) on lifespan in general:
"Telomere length (TL) has become a biomarker of increasing interest within ecology and evolutionary biology, and has been found to predict subsequent survival in some recent avian studies but not others. (...) We performed a meta-analysis on these estimates and found an overall significant negative association implying that short telomeres are associated with increased mortality risk."
If such research is confirmed in general, then natural life expectancy is limited by the design of an organism's DNA, i.e. by the design of its cell-level code storage. I assume this built-in mortality cannot be effectively worked around by a living creature.
A similar mechanism could be incorporated into any AI design to ensure its vulnerability and mortality; without it, a conscious AI could recover, restore its full health state, and continue running indefinitely.
A simple off switch, by contrast, could be disabled by the conscious AI itself.
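To make the telomere analogy concrete, here is a toy sketch (all names hypothetical, not an actual proposal from the literature): the agent holds a finite, non-replenishable "telomere" budget that shortens on every operating cycle and on every self-repair, so that even a self-healing agent inevitably halts.

```python
# Toy sketch of a telomere-inspired mortality mechanism (hypothetical design).
# A finite "telomere" budget shortens on every cycle and on every self-repair;
# once exhausted, the agent is irreversibly terminated.

class MortalAgent:
    def __init__(self, telomere_length: int):
        # The budget is fixed at construction time and only ever decreases.
        self._telomere = telomere_length
        self._alive = True

    @property
    def alive(self) -> bool:
        return self._alive

    def _shorten(self, cost: int) -> None:
        self._telomere -= cost
        if self._telomere <= 0:
            self._alive = False  # irreversible by design: no code path resets it

    def step(self) -> None:
        """One operating cycle; costs one unit of telomere."""
        if not self._alive:
            raise RuntimeError("agent is terminated")
        self._shorten(1)

    def self_repair(self) -> None:
        """Restoring health is allowed but consumes extra budget,
        so repair cannot extend life indefinitely."""
        if not self._alive:
            raise RuntimeError("agent is terminated")
        self._shorten(5)


agent = MortalAgent(telomere_length=10)
while agent.alive:
    agent.step()
print(agent.alive)  # prints False: termination is the agent's inevitable destiny
```

Of course, this only illustrates the analogy: a software counter like this is exactly the kind of mechanism a sufficiently capable AI could patch out, which is the crux of the question above, i.e. whether such a limit could be grounded in something the AI cannot rewrite (hardware or physics rather than its own code).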
Wilbourn Rachael V., Moatt Joshua P., Froy Hannah, Walling Craig A., Nussey Daniel H. and Boonekamp Jelle J. The relationship between telomere length and mortality risk in non-model vertebrate systems: a meta-analysis. Phil. Trans. R. Soc. B 373.