I highly suggest you read Hannu Rajaniemi's Quantum Thief trilogy, William Gibson's Neuromancer trilogy, Philip K. Dick's Do Androids Dream of Electric Sheep?, and Isaac Asimov's I, Robot.
It's a nuanced subject, and no one knows.
The most credible scenarios I've found for the destruction of humanity by automata involve not high intelligence but very low intelligence (the grey goo scenario). Dick, a gifted philosopher in his own right, proposes that empathy is a natural function of high intelligence, which provides a credible, increasingly validated ray of light.
But Hawking and Musk are credible imo, given their comfort with very large numbers (complexity) and their proven world-class capacity for abstract thinking. It's possible that strongly "hyperpartisan" AI could run amok. We've already seen problems with high-frequency trading algorithms, and automation is advancing in weapons technology.
One might say we are cursed to live in such interesting times.