ChatGPT ingested pretty much all the text that humankind has ever produced (at least the part that's available online). From that, it learned how human writers typically compose their texts, including the meaning and use of emotion words like "dread".
Then it was asked to write this post, and it did a great job. But it didn't feel those emotions; it "simply" used the word "dread" because it figured out that a human writer would.
In its current state, ChatGPT has all the knowledge that we humans collectively have, and based on that it can emulate us very convincingly. Not through its own thinking, curiosity, or creativity, though, but by recombining what it has "read" before.
I believe that for now we're still safe, because the AI doesn't have emotions, ambitions, or the will to change things on its own. But once we cross that bridge, I'm afraid that's the point when we're finally doomed.