It’s a slog, but Bostrom is worth a read to get a handle on the AI you’re talking about. I like Joscha Bach’s work too. He’s a great communicator and thinker.
The moral issue is a big deal, and a lot of thinking is going into it.
Bostrom devotes a chapter (13; it’s readable on its own) to choosing the criteria for choosing, arguing that the outcome should be anchored in deep human values (whose?), and asks: How can we get a superintelligence to do what we want? What do we want the superintelligence to want?
He’s an academic, so he sets up academic arguments rather than telling a story, but he doesn’t do that weird AI-field thing of turning everything into mathematical notation, so it’s perfectly readable.
I suspect we’re far more likely to be downed by a missing semicolon or errant squiggly bracket than by embedded or learned moral ambiguity, though, which is a clear reason why such syntax is terrible. And, of course, morals change over time. Bostrom memorably mentions the acceptability of cat-burning in 16th-century Paris.
As an aside: You mention humans inventing language. Later came writing, then the printing press, then the internet. These look like the four biggies, for now. I hadn’t understood until recently that the printing press was controlled by the church (them again), so Gutenberg didn’t realise its potential during his lifetime. It wasn’t until Martin Luther nailed his theses to the Wittenberg church door and had small, cheap pamphlets produced that its use exploded. Martin Luther: entrepreneurial disrupter of tech and instigator of the Reformation. The Googs, FBs, and Amzs are mere dabblers in the game.
Coincidentally, it’s the quincentennial of Martin Luther’s hammering tomorrow, 31 Oct. Chapeau! (John Naughton wrote an interesting article about this at the weekend.)