The future of AI and our relationship with it


(Daniel Hollands) #1

I asked this question of a friend of mine, but figured I’d cast a wider net:

In these early years of interaction with AI, such as Alexa, is the language we’re being conditioned to use (such as “Alexa, stop” when you’re finished with her, rather than, say, “Alexa, thank you”) going to have an effect on how we treat future AI and our relationship to them (master/slave vs. colleagues), and on how they’re going to treat and react to us?

What do you think?


(Stuart Langridge) #2

It could do. From a post I wrote on this a few months back, “The UX of Text - About bots, and how they shouldn't pretend to be humans”:

This is all to do with how we want these things to present themselves. I did a talk on this whole area at Fusion (video and slides are here and sorry for just linking to my own work, but I’ve done quite a bit of thinking on this) and in it those questions are asked: How do we help people understand that what they’ve said wasn’t understood? How do we project the idea that you’re talking to a bot but that it understands you? We don’t want to pretend it’s a person, but we do want to suggest you talk to it LIKE a person.

It’s not clear we’ll ever get to something which is indistinguishable from a human in conversation; we certainly aren’t there now. The skill is not in making your thing (chatbot, home assistant, whatever) pretend to be a human, because it will fail. The skill is in having people understand that it’s a robot but respond well to it in full awareness of that; that the intuitive way to speak to it works every time.

The language around these things is very much that of “assistant”; people aren’t buying a friend, they’re buying an uncomplaining butler. That’s not, I think, a bad thing, although there are worlds of possibility open to things that change and adapt that relationship; maybe one could subtly change the relationship by having the device be polite back to you. And, as I say, I felt genuinely a tiny bit guilty for snapping at Alexa.

I don’t think it has to be constituted as a relationship of equals, though; back in the day when there really were nobles, there was the sense of noblesse oblige, where you really were supposed to be polite to your servants, and if you weren’t then society looked down on you. That feels more like what we might want to aim for, to me?
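To make the fallback idea concrete, here’s a minimal sketch in JavaScript. Everything in it is hypothetical (the classify function, the threshold, the canned replies; none of it is from the talk or the post); it just shows the shape of a handler that admits non-understanding as a bot, and teaches you what phrasing works, rather than bluffing like a human:

```javascript
// A toy sketch (all names hypothetical): when the bot doesn't understand,
// it says so as a bot and suggests phrasing that works, instead of
// bluffing like a fake human.

// Stand-in for a real NLU classifier: crude keyword matching only.
function classify(utterance) {
  if (/\btimer\b/i.test(utterance)) return { name: "set_timer", confidence: 0.9 };
  if (/\bweather\b/i.test(utterance)) return { name: "weather", confidence: 0.8 };
  return { name: "unknown", confidence: 0.1 };
}

const CONFIDENCE_THRESHOLD = 0.6; // arbitrary for the sketch

function respond(utterance) {
  const intent = classify(utterance);

  if (intent.confidence < CONFIDENCE_THRESHOLD) {
    // Project "I'm a robot, and here's how to talk to me", rather than
    // pretending to be a person who didn't quite hear you.
    return 'Sorry, I didn\'t understand. Try something like "set a timer for five minutes".';
  }

  switch (intent.name) {
    case "set_timer": return "OK, timer set.";
    case "weather":   return "It looks like rain later.";
    default:          return "I can't do that yet.";
  }
}

console.log(respond("erm, do the thing with the, um, thing"));
// -> Sorry, I didn't understand. Try something like "set a timer for five minutes".
```

The design point is the fallback branch: it signals “you’re talking to a robot” while still making the user’s next attempt more likely to succeed.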


(Marc Cooper) #3

I say please and thank you to Siri. Sounds mad when I write it down.

There are a lot of cultural issues here, though, and a danger in viewing this through an English (language) and/or British cultural lens.

For example, some languages/cultures have no concept of please and thank you. In Nepal, they appropriated dhanyabaad for thank you (I presume because the Brits couldn’t help themselves). The concept of thanks is built into the verb usage, so it’s redundant to natives.

In Spain, the Spanish will generally tolerate your por favors, but soon enough someone will reply with no favor. You might even get a lecture on why simply asking is sufficient. Indeed, even gracias will mostly be met with de nada.

The point @sil makes about noblesse oblige is mirrored in modern Japan, where honorifics are part of the language and pitch is used as a social indicator. It’s very weird to hear a female friend suddenly go into squeaky mode (as I call it) when speaking to a perceived male social superior.

Worldwide, there are seemingly uncountable cues, and they seem infinitely complex to me. And there will inevitably be the biases of those developing the AI.

Tough problem. Won’t be solved with JavaScript. With or without Docker :smiley:


(Andy Wootton) #4

I’m getting very worried about AI. My concern is that I think it may be much easier than we have hitherto suspected. We thought we had to teach the AI, and we knew we didn’t understand ourselves, so it would take ages. It’s quickly becoming obvious that the AI will be perfectly capable of watching, learning and asking questions, like children do, but it may only need to ask anything once, so we need to sort out what morals we’re going to teach it. I think we need it to be far kinder than we are, when it is running things.

I’m losing faith that we will be passing anything on to our grandchildren. I think ‘human’ culture may be transferred into the trust of higher beings. Once humans got a head start on the other animals by inventing language, we started transferring our knowledge into shared culture, and I’m not convinced that culture still needs anything as slow to adapt to change as DNA holding it back. We should decide how badly humans are allowed to teach each other, before we set it a really bad example. I don’t think we’re ready.


(Marc Cooper) #5

It’s a slog, but Bostrom is worth a read to get a handle on the AI you’re talking about. I like Joscha Bach’s work too. He’s a great communicator and thinker.

The moral issue is a big deal, and a lot of thinking is going into it.

Bostrom has a chapter (13, “Choosing the criteria for choosing”; it’s readable on its own) in which he attaches a criterion of anchoring the outcome in deep human values (whose?) and asks: How can we get a superintelligence to do what we want? What do we want the superintelligence to want?

He’s an academic, so he sets up academic arguments — rather than tell a story — but he doesn’t do that weird AI field thing of turning stuff into mathematical notation, so it’s perfectly readable.

I suspect we’re far more likely to be downed by a missing semicolon or errant squiggly bracket than embedded or learned moral ambiguity, though — a clear reason why such syntax is terrible :slight_smile: And, of course, morals change over time. Bostrom memorably mentions the acceptability of cat-burning in 16th century Paris.
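To make the semicolon point concrete: JavaScript’s automatic semicolon insertion really will do this to you. A toy illustration (nothing to do with Bostrom, everything to do with squiggly brackets):

```javascript
// Automatic semicolon insertion in action: the parser inserts a semicolon
// after the bare "return", so the object literal below is never returned.
function brokenConfig() {
  return
  { safe: true };   // never runs: parsed as a label and an expression, not a return value
}

function workingConfig() {
  return { safe: true };
}

console.log(brokenConfig());  // undefined (not what was intended)
console.log(workingConfig()); // { safe: true }
```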

As an aside: you mention humans inventing language. Later came writing, then the printing press, then the internet. These look like the four biggies, for now. I hadn’t understood until recently that the printing press was controlled by the church (them again), so Gutenberg didn’t realise its potential during his lifetime. It wasn’t until Martin Luther nailed his theses to the Wittenberg church door and had small, cheap pamphlets produced that its use exploded. Martin Luther: entrepreneurial disrupter of tech and instigator of the Reformation. The Googs, FBs, and Amzs are mere dabblers in the game.

Coincidentally, it’s the quincentennial of Martin Luther’s hammering tomorrow, 31 Oct. Chapeau! (John Naughton wrote an interesting article about this at the weekend.)


(Stuart Langridge) #6

Before going as far as Bostrom and primary sources, a good summary is https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html (and the subsequent part 2), and I recommend reading it. Then drill down further; Urban does tell stories, and it’s a lot easier to grasp at a high level :slight_smile:


(Andy Wootton) #7

My AI lecturer at Aston University suggested that it was religion that killed the first wave of AI in the UK. The man who wrote the government report on science funding didn’t like that scientists were trying to ‘play God’, so claimed they weren’t achieving anything. He seemed bitter enough about it to suggest he’d been directly impacted.

“The AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists.”

Overview:

- 1973: the large decrease in AI research in the United Kingdom, in response to the Lighthill report
- 1973–74: DARPA’s cutbacks to academic AI research in general


(Daniel Hollands) #8

Here are Cos’ thoughts:

https://twitter.com/CosRyan/status/923957055739875329


(Andy Wootton) #10

I think attitudes to slaves are quite informative. They’re not REALLY human, are they? They’re property.