Sign up for the Premier Unbelievable? newsletter and be the first to see new episodes a whole week before they release! Plus you’ll also gain access to our bonus content archive packed with exclusive content and show updates.
This includes the full interview of Prof Nigel Crook and “Nao” the robot, and the 100+ page ebook edition of Lord Martin Rees & Dr John Wyatt’s Big Conversation about Robotics and Transhumanism.
About this episode:
The Big Conversation – Episode 5 | Season 5
In this special two-parter, we explore “The Robot Race”. Are developments in Artificial Intelligence spiralling out of control? Can we slow the development of large scale AI even if we wanted to? And what can we learn about ourselves – the human race – in light of AI’s rapidly expanding capabilities?
Here, in part one, we confront the ominous existential question: could AI ever replace humanity? The implications for the future of humankind posed by developments such as ChatGPT, robotics, and data sharing are discussed by Nigel Crook, Professor in Artificial Intelligence & Robotics at Oxford Brookes University and author of “Rise of the Moral Machine: Exploring Virtue through a Robot’s Eyes”, and Anil Seth, Professor of Cognitive & Computational Neuroscience at the University of Sussex, winner of The Michael Faraday Prize and Lecture 2023, and author of “Being You: A New Science of Consciousness”.
Take our survey! https://survey-star.co.uk/robotrace1
More from this season:
- Episode 1: Did Jesus of Nazareth rise from the dead?
- Episode 2: Christianity, the Sexual Revolution and the future of the West
- Episode 3: Can Science and Religion Tell us What it Means to be Human?
- Episode 4: Do Consciousness and Near Death Experiences Point to an Afterlife?
- Episode 6: The Robot Race, Part II: How should humanity flourish in an AI world?
- Episode 7: Is Religion Good or Bad for Society?
Audio Transcript for The Big Conversation (Season 5: The Robot Race, Part One: Could AI Ever Replace Humanity?)
Andy Kind (AK), Nigel Crook (NC) & Anil Seth (AS)
AK: Ladies and Gentlemen, welcome back to another captivating episode of The Big Conversation from Premier Unbelievable, proudly presented in partnership with the John Templeton Foundation. I am your host, Andy Kind.
Today my guests are Nigel Crook and Anil Seth. Now, guys, I’ve got something to read to you, I hope you will be impressed by this, we’ll have a talk about it in just a moment. Here is an introduction not written by myself. ‘Today, we embark on a thrilling journey into the realm of artificial intelligence. With the topic of our discussion: The Robot Race, Part One: Could AI Ever Replace Humanity? Join us as we explore the frontiers of AI and its potential impact on our very existence.
Our distinguished guests for this thought-provoking dialogue are two eminent minds in their fields.’ Do you agree with that so far? ‘First we have Nigel Crook, a trailblazing figure renowned for his pioneering work in the field of AI and robotics. Nigel has graced The Big Conversation before alongside his remarkable robot friend, Nao. To watch their captivating full interview simply register at thebigconversation.show. Nigel is not only an esteemed professor of AI but he is also the author of the ground-breaking book, ‘Rise of the Moral Machines’. Within its pages, Nigel explores the profound ethical implications of AI and its capacity to revolutionise the way we perceive and interact with machines. ‘Rise of the Moral Machines’ delves into the crucial question of how we can ensure AI aligns with our values and not just our algorithms.
Joining Nigel in this intellectual odyssey is the brilliant Anil Seth, an internationally acclaimed neuroscientist known for his ground-breaking research on consciousness. Anil’s latest magnum opus, ‘Being You: A New Science of Consciousness’, takes readers on an enlightening exploration of the self and the mysterious nature of consciousness. In ‘Being You’, Anil delves into the depths of neuroscience, philosophy, and psychology to unravel the enigma of consciousness shedding light on what makes us uniquely human.
Together, Nigel Crook and Anil Seth will shed light today on the profound questions surrounding AI and its potential to replace humanity. With Nigel’s expertise in AI ethics and Anil’s profound understanding of consciousness, this episode promises to be an enthralling enquiry into the future of human-machine interactions. And here’s a fascinating titbit for our astute audience; the whole introduction you just heard, including these very lines have been written by ChatGPT, an incredible AI language model. Isn’t it remarkable how technology continues to shape and influence our world? So, dear listeners, fasten your intellectual seatbelts as we venture into ‘The Robot Race Part One: Could AI Ever Replace Humanity’? with our exceptional guests, Nigel Crook and Anil Seth. Let the conversation begin’.
I think that’s enough of that. So, first of all, how accurate was that, Anil? Are you an expert in consciousness?
AS: I think there was a lot of ground-breaking, wasn’t there? Everything was ground-breaking, which is interesting. And also, did it really call itself incredible? It called itself an incredible language model?
AK: It did not display modesty or humility.
AS: So it’s broadly, I mean broadly accurate in terms of what I do without all the value judgements laden on top of it. But, I mean, that’s what these things are good at; they are good at generating roughly plausible things.
AK: They are great at nouns and facts, aren’t they, not so much with the adverbs and adjectives.
AS: Well, I’m not sure they’re great at facts either. I think they are pretty terrible at facts. They are great at generating stuff that has the sheen of plausibility. I mean, people often say they hallucinate. I prefer actually to say they confabulate; they make stuff up to fill in gaps that they don’t even know are gaps. So we have to… to understand really what they are doing you’ve got to dig a little bit into how they work which I’m sure we’ll do later. But I think that’s just enough to show you that they can quite easily suck us in to believing there is a mind behind what they say.
AK: Nigel, how about you?
NC: I was amazed at how much factual information was in it actually! So I think… I agree with Anil that there is a lot of hype and overtones in it. The thing that impresses me always with these things is the fluency of the text which is what it’s designed to do. And we’ll get into it later to understand how it’s generating what it’s generating but that, to me, is the impressive stuff. Not actually what it says but the fluency and the fact that we understand it and that it’s saying something that is understandable.
AK: Absolutely. It strikes me, it’s a bit like when you speak to… I remember when – my degree is in French – I remember when I was in France for a year, by the end of the year I was fluent in French but it was obvious to anyone who was French that I wasn’t French. I had the idioms, I had the language, I even had a bit of the local dialect, but it was obvious to a real native French speaker – maybe not someone from another country – that I wasn’t French. Is it similar to that?
AS: I think it’s almost totally the opposite. When you learnt the amount of French that you did, I’m sure you understood something about what you were saying. And, yes, you were easily distinguishable from a native French speaker because you’d only learnt a relatively small amount. But I think things like ChatGPT, and the language models that are just around the corner, if not here already, will be almost indistinguishable from native speakers. I think that is, actually, one of the main worries about them. They will get so good that they will be very difficult, if not impossible, to trip up. Yet, unlike you, they will understand nothing about what they are saying, and I think that’s one of the main risks and dangers that these systems pose.
AK: Fantastic. Well, we’ve got so much to talk about, and it’s really fascinating, and I can’t be the only person who is slightly disconcerted about this. On Premier Unbelievable there are a lot of conversations and debates around origins; origins of the universe, origins of morality, and a lot of stuff about the present tense. But this is different, isn’t it? This is tomorrow’s world stuff, it’s future-casting. So I am slightly disconcerted, and I’m hoping that you guys are going to be able to reassure us about the future. I think, Nigel, you will try to. Anil, I’m not convinced you are going to try and reassure me about what’s to come, but we’ll see!
AS: I’m not generally reassuring.
AK: We’ll see how we get on. So, the topic of this conversation is, ‘Could AI ever Replace Humanity’? First of all, I want to get your origin story, really, about how you came to this place. But Nigel, is this a debate that wouldn’t have been feasible 20 years ago? 20 years ago, if you’d been invited to come on something like this, would you have thrown it out as lunacy or as ludicrous?
NC: No, I think I would have taken it seriously. I think even then we could see the direction of travel. I think the issue for many of us, even working in the field, is that this last development, the generative AI, that we are seeing coming out now has just accelerated and been adopted at such a rapid rate that it’s thrown up lots of ethical issues that we have not really got to grips with. We could have done with preparing for this 20 years ago, it would have been good if we’d sat down 20 years ago to do exactly that.
AK: Anil, what do you think about that?
AS: My PhD was in AI and it was about 20 years ago; 2001, I think, was when I finished. At that point it was almost a low point in AI, which is why I didn’t become a billionaire – it’s one of the reasons. We were working on different aspects of AI and, yes, I agree with Nigel that there has always been this idea, mainly fuelled by science fiction films, that there is a threat here, there is a danger. But at the time the practical capabilities of these systems were so impoverished that it didn’t seem to be a realistic threat that anyone had to pay attention to.
And it has been astonishing what’s happened over the last even 12 months, I think. There has been progress, progress that’s taken people within the field by surprise. And certainly progress that has exploded into the public awareness. So, yes, I think we could have had the discussion 20 years ago but I doubt many people would have listened to it.
AK: Whereas now it’s very much the topic on the table.
And so one of the things I want to do over the course of this conversation is to work out who we listen to as well. Who are the fear-mongers and who are the genuine prophets? Just a couple of quotes. Elon Musk – I don’t know if either of you have heard of him – said that the chances of something incredibly dangerous happening are in the 5-year timeframe, maximum 10-year timeframe. And then Stephen Hawking told the BBC that “the development of full AI could spell the end of the human race as AI redesigns itself at an ever-increasing and exponential rate. Humans, who by comparison are slow to evolve, couldn’t compete and would be superseded”. Now, where is the prophetic voice in that, Nigel, and where is the fear-mongering?
NC: There is a solid mix of both, I would say, in that. There are risks, in fact, AI is already causing harm, we know that. We can see how it’s exhibiting bias, unwanted bias. That it’s not transparent, it appears to be making decisions on behalf of people without the ability to question those decisions or understand how those decisions were formed. So that’s already happening.
But I think in terms of the destruction of humanity in, what was it, five years did you say? I think we would have to be monumentally stupid to put ourselves into a position where AI had that much power over us and over society. It could happen, we could do it, but to me that’s not the big worry. The big worry is that the technology is being rolled out at such a fast rate, because of the commercial drivers behind it, that it is being adopted by people who don’t understand what the technology does, how it functions, and how it does what it does when they interact with it, and therefore they come to it with the wrong understanding of what it’s doing. ChatGPT is a perfect example of that. People think that it is trying to communicate with them, that it’s being empathetic, and that it’s engaging in an intentional conversation, but it’s doing no such thing. It’s not doing any of those things. So that, to me, is the biggest danger; it’s a lack of understanding and a lack of education for the people that use it. And ChatGPT was released without really any thought about the ethics of making it available to anybody who has an internet connection and a device that could enable them to use it.
AK: So, you’re a Christian, Nigel. Is it fear-mongering, is it too much to say that ChatGPT and AI are Satan masquerading as an angel of light?
NC: No, no, I don’t think that at all. I think that it does lead us to ask questions of ourselves as humans; what does it mean to be human? And I think it does so in a way that we’ve never had to think about before. I think before it’s been the domain of philosophers and theologians mainly to think about that and psychologists and the like but now everybody is interested in this concept because they can just use this technology and it appears to be human. It is behaving like a human being in a very convincing way so then you start to think, well, is that me, is that what I’m made of, is it that kind of thing? I’m generating these words, am I generating them like ChatGPT?
AK: That debate around identity and consciousness, of which of course you’re an expert, according to ChatGPT, Anil. But going back to the two quotes from Elon Musk and Stephen Hawking, because it does seem that it’s the rate at which things are accelerating which is causing people to worry that Skynet’s about to go online and that we are in a Terminator 2 situation.
AS: Things have changed quickly, things are changing quite quickly. And there is often this idea of exponential growth. We learnt about exponential curves during the recent pandemic. Very hard to get a grip on psychologically because when things are changing with that kind of dynamic then wherever you are on the curve it looks impossibly steep in front of you and basically flat behind. And that’s true wherever you are. So there’s… it’s very hard to locate and orient ourselves.
In terms of the risks, the quotes are interesting partly because of where they come from. They come, especially with Elon Musk, from the heart of Silicon Valley, from the people developing these kinds of things. And I think it’s really important to emphasise something Nigel was saying, which is that there are many types of risk here. There are the very dramatic, science-fiction-driven existential threats to humanity. Will they take over? Will they bootstrap themselves beyond our understanding and control and turn the whole world into a vast mound of paperclips, or into something like that, something that will really mark the end of humanity as we know it?
Now, I think we shouldn’t discount these big existential worries entirely, because it’s a powerful technology. If we put ChatGPT, let’s say, in control of, I don’t know, nuclear weapons – why would you do that, don’t do that – that could go very badly wrong, right? But we can avoid doing that reasonably easily. So there are existential concerns, and we should pay a little bit of attention to those. But there are also clear and present dangers, and I think one of the risks we face as a society is being distracted to some extent by these very narrative-heavy, science-fictiony, massive threats, so that we don’t pay sufficient attention to what AI is doing to society in the here and now. Which is both good and bad; there are extraordinary opportunities for increasing the overall global wellbeing of people and the planet, but there are also problems too.
I think it probably pays to just rewind a little bit and clarify what we are talking about when we talk about AI, because you mentioned already that there’s a particular kind of public perception which is probably not doing us any favours when we try to get clarity about what the threats and opportunities are. There’s this sort of public idea of AI that, again, comes largely from the movies, either the ‘Terminator’ or maybe ‘HAL 9000’ in ‘2001’: something that is extremely intelligent, extremely able. (AK: WALL-E would be another one. Disney Pixar. I got ChatGPT to pronounce it for me). There are lots… and yes, that’s probably one of the most appealing. But it does suggest a kind of AI that’s not really the AI we are dealing with in the here and now. The AI that’s called AI… it’s given this term AI, and that’s kind of part of the problem, because, is it really? No, it’s really just applied statistics, but if you call it that then no one pays any attention at all; it sounds very boring. But what these systems are very good at are things like recognising patterns, and increasingly generating patterns. This can be in language, it can be in images. That’s really the core business of AI as it is at the moment. But you can couple these things to systems that make decisions. Maybe make decisions about: is this person a risk? Does this person get the insurance policy they have applied for? How can we optimise certain things? But this is quite far from the science fiction idea of what AI is. And then there’s the whole robotics side which you work on – which I hope you don’t mind me saying, because I started in robotics as well – it hasn’t really kept up with the rapid acceleration in these disembodied AIs like language models and so on.
NC: That’s true, although I think that some of the recent work on humanoid robots has accelerated, to my surprise. We have a robot back in my lab called Artie. He’s a 6-foot humanoid robot with screens for eyes and a mechanical mouth, very obviously not a human. But the company that developed it, ‘Engineered Arts’ – it’s a UK company – their next version on from that, the Ameca robot, is incredibly realistic in human facial expression and bodily gestures, with very, very sophisticated robotic control systems to enable that to happen at all. And that really has taken me by surprise: in the 7 years since we bought that robot from them they have come on quite a way. So it’s not the same as AI, I don’t think, but I do see an acceleration towards more human-like robots, such that at some point you could say, well, for a period of time I might actually be fooled into thinking that is a person and not a robot.
AK: So at the moment the phrase ‘artificial intelligence’ is a bit of a misnomer because we’re talking about predictions and statistics, a bit like a computer game in a sense. But are you suggesting, Nigel, that the reality is catching up with the name artificial intelligence?
NC: I still think it’s not a great name but it’s too late to change it now! I think Anil is absolutely right, this is mainly statistics with some sort of biological inspiration behind it but very simplified biology. And I think that we’ve gone a long way on that but it’s enabled us to develop hugely complex models and train them. So GPT, for example, is a massive model trained on the whole of the text of the internet plus more. But trained using huge computer farms, lots of energy over several months costing millions of dollars to produce.
So, I mean, we are using models that are very simplistic copies of the brain. So, neurons – we call them neurons in artificial intelligence – do this learning all the time. They learn, for example, to produce sequences of meaningful words, but they are nothing like biological neurons really. They are more like switches on lights, on/off, or faders on lights, whereas biological neurons are capable of a much richer range of behaviours. And if we start to tap into that, I think that will be the next development in AI: being able to learn more of the richness of how the brain works and how it processes information, and to adapt those into algorithms that could be much more efficient in producing these kinds of intelligent responses to situations.
AK: Right, so, Anil – you talked about benefits and downsides – is one of the benefits of AI, and I think I’ve heard you say this in one of your interviews, to use it as a sort of reflective lens on the wonder of being human?
AS: Yeah, I think so. There are many benefits. There are many more immediate practical benefits of AI methods. AI is being used to optimise new drug design in pharmaceuticals, and that’s been a huge boon, probably revolutionary for AI-driven biology. It can optimise many things; for instance, there are possibly new approaches to energy efficiency that AI can help with.
But, yeah, there is this very interesting academic perspective on it. You mentioned this earlier: it’s almost as if technology – artificial intelligence and language models being the breaking wave of it right now – holds up a mirror to us as human beings. And this can be a mirror in many different senses. We can use AI models as ways of understanding how brains work. This has been done for a long time anyway; it’s part of our bread and butter, as we build models of the brain to try to figure out how it works, so AI methods can help us with that. But then, when it comes to something like language models, there’s a deeper, more conceptual challenge and opportunity here, which is, as you said: if ChatGPT can converse with fluency, even if it confabulates, makes stuff up all the time, what is it that we are doing? Are we doing the same thing? Are we doing something different? And if we think we’re doing something different, what is the difference that makes the difference?
And I think this is really instructive because, even with the other successes in AI – people have maybe already forgotten about the successes, already two or three years ago, in playing games like ‘Go’, which were incredibly impressive at the time – it’s the ability to converse that has made people see themselves within these algorithms. And that, I think, is something really quite new. It’s also probably the first step to what some people call a general AI, and I think this is a useful distinction to have on the table to orient us about where we are with AI now.
So there’s this distinction: on the one hand you’ve got so-called ‘narrow artificial intelligence’; these are systems that are really good at a specific thing, whether it’s playing ‘Go’, designing new drugs, figuring out protein folding, whatever it might be. A characteristic of human intelligence is that we are generalists. We are reasonably good at lots of things. Learning to play chess doesn’t prevent us from learning to speak a language; we can do both. The idea of AI reaching this kind of capability of general human intelligence is one of the holy grails of AI. I mean, I’m not entirely sure it’s a good idea to get there; in fact, I think it’s probably not a good idea to get there in a rush, for sure. We’re not at general AI yet, and I think that’s still quite far away. The language models do show some ability to at least talk… language models can talk nonsense about lots of different things rather than just one thing. That’s as close as we’ve got yet.
AK: Well, you guys are not talking nonsense, we are having a fantastic debate. We are going to have a short break, but today’s episode is ‘The Robot Race: Could AI Ever Replace Humanity?’ We’re having a very substantive and, so far, reassuring conversation between Nigel Crook and Anil Seth, and we’ll be back after this short break.
Welcome back to The Big Conversation from Premier Unbelievable in association with the John Templeton Foundation. My guests today are Nigel Crook and Anil Seth and I am your host, Andy Kind, a ChatGPT version of Justin Brierley.
Today’s episode is called ‘The Robot Race, Part One: Could AI Ever Replace Humanity’? And in the first section we had a very interesting and illuminating conversation and you guys unpacked your views on what exactly AI is and what it isn’t. In this section we want to talk about how far it could go, the dangers and capabilities and the possibility of sentience.
So, Nigel, we’ll start with you as an expert in robotics. What are your thoughts on dangers of AI? The real threats further down the line?
NC: Well, yes, so I think the core issue for me is that the more human-like they get, the more power they have in society, the more we will be tempted to give them agency, and the more we will effectively put ourselves, potentially, at risk in doing that. So I do think that there are risks… I still think that we are quite a long way from it, despite the most recent accelerations, but I do think that there is a risk in losing that distinction between humans and AI and robotics, and that does worry me.
AK: And Sam Harris said that you can’t put the genie back in the bottle, Anil. Would you share that view?
AS: Well, that’s evidently true, right? You can never do that. The question is, does it matter? And I think in this case it likely does matter. We mentioned briefly before that with other great technological advances, in other industries, we tend to engage in a certain amount of risk assessment before unleashing a new technology on wider society. We wouldn’t just design a new pharmaceutical drug and put it out there unless there is an extremely good reason – we could argue this happened recently – but in general we do a lot of testing in a constrained environment before we release anything, whether it’s a new drug or a new type of aeroplane or anything. This is patently not happening in machine learning and AI. New systems are just being thrown out there, and as for the problems that they might cause, I think people do worry about them, but there aren’t the systems in place. And I think it’s unwise… it’s really unfair to pillory the tech companies themselves, many of whom want increased regulation, because without it the playing field gets incredibly uneven. But there is a need, I think, for regulation.
And this speaks to two kinds of danger. I think there’s the danger, as you mentioned, of AI that becomes increasingly human-like. This carries its threats in a very specific way, because we humans tend to anthropomorphise; we tend to project human-like qualities into things when they’re not there. There was a reported case in April this year, I think, of a Belgian man who’d been interacting with a chatbot that was a sort of artificial girlfriend, and who ended up committing suicide. That’s a very, very tragic occurrence which speaks to what happens when people psychologically invest in things, attributing beliefs to systems that don’t actually have them. So there are lots of dangers in things seeming intentionally human-like.
There is also a whole other suite of dangers which are the hidden dangers, the invisible dangers, that come from the fact that most AI isn’t human-like and never will be. It’s the algorithms that run on phones or in server farms that make decisions about who gets what job. There’s a lot of bias. We don’t know how these algorithms make the decisions that they do, as you said, they are not transparent. They have a certain opacity.
There are huge problems with misinformation and disinformation. We’ve seen already how bad social media in general can be for the sort of consensus about what is the case on which much of our society depends; be it elections, be it how we deal with a health issue, a health threat to society. There has to be a certain amount of social cohesion, which depends, to some extent, on us all agreeing what is the case. Social media can amplify existing misinformation and disinformation, but what language models can do is generate it and tune it to our psychological vulnerabilities. So these are different kinds of threat. There are the short-term, already-here threats that we might not even see, and then the longer-term threats of systems that become indistinguishable from us in various ways.
NC: I think one of the things that worries me is the way in which these systems will not necessarily develop as humans develop. As we grow from children to adults we are taught the difference between right and wrong; we’re taught how to behave, how to interact with other people, what is acceptable to do and what’s not acceptable to do. And what worries me is that we are developing technologies that don’t really carry that in any real way.
And I think it’s being driven by three different things. One is the desire to create robots with increasing autonomy; in other words, the capacity to make decisions on our behalf. The second is increasing embeddedness in society. These machines, these robots, are joining us in society. They are no longer just in factories building cars; they are with us, they are on our phones, they’re in our schools, they’re in our hospitals, and they are becoming increasingly embedded in society. And, I’ve forgotten what the third one is… human-likeness. Increasing human-likeness is the third one. The example that I give is a lovely robot called ‘Jibo’, which was a flash-in-the-pan hit in 2017/18. It looked like a desk lamp, a chubby desk lamp, with a head that looked round with a face on it, and you could talk to it. It was a bit like Alexa but with a head moving round.
AK: Like one of those Pixar lamps at the beginning of Pixar films? (NC: Exactly like that). Like at the start of WALL-E?
NC: Yes, exactly, again. Important film reference. But this robot could independently take photographs; it would look for opportunities to take photographs. And it could read stories to your children. The promotional video for it showed it doing all these wonderful different things, and the last shot was of it with a little girl in her bedroom, telling her a story. The last frame is a picture of her face outlined in a box, with her name at the bottom. This is an internet-connected device that could easily put a child like that at risk if it published even just that photograph of her and her name. But we don’t even think about that; we think how cute that is and how wonderful it would be to have a robot like that, and I would love to have one that did that. But we haven’t thought through how we equip it with the moral understanding of what is appropriate and not appropriate to do. It’s not appropriate to take a young girl’s photograph in her bedroom on her own and make it available publicly. But we are not equipping these machines with that kind of capacity.
AS: I think that actually picks up… I’m being less and less reassuring as this conversation goes on, I realise. (AK: You can see me shaking!) There’s another, I think, possibly hidden danger, because that one is a real danger that, when it happens, we have an immediate visceral reaction to; it seems wrong. But a related danger with systems like that, which embed themselves in our lives, is the loss of data privacy. Pretty much any system we interact with now hoovers up our data, whether it’s speech data, what we choose, what we buy, where we are, what we eat, how we sleep. And there’s some good to be had from that; we can have personalised medicine, we may get personalised sleep advice, and all these sorts of things can be very good. But the cost is not made transparent to us. It is not revealed to us at all that, in doing this, we are allowing large companies and governments to learn so much about us as to be able to predict what we will do. And when you can predict what somebody does, you can control them.
And I think one of the larger political dangers here is that we become so much more vulnerable to political coercion, to social coercion, to corporate coercion, by virtue of losing data privacy, which we are, frankly, just giving away; and frankly it’s because we have very little choice in the matter if we want to participate in modern societies. There’s, I think, another need for regulation there. It’s a real pain: I work in a neuroscience lab, and we try to do experiments on large numbers of people, and we have to be very, very careful about data privacy, anonymisation and so on. But it’s for a very good reason. It’s one of those things where you don’t realise how valuable it is until it’s gone.
NC: And I think another area linked to that is the prospect of deep-fake technology. We are already seeing that generative AI is able to create things that aren’t real, but it will also happen in robotics. I think we will see realistic robots out in the world posing as people they aren’t, doing things, causing mischief. They could actually be misused by people who want to cause harm.
AK: And it is disconcerting. I remember… it may have been coincidence, but I was speaking to a friend once and I said I would like to look into having an otter as a pet, and later that day when I opened up my social media one of the first promoted posts was “adopt an otter”. Now I don’t know whether you’d want to adopt an otter, first of all (AS: Apparently they are very violent. They seem very nice but…) I didn’t go ahead with it in the end. There was a website where you meet them under a bridge and you come alone, but I lost a lot of money. That’s not really what the conversation’s about.
But already there’s that sense of impinging on our privacy, and you seem to be suggesting that there isn’t that regulation. So is that going to get worse? Is there a way of introducing a sort of prohibition like they did in the 1920s, or would that do what prohibition did, which is to create a series of technological speakeasies, where the good guys are behaving themselves but you have the bad guys running riot in the background?
AS: Yeah, I think prohibition is the wrong way to go. There was a recent letter, co-signed by many people in the field, calling for a six-month pause in the development of large-scale AI, not all AI, but large-scale AI. That was never going to happen. The point of that letter was to get some publicity for this issue: what would be the right kind of regulation? Prohibition is not the right kind of regulation. But there are precedents: at the beginning of genetic engineering in the 1970s there was a series of conferences, the Asilomar Conferences, where people projected out ahead what the technology might do, what its benefits might be, what its risks might be. And as a result of that, some tramlines were put in place. For instance, human cloning was agreed by pretty much everybody to be something that we should not do. And by and large that has helped.
Something similar could work. The genie’s a little bit too far out of the bottle already for that, but that doesn’t mean we shouldn’t try; trying now is still better than doing nothing. And finding where to put those tramlines so that you don’t stifle innovation and you don’t suppress the enormous potential for social good, that’s a real challenge, but I don’t think it’s a hopeless one.
NC: And I think we ought also to put the responsibility on the academics and the researchers who are developing the technologies. We’ve been working on language models for more than two decades now, and we could have started thinking about this a lot earlier. What is this going to do? If we are successful in what we are aiming at, how will it be received? How can it be misused? What are the side effects? What are the ethical concerns? And I think we need to get into that frame of mind right from the grassroots, where this technology emerges, through to when it hits the commercial scene. Because once it’s hit the commercial scene, if it’s of commercial value and there’s no regulation to stop it, it will fly, like we’ve seen with ChatGPT, which has been adopted faster than any other technology in history. Just skyrocketed. We need to take that responsibility earlier in the development cycle for this technology.
AS: Well, yes, it’s true; the problem is that we’ve already gone through several cycles of development and release of this stuff. So I think there are other things we can do. It’s as much a sociological challenge as it is a technological challenge. We will live in a world where we interact with AI systems that in some cases reveal themselves to be AI systems, and in other cases stay behind the scenes. So education in how to be socially literate in this kind of environment I think is really important, and this speaks to the privacy issue. Most people don’t even realise there’s such a thing, let alone the risks of giving away their data privacy. Education about that I think can really help.
There are other simple things I think can be done. One of the big things, and I’m not sure if it’s reassuring or not: when we think about future language models, as Nigel said, existing language models, astonishingly, have basically been trained on everything that was ever written by anybody at any time, which is amazing, right? And they still make stuff up. But what’s likely to happen, what’s already happening, is that language-model-generated content is going back onto the internet. So language models are, in a sense, polluting their own training resource. And so things might actually degenerate a bit from now, and the internet could become an even more unreliable cesspool of misinformation than it already is. But there are ways to fix that; you can watermark content as being AI generated or not. I think there’s lots of cause for optimism, but one just has to be sensible and not go gung-ho into “yes, we can do this, so we’ll build this and the people will come, or they won’t”. That’s the wrong attitude.
NC: I think the education piece is absolutely central. I’ve been helping various businesses understand what ChatGPT does, how it works, and if it’s ok I’ll just give a very brief explanation of it because I think it’s illustrative of what Anil is saying.
So a language model, essentially, estimates the probability of a sequence of words, or a sequence of tokens, actually. You can put a sequence in like “the moon is made of cheese”. That would get a high probability because it makes sense; it’s a likely sequence of words. But it’s not true. We know it’s not true. Take the same words and rearrange them in a different order and you get a low probability. So that’s the basic model underneath it. Now, when it’s generating text you give it a question, so that’s its first sequence, and then it uses its learning to predict what the next word would be, based on probabilities from the data it has looked at. Essentially it has a probability for every word in its dictionary, and it’s like taking a dice and rolling it: a dice weighted according to those probabilities will come out with a likely word. Not necessarily the same one every time, which is why it generates new content even if you ask the same thing again. Then that word is fed back into the model and it predicts the next word, and then the next word, and then the next word, until it reaches a stop.
AK: But at a superfast speed?
NC: At a superfast speed; it’s doing that very, very quickly. Now once you understand that, you see that this is not communicating with you. It’s not forming an intention, “I want to say this to Anil”. It’s just spewing out highly probable sequences of words that may or may not be true.
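The generate-and-feed-back loop Nigel describes can be sketched in a few lines. This is a toy illustration only: the vocabulary and probabilities below are made up for the sketch, whereas a real language model derives its next-token distribution from vast amounts of training text.

```python
import random

# Toy next-word probability table. The words and numbers here are
# purely illustrative, not taken from any real model.
NEXT_WORD_PROBS = {
    "the":    {"moon": 0.6, "sun": 0.4},
    "moon":   {"is": 1.0},
    "sun":    {"is": 1.0},
    "is":     {"made": 0.7, "bright": 0.3},
    "made":   {"of": 1.0},
    "of":     {"cheese": 0.5, "rock": 0.5},
    "cheese": {"<stop>": 1.0},
    "rock":   {"<stop>": 1.0},
    "bright": {"<stop>": 1.0},
}

def generate(first_word, max_words=10):
    """Roll a weighted dice over the next-word distribution, feed the
    chosen word back in, and repeat until a stop token is drawn."""
    words = [first_word]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1], {"<stop>": 1.0})
        nxt = random.choices(list(probs), weights=list(probs.values()))[0]
        if nxt == "<stop>":
            break
        words.append(nxt)
    return " ".join(words)

# The same prompt can yield different continuations on different runs,
# e.g. "the moon is made of cheese" or "the sun is bright": fluent,
# plausible sequences, chosen with no regard for truth.
print(generate("the"))
```

Because each step is a weighted random draw, repeating the same prompt can produce different output, which is exactly the behaviour Nigel points to, and nothing in the loop checks whether the sequence is true.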
AS: This brings us back to the GPT-written introductions that were read out, which we were talking about a little bit before. The introduction it had for me, anyway, I could tell was broadly right in the generalities. Now, a few weeks ago some friends of mine asked GPT-4 to write a biography of me, and it came up with a longer one; again it was right in the generalities but wrong in the specifics. And, again, this is because language models are not trained to provide facts; they are trained to predict the next most likely part of a word. And so it said that I was born in London, when I was in fact born in Oxford. Now, it’s plausible that I was born in London, because a lot of people are born in London, and because a thousand versions of me, on average, might well have been born in London. But then when I asked GPT-4 to do the same again but with fewer errors in dates and places, what it did was really revealing. It generated a new biography, and instead of saying I was born in Oxford, which would have been true, it said I was born in Hammersmith, in London. Now, for me, this betrays that there is absolutely no understanding going on under the hood at all, because it’s immediately obvious to us that if London was wrong, then Hammersmith is even more wrong. It’s wrong in a way that no human would get wrong. You’d zoom out: if you didn’t know, you’d say born in England.
So this was one way of probing whether these systems have the kinds of capabilities people ascribe to them, in this case a property called “metacognition”: the ability we have to know whether we know. This is something fundamental to human cognition. And GPT-4 certainly doesn’t have it at all. This is dangerous because one of the reasons I find language model output both fascinating and insanely boring is that everything is evenly confident; it just spews out stuff very fluently, with a sort of high level of “yes, this is how things are”. Whereas we don’t; we modulate our interactions according to our confidence in what we are saying. But if we project qualities like metacognition into language models, then we will be misled by them. We will assume that they know things when they don’t. It’s almost as if, when you are dealing with someone who lies a lot, you are playing opposite sides of the same game, but something like a language model is playing a different game entirely (NC: It is, yes), and that can lead us far astray if we assume it’s playing our game.
NC: And the other thing is that people assume that because it is a computer, it will get right the things computers tend to get right. So if you ask it to multiply two very large numbers together, it will often get that wrong, because that is a rare occurrence on the internet: if you pick two extremely large numbers, it’s rare that those exact numbers appear often in the text it has been trained on, so it will get it wrong. But if you ask it to write a computer program to multiply two numbers together, it will get it right, because a computer program is a sequence of instructions, and that’s what it’s good at doing, it’s good at generating…
AS: And because most of the computer code that’s on the internet actually works. It’s a really, really high quality training set.
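Nigel’s contrast is worth making concrete: the exact product of two huge numbers is a rare string online, so a model would effectively have to have memorised it, whereas the short program that computes it is an extremely common pattern in training data. A minimal sketch of the kind of program meant here (the function name is my own, for illustration):

```python
def multiply(a: int, b: int) -> int:
    """Exact multiplication, performed by the interpreter's arithmetic
    rather than recalled as a memorised string of digits."""
    return a * b

# Once the program exists, it gets arbitrary inputs right,
# however rarely those particular numbers appear on the internet.
print(multiply(123456789012345, 987654321098765))
```

The point is that generating this common code pattern is a probable-sequence task the model handles well, while the rare digit string it prints is not.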
AK: Well, this is fantastic; we are almost out of time, so we’re coming towards the summing-up phase. But I think you have been more reassuring than you thought, Anil. I mean, it’s a fascinating topic, you’ve both spoken very fluently, and you’ve made no errors, unlike ChatGPT, which is fantastic.
So it seems at the moment that we are at least… we are starting to get to grips with our moral responsibility towards AI. Things are accelerating, but they are not yet… the genie might be out of the bottle, but it’s not yet beyond the pale in terms of having some kind of control over it, and we just need to be careful. And really what we are saying, I think, is that ChatGPT is a very effective bootlegger, it’s a moonshiner, but it’s never going to replace proper whiskey, to use the analogy. Answer that, and also give your summing-up thoughts, Nigel, please.
NC: I agree. I mean, I think this latest wave of generative AI, which can generate not only text but images and sound as well, does kind of give us an insight into, in my view, partial creativity in humans, but not the whole picture, because for me creativity in humans involves a capacity to choose and to curate your skills in creating music or text or images.
So I think that what we need to do now is to enter a period of really thinking about what this technology is doing. How is it impacting our lives? How can we make the most of what it’s good at but minimise the harm it’s causing to individuals and society at large? And develop regulation that enables us to keep people safe but that doesn’t dampen the development and advancement of the technology.
AK: But your view, in a word, AI can never replace humanity?
NC: In my view, it can never replace it.
AK: Ok. Anil? Final thoughts?
AS: It certainly should never replace it, and I think it’s very unlikely to. I agree about finding this balance between the positive and the negative uses of a powerful dual-use technology. I think part of the problem is that we lump so many different things together under a single term, “artificial intelligence”. On the positive side, we have things like the algorithms which very recently have been shown to help with the diagnosis of breast cancer. This is not going to replace radiologists; it can complement radiologists, and it can save an enormous number of lives. There are going to be many applications like that. Some of the technology these systems use is the same as a language model uses, but not all of it, and it’s certainly different from, say, what a humanoid robot might use.
I think the overall message for me… I’m reminded of something one of my mentors, the philosopher Daniel Dennett, has said repeatedly for many, many years now: when we think about AI, and when we build AI, we should always remember that we are building tools, not colleagues. If we keep that front of mind, then I think a lot of things become clearer about what we build, how we build it, and what our interactions should be with the systems that we create.
AK: Fantastic. Splendid job, chaps. Thanks to both of you.
Well, we hope you’ve enjoyed it; it’s been absolutely fascinating sitting here at the table with these two heavyweights. Today on The Big Conversation we’ve been looking at The Robot Race: Could AI Ever Replace Humanity? We don’t think so, but the future is not yet written, not even by ChatGPT. We’ll see you next time.