About this episode:
The Big Conversation – Episode 6 | Season 5
In this special two-parter, we explore “The Robot Race”. Are developments in Artificial Intelligence spiralling out of control, and what can we learn about ourselves – the human race – in light of AI’s rapidly expanding capabilities?
Here, in part two, we debate the nature of human consciousness and free will – comparing it to AI and robotics – and ask how humanity should best flourish now that the AI genie is out of the bottle. Speaking into this issue, from two very different perspectives, are Christian believer Nigel Crook, Professor in Artificial Intelligence & Robotics at Oxford Brookes University and author of “Rise of the Moral Machine: Exploring Virtue through a Robot’s Eyes”; and atheist Anil Seth, Professor of Cognitive & Computational Neuroscience at the University of Sussex and author of “Being You: A New Science of Consciousness”.
Episode Transcript:
Audio Transcript for The Big Conversation (Season 5: The Robot Race, Part Two: How should we flourish in an AI world?)
August 2023
Andy Kind (AK), Nigel Crook (NC) & Anil Seth (AS)
AK: Hello, and welcome back to The Big Conversation from Premier Unbelievable brought to you in partnership with the John Templeton Foundation. Today’s episode is entitled, ‘The Robot Race, Part Two: How Should We Flourish in an AI World’? I am your host, Andy Kind. I am not a robot.
I have been joined today by two very distinguished guests, back by popular demand. We have Anil Seth and we have Nigel Crook, welcome back. And we’ve checked that these guys are not robots either; they had to tick all the boxes with bridges and traffic lights in, so we have confirmed they are in fact human beings. And, you know guys, I have a script writer – did you know that? We have the budget for a script writer, so I’m going to read the scripted introduction and then we’ll talk about… it won’t be a surprise to you at this point who the script writer is. So, listen to this: ‘Today we dive back into the captivating world of artificial intelligence with, ‘The Robot Race, Part Two: How Should Humanity Flourish in an AI World’? It’s like we’re caught in a sequel, but this time the stakes are higher and we are all wondering if we are living in the Matrix or just another complex algorithm. Back by popular demand, our two phenomenal guests are making their triumphant return. First up we have Nigel Crook, the AI maestro himself. But wait, there’s more: we’ve got the brilliant Anil Seth, the neuroscientist extraordinaire who is sure to blow your mind. He’s all about being you, and I have to admit I’m curious if that means I’m just a simulation running on some cosmic computer. I mean, don’t tell me if I am, I don’t want an existential crisis on air!
In this thrilling sequel, Nigel Crook and Anil Seth are back to tackle the pressing questions about humanity’s place in an AI world. With Nigel’s ‘Rise of the Moral Machines’ and Anil’s ‘Being You’ under their belts, they’re armed with the knowledge to navigate this digital frontier. Now, here’s the kicker: this introduction, like last time, was crafted by none other than ChatGPT, the AI language model. I mean, it’s a bit awkward, isn’t it? An AI writing about AIs. It’s like asking an artist to paint their own portrait. But, hey, that’s the digital age for you. So folks, get ready for an electrifying ride as we delve into ‘The Robot Race, Part Two: How Should Humanity Flourish in an AI World’ with our remarkable guests, Nigel Crook and Anil Seth. It’s time to laugh, to ponder and maybe question our reality just a bit. Let’s roll.’
That’s robots for you. So, welcome back. Like last time, we got ChatGPT to write the introduction. Slightly more complimentary this time, do you think?
AS: It was very bombastic again, wasn’t it? I wonder if it was asked to write in a particular style? It seemed to be a kind of musical style. And, also, I noticed at the end there was a dead giveaway, wasn’t there, when it said, ‘that would be like asking an artist to paint a portrait of the artist’. Have you not heard of self-portraits? That’s quite a big deal in art over the last few centuries.
AK: That’s right, yes. So, it was a very bombastic intro. The fact that you are not here to put your head into a lion’s mouth or walk a tightrope, Nigel, doesn’t really fit the intro. But we’ll see how we get on; I’m sure it will be entertaining nonetheless.
And that is the thing, though, isn’t it, with ChatGPT, with AI. We talked in the previous episode about how artificial intelligence is a bit of a misnomer, but that’s part of the popularity. If ChatGPT were simply described as applied statistics it wouldn’t be popular in the same way.
But I remember… recently I asked ChatGPT to write a short horror story in the style of Stephen King and it wrote a short horror story. But it was very remedial; the function and the framework were correct – it would have got a mark at GCSE – but there was no real imagination, there was no real twist, there was no subtext to it. So that’s sort of where we are. But as you have said previously, Anil, it’s so confident, it’s so convincing in its communication that you sort of buy into it, don’t you? You think, well, it’s a computer and it seems to know what it’s talking about?
AS: Yeah, I mean it’s very flatly confident, I think; it’s got this uniform level of assurance about everything that comes out of it. Which is one of the reasons I actually find it a little bit dull, to be honest. People have lots of fun asking GPT to write poetry and it’s really terrible poetry, isn’t it? It’s really not very good at all.
And I think it speaks to something we talked about before: the things that we can be tempted to project into these systems that they don’t actually have, and the dangers that arise when we do this. If we believe that GPT understands things, if we believe that it knows things, if we believe that it knows about what it’s saying and ultimately, of course – and we’re going to talk about this – if we believe it’s conscious; if we believe there is an actual conscious mind sitting behind the text that spews out of it, then I think we can get ourselves in all sorts of trouble.
AK: Great, well we are going to talk about consciousness, mind, identity, what it means to be conscious. Obviously both of you have strong views on that and I’m sure we’ll be able to unpack that.
So let’s just talk about your origin stories a little bit, how you got here. Nigel, we’ll start with you. Not simply how did you get to this table today but what is your background and are you surprised by where your career trajectory has ended up?
NC: Yes, I am surprised. My background… so in terms of my upbringing I was brought up a Catholic. I then, after university, became a Methodist, and just over 20 years ago I had a major moral crisis and I then became an Anglican, as you do. But that kind of got me interested in thinking about moral development. I started to reflect on my moral development, having been at that point probably a Christian for 30 plus years, and what did that mean in terms of developing character, developing moral character, as a human being? So that began that journey.
In terms of my involvement with artificial intelligence and robotics, I’ve been doing that now for nearly 40 years. I started out in the medical domain, helping medics diagnose conditions in premature babies. I then moved on to looking at how human brains function and process information, and did that for about a decade. (AK: I think we all have at some point, Nigel!) Yes, I know. I then went into robotics, so I’ve been pretty much a nomad in terms of both my religious experience and my academic experience. But in the last ten years I’ve become very interested in social robotics and robots around people, and in particular in the moral implications of that and what it would mean for a robot to possess moral competence. And that’s kind of the convergence, for me, of that amateur interest in moral development – Christian moral development, actually, is what I looked at, and I discovered new things about my own religion which I didn’t know – and robotics: looking at how you can mix those two together, the theology and the AI and robotics, to produce simulations of moral competence.
AK: That’s fantastic. Anil, what about you? What about your background and how you ended up here?
AS: Well, so, I was born in Oxfordshire. My mother was from a Catholic family, from Yorkshire actually. My father was from a Hindu family in Uttar Pradesh, so when they got married in the 1960s it was very hard to reconcile, I think, those two belief systems. So I grew up in a very areligious environment in South Oxfordshire.
But I’ve always been interested in consciousness. And I think many… probably most people are at some point. There is a point, I think, in all of our lives where we wonder; who are we? Why am I me and not somebody else? Why am I here? What does it mean to be a conscious person? That’s a bit more of a sophisticated question. But they snowball; do I have free will? All these kinds of things I think we often grapple with.
And then most people get onto different topics that they can earn a living with, but I remained fascinated by this foundational question of human and animal consciousness, biological consciousness in general. And I was also a bit of a nomad in how I ended up building a career around it, because in the 1990s, when I was starting out in university, it wasn’t really a thing you could do. Consciousness, scientifically, was certainly pretty much on the fringes or off the table entirely. It was a matter for philosophy, it was indeed a matter for theology. It wasn’t really the subject of psychology, neuroscience or anything like that. So my scientific training went around physics, it went around experimental psychology. My PhD was in AI because I thought then that in order to understand how the brain works we really need to be able to build systems that exhibit some of these capabilities. The great physicist Richard Feynman said, in effect, that what we cannot build we do not understand.
And it was only about 20 years ago that I was able to focus back squarely on understanding consciousness by bringing together perspectives from many different disciplines; from philosophy, from psychology, from physics, from mathematics trying to understand what it is about these complex interactions between brains, bodies and worlds that brings about subjective experience; experience of there being a world and of being a person within it.
And so AI has always been part of this equation and over the last couple of years it’s become a much more prominent part of what I do because the tools have developed so rapidly and the discourse around them has evolved so rapidly also.
AK: So, consciousness is a mystery that matters – is that something that you would say? (AS: It’s something I have said). It’s something you have said; I wrote it down there. I’m just going to change the attribution to ‘Andy Kind, 2023’. So, that’s really interesting, because although you both agree on lots of things, maybe for different reasons, in the previous conversation we had there was almost no area of divergence; we are probably going to find the sort of battle line here, in a gentle way.
For you though, another thing you’ve said, Anil, is that consciousness is any subjective experience. So that’s how you would boil down consciousness; any subjective experience?
AS: It’s always tricky to define it precisely; in fact, there’s still a lot of philosophical argument about how we define consciousness. But I think we don’t need to come up with a full consensus definition, we just have to make sure we’re not talking past each other. And so I describe consciousness as any kind of experience whatsoever: it could be the experience of redness when you’re looking at a beautiful sunset or the taste of a red wine, it could be the feeling of joy or the pain of a toothache; all the experience of being a person with its emotions, moods, sense of agency, free will, all of these things – any kind of experience whatsoever. And I think this broad lens on consciousness is useful because it stops us, if you like, associating consciousness with things that it isn’t, like intelligence, which is relevant to our current conversation. Intelligence, the explicit sense of being a person, language – all of these things that go along with consciousness in humans are, I think, if you like, optional extras. A system that has subjective experience, for which there is something it is like to be that system – that’s enough.
AK: And so, for you, consciousness is a bundle of perceptions?
AS: That’s in practice the way I approach the topic. The philosopher David Chalmers has talked for many years about the so-called hard problem of consciousness. If we think about how we might understand conscious experience it seems like an almost intractable mystery. We have this insanely complicated biological machinery inside our skulls, the brain, connected to the body and the world, and that’s on the one hand. And on the other we have this realm of subjective experience; the redness of red, the sharpness of pain. How can we ever explain one in terms of the other? I mean, this is a thing that theology touches on as well, right? I mean, there are certain perspectives there.
And my perspective on it is maybe not to address it head on and try to find the magic sauce that creates consciousness out of biology, but to explain the properties that conscious experiences have. Every kind of conscious experience, I think, can be usefully thought of as a kind of perception. We are used to thinking about that when we think about the outside world; I perceive the world around me. But it also, I think, applies to the self. So the self – the experience of being you, Andy, or you, Nigel, or me, Anil – is not the thing that does the perceiving, in my view. It’s a bundle, a collection of perceptions that the brain is forming, in this case, that are grounded in the body itself. So the self is a form of perception rather than a thing or essence that does the perceiving. And I think by approaching consciousness this way we begin to dissolve its sense of mystery, and we can understand how and why conscious experience fits into this emerging picture of human beings, and other animals, as continuous with the rest of nature.
AK: That is really helpful, thank you. Nigel, where do you disagree?
NC: Well, there’s a lot that you’ve said that I would agree with. The brain is doing a lot of perception; obviously it’s doing a lot of processing of sensory signals. My point of departure is that the reference point is not the brain itself. I don’t align with the view that consciousness arises from the brain. I’m a dualist, which means I believe that the mind is deeply connected with the brain but is beyond that. And the theological connection to that is that reality is described as a deeply integrated dual reality. Biblically speaking, heaven and earth are the terms that are used; earth is the material side, heaven is the non-material side, and human beings reflect that reality. We have a mixture, a dual nature, and the mind – which includes consciousness and thought and feelings and so on – is deeply interconnected with the brain but is not identical to the brain.
AK: That’s fantastic. So, Anil, you would feel, or believe, that you can measure consciousness somehow?
AS: Yeah, but let me just say, I don’t think consciousness is identical to the brain. I mean, the brain is a physical device. Consciousness is a property that the system has in conjunction with the body and the world.
But I do think we’ve found a point of disagreement here; I’m definitely not a dualist. Dualism was famously articulated, I think, by Rene Descartes back in the 17th century, and his idea of how the mental domain and the physical domain interact was through this tiny part of the brain called the pineal gland, in the middle. And actually his rationale for that, I thought, was quite funny: most parts of the brain we have in two copies, one on each of the two hemispheres, whereas the pineal gland there is only one of. So if you are trying to find a location where these two domains interact then it’s sort of parsimonious to fix on that. It’s entirely wrong, but there’s an elegance to that idea.
So I think there is a basic point of disagreement, but I don’t think it’s as much of a gulf as we might say. Consciousness is not identical to the physical system but is a property of it, and quite what kind of property it is, there is a lot of scope for discussion there. But I do think that it is part of physical reality; the same physical reality that exists all around us, that makes up our body, makes up the rest of the world. And, indeed, it’s something that you can begin to measure. One of the tricks in science, I think, in making something amenable to a scientific description, is the ability to form measurements of it. And it’s this ability that has brought the study of consciousness largely within the realm of science.
NC: So for me, I think I agree with you; I think you are talking about consciousness as a property of the brain, and I understand that. I think the area where I would push it a bit further is the issue to do with free will – libertarian free will; the ability to choose, within constraints, but with a certain freedom. If consciousness is a property of the brain then it’s subject to what we call causal determinism, physical determinism. In other words, brain states follow one from the other – physics tells us that, and I believe it; one brain state is caused by the previous brain state, and so on. And the challenge then is how do you fit free will – if we have free will at all – into that context?
So I struggle with the idea that it’s just a property of the brain. To me, we do have libertarian free will, and that therefore means that our minds extend beyond, and are not limited solely by, causal determination in the brain.
AK: And for you, Nigel, is it consciousness that makes us human? Because this is the thing that is key as we look at AI; what does it mean to be conscious? What does it mean to be human? And is there a point – in the absence of a soul – is there a point at which we could call AI conscious or sentient? So this is why we are talking about this. Can you speak to that, Nigel?
NC: I think it’s much more than that… I mean, we are humans, so we’ve got bodies as well as minds and souls. And I think one of the things that I’ve discovered in the last ten years looking at moral development is that those three elements, plus the social dimension, are fundamental to our moral character development. Each one has a different role; the spirit, the will, has a particular role. The soul has a role. And both are deeply integrated with the body; they are not separate.
I think one of the things that I realised during my study of this area is that we’ve adopted a lot of ancient Greek philosophy when it comes to thinking about the soul and the body as being entirely separate things. But that was introduced into Christianity around the 4th century with Augustine of Hippo. He liked Greek philosophy and he integrated it into Christian thought, and that’s where we are now. But in the Judaeo-Christian tradition they were seen as deeply integrated. They weren’t seen as separate, but as deeply integrated aspects of the human person. So, to answer your question, the human person is all of those things. Consciousness is part of it, but the whole picture, the whole set of dimensions, is what makes a human.
AK: So it’s still a bundle but not just a bundle of perceptions?
NC: Bundle, to me, is a loose connection. It’s not a loose connection, it really is a tightly defined…
AK: Anil, any response to that?
AS: Ah yes, lots to respond to! I’ll try and be relatively… I mean, we could talk about this for a long time. Free will, I think, maybe we’ll come back to; just to say for now, this idea of libertarian free will is, I think, very compelling; it’s the idea that our mind has causal power over our brains, our bodies, in a way that allows us to take responsibility for things and so on. I think it’s entirely wrong, and I also think it is unnecessary. I think we can have all the free will that we need from what in philosophy is called a compatibilist position; that there is a sensible version of free will that is entirely compatible with there being no uncaused causes, with, as you say, one state following another with a bit of added noise. So I think that is possible. We might not agree about that.
And then the idea of consciousness as being what makes us human – I think this is really interesting, and I think the theological perspective has engaged with this idea in a different way to the scientific perspective. The perspective I would take on it is that the association of consciousness specifically with humans seems to me another example of the kind of human exceptionalism that has led us astray many times before. You know, the earth being the centre of the universe, all this stuff.
Consciousness is expressed in a particular, distinctive way in human beings. Quite what that is is still up for grabs. The kind of language we have, the kind of culture we have, is probably part of that. But I certainly don’t think it’s limited to human beings. Going back to Descartes, again, he took quite a strong stand on this, basically reserving consciousness for human beings. I think, at that time, that was partly to placate the religious authorities, even though what he was saying kind of implied… there was no good reason from his philosophy to make that case. And I think now the basic brain mechanisms that we see underlying consciousness in humans, we see in many other animals as well. And I see… yeah, no reason to restrict it in that way.
But what makes animals and humans different from ChatGPT – one of the things that makes it different – might well be consciousness. And we might have this interesting contrast here where many animals might be conscious in the sense of having subjective experiences, even if just of pain, pleasure, suffering, hunger, thirst and all that, and we might not realise it, whereas algorithms like ChatGPT might give us the strong impression that they have a human-like consciousness when there is absolutely nothing going on. It’s algorithms whirring away in the subjective dark. And if we fail to recognise that opposition then we can get into all sorts of trouble, because we start treating a non-conscious system as if it is conscious and actually conscious systems as if they are not. And you will know much better than me how much moral and ethical trouble that can land us in. (NC: Indeed).
AK: Well, we’re going to delve into that trouble. You’re going to get into trouble in the second half, but that’s already the end of part one. Thank you for those answers – that’s exactly what I would have expected a robot to say, Anil, so I’m having my suspicions about you.
But on today’s Big Conversation we are talking about ‘The Robot Race: How Should Humanity Flourish in an AI World?’ My guests are Nigel Crook and Anil Seth. Lots more to talk about, we’ll be back after this short break.
Welcome back to The Big Conversation with me, your host, Andy Kind. Today’s episode is entitled, ‘The Robot Race Part Two: How Should Humanity Flourish in an AI World’?
My guests today are Nigel Crook and Anil Seth and we are having what ChatGPT might describe as a majestic maelstrom of a conversation. And it’s been great so far, everyone feeling happy? (AS: Very)
Despite not having a soul, Anil, you’re still pretty happy? (AS: Even more happy) Even more happy! A real freedom. And we want to talk about free will and we want to go back to talking about consciousness and move on to talking about how we would flourish alongside AI.
Anil, in the previous episode you talked about metacognition. Could you talk about that again and explain what you mean by metacognition and how that bears out on AI?
AS: There’s a property of human thinking, of the human mind in general, which is that we not only see stuff, think stuff and know stuff, but we know that we are doing those things. If I open my eyes and look around me, I know I’m having a visual experience. If someone asks me what the capital of France is and I say, ‘Paris’, I kind of know that I am right. But if someone asks me what the capital of Kazakhstan is and I hazard a guess, then I know I’m guessing. That’s metacognition; literally, it’s cognition about cognition. And this is important because it’s the ability to know about our own mental states that allows us to communicate in ways that are adaptive, that are useful. So basically, to know when we are telling the truth and to know when we are not telling the truth.
And language models so far, like ChatGPT, don’t have this capability. They might do in the future, but they certainly don’t at the moment. And that’s one limitation on their use. They’re being applied in so many different domains because of their apparent fluency, but they do not distinguish between what is fact and what is artefact.
NC: Can I just follow on from that, because it has brought to mind a different area of AI other than language models – because AI is a very broad discipline – called epistemic AI. Epistemic AI processes data differently. The way language models and other conventional AI systems work is that they have a set of data, a huge amount of data very often, and they train what we call a model on that data. And from that point on the model only knows about that data; it doesn’t know about anything beyond that point. Which is a limitation, because if it meets a situation that hasn’t been taken care of by the data it really doesn’t know how to respond. It makes a guess and it can be wrong. But it doesn’t know it’s wrong.
Epistemic AI has been around for a long time but hasn’t quite got the traction that current forms of AI have, and it operates differently. It says: okay, I’ve got this data, which describes the current situation that I know of, but I’m not entirely convinced this is everything I need to know about it. And it holds back a portion of its probability for unexpected things. You can then model situations where the AI is making a guess but is also ‘aware’ – a simulated awareness – that it is making a guess, in the sense that there is a probability of a probability: how right am I in saying this?
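[Editor’s note: a minimal sketch of that ‘probability of a probability’ idea, in the spirit of subjective-logic and evidential approaches to uncertainty – not a description of any specific system Nigel has in mind. The function name and evidence counts are invented for illustration. The model keeps back a share of probability mass that grows as its evidence shrinks:]

```python
# Illustrative sketch only: a Dirichlet-style "epistemic" classifier that
# reserves probability mass for the unexpected. Names and numbers here are
# hypothetical, not from any particular epistemic-AI library.
import numpy as np

def classify_with_reserve(evidence):
    """evidence: non-negative counts of support observed for each class."""
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet parameters
    strength = alpha.sum()                           # total evidence seen
    class_probs = alpha / strength                   # first-order guess
    # Second-order uncertainty: the mass "held back" for situations the
    # data hasn't covered. Large when evidence is scarce, small when plentiful.
    reserved = len(alpha) / strength
    return class_probs, reserved

print(classify_with_reserve([40, 3, 1]))  # lots of evidence: confident guess
print(classify_with_reserve([1, 0, 1]))   # little evidence: it "knows" it's guessing
```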
AK: But you talked earlier, Nigel, about having a moral crisis and moving into Anglicanism. One of the distinctions between AI and human beings is that at the moment AI cannot have a moral crisis, can it, Anil?
AS: No, I don’t think it can. They are things that we designed, so it doesn’t make any sense to attribute moral agency to AI systems. I don’t think they are the kind of things that can be held responsible for their actions in the same way that we might hold humans responsible for their actions.
Of course, I think the interesting thing here is how AI holds up a mirror to our own human intuitions about this. What makes us think that it’s reasonable to hold humans responsible for their actions? If, for instance, I’m on the right track and we don’t have the kind of free will that you are talking about, why should we ever hold anybody responsible for their actions? Of course, there are reasons in terms of rehabilitation and deterrence and so on. But strictly speaking, nothing is anybody’s fault on this view. So I think that’s quite a productive role that AI can play. But, yes, I think you’re right that language models themselves are not responsible for what they say. The designers of the system are partially responsible, but even they are not wholly… it’s a kind of distributed responsibility here that exceeds any single mind and goes right out into the economy, into all the forces that have brought these things into existence.
NC: Exactly, and I think that you raised a very good point: the data that models like GPT, the language models, are trained on is not curated. Nobody has sat down and said, well, this bit of data actually represents a good moral perspective, a general moral perspective on the world, we should train our model to learn that. It’s trained on everything. And as we know, if you’ve spent any time looking at the internet, it is a real mishmash of stuff that is uncurated, and I think these models have been trained on that and they will replicate it. It will come out.
AK: It’s interesting though, isn’t it… as you were talking I was aware that human beings already – and for a long time, since the gaming revolution of the early 80s – have been investing computers and software with the idea of moral agency. When people rage quit a game, they say, ‘stupid game’, and they do blame the game. I know I once lost in the FA Cup final with Arsenal on Football Manager and I blamed the game. I decided that it had done it deliberately, that it was scripted in some way. So it’s interesting how, with what is really just applied statistics again, we sort of project some kind of moral agency onto it.
AS: Right, and science fiction films have dealt with this beautifully, I think. They really probe at these intuitions we have. ‘2001’ is one of the best examples of that: the computer HAL refuses to open the pod bay doors and leaves the astronaut, Dave Bowman, outside. Do we hold HAL responsible for that action? Part of our mind feels that we should, and another part of our mind feels, and in some sense knows, that we shouldn’t. The articulation of that in the film is beautiful. And more recently ‘Ex Machina’ by Alex Garland is another fantastic example of the way in which we project moral agency into a system, and the film leaves it delightfully ambiguous about what it would take for that to be justified.
My view is that a minimum condition is consciousness. For something to be a true moral agent it needs to have a certain degree of awareness. But while that may be necessary, it might not be sufficient. I think things can be moral… things can be conscious without having moral agency, that’s for sure too. Many animals are no doubt conscious but still lack awareness of their actions and the potential consequences.
AK: Nigel?
NC: I think from the human point of view, from my point of view, this is where we come to – sorry to bring it up again – libertarian free will, because I think the fundamental issue there is that the freedom to want is not the freedom to act. Compatibilists, by the way, are… (AS: People like me) Yes, you’re a compatibilist. Well, you could probably define it better than I could, but compatibilists don’t see any conflict between the brain being causally determined and free will, because as long as there is nothing stopping you from doing what you want, then you have the freedom to do what you want, and then you have free will. But I would come back and say the freedom is in the wanting. That’s where the freedom is. It’s not just in the acting, it’s in the wanting.
And I think that’s important because our moral development depends on it. Our will – which in Christian terms we might call the spirit or the heart – has three primary functions. One is that it’s able to create, to generate new ideas and thoughts, original thoughts. The second is that it’s able to select, out of the many thoughts that are in our head, what to focus attention on. This is very important for moral development: what you focus your mind on will indicate how you develop morally. And the third is that it will issue the – command is not the right word – I’m trying to find the words… it will enact the thought that is currently the focus of the mind. So, action: you can think about an action and you can choose to perform it. Those three things together are very important for curating a heart that is well formed and morally upright, for want of a better word, and you need all of those things. It’s not just a matter of being able to choose in a moment; it’s, over time, the curation of a heart.
AK: And you would agree, wouldn’t you Nigel, that there is a large element of being human which is sort of programming, nurture and genetics and things like that? Then it’s interesting in the scriptures where Jesus says, ‘Greater love hath no man than this, that a man lay down his life for his friends’, which is actually counter-intuitive, because we are wired for self-preservation, for survival, and theologically the Lord flips that – you would say that, would you?
NC: Yes, he does. I mean… so this is… philosophers will often say that the highest form of moral competence is being able to rise above your natural inclinations and your desires to do the right thing. And that’s basically what he is saying; you don’t want to sacrifice yourself for your friends but out of love for them you would do that.
AK: So not simply dance to the music of our genes? (NC: No.) Anil, respond?
AS: Well, I think that’s overly reductive. No aspect of our behaviour is fully explained by our genes. We are complicated creatures. There’s a phrase, again, attributed to Daniel Dennett about degrees of freedom. Organisms as complicated as ours, as human beings, have multiple degrees of freedom. There are many things that can cause any particular action, or indeed can form any particular desire, and that can build up over time. And our brains have a degree of control over our actions that we don’t see in simpler organisms, or even in some of our own brain states. If I hit you on the knee with a hammer you’ll have a reflex action; that’s not under your voluntary control. But there are other things that are under your voluntary control, and that’s an important distinction. But what does this voluntary control mean? It doesn’t mean – at least to me – it doesn’t necessitate that there is an uncaused cause, a kind of libertarian free will, that’s making these things happen. It’s that our brains have evolved very complicated circuits of selection, of preference, of goal orientation, that can make it so that our actions are not immediately constrained by our genes or the immediacies of our environment. And when we have this kind of competence to control things, then I think we have all the free will that we really need. There is still no need for any non-physical causes in this; I think it can all be cashed out physically. But there’s still this important distinction between voluntary and involuntary behaviour.
AK: This is great. Two very well-articulated views.
Let’s move on now then into the final section, if you like, and the question of how we should flourish in an AI world? So, projecting slightly further into the future, AI has advanced, maybe it’s become sentient, maybe it hasn’t. What would it mean for humanity to flourish alongside a fully developed AI? Come on, Anil.
AS: Well, actually, the first thing I’d say on this idea of machines becoming sentient – let me just push back with two very quick things on that. Firstly, this word ‘sentient’ is really potentially misleading. It’s used by different people to mean different things. For some people, something that is sentient implies full conscious awareness; it feels like something to be that system. For other people it just means that it’s responsive to its environment. My central heating thermostat is sentient in that sense, right? But that’s not a really interesting or important sense. So I prefer to think about consciousness rather than sentience, just to make that distinction sharp.
And I really don’t think that machines, or AI as it is now, are on a trajectory to becoming conscious. It’s certainly on a trajectory to giving us the impression that it is, but it is not on a trajectory to actually becoming conscious. And that… you might disagree; certainly other people in AI disagree, because they think consciousness is a function of information processing; that if you programme the computer the right way, the lights would come on for it. I think consciousness is fundamentally biological. It’s a property of living organisms, at least that’s my best guess at the moment. But it will certainly seem as though machines have conscious minds, and that’s a danger, because then we’ll get misled, we’ll start to trust them when we shouldn’t, we’ll impute states to them that they don’t actually have, and we’ll encounter all the dangers we’ve already talked about.
NC: And we’ve seen some of it already. I don’t know if you’ve ever seen the Boston Dynamics videos of the humanoid robots? They’re amazing, you should look them up. They climb up stairs and run over cobbles and all that kind of stuff. Technically very challenging. And what they do in the videos is put the robot through its paces; so they programme it to pick up a box that’s in front of it, and there’s a guy with a big pole who pushes it back every time it moves towards the box. And eventually he pushes it so hard that it falls over, and there was an outcry online because the guy was mistreating this robot that looked human; when it moved, it moved like a human being, and people project – they anthropomorphise – properties of humanness onto robots. And I think this is what we will face, because these systems will become more and more like us. I agree entirely with what Anil says, that they are not us, definitely not, but people will treat them, and want them to be treated, as though they are human. And, to some extent, I would align with that because, you know, they’re in the image of a human being, and if you’re willing to beat up something that’s in the image of a human being, that kind of projects back on how you might treat real human beings. But nevertheless, I think we’ll reach a point where the popular demand is that these systems be treated as if they are human, when we know jolly well that they’re not.
AK: So robot rights?
NC: Robot rights. I can see it coming. (AK: Wow) I don’t necessarily agree with it, but I can see it coming.
AK: And again we’re back in to the sci-fi territory aren’t we, Anil?
AS: We are a little bit. But on the point of mistreating machines that seem to have humanlike properties – there’s a good reason why we shouldn’t do that. There are ethical views on this that go back to Kant, about the brutalising effect it has on our own psychologies. It’s why we don’t tear up dolls in front of children: even though it’s perfectly clear that they’re made of plastic, it cultivates unhealthy psychological attitudes to things.
So I think a response to this is to question this drive to make AI systems in our own image. I mean, this is driven partly by science fiction, partly by commercial imperative. And it neglects, I think, the perspective that we mentioned earlier: that if we think about the most optimistic scenarios for us coexisting with AI, it’s a complementary one. It’s not one where AI is indistinguishable from us. It’s one where we have systems that help us overcome some of our own cognitive frailties, of which there are very, very many. I mean, we are terrible at projecting out long-term consequences. It’s why we’re doing so badly at dealing with the climate emergency now. We are terrible…
AK: I invested in Bitcoin…
AS: Well, that could have been good! We are terrible at so many things, and where technology has worked, it’s worked by complementing our species-specific weaknesses. I think there is a good future in building systems that are like that, and that’s not the future of building things in our own image.
NC: But I think… I mean, I agree with you, and I think the issue is that the commercial drive will push us in that direction, because we love our own image, we are narcissistic. Anything that moves like a human being, or talks or behaves like a human being, we are drawn to, and we will pay money for it. So there is this push/pull: there will be a commercial driver to create more and more humanlike systems, but – and I think Anil is right – we do have to recognise that we are different from machines, and that to benefit, to flourish, we do need to work together, and there needs to be that complementarity between humans and machines. And I think we will do it… we are adaptable. If you look back at how we’ve come through the last 100 years, the development of technology, we have adapted. Society has adapted, jobs have changed, people have worked differently, behaved differently and accommodated the rise in technology. The challenge now is that it seems to be going faster and faster, and the question is whether we can adapt in time with the developments of the technology as we move forward.
AK: Yeah, and there’s nothing new in terms of fear mongering, is there? The Industrial Revolution created a lot of fear mongering, so did the technological revolution, and now we’re on this super-charged, high-speed train of advancement, almost.
Nigel, you said in your book, “Robots will always fall short of the capacity for human level moral agency no matter how hyper real they are as simulations. We should therefore never give our God-given responsibility to be His moral agents on earth over to machines and we should never put machines in positions of authority over humans. Robots should never be co-creators or architects of our moral landscape. Rather, they should be seen as morally naïve at best and be treated like children in that an adult human should always be responsible for them and their actions.” Agreeing with that? Still going with that? (NC: I think so! I like that) Anil, do you?
AS: With most of it. I’m not sure about the ‘treating like children’ aspect, you know; I think that’s almost a little bit too much in our image still – casting them at a stage of human life – but yes to the part that we don’t, or shouldn’t, treat them as moral agents.
No, I think the form in which we treat these systems is yet to be determined, but I hope it is as complementary systems, where the issues don’t arise in the way that the trajectory towards building human-like systems makes them arise.
AK: Great. Well, just a couple of minutes left now, chaps, so really just a chance to sum up and offer any final thoughts. Anil, any final thoughts on the topic of what it would mean to flourish alongside a fully developed AI?
AS: So, I mean, recognising that AI is many different things, I think there are many positive visions we can have for the future as well. One positive vision that I quite like comes from an old story about the Greek philosopher Socrates – this, by the way, came from a piece written by a philosopher in Oxford called Carissa Veliz. A friend of Socrates went to the Oracle of Delphi one time and asked, ‘is Socrates the wisest of all?’ Because he’s a very wise guy, Socrates. And the Oracle said, ‘yes, he is the wisest of men’. And Socrates heard this news and wondered how this could be, because he knew people who seemed to be much wiser. And, of course, the reason is that Socrates knew what he didn’t know. And I think that’s the metacognition angle. That’s something we can build in to what we want AI to be like. In fact, I think we can go further; I think AI should be not just like Socrates but like the Oracle. Oracles don’t have their own agendas, they don’t have their own goals. They dispense unbiased wisdom. And having that as a design principle, I think, will lead us more in the direction of tools rather than synthetic colleagues.
NC: Totally agree. For me, it’s education, education, education. We need to help future generations understand enough of the technology to recognise its limitations but also to realise the opportunities. And if we don’t do that I think we are heading for a mess, because we will empower these machines over our lives – as has already happened in certain circumstances – to push us in directions that aren’t necessarily… we need to be the masters. I don’t know… we haven’t said it in this series, but the term ‘robot’: do you know the original term is an old Slavonic word, first used in the 1920s by a guy who wrote a theatre play about human-like machines in a factory, and he called them ‘robota’, which means ‘slave’? So the word ‘robot’ means slave. So an autonomous robot is quite an interesting concept, because a freely acting slave is a bit of a contradiction. And we need to make sure that we remember that perspective: that we are not serving them, they are serving us, and we are working with them and developing forwards as a society with this technology.
AK: But there’s not going to be some great emancipation of the robots with disastrous consequences?
AS: I don’t think we are going to see Terminator. That’s not happening.
AK: I did bring an Alsatian in because they’re good at spotting Terminators but it hasn’t barked at all, so I think we’re all okay.
AS: That’s reassuring.
AK: Well, chaps, that is the end. Thank you so much to my guests Nigel Crook and Anil Seth. We have been talking today on The Big Conversation about ‘The Robot Race: How Should Humanity Flourish in an AI World?’ And I don’t know about you, I don’t know what you think about consciousness or the mind or the future, but it has been reassuring.
As for me, I am off to build an underground shack in the woods. If you don’t see me again that’s the reason. But thanks so much for watching The Big Conversation and somebody will be back. All the best.