Download Free Bonus Content
Sign up for the Premier Unbelievable? newsletter and be the first to see new episodes a whole week before they release! Plus you’ll also gain access to our bonus content archive packed with exclusive content and show updates.
This includes the full-length additional robotics interviews referenced in the episode (totalling an hour’s worth of extra material), plus the ebook edition of Rosalind Picard & Nick Bostrom’s discussion about God & Artificial Intelligence.
About this episode:
What does the future hold for humans, and does faith have any part to play in the race to upgrade our bodies and escape our planet?
Astronomer Royal, Lord Martin Rees and Christian bioethicist Dr John Wyatt have both been writing about the future.
John Wyatt’s book ‘The Robot Will See You Now’ explores where technology and transhumanism are taking humanity from a Christian perspective. Lord Rees’ recent books such as ‘On the Future’ and ‘The End Of Astronauts’ explore similar themes, including the prospect of interplanetary colonization.
Take our survey! https://survey-star.net/rees
More from this season:
- Episode 1: Biology, Belief, and Covid: Can Science and Faith be Reconciled?
- Episode 2: Conversion, Culture and the Cross: Are we ready to believe in God again?
- Episode 3: Is there a Master Behind our Mind?
- Episode 4: Rationality, Religious Experience and the Case for God
- Episode 6: Are Millennials & Gen Z ready to believe in God?
Audio Transcript for The Big Conversation: Season 4 Episode 5
Robots, Transhumanism and Life Beyond Earth
Justin Brierley (JB), Lord Martin Rees (MR) and Dr John Wyatt (JW)
JB: Hello and welcome to The Big Conversation from Premier Unbelievable, in partnership with the John Templeton Foundation. I’m Justin Brierley and today I’m joined by Lord Martin Rees and Doctor John Wyatt, to discuss ‘Robots, Transhumanism and Life Beyond Earth’.
Astronomer Royal Lord Rees and Christian bioethicist Dr John Wyatt have both been writing about the future in recent years. John Wyatt’s book, The Robot Will See You Now, explores where technology and transhumanism are taking humanity, from a Christian perspective. He regularly explores issues of technology in bioethics with his journalist son, Tim, on their Matters of Life and Death podcast, also available from Premier Unbelievable.
As well as holding the position of Astronomer Royal, Lord Martin Rees is a former Master of Trinity College Cambridge and former president of The Royal Society. And his recent books, such as, On the Future: Prospects for Humanity, explore futuristic themes, including the prospects for humanity in the age of robots, transhumanism and AI.
Well, during today’s show we’ll also respond to a few clips from interviews that I conducted a few years ago with robotics experts Nigel Crook and David Levy, which are very relevant to today’s conversation – so look out for those later. We’ll also be asking questions like: Is artificial intelligence a friend or foe of human flourishing? Does faith have any part to play in the robotics revolution and the race to upgrade our bodies and even escape our planet? And what does the future hold for humans? So Martin and John, welcome along to the show!
JW: Yes thanks very much, it’s good to be here.
MR: Very good to be joining you.
JB: Thanks for joining us. Martin, tell us first of all about the book you wrote a few years ago now – though there’s a recent new edition with a new preface – On the Future. What made a renowned physicist and astronomer want to look at what the future holds for life on earth?
MR: I think I’ve always been concerned with these broader issues. I remember going back to CND demonstrations when I was a student, Pugwash Conferences in the 1980s and 90s, and then in more recent times I was president of The Royal Society, which meant I even had an obligation to get engaged with these more social issues. And being an astronomer, I think one has a slightly special perspective, because one is aware of the vastness of space and one is aware in particular of a far future.
To expand on this slightly, I think most of us are aware that we’re the outcome of four billion years of Darwinian selection here on earth, but all too many people think that somehow we humans are the culmination – top of the tree. No astronomer can believe that. Because we know that the sun is less than halfway through its life, and the Universe may have an infinite time ahead of it. And so I think we all realise that we are at an intermediate stage, but a very important stage – because this century is the first when one species, namely the human species, can determine the entire planet’s fate. Because there are more of us, and we’re more empowered by technology – which is advancing fast. And that was really the theme of my book On the Future – what is going to happen this century and why we are going to have a very bumpy ride, in my view.
JB: Yes, and it’s a very well-written book – very readable indeed. But you do, interestingly, just mention God towards the end of the book; you say you’re not a believer, but you do appreciate taking part in the rituals of the Anglican Church. So what’s your relationship with faith, up to this point, Martin?
MR: Well I think my view is I participate in the ritual of the Church of England; I was brought up in that. Had I grown up in Iran, I would in the same spirit have gone to the mosque. Because I value religious ritual and practice as something which brings us all together when so much divides us, and which makes us aware of our heritage from generations past, when so many things are transient. So I have those views, but I don’t believe any religious dogma. Because being a scientist, I know that even the hydrogen atom is pretty hard for most people to understand. So I’m very, very sceptical indeed of anyone having anything more than a very incomplete, metaphorical view of any deep aspect of reality. So I wouldn’t say I’m a believer in any particular religion.
JB: Well it’s fascinating anyway to be able to have you on the show today, to explore some of the ethical issues that arise from the way technology is going, and perhaps we’ll come back to some of how that integrates with faith issues and that sort of thing later on in the show.
But John, welcome to the show as well – it’s great to have you on. You’ve been writing about the future as well with The Robot Will See You Now. What made you want to delve into that particular area?
JW: Yes, thanks Justin, and it’s just great to be in this conversation with Martin; I’ve sat in the audience and listened to him speak a number of times and engaged in previous conferences. But my background is I’m a medic by background; I specialised in the care of newborn babies and in providing very technologically based medicine. And I went into that field because I loved children and it was a very exciting and interesting rapidly developing field. And it’s really only after I was in that field, that I realised I was in an ethical maelstrom; there were all kinds of new, challenging ethical issues about the implications of advancing technology. And so increasingly, I moved away from the frontline clinical work and really focused on those ethical problems.
And particularly… the fascinating thing to me is that as technology advances, it raises very fundamental age-old questions. And in particular, I think two age-old questions it raises – which I suspect we’ll continue to discuss today – one is: what does it mean to be a human being – how do we understand our humanity in the face of ever-increasing technology? And two: what kind of society do we want to build for the future? And so as I’ve been reflecting on these issues, I – like Martin – think that developments in artificial intelligence and robotics are extraordinarily significant, particularly for the future.
And so this book was a very preliminary attempt to try to raise some of the issues. I and my co-authors – I edited this book with a number of other academics – felt that it was a kind of scoping exercise; it was just trying to lay out a range of issues which advances in technology were going to raise for the future. In particular, I felt it was really important for people like myself, who have a Christian commitment, to think: how on earth do we relate this ancient religion and faith, which has been around for thousands of years, to these completely new, challenging developments in AI?
JB: Well it is another fascinating book, full of lots of different insights from different authors. So we’ll make sure there are links to both of your books from today’s show, with your different perspectives. I thought just something that might be… sorry, go ahead Martin, you wanted to engage there…
MR: Well I just wanted to say how fascinating I found the chapters in your book; it was a really fascinating book and introduction to these concepts.
JB: Yes. And I thought what we might do just to begin, is actually just to get the conversation going, look at a little bit of a fun interview I had with a robot myself a few years ago! This is a clip from 2018, when I interacted with Nao, courtesy of robotics expert Nigel Crook, from Oxford Brookes University. So let’s just watch this interaction:
(INTERVIEW CLIP STARTS)
JB: Well I’m going to have a little chat with Nao, who looks very cute and is looking at me now – is that because he’s noticing where the sound’s coming from, is that right Nigel?
NC: Yes that’s right, it’s hearing your voice.
JB: How are you doing today?
Nao: I’m doing well, thank you for asking.
JB: What’s your favourite colour?
Nao: The camera in my head can see lots of colours; I like them all.
JB: Do you know what you were created for?
Nao: I was created to interact with people in different ways. For example, I think I could be a pretty good radio presenter.
JB: You’ll be after my job next!
Nao: I’m doing well, thank you for asking.
JB: Do you think artificial intelligence will benefit the human race?
Nao: AI is already helping people in lots of ways. For example, AI programmes can diagnose some human diseases better than human experts. Ask me another question.
JB: Well here’s one: can a robot have a soul?
Nao: No I don’t think robots can have a soul. But some functions that are traditionally ascribed to the soul can be simulated, such as emotions.
(INTERVIEW CLIP FINISHES)
JB: Well there you go, just a little bit of what was possible then, and it’s only advanced in the years since – voice activated devices, robotics, becoming ever more advanced. Maybe starting with you Martin, what are some of the things you feel we’re benefiting from with this advance in robotics and AI currently?
MR: Well clearly the reason that AI is so impressive in many ways is because of the speed of computers; it can assimilate and analyse huge amounts of data, and it can learn the rules of games and learn protein folding and things like that. So it’s the speed that gives it a huge advantage. And of course in linguistics – I’ll say I’m not an expert at all on this issue – it can study huge amounts of text; it learns to translate by looking at a billion pages of documents in one language and another. I believe European Union documents are used for this; its boredom threshold is infinite, so it can learn from those! But it’s the speed that gives it an advantage.
But I’m a big sceptic about the extent to which robots will ever resemble humans. Because what they do is handle words – how words link together, syntax etc. – and the latest ones are therefore able to produce a whole paragraph of what looks like normal English prose. And they would similarly be able to compose a bit of dialogue for a play – a conversation between two people – and write that. And that’s something they can do just by manipulating words, not by having any view of what the reality is behind them. And so I think when we consider The Turing Test – or observe the conversation you just had – that’s what’s happening. It doesn’t mean that the machine has any concept of the real things behind those words; it just knows how to string words together.
JB: And do you think though that we will get to the point where a machine could in principle understand the meaning behind the words, as well as simply being able to replicate effectively what looks like comprehensible text?
MR: I really doubt that, unless they can interact with the external world through senses, and actually understand and have concepts of the world that we do. And that’s very, very different; a computer in a box will never be able to do that. So I’m very sceptical about these things.
JB: What’s your position on that, John?
JW: Well again, I would say that I’m not a technical expert in terms of all the complexities of machine learning and so on. But from a general perspective, I completely agree with Martin, and I think my major concern is that these things can be incredibly deceptive. And that’s because we as human beings, we’re hardwired – to coin a phrase – to anthropomorphize; in other words, to see a human resemblance and respond to it as though it was human.
And there’s an interesting story, right from the very dawn of computing, that one of the great computer pioneers, Joseph Weizenbaum, produced a program called ELIZA, which was a very crude, text-based programme which was supposed to respond like a psychiatrist. So it said, ‘tell me how was your day today’, and you said, ‘I have had a good day’ – you typed it – and then it said, ‘why did you have a good day?’ or ‘why do you think you had a good day?’ – and it was very, very simplistic. But Weizenbaum left his programme running, and his secretary in the lab, who wasn’t a computer expert at all, started typing into it. And basically she started developing this intimate relationship with this very simplistic programme, so much so that she was asking Weizenbaum to leave the laboratory while she was having this private conversation! So he wrote that he hadn’t realised the power of a very simple programme in promoting – I think he said – ‘delusional beliefs’ in those who use it. And so ever since, computer scientists have talked about the ELIZA effect: that even quite simple programmes can be very, very powerful.
And so as the technology improves, and as this sort of simulation becomes more and more effective, I do see real issues arising in the future. Just recently we’ve had the story of the Google engineer who felt that one of these more sophisticated versions of what you were having a conversation with had actually become sentient, and therefore had to be protected from harm. And so there’s a hugely deceptive element here isn’t there; the potential for abuse seems to be enormous.
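(Editor’s note: for readers curious how little machinery the ELIZA effect requires, here is a minimal, hypothetical Python sketch of the kind of pattern-matching responder Wyatt describes. The rules are invented for illustration; this is not Weizenbaum’s actual 1966 program.)

```python
import re

# Each rule pairs a regular expression with a response template.
# Matched text is simply echoed back as a question, which is
# essentially all that ELIZA-style pattern matching does; there is
# no comprehension of the words at all.
RULES = [
    (re.compile(r"\bI had a (\w+) day\b", re.IGNORECASE),
     "Why do you think you had a {0} day?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

def respond(user_input: str) -> str:
    """Return the first matching rule's response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # generic fallback when nothing matches

print(respond("I had a good day"))  # -> Why do you think you had a good day?
print(respond("I feel lonely"))     # -> Why do you feel lonely?
```

The entire “intelligence” here is a table of regular expressions, yet Weizenbaum found that even this level of mirroring was enough to draw users into apparently intimate conversation.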
MR: Yes. But I think it is delusional, in many ways, to think that these entities are really having feelings etc.
JW: The problem is it’s almost as though, as human beings, we are set up for this delusion. And it’s interesting, looking at the ways that the very youngest babies interact with the world – the fascinating thing is that the very first reaching out to the world of a newborn baby is relational. In other words, it is actually looking for human faces and responding in a relational way. So this ability to detect something that appears to be human and then to respond to it is very deeply part of our humanity.
JB: I wonder if we could maybe even now then turn to our next clip, because this very much looks at the whole issue of whether computers, AI, robots, could pass the so-called Turing Test, which is this sort of idea that robots will become indistinguishable from humans. Again, this is a clip of Nigel Crook, but also David Levy, talking about these issues:
(INTERVIEW CLIP STARTS)
JB: Obviously Nao is, in a sense, a very simple version of what is achievable with robotics. Do you think we are anywhere near the point at which The Turing Test could be passed at some point, where someone could interact with a humanoid robot and be unaware that it’s a robot – think that they’re actually interacting with a human?
NC: I think we’re a long way from that, actually. There have been recent claims that The Turing Test has been passed, but there have been a lot of questions about whether that actually is the case. Because the scenario that was used was a setup, essentially, to fool the person who was making that decision as to whether they were talking to a human or an AI. We have such a long way to go to even begin to approach human level general intelligence, that I think it’s a long road.
DL: In the area of speech recognition and human computer conversation, the technology is fairly primitive. However, I think that will develop and I think that probably within about twenty years or so, The Turing Test will have been passed – people won’t be able to tell whether they’re having a conversation with a human or with an AI. And I think the same is true of the appearance and movement and behaviour of robots; I think they’ll be incredibly human-like by the middle of this century, to the point where people will be falling in love with them and marrying them.
(INTERVIEW CLIP FINISHES)
JB: So there we go – that’s two views from two different people – quite different in many ways. I wonder who you agree with more, Martin first of all, of those two views on whether robots will pass The Turing Test and so on?
MR: Yes. Well let me say I’m not an expert at all, but my view is that even if they do, this doesn’t at all mean that they are actually thinking or feeling in a way that a human is. So I think The Turing Test is actually a rather low bar, compared to seeing if these are entities that we need to take account of – should we worry about their welfare, should we fret if they’re unemployed or bored etc. I don’t think we’re going to ever have to worry about that, even if they can simulate conversation in a way that we can’t distinguish from a human behind a screen.
JB: Is that because in principle you don’t think a robot can attain consciousness in the way that a human being can?
MR: Well I don’t think the kind of robot that’s based on the digital technology that we use could. I mean, maybe one day it’ll be possible to understand how the brain works and simulate all the connections etc., and create some post-human entity, which perhaps will have feelings. Because it’s a long philosophical debate about whether consciousness and self-awareness is an emergent property, or whether it’s specific to the flesh and blood that our brains are made of – we don’t know the answer. It’s perfectly possible that there could be entities, which are superhuman in most of their manifestations, but nonetheless are zombies – have no self-awareness.
And to digress slightly, I’ve talked about post-human evolution on earth and far beyond, and I’ve mentioned these points, and I get a bi-modal response from people who listen to me. One is to say: isn’t it wonderful that we are just a stage towards the emergence of bigger and better brains, etc. Others say: well if these entities are just zombies, they may behave super-intelligently, but if they are not able to appreciate the wonder and mystery of the Cosmos, then how sad that they’re to be our descendants. And so this is a question which does affect the way we react to these scenarios.
JW: Absolutely, yes.
JB: I’d be interested in your response to that, John – do you think there could be consciousness, or do you think that is a uniquely human or biological trait?
JW: Well you know the more I’ve gone into this and read and reflected about it, the more I sense this is a very deep, dark hole, out of which it’s almost impossible to think your way. Because this is what philosophers call the problem of other minds: the only consciousness of which I am aware is actually the consciousness that is going on within me at the moment – we never have any real understanding of any other human being’s consciousness. And so of course we assume – I assume, and actually I think again we’re made to assume; we’re hardwired to assume – that what’s going on in your head is very similar to what’s going on in my head.
But interestingly, medically, there is no knockdown medical test for consciousness. In fact I have to say, with all our clever scans and everything else, the most sophisticated test we have is basically you prod someone, and if they go, ‘ouch’, then medically we say, ‘yes, this person is conscious’. You know, it’s a very sophisticated kind of level of… And so when you then ask the question: well what does it mean – could a computer be conscious? – and how could we ever know? This is where I think it all gets deeply, deeply problematic, because what I guarantee… and the episode we’ve just seen with the Google engineer is a kind of hint of what is to come. And that is that as these systems become more and more sophisticated and human-like, more and more people in our society are going to say, ‘I don’t mind what you clever scientists say; as far as I am concerned, this is a person – this is conscious, this is sentient, and I insist that we as a society do something about it’. And interestingly there is no kind of knockdown test that anybody can do and say: I can prove to you that this machine is not conscious. So in other words, it becomes much more a question of the surface appearance – or to use a long word, the phenomenology of the experience – than it is what actually is happening underneath.
JB: Any thoughts on that Martin?
MR: Well I agree completely with that, and of course, people may start to consider the feelings and the satisfactions being derived by adequate stimulus of these robots, etc. And of course this does affect very much the extent to which you are happy to be cared for by a robot, as against a human being. You want to feel they somehow empathise with you, and whether it’s true or not, what you think about it does make a difference.
JB: Yes. I mean to that extent, Martin, do you think that we are going to be happy to simply increasingly engage with robots, engage with automated answerphones, algorithms, you know, to the extent that we won’t necessarily know when we phone up a company, whether we’re talking to an algorithm or a human? Will it make any difference, ultimately, to us?
MR: Well I think it will. I mean, I think it’s rather regrettable. But if you think of the role of robots as carers – then in some contexts obviously we welcome it, to deal with bedpans and things like that, probably you’d rather have a robot. But I think to have someone who’s actually going to be someone to think of your welfare etc., you want it to be a real human being. And as we observe – when we look at rich people who have the choice – they always choose real people to look after them, not a robot in the way that the Japanese are forced to.
So to link this with a thought about call centres and all that, one thing I discuss in my book is that I think it would be a real win-win situation if the companies that make the robots, and these big multinational conglomerates, were properly taxed, and the taxation revenue used to fund huge numbers of dignified jobs for carers, teaching assistants, gardeners in public parks, and people like that, where being human is important – and provide those jobs for those made unemployed by the automation of Amazon warehouses and call centres. So that would be a win-win: the people in those mind-numbing jobs could be replaced by robots, and they could be provided with dignified jobs from the public purse to look after people.
JB: It’s an interesting way of trying to deal with it. We’re going to go to a quick break and we’ll come back to talk more about the whole issue of robots now caring for people, increasingly – we’ve got another clip from our other guests and we’ll watch that. But we’ll continue the conversation on ‘Robots, Transhumanism and Life Beyond Earth’, with Martin Rees and John Wyatt, in just a moments time.
JB: Welcome back to The Big Conversation. I’m joined today by John Wyatt and Lord Martin Rees; we’re talking about ‘Robots, Transhumanism and Life Beyond Earth’. As ever, there are links to both my guests on today’s show. We’re going to continue talking about the interesting dilemmas and challenges that the advent of advanced robotics poses in the world today.
Here’s another clip that I’ll share from two robotics experts – again this is Nigel Crook and David Levy – this time talking about the way that robots are increasingly providing care, even sometimes developing romantic relationships – whether that’s a possibility in the future. Let’s hear what they have to say:
(INTERVIEW CLIP STARTS)
JB: We’re already seeing I think in places like Japan, robotics starting to be integrated into healthcare and care for the elderly, in particular. And you can see the advantages of that, in terms of saving on human resources and the ability for robotic companions and things to be there day and night and in all kinds of circumstances. At the same time, what are your feelings about effectively replacing human care with machine care?
NC: I don’t think we should ever do it. I think that technology – like robotics and AI – should be used to augment human care, if possible. The issue in Japan – and it will be the issue in this country within the next fifty years – is that there won’t be enough people to maintain the same level of care that we currently have.
JB: We have an aging population…
NC: We have an aging population, so there won’t be enough younger people to fulfil that role. And so for me, the question is: how can this technology be used to augment the care that humans give, rather than replace it.
JB: And why is that important for you, that we don’t simply replace human care with machine care?
NC: Because again, these are vulnerable people, and human-to-human interaction is always going to be far better to me, than machine to human interaction.
JB: I suppose my next natural question though is, if you could create the kind of robot that for all intents and purposes, looks, acts and feels like a human, would you be effectively giving that person human-to-human care?
NC: That is a really good question, and I’m not sure about the answer to that. It’s possible that we could reach that level of realism within a very specific scenario. But the problem for me is that robot would have to experience so much of life to be able to truly empathise with that individual, that that makes it beyond reach, as far as I’m concerned at the moment. What you could do is you could have a monitoring system or a simple interaction with an elderly person, which keeps them engaged, perhaps helps them to remember their past – show some photographs of the past and enables them to sort of keep that sort of memory alive. But you would also need that human-to-human interaction as well.
JB: When it comes to advances in robots that are used for sex, for relationships, for perhaps companionship and other things, generally are you quite positive about movement in that direction?
DL: I’m very positive, and the reason I’m so positive is that I see robots as being the answer to the prayers of all those millions of people all over the world who are lonely, because they have no one to love and no one who loves them. And for these people – they might have psychological problems, they might have psychosexual problems, social problems, they might not be appealing to the people they would like to be appealing to, all sorts of reasons – they have a huge void in their lives. And this void will be filled when there are very human-like robots around, with whom they can form emotional attachments, have sexual relationships and even marry them.
JB: Is that not an indictment though of humans, that we are not fulfilling, that there are so many people who are lonely. Is it not something of an abnegation of our duty to be caring for each other, rather than simply saying, ‘we’ll let the robots do it’?
DL: I don’t think it’s realistic to believe that we could possibly be sufficiently caring for each other, to take care of the huge number of people who suffer from such problems. I think that robots are really the only answer. A lot of people ask the question: why is it better to be in love with or have sex with a robot than with a human? To my mind that’s the wrong question. I think the real question is: is it better to have love and sex with robots or no love and sex at all?
JB: Is there a danger that we change the way we think about sexuality and romance, if essentially you’re doing something to a subject, rather than it being something that can equally have its own range of emotions and will and psychology and so on?
DL: To some extent… I mean there are people who argue that sex robots objectify women, because most sex robots in the first instance are going to be female in form. I disagree – I don’t think they objectify women. I think what sex robots do is to humanise robots. So I don’t see a problem there at all.
(INTERVIEW CLIP FINISHES)
JB: Well, lots to respond to in those two very different views on the role of robots in caring and even in romantic relationships. Martin, you look like you want to respond immediately – go ahead!
MR: Well, there’s a certain inconsistency isn’t there, in that if you think that these robots are going to be realistic enough and human-like enough to be emotional partners, as it were, then should one consider their feelings and say, ‘do they want to be linked with this repulsive person?’ You’ve got to ask that question. And so I think there’s an inconsistency in the aim of what’s being said, that if they are sufficiently like a human to be a substitute, then we have to think about their feelings.
JB: Yes, absolutely, robot rights and so on! I mean, I suppose these are the dilemmas that are thrown up by all of this, aren’t they, John? What’s your feeling on those…
MR: I don’t think we’re going to get there, but those who take this view seriously are going to have to worry about this for consistency.
JB: Yes. John, go ahead…
JW: Yes, I completely agree. I think this is a fascinating point about we all understand the difference between rape and consensual sex, and yet the question is exactly that. If you have a sex robot who is, to all intents and purposes, behaving like a human being, then do you have to ask consent? Do you have to take their feelings into account? And if you’re so repulsive that no human being wants to have intimate relations with you, then maybe… There is a very serious element to all this – we’re partly making it humorous, but there’s a very serious element to this.
And it is this confusion… What I see happening is that previously, until this point, we all pretty well agreed that there was a fairly clear barrier between where we were having relationships with other human beings and where we were having “relationships” with beings that were non-human. And yes it’s true that the little girl has her teddy bear or her doll and talks to it and so on, but actually the little girl knows perfectly well that this is not a real baby, and when the little girl actually meets a real baby, she behaves in a completely different way from the way that she would behave with a doll. So up until this point we’ve always had, psychologically and relationally, this understanding that there’s a difference. And now what we’re doing is creating this third entity – this blurring – where it’s not really human but it’s almost human, and then how do I respond to this?
And my greatest concern is with children who grow up with this. If your earliest experience – your earliest memories – are talking to Alexa and playing with your robotic dog, what does that do? And Sherry Turkle, who’s a very thoughtful commentator on all this, has written, ‘the problem will not be whether we come to love our robots. The question will be: what will love mean?’ In other words, it’s that redefinition of relationships, and in particular a kind of transactional relationship.
I mean, historically – and I think this is partly from a Christian perspective – Christians and others have always understood human relationships as touching on something immensely profound. At the highest level, this reaching out to another, this giving to another and so on, is something deeply profound. And what it’s being replaced by is this kind of transactional understanding: that I press these buttons, this thing smiles at me, it makes me feel happy – that’s a relationship.
JB: Do you share those concerns as well Martin?
MR: Yes, I couldn’t have expressed them so well, but I completely agree with that reaction to this scenario. Completely agree.
JB: I mean, perhaps we’re not at the stage where people are really forming full-on romantic relationships with robots. But we are seeing, in places like Japan, an elderly population increasingly being cared for by autonomous robots of some kind, with at least companionship being given by algorithmic robots and so on. Again Martin, do we just take a pragmatic view and say, well, if there aren’t enough people to go round, that’s the way we’ll have to do it – we’ll have to employ the services of robots that have a kind of quasi-human relationship with people? Or do you think we should actively strive not to go down that road?
MR: Well I think we should strive very hard not to. And of course it’ll be easier to avoid that route if we’re prepared to have a massive redistribution of funds, so as to publicly support large numbers of people who will act as carers for the old and assistants to teachers, to look after the young, etc. So I think we should try very, very hard to provide care for all who need it, from real human beings. And this is in the interests of those who might otherwise be unemployed if they’re not digitally expert – they’ve got to have some employment where being human is important. And also I think it’s more dignified for an individual to be looked after by another human being.
JB: Why do you say it’s more dignified? What is it about the human interaction that you can’t replicate the dignity of with a robot?
MR: Well, you think that there’s an entity that really cares about you. And you’re not convinced that that is the case for any of these robots.
JB: So in that sense, it’s feeling like you’re interacting with someone who is like you and has the same sort of feelings and desires and compassion and so on.
John, where does your concern come from? Is it a similar place to Martin’s?
JW: Well I think it’s very interesting isn’t it, that one of the fundamental drives we have as human beings – and from a Christian point of view, we would say this is part of our being created in God’s image – is our desire and longing for truth, for authenticity, for meaning and significance. And therefore, however clever the simulation and the simulacrum, it is ultimately still a simulacrum.
I’ve been doing a thought experiment, trying to get my head around this. And I was thinking: I’ve been happily married for nearly forty years – suppose it turns out that my wife is actually a Russian agent in deep cover and that unknown to me she has her own handler and she’s been playing this part for forty years. Suppose I go to my grave and I never know that she actually was a double agent – does it matter? Because I thought I had a happy marriage; I enjoyed the marriage. And actually, I think most of us would say yes, it does matter, because we were living a lie. And it matters; truth, authenticity matters.
And that’s why I think that gets at the heart of what Martin said about the dignity of being cared for. You think of this elderly person, perhaps slightly confused, who thinks that this program genuinely cares about her and is there for her and is compassionate. And actually, it’s all clever programming. And intuitively we recoil, and we say, ‘there’s something wrong about that; there’s something inauthentic’.
Could I just say something else about this question of… I often hear this narrative that there are just not enough people to do the caring. And like Martin, I would totally want to push back against that. Because of course it is true that there is an acute shortage of carers in our community and across many Western countries. But the reason for that is that their terms and conditions, their pay, their status – the way they’re treated – mean they’re regarded as absolutely the bottom rung. And there’s no shortage of people wanting to do MBAs and wanting to become commercial lawyers and so on. And so the challenge we’ve got is to reimagine the human caring role as something to be regarded as a high-status, dignified, well-trained, well-paid job. I think there are more than enough human beings to take on that job, if we can only present it in that way.
JB: Martin, go ahead…
MR: I completely agree with that. And of course, those who are going to be displaced first by robotic advances are those in the mind-numbing jobs, like call centres. And they need to be given some more fulfilling role, and among those roles would be carers.
JB: Yes. Turning from the issue of jobs and roles, and the ways in which robotics may be a help but may also start to change the way we interact with each other and think about relationships, there are also the ethical implications of the way in which we’re handing over so many responsibilities to AI, to machines, and so on.
You know, one of the most often talked-about examples is self-driving cars, which are becoming a reality now – we are seeing self-driving cars starting to pop up, and the laws being put in place that would allow for that kind of reality. But then at the same time, we hear about these thought experiments about what kind of decision a self-driving car will make when it is faced with either going off the road and hitting the lady with her pram, or hitting someone else on the other side, and so on. Someone, I suppose, has got to be programming the ethics of these robots, haven’t they? Martin, what are your thoughts on this? Are you concerned by the fact that we are effectively having to build some kind of moral view into our robots? Or is that just an inevitable part of the progress of technology?
MR: Well we certainly have to do this – confront what philosophers call the Trolley Problem, in deciding how robotic cars react in an emergency. And I personally think it will actually be a long time before we have the so-called level-five fully driverless cars, where the passenger can sit in the back as though they’ve got a chauffeur – I think that will be a very long time coming.
But I think what we are facing immediately is a lot of decisions which affect us, which are being taken by AI. And of course AI has the advantage that it can work through a huge volume of data very quickly. But I certainly think that if we are going to be recommended for parole if we’re in prison, or recommended for an operation or anything like that, or even denied credit by a bank, then we ought to be able to contest that decision. It’s not enough to say it’s been done by a robot – which on the whole has a better record than a human, of consistent judgment. We want to be able to contest this.
And of course, going back to what John said, robots can be helpful. I mean, I know in the medical area, in radiology, a robot can have scanned a hundred thousand chest x-rays to look for signs of cancer and things like that, which is more than any human doctor could in a lifetime. And so obviously that’s one example where the AI can help with diagnosis by a real doctor – and there are many cases like that. But I think to delegate any important decision to a machine, when we can’t question it and can’t be sure there aren’t some emergent bugs in the programme, is very dangerous, and we are heading that way already.
JB: Go ahead, John…
JW: Yes, I think this is a really important issue, which we’re wrestling with in healthcare. Because it’s absolutely right that advances in deep learning technology are bringing all kinds of new possibilities, including the interpretation of scans and pathology results, basic science insights and so on. And one of the interesting things that this kind of automation technology does is decompose traditional professional roles. Whereas previously we thought we knew what a medical doctor did – what their role was, what their training was and so on – what AI is increasingly doing is decomposing the different tasks that doctors do, and many of the tasks that doctors have done in the past will now be better done by machines.
And then the question is: well, what is the value added – what is it that the human being brings, and do we need to train physicians and spend a million dollars and so on in training a human being? And I think there is definitely an important role. Part of what you want as a patient, when you’re facing catastrophe and disaster, is somebody with humanity and wisdom. And the role of the physician, it seems to me, is to be a wise friend. That means using all the information provided by the technology, but ultimately providing that human wisdom, which is what you so desperately need when you’re in this very vulnerable position as a patient.
And I think it’s going to be a challenge, because who takes the responsibility? If the doctor makes a decision because the AI has said the scan shows this, and it then turns out the programming was faulty, who carries liability? Who carries responsibility? And interestingly, the problem with these deep learning systems is that you cannot question them; you can’t interrogate them and say, ‘why did you come to this conclusion and not that conclusion?’ At least with a human being, you can say, ‘why did you think it was cancer?’ and you can push back…
JB: Does that concern you, Martin – the idea that we’re developing almost black boxes in their own right, these self-learning machines? I know that, for instance, these amazing computers have learned Go in twenty-four hours and then beaten the world’s best Go players and so on. Even the people building the machine don’t really know how it did it; it just looked at the rules and started to do its algorithmic thing. So it’s almost as though we are already creating machines that are going off and doing their own thing, in that sense.
MR: Well, I think that there are certain kinds – and playing Go or chess is an example – where, through speed, playing against themselves thousands of times in an hour, they can learn very fast. And it’s similar learning to interpret x-rays of lungs and all that; they can do these things. But I think it’s important to bear in mind there is a rather complicated gap between what machines can do well and what they can’t do well. What John said about medicine is one example. But it’s certainly not the case that white-collar jobs are going to be harder to automate than blue-collar jobs. I mean, I suspect our accounting can be done by machines, and a lot of our routine legal conveyancing and things like that can be automated. But on the other hand, plumbing I suspect can’t, you know. The plumber who comes has to poke around in your roof to find where there is a leak; one can’t see a robot doing that in the near future. And in gardening, it’s true that you can go from having a spade to having a bulldozer, but neither of those can design a nice garden. So I think gardening is another so-called blue-collar job which is going to be very hard to automate. So I think there is going to be a sort of redistribution of human labour, but it’s going to be rather complicated. And of course, in most cases, the machine is going to be supplementary, complementing the role of the human rather than usurping it.
JB: And do you have any fears – and this is the stuff of sci-fi movies and books and so on – that humans might at some point, as it were, become fed up with the number of robots taking their jobs, and perhaps they’re not seeing the benefits that should accrue from all these labour saving devices? Rather they’re just feeling that I’m being left behind, and so on. Is there any danger of a sort of, I don’t know, Luddite kind of smashing of the robots kind of rebellion or anything like that?
MR: Well I mean I wouldn’t call it a danger necessarily, because it may very well be that we decide that we want to slow down the development of some kind of technology, because it’s not cost-effective, it’s not beneficial. That has happened – if you look at the history of technologies, some have developed very, very fast. Smart phones, for instance, spread around the world far faster than anyone predicted. But on the other hand, supersonic flights – we’re still using the same kind of planes as we were fifty years ago. The jumbo jet first flew in 1969, Concorde came and went, and that’s an example where we could have had a rapid development of technology, but it was thought it wasn’t commercially or socially acceptable.
And so in the same way, it may well be decided that we don’t want to develop these technologies further. With the smartphone, maybe that’s saturated – maybe the iPhone 24 will be not too different from the iPhone 13, and that’s an example. So technologies develop fast and then plateau, because of lack of public demand or because of concerns, and that may very well happen to some of these uses of machine learning. We should welcome it.
JB: I’d just be interested John in your perspective, because another big sort of concern that’s often raised is the idea of a singularity or at some point the machines sort of take over and sort of decide humans are dispensable – ‘we can get on better with the job we’ve got in front of us without humans’. I mean, is that just again sci-fi, or do you think there is a genuine concern that we could be heading in that sort of direction, John?
JW: Well I think the idea of a singularity, which is that at some point we suddenly create super-intelligent machines, and then within a few minutes they become so ridiculously intelligent they then take over the world – this is just pure science fiction, and it’s not a genuine risk. I think the risks of AI are much more to do with malevolent human beings using AI for nefarious purposes, and the power of the simulation and of deception is absolutely enormous.
But what I do think I see happening is an increasing ‘virtualisation’ of many people’s experience. I mean, we’re already spending so much time, aren’t we, sitting on our backsides staring at screens. And as the new generations of virtual reality and augmented reality become available, I think there are going to be increasing numbers in our society who start to think that actually, living in a digital world is preferable. In the real world I’ve got no money, I’ve got rampant inflation, we’ve got warfare, I’m hanging on by my fingertips, the oil prices and so on. I go into my virtual world and all of a sudden it’s fantastic. I mean, I can do whatever I want to do; I can experience things, I can play roles. And this kind of dystopian view of the future, where the virtual starts to become more and more attractive – I think that’s a real threat.
JB: Yes. Do you agree with that, Martin?
MR: It’s like being on drugs that distort your perception of reality. And so I agree, it would be deplorable if that happened. And we are heading that way.
JW: So I think one of the real questions then is, what is it… Because although Martin and I tend to agree a lot, there are a lot of people out there that would take the contrary view, and they would say: this is the future – this is the way it’s going. And rather than being Luddite like we are, and say, ‘isn’t it terrible’, we should embrace the future – the future of humanity is merging with the virtual. I mean, Elon Musk, for instance, well-known entrepreneur, is saying the only hope for us as human beings made out of meat is that we’ve got to find a way of connecting ourselves to the digital technologies, so that it doesn’t take over. So how we respond to that I think is going to be really important in the future.
MR: Well indeed, and of course if we got too dependent then we’d be vulnerable to breakdowns. And of course one of the things I worry about very much is, as you already said, malevolent use of these powerful technologies, which enables just one person or a few people to cause a massive cyber attack which shuts down the electric grid or something like that. Or more insidiously, just breakdowns, which are so complicated to track down and repair, that we become helpless. So there is a big risk that as these things get more complicated, then we become far more vulnerable, and that’s another reason for not becoming too dependent on these things.
And also, the other new thing is that these technologies span the globe in many cases. So if something goes wrong, it’s not localised in the way that most disasters were in the past – it could spread globally. And indeed, one of the themes in my book is this new concern that global cascades can occur from bio or cyber errors, or from terrorism.
JB: Well look, we’re going to go to a quick break and we’ll be back to conclude today’s discussion. In the next segment I’d like to talk about that concept which left someone – I think it was Max Tegmark – weeping on the side of the street, when he considered the idea that one day humans would be completely replaced by robots stretching far out into the galaxy, who knows. And we’ll talk about maybe what that looks like in the future. But we’ll come back to all that. My guests today are Martin Rees and John Wyatt; we’re talking today about ‘Robots, Transhumanism and Life Beyond Earth’ and much more besides. We’ll be back very shortly.
JB: Welcome back to the final part of this week’s edition of The Big Conversation from Premier Unbelievable. It’s been so good to be joined by John Wyatt, he’s the author of The Robot Will See You Now – exploring where technology and transhumanism is taking humanity, from a Christian perspective. If you want more from John: johnwyatt.com – and do check out the Matters of Life and Death podcast that he hosts with his journalist son, Tim. Also joining us today on the show has been Lord Martin Rees, who is the Astronomer Royal, former Master of Trinity College Cambridge, former president of The Royal Society – many other illustrious titles that I could list. But his recent book on the future, On the Future: Prospects for Humanity, is really the central ideas from that that we’ve been discussing today. We’ve been looking at ‘Robots, Transhumanism and Life Beyond Earth’ – martinrees.uk, for Martin’s website.
I did want to open up that question of what the long future may look like. I’ve mentioned transhumanism, and there are lots of ways in which that crosses over between robotics and biotech and everything. But people even question whether humans will be part of the future – you referenced this a bit earlier, Martin. I know that you’ve also published another recent book called The End of Astronauts, where you’ve said that really, humans won’t be involved in space travel in the future. Which is a rather sad prospect really – it doesn’t sound quite as romantic, you know, to say that rather than one small step for man, it’s one small step for a robot somewhere in a distant galaxy. Just explain why you think that, on that level, if there is going to be further exploration of space, it won’t involve humans, it’ll involve robots. What’s your reasoning, Martin?
MR: Well robots are becoming vastly more capable of course. And the practical need for humans in space is going down, because robots can now assemble big structures in orbit – they can build a structure on the moon, a big telescope or something. And if you think about exploring, then robots on Mars will soon be able to actually do what a geologist could do; they can already travel around Mars and dig up samples, but soon they’ll have a bit more intelligence and they could do it better. So of course they could do these things and of course they’re hugely cheaper. If you want to send a person to Mars, then you’ve got to feed them, keep them safe for a two-hundred day trip and perhaps bring them back – that’s hugely more expensive than sending a robot, which just hibernates and you can leave there. And so the practical case for humans is getting weaker. And obviously, as you say, there’s an inspiration in sending humans. We look back, if we’re old enough, to Neil Armstrong’s one small step on the moon in 1969, and that was the high point of exploration in space; no-one’s been further than the moon since.
So the line I take in my book is that if we are going to have humans going into space in the future, then they should go as an adventure. And this should, I think, not be paid for by the taxpayers – because if the taxpayers send civilians into space, they’ve got to be very risk averse. Whereas if the billionaires pay for it, then first of all it’s not our money in the same sense. But secondly, they can launch the kind of adventurers – people like Sir Ranulph Fiennes, or people who go hang-gliding and all that, and even some people who would be willing to go with one-way tickets. Musk himself has said that he wants to die on Mars, but not on impact. He’s now I think 51, and when he’s 90 maybe he could go, and maybe we should cheer him on!
So I think robots of course can do all the exploration, and they can of course go far beyond Mars, where humans never could. But the role of humans is nonetheless very interesting. Because suppose that there is a small community of these bold pioneers living on Mars – which would be less comfortable than living at the South Pole or on the ocean bed, but there could be people there. Then by the end of a century, they will want to use all the techniques we will then have for genetic modification and cyborg enhancement, to adapt their progeny to this very hostile environment, very different from the earth. Now here on earth, we’re pretty well adapted, and also we are going to regulate all these techniques for ethical and prudential reasons. But these guys on Mars are away from the regulators. And so my line in our book is that if there is going to be a sort of post-human species, it will be the progeny of those crazy pioneers on Mars, who will have the incentive and not the constraints which we would have on earth. So that can happen.
And of course this leads to the question of the far future, because we know that the solar system has billions of years ahead of it. And even in a few centuries, there may be a species different from humans, maybe mainly robotic. And if they are electronic rather than of flesh and blood, then they may not want to stay on a planet – they won’t need an atmosphere, and they might prefer zero gravity. And they’d be near-immortal, so they could go on interstellar voyages. So a scenario of the far future is that entities which are the electronic progeny of humans who went to Mars will spread through the galaxy.
And of course this raises the question which astronomers are always asked – are there aliens out there already, or will the galaxy be awaiting our progeny? We don’t know. But electronic entities may be our remote descendants, and the ones who can actually go out into deep space. And that of course still begs the question of whether they will be zombies or whether they will be intelligent. But they will be capable of surviving for millions of years and making these long voyages. So that’s a scenario for the long-term future.
JB: Yes. John, what do you think of this sort of idea of post-humans who are effectively advanced robots who are actually our descendants, once we’ve died out or earth becomes uninhabitable – it’s them who will be taking things forward in the far future.
JW: Well, for me it’s a fascinating prospect, isn’t it? And I think it’s much closer to science fiction – these are very common science fiction tropes. And I think one of the fascinating things about this technology is that whereas previously, whenever a new technology came, we got the technological advance first, and then we got the artists and the writers and the novelists who wrote about it. So we have the industrial revolution, and then Dickens writes about it and so on. What’s unique about this revolution is that right from Mary Shelley and H. G. Wells, people have been speculating about the future of the human race and technology and post-humanism and so on. And so there’s a very interesting interaction between the science fiction and then the actual technology. And I know that some of the technologists in Silicon Valley are actually trying to make the science fiction come true. I mean, Elon Musk has said how much his science fiction background – Asimov’s Foundation and Empire, all these kinds of big themes of life across a galaxy – has shaped him, and he’s now trying to make this come true.
I mean, my own perspective is that I suspect the drive for human enhancement is going to be massive on this planet, not just on Mars. And in many ways, what we’re seeing already is a kind of low-tech human enhancement. I mean, the idea of cosmetic surgery, of gender changing surgery, of recreational pharmaceuticals – this is all low-tech transhumanism. And it shows that there is a sort of inexhaustible appetite for us to improve our bodies. And I suspect that as the technology advances, there is going to be more and more demand for sophisticated technology to improve our bodies. And I think the question we’re going to have to ask is: are we satisfied… I think there is going to be a campaign for real humanity. You know, there’s a campaign for real ale that says, you know, we don’t want this kind of fizzy stuff. I think there’s going to have to be campaigns for real humanity. That we think actually you know what, I think this old-fashioned, mark one homo sapiens, I’m quite satisfied with that!
JB: What about you, Martin?
MR: I completely agree with that. And I think we ought to restrain all these developments here on earth, because we’re well adapted to the earth and we don’t want to change it too fast. I agree with that. And that’s why I think it will be the crazy guys on Mars – who are away from the regulators, and who are ill adapted and have reasons for trying to change their descendants – who will pioneer these things, and we should cheer them on. But we should restrain that happening here.
And to take a more recent example, there’s the establishment just very recently of three labs, called Altos Labs, to extend lifespan. These are funded by billionaires who, when young, wanted to be rich, and now they’re rich, they want to be young again – and that’s not quite so easy! And there’s one in San Francisco, one in San Diego and one in Cambridge. And I think if they succeeded in extending the lifespan of a few privileged people, that’s a really fundamental kind of inequality. And that’s something I think we ought to put the brakes on. I mean, even if everyone could do it, it’s not obvious it would be a good idea. But certainly a few people conquering death to a greater extent than the rest of us is a really fundamental kind of inequality. So I’m very much onside that, since we are adapted to living on the earth, we’ve got to preserve it as it is, and to a reasonable extent preserve ourselves, and distinguish between remedying a defect and enhancement. One is good; one is less good.
JB: In that case, Martin, as someone who doesn’t have any particular religious beliefs, why for you is there that sense of sadness at the idea that humans might one day go extinct and be replaced by some sort of robotics, advanced technology that we’d created? Because as you said, some people just say: well, we’re just this blip in a vast cosmos going on through billions of years – why should we privilege our particular moment in the sun over what may come after us? Is it just a sort of sentimental attachment to humans? What is it, Martin?
MR: Well I suppose it’s a kind of chauvinism – being a human being myself, I do care about our species, and I admire the beauty and wonder of this world and its variety etc., in which we have evolved over a four-billion-year process. And I think we want to cherish this. But nonetheless, in the broader perspective, there may be all kinds of life out there already – even more wonderful than the earth. And there could be descendants – if we think on timescales of millions of years, not just centuries – descendants of humans who are also very different. So I think in the grand cosmic perspective we may be just a blip, but I think we’re entitled to a bit of chauvinism for the human species.
JB: What kind of drives your particular concern, John, for seeing humans flourish, rather than some robot ancestors of ours flourish?
JW: Well I think astronomers and other thinkers like Martin Rees who don’t believe in any kind of fundamental meaning actually have a problem. Because there’s a famous quote by the physicist Steven Weinberg, who said, ‘the more the universe seems comprehensible, the more it also seems pointless’. And it’s that – as we talked about earlier, this human longing for meaning, longing for significance. And so I think that we need some bigger story than just: well, we happen to be a sort of cosmic accident, which happened to evolve for completely meaningless reasons, and we happened by a great fluke to enjoy it, and then we went out – bonk!
I think there is a deep hunger for something more than that.
MR: I think I disagree in two respects. First, I think we can make our own meaning in life – I don’t feel my life is meaningless. I think most people can make a meaning in life and look at human achievements and try and ensure that we improve things – that’s one thing. The second point is that even if we would like the Universe to have a meaning, that doesn’t mean it does. So our wishes are not the same as the reality. So that’s why I don’t sympathise with these views, but I feel we’ve got to make the best meaning we can. And we certainly know that there’s a huge gap – indeed I think a widening gap – between the way the world is and the way it could be. And our aim should be to narrow that gap. And I think we should do that as human beings, and I don’t think any perception about what is happening in the wider Cosmos should diminish our motivation to make meaning here on earth.
JB: John, any response to that?
JW: Well I still struggle with how you can have… I mean, I think what Martin is expressing is a view which is often said, is that basically ethics, morality – all of these are just human inventions which our clever human minds sort of invent. You know, we invent that certain things are good and certain things are evil. We pretend that it matters whether you love people or whether you torture them. But actually, it doesn’t – these are just stories we tell ourselves. There is no fundamental linkage between morality and reality out there. And I think most people really would struggle with that. And my own belief, you know, as a Christian, is that actually morality is objective – it’s out there; we don’t invent it, we discover it. We discover what is real, we discover what is true, we discover what love means.
MR: Well of course there are some people, like Kant, who thought it was objective, but that doesn’t mean it involves a God. So I think there’s a separate question of the extent to which ethics is objective, from the question of whether it’s imposed by a God. And then of course there is the question of whether the statement that God is good is a tautology or not. So I think there are all these deep debates, and I’m not expert in any of them, but I don’t think we need to abandon the idea of improving people’s lives by their own perceptions, simply because we don’t have any religious beliefs.
JB: I suppose for me, there’s that whole concept of the idea that one day humans might be replaced by robots – and the question of whether we could actually know whether they would have the same meaningful experience as us, even if, to all intents and purposes, they appear to be having interactions. I mean, would it be a shame for you, in that sense, Martin, if the ability to appreciate our place in the Cosmos was ultimately lost, even if there was still apparently activity happening through these robotic people and so on?
MR: Oh, very much so – that’s why, if our remote progeny are zombies, I would find it a very depressing outcome. But of course, the Universe isn’t necessarily going to be the way that pleases us as humans. So that scenario isn’t less likely because I don’t like it. And so we’ve got to bear in mind that that could be a scenario. But also, we should bear in mind that there could be far more wonderful entities out there already.
JB: Who knows, who knows! Well that would be a whole other show to explore!
JW: We could be unique, but on the other hand we may not be.
JB: Yes. It’s been such an interesting conversation between you both. Any final thoughts as we end this? John, perhaps you’d like to start, and then I’ll pass to Martin.
JW: Well, just taking this conversation on, the Christian understanding of what it means to be human explains the fact that there seems to be this mysterious link between us, as these carbon-based life forms on a particular blue blob in the galaxy, and this vast Universe out there. And there is some kind of correspondence between what goes on in my mind and what is going on in reality. And the Christian answer for that is that we are made in God’s image, and therefore my mind is capable of thinking after the mind of the Creator. There is a Great Mind behind reality, and my puny little human mind is capable in some highly limited and… of course, I mean, I hear what Martin is saying. Nonetheless, there is some kind of correspondence between my mind and the mind of the Creator.
I think Martin just has to say that actually, as far as he knows, there is no mind behind the Universe, and it’s just an amazing fluke that this little carbon-based life form manages to do all these extraordinary things, including developing intelligence, reaching out, exploring the Cosmos. And the question is: which of those two stories makes more sense?
MR: Well I mean I certainly think it is remarkable that the human brain, which hasn’t changed very much since our ancestors from the African savannah knew about the everyday world, has made as much progress as it has, in understanding the Cosmos and the micro-world of the quantum. I think that is remarkable. And of course, had that not been the case, we wouldn’t have technology. But of course, we don’t know that we are unique in the Universe – there could be other civilisations out there in space, thinking just the same thoughts as you, thinking they’re unique. So we just don’t know that.
But I would say one other thing, which is that we are perhaps nearer the beginning than the end of the emergence of complexity in the Universe. We’re the outcome of four billion years of Darwinian evolution, and the future evolution we’ve been discussing is going to be mainly perhaps electronic rather than biological. And it’s not going to be Darwinian; it’s going to be determined by what I like to call Secular Intelligent Design – humans and their progeny designing better ones. And I think this makes a big difference, because evolution by Darwinian selection favours two things. One is intelligence, and the other is aggression. And that leads some people to think that any advanced aliens are going to be aggressive; they’ll want to come and conquer us, etc. Whereas the future evolution will favour intelligence, but not aggression. And so for all we know, there are these electronic brains thinking deep thoughts out there in space, not bothering us at all. And it completely changes one’s view about the likelihood of there being advanced life elsewhere in the Universe, if we believe that it’s developed by what I call Secular Intelligent Design. So that’s a possible scenario, because there could be places out in space where there’s been a billion-year head start, because they formed on planets around older stars.
So I just think we’ve got to bear in mind the huge variety, and the fact that we are still nearer the beginning than the end. Humans aren’t the culmination, and that’s one point on which I don’t resonate with most religious people.
JB: Yes. And if we do meet aliens one day, maybe they will be robots from another star system – who knows!
MR: Much more likely.
JB: There you go. Whether that’s something that inspires you or depresses you, we’ll leave to the viewer to decide. But John and Martin, it’s been just a really interesting conversation, thank you so much for being willing to engage in the way you have today, really appreciated it.
Again, if you want to find out more about the books I’ve mentioned from Martin and John: John’s book is The Robot Will See You Now – find that at johnwyatt.com.
martinrees.uk is the website of Martin Rees, the Astronomer Royal, and we’ve mainly been touching on themes from his book, On the Future: Prospects for Humanity, so again, that’s available too, as well as his more recent book, The End of Astronauts. And of course you can find this show – do share it with others. I hope it’ll interest them and inspire them as well. But for now, Martin and John, thanks for being with me.
MR: Thank you very much.
JW: Thank you.