Download Free Bonus Content
Sign up for the Premier Unbelievable? newsletter and be the first to see new episodes a whole week before they release! Plus you’ll also gain access to our bonus content archive packed with exclusive content and show updates.
About this episode:
Why have humans always sought immortality and could the rise of Artificial Intelligence and advanced technology enable humans to live forever?
These and many more questions are tackled by two leading figures in the AI community, as they discuss the ethical, philosophical and spiritual dimensions of new technology. Could we all end up living in a digital ‘heaven’? Is the rise of AI a threat to humanity? What are the ethics of creating ‘conscious’ machines? Could our universe be a computer simulation of an advanced AI? And where does the rise of new technologies leave the question of God?
Ros Picard, Affective Computing MIT
Ros Picard is founder and director of the Affective Computing Research Group at the MIT Media Lab and has pioneered the development of wearable AI tech. Picard, an adult convert to Christianity, often speaks of how her faith influences her approach to technology
Nick Bostrom, The Future of Humanity Institute Oxford
Nick Bostrom is founder and director of the Future of Humanity Institute at Oxford University. Bostrom a leading secular voice on AI and author of the bestselling book ’Superintelligence: Paths, dangers, strategies’.
We’d love to know what you think of the conversation! Take our survey
Share this episode:
More from this season:
- Episode 1: Atheism or Christianity? Which makes best sense of who we are?
- Episode 3: Identity, myth and miracles: Can we find a story to live by in a post-Christian world?
- Episode 4: The fine tuning of the Universe: Was the cosmos made for us?
- Episode 5: The Origins of Life: Do we need a new theory for how life began?
- Episode 6: Judaism and Christianity: How do we recover the Jewishness of Jesus?
Episode Transcript:
JB: Justin Briley
R: Rosalind Picard
NB: Nick Bostrom
JB: Well, hello and welcome to The Big Conversation from Unbelievable brought to you in partnership with the John Templeton Foundation. I’m Justin Briley and the big conversation is all about exploring the biggest questions of science faith and philosophy with leading thinkers across the religious and non-religious spectrum. And today we’re talking about God, artificial intelligence, and the future of humanity, and asking, is technology the key to immortality? This season is, of course, being recorded for obvious reasons remotely, but that does allow us to bring together some really interesting people from all over the globe and joining me today are Nick Bostrom and Rosalind Picard. Nick Bostrom is the Director of the Future of Humanity Institute at Oxford and a leading voice in AI and Technology. Nick’s the author of ‘Super Intelligence’, which warned of the possible dangers that the rapid development of artificial intelligence may pose globally, Rosalind Picard is director of affective Computing at MIT, harnessing the power of AI to develop things like wearable technology, machine learning that predicts and responds to human emotion and well-being and helping people with autism, with epilepsy, depression, and many other physiological conditions. So, today with Nick and Ros we’re going to be looking at the future of AI and asking big questions like, the ethics of AI, could machines become conscious one day? Might we all be uploaded to some digital heaven, and what if we’re all living in a universe simulated by God-like alien technology? Just a few of the little questions in life. So, I’m really looking forward to this this programme. So, good to have you with me Nick and Ros. Tell me, first of all, I will start with you Nick. You head up the Future of Humanity Institute. It sounds like something from a sci-fi film or something. Are you basically there to sort of predict doomsday, technological scenarios and just make sure they’re averted? What is it you do at The Institute?
NB: Yeah. Especially the latter. If I could either predict or prevent, I’d choose to prevent. We are an interdisciplinary research group. We have computer scientists, mathematicians, philosophers, physicists trying to get the clear understanding of some of the really big picture questions regarding the future prospects for intelligent life, threats to the survival of the human species and, in particular, the levers like where are the opportunities whereby some modest effort now, we might improve the expected value of the long-term future.
JB: Does that just involve a lot of sort of sitting around and thinking through issues. I mean, obviously you have to keep abreast of all the latest developments I suppose in technology doing that.
NB: Yeah. Well, a lot of our work is focused more on the kind of underlying questions rather than what’s next year’s iPhone going to be like. These kind of ripples on the great pond are not really our focus. But sometimes the most important questions are the ones that get the least attention. So, for example, if we continue to make impressive progress in machine intelligence what happens down the line when AI finally succeeds at, not just automating specific tasks, but providing the same general kind of intelligence and problem solving that has made us humans unique on this planet, and other more ways that the human condition could change in some more fundamental way. It’s very much big picture stuff, obviously.
JB: I know you don’t have any personal religious beliefs, Nick, but does the issue of God ever come up in your research, questions around religion, and that kind of thing?
NB: Well, it comes up in my own thinking, not so much in the specific research products we are pursuing. And I can’t speak for all of my colleagues there. But at some point, I think if you zoom out enough and start to think about really what is the human condition all about and what might it lead to, you do start to butt up against theological concerns. Unfortunately, that’s also the point where my competence kind of runs out to some extent.
JB: We’ll be testing your competency a little bit on the programme today Nick because I would love to open up some of those metaphysical questions that the AI pose around consciousness, ethics and even the god question at some level, but we’ll start obviously by talking about AI and what some of the possibilities and challenges are. Before we do that, Roz, welcome to the show as well. Now you’ve pioneered this this field of affective computing for over two decades, I think. Tell us firstly what it is, and what sort of things you’ve been developing in the process.
RP: Yeah, affective computing is computing that relates to, arises from, or deliberately influences emotion. The original idea was that computers were driving us nuts. They were so stupid and frustrating and the kind of intelligence we wanted to build for the future AI was not just playing games like chess or Go, or using language or solving maths, but was not being annoying, right? A computer or a robot that you would actually want to have around and that if you were frustrated or pleased it would see that use that as feedback and learn to do better and have skills for how to manage emotion. Not just recognising them but knowing how to help us manage our emotions in respectful ways, always with deference for people’s feelings, not sensing things if we don’t want them and so forth. So, I saw it as a hierarchy out of wanting to respect people’s feelings and that these were skills that we could teach computers how to do better at it.
JB: And what has that resulted in terms of some of the practical applications that you’ve been able to engineer in the process?
RP: We have been surprised. In the beginning the idea I think was very much about like, what would a robot do, what would a software agent do in interacting with a person to be intelligent? And what might happen deep inside a machine that most people don’t see that might be like how the human brain works that might enable it to think more intelligently, not that computers think like we do, but to have processes that give it better flexibility and generalisability and things like that. We have actually found that there are a lot more applications than that and many applications have to do with human health, for example we learned that the fundamental mechanisms inside us when we’re having feelings communicate with pretty much every organ in the body. And when you look at the messages, they carry they change when people are healthy or sick and by paying attention to these unseen aspects of our motion system, we could help people have better health. So, we have been working on a lot of cool things that have come out of that. Like we accidentally developed a seizure detector. Which actually became the focus of [name 00:07:30] PhD thesis and the focus of a spin-out company, Empatica, that’s now an FDA cleared device that I’m wearing here called Embrace. But we’ve followed the data where it goes and wherever we can look to see how if we can steer it to help make lives better.
JB: Well in that sense, it sounds very much like you’re obviously harnessing technology for a wonderful end, even if there may be potential pitfalls in other parts of the AI story. Before we start talking about that, I’m fascinated a little bit about your own faith journey, because I think you had an adult conversion to Christianity yourself, Roz. So how does someone who’s evidently intelligent, scientifically minded end up believing in God?
RP: There’s some longer versions of this people can read if they go to places like Christianity Today or other talks, but the short version is I was an atheist and a sceptic and a very proud one, and when I was challenged to read the best-selling book of all-time – the one that is so best-selling they leave it off the bestseller list because it dwarfs everything else each week – as I read that book I started to realise there was a lot of wisdom and intelligence in that book and that it wasn’t just a bunch of goofy made up stuff that I thought it might be when I went to read it. And through a process of reading, asking questions and many years of gathering data about it actually, I gradually felt myself changing. I became somebody who started to believe not only that a possible God existed, but maybe it was more probable than not, and I started to see if that made a difference in my life, and then gradually started to embrace the Christian worldview, and that has actually made a huge difference in my life. So, now I’ve run the experiment both ways and I definitely prefer the Christian worldview.
JB: And again, just briefly, how would you say that that particular change in your own life has impacted your career, your scientific work and everything?
RP: It’s funny, when Nick was saying we bump up against the boundary of theology I’m reminded of Donald Knuth when he was visiting, and he’s a practicing Christian, a very famous computer scientist and when he came to speak about theology, he said when it comes to theology – mind you everybody in the room thought he was the world expert on computer science – he said when it comes to theology, I’m a user not a developer. We aren’t inventing that, right? There’s some truth there that we are barely catching a glimpse of. And in my work the more I learn about how the human mind works and how the human emotion system works, I’m just in awe. I’m in awe of how fearfully and wonderfully we are made, and it inspires me, and another key piece of my faith is that all people are of equal worth and that has driven my willingness to work on topics with people who are often stigmatised for conditions that other people didn’t value so much. They didn’t value those people so much because they had those conditions which is a horrible tragedy, and knowing from my faith, this view that we’re all of equal worth, I have spent a lot of time learning from them, and it has just been tremendously productive in terms of just wonderful relationships, wonderful insights, and now pretty much all of our research has gone in really cool new directions because of things we’ve learned from their different kinds of minds.
JB: It’s great to hear that. Such an interesting area. I could spend the whole evening talking about that. But we’ve got all kinds of areas that we want to cover in today’s programme. I want to talk about firstly, the future of AI and whether the sci-fi movies have it right. Now, I don’t know what you think of this Nick, but I guess ever since things like 2001- A Space Odyssey, you know, where you’ve got Hal basically saying, ‘I can’t do that, Dave,’ and that sort of thing, we’ve had this fear that technology will one day essentially get a mind of its own and work against us in some way. How broadly optimistic or pessimistic are you about those kinds of foreboding challenges around AI, Nick?
NB: I’d say I’m a fretful optimist. I think it’s going to be transformative and that both very, very good outcomes and very bad outcomes are potentially on the cards, and we have a lot of ignorance as to how these kinds of things play out. We’ve never seen a transition to machine intelligence era before. We don’t have that much evidence to go on. So, some quite large amount of uncertainty seems appropriate in terms of our expectations.
JB: I mean, what are some of the potential challenges or even pitfalls that you anticipated in super intelligence? And I’m aware the book came out a few years ago now, so do you think that the landscape has changed a great deal in terms of people’s I suppose awareness of some of these areas of risk?
NB: It has changed a great deal, I think. Well, first in terms of actual capabilities, I think progress has been faster than was generally expected back in say 2014 when the book came out. Really exciting developments in deep learning and a lot of things every year. Another way in which the situation has changed is that when I wrote the book it was partly to draw attention to the need to do research in advance of how to align very powerful AI systems with human intentions, or other ways of making them safe and controllable back then. It was an extremely neglected area. It was kind of relegated to science fiction authors to just make up fun and interesting stories like the one about Hal and many, many others, but that was not like when you have people being paid to do research on the dung beetle and there’s like a whole community with peer reviewed articles studying every aspect of the dung beetle’s life and physiology, but this didn’t seem to qualify. But that has changed in the years that have passed. There is now a vibrant research community working on the alignment problem and we have a group doing that at the Future of Humanity Institute, but at Deep Mind, at Open AI and at a number of other places. There are now some really smart people working on this.
JB: Do you consider that there is a genuine threat at some point in the future of some kind of cybernet type situation developing where AI will get to the point where it’s become self-aware and develops its own attitudes towards the world and how things should be run and turns into some kind of cyber dictator. I mean, is that just sci-fi as far as you’re concerned, or is it a genuine thing that could happen if we didn’t, keep an eye on and moderate the way we develop AI?
NB: It’s easy to anthropomorphise AI systems and that usually blocks understanding, but it is true that if you have very powerful optimisation processes where you have some kind of learning mechanism that tries to gradually improve performance against some specified objective, that you often find that the ways in which the system goes about meeting the objective is unexpected and sometimes undesired. And the potential for this kind of unexpected and undesirable way of meeting the nominal specification that we have put in increase the more capable, the more creative, the more intelligent the system becomes and so, if we extrapolate beyond the systems for now, where it’s easy to go in afterwards and tweak it and change it if we don’t like, but if we imagine that one day this might be smarter than humans and therefore extremely capable and maybe extremely powerful, then it does seem that we are faced with a big challenge here to figure out how to actually align a superintelligence, something that is maybe way smarter than we are, and construct it in such a way that it is nevertheless on our side – a kind of extension of our volition. So, I’d say that maybe you could divide it into three categories of ways in which things could go wrong if we are unlucky in this transition. So, one is this kind of AI running amok, and the answer to that seems to be this kind of research to do the technical work, to figure out how to align powerful systems. The second category would be if these AIs become very powerful, they become very powerful tools and we know if we just look at the history of technology that humans have used technologies for all kinds of purposes – some evil. We use a lot of technologies to wage war against each other to oppress each other. So, there is a concern about if we have increasingly powerful tools, what would we do with them to each other? So, that broadly speaking points in the direction of improving governance, institutions and ethics and other practices, and we have a group working with that. And then there’s the third category as well which is quite neglected, which is not just what the AI might do to us or what we might do to each other using AI tools, but what we might do to these digital minds. If you take that at some point, they might become moral subjects if they can become conscious or otherwise, maybe have some of the attributes that give rise to moral status. Then I think there is a risk that we might mistreat them and that a great deal of morally undesirable consequences could flow from that.
JB: I’d be really interested in your thoughts on this Ros, where do you stand? Nick says, he’s fretfully optimistic – I love the phrase – of the future of AI. Are you generally an optimist or worried about where things are heading?
RP: My worries are maybe different than the ones usually parroted in the science fiction movies which are designed to be entertaining and cataclysmic. I think the real worries right now are more that people use AI to amplify the good or bad things they do, and there are people on our planet who don’t share this ethic that all people have equal worth. They think they have more worth, and their goal is to preserve and protect their power first and handle other people’s needs second to that. And many of them see an opportunity to use AI to achieve their goals and when they have a little power or more power than others, they take the AI and amplify that more and that’s a function that could be very hard to restore some equity to. So, my bigger worry actually is when very powerful people who don’t have others best interests in mind, or they may say they do, but they don’t, act that way. They use the technology to consolidate their power. I see AI as not operating on its own. I see it as something we have made and those things we’ve made, we’re ontologically superior to that which we make, not to that which we beget, but to that which we make, and to that we have to recognise there’s somebody paying for that system and they’re getting money from somewhere and they’ve got power from some source. And that’s where I think we have to keep our eye on the real worries. I think the rest of it is Hollywood entertainment and distraction.
JB: What about Nick’s contention that if we were to see an AI that became a moral agent? I mean, you’ve been developing software and hardware to measure our feelings. What if AI developed feelings of its own?
RP: Yeah. It’s not going to have feelings like we do, but we can make it look like – I mean already we can make it look like it has feelings – and people have already done moral attachment to things that aren’t AI, right? I remember as a sixteen-year-old, the car I had saved my money to buy and babied and took care of, and then one day I sold it to somebody else and when it drove away, I was like hmm and had tears for this thing and you treat it like a baby, and people name their things.
JB: They anthropomorphise them.
RP: Yes, right we do that. For example, you take a little toy robot – so, for example, when I, Robot first did this doll that was on the market and they took it off called My Real Baby, and it was the first robotic baby doll. They had to have this decision about whether or not the doll would cry and what would make the doll cry and they did make the doll cry, but they decided that if somebody strung It up by its toes and started abusing it, that they would not make it scream or cry because that could actually oogle on the sick kind of person who gets a sick delight out of doing that to something that looks like a baby, and even knowing it’s not, it still pushes our buttons. It makes us feel as if it is because it looks that way. So, we have to recognise people will feel that way. They will see that in things. Even ten years ago AI, much less the future, it will continue, and we need to be careful about what we subject people to. In fact, there’s interest in using robots for helping deal with people who have problems with things like sexual abuse or assault or not caring about somebody else’s feelings. Could the robot help them get some role playing in a way that doesn’t hurt another person?
JB: Nick, it’s interesting to hear Roz say that she doesn’t think ultimately a robot, or a piece of AI will feel in the same way that we do. I don’t know whether you want to clarify that, Roz. Do you think that this idea of a consciousness, in the way that we have consciousness, you don’t think that’s going to be a reality in AI terms?
RP: Well, I won’t say it won’t ever happen, because we don’t know what we don’t know, right? We can’t say something is a hundred percent impossible because we’re often surprised. I can say that with decades of super bright minds working on all different kinds of computation, biological computers, silicon computers, quantum, we do not see something that looks like conscious experience or emotional experience happening. And so, we don’t see a path to building it. It’s not something that’s just going to go poof! Here it is with this much faster of a machine, or this many more transistors or something, right? It’s not about the parameters, we know.
JB: Okay. What’s your take on that Nick? Do you think there could be a consciousness at some point from an AI?
NB: Yeah. I mean it’s not completely obvious whether current AI systems, say Alpha Zero – I mean how sure whether they are not conscious in whatever sense. Maybe insects are conscious, maybe some small animals. If the capabilities, the learning abilities and so forth – the behavioural repertoire starts to become comparable to animals, where a lot of humans would at least not feel confident that there is not even a glimmer of consciousness then it seems our confidence that these AI systems do not have similar kinds of internal life starts to seem a bit shaky. And certainly, as we are moving, over the coming years towards things that maybe become more mouse-like, let us say. Most of us would probably think mice have the ability to experience pain and to suffer, to be hungry and to have other mental states and combining that with our lack of like a really solid account of necessary and sufficient conditions for conscious experience. It seems like at least the uncertainty would spill over into hypotheses where some of these systems will have some degree of consciousness, long before you reach kind of human level capabilities. And certainly, I think it is possible in principle to have, say I digital computational system that that would be fully conscious. In fact, I think indistinguishable from the kind of consciousness that we humans have.
JB: How would you know I suppose is my question Nick, because inevitably, we can probably develop fairly soon and maybe we already are, algorithms and technologies that pass the Turing test where you would talk to it and you wouldn’t know you were talking to a computer, but as far as I’m aware, it’s still just a set of algorithms. It’s ones and zeros. It doesn’t mean anything to the machine. How would you ever know? It’s kind of like the zombie problem, isn’t it? How would you ever know that you really are talking to a conscious being and not just an incredibly clever set of algorithms?
NB: I mean, you could kind of ask the same one you’re talking to like Rosalind or me about how you can know that we really are conscious and not just like cleverly mimicking the types of behaviour that the conscious being would exhibit. And it seems broadly speaking we might point you to types of criteria. So, one is, what functionally are we capable of doing? Like if we can respond intelligently to your questions and react to your actions and so forth, that would seem to point to the presence of a mind and I don’t I think it’s really the case that we right now have the ability to pass the Turing test. Well, we have to be careful what we mean by it. There are sort of crummy versions of it that you could pass by means of cheap acts. So, if you have some naive judges who don’t know like the real questions to ask, they will all ask the same kind of questions and in the limiting case you could just imagine kind of having hard-coded answers to the most commonly asked questions and then probably you could trick some people for two minutes in an interview quite easily. And you can do a little bit better than that with current kind of big language models and so forth, but like actually doing like a half hour interrogation with people who know how to kind of probe, that’s way beyond current capabilities. But anyway, so those kind of functional capability related criteria would be one basis. Another you might try to point to is the internal architecture. Some people might think it’s not enough to replicate the input/output functionality of say a human for it to have the same mental experience. The internal organisation would also have to be sufficiently similar, and there of course then, the question arises at what level of similarity is it sufficient if you have say implemented in silicon the same kind of computational processes or do you need some lower-level isomorphism as well? But you could have both the same computational structure in principle in a digital computer and the same input/output behaviour, and I think that would ground a very strong claim to having the same kind of conscious experience.
JB: Why are you more sceptical of all of this, Roz?
RP: Well, there’s functional similarity where something acts like something and where we give it a bunch of specifications and say if it does these things then it jumps over this hoop and we label it something, like we label it Consciousness level 0.2 or something. And we can do that anytime. The more specifically we define the functions, the more likely we are to get the computer to be able to succeed at them in the constrained environment. We’re very good if we can specify all possible things that it should do or behave or patterns of them within a space of programming the computer to do things like that. I have ideas how to program it so that it will look conscious and take a list of functions of consciousness and make it look that way. But am I deceived that it actually is? No, it’s just looking like it is and there is a difference. It may not be a functional difference in an interaction. There are times, when you call for help for service on your computer or something and the person is just going through a list. I’m like, oh my gosh, you’re more like a computer than my computer. There are times when we just act according to this set of procedures, but we know that we have more to us than that, right? We know that we can be motivated by love, that we can be motivated by feelings and experiences that transcend these functional approximations that I program, and I know that because I program them, and I also know that if I didn’t program them, that machine wouldn’t act that way. It doesn’t care. It doesn’t feel, it doesn’t think or know and when it was switched off at the end of its amazing victory, it didn’t care.
JB: It sounds almost like there is a theological issue at stake for you here though, Roz. Obviously as a Christian you believe humans are made in the image of God. There’s a sort of divine spark if you like. Does that, for you, kind of cause you to sort of be sceptical that we could somehow recreate that from the ground up? We could from the dust create a soul because that’s ultimately God’s prerogative in your view.
RP: Actually, it gives me maybe a little more boldness than that in the other direction. I believe because of belief in the existence of God, and the lines at the end of first Corinthians 13, that ‘now we see as through a glass dimly, but then face to face.’ All right, what does that mean that at some future point we will see face to face and not just like a face, but it goes on to say, ‘and then we shall be known even as we are fully known,’ and that’s like, holy cow! We are fully known, right? That tells me that our consciousness may actually be algorithmically specified by someone: that it is fully known, and I know Nick has written about somebody simulating all of this perhaps. So, what’s really cool is that actually says, St Paul says there that we are fully known. Now we also have a choice of whether or not we choose to be fully known, and that’s pretty interesting too. And if we are fully known by one that we don’t only see like through a glass dimly, then that gives me hope that actually we will someday be able to understand completely how we work and that then it is a solvable identifiable problem. So maybe that’s not the answer you expect from a Christian, but that is actually more helpful than some of my colleagues that it is completely specified.
JB: That’s very interesting to hear in that sense. I mean, Nick I don’t know whether this rings any bells with you, but I guess you’re kind of agnostic on the God question, but would the development of some kind of genuine consciousness in AI, would that swing the evidence away from God in the sense that it would show that life can be, if you like, built from the ground up in a kind of materialistic way and therefore there isn’t any need for the sort of divine spark or anything.
NB: I don’t think it would. I mean it’s hard to generalise because there are many different beliefs about God based on a lot of different experiences and evidence and arguments. So, maybe some of those would be contradicted by our actually creating a human level system, but that would leave a lot of others. And it’s hard to say, there might be many different ladders on which people could climb up to some conviction about there being a higher plane of being and greater than human possibilities. Maybe this would be one of them. So, if you could actually bring into existence intellect greater than human intellect, it would kind of show off the existence of space above us. We talked a little bit, Rosalind referred to some of my ideas about the simulation argument, it can maybe open up the horizon even for somebody who starts out with a very naturalistic picture of the world that there could be a lot more in heaven and on Earth than is kind of dreamt of when we just look around us and take a kind of naive, realist interpretation of that.
JB: Talk to us about this simulation, because for those who aren’t aware of what we’re talking about here. The simulation hypothesis – what is it, and sketch it out for us and some of the implications of it.
NB: So, this is an argument which I published back in I think 2001 or 2002, that shows that one of three possibilities is true. Although it doesn’t tell us which one of these. So, the first one is that almost all civilizations at our current level of development if there are many in the universe, like almost all of those go extinct before they reach technological maturity. So, that’s possibility one. Kind of decide if that were true, but it could be. Maybe we all invent some dangerous technology that destroys. Okay. So, the second possibility is that out of the civilizations that do reach technological maturity, there is a very strong convergence in that they virtually all decide not to use their immense computational resources to create simulations – ancestor simulations – simulations of people like their forebears. Simulations that would be conscious. We can discuss that, but the third possibility is the simulation hypothesis, which is that we are living in a computer simulation, not in a metaphorical sense that the kind of laws of physics were usefully described like in terms of algorithms. But in the literal sense that we are inside some computer built by some advanced civilization. So, the simulation argument then, it was on probability theory and stuff, but yeah, it seems to show that at least one of these three possibilities is true. I mean, it’d probably take like another five minutes or something like that. So, depending on whether you want to – it tends to derail the conversation if you start to pull on it.
JB: That’s fine. I was going to ask, let’s suppose that option three is a genuine possibility. I mean, how would that make you feel personally, Nick, if that were true that all of this that we’re experiencing is a simulation, either just in your mind or that we’ve all got individual consciousness that have been simulated. I feel a bit like we’d be in The Matrix or something like that, everything we thought was reality is in fact an illusion, as real as it looks to us. For me it’s a very unsettling idea in a way. But what’s your response to it Nick?
NB: Well, I mean, it depends very much on what kind of simulation this was. You could imagine very good ones and very bad ones and all kinds of things in between. I don’t think it would necessarily follow that all our beliefs about ordinary reality, like the appearances of furniture around that that would be an illusion exactly I think. Those things would still exist. It’s just that the nature of their existence would be somewhat different than we had assumed, but it would be, I think, a more natural interpretation to say that we were a little bit confused about some important questions in ontology, but that our everyday beliefs, they still track the information structure around us. They’re still useful for guiding our actions. They’re still our best basis for planning what to do next which is to look at these patterns we see, whether they’re in the simulation or in basement level physical reality, extrapolate from those, build models, etc. I don’t think, if we are in a simulation, then we might just as well go crazy, and anything is equally likely to work as anything else. I think it would have some more subtle implications perhaps for what we could expect in the future and what would make sense to do but that would require a little bit more kind of work to unearth.
JB: Yeah. Roz, what are your thoughts because this is a kind of fairly mind-blowing hypothesis, isn’t it? What’s your thoughts on it?
RP: Well, I used to build simulations so, I’ve thought a lot about them and I’m curious, behind the simulation there’s usually a purpose for the simulation. There’s usually some mind, at least in my experience, building the simulation. If it’s one I built it doesn’t work perfectly or the computers don’t work perfectly. So, after so many days of simulating or weeks of simulating some of the computers I’m running it on crash, and I have to restart it, and so within the simulation there’s a different time concept than outside of the simulation depending on when you restart and what you reset. There’s a lot of attributes of it and I’m curious maybe to hear your thoughts on the mind behind a simulation that we could be in. What kind of mind do you think that suggests?
NB: It would be some kind of super intelligence presumably because creating simulations with conscious beings and where the virtual environment is indistinguishable from reality is very hard, but clearly, it’s not something we can currently do. Although computer games are getting better each year in terms of their graphics, but we’re still a long way away from that. So, I’m imagining that by the time that becomes possible, other things will also have become possible such as cognitive enhancement of biological intelligence and presumably machine superintelligence. Even if some simulations were done before you had true machine superintelligence, which I doubt, nevertheless after you have superintelligence you will be able to build vastly more of these simulations because with superintelligence you could like just move much more quickly to the technological limits that physics permit. You could colonise the universe, turn planets into supercomputers and so forth. So, I think almost all simulations would be built by superintelligences. That’s one thing we can infer. Regarding the motivations, we could imagine a range of different possible motivations for creating simulations. And I think we are somewhat ignorant as to the relative preponderance of these different possible motivations. You can ask just why do humans create simulations and imaginary worlds, and we do it for all kinds of reasons. There is like recreational attempts with video games or movies and theatre and so forth. There may be like, you could imagine historical research like kind of explore counterfactuals for history, you could imagine historical tourism if you want to experience the past but time travel just doesn’t work because of physics, then the second best thing might be to create a kind of recreation of some historical epoch and then you could kind of inhabit it for some period of time. So, like we humans do similar things, like historical re-enactments is our best effort. If we could make them more realistic some people probably would. There’s a bunch of other things as well that you could imagine that you could use them for to find out more about the origin or the forces shaping different kinds of AI civilizations to study how they might have a reason.
RP: It’s hard talking online, but one of my colleagues once said, he thought maybe that this world in this life was kind of like a simulation, but it was real, but it was one of many – a series of them and a series of realities and we were in this one to have the freedom to make the choice as to whether or not we wanted to go on to the next one, and he was a Christian believing the next one was the great beyond that if you choose to be known by God you get to go to, and that he was saying that God doesn’t want to force that choice on us. This is the world where we get to choose freely whether we accept this free gift of Jesus to accept this loving gift to go to the next one, because we come into this world without that choice, right? We’re here, we weren’t given that opportunity to say I want to be here, but we get the opportunity to say if we want to go to the next one. What do you think of that?
NB: Yeah, that’s another class of possible reasons for why somebody might create these things, and as you could imagine kind of people wanting a second run, or you could imagine all kinds of research, like maybe if somebody died early you could imagine creating a simulation where they are allowed to continue out their normal life and then maybe after that migrate into this social reality. It’s easy to create a lot of different imaginary possibilities, what’s harder to do is to find some way to kind of constrain them and form a very firm expectation for them. Because ultimately, it’s a question of numbers here. Like what would most stimulations be like? If the universe is sufficiently big with a lot of civilizations probably all of these and many other reasons have activated some creation of simulations somewhere, but we would be most likely to be in the most common type of simulation. So, if we are trying to form some kind of a prediction about our own position.
JB: It does raise the question for me though, Nick, as Rosalind sort of alluded to, of the purpose of doing this because it almost feels like such a superintelligence that could simulate an entire universe and our consciousness within it, is almost godlike as far as we’re concerned, and when I hear people like Elon Musk and others sort of taking this idea quite seriously that we are simply living in a sort of supercomputer simulation I kind of almost wonder, isn’t sort of the traditional God hypothesis that there’s a transcendence divine mind behind the universe a kind of more parsimonious potentially explanation than this idea of a super intelligent alien race or whatever that has created us, which would itself in a sense need an explanation and just knock the question one place back. What’s your view on all of that, Nick?
NB: They’re not mutually exclusive they could both be true, or one or the other could be truer, or neither, but they don’t seem to have any obviously very strong evidential relationship to one another, so you could have had a universe created by God and then in that universe it turns out to be possible to build really powerful computers and some of those get used to create these simulations. I’m not sure that I would say it’s not parsimonious. It’s not really a postulate brought in in order to motivate something else. In that respect I think, there are these arguments in the history of philosophy that form the basis of a sceptical challenge. Like how can we know that the external world exists, that we are not dreaming or that it’s not all just an illusion. So, these got back to Descartes and you can find earlier ancient examples of this, but there the structure is that you kind of start from a position of doubt and then the challenge is prove to me that the external world exists and then there’s the philosophy around that. But here rather we start by assuming that everything is as it appears to be, there are trees and trains and there are computers and there are scientists improving the computers, and we can study the laws of physics. They seem to suggest you could build much more powerful computers with more advanced technology and you kind of think through the implications of just taking all of this at face value, and one of those implications seems to be that at some point in the future you could build extremely powerful computers and you would have the other prerequisites required for creating simulations with gazillions of beings with experiences like ours in them. Many, many more beings than have existed through human history. And then you ask, if that is the case, if the world is such that there were all of these many, many orders of magnitude, more beings with our experiences in computers studying original history, then where would we most likely be, given that from the inside you couldn’t tell the difference. Then appeal to a piece of probabilistic methodology known as anthropics, where it seems like in that kind of situation you should apply a kind of principle of indifference. And so, it would follow then from that with overwhelming probability, we would be in the simulation. So, there is no kind of extraneous assumption you have to make in order for that reasoning to go through it seems. There certainly are assumptions, but they are not kind of extraneous. I mean, obviously we are assuming here some form of substrate independence that in principle you could Implement consciousness on other substrates than biological brains, like for example on silicon computers. So that’s kind of important from the philosophy of mind.
JB: Again, I’m detecting a kind of a level, that you’re more sceptical about all of this, Roz. And is it again because of that that consciousness issue?
RP: All speculation, yeah. Where’s the data? Where’s the evidence of this? Right? You’re a philosopher, I guess and a white male, you get away with a lot of speculation. I have to have data.
NB: That’s what I’m afraid of, in terms of what kind of computational performance does the physics that we have in our universe permit? And so, you could do theoretical models, say of various nanomechanical systems where you can estimate, not exactly, but you can estimate the lower bounds.
RP: I’ve built those models. I’ve built statistical physics models. Yeah. I’ve built statistical physics models. I did my doctorate on statistical physics models simulating with probability greater than 0 the space of everything that we were trying to create, and visual images in the beginning, so map the whole world visually. So, I know the math and I also know that it’s different than feelings, experience, and consciousness. We can represent it. We can some symbolically represent it, we can map it and we can build gorgeous math, which we also don’t know where that comes from. Where does all the beauty of the abstract math come from? I’m curious too if the mind that you have behind this, if you’ve thought about how that mind has chosen to create beauty and the aesthetics and the gorgeousness inside, very fabulously abstract math, because I haven’t seen you talk about those things too. One more thing too. In the simulated world that I think is really interesting is, some people have trouble with the concept of miracles and religion and yet all of that becomes just a snap to explain when you have the ideas you’re putting forth with the simulation. In fact, I remember talking to Will Wright, a creator of The Sims games and he controls, right? He can change something in the game, and if we’re all just in that game world and the creator reaches in and changes something and it violates all the models, we thought within that world of how things work. We might say, that can’t happen, that doesn’t fit our physics. That doesn’t fit our graphical world models, but it is possible in the larger world, right, when the simulator reaches in. So, I like the idea of the superintelligence out there, able to change things in our world, but I think we should be asking what is the evidence, and I think there is evidence for the God of the Bible and I’m not sure there’s anywhere near as much evidence for some other something out there that’s a superintelligence. I’m not saying it can’t exist. I’m just asking about real evidence that you could produce.
JB: You obviously feel it’s a bit of an extravagance, Roz, but go on Nick.
1: Yeah. I think you’ve got to kind of follow along the argument to see where the assumptions come in and those are the bits that would require evidence, if any. But it sounded like the part of it that you were objecting to was not so much the ones that one would normally think would require evidence, but rather the substrate independence thesis, like the assumption that, say these digital minds that could be simulated on digital computers, that they would be able to have all the kinds of experiences that we have – conscious experiences, including experiences of love and what other favourite emotions that we might point to. So, that I think is more a philosophical assumption than an empirical one, and for the purposes of the simulation argument that the actual paper doesn’t try to defend that at all. It just says this is a common assumption in philosophy of mind and computer science. But if I want to dive into that discussion then I mean, I could look at some of the more popular or more well-motivated theories of consciousness that we have and see what they imply about the possibility of instantiating conscious states in computers. So, we could look at the Global Workspace Theory or Attention Schema Theory or Higher Order Thought Theory, so it seems like a lot of these currently like most favoured theories of consciousness – although by no means all – would imply that digital implementation would be possible.
RP: Yeah, I just think it’s important to keep asking for the evidence and for a distinction between representation and reality, and our reality allows us two very powerful representations and hypotheticals and simulations and speculations, and all of that’s wonderful. And I just think sometimes your statements it’s hard to tell when you’re speculating. Which I think is a lot of the time. Which is fine and wonderful. And when you’re talking about facts, they sometimes come across the same way, but it’s really important in today’s world, especially with all the fake news and the power of it, that we help people understand who aren’t experts on this to think about what has a lot of evidence for it and what is really just a whole lot of speculation – interesting speculation, but that’s my main slip.
JB: Let’s talk about another interesting sort of religious perspective on this. Obviously, most world religions have an interest in the idea of immortality, of a life beyond this one, but there’s a sense in which AI in some views, is potentially able to offer us that in the here and now – the transhumanist project and we can sort of define what that is. I mean, but just yourself Nick, I read a really fascinating – it is a few years old now – but a fascinating profile of you in the New Yorker where it talked about the fact that you – I think you’re signed up for a sort of cryogenics thing where when you die if the technology enables it in the future, you could be thawed out and brought back to life. Do you have a sort of desire even personally yourself for kind of continuing to live a sort of an immortality in that sense?
1: Immortality, that’s a very long time, right? It’s a really long time. I think people just very quickly from, if you would live to more than 80 years than you want to be immortal, but there’s like a lot of space in between 80 and infinite. So, I mean I have no idea. I mean in truth, I think, it would mainly depend on the conditions and then presumably I would not want to decide once and for all but take a little bit at a time and see how it goes and then maybe be in a more informed decision to figure out whether you would want another ten years at that point, when you can kind of see what it’s like.
JB: I’d be fascinated to know, is that still an ongoing thing that you’d like to be able to come back in the future if the cryogenics and the technology allowed?
NB: I think most people signed up for this cryonic stuff, like the first best thing they would wish for, I think, would not be to get sick or to be run over by a car in the first place. And this this would be a kind of desperate last resort that may not work but has a maybe a greater chance than if you’re cremated or something like that, of this worldly reanimation. So, I guess there are different questions where I could ask like is the probability of this actually working large enough to be worth whatever the cost are. I mean that’s like a financial cost, and I guess there’s a social awkwardness cost or whatever else. And certainly, there are many steps where it could fail, even if you think that theoretically with arbitrarily advanced technology you could do it. But like there are other things like you could die in the wrong way, like lost at sea or something, right? Or the maybe the cryonics company could go bust. Or there’s a big revolution that – there are many things that could – the future might not be interested in reanimating. So, my question is like, is the probability high enough that it’s just worth the hassle, and then the question of the desirability, which is kind of separate from that, which I think obviously would depend hugely on the circumstances.
JB: Sure. Well given all that, obviously there’s a lot of question marks over whether we would want to be awoken in the future and whether the world is a place we’d want to continue living in, but I guess the interesting thing for me is that AI also holds out the possibility, but again, the question is, is this just sci-fi or is this sci-fact, of digitally uploading our consciousness to the memory banks of some cloud whereby we could potentially live forever? I mean again, it’s been explored a great deal recently and shows like Black Mirror and others where they’ve imagined this possibility of people effectively when their physical bodies wear out, they simply go to a sort of digital nirvana. I mean, in that sense it’s interesting, isn’t it, that it’s almost as though AI technology is almost taking the place of religion in that sense kind of fulfilling the promises that have traditionally been seen as those of faith of a kind of future life, a life everlasting and so on. What’s behind that drive do you think, Nick, for people to want to be able to live forever in a sort of digital space in that kind of way?
NB: I don’t know exactly. I think there is like some mundane level of motivation, like most of us just put on a seat belt because we don’t want to die or get injured in a car crash. And we avoid the most unhealthy food or smoking because we think we might get cancer. That’s the kind of commonsensical desire that most people have, assuming their lives are sort of reasonably okay, to prefer to live and be healthy. And so, for some people I think it’s just like an extrapolation of that. Maybe they see this is like a possible additional means by which you could increase the amount of expected future life that you have, in this life, in this world. And then it’s a cost-benefit thing, so depending on whatever, how rich they are. Maybe if it’s a very small fraction of their income that would have to go to pay the life insurance premium every month and they would rather give up a cappuccino every day and have this, that gives them X percent chance. I’m sure that for some people there might be a more complicated or idiosyncratic psychological thing where they need some – maybe they are not religious, and they still have a psychological need for some straw to cling onto that could give them hope. I don’t want to speculate as to the kind of deep psychological roots for all of these people. The most striking thing, if anything, is maybe just how few people have. This cryonics thing has been around for many decades now and it’s still very much a niche. It’s growing, but it’s like a very slow linear rate and you’d think that we just see, like the state of technology is becoming more advanced. A lot of these other ideas that were really out on the fringes 20-30 years ago are now kind of like great advances – in AI, nanotech, synthetic biology, VR. All of these things now seem like, oh, yeah, we are making progress a lot of people think. Even this stuff about the simulation, right? That’s become quite mainstream. Like a lot of people are tweeting and blogging. Elon Musk is tweeting. But for some reason cryonics still remains just as much a niche thing today as it as it did back in the 90s. But that maybe that’s more of a thing that strikes me as needing explanation.
JB: Yeah. What do you think, Roz, could AI technology be the future to a form of immortality? Again, I guess there’s a lot of speculation there and a lot of sci-fi dramas do hang on this idea that you could in principle upload one’s consciousness into some digital format, which again is something, if it’s happening, it’s a long way off and there’s the question of whether it could happen at all anyway, but I’d be interested in your general thoughts on this whole concept.
RP: There could in the future be deep fake synthetic versions of each of us having this talk right now. And then also, taking every other digital form of talks or typing, all of our Google takeout, everything we’ve searched for, texted or written an email to somebody on, that can be recomposited into a synthetic being that plays as an agent with somebody you could talk to. We’ve already been doing this in the MIT media lab, you build an agent that’s for the deceased professor and ask what that one would have thought, right? Actually, one is a Marvin Minsky bot and he’s one of the ones who also signed up for cryogenics and that hasn’t resurrected him yet, but we can resurrect some of what he said online with a digital form and it’s a kind of immortality, right? I find it interesting though that so many people I know who haven’t thought that much about God or what they believe in the afterlife, still desire it. It’s almost like we’re made for it. Like some people say, you’re hungry because you’re made to eat food. You’re thirsty because you’re made to drink. Maybe there’s something in us that is immortal that desires immortality. Like Seneca, the Greek philosopher saying we are like mortals in what we fear and like immortals in what we desire. We seem to have something in us that seeks that beyond. At the same time, I don’t think a lot of us have thought deeply about what we’d do in that beyond. And there is a worry, if you have an infinite number of grandchildren, how are you going to attend all those birthday parties, right? There are some practical elements that we don’t understand. We don’t know how we’re going to deal with. But, yeah, there’s this almost universal desire for it.
JB: Does the concept, Nick, of everyone choosing at some point to just keep living, whether it be in physical bodies by that time, maybe the technology is there that you can implant your consciousness into a robot, and we can continue living in the physical world or its otherwise in some kind of digital space, where we can create our own reality and have a lovely ever after. Again, it’s a subjective question, but is that a good thing as far as you’re concerned, or do you worry at the idea of sort of just carrying on and on and on because technology allows us to?
1: I don’t know exactly how long would be the optimal life span. I think it certainly would seem to depend a lot on the conditions. So, first of all, I think our intuitions about the desirability of longer than usual lives are shaped very heavily by the fact that in this world older age is associated with worse health, so you never really get people much above a hundred you are in perfect mental condition and fully productive and run marathons and stuff like that. So, we just see this very strong association that at the certain point we just fall apart and it’s kind of sad and you certainly don’t want to just extrapolate that and then think we want a hundred more years where you’re sort of kept alive on some respirator and like a whole medical team is kind of – that seems utterly pointless. So, we have to first of all exert the mental push to imagine a different condition where you would actually maybe get stronger every year and healthier and learn more and get new capabilities, and that already seems to quite radically change the aspect of this scenario. I think it’s kind of sad that we start early in life at our peak and then at a certain point we go down, like if anything, one would maybe think you would want to start at the bottom and then go up. But anyway, so that would be one variable. Then, of course the world we live in and what other people might also be there. Another thing is you think, if I live longer but everybody else, everybody I cared about, my friends and family, they’re all dying off; to a lot of people that doesn’t seem very appealing because they imagine themselves to be lonely and so forth, but in a scenario where many together could do this, continue the adventure, then maybe that flips. Maybe if everybody is going to continue the adventure you would feel I don’t want to lose out, if the party is still going you get the FOMO effect, right? So, without like adding a lot of these specifications, it’s hard to evaluate it positively or negatively. And then I would say on top of that like if we consider really radically long, even long finite life spans like a million years, or something like that – let alone infinite time – then I think it becomes even harder because those are just so far beyond what we are normally assuming when we are thinking about living and dying.
JB: But it does sound to me a little bit, Nick like you regret the fact that we do live in this sort of physical body, which does wear out. It sounds like if there was a way of stopping that, you would welcome it. You don’t want life to wear out.
NB: Certainly, if there were like a pill that postponed the aging, I’d be happy. I mean, people try with creams and everything, like snail extract from Korea, like every possible means, right? If there were ways of not just preventing it from showing up, but of actually preventing the cellular degradation, I’m sure once that technology exists it will be extremely popular even amongst those people who now, kind of dismiss it as saying, oh, that’s just kind of a hubristic thing. We wouldn’t want to have anything to do with that. It’s easy to say one when no anti-aging medicine actually exists, but once it’s there, like do you want to have worse knees year by year and like an aching hip, or would you want to have a good hip and good knees and like a good heart and all your neurons still there? I mean, it seems like an easy choice.
JB: It’s a fascinating question because at one level, Roz, that’s kind of what you’re involved in. You’re involved in helping people live happier, healthier, longer lives with AI, with the technology you’ve developed. But where does this line exist between what’s healthy and what’s not healthy when it comes to encouraging some huge, long life span which seems so out of keeping with our normal human condition in that sense?
RP: It’s a great question. We had a big effort at MIT media lab years ago called Human 2.0, and it had a physical component with Hugh Herr and his biomechatronics replacing limbs. This one woman walked in and her legs were so amazing. I’m like, wow, look at that. I mean, I’m a happily married heterosexual woman and I’m admiring this other woman’s legs and it turned out they were artificial, and they were gorgeous. And she said, I don’t know why people say I don’t have any legs. I have six pairs! And she had legs that could do all these different amazing things and you can climb rocks brilliantly and even better with some of the different legs he’s built. So, we know we can physically augment in amazing ways that could maybe make it so that I don’t need to take a car or train from Boston to New York, I could walk and burn off and justify eating more delicious fattening things without getting fat. So, they’re all kinds of ways we could augment ourselves, but we were also looking in my case at the cognitive affective augmentation. We have a lot of efforts in our lab on that. Can we augment our memory? Can we augment our emotion regulation? Could we help people on the autism spectrum who have difficulty interpreting emotional signals, especially in real time? Maybe they’re concentrating hard to understand your words and they can’t read the ocean waves of 10,000 other patterns of things changing on your face, the complexity of which in two-way conversation quickly dwarfs not just the moves in chess, but the moves in Go for all the possible combinations. It grows so fast. It’s so hard for a person to comprehend it and for a computer to comprehend it, and as we learn about it, we realise how incredibly complicated all this stuff is and how little we really understand it, and how hard it is to help people today, much less augment ourselves in the future. But we think it’s a really worthy goal for AI, and the more people maybe who work on those, call them more AI for good causes to help augment and expand human abilities, maybe the fewer people will work on the AI for bad causes that just make the rich richer and exert more control over people and don’t treat them with justice.
JB: I guess the question that hangs over all is what kind of a world are we wanting to live forever in as well and indeed, I suppose as a Christian, do you see that kind of desire to live on, to potentially kind of live forever and that sort of thing, is that something that could happen and would you welcome that or is that actually just an echo, as you’ve said of an ultimate desire that you think can only really be fulfilled in God in a kind of a new creation as a Christian would see it?
RP: Yeah, it’s interesting. I mean personally, I don’t desire to live an infinite life. I do desire, however, to get to the next one, the one where I can see God face-to-face. I mean, what a mind-blowing idea. I don’t know 100% that that’s true and my guess is that we very imperfectly understand what has been revealed through the Bible. So, probably we have a bunch of it wrong because we just see through a glass darkly, but we are held out this possibility that there is a life beyond this where we are loved, or we are known. Where we can know the supermind: the true, super intelligence be beyond all beyond all space and time. And to me the opportunity to meet that superintelligence is so exciting that even if it’s just for a moment, I would like to see about getting there. So, I do find myself desiring that, and I think that’s consistent with what I read and learn from a Christian worldview.
JB: Sure. I mean, it sounds like the closest thing, Nick, that you believe to in God, in terms of is potentially a super intelligence that we are the creation of, albeit not the same one that Roz has in mind. If it is the case that we’re living in some kind of simulation, would you like to be able to meet that superintelligence one day? Would that be a goal that you aspire to?
NB: I don’t know. I’m not sure whether that’s the closest in my belief space either to what Rosalind was talking about. I do fundamentally feel though that it’s inadequate to try to actually form a firm opinion about a lot of these things. And I don’t really mean this modestly. It’s not as if I think a lot of other people out there are like so much more qualified than me either. I think we’re all like these little creatures who are like in a vast world that we understand very little of, and until very recently understood even less. So presumably even if we were just allowed with our own small brains to continue for another few hundred years, we’ll probably figure out some pretty cool stuff, and then beyond that there are presumably many, many more things that we have no chance of figuring out our little brains. So, it seems likely that we’ve overlooked at least one crucial consideration like something, some argument or idea or insight or fact that if only we became aware of it, then took it properly into account would kind of radically change our whole picture of what we should be doing – our whole stack of priorities. It seems kind of a little implausible that we’ve now found the last crucial consideration and we’ve got it figured.
JB: And by that, do you mean that you’re not prepared to be as certain in that sense that there’s a God that will answer all these questions. I mean, I was fascinated, and maybe we can start to just wrap it up here, but there’s a paragraph on your website, Nick, which is almost exactly what you just said, which is, ‘in terms of directing our efforts as a civilization, it would seem useful to have some notion of which direction is up and which is down, what we should promote and what we should discourage. I believe it’s likely we are overlooking one or more crucial considerations, ideas or arguments that might plausibly reveal the need for not just some minor course adjustment in our endeavours, but a major change of directional priority and those seeking to make the world better should therefore take it as important to get to the bottom of these matters or else to find somewhere dealing wisely with our cluelessness if it is inescapable.’ I guess it’s hard to envision what that course change might be, what that big thing is we might be missing, but are you saying you think we’re only sort of in the foothills of where our understanding of mind and technology and everything, where it could ultimately lead us as human beings. Is that the idea?
NB: Yeah, and I think we know a lot of details, but there might be some crucial things about how all of these details fit together, the overall picture to reveal. I mean just take this simulation argument, like suppose for the sake of this discussion that it’s actually right, then until that was discovered that would just be this massive piece like of information about the world that we were oblivious to up until just two decades ago. And why think that would be the last such thing? And like theology could be one source of these, there might be many other places as well where these kinds of earthquakes in our worldview could arise from, but I don’t think we’ve seen the last. And so, one feels a kind of ultimate sense of dependency of like hoping that things works out.
JB: A kind of humility in a sense as well. It’s been such a good discussion; any final thoughts as we start to wrap up this this programme today, Roz?
RP: I just wish we could meet in person and continue the conversation. Nick, I have a lot more questions for you probably than the audience wants to hear?
JB: Well ultimately, technology has enabled us in the midst of a pandemic to have this conversation, which is great, so we know that technology certainly serves great purposes very often. All the best as you both go on with your work trying to make sure that technology continues to serve humanity and not to hinder it. But it’s been such a great conversation today, so, Nick and Roz, thank you very much for being my guest on the Big Conversation.
NB: Thank you, Justin.
RP: Thanks for hosting.
Transcript ends: 01:16:46