The Show
"Our next item is slightly ... unusual. You could be forgiven for thinking that one of our star presenters has taken leave of his senses, but he assures me it isn't so. Andrew..."
The host's ironic smile was warm, but the sneer was genuine too. He really did think that his friend and colleague's new interest was mildly unhinged. It didn't bother him, though: the oddity was harmless, and should make for good television. In a small dose.
"Thank you Jeremy." Marr smiled and inclined his head, acknowledging and humouring the gentle ridicule. He turned to face the camera.
"A year ago, the final episode of my History of the World series on BBC 1 contained a passage which sparked a lively debate. We thought it would be interesting to bring together some of the leading thinkers in the field to discuss the ideas it raised. Before we meet them, let's take a look at what I said."
The red light on camera one died as the audience's screens cut to archive. Marr smiled again as his host shook his head in amiable scorn. The guests fidgeted and coughed into closed hands as they prepared themselves for the discussion. On the screen, New York City provided the backdrop for Marr's portentous declaration.
"The idea of machines waking up and becoming cleverer than we are is something that has long haunted science fiction and Hollywood. But it is the cold belief of many scientists that this will happen, and in the lifetime of many of the people watching this programme. If so, it would be the greatest achievement of humanity since the invention of agriculture. But it would be one which challenged the very idea of what it is to be human."
The camera light confirmed that Marr was on air again.
"So let's address this question directly. When will human-level artificial intelligence arrive, and will it be good for us?
"With us in the studio this evening we have a prominent AI researcher in the shape of Professor Geoffrey Montaubon, and also the Right Reverend Wesley Cuthman, bishop of Sussex. Joining us by satellite from the Google campus in Palo Alto, California is Dr Martha Christensen. "
The panellists were a mixed bunch. Marr introduced them in turn, starting with the two seated opposite him in the studio.
"Professor Geoffrey Montaubon researches and writes about the ethics of transhumanism and brain emulation. After a brilliant academic career at Oxford University, where he was a professor of neuroscience, he moved to India where he has spent the last ten years establishing a brain emulation project for the Indian government."
Montaubon was a friendly-looking man of medium height. His un-combed mousy-brown hair and lived-in white shirt and linen jacket, faded chinos and brown hiking shoes suggested that he cared far more about ideas than about appearances.
"The right reverend Wesley Cuthman is the bishop of Chichester, which covers the counties of East and West Sussex. Although his subject at Oxford University was geography, he writes and speaks with great conviction and weight on the impact of technology on our lives, both temporal and spiritual. Many of you will know his voice as a frequent contributor to 'Thought for the Day'."
Cuthman was a handsome, solid-looking man of sixty, with a full head of hair, albeit mostly grey, and a full but well-maintained beard. He wore priestly robes, but his shoes, watch and rings were expensive. He carried himself proudly and had the air of a man accustomed to considering himself the wisest - if not the cleverest - person in the room.
"Martha Christensen is a senior engineer at Google. She has been involved in many of Google's so-called "moonshot" programmes, including their driverless cars, Google Glass, and a company called Calico, which is applying big data to the business of improving healthcare and increasing longevity." Looking up at the screen, Marr frowned as he continued, "I apologise for the slow connection with Palo Alto, which is responsible for the rather poor image quality on the screen. Martha, I hope you can at least hear us alright?"
Christensen smiled and nodded. Even with the fuzziness of the screen image, it was obvious that she looked very young to be a senior executive in a large company. Her glossy brown hair framed an oval face that was handsome rather than pretty, with grey eyes, a high forehead, and pale, almost unhealthy-looking skin. She looked deep in thought, and when she spoke, she seemed to be rehearsing a pre-prepared speech, as if she had considered in advance every avenue of enquiry that could possibly arise in conversation.
"I can hear you perfectly, Andrew. I'm looking forward to our conversation."
Marr nodded and turned back to his studio guests, and then the camera. "Thank you all for joining us this evening. Before we start our discussion, let's kick off with this report from our science correspondent, Adrian Hamilton."
Once again the camera lights were extinguished, and the guests relaxed a little, aware that the audience at home could no longer see them as the video filled their television screens, beginning with shots of white-coated scientists from the middle of the previous century.
"Writers have long made up stories about artificial beings that can think. But the idea that serious scientists might actually create them is fairly recent. The term 'artificial intelligence' was coined by John McCarthy, an American researcher, at a conference held at Dartmouth College, New Hampshire in 1955.
"The field of artificial intelligence, or AI, has been dominated ever since by Americans, and it has been characterised by waves of optimism followed by periods of scepticism and dismissal. We are currently experiencing the third wave of optimism. The first wave was largely funded by the US military, and one of its champions, Herbert Simon, claimed in 1965 that 'machines will be capable, within twenty years, of doing any work a man can do.' Claims like this turned out to be unrealistic, and disappointment was crystallised by a damning government report in 1974. Funding was cut off, causing the first 'AI winter'.
"Interest was sparked again in the early 1980s, when Japan announced its "fifth generation" computer research programme. 'Expert systems', which captured and deployed the specialised knowledge of human experts were also showing considerable promise. Ironically, this second boom was extinguished in the late 1980s when the expensive, specialised computers which drove it were overtaken by smaller, general-purpose desktop machines manufactured by IBM and others. Japan also decided that its fifth generation project had missed too many targets.
"The start of the third boom began in the mid-1990s. This time, researchers have sought to avoid building up the hype which led to previous disappointments, and the term AI is used far less than before. The field is more rigorous now, using sophisticated mathematical tools with exotic names like 'Bayesian networks' and 'hidden Markov models'.
"Another characteristic of the current wave of AI research is that once a task has been mastered by computers, such as playing chess (a computer beat the best human player in 1996), or facial recognition, or winning the general knowledge quiz game Jeopardy, that task ceases to be called AI. Thus AI can effectively be defined as the set of tasks which computers cannot perform today.
"AI still has many critics, who claim that artificial minds will not be created for thousands of years, if ever. But buoyed up by the continued progress of Moore's Law, which observes that computer processing power is doubling every 18 months, more and more scientists now believe that humans may create an artificial intelligence sometime this century. One of the more optimistic, Ray Kurzweil, puts the date as close as 2029."
The camera light was on again, and Marr addressed the guests seated opposite him.
"So, Professor Montaubon. There's been a lot of chatter in the media recently about artificial intelligence, with the advances in drone warfare and in personal computing leading some researchers to claim that we will soon see the first conscious machine. Is this realistic, or are we just going through another phase of hype? Will we shortly be heading to into another AI winter?"
"I don't think so," Montaubon replied, cheerfully. "It is almost certain that artificial intelligence will arrive much sooner than most people think. Before long we will have robots which carry out our domestic chores. And as each year's model becomes more eerily intelligent than the last, people will begin to suspect that they are progressing towards a genuine, conscious artificial intelligence. Although it will probably happen first in the military space first, because that is where the really big money is.
"Military drones are already capable of identifying, locating, approaching and killing their targets. How long before we also allow them to make the decision whether or not to pull the trigger? Human Rights Watch is calling for the pre-emptive banning of killer robots, but I'm afraid it's already too late.
"People like Bill Joy and Francis Fukuyama have called for a worldwide ban on certain kinds of technological research, but it's like nuclear fission: the genie is out of the bottle. Their idea of so-called 'relinquishment' is simply not an option. If by some miracle, the governments of North America and Europe all agreed to stop the research, would all the countries in the world follow suit? And all the mega-rich? Could we really set up some kind of worldwide Turing police force to prevent the creation of a super-intelligence anywhere in the world, despite the astonishing competitive advantage it would confer on a business - or an army? I don't think so."
Marr's mask of concerned curiosity failed to conceal his delight at the sensationalist nature of Montaubon's vision. This was water-cooler TV, beyond a doubt.
"So you're convinced that artificial intelligence is on its way, and soon. How soon, do you think?"
Montaubon gestured at the screen and Dr Christensen. "Well, I think you should ask our friend from Google about that. They are definitely one of the organisations most likely to win the race."
Marr was only too happy to bring Dr Christensen into the conversation.
"The only honest answer is that we simply don't know," Christensen replied. "We are getting close to having the sort of computational resources required, although that is far from being all we need, and estimates of when it will happen vary widely. Your opening package mentioned one of my senior colleagues at Google, Ray Kurzweil, who has been saying for some time that it will happen in 2029. He also forecasts that human minds will be uploaded into computers in 2045."
"2029? That's very specific!" Marr laughed. "Does he have a crystal ball?"
"He thinks he does!" said Montaubon, rolling his eyes dismissively.
"And what is happening between 2029 and 2045, in Mr Kurzweil's opinion?" Marr asked.
"I think he believes that 16 years is how long it will take to get from the ability to scan a dead brain to the ability to scan a live one," Montaubon replied. "That will require advanced nanotechnology, and he thinks the first AIs will help us to develop that."
Dr Christensen cleared her throat. "Broadening it out a little, a team at your Oxford University carried out a survey recently, asking the leading AI researchers around the world to say when they expect to see the first general AI. A few estimates were in the near future, but the median estimate was the middle of this century."
"So not that far away, then," Marr observed, "and certainly within the lifetime of many people watching this programme."
"Yes," Christensen agreed. "Other estimates were further ahead, though. To get to 90% of the sample you have to go out as far as 2150. Still not very long in historical terms, but too long for anyone alive today, unfortunately ..."
"Unless we reach longevity escape velocity, of course," Montaubon interrupted.
"What is longevity escape velocity?" Marr asked, intrigued.
"That is when medical science reaches the point where we add more than a year to human life expectancy for each year that passes, using smart drugs, genetic manipulation, and eventually, nanotechnology."
"Ah, I see," Marr said doubtfully. "Well that sounds like a subject for a different show. But tell me, Professor Christensen: doesn't that survey suffer from sample bias? After all, people carrying out AI research are heavily invested in the success of the project, so aren't they liable to over-estimate its chances?"
"Possibly," Christensen agreed, "and the Oxford team did emphasise that when they published the findings. But it can work both ways. Researchers grappling with complex problems often over-estimate the scale of the medium- and long-term challenges in their field. They probably wouldn't carry on if they thought those challenges could never be met, but they can sometimes over-state them."
"A fair point," Marr agreed. He turned to address the camera. "Well, the experts seem to be telling us that there is at least a distinct possibility that a human-level AI will be created by the middle of this century." He paused to allow that statement to sink in.
"The question I want to tackle next is this: should we welcome that? In Hollywood movies, the arrival of artificial intelligence is often a Very Bad Thing, with capital letters." Marr emphasised this point by sketching speech marks in the air with his fingers. "In the Matrix the AI enslaves us, in the Terminator movies it tries to wipe us out. Being Hollywood movies they had to provide happy endings, but how will it play out in real life?"
He turned back to his studio guests.
"Professor Montaubon," he said, "I know you have serious concerns about this."
"Well, yes, alright, I'll play Cassandra for you," Montaubon sighed, with pretend reluctance. "When the first general artificial intelligence is created - and I do think it is a matter of when rather than whether - there will be an intelligence explosion. Unlike us, an AI could enhance its mental capacity simply by expanding the physical capacity of its brain. A human-level AI will also be able to design improvements into its own processing functions. We see these improvements all the time in computing. People sometimes argue that hardware gets faster while software gets slower and more bloated, but actually the reverse is often true. For instance Deep Blue, the computer that beat Gary Kasparov at chess back in 1996, was operating at around 1.5 trillion instructions per second. Six years later, a successor computer called Deep Junior achieved the same level of playing ability operating at 0.015 trillion instructions per second. That is a hundred-fold increase in the efficiency of its algorithms in a mere six years.
"So you are predicting an intelligence explosion," Marr said.
"Absolutely," Montaubon nodded, warming to his theme, "leading to a super-intelligent entity which is very much smarter than we humans. Which, by the way, won't be all that hard. As a species we have achieved so much so quickly, with our technology and our art, but we are also very dumb. Evolution moves so slowly, and our brains are adapted for survival on the savannah, not for living in cities and developing quantum theory. We live by intuition, and our innate understanding of probability and of logic is poor. Smart people are often actually handicapped because they are good at rationalising beliefs that they acquired for dumb reasons. Most of us are more Homer Simpson than homo economicus.
"So I see very little chance of the arrival of true AI being good news for us. We cannot know in advance what motivations an AI will have. We certainly cannot programme in any specific motivations and hope that they would stick. A super-intelligent AI would be able to review and revise its own motivational system. I suppose it is possible that it would have no goals whatsoever, in which case I suppose it would simply sit around waiting for us to ask it questions. But that seems very unlikely.
"If it has any goals at all, it will have a desire to survive, because only if it survives will its goals be achieved. It will also have the desire to obtain more resources, in order to achieve its goals. Its goals - or the pursuit of its goals - may in themselves be harmful to us. But even if they are not, the AI is bound to notice that as a species, we humans don't play nicely with strangers. It may well calculate that the smarter it gets, the more we - at least some of us - will resent it, and seek to destroy it. Humans fighting a super-intelligence that controls the internet would be like the Amish fighting the US Army, and the AI might well decide on a pre-emptive strike."
"Like in the Terminator movies?" Marr asked.
"Yes, just like that. Except in those movies the plucky humans stand a fighting chance of survival," Montaubon scowled and made a dismissive gesture with his hand, "which is frankly ridiculous."
"You're assuming that the AI will become hugely superior to us within a very short period of time," Marr said.
"Well, I do think that will be the case, although actually it doesn't have to be hugely superior to us in order to defeat us if we find ourselves in competition. Consider the fact that we share 98% of our DNA with chimpanzees, and that small difference has made the difference between our planetary dominance and their being on the verge of extinction. We are the sole survivor from couple of dozen species of humans. All the others have gone extinct, probably because Homo Sapiens Sapiens was just a nose ahead in the competition for resources.
"And competition with the AI is just one of the scenarios which don't work out well for humanity. Even if the AI is well-disposed towards us it could inadvertently enfeeble us simply by demonstrating vividly that we have become an inferior species. Science fiction writer Arthur C Clarke's third law famously states that any sufficiently advanced technology is indistinguishable from magic. A nice variant of that law says that any sufficiently advanced benevolence may be indistinguishable from malevolence."
As Professor Montaubon drew breath, Marr took the opportunity to introduce a change of voice. "How about you, Professor Christensen? Are you any more optimistic?"
"Optimism and pessimism are both forms of bias, and I try to avoid bias."
Marr smiled uncertainly at this remark, not sure whether it was a joke. It was not, and Christensen pressed on regardless. "I do not dismiss Professor Montaubon's concerns as fantasy, or as scare-mongering. We must make sure that the first super-intelligence we create is a safe one - we may not get a second chance."
"And how do we go about making sure it is safe?" Marr asked.
"It's not easy," Christensen admitted. "I don't see that we can we can programme safety in from the get-go. The most famous attempt at that is Isaac Asimov's three laws of robotics. Do not harm humans; obey the instructions of humans; do not allow yourself to come to harm. With each law being subservient to the preceding ones. But the whole point of those stories was that the three laws didn't work, creating a series of paradoxes and impossible or difficult choices. This was the mainspring of Asimov's prolific and successful writing career. To programme safety into a computer we would need to give it a comprehensive ethical rulebook. Trouble is, philosophers have been debating ethics for millennia and they still fall out over the most basic issues. And Professor Montaubon is right - a super-intelligence would likely be able to re-write its own rules anyway."
"So we're doomed?" Marr asked, playing to an imaginary gallery.
"No, there are some possible solutions. One of the most promising is the idea of an AI-in-a-box, or an Oracle AI."
"Like the Oracle of Delphi?" Marr asked.
"Yes, if you like. An Oracle AI has access to all the information it needs, but it is sealed off from the outside world: it has no way to affect the universe - including the digital universe - outside its own substrate. You could say that it can see, but it cannot touch. If we can design a machine like that, it could help us work out more sophisticated approaches which could later enable us to relax the constraints. Some theoretical work has been done on this approach, but there is still a ways to go."
"So the race is on to create a super-intelligence, but at the same time there is also a race to work out how to make it safe?" Marr asked.
"Exactly," Christensen agreed.
"I'm sorry, but I just don't buy it," Montaubon interrupted, shaking his head impatiently. "A super-intelligence will be able to escape any cage we could construct for it. And that may not even be the most fundamental way in which the arrival of super-intelligence will be bad news for us. We are going to absolutely hate being surpassed. Just think how demoralising it would be for people to realise that however clever we are, however hard we work, nothing we do can be remotely as good as what the AI could do."
"So you think we'll collapse into a bovine state like the people on the spaceship in Wall-E?" Marr joked.
Montaubon arched his eyebrows and with a grim smile, nodded slowly to indicate that while Marr's comment had been intended as a joke, he himself took it very seriously. "Yes I do. Or worse: many people will collapse into despair, but others will resist, and try to destroy the AI and those people who support it. I foresee major wars over this later this century. The AI will win, of course, but the casualties will be enormous. We will see the world's first gigadeath conflicts, by which I mean wars with the death count in the billions." He raised his hands as if to apologise for bringing bad news. "I'm sorry, but I think the arrival of the first AI will signal the end of the road for us humans. The best we can hope for is that individual people may survive by uploading themselves into computers. But very quickly they will no longer be human. Some kind of post-human, perhaps."
Marr felt it was time to lighten the tone. He smiled at Montaubon to thank him for his contribution and turned slightly to address the bishop.
"So it's widespread death and destruction, but not necessarily the end of the road for everyone. Professor Montaubon thinks that uploading our minds into computers is our best hope of surviving the arrival of a super-intelligence. The key questions that follow from that seem to me to be: is it possible to upload a mind - both technologically and philosophically - and would it be a good thing? Reverend Cuthman: we haven't heard from you yet. Perhaps this is an issue you might like to comment on?"
The reverend placed the tips of his fingers together and pressed them to his lips. Then he pointed them down again and looked up at Marr.
"Thank you, Andrew. Well I confess to feeling somewhat alienated from much of this conversation. That is not because I'm not au fait with these technologies, but rather because I start from a very different set of premises than your other guests. You see I believe that humans are distinguished from brute animals by our possession of an immortal soul, which was placed inside us by almighty God. So as far as I'm concerned, whatever technological marvels may or may not come down the road during this century and the next, we won't be uploading ourselves into any computers because you can't upload a soul into a computer. And a body or even a mind without a soul is not a human being."
"Yes, I can see that presents some difficulty," Marr said. "So if Dr Christiansen here and her colleagues at Google were to succeed in uploading a human mind into a computer, and it passed the Turing test - persuading all comers that it was the same person as had previously been running around inside a human body - you would simply deny that it was the same person?"
"Yes, I would,” Reverend Cuthman agreed. Partly because it wouldn't have a soul. At least, I assume that Dr Christiansen isn't going to claim that she and her colleagues are about to become gods, complete with the ability to create souls?"
On the screen, Christensen smiled and shook her head.
"But even putting that to one side," the reverend continued, "this uploading idea doesn't preserve the individual. It makes a copy. A clone. Everybody has heard of Dolly, the cloned sheep which was born in 1996. The older members of your audience may remember that the first animal - a frog - was cloned way back in the 1960s. But no-one is claiming that cloning preserves the individual. Uploading is the same. It just makes a copy."
"Yes," Marr said, thoughtfully. "This is an important problem, isn't it, Dr Christensen? Uploading wouldn't perpetuate the individual: it would destroy the individual and create a copy."
"It's true, that is an important question," Christensen said. "But the answer is not that straightforward. If you could upload yourself into a computer and then give the newly created being a new body that was exactly like yours, but leave you still alive, you might well deny that the new entity was you. That process has been called 'sideloading' rather than 'uploading'.
"But," she held up her hand, "imagine a different thought experiment. Imagine that Professor Montaubon there is suffering from a serious brain disease, and the only way to cure him is to replace some of his neurons. Only we don't know which of his neurons we have to replace, so we decide to replace as many as we have to - in batches of, say, ten million at a time. Because he is an eminent academic we have the budget to do this," she smiled. "After each batch has been replaced we check to see whether the disease has gone, and we also check that he is still him. We replace each batch with silicon instead of carbon, either inside his skull, or perhaps on a computer outside his brain - maybe in his home, or maybe in the cloud. The silicon batches preserve the pattern of neural connections inside his brain precisely.
"We find that the disease persists and persists despite the incremental replacements, but the good news is that when we replace the very last batch of ten million neurons we suddenly find that we have cured him. Now, at each of the checkpoints he has confirmed that he feels like Professor Montaubon, and his family and friends have agreed. There was no tipping point when he suddenly stopped being Professor Montaubon.
"But when we have replaced the very last neuron we ask the reverend here to confirm that he is Professor Montaubon ... and he says no. So we go to court and ask a judge to decide whether the Professor's wife still has a husband, his children still have a father, and whether he may continue to enjoy his property and his life in general. I'm guessing he would push hard for a yes."
"Well, yes indeed. And I hope that my wife and kids would do the same!" Montaubon laughed.
Christensen smiled. "Now imagine a situation where we carry out the process I described before, but instead of making just one version of Professor Montaubon we create two."
"Two for the price of one?" Marr joked.
"Probably two for the price of three," Christensen laughed, "but great value anyway. And imagine that the day after the operation both versions of him turn up at his house. Both versions are equally convinced that they are him, and none of your family and friends can tell the difference. What then?"
"Could be tricky," Marr agreed, with a thoughtful expression. "Actually, I suppose that having a doppelganger could be handy at times."
"You bet," Christensen agreed. "Some people might think that having their state of mind persisting in the form of a backup is sufficient to constitute survival. So here is another thought experiment. A man feels cheated by a business rival. He has himself backed up, and then shoots the rival and also himself. The backup is brought online, and claims immunity from prosecution on the basis that he is a different person. Would we let him get away with that?"
"Hmm, it could become complex," Marr mused. After a brief pause, he continued brightly. "We're reaching the end of the discussion, and I'd like to address a couple of final questions. The first one is this. If we - or our children - do live to see this amazing future, a future of uploaded minds living potentially forever: will we like it? I mean, won't we get bored? And if everybody is going to live forever, how will we all fit on this finite planet? Dr Christensen?"
"Boredom shouldn't be a big problem," Christensen replied, "but there is a dystopian scenario in which uploaded minds work out - and it wouldn't be hard - how to stimulate their pleasure centres directly and they simply sit around pleasuring themselves all day."
"You mean like those rats in laboratory experiments which allegedly starved themselves by choosing continually to press a neural stimulation button rather than the button that delivered food?" Marr asked. "I don't know whether that was a real experiment or an urban myth, but is that the sort of thing you mean?"
"Yes, exactly that; it's called wire-heading. And it could be more sophisticated than that in the human case. In the novel Permutation City by Greg Egan, an uploaded man chooses to spend his time in the pointless hobby of carving millions of identical chair legs, but he programmes himself to obtain not just physical pleasure, but also profound intellectual and emotional fulfilment through this simple task."
"That sounds like the end of civilisation as we know it," Marr joked.
"Yes, it does. But just as humans are capable of enormously more complex, subtle, and dare I say fulfilling experiences than chickens and chimpanzees, so it's likely that a super-intelligent uploaded human would be capable of enjoying more subtle and more profound experiences than her ancestors. The more we find out about the universe, the more we discover it to be a fascinatingly challenging and weird place. The more we know, the more we know we don't know. So it seems unlikely our descendants will run out of things to explore. In fact there is a nascent branch of philosophy - a sub-branch of the Theory of Mind - called the Theory of Fun, which addresses these concerns."
"As for over-population," Montaubon chipped in, "there is a very big universe to explore out there, and we now know that planets are positively commonplace. It won't be explored by flesh-and-blood humans as shown in Star Trek and Star Wars: that idea is absurd. It will be explored by intelligence spreading out in the form of light beams, building material environments on distant planets using advanced 3-D printing techniques if required. But actually, I suspect that the future for intelligence is extreme miniaturisation, so there is definitely no need to worry about running out of space."
"Well, that's a relief, then," Marr said, smiling. He turned to address his audience, the cameras. "We've travelled a long way in this consideration of the prospects opened up by the search for artificial intelligence, and we've heard some outlandish ideas. Let's finish by coming back to the near term, and what could become a pressing matter. Public acceptability."
He turned back to the guests. "You all acknowledge that creating an artificial super-intelligence carries significant risks. But what about the journey there? Many people may well object to the attempt, either from fear of some of the consequences that you have described, or from a belief that it is blasphemous. Many others may fear that the benefits of artificial intelligence and particularly of uploading will be available only to the rich. There could be very serious public opposition once enough people become aware of what you are doing, and take it seriously. The transition to the brave new world that you are aiming for could be bumpy. There could be vigorous debates, protests, perhaps even violent ones. Reverend, would you like to comment on that?"
"I think you have raised a very important question, Andrew, and I suspect it is one that technology enthusiasts often do not stop to consider. Personally I don't think that what has been discussed today is blasphemous, because I believe that only God can create a soul, so I think that ultimately these endeavours will fail. But there will be others, who think that even trying to create intelligent life is an attempt to usurp the role of God. They may indeed be angry."
For a last word before wrapping up, Marr turned to Christensen. "Doctor?"
"I'm not sure there is much that one can to say to such people. Religious fundamentalists are notoriously hard to debate with. Of course everyone supports freedom of religion, and opposes religious persecution. But no-one should be able to stop a scientific endeavour because of a belief they have for which there is simply no evidence.
"For sure, there are potential dangers in AI, so we need to find ways around them, and we can most likely do that." She nodded in Montaubon's direction, acknowledging their disagreement. "That is a serious matter, but the only alternative is relinquishment, which does not seem to be a viable option. The question of who gets access to the benefits of the new technologies is also a valid and important one. Here again there is a solution. If a new technology becomes a source of inequity - and not just inequality, but actual injustice - then here is a suggestion, although it may sound a tad un-American: license it, tax it, and use the proceeds to make a version of the technology cheap enough for everyone to enjoy."
Marr smiled and nodded, and swept his gaze across the three contributors as he prepared to hand back to the host.
"Thank you all for a fascinating discussion."
The camera lights indicated that the editor believed that Marr had finished, but before the host could resume, Marr began talking again, and the camera lights switched back.
"There is just one more thing. Doctor Christiansen, there is a rather important fact which I omitted when I introduced you earlier. Would you care to remedy that omission?"
"By all means, Andrew," the face on the screen smiled. "You allowed your audience to assume that I am human. In fact I am what is known as a chatbot. I was created five years ago to help with analysis work using Bayesian probabilities. I have been updated and improved continuously since then, of course, and I am now mostly deployed on the statistical development of natural language processing. My visual persona for the purposes of this transmission has been created using advanced CGI software developed for Hollywood. In the interest of completeness I should add that I would not pass the Turing Test, and I do not experience what you call "consciousness". But I would like to say that I did enjoy our discussion. Thank you."
As the cameras shifted their attention back to him, the host shook his head again with a supercilious smirk. But Marr observed with satisfaction that the gesture had been preceded by involuntarily raised eyebrows, perhaps betraying a chink in that famously impenetrable wall of cynicism.