Elements of The Terminator map onto real fears about the future of AI. Photo: Pictorial Press Ltd / Alamy Stock Photo
“Imagine,” Scottish ethicist William MacAskill urges us, “living through the life of every human being who has ever lived.” You’d start 300,000 years ago in Africa as the first human, and when they died be reborn as the second, and so on and so on. You’d experience all the everyday realities of humanity: all the cruelty and kindness, the technological advances, disease, sex, eating, childrearing and worrying.
Yet, if humanity exists as long as a typical mammalian species, your life would only just be starting. You might have as much as 99.5 per cent still to go. “The future,” he says, “is big.” If you were going to live all those lives – MacAskill estimates they could number 80 trillion – what choices would you make now to shape that future?
This thought experiment is MacAskill’s way of opening your mind to longtermism, or “the idea that positively influencing the longterm future is a key moral priority of our time”. That’s the thrust of his new magnum opus, What We Owe The Future, and it has already attracted supporters ranging from Stephen Fry to Elon Musk, Bill Gates to actor Joseph Gordon-Levitt.
“The key idea is just that it’s actually very common sense, morally speaking, that future people matter,” explains MacAskill. “Suppose you could prevent a pandemic, and it was going to occur in 100 years or 1,000 years, but you knew for sure that 1,000 people were going to die from it, or 100,000 people were going to die from it. Would it really matter what time it came? I think the answer is no. As long as you’re sure about the impact, it seems like harm is harm, whenever that occurs.”
A youthful and disarming 35-year-old, who speaks with the accent of his native Glasgow, William MacAskill seems an unlikely leader for a global intellectual movement of growing influence when he joins the Big Issue’s BetterPod on video call from San Francisco. Yet that’s exactly the position in which he finds himself.
At 28, MacAskill was hired as an associate professor at Oxford, the youngest such philosophy professor in the world. He went on to pioneer the idea of effective altruism – which aims to use evidence and careful analysis to do the most good with available resources. As co-founder of the Centre for Effective Altruism, he’s building a community of like-minded people who aim to follow its strictures. Its ideas have taken a strong hold in powerful and wealthy sections of Silicon Valley. E.A.s, as they call themselves, pledge a secular tithe of at least 10 per cent of their income to charity. For his part, MacAskill gives away all his income above £26,000.
In the early days of the movement, E.A.s primarily focused on helping people who are alive today, but they have become increasingly focused on the vast numbers of future humans. What We Owe The Future is the key text for that next big idea – the new bible for longtermism.
“Longtermism is about three things really,” says MacAskill. “It’s about taking seriously just how big the future could be, and just how great the stakes are in shaping the future. It’s about trying to then look for the events or challenges that could occur in our lifetime, that could actually have an impact, not just on the present generation, but indefinitely into the future for the future generations to come. And then thirdly, it’s about taking action to try and navigate those challenges so that we put humanity on to a better trajectory.”
The most obvious way that longtermism interacts with current political discourse is in the drive to tackle climate change. We know that our actions are causing the Earth to warm, and we know that our choices now could cause catastrophic harm to future generations. A majority of people want to act to stop those harms so, at least on that issue, longtermism is mainstream.
In Wales, they’ve appointed the world’s first Future Generations Commissioner (Sophie Howe, another previous guest on BetterPod) to protect the interests of the unborn. Big Issue founder Lord John Bird continues to fight for a UK-wide Future Generations Act to force public bodies to act now for a better tomorrow. He’s driven, among other issues, by the desire to harness joined-up thinking to prevent poverty.
William MacAskill agrees that climate change certainly requires urgent action, but he frustrates green activists because he doesn’t necessarily think concentrating on that issue is the best way to help future humans. In fact, one of his more surprising conclusions is that the reason we should leave coal in the ground is not solely to prevent the release of carbon into the atmosphere, but to leave it as a resource for future generations. In the event of a civilisation-destroying calamity, he argues, humanity would need coal in order to reindustrialise.
Climate change has thousands, if not millions, of the best minds working on it. So, for MacAskill, the true longtermist should look to challenges that have so far received less recognition.
“I think one surprising conclusion is just that the things that get the most attention in the news, or in public conversation are not necessarily the highest priorities,” he says. “Climate change is this enormous challenge. Even if we do well on it – and I’m currently feeling kind of optimistic – it’s still going to result in millions of people dead.
“But [then you have] issues that are just really not part of mainstream discussion, like the risk of engineered pandemics, or the risk of human-level artificial intelligence (AI). Even the risk of World War III. I think when you both take seriously the arguments for worrying about these things and look at the opinions of expert forecasters, then you get the conclusion that these issues are at least as important and often just radically, radically more neglected.”
MacAskill started writing the book before Covid. Pandemic prevention is surely now an easier sell. But when we talk about priorities for action, many will still struggle to put their efforts towards staving off the threat from AI over, for example, global poverty or homelessness on the streets of their own country. Yet, if we take seriously the weight of responsibility to future humans, recognising that “they will have hopes and joys and pains and regrets, just like the rest of us”, then MacAskill says we should be devoting far more time to the threat of a Terminator future.
“There is one aspect of The Terminator that I think is correct. In the Terminator universe, they start developing more and more powerful AI systems. And then they create this one – Skynet – that’s particularly powerful. Skynet realises that humanity is a threat to itself, and therefore takes defensive action. It’s a kill or be killed scenario,” he says.
“That actually does map on to the worry that many leading AI researchers have, which is that once you’ve created artificially intelligent systems, with their own goals, well, they will want to preserve themselves. And they might well see humanity as a potential threat.”
If AI could potentially cause the destruction of humanity as a whole then, following utilitarian logic – aiming to do the most good for the most people (present or future) – it suddenly jumps up our list of priorities.
“There are many, many problems in the world, many things that impact the long term. How do you prioritise among them?” asks MacAskill. “Even asking the question, that’s the most important starting point. But what I argue is that when we take an action, we should think in terms of: what’s the probability that we can make a difference? And then secondly, how good or bad would the result of that difference be? You can use that to start to prioritise among these different challenges.”
William MacAskill’s other big assertion is that individuals can make a meaningful difference. Depending on how you look at it – and how intimidated you are so far by the intellectual rigour involved in being an effective altruist – this might either be great news, or an enormous responsibility. But either way, it’s a call to action.
“I think absolutely one person can make a difference,” says MacAskill. “We see that throughout history. And it’s not just Martin Luther King, Jr. or Rosa Parks. For example, by donating to effective causes, you are a meaningful part of this larger effort. And the larger effort makes an enormous difference.
“Sometimes you are the decisive person, and some major event just wouldn’t have happened without you. More often, though, you’re part of like 1,000 people working together to have some really huge impact. If you take that impact and divide by 1,000, it’s still a really meaningful impact. And so, yeah, I think everyone has the power to do an enormous amount of good in their life.”
What We Owe The Future by William MacAskill is out now (Oneworld).
Listen to the full BetterPod conversation with William MacAskill here, or from your usual podcast provider.