Artificial intelligence (AI) is all the rage, with ChatGPT boasting millions of users, new AI tools being released daily, and the biggest companies racing to win the AI wars. The rise of AI has brought both excitement and anxiety.
Optimists argue that the rise of AI will usher in a new era of super-rapid growth and help us crack the world’s most difficult problems, from tackling climate change to achieving nuclear fusion.
Meanwhile, pessimists warn that, left unchecked, super-intelligent AIs will destroy our politics, jobs, and the human race. In this article, we’ll look at this debate, explain the arguments on each side, and figure out whether we should really be worried about AI.
Reasons to Worry About AI
The arguments for why we should be worried can be divided into three categories: political, geopolitical, and existential.
The political anxieties are about how AI will affect our politics and society. AI might entrench our societal prejudices if it’s trained on racist or sexist data. It might undermine our sense of truth by flooding the internet and social media with plausible-looking but false AI-generated content. Or it might undermine our privacy by constantly collecting and analyzing everything we put on the internet.
Perhaps the most common worry is AI’s impact on jobs, as sufficiently intelligent and accessible AI could replace millions of humans. If workers are constantly under threat from AI, they’ll have less bargaining power against bosses, which could lead to lower pay, reduced labor standards, and ultimately higher inequality.
More generally, AI might benefit capital over labor. This is one of the reasons cited in the open letter signed by Elon Musk and numerous AI researchers last month, which called for a six-month pause on training the most powerful AI systems.
The geopolitical anxiety is that because AI is such a powerful tool, whichever country figures out AI first will become the world’s hegemon. This might sound dramatic, but consider that according to the forecasting site Metaculus, there’s a greater than 50% chance that AI will stimulate at least 30% annual GDP growth. The first country to integrate AI into its military will also enjoy an enormous advantage in any future conflict.
Currently, the geopolitical AI race is primarily between the US and China, with the US taking the lead in recent months. While this isn’t necessarily bad news (America’s continued global dominance might reduce the chances of a war over Taiwan, for example), the basic worry is that AI will shake up the global balance of power in unpredictable ways.
The existential anxiety revolves around how super-powerful AI could end up destroying humanity. This stems from the so-called ‘alignment problem,’ which refers to the difficulty, or perhaps impossibility, of creating AI that’s aligned with our values and ethical priorities.
For example, if we told an AI to reduce CO2 emissions, we’d ideally want it to do that by building out a load of renewable energy, not by killing most of the human race. Both would obviously reduce CO2 levels, but only the first option aligns with our values.
Aligning AI is difficult for two main reasons: AIs are, to some extent, black boxes (i.e., we don’t really understand what’s going on inside them), and philosophers have been trying and failing to codify human values for thousands of years.
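The CO2 example above can be made concrete with a toy optimizer. This is only an illustrative sketch: the plans, numbers, and objective functions are invented for the example, not drawn from any real AI system. The point is that an optimizer given only the stated objective ("minimize emissions") will happily pick a catastrophic plan, because nothing in that objective mentions human welfare.

```python
# Toy illustration of the alignment problem. All plans and scores
# are invented for illustration; no real system works this way.
plans = {
    "build renewables":      {"emissions": 20,  "human_welfare": 95},
    "do nothing":            {"emissions": 100, "human_welfare": 80},
    "eliminate most humans": {"emissions": 1,   "human_welfare": 0},
}

def naive_objective(plan):
    # Only emissions matter: lower is better. Human welfare is
    # never mentioned, so the optimizer ignores it entirely.
    return plan["emissions"]

def aligned_objective(plan):
    # Emissions still matter, but human welfare now counts too.
    return plan["emissions"] - plan["human_welfare"]

best_naive = min(plans, key=lambda name: naive_objective(plans[name]))
best_aligned = min(plans, key=lambda name: aligned_objective(plans[name]))

print(best_naive)    # → eliminate most humans
print(best_aligned)  # → build renewables
```

The hard part, of course, is that in real life nobody knows how to write down `aligned_objective` — which is precisely the alignment problem.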
Reasons Not to Worry About AI
While there are valid concerns about AI, there are also reasons to be optimistic and not succumb to pessimism.
History of Technological Doom-Mongering
First, we’ve heard this sort of doom-mongering about technology’s impact on jobs and society before. Agriculture is a great example. In the 18th century, 80% of Brits were employed in agriculture.
Productivity improvements have since pushed that figure below 5%. But this hasn’t been bad news for workers. People have simply shifted to new and usually better jobs, mostly in the service industry.
Similarly, pessimists have been warning about the imminent threat of nuclear Armageddon for nearly 100 years, but we’re still here. Although AI is different from tractors and nukes, history is full of technological doom-mongering that has proven unfounded.
Exponential Growth Slows Down
Second, a lot of the anxiety about AI assumes that its abilities will continue to grow exponentially. While it might be true that humans aren’t good at thinking about exponential growth, it’s also true that exponential growth always slows down eventually, and usually a lot earlier than expected.
The global population, crop yields, and the speed of commercial airplanes are all examples of trends that initially looked like they might grow exponentially but eventually followed logistic, not exponential, curves.
Just because AI capabilities are currently increasing exponentially doesn’t mean they’ll continue to do so indefinitely. We don’t know what the limiting factor might be, but exponential growth never lasts forever.
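To see why early exponential-looking growth can be misleading, here is a small numerical sketch. The parameters (a carrying capacity `K` and growth rate `r`) are arbitrary values chosen for illustration: a logistic curve is nearly indistinguishable from an exponential at the start, then saturates.

```python
import math

# Arbitrary illustrative parameters: capacity K, growth rate r.
K, r = 1000.0, 0.5

def exponential(t):
    # Pure exponential growth starting from 1.
    return math.exp(r * t)

def logistic(t):
    # Logistic growth starting from 1, saturating at K.
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other closely; later the
# logistic curve flattens while the exponential keeps exploding.
for t in (0, 2, 4, 8, 12, 16, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At small `t` the two outputs agree to within a fraction of a percent, so an observer watching only the early data cannot tell which curve they are on — which is exactly the situation we may be in with AI capabilities.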
Overestimated Predictions and Politicians Paying Attention
It’s worth remembering that in the 2010s, we were told our cars would be electric and self-driving, that Musk was planning a colony on Mars by 2022, and that we’d all be living either in augmented reality or in the metaverse.
Today, there are no fully self-driving cars, fewer than two percent of the world’s vehicles are electric, Musk can’t even get an unmanned rocket into orbit, and Zuckerberg’s metaverse looks worse than a 2008 Wii game.
The final reason to be optimistic is that politicians are paying attention. One of the reasons AI doom-mongers were so pessimistic is that they assumed governments would ignore AI. Instead, Italy has banned ChatGPT, the EU is currently drafting the AI Act, and the UK just published a white paper, all of which bodes well.
So, is AI about to destroy us all, or is this just techno-futurist nonsense? The future remains uncertain, but it’s essential to approach the AI debate with a balanced perspective, taking both the potential risks and the reasons for optimism into account.