The steam engine changed the world. Artificial intelligence could destroy it.

Estimated read time: 7 min

Industrialization meant the widespread adoption of steam power. Steam power is a general purpose technology – it powered factory equipment, trains and farm machinery. Economies that embraced steam left behind – and conquered – those that didn’t.

AI is the next big all-purpose technology. A 2018 report from the McKinsey Global Institute predicted that AI could generate $13 trillion in additional global economic activity by 2030, and countries leading AI development will capture a disproportionate share of these economic benefits.

AI also improves military power. AI is increasingly being applied in situations that require speed (such as defending against short-range projectiles) and in environments in which human control is logistically inconvenient or impossible (such as underwater or in areas blocked by the signal).

In addition, countries leading in the development of AI will be able to exercise their power by setting norms and standards. China is already exporting AI-based surveillance systems around the world. If Western countries cannot offer an alternative system that protects human rights, then many countries could follow China’s techno-authoritarian lead.

History shows that as the strategic importance of a technology increases, countries are more likely to exercise control over that technology. The British government provided funds for the early development of steam engines and provided other forms of support for the development of steam power, such as patent protection and tariffs on imported steam engines.

Similarly, in fiscal year 2021, the US government spent $10.8 billion on AI R&D, of which $9.3 billion came from the Department of Defense. Chinese government spending on AI is less transparent, but analysts say it’s roughly comparable. The United States has also attempted to restrict Chinese access to specialized computer chips, which are crucial for the development and deployment of AI, while securing our own supply with the CHIPS and Science Act. Think tanks, advisory boards and politicians constantly urge US leaders to follow China’s AI capabilities.

So far, the AI ​​revolution fits the pattern of previous general-purpose technologies. But the historical analogy breaks down when considering the risks posed by AI. This technology is much more powerful than the steam engine, and the risks it poses are much greater.

The first risk comes from an accident, a miscalculation or a malfunction. On September 26, 1983, a satellite early warning system near Moscow reported that five US nuclear missiles were heading towards the Soviet Union. Fortunately, a Soviet lieutenant colonel, Stanislav Petrov, decided to wait for confirmation from other warning systems. Only Petrov’s good judgment prevented him from passing the warning up the chain of command. Had he done so, the Soviet Union might well have launched a retaliatory strike, sparking full-scale nuclear war.

In the near future, countries may feel compelled to rely entirely on AI decision-making due to the speed advantage it offers. The AI ​​could make dramatic miscalculations that a human wouldn’t, leading to a crash or escalation. Even if the AI ​​behaves roughly as expected, the speed at which fights could be fought by autonomous systems could lead to rapid escalation cycles, similar to the “flash crashes” caused by high-speed trading algorithms. .

Even if not integrated into weapon systems, poorly designed AIs could be extremely dangerous. The methods we use to develop AI today – essentially rewarding AI for what we perceive to be correct results – often produce AI systems that do what we told them to do, but not what what we wanted them to do. For example, when researchers sought to teach a simulated robotic arm to stack Lego bricks, they rewarded it for placing the underside of a brick higher on a surface – and it flipped the bricks over. upside down rather than stacking them.

For many tasks, a future AI system could be entrusted, hoarding resources (e.g. computing power) and preventing itself from being disabled (e.g. by hiding its intentions and actions from humans) might be useful. So if we develop a powerful AI using the most common methods today, it may not do what we designed it to do and hide its true purposes until it perceives that it doesn’t have to – in other words, until it can dominate us. Such an AI system would not need a physical body to do this. He could recruit human allies or operate robots and other military equipment. The more powerful the AI ​​system, the more concerning this hypothetical situation. And competition between countries can make accidents more likely, if competitive pressures lead countries to devote more resources to developing powerful AI systems at the expense of the safety of those systems.

The second risk is that competition for AI superiority could increase the chances of a conflict between the United States and China. For example, if it appears that a country is about to develop a particularly powerful AI, another country (or a coalition of countries) could launch a preemptive attack. Or imagine what could happen if, for example, advances in marine detection, partially enabled by AI, reduced the deterrent effect of nuclear missiles launched by submarines by making them detectable.

Third, it will be difficult to prevent AI capabilities from spreading once developed. The development of AI is currently much more open than the development of strategically important 20th century technologies such as nuclear weapons and radar. The most recent findings are published online and presented at conferences. Even if AI research becomes more secretive, it could be stolen. Although early developers and adopters may enjoy first mover advantage, no technology – even top secret military technologies like the nuclear bomb – has ever been exclusive.

Rather than calling for an end to competition between nations, it is more practical to identify pragmatic steps the United States could take to reduce the risks of AI competition and encourage China (and others) to do the same. Such stages exist.

The United States should start with their own systems. Independent agencies should regularly assess the risk of accident, malfunction, theft or sabotage of AI developed in the public sector. The private sector should be required to carry out similar assessments. We don’t yet know how to assess how risky AI systems are – more resources need to be devoted to this tricky technical problem. At the margin, these efforts will be made to the detriment of efforts to improve capacities. But investing in security would improve US security, even if it delays the development and deployment of AI.

Next, the United States should encourage China (and others) to make their systems secure. The United States and the Soviet Union entered into several nuclear arms control agreements throughout the Cold War. Similar steps are now required for AI. The United States should propose a legally binding agreement prohibiting the use of autonomous nuclear weapons launch control and explore “softer” arms control measures, including voluntary technical standards, to prevent accidental arms escalation autonomous.

Nuclear security summits convened by President Obama in 2010, 2012, 2014 and 2016 brought together the United States, Russia and China and led to significant progress in securing nuclear weapons and materials. Now, the United States and China should cooperate on AI safety and security, such as pursuing joint AI safety research projects and promoting transparency in safety and security research. AI security. In the future, the United States and China could jointly monitor signs of compute-intensive projects, to detect unauthorized attempts to build powerful AI systems, as the International Intelligence Agency does. atomic energy with nuclear materials to prevent nuclear proliferation.

The world is about to experience a transformation as dramatic as the Industrial Revolution. This transformation will pose immense risks. During the Cold War, the leaders of the United States and the Soviet Union realized that nuclear weapons linked the fates of their two countries. Another such link is being created in tech company offices and defense labs around the world.

Will Henshall is pursuing a master’s degree in public policy at Harvard’s Kennedy School of Government.

Post a Comment

Cookie Consent
We serve cookies on this site to analyze traffic, remember your preferences, and optimize your experience.
Oops!
It seems there is something wrong with your internet connection. Please connect to the internet and start browsing again.
AdBlock Detected!
We have detected that you are using adblocking plugin in your browser.
The revenue we earn by the advertisements is used to manage this website, we request you to whitelist our website in your adblocking plugin.
Site is Blocked
Sorry! This site is not available in your country.