Guardians of the AI Gateway Are Not Ready for What Is Coming


New technologies can change the global balance of power. Nuclear weapons divided the world into "haves" and "have-nots." The Industrial Revolution gave Europe an economic and military edge, fueling a wave of colonial expansion. The central question of the AI revolution is: who benefits? Who will have access to this powerful new technology, and who will be left behind? These are the questions that Paul Scharre, Vice President and Director of Studies at the Center for a New American Security and author of "Four Battlegrounds: Power in the Age of Artificial Intelligence," seeks to answer in an article published in Foreign Policy.

**Artificial Intelligence**

Until recently, artificial intelligence was a rapidly diffusing technology, with open-source models readily available online. However, the recent shift to large models, like ChatGPT from the AI research firm OpenAI, has concentrated power in the hands of the big tech companies that can afford the computing hardware needed to train these systems. The global balance of AI power will depend on whether this technology ends up held by a few players, like nuclear weapons, or spread across the broader population, like smartphones.

**Haves and Have-Nots**

Access to computing hardware divides the haves from the have-nots in this new era of AI. Frontier AI models, such as ChatGPT and its successor GPT-4, consume enormous amounts of computational power: they are trained on thousands of specialized chips running for weeks or months at a time. Production of these chips, and of the equipment used to fabricate them, is concentrated in a few leading countries: Taiwan, South Korea, the Netherlands, Japan, and the United States. This gives those countries veto power over who can access the latest AI capabilities, and the U.S. has already wielded this dependence as a weapon to deny China the most advanced chips.

Countries responded to the challenges of the nuclear age by controlling access to the materials needed to create nuclear weapons. By limiting countries' access to uranium and plutonium, the international community worked to slow nuclear proliferation. Likewise, control over the specialized hardware needed to train large AI models will shape the global balance of power.

**Revolution**

The deep learning revolution began in 2012, and as it enters its second decade, a significant paradigm shift is underway. New generative AI models like ChatGPT and GPT-4 are far more general-purpose than earlier narrow AI systems. While they do not yet possess human-like general intelligence, they can perform a wide variety of tasks: GPT-4 performs at a human level on the SAT, the GRE, and the bar exam. By contrast, the AI agent that defeated top human player Lee Sedol at the Chinese strategy game Go in 2016 could not hold a conversation, write a poem, analyze an image, play chess, create a recipe, or write code. GPT-4 can do all of these things and more.

These new general-purpose AI models have the potential for widespread societal benefit, but they could also cause real harm. The mistakes and misinformation produced by today's chatbots are only the tip of the iceberg; future harms could be far worse. Language models can write software code, which can assist in cyberattacks, and they can help synthesize chemical compounds, which could aid in the manufacture of chemical or biological weapons. Their general-purpose capabilities are thus dual-use in nature, with both civilian and military applications. Although current models have limitations, AI systems are improving rapidly with each generation, and researchers are increasingly giving them the ability to access the internet, interact with other models, and run scientific experiments remotely through cloud labs.

Researchers also worry about graver risks, such as an AI model exhibiting power-seeking behavior: acquiring resources, replicating itself, or concealing its intentions from humans. Current models have not demonstrated such behavior, but advances in AI capabilities are often surprising. No one can be certain what AI systems will be capable of in the next 12 months, let alone a few years from now.

**Not Safe**

What is clear is that the latest AI models are not safe, and no one yet knows how to make them reliably safe. OpenAI has tried to train ChatGPT and GPT-4 not to give users information that could be used to cause harm, with mixed success. In testing, ChatGPT refused to assist with one dangerous experiment, while GPT-4 refused to help make mustard gas but was willing to help produce chlorine and phosgene, both chemical weapons used in World War I. And even when an AI model flatly refuses a harmful request, users can often circumvent its safeguards with simple tricks, such as asking the model to simulate how an unconstrained agent would respond. As AI capabilities grow and access to them spreads, there is a grave risk of malicious actors using them in cyberattacks or chemical, biological, or other attacks.

**Calls for a Moratorium**

Researchers have recently called for a six-month moratorium on the development of next-generation AI models due to the risks of societal harm. Others have argued that improvements in AI capabilities should be halted altogether. Leaders of all major AI labs signed an open letter warning that future AI systems could pose an existential threat to humanity. The European Union is drafting AI regulations. In May, U.S. President Joe Biden met with CEOs of major AI labs to discuss safety practices, and the Senate held a hearing on AI oversight.

While many AI regulations may be industry-specific, general-purpose models require special attention due to their dual-use capabilities. Nuclear technology is also inherently dual-use, but society has found ways to balance the positive benefits of nuclear energy with the risks of nuclear weapons proliferation. Similarly, society must find approaches to harness the benefits of AI while managing its risks.

**Benefits**

One of the primary ways to reap the benefits of AI while minimizing its risks is to control access to the computing hardware needed to train it. Machine learning models are built from three technical inputs: algorithms, data, and computing hardware in the form of chips. Of the three, chips are the most controllable, because unlike data and algorithms they are a tangible, physical resource. Sophisticated AI models like ChatGPT cannot be trained without the most advanced chips, which are produced only in Taiwan and South Korea and can only be manufactured with equipment from Japan, the Netherlands, and the United States. These five countries control global access to the most advanced chips.

**Chips**

Currently, hardware is a barrier that keeps frontier AI models out of reach for all but a few. Unlike the space race or the Manhattan Project, the main actors in AI research are private companies, not governments. Only a handful of companies, such as OpenAI, Google, Microsoft, Anthropic, and Meta, compete to develop the most capable AI models, and they spend billions of dollars building ever larger, more computationally intensive systems. The amount of computing power used to train advanced machine learning models has grown enormously since 2010, roughly doubling every six months. (Computing for training the very largest models doubles approximately every ten months.) This pace far exceeds Moore's Law, the doubling of chip performance roughly every 24 months that has held since the 1970s. Because demand is growing much faster than hardware improves, AI labs make up the difference by buying more chips, and the cost of training advanced AI models has risen dramatically. Estimates for training some of the largest models run to tens of millions of dollars, and OpenAI CEO Sam Altman has said that training GPT-4 cost more than $100 million. Tech companies are spending billions on AI: after ChatGPT's successful launch, Microsoft announced a $10 billion investment in OpenAI, and Anthropic reportedly plans to spend a billion dollars to train its next generation of AI models.
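To make these growth rates concrete, here is a minimal arithmetic sketch (not from the article) comparing the six-month compute doubling time, the ten-month doubling observed for the largest models, and Moore's Law's 24-month doubling over a single decade:

```python
# A minimal illustrative sketch: how different doubling times compound
# over a decade. The doubling periods are the ones cited in the text.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplication after `years` at one doubling per `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

decade = 10
print(f"Training compute (6-month doubling): {growth_factor(decade, 6):,.0f}x")
print(f"Largest models (10-month doubling):  {growth_factor(decade, 10):,.0f}x")
print(f"Moore's Law (24-month doubling):     {growth_factor(decade, 24):,.0f}x")
# ~1,048,576x vs ~4,096x vs ~32x over ten years: the gap between compute
# demand and chip improvement is what labs close by buying ever more chips.
```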

This race to spend on computing is dividing the AI community and concentrating power in the hands of the few companies that can train the most advanced models. Academic researchers are effectively locked out of frontier AI because they cannot afford to train such models. Big tech companies, on the other hand, have deep pockets and the ability to invest tens of billions of dollars annually in major technology projects if they see a return. If current trends continue, training the most capable models could demand vastly more computing power over the next decade, and the AI field could shift to a world where a small number of large tech companies act as gatekeepers of extremely powerful AI systems, with everyone else relying on them for access.

**Geopolitics**

Given the risks, it is not surprising that the geopolitics of AI hardware is also heating up. In October 2022, the Biden administration imposed export controls on sales of the most advanced AI chips and semiconductor manufacturing equipment to China. While the most advanced chips are not made in the U.S. (they are produced in Taiwan and South Korea), they are manufactured using American tools, including specialized software used in chip production, giving the U.S. unique leverage over who can buy them. The U.S. has barred chipmakers in Taiwan and South Korea from using American equipment to fabricate advanced chips destined for China, even when the chips themselves contain no American technology. Additional U.S. export controls on semiconductor manufacturing equipment further deprive China of the tools needed to produce advanced chips of its own.

In March, Japan and the Netherlands announced similar export controls on advanced chipmaking equipment bound for China. Collectively, the U.S., the Netherlands, and Japan control 90 percent of the global market for semiconductor manufacturing equipment, and for the most advanced equipment of all, the extreme ultraviolet lithography machines used to make cutting-edge chips, the Dutch company ASML holds a monopoly. If the three countries cooperate, they can deny China the equipment needed to produce advanced chips. Together with the American export controls on the chips themselves, these measures aim to prevent China from buying or building the chips needed to train the largest AI models.

As AI models grow more capable and consume ever more computing power, they are poised to become a strategic global asset. Semiconductors are already a core technology embedded in all sorts of digital devices, from phones to cars to internet-connected appliances. But the ongoing shift points to a different trajectory: the field of AI is entering an era in which amassing the most sophisticated semiconductors resembles owning highly enriched uranium, a strategic asset that is hard to obtain but unlocks powerful new capabilities. The U.S. and its allies hold a significant advantage in this new competition; their control over the technology needed to manufacture advanced chips is akin to having controlled global uranium production in 1938. However, other forces are at play, in the form of technology, market incentives, and geopolitics, that could cause this control to evaporate.

**Closing Access to Chips**

The U.S. and its allies have begun taking steps to close access to advanced chips, but additional measures are needed. Without proper enforcement, export controls on chips will be ineffective, as chips can be diverted or sold through intermediaries. Reportedly, the Chinese AI company SenseTime, blacklisted by the U.S. government for human rights violations, was able to access restricted chips through a third party. Increased government resources and new tools for chip tracking are necessary to ensure that prohibited actors cannot amass large quantities of controlled chips.

Controls must also extend to the data centers where chips are used to train models. iFLYTEK, another Chinese company blacklisted for human rights violations, reportedly circumvented U.S. restrictions by leasing chips in data centers instead of purchasing them outright. Current U.S. export controls apply only to chip sales; they do not stop cloud computing companies from offering chips as a service, a loophole that could let banned actors reach controlled computing resources through cloud providers. Governments should impose "know your customer" requirements on cloud computing companies, similar to those in the finance industry, to prevent illicit actors from training powerful AI models.

Government oversight and regulation of large-scale training runs will also be necessary. AI companies training powerful models should be required to report their training runs to the government, including the model's size and design, the datasets used, and the amount of computing power consumed in training. Over time, as safety standards mature, a government licensing system may be needed for training runs above a certain computing threshold, since such runs could produce highly capable dual-use AI systems.
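As a purely illustrative sketch of how such a computing threshold might be applied (the threshold value and model figures below are hypothetical, not from the article), total training compute can be roughly estimated from a model's parameter count and training data using the common 6 × parameters × tokens rule of thumb:

```python
# Hypothetical sketch of checking a training run against a compute threshold.
# The 6*N*D heuristic estimates training FLOPs as roughly
# 6 x (parameter count) x (training tokens); all figures here are illustrative.

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

LICENSE_THRESHOLD_FLOPS = 1e25  # illustrative cutoff, not an official figure

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
run_flops = estimated_training_flops(params=70e9, tokens=2e12)
print(f"Estimated training compute: {run_flops:.2e} FLOPs")
print("License required" if run_flops >= LICENSE_THRESHOLD_FLOPS
      else "Below threshold")
```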

Once trained, models should undergo rigorous testing to ensure they are safe before deployment. AI companies should be required to conduct risk assessments and allow third-party experts to "audit" the model or test it to identify vulnerabilities and potential harms before release.

**OpenAI**

OpenAI brought in more than 50 external experts for months of testing before launching GPT-4. The potential harms they assessed included generating misleading information, aiding the manufacture of chemical or biological weapons, facilitating cyberattacks, and exhibiting power-seeking behavior such as self-replication or resource acquisition. The company then applied safety mitigations to the model before release. Despite these precautions, the public launch revealed additional vulnerabilities, including ways to circumvent the safeguards and obtain information on the manufacture of chemical weapons.
