Artificial intelligence (AI) is already being used in weapons by a few powerful countries. Many are wondering how these developments will affect global security and the future of warfare.
In this article, we examine which countries are developing AI weapons, where the technology stands today, and the potential implications for international relations.
Who is Developing AI Weapons?
AI is widely viewed as the next major step in technology, and many countries are working on it.
However, some are more advanced than others.
The United States of America is already working on AI for many military applications including logistics, intelligence, data consolidation, self-driving vehicles, and weapons.
Weapons using AI range from semi- to fully automated, though most still have a human in the loop.
First are semi-automated systems like fire-and-forget weapons. These are missiles that guide themselves to a target once fired.
Next are loitering munitions such as the Switchblade, made by the American company AeroVironment. The larger Switchblade model carries a payload capable of taking out a vehicle. Although a human is in the loop with these drones, they navigate autonomously, and AeroVironment's CEO has said the technology to make them fully autonomous already exists.
The U.S. has also developed AI facial recognition technology that is being used in America and abroad. American tech company Clearview AI has either sold or given trials of its technology to 3,200 government agencies in the U.S. The technology is also being used in Ukraine to identify suspected enemies of the country, as well as the dead.
The U.S. military is also researching unmanned vehicles that support, or act as a "wingman" for, another vehicle operated by a human. One example is pairing an unmanned aircraft with a human-piloted fighter jet in the lead.
AI Partnership for Defense
The U.S. is committed to working with other countries to advance military AI technology. The Pentagon has initiated a partnership with 12 other countries to cooperate on AI development including tools for data sharing.
In total, there are 13 countries in the AI Partnership for Defense, which held its first meeting in September of 2021. Members include:
∙ The U.S.
∙ The United Kingdom
∙ South Korea
The countries were chosen not only because they are democracies with similar worldviews, but also because they have already developed some AI technology of their own.
Israel, for example, has developed the HARPY, a fully autonomous loitering munition that attacks radar emitters.
According to a 2020 Congressional Research Service report, China is America’s leading competitor in AI weaponry and plans to become the world leader in AI technology by 2030.
China is already using sophisticated face-recognition technology that could be adapted for military purposes. The government has claimed it can scan the faces of every one of its citizens in seconds.
In terms of weapons, China demonstrated a swarm of autonomous air vehicles in 2017. Later, a media report was released showing a computer simulation of similar vehicles destroying a missile launcher.
China is also known to be developing AI tools for cyber warfare.
Russia, meanwhile, has developed the KYB-UAV, a loitering munition that carries an explosive and crashes into its target. It is reportedly powerful enough to destroy a tank and can also be used for surveillance and reconnaissance.
The AI component on the KYB-UAV is a visual identification technology used to classify targets.
Russia has exported the KYB-UAV drones to Eritrea.
According to the Congressional Research Service report, Russia has also used AI for propaganda and surveillance on its own citizens, and in information operations against the U.S. and its allies.
Turkey's Kargu-2, another loitering munition, was reportedly used in Libya. A United Nations (UN) report said it attacked targets independently, without an operator. STM, the Kargu-2's manufacturer, states that the AI in these drones is only navigational and that the drones are otherwise under human control.
The Kargu-2 is said to have the ability to identify and fire on human targets.
STM signed an agreement in 2021 to export Kargu drones to an “undisclosed customer.”
How Effective is AI in Warfare?
AI could help make the military more efficient, freeing up personnel for more important tasks.
In terms of its use on the battlefield, AI has the potential to react much faster than human beings. In one simulated dogfight, an AI developed for unmanned aircraft defeated a human pilot, who never even got a chance to fire.
At the moment, AI is in the early stages of development, and there are a lot of bugs to be worked out. The systems are only as good as the data they are trained on. AI facial recognition, for example, has varying levels of accuracy across different genders and skin colors. AI systems also have a difficult time adapting to real-world circumstances: it's one thing to develop a successful AI for test conditions, but a battlefield is very different.
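The "only as good as the data" problem can be illustrated with a toy sketch. The following Python snippet is purely hypothetical (the groups, distributions, and numbers are invented, and no real system works this simply): a one-feature classifier is trained on data dominated by one demographic group, and the learned decision threshold ends up far less accurate for the underrepresented group.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical data: group B's feature distribution is shifted
    # relative to group A's, standing in for demographic variation.
    shift = 0.0 if group == "A" else 1.0
    mean = shift + (0.0 if label == 0 else 2.0)
    return [(random.gauss(mean, 0.5), label) for _ in range(n)]

# Training set dominated by group A (95%), mirroring a skewed dataset.
train = (sample("A", 0, 475) + sample("A", 1, 475)
         + sample("B", 0, 25) + sample("B", 1, 25))

def best_threshold(data):
    # Pick the cutoff minimizing training error (x >= t -> class 1).
    candidates = [i * 0.05 for i in range(-20, 81)]
    def err(t):
        return sum((x >= t) != bool(y) for x, y in data)
    return min(candidates, key=err)

def accuracy(data, t):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

t = best_threshold(train)
test_a = sample("A", 0, 500) + sample("A", 1, 500)
test_b = sample("B", 0, 500) + sample("B", 1, 500)
print(f"group A accuracy: {accuracy(test_a, t):.2f}")
print(f"group B accuracy: {accuracy(test_b, t):.2f}")
```

Because the threshold is fit almost entirely to group A's distribution, it sits in the wrong place for group B, and group B's accuracy suffers even though the model "works" overall. Real facial recognition systems are vastly more complex, but the underlying failure mode, training data that underrepresents some groups, is the same.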
However, countries that manage to wield AI effectively will certainly have a strategic advantage over others. They will likely have more efficient military capabilities, with weapons that are able to fire and react faster than those fully operated by humans.
AI and Global Security
The development of AI for warfare does have implications for global security. First, the push to secure military AI technology is turning into an arms race. The U.S., China, and Russia are all working to gain a leading edge in this realm. And it’s not cheap.
In 2018, the Department of Defense increased AI funding, giving the Joint Artificial Intelligence Center (JAIC) $1.75 billion over six years and the Defense Advanced Research Projects Agency (DARPA) a $2 billion multi-year investment for its AI projects. DARPA received authorization for an additional $75 million in the 2023 fiscal year.
Competition over AI weapons technology could contribute to already significant tensions between the U.S., Russia, and China.
Another serious concern is the potential for these weapons to fall into the hands of terrorists. Many loitering munitions, for example, are small and easy to transport, and because private companies are often involved in their development, proliferation is a real possibility. Take, for example, STM's sale of Kargu drones to an "undisclosed customer."
In February of 2023, over 60 countries signed a "call to action" on the responsible use of AI in the military. The 12-point declaration includes agreements to follow international humanitarian law, to ensure humans remain in control of the deployment of nuclear weapons, and to apply "appropriate levels of human judgment" in developing and deploying AI weapons. The organization Human Rights Watch pointed out that the agreement isn't legally binding and challenged the U.S. government to define what "appropriate levels of human judgment" actually means.
In March of 2023, Elon Musk and over 1,000 experts in the field publicly called for a hiatus in the creation of "giant digital minds." Musk co-founded OpenAI, the company behind ChatGPT and GPT-4, but later left the company.
Whether these announcements will actually help shape the progress of AI weaponry development remains to be seen.
Humans are only at the beginning of their journey with AI, in warfare and in everyday life. Although walking, talking killer robots aren't going to enter the theater of war any time soon, the decisions made in the next few years will be of utmost importance.
To find out more about AI and its use on the battlefield, tune into RSnake’s conversation with AI expert Russ Bodnyk.