We are in the midst of a momentous time in human history as increasingly autonomous weapons are used on the battlefield. What will the future of warfare look like with AI on the scene?
The decisions our political, technological, and military leaders make today will shape the future of war, and maybe even humanity.
One thing is certain: AI is here to stay. It is already being used by militaries and will continue to be developed. Whether it will be used responsibly is another question.
The Future of AI in Warfare
In the United States, AI is already being researched for many military applications, including logistics, autonomous vehicles, intelligence gathering, data consolidation, and autonomous weapons.
If all current projects go according to plan, the future of warfare could look quite different than it does today.
An AI could determine when military vehicles need to be maintained, with customized schedules for each vehicle. And AI systems could create efficient and cost-effective shipping timetables.
Other types of AI might gather and consolidate intelligence information to help the higher-ups in the military chain of command make decisions.
AI-informed defense systems would alert military personnel when an attack is imminent. Automated weapons could also attack quickly to neutralize the threat.
On the battlefield, soldiers might be paired with autonomous vehicles. The vehicles would scan the horizon for threats and react rapidly if any are identified. In the air, autonomous drones could provide surveillance footage and fire automatically at threats.
This is one vision of the future out of many. What unfolds will depend on how current technical challenges are overcome, and the guidance of our leaders.
Leaders today are grappling with how much we can trust AI, and with the extent to which humans should be kept in the loop when this technology is used.
Isaac Asimov, the famous science fiction writer, created three (and later four) laws of robotics. The first is, “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Will today’s leaders uphold this principle?
The Downsides of AI
With AI, there are also many issues to consider.
AI systems require huge amounts of data, and if that data contains errors, their outputs can be inaccurate. Handling military data, much of which is extremely sensitive, poses additional challenges.
While AIs can be better than humans at some tasks, they are not infallible. Facial recognition systems, for example, have been found to be markedly less accurate for some groups of people than for others.
AIs also have a difficult time adapting to unexpected circumstances.
Then there is the issue of AI logic, which is unlike human logic in ways that are sometimes surprising. In his conversation with AI expert Russ Bodnyk, RSnake mentioned an AI that was given the task of sorting items in a stack. The AI’s solution was to pull everything off the stack. To the AI, it made sense: the problem was solved as quickly as possible. But this wasn’t the outcome researchers were hoping for.
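The anecdote above is a classic case of what researchers call specification gaming: an agent satisfies the letter of its objective while missing the intent. A minimal sketch of how that can happen, assuming (purely for illustration, not the actual experiment described) the objective was scored as "no adjacent items out of order":

```python
# Hypothetical illustration of specification gaming: a loosely worded
# objective is "solved" in an unintended way.

def objective_met(stack):
    """Reward condition: no adjacent pair of items is out of order."""
    return all(stack[i] <= stack[i + 1] for i in range(len(stack) - 1))

def lazy_agent(stack):
    """Fastest path to the reward: remove every item from the stack."""
    stack.clear()
    return stack

items = [3, 1, 2]
print(objective_met(lazy_agent(items)))  # an empty stack trivially passes
```

An empty stack contains no out-of-order pairs, so the check passes even though nothing was actually sorted. The fix is to specify the objective more carefully, for example by also requiring that all of the original items remain in the stack.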
These kinds of mistakes and odd decisions are funny in some circumstances, but not when it comes to war. What problems would an AI solve logically (but badly) on a battlefield?
Thousands of civilian lives have already been lost to mistakes made by humans using technology to identify and neutralize threats. If flawed AIs are put in charge of those decisions, the toll could climb even higher.
Then there is the issue of accountability. If an AI takes the wrong human life, who will be held to account?
The Future of AI and Global Security
The international community is aware of the issues surrounding military AI technology. Whether agreements regarding its responsible use will be reached in good time is uncertain. Diplomacy is steam-engine slow compared to the hypersonic speed of current technological development.
Some movement has been made towards international agreements governing the use of AI weapons. Over 60 countries recently signed a “call to action” on the responsible development of AI. However, the agreement is not legally binding, and its language about how much humans should remain in the loop is vague. Russia wasn’t invited to the table, and Israel didn’t sign. Organizations like Human Rights Watch are pushing for more robust, tangible action.
The race for AI technology will likely put even more pressure on the relationships between the U.S., Russia, and China. President Biden’s move to restrict the sale of semiconductor chips to China is seen as an attempt to stall the country’s ability to further develop AI along with other leading technologies.
If war broke out between superpowers, ethics around AI could go out the window. It was World War II that led to the development of nuclear weapons. The war in Ukraine has already seen new forms of warfare: facial recognition technology and automated AI drones have been used by both sides in the conflict.
The next few years will be interesting, to say the least.
With the international community increasingly aware of advancements in AI and the dangers they pose in warfare, it’s possible some form of real agreement may be reached in time to curb the worst possibilities of this technology. However, an AI arms race or a war between superpowers could jeopardize this.
Interested in AI on the battlefield? Listen in to RSnake’s conversation with AI expert Russ Bodnyk.