Recent advances in AI make it seem like sentient robots are just around the corner. But the reality is more complicated – especially where the military is concerned.
There are still many issues, small and large, associated with AI that need to be worked out.
As RSnake guest Charlie Burgoyne, Founder and CEO of AI firm Valkyrie, said, “I actually think that there are fundamental unknowns about the nature of intelligence that we are not really that much closer to uncovering than we were 40 or 50 or 60 years ago.”
Aside from the bigger challenges like those mentioned by Burgoyne, there are also a host of other issues involved with AI in the military.
Technical Challenges of AI
According to an article from the Modern War Institute at West Point, data is a major sticking point because AI systems need access to enormous amounts of it. Consider that training an AI to recognize a weapon would involve showing it examples of every kind of weapon in existence. Not an easy task. And processing such large amounts of data could slow systems down.
Also, an AI system is only as good as the data that goes into it. Mislabeled or incorrect data could lead to serious errors.
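To see how little bad data it takes, here's a deliberately tiny sketch (not any real military system): a nearest-neighbour classifier whose call on a borderline object flips when a single training example is mislabeled. The data, labels, and feature values are all hypothetical.

```python
def nearest_label(train, x):
    """Return the label of the training point closest to feature value x."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

# Clean training data: (feature, label) pairs
clean = [(-2.0, "benign"), (-1.0, "benign"), (1.0, "weapon"), (2.0, "weapon")]

# Same data, but one point was mislabeled during annotation
noisy = [(-2.0, "benign"), (-1.0, "weapon"), (1.0, "weapon"), (2.0, "weapon")]

x = -0.9  # a benign object near the decision boundary
print(nearest_label(clean, x))  # benign
print(nearest_label(noisy, x))  # weapon -- one bad label changes the call
```

Real systems train on millions of examples rather than four, but the principle scales: errors concentrated near a decision boundary can quietly change what the system does in exactly the hard cases.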
In his chat with Burgoyne, RSnake also pointed out that AI systems are often intentionally seeded with bad data. Although the military wouldn’t allow open access to its technology, like the Microsoft chatbot RSnake mentioned, this is still a concern.
And military information is usually sensitive and highly classified, making it difficult to process and share.
Lack of Adaptability
AIs work well in simulations with clearly defined parameters, but they can be tricked easily in the real world. Self-driving cars have swerved off course because of three small stickers placed in an intersection. In a battle, where unforeseen incidents happen all the time, it’s easy to imagine an AI going awry.
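The sticker trick works because many models make decisions by weighting input features, and a small, carefully aimed change to those features can push a score across the decision boundary. This toy linear classifier (hypothetical weights and labels, nothing like a real perception stack) shows the idea:

```python
import math

# Hypothetical learned weights for a two-feature linear classifier
w = [0.6, -0.8]

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "stop sign" if score > 0 else "speed limit"

x = [1.0, 0.5]      # correctly classified: score = 0.6 - 0.4 = 0.2
print(classify(x))  # stop sign

# FGSM-style attack: take a small step against the sign of each weight
eps = 0.3
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]
print(classify(x_adv))  # speed limit -- a small nudge flips the label
```

The attacker never touches the model, only the input, which is why a few stickers on the pavement can be enough.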
AIs are frequently inaccurate. A 2020 study found that AI facial recognition systems are less precise with female subjects who have darker skin. The systems fare best on male subjects with light skin.
In his conversation with Burgoyne, RSnake said, “I have a vehicle that can detect humans or deer, and it regularly detects a dog as a deer and various other things. So, there is definitely a lot of room for improvement in these systems.”
Risks of AI Military Applications
Aside from the current technical difficulties with AI, there are also critical risks to consider.
Errors in Judgment
Even when AIs are accurate, they lack nuance.
The National Security Agency’s (NSA) SKYNET system (yes, just like in the Terminator movies) is a good example. A leaked slideshow revealed that a target identified as a terrorist courier was actually an investigative journalist working for Al Jazeera. It’s impossible for an AI to make this distinction. An automated weapon using this data could have killed an innocent man.
It’s estimated that between 22,000 and 48,000 civilians have been killed by U.S. drone strikes and airstrikes since 2001. These casualties occurred with human-operated weapons. What could be the cost to human life if AI makes these decisions?
Cybersecurity Vulnerabilities
AI is being developed to help with cybersecurity, but it is also vulnerable to cyber attack. The more automated a system becomes, the more exposed it is to cyber warfare.
Sophisticated hackers have already had enormous impact on businesses and even countries. Imagine the devastation that could occur if someone fed bad information into a military AI system, infected an automated weapon with a virus, or gained control of missiles.
Privacy Concerns
The military is developing AI for intelligence purposes, raising red flags regarding privacy.
First, there’s the issue of facial recognition. Clearview AI, a company that makes facial recognition software, has been involved in a number of legal battles over its acquisition of photos from sites like Facebook and Twitter without any consent. This hasn’t stopped the company from building an enormous database of people’s faces, or government agencies from using the software in the U.S. and abroad.
The U.S. Air Force awarded a contract to Clearview AI for research on augmented reality glasses that would scan people’s faces for security purposes.
AI is also used by the military to gather intelligence. One potential application is to create something called “digital patterns-of-life.” This would use an AI to comb the internet and consolidate an individual’s online footprint including purchase histories, subscriptions, and more. According to a 2020 Congressional Research Service report, private citizens could be targeted for these operations along with service members, suspected intelligence officers, and government officials.
This would represent an unprecedented level of access to people’s private lives.
Human Rights Implications of AI in Warfare
Human rights are also at stake.
Artificial intelligence cannot understand the value of human life. A Human Rights Watch article pointed out that only human judgment can weigh whether or not an act is in compliance with international law.
Right now, AIs don’t even understand what humans are. RSnake and Burgoyne discussed how difficult it is to code this knowledge. Programming an AI to discern when killing a human may or may not be appropriate is even more of a hurdle. Burgoyne said, “Ethics is certainly something that is way down the line once we've solved the problem for emotionality, motivation, core tenets of intelligence that we still don't really understand.”
Although AI weapons are still relatively new, organizations like Human Rights Watch and the Stop Killer Robots campaign are calling on the global community to set international limits on their development and use.
AI Weapons and Global Security
AI weapons could also have an adverse effect on global security.
With America, Russia, and China racing to acquire the latest and best AI technology, already tense relations could become even more strained. In the fall of 2022, President Biden announced restrictions on selling semiconductor chips to China. Even more restrictions could be forthcoming in 2023.
This move is widely understood as an attempt to slow China’s technological growth, including in AI. It certainly isn’t improving U.S.-China relations.
It’s also possible that terrorists could acquire autonomous weapons. Many of these weapons have been developed by private companies, some of which may be willing to sell to the highest bidder. That could further destabilize volatile regions and make terrorism harder to contain.
For better or worse, the American military is invested in developing AI technology. Although its growth seems rapid, there are still many technical problems to be solved before science fiction becomes reality. AI could be developed safely if leaders give as much attention to the ethical problems as to the technical ones.
For more on AI ethics and plant-based computer networks, tune in to RSnake’s conversation with Charlie Burgoyne now!