Why it does not make sense to ban autonomous weapons

In May 2019, the Defense Advanced Research Projects Agency (DARPA) stated, “No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”

Fast forward to August 2020, when an AI built by Heron Systems beat one of the Air Force’s top fighter pilots a flawless 5 to 0 in DARPA’s AlphaDogfight Trials. Heron’s AI out-maneuvered the human pilot again and again, pushing past the limits of human G-force tolerance with unconventional tactics, lightning-fast decision-making, and deadly accuracy.

In September 2020, then-US Secretary of Defense Mark Esper announced that the Air Combat Evolution (ACE) program will put AI in the cockpit by 2024. The stated goal, to be clear, is to “assist” pilots rather than “replace” them. However, it is hard to imagine how a human can reliably be kept in the loop in the heat of battle against other AI-enabled platforms when humans simply aren’t fast enough.

On Tuesday, January 26, the National Security Commission on Artificial Intelligence (NSCAI) met and recommended against banning AI for such applications. In fact, Vice Chairman Robert Work argued that AI may well make fewer mistakes than its human counterparts. The Commission’s recommendations, expected to be delivered to Congress in March, stand in direct opposition to the Campaign to Stop Killer Robots, a coalition spanning 30 countries and numerous non-governmental organizations that has campaigned against autonomous weapons since 2013.

There seem to be many good reasons to support a ban on autonomous weapons systems, including the destabilizing military advantage they would confer. The problem is that AI development cannot be stopped. Unlike nuclear enrichment facilities, which are visible, and fissile materials, which can be controlled, AI development is far less observable and therefore nearly impossible to police. Moreover, the same AI advances that are transforming smart cities can easily be repurposed to make military systems more effective. In other words, whether we like it or not, this technology will be available to aggressive states that will use it to pursue military dominance.

So we know these AI systems are coming. We also know that no one can guarantee a human will stay in the loop in the heat of battle – and, as Robert Work argues, we may not even want one to. Whether it serves as a deterrent or as fuel for a security dilemma, the reality is that the AI arms race has already begun.


“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” – Elon Musk

As with most disruptive technological innovations that carry potential unintended consequences, the answer is almost never to ban them, but rather to ensure that their use is “acceptable” and that the systems themselves are “protected.” Indeed, as Elon Musk suggests, we should be very careful.

Acceptable use

Take facial recognition, which is coming under immense scrutiny and a growing number of bans across the US. It is not the technology that is the problem, but its acceptable use. We need to define the circumstances under which such systems can and cannot be used. For example, no modern law enforcement agency would ever get away with showing a victim a single suspect photo and asking, “Is this the person you saw?” Equally unacceptable is using facial recognition to blindly identify potential suspects (not to mention the uneven performance of such technologies across different ethnicities, which stems from everything from limitations in the AI training data to the camera sensors themselves).

A police car equipped with an Automated License Plate Reader (ALPR) (Adobe Stock)

Another technology that suffered from early abuse is the automated license plate reader (ALPR). Not only are ALPRs useful for identifying vehicles of interest in real time (e.g., expired registrations, suspended drivers, even outstanding arrest warrants), but the resulting database of license plates and their geographic locations proved very useful for locating suspect vehicles after a crime. It was quickly determined that this practice violated civil liberties, and we now have formal guidelines on data retention and acceptable use.

Both are examples of incredibly useful but controversial AI innovations that need to be paired with well-designed acceptable use policies (AUPs) that address concerns around explainability, bias, privacy, and civil liberties.

Protection

Unfortunately, defining AUPs may turn out to be the “easy” part: it simply requires that we think more carefully about, and formalize, which circumstances are appropriate and which are not (though we need to do so much faster). The harder consideration in adopting AI is protecting against a danger that is not widely appreciated today: AI is hackable.

AI is vulnerable to adversarial data poisoning and model evasion attacks that can alter the behavior of automated decision-making systems. Such attacks cannot be prevented with traditional cybersecurity techniques, because the inputs to the AI – during both model training and model deployment – fall outside an organization’s cybersecurity perimeter. Additionally, there is a large gap in the skills required to protect these systems, because expertise in cybersecurity and expertise in deep learning are often mutually exclusive: deep learning professionals typically don’t have an eye for how malicious actors think, and cybersecurity professionals typically lack the in-depth knowledge of AI needed to understand the potential vulnerabilities.
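To make the evasion side of this concrete, the following is a minimal sketch of a gradient-based evasion attack (the fast gradient sign method) against a hypothetical PyTorch image classifier. The toy model, input shapes, and epsilon value are illustrative assumptions, not details of any real deployed system.

```python
# A minimal sketch of a model-evasion (adversarial example) attack using the
# fast gradient sign method (FGSM), assuming a PyTorch image classifier.
# The model and tensors below are stand-ins, not any specific fielded system.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, true_label, epsilon=0.03):
    """Nudge each pixel in the direction that increases the model's loss,
    producing an input that looks unchanged to a human but can be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage with a toy classifier and a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
model.eval()
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
adversarial = fgsm_evasion(model, image, label)
print(model(image).argmax(1), model(adversarial).argmax(1))  # predictions may now differ
```

A real attacker would target the deployed model’s weights (or a surrogate model), but even this toy version shows why the manipulation happens at the input level, outside the reach of firewalls and traditional perimeter defenses.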

A main battle tank. Images like this can be subjected to data poisoning attacks that only become visible during AI model training (Adobe Stock).

As an example, consider the task of training an automated target recognition (ATR) system to identify tanks. The first step is to curate thousands of training images to teach the AI what to look for. A malicious actor who understands how the AI works can embed hidden images that are nearly invisible to data scientists but that transform into an entirely different image when resized to the input dimensions used during model training. In this case, the picture of the tank above can be poisoned so that it switches to a school bus at model training time. The resulting ATR is then trained to identify both tanks and school buses as threat targets. Remember the difficulty of keeping a human in the loop?

A school bus. An example of a hidden image that only appears at AI training time (Adobe Stock).
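To illustrate the mechanics, here is a minimal sketch of how such an image-scaling poisoning attack could work, under the assumption that the victim’s preprocessing pipeline downscales images with naive stride-based subsampling. The file names, resolutions, and the craft_poisoned_image helper are hypothetical.

```python
# A minimal sketch of an image-scaling poisoning attack, assuming the victim's
# training pipeline downscales images with naive stride-based subsampling
# (as some nearest-neighbor resizers effectively do).
import numpy as np
from PIL import Image

TARGET = 224  # model input resolution assumed by this sketch

def naive_downscale(img: np.ndarray, size: int) -> np.ndarray:
    """The resizer this attack targets: keep every k-th pixel, drop the rest."""
    sy, sx = img.shape[0] // size, img.shape[1] // size
    return img[: sy * size : sy, : sx * size : sx]

def craft_poisoned_image(cover: np.ndarray, hidden: np.ndarray) -> np.ndarray:
    """Overwrite only the pixels the resizer keeps, so the full-resolution image
    still looks like `cover` while its downscaled copy becomes `hidden`."""
    sy, sx = cover.shape[0] // TARGET, cover.shape[1] // TARGET
    poisoned = cover.copy()
    poisoned[: sy * TARGET : sy, : sx * TARGET : sx] = hidden
    return poisoned

# "tank.png" (high resolution) and "school_bus.png" are hypothetical inputs.
cover = np.array(Image.open("tank.png").convert("RGB"))
hidden = np.array(Image.open("school_bus.png").convert("RGB").resize((TARGET, TARGET)))

poisoned = craft_poisoned_image(cover, hidden)
assert np.array_equal(naive_downscale(poisoned, TARGET), hidden)
Image.fromarray(poisoned).save("poisoned_tank.png")  # still looks like a tank to a reviewer
```

Note that the trick only works because this kind of resizer discards most of the source pixels; resamplers that average over every source pixel (for example, area interpolation) largely neutralize it, which is exactly the sort of detail that can fall between the cracks separating cybersecurity and data science teams.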

Many will dismiss this example as unlikely or even impossible, but remember that neither AI experts nor cybersecurity experts fully understand the problem on their own. Even where data supply chains are secured, breaches and insider threats happen every day, and this is just one example out of an unknown number of possible attack vectors. If we have learned anything, it is that every system is hackable given a motivated malicious actor with enough computing power – and AI was never built with security in mind.

It doesn’t make sense to ban autonomous weapons systems because, in reality, they are already here. Given the nature of AI innovation, we cannot police their development, and we cannot guarantee that a human will stay in the loop. Instead, we need to define when it is acceptable to use such technologies and, in addition, take every measure available to protect them from the adversarial attacks that malicious and state actors are undoubtedly developing.

This article was originally published by James Stewart on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the darker side of technology, the more troubling implications of new tech, and what we need to look out for. You can read the original article here.
