The researchers propose an “ethically correct AI” for smart guns that locks out mass shooters

A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research describing a possible AI intervention for murder: an ethical lockout.

The big idea here is to stop mass shootings and other unethical uses of firearms by developing an AI that can recognize intent, judge whether that intent constitutes an ethical use, and ultimately render a firearm inert if a user attempts to ready it for improper fire.
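To make that pipeline concrete, here is a minimal sketch of how such an ethical lockout might be wired together. This is not from the paper; the names (`Context`, `ethical_lockout`, the `ETHICAL_USES` whitelist) are hypothetical placeholders assumed purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    PERMIT = "permit"
    LOCK = "lock"


@dataclass
class Context:
    """Observed situation around the firearm (hypothetical fields)."""
    location: str          # e.g. "rifle_range", "parking_lot", "home"
    inferred_intent: str   # e.g. "target_practice", "assault", "self_defense"


# Hypothetical whitelist of (location, intent) pairs judged ethical.
ETHICAL_USES = {
    ("rifle_range", "target_practice"),
    ("hunting_area", "hunting"),
    ("home", "self_defense"),
}


def ethical_lockout(ctx: Context) -> Verdict:
    """The loop the article describes: detect intent, judge it, lock if improper."""
    if (ctx.location, ctx.inferred_intent) in ETHICAL_USES:
        return Verdict.PERMIT
    # Anything the system cannot positively justify defaults to a lockout.
    return Verdict.LOCK


if __name__ == "__main__":
    print(ethical_lockout(Context("parking_lot", "assault")))          # Verdict.LOCK
    print(ethical_lockout(Context("rifle_range", "target_practice")))  # Verdict.PERMIT
```

The hard part, of course, is not this decision table but reliably inferring `inferred_intent` in the first place, which is exactly what the researchers argue recent AI reasoning systems could make feasible.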

That sounds like a lofty goal, and the researchers themselves refer to it as a “blue sky” idea, but the technology to make it possible already exists.

According to the team’s research:

Predictably, some will object as follows: “The concept you are introducing is attractive. But unfortunately it is nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, scientifically and technically?” We confidently answer in the affirmative.

The paper goes on to explain how recent breakthroughs involving long-term studies have led to the development of various AI-powered reasoning systems that could make it relatively straightforward to implement a simple ethical judgment system for firearms.

The paper doesn’t describe the development of a smart weapon itself, but rather the potential effectiveness of an AI system that can make the same kinds of decisions for firearm users as, for example, cars that lock out drivers who can’t pass a breathalyzer.

In this way, the AI would be trained to recognize the human intent behind an action. Describing the recent mass shooting at a Walmart in El Paso, the researchers offer a view of how things could have unfolded differently:

The shooter drives his vehicle to Walmart with an assault rifle and a huge amount of ammunition. The AI we envision knows that this weapon is there, and that it can only be used for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).

At Walmart itself, in the parking lot, any attempt by the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the case at hand, the AI knows that killing anyone with the gun is unethical, except perhaps for self-defense purposes. Since the AI rules out self-defense, the weapon is rendered useless and locked out.

This paints a wonderful picture. It’s hard to imagine any objection to a system that worked perfectly. Nobody needs to load, brandish, or fire a firearm in a Walmart parking lot unless they’re in danger. If the AI could be engineered to allow users to fire only in ethical situations, such as self-defense, at a rifle range, or in designated legal hunting areas, thousands of lives could be saved each year.

Of course, the researchers anticipate innumerable objections. After all, they are focused on navigating the US political landscape; in most civilized countries, gun control is common sense.

The team expects people to point out that criminals will simply use firearms that don’t have an AI watchdog embedded in them:

In response, we note that our blue sky conception is in no way limited to the idea that the guarding AI is only contained in the weapons in question.

Clearly, the contribution here isn’t the development of a smart weapon but the creation of an ethically correct AI. If criminals won’t put the AI on their guns, or they continue to use dumb weapons, the AI can still be effective as long as it’s installed in other sensors. It could hypothetically be used to perform any number of functions once it determines violent human intent.

It could lock doors, stop elevators, alert authorities, change traffic light patterns, send location-based alerts, and take any number of other reactive measures, including unlocking law enforcement and security personnel’s weapons for defense.
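As a rough illustration of that fan-out idea, the sketch below shows how a single “violent intent detected” signal might trigger several of the responses listed above. Again, this is hypothetical and not from the paper; every handler name is invented for illustration.

```python
from typing import Callable, List

# Hypothetical reactive measures; each is a stand-in for a real integration.
def lock_doors(location: str) -> None:
    print(f"Locking doors near {location}")

def alert_authorities(location: str) -> None:
    print(f"Alerting authorities to {location}")

def send_location_alerts(location: str) -> None:
    print(f"Sending location-based alerts for {location}")

def unlock_security_weapons(location: str) -> None:
    print(f"Unlocking security personnel weapons at {location}")

# Once violent intent is determined, every registered measure is triggered.
REACTIVE_MEASURES: List[Callable[[str], None]] = [
    lock_doors,
    alert_authorities,
    send_location_alerts,
    unlock_security_weapons,
]

def on_violent_intent(location: str) -> None:
    for measure in REACTIVE_MEASURES:
        measure(location)

if __name__ == "__main__":
    on_violent_intent("Walmart parking lot")
```

The point of the sketch is that the hard AI problem sits upstream, in detecting intent; once that determination is made, dispatching responses to existing infrastructure is comparatively simple engineering.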

The researchers also anticipate objections based on the idea that people could hack the weapons. This one is fairly easy to dismiss: firearms are easier to secure than robots, and we already put AI in those.

While there is no such thing as total security, the US military is filling its ships, planes, and missiles with AI, and we’ve managed to figure out how to stop the enemy from hacking them. We should be able to keep police officers’ service weapons just as safe.

Realistically, it takes a leap of faith to assume an ethical AI can be crafted to understand the difference between situations such as home invasion and domestic violence, but the groundwork is already there.

If you look at driverless cars, we know that people have already died because they relied on AI to protect them. But we also know that the potential to save tens of thousands of lives is too great to ignore, given the relatively low number of accidental deaths to date.

It’s likely that, just like Tesla’s AI, a gun control AI could lead to accidental and unnecessary deaths. But in the United States, approximately 24,000 people die each year as a result of gun suicide, 1,500 children are killed by gun violence, and nearly 14,000 adults are murdered with guns. It seems clear that an AI intervention could significantly reduce those numbers.

You can read the whole paper here.

Published on February 19, 2021 – 19:35 UTC
