A look at the AI Incident Database of machine learning failures
The failure of artificial intelligence systems has become a recurring topic in technology news: credit scoring algorithms that discriminate against women, computer vision systems that misclassify dark-skinned people, recommendation systems that encourage violent content, trending algorithms that amplify fake news.
Most complex software systems fail at some point and need to be updated regularly, and we have procedures and tools in place to help us find and fix these errors. But current AI systems, dominated by machine learning algorithms, are different from traditional software. We are still studying the implications of applying them to different domains, and preventing their failures requires new ideas and approaches.
This is the idea behind the AI Incident Database (AIID), a repository of documented failures of AI systems in the real world. The purpose of the database is to make it easier to identify past failures and avoid repeating them.
The AIID is sponsored by the Partnership on AI (PAI), an organization that aims to develop best practices for AI, improve the public’s understanding of the technology, and reduce the potential harm that AI systems could cause. Founded in 2016 by AI researchers at Apple, Amazon, Google, Facebook, IBM, and Microsoft, PAI has since expanded to include more than 50 member organizations, many of which are non-profits.
Past experience in documenting errors
In 2018, members of the PAI discussed research on an “AI error taxonomy,” a way to classify AI failures consistently. The problem, however, was that there was no collection of AI incidents from which to develop the taxonomy. This led to the idea of building the AI Incident Database.
“I knew about aviation incident and accident databases and committed to creating the AI version of the aviation database during a Partnership on AI meeting,” said Sean McGregor, senior technical advisor for IBM Watson AI XPRIZE, in written comments to TechTalks. Since then, McGregor has overseen the AIID effort and helped develop the database.
The structure and format of the AIID was inspired in part by incident databases in the aviation and computer security industries. The commercial air travel industry has managed to increase flight safety through systematic analysis and archiving of past accidents and incidents in a shared database. Likewise, a shared database of AI incidents can help spread knowledge and improve the safety of AI systems deployed in the real world.
Common Vulnerabilities and Exposures (CVE), maintained by MITRE Corp., is a good example of a database of software vulnerabilities spanning many industries. It helped shape the vision of AIID as a system that documents failures of AI applications across different areas.
“The goal of AIID is to prevent intelligent systems from causing harm, or at least to reduce the likelihood and severity of that harm,” says McGregor.
McGregor points out that while traditional software behavior is usually well understood, modern machine learning systems cannot be fully described or exhaustively tested. Machine learning derives its behavior from its training data, and therefore its behavior can change in unintended ways as the underlying data changes over time.
“These factors, combined with deep learning systems’ ability to enter the unstructured world we live in, mean malfunctions are more likely, more complicated, and more dangerous,” says McGregor.
Today we have deep learning systems that can recognize objects and people in images, process audio data, and extract information from millions of text documents in ways that traditional rule-based software, which expects data to be neatly structured in tabular form, could not. This has made it possible to apply AI to the physical world: self-driving cars, surveillance cameras, hospitals, and voice-operated assistants. And all of these new areas create new vectors for failure.
Documenting AI incidents
Since its inception, AIID has collected information on more than 1,000 AI incidents from the media and publicly available sources. Fairness issues are the most common AI incidents reported to AIID, especially in cases where governments are using a smart system, such as face recognition programs. “We’re also seeing increasing incidents involving robotics,” says McGregor.
Hundreds of other incidents are currently being reviewed and added to the AI Incident Database, according to McGregor. “Unfortunately, I don’t think there will be a shortage of new incidents,” he says.
Visitors can query the database for incidents based on source, author, submitter, incident ID, or keywords. For example, a search for “translation” reveals that there are 42 reports of AI incidents involving machine translation. You can then further filter the results based on other criteria.
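The kind of keyword search described above can be sketched in a few lines of Python. The records and field names below are invented for illustration; this is not the AIID’s actual data model or API, only a minimal sketch of filtering incident reports by keyword.

```python
# Hypothetical sketch of keyword search over incident reports.
# The reports list and its fields are made up for illustration;
# they do not reflect the real AIID schema.

reports = [
    {"id": 1, "source": "TechTalks", "text": "Machine translation system mistranslates medical instructions"},
    {"id": 2, "source": "NYT", "text": "Face recognition misidentifies suspect"},
    {"id": 3, "source": "BBC", "text": "Translation model produces offensive output"},
]

def search(reports, keyword):
    """Return reports whose text mentions the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [r for r in reports if kw in r["text"].lower()]

matches = search(reports, "translation")
print([r["id"] for r in matches])  # → [1, 3]
```

Filters on other criteria, such as source or submitter, would work the same way: each one narrows the list of matching records further.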
Using the AI Incident Database
A consolidated database of incidents involving AI systems can serve various purposes in the research, development, and deployment of AI systems.
For example, when a product manager evaluates adding an AI-powered recommendation system to an application, they can review 13 reports and 10 incidents in which such systems have harmed people. This helps the product manager set the right requirements for the feature their team is developing.
Other executives can use the AI Incident Database to make better decisions. For example, risk officers can query the database for the potential harms of machine translation systems and develop the right measures to mitigate those risks.
Engineers can use the database to find out what potential damage their AI systems can cause when used in the real world. And researchers can use it as a source of citations in articles on the fairness and safety of AI systems.
Finally, the growing database of incidents can prove an important warning for companies adding AI algorithms to their applications. “Tech companies are known for their penchant for acting quickly without evaluating potential bad outcomes. When bad outcomes are enumerated and shared, it becomes impossible to proceed unaware of the harms,” says McGregor.
The AI Incident Database is built on a flexible architecture that allows various applications to be developed for querying the database and extracting other insights, such as key terminology and contributors. In a paper presented at the 33rd Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21), McGregor discussed the full details of the architecture. AIID is also an open-source project on GitHub, where the community can help improve and expand its functionality.
With a solid database in place, McGregor is now working with the Partnership on AI to develop a flexible taxonomy for classifying AI incidents. In the future, the AIID team hopes to expand the system to automate the monitoring of AI incidents.
“The AI community has started sharing incident records to motivate changes to their products, control procedures, and research programs,” says McGregor. “The website was launched in November, so we’re just beginning to see the benefits of the system.”
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the dark side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
Published on January 23, 2021 – 10:00 UTC