AI isn't all bad – here are four good things it could do by 2030
Artificial intelligence has been portrayed as an uncanny force in science fiction for decades. Think of HAL 9000, the main antagonist of Arthur C. Clarke's Space Odyssey series. But while applications of AI and machine learning are indeed sophisticated and have the potential to be dangerous, I believe that over the course of this decade, the most common encounters with these technologies will seem both ordinary and positive. There is, however, one important area of algorithmic use that still requires real work.
The benign uses first. I'm thinking of areas where prototypes already exist: AI-powered activities that should become commonplace by the end of this decade – conversational commerce, home technical support, and autonomous vehicles. The fourth area, institutional decision making, currently has few satisfactory prototypes and is therefore harder to get right.
Conversational commerce

This refers to voice-controlled sales activity in which the natural voice is the customer's and, on the vendor's end, an AI-controlled bot voice responds. It differs from today's ecommerce pattern, where the customer works through a series of steps: visiting the supplier's website, reviewing a series of images, entering selections, entering delivery instructions, providing credit card information, and confirming the purchase. Instead, the customer would either visit the website or speak to their smart speaker. A bot would greet the person and ask how it could help, drawing on knowledge of previous searches and purchases. Everything would take place in natural language. Over time, the AI bot would even be able to contact you with suggestions for gifts, reorders, or special offers. I expect half of all commerce to move to voice technology by the middle to end of the decade.
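The interaction pattern described above – greet the customer, match their utterance to an intent, and lean on purchase history for suggestions – can be sketched as a minimal dialogue loop. Everything here (the intents, replies, and purchase history) is an invented example, not a description of any real voice-commerce system:

```python
# Hypothetical sketch of a voice-commerce bot's turn handler.
# Real systems would use natural-language understanding; this toy
# version matches keywords to invented intents.

PURCHASE_HISTORY = ["coffee beans", "filters"]  # example past orders

def handle(utterance):
    """Map one customer utterance to a reply, using purchase history."""
    text = utterance.lower()
    if "reorder" in text:
        return f"Reordering your usual: {', '.join(PURCHASE_HISTORY)}."
    if "gift" in text:
        return "Based on past purchases, may I suggest a coffee grinder?"
    return "How can I help you today?"
```

The point of the sketch is the shape of the exchange: the customer states a goal in natural language and the bot resolves it against what it already knows about them, rather than walking them through web forms.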
Technical support at home
Today, getting help with a household appliance problem usually begins with a call to the OEM's customer service line or a local service center. The customer describes the problem, a technician is sent to the home, and the problem is fixed on site; depending on the issue, resolution may take days. Within the next several years, however, that first call will be answered by a 24/7 bot. You will be instructed to point your cellphone camera at the model number label, the control settings, the installation details, and the problem itself. You will be asked a series of questions to narrow down the diagnosis and identify replacement parts. You will then be shown a tutorial video, enhanced by augmented reality, that enables you to do most of the repair yourself. Should that fail, your call will be routed to a human technician, whose advice will also be captured by the AI system and used to improve future service calls.
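The guided questioning described above is, at its core, a decision tree: each answer narrows the fault until a diagnosis and replacement part fall out. A minimal sketch, with entirely invented questions, diagnoses, and part numbers:

```python
# Toy triage tree for a support bot. All questions, diagnoses, and
# part numbers are invented examples, not real service data.

TRIAGE_TREE = {
    "question": "Does the appliance power on?",
    "no": {"diagnosis": "power supply fault", "part": "PSU-100 (example part)"},
    "yes": {
        "question": "Is the error light blinking?",
        "yes": {"diagnosis": "controller fault", "part": "CTRL-7 (example part)"},
        "no": {"diagnosis": "no fault found", "part": None},
    },
}

def diagnose(tree, answers):
    """Walk the triage tree with a sequence of yes/no answers."""
    node = tree
    for answer in answers:
        node = node[answer]
        if "diagnosis" in node:
            return node["diagnosis"], node["part"]
    return None, None  # ran out of answers before reaching a leaf
```

For example, answering "yes" then "yes" would lead to the controller-fault leaf. A production bot would fold in image recognition of the model label and learn new branches from escalated calls, but the narrowing-down logic is the same.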
Autonomous vehicles

Fully autonomous cars and trucks are already in development, but human intervention is still necessary for their safe operation. By the end of this decade, that will no longer be the case. Self-driving cars and trucks will have learned how to react appropriately to problematic road situations – especially construction zones, road hazards, hand signals, and reckless drivers – and those lessons will be deployed across fleets quickly. This will allow commuters to do other things while traveling, ease the shortage of commercial drivers, and change the landscape of product delivery. On-demand autonomous vehicles are also likely to affect patterns of private vehicle ownership.
Institutional decision making
The toughest uses of AI are not those embedded in digital devices; instead, they are the ones embedded in the decision-making machinery of public and private institutions, where they rule on human services: obtaining a loan, securing insurance, setting interest rates, eligibility for government benefits, criminal sentencing, suitability for bail, likelihood of success at a job, and qualification for medical care, among others. Private developers of these algorithms jealously protect their creations, and government agencies rarely disclose how their algorithms work. And because the algorithms keep changing as more data is ingested, even an expert would struggle to explain exactly how they work, let alone defend them in court. More damaging still, if the data sets on which an AI system has been trained are unintentionally biased against a minority – as some claim is the case with police data – the system can effectively automate discrimination.
Algorithms have no built-in ethics. While they can learn to model good behavior, they can just as easily learn bad behavior from data whose biases skew their conclusions. Yet the speed, thoroughness, and cost savings of AI technology are far too valuable to simply write off. As a result, over the remainder of this decade I expect new forms of transparency to emerge: ones that enable ordinary citizens and their advocates to understand faulty algorithms and, if necessary, challenge them, and that provide meaningful review of the potential for harm from flawed AI systems.
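The biased-data failure mode described above can be made concrete with a toy example. Suppose a lending "model" does nothing more than learn historical approval rates per group; trained on skewed decisions (the groups and numbers below are invented), it faithfully reproduces the disparity:

```python
# Toy demonstration: a model trained on biased historical decisions
# automates the bias. Groups "A" and "B" and all counts are invented.
from collections import defaultdict

def fit_approval_rates(history):
    """Learn per-group approval rates from (group, approved) records."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in history:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

# Skewed history: group B applicants were approved far less often.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

rates = fit_approval_rates(history)
# Predict approval whenever the learned rate exceeds 0.5: group B is
# now rejected by default, regardless of any individual's merits.
predict = {g: rate > 0.5 for g, rate in rates.items()}
```

Real decision systems are far more sophisticated, but the mechanism is the same: a model fitted to discriminatory outcomes will score them as the pattern to repeat, which is exactly why transparency about training data matters.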
This article was originally published by Gautam Goswami on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
Published on February 19, 2021 – 09:18 UTC