Neural’s AI predictions for 2021

It’s that time of year again! We’re continuing our longstanding tradition of posting a list of predictions from AI experts who know what’s happening on the ground, in the research labs, and at the boardroom tables.

Without further ado, let’s dive in and see what the pros expect in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

As AI systems advance, so too do the opportunities for adversaries to trick AI models into making false predictions. Deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs – adversarial AI – that are imperceptible to the human eye. These attacks pose a major risk to the successful deployment of AI models in mission-critical environments. At the rate we’re going, there will be a major AI security incident in 2021 – unless companies start adopting proactive adversarial defenses to shore up their AI security posture.
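To make the threat concrete, here is a minimal sketch of the kind of imperceptible input perturbation Rahnama describes, using the well-known Fast Gradient Sign Method (FGSM). This is a generic illustration, not Modzy’s technique; `model`, `images`, and `labels` are assumed placeholders for any standard PyTorch image classifier.

```python
# A minimal FGSM sketch of the adversarial perturbations described above.
# Generic illustration only; `model`, `images`, and `labels` are assumed
# placeholders for a standard PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Return copies of `images` nudged to increase the model's loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss;
    # a small epsilon keeps the change imperceptible to the human eye.
    adversarial = images + epsilon * images.grad.sign()
    # Assumes pixel values are normalized to [0, 1].
    return adversarial.clamp(0, 1).detach()
```

Even with a tiny epsilon, perturbations like this can flip a classifier’s prediction while leaving the image visually unchanged – which is exactly why they are so dangerous in mission-critical settings.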

2021 will be the year of explainability. As organizations incorporate AI, explainability will become an essential part of ML pipelines for building trust with users. Understanding how machine learning reasons over real-world data helps build trust between people and models. Without understanding the results and the decision-making process, there will never be real trust in AI-enabled decision-making. Explainability will be crucial for moving to the next phase of AI adoption.
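What does “understanding how a model reasons” look like in practice? One common starting point is a gradient-based saliency map, sketched below. This is an illustrative baseline rather than any particular vendor’s tooling, and production pipelines typically use richer attribution methods; as before, `model`, `images`, and `labels` are assumed placeholders.

```python
# A simple input-gradient saliency sketch: which input pixels most
# influenced the model's prediction? Placeholders as in the FGSM example.
import torch.nn.functional as F

def saliency_map(model, images, labels):
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    # Large gradient magnitudes mark the pixels the prediction is most
    # sensitive to; collapse the color channels by taking the max.
    return images.grad.abs().amax(dim=1)
```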

The combination of explainability and new training approaches originally developed for dealing with adversarial attacks will lead to a revolution in this area. Explainability can help us understand what data influenced a model’s prediction and where bias may have crept in – information that can then be used to train robust models that are more trustworthy, reliable, and resistant to attack. This practical knowledge of how a model works improves the quality and security of the model as a whole. AI scientists will redefine model performance to address not only predictive accuracy, but also issues like lack of bias, robustness, and strong generalizability under unforeseen environmental changes.
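As a rough sketch of how those attack-born training approaches feed back into robustness, the loop below folds the hypothetical `fgsm_perturb` helper from earlier into training, fitting the model on both clean and perturbed inputs. `model`, `optimizer`, and `loader` are again assumed placeholders for a standard PyTorch setup.

```python
# A minimal adversarial-training loop, reusing the hypothetical
# fgsm_perturb helper sketched earlier. `model`, `optimizer`, and
# `loader` are assumed placeholders for a standard PyTorch setup.
import torch.nn.functional as F

def adversarial_train_epoch(model, optimizer, loader, epsilon=0.01):
    model.train()
    for images, labels in loader:
        # Craft adversarial examples on the fly from the current batch.
        adv = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Fit clean and perturbed inputs together so the model stays
        # accurate while becoming resistant to small perturbations.
        loss = (F.cross_entropy(model(images), labels) +
                F.cross_entropy(model(adv), labels))
        loss.backward()
        optimizer.step()
```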

Dr. Kim Duffy, Life Science Product Manager at Vicon:

Making predictions for artificial intelligence (AI) and machine learning (ML) is especially difficult when looking just a year into the future. In clinical gait analysis, which examines the movement of a patient’s lower limbs to identify the underlying problems that cause difficulty walking and running, methods such as AI and ML are still in their infancy. This is something Vicon highlights in our recent life science report, “A Deeper Understanding of Human Movement.” It will take several years of applying these methods before we see their real benefits and advances in clinical practice. Effective AI and ML require large volumes of data to train the appropriate algorithms to identify trends and patterns.

However, in 2021, more clinicians, biomechanists, and researchers may begin using these approaches in their data analysis. In recent years we have seen more literature presenting AI and ML works in progress, and I believe this will continue through 2021 as more collaborations take place between clinical and research groups to develop machine learning algorithms that enable automatic interpretation of gait data. Ultimately, these algorithms could help suggest clinical interventions earlier.

It is unlikely that we will see the full benefits and effects of machine learning in 2021. Instead, we will see the approach adopted and weighed more seriously in the processing of gait data. For example, the presidents of the societies affiliated with the journal Gait & Posture offered a perspective on the clinical impact of instrumented motion analysis in a recent edition, emphasizing the need to apply methods like ML to big data in order to build stronger evidence of the efficacy of instrumented gait analysis. This would also allow for better understanding and less subjectivity in clinical decision-making based on instrumented gait analysis. We’re also seeing more credible advocates for AI/ML – like the Gait and Clinical Movement Analysis Society – which will further encourage acceptance by the clinical community in the years ahead.

Joe Petro, CTO of Nuance Communications:

In 2021, AI will continue to emerge from the hype cycle, and the promises, claims, and aspirations of AI solutions will increasingly need to be backed by demonstrable progress and measurable results. As a result, companies will focus more on solving specific problems and developing solutions that deliver real outcomes with tangible ROI – not gimmicks or technology for technology’s sake. Companies with a deep understanding of the complexities and challenges their customers are trying to solve will have the advantage. This will affect not only how tech companies invest their R&D dollars, but also how technologists approach their career paths and educational pursuits.

As AI pervades almost every aspect of technology, there will be an increased emphasis on ethics and a deeper understanding of AI’s potential to create unintended, consequential harm. Consumers are becoming more aware of their digital footprint and how their personal information is used across the systems, industries, and brands they interact with. This means companies will hold the AI providers they work with to higher standards – for the accuracy of their systems, for the control customers retain over their data, and for whether or not that data is monetized by third parties.

Dr. Max Versace, CEO and Co-Founder of Neurala:

We will see AI delivered on inexpensive, lightweight hardware. It’s no secret that 2020 was a turbulent year, and the economic outlook is such that capital-intensive, complex solutions will be passed over for lighter, possibly software-only, lower-cost alternatives. That way, manufacturers can achieve an ROI in the short term without massive upfront investments. It also gives them the flexibility to respond to fluctuations in the supply chain and in customer demand – something that has played out on a large scale throughout the pandemic.

People will turn their attention to why AI makes the decisions it makes. The explainability of AI has often been discussed in the context of bias and other ethical challenges. But as AI matures, becoming more precise and reliable and finding more applications in real-world scenarios, people will start to question the “why.” The reason? Trust: people are reluctant to hand power to automated systems they do not fully understand. In manufacturing environments, for example, AI not only has to be precise, it also has to “explain” why a product was classified as “normal” or “defective,” so that human operators can develop trust in the system and let it do its job.

Another year, another set of predictions. Click here to see how our experts fared over the past year. To find out how this year’s experts have done, you’ll need to build a time machine and travel into the future. Happy Holidays!

