Businesses are increasingly embedding AI solutions into every process and product to drive insights, automation and innovation. Yet, as AI adoption skyrockets, fueled by emerging solutions such as ChatGPT and DALL-E, so do the risks.
To help better understand how businesses are approaching AI risk, KPMG LLP asked executives across multiple sectors for their views of the risks associated with their AI and predictive analytics models. This report sheds light on their perceptions of those risks and the challenges they face in addressing them.
Data integrity, followed by statistical validity and model accuracy, are the top three risks that businesses are actively managing or mitigating.
of the organizations have a clear definition of AI and predictive analytics models, though traditional sectors such as industrial manufacturing and energy/natural resources (IM & ENRC) still need to build more clarity.
of the respondents expect an increase in the use of AI and predictive analytics models.
of the respondents are very likely to buy a rapid diagnostic tool to help assess the risk categories and potential impacts present in their existing AI models. Most would prefer to buy diagnostic tools as a subscription or routine service, possibly because of the high cost of these tools.
73% of the respondents report a degree of regulatory oversight of predictive analytics models. A lack of skilled resources, budget constraints and tools were identified as the biggest limiting factors in the risk review process.
The majority of firms (66%) that don't have a formalized AI risk management function are aiming to establish one in the next 1-4 years.
84% believe that audits of AI models will be a requirement within the next 1-4 years.
This report shares key findings and insights collected from 140 US-based executives in public and private organizations spanning seven industry sectors. All respondents were from companies with revenue greater than $1 billion, and sixty percent were from companies with revenue from $1 billion to $9.9 billion. (Source: KPMG Artificial Intelligence Risk survey, September 2022)
Note: percentages have been rounded and may not total to 100%.
Perhaps predictably, larger companies — those with $20 billion or more in revenue — were overwhelmingly the ones with more than 100 models (just 3 percent of those with lower revenues had that many).
These risks are listed in the order that respondents ranked them for their potential to negatively impact their business, with data integrity, statistical validity and model accuracy reported as the three most significant risks. They also said that these three are the risks they are managing or mitigating most actively.
Who is responsible for managing these risks? Our survey revealed that C-Suite executives are more involved in providing direction and creating new analytic processes, while the implementation, refinement and risk review of the models are left to management. Similarly, relatively few C-Suite executives were directly involved in or responsible for strategies to manage risk and data/model governance. This may be our first indication that while AI-related risks may be recognized, they might not be fully addressed.
Participate directly in establishing new processes or procedures
Responsible for developing and/or implementing governance to mitigate AI risk
Responsible for review of AI risks
Understanding what a model is provides a baseline for managing model risk, but so does understanding how those models are developed and how they work.
Respondents reported that a lack of transparency is a serious risk; it was ranked fourth, but we were somewhat surprised it was not higher. Many companies are using "black-box" models developed by others that provide no visibility. Are they assuming that the software vendor has identified and addressed all potential risks? How do they know? Currently, there is no AI equivalent of a Service Organization Control (SOC) report, and no self-certification or assessment standard is in sight.
Detecting and preventing errors or unfair outcomes in AI models can be remarkably challenging even if you have complete access to both the model and the data it uses. One of the reasons we turn to AI is precisely because it can detect patterns amid chaos that humans are incapable of seeing or even understanding. But what happens when you don't even know the model is there? Increasingly, AI or predictive models are being "hidden" inside enterprise software on which many organizations rely. How exactly is your human resources software, or the software used by your third-party recruiter, helping you sift through résumés, for example?
Such a lack of ownership can be pervasive and is even exacerbated by leading-edge technologies. Data lakes, for example, provide remarkably convenient access to a wealth of data, a "single source of truth," but centralizing that data can divorce it from its source and strip it of any ownership. Domain-specific knowledge associated with the data, including its lineage, may be lost. Our survey shows that data integrity is the top concern of respondents, but would you be able to spot deliberate errors that a malicious actor had introduced at the data's source to influence results in their favor?
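There are established ways to detect tampering after data lands in a centralized store; the harder problem is errors introduced upstream. As a minimal sketch, assuming source data arrives as CSV files in a landing zone (the paths and file layout here are hypothetical, not part of the survey), recording content hashes at ingestion lets downstream consumers verify that what they read is what was loaded:

```python
# A minimal sketch, assuming source data arrives as CSV files in a
# landing zone. Paths and file layout are hypothetical.
import hashlib
import json
import pathlib

def fingerprint(path: pathlib.Path) -> str:
    """Return a SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record a manifest of digests at ingestion time...
manifest = {p.name: fingerprint(p) for p in pathlib.Path("landing_zone").glob("*.csv")}
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))
# ...then re-compute and compare the digests on each read. A mismatch flags
# tampering inside the lake, though this cannot catch errors introduced
# before ingestion, at the data's true source.
```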
Is there a role for government oversight? Seventy-three percent of respondents reported there is already some degree of regulatory oversight over their models. As you might expect, those in financial services and healthcare/life sciences reported the most regulatory oversight (80 percent), with energy/natural resources and industrial manufacturing the least (60 percent). Companies with revenue over $10 billion are more likely to have predictive models requiring regulation and are more likely to have formal review processes.
Our survey also shows that 84 percent believe that an independent audit of their AI models will be a requirement within the next one to four years. In New York City, for example, a law is scheduled to go into effect in April 2023 requiring any automated employment decision tools to undergo an annual independent bias audit.1 The European Union (EU) is also proposing an AI Act that would regulate the safe and ethical use of AI models.2 The regulation has a broad scope, applying to any provider that puts an AI system into service in the EU or whose system produces outputs used there, and it carries potential fines for noncompliance.
Given the possible consequences of laws like these, our respondents are likely correct: more audit requirements and regulations are on the way.
But who will manage these situations? Sixty-six percent of respondents who said they do not yet have a formal AI risk management function aim to have one in the next one to four years. Yet only 19 percent of respondents say that they explicitly have the expertise to conduct such audits internally, and 53 percent cite a lack of appropriately skilled resources as the leading factor limiting their ability to review AI-related risks. It appears that AI adoption and maturity are outpacing organizations' ability to fully assess and manage the associated risks.
You might exclude gender from a dataset used by an AI model, for example, and then check the box “done,” believing the risk of gender bias has been eliminated. But can the model still access first name? Did anyone consider that it might use first name as a proxy for gender?
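One way to probe for this kind of proxy leakage is to test whether the features still visible to the model can predict the attribute you removed. The sketch below is illustrative only; the file name, column names and threshold are assumptions, not part of the survey or any specific KPMG tool.

```python
# Hypothetical sketch: checking whether a feature still visible to the model
# (first name) acts as a proxy for a protected attribute (gender) that was
# dropped from the training data. File and column names are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("applicants.csv")  # assumed columns: first_name, gender

# Represent first names as character n-grams and try to predict gender from them.
X = CountVectorizer(analyzer="char", ngram_range=(2, 4)).fit_transform(df["first_name"])
y = (df["gender"] == "F").astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC for predicting gender from first name: {auc:.2f}")
# An AUC well above 0.5 means first name leaks gender, so dropping the
# gender column alone has not eliminated the risk of gender bias.
```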
There is also “cascading” risk to consider. It is increasingly common for AI models to be chained together in a sequence, where the output of one model is used as the input to another. You might, for example, use a model that produces results considered to be accurate 97 percent of the time — accepting the 3 percent error rate. But what happens when multiple models with similar tolerances are chained together? The cascade of errors can add up quickly, especially if the first model in the sequence starts the ball rolling by pointing subsequent models in the wrong direction.
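The arithmetic behind this compounding is worth making explicit. If each model's errors were independent, the end-to-end accuracy of a chain would be the product of the individual accuracies; a minimal sketch using the 97 percent figure from above:

```python
# Illustrative only: compounding of error rates across chained models,
# assuming each model is 97% accurate and errors are independent.
accuracy = 0.97
for n in (1, 3, 5):
    print(f"{n} chained model(s): {accuracy ** n:.1%} end-to-end accuracy")
# 1 -> 97.0%, 3 -> 91.3%, 5 -> 85.9%. In practice, correlated errors, where
# one model's mistake misleads the next, can make the degradation even worse.
```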
It is also important to understand that AI risk is not limited to the AI models themselves or the data on which they rely. To successfully manage AI risk, you must consider the entire AI ecosystem and the complete lifecycle of everything within it. It requires a well-designed operating model and processes that reflect leading governance practices.
How do you address these risks? The answer is through a responsible AI program. Responsible AI is an approach to designing, building and deploying AI systems in a safe, trustworthy and ethical manner so that companies can accelerate value with confidence. The KPMG responsible AI offering encompasses eight guiding principles of risk.
By implementing a robust responsible AI program, you can recognize and manage risks related to your AI and predictive analytics models with the same weight you give to other corporate risks.
KPMG understands that responsible AI involves complex business, regulatory and technical challenges, and we are committed to helping clients put it into practice properly.
We combine our deep industry experience, modern technical skills, leading solutions and robust partner ecosystem to help business leaders harness the power of AI in a trusted manner — from strategy and design through to implementation and ongoing operations.
Wherever you are in your responsible AI journey, we can tailor our considerable experience, field-tested approach and innovative solutions to your unique needs and challenges, helping you to accelerate the value of AI with confidence.
The latest thinking from our technology and subject matter professionals.
Generative artificial intelligence (AI) has the potential to open up entirely new avenues for improving the user experience and for creating business advantages. It promises to change the way many people work, from how software is developed to how text is written and summarized. See how we can help you make that promise a reality.
Our professionals immerse themselves in your organization, applying industry knowledge, powerful solutions and innovative technology to deliver sustainable results. Whether it’s helping you lead an ESG integration, risk mitigation or digital transformation, KPMG creates tailored data-driven solutions that help you deliver value, drive innovation and build stakeholder trust.
Kelly Combs
Director, Leader of Responsible AI, Lighthouse
KPMG in the US
+1 312 665 1027
Emily Frolick
Partner, Advisory
KPMG in the US
+1 513 763 2453
Sreekar Krishna, Ph.D.
Principal, Leader for Artificial Intelligence
KPMG in the US
+1 480 326 6334
Aisha Tahirkheli
Managing Director, Advisory, Lighthouse
KPMG in the US
+1 312 339 1891
Shivam Batra, Radhika Goel, Sandeep Sharma and Pratham Singh contributed to the interpretation of the survey results, provided critical feedback, and delivered the survey analysis.
1 Source: https://www.jdsupra.com/legalnews/nyc-delays-enforcement-of-automated-2040364/
2 Source: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206