Businesses are taking a close look at generative artificial intelligence (AI), since it allows for the quick production of high-quality content, including text, images, video and code, with minimal human effort. With less need for human resources, companies expect to produce content faster and at lower cost, enabling the creation of new kinds of content that were too costly to develop before. For example, generative AI can handle high-volume, low-value activities, such as summarizing articles, drafting emails and creating first drafts of code, as well as higher-value activities, like creating images and video or debugging code. These higher-value activities can leverage a company's intellectual property but also expose it to the risk of fraud or theft of proprietary or private information.
Creating a generative AI tool takes a lot of time, money and effort, so for now, most organizations will need to rely on third-party generative AI solutions from providers like OpenAI and Stability AI. Weighing potential risks, including error, potential fraud and loss of intellectual property, will likely remain a major consideration when deciding whether to use a third-party generative AI solution or to rely on another type of AI in house.
With generative AI gaining rapid momentum and entering the mainstream, its growing popularity is yet another reason that responsible AI (developing and deploying AI in an ethical manner) should be a top concern for organizations looking to use it or protect themselves against its misuse. In this paper, we'll look at the risk management challenges raised by generative AI and underscore the importance of identifying these risks early, forming governance constructs to help adopt generative AI responsibly, and understanding how these risks may affect the building of trust in generative AI. We'll also provide our perspectives on steps companies can take now, before adopting generative AI.
Among the top risks around the use of generative AI are those to intellectual property. Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects like text, images, audio or video based on patterns it recognizes in the data it has been fed.1 That includes the data input by its various users, which the tool retains to continually learn and build its knowledge. That data, in turn, could be used to answer a prompt entered by someone else, possibly exposing private or proprietary information to the public. The more businesses use this technology, the more likely it is that their information could be accessed by others.
Amazon has already sounded the alarm with its employees, warning them not to share code with ChatGPT.2 A company lawyer specifically stated that their inputs could be used as training data for the bot and its future output could include or resemble Amazon’s confidential information.3
Also, generative AI content created according to an organization's prompts could contain another company's intellectual property. That could create ambiguity over the authorship and ownership of the generated content, raising the possibility of plagiarism allegations or copyright lawsuits.
Organizations will need to figure out how to protect their intellectual property while still gaining the benefits of generative AI. Given organizations' desire to use AI for competitive advantage and to harvest their existing data, specific considerations around how their data is used for training and public consumption should be evaluated. One solution is data anonymization and de-identification, but that would have to be a service offered by the vendor, or applied before sending data to the vendor, and backed by contractual agreements. The downside is that businesses could be limited in reaping the benefits of other organizations' data.
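As a rough illustration of what pre-submission de-identification might look like, the Python sketch below masks common identifiers before a prompt ever leaves the organization. The patterns and the deidentify function are hypothetical placeholders; a production system would rely on a vetted PII-detection service and named-entity recognition rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only: real de-identification would use a vetted
# PII-detection service, not a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
safe_prompt = deidentify(prompt)
print(safe_prompt)
# Only safe_prompt, never the raw prompt, would be sent to the third-party model.
```

The key design point is that the redaction happens inside the organization's boundary, so the vendor never receives the raw identifiers regardless of what its retention policy says.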
Using generative AI offers businesses great efficiencies but also powerful temptations for misuse by employees. Educators have voiced concern that students could use generative AI to write their essays and other assignments; there has already been a case in which a teacher alleged that a student used ChatGPT to write an essay for a class.4 Employees, too, might be tempted to use generative AI and pass off the result as their own.
A related misuse would be for contract workers to pass off generative AI output as their own work and bill the company for hours of work they didn't actually perform.
A more serious example of employee misuse is using generative AI to automate legal confirmations or reviews in ways that skirt appropriate ethics and compliance, independence or other programs, which may affect regulatory culpability.
Even the legitimate use of generative AI carries risks. Consider the inaccuracy of outcomes. Employees using generative AI will need to be vigilant, applying professional skepticism and an extra emphasis on quality assurance to the results. Generative AI is limited in its ability to learn "new" outcomes, meaning additional training and research are needed, along with monitoring of end results to verify that the tool is delivering in line with expectations.
Should the generative AI content contain inaccuracies, it could cause any number of failures that could impact business outcomes or create liability issues for the business. Meta’s generative AI bot Galactica, for instance, was created to condense scientific information to help academics and researchers quickly find papers and studies. Instead, it produced vast amounts of misinformation that incorrectly cited reputable scientists.5
Lack of transparency in the use of generative AI content can also create reputational issues for organizations. Tech publisher CNET recently came under criticism for quietly using the technology to write 73 articles since November 2022, some of which contained errors, even though the publisher said on its website that a team of editors is involved in the content “from ideation to publication.”6
Other risks around generative AI include perpetuating or even amplifying societal biases that may be present in the data used to train the tool, or the possibility that the technology could generate sensitive information, such as personal data, that could be used for identity theft or to invade privacy. Even a disgruntled employee or angry customer could create fictitious material that could malign a company's reputation or that of one of its employees or executives.
Risks from generative AI can come from the outside as well, with unscrupulous users having the potential to create a lot of mischief and headaches for companies. Many of these malicious actions can already be perpetrated without generative AI. But generative AI can make the deed that much easier and quicker to pull off—and much harder to detect.
Generative AI also can be used to create so-called deepfake images or videos with uncanny realism and without the forensic traces left behind in edited digital media, making them extremely difficult for humans or even machines to detect.7 A deepfake image could be created depicting a company executive in a scandalous situation. Or an individual could use generative AI to create fictitious images or video and use them to file fraudulent insurance claims.
Finally, generative AI also raises cybersecurity risks. Cybercriminals can use the technology to create more realistic and sophisticated phishing scams, or to fabricate credentials for hacking into systems.
The business use of generative AI is still in the early days. But there are steps companies can take now to begin building responsible AI governance and set accountability for strategic decisions around the use of data and generative AI.
To start, the development of new governance constructs to address the risks and ethical implications of using AI should involve the evaluation of critical questions by stakeholders from multiple functions across the organization.
From these inquiries, organizations can move forward with building a governance construct and a set of principles to guide decisions about the ethical use of data and AI, improving digital literacy within the organization to build confidence in using advanced analytics techniques (such as generative AI), and creating automated workflows and validations to enforce AI standards throughout the development-to-production lifecycle.
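As one illustration of that last point, the minimal sketch below shows a pre-deployment gate that blocks a model from promotion until required governance fields are complete. The record fields and check names are hypothetical, chosen only to show the pattern; the actual standards would come from the organization's own governance construct.

```python
from dataclasses import dataclass, field

# Hypothetical governance record: these field names are illustrative, not a
# standard. A real construct would define its own required attributes.
@dataclass
class ModelRecord:
    name: str
    owner: str = ""
    intended_use: str = ""
    training_data_sources: list = field(default_factory=list)
    bias_review_completed: bool = False
    human_review_of_outputs: bool = False

# Each check must pass before a model is promoted to production.
REQUIRED_CHECKS = [
    ("owner assigned", lambda r: bool(r.owner)),
    ("intended use documented", lambda r: bool(r.intended_use)),
    ("training data sources listed", lambda r: len(r.training_data_sources) > 0),
    ("bias review completed", lambda r: r.bias_review_completed),
    ("human review of outputs enabled", lambda r: r.human_review_of_outputs),
]

def validate(record: ModelRecord) -> list[str]:
    """Return the names of the governance checks that the record fails."""
    return [name for name, check in REQUIRED_CHECKS if not check(record)]

record = ModelRecord(name="marketing-copy-generator", owner="data science team")
failures = validate(record)
if failures:
    # In a CI/CD pipeline, this would block deployment rather than just print.
    print("Blocked:", ", ".join(failures))
```

Embedding checks like these in the deployment pipeline turns governance principles into enforceable gates rather than guidance that individual teams may or may not follow.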
With a Responsible AI program in place, organizations can begin moving forward with developing processes and procedures around the use of generative AI.
Generative AI is becoming another powerful technique in an organization’s toolbox of advanced analytics and AI. However, with this growing set of capabilities comes the need to develop literacy and governance to ensure the use of generative AI builds trust in the outcomes it creates and is done in line with the organization’s standards while considering business and customer impacts.
We all have a responsibility to learn about the risks of generative AI and control these risks to prevent harm to customers, businesses and society. These risks will grow and evolve as AI technology advances and becomes more pervasive, and as pressure from regulators and the public increases.
KPMG’s responsible AI offering is a set of frameworks, controls, processes and tools to ensure AI systems are being designed and deployed in a trustworthy and ethical manner so that companies can accelerate value. We understand responsible AI is a complex business, regulatory and technical challenge, and we are committed to helping clients put it into practice.
The framework addresses areas including:

- Ensure models are free from bias and are equitable.
- Ensure AI can be understood, documented and open for review.
- Ensure mechanisms are in place to drive responsibility across the lifecycle.
- Safeguard against unauthorized access, corruption or attacks.
- Ensure compliance with data privacy regulations and consumer data usage.
- Ensure AI does not negatively impact humans, property and the environment.
- Ensure data quality, governance and enrichment steps embed trust.
- Ensure AI systems perform at the desired level of precision and consistency.
Wherever you are in your responsible AI journey, our 15,000+ technologists can tailor our vast experience, field-tested approach and cutting-edge solutions to your unique needs and challenges, helping you accelerate the value of generative AI with confidence.
To learn more about how we can help your organization, please contact us.
Matteo Colombo
Principal, Advisory, Lighthouse
KPMG in the U.S.
+1 206 913 4460
Kelly Combs
Director, Leader of Responsible AI, Lighthouse
KPMG in the U.S.
+1 312 665 1027
Jordan Seiferas
Managing Director, Data & Analytics, Lighthouse
KPMG in the U.S.
+1 212 954 8563
Aisha Tahirkheli
Managing Director, Advisory, Lighthouse
KPMG in the U.S.
+1 312 339 1891
1 Source: Unlock the Potential of Generative AI: A Guide for Tech Leaders. Forbes.com. January 26, 2023.
2 ChatGPT is an online tool created by San Francisco start-up OpenAI. GPT stands for Generative Pre-trained Transformer.
3 Source: Amazon Warns Employees to Beware of ChatGPT. Gizmodo.com. January 23, 2023.
4 Source: Professor catches student cheating with ChatGPT: ‘I feel abject terror.’ The New York Post. December 26, 2022.
5 Source: Meta AI Bot Contributed to Fake Research and Nonsense Before Being Pulled Offline. Gizmodo.com. November 22, 2022.
6 Source: CNET Cops to Error-Prone AI Writer, Doubles Down on Using It. Gizmodo.com. January 25, 2023, and CNET Has Been Quietly Publishing AI-Written Stories for Months. Gizmodo.com. January 11, 2023.
7 Source: Deepfakes: An insurance industry threat. Propertycasualty360.com. September 14, 2021.