Enterprise AI risk management: Turn shadow AI into an opportunity for innovation
Generative artificial intelligence (GenAI) has changed the way people work and create, facilitating everyday activities from technical troubleshooting to visual art. Naturally, employees want to extend those same GenAI capabilities into the workplace, bringing the creative and time-saving benefits of tools like large language models (LLMs) to their nine-to-five.
However, the proliferation of GenAI means employees are using an alarming number of unsanctioned AI tools for professional work—a practice known as shadow AI. While generative solutions support workflows like content creation and email drafting, shadow AI introduces security and compliance risks that can undermine enterprise objectives: it can compromise an enterprise’s reputation, lead to serious financial repercussions, and jeopardize the success of formal AI initiatives.
That doesn’t mean organizations should fear or ban GenAI. Instead, organizations can empower workforces and channel enthusiasm for transformation into more responsible and effective AI practices by providing official pathways for AI integration and establishing clear usage guidelines.
What is shadow AI? The increasingly common practice of unofficial AI usage
Shadow AI refers to the use of AI apps or tools in the workplace that aren’t officially authorized by an employer. It is a subset of shadow IT, which describes any use of software or hardware outside formal IT governance; shadow AI covers unsanctioned AI usage specifically.
A 2023 Deloitte study illustrates the rapid emergence of shadow AI. After surveying over 1,000 professionals in Switzerland, Deloitte found:
- 61% of employees use GenAI in their everyday work, often without a manager’s knowledge. At the same time, 61% of organizations lack internal guidelines for using AI.
- Text generation is the most common use case: 47% of employees surveyed use AI tools for generating text, while others use generative tools for images (26%) and code (24%).
- Workers are generally satisfied with the performance of these tools for the efficiency, creativity, and work quality improvements they deliver.
The rise of shadow AI aligns with the current state of the employee experience. Research from Cisco indicates that 43% of workers feel frustrated with inefficient processes like joining online meetings, while 95% believe AI can improve work tasks. The bottom line: enterprise professionals, now familiar with the advantages of widely available AI tools, want to use them to do their jobs more effectively. If employers don’t provide the right solutions, workers will very likely take AI access into their own hands.
Shadow AI security and compliance risks
Although employees want to take advantage of AI tools, shadow AI presents heightened risks for enterprises in areas like cybersecurity and compliance.
Data protection and privacy compliance
Shadow AI is a major compliance barrier because organizations have no visibility into what types of data unsanctioned AI tools collect or how that data is used. Employees may feed personal data into these tools, inadvertently causing their employers to violate data privacy laws. With shadow AI, it’s also difficult, if not impossible, for enterprises to honor an individual’s right to be forgotten, comprehensively track AI-generated work, or document a model’s architecture.
For instance, under the General Data Protection Regulation (GDPR), organizations are required to edit or remove personal data from AI systems at an individual’s request. Companies must also document how their AI uses personal data to generate outputs. Under the European Union AI Act, enterprises need to disclose AI-generated work and document how their models are trained.
Compliance risks can escalate if attackers leak AI data. Enterprises often use sensitive inputs, including personal data, customer information, or intellectual property, to train AI models. Organizations may face serious legal and financial repercussions from AI regulators if they fail to safeguard this private data.
AI reliability
Because companies can’t vet the reliability of unapproved solutions, employees introduce risk when using shadow AI to inform decision-making or create work. For instance, some AI tools on the market may rely on outdated or biased training data, generating outputs that are inaccurate or that conflict with a company’s values.
Using such outputs to guide business activities can negatively impact enterprise goals. For example, an employee may use a shadow AI tool to anticipate market activity and make budgeting decisions. If the model’s training data is outdated, financial forecasts may be incorrect, leading to poor budgeting decisions and business performance.
Copyright and intellectual property
Under United States law, AI-generated works do not qualify for copyright protection since they are not fully authored by humans. This means when employees use GenAI in some capacity to create proprietary work, like writing code, companies may be unable to copyright or patent this work. In this case, shadow AI can prevent organizations from gaining a competitive advantage through work that would otherwise be protected by a patent or copyright.
Shadow AI also puts companies at risk of violating copyright laws. AI tools often ingest copyrighted work during training, a practice that courts may rule is not protected by fair use. This is already a concern with authorized AI tools because it can be difficult to locate copyrighted material in vast training datasets. Shadow AI further complicates the issue because companies have no access to or control over the training data these tools use.
Employees may also cause companies to violate copyright laws if shadow AI tools generate work that is similar to copyrighted material, whether or not this is intentional. Legal precedent for what counts as fair use, in other words how different an output must be from the material that informed it, is still being established. This creates considerable legal risk if companies don’t manage shadow AI or carefully advise users on how to safely use AI-generated outputs.
AI data security
Organizations have no authority over how shadow AI applications are secured, creating additional vulnerabilities. For instance, many individuals use shadow AI on personal devices or with software versions lacking the same security infrastructure as official enterprise applications. According to a 2024 report by Cyberhaven Labs, almost 75% of ChatGPT accounts used in the workplace are non-corporate accounts with fewer security and privacy controls than ChatGPT Enterprise.
Among these security and privacy controls is the guarantee that ChatGPT Enterprise does not use inputs to further train publicly used models. Without such a safeguard, sensitive information employees use as inputs could become available to external parties. Shadow AI tools may also leverage this data to expand their knowledge bases.
The same report from Cyberhaven Labs discovered that the volume of corporate data employees fed into AI tools increased by 485% between March 2023 and March 2024, with nearly a third of data classified as sensitive. While much of this input includes confidential customer information, shadow AI tools also absorb proprietary information like research and development, financial documents, and source code. Once ingested, this data is then subject to the controls of shadow AI companies, meaning that its security is out of the original enterprise’s hands.
Enterprise AI risk management: 4 strategies for addressing shadow AI
While shadow AI poses serious risks, you can turn the issue into an opportunity for greater innovation and efficiency. Employees are likely already using AI and getting value from it; redirect this interest toward official solutions that align with your compliance, security, and responsible AI goals.
1. Identify shadow AI
The first step in addressing shadow AI is to inventory all AI activities within your organization. Encourage teams to be open about how they use AI and comprehensively document these behaviors. You can also build or adopt tools that automate shadow AI detection. These typically work by analyzing your network traffic for digital footprints indicating unauthorized AI usage. Routine audits will help you stay on top of new shadow AI creeping into enterprise processes.
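As a rough illustration of how that detection works, here is a minimal sketch in Python that scans a proxy log for requests to well-known GenAI endpoints that aren’t on a sanctioned list. The log schema, domain list, and sanctioned set are illustrative assumptions; a production tool would cover far more services, log formats, and evasion patterns.

```python
# Minimal sketch: flag proxy-log entries that point at known GenAI services.
# The CSV schema and domain lists below are assumptions for illustration.
import csv
from collections import Counter

# Hypothetical watchlist of popular GenAI endpoints.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

# Tools the organization has officially approved (e.g., a managed account).
SANCTIONED = {"api.openai.com"}

def find_shadow_ai(log_path: str) -> Counter:
    """Count unsanctioned GenAI requests per (user, host) from a CSV
    proxy log with 'user' and 'destination_host' columns (assumed schema)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in GENAI_DOMAINS and host not in SANCTIONED:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a simple report like this gives audits a starting point: the heaviest users of unsanctioned tools are often the teams with the clearest unmet needs.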
2. Establish guidelines and policies for AI
Leading companies base transformation efforts on a responsible AI framework, which outlines goals and policies for building reliable, secure, and transparent technology. As a part of this framework, establish clear guidelines to prevent the spread of shadow AI. Consider what tools are acceptable and how employees can use them. For example, define to what extent employees can use AI to inform their work or decision-making processes and what corporate data they may use in AI inputs.
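To make such a guideline concrete, here is a minimal sketch of a policy gate that screens prompts for restricted data before they reach any external GenAI tool. The regex patterns and the blocking behavior are illustrative assumptions, not a complete data loss prevention solution.

```python
# Minimal sketch of a policy gate for AI inputs, assuming a small set of
# illustrative restricted-data patterns; real policies would be broader.
import re

RESTRICTED_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def violations(prompt: str) -> list[str]:
    """Return the names of restricted-data patterns found in a prompt."""
    return [name for name, rx in RESTRICTED_PATTERNS.items() if rx.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Forward a prompt to a sanctioned AI tool only if it passes policy."""
    found = violations(prompt)
    if found:
        raise PermissionError(f"Prompt blocked by AI usage policy: {found}")
    # ...forward the prompt to the sanctioned AI tool here...

submit_prompt("Summarize this quarter's roadmap")   # allowed
# submit_prompt("Customer SSN is 123-45-6789")      # raises PermissionError
```

Encoding the policy in tooling, rather than only in a document, makes compliance the path of least resistance for employees.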
3. Develop an AI strategy
Auditing shadow AI in your enterprise will reveal which tasks employees already use AI for and where functional gaps exist in current software and processes. Leverage these insights to develop an AI strategy and build official solutions that support employee workflows. After assessing employee use cases, consider investing in techniques like fine-tuning or retrieval-augmented generation (RAG) to tailor AI tools for specific tasks. Unlike with shadow AI, you decide what data these tools are trained on, how they’re built, who can access them, and what security infrastructure they use.
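To illustrate the retrieval idea behind RAG, here is a minimal sketch that ranks internal documents against a user question and stuffs the best matches into a prompt. The bag-of-words similarity stands in for a real embedding model, and the final LLM call is omitted; both are simplifying assumptions.

```python
# Minimal RAG retrieval sketch: a toy bag-of-words similarity stands in for
# a real embedding model and vector database.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN client auto-updates every Tuesday night.",
    "Travel bookings require manager approval above $500.",
]
context = retrieve("When must expense reports be filed?", docs)
prompt = f"Answer using only this context:\n{chr(10).join(context)}\n\nQuestion: ..."
# The assembled prompt would then go to your sanctioned, internally governed LLM.
print(context[0])
```

Because retrieval grounds responses in documents you curate, a RAG-based internal tool addresses the reliability and data-control gaps that make shadow AI risky in the first place.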
4. Educate employees
It’s important to gain employee buy-in when introducing new AI tools and policies. Educate your workforce on the benefits of these changes, emphasizing that they’re intended to help staff work even more efficiently while reducing both personal and enterprise risk. Teach staff how to use AI solutions in line with company policy, and explain the potential consequences of shadow AI. Some employees may also need reassurance about how AI integration will affect their role and job security. Be sure to communicate that AI doesn’t make their position obsolete, but rather frees their skills and expertise to be applied in more impactful ways.
Shadow AI warrants a proactive approach to transformation
Formally integrating all enterprise AI into official processes is crucial for avoiding cyber exposure as well as legal and compliance issues. Addressing these core risks also gives you more control over transformation initiatives, so you can optimize AI and achieve a competitive advantage.
The key is to remain adaptable and proactive. AI technologies are advancing rapidly, with new tools becoming publicly available daily. You must get ahead of these emerging tools and be ready to adopt best-fit solutions before shadow AI can fill functional gaps in your workforce.