Businesses and consumers alike are increasingly interested in the ability of generative AI (GenAI) tools to drive efficiency and support human creativity. However, while organizations recognize the urgency to leverage GenAI, they must consider the technology’s emerging risks.
Key among these risks are the gray areas that AI can create around copyrights and other forms of intellectual property (IP). As machine-generated work proliferates, companies involved in the AI lifecycle face new legal uncertainties. Stakeholders in the lifecycle include cloud providers, which often deliver the foundational models used to build smaller AI applications, as well as the companies developing those applications. Copyright and other IP concerns also impact AI end users, including tech employees and customers.
According to Leah Waterland, Associate General Counsel, Incubation, Data, and AI Strategy at Cisco, balancing AI innovation with continued protection of all stakeholder interests and rights is an important and growing challenge. “There’s always going to be tension between racing ahead with emerging technology and ensuring the guardrails and safety nets that customers and employees expect remain in place,” she said.
Organizations embracing transformation must create strategies to protect their IP, respect stakeholder ownership rights, and avoid infringement claims so they can innovate responsibly with AI.
One of the most interesting legal challenges for companies using GenAI is how to allocate ownership. Because AI, unlike humans, cannot legally be an author or inventor in the United States, organizations may not be able to copyright works or patent ideas if they are created using GenAI in some capacity.
The US Patent and Trademark Office and the US Copyright Office offer some guidance in this area. There is still debate, however, about the level of human involvement required to make AI-generated works copyrightable and AI-assisted inventions patentable. “We don’t know where that bar is right now,” says Kerri Braun, Senior Corporate Counsel, AI/ML, Trade Secrets, and Data Strategy at Cisco. “[Some people will argue that] if I’m using a GenAI tool, it’s unlike using a camera in the amount of human control exercised over the output. I may vaguely know what to expect when entering a prompt, but I can’t control that process.”
Several cases have set early precedent for the protectability of AI-generated works. In Thaler v. Vidal, the Federal Circuit ruled that Thaler’s AI system, “DABUS,” could not be named as an inventor on a patent, and in Thaler v. Perlmutter, a federal court likewise held that an AI system could not be the author of a copyrightable work. In another decision, the human applicant seeking to register an award-winning art piece, Théâtre D’opéra Spatial, was unable to obtain a copyright despite extensively documenting the hundreds of AI prompts used to generate the image. According to the Copyright Office, the piece did not reflect enough human authorship to be eligible for copyright.
Companies may use AI to generate code, content, or software rather than art, but for now, this precedent and administrative guidance still apply. Using GenAI, even in part, to create otherwise copyrightable works will require organizations to weigh their business objectives against their risk tolerance for potentially losing copyright protection.
Organizations using GenAI are also navigating a complex landscape of copyright infringement risks for both model inputs and outputs. In practice, they could risk violating copyright law depending on what training data was used for their model and how closely their GenAI-created works resemble copyrighted training data.
AI models are trained on massive datasets originating from sources like the web or a company’s proprietary information. Training or fine-tuning models may require developers to make copies or partial copies of this data, which could include copyrighted material. Duplicating this information without the owner’s permission may violate US copyright law, even if developers aren’t aware that the training data is copyrighted.
Some copyright owners claim that training AI tools using their works without permission is an infringement. On the other hand, many developers believe that using a copyrighted work qualifies as fair use because the act of training a model with this material is transformative, or different enough, from the work’s original purpose. Model creators also fear that output quality may degrade if training data must be limited only to licensed or public works.
There could also be liability issues surrounding prompt engineering or how organizations attempt to shape an output through their inputs. For example, prompting a model to generate an output similar to a copyrighted work, but not an exact copy, could be interpreted as an intentional copyright violation.
There is some risk that model outputs may duplicate or closely resemble a copyrighted work. Companies or individuals using GenAI may argue that the generated works differ sufficiently from potentially copyrighted training data, or that the outputs are not sufficiently connected to the training data to satisfy copyright infringement requirements. However, it’s not clear whether a court would agree with them. According to Braun, copyright infringement cases are still pending in courts, so the legal precedent is not yet set.
This legal uncertainty introduces infringement risks that companies must decide how to mitigate until AI copyright law becomes more established. For example, there are potential liabilities associated with advising users on what they can safely do with outputs. Allowing your employees to use GenAI outputs verbatim is riskier than only permitting them to use outputs as inspiration for further work. Guidance around AI and copyright law in the US suggests that both AI users and companies could be held liable for infringement, which creates legal challenges for businesses and could damage employee and customer trust.
AI copyright and ownership issues are also changing how organizations navigate contract negotiations. Establishing terms among the various stakeholders—including model providers, companies developing AI tools, and end users—becomes complex when accounting for ownership and IP rights.
Organizations must protect trade secrets in these relationships while respecting user privacy and safety. “When using third-party providers, companies need to consider how their IP could become entangled in that partnership,” says Waterland. “It’s also important to protect your customers’ data and your proprietary information if these are being used for inputs.”
Implementing best practices, such as engaging a comprehensive legal team, is the best way to safeguard your organization against these risks. Other strategies, like building documentation tools, will depend on the scope of your AI policies and development roadmap.
Rather than addressing AI copyright and ownership issues with a single lawyer, Waterland argues AI transformation is best handled by a well-rounded legal team with expertise in different areas. “One AI lawyer or IP team won’t be enough. You’ll need a team that’s collaborative and cross-functional,” she said. “As a baseline, appoint legal professionals specializing in human rights, privacy, copyright, trade secrets, and evolving legislation.”
Businesses should foster strong partnerships between legal teams and AI developers and engineers. Collaborating throughout the development lifecycle will help ensure AI products and features are legally sound.
For Waterland, this is an important step in proactively addressing legal risks and avoiding development roadblocks. “I’m a lawyer who wants to be in the room at the beginning. I want to talk to engineers while they’re brainstorming because I can start thinking through potential legal implications and how to build guardrails early,” she said.
Rather than waiting for AI copyright and IP laws to come into effect, which could take years, enterprises should start building their own policies now. Waterland suggests using pre-existing frameworks, such as privacy impact assessments and data privacy rules, and enhancing these processes and guidelines to address new concerns related to AI. “While the laws aren’t there yet, you can at least be clear and transparent with your customers and employees about what your company is doing,” she said.
Additionally, examine how employees or customers are likely to interact with your AI tools now and build policies and frameworks around that behavior. You can also better determine the scope of legal considerations needed when a legal advisor is included at the brainstorming or design stage of AI development.
When partnering with cloud providers and customers, negotiate contract terms that make ownership and copyright expectations transparent. Clearly outline processes and responsibilities in the event of infringement, whether you’re pursuing infringement of your own work or defending against an infringement claim.
As a best practice, a well-rounded team of lawyers will carefully review any existing terms set by the foundational model providers. For example, in its Terms of Service, OpenAI “assigns to you all its right[s], title, and interest in and to [o]utput,” essentially giving output rights to its customers under the parameters set by that agreement. Enterprises may negotiate their own legal terms with foundational model providers customized to their unique concerns and use cases.
Consider investing in tools that add a layer of legal scrutiny to your AI systems and processes. One approach is to develop GenAI software that screens outputs for verbatim copies of training data before they reach users. Watermarking solutions can also make it easier for AI developers to identify copyrighted training material before it’s used in a model.
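As a rough illustration of what such an output check could look like, the sketch below flags generated text that shares long verbatim word sequences with a reference corpus. This is a minimal example, not a production screening system: the n-gram window size, the 5% overlap threshold, and the in-memory corpus are all illustrative assumptions, and a real deployment would need an indexed corpus and more robust matching.

```python
# Minimal sketch of an output-screening check: flag GenAI outputs that
# reproduce long verbatim word sequences from a reference corpus.
# The window size (n), threshold, and corpus are illustrative assumptions.

def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, reference: str, n: int = 8) -> float:
    """Fraction of the output's n-grams found verbatim in the reference."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(reference, n)) / len(out_grams)

def needs_review(output: str, corpus: list[str], threshold: float = 0.05) -> bool:
    """Flag the output for legal review if any reference exceeds the threshold."""
    return any(overlap_ratio(output, doc) >= threshold for doc in corpus)
```

A screen like this is only one safeguard; flagged outputs would still need human or legal review before release.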
Braun also anticipates the growth of documentation tools designed to make model usage more transparent while supporting evidence collection for infringement claims. “Imagine if you had to manually note every time you used GenAI to create your code. I predict GenAI providers will develop easy ways to automatically log, track, and document how the technology is used,” she said.
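Pending purpose-built tooling from providers, a team could prototype this kind of audit trail itself. The sketch below, a hypothetical example rather than any provider’s actual API, wraps a stubbed generation function so that every prompt is automatically logged to an append-only JSONL file, with the output stored as a hash to avoid persisting potentially sensitive generated content.

```python
import functools
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "genai_usage.jsonl"  # assumed location for the audit trail

def audited(generate):
    """Wrap a generation function so every call is recorded for later evidence."""
    @functools.wraps(generate)
    def wrapper(prompt: str, **kwargs):
        output = generate(prompt, **kwargs)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": kwargs.get("model", "unknown"),
            "prompt": prompt,
            # Hash the output so the log can support provenance claims
            # without storing generated content verbatim.
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with open(AUDIT_LOG, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapper

@audited
def generate(prompt: str, **kwargs) -> str:
    return "stubbed model response"  # stand-in for a real model call

generate("Draft a summary of our licensing terms", model="example-model")
```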
Striking a balance between AI innovation and IP rights is an iterative process that demands ongoing re-evaluation and input from several areas of legal expertise. It’s hard to predict what will happen in this landscape: GenAI is advancing quickly, but rulings and legislation surrounding more complex copyright and IP issues could still be years away.
From Braun’s perspective, companies can expect to see laws combating deepfakes and right-of-publicity violations, as well as copyright claims shifting from outputs to AI prompts. “We can piece together some guidance from the Copyright Office that prompts themselves could be copyrightable rather than the outputs. It’s the Wild West out there when it comes to where these cases are going to take us,” she said.
Until laws around AI and copyright issues settle, organizations committed to ethical AI development must create policies and remain transparent and flexible in how they approach legal gray areas.
You don’t have to navigate the risks and uncertainties of GenAI alone. Find out how you can manage AI risks and enforce your own policies in partnership with Outshift.