AI Liability for Copyright Infringement: Who is Responsible?
Artificial intelligence (AI) developers argue that they are not at fault when their machine learning systems reproduce copyrighted material, even though those systems were trained on copyrighted works. Instead, they want users to bear legal responsibility for the material their systems generate.
The U.S. Copyright Office is considering new regulations regarding generative artificial intelligence and, in August, issued a request for comments on artificial intelligence and copyright. The responses to that request are publicly available.
Among the responses submitted, companies including Google, OpenAI (developer of DALL-E), and Microsoft argued that only the unauthorized reproduction of copyrighted material violates existing protections. In their view, AI software is comparable to audio or video recording devices, photocopiers, or cameras, all of which can be used to infringe copyright. The manufacturers of those products are not held liable when that happens, so, the reasoning goes, neither should AI companies be.
Microsoft, which has a multibillion-dollar partnership with OpenAI, wrote:
“Users must take responsibility for using the tools responsibly and as designed. … To address rights holders' concerns, AI developers have taken steps to mitigate the risk of AI tools being misused to infringe copyright. Microsoft incorporates many of these measures and safeguards to mitigate potentially harmful uses across all of our AI tools. These measures include metaprompts and classifiers, controls that add additional instructions to a user's prompt to limit harmful or infringing outputs.”
Notably, the safeguards Microsoft says it has in place have done little to prevent trademark and copyright infringement. In fact, The Walt Disney Company recently asked the tech giant to stop users from infringing its trademarks.
Google, for its part, argued:
“The possibility that a generative AI system can, through prompt engineering, be made to replicate content from its training data raises questions about the proper boundary between direct and secondary infringement. When an AI system is prompted by a user to produce an infringing output, any resulting liability should be attributed to the user as the party whose volitional conduct directly caused the infringement. … A rule that made AI developers directly (and strictly) liable for any infringing output created by users would impose crushing liability on AI developers, even if they had taken reasonable steps to prevent infringing activity by users. If this standard had been applied in the past, we would not have legal access to photocopiers, personal audio and video recording devices, or personal computers, all of which are capable of being used for infringing as well as substantially beneficial purposes.”
And OpenAI wrote:
“When analyzing claims of infringement related to outputs, the analysis begins with the user. After all, there is no output without a prompt from a user, and the nature of the output is directly influenced by what was requested.”
It should be noted that all of the above companies trained their software on copyrighted and trademarked materials without authorization, and OpenAI currently faces lawsuits from more than a dozen prominent authors accusing the company of infringing their copyrights.
To complicate matters further, even as these companies tell the US government that users should be responsible for infringing outputs, many of them, including Google, OpenAI, Microsoft, and Amazon, have offered to cover their customers' legal costs in copyright infringement cases.
Ultimately, however, the companies argue that current copyright law is on their side and that there is no need for the Copyright Office to change it, at least not for now. They contend that if the office sides against developers and changes copyright law, it could stifle the nascent technology. In its letter, OpenAI said it “urges the Copyright Office to proceed with caution in seeking new legislative solutions that may prove premature or misguided given the rapidly evolving technology.”
Perhaps surprisingly, the major studios are on the side of big tech, though they arrive there from a different angle. In its submission to the Copyright Office, the Motion Picture Association (MPA) drew a distinction between generative AI and the use of AI in the film industry, where “AI is a tool that supports, but does not replace, the human creation of its members' works.” The MPA also advocated against updating the current legislation:
“MPA members have a unique and balanced perspective on the interaction between AI and copyright. Members' copyrighted content is extremely popular and valuable. Strong copyright protection is the backbone of their industry. At the same time, MPA members have a strong interest in developing creator-driven tools, including AI technologies, to support world-class content creation. AI, like other tools, supports and enhances creativity, and engages audiences in the stories and experiences that are the hallmark of the entertainment industry. MPA's overall view, based on the current state, is that although AI technologies raise a number of new issues, those issues involve well-established copyright principles and doctrines. At this time, there is no reason to conclude that these existing doctrines and principles will not be sufficient to provide courts and the Copyright Office with the tools they need to respond to AI issues when appropriate.”
While the MPA maintains that current copyright law is sufficient, it strongly objects to the idea that AI companies should be free to train their systems on its members' works. In its letter, the MPA wrote:
“The MPA currently believes that existing copyright law should be able to handle these issues. A copyright owner who demonstrates infringement should be able to pursue the remedies available under sections 502-505, including monetary damages and injunctive relief. … At this time, there is no reason to believe that copyright owners and companies engaged in training generative AI models and systems cannot enter into voluntary licensing agreements, such that government intervention would be necessary.”
In conclusion, responsibility for the outputs of AI systems is a complex question involving competing interests among developers, copyright holders, and regulators. This will clearly remain an evolving area, with significant implications for the future of both AI and copyright law.