As you’ve probably already read, The New York Times has filed a civil lawsuit against OpenAI and Microsoft in Federal District Court in Manhattan based on claims that the technology companies used the newspaper's content to train their artificial intelligence systems without permission and thereby breached The Times’ copyright protections.
While many of my more conservative colleagues would quip that using the word “intelligence” and the name “The New York Times” in the same sentence is inappropriate, the fact remains that the lawsuit could mark a significant turning point in the battle over the training of large language models like ChatGPT.
At the heart of this legal skirmish lies the contentious use of The Times's copyrighted articles to train AI models, notably ChatGPT, which has grown into a formidable source of information, rivaling traditional news outlets.
This lawsuit, which specifies no monetary figure but hints at "billions of dollars in statutory or actual damages," underscores a pivotal juncture for artificial intelligence. As AI technology expands, the demand for copious training data raises pressing questions about which data is protected and what constitutes fair use, a quandary summed up by Shelly Palmer, CEO of The Palmer Group, a tech advisory firm.
OpenAI, founded in 2015, is no stranger to legal entanglements. Its recent turmoil, a power struggle revolving around co-founder and CEO Sam Altman, now intersects with allegations of copyright infringement. Comedian Sarah Silverman and a group of other writers filed a lawsuit against OpenAI and Meta, alleging the unlawful ingestion of copyrighted materials to train ChatGPT. Notably, a group of renowned authors, including Jonathan Franzen and George R.R. Martin, joined the fray, asserting that OpenAI used their works without consent for AI training purposes.
Getty Images, in a parallel move, sued Stability AI for what it decried as a "brazen infringement" on its intellectual property, spotlighting the pervasive nature of copyright disputes in the AI realm.
Media's convergence with AI technology isn't confined to legal battles; alliances and agreements have also emerged. The Associated Press inked a licensing deal with OpenAI, allowing the company to use its news stories. Similarly, Axel Springer, the conglomerate behind POLITICO and Business Insider, struck a comparable agreement permitting ChatGPT to offer article summaries from its publications.
However, these partnerships don't mask the storm brewing. The lawsuit by The New York Times against OpenAI and Microsoft signifies a watershed moment: the first instance of a major media entity confronting AI creators over intellectual property infringement. The legal tussle highlights the multifaceted questions of journalistic integrity, financial ramifications, and legal precedent amid the AI surge.
Amid the legal clash, statements from involved parties paint differing portraits. The Times alleges a blatant attempt by OpenAI and Microsoft to leverage its journalism without recompense, accusing them of creating products that siphon audiences from the newspaper.
Conversely, OpenAI asserts its commitment to respecting content creators' rights while expressing surprise and disappointment at The Times's lawsuit. The company emphasizes its ongoing efforts to collaborate with publishers for mutual benefit.
An OpenAI spokesperson said the company respects “the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models.”
“Our ongoing conversations with The New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development. We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers,” the spokesperson added.
In its complaint, The Times said it approached Microsoft and OpenAI in the spring to raise concerns about the use of its intellectual property, but that those conversations had not succeeded.
The legal saga isn't an isolated incident; it mirrors a global trend. Spanish media organizations are rallying against Meta in a $600 million lawsuit, citing unfair competition. Meanwhile, Google's agreement with the Canadian government, entailing annual payments to news companies, reflects a proactive approach to navigating the evolving landscape of AI and media relations.
The complexity surrounding AI's integration into media extends beyond legal confrontations. Ethical quandaries persist, prompting divergent strategies among media entities. Some embrace collaborations with AI companies, while others (nearly 600 media companies) employ blockers to stave off AI's unrestricted access to their content.
In essence, the clash between media titans and AI illuminates a nuanced battleground—one where legal, ethical, and financial stakes converge. As AI's footprint in media amplifies, navigating this intricate terrain demands a delicate balance between innovation, protection of intellectual property, and preserving journalistic integrity.
Why It Matters. The lawsuit marks the first time that a major American media outlet has sued the companies behind the generative AI chatbot ChatGPT and other AI tools. The case also underscores questions that news organizations have raised over the past year about the journalistic, financial and legal implications of generative AI.