
Writers and Editors

Artificial Intelligence (AI) and Copyright

"Generative AI will change the nature of content creation, enabling many to do what, until now, only a few had the skills or advanced technology to accomplish at high speed."
~ article in Harvard Business Review

 

See also

AI: What problems does it bring? solve? What the heck is a bot?

and
Artificial intelligence, ChatGPT, Dall-E, and OSINT (open source intelligence)


The A.I. Lie (David Palumbo, Muddy Colors, 4-24-24) 'Something that needs to be clearly understood is that A.I. has no intelligence. It does not “think.” It is a predictive text program that simulates human expression by ingesting unfathomable amounts of data and trying to replicate that data. It does not know and cannot know what meaning its outputs have. Further, it has no desire and no emotion to motivate action or decisions. It simply runs a program and assembles pixels or words to match what seems most like other correct pixels and words in its vast data set. It aggregates. It produces averages.' (H/T Andrew Thomsen)
OpenAI whistleblower found dead in San Francisco apartment (Jakob Rodgers, Bay Area News Group, SiliconValley.com, 12-13-24) Suchir Balaji, 26, a former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which faces a swell of lawsuits over its business model, was found dead of apparent suicide in his San Francisco apartment. His death came three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

    "ChatGPT's public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion. In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT."
---Former OpenAI Researcher Says the Company Broke Copyright Law (Cade Metz, NY Times, 10-23-24) Suchir Balaji helped gather and organize the enormous amounts of internet data used to train the startup’s ChatGPT chatbot.
Core copyright violation claim moves ahead in The Intercept’s lawsuit against OpenAI (Andrew Deck, Nieman Lab, 11-27-24)

    I quote more than usual from this important article, but still urge you to read it in full. 

     A "New York federal judge ruled a key copyright violation claim by The Intercept against OpenAI would move ahead in court... the latest in a series of major legal decisions involving the AI developer this month, after OpenAI sought to dismiss lawsuits from several digital news publishers. The ruling comes after a judge dismissed similar claims filed by Raw Story and AlterNet earlier this month.

    "Judge Jed Rakoff said he’d hear the claim that OpenAI removed authorship information when it allegedly fed The Intercept’s articles into the training data sets it used to build ChatGPT. Doing so could be a violation of the Digital Millennium Copyright Act (DMCA), a 1998 law that, among other protections, makes it illegal to remove the author name, usage terms, or title from a digital work. 

      "Earlier this year I reported that The Intercept’s case was carving out a new legal strategy for digital news publishers to sue OpenAI...The New York Times’ lawsuit against OpenAI, and similar suits filed by The New York Daily News and Mother Jones, lead with claims of copyright infringement. Infringement suits require that relevant works were first registered with the U.S. Copyright Office (USCO). But most digital news publishers don’t have their article archives registered....

      "Until this summer, the government body required each individual website article page be filed and charged separately. In August, though, the USCO added a rule that allows “news websites” to file articles in bulk....But for most digital news publishers seeking legal action against OpenAI, particularly for its use of their work to train ChatGPT, the new rule came too late. For now, The Intercept case is the only litigation by a news publisher not tied to copyright infringement to move past the motion-to-dismiss stage.

    "Earlier this month, the DMCA-focused legal strategy took a major hit when another New York federal judge dismissed all DMCA claims against OpenAI filed by Raw Story and AlterNet. The progressive digital news sites are jointly represented by Loevy & Loevy. "

     “Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of [content management information] from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation,” wrote Judge Colleen McMahon in that decision. It is unclear if the Intercept ruling will embolden other publications to consider DMCA litigation... particularly if news publishers want to cite the training data sets underlying ChatGPT. But the ruling is one signal that Loevy & Loevy is narrowing in on a specific DMCA claim that can actually stand up in court."

     See also this website's subsection on DMCA Takedown Notices, Safe Harbors, and Related Issues.



Database of 16,000 Artists Used to Train Midjourney AI, Including 6-Year-Old Child, Garners Criticism (Karen K. Ho, ARTNews, 1-2-24) "During the New Year’s weekend, artists linked to a Google Sheet on the social media platforms X (formerly known as Twitter) and Bluesky, alleging that it showed how Midjourney developed a database of time periods, styles, genres, movements, mediums, techniques, and thousands of artists to train its AI text-to-image generator. Jon Lam, a senior storyboard artist at Riot Games, also posted several screenshots of Midjourney software developers discussing the creation of a database of artists to train its AI image generator to emulate. "Last September, the US Copyright Review Board decided that an image generated using Midjourney’s software could not be copyrighted due to how it was produced. Jason M. Allen’s image had garnered the $750 top prize in the digital category for art at the Colorado State Fair in 2022. The win went viral online, but prompted intense worry and anxiety among artists about the future of their careers."
---New Data ‘Poisoning’ Tool Enables Artists To Fight Back Against Image Generating AI (Karen K. Ho, ARTNews, 10-25-23) Concern about artworks being scraped without permission and used to train AI image generators also prompted researchers from the University of Chicago to create a digital tool for artists to help “poison” massive image sets and destabilize text-to-image outputs.
Is A.I. the Death of I.P.? (Louis Menand, New Yorker, 1-22-24) Generative A.I. is the latest in a long line of innovations to put pressure on our already dysfunctional copyright system. "I.P. ownership comes in several legal varieties: copyrights, patents, design rights, publicity rights, and trademarks....David Bellos and Alexandre Montagu use the story of Sony’s big Springsteen buy to lead off their lively, opinionated, and ultra-timely book, Who Owns This Sentence? A History of Copyrights and Wrongs (by David Bellos and Alexandre Montagu, Norton), because it epitomizes the trend that led them to write it. The rights to a vast amount of created material—music, movies, books, art, games, computer software, scholarly articles, just about any cultural product people will pay to consume—are increasingly owned by a small number of large corporations and are not due to expire for a long time."


Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (United States Copyright Office)
What is fair use? US Supreme Court weighs in on AI’s copyright dilemma (Luke Huigsloot, Cointelegraph: The Future of Money, 5-30-23) Many firms with generative AI models are being sued for copyright infringement, and the Supreme Court may have just ruined their primary legal defense. On Feb. 3, "stock photo provider Getty Images sued artificial intelligence firm Stability AI, alleging that it copied over 12 million photos from its collections as part of an effort to build a competing business. It notes in the filing:
     “On the back of intellectual property owned by Getty Images and other copyright holders, Stability AI has created an image-generating model called Stable Diffusion that uses artificial intelligence to deliver computer-synthesized images in response to text prompts."

    "While the European Commission and other regions are scrambling to develop regulations to keep up with the rapid development of AI, the question of whether training AI models using copyrighted works classifies as an infringement may be decided in court cases such as this one."
     "On May 18, the Supreme Court of the United States, considering these factors, issued an opinion that may play a significant role in the future of generative AI. The ruling in Andy Warhol Foundation for the Visual Arts v. Goldsmith found that famous artist Andy Warhol’s 1984 work “Orange Prince” infringed on the rights of rock photographer Lynn Goldsmith, as the work was intended to be used commercially and, therefore, could not be covered by the fair use exemption."
      While the ruling doesn’t change copyright law, it does clarify how transformative use is defined.

 

AI Licensing for Authors: Who Owns the Rights and What’s a Fair Split? (Authors Guild, 12-12-24) "The Authors Guild believes it is crucial that authors, not publishers or tech companies, have control over the licensing of AI rights. Authors must be able to choose whether they want to allow their works to be used by AI and under what terms. Our statement on AI licensing explains why AI licensing is a right reserved by trade authors and what a fair split for most deals will be."

 


Examples of artificial intelligence: manufacturing robots, facial detection and recognition, self-driving cars, e-payments, smart assistants, healthcare management, search and recommendation algorithms, automated financial investing, virtual travel booking agents, social media monitoring, marketing chatbots, maps and navigation, text editors and autocorrect, digital assistants, and aspects of social media.


Thousands of authors urge AI companies to stop using work without permission (Chloe Veltman, Morning Edition, NPR, 7-17-23) "Thousands of writers including Nora Roberts, Viet Thanh Nguyen, Michael Chabon and Margaret Atwood have signed a letter asking artificial intelligence companies like OpenAI and Meta to stop using their work without permission or compensation. It's the latest in a volley of counter-offensives the literary world has launched in recent weeks against AI. But protecting writers from the negative impacts of these technologies is not an easy proposition. Alexander Chee, the bestselling author of novels like Edinburgh and The Queen of the Night, is among the nearly 8,000 authors who just signed a letter addressed to the leaders of six AI companies, including OpenAI, Alphabet and Meta."

• Which are the major AI companies? Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.



Authors Guild Recommends Clause in Publishing and Distribution Agreements Prohibiting AI Training Uses (3-1-23)
Authors Sue OpenAI Claiming Mass Copyright Infringement of Hundreds of Thousands of Novels (Winston Cho, Hollywood Reporter, 6-29-23) Courts are wrestling with the legality of using copyrighted works to train AI systems. The proposed class action filed in San Francisco federal court on Wednesday alleges that OpenAI “relied on harvesting mass quantities” of copyright-protected works “without consent, without credit, and without compensation.”
---AI is the wild card in Hollywood's strikes. Here's an explanation of its unsettling role (Andrew Dalton, ABC News, 7-21-23) Getting control of the use of artificial intelligence is a central issue in the current strikes of Hollywood's actors and writers. As the technology to create without creators emerges, star actors fear they will lose control of their lucrative likenesses. Unknown actors fear they’ll be replaced altogether. Writers fear they’ll have to share credit or lose credit to machines.
     "It may be fitting that "voice" comes first on that list. While many viewers still cringe at the visual avatars of actors like Hamill and Jackson, the aural tech feels further along.
      "The voices of the late Anthony Bourdain and the late Andy Warhol have both been recreated for recent documentaries.
       "Union members who make a living doing voiceovers have taken note."
---A.I. Needs an International Watchdog, ChatGPT Creators Say (Gregory Schmidt, NY Times, 5-24-23)

"To regulate the risks of A.I. systems, there should be an international watchdog, similar to the International Atomic Energy Agency, the organization that promotes the peaceful use of nuclear energy, OpenAI's founders, Greg Brockman and Ilya Sutskever, and its chief executive, Sam Altman, wrote in a note posted Monday on the company's website."
---A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn (Kevin Roose, NY Times, 5-30-23) Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.

---The future of intellectual property law in the era of artificial intelligence (Wisconsin Law Journal, 4-3-23) "Another challenge is how to protect intellectual property rights in the face of AI-enabled infringement. AI systems can be used to create counterfeit goods, to automate the process of copyright infringement, and to even generate fake news. This makes it more difficult for creators to protect their work and to enforce their intellectual property rights. The rise of AI also raises questions about the future of patent law."



Europe takes another big step toward agreeing an AI rulebook (Natasha Lomas, TechCrunch, 6-14-23) "Parliamentarians backed an amended version of the Commission proposal that expands the rulebook in a way they say is aimed at ensuring AI that’s developed and used in Europe is “fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing”.
    "Among the changes MEPs have backed is a total ban on remote biometric surveillance and on predictive policing. They have also added a ban on “untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases” — so basically a hard prohibition on Clearview AI and its ilk.    
     "The proposed ban on remote biometric surveillance would apply to both real-time or post (after the fact) applications of technologies like facial recognition, except, in the latter case, for law enforcement for the prosecution of serious crimes with judicial sign off.
     "MEPs also added a ban on the use of emotional recognition tech being used by law enforcement, border agencies, workplaces and educational institutions.
     "Parliamentarians also expanded the classification of high-risk AI systems to include those that pose significant harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters and the outcome of elections."



‘Life or Death:’ AI-Generated Mushroom Foraging Books Are All Over Amazon (Samantha Cole, 404 Media, 8-29-23) Experts are worried that books produced by ChatGPT for sale on Amazon, which target beginner foragers, could end up killing someone. A genre of AI-generated books on Amazon is scaring foragers and mycologists: cookbooks and identification guides for mushrooms aimed at beginners.

Speech in the Machine: Generative AI's Implications for Free Expression (Summer Lopez and Nadine Farid Johnson, PEN America, 7-31-23) See also PEN's Twitter feed

"We cannot anticipate exactly how these technologies will be used or the magnitude of the risks."

      "In the hands of bad actors—whether public or private—generative AI tools can supercharge existing threats to free expression. If machines increasingly displace writers and creators, that poses a threat not only to those creative artists, but to the public as a whole. The scope of inspiration from which truly new creative works draw may be narrowed, undermining the power of literature, television, and film to catalyze innovative ways of thinking.
       "Generative AI tools have democratized and simplified the creation of all types of content, including false and misleading information; now they are poised to catapult disinformation to new levels, requiring new thinking about how to counter the negative effects without infringing on free expression. Without further attention to the ways in which generative AI could potentially escalate the threat of online abuse, those targeted may be more likely to leave online spaces, and those at risk of being targeted might be more likely to self–censor to avoid the threat.
      "The use of generative AI in targeted political ads and campaign materials could make those messages even more effective, further hardening existing divides and making constructive discourse across political lines even more challenging. Because generative AI tools are trained on bodies of content, they can easily reproduce patterns of either deliberate censorship or unconscious bias. The use of generative AI in creative fields could produce works that are less rich or reflective of the expansive nuances of human experience and expression. . . [These] tools could be wielded—or weaponized—to manipulate opinions and skew public discourse via subtle forms of influence on their users."


PEN's Recommendations for Government:
Pass long overdue, foundational legislation.
Establish and maintain multi-stakeholder policymaking processes
Ground regulatory frameworks in fundamental rights
Engage in policymaking that is measured and iterative
Build flexibility into regulatory schemes
Emphasize and operationalize transparency

PEN's Recommendations for Industry:
Promote fair and equitable use:
Facilitate secure and privacy–protecting use
Emphasize and operationalize transparency
Provide appeals and remedy options
Consider revenue models
Safeguard the ownership rights of writers, artists, and other content owners.



Marvel’s Secret Invasion AI Scandal Is Strangely Hopeful (Angela Watercutter, Wired, 6-23-23) News broke this week that the show’s opening credits were made using artificial intelligence. Fans immediately cried foul.
     "It’s only slightly coincidental that news of AI in Secret Invasion came a day after star Samuel L. Jackson told Rolling Stone that he’s long been cautious of studios wanting to use his likeness in perpetuity, saying when he encounters those clauses in contracts “I cross that shit out.” A few months ago, Keanu Reeves told me that he's long had a clause in his contracts saying that his performances can't be digitally altered without his approval. Actors, and their lawyers, have been wary of the implications of technology and AI for a while. So have writers. Now, as AI infiltrates everyone’s daily lives, fans are monitoring the invasion."
    More Wired stories on the topic:
---This Is the Worst Part of the AI Hype Cycle (Angela Watercutter) Feeling hype burnout? You're not alone.
---Workers Are Worried About Their Bosses Embracing AI

    And there are many more. Search for "Wired" and "AI" or artificial intelligence.


 

Generative AI Has an Intellectual Property Problem (Gil Appel, Juliana Neelbauer, and David A. Schweidel, Harvard Business Review, 4-7-23) "Generative AI, which uses data lakes and question snippets to recover patterns and relationships, is becoming more prevalent in creative industries. However, the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data. Courts are currently trying to establish how intellectual property laws should be applied to generative AI, and several cases have already been filed. To protect themselves from these risks, companies that use generative AI need to ensure that they are in compliance with the law and take steps to mitigate potential risks, such as ensuring they use training data free from unlicensed content and developing ways to show provenance of generated content."

 

Generative Artificial Intelligence and Copyright Law (Legal Sidebar, Congressional Research Service, 5-11-23) A long, important article. "Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as OpenAI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are trained to generate such outputs partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection, as well as how training and using these programs might infringe copyrights in other works."



New US copyright rules protect only AI art with ‘human authorship’ (Daniel Grant, The Art Newspaper, 5-4-23) The US Copyright Office has eased its stance in new guidelines, and a decision on a comic book created using artificial intelligence. Shows "Detail from the cover for the comic book, Zarya of the Dawn (2023), whose author, Kris Kashtanova, used the AI-powered text-to-image generator Midjourney to create the illustrations. She was granted copyright in the book but not its AI-generated images."

 

AI and art: how recent court cases are stretching copyright principles (Hetty Gleave and Eddie Powell, The Art Newspaper, 3-28-23) Two specialists from a leading London law firm analyse the issues raised in recent lawsuits relating to the use of artwork images by tech companies in order to “train” their artificial intelligence tools.

 

Artists and visual media company sue AI image generator for copyright breach (Daniel Grant, The Art Newspaper, 2-15-23) Lawsuits against firm behind Stable Diffusion image generator in recent attempt to define the legal status of such images.

 

How We Think About Copyright and AI Art (Kit Walsh, Electronic Frontier Foundation, 4-3-23) "This legal analysis is a companion piece to our post describing AI image-generating technology and how we see its potential risks and benefits." An interesting analysis. As with most creative tools, it is possible that a user could be the one who causes the system to output a new infringing work by giving it a series of prompts that steer it towards reproducing another work. In this instance, the user, not the tool’s maker or provider, would be liable for infringement.

 

Artificial Intelligence and Seinfeld (“Nothing Forever”) (Aharon Schrieber, Seinfeld Law, 6-13-23) "If an AI generates a TV show apparently in the style of Seinfeld, but without using content from Seinfeld, is that a copyright violation?"

 

Nothing Giant. Nothing Forever (Aharon Schrieber, Seinfeld Law, The Browser, 6-17-23) The AI-generated Seinfeld parody “Nothing, Forever” ran 24 hours a day until February 6, 2023; the show was completely procedurally generated via artificial intelligence. While the show was pretty well received, with even the official Seinfeld twitter account linking to the Twitch channel, “Nothing, Forever” opens up a series of new questions regarding the intersection of copyright law and artificial intelligence. Most importantly, does Seinfeld, Jerry, the actors, NBC, or any other person or entity with rights to Seinfeld the show have a copyright claim against “Nothing, Forever”?


I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires) (Jane Friedman, 8-7-23) "Garbage books getting uploaded to Amazon where my name is credited as the author. Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books. I have not. Most likely they’ve been generated by AI."
"Hours after this post was published, my Goodreads profile was cleaned of the offending titles. However, the garbage books remain available for sale at Amazon with my name attached."


What is generative AI? Everything you need to know (George Lawton, Tech Target) Scroll down for interesting timeline of Generative AI's Evolution. Transformers, large language models (LLMs), innovations in multimodal AI, etc. "Recent progress in transformers such as Google's Bidirectional Encoder Representations from Transformers (BERT), OpenAI's GPT and Google AlphaFold have also resulted in neural networks that can not only encode language, images and proteins but also generate new content."


