Uncovering Hidden AI in Commercial Artwork

Scholar considers how and why content producers should disclose generative AI in creative works.

Text, images, and sounds generated by artificial intelligence (AI) are becoming increasingly commonplace in both physical and digital media. As AI technology rapidly improves, consumers may have difficulty distinguishing between AI-generated and human-made content—and when commercial content producers fail to disclose their AI use, consumers could be left without a meaningful choice between the two.

In a recent article, Jacob Noti-Victor, a professor at Cardozo School of Law, argues that “hidden AI authorship” is prevalent because financial incentives related to marketability and copyright law discourage content producers from disclosing their use of generative AI technology. Noti-Victor identifies the harms of hidden AI authorship in commercial works and proposes industry and government interventions to empower consumers to make informed decisions about engaging with AI-generated material.

Noti-Victor describes how generative AI technology can be used to produce commercial works of art, including visual art, books, television, music, and films. Machine learning programs, armed with massive amounts of training data, can produce “sophisticated, high-quality, and generally unique text, image, and/or sound-based outputs.”

Noti-Victor outlines various techniques that content producers can use to turn these AI-generated outputs into commercial media. Users can draft tailored prompts that instruct a generative AI system to produce the content they want directly. Content companies can do the same, and they can also develop their own proprietary generative AI systems built on internal datasets. Noti-Victor explains that the resulting AI-generated material can then be refined and incorporated into larger works by human creators, and that AI can likewise be used to revise human-made material.

Generative AI has many possible applications in the commercial art and entertainment industry, but producers may not want to disclose the role that AI plays in their creations, Noti-Victor argues. He asserts that producers have clear financial incentives to hide their use of AI-generated material from consumers, citing empirical evidence that consumers are less likely to purchase a work when they believe that it was produced by generative AI.

Copyright law also encourages concealment, Noti-Victor contends. He notes that, under current law, a work created by a nonhuman author is ineligible for copyright protection and subject to free use by the public, leaving content producers unable to stop unwanted uses of their creations or to reap the maximum financial benefit from them. Although the U.S. Copyright Office has announced that applicants must disclose the inclusion of AI-generated content when they submit a work for registration, most applicants will be motivated to conceal AI use in light of these financial consequences, Noti-Victor explains.

Noti-Victor argues that this concealment is problematic because consumers have an interest in knowing whether the works of art or entertainment they purchase were created using generative AI. He explains that many consumers “have strong ethical and aesthetic preferences for human-created works” that inform their purchasing decisions. Noti-Victor contends that these preferences, which often arise from ethical concerns about labor markets or personal beliefs about authenticity, make human-made works more desirable to consumers, even when “a work cannot be distinguished as human-made or AI-generated on its face.”

Given these consumer preferences, producers’ failure to disclose AI authorship is “a kind of deception,” Noti-Victor suggests. When AI-generated creative works “masquerade as human-made,” he argues, a consumer cannot make an informed decision about whether to consume those works. Noti-Victor insists that AI use must be disclosed to prevent deception and allow consumers to choose whether to engage with AI-generated material.

Noti-Victor proposes industry and regulatory reforms that could encourage content producers to disclose the use of generative AI in the creation of commercial works. He suggests that, within the commercial art and entertainment industry, producers could voluntarily adopt provenance-tracking technology, which marks AI-generated materials at their inception, or publicly certify whether each piece of their content was produced using generative AI.

Alternatively, legislators could mandate disclosure. Noti-Victor cites the European Union’s recent AI Act, which requires that generative AI systems mark all outputs as AI-generated using provenance-tracking technology, and California’s AI Transparency Act, which requires large generative-AI companies to provide the public with AI-detection tools, as examples of AI transparency legislation. Members of Congress proposed a similar federal law in 2023, but the bill has not gained much political traction.

Noti-Victor also offers a potential regulatory solution: having the U.S. Federal Trade Commission (FTC) police deceptive omissions of AI use. This approach would not require blanket disclosure of all AI-generated material. Rather, under its mandate to protect consumers from unfair or deceptive practices, the FTC could “target specific instances where the omission of information regarding a work’s provenance materially misleads consumers.” Noti-Victor notes that, to date, the FTC has focused its AI enforcement on issues such as competition and the exploitation of personal data, not hidden AI authorship.

Lastly, Noti-Victor suggests that private intellectual property litigation could be used to “raise the financial stakes of non-disclosure.” He explains that private plaintiffs could bring copyright misuse claims against content producers who obtain copyright protection for works containing undisclosed AI-generated material, which could result in a financial penalty for those producers.

Noti-Victor emphasizes that industry or regulatory action is needed to protect the public from deception, because consumers “deserve to know the role of AI in a work’s creation so they can choose whether, and on what terms, to engage with it.” He concludes that regulatory conversations must include hidden AI authorship alongside generative AI’s other potential harms.