AI Fraud Act Could Outlaw Parodies, Political Cartoons, and More 


brain on purple background (Photo: Milad Fakurian/Unsplash)

Mixing new technology and new laws is always a fraught business, especially if the tech in question relates to communication. Lawmakers routinely propose bills that would sweep up all sorts of First Amendment-protected speech. We’ve seen a lot of this with social media, and we’re starting to see it with artificial intelligence. Case in point: the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. Under the auspices of protecting “Americans’ individual right to their likeness and voice,” the bill would restrict a range of content wide enough to ensnare parody videos, comedic impressions, political cartoons, and much more.

The bill’s sponsors, Reps. María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), say they’re concerned about “AI-generated fakes and forgeries,” per a press release. They aim to protect people from unauthorized use of their own images and voices by defining these things as the intellectual property of each individual.

The No AI Fraud Act cites several instances of AI being used to make it appear that celebrities created ads or art that they did not actually create. For instance, “AI technology was used to create the song titled ‘Heart on My Sleeve,’ emulating the voices of recording artists Drake and The Weeknd,” states the bill’s text. AI technology was also used “to create a false endorsement featuring Tom Hanks’ face in an advertisement for a dental plan.”

But while the examples in the bill are directly related to AI, the bill’s actual reach is much more expansive, targeting a wide swath of “digital depictions” or “digital voice replicas.”

Salazar and Dean say the bill balances people’s “right to control the use of their identifying characteristics” with “First Amendment protections to safeguard speech and innovation.” But while the measure does nod to free speech rights, it also expands the types of speech deemed legally acceptable to restrict. It could mean way more legal hassles for creators and platforms interested in exercising their First Amendment rights, and result in a chilling effect on certain sorts of comedy, commentary, and artistic expression.

An Insanely Broad Bill 

At its core, the No AI Fraud Act is about creating a right to sue someone who uses your likeness or voice without your permission. It states that “every individual has a property right in their own likeness and voice,” and people can only use someone’s “digital depiction or digital voice replica” in a “manner affecting interstate or foreign commerce” if the individual agrees (in writing) to said use. This agreement must involve a lawyer, and its terms must be governed by a collective bargaining agreement. If any of these three elements are missing, the person whose voice or likeness was used can sue for damages.

The bit about interstate or foreign commerce might appear to significantly limit the bill’s reach. But in practice, virtually anything involving the internet can be deemed a matter of interstate or foreign commerce.

So just how broad is this bill? For starters, it applies to the voices and depictions of all human beings “living or dead.” And it defines digital depiction as any “replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or part using digital technology.” Likeness means any “actual or simulated image… regardless of the means of creation, that is readily identifiable as the individual.” Digital voice replica is defined as any “audio rendering that is created or altered in whole or part using digital technology and is fixed in a sound recording or audiovisual work which includes replications, imitations, or approximations of an individual that the individual did not actually perform.” This includes “the actual voice or a simulation of the voice of an individual, whether recorded or generated by computer, artificial intelligence, algorithm, or other digital means, technology, service, or device.”

These definitions go way beyond using AI to create a fraudulent ad endorsement or musical recording.

They’re broad enough to include reenactments in a true-crime show, a parody TikTok account, or depictions of a historical figure in a movie.

They’re broad enough to include sketch-comedy skits, political cartoons, or those Dark Brandon memes.

They’re broad enough to encompass you using your phone to record an impression of President Joe Biden and posting this online, or a cartoon like South Park or Family Guy including a depiction of a celebrity.

And it doesn’t matter if there’s no intent to trick anyone. The bill says that it’s no defense to inform audiences that a depiction “was unauthorized or that the individual rights owner did not participate in the creation, development, distribution, or dissemination of the unauthorized digital depiction, digital voice replica, or personalized cloning service.”

What’s more, it’s not just the creators of off-limits content who could be sued. Potentially liable parties include anyone who “distributes, transmits, or otherwise makes available to the public a personalized cloning service”; anyone who “publishes, performs, distributes, transmits, or otherwise makes available to the public a digital voice replica or digital depiction”; and anyone who “materially contributes to, directs, or otherwise facilitates any of the above” with knowledge that the individual depicted had not consented. This is broad enough to ensnare social media platforms, video platforms, newsletter services, web hosting services, and any entity that enables the sharing of art, entertainment, and commentary. It also applies to the makers of tools that merely allow others to create audio replicas or visual depictions, including tools like ChatGPT that allow for the creation of AI-generated images.

But… the First Amendment

Apparently aware of obvious First Amendment issues with this proposal, the lawmakers inserted a section saying that “First Amendment protections shall constitute a defense to an alleged violation.” But this isn’t terribly reassuring, considering that lawmakers are simultaneously trying to expand the categories of speech unprotected by the First Amendment.

At present, intellectual property—such as copyrighted works and trade secrets—falls under free speech exceptions, meaning restrictions are permitted. By defining one’s voice and likeness as intellectual property, the lawmakers are trying to shoehorn depictions of someone else’s voice or likeness into the category of unprotected speech.

Even as intellectual property, voice replicas and digital depictions of others wouldn’t always be prohibited. Just as the doctrine of fair use provides some leeway with copyright protections, this bill defines circumstances under which replicas and depictions would be OK, such as when “the public interest in access to the use” outweighs “the intellectual property interest in the voice or likeness.”

But even if people being sued ultimately prevail on First Amendment grounds, they still have to go to court, with all the time and expense that entails. Even for those with the resources, that’s a big headache. And many people lack those resources, which means that even when the First Amendment is on their side, they’re likely to lose, to cave (by taking down whatever content is being challenged), or to avoid making such content in the first place.

You can see how this might seriously chill protected speech. People may be afraid even to create art, comedy, or commentary that could get challenged. And tech companies could be afraid to allow such content on their platforms.

If this measure becomes law, I would expect to see a lot more takedowns of anything that might come close to being a violation, be it a clip of a Saturday Night Live skit lampooning Trump, a comedic impression of Taylor Swift, or a weird ChatGPT-generated image of Ayn Rand. I would also expect to see more platforms institute blanket bans on parody accounts and the like.

The bill stipulates that imitators aren’t liable “if the harm caused by such conduct is negligible.” But emotional distress counts as harm, which makes this a pretty subjective designation.

And some categories of content—such as “sexually explicit” content and “intimate images”—are declared per se harmful (meaning there could be no arguing that they did not actually harm the party being depicted and therefore were not actually violations). Supporters will likely argue that this targets things like deepfake porn (where AI is used to make it look as if someone appeared in a porn video when they did not). But the language of the law is broad enough to potentially ensnare a wide range of content, including erotic art, commentary that imagines two political figures as intimately involved (like those—yes, often silly and sophomoric—images of Trump and Putin in bed together), and comedic/parodic depictions of sexual encounters.

There’s no doubt that AI is opening up new possibilities for creative expression and for deception, raising new questions that society will have to deal with. But we shouldn’t let lawmakers use these hiccups to justify a broad new incursion on free speech rights.

