How Schools Across America Are Struggling With AI Deepfakes
Authored by Aaron Gifford via The Epoch Times (emphasis ours),
Gone are the days when the biggest concern was students drawing alien ears on their science teacher or printing images of a friend’s face connected to a four-legged body with scales and a tail.
That was 30-something years ago. Now, schools are being forced to develop emergency response plans in case sexually explicit images of students or teachers generated by artificial intelligence (AI) pop up on social media.
In two separate cases, school principals were seen or heard spewing racist, violent language against black students. Both were AI-generated deepfakes—one was produced by students and the other was made by a disgruntled athletic director who was later arrested.
Deepfakes are defined as “non-consensually AI-generated voices, images, or videos that are created to produce sexual imagery, commit fraud, or spread misinformation,” according to a nonprofit group focused on AI regulation.
As education leaders scramble to set policy to mitigate the damage of deepfakes—and as state legislators work to criminalize such malicious acts specific to schools or children—the technology to combat AI tools that can replicate a person’s image and voice doesn’t yet exist, says Andrew Buher, founder and managing director of the Opportunity Labs nonprofit research organization.
“There is a lot of work to do, both with prevention and incident response,” he said during a virtual panel discussion held by Education Week last month on teaching digital and media literacy in the age of AI. “This is about social norming [because] the technical mitigation is quite a ways away.”
Legislation Targets Deepfakes
On Sept. 29, California Gov. Gavin Newsom signed into law a bill criminalizing AI-generated child pornography. It is now a felony in the Golden State to possess, publish, or pass along images of individuals under the age of 18 simulating sexual conduct.
There are similar new laws in New York, Illinois, and Washington State.
At the national level, Sen. Ted Cruz (R-Texas) has proposed the Take It Down Act, which would criminalize the “intentional disclosure of nonconsensual intimate visual depictions.”
The federal bill defines a deepfake as “a video or image that is generated or substantially modified using machine-learning techniques or any other computer-generated or machine-generated means to falsely depict an individual’s appearance or conduct within an intimate visual depiction.”
School districts, meanwhile, seek guidance on an emerging problem that threatens not just students, but also staff.
At Maryland’s Pikesville High School in January, a faked audio recording of the principal surfaced. School officials enlisted the help of local police agencies and the FBI.
The suspect, Dazhon Darien, 31, an athletic director, was charged with theft, stalking, disruption of school operations, and retaliation against a witness.
He allegedly made the recording to retaliate against the principal, who was investigating Darien’s alleged mishandling of school funds, according to an April 25 news release on the Baltimore County Government website.
Jim Siegl, a senior technologist with the Future of Privacy Forum, said during the Education Week panel discussion that investigators in the Baltimore case were able to link the suspect to the crime by reviewing “old school computer access logs.”
But as AI technology continues to evolve, he said, it may be necessary to develop a watermarking system for generated audio or video to replace outdated systems for monitoring and safeguarding school computer use.
In February 2023, high school students in Carmel, New York, used AI to impersonate a middle school principal. The deepfakes were posted on TikTok. Investigators were able to link the students’ activities to their accounts. They were disciplined under school code of conduct guidelines but not charged criminally, according to a statement released on the district’s Facebook page.
“As an organization committed to diversity and inclusion,” the statement said, “the Carmel Central School District Board of Education is appalled at, and condemns, these recent videos, along with the blatant racism, hatred, and disregard for humanity displayed in some of them.”
A parent, Abigail Lyons, said a co-worker who also has children in the district showed her a text containing seven different videos.
“I basically fell to the floor,” said Lyons, who is biracial. “It was horrific. It looked so real.”
Lyons and her co-worker re-watched the videos and noticed that the lip movements and body language were slightly out of sync with the audio. Lyons said most parents in the district had already seen or heard about the videos and probably knew they were deepfakes before Carmel school officials publicly acknowledged the incident and declared there “was no threat.”
Lyons said the event scared her daughter, and that events like school lockdowns or emergency drills still trigger anxiety and fear stemming from the 2023 deepfake.
“Seventh graders should not have to worry about these things,” she told The Epoch Times.
Lyons said she is unaware of any deepfake incidents so far this semester, but students have threatened each other on social media, including one threat that led to a two-hour building lockdown.
“We still don’t know what it [lockdown] was for,” she said. “The transparency still isn’t there.”
The Epoch Times reached out to the district offices in Carmel, New York, and Baltimore County, Maryland, but didn’t receive a response.
California’s new law was prompted by several deepfake incidents that victimized students.
Read more here…
Tyler Durden
Wed, 10/23/2024 – 20:55
via ZeroHedge News https://ift.tt/HWI7GSD