Imagine a world where the evidence presented in court isn't what it seems, where a convincing video or audio clip could be entirely fabricated. This isn't a scene from a dystopian movie; it's a rapidly growing threat in our legal system, and it's alarming judges across the nation.
Recently, Judge Victoria Kolakowski of California's Alameda County Superior Court experienced this firsthand. While reviewing evidence in a housing dispute, something felt off about Exhibit 6C. The video, supposedly featuring a real witness, displayed unsettling inconsistencies: a disjointed voice, a fuzzy face devoid of emotion, and repetitive twitches. The chilling truth? It was an AI-generated "deepfake." This case, Mendones v. Cushman & Wakefield, Inc., appears to be among the first documented instances where a suspected deepfake was submitted as genuine evidence and detected. It raises an uncomfortable question: how many others have slipped through undetected?
Kolakowski, citing the plaintiffs' use of this fabricated evidence, dismissed the case on September 9th. While the plaintiffs argued that the judge only suspected the evidence was AI-generated without definitively proving it, their request for reconsideration was denied on November 6th. They didn't respond to requests for comment.
The rise of powerful AI tools means that AI-generated content is increasingly infiltrating our courts. Judges and legal experts are expressing serious concerns that realistic fake evidence could soon overwhelm courtrooms, undermining the very foundation of trust upon which our legal system is built. NBC News spoke with five judges and ten legal experts who warned that generative AI's ability to produce convincing fake videos, images, documents, and audio is a serious threat.
Judge Kolakowski stated, "The judiciary in general is aware that big changes are happening and want to understand AI, but I don’t think anybody has figured out the full implications. We’re still dealing with a technology in its infancy."
Prior to this case, courts had encountered what's been called the "Liar's Dividend" – when parties try to cast doubt on real evidence by claiming it could be AI-generated. But this case was different: the plaintiffs allegedly attempted to introduce AI-generated video as genuine evidence.
Judge Stoney Hiljus of Minnesota's 10th Judicial District, who chairs the Minnesota Judicial Branch's AI Response Committee, highlighted the growing fear among judges: "I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life."
Even judges who advocate for AI's use in court are worried. Judge Scott Schlegel of the Fifth Circuit Court of Appeal in Louisiana, a leading proponent of judicial AI adoption, shared a chilling example: "My wife and I have been together for over 30 years, and she has my voice everywhere. She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it’s from me and walk into any courthouse around the country with that recording."
"The judge will sign that restraining order. They will sign every single time," Schlegel continued, referring to the hypothetical recording. "So you lose your cat, dog, guns, house, you lose everything."
Judge Erica Yew of California’s Santa Clara County Superior Court emphasizes AI's potential to increase access to justice, but also acknowledges the risks. She points out that forged audio could easily lead to a protective order and advocates for centralized tracking of such incidents. "I am not aware of any repository where courts can report or memorialize their encounters with deep-faked evidence," Yew said. "I think AI-generated fake or modified evidence is happening much more frequently than is reported publicly."
Yew also fears that deepfakes could corrupt trusted methods of evidence gathering. For instance, someone could create a false record of title for a car and submit it to the county clerk, who might not have the expertise to verify its authenticity. This falsified document could then be presented in court. "So now do I, as a judge, have to question a source of evidence that has traditionally been reliable?" Yew wonders.
While fraudulent evidence isn't new, Yew believes AI could lead to an unprecedented surge in realistic forgeries. "We’re in a whole new frontier," she said.
Judges Schlegel and Yew are among those spearheading efforts to combat the threat of deepfakes in court. They are working with a consortium including the National Center for State Courts and the Thomson Reuters Institute, which is developing resources for judges. The consortium distinguishes between "unacknowledged AI evidence" (deepfakes) and "acknowledged AI evidence" (like AI-generated accident reconstruction videos).
The consortium has also published a cheat sheet advising judges to ask about the origin of potentially AI-generated evidence, who had access to it, whether it was altered, and to seek corroborating evidence. In April 2024, a Washington state judge barred video footage that had been "enhanced" by AI from being admitted as evidence.
Judge Hiljus is surveying state judges in Minnesota to understand how generative AI is showing up in their courtrooms. "Judges are starting to consider, is this evidence authentic? Has it been modified? Is it just plain old fake? We’ve learned over the last several months, especially with OpenAI’s Sora coming out, that it’s not very difficult to make a really realistic video of someone doing something they never did," Hiljus said.
To address the issue, some legal experts are proposing changes to judicial rules. One proposal would require parties alleging deepfakes to thoroughly substantiate their claims. Another would transfer the responsibility of deepfake identification from juries to judges. These proposals were considered by the U.S. Judicial Conference’s Advisory Committee on Evidence Rules in May, but weren't approved, as members felt existing standards of authenticity were sufficient.
The Trump administration’s AI Action Plan highlighted the need to "combat synthetic media in the court system." However, other legal practitioners advocate for a more cautious approach, preferring to wait and see how frequently deepfakes are used and how judges respond before changing overarching rules.
Jonathan Mayer, former chief science and technology adviser at the U.S. Justice Department, said existing law was generally sufficient, but acknowledged that "the impact of AI could change — and it could change quickly."
In the meantime, attorneys may become the first line of defense. Judge Schlegel points to Louisiana's Act 250, which requires attorneys to exercise "reasonable diligence" to determine if evidence is AI-generated.
"The courts can’t do it all by themselves," Schlegel said. "When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?"
Daniel Garrie, co-founder of Law & Forensics, emphasizes that human expertise will remain crucial. Metadata, such as a file's origin and creation date, could be a key defense against deepfakes. In the Mendones case, the metadata indicated the video had been captured on an iPhone 6, a model incapable of the features the plaintiffs' own account relied on.
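To make the metadata point concrete, here is a minimal, stdlib-only Python sketch of the kind of first-pass triage an examiner might script: recording a file's size, filesystem timestamp, and cryptographic hash for chain-of-custody notes. The filename is illustrative, and real forensic review goes much further, using format-aware tools that parse embedded metadata such as device model and capture time.

```python
import hashlib
import os
from datetime import datetime, timezone

def basic_file_triage(path: str) -> dict:
    """Collect filesystem metadata and a content hash for an evidence file.

    Note: this reads only what the filesystem reports. Embedded metadata
    (e.g., the recording device's model) requires a format-aware parser.
    """
    st = os.stat(path)
    with open(path, "rb") as f:
        # A hash fixes the exact bytes examined, so later tampering is detectable.
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "size_bytes": st.st_size,
        "modified_utc": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
        "sha256": digest,
    }
```

The hash matters most: once recorded, it lets anyone verify that the file later shown in court is byte-for-byte the file that was examined.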
Courts could also mandate that recording hardware include mathematical signatures to verify content authenticity. However, such solutions may face challenges similar to those encountered with DNA testing or fingerprint analysis, where parties lacking technical expertise may be at a disadvantage.
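The idea of hardware-level signatures can be sketched in a few lines. The example below uses a symmetric HMAC purely for illustration; real provenance schemes (for instance, the C2PA content-credentials standard) use per-device public-key signatures rather than a shared secret, and the device key here is hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret embedded in a recording device (illustration only;
# deployed systems use asymmetric keys so verifiers never hold the secret).
DEVICE_KEY = b"example-device-secret"

def sign_recording(content: bytes, key: bytes = DEVICE_KEY) -> str:
    """Produce a tag binding the key holder to these exact recorded bytes."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_recording(content: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Return True only if the content is unmodified since signing."""
    return hmac.compare_digest(sign_recording(content, key), tag)
```

The point for courts is the verification step: altering even one byte of a signed clip invalidates the tag, so a valid signature ties a recording to the device that produced it.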
Maura Grossman of the University of Waterloo urges judges to remain vigilant. "Anybody with a device and internet connection can take 10 or 15 seconds of your voice and have a convincing enough tape to call your bank and withdraw money. Generative AI has democratized fraud."
Whether the legal system is adequately prepared for AI-generated evidence remains an open question, as does whether the answer lies in technological safeguards, rule changes, or better-trained legal professionals. It is a conversation the courts can no longer postpone.