Artificial intelligence (“AI”) is no longer a futuristic concept for the legal profession — it’s here, and it’s reshaping how we practice. From e-discovery to contract review, AI tools have already made their presence felt. Rapid developments in generative AI over the past two years have forced lawyers to confront how these new technologies can and will impact their practices. The next frontier lies in evidence. The U.S. Judicial Conference’s Advisory Committee on Evidence Rules has therefore proposed amendments to the Federal Rules of Evidence aimed at addressing this new litigation landscape.
The proposed changes underscore a fundamental concern with reliability. First, Rule 901(b)(9) addresses the authentication of evidence generated by a process or system. The proposed amendment tightens the reins: if the evidence is AI-generated, the proponent must not only describe the system or software but also demonstrate that it produces valid and reliable results. In other words, saying, “The computer said so,” is not enough. Second, under proposed Rule 707, machine-generated output offered without an expert witness must satisfy the same standards of reliability and methodology that Rule 702 imposes on expert testimony. That means lawyers presenting such evidence may need to spend less time on cross-examination theatrics and more time consulting data scientists.
The potential changes reflect a growing realization: AI is only as good — or as bad — as the data it’s fed and the algorithms that process it. Legal practitioners, who may have once viewed AI as a helpful but distant tool, will now have to roll up their sleeves and dive into the technical details. How was the AI trained? What biases might lurk in its decision-making? And does it truly produce consistent and valid outputs?
The stakes are particularly high when it comes to “deepfakes.” Generative AI raises the risk that critical evidence, particularly video, can be convincingly altered or fabricated before it ever reaches the courtroom. The proposed changes tackle this risk directly: if a party raises a genuine question about whether an item of evidence has been altered or fabricated by AI, the proponent must show that it is “more likely than not” authentic.
Proposed Rule 901(c) aims to establish a framework for authenticating evidence that may have been altered or fabricated using AI technologies. The suggested provision includes a two-step process with shifting burdens: (1) The party challenging the authenticity of the evidence must demonstrate to the court that a jury could reasonably find the evidence has been altered or fabricated by AI; (2) If the opponent meets this initial burden, the proponent must then show the court that the evidence is more likely than not authentic before it may be admitted. This approach seeks to balance the need for vigilance against AI-generated falsifications with the practicalities of evidence admission. By requiring an initial showing from the opponent, the rule aims to prevent frivolous challenges, while ensuring that potentially manipulated evidence is thoroughly vetted before being presented to a jury.
We remain in a “wait and see” phase as the legal community works out how rapidly advancing technology fits into a profession often viewed as chained to the past. It’s an opportunity to embrace innovation while safeguarding the principles that underpin justice. The courtroom of tomorrow may look different, but with the right guardrails, it can also look fairer.
Michael P. Dickman is a civil litigator and trial lawyer. He is a senior associate at Kenney & Sams PLLC, where he focuses his practice on business, construction and personal injury disputes. He is chair-elect of the Massachusetts Bar Association’s Young Lawyers Division. He is also a member of the MBA’s Well-Being Committee.