Judge Paul Grimm


Generative AI creates a double evidentiary quandary: the 'liar’s dividend,' where real evidence is dismissed as fake, and fake evidence is accepted as real.

Paul W. Grimm is a law professor and Director of the Bolch Judicial Institute at Duke Law School. From December 2012 until his retirement in December 2022, he served as a district judge of the United States District Court for the District of Maryland, with chambers in Greenbelt, Maryland. He has also written extensively and taught courses for lawyers and judges in the United States and around the world on topics relating to e-discovery, technology and law, and evidence. Judge Grimm served on the Advisory Committee for the Federal Rules of Civil Procedure from 2009 to 2015 and chaired its discovery subcommittee, which crafted, in part, the 2015 amendments to the Federal Rules of Civil Procedure. Judge Grimm served both on active duty and in the Army Reserve as a Judge Advocate General’s Corps officer and retired at the rank of lieutenant colonel.

Talks by Judge Paul Grimm


related talk Artificial Justice – Navigating an AI Evidence Crisis

Generative AI, with its ability to produce hyper-realistic deepfakes, is not just a technological marvel—it’s a profound challenge for the justice system. In this conversation, Judge Paul Grimm, former U.S. District Judge for the District of Maryland and now a professor at Duke Law School, explores the intricate legal and evidentiary issues posed by AI in courtrooms.

Central to the discussion is the concept of the "liar’s dividend," by which generative AI creates a dual evidentiary crisis. On one hand, legitimate evidence can be dismissed as fake, undermining its credibility. On the other hand, fabricated evidence—entirely plausible and generated by AI—can be accepted as real, influencing outcomes in critical cases. Judge Grimm highlights how AI tools democratize the creation of fraudulent evidence, making sophisticated forgeries accessible to anyone with a smartphone.

Judge Grimm also delves into how AI impacts the judicial system's ability to evaluate evidence. He describes the unprecedented challenges posed by deepfakes, which exploit our natural trust in what we see and hear. Unlike forged documents or manipulated images of the past, AI-generated content can convincingly mimic voices, faces, and even actions. These creations, Judge Grimm explains, often exploit the "seeing is believing" instinct of jurors and judges, creating what he calls a "perfect evidentiary storm."

AI isn’t simply a threat to the justice system; the technology also offers the prospect of powerful tools to authenticate evidence and streamline complex legal processes. However, Judge Grimm warns against overreliance on such technologies, especially as "black-box" AI models often operate through mechanisms that even their creators cannot fully explain. As Judge Grimm puts it, the justice system must adapt to ensure that technology enhances truth rather than distorting it.

Judge Paul Grimm is a professor of law at Duke Law School and an expert on evidence and artificial intelligence. He brings decades of experience as a judge and legal scholar to this timely and critical topic.