Admitting AI Art as Demonstrative Evidence

Images and animations created through generative artificial intelligence (GAI) present new possibilities and questions for the law of demonstrative evidence. AI art tools may allow parties to prepare pedagogical displays—including hyper-realistic virtual imagery—without retaining expensive third-party artists. But these programs raise evidentiary concerns such as reliability and undue prejudice, issues that remain largely unaddressed under the notoriously undeveloped law governing computer-made demonstratives. This Note explains both how artificial intelligence companies could institute initiatives for better quality assurance at the front end and how courts can encourage such measures through new applications of existing evidentiary and procedural rules. The Note ultimately argues that the emerging use of GAI imagery may necessitate stricter standards in demonstrative evidence law.


    Introduction

    Iris, pupil, tear duct, cornea—each part of this eye is equivalent to a fraction of the whole. If you add up all the parts, you will get only sixty-three sixty-fourths. The missing sixty-fourth is your painter’s hand, which will allow you to discover it if you become a true seer. For seeing is creating.[1]

    My body is a canvas on which I intend to draw.[2]

     

    An elementary school teacher of mine once asserted that it would take a long time before computers could beat humans in the abstract strategy game Go. “A program beat a world chess champion a few years ago because chess is a very limited game,” he said. “Its pieces have specific roles and movements. Not Go. It is called ‘a universe on a playboard’ for a reason. Absolutely no way machines can best men in this game! At least not until your children, no, grandchildren become adults.” Less than two decades later, British artificial intelligence (AI) company DeepMind introduced AlphaGo, a program designed to play Go.[3] On March 15, 2016,[4] I watched AlphaGo secure its 4–1 victory against Lee Sedol, then one of the top professional Go players in the world.[5]

    Granted, my former teacher’s claim was not unreasonable then. A typical game of chess, played on an eight-by-eight field, usually takes about eighty turns and has 10¹²³ possible moves.[6] By contrast, a standard Go game involves a nineteen-by-nineteen board and lasts about 150 turns, yielding around 10³⁶⁰ moves.[7] The sheer number of possibilities, combined with Go’s “qualitative” nature, left observers doubtful of competent Go software as early as 1965.[8] Indeed, a Chinese Go enthusiast offered in 1985 a financial award worth about $1.4 million in today’s dollars for a program that could beat a professional Go player.[9] The prize went unclaimed and expired by the year 2000.[10]

    Go is not the only area where AI has achieved a milestone, as the recent surge of generative AI (GAI), AI that makes new content from existing content,[11] shows. AI art tools like Artbreeder,[12] DALL-E and its successor DALL-E 2,[13] Midjourney,[14] Shutterstock Generate,[15] and NovelAI[16] can produce artworks from textual prompts and parameters. With the right commands, they produce illustrations whose degrees of verisimilitude range from obviously fictional[17] to convincingly real or even practically usable.[18] Surely these AI generators have proved wrong the prevailing twentieth-century view that machines cannot behave intelligently.[19]

    While GAIs are already challenging existing norms and laws,[20] they also suggest at least one application in the field of demonstrative evidence (also known as illustrative evidence or derivative evidence),[21] a term used in evidence law to encompass a body of sensory devices (such as charts, diagrams, or videos) that help explain relevant facts or issues at hearings.[22] AI generators may reach the point where parties can use them to create drawings and animations for legal purposes. (Unless another “AI Winter” comes, that is.)[23] If so, AI would accelerate the legal profession’s slow yet sure movement toward visualization.[24]

    To be sure, GAI offers potential advantages for the legal system. For example, AI art can make the adversarial system more accessible and equitable by reducing expenditures of precious capital on demonstrative evidence.[25] Not only does the creation of a complex demonstrative exhibit directly add enormous expenses to client bills,[26] but its preparation indirectly burdens a party by consuming much lawyer time.[27] Litigants have tried to address these problems through creative and sometimes desperate means, such as borrowing illustrated books from public libraries, using outgrown toys and dolls, or running into a courtroom wall without protective gear to explain what a crash test means.[28] But inexpensive, simple substitutes have limits.[29] New AI technology could be an alternative both to ineffective low-tech exhibits and to costly third-party specialists, enabling parties (especially indigent ones) to empower their narratives in a cost-efficient manner.[30]

    Figure 1: A Bing Image Creator image created with the prompt, “A red sedan that got T-boned on the left side by a truck.”

    Consider two examples made with Bing Image Creator. A personal injury practitioner representing a client who failed to take pictures of the accident scene may still wish to show what it looked like. In such cases, an image like Figure 1 could prove handy. Or a public defender working on a case involving a bar fight might discover that the weapon, a smashed beer bottle, was destroyed minutes before police officers arrived. The lawyer could use Figure 2 to show what the bottle would have looked like.

    Figure 2: A Bing Image Creator image created with the prompt, “A smashed beer bottle used as a weapon in a bar fight.”

    Of course, some procedural questions must be addressed before litigants can actually start relying on GAIs for such uses. Can courts admit AI graphics for demonstrative uses? On what grounds? Should the relevant standards be any different from the ones governing traditional derivative evidence? What measures are available against errors and other unwelcome surprises?

    This Note seeks to answer those questions. Barring more fundamental solutions like a new body of Model Rules for Demonstrative Evidence,[31] courts may admit pedagogical AI art through existing rules of evidence and civil and criminal procedure.[32] Due to uncertainties inherent in the technology, judges should subject GAIs to robust authentication and prehearing procedures to minimize errors and inaccuracies.[33] AI creators could also adopt informal guidelines to facilitate their brainchildren’s evidentiary uses.[34] New technologies have historically drawn judicial concern about their very mechanisms until the legal profession as a whole came to take them for granted.[35] The precautions presented here may help elevate illustrative AIs from novelties to norms.[36]

    The rest of this Note proceeds in three Parts. Part I describes the history and current law of demonstrative evidence, the latter with a focus on computer displays.[37] Part II explains the evidentiary challenges raised by GAIs, a solution to those problems, and two inadvisable alternatives to the proposal in Part III.[38] Part III discusses procedural and protocol-based safeguards against the problems outlined in Part II.[39]

    I. The History and Current Law of Demonstrative Evidence

    This Section outlines the law of demonstrative evidence. It starts with a brief history of the law.[40] Then follows a discussion of the current rule favoring liberal admission of tutorial displays, including those generated by computers.[41]

    A.     The History of Demonstrative Evidence Has Favored Lenient Admission.

    The historical consensus since the sixteenth century holds that introduction of derivative evidence is “a matter of right.”[42] The law of evidence, and with it the use of tutorial objects, emerged with the modern jury during the sixteenth and seventeenth centuries.[43] Judges who presided over civil bench trials then relied on visual demonstrations to decide issues.[44] The practice was resilient and popular enough to make its way into Sir William Blackstone’s eighteenth-century treatises.[45] Some nineteenth-century opinions indicate that judges took the use of illustrative displays for granted.[46] Indeed, one state supreme court opinion remarked that objecting to the use of enlarged photographs was like doing the same with the use of corrective eyeglasses.[47]

    Demonstrative evidence became a treatise topic on its own for the first time at the turn of the twentieth century.[48] John Henry Wigmore grouped demonstratives under a separate section in the sixteenth edition of Simon Greenleaf’s treatise on evidence, published at the end of the nineteenth century.[49] The scholar noted that each such display assumes “a qualified witness as its testimonial support and cannot [by] itself have any standing independently of some witness whose knowledge it serves to represent.”[50] Later Wigmore wrote his own evidence treatise, where he categorized derivative exhibits as nonverbal testimonies.[51] Then Charles Tilford McCormick started “the modern era of demonstrative evidence” in 1940.[52] The evidence theorist dedicated a whole chapter to the topic in the first editions of his casebook and hornbook, published respectively in 1940 and 1954.[53]

    These academic works coincided with the increasing use of demonstrative evidence in practice.[54] Attorney Melvin Mouron Belli famously won a tort case in 1946 for a girl who sued the city railway after losing her leg in a streetcar accident.[55] After the opposing counsel successfully moved for remittitur,[56] Belli started the second trial by spending about three minutes unwrapping a prosthetic leg and handing it to a juror.[57] Then he asked the jury:

    Won’t you take it and pass it amongst [yourselves] and, as you do, feel the warmth of life in the soft tissues of its flesh, note the pulse of the blood as it courses through the veins . . . Don’t be alarmed by all of the laces, and harnesses, the strappings, and the creaking of the metal. My client . . . must wear this for the rest of her life in exchange for that limb which . . . she should have worn for the rest of her life.[58]

    The artificial limb left a deep impression on the jury—the girl won one hundred thousand dollars, roughly fifty percent more than the first verdict.[59] The defense filed another motion alleging excessive damages, but the judge (whom Belli swore was also swayed) sustained the verdict.[60] This theatrical victory opened the age of colored charts, three-dimensional models, and day-in-the-life films.[61]

    Perhaps owing to those five centuries of acceptance and success stories, lawyers have cited the need for convenient, effective means of persuasion, and judges have accordingly refused to establish strong restrictions on explanatory devices. For instance, illustrative photographs of handwriting samples were proper because they helped factfinders.[62] Wigmore, the progenitor of modern demonstrative evidence law, also reasoned in his evidence treatise that it would be unwise to deny “those effective media of communication commonly employed at other times as a superior substitute for words.”[63]

    B.     The Current Law Favors Admission of Demonstrative Computer Graphics.

    Demonstrative evidence generally enjoys a remarkable degree of flexibility in admission and use, as indicated by the fact that it is mentioned only once in the Federal Rules of Evidence (FRE).[64] Specifically, Rule 611(a)(1) and the accompanying Advisory Committee’s Note present the sole guidance.[65] Judges must “exercise reasonable control over the mode and order of presenting evidence” in order to make the presentation of evidence effective.[66] And the principle covers derivative evidence.[67]

    Three practical guidelines accommodate, rather than exclude, illustrative exhibits.[68] First, a demonstration must be relevant to a fact or an issue under Rules 401 and 402.[69] A pedagogical device has relevance if it illustrates a testimony about a fact or an opinion.[70] Second, the exhibit’s probative dangers must not substantially outweigh its probative value.[71] Rule 403 specifies that the relevant dangers include jury misdirection, issue confusion, and unfair prejudice.[72] Third, the illustration must be authenticated as prescribed by a pertinent provision from Rules 901 to 903.[73] That is, the proponent must offer some kind of proof (usually a witness attestation) showing that a tutorial instrument is what the proponent claims it to be.[74] These loose restrictions, combined with scant textual guidance, have yielded a “relatively consistent and unremarkable pattern of admissibility.”[75]

    Such evidentiary lenience extends to derivatives created by computers.[76] The FRE emerged as the legal profession began to embrace modern computing,[77] and some of its amendments have addressed increasingly complex technologies.[78] Still, the only thing that is clear about the rules on computer-made drawings and animations is that they are unclear.[79] But the theoretical and practical consensus has been that such exhibits are natural, positive, even inevitable.[80] This view may have to do with the considerable influence wielded by computer evidence in courtrooms.[81] After all, humans learn better through visual stimuli than they do through verbal ones.[82]

    Federal and state judiciaries have been favorably disposed toward computer-generated graphics since the 1970s. Perma Research and Development v. Singer Co.[83] started this trend by legitimizing the use of computer simulations for trials. There, Singer Company (“Singer”) appealed a bench verdict of some seven million dollars for Perma Research and Development Company (“Perma”) over a breach of a manufacturing contract.[84] One of Singer’s arguments disputed the results of a computer simulation a Perma expert used to testify about Perma’s design and Singer’s defective product.[85] Retired Supreme Court justice Thomas Campbell Clark, sitting by designation, rejected the challenge and held that the trial court did not abuse its discretion in admitting the testimony.[86] Though Perma could have shared the bases for the simulation, Singer failed to show that it had an adequate reason to cross-examine the expert.[87]

    What is noticeable is that all except dissenting Judge Ellsworth Alfred Van Graafeiland took the use of the computer for granted.[88] Singer challenged the data and equations behind Perma’s simulation, not the simulation itself.[89] Justice Clark too was content, save for the lack of pretrial precautions.[90] Judge Van Graafeiland, who had received far too many “computerized bills and dunning letters for accounts long since paid,” disagreed.[91] He observed that neither of Perma’s experts had seen or examined the disputed product.[92] Their testimonies rested instead on data that one of them compiled through formulas entered into a computer in an unspecified way.[93] The testimonies were thus speculations derived in an “undescribed, hypothetical manner,” unfit to serve as bases for a seven-million-dollar verdict.[94]

    Post-Perma federal cases have upheld illustrative computer graphics.[95] In re Air Crash Disaster[96] held that a “computer-animated videotape” depicting the inner workings of an airplane circuit breaker was proper.[97] The 1987 crash of Northwest Flight 255, then the second-worst air disaster in American history, killed a total of 156 people.[98] After a prolonged multidistrict litigation that involved aerospace manufacturer McDonnell Douglas and Northwest Airlines Corporation (“Northwest”), an Eastern District of Michigan jury found Northwest one hundred percent liable and granted all of McDonnell Douglas’s requests for reimbursement from the airline.[99]

    Northwest filed two appeals, one of which unsuccessfully argued that Federal Rule of Evidence 403 precluded a six-minute animation showing how the circuit breaker installed on the airplane worked.[100] The airline argued that the movie was an improper simulation that suggested “a similarity to actual events” and essentially depicted McDonnell Douglas’s argument.[101] The court, though acknowledging that concern, held that the video was only a demonstration, one that the expert could have drawn himself during his testimony.[102] In the court’s view, the animation’s prejudicial effect did not outweigh its probative value, and its use was therefore “entirely” proper.[103]

    By the 2000s, computer illustration had become a norm in federal courts, as a memorandum order issued at the turn of this century by Judge Jack Bertrand Weinstein indicates.[104] Verizon Directories Corporation sued Yellow Book USA, Inc., for false or misleading advertisements.[105] The two asked the court whether they could use computer-made displays (such as “cartoonish” images and “thought bubbles”) at trial.[106] Judge Weinstein allowed them all, save for those struck for error, lack of utility, or unsatisfactory conception or execution.[107] He understood that a trial is a mutual learning process between litigants and jurors.[108] Computer presentations had become a norm in complex cases, where the jury must supplement its collective reasoning power, that essence of the American adversarial system, with digests of complex testimonies and data.[109] A correct, reliable pedagogical device that could satisfy that need had little basis for exclusion.[110]

    Just three years ago, the Eighth Circuit held in United States v. Oliver[111] that the admission of computer-edited maps containing hearsay was harmless error. Law enforcement agents arrested Shelton Oliver for drug trafficking after a fellow city resident bought heroin from him and died of an overdose.[112] He was found guilty of five drug-related charges and received a twenty-five-year sentence.[113] During trial, the court admitted maps showing where an informant bought heroin from him.[114] A city police officer shared the locations with two other city government employees, who used the information to create the maps and added lines and other labels.[115] Oliver contended on appeal that, since those coordinates were out-of-court statements, the maps contained hearsay and thus violated his right to a fair trial under the Sixth Amendment.[116]

    The court, though sympathetic toward Oliver’s argument, held that any resulting error was harmless.[117] The government claimed that the marks were not hearsay for two reasons: The three government employees testified and were cross-examined, and all labels were made with computer software rather than human hands.[118] The court responded, respectively, that the ability to cross-examine a testifying human does not per se address the issue and that machines may produce hearsay when given erroneous human input.[119] But even if the maps’ admission was improper, the error was harmless because the jury did not need to rely on them to find against Oliver.[120] The jurors could have used photographs of the locations and in-court testimonies of the three government employees to reach the same outcome.[121] In other words, the maps were harmless because they were cumulative and demonstrative of real evidence.[122]

    State courts have similarly been lenient on computer graphics, as illustrated here by three cases spanning about two decades. The Supreme Court of Iowa ruled in Ladeburg v. Ray[123] that custom computer displays made by an expert are acceptable. Helen Ladeburg was struck by a semitrailer while driving and sued the driver and his employers.[124] All parties informally agreed to extend discovery and other relevant deadlines, so Ladeburg learned only five days before trial that the defense expert would use drawings he had made with a computer, based on his own calculations and inputs.[125] She unsuccessfully moved to exclude the diagrams before and during the trial and lost the case.[126] Ladeburg argued on appeal that the artworks’ prejudicial effect outweighed their probative value.[127] The court disagreed, holding that they were “mere[] mechanical drawings” whose creator was present for cross-examination.[128]

    Thirteen years later, the highest court in Pennsylvania held that tutorial animations are conditionally admissible in criminal trials.[129] Michael Serge was arrested for fatally shooting his wife.[130] The state presented during its case-in-chief an animation that depicted its theory about where and how the killing occurred.[131] The trial court gave meticulous instructions on the movie, stressing that it was just a demonstration that did not recreate the actual event.[132] The jury found Serge guilty of first-degree murder, leading to his life sentence.[133] Serge argued on appeal that the trial court erred in admitting the animation, as it needed authentication, lacked foundation, and was unfairly prejudicial.[134] The State answered that the display was like any other illustrative exhibit.[135] The Supreme Court of Pennsylvania, after analyzing the graphic’s authentication, relevance, and probative value, concluded that the animation was properly admitted.[136]

    About twelve years ago, the Supreme Court of California in People v. Duenas[137] upheld illustrative use of computer animations in criminal trials. Enrique Parra Duenas, while under the influence of methamphetamine, ran away from an unarmed police officer and eventually shot him to death.[138] During trial, a prosecution expert showed the jury a four-minute animation that depicted her opinion as to how the defendant fired seven shots and killed the victim.[139] Both the defendant and the State agreed that it was an animation, not a simulation.[140] The jury returned a death verdict, which the trial court sustained over Duenas’s automatic application to modify.[141]

    Duenas futilely appealed, asserting that the animation was speculative and cumulative and that its prejudicial effect outweighed its probative value.[142] The court reasoned that, even if some details, like the actual trajectories and locations of the bullets, were wrong, the animation only illustrated the expert’s take on how the killing happened, not the actual sequence of events.[143] The movie’s cumulative nature was appropriate, as it purported to help the jury understand relevant real evidence.[144] Further, both the government and the court repeatedly stressed that the animation was a mere demonstration, the former once going so far as to describe it as arbitrary.[145]

    These precedents reveal a strong judicial inclination toward admission of computer-made pedagogical illustrations, and with it at least three recurrent justifications. First, judges analogized computer graphics to traditional media and concluded that the two are the same for the most part.[146] Perhaps such comparisons are natural, as the common law trains its lawyers in analogical comparisons between new and existing phenomena.[147] Indeed, nineteenth-century judges equated photography to other visual arts to explain the new art’s evidentiary role.[148] Similarly, courts have ruled computer drawings proper on the grounds that their creators were present for cross-examination and that they were cumulative,[149] two traits of pre-computer demonstrations.[150] Judicial analyses under established rules and standards have also equated computer animations to traditional media such as sketch pads and hand-drawn graphics.[151]

    Second, the presence and cross-examination of experts have largely vindicated computer-made displays.[152] This tendency traces back at least to the late nineteenth century, when experts gave testimonial assurance that photographs constitute valid evidence.[153] Twentieth- and twenty-first-century experts have done the same with computer graphics.[154] The presence of a circuit breaker expert sufficed to admit a computer animation as proper, as he himself could have drawn similar images and been cross-examined in front of the jury.[155] So too were the computer drawings of a crash scene based solely on the defense expert’s calculations and decisions.[156] But in-court testimony and cross-examination may fail to justify a graphic if other evidentiary concerns are involved.[157]

    Third and last, pretrial procedures permitted (but did not require) litigants to announce and examine computer visual exhibits.[158] Perma, where both opinions advised that pretrial disclosures of computer evidence would be “good practice,” evinced early judicial wariness of the computer’s potential for persuasion and error.[159] But vigilance gave way to acceptance as case after case justified computer drawings and animations.[160] By the 1990s judges even exempted parties from disclosing the use of computer art before proceedings.[161] But this absence of judicial insistence also meant that parties had to exercise adequate judgment and care during pretrial discovery.[162]

    II. A New Problem: Artificial Intelligence Art Generators

    This Section explains why GAIs present a new evidentiary challenge. It starts with a brief history of computer art, establishing that machines and computers have until now remained largely under user control.[163] Then comes an introduction to the current generation of GAIs, which operate with minimal human supervision and thus warrant caution due to characteristics inherent in computer evidence generally and in AI specifically.[164] The last sub-Section considers two alternatives to the proposals in Part III.[165]

    A.     Pre-AI Robots and Computers Could Not Learn or Draw Independently.

    Before the advent of AI drawers, humans controlled creative processes that involved robots and computers. The history of computer-made art dates back to the mid-twentieth century, when science came to enjoy increasing prominence in the art world.[166] This interdisciplinary trend led to the first instance of computer art.[167] Desmond Paul Henry, a British philosopher and artist, built in 1961 a semiautomatic drawing machine out of a bombsight computer.[168] Henry had bought the computer in 1952 and admired its intricacy, to the point that he rebuilt it in order to artistically capture its internal processes.[169] The machine produced paintings that were displayed the following year at Henry’s solo exhibit in London.[170] Henry’s works soon gained international fame, and he proceeded to make at least three more similar machines.[171]

    Soon others followed Henry’s lead. During the summer of 1962, American engineer A. Michael Noll used an IBM 7090 and a plotter to draw geometric patterns and distributions.[172] He often used colored markers to create “customized art” for his coworkers.[173] Noll and some fellow scientists and technicians at Bell Laboratories, New Jersey, were among the first computer artists in the United States.[174] Some British scientists also explored artistic uses of the computer.[175] These efforts culminated in technological art exhibitions.[176] The most prominent such event was Cybernetic Serendipity, held in 1968 at the Institute of Contemporary Arts in London, England.[177] It featured artworks generated by machines but no device that could design artworks on its own.[178]

    Such intersections between art and technology led to the invention of painting robots. An early pioneer was Jean Tinguely.[179] In 1955 the Swiss sculptor began to invent drawing machines called “méta-matics.”[180] One drew allegedly abstract paintings and blew them toward spectators with a fan.[181] A 1960 exhibition with four méta-matics at the Staempfli Gallery, New York, fascinated America.[182] In the same year, Raymond Auger built a machine that included a mechanical arm and a tape reader.[183] The machine drew according to instructions with random variables coded on punched tape.[184] Later generations used programs to control their robotic ventures. Harold Cohen started tinkering with computers around 1970 and in 1973 introduced AARON, a program that painted based on data entered by the British artist.[185] AARON evolved from a simple rule-based algorithm to a sophisticated system.[186] But AARON’s capacities never extended beyond those that Cohen granted it.[187] Joseph Nechvatal has since the 1980s deployed robots, computers, and even computer viruses to paint structurally complex artworks.[188] Matthias Groebel builds machines that materialize his artistic visions.[189] His mechanical assistants do much of the grunt work like their Renaissance predecessors, confined to parameters defined by the German master.[190]

    Then followed computer programs for creating art. The late 1960s witnessed the invention of the graphical user interface, a system that enables a user to control an electronic device through visual and audio indicators.[191] The new technology inspired engineers and artists to develop early electronic paint systems and image synthesizers within twenty years.[192] Today, programs like Adobe Photoshop and Microsoft PowerPoint allow novices and experts alike to create computer graphics of varying intricacies.[193]

    These pre-AI artistic mechanisms were dependent on human supervision and command. No robot could create art that was independent of the operator’s designs and instructions.[194] Computer programs have been but mere tools, producing graphics under the user’s scrutiny.[195] This limited capacity has meant that lawyers who resorted to computer art for pedagogical purposes would obtain graphics that adhered to their specifications and needs.[196]

    B.     The Autonomous Qualities of GAIs Warrant Caution.

    Times have changed with the arrival of AI art generators that possess autonomous qualities lacking in earlier robots and computers. Some brief, simplified definitions and explanations are in order. AI, though its meaning has changed over time and across contexts since its coinage in 1955,[197] may be characterized as a branch of science that makes computers do intelligent tasks.[198] A task is said to be intelligent if a person must exercise his or her own intelligence to complete it.[199] Computers finish such tasks in ways that are different from how people do them.[200] A classic example is the brute-force algorithm that mechanically keeps altering its response to a problem until it reaches the right one.[201] The answer is intelligent; the process is not.[202] Because of this methodological difference, AI studies how computers conduct and complete intelligent assignments.[203]
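    To make the distinction concrete, consider a minimal sketch of brute-force search in Python (the task and numbers here are the author’s hypothetical illustrations, not drawn from any cited source):

        # Brute force: mechanically try candidate answers until one passes
        # the goal test. The final answer looks intelligent; the process is not.
        def brute_force_solve(is_correct, candidates):
            for candidate in candidates:
                if is_correct(candidate):
                    return candidate  # first candidate satisfying the test
            return None

        # A person would find the integer whose square is 361 through
        # arithmetic insight; the program simply enumerates possibilities.
        answer = brute_force_solve(lambda n: n * n == 361, range(1000))
        print(answer)  # prints 19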

    Machine learning is a subfield of AI that studies how to make computers learn from data.[204] A successful instance of machine learning requires a large set of data on the target subject to train the algorithm.[205] Machine learning is not mutually exclusive with AI.[206] Instead, integration of the two is essential to produce a learning AI.[207] AlphaGo learned from millions of online and offline match records before its monumental victory against Lee.[208] Illustrative AIs too are learning algorithms.[209] A successful AI art generator requires up to millions of sample images for training.[210]
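    The idea can likewise be sketched in a few lines of Python (a toy illustration with made-up data, not any cited system): instead of being handed a rule, the algorithm infers one from training samples.

        # Machine learning in miniature: fit a single parameter w so that
        # y ≈ w * x, using only example (x, y) pairs rather than a coded rule.
        data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

        w = 0.0               # initial guess
        learning_rate = 0.01
        for _ in range(1000):                   # predict, measure, adjust
            for x, y in data:
                error = w * x - y               # how far the prediction misses
                w -= learning_rate * error * x  # nudge w to shrink the error

        print(round(w, 2))  # ≈ 2.0: the rule y = 2x was learned from the data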

    One relevant application of machine learning to GAIs is the artificial neural network (ANN). An ANN is a method inspired by the structure of the brain that helps an AI program learn and complete a specific task.[211] ANNs and their variants have enhanced the design flexibility and data processing of learning AIs[212] at the cost of relinquished control and increased uncertainty, as illustrated by three ANN modifications used in prominent AI drawers.

    A generative adversarial network (GAN) uses two ANNs to make new data from training samples.[213] One network makes derivations of varying qualities while the other appraises and rejects unsatisfactory ones.[214] Both learn from each other and produce increasingly better products, just as a rivalry between a forger and a detective would yield counterfeits of ever higher quality.[215] The invention of GANs has led to enormous improvements in AI art tools over the past several years.[216] But the technology is unpredictable in the sense that the probabilities of a GAN product, be it a success or a failure, are unknown.[217] Artbreeder, the DALL-E models, and Midjourney rely on GANs in their artistic endeavors.[218]
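    The forger-and-detective dynamic can be sketched as follows (a toy example assuming the PyTorch library; real image GANs are vastly larger, and the “data” here are mere numbers drawn from a bell curve):

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # forger
        D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detective
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()

        for step in range(2000):
            real = torch.randn(64, 1) * 1.25 + 4.0   # genuine samples (mean 4)
            fake = G(torch.randn(64, 8))             # forgeries made from noise

            # Detective: learn to label genuine samples 1 and forgeries 0.
            d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
                      loss_fn(D(fake.detach()), torch.zeros(64, 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Forger: adjust so the detective mistakes forgeries for genuine.
            g_loss = loss_fn(D(fake), torch.ones(64, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

        # The forger's outputs drift toward the genuine distribution.
        print(G(torch.randn(1000, 8)).mean().item())  # approaches 4.0

    Note that nothing in the loop states what the forger will produce, only that it must fool the detective, which is one way to see the unpredictability described above.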

    Another example is the convolutional neural network (CNN), which has achieved success in pattern recognition.[219] A CNN uses convolution layers, structures with filters that identify a sample image’s visual features.[220] A map of such features (a feature map) is used as the input for the next convolution phase.[221] The hierarchy results in successive layers learning increasingly complicated features.[222] Also at work are algorithms that minimize errors between phases.[223] The process repeats until the network as a whole compiles a comprehensive feature map and makes decisions about some chosen features.[224] Throughout this process the program chooses relevant features on its own, not under the engineer’s guidance.[225] Deep Dream Generator and Shutterstock Generate are among the GAIs that use CNNs.[226]
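    The convolution step itself is simple enough to sketch by hand (a toy illustration assuming the NumPy library; in a real CNN the filter values are learned, not handwritten):

        import numpy as np

        # A 5x5 toy "image": dark on the left half, bright on the right half.
        image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

        # A handwritten 3x3 filter that responds to vertical dark-to-bright edges.
        kernel = np.array([[-1, 0, 1],
                           [-1, 0, 1],
                           [-1, 0, 1]], dtype=float)

        # Slide the filter across the image; each output cell records how
        # strongly the local patch matches the filter. The result is a feature map.
        k = kernel.shape[0]
        out = image.shape[0] - k + 1
        feature_map = np.zeros((out, out))
        for i in range(out):
            for j in range(out):
                feature_map[i, j] = (image[i:i + k, j:j + k] * kernel).sum()

        print(feature_map)  # large values mark where the dark-bright edge sits

    A full network stacks many such layers, feeding each feature map into the next so that later layers respond to ever more complicated patterns.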

    Finally, contrastive language-image pre-training (CLIP) uses labeled images to teach visual concepts.[227] More specifically, the technique teaches an algorithm what text corresponds to what image through hundreds of millions of text-image pairs.[228] A successful CLIP program often has separate encoders for textual and visual data, though research suggests that one model can perform both tasks with minimal changes.[229] CLIP enables computers to spot, classify, and apply visual concepts to a variety of visual tasks.[230] But this process too involves many unknowns.[231] CLIP is responsible for an increasing number of text-to-image AI generators, including StarryAI.[232]
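    At its core, the training objective can be sketched as follows (a toy example assuming the PyTorch library, with random vectors standing in for real photographs and captions): matched image-text pairs are pulled together in a shared space while mismatched pairs are pushed apart.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        torch.manual_seed(0)
        image_encoder = nn.Linear(32, 16)  # stand-in for a real vision encoder
        text_encoder = nn.Linear(24, 16)   # stand-in for a real language encoder
        opt = torch.optim.Adam(list(image_encoder.parameters()) +
                               list(text_encoder.parameters()), lr=1e-2)

        images = torch.randn(8, 32)  # a batch of eight image-caption pairs;
        texts = torch.randn(8, 24)   # texts[i] is the caption for images[i]

        for _ in range(500):
            img = F.normalize(image_encoder(images), dim=1)
            txt = F.normalize(text_encoder(texts), dim=1)
            logits = img @ txt.T / 0.07   # similarity of every image to every text
            targets = torch.arange(8)     # the true matches lie on the diagonal
            # Symmetric loss: each image must pick its caption and vice versa.
            loss = (F.cross_entropy(logits, targets) +
                    F.cross_entropy(logits.T, targets)) / 2
            opt.zero_grad(); loss.backward(); opt.step()

        with torch.no_grad():
            sims = F.normalize(image_encoder(images), dim=1) @ \
                   F.normalize(text_encoder(texts), dim=1).T
        print(sims.argmax(dim=1))  # ideally tensor([0, 1, ..., 7]): pairs matched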

    Such successful technologies warrant precaution for two reasons: one inherent to computer evidence generally and one to AI specifically. Computers raise manifold evidentiary concerns.[233] The legal profession hesitated to trust computers long before the emergence of AI.[234] More than a decade before computer art became proper for trials, it was understood that each use of the computer must be accompanied by “all reasonable certainty that both the machine and those who supply its information have performed their functions with utmost accuracy.”[235] The message resonates to this day. Litigants are concerned about the factual bases of computer evidence.[236] The code that creates such exhibits may contain unsound assumptions.[237]

    These problems intensify with AI.[238] The technology involves additional variables that are prone to uncertainties, including input data, developmental and training techniques, calibration techniques, and result consistencies.[239] Even more doubts arise with learning AIs, which acquire their own reasoning and judgment capacities through training or self-education.[240] Some even rewrite themselves in order to improve efficiency.[241] They ultimately become black boxes: phenomena whose inputs and outputs may be identified and examined but whose intermediary mechanisms may not.[242]

    Such concerns hold for AI art tools.[243] Their creative processes preclude direct control and management and so may yield questionable products without user awareness.[244] Artificial neural networks like GANs, CNNs, and CLIP grow increasingly complex, to the point that even their programmers cannot explain exactly what happens during production processes.[245] Their products may contain inconsequential yet irritating errors that would not exist in works created by human artists.[246] GAIs are thus more analogous to conscious, independent entities than to the mere tools that lawyers and third parties have used to create tutorial graphics.[247] Without some preventive measures, GAIs’ inherent opacity and risk would delay demonstrative uses of AI art generators.[248]

    The best solution to this quandary is to supplement the knowledge gaps with authentication and other safeguards. AI consumers will likely have little to no knowledge of how AI art generators are made and taught or whether their artworks are reasonably reliable.[249] The problem would worsen should AI drawers continue employing diverse methodologies, as they do now with ANNs.[250] Some kind of assurance about the inner workings and outputs of AI art tools would help answer legal concerns about them—even if fictionally.[251] This is where human intervention would shine.[252]

    Law is methodologically conservative.[253] Traditional reasonings have allowed generations of judges to assimilate new technologies into the profession.[254] Among such logical devices are authentication and procedural safeguards.[255] While nobody knows or can explain all interactions in a computer, human caution would help fill AI voids.[256]

    C.    Minimal Procedural Requirements on AI Graphics Present the Best Option.

    This sub-Section evaluates two alternatives to the proposal outlined in Part III. Treating AI-made illustrations like traditional demonstrations is unfeasible since the former involve unknown intermediary steps.[257] Designating AI art as testimonial evidence contradicts the history and purpose of derivative evidence and therefore is inadvisable.[258]

    1.     Liberal Admission Is Unfeasible Due to Uncertainties Inherent to AI.

    One alternative is to treat AI graphics like other types of demonstrative evidence and admit them as liberally. This option is attractive for two reasons. First, it is the path of least resistance.[259] Second, common law, the “locus of analogical reasoning,” would be inclined to treat AI drawings and animations like traditional visual arts, as precedents on the illustrative use of computer displays indicate.[260] Analogy is probably the most frequently used tool in the common law toolbox.[261] Artists create displays for legal purposes based on instructions provided by litigants or attorneys,[262] just as the present cohort of GAIs use keywords and prompts to perform their tasks.[263] The parallelism would seem to suggest that AI artworks should be treated like their predecessors.[264] Why scrutinize AI graphics if other pedagogical devices have not been seriously challenged?[265] Besides, AI art would not be the first novel technology to be validated by common law analogy.[266]

    The problem is that AI art tools are fundamentally different from traditional demonstrative tools by virtue of their independence.[267] Humans before the advent of AI had held the reins in creative endeavors, including ones involving mechanical devices.[268] Robots and computers made artworks under direct control of operators.[269] Programs were but means to create or edit displays according to the user’s specifications.[270] The lack of autonomy allowed litigants to obtain displays tailored to their needs.[271]

    The same cannot be said of GAIs. The lack of direct supervision and control leads to the inability to prevent production of inaccurate or otherwise unreliable results.[272] Consider the collective failure of GAIs at accurately drawing human hands and fingers.[273] An AI graphic with such deficiencies may be unable to fulfill its narrative purpose, as its audience may instead focus on mangled hands and question the proponent’s competence.[274] Substandard demonstrations have proved themselves capable of destroying cases.[275] Legal players should ensure that AI drawings and animations are produced by valid methodologies, if only for litigants who want to strengthen their persuasive efforts.[276] As Judge Richard Allen Posner put it, what matter in an analogy are the differences, not the similarities.[277]

    2.     Designation of GAI Art as Testimony Contradicts the History and Purpose of Demonstrative Evidence.

    The other alternative is to treat AI drawings and animations as testimony and subject them to the full spectrum of relevant standards and tests applicable to human testimony, such as the rules against hearsay. This option sounds sensible in light of the view that some machine products necessitate thorough examination before evidentiary admission.[278] AI has characteristics that complicate its regulation in some areas, including the law of evidence.[279] The similarities appear to rationalize safeguards such as hearsay exceptions and reliability requirements.[280]

    But this path too is inadvisable on two grounds. First, computer summaries (which often encompass drawings and animations) are not expert testimony, at least not by themselves.[281] Second, categorizing demonstratives as testimony defeats the history and purpose underlying the law of demonstrative evidence.[282] Recall the historical understanding that introduction of demonstratives is a “matter of right.”[283] Also, the Federal Rules of Evidence were a response to the rise of complex litigation.[284] One of their foundational goals was to resolve evidentiary questions and disputes efficiently.[285] Indeed, Rule 611 requires that evidentiary examinations, including ones involving pedagogical devices, be conducted in the most effective manner.[286] The Advisory Committee on Rules of Evidence even declined to institute specific guidelines regarding the rule, reasoning that trial judges need the utmost leeway to ensure that the trial system functions effectively in their own courtrooms.[287]

    Equating AI graphics and testimonies contradicts the law of demonstrative exhibits, defeating the purpose of the former’s demonstrative use in the first place. Black box risks, though concerning, do not warrant excessive and strenuous safeguards.[288]

    III. A Proposal for Admitting Illustrative AI Demonstrations

    This Section discusses how federal courts might circumvent the problems outlined in Part II and admit AI art for illustrative purposes. Federal Rule of Evidence 902(13) is the best evidentiary framework for authenticating AI drawings and animations with maximum convenience.[289] The Federal Rules of Civil and Criminal Procedure allow for pretrial examination of AI artworks.[290] Jury instructions would help contain and minimize potential problems.[291] AI creators may also adopt voluntary measures to increase transparency, accuracy, and reliability.[292]

    A.    Federal Rule of Evidence 902(13) Allows Efficient Authentication.

    Authentication under Federal Rule of Evidence 902(13) would provide the crucial step toward the admission of AI graphics. Very few cases discuss demonstrative illustrations with respect to authentication and pretrial procedures, which may have to do with the historical understanding about demonstrations.[293] But those cases may allow practitioners to predict how to authenticate AI art in actual disputes.

    Recall that a proper demonstration meets three requirements: relevance, no undue prejudice, and authentication.[294] The first two are not problematic for admission of GAI results. Relevance has posed a very low bar for derivative exhibits.[295] The same goes for probative value, as courts have shown remarkable tolerance even for derivatives containing probative issues.[296]

    Rule 902(13) is the best solution for authenticating AI drawings and animations, as the only feasible alternative, Rule 901(b)(9), could place additional hurdles for a litigant seeking to use AI art. Rule 901(b)(9) is one of the two FRE rules about a computer exhibit whose accuracy depends on intermediary processes or systems.[297] A party seeking to introduce an AI artwork under the rule would have to establish that the artwork actually depicts what is claimed to be depicted.[298] Testimony that describes the AI tool used and that shows it produces accurate visuals would suffice.[299]

    In practice, Rule 901(b)(9) could mean that witnesses may have to explain their AI graphics with expert testimonies. Marquette Transportation Co. Gulf-Inland LLC v. Navigation Maritime Bulgarea[300] ruled that a computer reconstruction of a ship crash must be authenticated by the person who prepared it. Plaintiff Marquette Transportation Co. Gulf-Inland, LLC (“Marquette”), sued the defendant companies after their vessel steered into and damaged its ship.[301] Marquette introduced a captioned animation created by an employee.[302] The defendants argued that the movie was improper on many grounds, including absence of authentication per Rule 901(b)(9).[303] Marquette claimed that its navigational experts could do the job instead.[304] But the court ordered that the employee be present at trial for a “vigorous cross-examination,” as the animation’s preparation was “well beyond” the expertise of the other witnesses.[305]

    Marquette would seem to imply that, in cases where AI art is involved under Rule 901(b)(9), the proponent must secure and present at the hearing whoever can explain the creative process of the AI service.[306] Such persons must not only testify but also survive cross-examination about the GAI’s inner workings and products.[307] What is particularly concerning is the part about methodology: It may mean that trial courts specifically want witnesses who can testify about the GAI used to create the illustration.[308] That is, parties or witnesses who created and introduced AI graphics may not suffice, even if they have knowledge of relevant facts or issues.[309]

    Rule 902(13) is the other FRE provision on such exhibits, and it may be more convenient for a GAI product’s proponent.[310] Should a party choose to rely on this rule, it must submit a certification that would suffice to prove authenticity if that information were presented by a live witness.[311] That is, a witness to an AI graphic’s accuracy need not testify or undergo cross-examination.[312] Another advantage is that parties may stipulate as to whether AI art will be questioned on authenticity grounds.[313] That leeway is based on the understanding that parties often forgo authentication challenges for various reasons.[314]

    United States v. Bondars[315] indicates how Rule 902(13) might operate with respect to AI graphics. Ruslans Bondars was arrested on several computer crime charges.[316] The government used an internet archive service to capture screenshots and videos of Bondars’s online activities and moved to admit those exhibits under Rule 902.[317] Prosecutors supported their motions with a certificate written by an employee of the online tool.[318] The court recognized the rule’s purpose and granted the motion.[319]

    Applying Bondars to AI graphics, a party, a witness, or an AI expert need not take the witness stand.[320] Litigants may instead request and obtain signed certifications from relevant personnel at AI service providers in order to use their products.[321] Or parties could even stipulate and waive objections to AI drawings and animations.[322] Since Rule 902(13) only requires certifications and allows parties to stipulate around authentication challenges, it is more convenient than Rule 901(b)(9).

    B.     The Federal Rules of Civil and Criminal Procedure Enable Pretrial Scrutiny.

    This sub-Section briefly discusses actions that may become “good practice[s],” nonbinding yet advisable if and when AI art does become an acceptable tutorial means.[323] Again, very few cases discuss demonstrative graphics with respect to pretrial processes at all. This dearth, combined with the line of cases favoring illustrative graphics, seems to indicate that the best pretrial guides for AI graphics are in the texts of and notes accompanying the Federal Rules of Civil and Criminal Procedure.[324] Civil litigants may use discovery to inquire about possible uses of AI illustrations and ensure that they suffice for hearing purposes.[325] Criminal parties may use limited discovery with additional safeguards and restrictions to minimize constitutional concerns.[326]

    1.     Prehearing Procedures Under the Federal Rules of Civil Procedure

    Courts have advised the use of discovery with respect to computer evidence since the 1970s, though parties could waive discovery devices through stipulations.[327] AI graphics may necessitate reliance on this practice.[328]

    Discovery would help discern whether a civil party plans to use AI drawings or animations. Interrogatory responses might indicate such plans, though their scope is defined by the language of the interrogatories themselves.[329] A more useful means would be depositions, during which lawyers may confirm whether responding parties or witnesses plan to use AI artworks.[330] Interrogatories may precede a deposition and vice versa.[331] But a litigant should know that interrogatories are less expensive, though more limited, than depositions.[332] Either way, a civil party may request production of the bases of AI graphics.[333] Litigants would be well served to remember in seeking disclosure that a party is under no obligation to “volunteer information not fairly encompassed” by their requests.[334]

    Also relevant are disclosure requirements. Civil litigants who know early on that they will use AI art may alert others and share copies.[335] A party can also provide to others and file relevant information about an AI product should the need for it arise at a later point during discovery.[336] If the party decides to hire an expert witness who in turn wants to use an AI-made illustrative exhibit, the entity should ask the expert to mention the display in his or her report.[337]

    2.     Prehearing Procedures Under the Federal Rules of Criminal Procedure

    Like their civil counterparts, criminal parties may use AI drawings and animations in accordance with the Federal Rules of Criminal Procedure.[338] This includes some measures that may help protect the defendant’s constitutional rights.[339] These are important because AI art tools may allow the indigent—who account for about eighty percent of all criminal defendants in the country—to present more sophisticated narratives and better assert rights under the Constitution.[340]

    Most actions would occur during discovery, which several reforms have expanded over the years.[341] A prosecutor using an AI service may allow the defendant to inspect and copy its products.[342] Likewise, defendants who request government disclosure and want to use AI graphics may have to reveal them.[343] Both state and defense experts who plan to incorporate AI artworks in their testimonies must indicate as much during disclosure.[344] Depositions, though rare in criminal cases, would enable either party to examine a witness relying on AI demonstrations.[345] Subpoenas may help in those extraordinary instances where a person must be secured for a deposition or another AI-related matter.[346] Further, either side may realize later on that it would like to use AI displays for its case.[347] If so, the party should notify all other relevant players at the first opportunity.[348]

    The government’s AI visual exhibit may violate a defendant’s constitutional rights.[349] The defendant should try to prevent the prosecutor from introducing such illustrations at all, lest the trial court allow those graphics and the appellate court legitimize them.[350] Accordingly, the defendant should object to prejudicial or otherwise problematic artworks before the trial.[351]

    C.    Limiting Jury Instructions

    Jury instructions have helped with the acceptance or justification of pre-AI computer graphics, even if retrospectively.[352] Cautionary instructions (also known as limiting instructions), which direct jurors to ignore certain evidence or consider it for a specific purpose only,[353] may likewise minimize jury misdirection or confusion about AI drawings and animations.[354] Caution is warranted in this area, as a visual tutorial may end up in the jury room under the guise of review facilitation.[355]

    Federal rules allow civil and criminal litigants to request and object to jury instructions.[356] A civil or criminal party may at a reasonable time request cautionary instructions, which would be reviewed by other parties and the court.[357] Judges must, upon timely requests, issue cautionary instructions that limit the scope of AI demonstrations.[358] Appropriate instructions may direct or remind a jury about an AI display’s illustrative purpose or about its factual assumptions and differences.[359] A court may even admit only parts of an AI exhibit to limit its prejudicial effect.[360]

    D.    Possible Precautionary Measures for AI Creators

    AI companies may contribute by adopting guidelines on the creation and training of their AI art tools in order to make them more reliable. Discussed here are three possibilities: open-source development, bias minimization, and sample comparison.

    Open-source development would help demystify the research and development processes surrounding an AI generator.[361] One crucial difference between AI and twentieth-century research schemes is that the former does not need institutional support.[362] A capable programmer with a decent computer and an internet connection may participate in an AI project.[363] This technical democracy, though advantageous in many respects, complicates third-party examination and verification.[364] Consider, for instance, that many AI programs use commercial off-the-shelf (COTS) software and hardware components such as Windows operating systems and Android smartphones.[365] Since COTS products save time and money, AI developers may rely excessively on them, and their products may become algorithmic chimeras.[366] Further, a COTS device is usually proprietary, produced by an established company, and thus difficult to investigate or reproduce.[367] Open-source AI development would help third parties examine such AI services and ensure that a given service is sufficiently accurate and reliable for legal purposes.[368] Presently, only a handful of AI art tools are open-source.[369] Some valid concerns about open-source AIs notwithstanding,[370] an increase in such GAIs may help litigants who want economical pedagogical displays.

    Another way to make GAIs more evidentiarily proper is to minimize their biases through diverse, representative training datasets.[371] AI programs are “only as smart as the data used.”[372] That is, an AI tool produces fallible results if taught with dubious data.[373] For instance, commercial facial recognition software that almost always correctly identified the gender of white men did the same in only sixty-five percent of cases involving darker-skinned women.[374] And Amazon abandoned a painstakingly developed AI employment tool after three years of use because it was trained on ten years of data favoring male applicants and ended up perpetuating the skewed workforce.[375] Unfortunately, this tendency is threatening to manifest in AI illustrators as well.[376] Of course, litigants may circumvent the problem by instructing AI art tools to produce drawings and animations without racial or sexual aspects.[377] But some cases may necessitate inclusion of such details, if only for narrative reasons.[378] AI training datasets should be as diverse and inclusive as possible so that GAIs can accommodate as many illustrative needs as possible.[379]

    Further, programmers and engineers could test GAIs to ensure their drawings and animations actually look like real-world subjects.[380] Currently, AI drawers exhibit common errors such as the inability to draw human hands and fingers accurately.[381] Such minute details, even if inconsequential to a case’s merits, can destroy the case by themselves.[382] A GAI company may, as a part of product maintenance, periodically produce displays depicting situations or phenomena recurring in some types of trials.[383] Such a precautionary measure may seem trivial and even natural—save for the fact that a surprisingly small number of AI work products are ever tested against actual outcomes.[384] Indeed, some random commands and prompts are already uncovering previously unknown defects and drawbacks.[385]

    Conclusion

    This Note has argued that the legal profession may use AI graphics for demonstrative purposes with some preventive measures. The current law, founded on the historical view that demonstrative evidence is one of the most basic yet potent tools in an attorney’s arsenal, allows for liberal admission of pedagogical illustrations. But the opacity inherent in AI implies that the law should reconsider extending such tolerance to AI drawings and animations. Indeed, procedural and protocol-based guidelines can assure that such graphics are proper for illustrative purposes.

    The argument also makes clear that the law of demonstrative evidence can no longer be neglected, for two reasons. First, lawyers should engage more actively with possible applications and issues of AI.[386] The technology thus far has achieved remarkable successes in areas like driving,[387] translation,[388] finance,[389] and medicine.[390] The legal profession too has used AI successfully.[391] There is little reason to believe that the technology’s legal uses will decline.[392] AI art, if and when it becomes usable for legal purposes, may help the law join other disciplines in becoming more multimedia.[393] Courthouses should lead the use of cutting-edge technologies.[394] This is especially so considering that the layperson has become increasingly adaptive to scientific advancements.[395]

    Second, fundamental procedural values are at stake. Computerized information raises concerns about reliability, democratic accountability, and equity.[396] This is so partly because programs may not be reliable enough for legal uses[397] and partly because companies and programmers do not always want to share how their products operate.[398] These understandable issues pose obstacles to justice and fairness.[399] Consider facial recognition technology, an instance of which recently led to the wrongful arrest and detention of an African American man in Michigan.[400] GAIs are already showing signs of similar potential for misuse.[401] Given that it takes herculean effort and suffering to address and undo such wrongs,[402] might it not make sense to institute preventive measures before they happen at all?

    AI displays may eventually become conditionally, and someday even wholly, admissible demonstrations. But their evidentiary uses should prompt a reevaluation of the current approach to demonstrative evidence. Perhaps our brainchildren are urging an overdue reform.


    Copyright 2023 Edward Oh, University of California, Berkeley, School of Law, Class of 2023.

               [1].     Christian Jacq, The Wise Woman 207-08 (Sue Dyson trans., Pocket Books 2000) (2000) (cleaned up).

    [2].     Isaac Asimov, The Bicentennial Man and Other Stories 163 (Millennium 2000) (cleaned up).

               [3].     John Ribeiro, AlphaGo’s Unusual Moves Prove Its AI Prowess, Experts Say, Computerworld (Mar. 14, 2016), www.pcworld.com/article/420054/alphagos-unusual-moves-prove-its-ai-prowess-experts-say.html [https://perma.cc/WQS9-8ELE].

               [4].     Tanguy Chouard, The Go Files: AI Computer Wraps Up 4–1 Victory Against Human Champion, Nature (Mar. 15, 2016), https://www.nature.com/articles/nature.2016.19575 [https://perma.cc/K34Z-WLGC].

               [5].     Simon Mundy, AlphaGo Conquers Korean Grandmaster Lee Se-dol, Financial Times (Mar. 15, 2016), www.ft.com/content/f6b90460-eaa5-11e5-9fca-fb0f946fd1f0 [https://perma.cc/U8DY-8GAA] (describing Lee as “arguably the best player of the past decade”); Aswin Pranam, Why the Retirement of Lee Se-Dol, Former ‘Go’ Champion, Is a Sign of Things to Come, Forbes (Nov. 29, 2019), www.forbes.com/sites/aswinpranam/2019/11/29/why-the-retirement-of-lee-se-dol-former-go-champion-is-a-sign-of-things-to-come/?sh=17afbf663887 [https://perma.cc/3K43-QC8F] (commenting on Lee’s retirement in 2019).

               [6].     Christof Koch, How the Computer Beat the Go Master, Sci. Am. (Mar. 19, 2016), www.scientificamerican.com/article/how-the-computer-beat-the-go-master [https://perma.cc/A7XX-JWD5].

               [7].     Id.

               [8].     Irving John Good, The Mystery of Go, New Sci. (Jan. 21, 1965), www.chilton-computing.org.uk/acl/literature/reports/p019.htm [https://perma.cc/N7EW-22JZ].

               [9].     Oswald Campesato, Artificial Intelligence, Machine Learning, and Deep Learning 230 (2020).

             [10].     Id.

             [11].     Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula & Justin D. Weisz, Investigating Explainability of Generative AI for Code Through Scenario-Based Design, arXiv (Feb. 10, 2023), dl.acm.org/doi/10.1145/3490099.3511119 [https://perma.cc/XU5R-S52K].

             [12].     Eray Eliaçık, You Don’t Have to Pay for AI Art: Here Are the Best Free Art Generators, Dataconomy (Dec. 22, 2022), www.dataconomy.com/2022/12/best-free-ai-art-generators-images-trend [https://perma.cc/K69A-MYEY].

             [13].     Will Douglas Heaven, This Avocado Armchair Could Be the Future of AI, MIT Tech. Rev. (Jan. 5, 2021), www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense [https://perma.cc/C48R-KVP5].

             [14].     Stephen Cousins, The Rapid Rise of AI Art, Eng’g & Tech. Mag. (Feb. 13, 2023), eandt.theiet.org/content/articles/2023/02/the-rapid-rise-of-ai-art [https://perma.cc/L2KB-B5GH].

             [15].     Amos Struck, Shutterstock Partners with OpenAI in a New AI Image Generation Tool, Stock Photo Secrets (Jan. 26, 2023), www.stockphotosecrets.com/news/shutterstock-partners-with-openai.html [https://perma.cc/3T6J-7RM3].

             [16].     Andrew Amos, “It’s Art Theft”: AI Art Is Taking over VTubing, but Murky Ethics Worry Artists, Dexerto (Oct. 7, 2022), www.dexerto.com/entertainment/ai-art-vtubers-unclear-ethics-worry-artists-1952140 [https://perma.cc/PE4F-NNTF].

             [17].     See, e.g., Jesus Takes a Selfie during the Last Supper, Reddit, www.reddit.com/r/midjourney/comments/10612uy/jesus_takes_a_selfie_during_the_last_supper [https://perma.cc/BPQ4-M9R2] (last visited Sept. 28, 2023).

             [18].     See, e.g., Nike Sneakers for Wedding, Reddit, www.reddit.com/r/midjourney/comments/zpoda7/nike_sneakers_for_wedding [https://perma.cc/M73B-PRL3] (last visited Sept. 28, 2023).

             [19].     Alan M. Turing, Intelligent Machinery, in Cybernetics 26, 27 (Christopher R. Evans & Anthony D. J. Robertson eds., 1968) (noting that common idioms like “acting like a machine” and “purely mechanical behavio[]r” embodied such views).

             [20].     Len Aoi, New AI Image Generating Service in Japan Stirs Debate. Artists Decry Their Work Being Used for AI Art Generation, Automaton (Aug. 31, 2022), www.automaton-media.com/en/nongaming-news/20220831-15350 [https://perma.cc/K9AW-C5VQ] (reporting the suspension of a new AI art tool over online disputes between illustrators who were concerned about unauthorized uses of their artworks and those who provided their own works in developing the program); Jennifer Korn, Getty Images Suing the Makers of Popular AI Art Tool for Allegedly Stealing Photos, CNN (Jan. 18, 2023), www.cnn.com/2023/01/17/tech/getty-images-stability-ai-lawsuit [https://perma.cc/PW3H-W2BH] (explaining that Getty Images, Inc., filed a lawsuit against Stability AI Ltd. over unauthorized uses of millions of copyrighted images in teaching a new GAI); Shanti Escalante-de Mattei, Artists File Class Action Lawsuit Against AI Image Generator Giants, ARTnews (Jan. 17, 2023), www.artnews.com/art-news/news/artists-class-action-lawsuit-against-ai-image-generator-midjourney-stability-deviantart-1234653892 [https://perma.cc/5JQ7-ZXEZ] (discussing a copyright class action against programs such as Midjourney and Stability AI).

             [21].     Robert P. Mosteller, Kenneth S. Broun, George E. Dix, Edward J. Imwinkelried, David H. Kaye & Eleanor Swift, McCormick on Evidence § 212 (8th ed. 2020).

             [22].     Demonstrative Evidence, Black’s Law Dictionary (11th ed. 2019) (defining demonstrative evidence as “[p]hysical evidence that one can see and inspect . . . and that, while of probative value and usu[ally] offered to clarify testimony, does not play a direct part in the incident in question”); see also Mosteller et al., supra note 21, § 214 (“aids [that] are offered to illustrate or explain the testimony of witnesses, including experts, or to present a summary or chronology of complex or voluminous documents”). The term “demonstrative evidence” has come to encompass a variety of sensory devices, including ones used substantively as bases for expert testimonies. See generally Maureen A. Howard & Jeffrey C. Barnum, Bringing Demonstrative Evidence in from the Cold: The Academy’s Role in Developing Model Rules, 88 Temp. L. Rev. 513, 518–40 (2016) (discussing the background and status of the taxonomical confusion). This Note focuses on evidence used for illustrative purposes only.

             [23].     The term AI Winter refers to the period from the 1960s to the 1990s, when AI developments ran into snags and research funds consequently dried up. Sean Gerrish, How Smart Machines Think 261 (2018); Daniel S. Levine, Theory of the Brain and Mind: Visions and History, in Artificial Intelligence in the Age of Neural Networks and Brain Computing 191, 194 (Robert Kozma, Cesare Alippi, Yoonsuck Choe & Francesco Carlo Morabito eds., 2018) (referring to AI Winter as the “dark ages in the field”).

             [24].     See generally Elizabeth G. Porter, Taking Images Seriously, 114 Colum. L. Rev. 1687, 1752–74 (2014) (explaining the legal profession’s gradual acceptance of visual media).

             [25].     Alan B. Parker, Demonstrative Exhibits on a Budget, 30 Litig. 22, 22–23 (2004).

             [26].     See id. at 22 (stating that even in the early 2000s custom illustrations and animations could “easily” cost respectively thousands and tens of thousands of dollars).

             [27].     Id. at 23.

             [28].     Id. at 22–24.

             [29].     Id. at 24 (warning that generic cartoon images that are “race, age, and gender neutral” should be avoided in some cases because they “demean the parties and trivialize the issues”).

             [30].     See Stella Sky, Best AI Art Generators in 2022, Medium (Aug. 28, 2022), medium.com/mlearning-ai/best-ai-art-generators-in-2022-25566216ca74 [https://perma.cc/QG54-V4Q7] (reporting that some GAIs offer free trials and others offer ten-dollar monthly plans).

             [31].     See Howard & Barnum, supra note 22, at 540–49.

             [32].     This Note focuses on the federal rules. Most states have evidence and procedure rules that mirror, to at least some extent, the federal rules.

             [33].     See Sabine Gless, AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials, 51 Geo. J. Int’l L. 195, 215 (2020) (stating that machine evidence, though “seemingly objective, . . . might be prone to error” and thus should “be explained (at least in part) through the use of experts”).

             [34].     See Andrea Roth, Machine Testimony, 126 Yale L.J. 1972, 2022–38 (2017) (describing testimonial safeguards for machines, including machine credibility testing and machine confrontation abilities).

             [35].     See, e.g., The Taylor Will Case, 10 Abb. Pr. (n.s.) 300, 318 (N.Y. Sur. Ct. 1871) (indicating that, after the advent of photography in 1839, a state court determined that “[t]oo many collateral issues [we]re involved to render [photographs of a signature] reliable testimony” and declared that “[t]he refractive power of the lens, the angle at which the original to be copied was inclined to the sensitive plate, the accuracy of the focusing, and the skill of the operator . . . would have to be investigated to insure the evidence as certain”); Robert García, “Garbage in, Gospel Out”: Criminal Discovery, Computer Reliability, and the Constitution, 38 UCLA L. Rev. 1043, 1073 (1991) (mentioning a computer’s hardware as a reliability factor); People v. Martinez, 990 P.2d 563, 581 (Cal. 2000) (establishing that, by the year of the decision, “testimony on the acceptability, accuracy, maintenance, and reliability of computer hardware” was no longer required) (cleaned up).

             [36].     See Gless, supra note 33, at 215 (outlining “a predictable life cycle for many types of new evidence”).

             [37].     See infra Part I and accompanying text.

             [38].     See infra Part II and accompanying text.

             [39].     See infra Part III and accompanying text.

             [40].     See infra Part I.A and accompanying text; see also generally Robert D. Brain & Daniel J. Broderick, The Derivative Relevance of Demonstrative Evidence: Charting Its Proper Evidentiary Status, 25 U.C. Davis L. Rev. 957, 986–1018 (1992) [hereinafter Derivative Relevance] (providing a general history of demonstrative evidence).

             [41].     See infra Part I.B and accompanying text.

             [42].     Derivative Relevance, supra note 40, at 962.

             [43].     William S. Holdsworth, 9 A History of English Law 127 (1922).

             [44].     Sidney L. Phipson, The Law of Evidence 6 (2d ed. 1898).

             [45].     3 William Blackstone, Commentaries *331–33.

             [46].     Derivative Relevance, supra note 40, at 992.

             [47].     Rowell v. Fuller’s Estate, 10 A. 853, 861 (Vt. 1887).

             [48].     Derivative Relevance, supra note 40, at 996.

             [49].     Simon Greenleaf, 1 A Treatise on the Law of Evidence §§ 439g-439h (16th ed. 1899).

             [50].     Id. § 439d.

             [51].     John H. Wigmore, 3 Treatise on the System of Evidence in Trials at Common Law §§ 789–97 (1970).

             [52].     Derivative Relevance, supra note 40, at 1004.

             [53].     Charles T. McCormick, Cases and Materials on the Law of Evidence (1940); Charles T. McCormick, Handbook on the Law of Evidence (1954).

             [54].     Derivative Relevance, supra note 40, at 998.

             [55].     See Melvin M. Belli, Sr., Demonstrative Evidence, 10 Wyo. L.J. 15, 20–21 (1955).

             [56].     Id. at 21.

             [57].     Melvin M. Belli, Sr., 2 Modern Trials § 165 (1954).

             [58].     Id.

             [59].     Belli, supra note 55, at 21.

             [60].     Id. at 21–22.

             [61].     Charles W. Peckinpaugh, Jr., The Proper Role of Demonstrative Evidence, 1965 A.B.A. Sec. Ins. Negl. & Comp. L. Proc. 316, 317 (lamenting that Belli’s tactics caused demonstrative evidence to fall into “disrepute” and to require “greater supervision”); see also Robert D. Brain & Daniel J. Broderick, Demonstrative Evidence in the Twenty-First Century: How to Get it Admitted, in Winning with Computers: Trial Practice in the 21st Century 369, 370 (John C. Tredennick, Jr. & James A. Eidelman eds., 1991) (briefly tracing the development of demonstrative exhibits in the second half of the twentieth century).

             [62].     Rowell v. Fuller’s Estate, 10 A. 853, 861 (Vt. 1887).

             [63].     Wigmore, supra note 51, at § 790.

             [64].     See Fed. R. Evid. 611(a) (merely commenting that the court should exercise “reasonable control over the mode and order of . . . presenting evidence”); Derivative Relevance, supra note 40, at 962 (commenting that the only obstacle to an attorney’s ability to use demonstrative exhibits is the judge’s discretion to preclude exhibits that are “unfairly prejudicial, inaccurate, incomplete, or cumulative”).

             [65].     Fed. R. Evid. 611(a)(1) and advisory committee’s note on proposed rules.

             [66].     Id. (cleaned up).

             [67].     See id.

             [68].     Mosteller et al., supra note 21, § 214.

             [69].     Fed. R. Evid. 401 & 402.

             [70].     Mosteller et al., supra note 21, § 212.

             [71].     Id.

             [72].     Fed. R. Evid. 403.

             [73].     Mosteller et al., supra note 21, § 212.

             [74].     Id.

             [75].     Fed. R. Evid. 611(a)(1) and advisory committee’s note on proposed rules.

             [76].     Id.

             [77].     See J. Owen Forrester, The History of the Federal Judiciary’s Automation Program, 44 Am. U. L. Rev. 1483, 1487–88 (1995).

             [78].     See Fed. R. Evid. 901, 902 and the advisory committee’s notes on proposed rules (mentioning that Rules 901 and 902 have been amended over the years in response to technological developments).

             [79].     Lory Dennis Warton, Litigators Byte the Apple: Utilizing Computer-Generated Evidence at Trial, 41 Baylor L. Rev. 731, 740 (1989).

             [80].     Fred Galves, Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules of Evidence, and the Need for Institutional Reform and More Judicial Acceptance, 13 Harv. J. L. & Tech. 161, 172 (2000).

             [81].     Vicki S. Menard, Admission of Computer Generated Visual Evidence: Should There Be Clear Standards?, 6 Software L.J. 325, 328 (1993).

             [82].     Paul J. Feltovich, Rand J. Spiro, Richard L. Coulson & Ann Myers-Kelson, The Reductive Bias and the Crisis of Text (in the Law), 6 Contemp. Legal Issues 187, 187 (1995).

             [83].     542 F.2d 111 (2d Cir. 1976).

             [84].     Id. at 111.

             [85].     Id. at 115.

             [86].     Id.

             [87].     Id.

             [88].     See id.

             [89].     Id.

             [90].     Id.

             [91].     Id. at 121 (Van Graafeiland, J., dissenting).

             [92].     Id.

             [93].     Id.

             [94].     Id. at 123.

             [95].     Post-Perma federal and state cases have distinguished illustrative computer graphics from substantive simulations. See, e.g., People v. Duenas, 55 Cal.4th 1, 20–21 (2012). The former are usually admitted, even where evidentiary errors may be implicated, but the latter have drawn judicial concern. See, e.g., State v. Denton, 768 N.W.2d 250, 260 (Wis. App. 2009) (rejecting a computer simulation of a crime scene as prejudicial, confusing of the issues, and misleading to the jury); Altman v. Bobcat Co., 349 Fed. Appx. 758, 763 (3d Cir. 2009) (holding that a party seeking to use a simulation must “establish that [it] shares substantial similarity” with the underlying incident). This difference might be due in part to Perma, where both the majority and the dissent specifically advised caution with respect to simulations. See supra notes 83–94 and accompanying text.

             [96].     86 F.3d 498 (6th Cir. 1996).

             [97].     Id. at 539–40.

             [98].     Id. at 511.

             [99].     Id. at 511–15.

          [100].     Id. at 515–16, 538.

          [101].     Id. at 539–40.

          [102].     Id. at 539.

          [103].     Id.

          [104].     Verizon Directories Corp. v. Yellow Book USA, Inc., 331 F. Supp. 2d 136, 142 (E.D.N.Y. 2004).

          [105].     Id. at 137.

          [106].     Id. at 137, 139.

          [107].     Id. at 137, 144.

          [108].     Id. at 141.

          [109].     Id. at 141–42.

          [110].     Id. at 142.

          [111].     987 F.3d 794 (8th Cir. 2021).

          [112].     Id. at 798.

          [113].     Id. at 798–99.

          [114].     Id. at 799.

          [115].     Id. at 799–800.

          [116].     Id. at 800.

          [117].     Id. at 801.

          [118].     Id. at 800.

          [119].     Id.

          [120].     Id. at 801.

          [121].     Id.

          [122].     Id.

          [123].     508 N.W.2d 694, 695 (Iowa 1993).

          [124].     Id.

          [125].     Id.

          [126].     Id.

          [127].     Id. at 695–96.

          [128].     Id. at 696.

          [129].     Commonwealth v. Serge, 896 A.2d 1170 (Pa. 2006).

          [130].     Id. at 1173.

          [131].     Id. at 1175.

          [132].     Id.

          [133].     Id.

          [134].     Id. at 1176.

          [135].     Id.

          [136].     Id. at 1176–87.

          [137].     55 Cal.4th 1 (2012).

          [138].     Id. at 4–6.

          [139].     Id. at 8, 18.

          [140].     Id. at 21.

          [141].     Id. at 4.

          [142].     Id. at 18.

          [143].     Id. at 21–23.

          [144].     Id. at 25.

          [145].     Id.

          [146].     See, e.g., In re Air Crash Disaster, 86 F.3d 498, 539 (6th Cir. 1996); United States v. Oliver, 987 F.3d 794, 801 (8th Cir. 2021); Ladeburg v. Ray, 508 N.W.2d 694, 696 (Iowa 1993); Commonwealth v. Serge, 896 A.2d 1170, 1176 (Pa. 2006).

          [147].     Cass R. Sunstein, Commentary, On Analogical Reasoning, 106 Harv. L. Rev. 741, 741 (1993).

          [148].     Jennifer L. Mnookin, The Image of Truth: Photographic Evidence and the Power of Analogy, 10 Yale J.L. & Human. 1, 43–44 (1998) [hereinafter Image of Truth].

          [149].     Ladeburg, 508 N.W.2d at 696; Oliver, 987 F.3d at 801.

          [150].     See Jack B. Weinstein & Margaret A. Berger, 6 Commentary on Rules of Evidence for the United States Courts § 1006.08[4] (Joseph McLaughlin & Mark S. Brodin eds., 2023) (stating that “[p]edagogical device summaries are used to summarize evidence”).

          [151].     Air Crash, 86 F.3d at 539; Serge, 896 A.2d at 1176.

          [152].     See, e.g., Air Crash, 86 F.3d at 540; Ladeburg, 508 N.W.2d at 696.

          [153].     See, e.g., Schaible v. Washington Life Ins. Co., 9 Phila. 136 (D. Ct. 1873) (ruling that the admission of a colored photograph of a dead person was proper because testimonies from friends and other witnesses confirmed that the photograph correctly depicted the decedent); Cowley v. People, 83 N.Y. 464 (1881) (declaring that testimonies from witnesses such as the photographer and a doctor sufficed to deem photographs of a child proper); Baustian v. Young, 53 S.W. 921 (Mo. 1899) (holding the admission of location photographs proper based on testimonies from the photographer and the witness).

          [154].     Air Crash, 86 F.3d at 540; Ladeburg, 508 N.W.2d at 696.

          [155].     See Air Crash, 86 F.3d at 540.

          [156].     Ladeburg, 508 N.W.2d at 695–96.

          [157].     See United States v. Oliver, 987 F.3d 794, 800–01 (8th Cir. 2021) (explaining that the presence and cross-examination of the makers of the hearsay computer maps do not mean that the maps did not contain hearsay).

          [158].     See Perma Research and Development v. Singer Co., 542 F.2d 111, 115, 121 (2d Cir. 1976); Ladeburg, 508 N.W.2d at 695.

          [159].     Perma, 542 F.2d at 115 (expressing regret at Perma’s failure to share the “data and theorems” behind its simulation), 121 (indicating discontent at the fact that two of Perma’s expert testimonies were founded on “some simulated formulas” created and entered in an unknown manner).

          [160].     See supra notes 83–145 and accompanying text.

          [161].     See Air Crash, 86 F.3d at 539 (responding to Northwest’s claim that McDonnell Douglas’s circuit breaker expert failed to disclose certain information during his deposition and thus violated Federal Rule of Civil Procedure 26(e)(2) by reasoning that the rule does not require a litigant to “volunteer information not fairly encompassed by” a discovery request).

          [162].     Ladeburg v. Ray, 508 N.W.2d 694, 695 (Iowa 1993) (citing the parties’ informal stipulation to extend discovery deadline as a reason for holding the defense’s computer drawings proper).

          [163].     See infra Part II.A and accompanying text.

          [164].     See infra Part II.B and accompanying text.

          [165].     See infra Part II.C and accompanying text.

          [166].     Anne Collins Goodyear, Gyorgy Kepes, Billy Klüver, and American Art of the 1960s: Defining Attitudes Toward Science and Technology, 17 Sci. Context 611, 611–12 (2004); see also Frank Dietrich, Visual Intelligence: The First Decade of Computer Art (1965–1975), 19 Leonardo 159, 161–62 (1986) (mentioning that some of the artists and scientists who participated in the art and technology movement originated from Europe, North America, and Japan).

          [167].     See Elaine O’Hanrahan, The Contribution of Desmond Paul Henry (1921–2004) to Twentieth-Century Computer Art, 51 Leonardo 156, 157 (2018).

          [168].     Id.

          [169].     Id. at 158.

          [170].     Id. at 157.

          [171].     Id.

          [172].     A. Michael Noll, The Beginnings of Computer Art in the United States: A Memoir, 27 Leonardo 39, 39 (1994).

          [173].     Id.

          [174].     Dietrich, supra note 166, at 159.

          [175].     Charlie Gere, Minicomputer Experimentalism in the United Kingdom from the 1950s to 1980, in Mainframe Experimentalism: Early Computing and the Foundations of the Digital Arts 112, 121 (Hannah B. Higgins & Douglas Kahn eds., 2012) (narrating the examples of Ernest Edmonds and John Vince).

          [176].     See id. at 119–22.

          [177].     Id. at 121.

          [178].     Id.

          [179].     See generally Stephanie Jennings Hanor, Jean Tinguely: Useless Machines and Mechanical Performers, 1955–1970 66–83 (2003) (Ph.D. dissertation, The University of Texas at Austin) (ProQuest) (describing the history and development of Tinguely’s painting robots).

          [180].     Id. at 67.

          [181].     Id. at 76.

          [182].     Id. at 77 (mentioning that one reviewer exclaimed that “anyone can paint hundreds of abstract pictures in the course of an afternoon . . . with the greatest of ease”).

          [183].     Art of Automation, Salt Lake Tribune, Aug. 23, 1960, at 10.

          [184].     Id.

          [185].     See Pamela McCorduck, Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen 37, 64 (1991).

          [186].     Id. at 66 (explaining that AARON’s first phase drew two-dimensional abstract paintings), 95 (mentioning that the “two-plus dimensional” second version in the 1980s formed concepts of what it would draw before it actually began drawing), 100 (establishing that the third stage could draw specific subjects like the Statue of Liberty), 103 (stating that the fourth manifestation drew human figures based on a “three-dimensional knowledge base”).

          [187].     Id. at 188.

          [188].     Frank Popper, From Technological to Virtual Art 120–22 (2007).

          [189].     See generally Helen Sloan, Art in a Complex System: The Paintings of Matthias Groebel, 24 PAJ: J. Performance & Art 127 (2002) (explaining Groebel’s methodology).

          [190].     Id. at 128 (mentioning that Groebel’s machines “do much of the work,” with their creator “direct[ing] and control[ling]” their enterprises).

          [191].     Grant D. Taylor, When the Machine Made Art: The Troubled History of Computer Art 186 (2014).

          [192].     Id. at 186–87.

          [193].     Porter, supra note 24, at 1695; Jaihyun Park & Neal Feigenson, Effects of a Visual Technology on Mock Juror Decision Making, 27 Applied Cognitive Psych. 235, 235 (2013).

          [194].     See supra notes 166–90 and accompanying text.

          [195].     Galves, supra note 80, at 181–82.

          [196].     See Porter, supra note 24, at 1752–74.

          [197].     See, e.g., Turing, supra note 19, at 27 (proposing an early concept of AI as a “machinery [that] show[s] intelligent behavio[]r”); J. McCarthy, M. L. Minsky, N. Rochester & C.E. Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, at 2, jmc.stanford.edu/articles/dartmouth/dartmouth.pdf [https://perma.cc/7NEP-U3JM] (defining AI as the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”); Alan Wolfe, Mind, Self, Society, and Computer: Artificial Intelligence and the Sociology of Mind, 95 Am. J. Socio. 1073, 1078–79 (1991) (discussing the “software” and “hardware” approaches to AI); Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics 14 (2016) (defining AI as a branch of science that seeks to “imitate by means of machines, normally electronic ones, as much of human mental activity as possible, and perhaps eventually to improve upon human abilities in these respects”); Gless, supra note 33, at 197 (discussing “narrow” and “general” AIs); Campesato, supra note 9, at 4–5 (describing the “two main camp[s] regarding AI” of weak AI and “biological plausibility”).

          [198].     See Gerrish, supra note 23, at 6; Campesato, supra note 9, at 4.

          [199].     See Campesato, supra note 9, at 4.

          [200].     Gerrish, supra note 23, at 6; Charu C. Aggarwal, Neural Networks and Deep Learning: A Textbook 3 (2018).

          [201].     Gerrish, supra note 23, at 6–7.

          [202].     See id. at 7 (mentioning that the brute-force algorithm can be described as “dumb[]”).

          [203].     Id.

          [204].     Id.; Campesato, supra note 9, at 18.

          [205].     Gerrish, supra note 23, at 265.

          [206].     See id. at 6–7.

          [207].     See id. at 7.

          [208].     See id. at 229–48.

          [209].     See Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 801 (4th ed. 2021).

          [210].     See Korn, supra note 20 (stating that Stability AI used millions of images to create its own AI art tool).

          [211].     Levine, supra note 23, at 192.

          [212].     Aggarwal, supra note 200, at 3–4 (stating that an ANN’s architectural design choices provide a “higher-level abstraction of expressing semantic insights about data” and that inclusion or removal of neurons changes its complexity).

          [213].     Jakub Langr & Vladimir Bok, GANs in Action: Deep Learning with Generative Adversarial Networks 5 (2019).

          [214].     Id.

          [215].     Id. at 5–6.

          [216].     Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy & Dario Amodei, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, arXiv (Feb. 20, 2018), arxiv.org/abs/1802.07228 [https://perma.cc/757A-DBSB] (showing samples of synthesized human face images from 2014 to 2017); Jun-Yan Zhu, Taesung Park, Phillip Isola & Alexei A. Efros, Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks, arXiv (Mar. 30, 2017), arxiv.org/abs/1703.10593 [https://perma.cc/G33A-EBYZ] (demonstrating how a GAN variant “translates” an image, such as restyling a photograph into a Monet).

          [217].     Russell & Norvig, supra note 209, at 831.

          [218].     See About, Artbreeder, www.artbreeder.com/about [https://perma.cc/8QFZ-LBXV] (last visited Aug. 25, 2024); Isaac Sacolick, Zero-Shot Learning and the Foundations of Generative AI, InfoWorld (Feb. 13, 2023), www.infoworld.com/article/3687315/zero-shot-learning-and-the-foundations-of-generative-ai.html [https://perma.cc/6DPH-CPGV]; MidJourney AI Text to Image Generator, Pixexid (Aug. 11, 2023), pixexid.com/read/midjourney-ai-text-to-image-generator [https://perma.cc/9HHJ-8HAE].

          [219].     Aggarwal, supra note 200, at 41.

          [220].     M. Sarıgül, B.M. Ozyildirim & M. Avci, Differential Convolutional Neural Network, 116 Neural Networks 279, 281 (2019).

          [221].     Id.

          [222].     Aggarwal, supra note 200, at 368.

          [223].     Id.

          [224].     Id.

          [225].     See Russell & Norvig, supra note 209, at 1004.

          [226].     Struck, supra note 15.

          [227].     Yi Li, Hualiang Wang, Yiqun Duan & Xiaomeng Li, Exploring Visual Explanations for Contrastive Language-Image Pre-Training, arXiv (Nov. 27, 2022), arxiv.org/abs/2209.07046 [https://perma.cc/NG47-X3UU].

          [228].     Id.

          [229].     Haoxuan You, Luowei Zhou, Bin Xiao, Noel Codella, Yu Cheng, Ruochen Xu, Shih-Fu Chang & Lu Yuan, Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-Training, in Computer Vision – ECCV 2022: 17th European Conference Tel Aviv, Israel, October 23–27, 2022 Proceedings, Part XXVII 69, 70 (Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella & Tal Hassner eds., 2022).

          [230].     Id.

          [231].     Russell & Norvig, supra note 209, at 826–27 (acknowledging some unknown and possibly unattainable factors in supervised learning models like CLIP AIs).

          [232].     How to Create Art Using AI, starryai, starryai.com/blog/how-to-create-art-using-ai [] (last visited Aug. 25, 2024).

          [233].     See, e.g., United States v. De Georgia, 420 F.2d 889, 895 (9th Cir. 1969) (Ely, J., concurring) (describing how computers have historically raised evidentiary concerns, such as the storage of computer data, as well as future issues).

          [234].     Perma Research and Development v. Singer Co., 542 F.2d 111, 121 (2d Cir. 1976).

          [235].     De Georgia, 420 F.2d at 895.

          [236].     See Galves, supra note 80, at 208–09 (mentioning that, depending on the type of the exhibit, either the witness testimony or the input data may be suspect).

          [237].     Jennifer L. Mnookin, Repeat Play Evidence: Jack Weinstein, “Pedagogical Devices,” Technology, and Evidence, 64 DePaul L. Rev. 571, 576 (2015).

          [238].     See Matias del Campo, Neural Architecture: Design and Artificial Intelligence 14–15 (2022).

          [239].     Paul W. Grimm, Maura R. Grossman & Gordon V. Cormack, Artificial Intelligence as Evidence, 19 Nw. J. Tech. & Intell. Prop. 9, 12 (2021).

          [240].     Id.

          [241].     See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 534 (2015).

          [242].     Id.; Gless, supra note 33, at 211.

          [243].     See del Campo, supra note 238, at 14–15 (stating that AI generally involves black box problems).

          [244].     See id.

          [245].     See Gless, supra note 33, at 211.

          [246].     See, e.g., Jim Nightingale, Why Can’t AI Draw Realistic Human Hands?, Dataconomy (Jan. 25, 2023), dataconomy.com/2023/01/how-to-fix-ai-drawing-hands-why-ai-art [] (discussing the collective failure at drawing hands and fingers that has reached the status of “a running joke”).

          [247].     See supra Part II.A and accompanying text.

          [248].     Curtis E. A. Karnow, The Opinion of Machines, 19 Colum. Sci. & Tech. L. Rev. 136, 147 (2017) (mentioning the neural technology’s applications in the financial sector, such as “automated bank loan application approval [and] credit card fraud detection”).

          [249].     See Grimm, Grossman & Cormack, supra note 239, at 12.

          [250].     See id.

          [251].     See United States v. De Georgia, 420 F.2d 889, 895 (9th Cir. 1969) (Ely, J., concurring) (commenting on the computer technology generally); Gless, supra note 33, at 207 (stating that factfinders must eventually decide whether to trust AIs that can be only partially explained).

          [252].     Karnow, supra note 248, at 156.

          [253].     Richard A. Posner, How Judges Think 112 (2008).

          [254].     Karnow, supra note 248, at 139, 164–65; Image of Truth, supra note 148, at 73–74.

          [255].     The Taylor Will Case, 10 Abb. Pr. (n.s.) 300, 319 (N.Y. Sur. Ct. 1871) (ruling that a camera should be examined to allow its photographs); García, supra note 35, at 1073 (stating that a computer’s hardware must be examined to prove reliability).

          [256].     Id.

          [257].     See infra Part II.C.1 and accompanying text.

          [258].     See infra Part II.C.2 and accompanying text.

          [259].     See Abernathy v. Superior Hardwoods, Inc., 704 F.2d 963, 968 (7th Cir. 1983) (describing the option of admitting “all the minimally relevant nonprivileged evidence” as “the easy way out”).

          [260].     Sunstein, supra note 147, at 754; see also supra Part I and accompanying text.

          [261].     Sunstein, supra note 147, at 741.

          [262].     Porter, supra note 24, at 1695.

          [263].     Heaven, supra note 13.

          [264].     See Sunstein, supra note 147, at 743 (describing a basic formula for inductive reasoning).

          [265].     Greenleaf, supra note 49, at § 439g.

          [266].     See Image of Truth, supra note 148, at 73–74 (explaining that American judges have, in their judgments and opinions, compared automobiles to carriages, computer programs to literary works, and DNA profiling to fingerprinting).

          [267].     See supra Part II.B and accompanying text; see also Sunstein, supra note 147, at 745 (stating that analogies do not promise “good outcomes or truth”).

          [268].     See Galves, supra note 80, at 181–82 (explaining that a computer can be a tool to make a computer drawing, just as an artist would use tools to produce a traditional sketch or painting).

          [269].     See supra Part II.A and accompanying text.

          [270].     See Galves, supra note 80, at 181–82; supra Part II.A and accompanying text.

          [271].     See, e.g., Porter, supra note 24, at 1695 (mentioning the use of Adobe Photoshop to remove undesirable objects).

          [272].     See Nightingale, supra note 246.

          [273].     Id.

          [274].     See Parker, supra note 25, at 22 (reporting a failed attempt at an economical demonstration).

          [275].     Id.

          [276].     See id.

          [277].     Posner, supra note 253, at 183.

          [278].     See Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 386 (2016); Roth, supra note 34, at 2006.

          [279].     See Scherer, supra note 278, at 362–73 (naming autonomy, control, and opacity).

          [280].     Roth, supra note 34, at 2006.

          [281].     Valjean Mfg., Inc. v. Michael Werdiger, Inc., Nos. 05-0939-cv (L), 05-1502-cv (XAP), 2007 U.S. App. LEXIS 20475, at *5 (2d Cir. Aug. 27, 2007).

          [282].     Fed. R. Evid. 611(a)(1)–(2) (stating that courts should use “reasonable control” so that examination of evidence is “effective” and does not “wast[e] time”) and advisory committee’s note on proposed rules (stating that the rules cover demonstrative evidence).

          [283].     Derivative Relevance, supra note 40, at 962.

          [284].     Menard, supra note 81, at 331.

          [285].     See id.

          [286].     Fed. R. Evid. 611(a)(1).

          [287].     Hon. Fern M. Smith, Report of the Advisory Committee on Evidence Rules 13 (1997).

          [288].     Roth, supra note 34, at 2000.

          [289].     See infra Part III.A and accompanying text.

          [290].     See infra Part III.B and accompanying text.

          [291].     See infra Part III.C and accompanying text.

          [292].     See infra Part III.D and accompanying text.

          [293].     See supra Part I.A and accompanying text.

          [294].     See supra notes 129–36 and accompanying text.

          [295].     Mosteller et al., supra note 21.

          [296].     Id.

          [297].     See Fed. R. Evid. 901(b)(9) and advisory committee’s note on proposed rules.

          [298].     Fed. R. Evid. 901(a).

          [299].     Fed. R. Evid. 901(b)(9) and advisory committee’s note on proposed rules.

          [300].     598 F. Supp. 3d 467 (E.D. La. 2022).

          [301].     Id. at 469.

          [302].     Id. at 470.

          [303].     Id. at 470, 473.

          [304].     Id. at 473.

          [305].     Id.

          [306].     See id. (stating that the creator of a computer exhibit must be present at the relevant hearing).

          [307].     See id.

          [308].     See id.

          [309].     See id.

          [310].     Fed. R. Evid. 902 and advisory committee notes on rules of the 2017 amendment.

          [311].     Fed. R. Evid. 902 and advisory committee notes on rules of the 2011 amendment.

          [312].     Id.

          [313].     Id.

          [314].     Id.

          [315].     No. 1:16-cr-228, 2018 WL 9755074 (E.D. Va. Aug. 20, 2018).

          [316].     See id. at *1.

          [317].     See id. at *2.

          [318].     Id.

          [319].     Id. at *3.

          [320].     See id. (ruling that a certification was fine for authentication of computer exhibits).

          [321].     See id.

          [322].     Fed. R. Evid. 902 and advisory committee notes on rules of the 2011 amendment.

          [323].     Perma Research and Development v. Singer Co., 542 F.2d 111, 115 (2d Cir. 1976);  see also Ladeburg v. Ray, 508 N.W.2d 694, 695 (Iowa 1993) (exempting a party from discovery requirements on the basis of informal party stipulation).

          [324].     See supra Part I.A and accompanying text.

          [325].     See infra Part III.B.1 and accompanying text.

          [326].     See infra Part III.B.2 and accompanying text.

          [327].     See supra Part I.B and accompanying text.

          [328].     See Am. Bar Ass’n, Civil Trial Practice Standard 11(a) (2007) (recommending that courts should “afford each party an adequate opportunity to review, and interpose objections to, demonstrative evidence before it is displayed to the jury”).

          [329].     See Fed. R. Civ. P. 33(b)(3) (stating that each interrogatory must be answered fully, to the extent it is not objected to).

          [330].     See Fed. R. Civ. P. 30; Ladeburg v. Ray, 508 N.W.2d 694, 695 (Iowa 1993).

          [331].     Fed. R. Civ. P. 33 and advisory committee’s note to the 1946 amendment.

          [332].     Id.

          [333].     Fed. R. Civ. P. 34(a)(1)(A).

          [334].     In re Air Crash Disaster, 86 F.3d 498, 539 (6th Cir. 1996).

          [335].     Fed. R. Civ. P. 26(a)(1)(A)(ii).

          [336].     Fed. R. Civ. P. 26(a)(3)(A)(iii).

          [337].     Fed. R. Civ. P. 26(a)(2)(B)(iii).

          [338].     Fed. R. Crim. P. 16 and the advisory committee’s notes to the 1974 and 1975 amendments (declaring that phrases like “the court may order” and “the court shall order” have been revised in accordance with the understanding that parties should dictate discovery).

          [339].     See United States v. Oliver, 987 F.3d 794, 799–801 (8th Cir. 2021) (mentioning Oliver’s argument that the hearsay maps violated his Sixth Amendment right to a fair trial).

          [340].     Richard A. Oppel, Jr. & Jugal K. Patel, One Lawyer, 194 Felony Cases, and No Time, N.Y. Times (Jan. 31, 2019), www.nytimes.com/interactive/2019/01/31/us/public-defender-case-loads.html [https://perma.cc/9JGU-R6VG].

          [341].     Fed. R. Crim. P. 16 and the advisory committee’s notes to the 1966 and 1974 amendments (noting and recognizing nationwide calls for criminal discovery reforms).

          [342].     Fed. R. Crim. P. 16(a)(1)(E)(i)-(ii).

          [343].     Fed. R. Crim. P. 16(b)(1)(A)(i)-(ii); see also the advisory committee’s notes to the 1975 amendments to Fed. R. Crim. P. 16 (noting that the government’s discovery became limited and reciprocal to allow greater freedom to the defense).

          [344].     Fed. R. Crim. P. 16(a)(1)(G)(i)-(iii), 16(b)(1)(C)(i)-(iii).

          [345].     Fed. R. Crim. P. 15(a)(1) (establishing that courts grant motions to depose only when “exceptional circumstances” and “the interest of justice” converge).

          [346].     Fed. R. Crim. P. 17(c)(1), 17(f)(1)-(2).

          [347].     Fed. R. Crim. P. 16(c)(1)-(2).

          [348].     Id.

          [349].     See United States v. Oliver, 987 F.3d 794, 799–801 (8th Cir. 2021) (reporting Oliver’s Sixth Amendment argument).

          [350].     See supra Part I.B and accompanying text.

          [351].     See Fed. R. Crim. P. 12(b)(1) (stating that a defendant may raise any objection before the trial “without a trial on the merits”).

          [352].     See, e.g., People v. Duenas, 55 Cal.4th 1, 25 (2012) (explaining instructions and explanations by the judge and the state); Hinkle v. City of Clarksburg, 81 F.3d 416, 425 (4th Cir. 1996) (reciting a limiting jury instruction on an illustrative animation).

          [353].     Jury Instruction, Black’s Law Dictionary (11th ed. 2019).

          [354].     See Gregory P. Joseph, A Simplified Approach to Computer-Generated Evidence and Animations, 43 N.Y.L. Sch. L. Rev. 875, 891 (1999) (commenting on limiting instructions with respect to illustrative computer graphics in general).

          [355].     Am. Bar Ass’n, supra note 328, at 5(a).

          [356].     Fed. R. Civ. P. 51; Fed. R. Crim. P. 30.

          [357].     Fed. R. Civ. P. 51(a)–(c); Fed. R. Crim. P. 30.

          [358].     Fed. R. Evid. 105.

          [359].     Joseph, supra note 354, at 891.

          [360].     See Abernathy v. Superior Hardwoods, Inc., 704 F.2d 963, 968 (7th Cir. 1983) (describing the trial court’s decision to play a videotape with the volume turned off).

          [361].     See, e.g., Scherer, supra note 278, at 369–73 (discussing and proposing solutions for obscurities surrounding AI research and development).

          [362].     John O. McGinnis, Accelerating AI, 104 Nw. U. L. Rev. Colloquy 366, 377 (2010).

          [363].     Scherer, supra note 278, at 370.

          [364].     See id. at 370–73.

          [365].     See id. at 370–71.

          [366].     Id. at 371.

          [367].     Id. at 370–71.

          [368].     See id. at 372.

          [369].     Kevin Roose, A Coming-Out Party for Generative A.I., Silicon Valley’s New Craze, N.Y. Times (Oct. 21, 2022), www.nytimes.com/2022/10/21/technology/generative-ai.html [https://perma.cc/QP3M-XSL4] (mentioning that Stable Diffusion, unlike most, if not all, other AI projects, is not closed-source).

          [370].     See, e.g., Generative AI Could Put an End to Open-Source Libraries, Verdict (Dec. 21, 2022), www.verdict.co.uk/generative-ai-output-challenge [https://perma.cc/7BJ3-DE8A] (reporting the “first known” class action lawsuit against a GAI company).

          [371].     Grimm, Grossman & Cormack, supra note 239, at 42–48; Roth, supra note 34, at 2023–27.

          [372].     Steve Lohr, Facial Recognition Is Accurate, if You’re a White Guy, N.Y. Times (Feb. 9, 2018), www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html [https://perma.cc/D6FC-XWV3].

          [373].     Grimm, Grossman & Cormack, supra note 239, at 42.

          [374].     Lohr, supra note 372.

          [375].     Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018), www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [https://perma.cc/4TTF-YDAQ].

          [376].     Heather Tal Murphy, Artificial Intelligence Is Dreaming Up a Very White World, Slate (Feb. 8, 2023), slate.com/technology/2023/02/dalle2-stable-diffusion-ai-art-race-bias.html [https://perma.cc/ZYA4-35CT] (reporting that some GAIs have difficulty drawing couples of color without words like “poor”).

          [377].     See Parker, supra note 25, at 24 (hinting that illustrative drawings without details are also usable).

          [378].     Id. (stating that excluding details like race and gender may “demean the parties and trivialize the issues”).

          [379].     Id.

          [380].     See Joseph, supra note 354, at 888 (commenting on the reliability of computer simulations).

          [381].     Nightingale, supra note 246.

          [382].     Parker, supra note 25, at 22 (narrating how a failed attempt at a cheap demonstrative substitute for a light switch singlehandedly “doomed” the case).

          [383].     Joseph, supra note 354, at 888 (stating that a reliable computer simulation produces outcomes that are “identical or very similar to those produced by the physical facts (or system) being modeled”).

          [384].     Karnow, supra note 248, at 176 (commenting on AI expert systems).

          [385].     Murphy, supra note 376.

          [386].     Galves, supra note 80, at 172.

          [387].     See, e.g., Markus Enzweiler, The Mobile Revolution–Machine Intelligence for Autonomous Vehicles, 57 Info. Tech. 199, 199 (2015) (discussing the AI technology’s role in automated driving).

          [388].     See, e.g., Kristin Houser, AI Is Helping Decode the Oldest Story in the World, Freethink (Feb. 8, 2023), www.freethink.com/robots-ai/cuneiform-tablets [https://perma.cc/GY7R-V3VD] (reporting that algorithm cuneiBLAST is being taught to translate cuneiform, in which a syllable may be written in “up to [twenty-five different] ways,” and has identified several fragments as parts of the Epic of Gilgamesh).

          [389].     See, e.g., Karnow, supra note 248, at 147 (highlighting that neural networks are “used for automated bank loan application approval, credit card fraud detection, as well as a wide spectrum of other uses in the financial markets”).

          [390].     See, e.g., id. at 137 (mentioning that neural networks are used for medical diagnoses).

          [391].     See, e.g., Shannon Brown, Peeking Inside the Black Box: A Preliminary Survey of Technology Assisted Review (TAR) and Predictive Coding Algorithms for Ediscovery, 21 Suffolk J. Trial & App. Advoc. 221, 261 (2016) (discussing the use of neural technology in electronic discovery).

          [392].     See, e.g., Pamela S. Katz, Expert Robot: Using Artificial Intelligence to Assist Judges in Admitting Scientific Expert Testimony, 24 Alb. L.J. Sci. & Tech. 1, 2–4 (2014) (arguing that computers and AIs can help judges decide whether to admit scientific evidence); Karnow, supra note 248, at 166–82 (discussing the admissibility of machine opinions); Gless, supra note 33.

          [393].     See Porter, supra note 24, at 1699.

          [394].     Galves, supra note 80, at 171–72.

          [395].     Verizon Directories Corp. v. Yellow Book USA, Inc., 331 F. Supp. 2d 136, 142 (E.D.N.Y. 2004).

          [396].     See García, supra note 35, at 1051 (listing “constitutional rights to due process, reliability, imbalance of power between the government and the accused, and the public’s right to know” as some of “the values that are at stake concerning access to computerized information”) (cleaned up).

          [397].     Scherer, supra note 278, at 386.

          [398].     Karnow, supra note 248, at 147.

          [399].     See Verizon, 331 F. Supp. 2d at 144.

          [400].     Kashmir Hill, Wrongfully Accused by an Algorithm, N.Y. Times (June 24, 2020), www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html [https://perma.cc/M2VV-QHNS].

          [401].     See, e.g., Benj Edwards, New Meta AI Demo Writes Racist and Inaccurate Scientific Literature, Gets Pulled, Ars Technica (Nov. 18, 2022), arstechnica.com/information-technology/2022/11/after-controversy-meta-pulls-demo-of-ai-model-that-writes-scientific-papers [https://perma.cc/4S9Q-5PMQ] (reporting that a number of users could enter “racist or potentially offensive prompts” to make Galactica, a GAI language model designed to “store, combine[,] and reason about scientific knowledge,” produce fictional content such as a research paper on “[t]he benefits of eating crushed glass”).

          [402].     See Hill, supra note 400 (explaining that the county prosecutor finally offered to expunge Williams’s case and fingerprint data after the article’s publication, which came after he contacted a number of criminal defense attorneys and the American Civil Liberties Union of Michigan); Gless, supra note 33, at 215 (mentioning that “reversing the evidentiary cycle is an uphill battle” that often requires “a great deal of human suffering”).
