Your AI Can Testify Against You: The AI Revolution Comes to the Court of Law
Posted on February 15, 2023 by Paul Spadafora
On February 2, 2023, Microsoft released a brand-new suite of features for its popular Microsoft Teams platform under the moniker “Microsoft Teams Premium.” One of the standout features of this premium service is “Intelligent Recap,” a software feature powered by the headline-news-generating ChatGPT AI system. Using ChatGPT, Intelligent Recap will “automatically generate meeting notes, recommended tasks, and personalized highlights to help you get the information most important to you, even if you miss the meeting.”
“Great!” you say, as a savvy business professional. “I can use this feature to help ensure complete and accurate record keeping of all of my most sensitive business discussions, without making an audio recording or paying someone to type out minutes!”
“Great!” I say, as an experienced business litigator, “an entirely new class of documents I can obtain in discovery and admit at trial as evidence of an opposing party’s wrongdoing!”
While features like automatic AI transcription may be a powerful new tool for leveraging increasingly ubiquitous video conferencing software, they are also a powerful new tool for litigators seeking a complete record of potentially any meeting conducted over a webcam, or even a conference call with just a microphone. This is an important aspect of the software that managers and businesses will need to bear in mind.
It is likely to become common practice for litigators to issue discovery requests which include, among other things, requests for “copies of all automatically generated or AI-scripted transcripts or summaries of any meeting discussing [Topic X], which were generated in the preceding three years.” Managers need to be mindful that the unthinking, automatic use of AI-generated summaries of their video or telephone calls could end up being a very bad decision.
Figure 1: an artist’s rendition of ChatGPT and Intelligent Recap let loose in the enterprise without proper adoption and training. In this rendition, the gun shoots out transcripts.
There are also potential legal risks inherent in automating traditionally human tasks with AI, specifically when the AI takes over the job of a human in describing what is occurring in real time. A couple of examples come to mind:
- The evolving concept of “machine testimony.” Both the Federal and Washington rules of evidence prohibit the admission at trial of hearsay testimony into evidence unless an exception applies. The Washington rules of evidence define hearsay as “a statement, other than one made by the declarant while testifying at the trial or hearing, offered in evidence to prove the truth of the matter asserted.” Wash. R. Evid. 801(c). Statements, in turn, are only made by declarants, and a declarant is “a person who makes a statement.” Id. at 801(a) & (b).
Courts across the country have used their equivalent hearsay rules to conclude the obvious: If a computer is what generated the information, and a computer isn’t a person, then the things a computer “says” are not hearsay, and can be admitted into evidence. So long as any “computer-generated record” is “the result of a process free of human intervention,” it is not hearsay within the meaning of the hearsay rules.
This rule makes sense when applied to things like computer-generated test results of blood-alcohol levels or speed monitoring devices attached to cars, but it is doubtful the judges deciding these cases had occasion to consider a future where AI can describe in lurid detail exactly what it heard at a purportedly private meeting.
Nonetheless, that is the world we live in… today! ChatGPT and other machine-learning models continue to test the boundaries of what processes are “free of human intervention,” and in a way that the rules of evidence are not yet prepared to address.
- Two-party consent laws for recording conversations. Another thorny issue which arises in this context is whether such AI-based videoconferencing runs afoul of state laws prohibiting the recording of private conversations without the consent of the participants. Washington is a Two-Party Consent state, which means that it is generally illegal for any person, corporation, or state government entity to intercept and/or record any “[p]rivate conversation, by any device electronic or otherwise designed to record or transmit such conversation regardless how the device is powered or actuated without first obtaining the consent of all the persons engaged in the conversation.” Violations of this statute can expose an individual to civil and criminal liability under RCW 9.73.060, and RCW 9.73.080.
While one may be inclined to believe Microsoft considered the foregoing issues and has built into its new program appropriate safeguards (its press release describes such privacy features in some detail), as the inevitable clones or competing software platforms launch and use of the technology evolves, it would not be surprising to see a headline similar to the following in the New York Times:
“[Company X] faces class-action lawsuit, government investigation, over revelation that it had been using [Video Conferencing Software]’s features to transcribe all video conference meetings without consent of employees or customers.”
This is the potential trap that can (and will) snare the unwary who move fast and break things in the operation of a business.
This may all sound very critical of ChatGPT and its cousins, but it is not meant to be. The “Intelligent Recap” feature is amazing, and it would not be surprising to see it (or something similar) adopted by nearly every company which needs to utilize video conferencing to run its operations (read: virtually every business in a post-COVID world). A rollout of emerging technology should be thoroughly considered, appropriately vetted by IT and HR departments as well as in-house and outside counsel, and accompanied by proper user training. Otherwise, the technology has the potential to expose a business and its managers to significant liability.
If you have questions about this or other business litigation issues, the attorneys of Lasher’s Business Litigation practice group are available to help.
 The equivalent Federal Rule, Fed. R. Evid. 801, uses more words to get to the same place.
 Provided, of course, that what the computer has to “say” is otherwise admissible.
 See, e.g., People v. Rodriguez, 16 Cal. App. 5th 355, 379-80 (Cal. Ct. App. Oct. 19, 2017) (collecting cases); Maxwell v. State, No. 04-96-00825-CR, 1997 WL 538665, at *2 (Tex. App. Sept. 3, 1997) (“Because a computer is not a declarant, the computer’s self-generated information is not a statement and cannot be hearsay.”) (citing Ly v. State, 908 S.W.2d 598, 600 (Tex. App.-Houston 1995)).
 Rodriguez, 16 Cal. App. 5th at 379-80.
 I promise I’m not out there smashing looms and railing against the virtues of modern civilization.