First Warrant
In what appears to be the first warrant for a ChatGPT prompt, OpenAI delivered a collection of information on an alleged criminal. As has been well documented, users talking to ChatGPT don’t have legal protections to keep those conversations private. Sam Altman, OpenAI’s CEO, has expressed hope that some legal protection would be afforded to users, but no such protection exists today. In practice, that means warrants and subpoenas can be used to extract information from the company: law enforcement can use a lawfully issued warrant to access things like input prompts, output responses, user names, IP addresses, and payment data, and under appropriate legal conditions that same collection of data can be subpoenaed.
At an abstract level, the story of the Department of Homeland Security getting a warrant for OpenAI data is not new or interesting. Section 2703(d) of the Electronic Communications Privacy Act allows law enforcement to obtain what users would believe to be sensitive records without a search warrant. With a court order, rather than a warrant, law enforcement can obtain a subscriber’s name, device identity numbers, payment information, and other records from tech companies, telecom companies, and banks. To obtain those records, federal law enforcement officers must offer a judge only “specific and articulable facts” showing that there are reasonable grounds to believe the records of a communication are “relevant and material to an ongoing criminal investigation.” That is a much easier standard to meet than a finding of probable cause. Even online records like email content, once stored for more than 180 days, can be obtained on less than probable cause.
The trick isn’t in providing legal protections to the conversations users have with an AI. Instead, it can and should be about lawyers offering products that meet the same consumer desire to talk to AI, but protected under the attorney-client relationship, one of the most heavily protected relationships in the law. As a society, we decided that we don’t have protections for things like our Google search histories, and talking to an AI is a logical next step. That is, it’s reasonable to think that we don’t and shouldn’t have a heightened level of protection for an everyday AI chatbot.
Some folks argue that we talk to, interact with, and confess to AI tools in very intimate ways. Again, people in the US have long done the same with Google. In 2017, Seth Stephens-Davidowitz used Google search data in his book Everybody Lies to analyze the aggregate behavior of people on the internet. For many years, people have been confessing secrets or googling intimate questions about mental health, relationships, drugs, sex, crime, and more. Our current use of AI is also conceptually similar to diaries, which carry a significant body of law about when they are protected. While we disclose intimate thoughts in a diary, those entries can still be pulled into (or shielded from) evidence. As with all requests for evidence under the Rules, a substantial portion of the game is arguing over the scope necessary for discovery.
So in the end, 2nd Chair doesn’t think the question is whether to extend legal protections to commercial, everyday AI tools, but rather whether attorneys can deliver precise, purpose-built tools that address the same customer behavior inside the attorney-client relationship.