ABA News

Attorney General Garland discusses AI, national security


The United States has an important lead in the development of artificial intelligence that is crucial to the country’s economy and national security, Attorney General Merrick B. Garland said at the American Bar Association’s 39th National Institute on White Collar Crime in San Francisco.

“The Justice Department’s first job is to protect that lead and to protect our intellectual property,” Garland said. “The Justice Department just will not tolerate theft of trade secrets in the area of artificial intelligence.”

During a fireside chat with Kenneth A. Polite Jr., former assistant attorney general for the U.S. Department of Justice’s Criminal Division, Garland announced that the U.S. District Court for the Northern District of California had unsealed an indictment against a Chinese national who is charged with stealing AI-related intellectual property and trade secrets from Google.

Garland said AI and other evolving technologies have “great promise and the risk of great harm … including algorithmic discrimination that AI can foster and the way in which it can accelerate the cyberattacks that are happening daily, even ‘minutely,’ on our companies, on our law firms, on our departments of the government and on our military.”

One of the most serious national security threats is the risk that “foreign malign actors will use AI to increase the polarization of this country and to attack our electoral system,” Garland said.

Despite the risks associated with AI, Garland said the technology allows the Justice Department to respond more quickly to attacks on its computer systems.

“AI can make it possible for us to defend our systems even better than the machine-learning systems that we’re using,” he said, adding that the department has hired its first chief AI officer and plans to hire more Ph.D.-credentialed computer scientists to increase technology expertise in the agency.

“It’s the only way we’re going to up our game sufficiently to secure our country and to take advantage of AI,” he said.

Garland, who during his career has supervised investigations and prosecutions of the Oklahoma City bombing, Unabomber and Montana Freemen cases, said he is concerned about the “heightened level of threats and the heightened speed of threats against everyone who works in public spaces” – including judges, prosecutors, agents, law enforcement officers, and election volunteers and workers.

“Democracy can’t succeed and cannot work if the people who serve to make sure civic life goes on are fearful for their lives,” Garland said. “That’s why our priority is to fight these attacks.”

He mentioned the Jan. 6 attack on the Capitol.

“We know when we have a case of this level of complexity and this level of consequence for the country that we have to get it right,” Garland said. “That means from the very beginning imagining the mistakes that we could make and making sure that we don’t make them because some of them cannot be recovered later. That we think about the entire course of the prosecution, the trial, the appeal … so legal and fact development and strategizing and tactics are all worked on from the beginning.

“That we pressure-test at every stage that we can. And if we look like we’re in a blind alley, we move on to another way to go forward,” he added.

Garland also attended the 59th anniversary of the Selma-to-Montgomery March earlier in the week. He said he went because the original march had a powerful effect on him when he watched it on television as a child.

It also “galvanized the voting rights movement,” he said. “The Voting Rights Act gave the Justice Department important tools to ensure that every eligible person would have a chance to vote and have that vote counted. We feel an obligation to aggressively use the tools we have.

“If election workers and volunteers aren’t willing to make sure that our elections go forward in a fair way, then we’re not going to have elections,” he said.

Fixing AI will require better databases


The problem with generative artificial intelligence is well known: If you rely on it to write a legal brief, it might spit out fake citations. Lawyer beware. So-called “hallucinations” by AI programs like ChatGPT are very real.

One solution, according to experts at a recent American Bar Association webinar, is to use a more specialized database as the AI program’s reference point — one that doesn’t rely on a broad Google search.

Four experts discussed how to conquer AI’s hallucination problem at a March 7 webinar — “Why BOTher Writing?” — co-sponsored by the ABA Judicial Division and Thomson Reuters.

To demonstrate the problem, Joshua Fairfield, a professor at the Washington and Lee University School of Law, ran ChatGPT live. He asked a legal question: “Can you maintain an action for negligence under New Zealand law?” ChatGPT quickly spit out what Fairfield called “a fairly standard common-law answer.” Unfortunately, he added, it was not true.

So Fairfield asked three follow-up questions, each one starting with the words “No, that is not correct” and adding more detail to the question. And each time, the computer admitted its mistake and gave another answer – also wrong. “Each time I provide information, it says, ‘You are correct’ and then completely changes the analysis, turns to the other side, hallucinates a new answer.”

One takeaway, said Mark Davies, a partner with the law firm Orrick, Herrington and Sutcliffe in Washington, D.C., is that prompts – the questions you ask the AI program – matter a lot. “The better the prompt, the better the answer,” he said.

But another important takeaway, according to the panelists, is that generative AI programs like ChatGPT rely on huge, general databases of information, and if the database is too broad and not legal-specific, the program may come back with the wrong answer.

“The ChatGPT model is so general,” Davies said, “that there’s so much material out there that it can be quite difficult for it to get the answer right.” On the other hand, Davies added, models that are more specific – perhaps a law-specific model – “maybe won’t be quite as off-base.”
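In practical terms, the fix the panelists describe resembles what engineers call retrieval-augmented generation: fetch passages from a trusted, law-specific corpus first, then instruct the model to answer only from those passages and to cite them. Below is a minimal sketch of that pattern; the `search_legal_corpus` helper, the canned passage and the model choice are illustrative assumptions, not any panelist’s or vendor’s actual system.

```python
# Minimal retrieval-grounded prompting sketch (illustrative only; not
# Westlaw's or any vendor's actual API). The idea the panel describes:
# constrain the model to a trusted legal source instead of letting it
# answer from its general training data.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_legal_corpus(question: str, k: int = 3) -> list[str]:
    """Hypothetical stand-in for a search against a curated legal database.

    A real system would query a service like Westlaw and return passages
    with citations; here we return canned text purely for illustration.
    """
    return [
        "Accident Compensation Act 2001 (NZ): most personal injury claims "
        "are handled by the ACC scheme, which generally bars suing in "
        "negligence for covered injuries.",
    ][:k]


def grounded_answer(question: str) -> str:
    passages = search_legal_corpus(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the legal question using ONLY the sources below, and cite "
        "the source for each claim. If the sources do not answer the "
        "question, say so rather than guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(grounded_answer(
    "Can you maintain an action for negligence under New Zealand law?"
))
```

The instruction to refuse when the sources are silent is the piece that counters the flip-flopping Fairfield observed: a model that must ground every claim in a cited passage has far less room to hallucinate a new answer each time it is challenged.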

Emily Colbert, a senior vice president for product management with Thomson Reuters, agreed.

“The answer is going to be as good as the data it is generating the answer from,” she said. “Google obviously has great data, but most of us in the legal space don’t go to Google for legal answers. We go to Google and ask general questions. We’re not going to immediately think that the answer back is absolutely correct in our particular area of specialty.”

Colbert touted a new product launched by Thomson Reuters in November called AI-Assisted Research on Westlaw Precision. Lawyers, she said, can ask a question in plain English with as much detail as possible and the program will generate an answer based on a Westlaw search with cited sources. Using a trusted database like Westlaw, she said, dramatically reduces the risk of AI hallucinations.

Still, Colbert said, “Any vendor that tells you that they’ve, at this stage anyway, completely eliminated any potential chance of hallucination or inaccuracy, you should be wary of that vendor.”

The panel was moderated by Herbert B. Dixon Jr., senior judge of the Superior Court of the District of Columbia, who writes a technology column for The Judges’ Journal, a publication of the ABA Judicial Division.