Microsoft Word is getting an AI legal agent, which sounds useful until you remember how badly this has gone before. The new Legal Agent can review contracts, suggest edits, compare versions, and flag risky clauses inside Word. On paper, these features sound genuinely helpful and convenient; however, generative AI tools have hallucinated entire cases, citations, and quotes out of thin air before, dragging real people into real courtroom trouble.
What can Microsoft’s Legal Agent do?
Microsoft says Legal Agent is available through Copilot in Word for users in its Frontier program in the U.S. It currently works in Word for Windows desktop. There is no separate app or installation required, though some users may need to restart Word before the agent appears.
Legal Agent is meant for contract and document review. Microsoft says it can check a contract clause by clause against a legal playbook, review a full agreement, compare different versions, flag risks and obligations, and suggest edits with tracked changes. It also keeps the original formatting, tables, lists, and negotiation history intact.
The company is also trying to avoid the obvious nightmare scenario for its users and itself. The feature has built-in safeguards, such as citations linked to source language so reviewers can verify answers before using them, along with clear disclaimers that it does not provide legal advice, may produce inaccurate content, and still requires review by a qualified legal professional before anything is relied on.
Why should lawyers still be nervous?
There is already precedent for AI going rogue in legal settings: two New York lawyers were sanctioned in 2023 and ordered to pay a $5,000 fine after submitting a court filing that included fake cases generated by ChatGPT. Michael Cohen, Donald Trump’s former lawyer, also admitted that he unknowingly gave his attorney fake case citations generated by Google Bard. While Cohen was not sanctioned, the judge still called the episode embarrassing and stressed the need for skepticism when using AI in legal work.
These are not isolated incidents. Judges have questioned or disciplined lawyers in numerous cases involving AI-assisted filings, and one French data scientist and lawyer identified hundreds of court documents containing fake citations and nonexistent references over the past year.
The bigger problem is that hallucinations remain unresolved. AI chatbots can still produce answers that sound confident while being partly or completely wrong. In legal work, that is especially dangerous, because a made-up citation or invented case can end up in a filing and create serious consequences.
Microsoft has built many safeguards into Legal Agent to prevent these issues; even so, the lesson is already written in court records. AI can speed up legal work, but the responsibility for fact-checking still falls on the lawyer.

