LLMs are only about as useful in law practice as an uber-caffeinated non-lawyer. The things simply don’t know the law, and any argument they come up with should be checked with the same thoroughness a lawyer would give the work of an associate who was just revealed to have super-early-onset dementia.
If you’re headed to court and are thinking of trusting the advice you get from an LLM, don’t. You’d be better off appearing “pro se” with advice from the sovereign citizen movement.
(I mean, so long as you don’t try that admiralty court / all-caps pseudo-corp bullshit. The law is written in a keyword-less syntax with a case-insensitive parser.)
To be fair, I really do think they can take on the painful task of outlining something in long-winded legalese if you give them the details and the relevant law, but of course everything needs to be double-checked, and the LLM needs to be constrained to the exact paragraphs and details you hand it.
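To make “constrained to exact paragraphs” concrete, here’s a rough sketch of what I mean, assuming an OpenAI-style chat completions client; the model name, section numbers, and statute text are all placeholders I made up, not real law:

```python
# Rough sketch: force the model to work only from the exact provisions you hand it.
# Assumes the OpenAI Python SDK; model name and statute text are placeholders.
from openai import OpenAI

client = OpenAI()

# The only material the model is allowed to rely on.
provisions = {
    "Sec. 12(a)": "A tenant may withhold rent if ... (paste the exact statutory text here)",
    "Sec. 12(b)": "Notice under subsection (a) must be given in writing ...",
}

facts = "Tenant reported a broken furnace on Jan 3 and stopped paying rent on Feb 1."

prompt = (
    "Draft an outline of the tenant's argument.\n"
    "Rules:\n"
    "- Cite ONLY the provisions quoted below, by their exact section numbers.\n"
    "- If the quoted provisions do not support a point, say 'NOT SUPPORTED' instead of guessing.\n\n"
    + "\n".join(f"{sec}: {text}" for sec, text in provisions.items())
    + f"\n\nFacts: {facts}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you actually have access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the drafting as boring and literal as possible
)

draft = response.choices[0].message.content
print(draft)  # every line of this still gets read against the real statute by a human
```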
I like AI when it’s used right, and I think people underestimate its ability to eliminate tedious tasks, but of course I understand why AI is hated right now. I really hope we can get to a world where court transcription doesn’t lead to carpal tunnel syndrome because the transcriber only has to supervise an AI instead of typing every word out, or where bureaucracy gets pared down for the average overworked citizen.
But yeah, tech bros fucked that up for all of us for a while.
Out of the box, LLMs are trash, but you can use their APIs to great effect on large amounts of data to find needles in haystacks. Complex needles, not just the literal text strings any sophisticated search tool could already match.
You’re right that anything they find needs to be gone over by a human, but they HUGELY reduce the size of the problem when used intelligently.
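Here’s a rough sketch of that needle-in-a-haystack pattern, under the same assumptions as above (OpenAI-style client, placeholder model name, made-up question and file paths): run a cheap yes/no question over every chunk and hand only the survivors to a human.

```python
# Rough sketch of the "needle in a haystack" pattern: ask a cheap yes/no question
# about every chunk and flag the hits for human review.
# Assumes the OpenAI Python SDK; model name, question, and paths are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Does this passage describe anyone agreeing to indemnify another party? "
    "Answer strictly YES or NO."
)

def chunks(text: str, size: int = 3000):
    """Naive fixed-size chunking; a real pipeline would split on paragraphs or pages."""
    for i in range(0, len(text), size):
        yield text[i:i + size]

hits = []
for path in Path("discovery_docs").glob("*.txt"):  # placeholder directory
    for n, chunk in enumerate(chunks(path.read_text(errors="ignore"))):
        answer = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": f"{QUESTION}\n\n{chunk}"}],
            temperature=0,
        ).choices[0].message.content.strip().upper()
        if answer.startswith("YES"):
            hits.append((path.name, n))  # a human reads every one of these

print(f"{len(hits)} chunks flagged for review out of the whole pile")
```

The obvious caveat is false negatives: the model will miss some needles, so this narrows the pile a human has to read rather than replacing the review.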