In October 2023, major UK law firm Linklaters created a test comprising
50 "hard" questions from 10 different areas of legal practice, designed
to test the ability of AI large language models to answer legal
questions. The questions are the sort that would typically require
advice from a competent mid-level lawyer (two years' post-qualification
experience) specialised in that practice area; in other words, someone
roughly four years out of law school.
They named the test the "LinksAI English Law Benchmark".
They have recently repeated the tests and had the answers marked by
senior lawyers from each practice area.
In October 2023, OpenAI's GPT-2, GPT-3 and GPT-4 were tested alongside
Google's Bard, with Bard scoring best at 4.4 out of 10. However, all
were often wrong and included fictional citations.
Their second benchmarking exercise, completed recently, has seen
"significant improvement" from the LLMs.
Gemini 2.0 scored 6.0 out of 10 and OpenAI o1 achieved the best score
with 6.4 out of 10.
Despite these improved scores, the answers are "still not always right
and lack nuance", and Linklaters "recommend they should not be used for
English law legal advice without expert human supervision", as they
still made mistakes, left out important information and invented
citations (albeit less often than earlier models).
With supervision, Linklaters feel we are getting to the stage where
LLMs could be useful, for example in creating a first draft or as a
cross-check, especially for tasks that involve summarising relatively
well-known areas of law. However, they stressed the "dangers" of using
them if lawyers "don't already have a good idea of the answer".
The full report can be found here:
https://lpscdn.linklaters.com/-/media/digital-marketing-image-library/files/01_insights/blogs/2025/linksai-english-law-benchmark-report1o-gemini32073228082.ashx?rev=01d02e30-08c9-4213-b2fb-4d9e718058ba&extension=pdf
In related news, international law firm Hill Dickinson has recently
sent an internal e-mail to all staff warning about the use of AI tools.
In common with many firms of this size, Hill Dickinson closely
monitors staff Internet usage and has an AI policy which requires staff
to follow a formal request process before using AI tools.
According to Hill Dickinson's Chief Technology Officer, the firm
detected more than 32,000 hits to ChatGPT over a seven-day period
spanning January and February.
During the same time frame, they detected more than 3,000 hits to the
Chinese LLM DeepSeek.
Additionally, there were 50,000 hits to the writing-assistance tool
Grammarly.
Owing to how the stats were collected, it is not possible to determine
on how many occasions staff visited ChatGPT, DeepSeek or Grammarly, nor
has Hill Dickinson revealed how many staff repeatedly visited these
sites (several hits could have been generated by a single user visiting
a site once).
Their AI Policy includes guidance that prohibits the uploading of client information and requires staff to verify the accuracy of LLM responses.
Finally, in September 2024, legal software provider Clio surveyed around
500 UK solicitors, 62% of whom anticipated an increase in AI usage over
the next 12 months.
The survey found that law firms across the UK are using the technology
for tasks such as drafting documents, reviewing or analysing contracts,
and conducting legal research.
I hope some here find this information interesting.
Regards
S.P.
I have had several anecdotal reasons recently to realise how poor
paid-for legal advice can be. But whether this is because of or
despite LLM use I have no idea. The assessment is a bit worrying; if I
pay a lawyer I honestly expect better than 6 out of 10 accuracy.
On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
> I have had several anecdotal reasons recently to realise how poor
> paid-for legal advice can be. But whether this is because of or
> despite LLM use I have no idea. The assessment is a bit worrying; if I
> pay a lawyer I honestly expect better than 6 out of 10 accuracy.
I've had absolutely terrible paid-for legal advice several times,
before LLMs were a thing, so it's not the fault of "AI".
On Fri, 21 Feb 2025 18:01:02 -0000 (UTC), Jon Ribbens
<jon+usenet@unequivocal.eu> wrote:
> I've had absolutely terrible paid-for legal advice several times,
> before LLMs were a thing, so it's not the fault of "AI".
Yes, but if it caused you a loss (you discovered that your house
didn't actually belong to you after you bought it, got blamed for
starting a war because you let your country be invaded, etc) you could
(in theory at least) sue the lawyer and (in theory at least) their
insurers would compensate you.
When the lawyers are all using AI, will they still bill you at
£1000/hr? When they get it wrong, will they shrug and say it was the
AI wot dun it and successfully wriggle out of their liabilities? When
we all have access to AIs, will the law firms be sure their meatspace
staff are offering clients something better than the technology?
On 2025-02-21, Nick Odell <nickodell49@yahoo.ca> wrote:
> Yes, but if it caused you a loss (you discovered that your house
> didn't actually belong to you after you bought it, got blamed for
> starting a war because you let your country be invaded, etc) you could
> (in theory at least) sue the lawyer and (in theory at least) their
> insurers would compensate you.
Indeed. The insurance is probably the main point of solicitors these
days.
> When the lawyers are all using AI, will they still bill you at
> £1000/hr? When they get it wrong, will they shrug and say it was the
> AI wot dun it and successfully wriggle out of their liabilities? When
> we all have access to AIs, will the law firms be sure their meatspace
> staff are offering clients something better than the technology?
If you go to the AI directly you'll get no insurance. If you go to
the solicitors and they use the same AI, you'll be insured.
On 21 Feb 2025 at 21:59:31 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote:
> If you go to the AI directly you'll get no insurance. If you go to
> the solicitors and they use the same AI, you'll be insured.
It is not hard to think of scenarios where getting bad advice from a
solicitor is not compensable by insurance, as it is impossible to
prove that the outcome would have been financially better if you had
been given better advice.