• Re: AI and the Legal Profession

    From Roger Hayter@21:1/5 to All on Fri Feb 21 14:15:33 2025
    On 21 Feb 2025 at 13:53:54 GMT, "Simon Parker" <simonparkerulm@gmail.com> wrote:

    In October 2023, major UK law firm Linklaters created a test comprising
    50 "hard" questions from 10 different areas of legal practice to test
    the ability of AI Large Language Models to answer legal questions. The
    questions are the sort that would typically require advice from a
    competent mid-level lawyer (two years' post-qualification experience)
    specialised in that practice area (so someone who is four years out of
    law school).

    They named the test the "LinksAI English Law Benchmark".

    They have recently repeated the tests and had the answers marked by
    senior lawyers from each practice area.

    In October 2023, OpenAI's GPT-2, GPT-3 and GPT-4, together with
    Google's Bard, were tested, with Bard scoring best at 4.4 out of 10.
    However, it is to be noted that all were often wrong and included
    fictional citations.

    Their second benchmarking exercise, completed recently, has seen
    "significant improvement" from the LLMs.

    Gemini 2.0 scored 6.0 out of 10 and OpenAI o1 achieved the best score
    with 6.4 out of 10.
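
    For concreteness, a headline score like 6.4 out of 10 is presumably an
    average over the 50 questions. A minimal sketch of that kind of
    aggregation follows (Python; the marks and practice areas are made up
    for illustration, and the report's actual marking scheme is in the PDF
    linked below):

        from statistics import mean

        # Hypothetical marks out of 10, keyed by practice area. The real
        # per-question marks are in the Linklaters report, not shown here.
        marks = {
            "tax": [7, 6, 5, 8, 6],
            "real estate": [6, 7, 6, 5, 7],
        }

        per_area = {area: round(mean(ms), 1) for area, ms in marks.items()}
        overall = round(mean(m for ms in marks.values() for m in ms), 1)
        print(per_area)   # {'tax': 6.4, 'real estate': 6.2}
        print(overall)    # 6.3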

    Despite these improved scores, the answers are "still not always right
    and lack nuance", and Linklaters "recommend they should not be used for
    English law legal advice without expert human supervision", as the
    models still made mistakes, left out important information and invented
    citations (albeit less often than earlier models).

    With supervision, Linklaters feel we are getting to the stage where
    LLMs could be useful, for example in creating a first draft or as a
    cross-check, especially for tasks that involve summarising relatively
    well-known areas of law, but they stressed the "dangers" of using them
    if lawyers "don't already have a good idea of the answer".

    The full report can be found here:

    https://lpscdn.linklaters.com/-/media/digital-marketing-image-library/files/01_insights/blogs/2025/linksai-english-law-benchmark-report1o-gemini32073228082.ashx?rev=01d02e30-08c9-4213-b2fb-4d9e718058ba&extension=pdf

    In related news, international law firm Hill Dickinson has recently
    sent an internal e-mail to all staff warning about the use of AI tools.

    In common with many firms of this size, Hill Dickinson closely monitors
    staff Internet usage and has an AI policy which requires staff to follow
    a formal request process before using AI tools.

    According to Hill Dickinson's Chief Technology Officer, they detected
    more than 32,000 hits to ChatGPT over a seven-day period in January and
    February.

    During the same time frame, they detected more than 3,000 hits to the
    Chinese LLM DeepSeek.

    Additionally, there were 50,000 hits to the writing assistance tool Grammarly.

    Owing to how the stats were collected, it is not possible to determine
    on how many occasions staff visited ChatGPT, DeepSeek or Grammarly, nor
    has Hill Dickinson revealed how many staff repeatedly visited these
    sites (as several hits could be generated by a single user visiting the
    site once).
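
    To illustrate the hits-versus-visits distinction, here is a minimal
    sketch (Python) of how raw log hits could be collapsed into per-user
    visits if user identifiers were available. The field names, sample
    records and the 30-minute idle gap are assumptions for illustration;
    nothing here describes how Hill Dickinson's monitoring actually works.

        from collections import defaultdict
        from datetime import datetime, timedelta

        # Hypothetical proxy-log hits: (user_id, timestamp). A single page
        # view can generate several hits, so hit counts overstate visits.
        hits = [
            ("u1", datetime(2025, 1, 20, 9, 0, 5)),
            ("u1", datetime(2025, 1, 20, 9, 0, 7)),
            ("u1", datetime(2025, 1, 20, 14, 30, 0)),
            ("u2", datetime(2025, 1, 20, 9, 1, 0)),
        ]

        IDLE_GAP = timedelta(minutes=30)  # assumed session boundary

        def count_visits(hits):
            """Count visits: runs of a user's hits separated by IDLE_GAP."""
            by_user = defaultdict(list)
            for user, ts in hits:
                by_user[user].append(ts)
            visits = 0
            for stamps in by_user.values():
                stamps.sort()
                last = None
                for ts in stamps:
                    if last is None or ts - last > IDLE_GAP:
                        visits += 1
                    last = ts
            return visits

        print(len(hits), "hits ->", count_visits(hits), "visits")
        # prints: 4 hits -> 3 visits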

    Their AI Policy includes guidance that prohibits the uploading of client information and requires staff to verify the accuracy of LLM responses.

    Finally, in September 2024, legal software provider Clio surveyed around
    500 UK solicitors, 62% of whom anticipated an increase in AI usage over
    the next 12 months.

    The survey found that law firms across the UK are using the technology
    to complete tasks such as drafting documents, reviewing or analysing
    contracts, and conducting legal research.

    I hope some here find this information interesting.

    Regards

    S.P.

    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or despite
    LLM use I have no idea. The assessment is a bit worrying; if I pay a
    lawyer I honestly expect better than 6 out of 10 accuracy.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Fri Feb 21 14:58:21 2025
    On 21/02/2025 13:53, Simon Parker wrote:
    In October 2023, major UK law firm Linklaters created a test comprising
    50 "hard" questions from 10 different areas of legal practice to test
    the ability of AI Large Language Models to answer legal questions. The
    questions are the sort that would typically require advice from a
    competent mid-level lawyer (two years' post-qualification experience)
    specialised in that practice area (so someone who is four years out of
    law school).

    They named the test the "LinksAI English Law Benchmark".

    They have recently repeated the tests and had the answers marked by
    senior lawyers from each practice area.

    In October 2023, OpenAI's GPT-2, GPT-3 and GPT-4, together with
    Google's Bard, were tested, with Bard scoring best at 4.4 out of 10.
    However, it is to be noted that all were often wrong and included
    fictional citations.

    Their second benchmarking exercise, completed recently, has seen
    "significant improvement" from the LLMs.

    Gemini 2.0 scored 6.0 out of 10 and OpenAI o1 achieved the best score
    with 6.4 out of 10.

    Despite these improved scores, the answers are "still not always right
    and lack nuance", and Linklaters "recommend they should not be used for
    English law legal advice without expert human supervision", as the
    models still made mistakes, left out important information and invented
    citations (albeit less often than earlier models).

    With the exception of the last of those, which is unforgivable, the
    important question is whether the answers given were better or worse
    than those of human lawyers of various degrees of experience, who
    incidentally cost an awful lot more than the nothing spent on the AI
    answers.

    After all, 6.4 out of 10 on 'hard questions' would normally be
    considered a good pass in a legal exam.

    With supervision, Linklaters feel we are getting to the stage where
    LLMs could be useful, for example in creating a first draft or as a
    cross-check, especially for tasks that involve summarising relatively
    well-known areas of law, but they stressed the "dangers" of using them
    if lawyers "don't already have a good idea of the answer".

    Charging £500 an hour, they do of course have some self-interest in
    protecting their own expertise and in being a bit dismissive of AI.

    And it's worth remembering that, when faced with similar questions in
    their day-to-day professional lives, they will of course be acting quite similarly to AI in that they'll be on their computers googling or
    Lexising for the answers too. It won't all be in their heads.

    The full report can be found here:

    https://lpscdn.linklaters.com/-/media/digital-marketing-image-library/files/01_insights/blogs/2025/linksai-english-law-benchmark-report1o-gemini32073228082.ashx?rev=01d02e30-08c9-4213-b2fb-4d9e718058ba&extension=pdf

    In related news, international law firm Hill Dickinson has recently
    sent an internal e-mail to all staff warning about the use of AI tools.

    In common with many firms of this size, Hill Dickinson closely monitors
    staff Internet usage and has an AI policy which requires staff to follow
    a formal request process before using AI tools.

    According to Hill Dickinson's Chief Technology Officer, they detected
    more than 32,000 hits to ChatGPT over a seven-day period in January and
    February.

    That's one hell of a lot if their 'formal request process' is actually
    used. How many people does it take to administer that, I wonder?

    During the same time frame, they detected more than 3,000 hits to the
    Chinese LLM DeepSeek.

    Additionally, there were 50,000 hits to the writing assistance tool Grammarly.

    Oh dear, that's very disturbing. Lawyers who can't write proper English
    on their own. Can they be trusted?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Roger Hayter on Fri Feb 21 18:01:02 2025
    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or
    despite LLM use I have no idea. The assessment is a bit worrying; if I
    pay a lawyer I honestly expect better than 6 out of 10 accuracy.

    I've had absolutely terrible paid-for legal advice several times,
    before LLMs were a thing, so it's not the fault of "AI".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Odell@21:1/5 to jon+usenet@unequivocal.eu on Fri Feb 21 21:45:33 2025
    On Fri, 21 Feb 2025 18:01:02 -0000 (UTC), Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or
    despite LLM use I have no idea. The assessment is a bit worrying; if I
    pay a lawyer I honestly expect better than 6 out of 10 accuracy.

    I've had absolutely terrible paid-for legal advice several times,
    before LLMs were a thing, so it's not the fault of "AI".

    Yes, but if it caused you a loss (you discovered that your house
    didn't actually belong to you after you bought it, got blamed for
    starting a war because you let your country be invaded, etc) you could
    (in theory at least) sue the lawyer and (in theory at least) their
    insurers would compensate you. When the lawyers are all using AI, will
    they still bill you at £1000/hr? When they get it wrong, will they
    shrug and say it was the AI wot dun it and successfully wriggle out of
    their liabilities? When we all have access to AIs, will the law firms
    be sure their meatspace staff are offering clients something better
    than the technology?

    Nick

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Nick Odell on Fri Feb 21 21:59:31 2025
    On 2025-02-21, Nick Odell <nickodell49@yahoo.ca> wrote:
    On Fri, 21 Feb 2025 18:01:02 -0000 (UTC), Jon Ribbens
    <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or
    despite LLM use I have no idea. The assessment is a bit worrying; if I
    pay a lawyer I honestly expect better than 6 out of 10 accuracy.

    I've had absolutely terrible paid-for legal advice several times,
    before LLMs were a thing, so it's not the fault of "AI".

    Yes, but if it caused you a loss (you discovered that your house
    didn't actually belong to you after you bought it, got blamed for
    starting a war because you let your country be invaded, etc) you could
    (in theory at least) sue the lawyer and (in theory at least) their
    insurers would compensate you.

    Indeed. The insurance is probably the main point of solicitors these
    days.

    When the lawyers are all using AI, will they still bill you at
    £1000/hr? When they get it wrong, will they shrug and say it was the
    AI wot dun it and successfully wriggle out of their liabilities? When
    we all have access to AIs, will the law firms be sure their meatspace
    staff are offering clients something better than the technology?

    If you go to the AI directly you'll get no insurance. If you go to
    the solicitors and they use the same AI, you'll be insured.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to All on Fri Feb 21 22:08:01 2025
    On 21 Feb 2025 at 21:59:31 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Nick Odell <nickodell49@yahoo.ca> wrote:
    On Fri, 21 Feb 2025 18:01:02 -0000 (UTC), Jon Ribbens
    <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or
    despite LLM use I have no idea. The assessment is a bit worrying; if I
    pay a lawyer I honestly expect better than 6 out of 10 accuracy.

    I've had absolutely terrible paid-for legal advice several times,
    before LLMs were a thing, so it's not the fault of "AI".

    Yes, but if it caused you a loss (you discovered that your house
    didn't actually belong to you after you bought it, got blamed for
    starting a war because you let your country be invaded, etc) you could
    (in theory at least) sue the lawyer and (in theory at least) their
    insurers would compensate you.

    Indeed. The insurance is probably the main point of solicitors these
    days.

    When the lawyers are all using AI, will they still bill you at
    £1000/hr? When they get it wrong, will they shrug and say it was the
    AI wot dun it and successfully wriggle out of their liabilities? When
    we all have access to AIs, will the law firms be sure their meatspace
    staff are offering clients something better than the technology?

    If you go to the AI directly you'll get no insurance. If you go to
    the solicitors and they use the same AI, you'll be insured.

    It is not hard to think of scenarios where getting bad advice from a
    solicitor is not compensable by insurance, as it is impossible to prove
    that the outcome would have been financially better had you been given
    better advice.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Roger Hayter on Fri Feb 21 22:10:14 2025
    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    On 21 Feb 2025 at 21:59:31 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Nick Odell <nickodell49@yahoo.ca> wrote:
    On Fri, 21 Feb 2025 18:01:02 -0000 (UTC), Jon Ribbens
    <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-21, Roger Hayter <roger@hayter.org> wrote:
    I have had several anecdotal reasons recently to realise how poor
    paid-for legal advice can be. But whether this is because of or
    despite LLM use I have no idea. The assessment is a bit worrying; if I
    pay a lawyer I honestly expect better than 6 out of 10 accuracy.

    I've had absolutely terrible paid-for legal advice several times,
    before LLMs were a thing, so it's not the fault of "AI".

    Yes, but if it caused you a loss (you discovered that your house
    didn't actually belong to you after you bought it, got blamed for
    starting a war because you let your country be invaded, etc) you could
    (in theory at least) sue the lawyer and (in theory at least) their
    insurers would compensate you.

    Indeed. The insurance is probably the main point of solicitors these
    days.

    When the lawyers are all using AI, will they still bill you at
    £1000/hr? When they get it wrong, will they shrug and say it was the
    AI wot dun it and successfully wriggle out of their liabilities? When
    we all have access to AIs, will the law firms be sure their meatspace
    staff are offering clients something better than the technology?

    If you go to the AI directly you'll get no insurance. If you go to
    the solicitors and they use the same AI, you'll be insured.

    It is not hard to think of scenarios where getting bad advice from a
    solicitor is not compensable by insurance, as it is impossible to
    prove that the outcome would have been financially better had you been
    given better advice.

    Also true.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)