• Re: Lawyer presents fake cases in court

    From Peter Walker@21:1/5 to The Todal on Wed May 7 13:59:54 2025
    The Todal <the_todal@icloud.com> wrote in news:m814u8Fo8qmU2@mid.individual.net:

    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this. The culprit
    probably being "artificial intelligence" which is reputed to take the
    place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf


    Whilst it is a judgment for wasted costs, I can't see any recommendation
    for referral to the Law Society for misleading the court in such a
    substantial manner, which comes as a surprise. I would expect a serious
    misconduct referral.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to All on Wed May 7 14:19:04 2025
    Amazing. I think this has already happened in the USA but I thought our solicitors and barristers were better than this. The culprit probably
    being "artificial intelligence" which is reputed to take the place of
    lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    quote

    In this part of the judgment I deal with the application for wasted
    costs against the Claimant's lawyers made by the Defendant. By an
    application dated 7 March 2025, the Defendant applies for a wasted costs
    order against the Claimant's solicitors and barrister. The solicitors
    are the Haringey Law Centre, Ground Floor Office, 7 Holcombe Road,
    London N17 9AA and the barrister is Sarah Forey of 3 Bolt Court
    Chambers. The application is founded on three factual assertions:

    (1) The first is that the Claimant's barrister and solicitor put five
    fake cases in the Claimant's statement of facts and grounds for the
    judicial review. Those are in paragraphs 17, 20, 24, 27 and 28.

    (2) Secondly, that when requested to produce copies of those cases, they
    did not.

    (3) Thirdly, that in the statement of facts and grounds at paragraphs 15
    and 16 and by implication throughout, the Claimant's lawyers asserted
    that section 188(3) of the Housing Act 1996 was a "Must" provision
    instead of a discretionary "May" provision.

    On 4 February 2025, the Defendant wrote to the Claimant's solicitors
    stating that having read the statement of facts and grounds drafted by
    Ms Forey, they could not find five of the cases set out therein.

    I do not consider that it was fair or reasonable to say that the
    erroneous citations could easily be explained and then to refuse to
    explain them. Nor do I consider it was professional, reasonable or fair
    to say it was not necessary to explain the citations. The assertion that
    they agreed to correct the citations before April never came true, for
    they never did. The assertion that no further explanation or obligation
    to provide an explanation was necessary or arose is, in my judgment,
    quite wrong. Worst of all, the assertion that the citations are merely
    cosmetic errors is a grossly unprofessional categorisation.

    Ms Forey wrote:

    "In R (on the application of El Gendi) v Camden London Borough
    Council [2020] EWHC 2435 (Admin), the High Court emphasised that failing
    to provide interim accommodation during the review process undermines
    the protective purposes of homelessness legislation. The court found
    that such a failure not only constitutes a breach of statutory duty but
    also creates unnecessary hardship for vulnerable individuals. The
    respondent's similar failure in the present case demonstrates a
    procedural impropriety warranting judicial review".

    It transpires that when the Defendant looked that case up, it did not
    exist. As a result, the Defendant wrote to the Claimant and asked to
    have a copy. The Claimant's solicitors never provided one so then the
    Defendant asserted that the case did not exist. I find it remarkable
    that neither the Claimant's solicitors' firm nor barrister has put in
    any written explanation in relation to that assertion.

    What Ms Forey said about this twice in submissions was that these are
    "minor citation errors". When I challenged her the first time she
    backtracked on that and accepted they are serious. However, in her later submissions she returned to them being "minor citation errors". She said
    there was no dishonesty and submitted that there was no material
    prejudice. Then she sought, remarkably, without having put in a bundle
    of authorities or anything in writing, to provide in submissions
    references to further cases which she did not put before the court,
    which she says made out the principles that she had put out in each
    paragraph containing the fake cases.

    Ms Forey wrote this:

    "The appellant's situation mirrors the facts in R (on the
    application of H) v Ealing London Borough Council [2021] EWHC 939
    (Admin) where the court found the local authority's failure to provide
    interim accommodation irrational in light of the appellant's
    vulnerability and the potential consequences of homelessness. The
    respondent's conduct in this case similarly lacks a rational basis and demonstrates a failure to properly exercise its discretion".

    This was yet another fake case. It does not exist. Therefore, the
    description of what it is in the case was fake and untrue.

    Has the behaviour of Ms Forey and the Claimant's solicitors been
    improper, unreasonable or negligent? I consider that it has been all
    three. It is wholly improper to put fake cases in a pleading. It was unreasonable, when it was pointed out, to say that these fake cases were
    "minor citation errors" or to use the phrase of the solicitors,
    "Cosmetic errors". I should say it is the responsibility of the legal
    team, including the solicitors, to see that the statement of facts and
    grounds are correct. They should have been shocked when they were told
    that the citations did not exist. Ms Forey should have reported herself
    to the Bar Council. I think also that the solicitors should have
    reported themselves to the Solicitors Regulation Authority. I consider
    that providing a fake description of five fake cases, including a Court
    of Appeal case, qualifies quite clearly as professional misconduct.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to The Todal on Wed May 7 14:53:10 2025
    On 07/05/2025 14:19, The Todal wrote:
    Amazing. I think this has already happened in the USA but I thought our solicitors and barristers were better than this.  The culprit probably
    being "artificial intelligence" which is reputed to take the place of
    lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    No, the culprits are the solicitors and barrister involved who naively
    believed everything the internet told them, and who did not acknowledge
    their errors even when pointed out to them but doubled down.

    One wonders what they added to the process and what they were being paid
    for.

    What punishment should they receive?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Peter Walker on Wed May 7 15:06:00 2025
    On 07/05/2025 14:59, Peter Walker wrote:
    The Todal <the_todal@icloud.com> wrote in news:m814u8Fo8qmU2@mid.individual.net:

    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this. The culprit
    probably being "artificial intelligence" which is reputed to take the
    place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf


    Whilst it is a judgment for wasted costs, I can't see any recommendation for referral to the Law Society for misleading the court in such a substantial manner, which comes as a surprise. I would expect a serious misconduct referral.


    It is mentioned at the end, actually. Quote

    I am going to do two further things which I am going to want recorded in
    the order. The first is that I order at public expense a transcript of
    these extemporary judgments that I have provided in this case, all three
    of them. Secondly, I will require the Defendant to send the transcript
    to the Bar Standards Board and to the Solicitors Regulation Authority.
    It will be a matter for both counsel whether they comply with, what I
    believe are their obligations of self-reporting and reporting of
    knowledge of another, and it will be a matter for the solicitors' firm
    as to whether they have a similar requirement of self-reporting under
    the Solicitors Regulation Authority rules.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter Walker@21:1/5 to The Todal on Wed May 7 14:10:17 2025
    The Todal <the_todal@icloud.com> wrote in news:m817m8For35U1@mid.individual.net:

    On 07/05/2025 14:59, Peter Walker wrote:
    The Todal <the_todal@icloud.com> wrote in
    news:m814u8Fo8qmU2@mid.individual.net:

    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this. The culprit
    probably being "artificial intelligence" which is reputed to take
    the place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf


    Whilst it is a judgment for wasted costs, I can't see any
    recommendation for referral to the Law Society for misleading the
    court in such a substantial manner, which comes as a surprise. I would
    expect a serious misconduct referral.


    It is mentioned at the end, actually. Quote

    I am going to do two further things which I am going to want recorded
    in the order. The first is that I order at public expense a transcript
    of these extemporary judgments that I have provided in this case, all
    three of them. Secondly, I will require the Defendant to send the
    transcript to the Bar Standards Board and to the Solicitors Regulation Authority. It will be a matter for both counsel whether they comply
    with, what I believe are their obligations of self-reporting and
    reporting of knowledge of another, and it will be a matter for the solicitors' firm as to whether they have a similar requirement of self-reporting under the Solicitors Regulation Authority rules.


    Missed it (despite downloading the full judgment), thank you :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Norman Wells on Wed May 7 15:11:49 2025
    On 07/05/2025 14:53, Norman Wells wrote:
    On 07/05/2025 14:19, The Todal wrote:
    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this.  The culprit
    probably being "artificial intelligence" which is reputed to take the
    place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    No, the culprits are the solicitors and barrister involved who naively believed everything the internet told them, and who did not acknowledge
    their errors even when pointed out to them but doubled down.

    Yes, you are absolutely right, of course. But a lawyer who conducts his research on "the internet" and merely copies and pastes from the
    results is a lazy and irresponsible lawyer. It means the work that
    should take several hours actually takes about fifteen minutes.
    Obviously a lawyer should read the entire transcript of a cited case,
    not merely rely on a brief precis. If they had tried to get the
    transcripts they would have realised themselves that the cases were fake
    and not included them in any written submissions.



    One wonders what they added to the process and what they were being paid
    for.

    What punishment should they receive?


    If there are findings of dishonesty by the professional bodies, then
    suspension and possibly strike-offs would be appropriate.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Theo on Wed May 7 17:54:07 2025
    Theo <theom+news@chiark.greenend.org.uk> wrote:
    eg in the first minute of the video here: https://www.lexisnexis.co.uk/lexis-plus/lexis-plus-ai.html

    it shows how you can ask it a legal question and it then returns potential arguments with citations. In the example:

    "What potential age or disability discrimination claims could be brought under the Equality Act 2010 by a 52-year old employee with diabetes?"

    then ask it:

    "What is the relevant case law?"

    and more citations come out, with a tiny footnote "AI-generated content should
    be reviewed for accuracy". Then:

    Amusingly, one of the citations is:

    Richardson v. Newburgh Enlarged City Sch. Dist., 984 F. Supp. 735 (New York Southern District Court, 1997)

    which is a real citation: https://law.justia.com/cases/federal/district-courts/FSupp/984/735/1401858/

    but one that happens to be completely out of jurisdiction for a query about
    the UK's Equality Act 2010. So they can't get it right even in their demo video.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Norman Wells on Wed May 7 17:47:31 2025
    Norman Wells <hex@unseen.ac.am> wrote:
    On 07/05/2025 14:19, The Todal wrote:
    Amazing. I think this has already happened in the USA but I thought our solicitors and barristers were better than this.  The culprit probably being "artificial intelligence" which is reputed to take the place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    No, the culprits are the solicitors and barrister involved who naively believed everything the internet told them, and who did not acknowledge
    their errors even when pointed out to them but doubled down.

    One wonders what they added to the process and what they were being paid
    for.

    It seems that legal firms are using AI to discover relevant cases to cite rather than poring over legal journals. A lot of outfits are touting
    AI-based research tools, including big players like LexisNexis. Of course, this is to cut costs.

    eg in the first minute of the video here: https://www.lexisnexis.co.uk/lexis-plus/lexis-plus-ai.html

    it shows how you can ask it a legal question and it then returns potential arguments with citations. In the example:

    "What potential age or disability discrimination claims could be brought
    under the Equality Act 2010 by a 52-year old employee with diabetes?"

    then ask it:

    "What is the relevant case law?"

    and more citations come out, with a tiny footnote "AI-generated content should be reviewed for accuracy". Then:

    "Write an email to my client explaining the possible age or disability
    claims under the Equality Act 2010 for a 52-year old diabetic employee"

    and it writes a lawyer-style letter with the contents.

    What punishment should they receive?

    I think it's going to get worse before it gets better.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 7 17:47:23 2025
    According to The Todal <the_todal@icloud.com>:
    Amazing. I think this has already happened in the USA but I thought our solicitors and barristers were better than this. ...

    Dream on. Here in the US it happens all the time, viz:

    https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=loweringthebar

    Sensible US lawyers "shepardize" their citations by looking them up in an index that tells them whether they've been appealed or overruled or otherwise not good
    law. (Shepard's Citations started publishing its indices in 1873, hence the name.) Needless to say, if you can't find it in Shepard's you don't cite it.

    Surely there is a UK equivalent.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to The Todal on Wed May 7 17:19:30 2025
    On 07/05/2025 15:11, The Todal wrote:
    On 07/05/2025 14:53, Norman Wells wrote:
    On 07/05/2025 14:19, The Todal wrote:
    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this.  The culprit
    probably being "artificial intelligence" which is reputed to take the
    place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    No, the culprits are the solicitors and barrister involved who naively
    believed everything the internet told them, and who did not
    acknowledge their errors even when pointed out to them but doubled down.

    Yes, you are absolutely right, of course. But a lawyer who conducts his research on "the internet" and merely copies and pastes from the
    results is a lazy and irresponsible lawyer. It means the work that
    should take several hours actually takes about fifteen minutes.
    Obviously a lawyer should read the entire transcript of a cited case,
    not merely rely on a brief precis. If they had tried to get the
    transcripts they would have realised themselves that the cases were fake
    and not included them in any written submissions.



    One wonders what they added to the process and what they were being
    paid for.

    What punishment should they receive?


    If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Theo on Wed May 7 19:03:31 2025
    On 07/05/2025 17:54, Theo wrote:
    Theo <theom+news@chiark.greenend.org.uk> wrote:
    eg in the first minute of the video here:
    https://www.lexisnexis.co.uk/lexis-plus/lexis-plus-ai.html

    it shows how you can ask it a legal question and it then returns potential arguments with citations. In the example:

    "What potential age or disability discrimination claims could be brought
    under the Equality Act 2010 by a 52-year old employee with diabetes?"

    then ask it:

    "What is the relevant case law?"

    and more citations come out, with a tiny footnote "AI-generated content should
    be reviewed for accuracy". Then:

    Amusingly, one of the citations is:

    Richardson v. Newburgh Enlarged City Sch. Dist., 984 F. Supp. 735 (New York Southern District Court, 1997)

    which is a real citation: https://law.justia.com/cases/federal/district-courts/FSupp/984/735/1401858/

    but one that happens to be completely out of jurisdiction for a query about the UK's Equality Act 2010. So they can't get it right even in their demo video.

    Any lawyer who just reproduces what a computer churns out is doing
    nothing more than any unthinking person with a computer could. It's not
    acting as a lawyer at all, but in fact demonstrating that they're
    incompetent, naive, lazy, fraudulent or money-grabbing. Or perhaps all
    of those. It's utterly unprofessional and contemptible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to NOTsomeone@microsoft.invalid on Wed May 7 23:55:50 2025
    "GB" <NOTsomeone@microsoft.invalid> wrote in message news:vvg16i$141cp$1@dont-email.me...

    On 07/05/2025 15:11, The Todal wrote:

    If there are findings of dishonesty by the professional bodies, then suspension and
    possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a step from that to
    dishonest.

    If they're not doing work for which they are being paid, then they are being dishonest. Full stop. I can't honestly see quite how anyone could possibly disagree.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to NOTsomeone@microsoft.invalid on Wed May 7 20:53:53 2025
    On 2025-05-07, GB <NOTsomeone@microsoft.invalid> wrote:
    On 07/05/2025 15:11, The Todal wrote:
    On 07/05/2025 14:53, Norman Wells wrote:
    On 07/05/2025 14:19, The Todal wrote:
    Amazing. I think this has already happened in the USA but I thought
    our solicitors and barristers were better than this.  The culprit
    probably being "artificial intelligence" which is reputed to take the
    place of lawyers and other professionals eventually.

    https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf

    No, the culprits are the solicitors and barrister involved who naively
    believed everything the internet told them, and who did not
    acknowledge their errors even when pointed out to them but doubled down.

    Yes, you are absolutely right, of course. But a lawyer who conducts his
    research on "the internet" and merely copies and pastes from the
    results is a lazy and irresponsible lawyer. It means the work that
    should take several hours actually takes about fifteen minutes.
    Obviously a lawyer should read the entire transcript of a cited case,
    not merely rely on a brief precis. If they had tried to get the
    transcripts they would have realised themselves that the cases were fake
    and not included them in any written submissions.

    One wonders what they added to the process and what they were being
    paid for.

    What punishment should they receive?

    If there are findings of dishonesty by the professional bodies, then
    suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me
    for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From RJH@21:1/5 to billy bookcase on Thu May 8 09:20:00 2025
    On 7 May 2025 at 23:55:50 BST, "billy bookcase" wrote:


    "GB" <NOTsomeone@microsoft.invalid> wrote in message news:vvg16i$141cp$1@dont-email.me...

    On 07/05/2025 15:11, The Todal wrote:

    If there are findings of dishonesty by the professional bodies, then
    suspension and
    possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a step
    from that to dishonest.

    If they're not doing work for which they are being paid, then they are being dishonest. Full stop. I can't honestly see quite how anyone could possibly disagree.


    Indeed - strikes me as an act similar in manner to plagiarism.

    As the judge says - professional misconduct. Given the circumstances, and especially the way they've tried to pass it off as something trivial, I'd be surprised if they practise again.

    --
    Cheers, Rob, Sheffield UK

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to Jon Ribbens on Thu May 8 10:58:50 2025
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then
    suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me
    for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless
    individual. Was there really the funding available to (as Todal put it)
    read the entire transcript of a cited case, not merely rely on a brief
    precis? Perhaps! And if the lawyers charged for hours that they didn't
    put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen. Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and most
    of the time the corner cutters get away with it. It seems wrong to come
    down like a ton of bricks on a particular lot of corner cutters, if the
    system more or less relies on it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to NOTsomeone@microsoft.invalid on Thu May 8 12:24:39 2025
    "GB" <NOTsomeone@microsoft.invalid> wrote in message news:vvhv8q$1m5uf$1@dont-email.me...
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then
    suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    We're not talking about the general public here, we're talking about the
    members of professional bodies. It doesn't seem much of a stretch to me
    for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like Lexis to search for
    cases. Those provide case synopses, as well as leading the reader to the relevant part
    of the judgment.

    They use databases, including synopses, so as to *point to* the relevant
    cases, which they will then need to *study in detail* using the relevant authoritative sources, either on paper or online.


    This looks like a Legal Aid case, as it seems to involve a homeless individual. Was
    there really the funding available to (as Todal put it) read the entire transcript of a
    cited case, not merely rely on a brief precis? Perhaps! And if the lawyers charged for
    hours that they didn't put in, I agree that that would be dishonest.

    It's precisely the opposite, I'd have thought. Before the emergence of
    online databases, knowledge of relevant cases would either have been
    carried around in lawyers' heads or, more likely, in notes compiled as a
    result of previous cases/research.

    Nowadays all they need do is use a database to point towards those
    cases, which will then need to be studied in greater detail.


    OTOH, if Legal Aid funding is so low that it effectively requires corners to be cut,
    then that's what will happen. Of course, databases like Lexis are very unlikely to
    include cases that don't exist, and most of the time the corner cutters get away with
    it. It seems wrong to come down like a ton of bricks on a particular lot of corner
    cutters, if the system more or less relies on it.

    Except it doesn't.

    The judges themselves can easily acquaint themselves with those details
    of the relevant judgements which are going to be relied on. It's not as though they just sit there, and accept everything that's put forward in Court.


    bb

    JK Rowling "misprint! in "The Guardian

    quote:

    This article was amended on 19 April 2025. The sexual violence support service that JK Rowling co-founded is called Beira's Place, not "Beria's Place" as an earlier version said.

    :unquote

    https://www.theguardian.com/books/2025/apr/18/jk-rowling-harry-potter-gender-critical-campaigner

    https://en.wikipedia.org/wiki/Lavrentiy_Beria

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to RJH on Thu May 8 12:27:18 2025
    "RJH" <patchmoney@gmx.com> wrote in message news:vvht00$1lpsm$1@dont-email.me...
    On 7 May 2025 at 23:55:50 BST, "billy bookcase" wrote:


    "GB" <NOTsomeone@microsoft.invalid> wrote in message
    news:vvg16i$141cp$1@dont-email.me...

    On 07/05/2025 15:11, The Todal wrote:

    If there are findings of dishonesty by the professional bodies, then
    suspension and
    possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a step
    from that to dishonest.

    If they're not doing work for which they are being paid, then they are being dishonest. Full stop. I can't honestly see quite how anyone could possibly disagree.


    Indeed - strikes me as an act similar in manner to plagiarism.

    As the judge says - professional misconduct. Given the circumstances, and especially the way they've tried to pass it off as something trivial, I'd be surprised if they practise again.

    It's surely very simple. Presumably they're not advertising for work on the basis
    that they're lazy and irresponsible, but rather on the basis that they're dynamic
    and trustworthy. And are charging on that basis.

    When, in this instance at least, they're very evidently not.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter Walker@21:1/5 to NOTsomeone@microsoft.invalid on Thu May 8 11:16:17 2025
    GB <NOTsomeone@microsoft.invalid> wrote in
    news:vvhv8q$1m5uf$1@dont-email.me:

    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies,
    then suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite
    a step from that to dishonest.

    We're not talking about the general public here, we're talking about
    the members of professional bodies. It doesn't seem much of a stretch
    to me for the regulators of these bodies to decide that signing off
    the *unchecked* work of "AI" generative algorithms as their own paid
    work is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless individual. Was there really the funding available to (as Todal put
    it) read the entire transcript of a cited case, not merely rely on a
    brief precis? Perhaps! And if the lawyers charged for hours that they
    didn't put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen. Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and
    most of the time the corner cutters get away with it. It seems wrong
    to come down like a ton of bricks on a particular lot of corner
    cutters, if the system more or less relies on it.


    In contrast, there are either professional standards or there are not. If
    you adopt cloak & gown then you are bound by professional standards
    irrespective of who you stand for, and if you fail to meet those standards
    then you are worthy of professional sanction.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to All on Thu May 8 12:31:39 2025
    On 08/05/2025 10:58, GB wrote:
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then
    suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    We're not talking about the general public here, we're talking about the
    members of professional bodies. It doesn't seem much of a stretch to me
    for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless individual. Was there really the funding available to (as Todal put it)
    read the entire transcript of a cited case, not merely rely on a brief precis? Perhaps! And if the lawyers charged for hours that they didn't
    put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen.  Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and most
    of the time the corner cutters get away with it. It seems wrong to come
    down like a ton of bricks on a particular lot of corner cutters, if the system more or less relies on it.


    The judges rely on barristers to present their arguments on the law
    accurately and objectively. In many cases judges do not do their own
    research because they trust the honesty of barristers who seem to be
    reputable - it would be different if it was a litigant in person or a
    foreign lawyer who had qualified in a different jurisdiction.

    I can't imagine any competent lawyer sharing your sympathy for these
    lawyers. Not only did they cite fake cases, when asked for transcripts
    they failed to say it was a mistake, which might have mitigated their
    conduct, but tried to bluff their way out of it.

    Funding is not a relevant consideration. I can't believe that there are
    lawyers who cannot afford subscriptions to online legal libraries and
    textbooks but if there are, they should think about switching to a
    different way of making a living. And a lawyer who says "I started
    reading the transcript of a Court of Appeal judgment but I had only been
    paid for an hour's work so I stopped reading after 20 pages" would
    probably need to be sectioned under the Mental Health Act.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to The Todal on Thu May 8 12:08:14 2025
    On 8 May 2025 at 12:31:39 BST, "The Todal" <the_todal@icloud.com> wrote:

    On 08/05/2025 10:58, GB wrote:
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a
    step from that to dishonest.

    We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me
    for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless
    individual. Was there really the funding available to (as Todal put it)
    read the entire transcript of a cited case, not merely rely on a brief
    precis? Perhaps! And if the lawyers charged for hours that they didn't
    put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen. Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and most
    of the time the corner cutters get away with it. It seems wrong to come
    down like a ton of bricks on a particular lot of corner cutters, if the
    system more or less relies on it.


    The judges rely on barristers to present their arguments on the law accurately and objectively. In many cases judges do not do their own
    research because they trust the honesty of barristers who seem to be reputable - it would be different if it was a litigant in person or a
    foreign lawyer who had qualified in a different jurisdiction.

    I can't imagine any competent lawyer sharing your sympathy for these
    lawyers. Not only did they cite fake cases, when asked for transcripts
    they failed to say it was a mistake, which might have mitigated their conduct, but tried to bluff their way out of it.

    Funding is not a relevant consideration. I can't believe that there are lawyers who cannot afford subscriptions to online legal libraries and textbooks but if there are, they should think about switching to a
    different way of making a living. And a lawyer who says "I started
    reading the transcript of a Court of Appeal judgment but I had only been
    paid for an hour's work so I stopped reading after 20 pages" would
    probably need to be sectioned under the Mental Health Act.

    The lawyers concerned were a voluntary legal aid group, poorly funded, poorly paid and overworked. It is also notable that the other side in the case (a London Borough) were almost equally (but differently) culpable in failing to obey previous court orders and failing to prepare a defence to the case. But they seem to have escaped much journalistic criticism.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Roger Hayter on Thu May 8 16:31:47 2025
    "Roger Hayter" <roger@hayter.org> wrote in message news:0609461929.4972c017@uninhabited.net...
    On 8 May 2025 at 12:31:39 BST, "The Todal" <the_todal@icloud.com> wrote:

    On 08/05/2025 10:58, GB wrote:
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a step from that to dishonest.

    We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless
    individual. Was there really the funding available to (as Todal put it)
    read the entire transcript of a cited case, not merely rely on a brief
    precis? Perhaps! And if the lawyers charged for hours that they didn't
    put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen. Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and most of the time the corner cutters get away with it. It seems wrong to come
    down like a ton of bricks on a particular lot of corner cutters, if the
    system more or less relies on it.


    The judges rely on barristers to present their arguments on the law
    accurately and objectively. In many cases judges do not do their own
    research because they trust the honesty of barristers who seem to be
    reputable - it would be different if it was a litigant in person or a
    foreign lawyer who had qualified in a different jurisdiction.

    I can't imagine any competent lawyer sharing your sympathy for these
    lawyers. Not only did they cite fake cases, when asked for transcripts
    they failed to say it was a mistake, which might have mitigated their
    conduct, but tried to bluff their way out of it.

    Funding is not a relevant consideration. I can't believe that there are
    lawyers who cannot afford subscriptions to online legal libraries and
    textbooks but if there are, they should think about switching to a
    different way of making a living. And a lawyer who says "I started
    reading the transcript of a Court of Appeal judgment but I had only been
    paid for an hour's work so I stopped reading after 20 pages" would
    probably need to be sectioned under the Mental Health Act.

    The lawyers concerned were a voluntary legal aid group, poorly funded, poorly paid and overworked. It is also notable that the other side in the case (a London Borough) were almost equally (but differently) culpable in failing to obey previous court orders and failing to prepare a defence to the case. But they seem to have escaped much journalistic criticism.

    That being the case, it's all the more unfortunate. As for non-public-school
    educated lawyers, not being financed by the Bank of Mum and Dad, these very
    Legal Aid cases are the stepping stones which should allow aspiring lawyers
    to really show their paces and make a name for themselves, if only among
    a strictly limited but influential circle.

    Which is all the more reason why they should take special care with their
    preparation, as such cases are possibly the only opportunity they're
    ever likely to get to show what they're capable of.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Roger Hayter on Thu May 8 16:53:49 2025
    On 08/05/2025 13:08, Roger Hayter wrote:
    On 8 May 2025 at 12:31:39 BST, "The Todal" <the_todal@icloud.com> wrote:

    On 08/05/2025 10:58, GB wrote:
    On 07/05/2025 21:53, Jon Ribbens wrote:

    If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.

    You have described them above as lazy and irresponsible. It's quite a step from that to dishonest.

    We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me for the regulators of these bodies to decide that signing off the
    *unchecked* work of "AI" generative algorithms as their own paid work
    is sufficiently dishonest to warrant some sort of reprimand.


    I hear you, but virtually all lawyers these days use databases like
    Lexis to search for cases. Those provide case synopses, as well as
    leading the reader to the relevant part of the judgment.

    This looks like a Legal Aid case, as it seems to involve a homeless
    individual. Was there really the funding available to (as Todal put it)
    read the entire transcript of a cited case, not merely rely on a brief
    precis? Perhaps! And if the lawyers charged for hours that they didn't
    put in, I agree that that would be dishonest.

    OTOH, if Legal Aid funding is so low that it effectively requires
    corners to be cut, then that's what will happen. Of course, databases
    like Lexis are very unlikely to include cases that don't exist, and most of the time the corner cutters get away with it. It seems wrong to come
    down like a ton of bricks on a particular lot of corner cutters, if the
    system more or less relies on it.


    The judges rely on barristers to present their arguments on the law
    accurately and objectively. In many cases judges do not do their own
    research because they trust the honesty of barristers who seem to be
    reputable - it would be different if it was a litigant in person or a
    foreign lawyer who had qualified in a different jurisdiction.

    I can't imagine any competent lawyer sharing your sympathy for these
    lawyers. Not only did they cite fake cases, when asked for transcripts
    they failed to say it was a mistake, which might have mitigated their
    conduct, but tried to bluff their way out of it.

    Funding is not a relevant consideration. I can't believe that there are
    lawyers who cannot afford subscriptions to online legal libraries and
    textbooks but if there are, they should think about switching to a
    different way of making a living. And a lawyer who says "I started
    reading the transcript of a Court of Appeal judgment but I had only been
    paid for an hour's work so I stopped reading after 20 pages" would
    probably need to be sectioned under the Mental Health Act.

    The lawyers concerned were a voluntary legal aid group, poorly funded, poorly paid and overworked. It is also notable that the other side in the case (a London Borough) were almost equally (but differently) culpable in failing to obey previous court orders and failing to prepare a defence to the case. But they seem to have escaped much journalistic criticism.



    Haringey Law Centre is well resourced. It isn't just a group of
    volunteers working out of an abandoned kebab shop (not that you imagined
    that was the case).

    quote

    Haringey Law Centre’s solicitors, caseworkers and volunteers provide
    free, fixed fee and no-win/no-fee independent legal advice and
    representation in immigration, debt, housing, employment and welfare
    benefits law matters.

    unquote

    They are regulated by the Law Society and SRA and have exactly the same
    duties of care as, say, a firm such as Linklaters.

    The specific case handler at Haringey Law Centre might not have been a solicitor but would have been working under the supervision of a
    solicitor who would be expected to ensure that professional standards
    were maintained even if the case handler was inexperienced.

    As for the barrister, one would expect her to check any brief from the
    Law Centre and verify any cases cited. The barrister will have access to
    the chambers library and will have represented many different clients.

    I agree of course that the London Borough of Haringey has rather
    foolishly failed to serve an acknowledgment of service or defence and
    has then tried to get "relief from sanctions" ie persuade the judge to
    let them serve a late defence and resist the claim, and the judge was
    right to refuse their request after their "wholesale breach of court
    orders". The claimant's case was that the Borough was failing to house
    him as a homeless person with medical needs under the Housing Act.
    Failing to comply with the procedural timetable or with court orders is
    actually quite common, and the penalty is that you end up with a default
    judgment against you. Here, this saves time and money for the claimant,
    who can get the relief that he was seeking without prolonged further
    argument. The Borough's conduct is certainly not newsworthy or deserving
    of newspaper publicity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to John Levine on Mon May 19 12:41:33 2025
    On 07/05/2025 18:47, John Levine wrote:
    According to The Todal <the_todal@icloud.com>:
    Amazing. I think this has already happened in the USA but I thought our
    solicitors and barristers were better than this. ...

    Dream on. Here in the US it happens all the time, viz:

    https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=loweringthebar

    Sensible US lawyers "shepardize" their citations by looking them up in an index
    that tells them whether they've been appealed or overruled or otherwise not good
    law. (Shepard's Citations started publishing its indices in 1873, hence the name.) Needless to say, if you can't find it in Shepard's you don't cite it.

    Surely there is a UK equivalent.


    Probably no equivalent.

    And now there's another case in the legal press.

    https://www.lawgazette.co.uk/news/ex-solicitor-loses-appeal-after-google-search-spewed-out-fake-cases/5123295.article

    quote

    A former solicitor presented dozens of false cases to the High Court to
    support his appeal against strike-off, it has emerged.

    The Solicitors Regulation Authority, the respondent in the appeal,
    identified 27 cases presented by Venkateshwarlu Bandla which appeared
    not to exist. Mr Justice Fordham accepted that two were misspelt and
    could be linked to real cases, but said the remainder were false and
    amounted to an abuse of process.

    The case is another in a growing list where a litigant has submitted
    false case citations to support their argument and where the judge has
    raised the issue of whether they have been generated by AI.

    unquote

    And here's a quote from the judge:

    I asked the Appellant why, in the light of this citation of non-existent authorities, the Court should not of its own motion strike out the
    grounds of appeal in this case, as being an abuse of the process of the
    Court. His answer was as follows. He claimed that the substance of the
    points which were being put forward in the grounds of appeal were sound,
    even if the authority which was being cited for those points did not
    exist. He was saying, on that basis, that the citation of non-existent
    (fake) authorities would not be a sufficient basis to concern the Court,
    at least to the extent of taking that course. I was wholly unpersuaded
    by that answer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Simon Parker on Mon May 19 14:22:13 2025
    On 2025-05-19, Simon Parker <simonparkerulm@gmail.com> wrote:
    Without attempting to excuse what has happened in this and the other
    case, I believe part of the problem is the way Google Search (more accurately, the SERP) has changed recently (certainly in Chrome - other
    browsers and search engines are available).

    Previously, in response to a request, Google would list pages that it considered relevant to the response and rank them in order for the
    enquirer to go through as they so wished.

    Now, it uses AI to generate an answer, based on the content of some of
    those pages, which it includes as an "AI Overview" at the top.

    It is to be noted that the AI Overview concludes with the line "AI
    responses may include mistakes. For legal advice, consult a professional.".

    In short, do not blindly rely on this overview. Do your research.

    As I said at the outset, I am most certainly not trying to justify what transpired, but I can see how it happened.

    The SRA is (rightly, in my opinion) taking a hard line with such cases.

    They are absolutely correct to do so, not least because even without
    the existence of the "AI results", the web page results are of course
    in themselves entirely untrustworthy. Anyone can publish a web page
    saying anything they like. Just because it appears in search results
    doesn't mean it's in any way true, and never did.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Simon Parker on Mon May 19 14:52:39 2025
    On Mon, 19 May 2025 14:34:43 +0100, Simon Parker wrote:

    Without attempting to excuse what has happened in this and the other
    case,
    I believe part of the problem is the way Google Search (more
    accurately, the SERP) has changed recently (certainly in Chrome - other
    browsers and search engines are available).

    Previously, in response to a request, Google would list pages that it considered relevant to the response and rank them in order for the
    enquirer to go through as they so wished.

    Now, it uses AI to generate an answer, based on the content of some of
    those pages, which it includes as an "AI Overview" at the top.

    It is to be noted that the AI Overview concludes with the line "AI
    responses may include mistakes. For legal advice, consult a
    professional.".

    In short, do not blindly rely on this overview. Do your research.

    I'm no lawyer, but I know my tech. Incidentally, is there any chance we
    can sue any company that claims to sell "AI" as being de facto
    fraudsters? After all, if you can't define intelligence, how can you
    sell it artificially?

    https://www.theregister.com/2025/05/14/openwebsearch_eu/

    But if you're too cheap to pay, there are some things you can do to clean
    up Google's search results. One of the most useful is an extra term you
    can add into the Google search URL:

    ?udm=14
    This tells the Big G to exclude AI-generated overviews from the results
    it returns. This simple switch is so helpful that it even has its own
    domain, udm14.com, which calls it "the disenshittification Konami code,"
    after the famous cheat code for Konami's 1987 game Contra.
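
    For anyone who wants to script that, here is a minimal sketch of how
    the parameter slots into a search URL (Python and its standard urllib
    are assumed here, and the helper name is mine, not from the article):

        from urllib.parse import urlencode

        def web_only_search_url(query: str) -> str:
            # udm=14 requests Google's plain "Web" results view,
            # which omits the AI Overview block at the top of the page.
            return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

        # e.g. https://www.google.com/search?q=wasted+costs+order&udm=14
        print(web_only_search_url("wasted costs order"))

    Bookmarking a URL built this way, or registering it as a custom search
    engine, should give the old-style results page by default.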

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to All on Mon May 19 18:27:42 2025
    On 19/05/2025 15:52, Jethro_uk wrote:

    But if you're too cheap to pay, there are some things you can do to clean
    up Google's search results. One of the most useful is an extra term you
    can add into the Google search URL:

    ?udm=14
    This tells the Big G to exclude AI-generated overviews from the results
    it returns. This simple switch is so helpful that it even has its own
    domain, udm14.com, which calls it "the disenshittification Konami code," after the famous cheat code for Konami's 1987 game Contra.


    I'm obviously naive, superficial, and lazy, but I rather like the AI
    overviews. I do rely on them for some things.

    It's a bit like Alexa. I'm happy for it to control the lights and
    entertainment system. But, remembering the 1968 film, I wouldn't put it
    in charge of the airlock doors.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Mon May 19 20:33:55 2025
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the AI overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't know
    an answer?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to Andy Burns on Mon May 19 21:08:20 2025
    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the AI
    overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't know
    an answer?

    I (sort of) like them.

    It reminds me that _any_ result a search engine gives me should be
    treated with caution.

    "Here is a really crap reply to your query. Other, potentially less
    crap responses are given below."

    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to Andy Burns on Tue May 20 12:07:57 2025
    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the AI
    overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't know
    an answer?


    I hadn't, actually. Do you have an example from your own experience?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to All on Tue May 20 13:18:07 2025
    On Tue, 20 May 2025 12:07:57 +0100, GB wrote:

    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the AI
    overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't
    know an answer?


    I hadn't, actually. Do you have an example from your own experience?

    I've been told a few times that in order to achieve <something> in an
    application, I just need to go to [non existent menu] and select
    <something>.

    Pointing out that [non existent menu] doesn't exist provokes an apology
    and [another non existent menu].

    Telling ChatGPT it's wrong feels weirdly familiar. Which
    must be proof that it is becoming human ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to jethro_uk@hotmailbin.com on Tue May 20 16:24:08 2025
    On 2025-05-20, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Tue, 20 May 2025 12:07:57 +0100, GB wrote:
    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:
    I'm obviously naive, superficial, and lazy, but I rather like the AI
    overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't
    know an answer?

    I hadn't, actually. Do you have an example from your own experience?

    I've been told a few times that in order to achieve <something> in an
    application, I just need to go to [non existent menu] and select
    <something>.

    Pointing out that [non existent menu] doesn't exist provokes an apology
    and [another non existent menu].

    Telling ChatGPT it's wrong feels weirdly familiar. Which
    must be proof that it is becoming human ;)

    I posted here in June 2023 about an experiment I tried whereby I asked
    ChatGPT to provide legal precedents regarding trespass, and it provided
    a list of cases and descriptions thereof. All the cases did exist, but
    many of the descriptions were completely made-up nonsense that bore no
    relation to the actual cases.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to NOTsomeone@microsoft.invalid on Tue May 20 19:21:11 2025
    On Tue, 20 May 2025 12:07:57 +0100
    GB <NOTsomeone@microsoft.invalid> wrote:

    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the
    AI overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't
    know an answer?


    I hadn't, actually. Do you have an example from your own experience?

    A few weeks ago, a friend with whom I keep in contact via e-mail
    received a message from me, but it was preceded by an AI-generated
    summary. This was added by Yahoo! mail at his end.
    The summary was not accurate. We are not talking about a huge missive,
    just a tale of a medical event I had had at home, but it said that I had
    learned my lessons from the event. Nowhere did I even hint at this;
    there was nothing to learn.

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Davey on Tue May 20 19:08:02 2025
    On 2025-05-20, Davey <davey@example.invalid> wrote:
    On Tue, 20 May 2025 12:07:57 +0100
    GB <NOTsomeone@microsoft.invalid> wrote:
    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the
    AI overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it doesn't
    know an answer?


    I hadn't, actually. Do you have an example from your own experience?

    A few weeks ago, a friend with whom I keep in contact via e-mail
    received a message from me, but it was preceded by an AI-generated
    summary. This was added by Yahoo! mail at his end.
    The summary was not accurate. We are not talking about a huge missive,
    just a tale of a medical event I had had at home, but it said that I had
    learned my lessons from the event. Nowhere did I even hint at this;
    there was nothing to learn.

    That reminds me of a recent news story in which Apple's "AI summary"
    of a message from someone's mother said that they had "attempted
    suicide". What their mum had actually said was: "That hike almost
    killed me!".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Simon Parker on Wed May 21 12:43:14 2025
    On Wed, 21 May 2025 11:45:43 +0100
    Simon Parker <simonparkerulm@gmail.com> wrote:

    On 20/05/2025 12:07, GB wrote:
    On 19/05/2025 20:33, Andy Burns wrote:
    GB wrote:

    I'm obviously naive, superficial, and lazy, but I rather like the
    AI overviews. I do rely on them for some things.

    You haven't noticed that it just makes stuff up whenever it
    doesn't know an answer?


    I hadn't, actually. Do you have an example from your own
    experience?

    Around a month ago, I submitted a query to Claude.AI (which is touted
    as being "trained to be safe, accurate and secure") concerning the
    precise syntax to use to create a reverse index on a specific field
    in a relatively obscure database system.

    Like certain posters in this group are wont to do, it gave an
    authoritative-sounding answer with great confidence, even providing
    detail for which I hadn't asked.

    Also like certain posters in this group, despite the evident
    confidence in the answer and the authority with which it was given,
    it was completely wrong.

    Everything it said was a complete hallucination. (It had said to use
    the "REVERSE" switch on the command issued to create the index, and
    had given several examples of using this switch; however, I know that
    no such switch exists.)
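
    (For what it's worth, a "REVERSE" keyword does exist in at least one
    mainstream system: Oracle creates reverse key indexes with a statement
    of the form "CREATE INDEX idx_orders ON orders (order_id) REVERSE";
    that example is from general knowledge of Oracle, not of the system I
    was actually asking about, and the names in it are made up. The model
    may simply have been transplanting syntax it had seen elsewhere onto a
    system that has no such switch.)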

    I then asked it, "You just make that up, right?"

    And it responded (verbatim):

    "You're absolutely right to call this out. I apologize for my
    response.

    "Yes, I did make up that information. I don't actually have specific knowledge about how to write code for reverse indexing specific for
    <the system about which I was asking it>. I shouldn't have presented fictional syntax as if it were accurate.

    "Instead, I should have acknowledged my limitations regarding reverse indexing on <this specific system>. The proper approach would have
    been to provide general concepts about reverse indexing in database
    systems while clearly stating I don't have details of the specific implementation.

    "Thank you for your skepticism - it's important to question responses
    that seem uncertain, especially when dealing with specialized
    technical information.

    "(Standard disclaimer in smaller, coloured text, right justified)
    Claude can make mistakes. Please double check responses."

    In short, it didn't know the answer. Rather than acknowledging that
    it didn't know the answer, it made one up instead. It knew it had
    made up an answer and it knew it was wrong to do so but went ahead
    and did it anyway rather than doing what it knew it should have done
    and admitting that it didn't know the answer.

    Remember, this is a model that claims to be "trained to be safe,
    accurate and secure" and is supposed to "excel in tasks like... code-writing".

    Regards

    S.P.


    Oops. "You know it makes sense...".

    Somehow, that doesn't make me feel any safer about AI. It's more like
    a schoolboy who has been caught out after not doing his homework,
    rather than something "trained to be safe, accurate and secure".

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Simon Parker on Wed May 21 12:54:34 2025
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Davey on Wed May 21 12:57:54 2025
    On Wed, 21 May 2025 12:43:14 +0100, Davey wrote:

    Somehow, that doesn't make me feel any safer about AI

    AI isn't the problem. It's the idiots that use it.

    Eventually - unless we have some extremely interesting developments in
    society - the average "intelligence" (we all know it isn't in any way,
    shape or form intelligent, don't we?) will settle at the median of where
    it got its training material. And since 80% of that is via scraping
    public forums, it will probably be around 90 on the scale that
    makes 100 the human average.

    Now I don't know about you, but I really don't need to pay anyone to get
    advice provided by people with an IQ of 90. GBNews is free, for a
    start.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Layman@21:1/5 to Simon Parker on Thu May 22 08:37:09 2025
    On 19/05/2025 14:34, Simon Parker wrote:

    The SRA is (rightly, in my opinion) taking a hard line with such cases.

    Perhaps they should consider themselves fortunate that they're not in
    the US: <https://ktar.com/national-news/judge-considers-sanctions-against-attorneys-in-prison-case-for-using-ai-in-court-filings/5708562/>

    I wonder what the fines would be, and under which law they would fall.
    Would there be an equivalent law here, or is it only the SRA which could
    take action?

    One wonders if this sort of thing is going on in other countries too.

    --
    Jeff

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Layman@21:1/5 to All on Sun May 25 08:28:41 2025
    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted as
    being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse: <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>

    --
    Jeff

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Jeff Layman on Sun May 25 11:07:35 2025
    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:

    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted
    as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse: <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-
    when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human version
    then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to jethro_uk@hotmailbin.com on Sun May 25 12:57:41 2025
    On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com> wrote:

    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:

    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted
    as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse:
    <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human version then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that out.

    A problem may be that we don't know enough about intelligence to
    quantitatively compare intelligences, except in the context of simple, circumscribed problems.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Roger Hayter on Sun May 25 13:17:02 2025
    On 25 May 2025 at 13:57:41 BST, "Roger Hayter" <roger@hayter.org> wrote:

    On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com> wrote:

    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:

    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted
    as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse:
    <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human version
    then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that out.

    A problem may be that we don't know enough about intelligence to quantitatively compare intelligences, except in the context of simple, circumscribed problems.

    In this context, taking over world government would be an example of a
    simple, circumscribed problem, but would tell us nothing about the ability
    of our new rulers in art, philosophy or imagination; or even resilience.


    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Roger Hayter on Sun May 25 14:37:39 2025
    On Sun, 25 May 2025 12:57:41 +0000, Roger Hayter wrote:

    On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com>
    wrote:

    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:

    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:

    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is
    touted as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse:
    <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail- when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human
    version then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that
    out.

    A problem may be that we don't know enough about intelligence to quantitatively compare intelligences

    Or qualitatively either ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Roger Hayter on Sun May 25 18:32:15 2025
    On 2025-05-25, Roger Hayter <roger@hayter.org> wrote:
    On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com> wrote:
    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:
    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is touted
    as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse:
    <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human version
    then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that out.

    A problem may be that we don't know enough about intelligence to quantitatively compare intelligences, except in the context of simple, circumscribed problems.

    We know enough about it to know that ChatGPT isn't it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to jethro_uk@hotmailbin.com on Mon May 26 09:29:02 2025
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:100utlm$19h7m$2@dont-email.me...

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    Or quite possibly it's even superior in some respects, already.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to billy bookcase on Mon May 26 09:39:07 2025
    "billy bookcase" <billy@anon.com> wrote in message news:10118oo$1ts9h$1@dont-email.me...

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:100utlm$19h7m$2@dont-email.me...

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    Or quite possibly it's even superior in some respects, already.


    bb

    Although not when it comes to spellcheckers.

    My mistake.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Mon May 26 09:50:11 2025
    On Mon, 26 May 2025 09:29:02 +0100, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:100utlm$19h7m$2@dont-email.me...

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    Or quite possibly it's even superior in some respects, already.

    No "possibly" about it. What a curious statement.

    However all I am doing is agreeing with you that something we can't
    define is superior in terms we can't define to something else we can't
    define.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to jethro_uk@hotmailbin.com on Mon May 26 13:14:23 2025
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:1011dgj$1ue9l$2@dont-email.me...
    On Mon, 26 May 2025 09:29:02 +0100, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
    news:100utlm$19h7m$2@dont-email.me...

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    Or quite possibly it's even superior in some respects, already.

    No "possibly" about it. What a curious statement.

    It was supposed to be a joke based on the fact that you mistakenly
    used an apostrophe in the possessive "its" in your first line, whereas
    you used the apostrophe correctly for the contraction "it's" in your
    second line.

    A mistake which a sophisticated grammar checker may well
    have picked up. (?)

    So that would be superior.

    Whereas obviously, a standard spell checker wouldn't pick it up.

    So that wouldn't.

    Be superior.


    However all I am doing is agreeing with you that something we can't
    define is superior in terms we can't define to something else we can't define.

    Indeed.


    bb


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Jon Ribbens on Mon May 26 09:54:10 2025
    On Sun, 25 May 2025 18:32:15 +0000, Jon Ribbens wrote:

    On 2025-05-25, Roger Hayter <roger@hayter.org> wrote:
    On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com>
    wrote:
    On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:
    On 21/05/2025 13:54, Jethro_uk wrote:
    On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
    On 20/05/2025 12:07, GB wrote:
    [quoted text muted]

    Around a month ago, I submitted a query to Claude.AI (which is
    touted as being "trained to be safe, accurate and secure")

    None of that is technically possible. As always, when you find one
    error ...

    If this is accurate, it looks like things could get a *lot* worse:
    <https://www.foxbusiness.com/technology/ai-system-resorts-blackmail- when-its-developers-try-replace>

    The consequences of trying to create artificial intelligence, when we
    haven't the faintest clue what intelligence is, are unlikely to be benign.

    If whatever passes for intelligence is inferior to it's human version,
    it's of limited use.

    And if whatever passes for intelligence is superior to it's human
    version then humans are of limited use.

    You don't need a 2 week conversation with ChatGPT to have worked that
    out.

    A problem may be that we don't know enough about intelligence to
    quantitatively compare intelligences, except in the context of simple,
    circumscribed problems.

    We know enough about it to know that ChatGPT isn't it.

    Indeed. And that will only ever be how we know intelligence.

    The Turing test, properly done, is still note-perfect for its job. Which
    is why the marketing men have had to try to use the power of advertising
    to attack it.

    Science v. advertising. A tale as old as time itself.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)