Amazing. I think this has already happened in the USA but I thought
our solicitors and barristers were better than this. The culprit
probably being "artificial intelligence" which is reputed to take the
place of lawyers and other professionals eventually.
https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf
The Todal <the_todal@icloud.com> wrote in news:m814u8Fo8qmU2@mid.individual.net:
Amazing. I think this has already happened in the USA but I thought
our solicitors and barristers were better than this. The culprit
probably being "artificial intelligence" which is reputed to take the
place of lawyers and other professionals eventually.
https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf
Whilst it is a judgment for wasted costs, I can't see any recommendation for a referral to the Law Society for misleading the court in such a substantial manner, which comes as a surprise. I would expect a serious misconduct referral.
On 07/05/2025 14:59, Peter Walker wrote:
The Todal <the_todal@icloud.com> wrote in
news:m814u8Fo8qmU2@mid.individual.net:
Amazing. I think this has already happened in the USA but I thought
our solicitors and barristers were better than this. The culprit
probably being "artificial intelligence" which is reputed to take
the place of lawyers and other professionals eventually.
https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf
Whilst it is a judgment for wasted costs I can't see any
recommendation for referral to the Law Society for misleading the
court in such a substantial manner which comes as a surprise. I would
expect a serious misconduct referral.
It is mentioned at the end, actually. Quote:
I am going to do two further things which I am going to want recorded
in the order. The first is that I order at public expense a transcript
of these extemporary judgments that I have provided in this case, all
three of them. Secondly, I will require the Defendant to send the
transcript to the Bar Standards Board and to the Solicitors Regulation Authority. It will be a matter for both counsel whether they comply
with, what I believe are their obligations of self-reporting and
reporting of knowledge of another, and it will be a matter for the solicitors' firm as to whether they have a similar requirement of self-reporting under the Solicitors Regulation Authority rules.
On 07/05/2025 14:19, The Todal wrote:
Amazing. I think this has already happened in the USA but I thought
our solicitors and barristers were better than this. The culprit
probably being "artificial intelligence" which is reputed to take the
place of lawyers and other professionals eventually.
https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf
No, the culprits are the solicitors and barrister involved who naively believed everything the internet told them, and who did not acknowledge
their errors even when pointed out to them but doubled down.
One wonders what they added to the process and what they were being paid
for.
What punishment should they receive?
eg in the first minute of the video here: https://www.lexisnexis.co.uk/lexis-plus/lexis-plus-ai.html
it shows how you can ask it a legal question and it then returns potential arguments with citations. In the example:
"What potential age or disability discrimination claims could be brought under the Equality Act 2010 by a 52-year old employee with diabetes?"
then ask it:
"What is the relevant case law?"
and more citations come out, with a tiny footnote "AI-generated content should
be reviewed for accuracy". Then:
On 07/05/2025 14:53, Norman Wells wrote:
On 07/05/2025 14:19, The Todal wrote:
Amazing. I think this has already happened in the USA but I thought
our solicitors and barristers were better than this. The culprit
probably being "artificial intelligence" which is reputed to take the
place of lawyers and other professionals eventually.
https://www.judiciary.uk/wp-content/uploads/2025/05/Ayinde-v-LB-Haringey-Judgment-Ritchie-J-03.04.25-HD-2.pdf
No, the culprits are the solicitors and barrister involved who naively
believed everything the internet told them, and who did not
acknowledge their errors even when pointed out to them but doubled down.
Yes, you are absolutely right, of course. But a lawyer who conducts his
research on "the internet" and merely copies and pastes from the results
is a lazy and irresponsible lawyer. It means the work that should take
several hours actually takes about fifteen minutes.
Obviously a lawyer should read the entire transcript of a cited case,
not merely rely on a brief precis. If they had tried to get the
transcripts they would have realised themselves that the cases were fake
and not included them in any written submissions.
One wonders what they added to the process and what they were being
paid for.
What punishment should they receive?
If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.
Theo <theom+news@chiark.greenend.org.uk> wrote:
eg in the first minute of the video here:
https://www.lexisnexis.co.uk/lexis-plus/lexis-plus-ai.html
it shows how you can ask it a legal question and it then returns potential arguments with citations. In the example:
"What potential age or disability discrimination claims could be brought
under the Equality Act 2010 by a 52-year old employee with diabetes?"
then ask it:
"What is the relevant case law?"
and more citations come out, with a tiny footnote "AI-generated content should
be reviewed for accuracy". Then:
Amusingly, one of the citations is:
Richardson v. Newburgh Enlarged City Sch. Dist., 984 F. Supp. 735 (New York Southern District Court, 1997)
which is a real citation: https://law.justia.com/cases/federal/district-courts/FSupp/984/735/1401858/
but one that happens to be completely out of jurisdiction for a query about the UK's Equality Act 2010. So they can't get it right even in their demo video.
On 07/05/2025 15:11, The Todal wrote:
If there are findings of dishonesty by the professional bodies, then suspension and
possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a step from that to
dishonest.
"GB" <NOTsomeone@microsoft.invalid> wrote in message news:vvg16i$141cp$1@dont-email.me...
On 07/05/2025 15:11, The Todal wrote:
If there are findings of dishonesty by the professional bodies, then
suspension and
possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
If they're not doing work for which they are being paid, then they are being dishonest. Full stop. I can't honestly see quite how anyone could possibly disagree.
If there are findings of dishonesty by the professional bodies, then
suspension and possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me
for the regulators of these bodies to decide that signing off the
*unchecked* work of "AI" generative algorithms as their own paid work
is sufficiently dishonest to warrant some sort of reprimand.
On 07/05/2025 21:53, Jon Ribbens wrote:
If there are findings of dishonesty by the professional bodies, then
suspension and possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
We're not talking about the general public here, we're talking about the
members of professional bodies. It doesn't seem much of a stretch to me
for the regulators of these bodies to decide that signing off the
*unchecked* work of "AI" generative algorithms as their own paid work
is sufficiently dishonest to warrant some sort of reprimand.
I hear you, but virtually all lawyers these days use databases like Lexis to search for
cases. Those provide case synopses, as well as leading the reader to the relevant part
of the judgment.
This looks like a Legal Aid case, as it seems to involve a homeless
individual. Was there really the funding available to (as Todal put it)
read the entire transcript of a cited case, not merely rely on a brief
precis? Perhaps! And if the lawyers charged for hours that they didn't
put in, I agree that would be dishonest.
OTOH, if Legal Aid funding is so low that it effectively requires corners to be cut,
then that's what will happen. Of course, databases like Lexis are very unlikely to
include cases that don't exist, and most of the time the corner cutters get away with
it. It seems wrong to come down like a ton of bricks on a particular lot of corner
cutters, if the system more or less relies on it.
On 7 May 2025 at 23:55:50 BST, "billy bookcase" wrote:
"GB" <NOTsomeone@microsoft.invalid> wrote in message
news:vvg16i$141cp$1@dont-email.me...
On 07/05/2025 15:11, The Todal wrote:
If there are findings of dishonesty by the professional bodies, then
suspension and
possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
If they're not doing work for which they are being paid, then they are
being dishonest. Full stop. I can't honestly see quite how anyone could
possibly disagree.
Indeed - strikes me as an act similar in manner to plagiarism.
As the judge says - professional misconduct. Given the circumstances, and especially the way they've tried to pass it off as something trivial, I'd be surprised if they practise again.
On 08/05/2025 10:58, GB wrote:
On 07/05/2025 21:53, Jon Ribbens wrote:
If there are findings of dishonesty by the professional bodies, then suspension and possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
We're not talking about the general public here, we're talking about the members of professional bodies. It doesn't seem much of a stretch to me
for the regulators of these bodies to decide that signing off the
*unchecked* work of "AI" generative algorithms as their own paid work
is sufficiently dishonest to warrant some sort of reprimand.
I hear you, but virtually all lawyers these days use databases like
Lexis to search for cases. Those provide case synopses, as well as
leading the reader to the relevant part of the judgment.
This looks like a Legal Aid case, as it seems to involve a homeless
individual. Was there really the funding available to (as Todal put it)
read the entire transcript of a cited case, not merely rely on a brief
precis? Perhaps! And if the lawyers charged for hours that they didn't
put in, I agree that that would be dishonest.
OTOH, if Legal Aid funding is so low that it effectively requires
corners to be cut, then that's what will happen. Of course, databases
like Lexis are very unlikely to include cases that don't exist, and most
of the time the corner cutters get away with it. It seems wrong to come
down like a ton of bricks on a particular lot of corner cutters, if the
system more or less relies on it.
The judges rely on barristers to present their arguments on the law accurately and objectively. In many cases judges do not do their own
research because they trust the honesty of barristers who seem to be reputable - it would be different if it was a litigant in person or a
foreign lawyer who had qualified in a different jurisdiction.
I can't imagine any competent lawyer sharing your sympathy for these
lawyers. Not only did they cite fake cases, when asked for transcripts
they failed to say it was a mistake, which might have mitigated their conduct, but tried to bluff their way out of it.
Funding is not a relevant consideration. I can't believe that there are lawyers who cannot afford subscriptions to online legal libraries and textbooks but if there are, they should think about switching to a
different way of making a living. And a lawyer who says "I started
reading the transcript of a Court of Appeal judgment but I had only been
paid for an hour's work so I stopped reading after 20 pages" would
probably need to be sectioned under the Mental Health Act.
On 8 May 2025 at 12:31:39 BST, "The Todal" <the_todal@icloud.com> wrote:
On 08/05/2025 10:58, GB wrote:
On 07/05/2025 21:53, Jon Ribbens wrote:
If there are findings of dishonesty by the professional bodies, then
suspension and possibly strike-offs would be appropriate.
You have described them above as lazy and irresponsible. It's quite a
step from that to dishonest.
We're not talking about the general public here, we're talking about the
members of professional bodies. It doesn't seem much of a stretch to me
for the regulators of these bodies to decide that signing off the
*unchecked* work of "AI" generative algorithms as their own paid work
is sufficiently dishonest to warrant some sort of reprimand.
I hear you, but virtually all lawyers these days use databases like
Lexis to search for cases. Those provide case synopses, as well as
leading the reader to the relevant part of the judgment.
This looks like a Legal Aid case, as it seems to involve a homeless
individual. Was there really the funding available to (as Todal put it)
read the entire transcript of a cited case, not merely rely on a brief
precis? Perhaps! And if the lawyers charged for hours that they didn't
put in, I agree that that would be dishonest.
OTOH, if Legal Aid funding is so low that it effectively requires
corners to be cut, then that's what will happen. Of course, databases
like Lexis are very unlikely to include cases that don't exist, and most
of the time the corner cutters get away with it. It seems wrong to come
down like a ton of bricks on a particular lot of corner cutters, if the
system more or less relies on it.
The judges rely on barristers to present their arguments on the law
accurately and objectively. In many cases judges do not do their own
research because they trust the honesty of barristers who seem to be
reputable - it would be different if it was a litigant in person or a
foreign lawyer who had qualified in a different jurisdiction.
I can't imagine any competent lawyer sharing your sympathy for these
lawyers. Not only did they cite fake cases, when asked for transcripts
they failed to say it was a mistake, which might have mitigated their
conduct, but tried to bluff their way out of it.
Funding is not a relevant consideration. I can't believe that there are
lawyers who cannot afford subscriptions to online legal libraries and
textbooks but if there are, they should think about switching to a
different way of making a living. And a lawyer who says "I started
reading the transcript of a Court of Appeal judgment but I had only been
paid for an hour's work so I stopped reading after 20 pages" would
probably need to be sectioned under the Mental Health Act.
The lawyers concerned were a voluntary legal aid group, poorly funded, poorly paid and overworked. It is also notable that the other side in the case (a London Borough) were almost equally (but differently) culpable in failing to obey previous court orders and failing to prepare a defence to the case. But they seem to have escaped much journalistic criticism.
According to The Todal <the_todal@icloud.com>:
Amazing. I think this has already happened in the USA but I thought our
solicitors and barristers were better than this. ...
Dream on. Here in the US it happens all the time, viz:
https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=loweringthebar
Sensible US lawyers "shepardize" their citations by looking them up in an index
that tells them whether they've been appealed or overruled or otherwise not good
law. (Shepard's Citations started publishing its indices in 1873, hence the name.) Needless to say, if you can't find it in Shepard's you don't cite it.
Surely there is a UK equivalent.
Without attempting to excuse what has happened in this and the other
case, I believe part of the problem is the way Google Search (more
accurately, the SERP) has changed recently (certainly in Chrome - other
browsers and search engines are available).
Previously, in response to a request, Google would list pages that it
considered relevant to the request and rank them in order for the
enquirer to go through as they wished.
Now, it uses AI to generate an answer, based on the content of some of
those pages, which it includes as an "AI Overview" at the top.
It is to be noted that the AI Overview concludes with the line "AI
responses may include mistakes. For legal advice, consult a professional.".
In short, do not blindly rely on this overview. Do your research.
As I said at the outset, I am most certainly not trying to justify what transpired, but I can see how it happened.
The SRA is, (rightly in my opinion), taking a hard line with such cases.
But if you're too cheap to pay, there are some things you can do to clean
up Google's search results. One of the most useful is an extra term you
can add into the Google search URL:
?udm=14
This tells the Big G to exclude AI-generated overviews from the results
it returns. This simple switch is so helpful that it even has its own
domain, udm14.com, which calls it "the disenshittification Konami code," after the famous cheat code for Konami's 1987 game Contra.
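For anyone who prefers to script it, here is a minimal sketch in Python of
building a search URL with that parameter. The query string below is just a
made-up example; only the udm=14 part is the trick described above:

    # Build a Google search URL that asks for plain web results,
    # suppressing the AI Overview block via udm=14.
    from urllib.parse import urlencode

    query = "Ayinde v Haringey wasted costs"   # hypothetical example query
    url = "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})
    print(url)
    # -> https://www.google.com/search?q=Ayinde+v+Haringey+wasted+costs&udm=14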
I'm obviously naive, superficial, and lazy, but I rather like the AI overviews. I do rely on them for some things.
GB wrote:
I'm obviously naive, superficial, and lazy, but I rather like the AI
overviews. I do rely on them for some things.
You haven't noticed that it just makes stuff up whenever it doesn't know
an answer?
On 19/05/2025 20:33, Andy Burns wrote:
GB wrote:
I'm obviously naive, superficial, and lazy, but I rather like the AI
overviews. I do rely on them for some things.
You haven't noticed that it just makes stuff up whenever it doesn't
know an answer?
I hadn't, actually. Do you have an example from your own experience?
On Tue, 20 May 2025 12:07:57 +0100, GB wrote:
On 19/05/2025 20:33, Andy Burns wrote:
GB wrote:
I'm obviously naive, superficial, and lazy, but I rather like the AI
overviews. I do rely on them for some things.
You haven't noticed that it just makes stuff up whenever it doesn't
know an answer?
I hadn't, actually. Do you have an example from your own experience?
I've been told a few times that in order to achieve <something> in an application, then I just need to go to [non existent menu] and select
<something>.
Pointing out that [non existent menu] doesn't exist provokes an apology
and [another non existent menu].
Telling ChatGPT it's wrong has a weirdly familiar sensation to it. Which
must be proof that it is becoming human ;)
On Tue, 20 May 2025 12:07:57 +0100
GB <NOTsomeone@microsoft.invalid> wrote:
On 19/05/2025 20:33, Andy Burns wrote:
GB wrote:
I'm obviously naive, superficial, and lazy, but I rather like the
AI overviews. I do rely on them for some things.
You haven't noticed that it just makes stuff up whenever it doesn't
know an answer?
I hadn't, actually. Do you have an example from your own experience?
A few weeks ago, a friend with whom I keep in contact via e-mail
received a message from me, but it was preceded by an AI-generated
summary. This was added by Yahoo! mail at his end.
The summary was not accurate. We are not talking about a huge missive,
just a tale of a medical event I had had at home, but it said that I had
learned my lessons from the event. Nowhere did I even hint at this;
there was nothing to learn.
On 20/05/2025 12:07, GB wrote:
On 19/05/2025 20:33, Andy Burns wrote:
GB wrote:
I'm obviously naive, superficial, and lazy, but I rather like the
AI overviews. I do rely on them for some things.
You haven't noticed that it just makes stuff up whenever it
doesn't know an answer?
I hadn't, actually. Do you have an example from your own
experience?
Around a month ago, I submitted a query to Claude.AI (which is touted
as being "trained to be safe, accurate and secure") concerning the
precise syntax to use to create a reverse index on a specific field
in a relatively obscure database system.
Like certain posters in this group are wont to do, it gave an
authoritative sounding answer with great confidence, even providing
detail for which I hadn't asked.
Also like certain posters in this group, despite the evident
confidence in the answer and the authority with which it was given,
it was completely wrong.
Everything it said was a complete hallucination. (It had said to use
the "REVERSE" switch on the command issued to create the index, and
had given several examples of using this switch, however I know that
no such switch exists.)
I then asked it, "You just make that up, right?"
And it responded (verbatim):
"You're absolutely right to call this out. I apologize for my
response.
"Yes, I did make up that information. I don't actually have specific knowledge about how to write code for reverse indexing specific for
<the system about which I was asking it>. I shouldn't have presented fictional syntax as if it were accurate.
"Instead, I should have acknowledged my limitations regarding reverse indexing on <this specific system>. The proper approach would have
been to provide general concepts about reverse indexing in database
systems while clearly stating I don't have details of the specific implementation.
"Thank you for your skepticism - it's important to question responses
that seem uncertain, especially when dealing with specialized
technical information.
"(Standard disclaimer in smaller, coloured text, right justified)
Claude can make mistakes. Please double check responses."
In short, it didn't know the answer. Rather than acknowledging that
it didn't know the answer, it made one up instead. It knew it had
made up an answer and it knew it was wrong to do so but went ahead
and did it anyway rather than doing what it knew it should have done
and admitting that it didn't know the answer.
Remember this is a model that claims to be "trained to be safe,
accurate and secure" that is supposed to "excel in tasks like... code-writing".
Regards
S.P.
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is touted as being "trained to be safe, accurate and secure")
Somehow, that doesn't make me feel any safer about AI
On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is touted as
being "trained to be safe, accurate and secure")
None of that is technically possible. As always, when you find one
error ...
On 21/05/2025 13:54, Jethro_uk wrote:
On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is touted
as being "trained to be safe, accurate and secure")
None of that is technically possible. As always, when you find one
error ...
If this is accurate, it looks like things could get a *lot* worse:
<https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>
On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:
On 21/05/2025 13:54, Jethro_uk wrote:
On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is touted
as being "trained to be safe, accurate and secure")
None of that is technically possible. As always, when you find one
error ...
If this is accurate, it looks like things could get a *lot* worse:
<https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>
The consequences of trying to create artificial intelligence, when we
haven't the faintest clue what intelligence is, are unlikely to be benign.
If whatever passes for intelligence is inferior to its human version,
it's of limited use.
And if whatever passes for intelligence is superior to its human version,
then humans are of limited use.
You don't need a 2 week conversation with ChatGPT to have worked that out.
On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com> wrote:
On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:
On 21/05/2025 13:54, Jethro_uk wrote:
On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is touted as being "trained to be safe, accurate and secure")
None of that is technically possible. As always, when you find one
error ...
If this is accurate, it looks like things could get a *lot* worse:
<https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>
The consequences of trying to create artificial intelligence when we
haven't the faintest clue what intelligence are unlikely to be benign.
If whatever passes for intelligence is inferior to it's human version,
it's of limited use.
And if whatever passes for intelligence is superior to it's human version
then humans are of limited use.
You don't need a 2 week conversation with ChatGPT to have worked that out.
A problem may be that we don't know enough about intelligence to quantitatively compare intelligences, except in the context of simple, circumscribed problems.
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:100utlm$19h7m$2@dont-email.me...
If whatever passes for intelligence is inferior to it's human version,
it's of limited use.
Or quite possibly it's even superior in some respects, already.
bb
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:100utlm$19h7m$2@dont-email.me...
If whatever passes for intelligence is inferior to it's human version,
it's of limited use.
Or quite possibly it's even superior in some respects, already.
On Mon, 26 May 2025 09:29:02 +0100, billy bookcase wrote:
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
news:100utlm$19h7m$2@dont-email.me...
If whatever passes for intelligence is inferior to it's human version,
it's of limited use.
Or quite possibly it's even superior in some respects, already.
No "possibly" about it. What a curious statement.
However all I am doing is agreeing with you that something we can't
define is superior in terms we can't define to something else we can't define.
On 2025-05-25, Roger Hayter <roger@hayter.org> wrote:
On 25 May 2025 at 12:07:35 BST, "Jethro_uk" <jethro_uk@hotmailbin.com>
wrote:
On Sun, 25 May 2025 08:28:41 +0100, Jeff Layman wrote:
On 21/05/2025 13:54, Jethro_uk wrote:
On Wed, 21 May 2025 11:45:43 +0100, Simon Parker wrote:
On 20/05/2025 12:07, GB wrote:
[quoted text muted]
Around a month ago, I submitted a query to Claude.AI (which is
touted as being "trained to be safe, accurate and secure")
None of that is technically possible. As always, when you find one
error ...
If this is accurate, it looks like things could get a *lot* worse:
<https://www.foxbusiness.com/technology/ai-system-resorts-blackmail-when-its-developers-try-replace>
The consequences of trying to create artificial intelligence when we
haven't the faintest clue what intelligence are unlikely to be benign.
If whatever passes for intelligence is inferior to it's human version,
it's of limited use.
And if whatever passes for intelligence is superior to it's human
version then humans are of limited use.
You don't need a 2 week conversation with ChatGPT to have worked that
out.
A problem may be that we don't know enough about intelligence to
quantitatively compare intelligences, except in the context of simple,
circumscribed problems.
We know enough about it to know that ChatGPT isn't it.