On 8/15/2025 12:15 PM, Mike Terry wrote:
On 15/08/2025 16:57, André G. Isaak wrote:
On 2025-08-15 08:48, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
André
Indeed. Some interesting take-aways...
The psychiatrist describes "psychosis" as something they understand
pretty well, and gives a two-out-of-three definition:
- presence of delusions (fixed false beliefs)
- disorganised thinking (saying stuff that just doesn't make sense)
- hallucinations [I don't see evidence that PO has visual/auditory hallucinations]
I wouldn't say PO is exhibiting /new/ traits following his introduction
to the chatbots, so any issues he may have are not /caused/ by those
chatbots. The expert says as much (AI is not /causing/ psychosis) but
goes on to say that while chatbots can be really validating [a positive
thing], telling you what you want to hear [lol, I would describe that
as "sycophantic"!], for people already having issues they can
"super-charge their vulnerabilities". He points out that chatbots might
help patients feel validated, but without a human in the loop, a
feedback loop can ensue with the delusions becoming stronger and
stronger!
So... the bit about "delusions becoming stronger and stronger" may be
applying in PO's case? He has said that people were gaslighting him
before his talks with AI, but now he sees through that. Reinterpreted,
that might be expressed as "before talking to chatbots PO merely
/suspected/ that people were trying to trick him, but there was doubt
in his mind - perhaps he was wrong! Talking to chatbots has now
convinced him that his arguments are correct and everybody else is
deliberately lying, or an idiot." IOW the chatbots have reinforced his
delusional framework, as the article warns.
The chatbots are certainly "helping PO feel validated" (+cranks around
the world) as described, but in PO's case there are still humans (us!)
in the loop, to keep him grounded. If he decided to leave comp.theory
and concentrate exclusively on his chatbots, I suspect he would risk
totally losing touch with reality. [OTOH some regulars here would
receive a significant bonus of time to spend on other things in their
lives. +It's not their /duty/ to look after PO.]
Mike.
These are merely implied ad hominem attacks that do not address
the subject matter.
Not one person in the last three years has been able
to correctly show that the conclusion that I and three
chatbots reached is in any way incorrect.
In those three years there has been only dishonest
gaslighting on this specific point.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
They figured out the *recursive simulation non-halting behavior
pattern* entirely on their own, without prompting, and likewise
concluded without prompting that HHH(DD)==0 is correct.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
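
For concreteness, here is a minimal C sketch of the behavior the prompt
describes. It is only an illustration under stated assumptions, not the
actual HHH (which simulates x86 machine code): a nesting-depth counter
stands in for the non-halting-pattern detector, and setjmp/longjmp
models "abort simulation", since an aborted simulation must not return
control into the code being simulated.

#include <setjmp.h>
#include <stdio.h>

typedef int (*ptr)(void);

static int depth = 0;      /* nesting level of "simulations" */
static jmp_buf abort_sim;  /* jump target used to abort them */

/* Toy HHH: calling P() stands in for simulating it. Exceeding a
   fixed nesting depth is treated as the recursive-simulation
   pattern, and the whole nest of calls is then abandoned. */
int HHH(ptr P)
{
    if (depth == 0) {              /* outermost call owns the target */
        if (setjmp(abort_sim)) {
            depth = 0;
            return 0;              /* case (a): pattern detected */
        }
    }
    if (++depth > 2)
        longjmp(abort_sim, 1);     /* "abort simulation" */
    P();
    depth--;
    return 1;                      /* case (b): P reached "return" */
}

int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main(void)
{
    printf("HHH(DD) == %d\n", HHH(DD));
    return 0;
}

Under this assumed detector the program prints HHH(DD) == 0: the depth
bound fires before any simulated "return" statement is reached, which
is the pattern the chatbot transcripts above describe. Whether that
stand-in detector corresponds to the real HHH's x86-level pattern match
is an assumption of the sketch, not something it settles.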