• Re: [OT] Why three different LLM systems are correct when they say HHH(DD)==0

    From Mike Terry@21:1/5 to All on Fri Aug 15 18:15:32 2025
    On 15/08/2025 16:57, André G. Isaak wrote:
    On 2025-08-15 08:48, Richard Heathfield wrote:
    On 15/08/2025 15:27, olcott wrote:
    On 8/15/2025 8:37 AM, Richard Heathfield wrote:
    On 15/08/2025 13:48, olcott wrote:
    Simulating Termination Analyzer HHH correctly

    So you're asking it to pre-judge the issue.

    Let's take out the word "correctly" and see what happens:


    ChatGPT 5.0 is much more stupid and cannot be relied upon.

    Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's opinion in support of a
    disputed "proof".

    The following news story reminded me of Olcott:

    <https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>

    André


    Indeed. Some interesting take-aways...

    The psychiatrist describes "psychosis" as something they understand pretty well, and gives a
    two-out-of-three definition:
    - presence of delusions (fixed false beliefs)
    - disorganised thinking (saying stuff that just doesn't make sense)
    - hallucinations [I don't see evidence that PO has visual/auditory hallucinations]

    I wouldn't say PO is exhibiting /new/ traits following his introduction to the chatbots, so any
    issues he may have are not /caused/ by those chatbots. The expert says as much (AI is not /causing/
    psychosis) but goes on to say while chatbots can be really validating [a positive thing], telling
    you what you want to hear [lol, I would describe that as "sycophantic"!], for people already having
    issues they can "super-charge their vulnerabilities". He points out chatbots might help patients
    feel validated, but without a human in the loop, a feedback loop can ensue with the delusions
    becoming stronger and stronger!


    So... the bit about "delusions becoming stronger and stronger" may be applying in PO's case? He
    has said that people were gaslighting him before his talks with AI, but now he sees through that.
    Reinterpreted, that might be expressed as "before talking to chatbots PO merely /suspected/ that
    people were trying to trick him, but there was doubt in his mind - perhaps he was wrong! Talking to
    chatbots has now convinced him that his arguments are correct and everybody else is deliberately
    lying, or an idiot." IOW the chatbots have reinforced his delusional framework, as the article warns.

    The chatbots are certainly "helping PO feel validated" (+cranks around the world) as described, but
    in PO's case there are still humans (us!) in the loop, to keep him grounded. If he decided to leave
    comp.theory and concentrate exclusively on his chatbots, I suspect he would risk totally losing
    touch with reality. [OTOH some regulars here would receive a significant bonus of time to spend on
    other things in their lives. +It's not their /duty/ to look after PO.]


    Mike.

  • From Richard Heathfield@21:1/5 to Mike Terry on Fri Aug 15 18:30:42 2025
    On 15/08/2025 18:15, Mike Terry wrote:
    chatbots can be really validating [a positive thing], telling you
    what you want to hear [lol, I would describe that as "sycophantic"!]

    Hard not to think of Polonius in Hamlet.

    Ham. Do you see yonder cloud that's almost in shape of a camel?
    Pol. By th' mass, and 'tis like a camel indeed.
    Ham. Methinks it is like a weasel.
    Pol. It is back'd like a weasel.
    Ham. Or like a whale.
    Pol. Very like a whale.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Richard Damon@21:1/5 to olcott on Fri Aug 15 13:41:48 2025
    On 8/15/25 1:22 PM, olcott wrote:
    On 8/15/2025 12:15 PM, Mike Terry wrote:
    [Mike Terry's post, quoted in full above, snipped]


    Mere implied ad hominem attacks that do not address
    this subject matter.

    No ad hominem in there.


    Not one person in the last three years has been able
    to correctly show that the conclusion of myself and
    three chatbots is in any way incorrect.


    Sure we have, you just don't want to believe them.

    In the last three years it has only been dishonest
    gaslighting on this specific point.

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern: abort simulation and
    return 0.
    (b) Simulated input reaches its simulated "return" statement: return 1.

    which ignores what DOES happen if HHH correctly simulates its input
    until it finds one of (a) or (b)

    (c) The Simulating Termination Analyzer might run forever waiting to
    find a condition that matches (a) or (b), and thus not actually meet
    the requirements to be a Simulating Termination Analyzer.
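
    A toy skeleton may make outcome (c) concrete. This is a hypothetical
    sketch, not PO's actual HHH: the State type, the two detector
    functions, and the step cutoff are all invented for illustration.
    The point is structural: if neither the (a) test nor the (b) test
    ever fires, the loop itself never exits.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy simulation state: just a step counter over a pretend program. */
    typedef struct { long steps; } State;

    /* Hypothetical detectors -- stand-ins, not a real analyzer. */
    static bool matches_nonhalting_pattern(const State *s)
    { return s->steps > 1000; }    /* made-up cutoff for the demo */
    static bool reached_return(const State *s)
    { (void)s; return false; }     /* pretend "return" is never reached */
    static void step(State *s) { s->steps++; }

    int simulate_and_decide(void)
    {
      State s = { 0 };
      for (;;) {
        if (matches_nonhalting_pattern(&s)) return 0;  /* (a) */
        if (reached_return(&s))             return 1;  /* (b) */
        step(&s);  /* (c): if neither test ever fires, no exit */
      }
    }

    int main(void)
    {
      printf("verdict: %d\n", simulate_and_decide());
      return 0;
    }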


    typedef int (*ptr)();
    int HHH(ptr P);  /* the simulating termination analyzer under discussion */

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;   /* loop forever if HHH reported "halts" */
      return Halt_Status;  /* halt if HHH reported "does not halt" */
    }

    What value should HHH(DD) correctly return?
    <Input to LLM systems>

    which again presumes that an answer exists.

    Ask instead: what value CAN HHH(DD) correctly return?
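
    Damon's question can be exercised directly: stub HHH with each
    candidate verdict and watch what DD then does. A minimal sketch,
    assuming a stand-in HHH (the HHH_RETURNS macro is invented for this
    demonstration; it is not part of the code quoted above):

    #include <stdio.h>

    typedef int (*ptr)();

    #ifndef HHH_RETURNS
    #define HHH_RETURNS 0  /* build with -DHHH_RETURNS=1 for the other case */
    #endif

    int HHH(ptr P)  /* stand-in verdict, not a real analyzer */
    {
      (void)P;
      return HHH_RETURNS;
    }

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;   /* verdict "halts" makes DD loop forever */
      return Halt_Status;  /* verdict "does not halt" makes DD halt */
    }

    int main(void)
    {
      printf("DD() returned %d\n", DD());  /* reached only with HHH_RETURNS==0 */
      return 0;
    }

    Either build refutes its own verdict: with HHH_RETURNS=0 the program
    prints immediately (DD halts, so "does not halt" was wrong), and with
    HHH_RETURNS=1 it spins forever (so "halts" was wrong too). That
    diagonal structure is why asking what HHH SHOULD return presumes an
    answer that need not exist.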



    They figured out the *recursive simulation non-halting behavior pattern*
    entirely on their own, and likewise concluded without prompting that
    HHH(DD)==0 is correct.

    https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c

    https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c

    https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141



    That your input includes bias, by ignoring the actual result of a machine
    built to those specifications, just shows your failure to understand.

    With that change, as I have pointed out before, they see the error.

    That you think the word of a "yes-man" chatbot is better than a
    reference to the actual definitions and facts just shows your mental
    state (or lack thereof).
