• Re: ChatGPT, Gemini, Grok and Claude all agree the input to HHH(DDD) sp

    From Richard Damon@21:1/5 to olcott on Fri Jun 27 17:42:35 2025
    XPost: sci.logic, sci.math

    On 6/27/25 4:10 PM, olcott wrote:
    On 6/27/2025 2:55 PM, Richard Damon wrote:
    On 6/27/25 3:43 PM, olcott wrote:
    On 6/27/2025 2:24 PM, Richard Damon wrote:
    On 6/27/25 3:11 PM, olcott wrote:
    Turing Machines can and do compute mappings from finite
    string inputs.

    Right, and those finite strings can be representations of other
    abstract things, like programs or numbers.



    *ChatGPT, Gemini and Grok all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d



    In other words, you are admitting to accepting the LIES of an LLM
    because you have lied to them, over the reasoned proofs of people.


    <begin text input>
    typedef void (*ptr)();
    int HHH(ptr P);


    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
      DDD();
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    <end text input>

    The above is *all* that I told them.
    The above paragraph merely defines what a simulating
    termination analyzer is and how it works, thus cannot
    be a lie.

    *ChatGPT, Gemini, Grok and Claude all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d
    https://claude.ai/share/c2bd913d-7bd1-4741-a919-f0acc040494b


    And the LIE is that your HHH does what you say.

    Yes, if the one and only HHH doesn't ever abort, then the DDD built on
    it will be non-halting.

    But, if HHH ever aborts its simulation of the DIFFERENT DDD (since DDD
    is different for each different HHH that it is built on), then it has
    stopped its simulation BEFORE it sees a REAL non-terminating pattern,
    as the pattern that HHH saw exists in the simulation of DDD when
    actually correctly simulated by a real correct simulator (which HHH
    isn't).

    You admit that the direct execution of DDD halts, and thus a correct
    simulation of it must reach that final state.

    Your "logic" is based on the LIE that a partial simulation that doesn't
    reach the terminal state can be thought of as "non-terminating"

    Sorry, you are just proving that you are an IDIOT who has burnt out his
    brain by repeating his lies so many times that he cannot think about them.

    You have effectively ADMITTED that these are lies by acknowledging the
    basic facts that I present, and by NEVER actually trying to refute any
    of the errors I point out; you just show your stupidity by repeating
    one of your lies that has already been refuted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Jun 27 21:12:58 2025
    XPost: sci.logic, sci.math

    On 6/27/25 4:10 PM, olcott wrote:
    On 6/27/2025 2:55 PM, Richard Damon wrote:
    On 6/27/25 3:43 PM, olcott wrote:
    On 6/27/2025 2:24 PM, Richard Damon wrote:
    On 6/27/25 3:11 PM, olcott wrote:
    Turing Machines can and do compute mappings from finite
    string inputs.

    Right, and those finite strings can be representations of other
    abstract things, like programs or numbers.



    *ChatGPT, Gemini and Grok all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d



    In other words, you are admitting to accepting the LIES of an LLM
    because you have lied to them, over the reasoned proofs of people.


    <begin text input>
    typedef void (*ptr)();
    int HHH(ptr P);


    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
      DDD();
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    <end text input>

    The above is *all* that I told them.
    The above paragraph merely defines what a simulating
    termination analyzer is and how it works, thus cannot
    be a lie.

    *ChatGPT, Gemini, Grok and Claude all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d
    https://claude.ai/share/c2bd913d-7bd1-4741-a919-f0acc040494b



    Perhaps I should point you to this too:

    https://www.youtube.com/watch?v=45ffs9s3DTc


    It shows why LLMs are not good in this field.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Jun 27 23:07:46 2025
    XPost: sci.logic, sci.math

    On 6/27/25 9:20 PM, olcott wrote:
    On 6/27/2025 8:12 PM, Richard Damon wrote:
    On 6/27/25 4:10 PM, olcott wrote:
    On 6/27/2025 2:55 PM, Richard Damon wrote:
    On 6/27/25 3:43 PM, olcott wrote:
    On 6/27/2025 2:24 PM, Richard Damon wrote:
    On 6/27/25 3:11 PM, olcott wrote:
    Turing Machines can and do compute mappings from finite
    string inputs.

    Right, and those finite strings can be representations of other
    abstract things, like programs or numbers.



    *ChatGPT, Gemini and Grok all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d



    In other words, you are admitting to accepting the LIES of an LLM
    because you have lied to them, over the reasoned proofs of people.


    <begin text input>
    typedef void (*ptr)();
    int HHH(ptr P);


    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    <end text input>

    The above is *all* that I told them.
    The above paragraph merely defines what a simulating
    termination analyzer is and how it works, thus cannot
    be a lie.

    *ChatGPT, Gemini, Grok and Claude all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d
    https://claude.ai/share/c2bd913d-7bd1-4741-a919-f0acc040494b



    Perhaps I should point you to this too:

    https://www.youtube.com/watch?v=45ffs9s3DTc


    It shows why LLMs are not good in this field.

    Maybe ALL that you have is empty rhetoric entirely
    bereft of any supporting reasoning.

    It is stupidly simple that DDD correctly simulated by
    HHH cannot possibly reach its own "return" statement
    final halt state.

    If you even knew what ordinary recursion is, you would
    know this. That is why I called my reviewers despicable
    lying bastards.



    But the problem is that the HHH that does a correct simulation doesn't
    answer, and is looking at a different input than the HHH that does
    answer.

    That is EXACTLY like your question about arresting John because you
    found that his twin brother Jack robbed the bank.

    The DDD that calls the HHH that answers is NOT guilty of being
    non-halting, and is a DIFFERENT program than the DDD that is
    non-halting because it calls the HHH that does the correct simulation.

    Your attempt to call them the same input is just a LIE; you have
    admitted to the facts that prove it to be a lie, and your continued
    claim of it just proves that either you are too stupid to understand
    these basics, or too corrupt to care that you are just lying.

    Sorry, but those are the facts, and you have accepted them by your
    refusal to even attempt to make a logical argument against them; you
    just make a fallacious restatement of the error. All that does is
    admit you have nothing to show your position to have any validity,
    and that you just don't care about what is true.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 28 09:04:50 2025
    XPost: sci.logic, sci.math

    On 6/27/25 11:16 PM, olcott wrote:
    On 6/27/2025 10:07 PM, Richard Damon wrote:
    On 6/27/25 9:20 PM, olcott wrote:
    On 6/27/2025 8:12 PM, Richard Damon wrote:
    On 6/27/25 4:10 PM, olcott wrote:
    On 6/27/2025 2:55 PM, Richard Damon wrote:
    On 6/27/25 3:43 PM, olcott wrote:
    On 6/27/2025 2:24 PM, Richard Damon wrote:
    On 6/27/25 3:11 PM, olcott wrote:
    Turing Machines can and do compute mappings from finite
    string inputs.

    Right, and those finite strings can be representations of other
    abstract things, like programs or numbers.



    *ChatGPT, Gemini and Grok all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d



    In other words, you are admitting to accepting the LIES of an LLM
    because you have lied to them, over the reasoned proofs of people.

    <begin text input>
    typedef void (*ptr)();
    int HHH(ptr P);


    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    <end text input>

    The above is *all* that I told them.
    The above paragraph merely defines what a simulating
    termination analyzer is and how it works, thus cannot
    be a lie.

    *ChatGPT, Gemini, Grok and Claude all agree*
    DDD correctly simulated by HHH cannot possibly reach
    its simulated "return" statement final halt state.

    https://chatgpt.com/share/685ed9e3-260c-8011-91d0-4dee3ee08f46
    https://gemini.google.com/app/f2527954a959bce4
    https://grok.com/share/c2hhcmQtMg%3D%3D_b750d0f1-9996-4394-b0e4-f76f6c77df3d
    https://claude.ai/share/c2bd913d-7bd1-4741-a919-f0acc040494b



    Perhaps I should point you to this too:

    https://www.youtube.com/watch?v=45ffs9s3DTc


    It shows why LLMs are not good in this field.

    Maybe ALL that you have is empty rhetoric entirely
    bereft of any supporting reasoning.

    It is stupidly simple that DDD correctly simulated by
    HHH cannot possibly reach its own "return" statement
    final halt state.

    If you even knew what ordinary recursion is, you would
    know this. That is why I called my reviewers despicable
    lying bastards.



    But the problem is that the HHH that does a correct simulation doesn't
    answer, and is looking at a different input than the HHH that does
    answer.


    By this same psychotic reasoning no one can count
    at all until after they have counted to infinity.


    And where do you get that from?

    The claim is that you haven't counted ALL the numbers/steps until you
    have reached infinity, not that you haven't done any.

    The problem isn't that HHH has done a "partial simulation", it is that
    it (and you) think that this is enough to be a "Correct Simulation".

    In other words, your error is having your model assume that the
    completion of an infinite task occurs after finite work.

    Sorry, your mind just doesn't understand abstract things like the
    infinite, and doesn't understand the difference between partial and
    complete, or some and all.

    You really need to seek professional help to handle your clearly
    present mental psychosis.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)