• Re: Even Google AI Overview understands me now

    From joes@21:1/5 to All on Tue Oct 1 13:09:26 2024
    On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
    On 10/1/2024 7:19 AM, olcott wrote:
    https://www.google.com/search?q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
    https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
    It sounds like it’s trained on your spam. LLMs don’t know anything anyway.

    I asked: "What happens when HHH tries to simulate itself?"
    ChatGPT: [my comments in brackets]
    When `HHH` tries to simulate itself, it encounters a potential problem
    known as **self-referential recursion**. Here’s a breakdown of what
    happens:

    ### 1. **Self-Invocation in HHH**
    If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an infinite loop of
    calls unless there is some form of termination condition built into
    `HHH`. Without such a condition, `HHH` would never return, resulting in
    non-termination.

    [So not a decider.]
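
    To make the failure mode concrete, here is a deliberately degenerate C
    sketch; the name HHH and the modeling of "simulation" as a direct call
    are illustrative assumptions, not olcott's actual code:

    /* Degenerate sketch: an analyzer whose "simulation" of its input
       is modeled as a direct call. Asked about itself, it re-enters
       itself before it can ever produce a verdict. */
    int HHH(void)
    {
        return HHH();  /* unbounded recursion: no verdict is returned,
                          and the stack eventually overflows */
    }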

    ### 2. **Halting Criteria**
    In a well-designed termination analyzer, `HHH` would need to have specific logic to prevent endless recursion:
    - **Base Case or Exit Condition**: There should be a mechanism to
    recognize when `HHH` is being asked to analyze itself and return
    immediately (e.g., a special case or flag).
    - **Tracking State**: The analyzer might maintain a history of function
    calls to detect cycles. If it recognizes that it is attempting to analyze itself again, it can decide that this leads to non-termination.

    [Not present.]
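
    A minimal sketch of the "tracking state" idea above, assuming the
    analyzer keeps a table of inputs whose analysis is in progress and
    treats re-encountering one as evidence of non-termination (the table,
    its size, the name analyze, and the 0/1 verdict encoding are all
    assumptions for illustration):

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_DEPTH 64

    /* Inputs whose analysis is currently in progress. */
    static const void *in_progress[MAX_DEPTH];
    static size_t depth = 0;

    static bool already_analyzing(const void *input)
    {
        for (size_t i = 0; i < depth; i++)
            if (in_progress[i] == input)
                return true;
        return false;
    }

    /* Hypothetical analyzer: returns 1 for "halts", 0 for "does not halt". */
    int analyze(const void *input)
    {
        if (already_analyzing(input) || depth == MAX_DEPTH)
            return 0;        /* cycle (or depth limit) reached:
                                report non-termination */
        in_progress[depth++] = input;
        int verdict = 1;     /* placeholder for the real analysis, which
                                may recurse into analyze() on sub-inputs */
        depth--;
        return verdict;
    }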

    ### 3. **Practical Consequences**
    If `HHH` lacks the ability to handle self-referential calls correctly:
    - **Stack Overflow**: Continuous self-invocation will eventually exhaust
    the stack memory, leading to a stack overflow error.
    [Not considered in theory.]
    - **Incorrect Results**: If not designed carefully, it may lead to an
    incorrect conclusion about halting, misrepresenting itself as either
    halting or non-halting.


    ### 4. **Best Practices for Termination Analyzers**
    To avoid these pitfalls, a termination analyzer should implement:
    - **Guard Clauses**: Checks at the beginning of the function to prevent unnecessary self-simulation.
    - **State Management**: A way to track the functions being analyzed,
    allowing `HHH` to recognize if it's being invoked with itself and act accordingly.
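
    The guard-clause point amounts to an explicit self-check at function
    entry. A minimal sketch, assuming the self case can be detected by
    pointer comparison and answered with a hard-coded verdict (both
    assumptions); note that such a guard only catches the literal
    HHH(HHH) case, and a wrapper function that merely calls the analyzer
    would slip past it:

    typedef int (*input_t)(void);

    int HHH_guarded(input_t input)
    {
        /* Guard clause: recognize the direct self-referential case
           before doing any simulation. (The cast is for comparison
           only; the two function types differ.) */
        if (input == (input_t)HHH_guarded)
            return 0;   /* assumed verdict for the self case */
        /* ... normal simulation of `input` would go here ... */
        return 1;
    }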

    ### Conclusion
    In summary, if `HHH` tries to simulate itself without proper safeguards,
    it risks falling into an infinite recursion or causing a stack overflow. A robust design would include checks to prevent self-invocation, ensuring
    the termination analyzer can handle all cases, including its own function, gracefully.


    There you go.

    --
    On Sat, 20 Jul 2024 12:35:31 +0000, WM wrote in sci.math:
    It is not guaranteed that n+1 exists for every n.

  • From joes@21:1/5 to All on Tue Oct 1 17:58:34 2024
    On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
    On 10/1/2024 8:09 AM, joes wrote:
    On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
    On 10/1/2024 7:19 AM, olcott wrote:
    [...]
    https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
    It sounds like it’s trained on your spam. LLMs don’t know anything
    anyway.

    I asked: "What happens when HHH tries to simulate itself?"
    ChatGPT: [my comments in brackets]

    [...]

    *You can continue this conversation with ChatGPT*
    I’ll leave that to you.
    Click on the link and see how it answers this question:
    Is H a Halt Decider for D?
    You should feed it our objections.
    If you believe in it only when you prompt it, it is not suited as an
    authority (fallacious anyway).

    --
    On Sat, 20 Jul 2024 12:35:31 +0000, WM wrote in sci.math:
    It is not guaranteed that n+1 exists for every n.

  • From Richard Damon@21:1/5 to olcott on Tue Oct 1 18:47:07 2024
    On 10/1/24 8:19 AM, olcott wrote:
    [...]


    Not even looking at the trace, but just from your comment, you are
    proving your ignorance by appealing to an authority that is not an
    authority.

    AI has been proven to be a LIAR, as its whole purpose is to tell you
    what you expect to hear.

    This just proves that your native language is LIES, just like that of
    your father.

    Sorry, but those are the facts.

    If your idea were correct, you would be able to develop an actual logical
    proof, something you have been unable to do for two decades, in part
    because it appears you don't know what that is.

    You are just proving that you are an idiot.

  • From Mikko@21:1/5 to olcott on Wed Oct 2 09:59:01 2024
    On 2024-10-01 12:19:08 +0000, olcott said:

    [...]


    Why would anyone care about the opinions of an artificial idiot?

    --
    Mikko

  • From Richard Damon@21:1/5 to olcott on Wed Oct 2 07:08:33 2024
    On 10/1/24 7:26 PM, olcott wrote:
    On 10/1/2024 12:58 PM, joes wrote:
    On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
    On 10/1/2024 8:09 AM, joes wrote:
    On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
    On 10/1/2024 7:19 AM, olcott wrote:
    [...]
    You should feed it our objections.
    If you believe in it only when you prompt it, it is not suited as an
    authority (fallacious anyway).


    You feed it your objections.
    It will tell you how and why you are wrong.


    No, it will tell you something that matches the words you told it.

    You don't seem to understand what Large Language Models are.


    You seem to forget that LLMs know nothing of the "truth", only what
    matches their training data.

    They are known to be liars, just like you.

  • From Richard Damon@21:1/5 to olcott on Wed Oct 2 18:56:35 2024
    On 10/2/24 10:20 AM, olcott wrote:
    On 10/2/2024 1:59 AM, Mikko wrote:
    On 2024-10-01 12:19:08 +0000, olcott said:

    [...]

    Why would anyone care about the opinions of an artificial idiot?


    Because no mistakes can be found in its reasoning about my work.


    Of course they have been found. You are just too stupid to understand
    them, and have refused to learn enough to understand them.

    Sorry, that is just the fact: you are nothing but an ignorant
    pathological lying idiot.

  • From Richard Damon@21:1/5 to olcott on Wed Oct 2 18:48:02 2024
    On 10/2/24 10:39 AM, olcott wrote:
    On 10/2/2024 6:08 AM, Richard Damon wrote:
    On 10/1/24 7:26 PM, olcott wrote:
    On 10/1/2024 12:58 PM, joes wrote:
    On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
    On 10/1/2024 8:09 AM, joes wrote:
    On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
    On 10/1/2024 7:19 AM, olcott wrote:
    [...]
    You should feed it our objections.
    If you believe in it only when you prompt it, it is not suited as an
    authority (fallacious anyway).


    You feed it your objections.
    It will tell you how and why you are wrong.


    No, it will tell you something that matches the words you told it.

    You don't seem to understand what Large Language Models are.


    You seem to forget that LLMs know nothing of the "truth", only what
    matches their training data.

    They are known to be liars, just like you.

    In the case of their evaluation of my work they are correct.

    *Try this yourself*

    https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c

    Does HHH have to abort its emulation of DDD to prevent the infinite
    execution of DDD?
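
    For reference, DDD and HHH are never shown in this thread; the shape
    usually quoted in these debates is roughly the following
    reconstruction (an assumption, not a verbatim copy of olcott's code):

    /* Stub so the sketch compiles; the real HHH, a simulating
       termination analyzer, is precisely what is in dispute. */
    int HHH(void (*f)(void)) { (void)f; return 1; }

    void DDD(void)
    {
        HHH(DDD);   /* DDD passes itself to the analyzer, then halts */
        return;
    }

    The question is then whether HHH, while emulating DDD, must cut that
    emulation short when it reaches the nested call HHH(DDD), and what
    verdict such an abort would justify.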



    Try asking it:

    Why is it ok for HHH to say that DDD doesn't halt when it will halt when run?

    And it ADMITS that HHH can't give the right answer because it sees a problem.

    Then ask it:

    Does this mean it is ok to give a wrong answer at times?


    Yes, it can be acceptable for a termination analyzer like HHH to give a
    "wrong" answer in certain contexts. This mainly hinges on the
    distinction between the theoretical analysis of a program and its actual execution behavior. Here’s a more detailed look at this concept:


    So, ChatGPT is just ADMITTING that you have trained it that it is
    permissible to *LIE* about the answer.

    Sorry, you are just proving that you don't know what "truth" actually means.

  • From Richard Damon@21:1/5 to olcott on Wed Oct 2 19:01:13 2024
    On 10/2/24 10:39 AM, olcott wrote:
    On 10/2/2024 6:08 AM, Richard Damon wrote:
    On 10/1/24 7:26 PM, olcott wrote:
    On 10/1/2024 12:58 PM, joes wrote:
    On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
    On 10/1/2024 8:09 AM, joes wrote:
    On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
    On 10/1/2024 7:19 AM, olcott wrote:
    [...]
    You should feed it our objections.
    If you believe in it only when you prompt it, it is not suited as an
    authority (fallacious anyway).


    You feed it your objections.
    It will tell you how and why you are wrong.


    No, it will tell you something that matches the words you told it.

    You don't seem to understand what Large Language Models are.


    You seem to forget that LLMs know nothing of the "truth", only what
    matches their training data.

    They are known to be liars, just like you.

    In the case of their evaluation of my work they are correct.

    Nope.


    *Try this yourself*

    https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c

    Does HHH have to abort its emulation of DDD to prevent the infinite
    execution of DDD?



    Which just shows that ChatGPT is as ignorant of the subject as you are.


    You are just proving that you fail the basic test of logical reasoning,
    and thus PROVE your claims cannot be trusted.

    Sorry, but your logic is just based on LIES because that is all you know.

  • From Mikko@21:1/5 to olcott on Thu Oct 3 15:28:00 2024
    On 2024-10-02 14:20:48 +0000, olcott said:

    On 10/2/2024 1:59 AM, Mikko wrote:
    On 2024-10-01 12:19:08 +0000, olcott said:

    [...]


    Why would anyone care about the opinions of an artificial idiot?


    Because no mistakes can be found in its reasoning about my work.

    That you cannot find a mistake does not mean that there are no
    errors.

    The term "mistake" means 'contraty to author's itent'. An AI has no
    intent so the term "mistake" does not apply. That the reasoning is
    not sound is an error.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Thu Oct 3 15:33:05 2024
    On 2024-10-02 14:26:05 +0000, olcott said:

    On 10/2/2024 1:59 AM, Mikko wrote:
    On 2024-10-01 12:19:08 +0000, olcott said:

    [...]


    Why would anyone care about the opinions of an artificial idiot?



    Because its reasoning is correct.

    *Try this yourself*

    https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c

    Does HHH have to abort its emulation of DDD to prevent the infinite
    execution of DDD?

    Not necessarily abort. At the point where either the real or the
    simulated HHH would simulate itself, it could simulate "return 1;"
    instead of simulating the real HHH.
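
    A self-contained toy model of that substitution, with the simulated
    program reduced to a list of opcodes; the representation and all
    names here are illustrative assumptions:

    #include <stdio.h>

    typedef enum { OP_CALL_HHH, OP_HALT } op_t;

    /* Toy simulator: on reaching a call to HHH, splice in the value 1
       instead of recursively simulating HHH. */
    static int simulate(const op_t *prog)
    {
        for (;;) {
            switch (*prog++) {
            case OP_CALL_HHH: {
                int spliced = 1;   /* stands in for the skipped call */
                (void)spliced;     /* DDD ignores HHH's result anyway */
                break;
            }
            case OP_HALT:
                return 1;          /* the simulated program halted */
            }
        }
    }

    int main(void)
    {
        /* DDD modeled as: call HHH(DDD); then halt. */
        const op_t DDD[] = { OP_CALL_HHH, OP_HALT };
        printf("DDD halts: %d\n", simulate(DDD));
        return 0;
    }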

    However, the question is not really meaningful. HHH does what it does.
    A program that does something else is not HHH.

    --
    Mikko
