• Re: Premises cannot be shown to be false without proving that they contradict each other

    From joes@21:1/5 to they on Tue Oct 22 09:50:57 2024
    Am Mon, 21 Oct 2024 22:04:49 -0500 schrieb olcott:
    On 10/21/2024 9:42 PM, Richard Damon wrote:
    On 10/21/24 7:08 PM, olcott wrote:
    On 10/21/2024 6:05 PM, Richard Damon wrote:
    On 10/21/24 6:48 PM, olcott wrote:
    On 10/21/2024 5:34 PM, Richard Damon wrote:
    On 10/21/24 12:29 PM, olcott wrote:
    On 10/21/2024 10:17 AM, joes wrote:
    Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott:
    On 10/21/2024 3:39 AM, joes wrote:
    Am Sun, 20 Oct 2024 17:36:25 -0500 schrieb olcott:
    On 10/20/2024 4:41 PM, Richard Damon wrote:
    On 10/20/24 4:23 PM, olcott wrote:
    On 10/20/2024 2:13 PM, Richard Damon wrote:
    On 10/20/24 1:33 PM, olcott wrote:

    The executed DDD calls HHH() and this call returns. The
    emulated DDD calls HHH(DDD) and this call cannot possibly return.
    But whyyy doesn't HHH abort?
    You can click on the link and cut-and-paste the question to see
    the whole answer in complete detail.
    I am not interested in arguing with a chatbot. Make the points
    yourself.

    1. **Nature of `DDD()`**:
        - `DDD()` simply calls `HHH(DDD)`. It does not perform any
    additional operations that could create a loop or prevent it from
    returning.
        - If `HHH` returns (whether by aborting or completing its
    simulation), `DDD()` can return to its caller.

    2. **Behavior of `HHH`**:
        - If `HHH` is able to simulate `DDD()` and return, it should
    report that `DDD()` terminates. If `HHH` aborts due to detecting
    non-termination, it does not reflect the actual execution of
    `DDD()`; it leads to a conclusion that may not align with the true
    behavior.

    3. **Contradiction in Results**:
        - If `HHH` claims that `DDD()` does not halt, but in reality
    `DDD()` can terminate once `HHH` returns, then `HHH` is providing
    an incorrect analysis.
        - The contradiction lies in the ability of `HHH` to detect
    non-termination theoretically while simultaneously allowing `DDD()`
    to halt in practical execution.

    ### Conclusion:
    Given the nature of `DDD()` and how `HHH` operates, it becomes
    clear that `HHH` cannot consistently provide a correct answer
    about whether `DDD()` halts. The dynamics of calling and returning
    create a scenario where the outcomes conflict. Thus, `HHH` is
    fundamentally flawed in its role as a termination analyzer for
    functions like `DDD()`.
    Did ChatGPT generate that?
    If it did then I need *ALL the input that caused it to generate
    that*
    It's not like it will deterministically regenerate the same output.

    No, someone using some REAL INTELLIGENCE, as opposed to a program
    using "artificial intelligence" that had been loaded with false
    premises and other lies.
    I specifically asked it to verify that its key assumption is correct
    and it did.
    No, it said that given what you told it (which was a lie)
    I asked it if what it was told was a lie and it explained how what it
    was told is correct.
    "naw, I wasn't lied to, they said they were saying the truth" sure buddy.

    Because Chat GPT doesn't care about lying.
    ChatGPT computes the truth and you can't actually show otherwise.
    HAHAHAHAHA there isn't anything about truth in there, prove me wrong

    Because what you are asking for is nonsense.
    Of course an AI that has been programmed with lies might repeat the
    lies.
    When it is told the actual definition, after being told your lies, and
    asked if your conclusion could be right, it said No.
    Thus, it seems by your logic, you have to admit defeat, as the AI,
    after being told your lies, still was able to come up with the correct
    answer, that DDD will halt, and that HHH is just incorrect to say it
    doesn't.
    I believe that the "output" Joes provided was fake on the basis that she
    did not provide the input to derive that output and did not use the
    required basis that was on the link.
    I definitely typed something out in the style of an LLM instead of
    my own words /s

    If you want me to pay more attention to what you say, you first need to
    return the favor, and at least TRY to find an error in what I say, and
    be based on more than just that you think that can't be right.

    But you can't do that, as you don't actually know any facts about the
    field that you can point to qualified references.
    You cannot show that my premises are actually false.
    To show that they are false would at least require showing that they contradict each other.
    Accepting your premises makes the problem uninteresting.

    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Oct 22 07:22:38 2024
    On 10/21/24 11:04 PM, olcott wrote:
    On 10/21/2024 9:42 PM, Richard Damon wrote:
    On 10/21/24 7:08 PM, olcott wrote:
    On 10/21/2024 6:05 PM, Richard Damon wrote:
    On 10/21/24 6:48 PM, olcott wrote:
    On 10/21/2024 5:34 PM, Richard Damon wrote:
    On 10/21/24 12:29 PM, olcott wrote:
    On 10/21/2024 10:17 AM, joes wrote:
    Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott:
    On 10/21/2024 3:39 AM, joes wrote:
    Am Sun, 20 Oct 2024 17:36:25 -0500 schrieb olcott:
    On 10/20/2024 4:41 PM, Richard Damon wrote:
    On 10/20/24 4:23 PM, olcott wrote:
    On 10/20/2024 2:13 PM, Richard Damon wrote:
    On 10/20/24 1:33 PM, olcott wrote:

    Note, I DID tell that to Chat GPT, and it agrees that DDD, when
    the criterion is what DDD actually does, which is what the question
    MUST be about to be about the Termination or Halting problem, then
    DDD WILL HALT since HHH(DDD) will return 0 to it.
    No one ever bothered to notice that (a) A decider cannot have its
    actual self as its input.
    lolwut? A decider is a normal program, and it should be handled like
    every other input.

    (b) In the case of the pathological input DDD to emulating
    termination analyzer HHH the behavior of the directly executed DDD
    (not an input to HHH) is different than the behavior of DDD that is
    an input to HHH.
    DDD *is* the input to HHH.

    The executed DDD calls HHH() and this call returns. The emulated DDD
    calls HHH(DDD) and this call cannot possibly return.
    But whyyy doesn't HHH abort?
    You can click on the link and cut-and-paste the question to see the
    whole answer in complete detail.
    I am not interested in arguing with a chatbot. Make the points
    yourself.


    1. **Nature of `DDD()`**:
        - `DDD()` simply calls `HHH(DDD)`. It does not perform any
    additional operations that could create a loop or prevent it from
    returning.
        - If `HHH` returns (whether by aborting or completing its
    simulation), `DDD()` can return to its caller.

    2. **Behavior of `HHH`**:
        - If `HHH` is able to simulate `DDD()` and return, it should
    report that `DDD()` terminates. If `HHH` aborts due to detecting
    non-termination, it does not reflect the actual execution of
    `DDD()`; it leads to a conclusion that may not align with the true
    behavior.

    3. **Contradiction in Results**:
        - If `HHH` claims that `DDD()` does not halt, but in reality
    `DDD()` can terminate once `HHH` returns, then `HHH` is providing
    an incorrect analysis.
        - The contradiction lies in the ability of `HHH` to detect
    non-termination theoretically while simultaneously allowing `DDD()`
    to halt in practical execution.

    ### Conclusion:
    Given the nature of `DDD()` and how `HHH` operates, it becomes
    clear that `HHH` cannot consistently provide a correct answer
    about whether `DDD()` halts. The dynamics of calling and returning
    create a scenario where the outcomes conflict. Thus, `HHH` is
    fundamentally flawed in its role as a termination analyzer for
    functions like `DDD()`.

    Did ChatGPT generate that?
    If it did then I need *ALL the input that caused it to generate
    that*

    https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e
    If you did not start with the basis of this link then you cheated.
    No, someone using some REAL INTELLIGENCE, as opposed to a program
    using "artificial intelligence" that had been loaded with false
    premises and other lies.

    Sorry, you are just showing that you have NO intelligence, and are
    depending on a program that includes a disclaimer on every page
    that its answers may have mistakes.

    I specifically asked it to verify that its key
    assumption is correct and it did.

    No, it said that given what you told it (which was a lie)

    I asked it if what it was told was a lie and it
    explained how what it was told is correct.

    Because Chat GPT doesn't care about lying.


    ChatGPT computes the truth and you can't actually
    show otherwise.

    Of course it doesn't, that is why it has the disclaimer at the bottom of
    the page that it can make mistakes.



    Instead of me having to repeat the same thing to
    you fifty times why don't you do what I do to
    focus my own concentration read what I say many
    times over and over until you at least see what
    I said.

    Because what you are asking for is nonsense.

    Of course an AI that has been programmed with lies might repeat the lies.

    When it is told the actual definition, after being told your lies, and
    asked if your conclusion could be right, it said No.

    Thus, it seems by your logic, you have to admit defeat, as the AI,
    after being told your lies, still was able to come up with the correct
    answer, that DDD will halt, and that HHH is just incorrect to say it
    doesn't.


    I believe that the "output" Joes provided was fake on the
    basis that she did not provide the input to derive that
    output and did not use the required basis that was on the
    link.


    But that doesn't prove anything.

    If you want me to pay more attention to what you say, you first need
    to return the favor, and at least TRY to find an error in what I say,
    and be based on more than just that you think that can't be right.


    You are merely spouting off what you have been indoctrinated
    to believe and cannot provide any actual basis in reasoning
    why I am incorrect.

    No, I *HAVE* provided the reason, but you have brainwashed yourself


    But you can't do that, as you don't actually know any facts about the
    field that you can point to qualified references.


    You cannot show that my premises are actually false. The
    most that you can do is show that they are unconventional.

    Of course I have, they presume definitions in conflict with the Formal
    System you claim to be working in.


    To show that they are false would at least require showing
    that they contradict each other.

    No, just that they contradict statements already established in the
    system you claim to be working in.


    Failing to do that no one has any basis to even show that
    they are false. The most that they can do is show that they
    are unconventional.


    Nope, your claim just proves you have no understanding of Formal
    Systems and their logic.

  • From joes@21:1/5 to olcott on Tue Oct 22 15:18:56 2024
    Am Tue, 22 Oct 2024 08:47:39 -0500 schrieb olcott:
    On 10/22/2024 4:50 AM, joes wrote:
    Am Mon, 21 Oct 2024 22:04:49 -0500 schrieb olcott:
    On 10/21/2024 9:42 PM, Richard Damon wrote:
    On 10/21/24 7:08 PM, olcott wrote:
    On 10/21/2024 6:05 PM, Richard Damon wrote:
    On 10/21/24 6:48 PM, olcott wrote:
    On 10/21/2024 5:34 PM, Richard Damon wrote:
    On 10/21/24 12:29 PM, olcott wrote:
    On 10/21/2024 10:17 AM, joes wrote:
    Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott:
    On 10/21/2024 3:39 AM, joes wrote:

    Did ChatGPT generate that?
    If it did then I need *ALL the input that caused it to generate
    that*
    It's not like it will deterministically regenerate the same output.

    No, someone using some REAL INTELLIGENCE, as opposed to a program
    using "artificial intelligence" that had been loaded with false
    premises and other lies.
    I specifically asked it to verify that its key assumption is
    correct and it did.
    No, it said that given what you told it (which was a lie)
    I asked it if what it was told was a lie and it explained how what
    it was told is correct.
    "naw, I wasn't lied to, they said they were saying the truth" sure
    buddy.

    Because Chat GPT doesn't care about lying.
    ChatGPT computes the truth and you can't actually show otherwise.
    HAHAHAHAHA there isn't anything about truth in there, prove me wrong

    Because what you are asking for is nonsense.
    Of course an AI that has been programmed with lies might repeat the
    lies.
    When it is told the actual definition, after being told your lies,
    and asked if your conclusion could be right, it said No.
    Thus, it seems by your logic, you have to admit defeat, as the AI,
    after being told your lies, still was able to come up with the
    correct answer, that DDD will halt, and that HHH is just incorrect to
    say it doesn't.
    I believe that the "output" Joes provided was fake on the basis that
    she did not provide the input to derive that output and did not use
    the required basis that was on the link.
    I definitely typed something out in the style of an LLM instead of my
    own words /s

    If you want me to pay more attention to what you say, you first need
    to return the favor, and at least TRY to find an error in what I say,
    and be based on more than just that you think that can't be right.
    But you can't do that, as you don't actually know any facts about the
    field that you can point to qualified references.
    You cannot show that my premises are actually false.
    To show that they are false would at least require showing that they
    contradict each other.
    Accepting your premises makes the problem uninteresting.
    That seems to indicate that you are admitting that you cheated when you discussed this with ChatGPT. You gave it a faulty basis and then argued against that.
    Just no. Do you believe that I didn't write this myself after all?

    They are also conventional within the context of software engineering. That software engineering conventions seem incompatible with computer science conventions may refute the latter.
    lol

    That a halt decider must report on the behavior that it itself is contained within seems to be an incorrect convention.
    Just because you don't like the undecidability of the halting problem?

    u32 HHH1(ptr P) // line 721
    u32 HHH(ptr P) // line 801
    The above two functions have identical C code except for their name.

    The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt. This conclusively proves that the pathological relationship between DDD and
    HHH makes a difference in the behavior of DDD.
    That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may
    give different answers, but then exactly one of them must be wrong.
    Do they both call HHH? How does their execution differ?

  • From Mike Terry@21:1/5 to joes on Tue Oct 22 23:07:49 2024
    On 22/10/2024 16:18, joes wrote:
    Am Tue, 22 Oct 2024 08:47:39 -0500 schrieb olcott:
    On 10/22/2024 4:50 AM, joes wrote:
    Am Mon, 21 Oct 2024 22:04:49 -0500 schrieb olcott:
    On 10/21/2024 9:42 PM, Richard Damon wrote:
    On 10/21/24 7:08 PM, olcott wrote:
    On 10/21/2024 6:05 PM, Richard Damon wrote:
    On 10/21/24 6:48 PM, olcott wrote:
    On 10/21/2024 5:34 PM, Richard Damon wrote:
    On 10/21/24 12:29 PM, olcott wrote:
    On 10/21/2024 10:17 AM, joes wrote:
    Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott:
    On 10/21/2024 3:39 AM, joes wrote:

    Did ChatGPT generate that?
    If it did then I need *ALL the input that caused it to generate
    that*
    It's not like it will deterministically regenerate the same output.

    No, someone using some REAL INTELLIGENCE, as opposed to a program
    using "artificial intelligence" that had been loaded with false
    premises and other lies.
    I specifically asked it to verify that its key assumption is
    correct and it did.
    No, it said that given what you told it (which was a lie)
    I asked it if what it was told was a lie and it explained how what
    it was told is correct.
    "naw, I wasn't lied to, they said they were saying the truth" sure
    buddy.

    Because Chat GPT doesn't care about lying.
    ChatGPT computes the truth and you can't actually show otherwise.
    HAHAHAHAHA there isn't anything about truth in there, prove me wrong

    Because what you are asking for is nonsense.
    Of course an AI that has been programmed with lies might repeat the
    lies.
    When it is told the actual definition, after being told your lies,
    and asked if your conclusion could be right, it said No.
    Thus, it seems by your logic, you have to admit defeat, as the AI,
    after being told your lies, still was able to come up with the
    correct answer, that DDD will halt, and that HHH is just incorrect to
    say it doesn't.
    I believe that the "output" Joes provided was fake on the basis that
    she did not provide the input to derive that output and did not use
    the required basis that was on the link.
    I definitely typed something out in the style of an LLM instead of my
    own words /s

    If you want me to pay more attention to what you say, you first need
    to return the favor, and at least TRY to find an error in what I say,
    and be based on more than just that you think that can't be right.
    But you can't do that, as you don't actually know any facts about the
    field that you can point to qualified references.
    You cannot show that my premises are actually false.
    To show that they are false would at least require showing that they
    contradict each other.
    Accepting your premises makes the problem uninteresting.
    That seems to indicate that you are admitting that you cheated when you
    discussed this with ChatGPT. You gave it a faulty basis and then argued
    against that.
    Just no. Do you believe that I didn't write this myself after all?

    They are also conventional within the context of software
    engineering. That software engineering conventions seem incompatible
    with computer science conventions may refute the latter.
    lol

    That a halt decider must report on the behavior that it itself is
    contained within seems to be an incorrect convention.
    Just because you don't like the undecidability of the halting problem?

    u32 HHH1(ptr P) // line 721
    u32 HHH(ptr P) // line 801
    The above two functions have identical C code except for their name.

    The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt. This
    conclusively proves that the pathological relationship between DDD and
    HHH makes a difference in the behavior of DDD.
    That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may
    give different answers, but then exactly one of them must be wrong.
    Do they both call HHH? How does their execution differ?


    DDD halts. HHH says DDD doesn't halt. HHH1 says DDD halts. HHH is wrong, just as you say.

    PO is totally confused about what his program does and why, so he invents all sorts of magical
    explanations for what happens, like "pathological self reference changes simulation behaviour" or
    whatever. I have tried in the past to explain VERY CAREFULLY to PO why his "exact copy" behaves
    differently from the source of the copy, but it just washes over his head, and he carries on
    spouting the same garbage a few days later. Bottom line: he just doesn't understand his own code.

    He claimed H1 was an exact copy of H and produced a different result (blamed on PSR) but it turns
    out... IT WASN'T AN EXACT COPY. H and H1 test addresses in the global trace entries against the
    addresses of H and H1 respectively, so they are not identical after all. No surprise they give
    different results for the same input D - nothing mysterious here!

    He claimed HHH1 was an exact copy of HHH and produced a different result (blamed on PSR) but it
    turns out... IT WASN'T AN EXACT COPY. HHH and HHH1 contain /their own/ *local static* variable
    (execution_trace) to communicate between their nested simulations. (This is what is tested to set
    Root differently for the outer simulation...) The result is that HHH1 is effectively isolated from
    all simulations of DDD, because DDD is using HHH's execution_trace rather than HHH1's. So HHH1 is
    effectively a UTM with no abort logic, and sees the full DDD simulation. Long and short of it:
    HHH/HHH1 implement different logic and are NOT copies, so no surprise they give different results
    for the same input - nothing mysterious here!

    PO doesn't half talk some bollocks about his own code! :)

    I see in a reply to you he claims:
    (b) HHH and HHH1 have verbatim identical c source
    code, except for their differing names.
    (c) DDD emulated by HHH has different behavior than
    DDD emulated by HHH1.

    Well, the criterion for a copy is whether the algorithm described by the code is exactly the
    original algorithm, NOT simply whether the C code is identical. By putting his naff local static
    execution_trace variable in HHH/HHH1 which is shared between DDD [which calls HHH] and HHH, but not
    between DDD and HHH1, he makes the /logic/ of HHH/HHH1 different. I think he knows that because he
    is deliberately trying to mislead you in his carefully worded focus on the "C" code being the same.
    Well, if PO had done his coding properly in the first place and stuck to the obvious restrictions
    like "no mutable static data" and so on, then simply copying the C code /would/ be enough to create
    a proper copy. PO doesn't understand any of that...

    Here is a link to my original post where I pointed all this out to PO a few months ago:

    msgid: <0amdndFJSZSzYD77nZ2dnZfqnPednZ2d@brightview.co.uk>
    subject: Re: This function proves that only the outermost HHH examines the execution trace
    date: Fri, 26 Jul 2024 20:46:52 +0100

    Of course, once his code is corrected so H1, HHH1 are proper copies of H, HHH respectively, the
    simulation behaviour is EXACTLY the same. (I've verified that whilst playing with the code,
    although it was not in any doubt.)


    Mike.

  • From Richard Damon@21:1/5 to olcott on Tue Oct 22 23:02:21 2024
    On 10/22/24 10:27 AM, olcott wrote:
    On 10/22/2024 6:22 AM, Richard Damon wrote:
    On 10/21/24 11:04 PM, olcott wrote:
    On 10/21/2024 9:42 PM, Richard Damon wrote:
    On 10/21/24 7:08 PM, olcott wrote:
    On 10/21/2024 6:05 PM, Richard Damon wrote:
    On 10/21/24 6:48 PM, olcott wrote:
    On 10/21/2024 5:34 PM, Richard Damon wrote:
    On 10/21/24 12:29 PM, olcott wrote:
    On 10/21/2024 10:17 AM, joes wrote:
    Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott:
    On 10/21/2024 3:39 AM, joes wrote:
    Am Sun, 20 Oct 2024 17:36:25 -0500 schrieb olcott:
    On 10/20/2024 4:41 PM, Richard Damon wrote:
    On 10/20/24 4:23 PM, olcott wrote:
    On 10/20/2024 2:13 PM, Richard Damon wrote:
    On 10/20/24 1:33 PM, olcott wrote:

    Note, I DID tell that to Chat GPT, and it agrees that DDD, when
    the criterion is what DDD actually does, which is what the question
    MUST be about to be about the Termination or Halting problem, then
    DDD WILL HALT since HHH(DDD) will return 0 to it.
    No one ever bothered to notice that (a) A decider cannot have its
    actual self as its input.
    lolwut? A decider is a normal program, and it should be handled like
    every other input.

    (b) In the case of the pathological input DDD to emulating
    termination analyzer HHH the behavior of the directly executed DDD
    (not an input to HHH) is different than the behavior of DDD that is
    an input to HHH.
    DDD *is* the input to HHH.

    The executed DDD calls HHH() and this call returns. The emulated DDD
    calls HHH(DDD) and this call cannot possibly return.
    But whyyy doesn't HHH abort?
    You can click on the link and cut-and-paste the question to see the
    whole answer in complete detail.
    I am not interested in arguing with a chatbot. Make the points
    yourself.


    1. **Nature of `DDD()`**:
        - `DDD()` simply calls `HHH(DDD)`. It does not perform any
    additional operations that could create a loop or prevent it from
    returning.
        - If `HHH` returns (whether by aborting or completing its
    simulation), `DDD()` can return to its caller.

    2. **Behavior of `HHH`**:
        - If `HHH` is able to simulate `DDD()` and return, it should
    report that `DDD()` terminates. If `HHH` aborts due to detecting
    non-termination, it does not reflect the actual execution of
    `DDD()`; it leads to a conclusion that may not align with the true
    behavior.

    3. **Contradiction in Results**:
        - If `HHH` claims that `DDD()` does not halt, but in reality
    `DDD()` can terminate once `HHH` returns, then `HHH` is providing
    an incorrect analysis.
        - The contradiction lies in the ability of `HHH` to detect
    non-termination theoretically while simultaneously allowing `DDD()`
    to halt in practical execution.

    ### Conclusion:
    Given the nature of `DDD()` and how `HHH` operates, it becomes
    clear that `HHH` cannot consistently provide a correct answer
    about whether `DDD()` halts. The dynamics of calling and returning
    create a scenario where the outcomes conflict. Thus, `HHH` is
    fundamentally flawed in its role as a termination analyzer for
    functions like `DDD()`.

    Did ChatGPT generate that?
    If it did then I need *ALL the input that caused it to generate
    that*

    https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e
    If you did not start with the basis of this link then you cheated.

    No, someone using some REAL INTELLIGENCE, as opposed to a program
    using "artificial intelligence" that had been loaded with false
    premises and other lies.

    Sorry, you are just showing that you have NO intelligence, and
    are depending on a program that includes a disclaimer on every
    page that its answers may have mistakes.

    I specifically asked it to verify that its key
    assumption is correct and it did.

    No, it said that given what you told it (which was a lie)

    I asked it if what it was told was a lie and it
    explained how what it was told is correct.

    Because Chat GPT doesn't care about lying.


    ChatGPT computes the truth and you can't actually
    show otherwise.

    Of course it doesn't, that is why it has the disclaimer at the bottom
    of the page that it can make mistakes.


    Yet again you do not pay COMPLETE ATTENTION !!!
    I claim X and you refute an incorrect paraphrase of X.

    ChatGPT can and does make mistakes.
    ChatGPT made no mistakes in analyzing my work and you
    can't show otherwise with any actual reasoning.

    The most that you can possibly show (and I don't think that
    you can show this) is that my premises are unconventional.



    Instead of me having to repeat the same thing to
    you fifty times why don't you do what I do to
    focus my own concentration read what I say many
    times over and over until you at least see what
    I said.

    Because what you are asking for is nonsense.

    Of course an AI that has been programmed with lies might repeat the
    lies.

    When it is told the actual definition, after being told your lies,
    and asked if your conclusion could be right, it said No.

    Thus, it seems by your logic, you have to admit defeat, as the AI,
    after being told your lies, still was able to come up with the
    correct answer, that DDD will halt, and that HHH is just incorrect
    to say it doesn't.


    I believe that the "output" Joes provided was fake on the
    basis that she did not provide the input to derive that
    output and did not use the required basis that was on the
    link.


    But that doesn't prove anything.


    Correct, we toss out Joes' rebuttal because it doesn't
    prove anything.

    If you want me to pay more attention to what you say, you first need
    to return the favor, and at least TRY to find an error in what I
    say, and be based on more than just that you think that can't be right.

    You are merely spouting off what you have been indoctrinated
    to believe and cannot provide any actual basis in reasoning
    why I am incorrect.

    No, I *HAVE* provided the reason, but you have brainwashed yourself


    All that you have is the dogma of the received view.

    The most that you can say is that the software engineering
    that I propose seems inconsistent with the received view
    in computer science.

    You cannot show that it is actually false. You can only
    show that my assumptions are incompatible with yours.


    But you can't do that, as you don't actually know any facts about
    the field that you can point to qualified references.


    You cannot show that my premises are actually false. The
    most that you can do is show that they are unconventional.

    Of course I have, they presume definitions in conflict with the Formal
    System you claim to be working in.


    Not at all. My definitions specify the formal system
    that I am working in and you cannot show that these
    definitions are false.

    Of course I have, you are just too stupid to understand.

    Halting / Termination are properties of PROGRAMS.

    Programs include ALL the code they use.

    Your input to HHH doesn't include all the code that DDD uses, so is
    incorrect,

    And your definition of what you are claiming to be correctly determining
    isn't a property of the PROGRAM that you talk about, so can't be correct.




    The most that you can show is that they are unconventional.
    I don't think that you can even do that. Within software
    engineering my definitions are conventional.

    No, they are in VIOLATION of the definitions of the field, so are just INCORRECT when you try to claim you are in the field.



    To show that they are false would at least require showing
    that they contradict each other.

    No, just that they contradict statements already established in the
    system you claim to be working in.


    Not at all.

    *X correctly_emulated_by Y*
    Is defined to mean one or more x86 instructions of X are
    emulated by Y according to the semantics of the x86 language.

    And by that definition, it does NOT produce a final behavior of the
    program emulated, so it is just an INVALID argument to go from the
    partial emulation not reaching the return to saying that the input
    can't represent a halting program.


    void DDD()
    {
      HHH(DDD);
      return;
    }

    When HHH is an x86 emulation based termination analyzer
    then each DDD *correctly_emulated_by* any HHH that this DDD
    calls cannot possibly return no matter what this HHH does.

    Nope, because your definition of "correct emulation" means that HHH has
    not determined ANY behavior of the input, unless it reaches a final
    state. Then it can call the input halting.


    The above is conventional software engineering.


    Nope, it is just ignorant idiocy that doesn't understand the meaning
    of the words it is using, because you faked yourself out with an
    equivocation.

    Sorry, you are just proving how utterly stupid and ignorant you are that
    you can't see errors in what you say.
