• Re: key error in all the proofs --- Mike's correction of Joes

    From joes@21:1/5 to All on Thu Aug 15 07:01:13 2024
On Wed, 14 Aug 2024 16:08:34 -0500, olcott wrote:
    On 8/14/2024 3:56 PM, Mike Terry wrote:
    On 14/08/2024 18:45, olcott wrote:
    On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
    On 8/14/2024 2:30 AM, Mikko wrote:
    On 2024-08-13 13:30:08 +0000, olcott said:
    On 8/13/2024 6:23 AM, Richard Damon wrote:
    On 8/12/24 11:45 PM, olcott wrote:

*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*

Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH)
will reach that return.

A complete emulation of a non-terminating input has always been a contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of DDD by HHH cannot possibly reach its own "return" instruction final halt state.

    That is not a meaningful prediction because a complete and
    unlimited emulation of DDD by HHH never happens.

    A complete emulation is not required to correctly predict that a
    complete emulation would never halt.
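
(A toy illustration of the claim just made, not code from this thread: for a trivially small class of loops, non-termination can be predicted from a finite analysis without running the input to completion. The "analyzer" below inspects a loop description instead of emulating machine code.)

#include <stdio.h>

struct toy_loop {
    int counter_start;
    int counter_step;      /* added to the counter each iteration      */
    int exit_when_ge;      /* the loop exits once counter >= this      */
};

/* Returns 1 if the described loop halts, 0 if it provably runs forever. */
int toy_halts(struct toy_loop m)
{
    if (m.counter_start >= m.exit_when_ge)
        return 1;                  /* exits before the first iteration  */
    return m.counter_step > 0;     /* a non-increasing counter never
                                      reaches the exit bound            */
}

int main(void)
{
    struct toy_loop runs_forever = { 0, 0, 10 };
    struct toy_loop terminates   = { 0, 1, 10 };
    printf("%d %d\n", toy_halts(runs_forever), toy_halts(terminates));  /* prints 0 1 */
    return 0;
}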
    What do we care about a complete simulation? HHH isn't doing one.

    Please go read how Mike corrected you.

    Lol, dude...  I mentioned nothing about complete/incomplete
    simulations.
*You corrected Joes' most persistent error*
    She made sure to ignore this correction.
    Would you please point it out again?

    But while we're here - a complete simulation of input D() would clearly
    halt.
    A complete simulation *by HHH* remains stuck in infinite recursion until aborted.
    Yes, HHH can't simulate itself completely. I guess no simulator can.

    Termination analyzers / halt deciders are only required to correctly
    predict the behavior of their inputs, thus the behavior of non-inputs is outside of their domain.
    The input is just the description of D, which halts if H aborts.
    The non-input would be if D called a non-aborting simulator,
    because it is not being simulated by one that doesn't abort.
    We only care about the recursive construction, not your implementation
    of D that does NOT call its own simulator.
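
(A minimal sketch of the recursive construction being referred to: the test function calls the very analyzer that is asked about it. The body is the DDD that olcott posts later in the thread; HHH's prototype is an assumption, since its definition is not shown here.)

extern int HHH(void (*p)(void));   /* olcott's analyzer, prototype assumed */

void DDD(void)
{
    HHH(DDD);     /* DDD invokes the very analyzer that is asked about it */
    return;       /* ... and halts as soon as that call returns           */
}

/* The disputed question: what should HHH(DDD) report, given that DDD
   halts exactly when the HHH it calls returns (i.e. aborts)?             */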

*This makes the words you say below moot*
    You have seen that yourself, e.g. with main() calling DDD(), or
    UTM(DDD), or HHH1(DDD).  [All of those simulate DDD to completion and
    see DDD return.  What I said earlier was that HHH(DDD) does not
    simulate DDD to completion, which I think everyone recognises - it
aborts before DDD() halts.]
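
(A self-contained model of the point Mike makes here: HHH(DDD) reports non-halting, yet DDD called directly reaches its return. Ordinary recursion plus setjmp/longjmp stands in for olcott's x86 emulation and abort rule, which is an assumption about the mechanism, not his actual code.)

#include <setjmp.h>
#include <stdio.h>

typedef void (*Func)(void);

static int simulating = 0;        /* is an outer HHH "simulation" active? */
static jmp_buf abort_point;

int HHH(Func p)
{
    if (simulating)               /* p called HHH again mid-simulation:   */
        longjmp(abort_point, 1);  /* the outer HHH abandons its simulation */

    simulating = 1;
    if (setjmp(abort_point) != 0) {
        simulating = 0;
        return 0;                 /* aborted: predict "does not halt"     */
    }
    p();                          /* "simulate" p by simply running it    */
    simulating = 0;
    return 1;                     /* p returned: report "halts"           */
}

void DDD(void)
{
    HHH(DDD);
}

int main(void)
{
    printf("HHH(DDD) = %d\n", HHH(DDD));  /* prints 0                     */
    DDD();                                /* yet DDD, called directly,    */
    printf("DDD() returned\n");           /* reaches its return and halts */
    return 0;
}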

    --
On Sat, 20 Jul 2024 12:35:31 +0000, WM wrote in sci.math:
    It is not guaranteed that n+1 exists for every n.

  • From Fred. Zwarts@21:1/5 to All on Thu Aug 15 10:20:57 2024
On 14 Aug 2024 at 23:08, olcott wrote:
    On 8/14/2024 3:56 PM, Mike Terry wrote:
    On 14/08/2024 18:45, olcott wrote:
    On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
    On 8/14/2024 2:30 AM, Mikko wrote:
    On 2024-08-13 13:30:08 +0000, olcott said:
    On 8/13/2024 6:23 AM, Richard Damon wrote:
    On 8/12/24 11:45 PM, olcott wrote:

*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*

Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH)
will reach that return.

A complete emulation of a non-terminating input has always been a contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of DDD by HHH cannot possibly reach its own "return" instruction final halt state.

That is not a meaningful prediction because a complete and unlimited emulation of DDD by HHH never happens.

    A complete emulation is not required to correctly predict that a
    complete emulation would never halt.
    What do we care about a complete simulation? HHH isn't doing one.


    Please go read how Mike corrected you.


    Lol, dude...  I mentioned nothing about complete/incomplete simulations.


*You corrected Joes' most persistent error*
    She made sure to ignore this correction.

    But while we're here - a complete simulation of input D() would
    clearly halt.

    _DDD()
    [00002172] 55         push ebp      ; housekeeping
    [00002173] 8bec       mov ebp,esp   ; housekeeping
    [00002175] 6872210000 push 00002172 ; push DDD
    [0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
[0000217f] 83c404     add esp,+04   ; remove the pushed argument
[00002182] 5d         pop ebp       ; housekeeping
[00002183] c3         ret           ; DDD's final halt state
    Size in bytes:(0018) [00002183]

    A complete simulation *by HHH* remains stuck in
    infinite recursion until aborted.

    It is aborted, so the infinite recursion is just a dream. Dreams are no substitute for facts.
    The complete simulation of the HHH that *aborts* does not remain stuck,
    as proven when it is simulated by e.g. HHH1 or UTM. No abort is needed
    to simulate it up to the end. But when simulated by itself, it is
    aborted prematurely.
We are talking about the HHH that aborts. When we want a complete
simulation of it, we don't want to change this input from an HHH that
aborts to an HHH that does not abort.
    We know that you are cheating, by using the Root variable, so that the
    aborting HHH simulates another input, namely the non-aborting HHH.


    Termination analyzers / halt deciders are only required
    to correctly predict the behavior of their inputs.

    Exactly. And the input is a program based on the HHH that aborts and halts.


    Termination analyzers / halt deciders are only required
    to correctly predict the behavior of their inputs, thus
    the behavior of non-inputs is outside of their domain.

So, it makes no sense to dream about an HHH that does not halt when
simulating an HHH that does halt. The HHH that does not halt is a
    non-input and outside the domain.

    It seems that you are unable to see the difference between the simulator
    and its input. The problem is that a simulator cannot possibly simulate correctly when its own algorithm is part of the input.

  • From Fred. Zwarts@21:1/5 to All on Thu Aug 15 17:34:01 2024
On 15 Aug 2024 at 15:18, olcott wrote:
    On 8/15/2024 2:01 AM, joes wrote:
On Wed, 14 Aug 2024 16:08:34 -0500, olcott wrote:
    On 8/14/2024 3:56 PM, Mike Terry wrote:
    On 14/08/2024 18:45, olcott wrote:
    On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
    On 8/14/2024 2:30 AM, Mikko wrote:
    On 2024-08-13 13:30:08 +0000, olcott said:
    On 8/13/2024 6:23 AM, Richard Damon wrote:
    On 8/12/24 11:45 PM, olcott wrote:

*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*
Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH)
will reach that return.

A complete emulation of a non-terminating input has always been a contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of DDD by HHH cannot possibly reach its own "return" instruction final halt state.

    That is not a meaningful prediction because a complete and
    unlimited emulation of DDD by HHH never happens.

A complete emulation is not required to correctly predict that a complete emulation would never halt.
    What do we care about a complete simulation? HHH isn't doing one.

    Please go read how Mike corrected you.

    Lol, dude...  I mentioned nothing about complete/incomplete
    simulations.
*You corrected Joes' most persistent error*
    She made sure to ignore this correction.
    Would you please point it out again?


    I did in the other post.

But while we're here - a complete simulation of input D() would clearly halt.
A complete simulation *by HHH* remains stuck in infinite recursion until aborted.
    Yes, HHH can't simulate itself completely. I guess no simulator can.


    A simulating termination analyzer can correctly simulate
    itself simulating an input that halts.

    void DDD()
    {
      HHH(DDD);
      return;
    }

    HHH correctly predicts that an unlimited emulation of
    DDD by HHH would never reach the "return" instruction of DDD.

    This prediction is wrong, because the unlimited simulation of HHH by
    HHH1 shows that it halts.
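
(A hedged sketch of the HHH1/UTM side of this point: any simulator that DDD does not itself call can run DDD to completion, because DDD's inner HHH(DDD) call aborts and returns. HHH1's actual definition is not shown in this thread; here complete simulation is modeled as direct execution.)

void DDD(void);                    /* the DDD defined above (calls HHH)    */

int HHH1(void (*p)(void))          /* stand-in for HHH1 or a UTM           */
{
    p();                           /* complete "simulation" = just run p;
                                      DDD's inner HHH(DDD) call aborts and
                                      returns, so DDD reaches its return   */
    return 1;                      /* report "halts"                       */
}

/* HHH1(DDD) therefore reports that DDD halts, which is Fred's point.      */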


Termination analyzers / halt deciders are only required to correctly
predict the behavior of their inputs, thus the behavior of non-inputs is outside of their domain.

    The input is just the description of D, which halts if H aborts.

DDD emulated by HHH according to the semantics of the x86
language never reaches its own "return" instruction
whether or not HHH aborts this emulation at some point,
thus this DDD never halts.

Incorrect conclusion. The correct conclusion is: thus the simulation
    failed.


    The non-input would be if D called a non-aborting simulator,
    because it is not being simulated by one that doesn't abort.
    We only care about the recursive construction, not your implementation
    of D that does NOT call its own simulator.

*This makes the words you say below moot*
You have seen that yourself, e.g. with main() calling DDD(), or
UTM(DDD), or HHH1(DDD).  [All of those simulate DDD to completion and
see DDD return.  What I said earlier was that HHH(DDD) does not
simulate DDD to completion, which I think everyone recognises - it
aborts before DDD() halts.]


  • From Mikko@21:1/5 to olcott on Fri Aug 16 11:13:57 2024
    On 2024-08-15 13:18:06 +0000, olcott said:

    On 8/15/2024 2:01 AM, joes wrote:
On Wed, 14 Aug 2024 16:08:34 -0500, olcott wrote:
    On 8/14/2024 3:56 PM, Mike Terry wrote:
    On 14/08/2024 18:45, olcott wrote:
    On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
    On 8/14/2024 2:30 AM, Mikko wrote:
    On 2024-08-13 13:30:08 +0000, olcott said:
    On 8/13/2024 6:23 AM, Richard Damon wrote:
    On 8/12/24 11:45 PM, olcott wrote:

*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*
Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH)
will reach that return.

A complete emulation of a non-terminating input has always been a contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of DDD by HHH cannot possibly reach its own "return" instruction final halt state.

    That is not a meaningful prediction because a complete and
    unlimited emulation of DDD by HHH never happens.

A complete emulation is not required to correctly predict that a complete emulation would never halt.
    What do we care about a complete simulation? HHH isn't doing one.

    Please go read how Mike corrected you.

    Lol, dude...  I mentioned nothing about complete/incomplete
    simulations.
*You corrected Joes' most persistent error*
    She made sure to ignore this correction.
    Would you please point it out again?


    I did in the other post.

But while we're here - a complete simulation of input D() would clearly halt.
A complete simulation *by HHH* remains stuck in infinite recursion until aborted.
    Yes, HHH can't simulate itself completely. I guess no simulator can.


    A simulating termination analyzer can correctly simulate
    itself simulating an input that halts.

void DDD()
{
  HHH(DDD);
  return;
}

That DDD halts if HHH halts, but at least your HHH fails to simulate
itself (with DDD as its parameter) all the way to its return. Perhaps it can simulate

void XXX() {
  HHH(YYY);
}

void YYY() {
  Output("Hello!");
}

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Fri Aug 16 16:13:20 2024
    On 2024-08-16 11:39:15 +0000, olcott said:

    On 8/16/2024 3:13 AM, Mikko wrote:
    On 2024-08-15 13:18:06 +0000, olcott said:

    On 8/15/2024 2:01 AM, joes wrote:
On Wed, 14 Aug 2024 16:08:34 -0500, olcott wrote:
    On 8/14/2024 3:56 PM, Mike Terry wrote:
    On 14/08/2024 18:45, olcott wrote:
    On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
    On 8/14/2024 2:30 AM, Mikko wrote:
    On 2024-08-13 13:30:08 +0000, olcott said:
    On 8/13/2024 6:23 AM, Richard Damon wrote:
    On 8/12/24 11:45 PM, olcott wrote:

*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*
Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH)
will reach that return.

A complete emulation of a non-terminating input has always been a contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of DDD by HHH cannot possibly reach its own "return" instruction final halt state.

That is not a meaningful prediction because a complete and unlimited emulation of DDD by HHH never happens.

A complete emulation is not required to correctly predict that a complete emulation would never halt.
What do we care about a complete simulation? HHH isn't doing one.
    Please go read how Mike corrected you.

    Lol, dude...  I mentioned nothing about complete/incomplete
    simulations.
*You corrected Joes' most persistent error*
    She made sure to ignore this correction.
    Would you please point it out again?


    I did in the other post.

But while we're here - a complete simulation of input D() would clearly halt.
A complete simulation *by HHH* remains stuck in infinite recursion until aborted.
    Yes, HHH can't simulate itself completely. I guess no simulator can.


    A simulating termination analyzer can correctly simulate
    itself simulating an input that halts.

    void DDD()
    {
       HHH(DDD);
       return;
    }

That DDD halts if HHH halts, but at least your HHH fails to simulate
itself (with DDD as its parameter) all the way to its return. Perhaps it can simulate

void XXX() {
  HHH(YYY);
}

void YYY() {
  Output("Hello!");
}


void YYY()
{
  OutputString("Hello!\n");
}

void XXX()
{
  HHH(YYY);
}

int main()
{
  XXX();
}

When corrected, your code ran fine.
    You never have HHH simulating itself.

    Thanks!

    You seem to have two output functions: Output and OutputString.
What is the difference?

    I would expect that HHH says that YYY halts. Have you tried?

    What does HHH say about XXX?
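
(A hedged sketch of the experiment being asked about: query HHH directly on YYY and on XXX and print its verdicts. printf stands in for Output/OutputString, whose signatures are not shown here, and HHH is assumed to return nonzero for "halts"; HHH itself is not defined here, so linking requires olcott's implementation.)

#include <stdio.h>

extern int HHH(void (*p)(void));          /* olcott's analyzer, prototype assumed */

void YYY(void) { printf("Hello!\n"); }    /* printf substituted for OutputString  */
void XXX(void) { HHH(YYY); }

int main(void)
{
    printf("HHH(YYY) = %d\n", HHH(YYY));  /* YYY plainly halts                     */
    printf("HHH(XXX) = %d\n", HHH(XXX));  /* XXX halts iff its call to HHH returns */
    return 0;
}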

    --
    Mikko
