• Re: Correct simulation of DDD by HHH is proven --- Zwarts stupidly igno

    From Fred. Zwarts@21:1/5 to All on Thu Aug 21 10:02:37 2025
    Op 20.aug.2025 om 16:31 schreef olcott:
    On 8/20/2025 4:16 AM, Fred. Zwarts wrote:
    Op 19.aug.2025 om 16:41 schreef olcott:
    On 8/19/2025 1:46 AM, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a
    correct answer, as Linz and others have proved and as you have
    *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After
    all, there are lots of recursive algorithms around, and plenty of
    them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of
    recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive
    call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at
    least at first glance to hold together, makes intuitive sense, and
    goes some way to explaining why someone could cling to the wrong
    answer for 22 years.

    If we were talking *recursive call*, (c) might end the recursion in
    the way you describe due to having been called with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to (a).
    That's how a simple Factorial implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called,
    which in turn nests a Factorial(1) call.  That call returns and the
    recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that the
    nested calls are made with different arguments.

    Indeed, although of course it doesn't have to be that way.

    In the case of PO's DD/HHH, the arguments are (or at least
    represent) exactly the same computation to be emulated at each
    level.  So (c) [an L2 = Level 2 emulation] will not suddenly decide
    it's had enough - if it did, then (a) would have done it earlier.

    Then either there's some state kicking around, or HHH will
    automatically decide that all recursion is runaway recursion.

    But with DD/HHH we have *recursive emulation*.  So HHH [a] is still
    running when (c) is reached - it's busy running around its "emulate
    instruction" loop, testing each time round whether it's seen enough
    evidence to decide to quit emulating.
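
    For concreteness, here is a rough sketch of what such an
    emulate-one-instruction-then-test loop might look like, written
    against a toy notion of a "program"; none of the names below are
    taken from Halt7.c, and the abort test is just a step limit
    standing in for whatever pattern match HHH actually performs.

        #include <stdio.h>

        #define HALTED    (-1)
        #define MAX_STEPS 1000

        /* One "instruction" of a toy program: given the current state,
           return the next state, or HALTED when a final state is reached. */
        typedef int (*StepFn)(int state);

        int Simulating_Decider(StepFn program, int state)
        {
            for (int steps = 0; steps < MAX_STEPS; ++steps)
            {
                state = program(state);   /* emulate one instruction */
                if (state == HALTED)
                    return 1;             /* final halt state reached */
            }
            return 0;                     /* "seen enough": quit emulating */
        }

        /* Counts down to zero and then halts. */
        int countdown(int n) { return (n == 0) ? HALTED : n - 1; }

        /* Never reaches a final state. */
        int spin(int n) { return n; }

        int main(void)
        {
            printf("countdown halts? %d\n", Simulating_Decider(countdown, 5));  /* 1 */
            printf("spin halts?      %d\n", Simulating_Decider(spin, 5));       /* 0 */
            return 0;
        }

    The decider returns 1 only when the emulated program reaches its
    final state, and 0 whenever it decides to stop emulating first.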

    Okay, so that's clearly a design flaw. But I do see the point.
    /Because/ of that design flaw, the fix isn't going to be an easy one.


    Lines 996 through 1006 match the
    *recursive simulation non-halting behavior pattern*
    https://github.com/plolcott/x86utm/blob/master/Halt7.c

    And it shows the bug in the code. HHH incorrectly assumes
    non-terminating behaviour when there is only a finite recursion.
    It does not correctly analyse the conditional branch instructions
    during the simulation. It does not prove that the conditions for the
    alternate branches would never be met if the simulation continued.
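
    To illustrate that general point (this is not the x86-level
    machinery of Halt7.c, just a plain C analogue): a finite recursion
    whose termination hinges on a conditional branch. A detector that
    merely flags "the same function is invoked again", without proving
    that the alternate branch can never be taken, would misclassify it
    as non-halting.

        #include <stdio.h>

        static int depth = 0;

        /* Finite recursion: the alternate branch (falling through and
           returning) is eventually taken, so Finite() halts. */
        void Finite(void)
        {
            if (depth++ < 3)
                Finite();
        }

        int main(void)
        {
            Finite();
            printf("Finite() halted after %d calls\n", depth);
            return 0;
        }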


    <snip>

    OTOH you could say that since HHH only has to handle ONE INPUT CASE
    (DD), PO might have optimised the HHH code to just return 0
    straight away, and the result would be the same!  That's true - DD
    would still halt, and HHH would still claim it never halts. The
    problem here is that all the emulation stuff is /required/ so that
    PO can confuse himself into thinking something more magical is
    going on, justifying various crazy claims.
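
    A sketch of that observation, with HHH stripped down to an
    unconditional "return 0" and DD paraphrased from memory rather than
    copied from Halt7.c: the verdict comes out the same, and DD plainly
    halts anyway.

        #include <stdio.h>

        int HHH(int (*P)(void));

        /* Paraphrase of the DD under discussion, not the literal Halt7.c code. */
        int DD(void)
        {
            int Halt_Status = HHH(DD);
            if (Halt_Status)          /* only taken if HHH reports "halts" */
                for (;;) { }          /* ... in which case DD would loop forever */
            return Halt_Status;       /* HHH said 0, so DD returns: it halts */
        }

        /* Stripped-down HHH: no emulation at all, just the fixed wrong answer. */
        int HHH(int (*P)(void))
        {
            (void)P;
            return 0;                 /* unconditionally: "the input does not halt" */
        }

        int main(void)
        {
            printf("HHH(DD) = %d  (claims DD never halts)\n", HHH(DD));
            printf("DD()    = %d  (yet DD halted)\n", DD());
            return 0;
        }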

    Hell of a way to run a railroad. Still, no harm done. At least now I
    know how he /could/ have got the programming right, even if his
    theory is further round the bend than Harpic.


    Turing machine deciders only compute the mapping
    from their inputs...

    and the input to HHH(DD) specifies runaway recursion.

    As usual an incorrect claim without evidence.
    The input specifies a DD based on an HHH that aborts the simulation
    after a few cycles. This means that the input specifies a program
    with a finite recursion, followed by a final halt state.
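
    A toy model of that claim (nothing here is from Halt7.c; "budget"
    simply stands in for the number of cycles the emulated HHH runs
    before it aborts): the nesting bottoms out, and the whole thing
    ends in an ordinary return, i.e. a halt.

        #include <stdio.h>

        #define BUDGET 3   /* stands in for "aborts after a few cycles" */

        int HHH_model(int budget);

        /* The DD the input specifies: built on an HHH that gives up eventually. */
        int DD_model(int budget)
        {
            return HHH_model(budget);
        }

        int HHH_model(int budget)
        {
            if (budget == 0)
                return 0;                 /* inner HHH aborts its simulation */
            return DD_model(budget - 1);  /* one more level of nesting */
        }

        int main(void)
        {
            /* The recursion is finite and ends in a return, i.e. a halt. */
            printf("DD_model(%d) = %d and halted\n", BUDGET, DD_model(BUDGET));
            return 0;
        }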

    The reason that I want to quit talking to
    you is that I keep correcting your error
    that stopping running means halting, when
    stopping running only means halting if
    the final halt state is reached, and you
    ignore this correction.


    As usual, incorrect claims without valid counter-arguments.
    You keep saying that the failure of HHH to reach the specified halt
    state proves that non-halting behaviour is specified, but you cannot
    prove that.
    The reason you don't want to talk any more is that you cannot find a
    proof for your incorrect claim that non-terminating behaviour is
    specified, when HHH merely fails to see the specified final halt state.
    You don't want to open your eyes and see the facts. You prefer to keep
    your eyes closed and pretend that the facts do not exist.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)