• Re: Correct simulation of DDD by HHH is proven --- Heathfield FINALLY a

    From Richard Heathfield@21:1/5 to olcott on Tue Aug 19 03:16:10 2025
    On 19/08/2025 03:00, olcott wrote:
    *Context for what Richard Heathfield agreed to*

    Close but no banana.

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input

    I certainly didn't agree to that.

    until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.

    I am prepared to accept that this is how you cope with runaway
    recursion, yes.

    (b) Simulated input reaches its simulated "return" statement:
    return 1.

    This doesn't happen as far as I'm aware.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);

    You know that'll be 0, so memoise it:

    Halt_Status = 0;


      if (Halt_Status)

    if(0)... so no...

        HERE: goto HERE;

    After skipping that, we get to:

      return Halt_Status;

    return 0;

    }

    What value should HHH(DD) correctly return?

    You've already said that it correctly returns 0 - which correctly
    describes HHH's action (aborted and concluded non-halting).

    What it doesn't tell you is what HHH should do next.

    DD tells you that.
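
    For concreteness, here is a minimal compilable sketch of that "memoised"
    reading. HHH_stub is an invented stand-in (not PO's HHH); it simply returns
    the 0 that HHH(DD) is said to return:

        #include <stdio.h>

        typedef int (*ptr)(void);

        /* Invented stand-in: returns the 0 that HHH(DD) is said to return. */
        static int HHH_stub(ptr P) { (void)P; return 0; }

        static int DD(void)
        {
            int Halt_Status = HHH_stub(DD);   /* memoised: known to be 0 */
            if (Halt_Status) {                /* if (0) ... so never taken */
                HERE: goto HERE;
            }
            return Halt_Status;               /* returns 0: DD halts */
        }

        int main(void)
        {
            printf("DD() returned %d\n", DD());   /* prints 0 */
            return 0;
        }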

    <AI nonsense snipped>

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to dbush on Tue Aug 19 03:40:09 2025
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a
    correct answer, as Linz and other have proved and as you have
    *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion.
    After all, there are lots of recursive algorithms around, and
    plenty of them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of
    recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
          (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    So (c) stops *its* simulation of DD. THIS HAS NO IMPACT ON (a)
    AND (b).

    (c) now returns 0 to (b)'s DD.

    (b) regains control, accepts 0 from (c), assigns 0 to
    Halt_Status, and returns 0 to (a).

    (a) regains control, accepts 0 from (b), assigns 0 to
    Halt_Status, and returns 0 to the original DD.

    If the original DD has a caller, it gets a 0, incorrectly
    indicating non-halting.

    Looking at it this way, I no longer see the need for memoisation.
    All that is necessary is for HHH *only* to abort the simulation
    it's hosting, *not* the simulation that invoked it.

    There's your bug, Mr Olcott.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to olcott on Tue Aug 19 04:22:19 2025
    On 19/08/2025 03:54, olcott wrote:
    On 8/18/2025 9:40 PM, Richard Heathfield wrote:

    <snip>

    If the original DD has a caller, it gets a 0, incorrectly
    indicating non-halting.

    Looking at it this way, I no longer see the need for
    memoisation. All that is necessary is for HHH *only* to abort
    the simulation it's hosting, *not* the simulation that invoked it.

    There's your bug, Mr Olcott.


    It is your failing to understand that HHH does not
    have enough evidence to abort (a) until after it has
    done more recursive simulations

    Let it. It doesn't matter, as long as you have enough stack for it.

    and then it aborts
    (a) killing them all.

    Why would you do that?

    Anyway, that's your mistake. You wanted to know where your bug
    was? Well, it's right there.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to olcott on Tue Aug 19 04:43:36 2025
    On 19/08/2025 04:33, olcott wrote:
    The question posed to HHH(DD) includes
    should I abort my simulation of this input
    on the basis that it will never halt?

    You haven't established that it will never halt. You've
    established that the simulation is unlikely to complete its
    recursion.

    That's a good reason to call a halt to the recursion by unwinding
    the stack, but it's not a good reason to stop simulating.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Terry@21:1/5 to Richard Heathfield on Tue Aug 19 05:21:16 2025
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and other
    have proved and as you have *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of recursive
    algorithms around, and plenty of them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
          (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive call and recursive emulation.

    If we were talking *recursive call*, (c) might end the recursion in the way you describe due to
    having been called with different input, allowing control to percolate back through (b) and then to
    (a). That's how a simple Factorial implementation might work:

    int Factorial (int n)
    {
        if (n == 1)
            return 1;
        else
            return n * Factorial (n-1);
    }

    When evaluating Factorial(3), a nested Factorial(2) is called, which in turn nests a Factorial(1)
    call. That call returns and the recursion breaks (from inner to outer invocations).

    Great, everyone knows this example. Note that it requires that the nested calls are made with
    different arguments.

    In the case of PO's DD/HHH, the arguments are (or at least represent) exactly the same computation
    to be emulated at each level. So (c) [an L2 = Level 2 emulation] will not suddenly decide it's had
    enough - if it did, then (a) would have done it earlier.

    But with DD/HHH we have *recursive emulation*. So HHH [a] is still running when (c) is reached -
    it's busy running around its "emulate instruction" loop, testing each time round whether it's seen
    enough evidence to decide to quit emulating.

    So it's (a) that decides it's had enough and aborts (b) [and so indirectly (c)]. I.e. it just
    decides to break out of its emulation loop and return 0. (b) and (c) were of course following the
    same path (a) followed, but they are behind (a) as it takes (a) [let's say] 80 instructions to
    emulate a single instruction of (b), and we need 80 (b) instructions to get one (c) instruction
    emulated and so on.

    Recursive /emulation/ can break either from the inside like Factorial, or from the outer emulation
    which is what HHH does.
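
    A toy sketch of that outer "emulate instruction" loop may help; everything
    here (the instruction set, the step function, the "seen enough" test) is
    invented for illustration and is far simpler than PO's Halt7.c:

        #include <stdio.h>

        /* Toy instruction set, purely illustrative. */
        enum op { OP_NOP, OP_JMP_SELF, OP_RET };

        struct vm {              /* the emulator's own state for one emulation */
            const enum op *code;
            int pc;              /* emulated instruction pointer */
            int steps;           /* how many instructions we have emulated */
        };

        /* Emulate exactly one instruction; return 1 if the emulated code returned. */
        static int step(struct vm *m)
        {
            m->steps++;
            switch (m->code[m->pc]) {
            case OP_NOP:      m->pc++;  return 0;
            case OP_JMP_SELF:           return 0;   /* stays put */
            case OP_RET:                return 1;
            }
            return 1;
        }

        /* Crude stand-in for "seen enough evidence": abort after a step budget. */
        static int looks_nonterminating(const struct vm *m) { return m->steps > 100; }

        /* Outer loop: the emulator is the one running; the emulated code never
           "takes over", it is only ever advanced one instruction at a time. */
        static int emulate(const enum op *code)
        {
            struct vm m = { code, 0, 0 };
            for (;;) {
                if (step(&m))
                    return 1;            /* emulated code reached its return */
                if (looks_nonterminating(&m))
                    return 0;            /* abort and report non-halting */
            }
        }

        int main(void)
        {
            enum op halts[] = { OP_NOP, OP_NOP, OP_RET };
            enum op spins[] = { OP_JMP_SELF };
            printf("halts: %d, spins: %d\n", emulate(halts), emulate(spins));
            return 0;
        }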


    So (c) stops *its* simulation of DD. THIS HAS NO IMPACT ON (a) AND (b).

    (c) now returns 0 to (b)'s DD.

    That's not right. (a) stops its emulation of DD and returns 0 to original DD. DD then halts.
    [(b) and (c) cease to have any meaning, as they were only ever a part of something (a) was calculating.]


    (b) regains control, accepts 0 from (c), assigns 0 to Halt_Status, and returns 0 to (a).

    (a) regains control, accepts 0 from (b), assigns 0 to Halt_Status, and returns 0 to the original DD.

    Got to the same conclusion, but with wrong reasoning!


    If the original DD has a caller, it gets a 0, incorrectly indicating non-halting.

    Looking at it this way, I no longer see the need for memoisation. All that is necessary is for HHH
    *only* to abort the simulation it's hosting, *not* the simulation that invoked it.

    A simulation does not know it is being simulated, and cannot abort any outer simulation that is
    simulating it. There is literally no programming mechanism for a program to do that. [PO's x86utm
    /might/ have included an AbortMe() primitive operation that HHH/DD etc. could call, but there is no
    need for that. When a TM is done, it halts, and the equivalent in PO's "C" / x86 world is for the
    program (HHH/DD etc.) to /return/. That's fine. Adding a 2nd method of halting would just
    complicate things.]


    There's your bug, Mr Olcott.


    Not really :( HHH /does/ (and can) only abort the emulation it's hosting. When (a) aborts the
    emulation of (b), obviously (c) goes away, because that was part of something (b) was doing.

    OTOH you could say that since HHH only has to handle ONE INPUT CASE (DD), PO might have optimised
    the HHH code to just return 0 straight away, and the result would be the same! That's true - DD
    would still halt, and HHH would still claim it never halts. The problem here is that all the
    emulation stuff is /required/ so that PO can confuse himself into thinking something more magical is
    going on, justifying various crazy claims.


    Mike.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Mike Terry on Tue Aug 19 07:46:47 2025
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a
    correct answer, as Linz and other have proved and as you have
    *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion.
    After all, there are lots of recursive algorithms around, and
    plenty of them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of
    recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between
    recursive call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at
    least at first glance to hold together, makes intuitive sense,
    and goes some way to explaining why someone could cling to the
    wrong answer for 22 years.

    If we were talking *recursive call*, (c) might end the recursion
    in the way you describe due to having been called with different
    input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to
    (a).  That's how a simple Factorial implementation might work:

      int Factorial (int n)
      {
        if (n == 1)
          return 1;
        else
          return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

      }

    When evaluating Factorial(3), a nested Factorial(2) is called,
    which in turn nests a Factorial(1) call.  That call returns and
    the recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that
    the nested calls are made with different arguments.

    Indeed, although of course it doesn't have to be that way.

    In the case of PO's DD/HHH, the arguments are (or at least
    represent) exactly the same computation to be emulated at each
    level.  So (c) [an L2 = Level 2 emulation] will not suddenly
    decide it's had enough - if it did, then (a) would have done it
    earlier.

    Then either there's some state kicking around, or HHH will
    automatically decide that all recursion is runaway recursion.

    But with DD/HHH we have *recursive emulation*.  So HHH [a] is
    still running when (c) is reached - it's busy running around its
    "emulate instruction" loop, testing each time round whether it's
    seen enough evidence to decide to quit emulating.

    Okay, so that's clearly a design flaw. But I do see the point.
    /Because/ of that design flaw, the fix isn't going to be an easy one.

    <snip>

    OTOH you could say that since HHH only has to handle ONE INPUT
    CASE (DD), PO might have optimised the HHH code to just return 0
    straight away, and the result would be the same!  That's true -
    DD would still halt, and HHH would still claim it never halts.
    The problem here is that all the emulation stuff is /required/ so
    that PO can confuse himself into thinking something more magical
    is going on, justifying various crazy claims.

    Hell of a way to run a railroad. Still, no harm done. At least
    now I know how he /could/ have got the programming right, even if
    his theory is further round the bend than Harpic.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Terry@21:1/5 to Richard Heathfield on Tue Aug 19 18:19:31 2025
    On 19/08/2025 07:46, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and other
    have proved and as you have *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of
    recursive algorithms around, and plenty of them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
          (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at least at first glance to hold
    together, makes intuitive sense, and goes some way to explaining why someone could cling to the
    wrong answer for 22 years.

    If we were talking *recursive call*, (c) might end the recursion in the way you describe due to
    having been called with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to (a).  That's how a simple Factorial
    implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called, which in turn nests a Factorial(1)
    call.  That call returns and the recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that the nested calls are made with
    different arguments.

    Indeed, although of course it doesn't have to be that way.

    Yes it does. Let's look at code that's "not that way":

    int WrongFactorial (int n)
    {
        if (n == 1)
            return 1;
        else
            return n * WrongFactorial (n); /* NOTE difference from Factorial code above */
    }

    So, WrongFactorial recursively calls itself, but with the same argument each time. I.e.

        WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3)
              (a)                  (b)                  (c)

    You are trying to say that (c) says "enough is enough" and breaks the recursion, returning to (b).
    That can't happen - (c) is performing the same calculation as (a): same code and the same input. So
    (c) will progress just like (a) and call another WrongFactorial(3). What we'll get is

        ... -> WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3) -> etc.
                     (c)                  (d)                  (e)           Infinite recursion.

    If (c) is to break the recursion, it must be called with a different argument.

    This is all assuming no cheating from "impure functions" with global state etc.. If there is shared
    global state that needs to be changed to explicit input, then it's again clear that IF that input
    (including converted global state) is identical across the recursive calls, the recursion cannot
    break out and we have infinite recursion.



    In the case of PO's DD/HHH, the arguments are (or at least represent) exactly the same computation
    to be emulated at each level.  So (c) [an L2 = Level 2 emulation] will not suddenly decide it's had
    enough - if it did, then (a) would have done it earlier.

    Then either there's some state kicking around, or HHH will automatically decide that all recursion
    is runaway recursion.

    Well, remember HHH is *emulating* DD, not calling it. HHH does not cede control to DD, and is
    always running even while DD is being emulated. It is HHH that is doing the emulation.

    So yes, naturally HHH needs its own state to:
    a) control the emulation, and
    b) to assess the progress of the emulation it's performing. [Has a tight loop occurred? etc.]

    E.g., for (a) HHH must maintain the state of a virtual x86 environment where emulated DD will "run",
    including the current emulated DD instruction pointer and other x86 registers, and the virtual
    address space that those emulated instructions manipulate (including the virtual stack and so on).
    For (b) HHH must maintain whatever state it needs beyond simply emulating instructions of DD. For
    PO's HHH that's a table of previously emulated instructions, so that it can spot loops etc..
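
    As a rough sketch only, with every name invented here rather than taken
    from Halt7.c, those two kinds of state might be declared along these lines:

        #include <stdint.h>
        #include <stddef.h>

        /* (a) state needed just to run the emulation: a virtual x86-ish machine. */
        struct virtual_cpu {
            uint32_t eip;                 /* emulated instruction pointer */
            uint32_t regs[8];             /* emulated general-purpose registers */
            uint8_t *memory;              /* virtual address space, incl. stack */
            size_t   memory_size;
        };

        /* (b) state needed to judge the emulation: a record of what was emulated,
           so repeated patterns (tight loops, repeated calls) can be spotted. */
        struct trace_entry {
            uint32_t address;             /* address of the emulated instruction */
            uint32_t opcode;              /* enough to classify it (call/branch/...) */
        };

        struct emulation_state {
            struct virtual_cpu  cpu;
            struct trace_entry *trace;    /* table of previously emulated instructions */
            size_t              trace_len;
        };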

    This state kicking around in HHH /ought/ to be local HHH state, but in PO's case he has made (b)
    global state. (He just couldn't see how to make it local, so he thought, well it will have to be
    global then, no problem...)

    PO has some rules for matching what he thinks are runaway recursive emulations. [E.g. there must be
    no conditional branch instructions *within the bounds of function DD* between the repeating calls.
    Details are in the halt7.c code...] So HHH will not think /all/ recursions are runaway.
    Specifically with DD, however, HHH flags DD as runaway recursion when it isn't. (An explicit Bug...)
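
    A hedged sketch of the sort of rule being paraphrased (types and names
    invented; the real matching in halt7.c also restricts the test to
    instructions within DD and is more detailed): scan the trace for two calls
    to the same target with no conditional branch recorded between them:

        #include <stddef.h>

        struct insn {
            unsigned address;         /* where the emulated instruction lives */
            int      is_call;         /* 1 if it is a call, else 0 */
            unsigned call_target;     /* valid only when is_call */
            int      is_cond_branch;  /* 1 if it is a conditional branch, else 0 */
        };

        /* Return 1 if the trace contains two calls to the same target with no
           conditional branch between them - the pattern treated as "runaway". */
        static int matches_runaway_pattern(const struct insn *t, size_t n)
        {
            size_t last_call = n;     /* index of the previous call; n = none yet */
            for (size_t i = 0; i < n; i++) {
                if (t[i].is_cond_branch)
                    last_call = n;    /* a conditional branch resets the match */
                else if (t[i].is_call) {
                    if (last_call != n && t[last_call].call_target == t[i].call_target)
                        return 1;
                    last_call = i;
                }
            }
            return 0;
        }

        int main(void)
        {
            /* call DD, a plain instruction, call DD again: matches the pattern */
            struct insn t[] = {
                { 0x1000, 1, 0x2000, 0 },
                { 0x1005, 0, 0,      0 },
                { 0x100a, 1, 0x2000, 0 },
            };
            return matches_runaway_pattern(t, 3) ? 0 : 1;
        }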


    But with DD/HHH we have *recursive emulation*.  So HHH [a] is still running when (c) is reached -
    it's busy running around its "emulate instruction" loop, testing each time round whether it's seen
    enough evidence to decide to quit emulating.

    Okay, so that's clearly a design flaw. But I do see the point. /Because/ of that design flaw, the
    fix isn't going to be an easy one.

    <snip>

    OTOH you could say that since HHH only has to handle ONE INPUT CASE (DD), PO might have optimised
    the HHH code to just return 0 straight away, and the result would be the same!  That's true - DD
    would still halt, and HHH would still claim it never halts. The problem here is that all the
    emulation stuff is /required/ so that PO can confuse himself into thinking something more magical
    is going on, justifying various crazy claims.

    Hell of a way to run a railroad. Still, no harm done. At least now I know how he /could/ have got
    the programming right, even if his theory is further round the bend than Harpic.

    It is more round the bend than those wiry plumber brushes that unblock your toilet by going round
    all sorts of bendy bends to places even Harpic cannot reach! :)

    Mike.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Mike Terry on Tue Aug 19 19:06:43 2025
    On 19/08/2025 18:19, Mike Terry wrote:
    On 19/08/2025 07:46, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give
    a correct answer, as Linz and other have proved and as you
    have *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion.
    After all, there are lots of recursive algorithms around, and
    plenty of them terminate. It has to dig a little deeper than
    that.

    So by the time we're some way in, we have several levels of
    recursion:

    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between
    recursive call and recursive emulation.

    You may be right, or you may not be, but my explanation seems
    at least at first glance to hold together, makes intuitive
    sense, and goes some way to explaining why someone could cling
    to the wrong answer for 22 years.

    If we were talking *recursive call*, (c) might end the
    recursion in the way you describe due to having been called
    with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to
    (a).  That's how a simple Factorial implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called,
    which in turn nests a Factorial(1) call.  That call returns
    and the recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires
    that the nested calls are made with different arguments.

    Indeed, although of course it doesn't have to be that way.

    Yes it does.

    No, it doesn't.

    Let's look at code that's "not that way":

      int WrongFactorial (int n)
      {
        if (n == 1)
          return 1;
        else
          return n * WrongFactorial (n); /* NOTE difference from
    Factorial code above */
      }


    But HHH is not a factorial calculation. It is a function that
    takes a function pointer it can only assign or dereference, and
    then returns 0.

    You are trying to say that (c) says "enough is enough" and breaks
    the recursion, returning to (b).

    No longer. I accept your explanation that that's not how it
    works. It *should* work that way, and the way it does work is
    clearly broken, but okay, it doesn't.


    That can't happen - (c) is
    performing the same calculation as (a): same code and the same
    input.

    Sure it could. When HHH detects that it's about to recurse into
    itself it could just start off a new simulation, and if it starts
    to run away with itself it could can /its/ simulation and return 0.

    That way, it might even get the rightwrong™ answer as opposed to
    the wrongwrong™ answer it gets at present.


    In the case of PO's DD/HHH, the arguments are (or at least
    represent) exactly the same computation to be emulated at each
    level.  So (c) [an L2 = Level 2 emulation] will not suddenly
    decide its had enough - if it did, then (a) would have done it
    earlier.

    Then either there's some state kicking around, or HHH will
    automatically decide that all recursion is runaway recursion.

    Well, remember HHH is *emulating* DD,

    *some of* DD. About a quarter, and the least interesting part at
    that.

    It is more round the bend than those wiry plumber brushes that
    unblock your toilet by going round all sorts of bendy bends to
    places even Harpic cannot reach!  :)

    ;-)

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Terry@21:1/5 to Richard Heathfield on Wed Aug 20 02:37:25 2025
    On 19/08/2025 19:06, Richard Heathfield wrote:
    On 19/08/2025 18:19, Mike Terry wrote:
    On 19/08/2025 07:46, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and
    other have proved and as you have *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of
    recursive algorithms around, and plenty of them terminate. It has to dig a little deeper than
    that.

    So by the time we're some way in, we have several levels of recursion:
    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at least at first glance to hold
    together, makes intuitive sense, and goes some way to explaining why someone could cling to the
    wrong answer for 22 years.

    If we were talking *recursive call*, (c) might end the recursion in the way you describe due to
    having been called with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to (a).  That's how a simple Factorial
    implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called, which in turn nests a
    Factorial(1) call.  That call returns and the recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that the nested calls are made with
    different arguments.

    Indeed, although of course it doesn't have to be that way.

    Yes it does.

    No, it doesn't.

    Let's look at code that's "not that way":

       int WrongFactorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * WrongFactorial (n); /* NOTE difference from Factorial code above */
       }


    But HHH is not a factorial calculation. It is a function that takes a function pointer it can only
    assign or dereference, and then returns 0.

    ok, we've got onto cross purposes.

    Above I was discussing (only) "recursive call", like Factorial (3). I said recursive call where the
    same call arguments are passed at each recursion can never terminate. An example being
    WrongFactorial(3) - that recursion clearly never terminates, and similarly ALL other examples of
    recursive /call/ where the same arguments are used at each recursion also never terminate.
    Turning that around, if a recursive /call/ scenario DOES terminate it must have different arguments
    for each level of recursion.

    I was not talking about HHH, because HHH involves recursive simulation, not recursive call.


    You are trying to say that (c) says "enough is enough" and breaks the recursion, returning to (b).

    No longer. I accept your explanation that that's not how it works. It *should* work that way, and
    the way it does work is clearly broken, but okay, it doesn't.


    That can't happen - (c) is performing the same calculation as (a): same code and the same input.

    Sure it could. When HHH detects that it's about to recurse into itself it could just start off a new
    simulation, and if it starts to run away with itself it could can /its/ simulation and return 0.

    You are describing recursive /simulation/ (or emulation). (I think...) HHH does not involve
    recursive call, so what I was saying does not apply to HHH.


    That way, it might even get the rightwrong™ answer as opposed to the wrongwrong™ answer it gets at
    present.


    In the case of PO's DD/HHH, the arguments are (or at least represent) exactly the same
    computation to be emulated at each level.  So (c) [an L2 = Level 2 emulation] will not suddenly
    decide it's had enough - if it did, then (a) would have done it earlier.
    Then either there's some state kicking around, or HHH will automatically decide that all
    recursion is runaway recursion.

    Well, remember HHH is *emulating* DD,

    *some of* DD. About a quarter, and the least interesting part at that.

    It is more round the bend than those wiry plumber brushes that unblock your toilet by going round
    all sorts of bendy bends to places even Harpic cannot reach!  :)

    ;-)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Mike Terry on Wed Aug 20 03:08:40 2025
    On 20/08/2025 02:37, Mike Terry wrote:
    On 19/08/2025 19:06, Richard Heathfield wrote:

    <snip>

    But HHH is not a factorial calculation. It is a function that
    takes a function pointer it can only assign or dereference, and
    then returns 0.

    ok, we've got onto cross purposes.

    Agreed.

    I was not talking about HHH, because HHH involves recursive
    simulation, not recursive call.

    Understood.

    Let me re-state: I accept your explanation that that's not how it
    works. It *should* work that way, and the way it does work is
    clearly broken, but okay, it doesn't.


    That can't happen - (c) is performing the same calculation as
    (a): same code and the same input.

    Sure it could. When HHH detects that it's about to recurse into
    itself it could just start off a new simulation, and if it
    starts to run away with itself it could can /its/ simulation
    and return 0.

    You are describing recursive /simulation/ (or emulation).  (I
    think...)  HHH does not involve recursive call, so what I was
    saying does not apply to HHH.

    Right.

    IF (as I originally thought) HHH worked by recursing into a new
    simulation every time it hit DD's HHH call, that would be quite
    clever. It would be easy to add static metrics to make the call
    about aborting the recursion and unwinding the stack back to
    where it can continue simulating DD, and nothing would be
    discarded as being "unreachable".

    Hence...
    That way, it might even get the rightwrong™ answer as opposed
    to the wrongwrong™ answer it gets at present.


    But like you say, it doesn't work like that.

    But it *could*, so the notion that the last few lines of DD are
    unreachable is simply wrong.
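
    A hedged sketch of the design described above (nothing here is PO's code;
    MAX_DEPTH and simulate_DD are invented for illustration): each time the
    simulated DD reaches its call to HHH, recurse into a fresh simulation;
    past a depth limit, give up, hand back a presumed 0, and let the outer
    levels run on to DD's "return", which is then reached:

        #include <stdio.h>

        #define MAX_DEPTH 3          /* arbitrary "static metric" for giving up */

        /* Toy model: returns what the simulated DD returns at this nesting level. */
        static int simulate_DD(int depth)
        {
            int halt_status;

            if (depth >= MAX_DEPTH) {
                /* Enough is enough: stop recursing and presume 0 (non-halting)
                   for the innermost HHH(DD) call. */
                halt_status = 0;
            } else {
                /* DD calls HHH(DD); model that by simulating DD one level deeper
                   and passing its result back as HHH's verdict.  (Whether that is
                   the right value to hand back is debatable.) */
                halt_status = simulate_DD(depth + 1);
            }

            if (halt_status) {
                HERE: goto HERE;     /* DD's infinite loop - never taken here */
            }
            return halt_status;      /* the "unreachable" return is reached */
        }

        int main(void)
        {
            printf("simulated DD returned %d and halted\n", simulate_DD(0));
            return 0;
        }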

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to olcott on Wed Aug 20 03:43:12 2025
    On 20/08/2025 03:27, olcott wrote:
    On 8/19/2025 9:08 PM, Richard Heathfield wrote:

    <snip>

    IF (as I originally thought) HHH worked by recursing into a new
    simulation every time it hit DD's HHH call, that would be quite
    clever. It would be easy to add static metrics to make the call
    about aborting the recursion and unwinding the stack back to
    where it can continue simulating DD, and nothing would be
    discarded as being "unreachable".

    Hence...
    That way, it might even get the rightwrong™ answer as opposed
    to the wrongwrong™ answer it gets at present.


    But like you say, it doesn't work like that.

    But it *could*, so the notion that the last few lines of DD are
    unreachable is simply wrong.


    void Infinite_Loop()
    {
      HERE: goto HERE;
      return;
    }

    Infinite_Loop() could be patched this same way
    so that it jumps to its own "return" statement.

    You don't get to change the input.

    That's cheating.

    void Infinite_Loop()
    {
      HERE: goto THERE;
      THERE:
      return;
    }

    You don't get to change the input.

    That's cheating.


    That is called cheating.

    Right.

    Nobody is suggesting changing the input.

    What I'm suggesting is that you fix HHH, because it doesn't work.
    Changing the simulator is allowed. You do it all the time, it seems.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Wed Aug 20 11:16:28 2025
    Op 19.aug.2025 om 16:41 schreef olcott:
    On 8/19/2025 1:46 AM, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a
    correct answer, as Linz and other have proved and as you have
    *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After
    all, there are lots of recursive algorithms around, and plenty of
    them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of recursion:
    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive
    call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at least
    at first glance to hold together, makes intuitive sense, and goes some
    way to explaining why someone could cling to the wrong answer for 22
    years.

    If we were talking *recursive call*, (c) might end the recursion in
    the way you describe due to having been called with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to (a).
    That's how a simple Factorial implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called, which
    in turn nests a Factorial(1) call.  That call returns and the
    recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that the
    nested calls are made with different arguments.

    Indeed, although of course it doesn't have to be that way.

    In the case of PO's DD/HHH, the arguments are (or at least represent)
    exactly the same computation to be emulated at each level.  So (c)
    [an L2 = Level 2 emulation] will not suddenly decide it's had enough -
    if it did, then (a) would have done it earlier.

    Then either there's some state kicking around, or HHH will
    automatically decide that all recursion is runaway recursion.

    But with DD/HHH we have *recursive emulation*.  So HHH [a] is still
    running when (c) is reached - it's busy running around its "emulate
    instruction" loop, testing each time round whether its seen enough
    evidence to decide to quit emulating.

    Okay, so that's clearly a design flaw. But I do see the point.
    /Because/ of that design flaw, the fix isn't going to be an easy one.


    Lines 996 through 1006 matches the
    *recursive simulation non-halting behavior pattern* https://github.com/plolcott/x86utm/blob/master/Halt7.c

    And it shows the bug in the code. HHH incorrectly assumes non-terminating
    behaviour when there is only a finite recursion.
    It does not correctly analyse the conditional branch instructions during
    the simulation. It does not prove that the conditions for the alternate
    branches will never be met when the simulation would continue.


    <snip>

    OTOH you could say that since HHH only has to handle ONE INPUT CASE
    (DD), PO might have optimised the HHH code to just return 0 straight
    away, and the result would be the same!  That's true - DD would still
    halt, and HHH would still claim it never halts. The problem here is
    that all the emulation stuff is /required/ so that PO can confuse
    himself into thinking something more magical is going on, justifying
    various crazy claims.

    Hell of a way to run a railroad. Still, no harm done. At least now I
    know how he /could/ have got the programming right, even if his theory
    is further round the bend than Harpic.


    Turing machine deciders only compute the mapping
    from their inputs...

    and the input to HHH(DD) specifies runaway recursion.

    As usual an incorrect claim without evidence.
    The input specifies a DD based on an HHH that aborts the simulation after
    a few cycles. This means that this input specifies a program with a
    finite recursion, followed by a final halt state.
    But HHH does not use this input; instead it assumes a non-input based on a
    hypothetical other HHH that does not abort.

    Olcott is basing his proofs on his dreams, not on the facts.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Wed Aug 20 11:22:08 2025
    Op 19.aug.2025 om 19:58 schreef olcott:
    On 8/19/2025 12:19 PM, Mike Terry wrote:
    On 19/08/2025 07:46, Richard Heathfield wrote:
    On 19/08/2025 05:21, Mike Terry wrote:
    On 19/08/2025 03:40, Richard Heathfield wrote:
    On 19/08/2025 03:07, dbush wrote:

    <snip>

    So we see that Richard Heathfield agreed that HHH can't give a
    correct answer, as Linz and other have proved and as you have
    *explicitly* agreed is correct.


    I look at it this way.

    HHH cannot reasonably abort as soon as it detects recursion. After
    all, there are lots of recursive algorithms around, and plenty of
    them terminate. It has to dig a little deeper than that.

    So by the time we're some way in, we have several levels of recursion:
    DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
           (a)          (b)          (c)

    Let's say (c) decides enough is enough.

    I think maybe you're not distinguishing properly between recursive
    call and recursive emulation.

    You may be right, or you may not be, but my explanation seems at
    least at first glance to hold together, makes intuitive sense, and
    goes some way to explaining why someone could cling to the wrong
    answer for 22 years.

    If we were talking *recursive call*, (c) might end the recursion in
    the way you describe due to having been called with different input,

    Or indeed identical input. After all, what's changing?

    allowing control to percolate back through (b) and then to (a).
    That's how a simple Factorial implementation might work:

       int Factorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * Factorial (n-1);

    Terrible example, but has the merit of familiarity.

       }

    When evaluating Factorial(3), a nested Factorial(2) is called, which
    in turn nests a Factorial(1) call.  That call returns and the
    recursion breaks (from inner to outer invocations).

    Great, everyone knows this example.  Note that it requires that the
    nested calls are made with different arguments.

    Indeed, although of course it doesn't have to be that way.

    Yes it does.  Let's look at code that's "not that way":

       int WrongFactorial (int n)
       {
         if (n == 1)
           return 1;
         else
           return n * WrongFactorial (n); /* NOTE difference from
    Factorial code above */
       }

    So, WrongFactorial recursively calls itself, but with the same
    argument each time.  I.e.

        WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3)
              (a)                  (b)                  (c)

    You are trying to say that (c) says "enough is enough" and breaks the
    recursion, returning to (b). That can't happen - (c) is performing the
    same calculation as (a): same code and the same input.  So (c) will
    progress just like (a) and call another WrongFactorial(3).  What we'll
    get is

        ... -> WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3) -> etc.
                     (c)                  (d)                  (e)           Infinite recursion.

    If (c) is to break the recursion, it must be called with a different
    argument.

    This is all assuming no cheating from "impure functions" with global
    state etc..  If there is shared global state that needs to be changed
    to explicit input, then it's again clear that IF that input (including
    converted global state) is identical across the recursive calls, the
    recursion cannot break out and we have infinite recursion.



    In the case of PO's DD/HHH, the arguments are (or at least
    represent) exactly the same computation to be emulated at each
    level.  So (c) [an L2 = Level 2 emulation] will not suddenly decide
    it's had enough - if it did, then (a) would have done it earlier.

    Then either there's some state kicking around, or HHH will
    automatically decide that all recursion is runaway recursion.

    Well, remember HHH is *emulating* DD, not calling it.  HHH does not
    cede control to DD, and is always running even while DD is being
    emulated. It is HHH that is doing the emulation.

    So yes, naturally HHH needs its own state to:
    a)  control the emulation, and
    b)  to assess the progress of the emulation it's performing.  [Has a
    tight loop occurred? etc.]

    E.g., for (a) HHH must maintain the state of a virtual x86 environment
    where emulated DD will "run", including the current emulated DD
    instruction pointer and other x86 registers, and the virtual address
    space that those emulated instructions manipulate (including the
    virtual stack and so on). For (b) HHH must maintain whatever state it
    needs beyond simply emulating instructions of DD.  For PO's HHH that's
    a table of previously emulated instructions, so that it can spot loops
    etc..

    This state kicking around in HHH /ought/ to be local HHH state, but in
    PO's case he has made (b) global state.  (He just couldn't see how to
    make it local, so he thought, well it will have to be global then, no
    problem...)


    If you Totally understand DD correctly simulated by HHH
    and can show the details of how HHH can detect the behavior
    of an instance of itself emulating an instance of DD that
    would be appreciated. You may only have the gist of an idea
    that will not actually work in practice.

    We understand that HHH fails and that no correction is possible.


    PO has some rules for matching what he thinks are runaway recursive
    emulations.  [E.g. there must be no conditional branch instructions
    *within the bounds of function DD* between the repeating calls.
    Details are in the halt7.c code...]  So HHH will not think /all/
    recursions are runaway. Specifically with DD, however, HHH flags DD as
    runaway recursion when it isn't.  (An explicit Bug...)


    *No bug*
    <MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
        If simulating halt decider H correctly simulates its
        input D until H correctly determines that its simulated D
        would never stop running unless aborted then


    Yes a bug. The agreement of Sipser with a void statement does not change
    that. There is no correct simulation. The non-termination is not
    correctly determined. So, the assumptions are incorrect, making the
    statement void.

    The bug has been pointed out many times:
    HHH fails to analyse the conditional branch instructions encountered
    during the simulation. It does not prove that the conditions for the
    alternate branches will never be met when the simulation would continue.
    Due to this bug, it prematurely aborts the simulation.


    But with DD/HHH we have *recursive emulation*.  So HHH [a] is still
    running when (c) is reached - it's busy running around its "emulate
    instruction" loop, testing each time round whether it's seen enough
    evidence to decide to quit emulating.

    Okay, so that's clearly a design flaw. But I do see the point.
    /Because/ of that design flaw, the fix isn't going to be an easy one.

    <snip>

    OTOH you could say that since HHH only has to handle ONE INPUT CASE
    (DD), PO might have optimised the HHH code to just return 0 straight
    away, and the result would be the same!  That's true - DD would
    still halt, and HHH would still claim it never halts. The problem
    here is that all the emulation stuff is /required/ so that PO can
    confuse himself into thinking something more magical is going on,
    justifying various crazy claims.

    Hell of a way to run a railroad. Still, no harm done. At least now I
    know how he /could/ have got the programming right, even if his
    theory is further round the bend than Harpic.

    It is more round the bend than those wiry plumber brushes that unblock
    your toilet by going round all sorts of bendy bends to places even
    Harpic cannot reach!  :)

    Mike.





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Terry@21:1/5 to Richard Heathfield on Wed Aug 20 21:15:52 2025
    On 20/08/2025 03:08, Richard Heathfield wrote:
    On 20/08/2025 02:37, Mike Terry wrote:
    On 19/08/2025 19:06, Richard Heathfield wrote:

    <snip>

    But HHH is not a factorial calculation. It is a function that takes a function pointer it can
    only assign or dereference, and then returns 0.

    ok, we've got onto cross purposes.

    Agreed.

    I was not talking about HHH, because HHH involves recursive simulation, not recursive call.

    Understood.

    Let me re-state: I accept your explanation that that's not how it
    works. It *should* work that way, and the way it does work is
    clearly broken, but okay, it doesn't.


    That can't happen - (c) is performing the same calculation as (a): same code and the same input.

    Sure it could. When HHH detects that it's about to recurse into itself it could just start off a
    new simulation, and if it starts to run away with itself it could can /its/ simulation and return 0.

    You are describing recursive /simulation/ (or emulation).  (I think...)  HHH does not involve
    recursive call, so what I was saying does not apply to HHH.

    Right.

    IF (as I originally thought) HHH worked by recursing into a new simulation every time it hit DD's
    HHH call, that would be quite clever. It would be easy to add static metrics to make the call about
    aborting the recursion and unwinding the stack back to where it can continue simulating DD, and
    nothing would be discarded as being "unreachable".

    Let's see if I've fully got what you're saying. I'll use notation HHH[n] for HHH running at nested
    simulation level n. So HHH[0] is outer HHH etc.

    [Yeah, I know your eyes are already starting to glaze over! All I can say is the steps below are
    little steps with no big jumps, so if you grab a pad and pen and cup of tea you can get through it -
    believe that that can happen!!]

    1.  HHH[0] is simulating DD[1], and spots DD's "call HHH".
    2.  Rather than simulating that x86 call instruction, HHH decides to spin up a new
        simulation of ... DD? [must be...]
        Aargh, my nesting notation is broken already because this new simulation will also
        be DD[1], and HHH[0] will have two active simulations both "unnested". Never had
        to cater for that before, as no PO code has needed to do that.
        That's ok, I'll call it DD[1b].
    3.  So HHH[0] is now simulating DD[1b] and sees DD call HHH.
    4.  Rather than simulating that x86 call instruction, HHH decides to spin up a new
        simulation of DD[1c].
    5.  Then it is going to spin up DD[1d] etc.
        So we never get (technically speaking) /nested/ simulations. All simulations
        are performed by HHH[0] and are only one level deep. HHH maintains a
        stack of simulations, with all but the top-of-stack simulation being
        temporarily "suspended".
        That's a possible way to go, and not hard to code...
    6.  At some point HHH says "that's enough simulating - Ed." and aborts... what?
        I think you mean to abort the top-of-stack (innermost) simulation. Let's
        imagine that is DD[1d], which is popped from the stack and discarded,
        and DD[1c] is going to be "resumed".
    7.  DD[1c] had previously just made a call to HHH, so HHH[0] must "fake" the
        DD[1c] call result, so DD[1c] sees the call as returning ... what?
        Let's say 0: neverhalts.
        That seems logical, because presumably HHH[0] has already decided
        that 0 [neverhalts] is the right halting decision it will finally make?
    8.  So DD[1c] takes its code branch which will return, REACHING THE UNREACHABLE
        CODE! Yay! DD[1c] returns, HHH[0]'s DD[1c] simulation has ended "naturally"
        with DD halting.
        HHH[0] pops the simulation off the stack and "resumes" the DD[1b] simulation.
    9.  What does HHH do now? It has aborted DD[1d] thinking it was exhibiting
        non-halting behaviour, and now it has just seen DD[1c] terminate naturally.
        HHH[0] has to fake a return code for DD[1b]'s "call HHH" operation.
        *I genuinely can't see what HHH[0] should do here!*
    10. So I'll stop without trying to guess further steps!
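
    Under the same assumptions (all names invented, none of this is PO's code),
    steps 1-9 above might be modelled by a stack of one-level simulations along
    these lines; the open question in step 9 shows up as the faked_hhh_result
    value:

        #include <stdio.h>

        #define MAX_SIMS 8

        enum sim_state { RUNNING, WAITING_ON_HHH };

        int main(void)
        {
            enum sim_state stack[MAX_SIMS];
            int top = 0;

            stack[top] = RUNNING;                   /* step 1: DD[1] */

            /* Steps 2-5: when a simulated DD reaches "call HHH", suspend it and
               push a fresh one-level simulation instead of emulating into HHH. */
            while (top + 1 < 4) {                   /* 4 is an arbitrary budget */
                stack[top] = WAITING_ON_HHH;
                stack[++top] = RUNNING;             /* DD[1b], DD[1c], DD[1d] */
            }

            /* Step 6: "that's enough" - abort the innermost simulation. */
            printf("aborting simulation %d\n", top);
            top--;

            /* Steps 7-9: resume each suspended simulation by faking a result for
               its "call HHH".  Which value to fake is the step-9 question; here
               every level is simply handed 0 and runs on to DD's return. */
            while (top >= 0) {
                if (stack[top] == WAITING_ON_HHH) {
                    int faked_hhh_result = 0;
                    printf("resume %d: fake HHH -> %d, simulated DD returns\n",
                           top, faked_hhh_result);
                }
                top--;
            }
            return 0;
        }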


    Assuming my understanding above is more or less on target, aside from my confusions over how the
    "percolation" is going to work, here are my thoughts:

    Your idea of not actually simulating DD's "call HHH" x86 instruction, but instead spinning up some
    new simulation of DD, is actually what /most/ new posters think PO is actually doing! That's
    because they look at PO's traces and see DD's "call HHH" instruction followed by the first
    instruction of DD. Understandable, but what PO has done is filter out all the HHH instructions from
    the trace without explaining it! Eventually some nested simulation enters DD again, at which point
    the DD instructions are listed in the trace.

    Well, the other posters complain at this point, saying "HHH is not properly emulating itself. After
    emulating "call HHH" the next instruction should be the first instruction of HHH!" I would agree
    with them at this point, on the grounds of simplicity/naturalness/definition of "emulation", but
    it's not easy for me to justify beyond saying that's what I think is right. [Perhaps my maths
    background leads me that way...] As it turns out the next instruction after "call HHH" that HHH[0]
    emulates actually /is/ the first instruction of HHH which is what most people want, but PO's trace
    output misled them. :)

    Now there's your next idea [IIUC] where you want HHH[0] to do all the "runaway recursion detecting"
    and subsequent aborting, but /from the inner simulation and percolating out/ unwinding the stack
    etc.. Easy to say, but when I look in detail at the steps I can't see it logically working out!

    In step (6) above HHH[0] has decided there is runaway recursion for whatever reason. Logically it
    seems that already it knows it needs to decide neverhalts, i.e. it needs to return 0 to main().
    Anything else it does first is wasting time, so logically it should abandon all in progress
    emulations and return 0, surely? But OK, we want the HHH's to be able to percolate out from DD[1d]
    back to DD[1c] then to DD[1b] then to DD[1] then back to outer HHH. WHY?? This allows PO's
    so-called "unreachable code" to be "reached" by HHH[0], but /who cares/ ? That code was never
    actually unreachable, it was only ever "unreachable when simulated by HHH" which is of no
    consequence to anybody. PO is the only one here trying to make some big issue of unreachable code!

    And by engineering a way of making HHH/DD results percolate out from inner to outer simulations, now
    the logical problem of how that can actually work has been created.

    In step (9) above, what does HHH[0] tell its simulation DD[1b] that its call to HHH returned?
    HHH[0] previously spun up simulation DD[1c] presumably to resolve exactly that, and it's just seen
    DD[1c] halt, so presumably that means it should tell DD[1b] that HHH decided "halts"? That is
    following the meaning of "percolation".

    OTOH earlier HHH[0] previously aborted DD[1d] due to (supposed) runaway recursion, so should it tell
    DD[1b] that its HHH returned neverhalts?

    If your answer is that having spotted "runaway recursion" it should fake a neverhalts result for all
    HHH calls remaining on its stack of emulations, then the whole "percolation" process is nonsense -
    nothing is logically "percolating" from inner to outer HHH; they're all just being told the same
    thing by HHH[0] so HHH[0] might as well just discard the whole lot and return its result straight
    away. :)



    Hence...
    That way, it might even get the rightwrong™ answer as opposed to the wrongwrong™ answer it gets
    at present.


    But like you say, it doesn't work like that.

    But it *could*, so the notion that the last few lines of DD are unreachable is simply wrong.

    Yes, it's simply wrong, but not for that reason. It's simply wrong because :

    a) Deciders/simulators /other than HHH/ have no problem simulating DD to reach that code. It's
    simply a matter of them not aborting the simulation too early. HHH bears a unique one-to-one
    relationship to DD, which /guarantees/ that it will abort "too soon" to reach that code.

    b) When DD is run "natively" the so-called "unreachable" code is reached with no problem. The
    exception would occur if HHH never returns, causing DD to stall in its HHH(DD) call, but deciders
    must always return...


    Mike.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to All on Thu Aug 21 02:45:29 2025
    In a long and detailed reply, on 20/08/2025 21:15, Mike Terry
    wrote (among much else):

    Yeah, I know your eyes are already starting to glaze over!

    They did. But I persevered, and I followed 59.31% of it,
    according to my notes, which may not be entirely accurate. I
    think my biggest practical take from it was that you kinda got my
    point, which I'll take for what it's worth, and I got maybe more
    than half of yours.

    When DD is run "natively" the so-called "unreachable" code is
    reached with no problem.

    Presactly. And /therefore/ a correct simulation must reach it too.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Terry@21:1/5 to Richard Heathfield on Thu Aug 21 03:23:30 2025
    On 21/08/2025 02:45, Richard Heathfield wrote:
    In a long and detailed reply, on 20/08/2025 21:15, Mike Terry wrote (among much else):

    Yeah, I know your eyes are already starting to glaze over!

    They did. But I persevered, and I followed 59.31% of it, according to my notes, which may not be
    entirely accurate. I think my biggest practical take from it was that you kinda got my point, which
    I'll take for what it's worth, and I got maybe more than half of yours.

    When DD is run "natively" the so-called "unreachable" code is reached with no problem.

    Presactly. And /therefore/ a correct simulation must reach it too.

    Excisely. Or at least being careful we might say a "correct /full/ simulation", in case we are
    talking to people of the persuasion that partial simulations are ok things to talk about and indeed
    are the "default" meaning for "simulation" without further qualification. We want to converse
    without misunderstanding with as many people as possible!

    In any case we also agree that a correct /partial/ simulation [*] need /not/ "reach" that code.
    E.g. perhaps just one x86 instruction is (correctly) simulated and then the (partial) simulation is
    abandoned for whatever reason. There is nothing amiss here, unless the programmer claims the code
    after the first instruction is "unreachable"! Who's to say subsequent instructions can't be reached
    by a longer simulation, or simply by executing the code natively?


    Mike.

    [*] meaning a partial simulation that simulated the right sequence of instructions with the right
    data, up to the point where it decides to stop for whatever reason.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)