*Context for what Richard Heathfield agreed to*
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
 int Halt_Status = HHH(DD);
 if (Halt_Status)
   HERE: goto HERE;
 return Halt_Status;
}
What value should HHH(DD) correctly return?
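For concreteness, the skeleton above can be compiled with a hypothetical stub in place of the real HHH. This is only a sketch (the stub is not PO's halt7.c decider; it simply reports 0 for any input, an option discussed later in the thread), but it shows that with such an HHH the program DD plainly reaches its "return" statement and halts:
 #include <stdio.h>

 typedef int (*ptr)();

 /* Hypothetical stand-in for HHH, NOT the halt7.c code: it performs
    no simulation at all and simply reports 0 ("non-halting") for
    every input. */
 int HHH(ptr P)
 {
   (void)P;
   return 0;
 }

 int DD()
 {
   int Halt_Status = HHH(DD);
   if (Halt_Status)
     HERE: goto HERE;          /* only entered if HHH reports 1 */
   return Halt_Status;         /* reached, because HHH returned 0 */
 }

 int main(void)
 {
   printf("DD() returned %d\n", DD());   /* prints 0: DD halted */
   return 0;
 }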
So we see that Richard Heathfield agreed that HHH can't give a
correct answer, as Linz and others have proved and as you have
*explicitly* agreed is correct.
On 8/18/2025 9:40 PM, Richard Heathfield wrote:
If the original DD has a caller, it gets a 0, incorrectly
indicating non-halting.
Looking at it this way, I no longer see the need for
memoisation. All that is necessary is for HHH *only* to abort
the simulation it's hosting, *not* the simulation that invoked it.
There's your bug, Mr Olcott.
It is your failure to understand that HHH does not
have enough evidence to abort (a) until after it has
done more recursive simulations, and then its abort
(a) kills them all.
The question posed to HHH(DD) includes:
should I abort my simulation of this input
on the basis that it will never halt?
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and others
have proved and as you have *explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of recursive
algorithms around, and plenty of them terminate. It has to dig a little deeper than that.
So by the time we're some way in, we have several levels of recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
So (c) stops *its* simulation of DD. THIS HAS NO IMPACT ON (a) AND (b).
(c) now returns 0 to (b)'s DD.
(b) regains control, accepts 0 from (c), assigns 0 to Halt_Status, and returns 0 to (a).
(a) regains control, accepts 0 from (b), assigns 0 to Halt_Status, and returns 0 to the original DD.
If the original DD has a caller, it gets a 0, incorrectly indicating non-halting.
Looking at it this way, I no longer see the need for memoisation. All that is necessary is for HHH
*only* to abort the simulation it's hosting, *not* the simulation that invoked it.
There's your bug, Mr Olcott.
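That cascade can be modelled with a toy program. This is a hypothetical sketch, not PO's code: "simulation" here is just a recursive call, and a fixed depth threshold stands in for "(c) decides enough is enough".
 #include <stdio.h>

 #define ENOUGH 3    /* stand-in for "seen enough evidence" */

 static int simulate_HHH(int level);

 /* One nested simulation of DD: ask the next HHH level about DD,
    then (as in the real DD) return that status. */
 static int simulate_DD(int level)
 {
   int Halt_Status = simulate_HHH(level);
   /* in the real DD a nonzero Halt_Status would loop forever here */
   return Halt_Status;
 }

 /* Levels (a), (b), (c) are 1, 2, 3.  Level (c) stops *its* own
    simulation and returns 0; (b) and (a) then accept that 0 and
    pass it up, exactly as in the walkthrough above. */
 static int simulate_HHH(int level)
 {
   if (level >= ENOUGH) {
     printf("(%c): aborting my simulation of DD, returning 0\n",
            'a' + level - 1);
     return 0;
   }
   int result = simulate_DD(level + 1);
   printf("(%c): accepted %d from (%c), passing it up\n",
          'a' + level - 1, result, 'a' + level);
   return result;
 }

 int main(void)
 {
   /* the outermost call models the original DD */
   printf("original DD's caller receives %d\n", simulate_DD(1));
   return 0;
 }
The 0 manufactured at level (c) is the same 0 the original DD's caller ends up with.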
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a
correct answer, as Linz and others have proved and as you have
*explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion.
After all, there are lots of recursive algorithms around, and
plenty of them terminate. It has to dig a little deeper than that.
So by the time we're some way in, we have several levels of
recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between
recursive call and recursive emulation.
If we were talking *recursive call*, (c) might end the recursion
in the way you describe due to having been called with different
input,
allowing control to percolate back through (b) and then to
(a). That's how a simple Factorial implementation might work:
 int Factorial (int n)
 {
   if (n == 1)
     return 1;
   else
     return n * Factorial (n-1);
 }
When evaluating Factorial(3), a nested Factorial(2) is called,
which in turn nests a Factorial(1) call. That call returns and
the recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires that
the nested calls are made with different arguments.
In the case of PO's DD/HHH, the arguments are (or at least
represent) exactly the same computation to be emulated at each
level. So (c) [an L2 = Level 2 emulation] will not suddenly
decide it's had enough - if it did, then (a) would have done it
earlier.
But with DD/HHH we have *recursive emulation*. So HHH [a] is
still running when (c) is reached - it's busy running around its
"emulate instruction" loop, testing each time round whether its
seen enough evidence to decide to quit emulating.
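The shape of that loop might be sketched as follows. This is a toy model only: the "program" is an invented three-opcode instruction array and the abort test is a simple step budget standing in for "seen enough evidence"; none of these names come from halt7.c.
 #include <stdio.h>

 enum { OP_NOP, OP_JMP_SELF, OP_RET };

 typedef struct {
   const int *code;   /* the toy program being emulated */
   int  ip;           /* emulated instruction pointer   */
   long steps;        /* how much work has been done    */
 } toy_vm;

 static int HHH_sketch(toy_vm *vm, long evidence_budget)
 {
   for (;;) {                        /* HHH never cedes control */
     int op = vm->code[vm->ip];
     vm->steps++;

     if (op == OP_RET)
       return 1;                     /* input reached its "return" */
     if (op == OP_JMP_SELF)
       ;                             /* ip stays put: a tight loop */
     else
       vm->ip++;                     /* OP_NOP: fall through       */

     /* re-checked every time around the loop */
     if (vm->steps >= evidence_budget)
       return 0;                     /* abort: report "never halts" */
   }
 }

 int main(void)
 {
   const int halting[]     = { OP_NOP, OP_NOP, OP_RET };
   const int non_halting[] = { OP_NOP, OP_JMP_SELF, OP_RET };

   toy_vm a = { halting, 0, 0 };
   toy_vm b = { non_halting, 0, 0 };

   printf("halting program     -> %d\n", HHH_sketch(&a, 1000)); /* 1 */
   printf("non-halting program -> %d\n", HHH_sketch(&b, 1000)); /* 0 */
   return 0;
 }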
OTOH you could say that since HHH only has to handle ONE INPUT
CASE (DD), PO might have optimised the HHH code to just return 0
straight away, and the result would be the same! That's true -
DD would still halt, and HHH would still claim it never halts.
The problem here is that all the emulation stuff is /required/ so
that PO can confuse himself into thinking something more magical
is going on, justifying various crazy claims.
On 19/08/2025 05:21, Mike Terry wrote:
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and others
have proved and as you have *explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of
recursive algorithms around, and plenty of them terminate. It has to dig a little deeper than that.
So by the time we're some way in, we have several levels of recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between recursive call and recursive emulation.
You may be right, or you may not be, but my explanation seems at least at first glance to hold
together, makes intuitive sense, and goes some way to explaining why someone could cling to the
wrong answer for 22 years.
If we were talking *recursive call*, (c) might end the recursion in the way you describe due to
having been called with different input,
Or indeed identical input. After all, what's changing?
allowing control to percolate back through (b) and then to (a). That's how a simple Factorial
implementation might work:
 int Factorial (int n)
 {
   if (n == 1)
     return 1;
   else
     return n * Factorial (n-1);
Terrible example, but has the merit of familiarity.
}
When evaluating Factorial(3), a nested Factorial(2) is called, which in turn nests a Factorial(1)
call. That call returns and the recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires that the nested calls are made with
different arguments.
Indeed, although of course it doesn't have to be that way.
In the case of PO's DD/HHH, the arguments are (or at least represent) exactly the same computation
to be emulated at each level. So (c) [an L2 = Level 2 emulation] will not suddenly decide it's had
enough - if it did, then (a) would have done it earlier.
Then either there's some state kicking around, or HHH will automatically decide that all recursion
is runaway recursion.
But with DD/HHH we have *recursive emulation*. So HHH [a] is still running when (c) is reached -
it's busy running around its "emulate instruction" loop, testing each time round whether it's seen
enough evidence to decide to quit emulating.
Okay, so that's clearly a design flaw. But I do see the point. /Because/ of that design flaw, the
fix isn't going to be an easy one.
<snip>
OTOH you could say that since HHH only has to handle ONE INPUT CASE (DD), PO might have optimised
the HHH code to just return 0 straight away, and the result would be the same! That's true - DD
would still halt, and HHH would still claim it never halts. The problem here is that all the
emulation stuff is /required/ so that PO can confuse himself into thinking something more magical
is going on, justifying various crazy claims.
Hell of a way to run a railroad. Still, no harm done. At least now I know how he /could/ have got
the programming right, even if his theory is further round the bend than Harpic.
On 19/08/2025 07:46, Richard Heathfield wrote:
On 19/08/2025 05:21, Mike Terry wrote:
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give
a correct answer, as Linz and others have proved and as you
have *explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion.
After all, there are lots of recursive algorithms around, and
plenty of them terminate. It has to dig a little deeper than
that.
So by the time we're some way in, we have several levels of
recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between
recursive call and recursive emulation.
You may be right, or you may not be, but my explanation seems
at least at first glance to hold together, makes intuitive
sense, and goes some way to explaining why someone could cling
to the wrong answer for 22 years.
If we were talking *recursive call*, (c) might end the
recursion in the way you describe due to having been called
with different input,
Or indeed identical input. After all, what's changing?
allowing control to percolate back through (b) and then to
(a). That's how a simple Factorial implementation might work:
  int Factorial (int n)
  {
    if (n == 1)
      return 1;
    else
      return n * Factorial (n-1);
Terrible example, but has the merit of familiarity.
  }
When evaluating Factorial(3), a nested Factorial(2) is called,
which in turn nests a Factorial(1) call. That call returns
and the recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires
that the nested calls are made with different arguments.
Indeed, although of course it doesn't have to be that way.
Yes it does.
Let's look at code that's "not that way":
 int WrongFactorial (int n)
 {
   if (n == 1)
     return 1;
   else
     return n * WrongFactorial (n); /* NOTE difference from
Factorial code above */
 }
You are trying to say that (c) says "enough is enough" and breaks
the recursion, returning to (b).
That can't happen - (c) is
performing the same calculation as (a): same code and the same
input.
In the case of PO's DD/HHH, the arguments are (or at least
represent) exactly the same computation to be emulated at each
level. So (c) [an L2 = Level 2 emulation] will not suddenly
decide it's had enough - if it did, then (a) would have done it
earlier.
Then either there's some state kicking around, or HHH will
automatically decide that all recursion is runaway recursion.
Well, remember HHH is *emulating* DD,
It is more round the bend than those wiry plumber brushes that
unblock your toilet by going round all sorts of bendy bends to
places even Harpic cannot reach! :)
On 19/08/2025 18:19, Mike Terry wrote:
On 19/08/2025 07:46, Richard Heathfield wrote:
On 19/08/2025 05:21, Mike Terry wrote:
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a correct answer, as Linz and
others have proved and as you have *explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion. After all, there are lots of
recursive algorithms around, and plenty of them terminate. It has to dig a little deeper than
that.
So by the time we're some way in, we have several levels of recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between recursive call and recursive emulation.
You may be right, or you may not be, but my explanation seems at least at first glance to hold
together, makes intuitive sense, and goes some way to explaining why someone could cling to the
wrong answer for 22 years.
If we were talking *recursive call*, (c) might end the recursion in the way you describe due to
having been called with different input,
Or indeed identical input. After all, what's changing?
allowing control to percolate back through (b) and then to (a). That's how a simple Factorial
implementation might work:
 int Factorial (int n)
 {
   if (n == 1)
     return 1;
   else
     return n * Factorial (n-1);
Terrible example, but has the merit of familiarity.
}
When evaluating Factorial(3), a nested Factorial(2) is called, which in turn nests a
Factorial(1) call. That call returns and the recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires that the nested calls are made with
different arguments.
Indeed, although of course it doesn't have to be that way.
Yes it does.
No, it doesn't.
Let's look at code that's "not that way":
 int WrongFactorial (int n)
 {
   if (n == 1)
     return 1;
   else
     return n * WrongFactorial (n); /* NOTE difference from Factorial code above */
 }
But HHH is not a factorial calculation. It is a function that takes a function pointer it can only
assign or dereference, and then returns 0.
You are trying to say that (c) says "enough is enough" and breaks the recursion, returning to (b).
No longer. I accept your explanation that that's not how it works. It *should* work that way, and
the way it does work is clearly broken, but okay, it doesn't.
That can't happen - (c) is performing the same calculation as (a): same code and the same input.
Sure it could. When HHH detects that it's about to recurse into itself it could just start off a new
simulation, and if it starts to run away with itself it could can /its/ simulation and return 0.
That way, it might even get the rightwrong™ answer as opposed to the wrongwrong™ answer it gets at
present.
In the case of PO's DD/HHH, the arguments are (or at least represent) exactly the same computation
to be emulated at each level. So (c) [an L2 = Level 2 emulation] will not suddenly decide it's had
enough - if it did, then (a) would have done it earlier.
Then either there's some state kicking around, or HHH will automatically decide that all
recursion is runaway recursion.
Well, remember HHH is *emulating* DD,
*some of* DD. About a quarter, and the least interesting part at that.
It is more round the bend than those wiry plumber brushes that unblock your toilet by going round
all sorts of bendy bends to places even Harpic cannot reach! :)
;-)
On 19/08/2025 19:06, Richard Heathfield wrote:
But HHH is not a factorial calculation. It is a function that
takes a function pointer it can only assign or dereference, and
then returns 0.
ok, we've got onto cross purposes.
I was not talking about HHH, because HHH involves recursive
simulation, not recursive call.
That can't happen - (c) is performing the same calculation as
(a): same code and the same input.
Sure it could. When HHH detects that it's about to recurse into
itself it could just start off a new simulation, and if it
starts to run away with itself it could can /its/ simulation
and return 0.
You are describing recursive /simulation/ (or emulation). (I
think...) HHH does not involve recursive call, so what I was
saying does not apply to HHH.
That way, it might even get the rightwrong™ answer as opposed
to the wrongwrong™ answer it gets at present.
On 8/19/2025 9:08 PM, Richard Heathfield wrote:
IF (as I originally thought) HHH worked by recursing into a new
simulation every time it hit DD's HHH call, that would be quite
clever. It would be easy to add static metrics to make the call
about aborting the recursion and unwinding the stack back to
where it can continue simulating DD, and nothing would be
discarded as being "unreachable".
Hence...
That way, it might even get the rightwrong™ answer as opposed
to the wrongwrong™ answer it gets at present.
But like you say, it doesn't work like that.
But it *could*, so the notion that the last few lines of DD are
unreachable is simply wrong.
void Infinite_Loop()
{
 HERE: goto HERE;
 return;
}
Infinite_Loop() could be patched this same way
so that it jumps to its own "return" statement.
void Infinite_Loop()
{
 HERE: goto THERE;
 THERE:
 return;
}
That is called cheating.
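For reference, the alternative design Richard describes above (recurse into a fresh simulation at the HHH call, abort only that nested simulation if it runs away, then carry on simulating the rest of DD) can be modelled in a few lines. This is a hypothetical sketch, not halt7.c; a nesting-depth limit stands in for his "static metrics".
 #include <stdio.h>

 #define MAX_NESTING 3   /* hypothetical "static metric" */

 /* simulate_DD(depth) models simulating DD at a given nesting depth.
    At DD's "call HHH(DD)" step it recurses into a nested simulation;
    if nesting runs away it aborts (cans) only that nested simulation,
    then unwinds and continues simulating DD past the call. */
 static int simulate_DD(int depth)
 {
   int Halt_Status;

   if (depth >= MAX_NESTING)
     return 0;                   /* abort the nested simulation only */

   Halt_Status = simulate_DD(depth + 1);   /* the HHH(DD) step */

   if (Halt_Status) {
     /* would be HERE: goto HERE; not entered, since Halt_Status is 0 */
   }
   return Halt_Status;           /* DD's "return" IS reached here */
 }

 int main(void)
 {
   printf("outermost simulation of DD returns %d\n", simulate_DD(0));
   return 0;
 }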
On 8/19/2025 1:46 AM, Richard Heathfield wrote:
On 19/08/2025 05:21, Mike Terry wrote:
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a
correct answer, as Linz and others have proved and as you have
*explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion. After
all, there are lots of recursive algorithms around, and plenty of
them terminate. It has to dig a little deeper than that.
So by the time we're some way in, we have several levels of recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between recursive
call and recursive emulation.
You may be right, or you may not be, but my explanation seems at least
at first glance to hold together, makes intuitive sense, and goes some
way to explaining why someone could cling to the wrong answer for 22
years.
If we were talking *recursive call*, (c) might end the recursion in
the way you describe due to having been called with different input,
Or indeed identical input. After all, what's changing?
allowing control to percolate back through (b) and then to (a).
That's how a simple Factorial implementation might work:
  int Factorial (int n)
  {
    if (n == 1)
      return 1;
    else
      return n * Factorial (n-1);
Terrible example, but has the merit of familiarity.
  }
When evaluating Factorial(3), a nested Factorial(2) is called, which
in turn nests a Factorial(1) call. That call returns and the
recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires that the
nested calls are made with different arguments.
Indeed, although of course it doesn't have to be that way.
In the case of PO's DD/HHH, the arguments are (or at least represent)
exactly the same computation to be emulated at each level. So (c)
[an L2 = Level 2 emulation] will not suddenly decide it's had enough -
if it did, then (a) would have done it earlier.
Then either there's some state kicking around, or HHH will
automatically decide that all recursion is runaway recursion.
But with DD/HHH we have *recursive emulation*. So HHH [a] is still
running when (c) is reached - it's busy running around its "emulate
instruction" loop, testing each time round whether its seen enough
evidence to decide to quit emulating.
Okay, so that's clearly a design flaw. But I do see the point. /
Because/ of that design flaw, the fix isn't going to be an easy one.
Lines 996 through 1006 match the
*recursive simulation non-halting behavior pattern*: https://github.com/plolcott/x86utm/blob/master/Halt7.c
<snip>
OTOH you could say that since HHH only has to handle ONE INPUT CASE
(DD), PO might have optimised the HHH code to just return 0 straight
away, and the result would be the same! That's true - DD would still
halt, and HHH would still claim it never halts. The problem here is
that all the emulation stuff is /required/ so that PO can confuse
himself into thinking something more magical is going on, justifying
various crazy claims.
Hell of a way to run a railroad. Still, no harm done. At least now I
know how he /could/ have got the programming right, even if his theory
is further round the bend than Harpic.
Turing machine deciders only compute the mapping
from their inputs...
and the input to HHH(DD) specifies runaway recursion.
On 8/19/2025 12:19 PM, Mike Terry wrote:
On 19/08/2025 07:46, Richard Heathfield wrote:
On 19/08/2025 05:21, Mike Terry wrote:
On 19/08/2025 03:40, Richard Heathfield wrote:
On 19/08/2025 03:07, dbush wrote:
<snip>
So we see that Richard Heathfield agreed that HHH can't give a
correct answer, as Linz and others have proved and as you have
*explicitly* agreed is correct.
I look at it this way.
HHH cannot reasonably abort as soon as it detects recursion. After
all, there are lots of recursive algorithms around, and plenty of
them terminate. It has to dig a little deeper than that.
So by the time we're some way in, we have several levels of recursion:
DD -> HHH -> DD -> HHH -> DD -> HHH -> DD
      (a)         (b)         (c)
Let's say (c) decides enough is enough.
I think maybe you're not distinguishing properly between recursive
call and recursive emulation.
You may be right, or you may not be, but my explanation seems at
least at first glance to hold together, makes intuitive sense, and
goes some way to explaining why someone could cling to the wrong
answer for 22 years.
If we were talking *recursive call*, (c) might end the recursion in
the way you describe due to having been called with different input,
Or indeed identical input. After all, what's changing?
allowing control to percolate back through (b) and then to (a).
That's how a simple Factorial implementation might work:
  int Factorial (int n)
  {
    if (n == 1)
      return 1;
    else
      return n * Factorial (n-1);
Terrible example, but has the merit of familiarity.
  }
When evaluating Factorial(3), a nested Factorial(2) is called, which
in turn nests a Factorial(1) call. That call returns and the
recursion breaks (from inner to outer invocations).
Great, everyone knows this example. Note that it requires that the
nested calls are made with different arguments.
Indeed, although of course it doesn't have to be that way.
Yes it does. Let's look at code that's "not that way":
  int WrongFactorial (int n)
  {
    if (n == 1)
      return 1;
    else
      return n * WrongFactorial (n); /* NOTE difference from
Factorial code above */
  }
So, WrongFactorial recursively calls itself, but with the same
argument each time. I.e.
   WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3)
         (a)                 (b)                 (c)
You are trying to say that (c) says "enough is enough" and breaks the
recursion, returning to (b). That can't happen - (c) is performing the
same calculation as (a): same code and the same input. So (c) will
progress just like (a) and call another WrongFactorial(3). What we'll
get is
   ... -> WrongFactorial(3) -> WrongFactorial(3) -> WrongFactorial(3) -> etc.
                (c)                 (d)                 (e)
Infinite recursion.
If (c) is to break the recursion, it must be called with a different
argument.
This is all assuming no cheating from "impure functions" with global
state etc. If there is shared global state, it needs to be changed
into explicit input; then it's again clear that IF that input (including
the converted global state) is identical across the recursive calls, the
recursion cannot break out and we have infinite recursion.
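A tiny example of the "impure function" case being set aside (illustrative only): the argument is identical on every call, but hidden static state is what actually breaks the recursion; fold that state into the input and the calls are no longer identical.
 #include <stdio.h>

 static int depth = 0;   /* the hidden "global state" */

 static int ImpureCountdown(int n)
 {
   if (++depth >= 5)              /* decided by the hidden state, not n */
     return n;
   return ImpureCountdown(n);     /* same argument every time */
 }

 int main(void)
 {
   printf("%d\n", ImpureCountdown(3));   /* prints 3 after 5 levels */
   return 0;
 }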
In the case of PO's DD/HHH, the arguments are (or at least
represent) exactly the same computation to be emulated at each
level. So (c) [an L2 = Level 2 emulation] will not suddenly decide
it's had enough - if it did, then (a) would have done it earlier.
Then either there's some state kicking around, or HHH will
automatically decide that all recursion is runaway recursion.
Well, remember HHH is *emulating* DD, not calling it. HHH does not
cede control to DD, and is always running even while DD is being
emulated. It is HHH that is doing the emulation.
So yes, naturally HHH needs its own state to:
a) control the emulation, and
b) assess the progress of the emulation it's performing. [Has a
tight loop occurred? etc.]
E.g., for (a) HHH must maintain the state of a virtual x86 environment
where emulated DD will "run", including the current emulated DD
instruction pointer and other x86 registers, and the virtual address
space that those emulated instructions manipulate (including the
virtual stack and so on). For (b) HHH must maintain whatever state it
needs beyond simply emulating instructions of DD. For PO's HHH that's
a table of previously emulated instructions, so that it can spot loops
etc..
This state kicking around in HHH /ought/ to be local HHH state, but in
PO's case he has made (b) global state. (He just couldn't see how to
make it local, so he thought, well it will have to be global then, no
problem...)
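As a rough guess at the *kind* of state being described (the names and layout here are invented for illustration, not the actual halt7.c data structures): (a) is the virtual x86 environment the emulated DD "runs" in, and (b) is the decider's own trace table for spotting repeats.
 #include <stdio.h>
 #include <stdint.h>
 #include <stddef.h>

 typedef struct {
   uint32_t eax, ebx, ecx, edx;    /* general-purpose registers     */
   uint32_t esp, ebp, esi, edi;
   uint32_t eip;                   /* emulated instruction pointer  */
   uint8_t *address_space;         /* virtual memory, incl. stack   */
   size_t   address_space_size;
 } emulated_x86;                   /* (a): controls the emulation   */

 typedef struct {
   uint32_t address;               /* where the instruction was     */
   uint8_t  opcode;                /* what it was                   */
 } trace_entry;

 typedef struct {
   emulated_x86 vm;                /* (a)                              */
   trace_entry *trace;             /* (b): assesses emulation progress */
   size_t       trace_len;
 } decider_state;

 int main(void)
 {
   printf("decider_state occupies %zu bytes in this sketch\n",
          sizeof(decider_state));
   return 0;
 }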
If you totally understand DD correctly simulated by HHH
and can show the details of how HHH can detect the behavior
of an instance of itself emulating an instance of DD, that
would be appreciated. You may only have the gist of an idea
that will not actually work in practice.
PO has some rules for matching what he thinks are runaway recursive
emulations. [E.g. there must be no conditional branch instructions
*within the bounds of function DD* between the repeating calls.
Details are in the halt7.c code...] So HHH will not think /all/
recursions are runaway. Specifically with DD, however, HHH flags DD as
runaway recursion when it isn't. (An explicit Bug...)
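Paraphrasing that rule in code, as an illustration of the idea only (this is not the halt7.c implementation and may not match its exact criteria): flag "runaway recursion" when the same call target repeats with no conditional branch inside DD's address range in between.
 #include <stdbool.h>
 #include <stdio.h>

 typedef struct {
   unsigned address;         /* instruction address      */
   bool     is_call;         /* call instruction?        */
   unsigned call_target;     /* if so, where to          */
   bool     is_cond_branch;  /* conditional jump (jcc)?  */
 } traced_insn;

 static bool looks_like_runaway(const traced_insn *t, int n,
                                unsigned dd_lo, unsigned dd_hi)
 {
   int last_call = -1;
   for (int i = 0; i < n; i++) {
     if (t[i].is_call) {
       if (last_call >= 0 &&
           t[i].call_target == t[last_call].call_target) {
         /* same target called twice: was there a conditional branch
            within DD's bounds between the two calls? */
         bool cond_between = false;
         for (int j = last_call + 1; j < i; j++)
           if (t[j].is_cond_branch &&
               t[j].address >= dd_lo && t[j].address <= dd_hi)
             cond_between = true;
         if (!cond_between)
           return true;              /* flagged as runaway recursion */
       }
       last_call = i;
     }
   }
   return false;
 }

 int main(void)
 {
   /* two calls to the same target with no conditional branch inside
      DD's range (0x1000..0x10ff) between them: flagged */
   traced_insn trace[] = {
     { 0x1003, true,  0x2000, false },
     { 0x1008, false, 0,      false },
     { 0x100d, true,  0x2000, false },
   };
   printf("%s\n", looks_like_runaway(trace, 3, 0x1000, 0x10ff)
                      ? "runaway" : "not runaway");
   return 0;
 }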
*No bug*
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
   If simulating halt decider H correctly simulates its
   input D until H correctly determines that its simulated D
   would never stop running unless aborted then
But with DD/HHH we have *recursive emulation*. So HHH [a] is still
running when (c) is reached - it's busy running around its "emulate
instruction" loop, testing each time round whether its seen enough
evidence to decide to quit emulating.
Okay, so that's clearly a design flaw. But I do see the point. /
Because/ of that design flaw, the fix isn't going to be an easy one.
<snip>
OTOH you could say that since HHH only has to handle ONE INPUT CASE
(DD), PO might have optimised the HHH code to just return 0 straight
away, and the result would be the same! That's true - DD would
still halt, and HHH would still claim it never halts. The problem
here is that all the emulation stuff is /required/ so that PO can
confuse himself into thinking something more magical is going on,
justifying various crazy claims.
Hell of a way to run a railroad. Still, no harm done. At least now I
know how he /could/ have got the programming right, even if his
theory is further round the bend than Harpic.
It is more round the bend than those wiry plumber brushes that unblock
your toilet by going round all sorts of bendy bends to places even
Harpic cannot reach! :)
Mike.
On 20/08/2025 02:37, Mike Terry wrote:
On 19/08/2025 19:06, Richard Heathfield wrote:
<snip>
But HHH is not a factorial calculation. It is a function that takes a function pointer it can
only assign or dereference, and then returns 0.
ok, we've got onto cross purposes.
Agreed.
I was not talking about HHH, because HHH involves recursive simulation, not recursive call.
Understood.
Let me re-state: I accept your explanation that that's not how it
works. It *should* work that way, and the way it does work is
clearly broken, but okay, it doesn't.
That can't happen - (c) is performing the same calculation as (a): same code and the same input.
Sure it could. When HHH detects that it's about to recurse into itself it could just start off a
new simulation, and if it starts to run away with itself it could can /its/ simulation and return 0.
You are describing recursive /simulation/ (or emulation). (I think...) HHH does not involve
recursive call, so what I was saying does not apply to HHH.
Right.
IF (as I originally thought) HHH worked by recursing into a new simulation every time it hit DD's
HHH call, that would be quite clever. It would be easy to add static metrics to make the call about
aborting the recursion and unwinding the stack back to where it can continue simulating DD, and
nothing would be discarded as being "unreachable".
Hence...
That way, it might even get the rightwrong™ answer as opposed to the wrongwrong™ answer it gets
at present.
But like you say, it doesn't work like that.
But it *could*, so the notion that the last few lines of DD are unreachable is simply wrong.
Yeah, I know your eyes are already starting to glaze over!
When DD is run "natively" the so-called "unreachable" code is
reached with no problem.
In a long and detailed reply, on 20/08/2025 21:15, Mike Terry wrote (among much else):
Yeah, I know your eyes are already starting to glaze over!
They did. But I persevered, and I followed 59.31% of it, according to my notes, which may not be
entirely accurate. I think my biggest practical take from it was that you kinda got my point, which
I'll take for what it's worth, and I got maybe more than half of yours.
When DD is run "natively" the so-called "unreachable" code is reached with no problem.
Presactly. And /therefore/ a correct simulation must reach it too.