Olcott's fundamental error:
In x86utm, H simulates D(D), detects the nested recursion as non-halting, aborts, and returns 0 (non-halting). But when D(D) runs for real:
* It calls H(D,D).
* H simulates, aborts the simulation (not the real execution), and returns 0 (non-halting).
* D, receiving 0 (non-halting), halts.
Thus, the actual machine D(D) halts, but H reported "does not halt". H is wrong about the machine's behavior.
/Flibble
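A minimal C sketch of the construction described above may help; H and D here are hypothetical stand-ins for illustration only, not the actual x86utm code. It shows why H's verdict of 0 is contradicted: whatever H reports about D(D), D does the opposite.

typedef int (*ptr)();

int H(ptr P, ptr I);   /* assumed prototype: reports whether P(I) halts */

int D(ptr x)
{
    if (H(x, x))       /* H says x(x) halts ...                 */
        for (;;) ;     /* ... so D loops forever                */
    return 0;          /* H says x(x) never halts ... so D halts */
}

/* If H(D,D) returns 0 ("does not halt"), D(D) falls through the if
   and returns 0, i.e. it halts, contradicting H's report. */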
On Sat, 2025-08-09 at 01:20 +0100, Mike Terry wrote:
On 08/08/2025 17:32, Mr Flibble wrote:
Olcott's fundamental error:
In x86utm, H simulates D(D), detects the nested recursion as non-halting, aborts, and returns 0 (non-halting). But when D(D) runs for real:
* It calls H(D,D).
* H simulates, aborts the simulation (not the real execution), and returns 0 (non-halting).
* D, receiving 0 (non-halting), halts.
Thus, the actual machine D(D) halts, but H reported "does not halt". H is wrong about the machine's behavior.
/Flibble
That's correct. H has specific patterns that it looks for in the nested emulation trace. One of those patterns allegedly tests for "infinite recursive emulation", but it can match against finite recursive emulation, and so is unsound. So H mistakes finite recursive emulation for infinite recursive emulation, and decides incorrectly that the input never halts.
(It seems you just realised this?)
PO's error is not understanding the qualitative differences between recursive call and recursive
emulation. The former can only be broken from the inside percolating out, because once a call is
made, control is ceded until the call returns. Recursive emulation can also break that way, but
with emulation there is another way: recursion can break from the outside aborting the inner
emulations. This is possible because the outer emulation has not ceded control, and is really still
running and evolving its state.
A pattern that potentially might form the basis for a sound infinite recursive /call/ test would not
necessarily work in a recursive /emulation/ scenario unless it understands and accounts for these
qualitative differences.
Mike.
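The qualitative difference Mike describes can be sketched in C; the helper names below are hypothetical placeholders, not anything from Halt7.c. A direct recursive call cedes control until it returns, whereas a step-by-step emulator keeps control and can break the recursion from the outside.

typedef int (*ptr)(void);

/* Placeholder hooks standing in for the emulator's own tests. */
int abort_condition_met(unsigned long steps) { return steps > 100000; }
int reached_final_state(void)                { return 0; }

/* Recursive call: once the call is made, control is ceded; nothing
   outside this call chain can break the recursion. */
int recursive_call(void)
{
    return recursive_call();
}

/* Recursive emulation: the outer emulator never cedes control.  It
   single-steps the emulated code and can stop (aborting any inner
   emulations along with it) whenever its own abort test fires. */
int emulate(ptr P)
{
    (void)P;   /* the actual instruction stepping is omitted */
    for (unsigned long steps = 0; ; ++steps) {
        if (reached_final_state())
            return 1;                   /* emulated code halted on its own   */
        if (abort_condition_met(steps))
            return 0;                   /* recursion broken from the outside */
    }
}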
All theory is after-the-event explanation, nothing to do with fact.
That these things are dead easy is proven by three
chatbots that figure this out on their own.
On 8/8/2025 11:11 PM, Richard Heathfield wrote:
I am testing the assumption that simulating termination
analyzer HHH correctly simulates its input until it:
(a) Detects a non-terminating behavior pattern: abort
simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What does that tell me about my assumption?
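One way to see what is at stake is a case analysis in C; HHH is replaced below by a hypothetical fixed-verdict stub, since whichever single value the real HHH returns is the case being tested. Whatever the stub answers, running DD() directly does the opposite.

#include <stdio.h>

typedef int (*ptr)();

int stub_verdict = 0;                    /* pretend this is HHH's answer */
int HHH(ptr P) { (void)P; return stub_verdict; }

int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;                 /* verdict 1 ("halts"): DD loops forever */
    return Halt_Status;                  /* verdict 0 ("loops"): DD halts         */
}

int main(void)
{
    stub_verdict = 0;                    /* "DD does not halt" ...                 */
    printf("DD() returned %d\n", DD());  /* ... yet DD() returns, so 0 looks wrong */
    /* With stub_verdict = 1 ("DD halts"), DD() would never return, so 1 is wrong too. */
    return 0;
}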
*You forgot to tell it*
Please make sure to do the execution trace of DD correctly simulated by HHH to answer this question:
What value should HHH(DD) correctly return?
Here it is again with that one change. https://chatgpt.com/share/6896d14a-3714-8011-abae-17fb391ff170
You let it ignore a mandatory chain of thought and it gets the wrong answer.
On 8/8/2025 7:20 PM, Mike Terry wrote:
On 08/08/2025 17:32, Mr Flibble wrote:
Olcott's fundamental error:
In x86utm, H simulates D(D), detects the nested recursion as non-halting, aborts, and returns 0 (non-halting). But when D(D) runs for real:
* It calls H(D,D).
* H simulates, aborts the simulation (not the real execution), and returns 0 (non-halting).
* D, receiving 0 (non-halting), halts.
Thus, the actual machine D(D) halts, but H reported "does not halt". H is wrong about the machine's behavior.
/Flibble
That's correct.
HHH(DD)==0 is construed as correct when it is construed
that HHH is reporting on the fact that its actual input
DD correctly simulated by HHH cannot possibly reach its
own simulated "return" statement final halt state.
For people who insist that DD is not simulated correctly, we must move to the more precise standard of correctness: DD emulated by HHH according to the semantics of the x86 language cannot possibly reach its own emulated "ret" instruction final halt state.
H has specific patterns that it looks for in the nested emulation trace. One of those patterns allegedly tests for "infinite recursive emulation", but it can match against finite recursive emulation, and so is unsound. So H mistakes finite recursive emulation for infinite recursive emulation, and decides incorrectly that the input never halts.
(It seems you just realised this?)
PO's error is not understanding the qualitative differences between
recursive call and recursive emulation. The former can only be broken
from the inside percolating out, because once a call is made, control
is ceded until the call returns. Recursive emulation can also break
that way, but with emulation there is another way: recursion can break
from the outside aborting the inner emulations. This is possible
because the outer emulation has not ceded control, and is really still
running and evolving its state.
A pattern that potentially might form the basis for a sound infinite recursive /call/ test would not necessarily work in a recursive /emulation/ scenario unless it understands and accounts for these qualitative differences.
Mike.
Line 996 recognizes recursive simulation
u32 Needs_To_Be_Aborted_Trace_HH(Decoded_Line_Of_Code* execution_trace,
Decoded_Line_Of_Code *current)
https://github.com/plolcott/x86utm/blob/master/Halt7.c
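For readers who do not want to dig through Halt7.c, the general shape of such a trace-pattern test might look like the sketch below. This is a hypothetical illustration, not the code at line 996: it flags "recursive simulation" when the same call site and callee reappear in the recorded trace. As Mike Terry notes above, a finite recursion that would eventually return can match such a pattern just as well as an infinite one.

#include <stddef.h>

/* Hypothetical trace record; the real Decoded_Line_Of_Code in Halt7.c
   has a different layout. */
typedef struct {
    unsigned int address;   /* address of the emulated call instruction */
    unsigned int callee;    /* address being called                     */
} TraceEntry;

/* Returns 1 when the current call repeats an earlier (address, callee)
   pair in the trace, i.e. the "recursive emulation" pattern. */
int matches_recursive_emulation(const TraceEntry *trace, size_t n,
                                const TraceEntry *current)
{
    for (size_t i = 0; i < n; ++i)
        if (trace[i].address == current->address &&
            trace[i].callee  == current->callee)
            return 1;
    return 0;
}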
On 8/8/2025 11:11 PM, Richard Heathfield wrote:
I am testing the assumption that simulating termination analyzer HHH
correctly simulates its input until it:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What does that tell me about my assumption?
*You forgot to tell it*
Please make sure to do the execution trace of DD correctly simulated by HHH to answer this question:
What value should HHH(DD) correctly return?
Here it is again with that one change. https://chatgpt.com/share/6896d14a-3714-8011-abae-17fb391ff170
On 8/8/2025 9:46 PM, Mike Terry wrote:
On 09/08/2025 02:10, wij wrote:
On Sat, 2025-08-09 at 01:20 +0100, Mike Terry wrote:
On 08/08/2025 17:32, Mr Flibble wrote:
Olcott's fundamental error:
In x86utm, H simulates D(D), detects the nested recursion as non-halting, aborts, and returns 0 (non-halting). But when D(D) runs for real:
* It calls H(D,D).
* H simulates, aborts the simulation (not the real execution), and returns 0 (non-halting).
* D, receiving 0 (non-halting), halts.
Thus, the actual machine D(D) halts, but H reported "does not halt". H is wrong about the machine's behavior.
/Flibble
That's correct. H has specific patterns that it looks for in the nested emulation trace. One of those patterns allegedly tests for "infinite recursive emulation", but it can match against finite recursive emulation, and so is unsound. So H mistakes finite recursive emulation for infinite recursive emulation, and decides incorrectly that the input never halts.
(It seems you just realised this?)
PO's error is not understanding the qualitative differences between recursive call and recursive emulation. The former can only be broken from the inside percolating out, because once a call is made, control is ceded until the call returns. Recursive emulation can also break that way, but with emulation there is another way: recursion can break from the outside aborting the inner emulations. This is possible because the outer emulation has not ceded control, and is really still running and evolving its state.
A pattern that potentially might form the basis for a sound infinite recursive /call/ test would not necessarily work in a recursive /emulation/ scenario unless it understands and accounts for these qualitative differences.
Mike.
All theory is after-the-event explanation, nothing to do with fact.
Sure. PO is wrong, because his H decides never-halts for D, and D
halts. Those are the facts, which will be enough for people just
wanting to know whether PO is right or wrong. No theory needed.
When HHH(DD) decides on the basis that DD correctly simulated by HHH cannot possibly reach its own simulated "return" statement final halt state, HHH is correct.
That you do not understand that a finite sequence
of simulated steps proves this is only your own
lack of understanding.
That these things are dead easy is proven by three
chatbots that figure this out on their own.
You would only want more if you were interested in understanding /why/
PO is wrong, or maybe if you wanted to /help/ PO see his errors - then
a theory might be useful; at least it suggests a place to start. (But
such attempts to help PO or get him to admit his mistakes will not
work I believe.)
Mike.
On 8/8/2025 7:20 PM, Mike Terry wrote:
On 08/08/2025 17:32, Mr Flibble wrote:
Olcott's fundamental error:
In x86utm, H simulates D(D), detects the nested recursion as non-halting, aborts, and returns 0 (non-halting). But when D(D) runs for real:
* It calls H(D,D).
* H simulates, aborts the simulation (not the real execution), and returns 0 (non-halting).
* D, receiving 0 (non-halting), halts.
Thus, the actual machine D(D) halts, but H reported "does not halt". H is wrong about the machine's behavior.
/Flibble
That's correct.
HHH(DD)==0 is construed as correct when it is construed
that HHH is reporting on the fact that its actual input
DD correctly simulated by HHH cannot possibly reach its
own simulated "return" statement final halt state.
For people who insist that DD is not simulated correctly, we must move to the more precise standard of correctness: DD emulated by HHH according to the semantics of the x86 language cannot possibly reach its own emulated "ret" instruction final halt state.
H has specific patterns that it looks for in the nested emulation trace. One of those patterns allegedly tests for "infinite recursive emulation", but it can match against finite recursive emulation, and so is unsound. So H mistakes finite recursive emulation for infinite recursive emulation, and decides incorrectly that the input never halts.
(It seems you just realised this?)
PO's error is not understanding the qualitative differences between
recursive call and recursive emulation. The former can only be broken
from the inside percolating out, because once a call is made, control
is ceded until the call returns. Recursive emulation can also break
that way, but with emulation there is another way: recursion can break
from the outside aborting the inner emulations. This is possible
because the outer emulation has not ceded control, and is really still
running and evolving its state.
A pattern that potentially might form the basis for a sound infinite recursive /call/ test would not necessarily work in a recursive /emulation/ scenario unless it understands and accounts for these qualitative differences.
Mike.
Line 996 recognizes recursive simulation
u32 Needs_To_Be_Aborted_Trace_HH(Decoded_Line_Of_Code* execution_trace,
Decoded_Line_Of_Code *current)
https://github.com/plolcott/x86utm/blob/master/Halt7.c
On 8/8/2025 11:56 PM, Richard Heathfield wrote:
On 09/08/2025 05:44, olcott wrote:
On 8/8/2025 11:11 PM, Richard Heathfield wrote:
I am testing the assumption that simulating termination analyzer HHH
correctly simulates its input until it:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What does that tell me about my assumption?
*You forgot to tell it*
No, I didn't. There was no need to tell it.
Please make sure to do the execution trace of DD correctly simulated by HHH to answer this question:
What value should HHH(DD) correctly return?
Here it is again with that one change.
https://chatgpt.com/share/6896d14a-3714-8011-abae-17fb391ff170
The answer is drawn from only two possible answers: 0 and 1.
How that answer is obtained - execution trace or whatever - IS
IRRELEVANT to everybody except whichever poor sod has to cut the HHH()
code - you, in this case.
To everyone else it's a black box.
All that matters is that it produces the right answer... which it
demonstrably doesn't because both possible answers are wrong.
You let it ignore a mandatory chain of thought and it gets the wrong
answer.
All you are doing is showing you don't understand how logic works.
On 8/8/2025 11:56 PM, Richard Heathfield wrote:
On 09/08/2025 05:44, olcott wrote:
On 8/8/2025 11:11 PM, Richard Heathfield wrote:
I am testing the assumption that simulating termination analyzer HHH
correctly simulates its input until it:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What does that tell me about my assumption?
*You forgot to tell it*
No, I didn't. There was no need to tell it.
Please make sure to do the execution trace of DD correctly simulated by HHH to answer this question:
What value should HHH(DD) correctly return?
Here it is again with that one change.
https://chatgpt.com/share/6896d14a-3714-8011-abae-17fb391ff170
The answer is drawn from only two possible answers: 0 and 1.
How that answer is obtained - execution trace or whatever - IS
IRRELEVANT to everybody except whichever poor sod has to cut the HHH()
code - you, in this case.
To everyone else it's a black box.
All that matters is that it produces the right answer... which it
demonstrably doesn't because both possible answers are wrong.
Both answers are wrong only if you make sure
to not understand the actual process.
The process is the steps required to determine
the actual behavior of the actual input.
int sum(int x, int y) { return x + y; }
sum(3,4) will not return the sum of 5 + 6.
It is incorrect for HHH(DD) to report on the behavior
of DD() because the pathological relationship between
HHH and DD changes this behavior.
If you try to think of it as a black box then you
are unable to see that DD calls HHH(DD) in recursive
simulation that cannot reach its own "if" statement
thus making the "do the opposite" code unreachable.
If you don't know that the "do the opposite" code
is unreachable you will mistakenly think that this
code has some effect.
Not only does this code have no effect: the fact
that DD calls HHH(DD) in recursive simulation
makes the actual behavior specified by the actual
input non-halting behavior. This makes HHH(DD)==0
correct.
If you don't do it this way it is like you keep
expecting that sum(3,4) will return the sum of 5 + 6.
If you try to think of it as a black box then you
are unable to see that DD calls HHH(DD) in recursive
simulation that cannot reach its own "if" statement
thus making the "do the opposite" code unreachable.
It turns out that the question: Does DD() halt?
is an incorrect question for HHH.
On 8/9/2025 9:07 AM, Richard Heathfield wrote:
On 09/08/2025 14:31, olcott wrote:
It turns out that the question: Does DD() halt?
is an incorrect question for HHH.
Yes, because HHH doesn't know and can't find out.
Welcome to the Halting Problem.
You're late.
It is an incorrect question not because of the
unreachable "do the opposite" code in DD.
It is an incorrect question because it asks about the behavior of a non-input.
On 8/9/2025 8:10 AM, Richard Heathfield wrote:
On 09/08/2025 13:56, olcott wrote:
If you try to think of it as a black box then you
are unable to see that DD calls HHH(DD) in recursive
simulation that cannot reach its own "if" statement
thus making the "do the opposite" code unreachable.
No, you can have all the recursive calls you like to DD from
inside HHH, but that's neither here nor there. That's black box
stuff. Nobody cares about all that shit - except you, of course.
The only reason that nobody cares is that they
only care about rebuttal at the expense of truth.
They don't want to see any reasoning that proves
that they are wrong. They only want to keep assuming
(against the verified facts) that they are right.
But there's one call that's *not* black box stuff - the
top-level DD - the one called not through HHH but directly from
main.
Because that is not an input to HHH(DD), it is none of
the damn business of HHH. Your ignorance of the notion
of computable functions is showing.
The ONLY thing that I expected from comp.lang.c people
was the behavior of DD correctly simulated by HHH.
Because some people have fundamental misconceptions about the correct measure of correct simulation, I had to also add this wording:
What is the behavior of DD emulated by HHH according
to the semantics of the x86 language?
On 8/9/2025 8:10 AM, Richard Heathfield wrote:
On 09/08/2025 13:56, olcott wrote:
If you try to think of it as a black box then you
are unable to see that DD calls HHH(DD) in recursive
simulation that cannot reach its own "if" statement
thus making the "do the opposite" code unreachable.
No, you can have all the recursive calls you like to DD from inside
HHH, but that's neither here nor there. That's black box stuff. Nobody
cares about all that shit - except you, of course.
The only reason that nobody cares is that they
only care about rebuttal at the expense of truth.
They don't want to see any reasoning that proves
that they are wrong. They only want to keep assuming
(against the verified facts) that they are right.
But there's one call that's *not* black box stuff - the top-level DD -
the one called not through HHH but directly from main.
Because that is not an input to HHH(DD), it is none of
the damn business of HHH. Your ignorance of the notion
of computable functions is showing.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
All of these things are beyond your capacity to understand.
The ONLY thing that I expected from comp.lang.c people
was the behavior of DD correctly simulated by HHH.
Because some people have fundamental misconceptions about the correct measure of correct simulation, I had to also add this wording:
What is the behavior of DD emulated by HHH according
to the semantics of the x86 language?
On 8/9/2025 4:56 AM, wij wrote:
On Fri, 2025-08-08 at 23:59 -0500, olcott wrote:
On 8/8/2025 11:56 PM, Richard Heathfield wrote:
On 09/08/2025 05:44, olcott wrote:
On 8/8/2025 11:11 PM, Richard Heathfield wrote:
I am testing the assumption that simulating termination analyzer HHH correctly simulates its input until it:
(a) Detects a non-terminating behavior pattern: abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What does that tell me about my assumption?
*You forgot to tell it*
No, I didn't. There was no need to tell it.
Please make sure to do the execution trace of DD correctly simulated by HHH to answer this question:
What value should HHH(DD) correctly return?
Here it is again with that one change.
https://chatgpt.com/share/6896d14a-3714-8011-abae-17fb391ff170
The answer is drawn from only two possible answers: 0 and 1.
How that answer is obtained - execution trace or whatever - IS
IRRELEVANT to everybody except whichever poor sod has to cut the HHH() code - you, in this case.
To everyone else it's a black box.
All that matters is that it produces the right answer... which it
demonstrably doesn't because both possible answers are wrong.
You let it ignore a mandatory chain of thought and it gets the wrong
answer.
Q1: What time is it, yes or no?
Q2: Jack is a bachelor. Did Jack hit his wife?
What is the correct answer?
I came up with those two examples years ago. https://groups.google.com/g/sci.lang/c/lSdYexJ0ozo/m/aDN9-TYLHwIJ
The first one was around 2004 in comp.theory, and the second one is shown on the link from ten years ago.
It turns out that the question: Does DD() halt?
is an incorrect question for HHH.
Not because DD contradicts both Boolean values that
HHH returns. The question is incorrect because Turing
machines can only compute the mapping from their
actual input to the actual behavior that this input
actually specifies.
This is correctly measured by the behavior of DD
correctly simulated by HHH.
Or for people who are prone to be disagreeable, it is
correctly measured by the behavior of DD emulated by
HHH according to the semantics of the x86 language.
In this case the notion of "correct simulation" is fully
grounded in the semantics of the x86 language, making
disagreement incorrect.
On 8/9/2025 1:50 AM, Fred. Zwarts wrote:
On 09 Aug 2025 at 05:54, olcott wrote:
HHH(DD)==0 is construed as correct when it is construed
that HHH is reporting on the fact that its actual input
DD correctly simulated by HHH cannot possibly reach its
own simulated "return" statement final halt state.
The failure of HHH to reach the end of the simulation cannot be used as a reason why HHH(DD)=0 is correct.
Changing the words that I said and then rebutting
these changed words is the dishonest deception known
as the strawman error.