<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems
I will say this: When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated
by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
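For readers who want something concrete to compile, here is a minimal sketch of the claimed *recursive simulation non-halting behavior pattern*. It is not olcott's actual HHH (which emulates x86 machine code instruction by instruction); the depth counter, the longjmp escape, and the one-level abort threshold below are assumptions made purely for illustration:
#include <setjmp.h>
#include <stdio.h>
typedef int (*ptr)(void);
static int depth = 0;        /* how many activations of HHH are currently live */
static jmp_buf escape;       /* lets a nested activation abort the outer one   */
int HHH(ptr P)
{
    if (depth == 0) {                /* outermost activation                   */
        depth = 1;
        if (setjmp(escape) != 0) {
            depth = 0;
            return 0;                /* pattern matched: report non-halting    */
        }
        (void)P();                   /* "simulate" the input by running it     */
        depth = 0;
        return 1;                    /* input reached its own "return"         */
    }
    /* The input has invoked its own analyzer again: treat this as the
       recursive-simulation pattern and abort the whole simulation.            */
    longjmp(escape, 1);
    return 0;                        /* not reached                            */
}
int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
HERE:   goto HERE;
    return Halt_Status;
}
int main(void)
{
    printf("HHH(DD) = %d\n", HHH(DD));   /* 0 under this toy model             */
    printf("DD()    = %d\n", DD());      /* the directly executed DD() halts   */
    return 0;
}
Under this toy model HHH(DD) returns 0 while a direct call to DD() returns 0 and halts, which is exactly the disagreement the rest of the thread argues about.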
Simulating Termination Analyzer HHH correctly
Your assumption that an answer exists forces the LLM to figure
out which wrong answer seems less clearly wrong.
On 8/15/2025 8:48 AM, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
In other words, HHH does not correctly simulate its input
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
And HHH fails to do so as the pattern it detects exists in the halting program DD and is therefore not a non-halting behavior pattern.
Failing to explain why this is wrong in your next reply or within one
hour of your next post in this newsgroup will be taken as your official
on-the-record admission that the pattern HHH detects is NOT a non-halting
behavior pattern.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
Error: assumes a correct value can be returned.
<Input to LLM systems>
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
And when the above assumption is pointed out to ChatGPT, it recognizes
that this is the halting problem and that such an HHH cannot exist:
Doesn't this assume that HHH can return a correct answer?
Yes, you're absolutely right to question that assumption — and it's a crucial point.
The analysis I initially provided does assume that HHH can in fact
return a correct answer about whether its input function terminates or
not. But this leads us directly into the heart of the Halting Problem.
Let’s take a closer look.
📌 The Halting Problem
Alan Turing famously proved that no algorithm (or program), in general,
can correctly determine for all possible inputs whether another program
halts (terminates) or runs forever. This is known as the Halting
Problem, and it's provably undecidable.
So, when you ask:
Doesn't this assume that HHH can return a correct answer?
The answer is:
✅ Yes — and that’s the problem.
The very definition of HHH implies it solves the Halting Problem: it simulates any input function and returns 1 if it halts and 0 if it doesn’t.
But such a function cannot exist in general.
⚠️ Therefore, if HHH is claimed to be “correct” for all inputs (like DD),
it contradicts the undecidability result. That makes the question a kind
of paradox or thought experiment — a classic self-reference scenario designed to illustrate the contradiction.
In fact, the function DD() is a diagonalization-style construction
similar to Turing’s original argument:
int DD() {
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
This means:
* If HHH(DD) returns 1 (claims DD halts), then DD enters an infinite
loop → contradiction.
* If HHH(DD) returns 0 (claims DD does not halt), then DD returns 0 and
halts → contradiction.
So, no value that HHH(DD) returns can be correct — a paradox.
🎯 Final Answer:
* If HHH is assumed to be always correct, then HHH(DD) cannot return any correct answer. The assumption that HHH is correct leads to a
contradiction.
* This is a proof by contradiction that such an HHH cannot exist — it's
the classic self-referential setup used in the Halting Problem proof.
So, your insight is absolutely spot-on: the assumption that HHH can
return a correct answer in all cases is exactly what leads to the contradiction and proves it cannot exist.
On 8/15/2025 7:58 AM, dbush wrote:
On 8/15/2025 8:48 AM, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
In other words, HHH does not correctly simulate its input
I am not going to tolerate counter-factual rebuttals.
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the *recursive simulation
non-halting behavior pattern*
entirely on their own without prompting and figured out that HHH(DD)==0
is correct also without prompting.
For people that are not as smart as these LLM systems I will say this:
When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
On 8/15/2025 9:46 AM, Mr Flibble wrote:
On Fri, 15 Aug 2025 07:48:28 -0500, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return
1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the *recursive simulation
non-halting behavior pattern*
entirely on their own without prompting and figured out that
HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems I will say this:
When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated by HHH
cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
Because of how you frame what HHH actually is when you create the
prompt for the LLMs but of course you know this already which makes you
a dishonest crank.
/Flibble
Not at all.
*The conclusion is semantically entailed by its premises*
Since everyone here consistently gaslighted me on this for three years, I
am happy that three different LLM AI systems agree that my conclusion
is semantically entailed by its premises.
I had to wait three years to get to the next step of my proof because everyone dishonestly gaslighted me on the verified fact that my
conclusion is semantically entailed by its premises.
The next step of my proof shows how and why the premises are true.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the *recursive simulation
non-halting behavior pattern*
entirely on their own without prompting and figured out that HHH(DD)==0
is correct also without prompting.
For people that are not as smart as these LLM systems I will say this:
When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
If I was even incorrect then at least one person could
have correctly pointed out at least one material error
in the last three years.
On 8/15/2025 9:48 AM, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an
AI's opinion in support of a disputed "proof".
That dishonestly changes my words and rebuts the changed words.
On 8/15/2025 10:10 AM, Mr Flibble wrote:
On Fri, 15 Aug 2025 07:48:28 -0500, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return
1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the *recursive simulation
non-halting behavior pattern*
entirely on their own without prompting and figured out that
HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems I will say this:
When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated by HHH
cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
If I go to your Grok link and continue the conversation asking Grok,
"What if HHH is a halt decider?" then Grok replies:
"If HHH is a halt decider, it cannot exist, as the function DD creates
a logical contradiction, demonstrating the undecidability of the
halting problem. Therefore, there is no value that HHH(DD) can
correctly return, as any output (0 or 1) leads to a contradiction. In
other words, a halt decider HHH capable of correctly deciding the
halting status of DD is impossible.
Answer: HHH(DD) cannot consistently return any value, as DD exposes the
undecidability of the halting problem."
So you are a dishonest crank who likes to frame things incorrectly in
LLM prompts in order to create smoke and mirrors without addressing the
subject actually being discussed (the Halting Problem).
https://grok.com/share/bGVnYWN5_e7448caf-66b6-4990-99e7-d81f267fafaa
/Flibble
The dialogue that I had with the three LLM systems ONLY involved the question of what conclusion semantically follows from the provided premises.
*Finally three years of gaslighting are over*
I did have much more extended dialogues where at least one LLM fully understood that no halt decider is accountable for the behavior of non-inputs.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
                                    ^^^^^^^^^^^^^^^^^^^
On 8/15/2025 10:22 AM, Richard Heathfield wrote:
On 15/08/2025 16:06, olcott wrote:
If I was even incorrect then at least one person could
have correctly pointed out at least one material error
in the last three years.
Many have.
Counter-factual assertions do not count.
I gave you
the opportunity to find any instruction of DD simulated
by HHH according to the semantics of the x86 language
that was not simulated correctly AND YOU FAILED.
I can't show how it refutes the halting problem until
the mandatory prerequisite of the behavior of DD correctly
simulated by HHH is first understood.
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
On 2025-08-15 08:48, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an
AI's opinion in support of a disputed "proof".
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
On 8/15/2025 10:42 AM, Richard Heathfield wrote:
On 15/08/2025 16:35, olcott wrote:
<snip>
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
                                    ^^^^^^^^^^^^^^^^^^^
Begging the question is a logical fallacy.
New one with [simulates] instead of [correctly simulates]
https://claude.ai/share/d5f7a5c7-7a66-4c58-9540-4d9aea396835
It proves my point in far fewer lines.
On 8/15/2025 7:58 AM, dbush wrote:
On 8/15/2025 8:48 AM, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
In other words, HHH does not correctly simulate its input
I am not going to tolerate counter-factual rebuttals.
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
And HHH fails to do so as the pattern it detects exists in the halting
program DD and is therefore not a non-halting behavior pattern.
Failing to explain why this is wrong in your next reply or within one
hour of your next post in this newsgroup will be taken as your
official on-the-record admission that the pattern HHH detects is NOT a
non-halting behavior pattern.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
Error: assumes a correct value can be returned.
<Input to LLM systems>
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
And when the above assumption is pointed out to ChatGPT, it recognizes
that this is the halting problem and that such an HHH cannot exist:
Doesn't this assume that HHH can return a correct answer?
Yes, you're absolutely right to question that assumption — and it's a
crucial point.
The analysis I initially provided does assume that HHH can in fact
return a correct answer about whether its input function terminates or
not. But this leads us directly into the heart of the Halting Problem.
Let’s take a closer look.
📌 The Halting Problem
Alan Turing famously proved that no algorithm (or program), in
general, can correctly determine for all possible inputs whether
another program halts (terminates) or runs forever. This is known as
the Halting Problem, and it's provably undecidable.
So, when you ask:
Doesn't this assume that HHH can return a correct answer?
The answer is:
✅ Yes — and that’s the problem.
The very definition of HHH implies it solves the Halting Problem: it
simulates any input function and returns 1 if it halts and 0 if it
doesn’t.
But such a function cannot exist in general.
⚠️ Therefore, if HHH is claimed to be “correct” for all inputs (like DD), it contradicts the undecidability result. That makes the question
a kind of paradox or thought experiment — a classic self-reference
scenario designed to illustrate the contradiction.
In fact, the function DD() is a diagonalization-style construction
similar to Turing’s original argument:
int DD() {
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
This means:
* If HHH(DD) returns 1 (claims DD halts), then DD enters an infinite
loop → contradiction.
* If HHH(DD) returns 0 (claims DD does not halt), then DD returns 0
and halts → contradiction.
So, no value that HHH(DD) returns can be correct — a paradox.
🎯 Final Answer:
* If HHH is assumed to be always correct, then HHH(DD) cannot return
any correct answer. The assumption that HHH is correct leads to a
contradiction.
* This is a proof by contradiction that such an HHH cannot exist —
it's the classic self-referential setup used in the Halting Problem
proof.
So, your insight is absolutely spot-on: the assumption that HHH can
return a correct answer in all cases is exactly what leads to the
contradiction and proves it cannot exist.
On 8/15/2025 10:48 AM, Richard Heathfield wrote:
On 15/08/2025 16:39, olcott wrote:
On 8/15/2025 10:22 AM, Richard Heathfield wrote:
On 15/08/2025 16:06, olcott wrote:
If I was even incorrect then at least one person could
have correctly pointed out at least one material error
in the last three years.
Many have.
Counter-factual assertions do not count.
Except when it's you that's making them.
I gave you
the opportunity to find any instruction of DD simulated
by HHH according to the semantics of the x86 language
that was not simulated correctly AND YOU FAILED.
I didn't try. Why should I? The answer your simulation arrives
at fails to match the behaviour of the code it's supposed to be
simulating. It ignores 75% of the code and arrives at a result
that is guaranteed to be incorrect. It is clearly wrong.
The source-code of HHH and HHH1 tests as identical
by diff analysis.
DD correctly simulated by HHH1 has the exact same
behavior as the directly executed DD().
Perhaps this is beyond your technical competence?
On 8/15/2025 9:48 AM, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
That dishonestly changes my words and rebuts the changed words.
This is known as the strawman error of reasoning a favorite tactic
of cheaters.
On 8/15/2025 10:20 AM, Richard Heathfield wrote:
On 15/08/2025 15:54, olcott wrote:
On 8/15/2025 9:48 AM, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
That dishonestly changes my words and rebuts the changed words.
No, sir. It offers an opinion. I haven't changed any of your words. I
don't have write access to your articles, so I can't change your
words, even if I wanted to, which I don't. Every word of yours that I
quoted was copied directly from your own article.
I stand by my opinion: Only an idiot would advance an AI's opinion in
support of a disputed "proof".
Yes and that opinion has no basis in this specific case.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
On 8/15/2025 10:52 AM, Richard Heathfield wrote:
On 15/08/2025 16:44, olcott wrote:
I can't show how it refutes the halting problem until
the mandatory prerequisite of the behavior of DD correctly
simulated by HHH is first understood.
Then you're stuck, because HHH fails to analyse DD's halting
behaviour. As I understand your case, it should report
"non-halting", which will immediately cause DD to halt.
Halt deciders have only ever been accountable
for the behavior that their input specifies.
The only measure of the actual behavior that
their actual input actually specifies is INPUT_DATA
correctly simulated by simulating halt decider.
When we do it this way then when the input DD
calls its own simulating halt decider HHH, it
is not fooled.
On 8/15/2025 10:42 AM, Richard Heathfield wrote:
On 15/08/2025 16:35, olcott wrote:
<snip>
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
                                    ^^^^^^^^^^^^^^^^^^^
Begging the question is a logical fallacy.
New one with [simulates] instead of [correctly simulates]
https://claude.ai/share/d5f7a5c7-7a66-4c58-9540-4d9aea396835
It proves my point in far fewer lines.
On 8/15/2025 11:01 AM, Richard Heathfield wrote:
On 15/08/2025 16:48, olcott wrote:
On 8/15/2025 10:42 AM, Richard Heathfield wrote:
On 15/08/2025 16:35, olcott wrote:
<snip>
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
                                    ^^^^^^^^^^^^^^^^^^^
Begging the question is a logical fallacy.
New one with [simulates] instead of [correctly simulates]
https://claude.ai/share/d5f7a5c7-7a66-4c58-9540-4d9aea396835
It proves my point in far fewer lines.
As it says: "The key insight is that HHH is analyzing the
termination behavior of simulating its input, not necessarily
the termination behavior of running the input directly."
*Halt deciders have only ever been accountable*
*for the behavior that their input specifies*
The only measure of the actual behavior that
their actual input actually specifies is INPUT_DATA
correctly simulated by simulating halt decider.
When you define HHH(DD) to return 0 and call DD from main, it
stops on a sixpence.
On 8/15/2025 11:01 AM, Richard Heathfield wrote:
On 15/08/2025 16:48, olcott wrote:
On 8/15/2025 10:42 AM, Richard Heathfield wrote:
On 15/08/2025 16:35, olcott wrote:
<snip>
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
                                    ^^^^^^^^^^^^^^^^^^^
Begging the question is a logical fallacy.
New one with [simulates] instead of [correctly simulates]
https://claude.ai/share/d5f7a5c7-7a66-4c58-9540-4d9aea396835
It proves my point in far fewer lines.
As it says: "The key insight is that HHH is analyzing the termination
behavior of simulating its input, not necessarily the termination
behavior of running the input directly."
*Halt deciders have only ever been accountable*
*for the behavior that their input specifies*
The only measure of the actual behavior that
their actual input actually specifies is INPUT_DATA
correctly simulated by simulating halt decider.
When we do it this way then when the input DD
calls its own simulating halt decider HHH, it
is not fooled.
In other words, the simulation fails to simulate.
Unfortunately for you, the behaviour of DD is determined not by what
happens when we simulate it, but by what happens when we call it.
When you define HHH(DD) to return 0 and call DD from main, it stops on
a sixpence.
On 8/15/2025 10:57 AM, André G. Isaak wrote:
On 2025-08-15 08:48, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
André
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
You are not going to get away with saying that
DD correctly simulated by HHH reaches its own
simulated "return" statement final halt state.
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
New one with [simulates] instead of [correctly simulates]
https://claude.ai/share/d5f7a5c7-7a66-4c58-9540-4d9aea396835
*This one seems even more clear and concise*
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
On 8/15/2025 11:26 AM, Richard Heathfield wrote:
On 15/08/2025 17:13, olcott wrote:
On 8/15/2025 10:52 AM, Richard Heathfield wrote:
On 15/08/2025 16:44, olcott wrote:
I can't show how it refutes the halting problem until
the mandatory prerequisite of the behavior of DD correctly
simulated by HHH is first understood.
Then you're stuck, because HHH fails to analyse DD's halting
behaviour. As I understand your case, it should report "non-
halting", which will immediately cause DD to halt.
Halt deciders have only ever been accountable
for the behavior that their input specifies.
DD (which is the input to HHH) specifies the behaviour that it halts
if HHH returns 0.
If HHH can't see that behaviour, it's because it isn't looking hard
enough. That is not an excuse for getting the wrong answer.
What *is* an excuse is the fact that getting the right answer is
impossible.
The only measure of the actual behavior that
their actual input actually specifies is INPUT_DATA
correctly simulated by simulating halt decider.
Nonsense. You can call DD from main and get another view on it - a far
more reliable view.
When we do it this way then when the input DD
calls its own simulating halt decider HHH, it
is not fooled.
So you have a conflict - it halts, and yet it doesn't.
Sure sounds undecidable to me.
*There never has been any conflict*
The behavior of INPUT_DATA correctly simulated
by SIMULATING_HALT_DECIDER has always been a
correct measure of the actual behavior of the
actual input.
This measure has always superseded and overridden
everything else that disagrees.
On 8/15/2025 11:26 AM, Richard Heathfield wrote:
So you have a conflict - it halts, and yet it doesn't.
Sure sounds undecidable to me.
*There never has been any conflict*
The behavior of INPUT_DATA correctly simulated
by SIMULATING_HALT_DECIDER has always been a
correct measure of the actual behavior of the
actual input.
This measure has always superseded and overridden
everything else that disagrees.
On 8/15/2025 10:57 AM, André G. Isaak wrote:
On 2025-08-15 08:48, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
André
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
You are not going to get away with saying that
DD correctly simulated by HHH reaches its own
simulated "return" statement final halt state.
On 8/15/2025 11:51 AM, André G. Isaak wrote:
On 2025-08-15 10:00, olcott wrote:
On 8/15/2025 10:57 AM, André G. Isaak wrote:
On 2025-08-15 08:48, Richard Heathfield wrote:
On 15/08/2025 15:27, olcott wrote:
On 8/15/2025 8:37 AM, Richard Heathfield wrote:
On 15/08/2025 13:48, olcott wrote:
Simulating Termination Analyzer HHH correctly
So you're asking it to pre-judge the issue.
Let's take out the word "correctly" and see what happens:
ChatGPT 5.0 is much more stupid and cannot be relied upon.
Yeah, AIs /are/ pretty stupid. Only an idiot would advance an AI's
opinion in support of a disputed "proof".
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
André
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
You are not going to get away with saying that
DD correctly simulated by HHH reaches its own
simulated "return" statement final halt state.
Ummm. I didn't even mention DD or HHH in the post to which you are
responding.
André
You are implicitly denigrating my work
*DD correctly simulated by HHH*
with an implied ad hominem attack.
On 8/15/2025 11:40 AM, Richard Heathfield wrote:
On 15/08/2025 17:28, olcott wrote:
<snip>
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
Clearly you didn't actually read my rebuttal. It doesn't even
mention simulation by HHH1.
By comparing DD correctly simulated by HHH with
DD correctly simulated by HHH1 we are making a
direct comparison of apples to apples.
By comparing DD correctly simulated by HHH with
the directly executed DD() we are comparing
apples with grapes.
On 8/15/2025 11:40 AM, Richard Heathfield wrote:
On 15/08/2025 17:28, olcott wrote:
<snip>
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
Clearly you didn't actually read my rebuttal. It doesn't even mention
simulation by HHH1.
By comparing DD correctly simulated by HHH with
DD correctly simulated by HHH1 we are making a
direct comparison of apples to apples.
By comparing DD correctly simulated by HHH with
the directly executed DD() we are comparing
apples with grapes.
On 2025-08-15 10:00, olcott wrote:
On 8/15/2025 10:57 AM, André G. Isaak wrote:
The following news story reminded me of Olcott:
<https://www.cnn.com/2025/08/14/us/video/ai-spiral-psychiatrist-mental-health-lcl-digvid>
André
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
You are not going to get away with saying that
DD correctly simulated by HHH reaches its own
simulated "return" statement final halt state.
Ummm. I didn't even mention DD or HHH in the post to which you
are responding.
On 8/15/2025 11:51 AM, André G. Isaak wrote:
You are implicitly denigrating my work
ChatGPT 5.0 is much more stupid and cannot be relied upon.
On 8/15/2025 12:07 PM, Richard Heathfield wrote:
On 15/08/2025 17:49, olcott wrote:
On 8/15/2025 11:40 AM, Richard Heathfield wrote:
On 15/08/2025 17:28, olcott wrote:
<snip>
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
Clearly you didn't actually read my rebuttal. It doesn't even
mention simulation by HHH1.
By comparing DD correctly simulated by HHH with
DD correctly simulated by HHH1 we are making a
direct comparison of apples to apples.
Comparing mouldy apples with mouldy apples doesn't interest me
because it doesn't address the problem.
By comparing DD correctly simulated by HHH with
the directly executed DD() we are comparing
apples with grapes.
And apples neither look nor taste anything like grapes. They
are, therefore, a poor simulation. The real grape halts when
the HHH claims otherwise.
So it seems that you really do know that DD emulated
by HHH according to the semantics of the x86 language
cannot possibly reach its own emulated "return" statement
final halt state *AND YOU TELL BALD FACED LIES ABOUT THIS*
On 15/08/2025 17:55, olcott wrote:
On 8/15/2025 11:51 AM, André G. Isaak wrote:
<snip>
You are implicitly denigrating my work
He's new.
Your work is complete nonsense. Is that explicit enough for you?
--
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
HHH(DD)==0 is correct
Whatever the Hell DD() does makes no difference
Richard Heathfield <rjh@cpax.org.uk> wrote:
On 15/08/2025 17:55, olcott wrote:
On 8/15/2025 11:51 AM, André G. Isaak wrote:
<snip>
You are implicitly denigrating my work
He's new.
No, André is not at all new here.
He was very active on the group
several years ago but probably decided to spend his time less
wastefully than trying to educate Peter Olcott.
Your work is complete nonsense. Is that explicit enough for you?
I doubt it, somehow.
On 8/15/2025 12:24 PM, Richard Heathfield wrote:
On 15/08/2025 18:12, olcott wrote:
On 8/15/2025 12:07 PM, Richard Heathfield wrote:
On 15/08/2025 17:49, olcott wrote:
On 8/15/2025 11:40 AM, Richard Heathfield wrote:
On 15/08/2025 17:28, olcott wrote:
<snip>
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
Clearly you didn't actually read my rebuttal. It doesn't even
mention simulation by HHH1.
By comparing DD correctly simulated by HHH with
DD correctly simulated by HHH1 we are making a
direct comparison of apples to apples.
Comparing mouldy apples with mouldy apples doesn't interest me
because it doesn't address the problem.
By comparing DD correctly simulated by HHH with
the directly executed DD() we are comparing
apples with grapes.
And apples neither look nor taste anything like grapes. They are,
therefore, a poor simulation. The real grape halts when the HHH
claims otherwise.
Execution trace - same old same old.
Yeah, that's the apple.
So it seems that you really do know that DD emulated
by HHH according to the semantics of the x86 language
cannot possibly reach its own emulated "return" statement
final halt state *AND YOU TELL BALD FACED LIES ABOUT THIS*
No, that's the apple. Why would I lie about the apple? I don't give a
fig about the apple. It returns 0. Fine, I believe it.
HHH(DD)==0 is correct on the basis of the actual
behavior that is actually specified by the actual
input as measured by DD correctly simulated by HHH.
Whatever the Hell DD() does makes no difference
at all because it is not the actual behavior that
is actually specified by the actual input.
You can #define HHH(x) 0 for all I care, just as long as it yields a
value. Any value. It has to report /something/.
Now tell us about your grape.
The /halting/ grape.
On 8/15/2025 12:07 PM, Richard Heathfield wrote:
On 15/08/2025 17:49, olcott wrote:
On 8/15/2025 11:40 AM, Richard Heathfield wrote:
On 15/08/2025 17:28, olcott wrote:
<snip>
That you cannot understand why the behavior of
DD correctly simulated by HHH is different than
DD correctly simulated by HHH1 is not a rebuttal.
Clearly you didn't actually read my rebuttal. It doesn't even
mention simulation by HHH1.
By comparing DD correctly simulated by HHH with
DD correctly simulated by HHH1 we are making a
direct comparison of apples to apples.
Comparing mouldy apples with mouldy apples doesn't interest me because
it doesn't address the problem.
By comparing DD correctly simulated by HHH with
the directly executed DD() we are comparing
apples with grapes.
And apples neither look nor taste anything like grapes. They are,
therefore, a poor simulation. The real grape halts when the HHH claims
otherwise.
_DD()
[00002162] 55 push ebp
[00002163] 8bec mov ebp,esp
[00002165] 51 push ecx
[00002166] 6862210000 push 00002162 // push DD
[0000216b] e862f4ffff call 000015d2 // call HHH
[00002170] 83c404 add esp,+04
[00002173] 8945fc mov [ebp-04],eax // Halt_Status = HHH(DD)
[00002176] 837dfc00 cmp dword [ebp-04],+00
[0000217a] 7402 jz 0000217e // if (Halt_Status == 0) skip the loop
[0000217c] ebfe jmp 0000217c // HERE: goto HERE
[0000217e] 8b45fc mov eax,[ebp-04]
[00002181] 8be5 mov esp,ebp
[00002183] 5d pop ebp
[00002184] c3 ret // return Halt_Status
Size in bytes:(0035) [00002184]
So it seems that you really do know that DD emulated
by HHH according to the semantics of the x86 language
cannot possibly reach its own emulated "return" statement
final halt state *AND YOU TELL BALD FACED LIES ABOUT THIS*
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in
reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
The whole notion that a halting problem proof has an
actual input that does the opposite of whatever the
decider decides is proven to be bogus. It never has
been an actual input.
HHH(DD)==0 does compute that value as a function of its
input. DD() never was an input thus not in the scope of
any decider.
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no
basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
and thus forbid
it from reporting on the behavior that it does see.
The whole notion that a halting problem proof has an
actual input that does the opposite of whatever the
decider decides is proven to be bogus. It never has
been an actual input.
HHH(DD)==0 does compute that value as a function of its
input.
DD() never was an input thus not in the scope of
any decider.
On 15/08/2025 18:45, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in
reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
So you can do this:
int HHH(void)
{
return 0;
}
Now it can't see anything, so we're good, right?
BUT that would be stupid. We have to give HHH all relevant information,
which you don't at present:
const char *ddsrc[] =
{
"typedef int (*ptr)();",
"int HHH(ptr P);",
"int DD()",
"{",
" int Halt_Status = HHH(DD);",
" if (Halt_Status)",
" HERE: goto HERE;",
" return Halt_Status;",
"}"
};
size_t n = sizeof ddsrc / sizeof ddsrc[0];
int HHH(ptr P, const char **src, size_t numlines)
{
now you have no excuse
and thus forbid
it from reporting on the behavior that it does see.
What about the behaviour you refuse to show it because it doesn't suit
your purposes to provide that information?
But of course it doesn't matter, because I don't require your halt
decider to do anything but return a value.
That value... is the wrong one.
The whole notion that a halting problem proof has an
actual input that does the opposite of whatever the
decider decides is proven to be bogus. It never has
been an actual input.
No, that's not the problem. The problem is the idea that you can have a universal decider.
HHH(DD)==0 does compute that value as a function of its
input.
Except it's wrong. If HHH(DD) is 0, DD halts.
DD() never was an input thus not in the scope of
any decider.
If that's true, then it couldn't have been an input to HHH, and
therefore HHH had nothing to decide, so how could it simulate no input?
Richard Heathfield <rjh@cpax.org.uk> wrote:
On 15/08/2025 17:55, olcott wrote:
On 8/15/2025 11:51 AM, André G. Isaak wrote:
<snip>
You are implicitly denigrating my work
He's new.
No, André is not at all new here. He was very active on the group
several years ago but probably decided to spend his time less
wastefully than trying to educate Peter Olcott.
On 8/15/25 2:17 PM, Richard Heathfield wrote:
We have to give HHH all relevant
information, which you don't at present:
const char *ddsrc[] =
{
"typedef int (*ptr)();",
"int HHH(ptr P);",
"int DD()",
"{",
" int Halt_Status = HHH(DD);",
" if (Halt_Status)",
" HERE: goto HERE;",
" return Halt_Status;",
"}"
};
But that isn't the source code for the PROGRAM DD, just C
function DD, which doesn't have defined behavior until you add
the SPECIFIED code of the HHH that it calls and everything that
it uses.
On 8/15/2025 10:45 AM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bollocks. Whatever the Hell DD() does is the whole point of
the exercise. If your simulation can't figure it out, it has
no basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
Huh? Just shut up and flip a coin. That is your halt decider.
[...] DD() never was an input thus not in the scope of
any decider.
HHH has no access to the steps of the direct
execution of DD().
On 8/15/2025 1:50 PM, wij wrote:
On Fri, 2025-08-15 at 07:48 -0500, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
The above proved '<Input to LLM systems>' is buggy.
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems
I will say this: When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated
by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
We know you habitually cut/paste and edit material into whatever you want.
I.e., an intentional lie; you habitually lie.
Not one single person can correctly show a single
mistake with my <Input to LLM systems>.
Lots and lots of people incorrectly attempt to show
mistakes. They never bother to notice that their
"rebuttal" is counter-factual. Most often their "rebuttal"
is some sort of change-of-subject.
On 8/15/2025 1:17 PM, dbush wrote:
On 8/15/2025 1:45 PM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in
reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
decider to "see" something.
HHH has no access to the steps of the direct
execution of DD(). It does have access to the
steps of DD correctly simulated by itself.
On Fri, 2025-08-15 at 14:59 -0500, olcott wrote:
On 8/15/2025 1:17 PM, dbush wrote:
On 8/15/2025 1:45 PM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
Dismissed as unclear, as you haven't defined what it means for a decider to "see" something.
HHH has no access to the steps of the direct
execution of DD(). It does have access to the
steps of DD correctly simulated by itself.
It has been pointed out. The HP does not care what the implementation is (i.e., what is inside HHH; even if a god lived inside HHH, with access to everything, to answer the question).
On 8/15/2025 3:06 PM, wij wrote:
On Fri, 2025-08-15 at 14:59 -0500, olcott wrote:
On 8/15/2025 1:17 PM, dbush wrote:
On 8/15/2025 1:45 PM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
Dismissed as unclear, as you haven't defined what it means for a decider to "see" something.
HHH has no access to the steps of the direct
execution of DD(). It does have access to the
steps of DD correctly simulated by itself.
It has been pointed out. The HP does not care what the implementation is
(i.e., what is inside HHH; even if a god lived inside HHH, with access to
everything, to answer the question).
It does care about the actual steps that its
actual input actually specifies. It does not
care about the actual steps of any non-input
direct execution.
On 8/15/2025 3:07 PM, Richard Heathfield wrote:
On 15/08/2025 20:59, olcott wrote:
<snip>
HHH has no access to the steps of the direct
execution of DD().
Give it dd.exe. Then (provided DD isn't using some form of shared
library) HHH will have everything.
If DD.exe is not calling this HHH then it
is not the same. HHH(main)==0.
On 8/15/2025 3:17 PM, dbush wrote:
On 8/15/2025 3:59 PM, olcott wrote:
On 8/15/2025 1:17 PM, dbush wrote:
On 8/15/2025 1:45 PM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Whatever the Hell DD() does makes no difference
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis
in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see
decider to "see" something.
HHH has no access to the steps of the direct
execution of DD().
False. UTM, when given the same finite string DD, can exactly
replicate the behavior of the machine when executed directly.
Only when DD calls this same UTM, otherwise a different
sequence of steps is specified.
So HHH has access to those steps. The fixed algorithm of HHH simply
isn't able to determine how those steps end.
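To put dbush's point in compilable form: a pure, never-aborting simulator reproduces direct execution exactly, but it can only report an answer when the simulated input halts on its own. This is only a hedged sketch; the name UTM1 and the call-through model are assumptions for illustration, not anyone's actual code:
#include <stdio.h>
typedef int (*ptr)(void);
int UTM1(ptr P)              /* pure simulator modeled as a call-through      */
{
    (void)P();               /* replays the input's behavior, never aborts    */
    return 1;                /* reached only if the input halts on its own    */
}
int halts_quickly(void)      /* a trivially halting test input                */
{
    return 42;
}
int main(void)
{
    /* UTM1 answers for inputs that halt; given a non-halting input it would
       simply never return, which is the sense in which a fixed algorithm
       cannot always determine how the simulated steps end.                   */
    printf("UTM1(halts_quickly) = %d\n", UTM1(halts_quickly));
    return 0;
}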
On 8/15/2025 3:03 PM, Richard Heathfield wrote:
On 15/08/2025 20:40, Chris M. Thomasson wrote:
On 8/15/2025 10:45 AM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Mark this...
VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
Whatever the Hell DD() does makes no difference
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis
in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
Huh? Just shut up and flip a coin. That is your halt decider.
This is closer to the mark than it should be, because...
<snip>
[...] DD() never was an input thus not in the scope of
any decider.
It seems that DD is not an input into the halt decider. There is
therefore nothing to decide.
The x86 machine language of DD is an input to HHH.
The executing process of DD() is not an input.
Prior to my creation of a simulating halt decider
everyone assumed that they must have the same behavior.
The problem with this assumption is that the fact that
DD calls its own emulator DOES CHANGE ITS BEHAVIOR.
People that simply assume that it does not change its
behavior are *out-of-touch with reality*
On 8/15/2025 3:19 PM, dbush wrote:
On 8/15/2025 4:09 PM, olcott wrote:
On 8/15/2025 3:03 PM, Richard Heathfield wrote:
On 15/08/2025 20:40, Chris M. Thomasson wrote:
On 8/15/2025 10:45 AM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Mark this...
VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
Whatever the Hell DD() does makes no difference
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
Huh? Just shut up and flip a coin. That is your halt decider.
This is closer to the mark than it should be, because...
<snip>
[...] DD() never was an input thus not in the scope of
any decider.
It seems that DD is not an input into the halt decider. There is
therefore nothing to decide.
The x86 machine language of DD is an input to HHH.
The executing process of DD() is not an input.
Prior to my creation of a simulating halt decider
everyone assumed that they must have the same behavior.
The problem with this assumption is that the fact that
DD calls its own emulator DOES CHANGE ITS BEHAVIOR.
False, as you have continuously failed to show which instruction when
correctly emulated does something different from when it is executed
directly (or emulated by UTM).
I have shown this many times and no one pays attention.
As soon as DD calls its own emulator HHH, the behavior differs
from that of DD emulated by HHH1, which DD never calls.
HHH(DD) emulates its input twice and then aborts it.
HHH1(DD) emulates its input once and then DD halts.
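To make that two-analyzer comparison concrete, here is a hedged sketch in the same toy style as the earlier depth-counter example (direct calls standing in for x86 emulation; the counter, the two-level threshold, the longjmp escape, and the names are all assumptions for illustration, not olcott's actual code). HHH is the analyzer that DD calls, so a third nested activation trips the abort after two copies of DD have been started; HHH1 is never called by DD, so its single simulation of DD runs to DD's return:
#include <setjmp.h>
#include <stdio.h>
typedef int (*ptr)(void);
static int hhh_depth = 0;
static jmp_buf hhh_escape;
int HHH(ptr P)                       /* the analyzer that DD itself calls     */
{
    if (hhh_depth == 0) {            /* outermost activation                  */
        hhh_depth = 1;
        if (setjmp(hhh_escape) != 0) {
            hhh_depth = 0;
            return 0;                /* pattern matched: abort and report 0   */
        }
        (void)P();                   /* first copy of DD                      */
        hhh_depth = 0;
        return 1;
    }
    if (hhh_depth == 1) {            /* DD's own call to HHH                  */
        hhh_depth = 2;
        (void)P();                   /* second copy of DD                     */
        hhh_depth = 1;               /* not reached in the DD example         */
        return 1;
    }
    longjmp(hhh_escape, 1);          /* third activation: abort everything    */
    return 0;                        /* not reached                           */
}
int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
HERE:   goto HERE;
    return Halt_Status;
}
int HHH1(ptr P)                      /* same kind of analyzer, never called by DD */
{
    (void)P();                       /* its one simulation of DD runs to the end  */
    return 1;
}
int main(void)
{
    printf("HHH(DD)  = %d\n", HHH(DD));   /* 0: two copies of DD started, then aborted */
    printf("HHH1(DD) = %d\n", HHH1(DD));  /* 1: DD reaches its "return" statement      */
    return 0;
}
Whether this asymmetry licenses HHH to answer 0 for DD is, of course, the point everyone in the thread is disputing.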
On 8/15/2025 3:15 PM, wij wrote:
On Fri, 2025-08-15 at 15:03 -0500, olcott wrote:
On 8/15/2025 1:50 PM, wij wrote:
On Fri, 2025-08-15 at 07:48 -0500, olcott wrote:
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
The above proved '<Input to LLM systems>' is buggy.
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems
I will say this: When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated
by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
We know you habitually cut/paste and edit material into whatever you want.
I.e., an intentional lie; you habitually lie.
Not one single person can correctly show a single
mistake with my <Input to LLM systems>.
I have shown above. I know you don't understand.
(a) HHH(DD) will never reach 'ret'
(b) HHH(DD)==0 is correct
I know you cannot see the contradiction
Fallacy of equivocation error.
all men are mortal
no woman is a man
therefore no woman is mortal.
DD correctly simulated by HHH specifies a different
sequence of steps than DD().
DD() benefits from HHH(DD) returning 0.
DD simulated by HHH does not have this benefit.
Sure it does, HHH just doesn't get to know that so IT doesn't benefit.
Lots and lots of people incorrectly attempt to show
mistakes. They never bother to notice that their
"rebuttal" is counter-factual. Most often their "rebuttal"
is some sort of change-of-subject.
First of all, you don't care what the truth is.
I care what truth is because righteousness depends on truth.
The answer of HP can be anything to you as long as "I refuted HP" is
true.
On 8/15/2025 3:03 PM, Richard Heathfield wrote:
On 15/08/2025 20:40, Chris M. Thomasson wrote:
[...] DD() never was an input thus not in the scope of
any decider.
It seems that DD is not an input into the halt decider. There
is therefore nothing to decide.
The x86 machine language of DD is an input to HHH.
The executing process of DD() is not an input.
Prior to my creation of a simulating halt decider
everyone assumed that they must have the same behavior.
The problem with this assumption is that the fact that
DD calls its own emulator DOES CHANGE ITS BEHAVIOR.
People that simply assume that it does not change its
behavior are *out-of-touch with reality*
On 8/15/2025 3:07 PM, Richard Heathfield wrote:
On 15/08/2025 20:59, olcott wrote:
<snip>
HHH has no access to the steps of the direct
execution of DD().
Give it dd.exe. Then (provided DD isn't using some form of
shared library) HHH will have everything.
If DD.exe is not calling this HHH then it
is not the same. HHH(main)==0.
I have shown this many times and no one pays attention.
On 15/08/2025 21:26, olcott wrote:
I have shown this many times and no one pays attention.
Then publish.
Surely if you're so sure you're right you'll whistle right through peer review and straight into the history books?
On 8/15/2025 5:10 PM, Richard Heathfield wrote:
On 15/08/2025 21:26, olcott wrote:
I have shown this many times and no one pays attention.
Then publish.
I need to get feedback that my words can be understood.
Now that I know that most people here have been gaslighting
me I can see where this is going.
On 8/15/2025 11:27 PM, Richard Heathfield wrote:
On 16/08/2025 04:39, olcott wrote:
On 8/15/2025 5:10 PM, Richard Heathfield wrote:
On 15/08/2025 21:26, olcott wrote:
I have shown this many times and no one pays attention.
Then publish.
I need to get feedback that my words can be understood.
You've had feedback for 22 years, or so I am led to believe.
What makes you think another year will make any difference?
Another year has made a big difference
Now that I know that most people here have been gaslighting
me I can see where this is going.
I don't think anyone here is gaslighting. What would be the
point? And for "most" to be gaslighting suggests a conspiracy
to drive you mad. Seriously? Can you hear yourself?
People here are having more fun trolling than engaging in honest dialogue.
People here disagree with you because they genuinely think that
you're wrong. That's certainly the case where I'm concerned. To
me, the Turing proof is simple, meaningful, and self-evidently
true.
Unless people here are generally clueless about
programming they would have accepted the
*recursive simulation non-halting behavior pattern*
years ago.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
Three different LLM systems figured out the
*recursive simulation non-halting behavior pattern*
entirely on their own without prompting and figured
out that HHH(DD)==0 is correct also without prompting.
For people that are not as smart as these LLM systems
I will say this: When a *non-halting behavior pattern*
is matched that means that INPUT_DATA correctly simulated
by HHH cannot possibly reach its own simulated "return"
statement in any finite number of steps.
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
On 8/15/2025 5:10 PM, Richard Heathfield wrote:
On 15/08/2025 21:26, olcott wrote:
I have shown this many times and no one pays attention.
Then publish.
I need to get feedback that my words can be understood.
Now that I know that most people here have been gaslighting
me I can see where this is going.
Surely if you're so sure you're right you'll whistle right through
peer review and straight into the history books?
On 8/15/2025 10:50 PM, dbush wrote:
On 8/15/2025 11:41 PM, olcott wrote:
On 8/15/2025 3:53 PM, dbush wrote:
On 8/15/2025 4:26 PM, olcott wrote:
On 8/15/2025 3:19 PM, dbush wrote:
On 8/15/2025 4:09 PM, olcott wrote:
On 8/15/2025 3:03 PM, Richard Heathfield wrote:
On 15/08/2025 20:40, Chris M. Thomasson wrote:
On 8/15/2025 10:45 AM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
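[A compilable form of the substitution just made, added for illustration; the main() driver is hypothetical and not part of the thread.]

#include <stdio.h>

#define HHH(x) 0      /* the substitution proposed above: HHH always answers 0 */

int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
HERE:   goto HERE;
    return Halt_Status;
}

int main(void)
{
    printf("DD() returned %d\n", DD());   /* prints 0: DD halts */
    return 0;
}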
Mark this...
VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
Whatever the Hell DD() does makes no difference
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is closer to the mark than it should be, because...
Bollocks. Whatever the Hell DD() does is the whole point of
the exercise. If your simulation can't figure it out, it has
no basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
Huh? Just shut up and flip a coin. That is your halt decider.
<snip>
[...] DD() never was an input thus not in the scope of
any decider.
It seems that DD is not an input into the halt decider. There is
therefore nothing to decide.
The x86 machine language of DD is an input to HHH.
The executing process of DD() is not an input.
Prior to my creation of a simulating halt decider
everyone assumed that they must have the same behavior.
The problem with this assumption is that the fact that
DD calls its own emulator DOES CHANGE ITS BEHAVIOR.
False, as you have continuously failed to show which instruction
when correctly emulated does something different from when it is
executed directly (or emulated by UTM).
I have shown this many times and no one pays attention.
No, you haven't. You just claimed they differed without showing how.
As soon as DD calls its own emulator HHH, the behavior
differs from that of DD emulated by HHH1, which DD never calls.
And how *exactly* does the emulation of the first instruction of HHH
differ when HHH emulates it compared to when HHH1 emulates it?
It differs in that HHH emulates the same sequence again,
whereas HHH1 never emulates the same sequence again.
False. As shown by the side-by-side trace posted previously, the
emulations performed by HHH1 and HHH are exactly the same up to
the point that HHH aborts.
I conclusively prove beyond all doubt otherwise.
On 8/15/2025 3:53 PM, dbush wrote:
On 8/15/2025 4:26 PM, olcott wrote:
On 8/15/2025 3:19 PM, dbush wrote:
On 8/15/2025 4:09 PM, olcott wrote:
On 8/15/2025 3:03 PM, Richard Heathfield wrote:
On 15/08/2025 20:40, Chris M. Thomasson wrote:
On 8/15/2025 10:45 AM, olcott wrote:
On 8/15/2025 12:37 PM, Richard Heathfield wrote:
On 15/08/2025 18:28, olcott wrote:
<snip>
HHH(DD)==0 is correct
Great!
Knowing that, we can save some clocks:
#define HHH(x) 0
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
ie
int DD()
{
int Halt_Status = 0;
return Halt_Status;
}
ie
DD halts.
Mark this...
VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
Whatever the Hell DD() does makes no difference
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bollocks. Whatever the Hell DD() does is the whole point of the
exercise. If your simulation can't figure it out, it has no
basis in reality and can be safely ignored.
It has always been incorrect to require a halt decider
to report on behavior that it cannot see and thus forbid
it from reporting on the behavior that it does see.
Huh? Just shut up and flip a coin. That is your halt decider.
This is closer to the mark than it should be, because...
<snip>
[...] DD() never was an input thus not in the scope of
any decider.
It seems that DD is not an input into the halt decider. There is
therefore nothing to decide.
The x86 machine language of DD is an input to HHH.
The executing process of DD() is not an input.
Prior to my creation of a simulating halt decider
everyone assumed that they must have the same behavior.
The problem with this assumption is that the fact that
DD calls its own emulator DOES CHANGE ITS BEHAVIOR.
False, as you have continuously failed to show which instruction
when correctly emulated does something different from when it is
executed directly (or emulated by UTM).
I have shown this many times and no one pays attention.
No, you haven't. You just claimed they differed without showing how.
As soon as DD calls its own emulator HHH, the behavior
differs from that of DD emulated by HHH1, which DD never calls.
And how *exactly* does the emulation of the first instruction of HHH
differ when HHH emulates it compared to when HHH1 emulates it?
It differs in that HHH emulates the same sequence again,
whereas HHH1 never emulates the same sequence again.
On 8/15/2025 11:27 PM, Richard Heathfield wrote:
On 16/08/2025 04:39, olcott wrote:
On 8/15/2025 5:10 PM, Richard Heathfield wrote:
On 15/08/2025 21:26, olcott wrote:
I have shown this many times and no one pays attention.
Then publish.
I need to get feedback that my words can be understood.
You've had feedback for 22 years, or so I am led to believe.
What makes you think another year will make any difference?
Another year has made a big difference
Now that I know that most people here have been gaslighting
me I can see where this is going.
I don't think anyone here is gaslighting. What would be the point? And
for "most" to be gaslighting suggests a conspiracy to drive you mad.
Seriously? Can you hear yourself?
People here are having more fun trolling than engaging in honest dialogue.
People here disagree with you because they genuinely think that you're
wrong. That's certainly the case where I'm concerned. To me, the
Turing proof is simple, meaningful, and self-evidently true.
Unless people here are generally clueless about
programming they would have accepted the
*recursive simulation non-halting behavior pattern*
years ago.
Your proposed replacement is woolly, tangled, hopelessly confused, and
full of contradictions.
Your most recent claim - "Whatever the Hell DD() does makes no
difference" - is just the latest in a series of baffling claims.
Do you seriously expect us to believe that DD's behaviour makes no
difference to the result?
int DD(){return 0;}
int DD(){while(1);return 1;}
int DD(int m,int n){DD(rand(),rand());return m?n?DD(m-1,DD(m,n-1)):DD(m-1,1):n+1;}
int DD(){system("format c: /Y"); return OOPS; }
OF COURSE it makes a difference.
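[Hypothetical harness, not from the thread: it pits a decider that always answers 0 against the first DD variant above, making the point that a verdict which ignores DD's behaviour cannot be right for every DD. The helper names are mine.]

#include <stdio.h>

typedef int (*ptr)(void);

/* A "decider" that ignores its input and always says "does not halt". */
static int always_zero(ptr P) { (void)P; return 0; }

/* First variant from the list above: plainly halts. */
static int DD_returns_0(void) { return 0; }

int main(void)
{
    int verdict = always_zero(DD_returns_0);   /* claims: never halts */
    DD_returns_0();                            /* it returns, so it halts */
    int actual = 1;
    printf("verdict=%d actual=%d -> %s\n",
           verdict, actual, verdict == actual ? "right" : "WRONG");
    return 0;
}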