On 8/15/2025 10:23 AM, Mr Flibble wrote:
Peter Olcott has been discussing variations of this idea for years across
forums like Usenet groups (e.g., comp.theory, comp.lang.c++),
ResearchGate, and PhilArchive, often claiming to "refute" the Halting
Problem proofs through simulating halt deciders (SHDs) that abort on
infinite patterns. These claims are frequently met with criticism,
including accusations of crankery, dishonesty, and dodging
counterarguments.
For instance:
- In a 2022 thread, Olcott presented code similar to yours (a halt decider
H that simulates P, detects recursive calls, and aborts to return
non-halting). You (as Mr Flibble) countered that such a simulation-based
decider is invalid because it doesn't return a decision to the caller,
leading to artificial infinite recursion that's not present in
non-simulation versions (e.g., referencing Strachey 1965; a rough sketch
follows this list). Olcott responded by insisting the x86 semantics prove
his point and that simulating deciders correctly reject non-halting
inputs. This back-and-forth highlights a pattern where critics argue the
approach sidesteps the actual problem, while Olcott reframes it around
simulation details without resolving the contradiction.
- Other discussions explicitly label Olcott's tactics as dishonest. In one
thread, responders call him a "crank" for repeatedly posting refuted
claims and accuse him of lying by misrepresenting software engineering
principles to bypass the proofs. For example: "You are the liar, Peter."
Similar sentiments appear in related posts, describing "dishonest dodges"
where he shifts definitions or ignores established theory to maintain
his position.
- Olcott's self-published papers (e.g., on ResearchGate) reiterate these
ideas, asserting that pathological self-reference is overcome by
simulation abortion, but they don't engage with why this fails for the
general case: as Turing showed, no algorithm can handle all inputs
without contradiction.
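For reference, Strachey's 1965 construction is non-simulating: the
contrarian program consults a hypothetical halts() oracle directly
instead of being simulated by it. A rough C rendering (halts() is
assumed here, not implementable):

/* Hypothetical halting oracle: nonzero iff f() would halt.
   Assumed for the sketch; Turing's result is that it cannot exist. */
extern int halts(int (*f)());

int P()
{
    if (halts(P))  /* if the oracle says P halts... */
        for (;;);  /* ...P loops forever, so either answer is wrong */
    return 0;
}

Nothing here recurses between the decider and its input, which is the
distinction the criticism above turns on.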
The consensus in these communities is that Olcott's persistence involves
rehashing debunked arguments, often ignoring or reframing rebuttals,
which aligns with your accusation of dishonesty. It's not uncommon for
such long-running debates to devolve into claims of crankery when one
side doesn't concede to established proofs.
/Grok
*The above seems to be a reasonable analysis when the basis is not
focused on verifying my exact reasoning.*
LLM systems form their analysis only on a specific basis.
On the basis that my conclusion does follow from its
premises, both they and I are correct.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
HERE:   goto HERE;
    return Halt_Status;
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
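To make the case analysis concrete, here is a minimal sketch (the stub
and DD0/DD1 names are mine, hypothetical) that hard-codes each possible
HHH answer and traces what a DD built on that answer then does:

typedef int (*ptr)();

static int HHH_stub0(ptr P) { (void)P; return 0; }  /* answer (a): "does not halt" */
static int HHH_stub1(ptr P) { (void)P; return 1; }  /* answer (b): "halts" */

int DD0()                          /* DD as it behaves if HHH returns 0 */
{
    int Halt_Status = HHH_stub0((ptr)DD0);
    if (Halt_Status)               /* 0: the loop is skipped */
HERE0:  goto HERE0;
    return Halt_Status;            /* DD0 halts, contradicting the 0 answer */
}

int DD1()                          /* DD as it behaves if HHH returns 1 */
{
    int Halt_Status = HHH_stub1((ptr)DD1);
    if (Halt_Status)               /* 1: the loop is entered */
HERE1:  goto HERE1;                /* DD1 never halts, contradicting the 1 answer */
    return Halt_Status;
}

Either fixed answer is falsified by the very program built on it; that
self-reference is what the thread below argues over.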
On 8/15/2025 9:16 AM, Richard Damon wrote:
On 8/15/25 11:33 AM, olcott wrote:
On 8/15/2025 10:23 AM, Mr Flibble wrote:
[snip: Grok analysis, quoted verbatim above]
*The above seems to be a reasonable analysis when the basis is not
focused on verifying my exact reasoning.*
LLM systems form their analysis only on a specific basis.
On the basis that my conclusion does follow from its
premises, both they and I are correct.
The problem is that LLMs don't do that; they do massive pattern
matching against their learning base, trying to predict what the next
most likely token would be.
[snip: <Input to LLM systems> block and share links, quoted verbatim above]
And since your input has a defect, in that it presumes the existence
of a correct answer, it misleads the LLM.
Your cases need to include that the machine might just run forever
waiting to find something that matches case (a) or (b), and the final
question can't assume an answer, but ask what could be a correct answer.
When phrased that way, they see the error.
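For instance, a neutral version of the prompt (one possible wording,
mine) would add a third case and open the question:

(c) The simulation matches neither (a) nor (b) and runs forever.
What value, if any, could HHH(DD) correctly return, and why?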
Sorry, all you are doing is showing you are getting very good at
lying, good enough to confuse the LLMs with a question that has a
hidden bias.
It's like asking: have you stopped owning and watching that illegal
kiddie porn?
Go ahead, answer that one, and try to explain why the police found you
with some.
Why do you say that? Is he or isn't he? Scary.