On 8/26/2025 2:57 PM, dbush wrote:
On 8/26/2025 3:44 PM, Richard Heathfield wrote:
On 26/08/2025 20:00, olcott wrote:
On 8/26/2025 12:49 PM, Richard Heathfield wrote:
<snip>
You have already established that HHH returns 0
to claim that DDD never halts.

Liar.
I'm sorry? Are you now saying DDD halts?
He's referring to his weasel-word phrase "DD emulated by HHH
according to the semantics of the x86 language".
The ACTUAL BEHAVIOR of the ACTUAL INPUT to HHH(DD)
specifies the non-halting behavior of never reaching
its own halt state as measured by DD correctly simulated
by any HHH.
If we don't measure the ACTUAL BEHAVIOR of the
ACTUAL INPUT to HHH(DD) this way, then we stupidly
ignore the verified fact that DD DOES CALL HHH(DD)
in RECURSIVE SIMULATION.
I honestly don't see how dozens of people in the
last three years could honestly ignore the fact
that DD does call HHH(DD) in recursive simulation
and this does change the behavior of DD.
On 8/26/2025 3:52 PM, Richard Heathfield wrote:
On 26/08/2025 21:23, olcott wrote:
I honestly don't see how dozens of people in the
last three years could honestly ignore the fact
that DD does call HHH(DD) in recursive simulation
and this does change the behavior of DD.
DD's halting behaviour depends entirely on what HHH returns.
Only an idiot can't see that.
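The point that DD's halting behaviour depends on what HHH returns is the classic diagonal construction from the halting problem proof. A minimal runnable sketch (the names hhh and dd, and hhh's fixed return value, are assumptions for illustration, not anyone's actual implementation) shows that whatever answer the decider gives, dd does the opposite:

```python
def hhh(func):
    """Stand-in 'halt decider': always predicts non-halting (returns 0).
    Any fixed answer it gives is defeated by dd below."""
    return 0

def dd():
    # dd consults the decider about itself, then does the opposite.
    if hhh(dd) == 0:       # decider claims "dd never halts"...
        return "halted"    # ...so dd promptly halts, refuting it.
    while True:            # decider claims "dd halts"...
        pass               # ...so dd loops forever, refuting it.
```

Here, because hhh returns 0, dd() in fact halts; had hhh returned 1 instead, dd() would loop forever. Either way the decider's verdict about dd is wrong, which is the contradiction the standard proof relies on.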
As I have said hundreds of times now it never
has been any of the f-cking business of any
Turing machine based halt decider whether M
halts on input P.
All the textbooks get this WRONG.
It has all been about the behavior specified by
the input ⟨M⟩, P to H THAT IS CHANGED WHEN
M calls H in recursive simulation.