On 8/6/2025 2:16 AM, Mikko wrote:
On 2025-08-05 14:41:15 +0000, olcott said:
On 8/5/2025 2:08 AM, Mikko wrote:
On 2025-08-04 18:29:04 +0000, olcott said:
Diagonalization only arises when one assumes that a
Turing machine decider must report on its own behavior
instead of the behavior specified by its machine description.
Wrong. There are proofs by diagonalization that do not assume
anything about Turing machines.
That was said within the context of Turing Machine halt deciders
as can be seen from its wording.
No, in the OP the words were those of a general statement. References
to Turing machines were only in a subordinate clause and there was
no reference to halting.
Diagonalization only arises when one assumes that a
*Turing machine decider* must report on its own behavior
instead of the behavior specified by its machine description.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt.
Thus lines two and four are incorrect because they
require Ĥ.embedded_H to report on its own behavior.
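For concreteness, a minimal Python sketch of the diagonal construction
under discussion; halts and H_hat are illustrative names for an assumed
decider and its diagonal case, not anything from Linz's text:

    def halts(program, argument):
        # Assumed halt decider: True iff program(argument) halts.
        # No correct implementation can exist; this is a placeholder.
        raise NotImplementedError("assumed decider")

    def H_hat(program):
        # Do the opposite of whatever halts() predicts about
        # program applied to its own description.
        if halts(program, program):
            while True:       # loop forever if the prediction is "halts"
                pass
        return "halted"       # halt if the prediction is "loops"

    # The diagonal case is H_hat applied to H_hat: either answer
    # halts() could give for (H_hat, H_hat) is made wrong by construction.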
It does not make sense to assume that a Turing machine decider
must or must not report on this or that. The only reasonable
assumptions are those that specify the topic of the proof.
It makes no sense to require a function to be applied
to elements outside of its domain.
What is the square root of a dead chicken?
Turing machines only compute the mapping from their actual inputs.
Non-inputs such as other Turing machines are not in the domain
of any Turing machine halt decider.
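The domain point can be illustrated in Python: what a decider receives
is a finite string, never the machine itself. inspect.getsource merely
stands in for "machine description" here; the example is an assumption
for illustration:

    import inspect

    def example_machine():
        # Stands in for a Turing machine.
        return 42

    # The finite-string description of the machine, not the machine itself:
    encoding = inspect.getsource(example_machine)
    print(type(encoding))     # <class 'str'>

    # A halt decider is a function of strings like `encoding`;
    # a running machine is never among its arguments.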
The halting problem specifies that the input to a halting decider
is a description of the computation asked about.
That is not stated with complete accuracy.
The actual question is about the behavior that
the finite string input specifies.
The term of art "machine description"
has always been too vague.
The answer is
required to predict whether that computation will halt if executed.
That has always been incorrect.
"The contradiction in Linz's (or Turing's) self-referential
halting construction only appears if one insists that the
machine can and must decide on its own behavior, which is
neither possible nor required."
https://chatgpt.com/share/6890ee5a-52bc-8011-852e-3d9f97bcfbd8
If you say otherwise you aren't saying anything about the halting
problem.
I am saying that the halting problem has always been
stated incorrectly.
*Here it is stated correctly*
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
if ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by Ĥ.embedded_H reaches
its simulated final halt state of ⟨Ĥ.qn⟩, and
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by Ĥ.embedded_H cannot possibly
reach its simulated final halt state of ⟨Ĥ.qn⟩.
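A minimal sketch of that criterion, with Python generators standing in
for machine descriptions; the fixed step budget is an illustrative
assumption, since a finite budget can only approximate "cannot possibly
reach":

    def embedded_H(make_machine, argument, max_steps=10_000):
        # Reports on its simulation of the input, not on the direct
        # execution of the machine it is embedded in.
        simulation = make_machine(argument)      # simulated, not executed
        for _ in range(max_steps):
            try:
                next(simulation)                 # one simulated step
            except StopIteration:
                return True                      # reached its final halt state
        return False                             # never reached it within budget

    def halting_example(n):
        for i in range(n):                       # halts after n steps
            yield i

    def looping_example(_):
        while True:                              # never reaches a final state
            yield

    print(embedded_H(halting_example, 5))        # True
    print(embedded_H(looping_example, None))     # False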
On 8/6/2025 6:41 AM, Richard Damon wrote:
On 8/6/25 7:33 AM, olcott wrote:
On 8/6/2025 2:16 AM, Mikko wrote:
On 2025-08-05 14:41:15 +0000, olcott said:
On 8/5/2025 2:08 AM, Mikko wrote:
On 2025-08-04 18:29:04 +0000, olcott said:
Diagonalization only arises when one assumes that a
Turing machine decider must report on its own behavior
instead of the behavior specified by its machine description.
Wrong. There are proofs by diagonalization that do not assume
anything about Turing machines.
That was said within the context of Turing Machine halt deciders
as can be seen from its wording.
No, in the OP the words were those of a general statement. References
to Turing machines were only in a subordinate clause and there was
no reference to halting.
Diagonalization only arises when one assumes that a
*Turing machine decider* must report on its own behavior
instead of the behavior specified by its machine description.
But that *IS* a requirement of it, at least if we ask about its own
behavior by providing it with a representation that includes itself.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt.
Thus lines two and four are incorrect because they
require Ĥ.embedded_H to report on its own behavior.
But there is nothing wrong about that.
You are just asserting your own ignorance.
It does not make sense to assume that a Turing machine decider
must or must not report on this or that. The only reasonable
assumptions are those that specify the topic of the proof.
It makes no sense to require a function to be applied
to elements outside of its domain.
But that *IS* in its domain.
Turing machines are not in the domain of Turing machine deciders,
only finite strings are in this domain.
I have proven that when a Turing machine halt decider is
evaluating the behavior of its own machine description,
the behavior specified by the description is not the
behavior of the underlying machine.
People only disagree because the verified facts
disagree with their opinion.
On 8/7/2025 2:48 AM, Mikko wrote:
On 2025-08-06 11:33:50 +0000, olcott said:
On 8/6/2025 2:16 AM, Mikko wrote:
On 2025-08-05 14:41:15 +0000, olcott said:
On 8/5/2025 2:08 AM, Mikko wrote:
On 2025-08-04 18:29:04 +0000, olcott said:
Diagonalization only arises when one assumes that a
Turing machine decider must report on its own behavior
instead of the behavior specified by its machine description.
Wrong. There are proofs by diagonalization that do not assume
anything about Turing machines.
That was said within the context of Turing Machine halt deciders
as can be seen from its wording.
No, in the OP the words were those of a general statement. References
to Turing machines were only in a subordinate clause and there was
no reference to halting.
Diagonalization only arises when one assumes that a
*Turing machine decider* must report on its own behavior
instead of the behavior specified by its machine description.
Said that way the claim is false. The above sentence claims that
diagonalization does not arise in contexts not involving Turing
machines. That is false.
The idea of diagonalization is older than the idea of Turing machines.
*My point is subtle, thus easy to miss*
When one Turing machine is required to report on
the behavior of a non-input (a directly executed TM),
then the diagonalization result occurs.
Turing Machine Linz Ĥ applied to its own machine description ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
*repeats until aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
When Ĥ.embedded_H is based on a UTM that measures
the behavior specified by its input by simulating
this input, the recursive simulation non-halting
behavior pattern is matched.
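A toy rendering of the (a)-(c) cycle: a pure simulator that never
aborts is modeled by a plain call, and Python's recursion limit stands
in for unbounded nesting. All names are illustrative assumptions:

    import sys
    sys.setrecursionlimit(100)           # keep the demonstration short

    def pure_embedded_H(description, argument):
        return description(argument)     # simulate by calling, never abort

    def H_hat(description):
        # (a) copy the input, (b) invoke embedded_H on the pair,
        # (c) embedded_H simulates the input.
        return pure_embedded_H(description, description)

    try:
        H_hat(H_hat)                     # the cycle repeats without end
    except RecursionError:
        print("no nested simulation ever reaches a final state")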
On 08/08/2025 08:26, Mikko wrote:
On 2025-08-07 13:17:59 +0000, olcott said:
*My point is subtle, thus easy to miss*
When one Turing machine is required to report on
the behavior of a non-input (a directly executed TM),
then the diagonalization result occurs.
That is the core idea of Turing's proof.
What you call "non-input" is the behaviour specified by the input.
Whether that behaviour is or will be actually executed is
irrelevant.
The result is that there is no Turing machine that can fully
solve the problem.
On 8/8/2025 2:43 AM, Richard Heathfield wrote:
On 08/08/2025 08:26, Mikko wrote:
On 2025-08-07 13:17:59 +0000, olcott said:
<snip>
*My point is subtle, thus easy to miss*
When one Turing machine is required to report on
the behavior of a non-input (a directly executed TM),
then the diagonalization result occurs.
That is the core idea of Turing's proof.
What you call "non-input" is the behaviour specified by the input.
Whether that behaviour is or will be actually executed is irrelevant.
The result is that there is no Turing machine that can fully
solve the problem.
By what olcott calls a 'non-input' he appears to mean an incomputable
computation - i.e. it's a fancy way of saying that Turing and Linz are
right to claim that there are some problems that a computer cannot
compute.
Prediction: olcott will deny the above. But it seems likely nonetheless.
It may be very difficult to understand, yet the behavior
of non-inputs does not count. Simulating termination
analyzers are only accountable for the behavior that their
inputs specify.
Correct simulation is a correct measure of this behavior.
Correct simulation and direct execution only vary when
an input calls its own simulating termination analyzer.
In that case it is the input that rules and non-inputs are
irrelevant.
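That divergence can be exhibited in a self-contained sketch, assuming a
toy analyzer that aborts when it finds its own simulation nested inside
itself: the verdict says non-halting while the direct execution halts,
precisely because the embedded copy aborts. Which of the two is the
right measure is exactly what this thread disputes; all names are
illustrative:

    _in_progress = set()

    class Aborted(Exception):
        pass

    def analyzer(description, argument):
        key = (description, argument)
        if key in _in_progress:          # the same simulation nested in itself
            raise Aborted                # unwind to the outermost copy
        _in_progress.add(key)
        try:
            description(argument)        # calling stands in for simulating
            return True                  # the simulated input halted
        except Aborted:
            return False                 # report: non-halting pattern
        finally:
            _in_progress.discard(key)

    def DD(description):
        # The input that calls its own simulating termination analyzer.
        return analyzer(description, description)

    print(analyzer(DD, DD))   # False: the analyzer's verdict on its input
    print(DD(DD))             # yet the direct execution returns, i.e. halts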