• Re: Succinct rebuttal to the Linz halting problem proof.

    From Richard Damon@21:1/5 to olcott on Wed Aug 6 07:41:09 2025
    On 8/6/25 7:33 AM, olcott wrote:
    On 8/6/2025 2:16 AM, Mikko wrote:
    On 2025-08-05 14:41:15 +0000, olcott said:

    On 8/5/2025 2:08 AM, Mikko wrote:
    On 2025-08-04 18:29:04 +0000, olcott said:

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    Wrong. There are proofs by diagonalization that do not assume
    anything about Turing machines.

    That was said within the context of Turing Machine halt deciders
    as can be seen by its words.

    No, in the OP the words were those of a general statement. References
    to Turing machines were only in a subordinate clause and there were
    no reference to halting.


    Diagonalization only arises when one assumes that a
    *Turing machine decider* must report on its own behavior
    instead of the behavior specified by its machine description.


    But that *IS* a requirement of it, at least if we ask about its own
    behavior by providing it with a representation that includes itself.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.
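
    The Linz construction above can be sketched as ordinary code, under the
    assumption that a halt decider existed as a callable. Everything here
    (`halts`, `make_h_hat`) is an illustrative name, not part of any real
    system:

```python
# A minimal sketch of the Linz-style diagonal machine, assuming a
# hypothetical halt decider `halts(prog, arg)` were available as a
# Python callable. All names are illustrative.

def make_h_hat(halts):
    """Build the diagonal machine H_hat from a claimed halt decider."""
    def h_hat(prog):
        # H_hat asks the embedded decider about prog applied to itself,
        # then does the opposite of whatever the decider predicts.
        if halts(prog, prog):
            while True:       # loop forever if the decider says "halts"
                pass
        return "halted"       # halt if the decider says "does not halt"
    return h_hat

# Whatever halts(h_hat, h_hat) answers, h_hat applied to itself does the
# opposite, so no total decider `halts` is correct on this input.
```

    For example, a decider that always answers "does not halt" is refuted
    because `h_hat(h_hat)` then halts.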

    Thus lines two and four are incorrect because they
    require Ĥ.embedded_H to report on its own behavior.

    But there is nothing wrong about that.

    You are just asserting your own ignorance.


    It does not make sense to assume that a Turing machine decider
    must or must not report on this or that. The only reasonable
    assumptions are those that specify the topic of the proof.


    It makes no sense to require a function to be applied
    to elements outside of its domain.

    But that *IS* in its domain.

    It makes no sense to call something outside its domain when it is in the domain.

    That just shows you lie.


    What is the square-root of a dead chicken?
    Turing machines only compute the mapping from their actual inputs.
    Non-inputs such as other Turing machines are not in the domain
    of any Turing machine halt decider.

    The halting problem specifies that the input to a halting decider
    is a description of the computation asked about.

    That is not stated with complete accuracy.
    The actual question is about the behavior that
    the finite string input specifies.

    No, The halting problem, when properly stated, never talks about "the
    behavior of the input string" as that isn't actually a valid concept.


    The term of the art: "machine description"
    has always been too vague.

    Nope, just shows your ignorance, which has made you into a pathological
    liar.

    Just because you can't understand it due to your mental incompetence,
    doesn't make it invalid.


    The answer is
    required to predict whether that computation will halt if executed.

    That has always been incorrect.

       "The contradiction in Linz's (or Turing's) self-referential
        halting construction only appears if one insists that the
        machine can and must decide on its own behavior, which is
        neither possible nor required."

    https://chatgpt.com/share/6890ee5a-52bc-8011-852e-3d9f97bcfbd8

    Which shows your lack of understanding of how logic works, since that
    answer came in response to you injecting the LIE that deciders can't be
    responsible for deciding on their own behavior.


    If you say otherwise you aren't saying anything about the halting
    problem.


    I am saying that the halting problem has always been
    stated incorrectly.

    Which just makes you wrong, and a pathological liar.

    Just like your false claim that you are "god", you can't change the
    definition of the problem and be working on that actual problem.

    That is like saying you broke the world record for the marathon
    because you ran the 440 quicker than anyone has run the marathon.

    Sorry, you are just proving you are a pathological liar.


    *Here it is stated correctly*
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
       if ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by Ĥ.embedded_H reaches
       its simulated final halt state of ⟨Ĥ.qn⟩, and

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by Ĥ.embedded_H cannot possibly
       reach its simulated final halt state of ⟨Ĥ.qn⟩.


    Nope, just your lies.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Aug 6 22:11:17 2025
    On 8/6/25 8:16 AM, olcott wrote:
    On 8/6/2025 6:41 AM, Richard Damon wrote:
    On 8/6/25 7:33 AM, olcott wrote:
    On 8/6/2025 2:16 AM, Mikko wrote:
    On 2025-08-05 14:41:15 +0000, olcott said:

    On 8/5/2025 2:08 AM, Mikko wrote:
    On 2025-08-04 18:29:04 +0000, olcott said:

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    Wrong. There are proofs by diagonalization that do not assume
    anything about Turing machines.

    That was said within the context of Turing Machine halt deciders
    as can be seen by its words.

    No, in the OP the words were those of a general statement. References
    to Turing machines were only in a subordinate clause and there were
    no reference to halting.


    Diagonalization only arises when one assumes that a
    *Turing machine decider* must report on its own behavior
    instead of the behavior specified by its machine description.


    But that *IS* a requirement of it, at least if we ask about its own
    behavior by providing it with a representation that includes itself.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
        if Ĥ applied to ⟨Ĥ⟩ halts, and
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
        if Ĥ applied to ⟨Ĥ⟩ does not halt.

    Thus lines two and four are incorrect because they
    require Ĥ.embedded_H to report on its own behavior.

    But there is nothing wrong about that.

    You are just asserting your own ignorance.


    It does not make sense to assume that a Turing machine decider
    must or must not report on this or that. The only reasonable
    assumptions are those that specify the topic of the proof.


    It makes no sense to require a function to be applied
    to elements outside of its domain.

    But that *IS* in its domain.


    Turing machines are not in the domain of Turing machine deciders,
    only finite strings are in this domain.

    But they are by representation

    I guess you think Turing machines can't do arithmetic, since numbers
    are in the domain of finite strings.
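
    The representation point can be made concrete: a decider's literal domain
    is finite strings, yet it decides a property of the objects those strings
    represent. A minimal sketch with an illustrative function name:

```python
# A decider whose domain is decimal strings, but whose question is about
# the numbers those strings represent. Sketch only; the name is illustrative.

def is_prime_string(s: str) -> bool:
    """Decide primality of the number represented by the decimal string s."""
    n = int(s)                # the string is the input; the number is its meaning
    if n < 2:
        return False
    # trial division up to the square root
    return all(n % d for d in range(2, int(n ** 0.5) + 1))
```

    The input is always a string; the semantic meaning of the input is the
    number, exactly as a machine description's meaning is the machine.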


    I have proven that in the case where a Turing Machine halt decider
    is evaluating the behavior of its own machine description,
    the behavior specified by the description is not the behavior
    of the underlying machine.

    Sure it is, you just don't define the behavior correctly.

    The behavior of the input is the behavior of the machine it represents,
    or equivalently the behavior of that exact input simulated by a UTM; that
    means it includes the code of the HHH that it calls, not the UTM.

    Note, trying to make it be the simulation done by the decider means it
    isn't a definition, as the decider could then do whatever it wanted, if
    you don't tie it to the actual UTM.

    That is your problem: you just don't understand the meaning of what you
    are talking about, and refuse to see the errors in your understanding when
    they are pointed out.

    This just makes you a stupid pathological liar, as you just don't care
    about what is actually true.


    People only disagree on the basis that the verified facts
    disagree with their opinion.


    Right, and since you keep on using wrong meanings, none of what you say
    is "verified"

  • From Mikko@21:1/5 to olcott on Thu Aug 7 10:48:23 2025
    On 2025-08-06 11:33:50 +0000, olcott said:

    On 8/6/2025 2:16 AM, Mikko wrote:
    On 2025-08-05 14:41:15 +0000, olcott said:

    On 8/5/2025 2:08 AM, Mikko wrote:
    On 2025-08-04 18:29:04 +0000, olcott said:

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    Wrong. There are proofs by diagonalization that do not assume
    anything about Turing machines.

    That was said within the context of Turing Machine halt deciders
    as can be seen by its words.

    No, in the OP the words were those of a general statement. References
    to Turing machines were only in a subordinate clause and there were
    no reference to halting.

    Diagonalization only arises when one assumes that a
    *Turing machine decider* must report on its own behavior
    instead of the behavior specified by its machine description.

    Said that way the claim is false. The above sentence claims that diagonalization does not arise in contexts not involving Turing machines. That is false.
    The idea of diagonalization is older than the idea of Turing machines.
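
    The historical point can be made concrete: Cantor's diagonal argument
    needs no Turing machines at all. A finite sketch over binary sequences
    (the function name is illustrative):

```python
# Cantor-style diagonalization on binary sequences: given any finite list
# of rows (each at least as long as the list), flipping the diagonal yields
# a sequence that differs from every listed row. Sketch only.

def diagonal_flip(rows):
    """Return a sequence differing from rows[i] at position i, for every i."""
    return [1 - rows[i][i] for i in range(len(rows))]
```

    The flipped diagonal disagrees with row i at position i, so it cannot
    appear anywhere in the list; no Turing machine is mentioned.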

    --
    Mikko

  • From Richard Damon@21:1/5 to olcott on Thu Aug 7 20:50:46 2025
    On 8/7/25 9:17 AM, olcott wrote:
    On 8/7/2025 2:48 AM, Mikko wrote:
    On 2025-08-06 11:33:50 +0000, olcott said:

    On 8/6/2025 2:16 AM, Mikko wrote:
    On 2025-08-05 14:41:15 +0000, olcott said:

    On 8/5/2025 2:08 AM, Mikko wrote:
    On 2025-08-04 18:29:04 +0000, olcott said:

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    Wrong. There are proofs by diagonalization that do not assume
    anything about Turing machines.

    That was said within the context of Turing Machine halt deciders
    as can be seen by its words.

    No, in the OP the words were those of a general statement. References
    to Turing machines were only in a subordinate clause and there were
    no reference to halting.

    Diagonalization only arises when one assumes that a
    *Turing machine decider* must report on its own behavior
    instead of the behavior specified by its machine description.

    Said that way the claim is false. The above sentence claims that
    diagonalization does not arise in contexts not involving Turing
    machines. That is false.
    The idea of diagonalization is older than the idea of Turing machines.


    *My point is subtle thus easy to miss*
    When one Turing machine is required to report on
    the behavior of a non-input directly executed TM
    then the diagonalization result occurs.

    But it is incorrect to call it a non-input, as it is the semantic
    meaning of the input, and what the problem requires of it.


    Turing Machine Linz Ĥ applied to its own machine description ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    *repeats until aborted*
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
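
    The repeating pattern (a)-(c) can be sketched in code, under the toy
    assumption that "machines" are Python callables and that the embedded
    decider imposes an arbitrary depth bound; the names and the bound are
    illustrative, not anyone's actual implementation:

```python
# Sketch of the recursive-simulation pattern: h_hat invokes its embedded
# decider on its own description, which simulates h_hat again, recursing
# until the decider's (arbitrary, illustrative) depth bound aborts it.

def embedded_h(prog, arg, depth=0, limit=3):
    """Simulate prog(arg); abort once the nesting exceeds the limit."""
    if depth > limit:
        return "aborted: recursive simulation pattern"
    return prog(arg, depth + 1, limit)

def h_hat(arg, depth=0, limit=3):
    # (a) copy the input, (b) invoke the embedded decider on the copy,
    # (c) which simulates h_hat(h_hat) again -- repeating until aborted.
    return embedded_h(h_hat, arg, depth, limit)
```

    Running `h_hat(h_hat)` recurses through (a)-(c) until the bound fires,
    which is the "repeats until aborted" behavior described above.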

    And since it will abort, discussion of it not aborting is incorrect.


    When Ĥ.embedded_H is based on a UTM that measures
    the behavior specified by its input by simulating
    this input then the recursive simulation non-halting
    behavior pattern is met.


    It may be based on a UTM, but it is no longer a UTM, unless you admit that
    your H will NEVER abort its simulation and so fails to be a decider.

    All you are doing is showing that you think lying is correct logic, and
    thus your claimed goal is just a bigger lie.

  • From Mikko@21:1/5 to olcott on Fri Aug 8 10:26:14 2025
    On 2025-08-07 13:17:59 +0000, olcott said:

    On 8/7/2025 2:48 AM, Mikko wrote:
    On 2025-08-06 11:33:50 +0000, olcott said:

    On 8/6/2025 2:16 AM, Mikko wrote:
    On 2025-08-05 14:41:15 +0000, olcott said:

    On 8/5/2025 2:08 AM, Mikko wrote:
    On 2025-08-04 18:29:04 +0000, olcott said:

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    Wrong. There are proofs by diagonalization that do not assume
    anything about Turing machines.

    That was said within the context of Turing Machine halt deciders
    as can be seen by its words.

    No, in the OP the words were those of a general statement. References
    to Turing machines were only in a subordinate clause and there were
    no reference to halting.

    Diagonalization only arises when one assumes that a
    *Turing machine decider* must report on its own behavior
    instead of the behavior specified by its machine description.

    Said that way the claim is false. The above sentence claims that diagonalization does not arise in contexts not involving Turing machines. That is false.
    The idea of diagonalization is older than the idea of Turing machines.

    *My point is subtle thus easy to miss*

    When one Turing machine is required to report on
    the behavior of a non-input directly executed TM
    then the diagonalization result occurs.

    That is the core idea of Turing's proof.

    What you call "non-input" is the behaviour specified by the input.
    Whether that behaviour is or will be actually executed is irrelevant.

    The result is that there is no Turing machine that can fully
    solve the problem.

    Turing Machine Linz Ĥ applied to its own machine description ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    *repeats until aborted*
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩

    When Ĥ.embedded_H is based on a UTM that measures
    the behavior specified by its input by simulating
    this input then the recursive simulation non-halting
    behavior pattern is met.

    From the meaning of the word "simulation" follows that every pattern
    that is met in a simulation is also met in a direct execution.
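
    The claim can be sketched with a toy machine: a correct simulator applies
    the very same transition rule as direct execution, so the two traces are
    identical step for step. The machine and all names here are illustrative:

```python
# A toy machine given as a description (state -> next state), run both
# directly and via an interpreter of the description. A correct simulation
# reproduces the direct-execution trace exactly, so any pattern met in the
# simulation is met in the direct run. Sketch only.

program = {0: 1, 1: 2, 2: 3}           # machine description

def direct(state):
    """The machine itself: one step of direct execution."""
    return program.get(state)          # None means halt

def simulated(desc):
    """A UTM-style interpreter: one step driven by the description."""
    return lambda state: desc.get(state)

def run(step, state, max_steps=10):
    """Collect the trace of states until halt or the step bound."""
    trace = [state]
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            break
        trace.append(state)
    return trace
```

    `run(direct, 0)` and `run(simulated(program), 0)` produce the same trace,
    which is the sense in which simulation and direct execution agree.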

    --
    Mikko

  • From Richard Heathfield@21:1/5 to Mikko on Fri Aug 8 08:43:46 2025
    On 08/08/2025 08:26, Mikko wrote:
    On 2025-08-07 13:17:59 +0000, olcott said:


    <snip>

    *My point is subtle thus easy to miss*

    When one Turing machine is required to report on
    the behavior of a non-input directly executed TM
    then the diagonalization result occurs.

    That is the core idea of Turing's proof.

    What you call "non-input" is the behaviour specified by the input.
    Whether that behaviour is or will be actually executed is irrelevant.

    The result is that there is no Turing machine that can fully
    solve the problem.

    By what olcott calls a 'non-input' he appears to mean an
    incomputable computation - ie it's a fancy way of saying that
    Turing and Linz are right to claim that there are some problems
    that a computer cannot compute.

    Prediction: olcott will deny the above. But it seems likely
    nonetheless.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Richard Damon@21:1/5 to olcott on Fri Aug 8 17:49:15 2025
    On 8/8/25 11:00 AM, olcott wrote:
    On 8/8/2025 2:43 AM, Richard Heathfield wrote:
    On 08/08/2025 08:26, Mikko wrote:
    On 2025-08-07 13:17:59 +0000, olcott said:


    <snip>

    *My point is subtle thus easy to miss*

    When one Turing machine is required to report on
    the behavior of a non-input directly executed TM
    then the diagonalization result occurs.

    That is the core idea of Turing's proof.

    What you call "non-input" is the behaviour specified by the input.
    Whether that behaviour is or will be actually executed is irrelevant.

    The result is that there is no Turing machine that can fully
    solve the problem.

    By what olcott calls a 'non-input' he appears to mean an incomputable
    computation - ie it's a fancy way of saying that Turing and Linz are
    right to claim that there are some problems that a computer cannot
    compute.

    Prediction: olcott will deny the above. But it seems likely nonetheless.


    It may be very difficult to understand, yet the behavior
    of non-inputs does not count. Simulating termination
    analyzers are only accountable for the behavior that their
    inputs specify.

    But it isn't a "non-input" but the exact semantic meaning of the input.


    Correct simulation is a correct measure of this behavior.
    Correct simulation and direct execution only vary when
    an input calls its own simulating termination analyzer.
    In that case it is the input that rules and non-inputs are
    irrelevant.


    And "Correct Simulation", by DEFINITION, recreates the behavior of the
    program when run.

    You just are too stupid to understand that, and LIE that a not-correct
    (because merely partial) simulation tells you something it doesn't, on top
    of your lies that different inputs are the same.

    Your problem is it seems you don't think language actually HAS correct
    meanings but can just be reinterpreted to mean what you want.

    Things like what is a program, halting, or correct simulation. What a
    correct representation of a program is.

    You even try to redefine the Machine Description Language of your
    decider, to try to define a wrong behavior as correct. After all, that
    input string *IS* a statement in the Machine Description Language that
    means the exact program being described, and the exact behavior of it
    being run, but you want it to mean something else, just like you try to
    redefine a number of words in the theory, which just makes you a liar.
