• Re: H(D,D) cannot even be asked about the behavior of D(D)

    From Richard Damon@21:1/5 to olcott on Thu Jun 13 23:44:11 2024

    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    It is incumbent upon you to show the exact steps of how H computes
    the mapping from the x86 machine language finite string input to
    H(D,D), using the finite string transformation rules specified by
    the semantics of the x86 programming language, such that it reaches
    the behavior of the directly executed D(D).


    Why? I don't claim it can.


    That means that H cannot even be asked the question:
    "Does D halt on its input?"

    Why not? After all, H does what it does; the PERSON we ask is the
    programmer.


    *When H and D have a pathological relationship to each other*
    There is no way to encode any H such that it can be asked:
    Does D(D) halt?

    Which just proves that Halting is non-computable.

    You keep on doing that: making claims that show the truth of the
    statement you are trying to disprove.
    The fact that you don't understand that just shows how little you
    understand what you are saying.


    You must see this from the POV of H or you won't get it.
    H cannot read your theory of computation textbooks; it
    only knows what it directly sees, its actual input.

    But H doesn't HAVE a "point of view".

    H is just a "mechanical" computation. It is a rote algorithm that does
    what it has been told to do.

    It really seems like you just don't understand that deterministic
    automatons and willful beings are different things.

    Which just shows how ignorant you are about what you talk about.


    If there is no possible way for H to transform its input
    into the behavior of D(D) then H cannot be asked about
    the behavior of D(D).


    No, it says it can't do it, not that it can't be asked to do it.

    We are allowed to ask questions that have answers that are too hard to compute.

    The input to H(D,D) DOES have a behavior when run, so there IS a correct answer, so the question is valid.

    It just turns out to be too hard to compute.

    You just don't understand the rules of the game, because it seems you
    never even tried to learn them.

    This has caused you to show your utter ignorance of the topic, and your reckless disregard for the truth.

  • From joes@21:1/5 to All on Fri Jun 14 08:38:40 2024
    On Thu, 13 Jun 2024 22:14:52 -0500, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    It is incumbent upon you to show the exact steps of how H computes
    the mapping from the x86 machine language finite string input to
    H(D,D), using the finite string transformation rules specified by
    the semantics of the x86 programming language, such that it reaches
    the behavior of the directly executed D(D).

    Why? I don't claim it can.

    That means that H cannot even be asked the question:
    "Does D halt on its input?"

    Why not? After all, H does what it does; the PERSON we ask is the
    programmer.

    *When H and D have a pathological relationship to each other*
    There is no way to encode any H such that it can be asked: Does D(D)
    halt?
    That is the question that H answers for every other input.

    You must see this from the POV of H or you won't get it.
    H cannot read your theory of computation textbooks; it only knows what
    it directly sees, its actual input.
    And it doesn't need to know more; the behaviour of D is completely
    specified by its transition table and input (which happens to be itself).

    If there is no possible way for H to transform its input into the
    behavior of D(D) then H cannot be asked about the behavior of D(D).
    It only means it doesn't give the right answer when given D(D) as input.
    It was specified to answer for all machines.

    Also I'm not He.

    --
    joes

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 07:39:47 2024

    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    It is incumbent upon you to show the exact steps of how H computes
    the mapping from the x86 machine language finite string input to
    H(D,D), using the finite string transformation rules specified by
    the semantics of the x86 programming language, such that it reaches
    the behavior of the directly executed D(D).


    Why? I don't claim it can.


    That means that H cannot even be asked the question:
    "Does D halt on its input?"

    Why not? After all, H does what it does, the PERSON we ask is the
    programmer.


    *When H and D have a pathological relationship to each other*
    There is no way to encode any H such that it can be asked:
    Does D(D) halt?

    Which just proves that Halting is non-computable.


    No it is more than that.
    H cannot even be asked the question:
    Does D(D) halt?

    No, you just don't understand the proper meaning of "ask" when applied
    to a deterministic entity.


    You already admitted the basis for this.

    No, that is something different.


    You keep on doing that: making claims that show the truth of the
    statement you are trying to disprove.
    The fact that you don't understand that just shows how little you
    understand what you are saying.


    You must see this from the POV of H or you won't get it.
    H cannot read your theory of computation textbooks; it
    only knows what it directly sees, its actual input.

    But H doesn't HAVE a "point of view".


    When H is a simulating halt decider you can't even ask it
    about the behavior of D(D). You already said that it cannot
    map its input to the behavior of D(D). That means that you
    cannot ask H(D,D) about the behavior of D(D).

    Of course you can, because, BY DEFINITION, that is the ONLY thing it
    does with its inputs.


    What seems to me to be the world's leading termination
    analyzer symbolically executes its transformed input:
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf

    It takes C programs and translates them into something like
    generic assembly language and then symbolically executes them
    to form a directed graph of their behavior. x86utm and HH do
    something similar in a much more limited fashion.
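    For illustration, a toy example (my own sketch, not taken from the
    AProVE paper): a symbolic executor can prove the loop below terminates
    by noting that n strictly decreases on every iteration and the loop
    exits once n <= 0, so n itself serves as a ranking function.

        int countdown(int n)
        {
            while (n > 0)
                n = n - 1;   /* n strictly decreases toward the exit test */
            return n;        /* reached for every initial value of n */
        }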

    And note, it only gives definitive answers for SOME input.


    H is just a "mechanical" computation. It is a rote algorithm that does
    what it has been told to do.


    H cannot be asked the question: Does D(D) halt?
    There is no way to encode that. You already admitted
    this when you said the finite string input to H(D,D)
    cannot be mapped to the behavior of D(D).

    It is asked that question every time it is given an input, at least if H is a halt decider.

    That is what halt deciders (if they exist) do.


    It really seems like you just don't understand that deterministic
    automatons and willful beings are different things.

    Which just shows how ignorant you are about what you talk about.


    The issue is that you don't understand truthmaker theory.
    You cannot simply wave your hands to get H to know
    what question is being asked.

    No, YOU don't understand Truth.



    If there is no possible way for H to transform its input
    into the behavior of D(D) then H cannot be asked about
    the behavior of D(D).


    No, it says it can't do it, not that it can't be asked to do it.


    It can't even be asked. You said that yourself.
    The input to H(D,D) cannot be transformed into
    the behavior of D(D).


    No, we can't make an arbitrary problem solver, since we can show there
    are unsolvable problems.

    Nothing says we can't encode the Halting Question into an input. What
    can't be done is to create a program that gives the right answer for all
    such inputs.

    You, as usual, don't understand your requirements and capabilities.

  • From joes@21:1/5 on Fri Jun 14 15:54:35 2024
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    H cannot even be asked the question: Does D(D) halt?
    No, you just don't understand the proper meaning of "ask" when applied
    to a deterministic entity.
    When H and D have a pathological relationship to each other then H(D,D)
    is not being asked about the behavior of D(D). H1(D,D) has no such
    pathological relationship, thus D correctly simulated by H1 is the
    behavior of D(D).
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    D by construction is pathological to the supposed decider it is
    constructed on. H1 can not decide D1. For every "decider" we can construct
    an undecidable pathological program. No decider decides every input.
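    For concreteness, a minimal sketch of that construction in C (the
    signature and names are illustrative assumptions, not taken from any
    particular implementation):

        typedef int (*ptr)();   /* function-pointer type for the template */
        int H(ptr M, ptr d);    /* assumed decider: returns 1 = halts, 0 = not */

        int D(ptr p)
        {
            if (H(p, p))        /* ask H: does p(p) halt? */
                while (1);      /* H said "halts", so loop forever */
            return 0;           /* H said "loops", so halt at once */
        }

    Whatever value H(D,D) returns, D(D) does the opposite, so that value
    is necessarily wrong.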

    Can a correct answer to the stated question be a correct answer to the unstated question?
    H(D,D) is not even being asked about the behavior of D(D)
    It can't be asked any other way.

    When H is a simulating halt decider you can't even ask it about the
    behavior of D(D). You already said that it cannot map its input to the
    behavior of D(D). That means that you cannot ask H(D,D) about the
    behavior of D(D).
    Of course you can, because, BY DEFINITION, that is the ONLY thing it
    does with its inputs.
    That definition might be in textbooks,
    yet H does not and cannot read textbooks.
    That is very confusing. H still adheres to textbooks.

    The only definition that H sees is the combination of its algorithm with
    the finite string of machine language of its input.
    H does not see its own algorithm, it only follows its internal
    programming. A machine and input completely determine the behaviour,
    whether that is D(D) or H(D, D).

    It is impossible to encode any algorithm such that H and D have a pathological relationship and have H even see the behavior of D(D).
    H literally gets it as input.

    You already admitted that there is no mapping from the finite string of
    machine code of the input to H(D,D) to the behavior of D(D).
    Which means that H can't simulate D(D). Other machines can do so.

    And note, it only gives definitive answers for SOME input.
    It is my understanding that it does this much better than anyone else
    does. AProVE "symbolically executes the LLVM program".
    Better doesn't cut it. H should work for ALL programs, especially for D.

    H is just a "mechanical" computation. It is a rote algorithm that
    does what it has been told to do.
    H cannot be asked the question Does D(D) halt?
    There is no way to encode that. You already admitted this when you
    said the finite string input to H(D,D)
    cannot be mapped to the behavior of D(D).
    H answers that question for every other input.
    The question "What is your answer/Is your answer right?" is pointless
    and not even computed by H.

    It is asked that question every time it is given an input, at least if H is a halt decider.
    If you cannot even ask H the question that you want answered then this
    is not an actual case of undecidability. H does correctly answer the
    actual question that it was actually asked.
    D(D) is a valid input. H should be universal.

    That is what halt deciders (if they exist) do.
    When H and D are defined to have a pathological relationship then H
    cannot even be asked about the behavior of D(D).
    H cannot give a correct ANSWER about D(D).

    It really seems like you just don't understand that deterministic
    automata and willful beings are different things.
    You cannot simply wave your hands to get H to know what
    question is being asked.
    H doesn't need to know. It is programmed to answer a fixed question,
    and the input completely determines the answer.

    It can't even be asked. You said that yourself.
    The input to H(D,D) cannot be transformed into the behavior of D(D).
    It can, just not by H.

    No, we can't make an arbitrary problem solver, since we can show there
    are unsolvable problems.
    That is a whole other different issue.
    The key subset of this is that the notion of undecidability is a ruse.
    A ruse for what?

    Nothing says we can't encode the Halting Question into an input.
    If there is no mapping from the input to H(D,D) to the behavior of D(D)
    then H cannot possibly be asked about behavior that it cannot possibly
    see.
    It can be asked and be wrong.

    What can't be done is to create a program that gives the right answer for
    all such inputs.
    Expecting a correct answer to the wrong question is only foolishness.
    The question is just whether D(D) halts.

    Where do you disagree with the halting problem proof?

    --
    joes

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 19:27:42 2024
    On 6/14/24 1:39 PM, olcott wrote:
    On 6/14/2024 10:54 AM, joes wrote:
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    H cannot even be asked the question: Does D(D) halt?
    No, you just don't understand the proper meaning of "ask" when applied
    to a deterministic entity.
    When H and D have a pathological relationship to each other then H(D,D)
    is not being asked about the behavior of D(D). H1(D,D) has no such
    pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.

    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.

    But Humans are NOT deciders in the Computation Theory sense, because we
    don't run deterministic algorithms.

    This seems to be part of your fundamental problem: you just don't know
    what you are talking about, and don't understand the difference between
    willful beings and deterministic algorithms.


    I use the term "termination analyzer" as a close fit. The term
    partial halt decider is more accurate yet confuses most people.
    A partial halt decider is a halt decider with a limited domain.

    D by construction is pathological to the supposed decider it is
    constructed on. H1 can not decide D1. For every "decider" we can
    construct an undecidable pathological program. No decider decides
    every input.


    Parroting what you memorized by rote is not very deep understanding.

    Understanding that the halting problem counter-example input that
    does the opposite of whatever value the halt decider returns is
    merely the Liar Paradox in disguise is a much deeper understanding.

    Can a correct answer to the stated question be a correct answer to the
    unstated question?
    H(D,D) is not even being asked about the behavior of D(D)

    It can't be asked any other way.

    It can't be asked in any way whatsoever because it is
    already being asked a different question.

    When H is a simulating halt decider you can't even ask it about the
    behavior of D(D). You already said that it cannot map its input to the
    behavior of D(D). That means that you cannot ask H(D,D) about the
    behavior of D(D).
    Of course you can, because, BY DEFINITION, that is the ONLY thing it
    does with its inputs.
    That definition might be in textbooks,
    yet H does not and cannot read textbooks.

    That is very confusing. H still adheres to textbooks.

    No, the textbooks have it incorrectly.

    The only definition that H sees is the combination of its algorithm with
    the finite string of machine language of its input.

    H does not see its own algorithm, it only follows its internal
    programming. A machine and input completely determine the behaviour,
    whether that is D(D) or H(D, D).


    No H (with a pathological relationship to D) can possibly see the
    behavior of D(D).

    It is impossible to encode any algorithm such that H and D have a
    pathological relationship and have H even see the behavior of D(D).

    H literally gets it as input.


    The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
    The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
    It does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.

    You already admitted that there is no mapping from the finite string of
    machine code of the input to H(D,D) to the behavior of D(D).

    Which means that H can't simulate D(D). Other machines can do so.


    H cannot simulate D(D) for the same reason that
    int sum(int x, int y) { return x + y; }
    sum(3,4) cannot return the sum of 5 + 6;


    And note, it only gives definitive answers for SOME input.
    It is my understanding that it does this much better than anyone else
    does. AProVE "symbolically executes the LLVM program".

    Better doesn't cut it. H should work for ALL programs, especially for D.


    You don't even have a slight clue about termination analyzers.

    H is just a "mechanical" computation. It is a rote algorithm that
    does what it has been told to do.
    H cannot be asked the question Does D(D) halt?
    There is no way to encode that. You already admitted this when you
    said the finite string input to H(D,D)
    cannot be mapped to the behavior of D(D).

    H answers that question for every other input.
    The question "What is your answer/Is your answer right?" is pointless
    and not even computed by H.


    It is ridiculously stupid to think that the pathological
    relationship between H and D cannot possibly change the
    behavior of D, especially when it has been conclusively
    proven that it DOES CHANGE THE BEHAVIOR OF D.

    It is asked that question every time it is given an input, at least if H is a halt decider.
    If you cannot even ask H the question that you want answered then this
    is not an actual case of undecidability. H does correctly answer the
    actual question that it was actually asked.

    D(D) is a valid input. H should be universal.


    Likewise the Liar Paradox *should* be true or false,
    except for the fact that it isn't.


    That is what halt deciders (if they exist) do.
    When H and D are defined to have a pathological relationship then H
    cannot even be asked about the behavior of D(D).

    H cannot give a correct ANSWER about D(D).


    H cannot be asked the right question.

    It really seems like you just don't understand that deterministic
    automata and willful beings are different things.
    You cannot simply wave your hands to get H to know what
    question is being asked.
    H doesn't need to know. It is programmed to answer a fixed question,
    and the input completely determines the answer.


    The fixed question that H is asked is:
    Can your input terminate normally?
    The answer to that question is: NO.

    It can't even be asked. You said that yourself.
    The input to H(D,D) cannot be transformed into the behavior of D(D).
    It can, just not by H.


    How crazy is it to expect a correct answer to a
    different question than the one you asked?

    No, we can't make an arbitrary problem solver, since we can show there
    are unsolvable problems.
    That is a whole other different issue.
    The key subset of this is that the notion of undecidability is a ruse.
    A ruse for what?

    Nothing says we can't encode the Halting Question into an input.
    If there is no mapping from the input to H(D,D) to the behavior of D(D)
    then H cannot possibly be asked about behavior that it cannot possibly
    see.
    It can be asked and be wrong.

    What can't be done is to create a program that gives the right answer for
    all such inputs.
    Expecting a correct answer to the wrong question is only foolishness.
    The question is just whether D(D) halts.

    Where do you disagree with the halting problem proof?


    There are several different issues. The key one, which two PhD
    computer science professors agree with me on, is that there is
    something wrong with it along the lines of it being isomorphic
    to the Liar Paradox.


  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 21:38:27 2024

    On 6/14/24 8:34 PM, olcott wrote:
    On 6/14/2024 6:27 PM, Richard Damon wrote:
    On 6/14/24 9:15 AM, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:

    No it is more than that.
    H cannot even be asked the question:
    Does D(D) halt?

    No, you just don't understand the proper meaning of "ask" when
    applied to a deterministic entity.


    When H and D have a pathological relationship to each
    other then H(D,D) is not being asked about the behavior
    of D(D). H1(D,D) has no such pathological relationship
    thus D correctly simulated by H1 is the behavior of D(D).

    Of course it is. The nature of the input doesn't affect the form of the
    question that H is supposed to answer.


    The textbook asks the question.
    The data cannot possibly do that.


    But the data doesn't need to do it, as the program specifications define it.

    Now, if H was supposed to be a "Universal Problem Decider", then we
    would need to somehow "encode" the goal of H determining that a correct
    (and complete) simulation of its input would need to reach a final
    state, but I see no issue with defining a way to encode that.
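    As a hedged sketch of what such an encoding could look like (every name
    here is invented for illustration; an ordinary halt decider needs none
    of this, since its question is fixed by its specification):

        /* Hypothetical encoding: the question travels with the input. */
        typedef enum { Q_SIMULATION_REACHES_FINAL_STATE } Question;

        typedef struct {
            Question    question;  /* which property is being asked about */
            const char *machine;   /* finite-string machine description   */
            const char *input;     /* finite-string input to that machine */
        } Query;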

    You already said that H cannot possibly map its
    input to the behavior of D(D).

    Right, it is impossible for H to itself compute that behavior and give
    an answer.

    That doesn't mean we can't encode the question.


    We need to stay focused on this one single point until you
    fully get it. Unlike the other two respondents you do have
    the capacity to understand this.

    You keep expecting H to read your computer science
    textbooks.


    No, I expect its PROGRAMMER to have done that, which clearly you haven't
    done.

    Programs don't read their requirements; they perform the actions they
    were programmed to do, and if the program is correct, it will get the
    right answer. If it doesn't get the right answer, then the programmer
    erred in saying it met the requirements.

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 22:17:00 2024

    On 6/14/24 10:06 PM, olcott wrote:
    On 6/14/2024 8:38 PM, Richard Damon wrote:
    On 6/14/24 8:34 PM, olcott wrote:
    On 6/14/2024 6:27 PM, Richard Damon wrote:
    On 6/14/24 9:15 AM, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:

    No it is more than that.
    H cannot even be asked the question:
    Does D(D) halt?

    No, you just don't understand the proper meaning of "ask" when
    applied to a deterministic entity.


    When H and D have a pathological relationship to each
    other then H(D,D) is not being asked about the behavior
    of D(D). H1(D,D) has no such pathological relationship
    thus D correctly simulated by H1 is the behavior of D(D).

    Of course it is. The nature of the input doesn't affect the form of
    the question that H is supposed to answer.


    The textbook asks the question.
    The data cannot possibly do that.


    But the data doesn't need to do it, as the program specifications
    define it.

    Now, if H was supposed to be a "Universal Problem Decider", then we
    would need to somehow "encode" the goal of H determining that a
    correct (and complete) simulation of its input would need to reach a
    final state, but I see no issue with defining a way to encode that.

    You already said that H cannot possibly map its
    input to the behavior of D(D).

    Right, it is impossible for H to itself compute that behavior and give
    an answer.

    That doesn't mean we can't encode the question.


    We need to stay focused on this one single point until you
    fully get it. Unlike the other two respondents you do have
    the capacity to understand this.

    You keep expecting H to read your computer science
    textbooks.


    No, I expect its PROGRAMMER to have done that, which clearly you
    haven't done.

    Programs don't read their requirements; they perform the actions they
    were programmed to do, and if the program is correct, it will get the
    right answer. If it doesn't get the right answer, then the programmer
    erred in saying it met the requirements.


    I am only going to talk to you in the one thread about
    this; the material is too difficult to understand outside
    of a single chain of thought.


    What, you can't keep the different topics straight?

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 22:16:09 2024

    On 6/14/24 9:59 PM, olcott wrote:
    On 6/14/2024 8:38 PM, Richard Damon wrote:
    On 6/14/24 8:34 PM, olcott wrote:
    On 6/14/2024 6:27 PM, Richard Damon wrote:
    On 6/14/24 9:15 AM, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:

    No it is more than that.
    H cannot even be asked the question:
    Does D(D) halt?

    No, you just don't understand the proper meaning of "ask" when
    applied to a deterministic entity.


    When H and D have a pathological relationship to each
    other then H(D,D) is not being asked about the behavior
    of D(D). H1(D,D) has no such pathological relationship
    thus D correctly simulated by H1 is the behavior of D(D).

    Of course it is. The nature of the input doesn't affect the form of
    the question that H is supposed to answer.


    The textbook asks the question.
    The data cannot possibly do that.


    But the data doesn't need to do it, as the program specifications
    define it.


    Did you know that the code itself cannot read these specifications?
    The specifications say {draw a square circle}, the code says huh?

    And what makes you think it needs to?

    You are just showing a TOTAL IGNORANCE of the field of programming.

    Did the x86utm program write itself after you showed it the specifications?


    Now, if H was supposed to be a "Universal Problem Decider", then we

    I don't have time for an infinite conversation.
    H is ONLY defined to be a D decider.

    It needs to be at least a D Halting Decider which has the same
    requirement, just restricted to the class of programs built on the
    template D.

    And that means H doesn't need to "read" the problem statement either.

    So, you are just showing your stupidity.


    would need to somehow "encode" the goal of H determining that a
    correct (and complete) simulation of its input would need to reach a
    final state, but I see no issue with defining a way to encode that.

    You already said that H cannot possibly map its
    input to the behavior of D(D).

    Right, it is impossible for H to itself compute that behavior and give
    an answer.


    NO !!! It is impossible for anyone or anything to provide
    a correct answer to a question THAT THEY ARE NOT BEING ASKED.


    Of course they can. For instance, you can solve a maze without knowing
    that this is the task, if you are given an instruction sheet telling you
    what moves to make.

    Programs don't "know" what they are doing, they are just "dumb"
    automatons that act exactly as they are programmed to act.
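    A small illustration of that point (a sketch of my own, not any
    particular program): the routine below "solves" a maze by blindly
    executing an instruction sheet; it contains no representation of the
    question "how do you escape the maze?".

        #include <stdio.h>

        void follow(const char *moves)   /* the "instruction sheet" */
        {
            for (; *moves; moves++)
                switch (*moves) {
                case 'U': puts("move up");    break;
                case 'D': puts("move down");  break;
                case 'L': puts("move left");  break;
                case 'R': puts("move right"); break;
                }
        }

        int main(void)
        {
            follow("RRDDLU");   /* escapes the maze without knowing it */
            return 0;
        }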

    That doesn't mean we can't encode the question.


    Give it your best shot, it must be encoded in C.

    Why?

    C is not a good language to express requirements.



    We need to stay focused on this one single point until you
    fully get it. Unlike the other two respondents you do have
    the capacity to understand this.

    You keep expecting H to read your computer science
    textbooks.


    No, I expect its PROGRAMMER to have done that, which clearly you
    haven't done.

    The spec says {CAD system that draws square circles}
    The programmer says WTF!

    But there isn't a contradiction like that in the specification of Halting.


    Programs don't read their requirements; they perform the actions they
    were programmed to do,

    There is no way to encode H to even see the behavior of D(D)
    when H and D have the pathological relationship.

    That is the dumbed down version of H cannot map its finite
    string x86 machine code to the behavior of D(D).

    But the map exists, so we are allowed to ask for it to be computed.

    Of course, one possible answer is that it can not be done, but for that
    answer to be correct, we need to show that it actually can not be done,
    which the Turing Proof does.


    and if the program is correct, it will get the right answer. If it
    doesn't get the right answer, then the programmer erred in saying it
    met the requirements.


    Sure make a CAD system that draws square circles or you are fired.

    You're not my boss. Note, for the Halting Problem, an allowed, and as it
    turns out correct, answer is: no such program can exist.


    You are failing to understand the notion of logically
    impossible.


    Nope.

    You are showing you don't understand the purpose of Computation Theory.
    That some problems have no answer is EXACTLY what Computation Theory is
    looking at.

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 22:48:05 2024

    On 6/14/24 10:25 PM, olcott wrote:
    On 6/14/2024 9:16 PM, Richard Damon wrote:
    On 6/14/24 9:59 PM, olcott wrote:
    On 6/14/2024 8:38 PM, Richard Damon wrote:
    On 6/14/24 8:34 PM, olcott wrote:
    On 6/14/2024 6:27 PM, Richard Damon wrote:
    On 6/14/24 9:15 AM, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:

    No it is more than that.
    H cannot even be asked the question:
    Does D(D) halt?

    No, you just don't understand the proper meaning of "ask" when
    applied to a deterministic entity.


    When H and D have a pathological relationship to each
    other then H(D,D) is not being asked about the behavior
    of D(D). H1(D,D) has no such pathological relationship
    thus D correctly simulated by H1 is the behavior of D(D).

    Of course it is. The nature of the input doesn't affect the form of
    the question that H is supposed to answer.


    The textbook asks the question.
    The data cannot possibly do that.


    But the data doesn't need to do it, as the program specifications
    define it.


    Did you know that the code itself cannot read these specifications?
    The specifications say {draw a square circle}, the code says huh?

    And what makes you think it needs to?

    You are just showing a TOTAL IGNORANCE of the field of programming.

    did the x86utm program write itself after you showing it the
    specifications?


    Now, if H was supposed to be a "Universal Problem Decider", then we

    I don't have time for an infinite conversation.
    H is ONLY defined to be a D decider.

    It needs to be at least a D Halting Decider which has the same
    requirement, just restricted to the class of programs built on the
    template D.

    And that means H doesn't need to "read" the problem statement either.

    So, you are just showing your stupidity.


    would need to somehow "encode" the goal of H determining that a
    correct (and complete) simulation of its input would need to reach a
    final state, but I see no issue with defining a way to encode that.

    You already said that H cannot possibly map its
    input to the behavior of D(D).

    Right, it is impossible for H to itself compute that behavior and
    give an answer.


    NO !!! It is impossible for anyone or anything to provide
    a correct answer to a question THAT THEY ARE NOT BEING ASKED.


    Of course they can. For instance, you can solve a maze without knowing
    that this is the task, if you are given an instruction sheet telling
    you what moves to make.

    Programs don't "know" what they are doing, they are just "dumb"
    automatons that act exactly as they are programmed to act.

    That doesn't mean we can't encode the question.


    Give it your best shot, it must be encoded in C.

    Why?

    C is not a good language to express requirements.



    We need to stay focused on this one single point until you
    fully get it. Unlike the other two respondents you do have
    the capacity to understand this.

    You keep expecting H to read your computer science
    textbooks.


    No, I expect its PROGRAMMER to have done that, which clearly you
    haven't done.

    The spec says {CAD system that draws square circles}
    The programmer says WTF!

    But there isn't a contradiction like that in the specification of Halting.


    Programs don't read their requirements; they perform the actions they
    were programmed to do,

    There is no way to encode H to even see the behavior of D(D)
    when H and D have the pathological relationship.

    That is the dumbed down version of H cannot map its finite
    string x86 machine code to the behavior of D(D).

    But the map exists, so we are allowed to ask for it to be computed.


    There is no map from the input to H(D,D) to the behavior of D(D)

    Utterly a LIE.

    Since your H(D,D) returns 0, we KNOW that the mapping is:

    (D,D) -> Halting.

    And the way to determine the mapping for ANY input is:

    if UTM(<M>,d) will halt, then (<M>, d) -> Halting,
    otherwise (<M>, d) -> Non-halting.
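    As a hedged sketch (UTM, Machine, and Input are illustrative stand-ins,
    not any real API), that mapping can be written down even though no
    program computes it for every input:

        typedef struct Machine Machine;   /* a machine description */
        typedef struct Input   Input;     /* an input string       */

        /* assumed universal simulator: returns if and only if M(d) halts */
        void UTM(const Machine *M, const Input *d);

        enum Result { Halting, NonHalting };

        /* Defines the halting mapping without computing it: on a
           non-halting input this procedure never returns at all. */
        enum Result halting_map(const Machine *M, const Input *d)
        {
            UTM(M, d);        /* loops forever exactly when M(d) does */
            return Halting;   /* reached only when M(d) halts */
        }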




    Of course, one possible answer is that it can not be done, but for
    that answer to be correct, we need to show that it actually can not be
    done, which the Turing Proof does.

    It is IMPOSSIBLE TO EVEN ASK THE QUESTION.

    No it isn't.


    You agreed that there is no map.

    No, there IS a mapping from the input to the correct answer.

    You fail to understand that this means
    THE QUESTION CANNOT EVEN BE ASKED.
    THIS IS YOUR SHORT-COMING AND NOT MY MISTAKE.


    And you are proving that you are just totally ignorant of what you are
    talking about, and thus LYING when you make all your false claims.

  • From Richard Damon@21:1/5 to olcott on Fri Jun 14 23:43:55 2024

    On 6/14/24 10:52 PM, olcott wrote:
    On 6/14/2024 9:48 PM, Richard Damon wrote:
    On 6/14/24 10:25 PM, olcott wrote:
    On 6/14/2024 9:16 PM, Richard Damon wrote:

    You agreed that there is no map.

    No, there IS a mapping from the input to the correct answer.


    You said that there is no map from the input to H(D,D)
    to the behavior of D(D)

    WHERE?

    I think this is another of your famous lies caused by your
    misunderstanding what was said, and even after the person corrects you,
    you are still stuck on your lie.

    I have said that H can not compute that mapping, but that is something different.


    You fail to understand that this means
    THE QUESTION CANNOT EVEN BE ASKED.
    THIS IS YOUR SHORT-COMING AND NOT MY MISTAKE.


    And you are proving that you are just totally ignorant of what you are
    talking about, and thus LYING when you make all your false claims.


    Maybe this is simply over your head too?




    Nope. You are just proving you are a liar.

  • From joes@21:1/5 on Sat Jun 15 11:34:39 2024
    On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
    On 6/14/2024 10:54 AM, joes wrote:
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.
    I have not seen you use that term before. You have not called it partial.
    That was confusing.

    D by construction is pathological to the supposed decider it is
    constructed on. H1 can not decide D1. For every "decider" we can
    construct an undecidable pathological program. No decider decides every
    input.
    Parroting what you memorized by rote is not very deep understanding.
    This was my own phrasing. Can you explain the halting problem proof?
    Understanding that the halting problem counter-example input that does
    the opposite of whatever value the halt decider returns is merely the
    Liar Paradox in disguise is a much deeper understanding.
    I know that.

    H(D,D) is not even being asked about the behavior of D(D)
    It can't be asked any other way.
    It can't be asked in any way whatsoever because it is already being
    asked a different question.
    Is that question "Do you answer yes?"?

    When H is a simulating halt decider you can't even ask it about the
    behavior of D(D). You already said that it cannot map its input to
    the behavior of D(D). That means that you cannot ask H(D,D) about
    the behavior of D(D).
    Of course you can, because, BY DEFINITION, that is the ONLY thing it
    does with its inputs.
    That definition might be in textbooks,
    yet H does not and cannot read textbooks.
    That is very confusing. H still adheres to textbooks.
    No, the textbooks have it incorrectly.


    The only definition that H sees is the combination of its algorithm
    with the finite string of machine language of its input.
    H does not see its own algorithm, it only follows its internal
    programming. A machine and input completely determine the behaviour,
    whether that is D(D) or H(D, D).
    No H (with a pathological relationship to D) can possibly see the
    behavior of D(D).
    That is not a problem with D, but with H not being total.

    It is impossible to encode any algorithm such that H and D have a
    pathological relationship and have H even see the behavior of D(D).
    H literally gets it as input.
    The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
    The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP. It
    does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
    There is no difference. If an H exists, it gives one answer. D then does
    the opposite. H cannot change its answer. Other analysers can see that
    H gives the wrong answer.

    You already admitted that there is no mapping from the finite string
    of machine code of the input to H(D,D) to the behavior of D(D).
    Which means that H can't simulate D(D). Other machines can do so.
    H cannot simulate D(D) for the same reason that
    int sum(int x, int y) { return x + y; }
    sum(3,4) cannot return the sum of 5 + 6;

    And note, it only gives definitive answers for SOME input.
    It is my understanding that it does this much better than anyone
    else does. AProVE "symbolically executes the LLVM program".
    Better doesn't cut it. H should work for ALL programs, especially for
    D.
    You don't even have a slight clue about termination analyzers.
    Why do you say that? A (partial) termination analyser doesn't disprove
    the halting problem.

    H cannot be asked the question Does D(D) halt?
    There is no way to encode that. You already admitted this when you
    said the finite string input to H(D,D)
    cannot be mapped to the behavior of D(D).
    H answers that question for every other input.
    The question "What is your answer/Is your answer right?" is pointless
    and not even computed by H.
    It is ridiculously stupid to think that the pathological relationship
    between H and D cannot possibly change the behavior of D, especially when
    it has been conclusively proven that it DOES CHANGE THE BEHAVIOR OF D.
    D as a machine is completely specified and a valid Turing machine:
    It asks a supposed decider if it halts, and then does the opposite,
    making the decider wrong.
    Other deciders than the one it calls can simulate or decide it.
    D has exactly one fixed behaviour, like all TMs.
    The behaviour of H should change because of the recursion, but it has to
    make up its mind. D goes "I'm gonna do the opposite of what you said".

    If you cannot even ask H the question that you want answered then this
    is not an actual case of undecidability. H does correctly answer the
    actual question that it was actually asked.
    That would be the wrong question.
    D(D) is a valid input. H should be universal.
    Likewise the Liar Paradox *should* be true or false,
    except for the fact that it isn't.

    When H and D are defined to have a pathological relationship then H
    cannot even be asked about the behavior of D(D).
    H cannot give a correct ANSWER about D(D).
    H cannot be asked the right question.
    Then H would be faulty.

    You cannot simply wave your hands to get H to know what
    question is being asked.
    H doesn't need to know. It is programmed to answer a fixed question,
    and the input completely determines the answer.
    The fixed question that H is asked is:
    Can your input terminate normally?
    Does the input terminate, rather.
    The answer to that question is: NO.
    If that were so, this would be given to D, since it asks H about itself.
    In this case, it would actually terminate. If H said Yes, it would go
    into an infinite loop.

    It can't even be asked. You said that yourself.
    The input to H(D,D) cannot be transformed into the behavior of D(D).
    It can, just not by H.
    How crazy is it to expect a correct answer to a different question than
    the one you asked?

    No, we can't make an arbitrary problem solver, since we can show
    there are unsolvable problems.
    That is a whole other different issue.
    The key subset of this is that the notion of undecidability is a ruse.
    A ruse for what?
    There are undecidable problems. Like halting.

    Nothing says we can't encode the Halting Question into an input.
    If there is no mapping from the input to H(D,D) to the behavior of
    D(D) then H cannot possibly be asked about behavior that it cannot
    possibly see.
    It can be asked and be wrong.

    What can't be done is to create a program that gives the right answer
    for all such inputs.
    Expecting a correct answer to the wrong question is only foolishness.
    The question is just whether D(D) halts.
    Where do you disagree with the halting problem proof?
    There are several different issues; the key one of these issues [...]
    is that there is something wrong with it along the lines of it being isomorphic to the Liar Paradox.
    "Something along the lines"? Can you point to the step where you disagree?

    Thanks for your extended reply.

    --
    joes

  • From Mikko@21:1/5 to joes on Sat Jun 15 15:33:08 2024
    On 2024-06-15 11:34:39 +0000, joes said:

    On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
    On 6/14/2024 10:54 AM, joes wrote:
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.

    The main difference is that a halt decider or partial halt decider takes
    descriptions of both a Turing machine (or other program) and an input and
    determines whether that machine halts with that input, but a termination
    analyzer takes only the description of a Turing machine (or other program)
    and attempts to determine whether that machine halts with every input.
    The term "analyzer" instead of "decider" indicates that it may fail to
    determine an answer on some inputs and that it may produce additional
    information that may be useful. The intent is to create termination
    analyzers that are useful for practical purposes.
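    In signature terms, a hedged sketch of that difference (the names are
    illustrative only, not any real API):

        /* A (partial) halt decider is asked about one machine/input pair:
           returns 1 = halts, 0 = does not halt. */
        int halt_decider(const char *machine_description, const char *input);

        /* A termination analyzer is asked about one machine over ALL
           inputs, and is allowed to give up. */
        enum Verdict { TERMINATES, DOES_NOT_TERMINATE, UNKNOWN };
        enum Verdict termination_analyzer(const char *machine_description);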

    --
    Mikko

  • From Richard Damon@21:1/5 to olcott on Sat Jun 15 09:51:55 2024
    On 6/15/24 9:24 AM, olcott wrote:
    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
    On 6/14/2024 10:54 AM, joes wrote:
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG) containing all possible program runs.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf

    The main difference is that a halt decider or partial halt decider takes
    descriptions of both a Turing machine (or other program) and an input and
    determines whether that machine halts with that input

    H(D,D) is only required to get this one input correctly, thus H is
    a halt decider with a domain restricted to D.


    And since it gets the one answer it is responsible for wrong, it fails.

    Since H(D,D) returns 0, it can be proven that D(D) will halt, which *IS*
    the behavior a "Halt Decider" (even a limited halt decider) is
    responsible for answering about, so H(D,D) is just WRONG.
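    In terms of the sketch given earlier (the names are still illustrative),
    the direct execution that H(D,D) is answering about is simply:

        int main(void)
        {
            D(D);        /* halts: H(D,D) returned 0, and D(D) does the
                            opposite of what H reported */
            return 0;
        }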


    but a termination analyzer takes only the description of a Turing
    machine (or other program) and attempts to determine whether that
    machine halts with every input.
    The term "analyzer" instead of "decider" indicates that it may fail to
    determine an answer on some inputs

    Yes, that is the distinction that I intend.

    But it fails on the one machine you want it to answer about.


    and that it may produce additional information
    that may be useful. The intent is to create termination analyzers that
    are useful for practical purposes.



  • From Richard Damon@21:1/5 to olcott on Sat Jun 15 09:52:01 2024
    On 6/15/24 8:21 AM, olcott wrote:
    On 6/15/2024 6:34 AM, joes wrote:
    On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
    On 6/14/2024 10:54 AM, joes wrote:
    On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.
    I have not seen you use that term before. You have not called it partial.
    That was confusing.

    D by construction is pathological to the supposed decider it is
    constructed on. H1 can not decide D1. For every "decider" we can
    construct an undecidable pathological program. No decider decides every
    input.
    Parroting what you memorized by rote is not very deep understanding.
    This was my own phrasing. Can you explain the halting problem proof?
    Understanding that the halting problem counter-example input that does
    the opposite of whatever value the halt decider returns is merely the
    Liar Paradox in disguise is a much deeper understanding.

    I know that.


    If you really knew that then you would know that the
    Halting Problem is a mere ruse.

    No, the Halting Problem is a real problem.

    Your "rebuttal" is the Ruse, because it LIES about what it is doing


    H(D,D) is not even being asked about the behavior of D(D)
    It can't be asked any other way.
    It can't be asked in any way whatsoever because it is already being
    asked a different question.
    Is that question "Do you answer yes?"?

    When H is a simulating halt decider you can't even ask it about the
    behavior of D(D). You already said that it cannot map its input to
    the behavior of D(D). That means that you cannot ask H(D,D) about
    the behavior of D(D).
    Of course you can, because, BY DEFINITION, that is the ONLY thing it
    does with its inputs.
    That definition might be in textbooks,
    yet H does not and cannot read textbooks.
    That is very confusing. H still adheres to textbooks.
    No, the textbooks have it incorrectly.


    The only definition that H sees is the combination of its algorithm
    with the finite string of machine language of its input.
    H does not see its own algorithm, it only follows its internal
    programming. A machine and input completely determine the behaviour,
    whether that is D(D) or H(D, D).
    No H (with a pathological relationship to D) can possibly see the
    behavior of D(D).
    That is not a problem with D, but with H not being total.

    It is impossible to encode any algorithm such that H and D have a
    pathological relationship and have H even see the behavior of D(D).
    H literally gets it as input.
    The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
    The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP. It
    does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
    There is no difference. If an H exists, it gives one answer. D then does
    the opposite. H cannot change its answer. Other analysers can see that
    H gives the wrong answer.

    You already admitted that there is no mapping from the finite string
    of machine code of the input to H(D,D) to the behavior of D(D).
    Which means that H can't simulate D(D). Other machines can do so.
    H cannot simulate D(D) for the same reason that
    int sum(int x, int y) { return x + y; }
    sum(3,4) cannot return the sum of 5 + 6;

    And note, it only gives definitive answers for SOME input.
    It is my understanding that it does this much better than anyone
    else does. AProVE "symbolically executes the LLVM program".
    Better doesn't cut it. H should work for ALL programs, especially for
    D.
    You don't even have a slight clue about termination analyzers.
    Why do you say that? A (partial) termination analyser doesn't disprove
    the halting problem.

    H cannot be asked the question Does D(D) halt?
    There is no way to encode that. You already admitted this when you
    said the finite string input to H(D,D)
    cannot be mapped to the behavior of D(D).
    H answers that question for every other input.
    The question "What is your answer/Is your answer right?" is pointless
    and not even computed by H.
    It is ridiculously stupid to think that the pathological relationship
    between H and D cannot possibly change the behavior of D, especially when
    it has been conclusively proven that it DOES CHANGE THE BEHAVIOR OF D.
    D as a machine is completely specified and a valid Turing machine:
    It asks a supposed decider if it halts, and then does the opposite,
    making the decider wrong.
    Other deciders than the one it calls can simulate or decide it.
    D has exactly one fixed behaviour, like all TMs.
    The behaviour of H should change because of the recursion, but it has to
    make up its mind. D goes "I'm gonna do the opposite of what you said".

    If you cannot even ask H the question that you want answered then this
    is not an actual case of undecidability. H does correctly answer the
    actual question that it was actually asked.
    That would be the wrong question.
    D(D) is a valid input. H should be universal.
    Likewise the Liar Paradox *should* be true or false,
    except for the fact that it isn't.

    When H and D are defined to have a pathological relationship then H
    cannot even be asked about the behavior of D(D).
    H cannot give a correct ANSWER about D(D).
    H cannot be asked the right question.
    Then H would be faulty.

    You cannot simply wave your hands to get H to know what
    question is being asked.
    H doesn't need to know. It is programmed to answer a fixed question,
    and the input completely determines the answer.
    The fixed question that H is asked is:
    Can your input terminate normally?
    Does the input terminate, rather.
    The answer to that question is: NO.
    If that were so, this would be given to D, since it asks H about itself.
    In this case, it would actually terminate. If H said Yes, it would go
    into an infinite loop.

    It can't even be asked. You said that yourself.
    The input to H(D,D) cannot be transformed into the behavior of D(D).
    It can, just not by H.
    How crazy is it to expect a correct answer to a different question than
    the one you asked?

    No, we can't make an arbitrary problem solver, since we can show
    there are unsolvable problems.
    That is a whole other different issue.
    The key subset of this is that the notion of undecidability is a ruse.
    A ruse for what?
    There are undecidable problems. Like halting.

    Nothing says we can't encode the Halting Question into an input.
    If there is no mapping from the input to H(D,D) to the behavior of
    D(D) then H cannot possibly be asked about behavior that it cannot
    possibly see.

    No, it cannot even be asked, and the technical details
    of this are over everyone's head.

    But it can be asked.

    The fact you don't understand how just shows your own mental ability,
    not anything about the problem itself.


    Computing the map from the input to H(D,D) to the
    behavior of D(D) has nothing to do with Google maps.

    It can be asked and be wrong.

    What can't be done is to create a program that gives the right answer
    for all such inputs.
    Expecting a correct answer to the wrong question is only foolishness.
    The question is just whether D(D) halts.
    Where do you disagree with the halting problem proof?
    There are several different issues; the key one of these issues [...]
    is that there is something wrong with it along the lines of it being
    isomorphic to the Liar Paradox.
    "Something along the lines"? Can you point to the step where you
    disagree?

    Thanks for your extended reply.


    You hardly have any clue about any of this.


    No, *YOU* have proven you don't know anything about this, and have
    admitted it yourself a number of times.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Sun Jun 16 12:15:04 2024
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG) containing all possible program runs.

    AProVE is a particular attempt, not a definition.

    *AProVE: Non-Termination Witnesses for C Programs* https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
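    As a concrete illustration of the quoted pipeline (only the Clang step
    is shown; AProVE's own command line is not reproduced here), consider a
    tiny C program:

        /* count.c -- halts for every n >= 0 */
        int count(int n)
        {
            while (n > 0)
                n = n - 1;
            return n;
        }

    Compiling it with "clang -S -emit-llvm count.c -o count.ll" produces the
    textual LLVM IR that is then symbolically executed; nothing in this
    translation step by itself decides termination.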

    The main difference is that a halt decider or partial halt decider takes
    descriptions of both a Turing machine (or other program) and an input and
    determines whether that machine halts with that input.

    H(D,D) is only required to decide this one input correctly, thus H is
    a halt decider with a domain restricted to D.

    Nevertheless, it takes both the program and its input as inputs.
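    That distinction can be written as a pair of signatures (a sketch with
    hypothetical names; neither AProVE nor anyone's H exposes exactly this
    C API):

        typedef enum { HALTS, DOES_NOT_HALT, UNKNOWN } verdict;

        /* partial halt decider: one program description plus one specific input */
        verdict decide_halting(const char *program, const char *input);

        /* termination analyzer: one program, asking about ALL of its inputs */
        verdict prove_termination(const char *program);

    The UNKNOWN case is what makes both of them partial.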

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Mon Jun 17 10:10:02 2024
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".

    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts
    for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis
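    A standard example of why deciding totality is hard (the loop below is
    the well-known Collatz iteration; whether it halts for every positive
    input is an open problem): a termination analyzer handed this function
    must prove it total, refute it, or give up.

        /* Halts for every n >= 1 iff the Collatz conjecture holds
           (ignoring fixed-width overflow in this sketch). */
        unsigned collatz_steps(unsigned n)
        {
            unsigned steps = 0;
            while (n != 1) {
                n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                ++steps;
            }
            return steps;
        }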

    I pointed out AProVE because it is essentially a simulating
    halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Tue Jun 18 10:44:57 2024
    On 2024-06-17 12:51:15 +0000, olcott said:

    On 6/17/2024 2:10 AM, Mikko wrote:
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".

    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts
    for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis

    I pointed out AProVE because it is essentially a simulating
    halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf


    AProVE is a kind of simulating termination analyzer.

    Not really. It does not simulate.

    H is a kind of simulating halt decider with a restricted domain.
    [Simulating termination analyzers for dummies] makes these ideas
    simpler.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Tue Jun 18 18:36:19 2024
    On 2024-06-18 12:46:13 +0000, olcott said:

    On 6/18/2024 2:44 AM, Mikko wrote:
    On 2024-06-17 12:51:15 +0000, olcott said:

    On 6/17/2024 2:10 AM, Mikko wrote:
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".

    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts >>>>> for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis

    I pointed out AProVE because it is essentially a simulating
    halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf

    AProVE is a kind of simulating termination analyzer.

    Not really. It does not simulate.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*

    I.e., does not simulate.

    and uses abstraction to obtain a finite symbolic execution graph (SEG)

    H is a kind of simulating halt decider with a restricted domain.
    [Simulating termination analyzers for dummies] makes these ideas
    simpler.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Tue Jun 18 19:27:50 2024
    On 2024-06-18 15:44:16 +0000, olcott said:

    On 6/18/2024 10:36 AM, Mikko wrote:
    On 2024-06-18 12:46:13 +0000, olcott said:

    On 6/18/2024 2:44 AM, Mikko wrote:
    On 2024-06-17 12:51:15 +0000, olcott said:

    On 6/17/2024 2:10 AM, Mikko wrote:
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.

    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".

    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts
    for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis

    I pointed out AProVE because it is essentially a simulating
    halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf


    AProVE is a kind of simulating termination analyzer.

    Not really. It does not simulate.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*

    I.e., does not simulate.


    So maybe: *symbolically executes the LLVM program*
    means jumping up and down yelling and screaming?

    Not a bad guess but not quite right either.

    AProVE does form its non-halting decision on the basis of the
    dynamic behavior of its input instead of any static analysis.

    It is a kind of static analysis. The important difference is that
    in a simulation there would be a specific input but AProVE considers
    all possible inputs at the same time.

    *symbolically executes the LLVM program* means dynamic behavior
    and not static analysis.

    It does not reproduce any specific example of the dynamic behaviour.
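    The contrast can be shown on a one-line loop (an informal sketch, not
    AProVE's actual algorithm): a simulator reproduces one concrete run,
    while a symbolic executor keeps the input as a symbol and covers every
    run at once.

        int f(int x) { while (x > 0) x = x - 1; return x; }

        /* Concrete simulation of f(3): x = 3 -> 2 -> 1 -> 0, loop exits.
           That reproduces exactly one run of the program.

           Symbolic execution of f(x), x symbolic: two abstract paths,
             x <= 0 : the loop is never entered, f halts at once;
             x >  0 : x strictly decreases and is bounded below by 0.
           A ranking argument covers ALL runs without reproducing any
           specific one. */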

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Wed Jun 19 11:07:15 2024
    On 2024-06-18 16:36:53 +0000, olcott said:

    On 6/18/2024 11:27 AM, Mikko wrote:
    On 2024-06-18 15:44:16 +0000, olcott said:

    On 6/18/2024 10:36 AM, Mikko wrote:
    On 2024-06-18 12:46:13 +0000, olcott said:

    On 6/18/2024 2:44 AM, Mikko wrote:
    On 2024-06-17 12:51:15 +0000, olcott said:

    On 6/17/2024 2:10 AM, Mikko wrote:
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.
    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".
    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts
    for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis

    I pointed out AProVE because it is essentially a simulating
    halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf


    AProVE is a kind of simulating termination analyzer.

    Not really. It does not simulate.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*
    I.e., does not simulate.


    So maybe: *symbolically executes the LLVM program*
    means jumping up and down yelling and screaming?

    Not a bad guess but not quite right either.

    AProVE does form its non-halting decision on the basis of the
    dynamic behavior of its input instead of any static analysis.

    It is a kind of static analysis. The important difference is that
    in a simulation there would be a specific input but AProVE considers
    all possible inputs at the same time.


    Nonetheless it does derive the directed graph of all
    control flows on the basis of
    *symbolically executing the LLVM program*

    It is still unclear whether you know what "termination analyzer" means.
    Which doesn't matter as nobody believes you anyway.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Thu Jun 20 08:04:20 2024
    On 2024-06-19 13:37:53 +0000, olcott said:

    On 6/19/2024 3:07 AM, Mikko wrote:
    On 2024-06-18 16:36:53 +0000, olcott said:

    On 6/18/2024 11:27 AM, Mikko wrote:
    On 2024-06-18 15:44:16 +0000, olcott said:

    On 6/18/2024 10:36 AM, Mikko wrote:
    On 2024-06-18 12:46:13 +0000, olcott said:

    On 6/18/2024 2:44 AM, Mikko wrote:
    On 2024-06-17 12:51:15 +0000, olcott said:

    On 6/17/2024 2:10 AM, Mikko wrote:
    On 2024-06-16 12:59:02 +0000, olcott said:

    On 6/16/2024 4:15 AM, Mikko wrote:
    On 2024-06-15 13:24:45 +0000, olcott said:

    On 6/15/2024 7:33 AM, Mikko wrote:
    On 2024-06-15 11:34:39 +0000, joes said:

    Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
    On 6/14/2024 10:54 AM, joes wrote:
    Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
    On 6/14/2024 6:39 AM, Richard Damon wrote:
    On 6/14/24 12:13 AM, olcott wrote:
    On 6/13/2024 10:44 PM, Richard Damon wrote:
    On 6/13/24 11:14 PM, olcott wrote:
    On 6/13/2024 10:04 PM, Richard Damon wrote:
    On 6/13/24 9:39 PM, olcott wrote:
    On 6/13/2024 8:24 PM, Richard Damon wrote:
    On 6/13/24 11:32 AM, olcott wrote:

    When H and D have a pathological relationship to each other then
    H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
    such pathological relationship thus D correctly simulated by H1 is the
    behavior of D(D).
    What is H1 asked?
    H is asked whether its input halts, and by definition should give the
    (right) answer for every input.
    If we used that definition of decider then no human ever decided
    anything because every human has made at least one mistake.
    Yes. Humans are not machines.
    I use the term "termination analyzer" as a close fit. The term partial
    halt decider is more accurate yet confuses most people.
    Olcott has used the term "termination analyzer", though whether he knows
    what it means is unclear.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE symbolically executes the LLVM program
    and uses abstraction to obtain a finite symbolic execution graph (SEG)
    containing all possible program runs.

    AProVE is a particular attempt, not a definition.


    If you say "What is a duck?" and I point to a duck, that
    *is* what a duck is.

    That would be just an example, not a definition. In particular, it does
    not tell about another being whether it can be called a "duck".
    *Termination analysis*
    In computer science, termination analysis is program analysis which
    attempts to determine whether the evaluation of a given program halts
    for each input. This means to determine whether the input program
    computes a total function.
    https://en.wikipedia.org/wiki/Termination_analysis

    I pointed out AProVE because it is essentially a simulating >>>>>>>>>>> halt decider with a limited domain.

    A difference between AProVE and a partial halt decider is that the input
    to AProVE is only a program, not also an input to that program, whereas
    the input to a partial halt decider contains both.

    *AProVE: Non-Termination Witnesses for C Programs*
    https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf


    AProVE is a kind of simulating termination analyzer.

    Not really. It does not simulate.


    To prove (non-)termination of a C program, AProVE uses the Clang
    compiler [7] to translate it to the intermediate representation of the
    LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*

    I.e., does not simulate.


    So maybe: *symbolically executes the LLVM program*
    means jumping up and down yelling and screaming?

    Not a bad guess but not quite right either.

    AProVE does form its non-halting decision on the basis of the
    dynamic behavior of its input instead of any static analysis.

    It is a kind of static analysis. The important difference is that
    in a simulation there would be a specific input but AProVE considers
    all possible inputs at the same time.


    Nonetheless it does derive the directed graph of all
    control flows on the basis of
    *symbolically executing the LLVM program*

    It is still unclear whether you know what "termination analyzer" means.
    Which doesn't matter as nobody believes you anyway.


    It is dishonest to dismiss my reasoning out-of-hand without
    finding an actual error.

    So many of your errors have been found and analyzed that one more
    or less makes no difference.

    For my first three examples, which have no input, H0 is a termination
    analyzer.

    Still unclear whether you know what "termination analyzer" means.

    For my next example, which has an input, there is no existing
    term of the art that exactly fits besides halt decider with a limited
    domain or partial halt decider.

    The latter is better.

    This is too confusing for my software engineer reviewers.

    The cause of confusion is that you use common words in a way that
    is not compatible with their common meanings. If one cannot trust
    that you know what your words mean one cannot understand what you
    are trying to say.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Thu Jun 20 17:42:37 2024
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From joes@21:1/5 to All on Thu Jun 20 16:16:06 2024
    Am Thu, 20 Jun 2024 10:04:35 -0500 schrieb olcott:
    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:
    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way to people
    that have already made up their mind and closed it thus fail to trace
    through each step of this reasoning looking for an error and finding
    none.
    You cannot present wrong reasoning to people who know the literature.
    We found many errors.

    If you simply leap to the false assumption that I am wrong yet fail to
    point out any mistake because there are no mistakes this will only
    convince gullible fools that also lack sufficient technical competence.
    That "assumption" is pretty well founded if you believe CS. The mistakes
    are still there even if you can't see them. There is only one gullible
    fool here.

    --
    One cannot calculate with dark numbers. For actual mathematics
    they are completely useless. --Wolfgang Mückenheim

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Jun 20 21:55:33 2024
    On 6/20/24 11:04 AM, olcott wrote:
    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.


    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    No, we are open to new ideas that have an actual factual basis.


    If you simply leap to the false assumption that I am wrong
    yet fail to point out any mistake because there are no mistakes
    this will only convince gullible fools that also lack sufficient
    technical competence.


    We don't leap to false assumptions, we start with DEFINITIONS.

    YOU seem to leap to the false assumption that you can just change the definitions, which you cannot.

    When your statements are based on false definitions, which you refuse to
    see are wrong, you are just stuck in your lies, and think the world is
    against you, when what is against you is TRUTH, because your world is
    just built on LIES.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Jun 20 22:38:26 2024
    On 6/20/24 10:04 PM, olcott wrote:
    On 6/20/2024 8:55 PM, Richard Damon wrote:
    On 6/20/24 11:04 AM, olcott wrote:
    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.


    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    No, we are open to new ideas that have an actual factual basis.


    If you simply leap to the false assumption that I am wrong
    yet fail to point out any mistake because there are no mistakes
    this will only convince gullible fools that also lack sufficient
    technical competence.


    We don't leap to false assumptions, we start with DEFINITIONS.


    When it is defined that H(D,D) must report on the behavior
    of D(D) yet the finite string D cannot be mapped to the
    behavior of D(D) then the definition is wrong.

    *You seem to think that textbooks are the word of God*



    Why do you say it cannot be "mapped"?

    Of course it can be mapped by the definition of mapping that deciders are supposed to use, as

    (D,D) -> Halting

    is a perfectly valid mapping.
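    That point can be made concrete (a sketch, not anyone's actual decider):
    for the single argument pair (D,D), one of the two constant programs
    below computes the halting mapping correctly, even though we may not
    know which one. What the theorem rules out is one H that is correct on
    every input, including the D built from that same H.

        /* Exactly one of these is a correct halt decider for the single
           input (D,D); the mapping exists either way. */
        int H_yes(void *p, void *i) { return 1; }  /* always "halts"         */
        int H_no (void *p, void *i) { return 0; }  /* always "does not halt" */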

    Your problem is you keep on trying to LIE by trying to change the
    meaning of the words, probably because you just don't understand the
    actual meaning because you have forced yourself to be stupid about them
    by not actually studying them.

    YOU are not "God" either, but the textbooks do quote the "words of
    'God'" in the sense that the creators of the fields are the 'Gods' of
    the field that define what things in the field mean.

    And, when you defy the words of 'God', you get cast out of 'heaven',
    which here means you logic just fails to be applicable.

    If you want, you can create your own field and be the 'God' of it, but
    then you need to convince people to come to your world.

    The 'faithful' of the existing system, who know the actual meaning of
    the words, will be there to expose your lies when you try to convince
    people that your world is just like the actual one that people know.

    So, all you are doing is publicly admitting that you are defying the authoritative definition of things in the system, because you just don't
    like them.

    And, just like the ACTUAL GOD of this universe, who created it and
    everything in it, decides what the rules are and will cast you out at
    the time of judging if you choose not to believe him, so when you
    refuse the rules of the field of logic you find yourself cast out of
    them, with nothing to stand on.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Fri Jun 21 10:16:04 2024
    On 2024-06-20 15:04:35 +0000, olcott said:

    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    If you can't convince the reviewers of a journal that your article is
    well thought and well written you cannot get it published in a
    respected journal.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Fri Jun 21 10:05:32 2024
    Op 20.jun.2024 om 18:28 schreef olcott:
    On 6/20/2024 11:16 AM, joes wrote:
    Am Thu, 20 Jun 2024 10:04:35 -0500 schrieb olcott:
    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:
    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way to people
    that have already made up their mind and closed it thus fail to trace
    through each step of this reasoning looking for an error and finding
    none.

    You cannot present wrong reasoning to people who know the literature.
    We found many errors.


    All the "errors" that have been pointed out are mere
    dogmatic assertions that state that my conclusion is
    inconsistent with the conclusions stated in textbooks.

    The only other "errors" that were pointed out flatly
    disagree with verified facts.


    No one ever verified these facts. We know that in your language
    'verified facts' means 'my wishes'.
    Many errors were pointed out to you, but you prefer to ignore them,
    probably because your prejudice has already made up your mind that they
    must be wrong, so you did not bother to think about them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Jun 21 10:27:02 2024
    On 6/21/24 9:13 AM, olcott wrote:
    On 6/21/2024 3:05 AM, Fred. Zwarts wrote:
    Op 20.jun.2024 om 18:28 schreef olcott:
    On 6/20/2024 11:16 AM, joes wrote:
    Am Thu, 20 Jun 2024 10:04:35 -0500 schrieb olcott:
    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:
    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.
    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way to people
    that have already made up their mind and closed it thus fail to trace
    through each step of this reasoning looking for an error and finding
    none.

    You cannot present wrong reasoning to people who know the literature.
    We found many errors.


    All the "errors" that have been pointed out are mere
    dogmatic assertions that state that my conclusion is
    inconsistent with the conclusions stated in textbooks.

    The only other "errors" that were pointed out flatly
    disagree with verified facts.


    No one ever verified these facts. We know that in your language
    'verified facts' means 'my wishes'.

    On 6/20/2024 5:37 PM, Richard Damon wrote:
    On 6/20/24 10:12 AM, olcott wrote:

    It also looks like you fail to comprehend that it is possible
    for a simulating termination analyzer to recognize inputs that
    would never terminate by recognizing the repeating state of
    these inputs after a finite number of steps of correct simulation.

    Right, but they don't do it by "Correctly Simulating" the
    input, but by a PARTIAL simulation that provides the needed
    information to prove that an ACTUAL CORRECT (and complete)
    simulation of that input would not halt.


    Which just shows your logic is based on lies and misinterpreting things.

    Yes, a decider can sometimes correctly detect that a non-halting machine
    is non-halting by a partial simulation plus logic showing that the
    actual correct simulation would go on forever.
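    A sketch of that technique (the toy state type and step function are
    illustrative assumptions, not any poster's code): simulate step by
    step, and if an exact machine state recurs, a complete simulation of
    this deterministic machine would repeat it forever, so non-halting can
    be reported after finitely many steps.

        #include <stdbool.h>
        #include <string.h>

        #define MAX_STEPS 10000

        typedef struct { int pc, acc; } state;  /* complete state of a toy machine */

        /* toy step function: decrements acc when pc is nonzero; halts at acc == 0 */
        static bool step(state *s)
        {
            if (s->acc == 0) return false;      /* reached a final state */
            if (s->pc != 0) s->acc -= 1;        /* pc == 0: state repeats */
            return true;
        }

        /* 1 = halts, 0 = provably never halts, -1 = step budget exhausted */
        int analyze(state s)
        {
            state seen[MAX_STEPS];
            int n = 0;
            while (n < MAX_STEPS) {
                for (int i = 0; i < n; ++i)
                    if (memcmp(&seen[i], &s, sizeof s) == 0)
                        return 0;               /* exact state recurred */
                seen[n++] = s;
                if (!step(&s))
                    return 1;
            }
            return -1;
        }

    Here analyze((state){1, 3}) reports halting, while analyze((state){0, 3})
    reports non-halting after two steps instead of simulating forever; the
    "logic" part is the observation that a deterministic machine that
    revisits a state must cycle.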

    But, as usual, your logic reverses things and gets wrong answers. The
    fact that no H can simulate the instance of a template based on itself
    to a final state doesn't show that any of the instances are non-halting.

    ALL of those instances, when the decider does decide to abort its
    simulation (so that its own simulation does not by itself correctly
    reveal the behavior of the machine), will reach an end when simulated
    by an actual correct simulator.

    This shows that your decider was NEVER able to CORRECTLY determine that
    its particular input was non-halting, and that its correct simulation
    would never reach an end, unless of course your logic allows you to
    "Correctly Determine" a wrong answer, as it seems yours does, which just
    means it has totally blown itself up into smithereens of contradictions.


    Many errors were pointed out to you, but you prefer to ignore them,
    probably because your prejudice has already made up your mind that
    they must be wrong, so you did not bother to think about them.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Jun 21 10:43:05 2024
    On 6/21/24 9:21 AM, olcott wrote:
    On 6/21/2024 2:16 AM, Mikko wrote:
    On 2024-06-20 15:04:35 +0000, olcott said:

    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    If you can't convince the reviewers of a journal that your article is
    well thought and well written you cannot get it published in a
    respected journal.


    The trick is to get people that say I am wrong
    to point out the exact mistake. When they really
    try to do this they find no mistake and all of
    their rebuttal was pure bluster with no actual basis.


    No, we do; it's just that you don't like that the problem is you asked the
    wrong question.

    It seems beyond your understanding that you have WASTED two decades of
    work looking at the wrong thing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Sat Jun 22 14:06:23 2024
    On 2024-06-21 13:21:47 +0000, olcott said:

    On 6/21/2024 2:16 AM, Mikko wrote:
    On 2024-06-20 15:04:35 +0000, olcott said:

    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    If you can't convince the reviewers of a journal that your article is
    well thought and well written you cannot get it published in a
    respected journal.


    The trick is to get people that say I am wrong
    to point out the exact mistake. When they really
    try to do this they find no mistake and all of
    their rebuttal was pure bluster with no actual basis.

    That trick does not work with editors and their reviewers.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Sat Jun 22 20:39:48 2024
    Op 21.jun.2024 om 15:21 schreef olcott:
    On 6/21/2024 2:16 AM, Mikko wrote:
    On 2024-06-20 15:04:35 +0000, olcott said:

    On 6/20/2024 9:42 AM, Mikko wrote:
    On 2024-06-20 05:15:37 +0000, olcott said:

    On 6/20/2024 12:04 AM, Mikko wrote:

    Still unclear whether you know what "termination analyzer" means.

    I really don't care what you believe.
    It is not about belief.
    It is about correct reasoning.

    No, it is not. It is about language maintenance. If you cannot present
    your reasoning in Common Language it does not matter whether your
    reasoning is correct.

    I cannot possibly present my reasoning in a convincing way
    to people that have already made up their mind and closed it
    thus fail to trace through each step of this reasoning looking
    for an error and finding none.

    If you can't convince the reviewers of a journal that your article is
    well thought and well written you cannot get it published in a
    respected journal.


    The trick is to get people that say I am wrong
    to point out the exact mistake. When they really
    try to do this they find no mistake and all of
    their rebuttal was pure bluster with no actual basis.


    It seems you do not even try to answer questions meant to show errors in
    the reasoning of your opponents, in order to protect yourself against
    finding no errors in their rebuttal.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)