On 6/13/2024 10:04 PM, Richard Damon wrote:
On 6/13/24 9:39 PM, olcott wrote:
On 6/13/2024 8:24 PM, Richard Damon wrote:
On 6/13/24 11:32 AM, olcott wrote:
It is incumbent upon you to show the exact steps of how H computes
the mapping from the x86 machine language finite string input to
H(D,D) using the finite string transformation rules specified by
the semantics of the x86 programming language that reaches the
behavior of the directly executed D(D).
Why? I don't claim it can.
That means that H cannot even be asked the question:
"Does D halt on its input?"
Why not? After all, H does what it does, the PERSON we ask is the
programmer.
*When H and D have a pathological relationship to each other*
There is no way to encode any H such that it can be asked:
Does D(D) halt?
You must see this from the POV of H or you won't get it.
H cannot read your theory of computation textbooks, it
only knows what it directly sees, its actual input.
If there is no possible way for H to transform its input
into the behavior of D(D) then H cannot be asked about
the behavior of D(D).
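For reference, the D being argued about follows the conventional
halting-problem counter-example template. A minimal compilable sketch
(the stub H here is only a placeholder so the fragment runs; it is not
the simulating H under debate, and the typedef mirrors the thread's
C usage):

  typedef int (*ptr)();

  int H(ptr p, ptr i) { (void)p; (void)i; return 1; } /* stub: "halts" */

  int D(ptr p)
  {
      if (H(p, p))     /* ask H: does p(p) halt?          */
          for (;;) ;   /* H said "halts"  -> loop forever */
      return 0;        /* H said "loops"  -> halt at once */
  }

  int main(void)
  {
      return H((ptr)D, (ptr)D); /* with the stub this returns 1 */
  }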
On 6/13/2024 10:04 PM, Richard Damon wrote:
On 6/13/24 9:39 PM, olcott wrote:
That means that H cannot even be asked the question:
"Does D halt on its input?"
That is the question that H answers for every other input.
*When H and D have a pathological relationship to each other*
There is no way to encode any H such that it can be asked: Does D(D)
halt?
You must see this from the POV of H or you won't get it.
H cannot read your theory of computation textbooks, it only knows what
it directly sees, its actual input.
And it doesn't need to know more, the behaviour of D is completely
determined by its input.
If there is no possible way for H to transform its input into the
behavior of D(D) then H cannot be asked about the behavior of D(D).
It only means it doesn't give the right answer, when given D(D) as input.
On 6/13/2024 10:44 PM, Richard Damon wrote:
On 6/13/24 11:14 PM, olcott wrote:
*When H and D have a pathological relationship to each other*
There is no way to encode any H such that it can be asked:
Does D(D) halt?
Which just proves that Halting is non-computable.
No it is more than that.
H cannot even be asked the question:
Does D(D) halt?
You already admitted the basis for this.
You keep on doing that, making claims that show the truth of the
statement you are trying to disprove.
The fact you don't understand that just shows how little you understand
what you are saying.
You must see this from the POV of H or you won't get it.
H cannot read your theory of computation textbooks, it
only knows what it directly sees, its actual input.
But H doesn't HAVE a "point of view".
When H is a simulating halt decider you can't even ask it
about the behavior of D(D). You already said that it cannot
map its input to the behavior of D(D). That means that you
cannot ask H(D,D) about the behavior of D(D).
What seems to me to be the world's leading termination
analyzer symbolically executes its transformed input:
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
It takes C programs and translates them into something like
generic assembly language and then symbolically executes them
to form a directed graph of their behavior. x86utm and HH do
something similar in a much more limited fashion.
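To give a concrete miniature of that idea (an illustrative sketch only;
x86utm and AProVE work on real instruction sets, this toy just shows
simulation plus repeated-configuration detection):

  /* Step a toy state machine and report DOES_NOT_HALT as soon as a
     configuration repeats; report HALTS if a final state is reached. */
  #include <stdio.h>

  #define HALT_STATE (-1)

  /* toy program: state 2 goes back to state 1, so it never halts */
  static int next_state(int s)
  {
      switch (s) {
      case 0:  return 1;
      case 1:  return 2;
      case 2:  return 1;          /* the cycle */
      default: return HALT_STATE;
      }
  }

  int main(void)
  {
      char seen[16] = {0};
      int s = 0;
      while (s != HALT_STATE) {
          if (seen[s]) { puts("DOES_NOT_HALT"); return 0; }
          seen[s] = 1;
          s = next_state(s);
      }
      puts("HALTS");
      return 0;
  }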
H is just a "mechanical" computation. It is a rote algorithm that does
what it has been told to do.
H cannot be asked the question Does D(D) halt?
There is no way to encode that. You already admitted
this when you said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
It really seems like you just don't understand the concept of
deterministic automata and willful beings as being different.
Which just shows how ignorant you are about what you talk about.
The issue is that you don't understand truthmaker theory.
You can not simply correctly wave your hands to get H to know
what question is being asked.
If there is no possible way for H to transform its input
into the behavior of D(D) then H cannot be asked about
the behavior of D(D).
No, it says it can't do it, not that it can't be asked to do it.
It can't even be asked. You said that yourself.
The input to H(D,D) cannot be transformed into
the behavior of D(D).
On 6/14/2024 10:54 AM, joes wrote:
Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
On 6/14/2024 6:39 AM, Richard Damon wrote:
On 6/14/24 12:13 AM, olcott wrote:
H cannot even be asked the question: Does D(D) halt?
No, you just don't understand the proper meaning of "ask" when applied
to a deterministic entity.
When H and D have a pathological relationship to each other then H(D,D)
is not being asked about the behavior of D(D). H1(D,D) has no such
pathological relationship thus D correctly simulated by H1 is the
behavior of D(D).
H is asked whether its input halts, and by definition should give the
(right) answer for every input.
If we used that definition of decider then no human ever decided
anything because every human has made at least one mistake.
I use the term "termination analyzer" as a close fit. The term
partial halt decider is more accurate yet confuses most people.
A partial halt decider is a halt decider with a limited domain.
D by construction is pathological to the supposed decider it is
constructed on. H1 can not decide D1. For every "decider" we can
construct an undecidable pathological program. No decider decides
every input.
Parroting what you memorized by rote is not very deep understanding.
Understanding that the halting problem counter-example input that
does the opposite of whatever value the halt decider returns is
merely the Liar Paradox in disguise is a much deeper understanding.
Can a correct answer to the stated question be a correct answer to the
unstated question?
H(D,D) is not even being asked about the behavior of D(D)
It can't be asked any other way.
It can't be asked in any way whatsoever because it is
already being asked a different question.
When H is a simulating halt decider you can't even ask it about the
behavior of D(D). You already said that it cannot map its input to the
behavior of D(D). That means that you cannot ask H(D,D) about the
behavior of D(D).
Of course you can, because, BY DEFINITION, that is the ONLY thing it
does with its inputs.
That definition might be in textbooks,
yet H does not and cannot read textbooks.
That is very confusing. H still adheres to textbooks.
No, the textbooks have it incorrectly.
The only definition that H sees is the combination of its algorithm with
the finite string of machine language of its input.
H does not see its own algorithm, it only follows its internal
programming. A machine and input completely determine the behaviour,
whether that is D(D) or H(D, D).
No H (with a pathological relationship to D) can possibly see the
behavior of D(D).
It is impossible to encode any algorithm such that H and D have a
pathological relationship and have H even see the behavior of D(D).
H literally gets it as input.
The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
It does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
You already admitted that there is no mapping from the finite string of
machine code of the input to H(D,D) to the behavior of D(D).
Which means that H can't simulate D(D). Other machines can do so.
H cannot simulate D(D) for the same reason that
int sum(int x, int y) { return x + y; }
sum(3,4) cannot return the sum of 5 + 6;
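Spelled out as a runnable fragment (the same analogy, nothing more):

  #include <stdio.h>

  int sum(int x, int y) { return x + y; }

  int main(void)
  {
      /* sum(3,4) computes the mapping of its actual arguments;
         no encoding of the call sum(3,4) can yield 5 + 6 == 11 */
      printf("%d\n", sum(3, 4));  /* prints 7, never 11 */
      return 0;
  }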
It is my understanding that it does this much better than anyone else
does. AProVE "symbolically executes the LLVM program".
And note, it only gives definitive answers for SOME input.
Better doesn't cut it. H should work for ALL programs, especially for D.
You don't even have a slight clue about termination analyzers.
H is just a "mechanical" computation. It is a rote algorithm thatH cannot be asked the question Does D(D) halt?
does what it has been told to do.
There is no way to encode that. You already admitted this when you
said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
H answers that question for every other input.
The question "What is your answer/Is your answer right?" is pointless
and not even computed by H.
It is ridiculously stupid to think that the pathological
relationship between H and D cannot possibly change the
behavior of D especially when it has been conclusively
proven that it DOES CHANGE THE BEHAVIOR OF D
It is every time it is given an input, at least if H is a halt decider.
If you cannot even ask H the question that you want answered then this
is not an actual case of undecidability. H does correctly answer the
actual question that it was actually asked.
D(D) is a valid input. H should be universal.
Likewise the Liar Paradox *should* be true or false,
except for the fact that it isn't.
That is what halt deciders (if they exist) do.
When H and D are defined to have a pathological relationship then H
cannot even be asked about the behavior of D(D).
H cannot give a correct ANSWER about D(D).
H cannot be asked the right question.
It really seems like you just don't understand the concept of
deterministic automata and willful beings as being different.
You can not simply correctly wave your hands to get H to know what
question is being asked.
H doesn't need to know. It is programmed to answer a fixed question,
and the input completely determines the answer.
The fixed question that H is asked is:
Can your input terminate normally?
The answer to that question is: NO.
It can't even be asked. You said that yourself.
The input to H(D,D) cannot be transformed into the behavior of D(D).
It can, just not by H.
How crazy is it to expect a correct answer to a
different question than the one you asked?
No, we can't make an arbitrary problem solver, since we can show there
are unsolvable problems.
That is a whole other different issue.
The key subset of this is that the notion of undecidability is a ruse.
A ruse for what?
Nothing says we can't encode the Halting Question into an input.
If there is no mapping from the input to H(D,D) to the behavior of D(D)
then H cannot possibly be asked about behavior that it cannot possibly
see.
It can be asked and be wrong.
What can't be done is create a program that gives the right answer for
all such inputs.
Expecting a correct answer to the wrong question is only foolishness.
The question is just whether D(D) halts.
Where do you disagree with the halting problem proof?
There are several different issues. The key one, which two PhD computer
science professors agree with me on, is that there is something wrong
with it along the lines of it being isomorphic to the Liar Paradox.
On 6/14/2024 6:27 PM, Richard Damon wrote:
On 6/14/24 9:15 AM, olcott wrote:
On 6/14/2024 6:39 AM, Richard Damon wrote:
On 6/14/24 12:13 AM, olcott wrote:
No it is more than that.
H cannot even be asked the question:
Does D(D) halt?
No, you just don't understand the proper meaning of "ask" when
applied to a deterministic entity.
When H and D have a pathological relationship to each
other then H(D,D) is not being asked about the behavior
of D(D). H1(D,D) has no such pathological relationship
thus D correctly simulated by H1 is the behavior of D(D).
Of course it is. The nature of the input doesn't affect the form of the
question that H is supposed to answer.
The textbook asks the question.
The data cannot possibly do that.
You already said that H cannot possibly map its
input to the behavior of D(D).
We need to stay focused on this one single point until you
fully get it. Unlike the other two respondents you do have
the capacity to understand this.
You keep expecting H to read your computer science
textbooks.
On 6/14/2024 8:38 PM, Richard Damon wrote:
On 6/14/24 8:34 PM, olcott wrote:
The textbook asks the question.
The data cannot possibly do that.
But the data doesn't need to do it, as the program specifications
define it.
Now, if H was supposed to be a "Universal Problem Decider", then we
would need to somehow "encode" the goal of H determining that a
correct (and complete) simulation of its input would need to reach a
final state, but I see no issue with defining a way to encode that.
You already said that H cannot possibly map its
input to the behavior of D(D).
Right, it is impossible for H to itself compute that behavior and give
an answer.
That doesn't mean we can't encode the question.
We need to stay focused on this one single point until you
fully get it. Unlike the other two respondents you do have
the capacity to understand this.
You keep expecting H to read your computer science
textbooks.
No, I expect its PROGRAMMER to have done that, which clearly you
haven't done.
Programs don't read their requirements; they perform the actions they
were programmed to do, and if the program is correct, it will get the
right answer. If it doesn't get the right answer, then the programmer
erred in saying it meets the requirements.
I am only going to talk to you in the one thread about
this; the material is too difficult to understand outside
of a single chain of thought.
On 6/14/2024 8:38 PM, Richard Damon wrote:
On 6/14/24 8:34 PM, olcott wrote:
The textbook asks the question.
The data cannot possibly do that.
But the data doesn't need to do it, as the program specifications
define it.
Did you know that the code itself cannot read these specifications?
The specifications say {draw a square circle}, the code says huh?
Now, if H was supposed to be a "Universal Problem Decider", then we
I don't have time for an infinite conversation.
H is ONLY defined to be a D decider.
would need to somehow "encode" the goal of H determining that a
correct (and complete) simulation of its input would need to reach a
final state, but I see no issue with defining a way to encode that.
You already said that H cannot possibly map its
input to the behavior of D(D).
Right, it is impossible for H to itself compute that behavior and give
an answer.
NO !!! It is impossible for anyone or anything to provide
a correct answer to a question THAT THEY ARE NOT BEING ASKED.
That doesn't mean we can't encode the question.
Give it your best shot, it must be encoded in C.
The spec says {CAD system that draws square circles}
We need to stay focused on this one single point until you
fully get it. Unlike the other two respondents you do have
the capacity to understand this.
You keep expecting H to read your computer science
textbooks.
No, I expect its PROGRAMMER to have done that, which clearly you
haven't done.
The programmer says WTF!
Programs don't read their requirements; they perform the actions they
were programmed to do,
There is no way to encode H to even see the behavior of D(D)
when H and D have the pathological relationship.
That is the dumbed down version of H cannot map its finite
string x86 machine code to the behavior of D(D).
and if the program is correct, it will get the right answer. If it
doesn't get the right answer, then the programmer erred in saying it
meets the requirements.
Sure make a CAD system that draws square circles or you are fired.
You are failing to understand the notion of logically
impossible.
On 6/14/2024 9:16 PM, Richard Damon wrote:
On 6/14/24 9:59 PM, olcott wrote:
Did you know that the code itself cannot read these specifications?
The specifications say {draw a square circle}, the code says huh?
And what makes you think it needs to?
You are just showing a TOTAL IGNORANCE of the field of programming.
Did the x86utm program write itself after you showed it the
specifications?
Now, if H was supposed to be a "Universal Problem Decider", then we
I don't have time for an infinite conversation.
H is ONLY defined to be a D decider.
It needs to be at least a D Halting Decider which has the same
requirement, just restricted to the class of programs built on the
template D.
And that means H doesn't need to "read" the problem statement either.
So, you are just showing your stupidity.
would need to somehow "encode" the goal of H determining that a
correct (and complete) simulation of its input would need to reach a
final state, but I see no issue with defining a way to encode that.
You already said that H cannot possibly map its
input to the behavior of D(D).
Right, it is impossible for H to itself compute that behavior and
give an answer.
NO !!! It is impossible for anyone or anything to provide
a correct answer to a question THAT THEY ARE NOT BEING ASKED.
Of course they can. For instance, you can solve a maze without knowing
that this is the task, if you are given an instruction sheet telling
you what moves to make.
Programs don't "know" what they are doing, they are just "dumb"
automatons that do exact as they are programmed to act.
That doesn't mean we can't encode the question.
Give it your best shot, it must be encoded in C.
Why?
C is not a good language to express requirements.
The spec says {CAD system that draws square circles}
You keep expecting H to read your computer science
textbooks.
No, I expect its PROGRAMMER to have done that, which clearly you
haven't done.
The programmer says WTF!
But there isn't a contradiction like that in the specification of Halting.
Programs don't read their requirements; they perform the actions they
were programmed to do,
There is no way to encode H to even see the behavior of D(D)
when H and D have the pathological relationship.
That is the dumbed down version of H cannot map its finite
string x86 machine code to the behavior of D(D).
But the map exists, so we are allowed to ask to compute it.
There is no map from the input to H(D,D) to the behavior of D(D)
Of course, one possible answer is that it can not be done, but for
that answer to be correct, we need to show that it actually can not be
done, which the Turing Proof does.
It is IMPOSSIBLE TO EVEN ASK THE QUESTION.
You agreed that there is no map.
You fail to understand that this means
THE QUESTION CANNOT EVEN BE ASKED.
THIS IS YOUR SHORT-COMING AND NOT MY MISTAKE.
On 6/14/2024 9:48 PM, Richard Damon wrote:
On 6/14/24 10:25 PM, olcott wrote:
On 6/14/2024 9:16 PM, Richard Damon wrote:
You agreed that there is no map.
No, there IS a mapping from the input to the correct answer.
You said that there is no map from the input to H(D,D)
to the behavior of D(D)
You fail to understand that this means
THE QUESTION CANNOT EVEN BE ASKED.
THIS IS YOUR SHORT-COMING AND NOT MY MISTAKE.
And you are proving that you are just totally ignorant of what you are
talking about, and thus LYING when you make all your false claims.
Maybe this is simply over your head too??
On 6/15/2024 7:33 AM, Mikko wrote:
On 2024-06-15 11:34:39 +0000, joes said:
I use the term "termination analyzer" as a close fit. The term partial
halt decider is more accurate yet confuses most people.
Olcott has used the term "termination analyzer", though whether he knows
what it means is unclear.
To prove (non-)termination of a C program, AProVE uses the Clang
compiler [7] to translate it to the intermediate representation of the
LLVM framework [15]. Then AProVE symbolically executes the LLVM program
and uses abstraction to obtain a finite symbolic execution graph (SEG) containing all possible program runs.
*AProVE: Non-Termination Witnesses for C Programs*
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
The main difference is that a halt decider or partial halt decider takes
descriptions of both a Turing machine (or other program) and an input and
determines whether that machine halts with that input
H(D,D) is only required to get this one input correctly thus H is
a halt decider with a domain restricted to D.
but a termination
analyzer takes only the description of a Turing machine (or other
program)
and attempts to determine whether that machine halts with every input.
The term "analyzer" instead of "decider" indicates that it may fail to
determine the answer on some inputs
Yes that is the distinction that I intend.
and that it may produce additional information
that may be useful. The intent is to create termination analyzers that
are useful for practical purposes.
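The contrast can be put as two C interfaces over a toy domain (an
illustrative sketch; the machine names and verdicts are assumptions,
not any real tool's API):

  #include <stdio.h>
  #include <string.h>

  enum verdict { HALTS, LOOPS, UNKNOWN };

  /* Toy machines: "loop" never halts, "echo" always halts,
     "odd?" halts only on inputs of odd length. */

  /* a (partial) halt decider judges ONE machine/input pair */
  static enum verdict halt_decider(const char *m, const char *input)
  {
      if (strcmp(m, "loop") == 0) return LOOPS;
      if (strcmp(m, "echo") == 0) return HALTS;
      if (strcmp(m, "odd?") == 0)
          return (strlen(input) % 2) ? HALTS : LOOPS;
      return UNKNOWN;  /* outside this decider's limited domain */
  }

  /* a termination analyzer judges a machine over ALL inputs,
     i.e. whether it computes a total function */
  static enum verdict termination_analyzer(const char *m)
  {
      if (strcmp(m, "echo") == 0) return HALTS;  /* total     */
      if (strcmp(m, "loop") == 0) return LOOPS;
      if (strcmp(m, "odd?") == 0) return LOOPS;  /* not total */
      return UNKNOWN;
  }

  int main(void)
  {
      printf("%d %d\n",
             halt_decider("odd?", "abc"),   /* 0: halts on "abc" */
             termination_analyzer("odd?")); /* 1: not total      */
      return 0;
  }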
On 6/15/2024 6:34 AM, joes wrote:
Am Fri, 14 Jun 2024 12:39:15 -0500 schrieb olcott:
On 6/14/2024 10:54 AM, joes wrote:
When H and D have a pathological relationship to each other then
H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
such pathological relationship thus D correctly simulated by H1 is the
behavior of D(D).
What is H1 asked?
H is asked whether its input halts, and by definition should give the
(right) answer for every input.
If we used that definition of decider then no human ever decided
anything because every human has made at least one mistake.
Yes. Humans are not machines.
I use the term "termination analyzer" as a close fit. The term partial
halt decider is more accurate yet confuses most people.
I have not seen you use that term before. You have not called it partial.
That was confusing.
Parroting what you memorized by rote is not very deep understanding.
This was my own phrasing. Can you explain the halting problem proof?
Understanding that the halting problem counter-example input that does
the opposite of whatever value the halt decider returns is merely the
Liar Paradox in disguise is a much deeper understanding.
I know that.
If you really knew that then you would know that the
Halting problem is a mere ruse.
H(D,D) is not even being asked about the behavior of D(D)
It can't be asked any other way.
It can't be asked in any way whatsoever because it is already being
asked a different question.
Is that question "Do you answer yes?"?
The only definition that H sees is the combination of its algorithm
with the finite string of machine language of its input.
H does not see its own algorithm, it only follows its internal
programming. A machine and input completely determine the behaviour,
whether that is D(D) or H(D, D).
No H (with a pathological relationship to D) can possibly see the
behavior of D(D).
That is not a problem with D, but with H not being total.
The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
It does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
There is no difference. If an H exists, it gives one answer. D then does
the opposite. H cannot change its answer. Other analysers can see that
H gives the wrong answer.
H cannot simulate D(D) for the same reason that
int sum(int x, int y) { return x + y; }
sum(3,4) cannot return the sum of 5 + 6;
Why do you say that? A (partial) termination analyser doesn't disprove
the halting problem.
It is ridiculously stupid to think that the pathological relationship
between H and D cannot possibly change the behavior of D especially when
it has been conclusively proven that it DOES CHANGE THE BEHAVIOR OF D
D as a machine is completely specified and a valid Turing machine:
It asks a supposed decider if it halts, and then does the opposite,
making the decider wrong.
Other deciders than the one it calls can simulate or decide it.
D has exactly one fixed behaviour, like all TMs.
The behaviour of H should change because of the recursion, but it has to
make up its mind. D goes "I'm gonna do the opposite of what you said".
If you cannot even ask H the question that you want answered then this
is not an actual case of undecidability. H does correctly answer the
actual question that it was actually asked.
That would be the wrong question.
Likewise the Liar Paradox *should* be true or false,
except for the fact that it isn't.
Then H would be faulty.
H cannot give a correct ANSWER about D(D).
H cannot be asked the right question.
The fixed question that H is asked is:
Can your input terminate normally?
The answer to that question is: NO.
Does the input terminate, rather.
If that were so, this would be given to D, since it asks H about itself.
In this case, it would actually terminate. If H said Yes, it would go
into an infinite loop.
How crazy is it to expect a correct answer to a different question than
the one you asked?
There are undecidable problems. Like halting.
If there is no mapping from the input to H(D,D) to the behavior of
D(D) then H cannot possibly be asked about behavior that it cannot
possibly see.
It can be asked and be wrong.
No it cannot even be asked and the technical details
of this are over everyone's head.
Computing the map from the input to H(D,D) to the
behavior of D(D) has nothing to do with Google maps.
"Something along the lines"? Can you point to the step where youIt can be asked and be wrong.
There are several different issues the key one of these issues [...]Where do you disagree with the halting problem proof?What can't be done is create a program that gives the right answer >>>>>> for all such inputs.Expecting a correct answer to the wrong question is only foolishness. >>>> The question is just whether D(D) halts.
is that there is something wrong with it along the lines of it being
isomorphic to the Liar Paradox.
disagree?
Thanks for your extended reply.
You hardly have any clue about any of this.
On 6/16/2024 4:15 AM, Mikko wrote:
On 2024-06-15 13:24:45 +0000, olcott said:
To prove (non-)termination of a C program, AProVE uses the Clang
compiler [7] to translate it to the intermediate representation of the
LLVM framework [15]. Then AProVE symbolically executes the LLVM program
and uses abstraction to obtain a finite symbolic execution graph (SEG)
containing all possible program runs.
AProVE is a particular attempt, not a definition.
If you say: What is a duck? and I point to a duck that
*is* what a duck is.
*Termination analysis*
In computer science, termination analysis is program analysis which
attempts to determine whether the evaluation of a given program halts
for each input. This means to determine whether the input program
computes a total function.
https://en.wikipedia.org/wiki/Termination_analysis
I pointed out AProVE because it is essentially a simulating
halt decider with a limited domain.
*AProVE: Non-Termination Witnesses for C Programs*
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
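A concrete instance of that definition (an illustrative assumption, not
taken from the paper): whether the program below computes a total
function is exactly the question a termination analyzer asks, and for
this one it is the open Collatz conjecture.

  #include <stdio.h>

  /* halts for every n >= 1 iff the Collatz conjecture holds */
  static unsigned collatz_steps(unsigned long n)
  {
      unsigned steps = 0;
      while (n != 1) {
          n = (n % 2) ? 3 * n + 1 : n / 2;
          steps++;
      }
      return steps;
  }

  int main(void)
  {
      printf("%u\n", collatz_steps(27)); /* prints 111 */
      return 0;
  }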
On 6/17/2024 2:10 AM, Mikko wrote:
On 2024-06-16 12:59:02 +0000, olcott said:
If you say: What is a duck? and I point to a duck that
*is* what a duck is.
That would be just an example, not a definition. In particular, it does
not tell about another being whether it can be called a "duck".
I pointed out AProVE because it is essentially a simulating
halt decider with a limited domain.
A difference between AProVE and a partial halt decider is that the input
to AProVE is only a program, not an input to that program, but the
input to a partial halt decider contains both.
AProVE is a kind of simulating termination analyzer.
H is a kind of simulating halt decider with a restricted domain.
[Simulating termination analyzers for dummies] makes these ideas
simpler.
On 6/18/2024 2:44 AM, Mikko wrote:
On 2024-06-17 12:51:15 +0000, olcott said:
AProVE is a kind of simulating termination analyzer.
Not really. It does not simulate.
To prove (non-)termination of a C program, AProVE uses the Clang
compiler [7] to translate it to the intermediate representation of the
LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*
and uses abstraction to obtain a finite symbolic execution graph (SEG)
H is a kind of simulating halt decider with a restricted domain.
[Simulating termination analyzers for dummies] makes these ideas
simpler.
On 6/18/2024 10:36 AM, Mikko wrote:
On 2024-06-18 12:46:13 +0000, olcott said:
AProVE is a kind of simulating termination analyzer.
Not really. It does not simulate.
To prove (non-)termination of a C program, AProVE uses the Clang
compiler [7] to translate it to the intermediate representation of the
LLVM framework [15]. Then AProVE *symbolically executes the LLVM program*
I.e., does not simulate.
So maybe: *symbolically executes the LLVM program*
means jumping up and down yelling and screaming?
AProVE does form its non-halting decision on the basis of the
dynamic behavior of its input instead of any static analysis.
*symbolically executes the LLVM program* means dynamic behavior
and not static analysis.
On 6/18/2024 11:27 AM, Mikko wrote:
On 2024-06-18 15:44:16 +0000, olcott said:
So maybe: *symbolically executes the LLVM program*
means jumping up and down yelling and screaming?
Not a bad guess but not quite right either.
AProVE does form its non-halting decision on the basis of the
dynamic behavior of its input instead of any static analysis.
It is a kind of static analysis. The important difference is that
in a simulation there would be a specific input but AProVE considers
all possible inputs at the same time.
Nonetheless it does derive the directed graph of all
control flows on the basis of
*symbolically executing the LLVM program*
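The distinction Mikko is drawing can be made concrete; the function
below is invented for this example. A simulation follows one concrete
run; symbolic execution keeps the input as a symbol and covers every
run at once.

// Invented example: simulation vs. symbolic execution.
int countdown(int n)
{
    while (n > 0)        // loop guard over a symbolic input n
        n = n - 1;
    return n;
}

// Concrete simulation answers for ONE input:
//   countdown(5): 5 -> 4 -> 3 -> 2 -> 1 -> 0, halts after 5 steps.
//
// Symbolic execution answers for ALL inputs at once:
//   case n <= 0: the loop is never entered, so the call halts;
//   case n >  0: n decreases by 1 each iteration and is bounded
//     below by 0 (a ranking function), so the loop always exits.
//
// Covering every run with one finite graph is what the AProVE paper
// means by a symbolic execution graph, and it is why symbolic
// execution is a form of static analysis even though it reasons
// about dynamic behavior.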
On 6/19/2024 3:07 AM, Mikko wrote:
On 2024-06-18 16:36:53 +0000, olcott said:
Nonetheless it does derive the directed graph of all
control flows on the basis of
*symbolically executing the LLVM program*
It is still unclear whether you know what "termination analyzer" means.
Which doesn't matter as nobody believes you anyway.
It is dishonest to dismiss my reasoning out-of-hand without
finding an actual error.
For my first three examples that have no input, H0 is a termination
analyzer.
For my next example that has an input, there is no existing term of
the art that exactly fits besides "halt decider with a limited domain"
or "partial halt decider".
This is too confusing to my software engineer reviewers.
On 6/20/2024 12:04 AM, Mikko wrote:
Still unclear whether you know what "termination analyzer" means.
I really don't care what you believe.
It is not about belief.
It is about correct reasoning.
On 6/20/2024 9:42 AM, Mikko wrote:
On 2024-06-20 05:15:37 +0000, olcott said:
On 6/20/2024 12:04 AM, Mikko wrote:
Still unclear whether you know what "termination analyzer" means.
I really don't care what you believe.
It is not about belief.
It is about correct reasoning.
No, it is not. It is about language maintenance. If you cannot present
your reasoning in Common Language it does not matter whether your
reasoning is correct.
I cannot possibly present my reasoning in a convincing way
to people that have already made up their mind and closed it
thus fail to trace through each step of this reasoning looking
for an error and finding none.
If you simply leap to the false assumption that I am wrong
yet fail to point out any mistake because there are no mistakes
this will only convince gullible fools that also lack sufficient
technical competence.
On 6/20/2024 8:55 PM, Richard Damon wrote:
On 6/20/24 11:04 AM, olcott wrote:
I cannot possibly present my reasoning in a convincing way
to people that have already made up their mind and closed it
thus fail to trace through each step of this reasoning looking
for an error and finding none.
No, we are open to new ideas that have an actual factual basis.
If you simply leap to the false assumption that I am wrong
yet fail to point out any mistake because there are no mistakes
this will only convince gullible fools that also lack sufficient
technical competence.
We don't leap from false assumption, we start with DEFINITIONS.
When it is defined that H(D,D) must report on the behavior
of D(D) yet the finite string D cannot be mapped to the
behavior of D(D) then the definition is wrong.
*You seem to think that textbooks are the word of God*
On 6/20/2024 9:42 AM, Mikko wrote:
On 2024-06-20 05:15:37 +0000, olcott said:
On 6/20/2024 12:04 AM, Mikko wrote:
Sitll inclear whether you know what "termination analyzer" means.
I really don't care what you believe.
It is not about belief.
It is about correct reasoning.
No, it is not. It is about language maintenance. If you cannot present
your reasoning in Common Language it does not matter whether your
reasoning is correct.
I cannot possibly present my reasoning in a convincing way
to people that have already made up their mind and closed it
thus fail to trace through each step of this reasoning looking
for an error and finding none.
On 6/20/2024 11:16 AM, joes wrote:
On Thu, 20 Jun 2024 10:04:35 -0500, olcott wrote:
I cannot possibly present my reasoning in a convincing way to people
that have already made up their mind and closed it thus fail to trace
through each step of this reasoning looking for an error and finding
none.
You cannot present wrong reasoning to people who know the literature.
We found many errors.
All the "errors" that have been pointed out are mere
dogmatic assertions that state that my conclusion is
inconsistent with the conclusions stated in textbooks.
The only other "errors" that were pointed out flatly
disagree with verified facts.
On 6/21/2024 3:05 AM, Fred. Zwarts wrote:
On 20 Jun 2024 at 18:28, olcott wrote:
All the "errors" that have been pointed out are mere
dogmatic assertions that state that my conclusion is
inconsistent with the conclusions stated in textbooks.
The only other "errors" that were pointed out flatly
disagree with verified facts.
No one ever verified these facts. We know that in your language
'verified facts' means 'my wishes'.
On 6/20/2024 5:37 PM, Richard Damon wrote:
On 6/20/24 10:12 AM, olcott wrote:
It also looks like you fail to comprehend that it is possible
for a simulating termination analyzer to recognize inputs that
would never terminate by recognizing the repeating state of
these inputs after a finite number of steps of correct simulation.
Right, but they don't do it by "Correctly Simulating" the
input, but by a PARTIAL simulation that provides the needed
information to prove that an ACTUAL CORRECT (and complete)
simulation of that input would not halt.
Many errors were pointed out to you, but you prefer to ignore them,
probably because you had already made up your mind that they must be
wrong, so you did not bother to think about them.
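One hedged way to make the "repeating state" criterion concrete (all
names below are invented for this sketch): simulate a deterministic
program step by step and report non-termination the first time its
complete state recurs, since a deterministic machine that re-enters an
earlier state must cycle through the same states forever. This is
exactly a partial simulation used to prove what a complete simulation
would do.

#include <stdbool.h>
#include <string.h>

#define STATE_BYTES 64        // size of one complete machine snapshot
#define MAX_STEPS   100000    // analysis budget, not a halting proof

struct machine;                                      // opaque
extern bool step(struct machine *m);                 // hypothetical:
                                                     // false once halted
extern void snapshot(struct machine *m,
                     unsigned char out[STATE_BYTES]);

enum verdict { HALTS, NEVER_HALTS, UNKNOWN };

enum verdict analyze(struct machine *m)
{
    static unsigned char seen[MAX_STEPS][STATE_BYTES];
    for (int n = 0; n < MAX_STEPS; n++) {
        snapshot(m, seen[n]);
        for (int i = 0; i < n; i++)          // complete state recurred?
            if (memcmp(seen[i], seen[n], STATE_BYTES) == 0)
                return NEVER_HALTS;          // deterministic cycle
        if (!step(m))
            return HALTS;                    // reached a final state
    }
    return UNKNOWN;                          // budget exhausted
}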
On 6/21/2024 2:16 AM, Mikko wrote:
On 2024-06-20 15:04:35 +0000, olcott said:
I cannot possibly present my reasoning in a convincing way
to people that have already made up their mind and closed it
thus fail to trace through each step of this reasoning looking
for an error and finding none.
If you can't convince the reviewers of a journal that your article is
well thought out and well written you cannot get it published in a
respected journal.
The trick is to get people that say I am wrong
to point out the exact mistake. When they really
try to do this they find no mistake and all of
their rebuttal is pure bluster with no actual basis.