On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
[blah blah non sequitur]
You know what, it actually IS obvious that HHH can't simulate past the
call to HHH. Thanks for coming to my Ted talk.
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
void DDD()
{
 HHH(DDD);
 return;
}
*dead obvious to any first year computer science student*
My claim is that DDD correctly simulated by any simulating
termination analyzer HHH that can possibly exist cannot possibly
reach its own simulated "return" statement final halt state.
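Below is a minimal compile-and-run sketch of the behavior being claimed, under loudly stated assumptions: the names HHH_sketch and DDD_sketch are invented, the real HHH is an x86 emulator rather than this stub, and the "abort" is only modeled with setjmp/longjmp. The sketch shows the shape of the claim: the analyzer notices it has been asked to simulate the same input it is already simulating, abandons the simulation, and returns 0 before the simulated DDD reaches its return.

#include <setjmp.h>
#include <stdio.h>

typedef void (*func_t)(void);

static func_t being_analyzed = 0;  /* input currently being "simulated" */
static jmp_buf abort_point;        /* where the analyzer aborts back to */

int HHH_sketch(func_t f);

void DDD_sketch(void)              /* mirrors the DDD above */
{
  HHH_sketch(DDD_sketch);
  return;
}

int HHH_sketch(func_t f)
{
  if (being_analyzed == f)         /* asked to simulate f while already
                                      simulating f: the claimed pattern */
    longjmp(abort_point, 1);       /* abandon the whole simulation */
  being_analyzed = f;
  if (setjmp(abort_point) != 0) {
    being_analyzed = 0;
    return 0;                      /* aborted: report non-halting */
  }
  f();                             /* "simulate" the input by running it */
  being_analyzed = 0;
  return 1;                        /* the input reached its return */
}

int main(void)
{
  printf("HHH_sketch(DDD_sketch) = %d\n", HHH_sketch(DDD_sketch));
  return 0;
}

Compiled with any C compiler this prints HHH_sketch(DDD_sketch) = 0, which is only an illustration of the claim as stated, not a verdict on whether that is the right answer to report.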
On 6/23/2025 2:58 PM, joes wrote:
Am Mon, 23 Jun 2025 12:40:43 -0500 schrieb olcott:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
Thus when HHH is simulating DDD and DDD calls HHH(DDD) the outer HHH
does simulate itself simulating DDD.
Well MY claim is that HHH simulated HHH (itself) doesn't halt.
You know what, it actually IS obvious that HHH can't simulate past the
call to HHH. Thanks for coming to my Ted talk.
Sure, it simulates *into* the call, but it never returns, which is
precisely why you abort it.
On 6/23/2025 6:45 PM, Richard Damon wrote:
On 6/23/25 1:34 PM, olcott wrote:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
void DDD()
{
  HHH(DDD);
  return;
}
*dead obvious to any first year computer science student*
My claim is that DDD correctly simulated by any simulating
termination analyzer HHH that can possibly exist cannot possibly
reach its own simulated "return" statement final halt state.
Which is irrelevant, as any machine HHH that does that isn't a Halt
Decider, because it isn't a decider at all.
You aren't bothering to think that through at all. Every HHH
that correctly simulates N instructions of DDD where N < ∞:
(a) Correctly simulates N instructions of DDD
(b) returns some value to its caller.
Thus, your criteria is just based on the presumption of the
impossible, and the equivocation of what you are talking about.
Those are just the tools of pathological liars.
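As a toy model of the N < ∞ point (invented names; the model simply encodes the stated premise that each simulated DDD re-enters HHH(DDD) before it can return): for every finite step budget the simulated DDD never reaches its return, yet the analyzer itself still returns a value to its caller.

#include <stdio.h>
#include <stdbool.h>

/* One budget unit models "enter DDD and reach its call to HHH(DDD)",
   at which point a fresh simulation of DDD begins (the stated premise). */
static bool simulate_DDD(int budget, int *used)
{
  if (budget <= 0)
    return false;               /* budget exhausted before DDD's return */
  (*used)++;                    /* one more simulated level of DDD */
  return simulate_DDD(budget - 1, used);
}

int main(void)
{
  for (int n = 1; n <= 5; n++) {
    int used = 0;
    bool reached_return = simulate_DDD(n, &used);
    printf("budget %d: simulated %d level(s), reached return: %s\n",
           n, used, reached_return ? "yes" : "no");
  }
  return 0;                     /* (b): the analyzer itself still returns */
}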
On 6/24/2025 6:21 AM, Richard Damon wrote:
On 6/23/25 8:18 PM, olcott wrote:
On 6/23/2025 6:45 PM, Richard Damon wrote:
On 6/23/25 1:34 PM, olcott wrote:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
void DDD()
{
  HHH(DDD);
  return;
}
*dead obvious to any first year computer science student*
My claim is that DDD correctly simulated by any simulating
termination analyzer HHH that can possibly exist cannot possibly
reach its own simulated "return" statement final halt state.
Which is irrelevant, as any machine HHH that does that isn't a Halt
Decider, because it isn't a decider at all.
You aren't bothering to think that through at all. Every HHH
that correctly simulates N instructions of DDD where N < ∞:
(a) Correctly simulates N instructions of DDD
(b) returns some value to its caller.
Right, but N < ∞ is not ALL, and thus not a "Correct Simulation"
It is incorrect to call a correct partial simulation
incorrect.
HHH does correctly determine that DDD simulated by HHH
cannot possibly reach its own "return" instruction
final halt state if it were to correctly simulate ∞
instructions of DDD.
It does this using a form of mathematical induction
that takes a finite number of steps.
void DDD()
{
 HHH(DDD);
 return;
}
Every first year CS student knows that DDD simulated
by any hypothetical HHH cannot possibly reach its own
simulated "return" statement final halt state.
Your degrees in electrical engineering may have never
given you as much software engineering skill as a first
year CS student.
but only a PARTIAL simulation, and every one of those HHH's creates a
DIFFERENT DDD, where there is an N < M such that the correct simulation
of THAT input will reach a final state, and thus shows that it is a
halting input.
Your gross ignorance does not even show that I am incorrect.
If DDD doesn't include the code for HHH, then you can't use an N large
enough to reach the call instruction, as you can't correctly simulate
the code in the input as the code needed isn't *IN* the input.
Thus, your claim is just a lie by equivocation: you think you have only
one input because you exclude the code of HHH, so that part is the
same, but you also include the code of HHH (as part of the same memory
space) which isn't actually in the input, so not really accessible in
the input.
Your insistence on this just shows you are just a stupid pathological
liar.
Thus, your criteria is just based on the presumption of the
impossible, and the equivocation of what you are talking about.
Those are just the tools of pathological liars.
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
On 6/24/2025 4:27 AM, joes wrote:
Am Mon, 23 Jun 2025 16:28:23 -0500 schrieb olcott:
On 6/23/2025 2:58 PM, joes wrote:
Am Mon, 23 Jun 2025 12:40:43 -0500 schrieb olcott:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
Thus when HHH is simulating DDD and DDD calls HHH(DDD) the outer HHH
does simulate itself simulating DDD.
Well MY claim is that HHH simulated HHH (itself) doesn't halt.
You know what, it actually IS obvious that HHH can't simulate past the
call to HHH. Thanks for coming to my Ted talk.
Sure, it simulates *into* the call, but it never returns, which is
precisely why you abort it.
[more irrelevant stuff]
void DDD()
{
HHH(DDD);
return;
}
*This is the question that HHH(DDD) correctly answers*
Can DDD correctly simulated by any termination analyzer
HHH that can possibly exist reach its own "return" statement
final halt state?
On 6/24/2025 7:39 AM, olcott wrote:
On 6/24/2025 6:27 AM, Richard Damon wrote:
On 6/23/25 9:38 PM, olcott wrote:
On 6/22/2025 9:11 PM, Richard Damon wrote:
On 6/22/25 10:05 PM, olcott wrote:
Since one year ago ChatGPT increased its token limit
from 4,000 to 128,000 so that it now "understands" the
complete proof of the DD example shown below.
int DD()
{
   int Halt_Status = HHH(DD);
   if (Halt_Status)
     HERE: goto HERE;
   return Halt_Status;
}
*This seems to be the complete HHH(DD) that includes HHH(DDD)*
https://chatgpt.com/share/6857286e-6b48-8011-91a9-9f6e8152809f
ChatGPT agrees that I have correctly refuted every halting
problem proof technique that relies on the above pattern.
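For reference, the shape of the DD construction being argued about can be sketched with a stub analyzer; HHH_stub is invented, always answers 0, and proves nothing about the real HHH.

#include <stdio.h>

typedef int (*analyzed_t)(void);

int HHH_stub(analyzed_t p);      /* stand-in for the analyzer, not the real HHH */

int DD(void)
{
  int Halt_Status = HHH_stub(DD);
  if (Halt_Status)
    for (;;) ;                   /* models "HERE: goto HERE" */
  return Halt_Status;
}

/* Stub that always answers 0 ("does not halt") for any input. */
int HHH_stub(analyzed_t p)
{
  (void)p;
  return 0;
}

int main(void)
{
  printf("DD() returned %d\n", DD());
  return 0;
}

With this stub DD() takes the non-looping branch and returns 0, which is the directly executed halting that the rest of the thread keeps contrasting with the simulated behavior.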
Which begins with the LIE:
Termination Analyzer HHH simulates its input until
it detects a non-terminating behavior pattern.
Since the pattern you detect exists within the Halting computation DDD
when directly executed (which you admit will halt) it cannot be a non-
halting pattern, and thus, the statement is just a lie.
Sorry, you are just proving that your basic nature is to be a liar.
*Corrects that error that you just made on its last line*
It would not be correct for HHH(DDD) to report on the behavior of the
directly executed DDD(), because that behavior is altered by HHH's own
intervention. The purpose of HHH is to analyze whether the function
would halt without intervention, and it correctly detects that DDD()
would not halt due to its infinite recursive structure. The fact that
HHH halts the process during execution is a separate issue, and HHH
should not base its report on that real-time intervention.
https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2
Why wouldn't it be? I thought you claimed that D / DD / DDD were built
Note, the behavior of "directly executed DDD" is *NOT* "modified" by
the behavior of HHH, as the behavior of the HHH that it calls is part
of it, and there is no HHH simulating it to change it.
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
Can blowing the stack be considered a halt decider as well? ;^)
On 6/24/2025 9:14 PM, Richard Damon wrote:
On 6/24/25 10:30 AM, olcott wrote:
On 6/24/2025 6:21 AM, Richard Damon wrote:
On 6/23/25 8:18 PM, olcott wrote:
On 6/23/2025 6:45 PM, Richard Damon wrote:
On 6/23/25 1:34 PM, olcott wrote:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
On 6/23/2025 6:02 AM, Richard Damon wrote:
In particular, the pattern you are trying to claim to use, is part of
the Halting Program D, DD, and DDD, so it is BY DEFINITION incorrect.
Such as HHH, making it not a decider (when simulated).
If you read the 38 pages you will see how this is incorrect. ChatGPT
"understands" that any program that must be aborted at some point to
prevent its infinite execution is not a halting program.
void DDD()
{
  HHH(DDD);
  return;
}
*dead obvious to any first year computer science student*
My claim is that DDD correctly simulated by any simulating
termination analyzer HHH that can possibly exist cannot possibly
reach its own simulated "return" statement final halt state.
Which is irrelevant, as any machine HHH that does that isn't a
Halt Decider, because it isn't a decider at all.
You aren't bothering to think that through at all. Every HHH
that correctly simulates N instructions of DDD where N < ∞:
(a) Correctly simulates N instructions of DDD
(b) returns some value to its caller.
Right, but N < ∞ is not ALL, and thus not a "Correct Simulation"
It is incorrect to call a correct partial simulation
incorrect.
Sure it is, it isn't the FULL answer.
I guess you think A, B, C. is a correct recitation of the alphabet.
HHH does correctly determine that DDD simulated by HHH
cannot possibly reach its own "return" instruction
final halt state if it were to correctly simulate ∞
instructions of DDD.
But that isn't the question. The question is "Does the program the
input represents Halt?"
That may have been the answer that you memorized
yet that answer is not correct.
It does this using a form of mathematical induction
that takes a finite number of steps.
Nope, only if "a form" includes incorrect forms.
void DDD()
{
  HHH(DDD);
  return;
}
Every first year CS student knows that DDD simulated
by any hypothetical HHH cannot possibly reach its own
simulated "return" statement final halt state.
The problem is you don't have *A* DDD in that case, you have a whole
set of them.
When every element of an infinite set has the
same non-halting property then each element also
has this same non-halting property.
Without including the HHH that a given DDD is built on, you can't
simulate it past the call instruction,
We have already been over this too many times.
Your degrees in electrical engineering may have never
given you as much software engineering skill as a first
year CS student.
You clearly don't understand my skill level, but then I suspect I am so
far above you that you couldn't understand some of my code. For
instance, I am the person the head of the software department at my
work comes to when he has issues with programming. How many of YOUR
coworkers treat you as a prime resource for computer knowledge?
I am only estimating how much an electrical engineer
would be exposed to actual computer science and
programming. I always thought you were an electronics
engineer.
I have a recent software engineering boss that
knew literally nothing about programming. Project
managers quite often have nearly zero technical
background.
It seems that YOU are the one that doesn't understand the first year
CS material.
Note, my MASTER'S degree is in combined Electrical Engineering and
Computer Science,
*I don't see that*
Massachusetts Institute of Technology
MSEE, EE, Electrical Engineering, 1978 - 1982
The Ohio State University
BSEE, Electrical Engineering, 1974 - 1978
and I did a number of courses that you should consider computer
related. As I remember, your degree isn't even a computer science degree.
I have had all of the computer science courses for
a computer science degree and all but one of the math
courses. I never had calculus II. That was about 100
credit hours more than the 125 that I needed to graduate.
I went 6.5 years full time including Summers.
<snip>
Your gross ignorance does not even show that I am incorrect.
Sure I have, you are just too stupid to understand it, because you
seem to have a pathological defect that blocks your understanding,
You do not know what every first year CS student knows.
DDD simulated by HHH is only a little more complex
than infinite recursion.
The fact that you can't show justification for your claims with
citations to any reputable source, only your vague reference to simple
material (that you don't seem to actually know).
Most of my words are self-evidently true, thus verified facts.
DDD correctly simulated by HHH cannot possibly
reach its own simulated "return" statement final
halt state *is one of these verified facts*
On 6/25/2025 2:32 AM, Mikko wrote:
On 2025-06-24 14:09:10 +0000, olcott said:
On 6/24/2025 4:27 AM, joes wrote:
Am Mon, 23 Jun 2025 16:28:23 -0500 schrieb olcott:
On 6/23/2025 2:58 PM, joes wrote:
Am Mon, 23 Jun 2025 12:40:43 -0500 schrieb olcott:
On 6/23/2025 10:34 AM, joes wrote:
Am Mon, 23 Jun 2025 09:30:07 -0500 schrieb olcott:
Thus when HHH is simulating DDD and DDD calls HHH(DDD) the outer HHH
does simulate itself simulating DDD.
Well MY claim is that HHH simulated HHH (itself) doesn't halt.
You know what, it actually IS obvious that HHH can't simulate past the
call to HHH. Thanks for coming to my Ted talk.
Sure, it simulates *into* the call, but it never returns, which is
precisely why you abort it.
[more irrelevant stuff]
void DDD()
{
  HHH(DDD);
  return;
}
*This is the question that HHH(DDD) correctly answers*
Can DDD correctly simulated by any termination analyzer
HHH that can possibly exist reach its own "return" statement
final halt state?
Answering that question prevents HHH(DDD) from answering any
other question because it can only answer one question.
A termination analyzer is required to answer a different
question, which HHH(DDD) does not. Therefore HHH is not a
termination analyzer.
It turns out that the question a halt decider must answer
has always been a bogus question because no TM can ever
take a directly executing TM as its input.
Deciders must always
compute the mapping *from* inputs thus are not allowed to
report on non-inputs. Partial Halt Deciders are actually
required to report on the behavior that their input specifies.
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
This means that every directly executed Turing machine is
outside of the domain of every function computed by any
Turing machine.
On 6/26/2025 3:46 AM, Fred. Zwarts wrote:
Op 25.jun.2025 om 17:42 schreef olcott:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
Why repeat claims that have been proven incorrect?
The input to HHH is a pointer to code, that includes the code of HHH,
including the code to abort and halt. Therefore, it specifies a
halting program.
*No, you are using an incorrect measure*
*I have addressed this too many times*
DDD correctly simulated by HHH cannot possibly
reach its own simulated "return" statement
final halt state *No matter what HHH does*
Therefore the input to HHH(DD) unequivocally
specifies non-halting behavior.
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that, because directly
executed Turing machines cannot possibly be inputs to
other Turing machines, these directly executed
Turing machines have never been in the domain of any
Turing machine.
This excludes every TM from reporting on the behavior
of any directly executed TM. TM's can only report on
the behavior that their finite string input specifies.
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
If your
decider cannot predict whether a computation halts it is not a
halting decider.
This excludes every TM from reporting on the behavior
of any directly executed TM. TM's can only report on
the behavior that their finite string input specifies.
No Turing machine has a behaviour that cannot be specified with a
finite string.
On 6/28/2025 12:41 PM, Richard Damon wrote:
On 6/28/25 9:54 AM, olcott wrote:
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
No, it just says that you don't understand the concept of representation.
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
On 6/28/2025 6:10 PM, Richard Damon wrote:
On 6/28/25 5:52 PM, olcott wrote:
Proven to be counter-factual and over your head.
On 6/28/2025 12:41 PM, Richard Damon wrote:
On 6/28/25 9:54 AM, olcott wrote:
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
No, it just says that you don't understand the concept of
representation.
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
But there is no HHH that correctly simulates the DDD that the HHH that
answers,
void Infinite_Recursion()
{
 Infinite_Recursion();
 return;
}
The exact same code that correctly recognizes infinite
recursion sees this non-terminating pattern after one
single recursive emulation.
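A compilable sketch of that claim, with invented names, is below; the recursive call is routed through a tiny emulator so that the sketch itself terminates, and the only pattern it checks is "same function re-entered with nothing conditional executed in between".

#include <stdio.h>

typedef void (*func_t)(void);

static func_t active = 0;          /* function currently being "emulated" */
static int    pattern_found = 0;

static void emulate(func_t f)
{
  if (f == active) {               /* re-entered with nothing conditional
                                      executed in between */
    pattern_found = 1;             /* recognized after one recursive entry */
    return;
  }
  active = f;
  f();
  active = 0;
}

static void Infinite_Recursion_model(void)
{
  emulate(Infinite_Recursion_model);  /* models the emulated recursive call */
  return;
}

int main(void)
{
  emulate(Infinite_Recursion_model);
  printf("non-terminating pattern detected: %s\n",
         pattern_found ? "yes" : "no");
  return 0;
}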
On 6/28/2025 8:14 PM, Richard Damon wrote:
On 6/28/25 7:19 PM, olcott wrote:
On 6/28/2025 6:10 PM, Richard Damon wrote:
On 6/28/25 5:52 PM, olcott wrote:
Proven to be counter-factual and over your head.
On 6/28/2025 12:41 PM, Richard Damon wrote:
On 6/28/25 9:54 AM, olcott wrote:
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
No, it just says that you don't understand the concept of
representation.
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
But there is no HHH that correctly simulates the DDD that the HHH
that answers,
Really? By what?
Your LIES?
Based on you not knowing what your words mean.
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
The exact same code that correctly recognizes infinite
recursion sees this non-terminating pattern after one
single recursive emulation.
So?
The issue is that THAT input doesn't halt, even when correctly
simulated by a UTM.
But UTM(DDD) will halt if HHH(DDD) returns an answer.
Thus, you are just showing that you are just a stupid troll, that
doesn't understand the basic rules of logic.
Not at all. I am the only one to totally
think the paradox *all the way through*
Everyone else in the world just gives up.
These things are very very important to the notion
of truth itself.
Try to actually PROVE something, which requires showing the ACTUALLY
known statements that you are starting from, and then the truth
preserving steps from them that reach your final statement.
*The proof is the self-evident verified fact that*
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
If you have no idea what recursion is you might not see this.
Your problem is you always start with a strawman statement, as it
seems that is all your brain can process.
When an input is deliberately designed to fool its
decider this presents a problem that no one else
has addressed in 90 years. Everyone followed the
herd and gave up.
On 6/29/2025 6:09 AM, Richard Damon wrote:
On 6/28/25 11:36 PM, olcott wrote:
On 6/28/2025 8:14 PM, Richard Damon wrote:
On 6/28/25 7:19 PM, olcott wrote:
On 6/28/2025 6:10 PM, Richard Damon wrote:
On 6/28/25 5:52 PM, olcott wrote:
Proven to be counter-factual and over your head.
On 6/28/2025 12:41 PM, Richard Damon wrote:
On 6/28/25 9:54 AM, olcott wrote:
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
No, it just says that you don't understand the concept of
representation.
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
But there is no HHH that correctly simulates the DDD that the HHH
that answers,
Really? By what?
Your LIES?
Based on you not knowing what your words mean.
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
The exact same code that correctly recognizes infinite
recursion sees this non-terminating pattern after one
single recursive emulation.
So?
The issue is that THAT input doesn't halt, even when correctly
simulated by a UTM.
But UTM(DDD) will halt if HHH(DDD) returns an answer.
Thus, you are just showing that you are just a stupid troll, that
doesn't understand the basic rules of logic.
Not at all. I am the only one to totally
think the paradox *all the way through*
Everyone else in the world just gives up.
Nope, you are just too stupid to understand what you are talking about.
That statement is what many insane people say when they can't explain
their fantasy world to others.
Did you know that this crap is not any actual rebuttal
and make you look foolish?
The fact that you can't actually show why your statements are true
starting from the definitions of the system, shows that you don't know
what you are talking about.
The fact that you cannot understand that I proved these
things are true is not any actual rebuttal at all.
The best rebuttal that you ever provided was a mere
dogmatic assertion and had no actual supporting reasoning.
Other people say X and you are saying the opposite of X
therefore you are wrong, is not an actual rebuttal.
These things are very very important to the notion
of truth itself.
Then why do you try to support them with lies?
Try to actually PROVE something, which requires showing the ACTUALLY
known statements that you are starting from, and then the truth
preserving steps from them that reach your final statement.
*The proof is the self-evident verified fact that*
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
Not a proof, and that statement just shows your ignorance of what you
are talking about. Using an incorrect definition of non-halting.
You are just ignoring the error pointed out in your statement, which
just shows that you are STUPID.
Your "Proof" is based on strawman and bad definitions, and then
compounded with lies.
Sorry, you have sunk your boat.
Maybe if you studied the field a little bit, instead of deciding that
the world is just wrong, you might see what is up.
Instead, you decided to put yourself into the jail of illogics.
If you have no idea what recursion is you might not see this.
I know what recursion is.
It seems you don't understand what a program is, and that different
programs can act diffferently.
Your problem is you always start with a strawman statement, as it
seems that is all your brain can process.
When an input is deliberately designed to fool its
decider this presents a problem that no one else
has addressed in 90 years. Everyone followed the
herd and gave up.
The fact that an input *CAN* be designed to make a given decider
wrong is what makes the problem uncomputable.
Note, the input does "own" its desider (maybe be pwns is, but not owns
it).
It is designed to make a particular one wrong, but it doesn't have
"its" decider, and the input doesn't change meaning depending on which
version of the decider you give it to, since they all were defined to
use the same representation.
You are just showing how stupid you are.
On 6/29/2025 4:31 AM, Mikko wrote:
On 2025-06-28 23:19:11 +0000, olcott said:
On 6/28/2025 6:10 PM, Richard Damon wrote:
On 6/28/25 5:52 PM, olcott wrote:
Proven to be counter-factual and over your head.
On 6/28/2025 12:41 PM, Richard Damon wrote:
On 6/28/25 9:54 AM, olcott wrote:
On 6/28/2025 7:04 AM, Mikko wrote:
On 2025-06-27 14:19:28 +0000, olcott said:
On 6/27/2025 1:55 AM, Mikko wrote:
On 2025-06-27 02:58:47 +0000, olcott said:
On 6/26/2025 5:16 AM, Mikko wrote:
On 2025-06-25 15:42:36 +0000, olcott said:
On 6/25/2025 2:38 AM, Mikko wrote:
On 2025-06-24 14:39:52 +0000, olcott said:
*ChatGPT and I agree that*
The directly executed DDD() is merely the first step of
otherwise infinitely recursive emulation that is terminated
at its second step.
No matter who agrees, the directly executed DDD is more than
merely the first step of otherwise infinitely recursive
emulation that is terminated at its second step. Not much
more but anyway. After the return of HHH(DDD) there is the
return from DDD which is the last thing DDD does before its
termination.
*HHH(DDD) the input to HHH specifies non-terminating behavior*
The fact that DDD() itself halts does not contradict that
because the directly executing DDD() cannot possibly be an
input to HHH in the Turing machine model of computation,
thus is outside of the domain of HHH.
The input in HHH(DDD) is the same DDD that is executed in DDD()
so the behaviour specified by the input is the behaviour of
the directly executed DDD, a part of which is the behaviour of the
HHH that DDD calls.
If HHH does not report about DDD but instead reports about itself
or its own actions it is not a partial halt decider nor a partial
termination analyzer, as those are not allowed to report on their
own behaviour more than "cannot determine".
Functions computed by Turing Machines are required to compute
the mapping from their inputs and not allowed to take other
executing Turing machines as inputs.
There is no restriction on the functions.
counter factual.
That is not a magic spell to create a restriction on functions.
A Turing machine is required
to compute the function identified in its specification and no other
function. For the halting problem the specification is that a halting
decider must compute the mapping that maps to "yes" if the computation
described by the input halts when directly executed.
No one ever bothered to notice that because directly
executed Turing machines cannot possibly be inputs to
other Turing machines that these directly executed
Turing machines have never been in the domain of any
Turing machine.
Irrelevant. They are the domain of the halting problem.
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
No, it just says that you don't understand the concept of
representation.
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halts state.
But there is no HHH that correctly simulates the DDD that the HHH
that answers,
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
The exact same code that correctly recognizes infinite
recursion sees this non-terminating pattern after one
single recursive emulation.
Recursive simulation is not the same as recorsive call. Consequently
what is correct about recursive calls may be incorrect about
recursive simulation.
Actually from the POV of HHH it is exactly the same
as if DDD() called HHH(DDD) that simply calls DDD().
HHH has no idea that DDD is calling itself.
It sees DDD call the same function twice in sequence
with no conditional branch instructions inbetween the
beginning of DDD and its called to HHH(DDD).
There are conditional branch instructions in HHH
that HHH does ignore. These are irrelevant. They
cannot possibly cause the simulated DDD to reach
its own simulated final halt state, the correct
measure of halting.
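A toy illustration of the abort rule as olcott describes it above: flag a trace in which the same function is called a second time with no conditional branch executed in between. This is only a sketch of the stated rule over a hand-made trace, not the actual HHH / Halt7.c code; the event names and addresses are made up for the example.

#include <stdio.h>

/* Hypothetical sketch of the described abort rule: scan a recorded
 * trace of simulated events and flag the case where the same function
 * is called again with no conditional branch executed since its
 * previous call. */

enum kind { CALL, COND_BRANCH, OTHER };

struct event { enum kind k; unsigned addr; };

static int repeats_without_branch(const struct event *t, int n)
{
    for (int i = 0; i < n; i++) {
        if (t[i].k != CALL) continue;
        for (int j = i + 1; j < n; j++) {
            if (t[j].k == COND_BRANCH) break;              /* rule reset  */
            if (t[j].k == CALL && t[j].addr == t[i].addr)
                return 1;                                   /* pattern hit */
        }
    }
    return 0;
}

int main(void)
{
    /* Trace shaped like the described DDD: a call, then another call to
     * the same (arbitrary example) address, no conditional branch in
     * between at the level being traced. */
    struct event trace[] = {
        { CALL, 0x15d2 }, { OTHER, 0 }, { CALL, 0x15d2 }
    };
    printf("non-terminating pattern detected: %s\n",
           repeats_without_branch(trace, 3) ? "yes" : "no");
    return 0;
}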
On 6/29/2025 4:27 AM, Mikko wrote:
On 2025-06-28 13:54:19 +0000, olcott said:
That they are in the domain of the halting problem
and not in the domain of any Turing machine proves
that the requirement of the halting problem is incorrect.
The halting problem can be partially solved with partial halt deciders.
A computation that cannot be determined to halt or not to halt with
some partial halt decider can be determined with another partial halt
decider.
OK then is the Goldbach conjecture true or false?
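To make "partial halt decider" concrete, here is an illustrative sketch (not from anyone's codebase in this thread) that answers correctly for straight-line toy programs and says "cannot determine" for everything else; the toy instruction set and return codes are assumptions of the example.

#include <stdio.h>

/* Illustrative partial halt decider: it answers correctly on some
 * inputs and says "unknown" on the rest.  The toy programs are arrays
 * of instructions; a program with no backward jump must run off its
 * end and therefore halts. */

enum op { NOP, JUMP_BACK, HALT_OP };

static int partial_decider(const enum op *prog, int len)
{
    for (int i = 0; i < len; i++)
        if (prog[i] == JUMP_BACK)
            return -1;          /* -1 == "cannot determine"          */
    return 1;                   /*  1 == "halts": straight-line code */
}

int main(void)
{
    enum op p1[] = { NOP, NOP, HALT_OP };
    enum op p2[] = { NOP, JUMP_BACK };
    printf("p1: %d, p2: %d\n",
           partial_decider(p1, 3), partial_decider(p2, 2));
    return 0;
}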
On 6/29/2025 4:29 AM, Mikko wrote:
On 2025-06-28 21:52:06 +0000, olcott said:
There exists no finite number of steps where N steps of
DDD are correctly simulated by HHH and this simulated DDD
reaches its simulated "return" statement final halt state.
That is a statement about HHH that does not tell about halting of DDD.
That this is too difficult for you to
understand counts as zero rebuttal whatsoever.
void DDD()
{
HHH(DDD);
return;
}
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
The x86 source code of DDD specifies that this emulated
DDD cannot possibly reach its own emulated "ret" instruction
final halt state when emulated by HHH according to the
semantics of the x86 language.
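For reference, the call structure being argued about can be run directly if HHH is reduced to a stub. The real HHH is said to emulate the x86 image of its argument; the hard-coded verdict below is only a placeholder so the surrounding control flow (HHH(DDD) returning 0, and the directly executed DDD() returning) can be observed.

#include <stdio.h>

/* Sketch of the call structure under discussion, with HHH reduced to a
 * stub.  The verdict is hard-coded only so the control flow can be run;
 * it is not a claim about how the real HHH computes its answer. */

typedef void (*func)(void);

static int HHH(func p)
{
    (void)p;
    /* stand-in for: simulate p, detect the recursive-simulation
       pattern, abort the simulation, and report "does not halt" */
    return 0;
}

static void DDD(void)
{
    HHH(DDD);
    return;
}

int main(void)
{
    printf("HHH(DDD) = %d\n", HHH(DDD));  /* 0 reported to main        */
    DDD();                                 /* direct execution: returns */
    printf("directly executed DDD() halted\n");
    return 0;
}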
On 29 Jun 2025 at 16:47, olcott wrote:
There are conditional branch instructions in HHH
that HHH does ignore. These are irrelevant. They
cannot possibly cause the simulated DDD to reach
its own simulated final halt state, the correct
measure of halting.
Exactly these conditional branch instructions are the cause of the abort done by HHH, which then
returns to DDD, which then halts.
This is shown by world-class simulators and also by HHH1, which does count these conditional branch
instructions and, therefore, is able to reach the end of the simulation.
That you do not understand it does not make it incorrect.
It means that HHH is incorrect to abort the simulation before it can see that the simulated HHH
would do the abort.
These conditional branch instructions are part of the specification in the input.
That HHH does not use them does not change the specification; it only demonstrates the failure of
HHH to reach the natural end of the simulation.
Your only rebuttal in all these years is repeating the same claims without any evidence. That does
not count as a rebuttal.
On 6/30/2025 2:35 AM, Fred. Zwarts wrote:
Exactly these conditional branch instructions are the cause of the
abort done by HHH, which then returns to DDD, which then halts.
*Counter-factual*
void DDD()
{
 HHH(DDD);
 return;
}
int main()
{
 HHH(DDD);
}
*In the above nothing returns to DDD*
*I always quit at the first counter-factual error*
As soon as HHH aborts its outermost DDD simulation
every recursive simulation immediately stops and
does not return to anywhere. Then HHH returns 0 to main.
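A hypothetical sketch of the point being made here: the nested simulations exist only as bookkeeping inside the one real HHH invocation, so when that invocation aborts and returns its verdict, none of the recorded levels resumes. This is not the Halt7.c machinery, just an analogy in plain C with made-up names.

#include <stdio.h>

/* The nested "simulations" are entries in a table owned by the outer
 * analyzer.  Aborting means the analyzer stops adding entries and
 * returns; no recorded entry resumes afterwards. */

#define MAX_LEVELS 8

static int HHH_like_analyzer(void)
{
    int simulated_levels[MAX_LEVELS];
    int depth = 0;

    /* each pass stands for "the simulated DDD calls HHH(DDD) again" */
    while (depth < 2) {
        simulated_levels[depth] = depth + 1;
        printf("recorded simulated DDD level %d\n", simulated_levels[depth]);
        depth++;
    }
    printf("abort: %d recorded levels discarded, nothing resumes\n", depth);
    return 0;   /* verdict reported to the caller */
}

int main(void)
{
    printf("analyzer returned %d to main\n", HHH_like_analyzer());
    return 0;
}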
On 6/30/2025 11:42 AM, Mike Terry wrote:
On 30/06/2025 08:35, Fred. Zwarts wrote:
Exactly these conditional branch instructions are the cause of the
abort done by HHH, which then returns to DDD, which then halts.
This is shown by world-class simulators and also by HHH1, which does
count these conditional branch instructions and, therefore, is able
to reach the end of the simulation.
HHH1 does not count the conditional branch instructions. The
explanation for it reaching the end of the simulation is that
HHH1(DDD)'s input does not call itself in recursive simulation
like the input to HHH(DDD) does call itself in recursive simulation.
All the chatbots (even the stupid one) know that the input
to HHH(DDD) calls HHH(DDD) in recursive simulation preventing
DDD correctly simulated by HHH from reaching its own simulated
"return" instruction final halt state.
So it is either the case that everyone here is more stupid than
a stupid chatbot or these chatbots make a detectable error that
can be actually proven to be an error.
I say that DDD correctly simulated by HHH specifies behavior that
cannot possibly reach its "return" instruction final halt state.
*There has been no actual rebuttal to this*
Rebuttals that rely on counter-factual assumptions
do not count as actual rebuttals.
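The structural asymmetry claimed here can be shown with stubs: DDD's code calls HHH by name, so handing DDD to a different analyzer (HHH1) does not change whom DDD calls. The verdicts below are hard-coded placeholders chosen only to mirror the outcomes described in the thread; the real analyzers are said to emulate x86 code, which this sketch does not attempt.

#include <stdio.h>

/* Sketch of the asymmetry under discussion: only HHH analyzes an input
 * that calls back into the analyzer doing the analysis; HHH1 analyzes
 * an input that calls someone else.  Both analyzers are stubs here. */

typedef void (*func)(void);

static int HHH(func p)  { (void)p; return 0; }  /* stub verdict */
static int HHH1(func p) { (void)p; return 1; }  /* stub verdict */

static void DDD(void)
{
    HHH(DDD);   /* fixed in DDD's code: a call to HHH, never to HHH1 */
    return;
}

int main(void)
{
    /* Whichever analyzer is handed DDD, the DDD being analyzed still
       calls HHH; that is the difference the two sides argue about. */
    printf("HHH(DDD)  = %d\n", HHH(DDD));
    printf("HHH1(DDD) = %d\n", HHH1(DDD));
    return 0;
}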
On 6/30/2025 4:21 AM, Mikko wrote:
On 2025-06-29 14:38:51 +0000, olcott said:
The halting problem can be partially solved with partial halt deciders.
A computation that cannot be determined to halt or not to halt with
some partial halt decider can be determined with another partial halt
decider.
OK then is the Goldbach conjecture true or false?
I'll post the answer when I find a halting oracle.
If it is true it requires an infinite proof, thus non-halting.
If it is false a finite proof will find the exception.
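The standard connection behind this exchange: a program that searches for a Goldbach counterexample halts if and only if the conjecture is false, so a genuine halt decider (or halting oracle) for it would settle the conjecture. The sketch below adds an artificial search bound only so that it terminates when run; the unbounded version is the one whose halting status matters.

#include <stdio.h>

/* Search for an even number >= 4 that is not the sum of two primes. */

static int is_prime(unsigned n)
{
    if (n < 2) return 0;
    for (unsigned d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

static int has_goldbach_split(unsigned n)   /* n even, n >= 4 */
{
    for (unsigned p = 2; p <= n / 2; p++)
        if (is_prime(p) && is_prime(n - p)) return 1;
    return 0;
}

int main(void)
{
    /* Unbounded version: for (unsigned n = 4; ; n += 2) ...
       would halt exactly when a counterexample is found. */
    for (unsigned n = 4; n <= 10000; n += 2) {
        if (!has_goldbach_split(n)) {
            printf("counterexample: %u\n", n);
            return 1;
        }
    }
    printf("no counterexample up to 10000\n");
    return 0;
}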
On 7/1/2025 3:36 AM, Fred. Zwarts wrote:
On 1 Jul 2025 at 00:00, olcott wrote:
HHH1 does not count the conditional branch instructions. The
explanation for it reaching the end of the simulation is that
HHH1(DDD)'s input does not call itself in recursive simulation
like the input to HHH(DDD) does call itself in recursive simulation.
Indeed, the failure of the programmer is that he thinks that a
simulator can simulate itself correctly in recursive simulation.
OK I give up on you. I can't stand talking
to people that insist on denying verified facts.