On 10/22/2024 10:18 AM, joes wrote:
On Tue, 22 Oct 2024 08:47:39 -0500, olcott wrote:
On 10/22/2024 4:50 AM, joes wrote:
On Mon, 21 Oct 2024 22:04:49 -0500, olcott wrote:
On 10/21/2024 9:42 PM, Richard Damon wrote:
On 10/21/24 7:08 PM, olcott wrote:
On 10/21/2024 6:05 PM, Richard Damon wrote:
On 10/21/24 6:48 PM, olcott wrote:
On 10/21/2024 5:34 PM, Richard Damon wrote:
On 10/21/24 12:29 PM, olcott wrote:
On 10/21/2024 10:17 AM, joes wrote:
On Mon, 21 Oct 2024 08:41:11 -0500, olcott wrote:
On 10/21/2024 3:39 AM, joes wrote:

[olcott] Did ChatGPT generate that? If it did then I need *ALL the input that caused it to generate that*.

[joes] It's not like it will deterministically regenerate the same output.

[olcott] I specifically asked it to verify that its key assumption is correct and it did.

[Richard Damon] No, someone using some REAL INTELLIGENCE, as opposed to a program using "artificial intelligence" that had been loaded with false premises and other lies.

[olcott] I asked it if what it was told was a lie and it explained how what it was told is correct.

[Richard Damon] No, it said that given what you told it (which was a lie)...

[joes] "naw, I wasn't lied to, they said they were saying the truth" sure buddy.

[olcott] ChatGPT computes the truth and you can't actually show otherwise.

[Richard Damon] Because ChatGPT doesn't care about lying.

[joes] HAHAHAHAHA there isn't anything about truth in there, prove me wrong.

[olcott] I believe that the "output" joes provided was fake on the basis that she did not provide the input to derive that output and did not use the required basis that was on the link.

[Richard Damon] Because what you are asking for is nonsense. Of course an AI that has been programmed with lies might repeat the lies. When it is told the actual definition, after being told your lies, and asked if your conclusion could be right, it said No. Thus, it seems by your logic, you have to admit defeat, as the AI, after being told your lies, still was able to come up with the correct answer: that DDD will halt, and that HHH is just incorrect to say it doesn't.

[joes] That seems to indicate that you are admitting that you cheated when you discussed this with ChatGPT. You gave it a faulty basis and then argued against that. I definitely typed something out in the style of an LLM instead of my own words /s. Just no. Do you believe that I didn't write this myself after all?

[olcott] You cannot show that my premises are actually false. To show that they are false would at least require showing that they contradict each other.

[Richard Damon] If you want me to pay more attention to what you say, you first need to return the favor, and at least TRY to find an error in what I say, and be based on more than just that you think that can't be right. But you can't do that, as you don't actually know any facts about the field that you can point to qualified references.

[joes] Accepting your premises makes the problem uninteresting. lol

[olcott] They are also conventional within the context of software engineering. That software engineering conventions seem incompatible with computer science conventions may refute the latter.

[joes] Just because you don't like the undecidability of the halting problem?

[olcott] That a halt decider must report on the behavior that it itself is contained within seems to be an incorrect convention.
[olcott]
u32 HHH1(ptr P) // line 721
u32 HHH(ptr P)  // line 801
The above two functions have identical C code except for their names. The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt. This conclusively proves that the pathological relationship between DDD and HHH makes a difference in the behavior of DDD.

[joes] That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may give different answers, but then exactly one of them must be wrong. Do they both call HHH? How does their execution differ?

[olcott]
void DDD()
{
HHH(DDD);
return;
}
*It is a verified fact that*
(a) Both HHH1 and HHH emulate DDD according to the semantics of the x86 language.
(b) HHH and HHH1 have verbatim identical C source code, except for their differing names.
(c) DDD emulated by HHH has different behavior than DDD emulated by HHH1.
(d) Each DDD *correctly_emulated_by* any HHH that this DDD calls cannot possibly return no matter what this HHH does.
On 10/22/2024 10:02 PM, Richard Damon wrote:
On 10/22/24 11:57 AM, olcott wrote:
*It is a verified fact that*
(a) Both HHH1 and HHH emulate DDD according to the
semantics of the x86 language.
But HHH only does so INCOMPLETELY.
(b) HHH and HHH1 have verbatim identical c source
code, except for their differing names.
So? The fact that they give different results just proves that they must have a "hidden input" that gives them that different behavior, so they can't actually be deciders.
HHH1 either references itself with the name HHH1 instead of the name HHH, and so has DIFFERENT source code, or your code uses assembly to extract the address that it is running at, making that address a "hidden input" to the code.
So, you just proved that you never meet your basic requirements, and everything is just a lie.
(c) DDD emulated by HHH has different behavior than
DDD emulated by HHH1.
No, just less of it because HHH aborts its emulation.
Aborted emulation doesn't provide final behavior.
(d) Each DDD *correctly_emulated_by* any HHH that
this DDD calls cannot possibly return no matter
what this HHH does.
No, it cannot be emulated by that HHH to that point, but that doesn't mean that the behavior of program DDD doesn't get there.
Halt deciding / termination analysis is about the behavior of the program described, and thus all you are showing is that you aren't working on either of those problems, but have just been lying.
Note, your argument is using an equivocation on the term "correctly emulated", as you are trying to claim a correct emulation by just a partial emulation, but also trying to claim a result that only comes from COMPLETE emulation, that of determining final behavior.
This, again, just proves that your whole proof is based on lies.
I hardly glanced at any of that.
*This verified fact is a key element of my point*
When HHH1(DDD) emulates DDD this DDD reaches its final state.
When HHH(DDD) emulates DDD this DDD cannot possibly reach its
final state.
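The asymmetry olcott describes here can also be sketched with trivial stand-ins (again hypothetical, not the real emulators): DDD calls HHH by name, so a decider named HHH re-enters itself when it "emulates" DDD, while the textually identical HHH1 never re-enters itself. A depth cap stands in for HHH's abort decision.

```c
#include <stdio.h>

typedef void (*ptr)(void);
void DDD(void);

static int depth; /* recursion depth of the "emulation" */

/* Stand-in for HHH: "emulating" P here just means calling it,
   with a depth cap so the program terminates. Because DDD
   calls HHH, this re-enters itself through DDD. */
unsigned int HHH(ptr P)
{
    if (++depth > 3) return 0; /* abort: assumed non-halting */
    P();
    return 1;
}

/* Textually identical except for the name, but DDD still
   calls HHH, not HHH1, so no self-reference occurs here. */
unsigned int HHH1(ptr P)
{
    P();
    return 1;
}

void DDD(void)
{
    HHH(DDD);
    return;
}

int main(void)
{
    depth = 0;
    unsigned int r = HHH1(DDD);
    printf("HHH1(DDD) = %u, nested HHH calls seen = %d\n", r, depth);
    return 0;
}
</antml>```

In this sketch HHH1(DDD) returns 1 (DDD reached its return) while the nested HHH calls recursed until the cap aborted them, which is the observed difference, with no "hidden input" beyond which function DDD names.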
On 10/22/2024 10:47 PM, Richard Damon wrote:
On 10/22/24 11:25 PM, olcott wrote:
*This verified fact is a key element of my point*
When HHH1(DDD) emulates DDD this DDD reaches its final state.
When HHH(DDD) emulates DDD this DDD cannot possibly reach its
final state.
But HHH aborts its emulation, and up to that point saw EXACTLY the same sequence of steps that HHH1 saw (or you have lied about them being identical and pure functions).
*That double talk dodges the point that I made*
DDD emulated by HHH cannot possibly reach
its final state no matter WTF that HHH does.
Whether HHH aborts or plays bingo has
NO EFFECT WHATSOEVER ON THIS!
I know that you know that the whole "pure function"
thing only has to do with the return value from HHH,
thus HAS NO EFFECT WHATSOEVER ON WHICH STEPS ARE EMULATED.
On 10/23/2024 6:12 AM, Richard Damon wrote:
On 10/23/24 12:04 AM, olcott wrote:
*That double talk dodges the point that I made*
What "double talk"?
Your whole logic is just double talk.
You confuse your made-up fantasy for reality and lock yourself into
your insanity.
DDD emulated by HHH cannot possibly reach
its final state no matter WTF that HHH does.
There is your Equivocation again!
"Reaching Final State" is a property of the execution of complete
emulation of a program.
So, since when we look at that for a DDD that calls an HHH that
returns an answer, we find it reaches such a final state, your claim
is just a blatant lie. Not just an honest mistake, as you have been
told the answer repeatedly, but in your total stupidity you reject the
truth to keep your lies.
DDD emulated by HHH according to the semantics of the
x86 language cannot possibly reach its own "return"
instruction no matter WTF that HHH does.
When termination analyzers analyze C functions for
termination the measure of termination is reaching
the "return" statement.
Whether HHH aborts or plays bingo has
NO EFFECT WHATSOEVER ON THIS!
But that has been proven wrong by Fibble.
If HHH emulates itself by checking each possiblity, it can figure out
what HHH must do to be correct, and then do it.
Only by your equivocation on what the sentence means can you get your
answer.
Yes, no emulation of DDD by HHH that tries to emulate each instruction
by the definition of the x86 language gets there, but that doesn't show
that DDD never gets there, as a partial emulation is NOT the "behavior"
of the thing emulated.
You are just proving yourself to be a liar.
I know that you know that the whole "pure function"
thing only has to do with the return value from HHH,
thus HAS NO EFFECT WHAT-SO-EVER ON WHICH STEPS ARE EMULATED.
Nope, it affects the behavior of HHH.
You are so stupid you don't even know what your own program does.
You are just proving you don't understand the basics of what
programming is about.
You are just proving you are just totally stupid.
On 10/23/2024 6:16 PM, Richard Damon wrote:
On 10/23/24 8:33 AM, olcott wrote:
On 10/23/2024 6:12 AM, Richard Damon wrote:
On 10/23/24 12:04 AM, olcott wrote:
On 10/22/2024 10:47 PM, Richard Damon wrote:
On 10/22/24 11:25 PM, olcott wrote:
On 10/22/2024 10:02 PM, Richard Damon wrote:
On 10/22/24 11:57 AM, olcott wrote:
On 10/22/2024 10:18 AM, joes wrote:
Am Tue, 22 Oct 2024 08:47:39 -0500 schrieb olcott:
On 10/22/2024 4:50 AM, joes wrote:Just no. Do you believe that I didn't write this myself after >>>>>>>>>> all?
Am Mon, 21 Oct 2024 22:04:49 -0500 schrieb olcott:
On 10/21/2024 9:42 PM, Richard Damon wrote:
On 10/21/24 7:08 PM, olcott wrote:
On 10/21/2024 6:05 PM, Richard Damon wrote:
On 10/21/24 6:48 PM, olcott wrote:
On 10/21/2024 5:34 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>> On 10/21/24 12:29 PM, olcott wrote:
On 10/21/2024 10:17 AM, joes wrote:
Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb olcott: >>>>>>>>>>>>>>>>>>>>> On 10/21/2024 3:39 AM, joes wrote:
It's not like it will deterministically regenerate the same >>>>>>>>>>>> output.Did ChatGPT generate that?
If it did then I need *ALL the input that caused it >>>>>>>>>>>>>>>>>>> to generate
that*
"naw, I wasn't lied to, they said they were saying the >>>>>>>>>>>> truth" sureit was told is correct.No, it said that given what you told it (which was a lie) >>>>>>>>>>>>>>> I asked it if what it was told was a lie and it explained >>>>>>>>>>>>>>> how whatNo, someone using some REAL INTELEGENCE, as opposed to >>>>>>>>>>>>>>>>>> a programI specifically asked it to verify that its key >>>>>>>>>>>>>>>>> assumption is
using "artificial intelegence" that had been loaded >>>>>>>>>>>>>>>>>> with false
premises and other lies.
correct and it did.
buddy.
HAHAHAHAHA there isn't anything about truth in there, prove >>>>>>>>>>>> me wrongBecause Chat GPT doesn't care about lying.ChatGPT computes the truth and you can't actually show >>>>>>>>>>>>> otherwise.
cheated when youI definitely typed something out in the style of an LLM >>>>>>>>>>>> instead of myBecause what you are asking for is nonsense.I believe that the "output" Joes provided was fake on the >>>>>>>>>>>>> basis that
Of course an AI that has been programmed with lies might >>>>>>>>>>>>>> repeat the
lies.
When it is told the actual definition, after being told >>>>>>>>>>>>>> your lies,
and asked if your conclusion could be right, it said No. >>>>>>>>>>>>>> Thus, it seems by your logic, you have to admit defeat, as >>>>>>>>>>>>>> the AI,
after being told your lies, still was able to come up with >>>>>>>>>>>>>> the
correct answer, that DDD will halt, and that HHH is just >>>>>>>>>>>>>> incorrect to
say it doesn't.
she did not provide the input to derive that output and did >>>>>>>>>>>>> not use
the required basis that was on the link.
own words /s
Accepting your premises makes the problem uninteresting. >>>>>>>>>>> That seems to indicate that you are admitting that youIf you want me to pay more attention to what you say, you >>>>>>>>>>>>>> first needYou cannot show that my premises are actually false. >>>>>>>>>>>>> To show that they are false would at least require showing >>>>>>>>>>>>> that they
to return the favor, and at least TRY to find an error in >>>>>>>>>>>>>> what I say,
and be based on more than just that you think that can't >>>>>>>>>>>>>> be right.
But you can't do that, as you don't actually know any >>>>>>>>>>>>>> facts about the
field that you can point to qualified references.
contradict each other.
discussed this with ChatGPT. You gave it a faulty basis and >>>>>>>>>>> then argued
against that.
They also conventional within the context of softwarelol
engineering. That
software engineering conventions seem incompatible with
computer science
conventions may refute the latter.
The a halt decider must report on the behavior that itself is >>>>>>>>>>> containedJust because you don't like the undecidability of the halting >>>>>>>>>> problem?
within seems to be an incorrect convention.
u32 HHH1(ptr P) // line 721That makes no sense. DDD halts or doesn't either way. HHH and >>>>>>>>>> HHH1 may
u32 HHH(ptr P) // line 801
The above two functions have identical C code except for >>>>>>>>>>> their name.
The input to HHH1(DDD) halts. The input to HHH(DDD) does not >>>>>>>>>>> halt. This
conclusively proves that the pathological relationship
between DDD and
HHH makes a difference in the behavior of DDD.
give different answers, but then exactly one of them must be >>>>>>>>>> wrong.
Do they both call HHH? How does their execution differ?
void DDD()
{
HHH(DDD);
return;
}
*It is a verified fact that*
(a) Both HHH1 and HHH emulate DDD according to the
semantics of the x86 language.
But HHH only does so INCOMPLETELY.
(b) HHH and HHH1 have verbatim identical c source
code, except for their differing names.
So? the fact the give different results just proves that they
must have a "hidden input" thta gives them that different
behavior, so they can't be actually deciders.
HHH1 either references itself with the name HHH1, instead of the >>>>>>>> name HHH, so has DIFFERENT source code, or your code uses
assembly to extract the address that it is running at, making
that address a "hidden input" to the code.
So, you just proved that you never meet your basic requirements, >>>>>>>> and everything is just a lie.
(c) DDD emulated by HHH has different behavior than
DDD emulated by HHH1.
No, just less of it because HHH aborts its emulation.
Aborted emulation doesn't provide final behavior.
(d) Each DDD *correctly_emulated_by* any HHH that
this DDD calls cannot possibly return no matter
what this HHH does.
No, it can not be emulated by that HHH to that point, but that >>>>>>>> doesn't mean that the behavior of program DDD doesn't get there. >>>>>>>>
Halt Deciding / Termination Analysis is about the behavior of
the program described, and thus all you are showing is that you >>>>>>>> aren't working on either of those problems, but have just been >>>>>>>> lying.
Note, your argument is using a equivocation on the term
"correctly emulated" as you are trying to claim a correct
emulation by just a partial emulation, but also trying to claim >>>>>>>> a result that only comes from COMPLETE emulation, that of
determining final behavior.
This again, just prove that you whole proof is based on lies.
I didn't hardly glance at any of that.
*This verified fact is a key element of my point*
When HHH1(DDD) emulates DDD this DDD reaches its final state.
When HHH(DDD) emulates DDD this DDD cannot possibly reach its
final state.
But HHH aborts its emulation, and to that point saw EXACTLY the
same sequence of steps that HHH1 saw (or you have lied about them
being identical and pure funcitons).
*That double talk dodges the point that I made*
What "Double talk"?
Your whole logic is just double talk.
You confuse your made up fanstay for reality and lock yourself into
your insanity.
DDD emulated by HHH cannot possibly reach
its final state no matter WTF that HHH does.
There is your equivocation again!
"Reaching Final State" is a property of the execution or the complete emulation of a program.
So, since when we look at that for a DDD that calls an HHH that returns an answer, we find it reaches such a final state, your claim is just a blatant lie. Not just an honest mistake, since you have been told the answer repeatedly, but in your total stupidity you reject the truth to keep your lies.
DDD emulated by HHH according to the semantics of the x86 language cannot possibly reach its own "return" instruction no matter WTF that HHH does.
Then your logic is just inconsistent, as HHH cannot be following the semantics of the x86 language and then do "WTF".
We have already been through this too many times.
I just found out that ChatGPT also has ADD. When
you hit 4000 words of input and output it starts
forgetting things. Maybe you are this same way?
It is freaking amazing that when you stay within
this 4000 word limit its reasoning is superb.
You seem to be having a hard time understanding the
above 24 words.
You can't seem to understand that a correct emulation of zero to infinity steps by each element of an infinite set of HHH emulators results in zero instances of DDD reaching its own "return" instruction.
ChatGPT does completely understand this.
On 10/23/2024 9:48 PM, Richard Damon wrote:
On 10/23/24 9:51 PM, olcott wrote:
ChatGPT does completely understand this.
But, it is just a stupid idiot that has been taught to repeat what it
has been told.
It is a brilliant genius that seems to infallibly deduce all
of the subtle nuances of each of the consequences on the basis
of a set of premises.
The key is that the conversation cannot have more than 4000
words. When 4000 words are exceeded ChatGPT starts acting like
it has dementia.
It seems you are nothing but a stupid idiot that believes what you have told yourself.
If this were true then someone would have been able to find an actual error in my work.
Mike can't even seem to pay attention to what changes the
execution trace of DDD emulated by HHH relative to DDD
emulated by HHH1. All that he can do is take an incorrect
guess without bothering to pay attention.
All you are doing with all this talk about Chat GPT agreeing with you is proving that you know your argument is so bad, the only thing with any form of intelligence that will believe you is a program with only artificial intelligence.
Sorry, you are just proving how stupid your ideas are.
ChatGPT does seem to infallibly understand every nuance of the
consequences that follow from my premises. No one can show otherwise.
ChatGPT can also validate the most important key assumptions of
these premises. No one can show otherwise.
On 10/24/2024 6:23 PM, Richard Damon wrote:
On 10/24/24 9:36 AM, olcott wrote:
On 10/23/2024 9:48 PM, Richard Damon wrote:
On 10/23/24 9:51 PM, olcott wrote:
[...]
It is a brilliant genius that seems to infallibly deduce all
of the subtle nuances of each of the consequences on the basis
of a set of premises.
I guess you don't understand how Large Language Models work, do you.
It has NO actual intelligence, or ability to "deduce" nuances; it is just a massive pattern matching system.
All you are doing is proving how little you understand about what you are talking about.
Remember, at the bottom of the page is a WARNING that it can make mistakes. And feeding it LIES, like you do, is one easy way to cause that.
There is much more to this than your superficial understanding. Here is a glimpse:
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
The bottom line is that ChatGPT made no error in its
evaluation of my work when this evaluation is based on
pure reasoning. It is only when my work is measured
against arbitrary dogma that cannot be justified with
pure reasoning that makes me and ChatGPT seem incorrect.
If we use your same approach to these things we could say that ZFC stupidly fails to have a glimmering of understanding of Naive set theory. From your perspective ZFC is a damned liar.
On 10/25/2024 7:27 AM, Richard Damon wrote:
On 10/24/24 8:56 PM, olcott wrote:
[...]
There is much more to this than your superficial understanding. Here is a glimpse:
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
The bottom line is that ChatGPT made no error in its evaluation of my work when this evaluation is based on pure reasoning. It is only when my work is measured against arbitrary dogma that cannot be justified with pure reasoning that makes me and ChatGPT seem incorrect.
If we use your same approach to these things we could say that ZFC stupidly fails to have a glimmering of understanding of Naive set theory. From your perspective ZFC is a damned liar.
The article says no such thing.
*large-language-models-amazing-but-nobody-knows-why*
They are much smarter and can figure out all kinds of
things. Their original designers have no idea how they
do this.
In fact, it comments on the problem of "overfitting", where the processing gets the wrong answers because it over-generalizes.
This is because the modeling process has no concept of actual meaning, and thus of truth, only the patterns that it has seen.
AIs don't "Reason"; they pattern match and compare.
Note, that "arbitrary dogma" that you try to reject is the RULES and DEFINITIONS of the system that you claim to be working in.
How about we stipulate that the system that I am working in is termination analysis for the x86 language, as my system software says in its own name: x86utm.
By your logic, Trump was right that he won, because he was saying we need to ignore the "dogma" of the truth and rules about voting, but instead use the fact that he got more votes than anyone else prior. That is the "proof" that he must have won, and the fact that Biden got more than him is just a misuse of "dogma".
Sorry, you are just proving how utterly STUPID and IGNORANT you are, and that your logic has absolutely ZERO basis.
Your new dependence on Chat GPT just shows your stupidity.
I have no need to depend on ChatGPT, yet ChatGPT does correctly make every rebuttal of my work look ridiculously foolish. Because of its preexisting knowledge of software development it can even verify that the basis that it was given is a correct basis. What you call lies are commonly known verified facts.
https://www.researchgate.net/publication/385090708_ChatGPT_Analyzes_Simulating_Termination_Analyzer
On 10/25/2024 10:45 AM, Richard Damon wrote:
On 10/25/24 9:37 AM, olcott wrote:
On 10/25/2024 7:27 AM, Richard Damon wrote:
On 10/24/24 9:04 PM, olcott wrote:
On 10/24/2024 6:23 PM, Richard Damon wrote:
[...]
I definitely typed something out in the style of an LLM instead of my own words /s
That seems to indicate that you are admitting that you cheated when you discussed this with ChatGPT. You gave it a faulty basis and then argued against that.
You cannot show that my premises are actually false. To show that they are false would at least require showing that they contradict each other.
Accepting your premises makes the problem uninteresting.
They are also conventional within the context of software engineering. That software engineering conventions seem incompatible with computer science conventions may refute the latter.
lol
That a halt decider must report on the behavior that it is itself contained within seems to be an incorrect convention.
Just because you don't like the undecidability of the halting problem?
u32 HHH1(ptr P) // line 721
u32 HHH(ptr P)  // line 801
The above two functions have identical C code except for their name. The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt. This conclusively proves that the pathological relationship between DDD and HHH makes a difference in the behavior of DDD.
That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may give different answers, but then exactly one of them must be wrong. Do they both call HHH? How does their execution differ?
void DDD()
{
HHH(DDD);
return;
}
*It is a verified fact that*
(a) Both HHH1 and HHH emulate DDD according to the semantics of the x86 language.
But HHH only does so INCOMPLETELY.
(b) HHH and HHH1 have verbatim identical C source code, except for their differing names.
So? The fact that they give different results just proves that they must have a "hidden input" that gives them that different behavior, so they can't actually be deciders.
[...]
DDD emulated by HHH according to the semantics of the x86 language cannot possibly reach its own "return" instruction no matter WTF that HHH does.
Then your logic is just inconsistent, as HHH cannot be following the semantics of the x86 language and then do "WTF".
For HHH to emulate its input in a way that shows the actual behavior of that input, it must not EVER abort its emulation. PERIOD.
Your logic is just WTF, and based on the assumption that HHH can do two different things at the same time with the same code, which is just a LIE.
When termination analyzers analyze C functions for termination the measure of termination is reaching the "return" statement.
Right, when the BEHAVIOR of the function is to do so, and that behavior is DEFINED to be the results of direct execution.
That is not DDD emulated by HHH according to the semantics of the x86 language. That is DDD emulated by HHH1 according to the semantics of the x86 language.
Depends on which of the equivocations you are meaning.
If we are talking about the behavior of the PROGRAM DDD.
In other words you are saying that DDD must be emulated
by HHH violating the semantics of the x86 language.
Where did I say that?
The directly executed DDD has the same behavior as DDD emulated by HHH1 according to the semantics of the x86 language.
The only way for DDD emulated by HHH to have this same behavior (which includes DDD calling itself) is to ignore the call to itself.
Nope, as "according to the semantics of the x86 language" is an
OBJECTIVE standard, and thus the only meaning of behavior of "the
call itself" is to look at what the x86 processor does on that call.
*That lame excuse tries to pretend that UTMs don't exist*
Nope. But UTMs will never abort their emulation of their input, or
they are not a UTM.
You claimed that emulation is an incorrect basis
and I proved you wrong.
On 10/25/2024 10:45 AM, Richard Damon wrote:
On 10/25/24 9:56 AM, olcott wrote:
[...]
How about we stipulate that the system that I am working in is termination analysis for the x86 language, as my system software says in its own name: x86utm.
But it doesn't actually know.
I said that the underlying formal mathematical system of DDD/HHH <is> the x86 language. DDD emulated by HHH within this formal system cannot possibly reach its own "return" instruction even if no one and nothing "knows" this.
Just came across an interesting parody about LLMs, showing their issues:
https://www.youtube.com/watch?v=Bbfii4wz2ys&ab_channel=HonestAds
It seems you are just one of those taken in by it.
Not at all taken in by it.
100% perfectly understanding that its review of the
succinct essence of my work is utterly unassailable.
Mike's review of the difference between
DDD emulated by HHH
and
DDD emulated by HHH1
according to the semantics of the x86 language
is pure bluster.
On 10/25/2024 11:07 PM, Richard Damon wrote:
On 10/25/24 7:18 PM, olcott wrote:
[...]
You claimed that emulation is an incorrect basis and I proved you wrong.
No, I said a PARTIAL emulation is an incorrect basis.
*This does not say that*
On 10/25/2024 7:27 AM, Richard Damon wrote:
Nope, as "according to the semantics of the x86 language" is an OBJECTIVE standard, and thus the only meaning of behavior of "the call itself" is to look at what the x86 processor does on that call.
This does reject emulation out-of-hand and forbids an x86 processor to emulate itself recursively, as is required to form the isomorphism to the halting problem.
Nope, the key is that *IF* you can show that the results of the emulation will match the behavior of the actual machine,
You could also show the actual behaviour disobeys them.
Then you can show that the emulation by HHH disobeys the semantics of the x86 language:
When DDD is emulated by HHH according to the semantics of the x86 language then HHH must emulate itself emulating DDD.
Just as in the direct execution.
When DDD is emulated by HHH1 according to the semantics of the x86 language then HHH1 does not emulate itself emulating DDD. DDD emulated by HHH1 has the same behavior as when DDD is directly executed.
No. Emulation just needs to match the execution (duh). Do you think that it is not directly executed according to x86 semantics?
On 10/25/2024 11:07 PM, Richard Damon wrote:
On 10/25/24 7:22 PM, olcott wrote:
On 10/25/2024 5:17 PM, Richard Damon wrote:
No, I said a PARTIAL emulation is an incorrect basis.
You are just a proven liar that twists people's words because you don't know what you are talking about.
It is ridiculously stupid to require a complete emulation
of a non-terminating input. No twisted words there.
HHH doesn't need to do the complete emulation, just show that the complete emulation doesn't reach an end.
Then you admit that DDD emulated by HHH according to the
semantics of the x86 language cannot possibly reach its
own "return" instruction?
IF you want to call that ridiculously stupid, you are just showing your own stupidity, as that IS the requirement, and you can't show anything that proves otherwise, because you just don't know anything about the fundamental facts of what you talk about.
I am not the one stupidly requiring the complete emulation of a non-terminating input.
The problem is that any HHH that answers for the input built on it,
must have been a decider that aborts when emulating that input, and
thus only does a partial emulation.
Then you admit that DDD emulated by HHH according to the
semantics of the x86 language cannot possibly reach its
own "return" instruction?
That is why HHH1 can get the right answer, because, not actually
being an exact copy, it is able to emulate the input to the end, and
see that it will halt.
It is an exact copy AND THE ONLY RELEVANT DIFFERENCE
IS THAT DDD CALLS HHH AND DOES NOT CALL HHH1. This
was even over Mike's head.
Nope, it just proves that your HHH is not a "pure function" of its input, as it uses a "hidden input" and thus fails to even be of the right form to be a decider.
We have not got to the point in the conversation where
we begin to talk about pure functions because you insist
on dodging a mandatory prerequisite point.
On 10/26/2024 9:10 AM, joes wrote:
On Sat, 26 Oct 2024 08:47:11 -0500, olcott wrote:
[...]
Maybe you get overloaded with too many points.
Here is the one key point:
DDD emulated by HHH must emulate itself emulating DDD.
DDD emulated by HHH1 must NOT emulate itself emulating DDD.
DDD emulated by HHH1 has the same behavior as executed DDD.
On 10/26/2024 10:35 AM, Richard Damon wrote:
On 10/26/24 9:55 AM, olcott wrote:
On 10/25/2024 11:07 PM, Richard Damon wrote:
On 10/25/24 7:22 PM, olcott wrote:
On 10/25/2024 5:17 PM, Richard Damon wrote:
No, I said a PARTIAL emulation is an incorrect basis.
You are just a proven liar that twists peoples words because you
don't know what you are talking about.
It is ridiculously stupid to require a complete emulation
of a non-terminating input. No twisted words there.
HHH doesn't need to to the complete emulation, just show that the
complete emulation doesn't reach an end.
Then you admit that DDD emulated by HHH according to the
semantics of the x86 language cannot possibly reach its
own "return" instruction?
IF you want to call that ridiculously stupid, you are just showing
your own stupidity, as that IS the requirement, and you can't show
anything that proves it otherwise, because you just don't know
anything about the fundamental facts of what you talk about.
I am not the one stupidly requiring the complete emulation
of a non-terminating input.
The problem is that any HHH that answers for the input built on
it, must have been a decider that aborts when emulating that
input, and thus only does a partial emulation.
Then you admit that DDD emulated by HHH according to the
semantics of the x86 language cannot possibly reach its
own "return" instruction?
The problem is that your HHH doesn't do that,
Of course it doesn't do that. It is ridiculously stupid for
an emulating termination analyzer to emulate a non-terminating
input forever.
On 10/26/2024 10:35 AM, Richard Damon wrote:
On 10/26/24 9:47 AM, olcott wrote:
u32 HHH1(ptr P) // line 721
u32 HHH(ptr P)  // line 801
The above two functions have identical C code except for their name.
The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt.
This conclusively proves that the pathological relationship between
DDD and HHH makes a difference in the behavior of DDD.

That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may
give different answers, but then exactly one of them must be wrong.
Do they both call HHH? How does their execution differ?
void DDD()
{
HHH(DDD);
return;
}
*It is a verified fact that*
(a) Both HHH1 and HHH emulate DDD according to the semantics of the
x86 language.

But HHH only does so INCOMPLETELY.

So? The fact that they give different results just proves that they
must have a "hidden input" that gives them that different behavior, so
they can't actually be deciders.

(b) HHH and HHH1 have verbatim identical C source code, except for
their differing names.

HHH1 either references itself with the name HHH1 instead of the name
HHH, so has DIFFERENT source code, or your code uses assembly to
extract the address that it is running at, making that address a
"hidden input" to the code.

So, you just proved that you never meet your basic requirements, and
everything is just a lie.

(c) DDD emulated by HHH has different behavior than DDD emulated by
HHH1.

No, just less of it, because HHH aborts its emulation.

Aborted emulation doesn't provide final behavior.

(d) Each DDD *correctly_emulated_by* any HHH that this DDD calls
cannot possibly return no matter what this HHH does.

No, it cannot be emulated by that HHH to that point, but that doesn't
mean that the behavior of the program DDD doesn't get there.

Halt deciding / termination analysis is about the behavior of the
program described, and thus all you are showing is that you aren't
working on either of those problems, but have just been lying.

Note, your argument uses an equivocation on the term "correctly
emulated", as you are trying to claim a correct emulation from just a
partial emulation, but also trying to claim a result that only comes
from COMPLETE emulation, that of determining final behavior.

This again just proves that your whole proof is based on lies.

I hardly glanced at any of that.
*This verified fact is a key element of my point*
When HHH1(DDD) emulates DDD this DDD reaches its final state.
When HHH(DDD) emulates DDD this DDD cannot possibly reach its final
state.

But HHH aborts its emulation, and to that point saw EXACTLY the same
sequence of steps that HHH1 saw (or you have lied about them being
identical and pure functions).

*That double talk dodges the point that I made*

What "double talk"? Your whole logic is just double talk. You confuse
your made-up fantasy for reality and lock yourself into your insanity.

DDD emulated by HHH cannot possibly reach its final state no matter
WTF that HHH does.

There is your equivocation again! "Reaching final state" is a property
of the execution or complete emulation of a program.

So, since when we look at that for a DDD that calls an HHH that
returns an answer, we find it reaches such a final state, your claim
is just a blatant lie. Not just an honest mistake, as you have been
told the answer repeatedly, but in your total stupidity you reject the
truth to keep your lies.
DDD emulated by HHH according to the semantics of the x86 language
cannot possibly reach its own "return" instruction no matter WTF that
HHH does.

Then your logic is just inconsistent, as HHH cannot be following the
semantics of the x86 language and then do "WTF".

For HHH to emulate its input in a way that shows the actual behavior
of that input, it must not EVER abort its emulation. PERIOD.

Your logic is just WTF, and based on the assumption that HHH can do
two different things at the same time with the same code, which is
just a LIE.

When termination analyzers analyze C functions for termination the
measure of termination is reaching the "return" statement.

Right, when the BEHAVIOR of the function is to do so, and that
behavior is DEFINED to be the results of direct execution.
That is not DDD emulated by HHH according to the semantics of the x86
language. That is DDD emulated by HHH1 according to the semantics of
the x86 language.

Depends on which of the equivocations you are meaning.
If we are talking about the behavior of the PROGRAM DDD.

Where did I say that? DDD emulated by HHH1 according to the semantics
of the x86 language. The only way for DDD emulated by HHH to have this
same behavior (that includes DDD calling itself) is to ignore the call
to itself.

Nope, as "according to the semantics of the x86 language" is an
OBJECTIVE standard, and thus the only meaning of behavior of "the call
itself" is to look at what the x86 processor does on that call.
*That lame excuse tries to pretend that UTMs don't exist*
Nope. But UTMs will never abort their emulation of their input,
or they are not a UTM.
You claimed that emulation is an incorrect basis
and I proved you wrong.
No, I said a PARTIAL emulation is an incorrect basis.
*This does not say that*
On 10/25/2024 7:27 AM, Richard Damon wrote:
Nope, as "according to the semantics of the x86 language" is an
OBJECTIVE standard, and thus the only meaning of behavior of "the call
itself" is to look at what the x86 processor does on that call.
This does reject emulation out-of-hand and forbids an x86
processor to emulate itself recursively as is required to
form the isomorphism to the halting problem.
Nope, the key is that *IF* you can show that the results of the
emulation will match the behavior of the actual machine,
Then you can show that the emulation by HHH disobeys the
semantics of the x86 language:
When DDD is emulated by HHH according to the semantics of
the x86 language then HHH must emulate itself emulating DDD.
Right, and either it follows the rules of the x86 language and NEVER
stops, or it disobeys the requirements of the x86 language to stop its
emulation and return.
In other words after all of these years you still don't get this:
"simulating halt decider H correctly simulates its input D until"
Repetition to help your ADD see what it keeps missing.
I have told you at least 500 times and your ADD forces you to never
see the *UNTIL*
"Until" means that it stops simulating, which is not specified in x86.

On 10/26/2024 10:35 AM, Richard Damon wrote:
On 10/26/24 9:47 AM, olcott wrote:
On 10/26/2024 10:52 AM, Richard Damon wrote:
On 10/26/24 11:44 AM, olcott wrote:
On 10/26/2024 10:35 AM, Richard Damon wrote:

Right, and either it follows the rules of the x86 language and NEVER
stops, or it disobeys the requirements of the x86 language to stop its
emulation and return.

In other words after all of these years you still don't get this:
"simulating halt decider H correctly simulates its input D until"
Repetition to help your ADD see what it keeps missing.

But it fails to meet the requirements, because your logic presumes that
HHH will never abort.

Not at all. In the hypothetical case where HHH never aborts then DDD
never stops running.

Why hypothetical? The HHH that *this* DDD here calls does abort.
On 10/26/2024 10:51 AM, Richard Damon wrote:
On 10/26/24 10:17 AM, olcott wrote:
On 10/26/2024 9:10 AM, joes wrote:
Am Sat, 26 Oct 2024 08:47:11 -0500 schrieb olcott:

Here is the one key point:
DDD emulated by HHH must emulate itself emulating DDD.
DDD emulated by HHH1 must NOT emulate itself emulating DDD.
DDD emulated by HHH1 has the same behavior as executed DDD.
And since HHH can not "correctly" (as in completely) emulate HHH
You acknowledge that it is ridiculously stupid to require an
emulating termination analyzer to infinitely emulate a
non-terminating input and then you make this ridiculously
stupid requirement again.
Do you have a short-circuit in your brain?
On 10/26/2024 8:04 PM, Richard Damon wrote:
On 10/26/24 12:26 PM, olcott wrote:
Of course it doesn't do that. It is ridiculously stupid for
an emulating termination analyzer to emulate a non-terminating
input forever.
Right, but it needs to answer about what the unaborted emulation
would do,
Exactly !!!
And the unaborted emulation HALTS, since DDD calls the HHH that does
abort and return.

You keep on trying to lie by playing a shell game and changing the
input to the system, which includes the code of the HHH that DDD calls.

Sorry, you are just proving your utter stupidity.
No, you are merely contradicting yourself, which is an objective
measure of your error as opposed to a subjective opinion of me.
DDD emulated by HHH according to the semantics of the x86
language cannot possibly reach its own "return" instruction
whether or not HHH ever aborts its emulation of DDD.
On 10/26/2024 10:00 PM, Richard Damon wrote:
On 10/26/24 9:29 PM, olcott wrote:

namely, simulating DDD, in particular not the abort condition in the

WHERE did I contradict myself?

The CORRECT answer is based on the actual behavior of the direct
execution of the program. That is the definition.

It is a provable fact that the COMPLETE (and correct) emulation of the
input will give the same answer as that, so it is a proper equivalent
for that definition.

To the best of my knowledge you recently admitted that DDD emulated by
HHH never reaches its return instruction whether HHH aborts its
emulation or not.

You can quote-mine all you want, but he didn't.
To the best of my knowledge you recently admitted that DDD
emulated by HHH never reaches its return instruction whether
HHH aborts its emulation or not.
On 10/27/2024 6:38 AM, Richard Damon wrote:
On 10/26/24 11:11 PM, olcott wrote:
On 10/26/2024 10:00 PM, Richard Damon wrote:
On 10/26/24 9:29 PM, olcott wrote:
On 10/26/2024 8:04 PM, Richard Damon wrote:
On 10/26/24 12:26 PM, olcott wrote:
On 10/26/2024 10:55 AM, Richard Damon wrote:
On 10/26/24 11:46 AM, olcott wrote:
On 10/26/2024 10:35 AM, Richard Damon wrote:
On 10/26/24 9:55 AM, olcott wrote:
On 10/25/2024 11:07 PM, Richard Damon wrote:
On 10/25/24 7:22 PM, olcott wrote:
On 10/25/2024 5:17 PM, Richard Damon wrote:>>>
No, I said a PARTIAL emulation is an incorrect basis. >>>>>>>>>>>>>>
You are just a proven liar that twists peoples words >>>>>>>>>>>>>> because you don't know what you are talking about. >>>>>>>>>>>>>>
It is ridiculously stupid to require a complete emulation >>>>>>>>>>>>> of a non-terminating input. No twisted words there.
HHH doesn't need to to the complete emulation, just show >>>>>>>>>>>> that the complete emulation doesn't reach an end.
Then you admit that DDD emulated by HHH according to the >>>>>>>>>>> semantics of the x86 language cannot possibly reach its
own "return" instruction?
IF you want to call that rediculously stupid, you are just >>>>>>>>>>>> showing your own stupidity, as that IS the requirement, and >>>>>>>>>>>> you can't show anything that proves it otherwise, because >>>>>>>>>>>> you just don't know anything about the fundamental facts of >>>>>>>>>>>> what you talk about.
I am not the one stupidly requiring the compete emulation >>>>>>>>>>> of a non-terminating input.
The problem is that any HHH that answers for the input >>>>>>>>>>>>>> built on it, must have been a decider that aborts when >>>>>>>>>>>>>> emulating that input, and thus only does a partial emulation. >>>>>>>>>>>>>>
Then you admit that DDD emulated by HHH according to the semantics of the x86 language cannot possibly reach its own "return" instruction?
The problem is that your HHH doesn't do that,
Of course it doesn't do that. It is ridiculously stupid for an emulating termination analyzer to emulate a non-terminating input forever.
Right, but it needs to answer about what the unaborted emulation would do,
Exactly !!!
And the unaborted emulation HALTS, since DDD calls the HHH that
does abort and return,
You keep on trying to lie by playing a shell game and changing the input to the system, which includes the code of the HHH that DDD calls.
Sorry, you are just proving your utter stupidity.
No, you are merely contradicting yourself, thus an objective measure of your error as opposed to a subjective opinion of mine.
WHERE did I contradict myself?
The CORRECT answer is based on the actual behavior of the direct execution of the program. That is the definition.
It is a provable fact that the COMPLETE (and correct) emulation of the input will give the same answer as that, so it is a proper equivalent for that definition.
To the best of my knowledge you recently admitted that DDD
emulated by HHH never reaches its return instruction whether
HHH aborts its emulation or not.
No, I admitted that the EMULATION OF DDD by HHH will never reach the return instruction, to resolve the equivocation of the statement.
*There is no equivocation in this*
DDD emulated by HHH according to the semantics of the x86 language
for the zero through infinity emulated states will never reach the
return instruction of DDD.
Since for the HHH that returns an answer, that emulation is the fact
above, incomplete,
Why do you utterly insist on requiring the logically impossible?
There is no such thing as the complete emulation of a non-terminating input. You know this and still require it. Trollish head games may count as lies of Revelation 21:8.
that means that said HHH hasn't shown that the program DDD will not
reach its return instruction, only established a lower bound for how
many steps it will take to do so.
You keep on trying to use that equivocation, which shows that either you are so stupid you don't understand the simple error in English, or you are just so ignorant of the rules that you ignore them, or you are just guilty of being a blatant liar, or multiple of the above.