On 10/20/2024 4:41 PM, Richard Damon wrote:
> Note, I DID tell that to Chat GPT, and it agrees that when the
> criterion is what DDD actually does, which is what the question MUST
> be about to be about the Termination or Halting problem, then DDD
> WILL HALT since HHH(DDD) will return 0 to it.

On Sun, 20 Oct 2024 17:36:25 -0500, olcott wrote:
> No one ever bothered to notice that (a) a decider cannot have its
> actual self as its input, and (b) in the case of the pathological
> input DDD to emulating termination analyzer HHH, the behavior of the
> directly executed DDD (not an input to HHH) is different from the
> behavior of the DDD that is an input to HHH.

On 10/21/2024 3:39 AM, joes wrote:
> lolwut? A decider is a normal program, and it should be handled like
> every other input. DDD *is* the input to HHH. But whyyy doesn't HHH
> abort?

On Mon, 21 Oct 2024 08:41:11 -0500, olcott wrote:
> The executed DDD calls HHH() and this call returns. The emulated DDD
> calls HHH(DDD) and this call cannot possibly return. You can click on
> the link and cut-and-paste the question to see the whole answer in
> complete detail.

On 10/21/2024 10:17 AM, joes wrote:
> I am not interested in arguing with a chatbot. Make the points
> yourself.
>
> 1. **Nature of `DDD()`**:
>    - `DDD()` simply calls `HHH(DDD)`. It does not perform any
>      additional operations that could create a loop or prevent it
>      from returning.
>    - If `HHH` returns (whether by aborting or completing its
>      simulation), `DDD()` can return to its caller.
> 2. **Behavior of `HHH`**:
>    - If `HHH` is able to simulate `DDD()` and return, it should
>      report that `DDD()` terminates. If `HHH` aborts due to detecting
>      non-termination, it does not reflect the actual execution of
>      `DDD()`; it leads to a conclusion that may not align with the
>      true behavior.
> 3. **Contradiction in Results**:
>    - If `HHH` claims that `DDD()` does not halt, but in reality
>      `DDD()` can terminate once `HHH` returns, then `HHH` is
>      providing an incorrect analysis.
>    - The contradiction lies in the ability of `HHH` to detect
>      non-termination theoretically while simultaneously allowing
>      `DDD()` to halt in practical execution.
>
> ### Conclusion:
> Given the nature of `DDD()` and how `HHH` operates, it becomes clear
> that `HHH` cannot consistently provide a correct answer about whether
> `DDD()` halts. The dynamics of calling and returning create a
> scenario where the outcomes conflict. Thus, `HHH` is fundamentally
> flawed in its role as a termination analyzer for functions like
> `DDD()`.

On 10/21/24 12:29 PM, olcott wrote:
> Did ChatGPT generate that? If it did then I need *ALL the input that
> caused it to generate that*.
> https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e
> If you did not start with the basis of this link then you cheated.

On 10/21/2024 5:34 PM, Richard Damon wrote:
> No, someone using some REAL INTELLIGENCE, as opposed to a program
> using "artificial intelligence" that had been loaded with false
> premises and other lies. Sorry, you are just showing that you have NO
> intelligence, and are depending on a program that includes a
> disclaimer on every page that its answers may have mistakes.

On 10/21/24 6:48 PM, olcott wrote:
> I specifically asked it to verify that its key assumption is correct
> and it did.

On 10/21/2024 6:05 PM, Richard Damon wrote:
> No, it said that given what you told it (which was a lie)

On 10/21/24 7:08 PM, olcott wrote:
> I asked it if what it was told was a lie and it explained how what it
> was told is correct. ChatGPT computes the truth and you can't
> actually show otherwise. Instead of me having to repeat the same
> thing to you fifty times, why don't you do what I do to focus my own
> concentration: read what I say many times over until you at least see
> what I said.

On 10/21/2024 9:42 PM, Richard Damon wrote:
> Because Chat GPT doesn't care about lying. Because what you are
> asking for is nonsense. Of course an AI that has been programmed with
> lies might repeat the lies. When it is told the actual definition,
> after being told your lies, and asked if your conclusion could be
> right, it said No. Thus, it seems by your logic, you have to admit
> defeat, as the AI, after being told your lies, still was able to come
> up with the correct answer: that DDD will halt, and that HHH is just
> incorrect to say it doesn't. If you want me to pay more attention to
> what you say, you first need to return the favor, and at least TRY to
> find an error in what I say, based on more than just that you think
> that can't be right. But you can't do that, as you don't actually
> know any facts about the field for which you can point to qualified
> references.

I believe that the "output" Joes provided was fake, on the basis that
she did not provide the input to derive that output and did not use
the required basis that was on the link.

You are merely spouting off what you have been indoctrinated to
believe and cannot provide any actual basis in reasoning why I am
incorrect.

You cannot show that my premises are actually false. The most that you
can do is show that they are unconventional. To show that they are
false would at least require showing that they contradict each other.
Failing to do that, no one has any basis to even show that they are
false.
On 10/22/2024 4:50 AM, joes wrote:
> Just no. Do you believe that I didn't write this myself after all?
> It's not like it will deterministically regenerate the same output.
> I definitely typed something out in the style of an LLM instead of my
> own words /s
>
> "naw, I wasn't lied to, they said they were saying the truth" sure
> buddy. HAHAHAHAHA there isn't anything about truth in there, prove me
> wrong. Accepting your premises makes the problem uninteresting.

That seems to indicate that you are admitting that you cheated when
you discussed this with ChatGPT. You gave it a faulty basis and then
argued against that.

My premises are also conventional within the context of software
engineering. That software engineering conventions seem incompatible
with computer science conventions may refute the latter. That a halt
decider must report on the behavior that it is itself contained within
seems to be an incorrect convention.

u32 HHH1(ptr P) // line 721
u32 HHH(ptr P)  // line 801

The above two functions have identical C code except for their name.
The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt.
This conclusively proves that the pathological relationship between
DDD and HHH makes a difference in the behavior of DDD.
On Tue, 22 Oct 2024 08:47:39 -0500, olcott wrote:
> My premises are also conventional within the context of software
> engineering. That software engineering conventions seem incompatible
> with computer science conventions may refute the latter.

lol

> That a halt decider must report on the behavior that it is itself
> contained within seems to be an incorrect convention.

Just because you don't like the undecidability of the halting problem?

> u32 HHH1(ptr P) // line 721
> u32 HHH(ptr P)  // line 801
> The above two functions have identical C code except for their name.
> The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt.
> This conclusively proves that the pathological relationship between
> DDD and HHH makes a difference in the behavior of DDD.

That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may
give different answers, but then exactly one of them must be wrong. Do
they both call HHH? How does their execution differ?
On 10/22/2024 6:22 AM, Richard Damon wrote:
> Of course it doesn't; that is why it has the disclaimer at the bottom
> of the page that it can make mistakes.

Yet again you do not pay COMPLETE ATTENTION !!!
I claim X and you refute an incorrect paraphrase of X. ChatGPT can and
does make mistakes. ChatGPT made no mistakes in analyzing my work and
you can't show otherwise with any actual reasoning. The most that you
can possibly show (and I don't think that you can show this) is that
my premises are unconventional.

> But that doesn't prove anything.

Correct. We toss out Joes' rebuttal because it doesn't prove anything.

> No, I *HAVE* provided the reason, but you have brainwashed yourself.

All that you have is the dogma of the received view. The most that you
can say is that the software engineering that I propose seems
inconsistent with the received view in computer science. You cannot
show that it is actually false. You can only show that my assumptions
are incompatible with yours.

> Of course I have; they presume definitions in conflict with the
> Formal System you claim to be working in.

Not at all. My definitions specify the formal system that I am working
in and you cannot show that these definitions are false. The most that
you can show is that they are unconventional. I don't think that you
can even do that. Within software engineering my definitions are
conventional.

> No, just that they contradict statements already established in the
> system you claim to be working in.

Not at all.

*X correctly_emulated_by Y*
is defined to mean that one or more x86 instructions of X are emulated
by Y according to the semantics of the x86 language.

void DDD()
{
  HHH(DDD);
  return;
}

When HHH is an x86 emulation based termination analyzer, then each DDD
*correctly_emulated_by* any HHH that this DDD calls cannot possibly
return, no matter what this HHH does.

The above is conventional software engineering.