• Re: Halting Problem is wrong two different ways --very stupid

    From Richard Damon@21:1/5 to olcott on Wed Jun 5 22:28:17 2024
    XPost: sci.logic

    On 6/5/24 10:09 PM, olcott wrote:
    On 6/5/2024 1:33 PM, joes wrote:
    Am Wed, 05 Jun 2024 12:09:18 -0500 schrieb olcott:
    On 6/5/2024 12:03 PM, John Smith wrote:
    On 5/06/24 04:16, olcott wrote:
    On 6/4/2024 9:12 PM, John Smith wrote:
    On 5/06/24 04:05, olcott wrote:
    On 6/4/2024 8:48 PM, Richard Damon wrote:
    (6) Can Carol correctly answer “no” to this question?
    Let's ask Carol. If she says “yes”, she's saying that “no” is the
    correct answer for her, so “yes” is incorrect. If she says “no”,
    she's saying that she cannot correctly answer “no”, which is her
    answer. We are assuming for this and all subsequent questions that
    the only acceptable answers are “yes” and “no”, and in this case,
    both answers are incorrect. Carol cannot answer the question
    correctly. Now let's ask Dave. He says “no”, and he is correct
    because Carol cannot correctly answer “no”. So (6) is subjective
    because it is a consistent, satisfiable specification for some
    agent (anyone other than Carol), and an inconsistent, unsatisfiable
    specification for some agent (Carol).

    But that's like running a different machine. That's not interesting.
    We wanted to see a machine that can answer ALL questions.

    To expect a correct answer to an incorrect question has
    always been very stupid.

    But there is nothing "incorrect" about the Halting Question, it always
    has a specific and precise answer (even if we don't know it) and it has
    some very useful cases that we would like to be able to solve.

    So, what do you see actually wrong with that actual question?

    The problem is that you don't understand the field well enough to
    really understand what the question means, and that is what gets
    you into trouble: you guess (incorrectly) about too many parts of
    the theory, and that just shows your total ignorance of the field.


    This one was
    specifically constructed to be unanswerable by this machine. The
    equivalent translation would be "Can YOU answer No?".




  • From Mikko@21:1/5 to olcott on Thu Jun 6 11:52:05 2024
    On 2024-06-06 02:09:35 +0000, olcott said:

    On 6/5/2024 1:33 PM, joes wrote:
    Am Wed, 05 Jun 2024 12:09:18 -0500 schrieb olcott:
    On 6/5/2024 12:03 PM, John Smith wrote:
    On 5/06/24 04:16, olcott wrote:
    On 6/4/2024 9:12 PM, John Smith wrote:
    On 5/06/24 04:05, olcott wrote:
    On 6/4/2024 8:48 PM, Richard Damon wrote:
    (6) Can Carol correctly answer “no” to this question?
    Let's ask Carol. If she says “yes”, she's saying that “no” is the
    correct answer for her, so “yes” is incorrect. If she says “no”,
    she's saying that she cannot correctly answer “no”, which is her
    answer. We are assuming for this and all subsequent questions that
    the only acceptable answers are “yes” and “no”, and in this case,
    both answers are incorrect. Carol cannot answer the question
    correctly. Now let's ask Dave. He says “no”, and he is correct
    because Carol cannot correctly answer “no”. So (6) is subjective
    because it is a consistent, satisfiable specification for some
    agent (anyone other than Carol), and an inconsistent, unsatisfiable
    specification for some agent (Carol).

    But that's like running a different machine. That's not interesting.
    We wanted to see a machine that can answer ALL questions.

    To expect a correct answer to an incorrect question has
    always been very stupid.

    To call a question incorrect just because one stupid machine cannot
    correctly answer it is stupid.

    --
    Mikko

  • From Richard Damon@21:1/5 to olcott on Thu Jun 6 22:08:23 2024
    XPost: sci.logic

    On 6/6/24 9:37 AM, olcott wrote:
    On 6/6/2024 3:52 AM, Mikko wrote:
    On 2024-06-06 02:09:35 +0000, olcott said:

    On 6/5/2024 1:33 PM, joes wrote:
    Am Wed, 05 Jun 2024 12:09:18 -0500 schrieb olcott:
    On 6/5/2024 12:03 PM, John Smith wrote:
    On 5/06/24 04:16, olcott wrote:
    On 6/4/2024 9:12 PM, John Smith wrote:
    On 5/06/24 04:05, olcott wrote:
    On 6/4/2024 8:48 PM, Richard Damon wrote:
    (6) Can Carol correctly answer “no” to this question?
    Let's ask Carol. If she says “yes”, she's saying that “no” is the
    correct answer for her, so “yes” is incorrect. If she says “no”,
    she's saying that she cannot correctly answer “no”, which is her
    answer. We are assuming for this and all subsequent questions that
    the only acceptable answers are “yes” and “no”, and in this case,
    both answers are incorrect. Carol cannot answer the question
    correctly. Now let's ask Dave. He says “no”, and he is correct
    because Carol cannot correctly answer “no”. So (6) is subjective
    because it is a consistent, satisfiable specification for some
    agent (anyone other than Carol), and an inconsistent, unsatisfiable
    specification for some agent (Carol).

    But that's like running a different machine. That's not interesting.
    We wanted to see a machine that can answer ALL questions.

    To expect a correct answer to an incorrect question has
    always been very stupid.

    To call a question incorrect just because one stupid machine cannot
    correctly answer it is stupid.


    Whenever a yes/no question has no correct yes/no answer, such as
    “What time is it (yes or no)?” or “Is this sentence true or false:
    'this sentence is not true'?”, then the question is incorrect.

    But the actual question of the Halting Problem always has a correct
    yes or no answer.

    Your problem is you forget that to ask it, you first need to define what
    H does, and thus H's answer has been fixed, so the other answer can be
    correct.
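
    To make that concrete, here is a minimal sketch (my own, not taken
    from anywhere), assuming the usual convention that 1 means "halts"
    and 0 means "does not halt": whichever fixed verdict a given H is
    defined to produce about D, D's behaviour is the opposite, so the
    correct answer is the one H does not give.

    #include <stdio.h>

    /* Once a particular H has been defined, its verdict v about the
       pathological input D is a fixed value, and D's behaviour is
       fixed by that value: D loops when v says "halts" and halts when
       v says "loops".  The correct answer about D is therefore always
       !v, and nothing prevents some *other* decider from giving it. */
    static int correct_answer_about_D(int h_verdict) {
        int d_halts = !h_verdict;  /* D does the opposite of H's verdict */
        return d_halts;            /* so the correct verdict is !h_verdict */
    }

    int main(void) {
        for (int v = 0; v <= 1; v++)
            printf("if H answers %d about D, the correct answer is %d\n",
                   v, correct_answer_about_D(v));
        return 0;
    }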

    You apparently just don't understand the basics of the theory you have
    been claiming to be an expert in.


    People who are woefully ignorant of context in linguistics think
    that they can get away with simply ignoring how the context of who
    is asked changes the meaning of a question. When one anchors their
    views in ignorance they anchor these views in error.

    Can Carol correctly answer “no” to this (yes/no) question?

        ...is a consistent, satisfiable specification for some
        agent (anyone other than Carol), and an inconsistent,
        unsatisfiable specification for some agent (Carol). (Hehner:2017)

    If Carol answers “no” to this question she is saying that “no” is
    the wrong answer from her; yet if she is correct, then “no” was the
    right answer after all, making her necessarily incorrect.

    If Carol answers “yes” to this question she is saying that “no” is
    the correct answer, thus making “yes” necessarily the wrong answer.

    Thus both [yes, no] are wrong answers from Carol, and therefore “no”
    is the correct answer from anyone else.
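
    As a sanity check only, here is a tiny sketch of that reasoning (my
    own construction, with P as an assumed name for the proposition
    "Carol can correctly answer 'no' to this question"); it just
    enumerates the cases discussed above.

    #include <stdio.h>
    #include <stdbool.h>

    /* An answer is correct exactly when it matches P.  For Carol's
       own 'no', P is precisely the claim that this very answer is
       correct, so we look for a self-consistent correctness value;
       there is none.  Given that, P is false, which settles Carol's
       'yes' and Dave's 'no'. */
    int main(void) {
        bool carol_no_consistent = false;
        for (int c = 0; c <= 1; c++) {
            bool P = c;        /* her correct 'no' is exactly what P asserts */
            if ((bool)c == !P) /* 'no' is correct iff P is false             */
                carol_no_consistent = true;
        }
        printf("Carol's 'no' has a consistent correctness value: %s\n",
               carol_no_consistent ? "yes" : "no");               /* no  */

        /* Since Carol cannot correctly answer 'no', P is false. */
        bool P = false;
        printf("Carol's 'yes' is correct: %s\n", P ? "yes" : "no");  /* no  */
        printf("Dave's 'no' is correct:   %s\n", !P ? "yes" : "no"); /* yes */
        return 0;
    }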

    Since the question posed to Carol has no correct answer from Carol,
    while the same word-for-word question does have a correct answer
    from anyone else, linguistics understands these as two different
    questions: they have different meanings depending on the linguistic
    context of who is asked.

    A concrete example of how the same word-for-word question has an
    entirely different meaning, and thus a different correct answer,
    depending on who is asked: “Are you a little girl?”

    We can see that Carol's question posed to Carol is self-contradictory
    for Carol because the question contradicts both yes/no answers from
    Carol.

    Upon careful examination we can see that Carol's question posed to
    Carol is isomorphic to input D to decider H, where D has been
    defined to do the opposite of whatever Boolean value H returns.
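
    A minimal sketch of that isomorphism, under assumptions added here
    only for illustration (a zero-argument "program" type, a placeholder
    H that happens to answer 0, and an H2 standing in for Dave); this is
    the standard construction in outline, not anyone's actual decider.

    #include <stdio.h>

    typedef int (*prog)(void);  /* a "program" with no input, for brevity */

    int H(prog p);              /* plays the role of Carol                */

    /* D is defined to do the opposite of whatever Boolean value H
       returns about D itself -- the analogue of Carol's question
       asked of Carol. */
    int D(void) {
        if (H(D)) for (;;) {}   /* H says "halts"  -> loop forever        */
        return 0;               /* H says "loops"  -> halt immediately    */
    }

    /* A placeholder H for illustration: it answers 0, so D halts and H
       is wrong about D.  Had it answered 1, D would loop and H would
       still be wrong; no definition of H gets this particular D right. */
    int H(prog p) { (void)p; return 0; }

    /* H2 plays the role of Dave: it is not the decider D was built
       against, so nothing stops it from answering correctly about D.   */
    int H2(prog p) { (void)p; return 1; }  /* "D halts" -- and it does   */

    int main(void) {
        printf("H  says %d about D (wrong)\n",   H(D));
        printf("H2 says %d about D (correct)\n", H2(D));
        return 0;
    }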


