• Re: Ok I made a joke, sorry (Re: 2nd Cognitive Turn ~~> no Bayesian Bra

    From Mild Shock@21:1/5 to Mild Shock on Sun Aug 4 00:00:54 2024
    BTW: Friedrich Ueberweg is quite good
    and funny to browse; he reports relatively
    unfiltered what we would nowadays call

    forms of "rational behaviour", so it's a little
    potpourri, except for his sections where he
    explains some schemas, like the Aristotelian

    figures, which are more pure logic of form.
    And bang, you get a guy talking pages and
    pages about pure and form:

    "Pure" logic, ontology, and phenomenology
    David Woodruff Smith https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm

    But the above is from a species of philosophy
    that is endangered now. Its predators are
    abstractions on the computer like the lambda

    calculus and the Curry-Howard isomorphism. The
    revue has become an irrelevant cabaret, only
    dead people would be interested in, like

    my father, grandfather etc...

    Mild Shock wrote:

    My impression is that Cognitive Science was never
    Bayesian Brain, so I guess I made a joke.

    The time scale, its start in the 1950s and the fact
    that it is still a relatively unknown subject,

    would explain:
    - why my father or mother never tried to
      educate me towards cognitive science.
      It could be that they are totally blank
      in this respect?

    - why my grandfather or grandmothers never
      tried to educate me towards cognitive
      science. Ditto, it could be that they are totally
      blank in this respect?

    - it could be that there are rare cases where
      some philosophers already had a glimpse of
      cognitive science. But when I open for
      example this booklet:

    System der Logik
    Friedrich Ueberweg
    Bonn - 1868
    https://philpapers.org/rec/UEBSDL

      One can feel the dry swimming that went on
      for several millennia. What happened in the
      1950s was the possibility of computer modelling.


    Mild Shock wrote:

    Hi,

    Yes, maybe we are just before a kind
    of 2nd Cognitive Turn. The first Cognitive
    Turn is characterized as:

    The cognitive revolution was an intellectual
    movement that began in the 1950s as an
    interdisciplinary study of the mind and its
    processes, from which emerged a new
    field known as cognitive science.
    https://en.wikipedia.org/wiki/Cognitive_revolution

    The current mainstream belief is that
    chat bots and the progress in AI are mainly
    based on "Machine Learning", whereas

    most of the progress is actually based on
    "Deep Learning". But I am also sceptical
    about "Deep Learning": in the end a frequentist

    is again lurking. In the worst case the
    "no Bayesian Brain" shock will come with a
    technological singularity in which the current

    short inferencing of LLMs is enhanced by
    some long inferencing, like here:

    A week ago, I posted that I was cooking a
    logical reasoning benchmark as a side project.
    Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
    designed for evaluating LLMs with Logic Puzzles.
    https://x.com/billyuchenlin/status/1814254565128335705

    making it possible for LLMs not just to excel
    in such puzzles, but to advance to more
    elaborate scientific models that can somehow

    overcome fallacies such as:
    - Kochen-Specker paradox, some fallacies
       caused by averaging?
    - Gluts and gaps in Bayesian reasoning,
       some fallacies caused by consistency assumptions?
    - What else?

    So, on quiet paws, AI might become the new overlord
    of science, on which we will happily depend.
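
    To make concrete the kind of puzzle ZebraLogic evaluates,
    here is a minimal sketch: a toy 3-house zebra-style instance,
    solved by brute-force search over permutations. The names and
    clues are invented for illustration only; real benchmark
    instances are larger and harder.

        # Toy zebra-style puzzle (illustrative, not a ZebraLogic item):
        # 3 houses, 3 attributes, solved by exhaustive search.
        # Invented clues: Alice lives in the first house; the coffee
        # drinker keeps the dog; Bob lives directly right of the tea
        # drinker; the fish is not in the last house.
        from itertools import permutations

        people = ["Alice", "Bob", "Carol"]
        drinks = ["tea", "coffee", "milk"]
        pets   = ["cat", "dog", "fish"]

        for who in permutations(people):        # who[i] lives in house i
            for drink in permutations(drinks):  # drink[i] is drunk in house i
                for pet in permutations(pets):  # pet[i] is kept in house i
                    if who[0] != "Alice":
                        continue
                    if pet[drink.index("coffee")] != "dog":
                        continue
                    if who.index("Bob") != drink.index("tea") + 1:
                        continue
                    if pet[2] == "fish":
                        continue
                    print(list(zip(who, drink, pet)))

    A "long inferencing" model is expected to carry out exactly
    this kind of systematic case analysis instead of guessing the
    answer in one shot.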

    Jeff Barnett wrote:
    You are surprised; I am saddened. Not only have
    we lost contact with the primary studies of knowledge
    and reasoning, we have also lost contact with the
    studies of methods and motivation. Psychology
    was the basic home room of Allen Newell and many
    other AI all-stars. What is now called AI, I think
    incorrectly, is just ways of exercising large amounts
    of very cheap computer power to calculate approximations
    of correlations and other statistical measures.

    The problem with all of this, in my mind, is that we
    learn nothing about the capturing of knowledge, what
    it is, or how it is used. Both logic and heuristic reasoning
    are needed and we certainly believe that intelligence is
    not measured by its ability to discover "truth" or its
    infallibly consistent results. Newton's thought process
    was pure genius but known to produce fallacious results
    when you know what Einstein knew at a later time.

    I remember reading Ted Shortliffe's dissertation about
    MYCIN (an early AI medical consultant for diagnosing
    blood-borne infectious diseases) where I learned about
    one use of the term "staph disease", or just "staph" for short.
    In patient care areas there always seems to be an in-
    house infection that changes over time. It changes
    because sick patients brought into the area contribute
    whatever is making them sick in the first place. In the
    second place there are rapid mutations driven by all sorts
    of factors present in hospital-like environments. The
    result is that the local staph is varying, literally, minute
    by minute. In a day's time, the samples you took are
    no longer valid, i.e., their day-old cultures may be
    meaningless. The underlying mathematical problem is
    that probability theory doesn't really have the tools to
    make predictions when the basic probabilities are
    changing faster than observations can be
    turned into inferences.
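
    As a side illustration of that last point (a toy simulation
    with an invented drift function, not anything from MYCIN or
    the original post): a Beta-Binomial posterior is updated from
    accumulated cultures while the true infection rate keeps
    drifting, so the inference always describes yesterday's ward.

        # Sketch: Bayesian updating that lags behind a drifting rate.
        import random
        random.seed(0)

        def drifting_rate(day):
            # Hypothetical, made-up infection rate that drifts daily.
            return 0.1 + 0.8 * (day % 10) / 10.0

        samples_per_day = 20
        alpha, beta = 1.0, 1.0      # uniform Beta prior

        for day in range(10):
            # Cultures taken today reflect today's rate ...
            p_today = drifting_rate(day)
            positives = sum(random.random() < p_today
                            for _ in range(samples_per_day))
            alpha += positives
            beta += samples_per_day - positives

            # ... but the inference is only read off tomorrow, when
            # the rate has already moved on.
            posterior_mean = alpha / (alpha + beta)
            print(f"day {day}: posterior mean {posterior_mean:.2f}, "
                  f"tomorrow's true rate {drifting_rate(day + 1):.2f}")

    The posterior mean settles toward an average over the whole
    history, while the quantity actually needed, tomorrow's rate,
    keeps running away from it.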

    Why do I mention the problems of unstable probabilities
    here? Because new AI uses fancy ideas of correlation
    to simulate probabilistic inference, e.g., Bayesian inference.
    Since actual probabilities may not exist in any meaningful
    way, the simulations are often based on air.

    A hallmark of excellent human reasoning is the ability to
    explain how we arrived at our conclusions. We are also
    able to repair our inner models when we are in error if
    we can understand why. The abilities to explain and
    repair are fundamental to excellence of thought processes.
    By the way, I'm not claiming that all humans or I have these
    reflective abilities. Those who do are few and far between.
    However, any AI that doesn't have some of these
    capabilities isn't very interesting.

    For more on reasons why logic and truth are only part of human
    ability to reasonably reason, see

    https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html


        -- Jeff Barnett


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)