• Evolution of consciousness

    From Mark Isaak@21:1/5 to All on Mon Apr 29 16:36:45 2024
    My views on the evolution of consciousness are starting to gel.

    1. Rudimentary nervous systems evolve.
    2. Brains evolve, capable of memory and of decisions other than reflex.
    3. Those decisions probably work better if the brain has a model of the
    world to work with. So such a model evolves.
    4. Some creatures live socially. Their brains need a model of that
    important aspect of the world: the fellow beings one lives with,
    including how they think.
    5. So we've now got a model of minds. How about if we apply it to *our
    own mind*? That might make our thinking about interactions with others'
    minds more efficient.
    6. Viola! Consciousness!

    Does that make sense to people? Is it time for me to write a book on
    the subject? (Do you think publishers will want the book to be more than
    106 words long?)

    There's also the problem of testing it. I'm open to suggestions there,
    too. Step 4 implies that the model of how we think need not agree with
    how we actually think, much as our mental model of the world is flat,
    not spherical. This has at least some confirmation (e.g., blindness to
    many biases). More would be better.

    --
    Mark Isaak
    "Wisdom begins when you discover the difference between 'That
    doesn't make sense' and 'I don't understand.'" - Mary Doria Russell

  • From Chris Thompson@21:1/5 to Mark Isaak on Mon Apr 29 19:44:39 2024
    Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.
    [snip]
    6. Viola! Consciousness!
    [snip]


    I especially like how you bring in string theory in number 6.

    Chris

  • From Richmond@21:1/5 to Mark Isaak on Tue Apr 30 14:36:29 2024
    Mark Isaak <specimenNOSPAM@curioustaxon.omy.net> writes:

    My views on the evolution of consciousness are starting to gel.
    [snip]

    You might want to read Nicholas Humphrey:

    https://en.wikipedia.org/wiki/Nicholas_Humphrey

    I see he did a lecture "How did consciousness evolve?"

    https://www.youtube.com/watch?v=9QWaZp_2I1k

    Back in the days when television wasn't aimed at vegetable-based life,
    I think he had a TV series about consciousness, but I have forgotten
    what it was called.

  • From Kalkidas@21:1/5 to Mark Isaak on Wed May 1 08:36:55 2024
    On 4/29/2024 4:36 PM, Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.
    [snip]
    6. Viola! Consciousness!
    [snip]


    As the famous evolutionist Professor Bullwinkle said: "Watch me pull a
    rabbit out of my hat!"

  • From Arkalen@21:1/5 to Mark Isaak on Thu May 2 15:21:31 2024
    On 30/04/2024 01:36, Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.
    [snip]


    Have you seen my thread on Michael Tomasello's "The Evolution of
    Agency"? I think the book would interest you. If you want more detail
    I have a post somewhere in that thread summarizing its arguments; I'd
    be happy to hear your take.

  • From Mark Isaak@21:1/5 to Arkalen on Thu May 2 10:03:27 2024
    On 5/2/24 6:21 AM, Arkalen wrote:
    On 30/04/2024 01:36, Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.
    [snip]

    Have you seen my thread on Michael Tomasello's "The Evolution of
    Agency"? I think the book would interest you. If you want more detail
    I have a post somewhere in that thread summarizing its arguments; I'd
    be happy to hear your take.

    I have seen it, but I don't remember particular points.

    I just came across a reference to another book, Michael S. A.
    Graziano's _Consciousness and the Social Brain_, which appears to make
    an argument similar to mine above (particularly steps 4 and 5).

    --
    Mark Isaak
    "Wisdom begins when you discover the difference between 'That
    doesn't make sense' and 'I don't understand.'" - Mary Doria Russell

  • From Arkalen@21:1/5 to Mark Isaak on Fri May 3 20:21:17 2024
    On 02/05/2024 19:03, Mark Isaak wrote:
    On 5/2/24 6:21 AM, Arkalen wrote:
    On 30/04/2024 01:36, Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.
    [snip]

    Have you seen my thread on Michael Tomasello's "The Evolution of
    Agency"? [snip]

    I have seen it, but I don't remember particular points.

    I just came across a reference to another book, Michael S. A.
    Graziano's _Consciousness and the Social Brain_, which appears to make
    an argument similar to mine above (particularly steps 4 and 5).


    Basically (if you don't mind me going on about it again) he proposes a
    scheme similar to yours but more specific, more fleshed-out and (IMO)
    more convincing. It revolves around the notion of "agents" or
    "agency", which Tomasello defines as a system that achieves goals via
    a feedback-control mechanism: the system perceives aspects of the
    environment, compares them to the desired goal, engages in behaviors
    meant to bring it closer to the goal, checks the environment again,
    and loops this way until the goal is achieved.
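
    (To make that loop concrete, here is a minimal sketch in Python - my
    own illustration, not anything from the book, and every name in it is
    invented:

        # Minimal sketch of goal-directed agency as a feedback-control
        # loop: perceive, compare to the desired goal, act, check again.
        def feedback_control_agent(perceive, act, goal_reached,
                                   max_steps=100):
            for _ in range(max_steps):
                # sample the goal-relevant aspects of the environment
                state = perceive()
                # compare the perception to the desired goal
                if goal_reached(state):
                    return True
                # engage in a behavior meant to close the gap, then loop
                act(state)
            # gave up; a richer agent might switch goals instead
            return False

    The compare-and-loop part is exactly what the stimulus-response
    systems below "true agency" lack.)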

    His parallels to your steps might be:

    1) rudimentary nervous systems evolve that coordinate perception with
    behavior on a stimulus-response basis, without the feedback-control
    system involved in true agency.

    2) brains evolve that do implement such a feedback-control system [I'm
    not sure he explicitly associates it with brains in the book, but he
    does associate it with vertebrates, which do have distinct brains, so
    I'll say it's close enough for a paraphrase]

    He doesn't have a parallel to your step 3 because models of the world
    are implicit in all of the cognitive models he presents; in fact the
    differences in what he calls "experiential niches" (which could be
    thought of as "world models") are pretty important. So for example he
    points out that with agency comes the mechanism of *attention* (i.e.
    you orient your perceptions in specific ways depending on what goals
    you're working towards and where you currently are in working towards
    them), which implies experience of an outside world and of internal
    states that are or aren't in sync, full of things that are
    relevant/irrelevant, good/bad, etc.

    4) He does bring in social living as a possible cause of his next step
    in the evolution of agency, which he sets at early mammals: the
    appearance of a feedback-control system applied on top of the previous
    one to monitor and control the goal-seeking process itself (he sees
    social living as a driver for this because competition between peers
    would make more flexible, efficient decision-making beneficial). These
    early mammals would be able not only to perceive the world, pick a
    behavior to fulfill a goal and shut everything down in case of danger
    (as he describes lizards doing), but also to mentally play out
    possible behaviors and flexibly inhibit some in favor of others,
    depending on which they anticipate working out best. This would
    introduce into the "world model" or "experiential niche" notions of
    goals, behaviors and cause-and-effect relationships between the two. I
    don't think he introduces models of other *minds* at this step per se,
    although minds here are a bit like world models: they're implicit in
    several steps, and it's more a question of what aspect of minds is
    being modelled.
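
    (Again a toy sketch rather than Tomasello's own formalism, building on
    the loop above and with invented names: the second tier can be
    pictured as an executive layer that simulates candidate behaviors
    against the world model and inhibits all but the most promising one:

        # Toy sketch of an "executive tier": mentally play out candidate
        # behaviors against a world model and inhibit all but the best.
        def executive_choose(state, behaviors, world_model, score):
            best, best_score = None, float("-inf")
            for behavior in behaviors:
                # imagined outcome of the behavior; nothing is acted out
                predicted = world_model(state, behavior)
                # how well would that outcome serve the current goal?
                s = score(predicted)
                if s > best_score:
                    # keep this candidate; the others end up inhibited
                    best, best_score = behavior, s
            return best

    The chosen behavior would then be handed down to the lower-tier loop
    to actually execute.)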

    5) I do think there is still some similarity between your 5 and the
    next level of agency Tomasello suggests, although he sets it at great
    apes and you seem to set it at humans (then again, many would argue
    great apes are conscious, and I don't think Tomasello would disagree).
    He proposes an extra metacognitive feedback-control system monitoring
    the lower ones, allowing control not only over the behaviors taken in
    service of a goal but over the goals themselves, and an understanding
    of cause-and-effect in general, not only as it concerns one's own
    actions. It also induces an understanding of others as agents with
    goals they behave in service of.

    6) While he does think of 5 as the ability to reason, and I'm pretty
    sure he would call it "consciousness", he does have two other steps
    separating humans from that, which involve collective agency. He
    proposes that the critical difference between humans and other great
    apes is the ability to coordinate as part of a group that itself fits
    the criteria for being an agent - with collective goals, the ability
    to monitor their completion, and the ability to act and self-regulate
    in service of them. He sees this as coming in two parts: first the
    ability to coordinate pairwise to achieve specific tasks (somewhere in
    hominid evolution - he gives several examples illustrating how
    strikingly worse chimpanzees are at basic cooperation than even human
    children), and then the ability to function as part of a larger
    community with shared norms that allow coordination with strangers
    (which he sets early in the evolution of our own species). He talks
    about this inducing a kind of triple mental model of agency: the
    "self" agent (the individual's goals, parallel to the sense of agency
    of other great apes), the "role" agent (the goals implied by one's
    role in some collective enterprise) and the "collective" agent (the
    goals of the collective enterprise itself). He then talks about how
    various aspects of our experience like culture, morality etc. follow
    from that.


    I think it's interesting how this suggests a difference between having
    a model of one's own mind, having a model of others' minds, and having
    a model of *mind in general* that's then applied to oneself and
    others. "Models of the world" and "models of the mind" really collapse
    a lot of functionality and variability, and I think Tomasello's scheme
    does a better job of separating out the different strands and homing
    in on those that actually account for how we resemble and differ from
    other species.


    I also like how this model justifies that the last step, and only the
    last step, is truly self-reflective. All the other steps involve
    taking a system at a certain level of agency and adding a
    monitoring/control level, resulting in a system that's aware of itself
    *as a system of the lower level*. That last step is the only one that
    involves the system monitoring/controlling a level *above* itself, and
    indeed being able to monitor/control any arbitrary system of agency at
    all (given that any combination of humans can display collective
    agency, and a human can be part of multiple collective agencies at any
    given time). Meaning the recursion ends there: it's the only agent
    model that can model itself as being the level it is.

  • From Mark Isaak@21:1/5 to Arkalen on Mon May 6 07:45:46 2024
    On 5/3/24 11:21 AM, Arkalen wrote:
    [snip summary of Tomasello's "The Evolution of Agency"]

    Does he give a definition of consciousness? It sounds like he sees the
    "collective" agent as an essential part of it. I don't doubt that it
    is essential for humanity's achievements, but I'm not convinced it is
    necessary for consciousness. I still like my definition of
    consciousness as having a mental model of one's own mind.

    --
    Mark Isaak
    "Wisdom begins when you discover the difference between 'That
    doesn't make sense' and 'I don't understand.'" - Mary Doria Russell

  • From Arkalen@21:1/5 to Mark Isaak on Tue May 7 08:28:17 2024
    On 06/05/2024 16:45, Mark Isaak wrote:
    [snip]

    Does he give a definition of consciousness? It sounds like he sees the
    "collective" agent as an essential part of it. I don't doubt that it
    is essential for humanity's achievements, but I'm not convinced it is
    necessary for consciousness. I still like my definition of
    consciousness as having a mental model of one's own mind.


    The book isn't about consciousness; it's about the evolution of
    agency, and it doesn't conflate the two - agency as it defines it is
    something else entirely. I think it has very obvious and clarifying
    implications for the evolution of consciousness, but the notion that
    the "collective agent" is essential to consciousness as we experience
    it is my extrapolation, not his. I'd guess he's in the "there are many
    kinds of consciousness" camp, and I'd further guess that he thinks of
    great apes at least as "fully" conscious like us.


    For me it's a bit like I said in another thread recently - when I look
    at my own consciousness I feel that the self-reflective "this is what
    happened and here's how I think of it" is a very important part of it,
    and that an existence that just had the experiences of the moment,
    without the integrating, looking-back part, wouldn't be fully
    conscious, even if *in us* that part is obviously integral to our
    conscious state. And I wouldn't be surprised if that self-reflective
    part really does occur only at step 6.


    "Having a mental model of one's own mind" is all well and good but
    "model" is never 1:1 and plenty of simplified representations we could
    call "models" obviously don't suffice so it just pushes the question
    back to "what kind of model?".


    I might have another post giving more detail on what Tomasello says
    about experiential niches later.

  • From Arkalen@21:1/5 to Arkalen on Tue May 7 12:15:27 2024
    On 07/05/2024 08:28, Arkalen wrote:
    On 06/05/2024 16:45, Mark Isaak wrote:
    On 5/3/24 11:21 AM, Arkalen wrote:
    On 02/05/2024 19:03, Mark Isaak wrote:
    On 5/2/24 6:21 AM, Arkalen wrote:
    On 30/04/2024 01:36, Mark Isaak wrote:
    My views on the evolution of consciousness are starting to gel.

    1. Rudimentary nervous systems evolve.
    2. Brains evolve, capable of memory and of decisions other than
    reflex.
    3. Those decisions probably work better if the brain has a model
    of the world to work with. So such a model evolves.
    4. Some creatures live socially. Their brains need a model of that >>>>>> important aspect of the world: the fellow beings one lives with,
    including how they think.
    5. So we've now got a model of minds. How about if we apply it to
    *our own mind*? That might make our thinking about interactions
    with others' minds more efficient.
    6. Viola! Consciousness!

    Does that make sense to people?  Is it time for me to write a book >>>>>> on the subject? (Do you think publishers will want the book to be
    more than 106 words long?)

    There's also the problem of testing it. I'm open to suggestions
    there, too. Step 4 implies that the model of how we think need not >>>>>> agree with how we think, much as the mental model of our world is
    flat, not spherical. This has at least some confirmation (e.g.,
    blindness to many biases). More would be better.

    Have you seen my thread on Michael Tomasello's "The Evolution of
    Agency"? I think the book would interest you. If you want more
    detail I have a post somewhere in that thread summarizing its
    arguments, I'd be happy to hear your take.

    I have seen it, but I don't remember particular points.

    I just came across reference to another book by Michael S.S.
    Graziano, _Consciousness and the Social Brain_, which appears to
    make an argument similar to mine above (particularly steps 4 and 5).


    Basically (if you don't mind me going on about it again) he proposes
    a scheme similar to what you did but more specific, fleshed-out and
    (IMO) convincing. It revolves around the notion of "agents" or
    "agency" which Tomasello defines as a system that achieves goals via
    a feedback-control mechanisms where the system perceives aspects of
    the environment, compares them to the desired goal, engages in
    behaviors meant to bring it closer to the goal, checks the
    environment again, and loops this way until the goal is achieved.

    His parallels to your steps might be:

    1) rudimentary nervous systems evolve that coordinate perception with
    behavior on a stimulus-response basis but not the feedback-control
    system involved in true agency.

    2) brains evolve that do implement such a feedback-control system
    [I'm not sure in the book he explicitly associates it with brains,
    but he does associate it with vertebrates which do have distinct
    brains as a feature so I'll say it's close enough for a paraphrase]

    He doesn't have a parallel to your step "3" because models of the
    world are implicit in all of the cognitive models he presents, in
    fact the differences in he calls "experiential niches" (which could
    be thought of as "world models") are pretty important. So for example
    he points out that with agency comes the mechanism of *attention*
    (i.e. you orient your perceptions in specific ways depending on what
    goals you're working towards and where you're currently at in working
    towards them) which implies experiences of an outside world and
    internal states that are or aren't in sync, full of things that are
    relevant/irrelevant, good/bad etc.

    4) He does bring in social living as a possible cause of his next
    step in the evolution of agency that he sets at early mammals: the
    appearance of a feedback-control system applied on top of the
    previous one to monitor and control the goal-seeking process itself
    (he sees social living as a driver for this because of the
    competition between peers would induce a benefit in more flexible,
    efficient decision-making). These early mammals would be able to not
    only perceive the world, pick a behavior to fulfill a goal and shut
    everything down in case of danger (as he describes lizards doing),
    but mentally play out possible behaviors and flexibly inhibit some in
    favor of others depending on which they anticipate working out best.
    This would introduce into the "world model" or "experiential niche"
    notions of goals, behaviors and cause-and-effect relationships
    between the two. I don't think he introduces models of other *minds*
    at this step per se although it's a bit like world models - they're
    implicit in several steps it's more of a question of what aspect of
    minds is being modelled.

    5) I do think there is still some similarity between your 5 and the
    next level of agency Tomasello suggests, although he sets it at great
    apes and you seem to set it at humans (then again many would argue
    great apes are conscious and I don't think Tomasello would disagree).
    He proposes an extra metacognitive feedback-control system monitoring
    the lower ones allowing control not only over the behaviors taken in
    service of a goal but of the goals themselves, and an understanding
    of cause-and-effect in general and not only as concerns one's own
    actions. It also induces an understanding of others as being agents
    with goals they behave in service of.

    6) While he does think of 5 as the ability to reason and I'm pretty
    sure would call it "consciousness" he does have 2 other steps
    separating humans from that, which involve collective agency. He
    proposes the critical difference between humans and other great apes
    is the ability to coordinate as part of a group that itself fits the
    criteria for being an agent - with collective goals, the ability to
    monitor their completion and act and self-regulate in service of
    them. He sees this as coming in two parts, first the ability to
    coordinate pairwise to achieve specific tasks (somewhere in hominid
    evolution - he gives several examples illustrating how strikingly
    worse chimpanzees are at basic cooperation than even human children)
    and then the ability to function as part of a larger community with
    shared norms that allow coordination with strangers (which he sets
    early in the evolution of our own species). He talks about this
    inducing a kind of triple mental model of agency, the "self" agent
    (the individual's goals, parallel to the sense of agency of other
    great apes), the "role" agent (the goals implied by one's role in
    some collective enterprise) and the "collective" agent (the goals of
    the collective enterprise itself). He then talks about how various
    aspects of our experience like culture, morality etc follow from that.


    I think it's interesting how this suggests a difference between
    having a model of one's own mind, having a model of others' minds,
    and having a model of *mind in general* that's then applied to
    oneself and others. "Models of the world" and "models of the mind"
    really collapses a lot of functionality and variability and I think
    Tomasello's model does a better job of separating out different
    potential strands and honing in on those that actually account for
    how we resemble and differ from other species.


    I also like how this model justifies that the last step, and only the
    last step, is truly self-reflective. All the other steps involve
    taking a system at a certain level of agency and adding a
    monitoring/control level, resulting in a system that's aware of
    itself *as a system of the lower level*. That last step is the only
    one that involves the system monitoring/controlling a level *above*
    itself, and indeed being able to monitor/control any arbitrary system
    of agency at all (given any combination of humans can display
    collective agency and a human can be part of multiple collective
    agencies at any given time). Meaning the recursion ends there, it's
    the only agent model that can model itself as being the level it is.

    Does he give a definition of consciousness? It sounds like he sees the
    "collective" agent as an essential part of it. I don't doubt that it
    is essential for humanity's achievements, but I'm not convinced it is
    necessary for consciousness. I still like my definition of
    consciousness as having a mental model of one's own mind.


    The book isn't about consciousness; it's about the evolution of agency
    and it doesn't conflate agency with consciousness, agency as it defines
    it is completely different. I think it has very obvious and clarifying implications on the evolution of consciousness but the notion that the "collective agent" is essential to consciousness as we experience it is
    my extrapolation, not his. I'd guess he's in the "there are many kinds
    of consciousness" camp and I'd further guess that he thinks of great
    apes at least as "fully" conscious like us.


    For me it's a bit like I said in another thread recently - when I look
    at my own consciousness I feel that the self-reflective "this is what happened and here's how I think of it" is a very important part of it,
    and that an existence that just had the experiences of the moment
    without the integrating, looking-back part wouldn't be fully conscious
    even if *in us* it's obviously an integral part of our conscious state.
    And I wouldn't be surprised if that self-reflective part really does
    occur only at part 6.


    "Having a mental model of one's own mind" is all well and good but
    "model" is never 1:1 and plenty of simplified representations we could
    call "models" obviously don't suffice so it just pushes the question
    back to "what kind of model?".


    I might have another post giving more detail on what Tomasello says
    about experiential niches later.


    In the chapter "Ancient Vertebrates as Goal-Directed Agents" chapter:

    "Organisms need to perceive only those aspects of the environment that
    are relevant for their actions. A thermostat senses only temperature
    because that is all it needs to perceive to do its job. C. elegans
    perceives nutritious and noxious chemicals because that is all it needs
    to perceive to obtain food. A lizard perceives many things because that
    is what it needs to perceive to direct and control its various effective actions. The organism's action capabilities thus determine its
    experiential world. (See J. J. Gibson's 1977 argument that an organism's perceptual world comprises 'affordances' for its actions.)"
    - page 34

    "Along with goal-directed agency, then, comes a fundamental shift in experiential niche. Organisms no longer just perceive attractive and
    repulsive stimuli; they attend to situations that are relevant for their
    goal pursuit. Situations that are relevant for their goal pursuit are of
    two types: (i) opportunities for goal attainment (e.g., the cricket is
    low in the bush); or (ii) obstacles to goal attainment (e.g., a snake is
    close by)."
    - pages 36-37


    In the chapter "Ancient Mammals as Intentional Agents" chapter:

    "This new form of psychological organization leads, once again, not just
    to new particular experiences but to a new type of experience. Because
    reptiles begin operating as simple goal-directed agents, they began experiencing the world not just in terms of punctate stimuli but in
    terms of situations of opportunity and obstacle. Beyond this, operating
    with an executive psychological tier created for mammals the possibility
    of experiencing their own perceptual and behavioral functioning.
    Reptiles and other goal-directed agents do not experience their own
    perceptions and actions executively, whereas mammals not only experience
    their own perceptions and actions executively but operate on them from
    that executive tier. Reptiles and other goal-directed agents are
    sentient of the outside world; mammals and other intentional agents are conscious of their own actions and perceptions.

    Conscious experience thus exists, in my view, only in creatures who
    operate with an executive tier of functioning, including most mammals
    and whatever nonmammalian species operate in this way".
    - pages 64-65

    After referring to Graziano and Piaget's ideas on consciousness:
    "In this view, however, it may be that mammals and other intentional
    agents are not conscious of the more central psychological processes of executive decision-making and cognitive control (i.e., beyond a global
    feeling of uncertainty in considering a decision); they are doing these
    things, but are not conscious that they are doing them. This is an
    interesting possibility, because, as I speculate further in the next
    chapter, being conscious of their own executive decision-making and
    cognitive control - from a second-order executive (reflective) tier - is precisely what great apes, as rational agents, begin to do."
    - pages 65-66

    Concluding the chapter:
    "But for now, the essential points are that (i) basic sentience in the
    sense of attention to, and experience of, the outside world is for
    agents a psychological primitive; and (ii) basic consciousness involves
    the organism attending to its own goals, actions, and experience from
    its executive tier of functioning. My hypothesis is that mammals and
    other intentional agents are conscious in this sense."
    - page 66


    It's harder to get a representative excerpt of his view of the
    consciousness of great apes in the next chapter, "Ancient Apes as
    Rational Agents", because he mostly goes over their specific abilities
    & the experimental evidence thereof, but if we figure he thinks
    consciousness is a mammalian trait, that might explain why he didn't
    feel the need to expand on that aspect.

    "In this case, the change in agentive organization characteristic of
    great apes - the emergence of a second-order tier of executive
    decision-making and control - led to the formation of an experiential
    niche structured by the causes underlying physical events and the
    intentions underlying agentive action, both organized into similar logical-inferential paradigms, enabling individuals to imagine causally
    and intentionally structured states of the world that are not directly perceived."
    - page 88


    Finally, the chapter "Ancient Humans as Socially Normative Agents":

    "Cognitively, to mentally coordinate with a collaborative partner,
    including via cooperative communication, early humans evolved to
    cognitively represent the world perspectivally: the exact same object or
    event may be construed as something different depending on the
    perspective one chooses to take. For example, this stick on the ground
    might be seen as a potential spear for us to use in our antelope hunt
    (if we need a weapon), or it might be seen as something that could make
    noise if stepped on (if we are worried about that), depending on our common-ground understanding of what is relevant to the situation at
    hand. Since the process of mental coordination in cooperative
    communication required individuals to take the perspective of others on
    their own perspective recursively - he *intends* for me to *attend* to
    that as a potential weapon - early humans came to cognitively represent
    the world both perspectivally and recursively (...). Great apes have not evolved recursively perspectival representations because they have not
    evolved to mentally coordinate with others in joint agencies".
    - page 103


    "As I have argued at other steps in my story, new agentive organization
    creates for individuals a new experiential niche. Reptiles come to
    experience situations of obstacle and opportunity; mammals come to
    consciously experience their own operational level of functioning; and
    great apes come to experience their own executive decision-making and
    cognitive control from a reflective tier of operation, which serves as
    the basis for apes' understanding of causal and intentional relations in
    their physical and social worlds. Early humans came to live in a social/cooperative experiential niche, structured by the shared worlds
    and recursive perspectives created by collaboration, joint attention,
    and common ground, and motivated by the partners' sense of respect and responsibility towards one another. Shared worlds experienced via
    recursive perspectives among mutually respectful and responsible
    cooperative agents: this is the new experiential niche inhabited by
    early humans."
    - pages 104-105


    Then, after discussing the extra level of social norms:

    "As a species of great ape, modern humans perceived and understood their physical and social worlds in terms of underlying causal and intentional forces. As descendants of earlier humans, modern humans perceived and understood reality in terms of different possible perspectives on it,
    and also in terms of newly normative social attitudes, like
    responsibilities, that bound individuals to their collaborative
    partners. But as they evolved into fully cultural beings, modern humans
    came to perceive and understand the world not just in terms of
    individual perspectives on things but in terms of the objective
    situation that was independent of any individual perspective. And they
    came to understand their group mates not just in terms of their responsibilities to one another but also in terms of their obligation to
    uphold the collective normative standards agreed to by everyone in the
    group. Modern humans came to inhabit an objective-normative world."
    - page 114


    As you might notice, the quotes at this point barely mention
    consciousness at all - like I said, I take it that Tomasello thinks
    all mammals are conscious, and so he doesn't feel the need to talk
    about those later tiers of agency in terms of consciousness at all,
    but only in terms of what the organisms are conscious of. But I stand
    by the things I said: in disagreement with him, I do think it's
    completely plausible that those last tiers are key to at least one of
    the things we call "consciousness" - that, for example, when we have a
    sense of "intending to do something" that both Tomasello & Anil Seth
    would argue all mammals have, we're actually mobilizing one of the
    "higher tiers" that other mammals *don't* have. I don't think this
    detracts from their view that consciousness comes in levels and
    subcomponents that other life shares in, and that justify calling
    various animals "conscious"; I just find it more likely than they seem
    to allow in their books that there is a kind of consciousness - one we
    sometimes call simply "consciousness" but should maybe here call "full
    consciousness" - that only we have.
