My views on the evolution of consciousness are starting to gel.
1. Rudimentary nervous systems evolve.
2. Brains evolve, capable of memory and of decisions other than reflex.
3. Those decisions probably work better if the brain has a model of the
world to work with. So such a model evolves.
4. Some creatures live socially. Their brains need a model of that
important aspect of the world: the fellow beings one lives with,
including how they think.
5. So we've now got a model of minds. How about if we apply it to *our
own mind*? That might make our thinking about interactions with others'
minds more efficient. (A toy sketch of this step follows the list.)
6. Voilà! Consciousness!
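To make steps 4 and 5 concrete, here is a toy sketch (every name in it
invented for the occasion, not a claim about brain mechanisms) of the
same mind-model pointed first at others, then at oneself:

    # One and the same "model of minds" routine, aimed first at
    # others (step 4), then at one's own mind (step 5).
    def predict(mind_model, situation):
        """Guess what a modelled mind will do in a situation."""
        return mind_model.get(situation, "unknown")

    # Step 4: a crude model of a fellow creature's mind.
    rival_model = {"food nearby": "grab it", "predator nearby": "flee"}
    print(predict(rival_model, "food nearby"))    # -> grab it

    # Step 5: the same machinery, re-aimed at one's own mind.
    self_model = {"food nearby": "grab it", "rival watching": "wait"}
    print(predict(self_model, "rival watching"))  # -> wait
    # The self-model need not match how the brain actually decides,
    # which is the testable point raised below.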
Does that make sense to people? Is it time for me to write a book on
the subject? (Do you think publishers will want the book to be more than
106 words long?)
There's also the problem of testing it. I'm open to suggestions there,
too. Step 4 implies that the model of how we think need not agree with
how we actually think, much as our mental model of the world is flat,
not spherical. This has at least some confirmation (e.g., our blindness
to many of our own biases). More would be better.
On 30/04/2024 01:36, Mark Isaak wrote:
[...]
Have you seen my thread on Michael Tomasello's "The Evolution of
Agency"? I think the book would interest you. If you want more detail,
I have a post somewhere in that thread summarizing its arguments; I'd
be happy to hear your take.
On 5/2/24 6:21 AM, Arkalen wrote:
[...]
I have seen it, but I don't remember particular points.
I just came across a reference to another book, Michael S. A.
Graziano's _Consciousness and the Social Brain_, which appears to make
an argument similar to mine above (particularly steps 4 and 5).
On 02/05/2024 19:03, Mark Isaak wrote:
[...]
Basically (if you don't mind me going on about it again) he proposes a
scheme similar to yours but more specific, more fleshed-out, and (IMO)
more convincing. It revolves around the notion of "agents" or "agency",
which Tomasello defines as a system that achieves goals via a
feedback-control mechanism: the system perceives aspects of the
environment, compares them to the desired goal, engages in behaviors
meant to bring it closer to the goal, checks the environment again, and
loops this way until the goal is achieved.
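If it helps, here's a throwaway Python sketch of that loop - the code
and all its names are mine, not the book's:

    # Minimal feedback-control agency: perceive -> compare to goal ->
    # act -> perceive again, looping until the goal is met.
    def run_agent(perceive, act, goal_met, max_steps=100):
        for _ in range(max_steps):
            state = perceive()
            if goal_met(state):
                return state      # goal achieved
            act(state)            # behavior meant to close the gap
        return None               # gave up (a real agent may not)

    # Example: a thermostat-like agent seeking warmth.
    world = {"temp": 10}
    final = run_agent(
        perceive=lambda: world["temp"],
        act=lambda t: world.__setitem__("temp", t + 1),  # warm a bit
        goal_met=lambda t: t >= 20,
    )
    print(final)  # -> 20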
His parallels to your steps might be:
1) rudimentary nervous systems evolve that coordinate perception with
behavior on a stimulus-response basis, without the feedback-control
system involved in true agency.
2) brains evolve that do implement such a feedback-control system [I'm
not sure the book explicitly associates this with brains, but he does
associate it with vertebrates, which do have distinct brains as a
feature, so I'll say it's close enough for a paraphrase]
He doesn't have a parallel to your step 3, because models of the world
are implicit in all of the cognitive models he presents; in fact the
differences in what he calls "experiential niches" (which could be
thought of as "world models") are pretty important. So for example he
points out that with agency comes the mechanism of *attention* (i.e.,
you orient your perceptions in specific ways depending on what goals
you're working towards and where you currently are in working towards
them), which implies experience of an outside world and internal states
that are or aren't in sync, full of things that are relevant/irrelevant,
good/bad, etc.
4) He does bring in social living as a possible cause of his next step
in the evolution of agency, which he sets at early mammals: the
appearance of a feedback-control system applied on top of the previous
one to monitor and control the goal-seeking process itself (he sees
social living as a driver for this because competition between peers
would create a benefit to more flexible, efficient decision-making).
These early mammals would be able to not only perceive the world, pick
a behavior to fulfill a goal, and shut everything down in case of danger
(as he describes lizards doing), but also mentally play out possible
behaviors and flexibly inhibit some in favor of others, depending on
which they anticipate working out best. (I've put a toy sketch of these
stacked control layers after this list.) This would introduce into the
"world model" or "experiential niche" notions of goals, behaviors, and
cause-and-effect relationships between the two. I don't think he
introduces models of other *minds* at this step per se, although it's a
bit like world models - they're implicit in several steps; it's more a
question of what aspect of minds is being modelled.
5) I do think there is still some similarity between your 5 and the
next level of agency Tomasello suggests, although he sets it at great
apes and you seem to set it at humans (then again, many would argue
great apes are conscious, and I don't think Tomasello would disagree).
He proposes an extra metacognitive feedback-control system monitoring
the lower ones, allowing control not only over the behaviors taken in
service of a goal but over the goals themselves, and an understanding
of cause and effect in general, not only as concerns one's own actions.
It also induces an understanding of others as being agents with goals
they behave in service of.
6) While he does think of 5 as the ability to reason, and I'm pretty
sure he would call it "consciousness", he does have two other steps
separating humans from that, which involve collective agency. He
proposes that the critical difference between humans and other great
apes is the ability to coordinate as part of a group that itself fits
the criteria for being an agent - with collective goals, the ability to
monitor their completion, and the ability to act and self-regulate in
service of them. He sees this as coming in two parts: first the ability
to coordinate pairwise to achieve specific tasks (somewhere in hominid
evolution - he gives several examples illustrating how strikingly worse
chimpanzees are at basic cooperation than even human children), and
then the ability to function as part of a larger community with shared
norms that allow coordination with strangers (which he sets early in
the evolution of our own species). He talks about this inducing a kind
of triple mental model of agency: the "self" agent (the individual's
goals, parallel to the sense of agency of other great apes), the "role"
agent (the goals implied by one's role in some collective enterprise),
and the "collective" agent (the goals of the collective enterprise
itself). He then talks about how various aspects of our experience,
like culture, morality, etc., follow from that.
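Here's the toy sketch promised above - again my own invention, not
Tomasello's, just to show the layering:

    # Level 1: fixed stimulus-response, no goal, no loop.
    def reflex(stimulus):
        return {"light": "approach"}.get(stimulus, "ignore")

    # Level 2: feedback control - take the first behavior that moves
    # the state toward the goal (the "lizard" agent).
    def feedback_step(state, goal, behaviors):
        for b in behaviors:
            if abs(b(state) - goal) < abs(state - goal):
                return b
        return None

    # Level 3: a monitor on top - mentally play out each behavior,
    # inhibit the rest, keep the one expected to work out best.
    def meta_step(state, goal, behaviors):
        simulated = {b: abs(b(state) - goal) for b in behaviors}
        best = min(simulated, key=simulated.get)
        return best if simulated[best] < abs(state - goal) else None

    def small_step(s): return s - 1
    def big_step(s):   return s - 4
    print(feedback_step(5, 0, [small_step, big_step]).__name__)  # small_step
    print(meta_step(5, 0, [small_step, big_step]).__name__)      # big_step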
I think it's interesting how this suggests a difference between having
a model of one's own mind, having a model of others' minds, and having
a model of *mind in general* that's then applied to oneself and others.
"Models of the world" and "models of the mind" really collapse a lot of
functionality and variability, and I think Tomasello's model does a
better job of separating out different potential strands and homing in
on those that actually account for how we resemble and differ from
other species.
I also like how this model justifies that the last step, and only the
last step, is truly self-reflective. All the other steps involve taking
a system at a certain level of agency and adding a monitoring/control
level, resulting in a system that's aware of itself *as a system of the
lower level*. That last step is the only one that involves the system
monitoring/controlling a level *above* itself, and indeed being able to
monitor/control any arbitrary system of agency at all (given that any
combination of humans can display collective agency, and a human can be
part of multiple collective agencies at any given time). Meaning the
recursion ends there: it's the only agent model that can model itself
as being at the level it is.
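The recursion is easy to see if you write the structure out as a toy
type (my sketch, nothing from the book): a collective is itself an
agent whose members are agents, so one type covers individuals, pairs,
communities - and, in principle, itself.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        members: list["Agent"] = field(default_factory=list)  # empty => individual

        def describe(self, depth=0):
            kind = "collective" if self.members else "individual"
            print("  " * depth + f"{self.name} ({kind})")
            for m in self.members:
                m.describe(depth + 1)

    alice, bob = Agent("Alice"), Agent("Bob")
    hunt = Agent("hunting pair", [alice, bob])      # pairwise coordination
    tribe = Agent("tribe", [hunt, Agent("Carol")])  # community with norms
    tribe.describe()
    # The same mind that models Agent can model any of these levels,
    # including the collectives it is itself a member of.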
On 5/3/24 11:21 AM, Arkalen wrote:
[...]
Does he give a definition of consciousness? It sounds like he sees the "collective" agent as an essential part of it. I don't doubt that it is essential for humanity's achievements, but I'm not convinced it is
necessary for consciousness. I still like my definition of consciousness
as having a mental model of one's own mind.
On 06/05/2024 16:45, Mark Isaak wrote:
[...]
The book isn't about consciousness; it's about the evolution of agency,
and it doesn't conflate agency with consciousness - agency as it
defines it is completely different. I think it has very obvious and
clarifying implications for the evolution of consciousness, but the
notion that the "collective agent" is essential to consciousness as we
experience it is my extrapolation, not his. I'd guess he's in the
"there are many kinds of consciousness" camp, and I'd further guess
that he thinks of great apes, at least, as "fully" conscious like us.
For me it's a bit like I said in another thread recently - when I look
at my own consciousness, I feel that the self-reflective "this is what
happened and here's how I think of it" is a very important part of it,
and that an existence that just had the experiences of the moment,
without the integrating, looking-back part, wouldn't be fully
conscious, even if *in us* it's obviously an integral part of our
conscious state. And I wouldn't be surprised if that self-reflective
part really does occur only at step 6.
"Having a mental model of one's own mind" is all well and good but
"model" is never 1:1 and plenty of simplified representations we could
call "models" obviously don't suffice so it just pushes the question
back to "what kind of model?".
I might have another post giving more detail on what Tomasello says
about experiential niches later.