I think we do logically infer the mental states of others, but most of it isn't conscious. We've been a social species for a long time. We've likely built up a lot of instinctive reactions to the behavior of others, which are amended and adjusted as we gain experience in the world. It seems like we lean heavily on that unconscious toolset.
The drawback is that sometimes it can mislead us, particularly if someone is from another culture or ethnicity. The way a lot of people in the West can misread eyes with heavy epicanthic folds as signaling boredom or cynicism comes to mind.
Of course, it's also subject to being fooled by deliberate acting on someone's part, assuming they're skilled enough to be convincing. I'm sure acting in a misleading way is as old as social interactions.
The unconscious is doing a lot of logic while I'm away!
Even when you're present. As an experienced reader, I'm sure the words in this comment are effortlessly entering your inner speech. The amount of work your brain is doing to make that happen is pretty impressive. (Learning to read involves repurposing the face recognition region of one of your brain's hemispheres, at least for those who learn young.)
I agree: the inference by analogy theory feels convoluted and doesn't track with my own experience of empathy.
For one thing, I feel for things as different in behaviour from myself as a tree or a rock or the ocean. I can't help it. To me, a stormy sea feels angry and tumultuous, and a sick tree feels sad. I can't be alone in this either, because our language, and especially our poetry, is littered with references that ascribe feelings to such things. We all intuitively grasp their meanings.
Although perhaps this is a product of simulation, or something like it. Not quite explicitly imagining myself in their shoes, but in having any idea of them in my mind, my mind might be said to become them, following Aristotle's idea that "the soul is in a way all things", and that we know things by their forms being imprinted upon our minds. And if we combine this with Aristotle's notion that the soul is the form of a living thing, then knowing a living thing (qua that kind of living thing) might be said to involve its soul truly existing within your own. Which also adds a nice angle to his comment that friendship is "a single soul dwelling in two bodies".
Coming from a different angle, it doesn't make much sense for us to evolve to have to infer the existence of other minds. It would be much simpler to evolve to intuitively/instinctively grasp the minds and intentions of others, perhaps even before we evolved to introspect and understand our own feelings.
We might also consider that certain behaviours transmit feelings without any need for analogy or inference to understand them. A baby's cry is an unpleasant noise by design, so its pain is transmitted almost directly to those around it, while laughter is a pleasant noise by design. Empathy may begin with directly transmitting one's own feelings in such a way.
You've anticipated many points I'm making in my next post, even the point about the crying baby! Good point about the animistic character of poetic description and how it needs no further explanation, as we naturally intuit its meaning. And yes, I agree it seems much simpler to have evolved to intuit other minds directly rather than rely on logical inferences. The World-Soul may stand in a kind of friendship with us and we with it and each other. Who knows?
> Maybe we know other minds by imaginatively walking in the other’s shoes. To do this, we simulate in our own minds what it’s like to be them in their situation. This theory helps us explain other minds in a more flexible way.
This sounds likely to me. Context plays a bigger role here than we typically realize.
Have you ever caught sight of someone in a crowd and seen that their face was contorted, but you couldn’t tell whether they were laughing or crying? To get to the bottom of it, you rely on circumstantial cues. You might look for potential causes nearby: the people they’re closest to, how those people are interacting with them, or whether those people are in a similar state themselves.
But I think social interpretation is also more instinctual than we typically imagine. I’m sure you’ve seen the ridiculously complex courtship displays of some male birds. It’s hard to imagine the female bird witnessing such a display for the first time and inferring anything from it, and it’s also hard to imagine simulation theory doing much of the work. Instinct or evolved responses must be doing a lot of heavy lifting. I realize people aren’t birds, but maybe we’re more like other animals in this respect than we admit, and some of the reasons we give ourselves for interpreting people’s feelings the way we do are bolted on post hoc.
Yes, I know what you mean about the ambiguity between laughing and crying. I think you've just given me a better story for the evolution of tears than what I found on the internet—it could be that tears give us a way to distinguish between the two. But who knows. There's always a way to explain why things evolved the way they did if you look hard enough.
Yeah, it's hard to imagine a female simulating a male bird's courtship behavior. Bringing up animals is a good move, I think. A lot of these theories seem anthropocentric and overly rational.
Knowing someone's mind might be fun, but beyond this I don't know what use it is. It doesn't change the validity of their argument.
Well I for one wouldn't want to be in this world alone! There are others we care about, surely, and not just in order to assess the logic of their arguments?
Oh Tina I'm referring to the EXTREMELY common fallacy of determining the validity of an argument not on the evidence, but on the arguer's state of mind.
Ah, I see! Tone of voice didn't come through to me on that one. :)
Interesting observations that further reinforce my conjecture (admittedly a fixation) that meaning-making can't be abstracted from sensations, feelings, subjective perceptions, or, in short, consciousness. This also clarifies why, despite all the progress, AI systems struggle to recognize emotional facial expressions with human-level efficacy (e.g., see: https://osf.io/preprints/psyarxiv/kzhds_v1). While this doesn't mean that AI will never achieve such recognition, it is not surprising in hindsight. Computers lack the ability to engage in introspection and cannot access inner lived sensations. As a result, meaning without conscious experience will always remain a best guess or a curve-fitting procedure based on inference and pattern recognition over gazillions of images and data points that humans don't need in order to perform the same task.
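To make that "curve-fitting" point concrete, here is a minimal sketch, purely for illustration: the "faces", labels, and sizes are hypothetical stand-ins rather than any real system or dataset, but it shows how a best guess can be produced from resemblance alone, with nothing in the process that corresponds to introspection or inner sensation.

```python
# A purely illustrative sketch of emotion "recognition" as bare pattern
# matching: label a new face with whatever stored example it most resembles,
# pixel by pixel. The "faces" here are random stand-ins, not a real dataset,
# and no real system is being reproduced.
import numpy as np

rng = np.random.default_rng(0)

n_stored, n_pixels = 1000, 32 * 32               # pretend: 1000 labelled 32x32 faces
stored_faces = rng.normal(size=(n_stored, n_pixels))
stored_labels = rng.choice(["angry", "happy", "neutral"], size=n_stored)

def classify(face):
    """Return the label of the nearest stored face (1-nearest-neighbour)."""
    distances = np.linalg.norm(stored_faces - face, axis=1)
    return stored_labels[int(np.argmin(distances))]

new_face = rng.normal(size=n_pixels)
print(classify(new_face))                        # a best guess from resemblance alone
```

Scale the stored examples up from a thousand to gazillions and you have the flavour of the approach being described: more data, but the same guessing by resemblance.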
Yes! The AI consciousness fervor reminds me of the debate about whether language determines thought or the other way around. Except here we have this idea that language IS thought. Here the thinking goes: "If we can hack language (by whatever process necessary), then we have consciousness!" But this fails to take into account that a different process might be required to arrive at the kind of meaning that exists for conscious beings. What is it that accounts for our language acquisition in the face of Chomsky’s “poverty of the stimulus” and Quine’s “indeterminacy of translation”? The brute force statistical analysis required for AI just highlights the point that children don’t need to gorge themselves on every word ever written to pick up language. I suspect whatever is behind natural language acquisition is more than a language instinct or language organ in the brain or genetic “programming”.
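As a toy picture of what "brute force statistical analysis" of language amounts to, here is a hedged sketch with a placeholder corpus, borrowing nothing from any actual model: a bigram counter that "predicts" the next word purely from tallies of what followed what, which is statistics without anything resembling meaning.

```python
# A toy picture of "brute force statistical analysis" of language: a bigram
# counter that only tallies which word followed which. The corpus is a
# placeholder; real models differ enormously in scale and architecture.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                       # tally every observed word pair

def predict(prev_word):
    """Return the most frequently observed next word -- statistics, not meaning."""
    return counts[prev_word].most_common(1)[0][0]

print(predict("the"))                            # "cat", purely because it was counted most often
```

A child obviously isn't doing anything like this over every word ever written, which is the contrast being drawn.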
I have always found the idea that language causes consciousness to be utterly implausible, and I often wondered how even smart people like Bertrand Russell could believe in such a weird notion. I have read that recent research has disproven this idea. Anyway, here’s my perspective on LLMs and language: https://philpapers.org/rec/MASNQN
Interesting. Thanks for the link! I'll check it out.
I actually saw "Behave" by Robert Sapolsky, the acclaimed neuroscientist and primatologist, discounted in a bookstore the other day. I should buy it.
I don't think I'd be able to stomach it. Let me know what you think if you do end up reading it.
Why?
Oh, what he's known for is all that no-free-will determinism stuff. I never looked into him because I didn't see anything new or interesting about his point of view. But let me know if I'm wrong!
There's "knowing through behaviour" and then there's "knowing through inference."
If everything is behaviour -- which seems to be a popular position these days -- then it makes sense that whatever is known is known through behaviour. Consider two electrons in proximity. They behave in certain ways. If we were to say, speaking loosely, that one of them acts as if it "knows" the other is nearby, we would really be saying that it is responding in predictable ways to the behaviour of the other. This is knowing through behaviour, or more generally, perhaps, "knowledge as relationship."
If on the other hand we were to say, again speaking loosely, that it acts as if it "infers" the other is nearby, we would be saying that it is responding to calculations derived from observations it has made concerning its immediate environment. This is knowing through inference, or more generally, perhaps, "knowledge as cognition."
Speaking loosely of electrons may not be to everyone's taste, but it's instructive when we speak less loosely of how one person might "know" an angry person is nearby. They see anger in the face and gestures; they hear it in the voice; perhaps they even detect it unconsciously through pheromones. The behaviour affects them immediately and viscerally, typically inducing in them a sense of fear or alarm. This is knowing through behaviour, and it does not involve cognition. Even a babe in arms is alarmed by displays of anger.
If on the other hand the person were disconnected from such knowledge, they might have to consider the angle of the brows, the curve of the mouth, the shape of the clenched hands, the shaking of the limbs, the pitch and volume of the voice, and so on, and infer from these, dispassionately, that they were in the presence of an angry person. Then perhaps they could decide, based on their inference, whether to feel alarm.
Now it's possible that what we think of as the visceral, immediate type of knowledge is actually inferential, and that the inference is carried out unconsciously and at great speed by mechanisms hard-wired into the organism. But this threatens to abuse the notion of "inference," which is usually reserved for operations involving premisses and conclusions under the patient and dispassionate control of logical operations. We would be obliged to say that two cats in a fight were engaged in inferential activity. "Inference" may be the wrong word. It might be better to say that there is a kind of connectedness built into organisms, by which they "know" one another.
"They see anger in the face and gestures; they hear it in the voice"
This seems closer to the truth to me. It's IN the face and IN the voice. And yet, how? How do we come to see such and such an arrangement in the face and in the voice as angry or happy? It's rather mysterious.
"Now it's possible that what we think of as the visceral, immediate type of knowledge is actually inferential, and that the inference is carried out unconsciously and at great speed by mechanisms hard-wired into the organism. But this threatens to abuse the notion of "inference," which is usually reserved for operations involving premisses and conclusions under the patient and dispassionate control of logical operations."
Yes, exactly. To me it seems the inference by analogy theory ignores the experiential and simply cooks up an explanation after the fact. It makes sense when you don't think about it too much. But the more you think about it, the less sense it makes. You have to assume all of these logical operations are happening at lightning speed and unconsciously. Well, that could be the case, but we can never know. To me, that kind of explanation fails to satisfy.
"We would be obliged to say that two cats in a fight were engaged in inferential activity."
That's a good one!
"There is a kind of connectedness built into organisms, by which they "know" one another."
I like it!
"...inference by analogy theory ignores the experiential and simply cooks up an explanation after the fact."
I think you nailed that one, Tina. That's the problem with empiricism, isn't it? It's always after the fact. Unfortunately, behaviorism is also a form of empiricism, whereas the experiential is in the moment, the now.
Ignoring the experiential aspect of our being and cooking up a story that the universe is merely mechanical is even more lame than the nonsense of religious traditions. It's hard to find a middle ground, isn't it? Nevertheless, that middle ground is what we should be looking for, not the polar extremes.
Thanks! And as for the middle ground, I couldn't agree more!
No two events or situations are identical in any measurable property, so two 'minds' identifying different events as 'alike', 'of the same kind', or as involving 'the same thing/being/phenomenon' presupposes a common standard for treating sensory differences as belonging to the same identity, a standard which is already metaphysical (not sensed but thought, as 'meaning') and social (a common thought). Put another way, the pure difference of the physical/sensory is conditional on metaphysical identity (of the same 'mind' and the same 'being' at two different times), which is thus logically prior to the alleged conditions of its evolutionary emergence.
If all meaning is intrinsically social and metaphysical then mutual recognition of 'another mind' is a precondition of the meanings we are capable of identifying as being, including the being that 'has' a mind.
Yes! Very good points.