This came up as an answer to a question in Another Place. It grew unnecessarily long. Some might find it interesting reading.

[Quoted: consciousness has been selected for by evolution. We have big brains and are conscious; therefore, big brains are needed for consciousness, because otherwise the brain wouldn't have got so big.]

Ah, hang on, there's a false assumption in there. It's the /post hoc ergo propter hoc/ thing.

Yes, we evolved large brains because we needed large brains for some reason - this must be the case, because brains are "expensive": fragile, slow to mature, a vast drain on resources, a major problem for female pelvic architecture in getting the grotesquely enlarged heads of infants out, etc. etc.

/But/ there is no direct evidence that /consciousness/ is *why* we have large brains.

Counter-example: dolphins have very large brains for their body mass, too. (So, incidentally, do mice.)

But in dolphins, the evidence appears to be that they've got whacking great brains for reconstructing a coherent "view" of their environment out of sonar echoes, which is not a great medium for doing it.

Bats have it easier; they use sonar to avoid hitting things and to catch and try to eat anything small that's moving. Throw small pebbles in front of bats and they grab them and try to swallow them. They're not very discriminating, and they live in an environment where sound travels poorly and they get relatively few echoes back. Plus, they can use smell too.

Dolphins' sonar is vastly more sophisticated than bats'. They can see inside objects and so on.

There is reasonably good evidence that lots of that brain mass is for auditory processing and resolving sonar impulses into shapes. It's their visual cortex, so to speak.

We have really big frontal lobes; AIUI, dolphins don't.

The real question is: OK, so we have big brains, but why?

It could be - I'm just throwing up some ideas here - that as our ancestors left the forests (or the forests left them) and became terrestrial, became hunters, became tool-users, they evolved big brains to allow more versatile behaviours in a broader range of environments than forest-dwelling apes. Proto-humans coped with desert, savanna, forest, riparian and coastal environments, with a habitat range from the tropics to the temperate zones, with hunting and gathering and tracking and following herd migrations, with memorising what was and wasn't good to eat in widely differing habitats, with being able to throw rocks accurately, shape stone tools, all that sort of thing. All long before language. Indeed, the fossil record is scant in such fine detail, but it could be that we evolved language *long* before we evolved sentience, and that language itself selected for big brains.

[Posit: sufficiently complex, completely non-intelligent zombies could be among us now. So long as they mimicked our behaviour perfectly, we couldn't tell if they were actually conscious or not.]

The zombie thing - well, I've explained my position already, but I think there are some good, neglected explorations of this in fiction. I'm a big fan of literary SF and it's been exploring ideas about artificial intelligence and modified or different intelligence for many decades now, probing into territories that my limited reading of Minsky and Dennett and so on leads me to believe the academics have not explored yet.

There are things in ordinary human intercourse that a "zombie" simply could not do. One way to explore this is to ask what a highly intelligent but non-self-aware AI could or could not do. Getting some kinds of jokes is part of it: it requires a theory of mind, something chimpanzees have been demonstrated to possess. One example: /Queen of Angels/, a novel by Greg Bear. A test is set to determine whether an AI, carefully designed to be capable of self-awareness, has achieved it. Despite many efforts over the years, the AI, "Jill", has failed to do so.

The test: to see if the machine understands a joke. It's not a very good joke. It goes:

Q: Why did the self-aware being look in the mirror?
A: To get to the other side.

Events transpire in the novel that cause a different AI to become sentient - indeed, schizophrenic. The original AI, in studying and modelling this, copies the "problem", and the result is that it now understands the joke.

There is a widespread mechanistic assumption in much scientific thinking: that systems - that the universe - are deterministic and predictable, and that anything which can be modelled well enough can be predicted. For instance, that a sufficiently exhaustive set of rules and actions would enable someone who does not understand Chinese to converse in it without ever translating it - Searle's Chinese Room.

But this is not so; it doesn't work like that. For decades, quite trivial computer programs have exhibited behaviour that cannot be predicted without running them. From simple cellular automata to fractals, in many cases the only way to find out what a simple algorithmic process will do is to do it. It cannot be modelled and its results determined, estimated or even guessed at in advance; a program of a few dozen machine instructions can produce patterns that are completely chaotic and unpredictable, even though they are completely repeatable.
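By way of illustration - this sketch is mine, not part of the original argument - here is Wolfram's Rule 30, a one-dimensional cellular automaton, in a few lines of Python. The update rule is trivial and completely deterministic, yet the pattern it produces is chaotic enough that, as far as anyone knows, the only way to find out what it does is to run it:

RULE = 30  # Wolfram rule number: the 8 possible 3-cell neighbourhoods map to new cells via its bits

def step(cells):
    """Apply Rule 30 once to a row of 0/1 cells (edges padded with zeros)."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2])) & 1
            for i in range(len(cells))]

width = 63
row = [0] * width
row[width // 2] = 1  # start from a single live cell in the middle
for _ in range(32):
    print("".join("#" if c else "." for c in row))
    row = step(row)

Run it twice and you get the same triangle of apparently random noise both times: repeatable and deterministic, yet there is no known shortcut that predicts the middle column without doing the work.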

Consciousness is not a miracle, it is not even, I submit, a particularly deep or interesting mystery. Animals living in complex social groups need to model one another's behaviour to interact with one another successfully and for the group to prosper. You need to know what your brother will do under various circumstances if you're going to cooperate with him. If you're an ant, your range of actions is very simple and little modelling is required; it's a one-in, one-out sort of model.

Small mammals are more complex. Ones who live in cooperative groups, such as meerkats, need to have signals to coordinate their activities. Meerkats have a simple symbolic language of a dozen or more "words", calls which indicate /what kind/ of predator a scout has spotted, for instance. If it's a flying predator, you take cover; if it's a surface one that can't climb, you go up a tree; if it's a big surface one, you flee down a small hole, and so on. There is thus strong selective pressure to have a sort of look-up table in their heads, if you will pardon a computing metaphor, of how their kin will react to different signals.
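To make the metaphor concrete - and this is just my toy illustration, not anything from the meerkat literature - such a look-up table is about the simplest data structure there is: each call maps straight to the one stereotyped reaction it produces, and no model of anybody else's mind is needed at all:

# Purely illustrative: alarm call -> stereotyped escape behaviour.
ALARM_RESPONSES = {
    "aerial predator": "dive for cover",
    "ground predator, can't climb": "go up a tree",
    "large ground predator": "flee down a small hole",
}

def respond(call):
    """Look up the fixed response to an alarm call; otherwise just stay alert."""
    return ALARM_RESPONSES.get(call, "stand up and look around")

print(respond("aerial predator"))        # dive for cover
print(respond("large ground predator"))  # flee down a small hole

Nothing in that table requires a meerkat to model what its kin are thinking; it only has to know what they will do.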

Now, consider a more complex social group in a more varied environment: a chimp troop. Actually, because chimps are more complex, their signals and so on are not yet as well understood, but some fascinating observations have been made.

Chimps can signal one another about food. For instance, chimps taught ASL, American Sign Language, will use signs to indicate that they want certain types of food. Even honey bees can tell one another where food is and how far away; this is not a challenge for a chimp.

Some troops have been kept in closely-monitored enclosures and observed closely for years. In one wonderful film clip I have seen, the troop got certain treats at certain times - fruit, for instance. The keepers varied where it was placed in the enclosure and often made it hard to get at, to give the inmates a bit of a workout.

In one instance, despite a keeper being stealthy, one loner chimp saw where a bunch of bananas was being hidden in the enclosure. This individual was observed on CCTV.

The others heard the noise and came looking for the food.

The one who had seen it gestured to the others and told them where to go. /In the opposite direction./ In other words, it lied.

Then it scuttled off and had a feast on its own, briefly, before the others found it, gave it a whack and took the nosh.

The point being, lying is a behaviour associated with some degree of sentience. To lie, you have to know that another has a mental picture of the world, because you are manipulating that mental model to your own advantage. This demonstrates that a chimp knows that other chimps think as it does.

In other words, chimps have a basic theory of mind: they can model the thoughts of other chimps.

Once you get a big enough brain that you can model others' brains in your own, and derive advantage from doing this, then it is a short leap to model your own brain as well. Once you can do this, you are better equipped to predict the future based on memories of the past. You move on from simple cause-and-effect inference - if I do X, Y will happen - to "when condition A applied and I did B, C happened, but when I did B while condition D applied, C did not happen; E did instead", and so on. It's a simple tool, but it enables a degree of reasoning about the environment.
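Continuing the computing metaphor from earlier - and again, this sketch is mine, purely for illustration - the step described above is roughly the step from a flat action-to-outcome rule to a table keyed on both the surrounding condition and the action:

# Purely illustrative: flat cause-and-effect versus condition-dependent inference.
SIMPLE_RULES = {"shake branch": "fruit falls"}  # if I do X, Y happens

CONDITIONAL_RULES = {  # when condition A held and I did B, C happened...
    ("fruit is ripe", "shake branch"): "fruit falls",
    ("fruit is unripe", "shake branch"): "nothing happens",
}

def predict(condition, action):
    """Prefer the context-aware rule; fall back to the flat one if there isn't one."""
    return CONDITIONAL_RULES.get((condition, action),
                                 SIMPLE_RULES.get(action, "no idea"))

print(predict("fruit is ripe", "shake branch"))    # fruit falls
print(predict("fruit is unripe", "shake branch"))  # nothing happens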

I suspect chimps are, to a limited degree, sentient in the same way as humans; the evidence would appear to suggest it. They're pretty dumb but they think and they think about thinking.

There's not enough evidence to conclude whether dolphins do so. All they do, to quote Adams again, is fool around in the water and have a good time.

The Victorians took a terribly mechanistic view of animal behaviour: they treated animals as machines and tried to work out what inputs produced what outputs.

However, cellular automata show us that this approach stops working somewhere around the brain complexity of an earthworm. This does not mean earthworms think, by the way.

But as I discovered, to my dismay and cost, during the course of acquiring a biology degree, the mindset still holds: animals are machines; humans are special and do something different. Simple Occam's Razor suggests this isn't true. If you burn someone's finger and they howl and pull away, it's because it hurt; therefore, if you burn a fish's fin and it convulses and pulls away, it's probably because it hurt too, rather than because of one of a thousand complex intertwined stimulus-feedback loops.

I submit that there is a continuum, and it's a fairly smooth and continuous one, from animals with a literal handful of behaviours, like some tubeworm that extrudes its tentacles to filter feed when conditions are good, intermittently expels gametes and withdraws when it's touched and not a lot else, to a ruminant ungulate capable of cooperating with others to protect the herd, to a chimp which knows enough about other chimps to lie to them, to a human, which can not only perform complex symbolic communication of abstract concepts, but can also record these symbols for time-independent communication.

We're smarter than chimps, which are smarter than orangs, which are smarter than gorillas. But we're not *much* smarter. It's a matter of degree, a quantitative difference, not a qualitative one. And we still eat gorillas.

We have minds; this is demonstrable, though in the case of some humans, non-trivially so.

This leads to a whole range of behaviours which are difficult or impossible without minds. If something displays those behaviours, Occam tells us it has a mind. End of story. The main thing that makes humans think that their minds are very special and unique is [a] tradition and a strong cultural heritage, which still runs deep through the sciences, and [b] bigotry. To a lesser extent, it's that it is uncomfortable to think about how we behave towards lots of other creatures unless we tell ourselves they have no minds.

For about a century, we've been giving women the vote. For a couple of centuries, we've stopped trading in black people as property. We're still very close to our roots of profound and unsupported bigotry with respect to our own species. We still continually indulge in complete bigotry towards other species.

People like swimming in seas where sharks live, and they like eating fish, so it's all right to condemn thousands of pretty smart large-brained mammals, bigger than we are and quite close kin, to a slow and horrible death by drowning in our nets.

People like knowing products are safe, so it is legally required for companies to torture tens of thousands of medium-sized mammals to death to make sure that products don't harm us.

People don't empathise well with animals. It's probably a predator thing.

But since we are, in general, so full of the belief that we are special - the apex of some notional evolutionary pyramid, the top of some ladder, even though there is no such thing, no hierarchy at all - then we must be special.

I think that if the SETI program works and we do contact intelligent aliens - and the odds are /extremely/ good that they are out there - then, apart from basic maths, if even that, we will prove to be completely unable to recognise non-human intelligence even if it literally comes and stares us in the face. Because we are surrounded by varying degrees of intelligence on all sides and we can't see it.

People mock terns, which protect their nest but don't recognise their own eggs if they're moved 10cm away. They're not smart enough to see that *that thing there* is their baby and retrieve it.

And yet people with doctorates drive their children to school in 4x4s, because they are too stupid to see that they are poisoning their own children's future. It's exactly the same thing. Move outside the tiny range of expected situations and the much-vaunted intelligence fails utterly. It is a topic of humour: the absent-minded professor who knows encyclopaedic amounts about his subject but can't tie his own tie.

We aren't as smart as we think we are. Everything else is smarter than we think it is. We are blind to the difference.

And above all, intelligence and sentience are nothing particularly special. They are everywhere; they're an emergent property of complex lifeforms, at least on this planet.
