Consciousness as Middle-Management

Your conscious mind may not be doing anything all that interesting. No, not just you, but like, for everyone. From San Francisco State University:

Associate Professor of Psychology Ezequiel Morsella’s “Passive Frame Theory” suggests that the conscious mind is like an interpreter helping speakers of different languages communicate.

“The interpreter presents the information but is not the one making any arguments or acting upon the knowledge that is shared,” Morsella said. “Similarly, the information we perceive in our consciousness is not created by conscious processes, nor is it reacted to by conscious processes. Consciousness is the middle-man, and it doesn’t do as much work as you think.”

I have to say, this kind of freaked me out when I first read it; I had a knee-jerk revulsion to the idea of a more or less hapless consciousness. Upon consideration, though, it seems entirely reasonable. One really has to let go of the idea of a kind of mini-self in one’s head that does all the pondering and decision-making, and think of the mind more as the layers of an operating system. Some levels of thinking and processing are “higher” than others, but it’s all still merely reacting to input.

Morsella’s theory seems to me to be analogous to a kind of resource-allocation process in a computer, deciding how much power or memory to give to an application, or what parts of a chip to activate and to what degree (I am not an engineer so this may be a sloppy analogy). What we think of as our consciousness may simply be a process that takes in a stimulus, and then works to figure out how to respond.
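If it helps to make that analogy concrete, here’s a toy sketch in Python (mine, not Morsella’s, and every name in it is invented): a broker that neither generates content nor acts on it, but only relays a stimulus and divides a fixed attention budget among competing unconscious processes.

```python
def broker(stimulus, processes, budget=1.0):
    """Relay the stimulus and split attention; the broker does no 'thinking' itself."""
    # Each unconscious process bids on how relevant the stimulus is to it.
    bids = {name: bid(stimulus) for name, bid in processes.items()}
    total = sum(bids.values()) or 1.0
    # The broker only divides the budget in proportion to the bids;
    # any actual response comes from the processes themselves.
    return {name: budget * b / total for name, b in bids.items()}

# Hypothetical unconscious processes, each scoring a stimulus for relevance.
processes = {
    "motor":    lambda s: 0.9 if "loud noise" in s else 0.1,
    "language": lambda s: 0.8 if "speech" in s else 0.1,
    "memory":   lambda s: 0.3,
}

print(broker("loud noise behind you", processes))
# motor gets most of the budget, but the broker never decides what to *do*
```

The point of the sketch is only that a middle layer can be load-bearing without being creative.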

Another analogy for the conscious mind that rang true for me comes from Michael Graziano, who likened our awareness to a general’s miniature model of a battlefield, complete with little tanks and soldiers, made to represent what’s really out there so that the general can make decisions. But the general doesn’t have access to the “real” world, just the model he or she is presented with, and has to rely on that to decide how to allocate resources.
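Again purely as an illustration (my sketch of the analogy, not anything from Graziano, and the numbers are invented): the general’s decision is a function of the model alone, so when the model is lossy, the decision can be perfectly rational and still wrong about the actual world.

```python
class World:
    """The real situation; the general never sees this directly."""
    enemy_tanks = 10
    hidden_reserves = 5   # never makes it into the model at all

class Model:
    """The miniature battlefield: a lossy, simplified stand-in."""
    def __init__(self, world):
        # Incomplete intel: only some of the real tanks get spotted.
        self.tanks_spotted = int(world.enemy_tanks * 0.75)   # = 7

def general(model):
    """Decides how to allocate resources from the model alone."""
    return "reinforce" if model.tanks_spotted > 8 else "hold"

print(general(Model(World())))
# 'hold' -- even though, with 15 real tanks out there, the right call
# was 'reinforce'. The decision was reasonable given the model.
```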

So that’s us, isn’t it? No free will per se, no lofty sentience, just a data-crunching processor that says stop or go to a lot of other processes, relying on an incomplete simulation of the world in which it operates. No wonder we’re such a mess.


Animals Declared “Sentient” in New Zealand: Hard Questions Sure to Follow

Now who's sentient? (Photo credit: quinn.anya / Foter / CC BY-SA)

New Zealand has passed an amendment to its animal welfare law stating that animals are “sentient beings,” and the amendment seems to strengthen some measures that define how or in what situation an animal can be used for various purposes, such as medical experimentation. That’s good!

Though it’s not clear from the bill itself (as far as I can tell) what it means by “sentient.” No language in the wording of the bill spells it out, nor does it specify which animals possess sentience. The little bit of bloggy news coverage I’ve seen (all of which might as well be copy-and-paste jobs of each other) suggests the simple definition of the ability to perceive things, to have feelings, and to suffer. That doesn’t help me, really. I don’t mean to presume that this hasn’t been fleshed out by the relevant parties; I have no idea. But I sure as hell don’t think I could say for sure to what degree, say, a mouse feels or suffers versus, say, a chimpanzee.

Because there have to be degrees of sentience, right? If sentience were a binary thing, then we’d have a much bigger problem on our hands, with trillions of members of millions of species all now declared to have “feelings” and “perception” just “like humans.” So I have to assume that New Zealand is not now offering asylum to fruit flies or making illegal the squashing of ants. We can be mostly certain they don’t have “feelings” (like, I dunno, jealousy?), but don’t ask me whether or not they “suffer.”
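To put that binary-versus-degrees point in rough code terms (my framing, not the bill’s; the scores and the threshold are placeholders, not claims about any actual species):

```python
# A boolean gives a fruit fly exactly the same standing as a chimpanzee.
sentient = {"chimpanzee": True, "mouse": True, "fruit_fly": True}

# A graded score at least lets the hard question be asked out loud:
# where do you draw the line, and who defends that number?
sentience_score = {"chimpanzee": 0.9, "mouse": 0.6, "fruit_fly": 0.05}
LEGAL_THRESHOLD = 0.5   # entirely hypothetical

protected = [species for species, score in sentience_score.items()
             if score >= LEGAL_THRESHOLD]
print(protected)   # ['chimpanzee', 'mouse']
```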

I don’t mean to make light of this, truly. I do think this is a good thing, but it strikes me as vague and ill-defined. The group Animal Equality (equality? really? you sure?) calls it a “monumental step forward for animals,” and I think that’s overselling it. We’re not talking about personhood, but rather what sounds more like a general sense-of-the-government quasi-resolution kind of thing, saying that we all need to be way more mindful about how we treat the other animal species we share the planet with, particularly those we breed and harvest and manipulate for our benefit.

That stipulated, its very nebulousness may be its saving grace. By virtue of being vague and undefined, it may force some very difficult and very necessary conversations, questions, and debates. For if there’s a questionable practice that seems to inhabit a grey area, or something being done to an animal whose “sentience” is not terribly clear, this new law may spur some crucial arguments. Regardless of how those arguments are resolved, the conversation about our fellow creatures is suddenly elevated, given more gravity. All parties, then, get the benefit of having thought harder and longer about something we’ve had the privilege to take for granted since we first started domesticating animals.

One small step further, if you’ll allow, because with this discussion I can’t help but be reminded of the hearing over Data’s personhood on Star Trek: The Next Generation. Picard tells the Judge Advocate General:

[T]he decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of a people we are, what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery?

The bill specifies animals, so this line of thought is probably moot for the news at hand, but think of artificial intelligence. At what point do we consider a machine or some software to be capable of “perceiving”? Don’t they already? When do we consider them to be “feeling”? When they tell us? When do we consider them to be “suffering”? Ever? As long as that’s never written into their programming?

One day, and maybe one day very soon, we’re going to need some law for that. And unlike animals, the artificial intelligence might ask us for it.