A family member (who shall remain nameless unless they wish to be named!) suggested I read Yuval Noah Harari's “Homo Deus”, as a conversation opening reference. Last night we had a spirited conversation around topics arising from there. Now, for sure, Homo Deus is a popular book. Right now on Amazon (uk) it is rated at 4.6, similar to a book I rate highly, The Master and His Emissary, and a book I was recommended yesterday (in the Global Ecovillage Network's EDE course), Active Hope. I noted that the Wikipedia article on Homo Deus gives only one or two dissenting voices from the general chorus of approval. Hence it seems that Harari has caught the majority spirit of the age.

Then I came across a fascinating critique from 2018 by an Australian, Allan McCay, in the Journal of Evolution and Technology. I appreciate McCay's general approach. He borrows extensively from the philosophical work of David Hodgson (judge), who argues that human decisions are not always simply algorithmic, but also contain what he calls “plausible reasoning”. Personally, while appreciating their positions, I neither go along completely with this, nor with McCay's hopes that the economic value of humans might rest on non-algorithmic choice. I'm rather inclined to the idea that the economy, and in particular our dominant neo-liberal economy, is more or less algorithmic. My understanding is that algorithms for trading e.g. stocks and shares, or commodities, generally outperform humans, though I have no special knowledge in that area. So why should we not suppose that the whole of the capitalist economy could be run on sufficiently sophisticated algorithms? Thus, the economic arguments are not my chief aim. Rather, I'd like to elaborate what is, from my perspective, a more pointed critique of a position spelled out by Harari.

To help check the references, I'll clarify that I bought the paperback, ISBN 9781784703936. Let me start on page 128 where Harari is writing about consciousness.

“Scientists don't know how a collection of electric brain signals creates subjective experiences. Even more crucially, they don't know what could be the evolutionary benefit of such a phenomenon.”

This quote seems to imply that scientists accept subjective experiences as something that could, potentially, be subject to science — otherwise, what would be the sense of wondering “how a collection …”? And if they are wondering about the evolutionary benefit, that implies that “such a phenomenon” (consciousness?) is real: a primary phenomenon, not an epiphenomenon. If you see consciousness as a mere epiphenomenon – in the sense that Wikipedia says belongs to the philosophy of mind, not to medicine – then by definition it can't have any benefit.

It doesn't make any sense to me first to ask those questions, and then to dismiss subjective experience altogether. For Harari to have a consistent view of scientists, he could say either that they simply ignore or dismiss subjective experience, including consciousness; or that they keep a more open mind, open to the idea that subjective experiences are caused by something other than electrical brain signals, or that the reason they evolved is something other than the currently accepted ideas of evolutionary benefit.

Harari appears to be sitting on the fence in terms of his own beliefs. Does he, or doesn't he, take consciousness as a primary phenomenon? It's almost like he's trying to be scientific about scientific method, which doesn't make sense to me. The importance I see in this critique is that Harari seems to be needlessly confusing his readers. I can imagine why he might want to do this, but it seems to me rather teasing. I would prefer that he didn't do that, and instead that he gave his own beliefs, even if they are split — “part of me thinks this, another part thinks that” and then he could reflect on why the parts think in those different ways.

Anyway, I can see a benefit from consciousness, if we can understand it as enabling choice. Let's get onto that in a minute, after following Harari's tracks a little further.

Back with algorithms, Harari uses some tactics that seem to me rather philosophically underhand, with the mind/brain distinction, claiming that the brain is just neurons etc. and therefore runs on biologically-based algorithms. It's not far from this to a fundamentalist position on algorithms: everything is an algorithm, even if you don't know what it is yet. Which anyone can see is unscientific, as no evidence could possibly disprove it. By page 141, Harari concedes to skeptics that

“no known algorithm requires consciousness in order to function. […] Everything a human does – including reporting on allegedly conscious states – might in theory be the work of non-conscious algorithms.”

Harari then swaps back to considering animals. His suggestion: look for the neurological correlates of humans claiming to be conscious, and compare those with animals. Perhaps animals may turn out to be conscious as well? Many observations point to that possibility.

Consciousness stepping out of algorithm

Here's the idea I want to play with: that consciousness is, exactly, the ability that allows people to step out of their routines, their patterns, their algorithms, and to make personal choices. This is naturally linked to free will. Let me address two objections: first, that there simply is no free will, and that people cannot step out of their algorithms; and second, relatedly, that there is no such thing as true altruism. I see these as rather mixed together: they both involve basic arguments about whether people are able to make free, “moral”, choices about benefiting others rather than themselves.

It doesn't take much philosophical sophistication to notice that the first position again lacks any trace of being scientific, in that there is no evidence that could disprove it. Then, if you see behaviour that looks altruistic, it is impossible to disprove the hypothesis that there is some deeper, unconscious hidden motive, which would mean it is “really” self-serving. Sure, it's impossible to disprove, but if pressed I would put together a set of cases which make it look less and less plausible. Relatedly, kin selection seems to be well accepted in the scientific community, but not group selection — a search brought up a nice article about the controversy around group selection.

Kin selection goes along with “selfish gene” theories. But when evolutionary biologists are talking about kin selection or group selection, they are looking at all life forms, perhaps with a bias towards animals, but not specifically at humans. Clearly, mechanisms that work for all animals aren't going to depend on consciousness.

Maybe the interest in group selection rather than kin selection comes from observations and stories about people. Some religions teach respect for all sentient beings, and thus people can downplay their own interests if their own interests would involve harming other beings, not just people. And within the Christian tradition, after being given the great commandment to love our neighbours as ourselves, the answer to the follow-on question about “who is my neighbour” clearly points to it being nothing to do with kinship. Indeed, “Who is my mother, and who are my brothers?” He points to the disciples and says, “Here are my mother and my brothers. For whoever does the will of my father in heaven is my brother and sister and mother.”

What I'm trying to point to is that spiritual traditions seem always to point beyond the evolutionary biological perspective of kin selection as the basis for apparent altruism. Spiritual traditions, in my experience, also point towards levels of awareness and consciousness, and perhaps even go so far as to treat the earth as the archetypal mother. Not only do upbringing and education shape the way we behave towards other people and animals, but a single event, a single story, a single accident, has on occasion radically changed the lives of people. It isn't necessarily a mystical or religious experience. One hears of people having these experiences through psychedelics. Or, they happen as unexpected parts of everyday life. Krishnamurti talked about choiceless awareness, and while this is expressly not about any kind of conscious choice, it seems to me, intuitively, paradoxically maybe, that the state he is describing is just the kind of state where a person can take actions that are not egocentric or ego-serving, that are not part of any algorithm. If you see choice as something essentially algorithmic, then it is clear that choiceless awareness points to a state of mind that is beyond the algorithm.

If a single event can radically alter a person's behaviour patterns, what strength is left in the idea that decisions, behaviour, and activity are algorithmic? If there is any governing algorithm, then our experience tells us that this algorithm must be highly liable to being rewritten, and that unpredictably. So what explanatory power is left with a deterministic viewpoint?

On the other hand, I accept that for most people, most of the time, we do indeed behave in ways that are shaped by our genes, our upbringing, our childhood experiences, healthy or adverse. But (and it is a big “but”) sometimes this can be overridden. My personal experience, as well as what I have heard and read from others, is that this overriding, or rewriting the algorithm code if you like, is associated with moments of great consciousness, awareness, presence — those “spiritual” matters again.

Trying to take a scientific viewpoint, if you look at behaviour statistically, you might barely notice these radical changes, if at all. If many people appear unpredictable, maybe this is largely an illusion, due to the fact that the causes, the patterns, the depth psychology, are deeply buried out of sight. To bring the buried things to awareness, and to heal them, is a great aim, and that is why I have always been drawn to psychological enquiry, and lately to IFS. But just because some things that look unpredictable are actually predictable doesn't mean that there aren't genuinely unpredictable things. Remember that it is always possible to suppose that there was some as yet unknown causal factor which was responsible for some great shift in someone's behaviour. That skeptical position cannot be disproved. But does that really make any sense, ultimately?

Other mechanisms; love

A side question that occurred to me this morning is this. Evolutionary biologists and psychologists have plausibly theorised about mechanisms for passing on not only genes but behaviour patterns, including memes. But this is all down to mechanisms, to algorithms at some level. How, then, could non-algorithmic awareness or behaviour propagate? Could we see mechanisms for that?

We need to avoid an obvious trap, one I see at both philosophical and political levels. It may be a strong temptation, and many have succumbed to it, to think you have achieved non-algorithmic consciousness, and that you now need to get other people to adopt it, by a religious war, for example. But any such attempt will depend on triggering people's programmed behaviour patterns; it will be using those algorithms. So I would say that it is impossible to pass on non-algorithmic consciousness by algorithmic means. There is no method, no technique. Krishnamurti used to say that, too. If you thought you had found a method, the consciousness you would be passing on would be algorithmic again. Classic fail.

But I do have a sense of an answer. Love. Not in any algorithmic sense of love, naturally. But when you read, say, “Greater love has no one than this: to lay down one’s life for one’s friends.” (friends, I note, not relatives!) well, to me, you're talking about real live altruism.

All sorts of caveats pop up immediately. If it's just some conditioning, some pattern programmed in early days that the way to survive was to sacrifice your needs to others, then of course it won't work. “If I give all I possess to the poor and give over my body to hardship that I may boast, but do not have love, I gain nothing.” (from the relatively well-known 1 Corinthians 13) speaks to me that acts that may appear altruistic but are not in the end motivated by love, just are not the real thing. I believe there is a real thing.

Could machines be conscious?

You may find what follows here less inspiring; perhaps more intellectually stimulating? If we follow through my thought that consciousness is to do with being able to choose outside the algorithms, then consider the twin questions: what does that say about the possibility of machine consciousness; and, what does this say about the material basis of consciousness for us?

A quick search reveals that there is a lot of speculation, and some science, going on around seeing the brain as a quantum system — and that is just from the first few search results.

The speculation that I join with is roughly this: that the brain is a powerful amplifier of quantum effects. If we connect this with quantum entanglement, a fascinating possibility emerges, that we might be able, somehow, to allow our brains to become entangled – one might say “resonate” but that's a slightly different metaphor – with other brains, or with other subtle patterns in the universe. Maybe (sure, this is wild speculation, but why not?) we might be able to be connected in this way to several thought-like processes, which also have quantum elements, and maybe if we bring these to conscious awareness, we may be able to choose to follow either one or another. This would not be algorithmic choice, but something much more subtle.

This reminds me a little of what I sense my father was trying to express tentatively. He was highly science-oriented and skeptical in general, and strongly agnostic if not atheist in some senses, with a clear anti-religious stance. But he was also a Freemason, and wrote two short pieces, Can God be defined? and God without Myth. If you swap in quantum entanglement for his rather naive ideas about electromagnetism, maybe you can see the possibilities.

So … if quantum computing links up with ordinary, algorithmic Turing machine computing (that shouldn't be difficult), and (hugely more challenging) if quantum computing could somehow become entangled with human brains and the deeper, more subtle quantum phenomena in the universe, why not suppose that machines could be conscious? What I am holding out for, though, is that in my view any computer that is only a Turing machine is not going to have that “conscious” quality.

That's what I see resonating around in other people's writing, too. Consciousness is essentially linked to quantum uncertainty phenomena, and when you start contemplating entanglement, the potential is mind-boggling. Through the lens of teleology, maybe consciousness is evolution rising to the challenge of bringing quantum phenomena into life itself. Maybe evolution has been yearning for this moment, when life at macro-level finally reconnects with the incredibly small and subtle quantum level — the still, small voice of what looks like uncertainty, but is perhaps the ground of our being.


d/2023-03-19.txt · Last modified: 2023-08-10 20:27 by simongrant