Jun 29, 2023

Thanks Paul, excellent outline of convergent and divergent evolution. It does seem that ChatGPT is just a clever "imitation", but because:

(a) the "hard problem" of what gives rise to consciousness (or of which entities are conscious) is still "hard" (and, I believe, unsolved), then

(b) any one of several candidate explanations could be correct, and thus

(c) because one of those candidates is that any complex system with a quantity of nodes, connections, and feedback circuits similar to a human brain (or even a planarian worm brain) might be sufficient for the "emergence" of consciousness, then ..

(d) it might be that a highly complex AI "dark cloud" will mysteriously generate its own silver lining of emergent, shiny sentience (i.e. we can't say).

So, while "all that twitters may not be gold", it could equally be the case that every complex "dark cloud" might have a "sentient lining".

Of course, if such an AI system were embodied, able to sense and move around in our environment, and then set free to evolve, it might through convergent evolution be even more likely to develop some form of consciousness. Even then, though, because of the "other minds" problem, we would still never really know "what it is like to be a robot".

author

I think that you are right that there is no way out of the fundamental problem that we have no way to determine whether or not another entity is conscious. We certainly can't prove that an AI system isn't conscious, but we also can't prove that a table isn't conscious. But that's just because certainty is too much to ask for in science (and is often a dead-end for philosophy). The best question we can ask is about what type of entity is more or less likely to be conscious.

My view is that there can be a temptation to think that the more an entity acts like us, the more likely it is to be conscious. I think that studying convergent evolution clarifies that resemblance isn't generally a good measure of the depth of similarities between evolved entities. This forces us to ask the deeper question of whether the actual tasks AI can perform should lead us to think that AI systems are conscious. I think the answer hinges on a further question: "Is consciousness necessary for the things humans do, or is it just a contingent by-product of the specific way in which we evolved?" This is a difficult question, but I hope that it is at least a little more grounded than the general question of whether a given AI system is conscious, and so might provide another route into the problem, one along which future neuroscientific and psychological research may be able to make some progress.
