How Mathematical Logic Uncovered the Power of Human Intuition
To err is human, and to be human is special.
Which of these two questions is easier to answer?
Was Donald Trump a good president?
What is the square-root of 23746129?
Of course, the answer is question 2, as any reasonable robot will tell you. The square-root of a number is a well-defined mathematical quantity which can be found using a standard algorithm. It is therefore simple to compute that the square-root of 23746129 is 4873. By contrast, millennia of intense philosophical study have failed to provide any clear definition of what it means to be good. In the unlikely event that any algorithm is ever created that can irrefutably determine whether a president was good, it would surely be incomparably more complex than the one for finding the square-root of a number.
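To make the point concrete, here is a minimal sketch of one such standard algorithm: Newton's method for integer square roots. (The function name `int_sqrt` is just illustrative; Python's standard library also offers `math.isqrt`.)

```python
def int_sqrt(n):
    # Newton's method for the integer square root:
    # start high and repeatedly refine the guess until it
    # drops to the largest x with x * x <= n.
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

print(int_sqrt(23746129))  # 4873
```

A few dozen arithmetic steps settle the question with complete certainty, which is exactly what makes it "easy" for a machine.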
Yet I could answer “hard” question 1 in a second, even while “easy” question 2 would have me reaching desperately for a calculator. No doubt you are the same. This is the paradox of human intelligence, which confounded decades of work on artificial intelligence. Humans have a phenomenal ability to effortlessly answer questions that are objectively incredibly difficult. This ability is what makes us intelligent while mere machines are not. We call it intuition.
Surprisingly though, an explanation for the seemingly supernatural power of intuition can be found in the ostensibly dry world of mathematical logic. More remarkably still, it was uncovered by the father of computer science, Alan Turing, who has a popular reputation for being unempathetic and anti-social (as portrayed in the 2014 film The Imitation Game). In Turing’s words: “If a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that.” In short, we can only be intelligent because we can make mistakes.
How a Mathematical Logician Ended Dreams of Certainty
The “several mathematical theorems” Turing referred to were Gödel’s Incompleteness Theorems and a number of related results, all proved in the 1930s. Gödel was part of a grand project to put mathematics on a firm logical foundation. The job of a mathematician is to begin with known mathematical facts and then use a sequence of logical steps to prove that a new mathematical fact must be true. This new fact is called a theorem and it can then be used as a new starting point to prove further facts. However, if proving new theorems always depends on previously proven theorems, then we have an infinite regress problem – how were the first theorems proven? To avoid this issue, we must begin with a foundation of mathematical facts that are not proven but are instead just accepted. These are called axioms. The validity of all mathematics depends on establishing a sensible and consistent set of axioms and tracing all theorems logically back to these axioms. Yet in the early twentieth century, mathematicians realised that this had never been done. Millennia of mathematics, while supposedly proving absolute truths, had actually been built on an absent foundation! The mission of mathematical logicians in the early twentieth century was to fix that.
Kurt Gödel was a mathematical logician who destroyed the dreams of his field. In 1931, he published his incompleteness theorems, which proved that no set of axioms rich enough to serve as a foundation for mathematics can be used to prove its own consistency (i.e., that the axioms do not contradict each other). This means that it is impossible to build any foundation for mathematics that we can be sure is firm; all mathematical theorems ultimately trace back to a set of foundational facts that we assume are all true, but that we can never be sure don’t contradict each other!

Remarkably, Gödel based this work on the kind of paradoxes that appear in children’s games. When I was six years old, a staple of the school playground was for one child to declare that “it’s opposite day”, meaning that everything anyone said should now be taken to mean its opposite instead. However, this created a problem – was it already opposite day when the child declared “it’s opposite day”? If so, then the declaration would actually mean that it wasn’t opposite day. But if it’s not opposite day, then declaring “it’s opposite day” would mean it was opposite day after all. But that would mean that it wasn’t opposite day, which meant it was, which meant it wasn’t, which meant… It left us all very flustered. Unknown to us, this was a manifestation of the Liar’s Paradox, which has troubled philosophers for millennia. The genius of Gödel was to demonstrate that paradoxes like this are not just a childish trick of everyday language; they are a fundamental obstruction to the universal application of logic that persists even in the rigorous language of mathematics.
From Mathematical Logic to Human Intelligence
Turing took the extraordinary step of extending Gödel’s results about mathematical logic to human intelligence. Specifically, he showed that just as flawless logic cannot provide a foundation for mathematics, it also is not sufficient to provide a basis for intelligence. Paradoxically, a perfectly logical being is necessarily an idiot.
Turing’s reasoning can be understood by considering a scenario that can afflict computer coders. A computer coder writes a series of instructions intended to make a computer perform a complicated computation. They then press “run” and wait for the computation to finish. It doesn’t finish right away, so they go to lunch and come back an hour later. They find that it is still running. They go home and come back the next day. It is still running. Suddenly, they have doubts. What if they made a mistake in their coding that leads to an infinite loop and it will never finish? In that case, they should cancel the computation now and try to fix any errors in the code before trying to start it again. On the other hand, though, perhaps there is no error, and the computation will finish soon if they just leave it running. In that case, cancelling the computation now would just cause them to lose their progress and to have wasted a day of work for nothing. The coder must depend on their intuition to determine when to give up hope that the computation will finish and cancel it. Inevitably, they will sometimes get it wrong and cancel a computation that was just about to finish, but mistakes like this are part of being human.
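The coder’s intuition amounts to a crude timeout heuristic. As a minimal sketch (all names here are illustrative), imagine running a computation one step at a time and simply giving up after a fixed budget of steps:

```python
import itertools

def run_with_budget(computation, max_steps):
    """Run a step-at-a-time computation, giving up after max_steps.
    This encodes the coder's intuition as a timeout: it will sometimes
    cancel a computation that was just about to finish."""
    gen = computation()
    for _ in range(max_steps):
        result = next(gen)
        if result is not None:
            return ("finished", result)
    return ("gave up", None)

# A computation that finishes after 1000 steps...
def quick():
    for step in itertools.count():
        yield 42 if step == 999 else None

# ...and one that never finishes.
def endless():
    while True:
        yield None

print(run_with_budget(quick, 10_000))    # ('finished', 42)
print(run_with_budget(quick, 500))       # ('gave up', None) -- the human mistake
print(run_with_budget(endless, 10_000))  # ('gave up', None)
```

Notice that with too small a budget the heuristic wrongly abandons a computation that would have finished, and no choice of budget can distinguish “slow” from “never” in general.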
Surely a perfect intelligence could do better. Imagine a being that is super-intelligent and never makes mistakes. It only acts when it is certain it is making the right decision. How might this being deal with the coder’s problem? Remarkably, it would just sit there doing nothing like a complete idiot! Building on Gödel’s work, Turing showed that – like proving a foundation of mathematics is consistent – determining whether certain computations will ever finish is undecidable. It is impossible to ever prove with certainty one way or the other. So, if the coder had written code for one of these computations, the super-intelligent being could never be sure whether it was going to finish or not. It would be paralysed by its inability to make a mistake; unable to act without certainty, but prevented by mathematical logic from ever achieving this certainty. While the imperfect human coder would eventually just assume their code wasn’t going to finish, cancel it and move on with their lives, the perfect intelligence would be stuck.
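Turing’s undecidability argument can itself be sketched in a few lines of code. Assume, hypothetically, that someone hands us a perfect halting oracle; the function below embodies one possible (necessarily wrong) answer, and the construction shows why every possible answer fails:

```python
def halts(func, arg):
    # A hypothetical "perfect" halting oracle. Turing's argument shows
    # no implementation can be correct for all inputs; this stand-in
    # simply embodies one possible answer.
    return True

def paradox(func):
    """Do the opposite of whatever the oracle predicts for func(func)."""
    if halts(func, func):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt immediately

# The oracle claims paradox(paradox) halts, yet by construction it would
# then loop forever; had the oracle answered False, paradox(paradox)
# would halt. Either way, the oracle is wrong about at least one program.
print(halts(paradox, paradox))
```

Whatever answer `halts` gives about `paradox(paradox)`, the program `paradox` is built to do the opposite, so no infallible oracle can exist.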

This is what Turing meant when he said “if a machine is expected to be infallible, it cannot also be intelligent.” He proved that a machine that is not allowed to make mistakes cannot be intelligent because it can be rendered non-functional by situations where a human can proceed by following their imperfect instincts. He showed this rigorously for the so-called Halting Problem of computer coders, but we saw the same effect at the beginning of this post. A perfect intelligence can easily find the square-root of 23746129 but would seem a moron if asked to contribute to a discussion about politics.
A Life of Contradictions
Indeed, Alfred Tarski took the consequences of Gödel’s work even deeper by showing that truth itself cannot be defined in the kind of fully logical language a perfect intelligence would depend on. This means that if we try to reach a complete set of opinions about the world by dividing all statements into true and false, we will inevitably end up with a contradictory set of beliefs. This understanding is captured in myriad aphorisms such as “things are not always black and white” or, more humorously, by Tim Minchin in his song “The Fence”. Nevertheless, we must continue to operate – to speak, to act, to vote and to live – based on our contradictory beliefs. We must be able to answer questions of the type “Was Donald Trump a Good President?” even while knowing that our answer can never, by the strictest logical standards, be verified to be true. It is this ability to live a life of contradictions that makes us intelligent.
Intuition is the human faculty that allows this intelligence. Psychologist Daniel Kahneman, in his book Thinking, Fast and Slow, describes human intelligence metaphorically as an interplay of two systems. System 2 constitutes our logical self that consciously assesses all relevant facts to reliably determine correct answers. It is what we rely on to calculate the square-root of 23746129 and is the kind of intelligence computers excel at. Yet, this system is slow and depends on deliberate and difficult mental exertion, so it cannot be the primary guide of our lives. System 1 represents our intuitive self. It operates quickly, subconsciously and naturally and is what is activated by questions about Donald Trump. Yet, as Kahneman documents extensively, it is extremely fallible, frequently making clearly demonstrable errors.
The dream of thinkers going back to Socrates and Plato has been to overcome this fallibility and to live a life of perfect logic and consistency. Millennia later, Turing and others showed the folly of this dream, through the seemingly inhuman and esoteric study of mathematical logic. We must accept that inconsistency and self-contradiction are a part of life. As Kahneman emphasises, we should moderate our intuition with logic whenever possible. However, we cannot ultimately hope to solve our problems and win arguments through logic alone. We must also use human interaction and compassion to find the common ground in our intuitions. Indeed, Turing discovered through mathematical logic what Socrates’ adversary Protagoras intuited 2500 years ago: “Of all things the measure is man.” To that, let me add: to err is human, and to be human is special.
Very interesting reading
Hi Paul,
I enjoyed your paper on quantification and judgement. Prior to my retirement a few years ago I worked as a Research Director in the Education faculty of an English university. One of my abiding memories was of the futile battles of the 'paradigm war', that is, the war between, on the one hand, those who favoured quantitative methodologies and, on the other, those favouring qualitative ones. The former argued that their approach was less likely to be appropriated by ideologically driven researchers whose primary objective was to shape education in favour of one or another set of values. Quantitative researchers saw themselves as more detached and objective in their research. The resolution to this dispute lay in the recognition that education is invariably an ideologically driven kind of activity whatever kinds of data are extracted, and that both qualitative and quantitative data are necessary for constructing useful and consensual perspectives on what education is and how we should be engaging learners in it.
It seems to me that most research is driven towards useful, consensual and coherent conclusions and that these are the main, perhaps the only criteria for truth that we have.