Can a computer think, and should we care?
It feels a bit 2024 to still ask whether a computer can think. ‘No’ and ‘never’ used to be the intuitive answers, often followed by a ‘computers can never… you know…’ with a slight hunch of the shoulders, thumb and index finger rubbing together. A sort of summoning of words - words that never came. Because in reality we don’t know, but we want to believe that they don’t.
Here, heading towards mid-2025, I don’t really hear that question anymore. I believe this is because it’s no longer so obvious that they don’t. ChatGPT, the workhorse of day-to-day AI, seems perfectly capable when we put it through our makeshift benchmarks for what it means to think. Research? Done. Reasoning? Better than some people. Empathy? Sure feels like it. And creativity? Well, either you declare that in principle it isn’t creative, or you have to concede that something resembling creativity lurks in there.
And this is where I think it gets interesting. Creativity is just an example, but if we say that creativity is by definition a human activity, then there is no discussion: a computer is not a human, and therefore whatever it does cannot be considered creative. But if we agree that creativity has a purpose - that it does something - then it does not follow that the goal can’t be met by something else.
To ask if a computer can think is to ask if a submarine can swim.
A response I once heard to this question is that ‘to ask if a computer can think is to ask if a submarine can swim’. It’s a quirky, shareable quote, but also a surprisingly useful analogy. Because whatever you want to call what a submarine does, it accomplishes what swimming accomplishes. And more importantly, in some use cases it does so a lot better than a person. Consider something like ‘holding your breath and swimming underwater’: submarines do it so well that they added a whole new dimension to maritime warfare.
So what do we make of this? Does it mean computers can think? I’m not sure, because I’m increasingly concerned with the question ‘can people think?’ This is not a ‘kids these days’ or ‘people are dumb’ comment. Instead, I’m referring to the way we engage with and think about the world.
LLMs will perhaps be the most consequential creation of our time, and in another piece I talk about the curious question of discovery vs invention. But for now, let’s call it an invention. The transformer-based LLM allows every word, or token, to be considered in the context of all the other tokens. That sounds a lot like how a person does it. Consider the following two images:
The shovel itself does not really change, but the context in which we see it matters a lot. If I asked you to say something about it (or prompted you), you would go through a similar process: identifying the symbols you recognise (pre-trained), weighing them up and putting them in context (transformer) before saying anything (generating).
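That context-weighing step can be made concrete. Below is a minimal sketch of single-head self-attention, the mechanism at the heart of the transformer, in plain NumPy. The toy embeddings and the omission of learned query/key/value projections are my simplifications to keep the idea visible - this is not how a production model is written.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Toy single-head self-attention: each token's new representation
    becomes a relevance-weighted blend of every token's representation."""
    d = tokens.shape[-1]
    # Real transformers derive queries, keys and values from learned
    # projections; here the raw vectors play all three roles.
    scores = tokens @ tokens.T / np.sqrt(d)   # pairwise relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ tokens                   # blend by relevance

# Three made-up 4-dimensional token embeddings (say, "dig", "with", "shovel").
toks = np.array([[1.0, 0.0, 0.5, 0.0],
                 [0.0, 1.0, 0.0, 0.2],
                 [0.9, 0.1, 0.4, 0.0]])
out = self_attention(toks)
print(out.shape)  # each token now carries information from the others
```

The point of the sketch is only the shape of the computation: every output row depends on every input row, which is what ‘considered in the context of all the other tokens’ means mechanically.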
This leaves us in a strange place. Does anything that generates thought-like output think? Or is any generative pre-trained transformer just a computer? Sure, my training comes from life and ChatGPT’s from a corpus; I generate thought, it generates tokens; I’m made of carbon and it’s made of silicon. But somehow that doesn’t feel so important anymore. In many ways, computers may not have come such a long way in becoming human as they have in shedding light on what it means to be human.
But my idea here is not a grim philosophical meandering. At least not grim. AI remains a tool. Like the submarine, it can replace swimming; it can even perform swimming-like tasks better than we can, opening up new possibilities. If not for recreation, we would have handed swimming over to the machines. But unlike swimming, thinking is our thing. It’s what got us out of the food chain. We’re weak specimens: our young rely on their parents for decades after birth, our senses aren’t particularly good, and if we hadn’t invented the submarine, the ocean might as well have been a few hundred meters deep. We wouldn’t know.
We have to revisit the importance of thinking, or at least of the style of thinking that a Generative Pre-trained Transformer can spit out. Do we hold it in high regard when assessing someone’s character? Is it important in the work we do? And most importantly, does it play a role in our sense of self-worth? If your answer is yes to any of these, you might want to start fine-tuning your own model sooner rather than later.