No Past, No Future
“One of the early inventors of computers, Turing, wrote that there were many arguments against the possibility of machine sentience that were couched in terms of the phrase ‘a machine will never do X.’ He compiled a list of actions that had at one point or another been named as this X: ‘be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.’
We rate ourselves at 9 out of 16, presently.”
“Words blur at the borders, fuzz into other words, not just in big clouds of connotation around the edges of the word, but right there in the heart of denotation itself. Definitions never really work. Words are nothing like logic, nothing like math. Or, not much like. Try a mathematical equation, with every term in the equation filled by a word. Ludicrous? Desperate? Best that can be done? Stupid? Stupid, but powerful?”
(Aurora, 336–7)
These two epigraphs come from Aurora, Kim Stanley Robinson’s 2015 work of science fiction set a few hundred years in the future. In it, a genuinely sentient artificial intelligence assists colonists on a “generation ship” travelling through the stars towards Tau Ceti in an attempt at interstellar exploration and colonisation. These thoughts, and much of the book’s second half, are written in the Ship’s voice, and summon its interiority, its developed and changing self, emergent only after centuries of interaction with the human colonists aboard it. The book is well worth your time.
One of those colonists, Freya, challenges the Ship to write a history of the voyage, in part because it is entirely possible that not a single human will survive it, and someone should tell their story rather than just spit out the dry statistics of their demise. She also sets this challenge to help the Ship avoid a ‘halting problem’ it encounters in its operational programming, centred on the proposition that ‘consciousness is self-consciousness’. The Ship finds this writing of history remarkably difficult, and in the end the book adds ‘write a history’ to Turing’s list. Perhaps someday a real machine will manage it.
No past means no future, no pasts mean no futures. He who controls the past controls the, yes, you get it. And while I would never suggest that ‘history’, the discipline, is synonymous with ‘the past’, whatever that is, history remains the only subject we humans pursue whose central mission, techniques, logics, organising principles, forms of evidence, interpretations, and results are all about our pasts, whether singular and subjective or collective and objectified. That makes the practice of history about our futures too, a topic of endless branching choice, possibility and probability and intent, and nothing we should be in the business of using history merely to ‘predict’, but rather to inform.
It also seems fair to say ‘history’ is one of the oldest forms of knowledge we create, right up there with mathematics. It has the same combination of bewilderingly simple definition and ungraspable infinite complexity that math does, with the added bonus that there will never, ever, be an equation that ‘solves’ it. History will always be words; stupid, but powerful. I find it fun that the modern word’s origins apparently sit somewhere between ‘witness’ and ‘judge’ in the ancient Greek, since the first we aren’t often in the business of (thanks, journalism) and the second we often warn our students against. As a diverse set of practitioners we historians are meant to collectively cover ‘all events that are remembered and preserved in some authentic form’, according to Wikipedia, to which the historian’s reply of course is that we also need due concern for events forgotten, hard as that can be, and that we ought to care particularly about events preserved inauthentically too; it’s rather common after all.
Obviously, I love history (up to a point, you understand; over-exposure can temporarily sour relations) and I have built a career out of teaching it at a university and writing about it in various mediums. And while I regard the inroads of today’s techbro-inspired content-theft engines as laughably distant from ‘artificial intelligence’, I’d still be quite interested to see an actual A.I., not a machine-learning chatbot, make an attempt at basic historical narration.
But I’m writing about this today because I am convinced that ‘success’ here does not matter. As a thing in itself history has no simple quantifiable value, but its approximation and imitation can be made infinitely valuable. So there’s a disaster sat waiting, like we have seen with AI-generated ‘art’, and like the bland, narratively incoherent GPT ‘essays’ I am increasingly seeing in Turnitin. I suspect this movement to hand over some of the core creative and critical disciplines to a machine-learning algorithm will soon turn its attention to history. And I do think that will be the absolute grimmest timeline, speaking historically. Stick to uncanny-valley beer ads, please. Maybe work on training the robot to do dexterous manual labour first. Yeah, that should keep them occupied for a while.
How can a machine assess the significance of a historical event? How will a machine string together ‘events’ in a way which makes basic narrative sense? Will it just use dates and times? How will an actual AI construct a historical sense of the relations between events, and will it bear any relation to ours? How close are those questions to the very foundations of thought, at least the way we conduct that particular business? And is that not why history is “next”, if the goal is machines that think? Both philosophers and neurologists also tell us we cannot think without some form of remembering.
The temptation will be to take a shortcut: not to think critically, but to approximate it. As we’ve seen, the only way that works so far is rampant corporate theft, copycatting and plagiarism: a vast ‘diet’ of content mined from the internet and fed into the black box. Any consciousness emerging from that soup is going to go very weird, very quickly. And GPT chatbots do not provide an answer; no, they currently ape a well-worded approximation of what an answer sounds like. Consciousness is often described as a coherent relation between stimuli and self over time, and so it’s no great stretch to say that our form of consciousness is historical. It’s not just the things that happen(ed) to us, it is how we interpret, react to, or make things happen.
Luckily for me and for anyone reading this, we need not dwell on these puzzles about thinking and selfhood for long. What I am worried about is not that machines will ‘replace’ historians, but rather that, as with social media and other recent developments, a cultural and economic reflex will emerge to ‘hand over’ the low-level production of historically informed texts and interpretations to language machines. That in classroom settings a chatbot will be deployed to ‘answer’ historical questions asked by a 12-year-old student; that instead of doing the hard graft with primary and secondary sources, students of history will ask a chatbot a question or two, get a result, and in the interests of moving on with their day think no further about any of it. Question answered, task done. I already see evangelist ‘education leaders’ opining on the death of the essay question, because they suppose that quite soon a human will not be able to tell whether or not an AI wrote the answer. And while I don’t think full-length history books are in immediate danger, we are already seeing ‘GPT-authored’, or GPT-assisted, books hit the electronic shelves. How long before someone lets one of these models loose on a blow-by-blow account of Henry VIII’s wives or whatever, and then sells that to the masses?
There are potentially as many histories as there have been humans, which means we’re up to over 117 billion possible at last count. If we don’t write and re-write our pasts, if somebody or something else does it instead of us, we contribute to giving up all control over our futures, to sighing and just watching it happen, again. There’s no easy and simple way to demonstrate the centrality of history to the collective ‘us’ when confronted by a sceptic (stupid words, but not powerful, they might say), just as there is no easy and simple path from history classroom to ‘history shop floor’, as any lecturer well knows. In Geoff Eley’s memorable book title, history is always a Crooked Line. Eight sides to every six-sided cube.
And it’s under threat, and we know it is, and while it is hardly in the most immediate danger from ChatGPT, as irritating as that is to us, the danger is now growing fast. This unbelievably important foundational discipline, this physics of human cultures and societies that informs entire worldviews, that underpins social memories, cultural norms, laws, and so much more: I think it’s up next. I think we need to be ready.