The road to singularity
Earlier this month, Stephen Hawking told the BBC that he worries deeply about artificial intelligence. Hawking’s fear is that if a self-learning computer outsmarts Homo sapiens, it might take off on its own and redesign itself at an ever-increasing rate. Hawking: “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Frankenstein is the first science fiction story ever written
The fear that a man-made machine turns against its creator is as old as Frankenstein, written by Mary Shelley and first published in 1818. Frankenstein is widely considered the first science fiction story ever written. Since then, many authors have been inspired by the concept of a monster (read: robot) taking matters into its own ‘hands.’ Famous examples in film are of course 2001: A Space Odyssey and The Terminator.
The moment when a machine becomes fully artificially intelligent is called the (technological) singularity – a term first used in this sense by mathematician John von Neumann in the 1950s. We are decades away from that moment, but we are already working on intelligent computers that can execute individual human abilities, such as planning, learning, communication, perception, problem solving and social intelligence.
The human computer
An example of an artificially intelligent computer shared earlier on this blog is Watson, the IBM computer that was specially designed to play Jeopardy! and famously beat two former winners of the quiz show. Though a computer competing on a game show might sound quite human, Watson’s abilities are in fact very limited: it was ‘merely’ able to answer questions posed in natural language. Even this limited task required 90 servers, 2,880 POWER7 processor cores and 16 terabytes of RAM – which illustrates how much computing power would be needed to completely mimic the human brain.
We already have computers that can more or less independently paint or make music in novel ways
In this context it is interesting to ask whether it would be possible at all to build a computer that is creative – able to “redesign itself.” If we go by the definition of creativity commonly used in the scientific literature, such a computer would need to be able to create ideas or works that are both ‘new’ and ‘valuable.’ New is the relatively easy part: we already have computers that can more or less independently paint or make music in novel ways.
However, making something that is valued by an audience also requires an understanding of what is culturally appreciated – and there, computers still don’t have a clue. So when a piece of art or music made by a computer ends up in a museum or concert hall, the credit still goes to the person who programmed the computer. In other words, the computer is still merely a tool, and the human touch remains essential.
Last September I came across a piece of news that gave me a first, rudimentary peek into a way of incorporating the human element into a computer’s algorithm: Dutchman Peter de Kock had received his PhD for creating a (patented) model that can predict terrorist behaviour. De Kock, a criminal investigation expert, did so by building a database of 53,000 different terrorist attacks. He divided each attack into 12 different elements so that he could categorize the attacks. What is special about these elements is that they are all parts of an archetypal story: every attack is specified by explicitly describing the protagonist, the antagonist, the message, the location, the setting, the red herring, the symbolism, and so on.
De Kock stumbled upon an analogy between the production of a play or a film and the ‘production’ of a terrorist attack
De Kock had already formulated these elements when he studied at the Film Academy and tried to break down a scenario into its most elementary building blocks. Later, after a short career in film, while doing his Master of Criminal Investigation, De Kock stumbled upon an analogy between the production of a play or a film and the ‘production’ of a terrorist attack: a terrorist planning an attack is, in effect, writing a scenario. That is when De Kock had the insight to apply his scenario elements to terrorist attacks.
By combining the different scenarios with the elements in a matrix, De Kock could easily observe correlations, cross-connections and patterns in the ‘creative’ domain of terrorist attacks. Looking at the Boston Marathon bombing, for example, he could draw conclusions from the fact that the bomb was planted at the finish line: that is a difficult location to reach, which meant that making a statement was an important element. Also, the bomb was made with a pressure cooker, which pointed in the direction of the Chechen Republic, where these kinds of bombs are relatively common.
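To make the idea of such a matrix concrete, here is a minimal sketch in Python. The element names echo the archetypal story elements described above, but the entries are invented for illustration – this is not De Kock’s actual model or data.

```python
from collections import Counter

# A toy "matrix": each row is one attack scenario, described per element.
# The entries below are invented for illustration only.
attacks = [
    {"location": "marathon finish line", "message": "statement", "setting": "crowd"},
    {"location": "metro station",        "message": "fear",      "setting": "crowd"},
    {"location": "marathon finish line", "message": "statement", "setting": "crowd"},
]

# Counting how often element values co-occur surfaces simple patterns --
# the kind of cross-connection a scenario/element matrix makes visible.
pairs = Counter((a["location"], a["message"]) for a in attacks)
print(pairs.most_common(1))
```

Even this crude counting already shows that, in this toy data, a high-visibility location goes together with a ‘statement’ message.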
What makes De Kock’s model so useful is that it doesn’t just help to analyse attacks; it can even predict them. The model works much like a chess computer that remembers all the matches (read: scenarios) ever played and is thus statistically able to predict its opponent’s moves.
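The chess-computer analogy can be sketched in a few lines: given the ‘moves’ observed so far, predict the next one by looking up the most frequent continuation in past scenarios. All data and names here are illustrative assumptions, not part of the actual model.

```python
from collections import Counter, defaultdict

# Toy history of "scenarios": each is a sequence of choices, like moves
# in recorded chess games. The data is invented for illustration.
scenarios = [
    ("crowded event", "pressure cooker", "statement"),
    ("crowded event", "pressure cooker", "statement"),
    ("transport hub", "backpack bomb", "fear"),
]

# Index: for every prefix of moves, count what followed next.
continuations = defaultdict(Counter)
for s in scenarios:
    for i in range(1, len(s)):
        continuations[s[:i]][s[i]] += 1

def predict(prefix):
    """Return the statistically most likely next element, if any."""
    counts = continuations.get(tuple(prefix))
    return counts.most_common(1)[0][0] if counts else None

print(predict(["crowded event"]))
```

This is of course only frequency counting, but it captures the core idea: the more complete the history of past scenarios, the better the statistical guess about the next move.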
A next step in developing the model is to incorporate every fictional story ever written about terrorism into the database. The database would then also contain the more original plots, making it even more difficult for a terrorist to come up with a scenario that has never been conceived before. And if the model can at some point judge scenarios for their originality, you can imagine it might also be able to create an original scenario itself.
So the first step in building a computer that can make creative products valued by society – from an artistic or scientific viewpoint – is uploading every creative work within a certain domain into a database. This is relatively easy. If the domain were literature, for example, you could upload every book ever written, and the algorithm could source any single phrase from those books to create a new story.
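How easy the ‘new’ part really is can be shown with a deliberately naive sketch: recombining existing sentences at random. The tiny corpus below stands in for ‘every book ever written’; the sentences are invented.

```python
import random

# A tiny stand-in corpus; in the thought experiment above this would be
# every book ever written. These sentences are invented for illustration.
corpus = [
    "The storm broke over the harbour.",
    "She had never trusted the quiet ones.",
    "By morning the city had forgotten him.",
    "A letter arrived with no name on it.",
]

random.seed(42)  # fixed seed so the sketch is reproducible
# "Novel" is the easy part: shuffle existing phrases into a new order.
story = " ".join(random.sample(corpus, k=3))
print(story)
```

Nothing in this sketch judges whether the result is any good – which is precisely the hard, still-human part.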
However, the more difficult barrier to overcome is thoroughly understanding what makes a plot readable or even interesting. There are handbooks for writing a story, of course, but the way stories are judged depends on many random, ambiguous and nuanced ‘parameters’ that are not all documented explicitly or in structured ways.
The magical ‘aha’ moment leading to incredible new ideas will still be impossible for artificial intelligence for a very long time
And even if computers can separate the creative from the uncreative, to be creatively productive they also need to be able to make unfamiliar combinations of familiar ideas – to create something that is entirely new. But this is exactly what makes creativity human and inscrutable. The magical ‘aha’ moment – such as De Kock’s analogy – leading to incredible new ideas will be impossible for artificial intelligence for a very long time.
It goes without saying that many barriers remain before we have computers that contain as much knowledge, understanding and computing power as the human brain. But with our drive for innovation we will build ever more knowledge databases, with algorithms interpreting them and mimicking the human brain. Once all these algorithms are advanced enough and combined, a system with creative abilities seems only a matter of course.
Speaking of a matter of course: if we consider how greatly Homo sapiens has advanced since, say, Homo erectus, and how much steeper the learning curve of computers is, digital evolution overtaking human evolution and making the singularity possible only makes sense. So let’s just hope that Frankenstein’s monster remains science fiction.
Photo: Scene from Young Frankenstein (1974) in which actor Gene Wilder finds out his ‘monster’ has come alive: “It’s Alive!”