December 5, 2017
Garry Kasparov is regarded as one of the best chess players the world has ever seen. In 1997 he became the first chess world champion to lose against the chess-playing computer Deep Blue, developed by IBM. During an interview at Google, Kasparov spoke of unfair competition conditions, one of his arguments being that the computer had to be rebooted several times during the match, which, he said, was comparable to a player having heart attacks at different points during the game and needing to be revived. Whether it was fair or unfair, that match represents a decisive event in the history of the game of chess. A new thought was born: The thought that a computer was able to beat the best chess player in the world. And thoughts are highly contagious.
But what would have happened if Kasparov had not lost in 1997? Would the South Korean Go world champion Lee Sedol possibly not have been defeated by the AlphaGo computer in 2016? Or would losing against a computer simply have been delayed by another 2, 5 or 10 years? Kasparov himself assumes this would have been the case. But what if Kasparov and Sedol were a bastion of human intelligence that was conquered literally move by move? Whatever the case, human intelligence had begun to falter, and artificial intelligence was there to prop it up. The problem with being propped up is that we quickly become accustomed to the support.
Whilst Kasparov still claimed in the 1980s that he would never be beaten by a chess programme, generations who grew up after 1997 take it quite naturally for granted that machines can play chess better than humans. A powerful thought that can quickly lead to powerlessness.
Kasparov observes two phenomena amongst young chess champions who complement their chess knowledge by learning with machines: On the one hand, they learn much faster and have progressed further in their ability than the older chess champions, since computers provide a huge amount of chess knowledge to which the players have unlimited access. On the other hand, however, they rely so heavily on the verdicts of the machines that they are unable to identify or analyse their own mistakes. If the computer flags an error, then it is a wrong move because the computer says so. Asked why it was a wrong move, they point to the data provided by the machine, data whose derivation they are unable to explain. “Somehow their mind is being hijacked by the power of the machine. This applies to many people. They are just staring at the computers, their eyes are just being caught by the screen expecting to find a solution there, instead of thinking themselves,” he summarises with concern.
Essential questions to be answered are: Why are such vast amounts of resources (labour and money) being invested in the development of artificial intelligence instead of in significantly developing our own human intelligence further? Why are the means available to stretch our imagination, creativity, intuition, analytical abilities etc. used so little? Some AI researchers work tirelessly on improving AI in the hope that it will one day surpass human intelligence and help us solve global problems. This may work initially. But it is the player with the higher form of intelligence who is in control. We can put lions into cages because we are more advanced in our knowledge than they are. How will AI treat us if we use our intelligence less and less in a growing number of situations, because we are gradually handing over our power of thought to machines?
If a person grows up in an environment in which their own intelligence and its value are repeatedly belittled and repressed, this can have a deep psychological impact on that person. Humanity has had many negative experiences in this regard: From the aristocracy, to the church, to dictators, and today possibly to machines. This kind of experience has a clear impact on our self-esteem. A healthy sense of self-worth is essential for humans to feel confident in confronting challenges in life and to freely develop their potential.
If, for instance, a child is repeatedly told that it is not as intelligent as other children, psychological research shows that this has a profound, negative impact on the child. This development can go in one of two directions: Either the child stops believing in its capabilities and noticeably fails to reach its real potential, or it spends its life attempting to prove to others that it is far more intelligent than they are, without ever receiving lasting, satisfactory confirmation. Both paths can be regarded as a kind of programme error in the brain: A disorder that leads to limitations and to internal and external pain.
For years Lee Sedol was regarded as the strongest Go player in the world. He was the youngest and quickest player up until that point to reach the highest possible ranking in the most complex game in the world. At the press conference following his second lost match against AlphaGo in March 2016, he seemed not only speechless but as if something were broken inside of him. Something fundamental, something that also has an impact on people's lives. As one commentator after another emphasised the computer's exceptional performance, his face was a picture of shock and great pain.
On a daily basis we see that machines store, analyse and link data faster than we humans can, and that they perform calculations in milliseconds. With the phenomenal advances in AI driven by deep reinforcement learning, and the fact that AI software can now learn without human help, the urgent question arises of what value our human intelligence will have in the future.
Predictions tell us that machines will be the better car drivers, doctors, lawyers etc., at least in terms of knowledge. It is important to reflect on these issues now and to develop solutions, so that we are not faced with a fait accompli as we were with Big Data. The methods we used in the past are not sufficient to meet the challenges of the future. And indeed, yesterday's methods were not even sufficient to meet the challenges of yesterday.
Today's education system goes back to the era of industrialisation, when the aim was to raise factory workers, and it has barely changed since. Instead of inspiring children to ask questions, to nurture their curiosity, to develop team spirit etc., our education system moulds them into conformists and rivals. The philosopher and author Richard David Precht is therefore quite rightly calling for an educational revolution instead of further educational reforms. The humanist François Rabelais knew as early as the 16th century that “a child is not a vase to be filled, but a fire to be lit”.
Quoting Pablo Picasso to illustrate his point, Kasparov emphasised an important human ability that must be developed further: “Computers are useless, because they can give you only answers. But everything begins with a question.” The answers can only be as good as the questions we ask. What questions are we no longer asking ourselves, and which are we not yet asking, because we are confident we can find the answers to all our problems in machines and AI? And conversely, what answers can technology give us precisely because we are asking it the right questions?
The future is the result of the decisions we make today. AI technology is morally neutral. There is much potential to unlock in AI, but also the risk of unforeseen dependence. AI can render us great services if it serves us, but it can just as easily lead to unprecedented bondage and destruction if we look to it for the solutions to all our problems, stop thinking for ourselves and stop asking questions. For everything begins with a question.