FAQ Artificial Intelligence
Artificial intelligence generally refers to the attempt to recreate a human-like intelligence, i.e. building or programming a computer so that it can solve problems on its own.
There are many forms of AI around – from Google’s search engine algorithms and driverless cars to IBM’s Watson and autonomous weapons. We mainly associate AI with robots in human form. But when AI appears as a robot, whether humanoid or not, this is only its external shell. The AI itself is the software, the “mind”, inside the robot. The software behind Siri or a chess computer, for instance, does not use a robot body at all.
1st Category: Artificial Narrow Intelligence (ANI), also referred to as weak AI, specialises in one single task, like the AI of a chess computer. It can perform this single task very well – in the case of chess better than humans – but a weak AI cannot master any additional task. We are now surrounded by weak AI: the search engine algorithms on Google, the product recommendations on Amazon, the apps on our phones, etc.
2nd Category: Artificial General Intelligence (AGI) is referred to as strong AI. It is comparable to a computer that is just as intelligent as a human. Since the human brain is hugely complex, strong AI is much more difficult to develop than weak AI, and AI developers have not yet achieved it.
Whereas weak AI can only perform specific tasks, strong AI would be able to perform the same tasks as humans. It could think abstractly, learn quickly and from experience, make plans, solve problems etc. The long-term goal of many AI researchers is to develop strong AI.
3rd Category: Artificial Superintelligence (ASI) is software whose intelligence surpasses that of the human mind in many or all areas – whether only slightly more intelligent than humans or millions of times more so. The term artificial superintelligence describes something that exceeds all of our imagination.
The fact of the matter is that intelligence enables control. We can lock lions in a cage because our knowledge is far more advanced than theirs. How do we think an artificial superintelligence would treat us humans?
Steve Wozniak, co-founder of Apple, stated the following with regard to a possible artificial superintelligence: “Will we be the gods? Or the family pets? Or will we be ants that get stepped on? Quite honestly, I don’t know.”
More information: Tim Urban, writer of the blog Wait But Why, sorted AI into these three major categories in his brilliant articles The AI Revolution: The Road to Superintelligence and The AI Revolution: Our Immortality or Extinction.
In October 2017, the Google subsidiary DeepMind announced that it had developed AlphaGo Zero, software that learns independently, without human assistance, and performs better than the same system learning with human assistance.
AlphaGo Zero is thus not only the new strongest Go player in the world, it has also learned the game using only the rules and in doing so has surpassed human capability.
Whereas the previous version of the programme, AlphaGo, was fed with moves from previously played human games, AlphaGo Zero did not rely on any human expertise. It learned from the mistakes and successes in the games it played against itself. The machine knew only the rules of the game, and at first it chose its moves at random. In this way it automatically generated game sequences and strategies – both successful and unsuccessful.
It is also shocking for Go experts to see how AlphaGo Zero can rediscover centuries of human Go knowledge in a short space of time, such as certain established move sequences played near the corners of the board (so-called joseki), and then discard them as it learns further, in favour of even better strategies that we humans have yet to discover. It is a hard sight for Go experts all over the world: watching a computer learn in two days everything they have acquired over an entire career, and then surpass their ability at the same rate.
“The biggest misconception is that everyone talks about automation as destroying jobs. The reality is that automation changes every job. It’s not so much about what jobs will we do, but how will we do our jobs, because automation isn’t going to affect some workers, it’s going to affect every worker. Young people, students will be the most affected by these changes because the types of roles that young people take are precisely the type of entry-level tasks that can be most easily done by machines and artificial intelligence”, said Andrew Charlton, Director of AlphaBeta Australia.
Continuing in the same vein, it is evident that people in developing countries will also be affected at an early stage, since many of the jobs there can be automated quite swiftly. Even today, the work available to people in developing countries and the wages it pays are often not sufficient for a decent life. What work and what wages will remain for them once automation spreads to these countries and these jobs?
Automation and AI in themselves offer us an opportunity to strengthen elements of human intelligence that are badly neglected in routine jobs: creativity, independent thinking, emotional and social intelligence, etc. However, this requires changes to school and university curricula and changes in society, for these are abilities that have barely been taught or fostered so far.
According to AI experts, the main risks are either AI landing in the wrong hands, leading to destruction on an unprecedented scale, or AI advancing so quickly by itself that we are no longer able to understand and control it.
In August 2017, 116 entrepreneurs and experts from the technology sector (including Mustafa Suleyman, Elon Musk, Yoshua Bengio, Stuart Russell and Jürgen Schmidhuber) appealed in an open letter to the UN for a ban on autonomous weapons and for these to be added to the list of the UN Convention on Certain Conventional Weapons (CCW), which has been in force since 1983 and restricts weapons considered excessively injurious or indiscriminate.
They stated that, after gunpowder and the atomic bomb, autonomous weapons threaten to become the third revolution in warfare. Quotations from the letter: “Once this Pandora’s box is opened, it will be hard to close” and “once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. Terrorists and despots could use autonomous weapons, or even hack them.
Elon Musk, who himself has financial ties to AI businesses, issued a warning back in 2014: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. … Unless you have direct exposure to groups like DeepMind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe.”
He added that he was not crying wolf, as he knew exactly what he was talking about: “I am not alone in thinking we should be worried. They (the leading AI companies) recognise the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. … That remains to be seen.”
Vladimir Putin added his own warning in September 2017: “Whoever leads in the area of AI will dominate the world.”
Artificial intelligence has recently made great leaps in the area of artificial neural networks, also known as deep learning. Here, the neural networks of the brain are simulated artificially on a computing system. The foundations for this were already laid in the 1980s and 1990s; however, only with the computing power available today has it become possible to simulate such neural networks on a viable scale.
Many of the most recent breakthroughs – handwriting recognition, speech recognition, face recognition, driverless-car technology and machine translation – are based on this technology. The systems are not programmed but trained with the help of data, not unlike the processes that take place in the human brain.
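The difference between programming a rule and training a system on data can be sketched in a few lines. The following is a minimal, purely illustrative example (not the code of any real product): a single artificial neuron starts with no knowledge and learns the logical AND function solely from labelled examples.

```python
# Minimal sketch: "training" instead of "programming".
# A single artificial neuron learns the logical AND rule from labelled data.

def train(samples, epochs=100, lr=0.1):
    w1, w2, b = 0.0, 0.0, 0.0              # weights start with no knowledge
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1.0 if w1 * x1 + w2 * x2 + b > 0 else 0.0
            err = target - out             # learning signal comes from the data
            w1 += lr * err * x1            # nudge the weights toward fewer errors
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Labelled examples of the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # learned AND: [0, 0, 0, 1]
```

Nowhere in the code is the AND rule written down; the weights that encode it emerge from the examples, which is the essential shift from classical programming to machine learning.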
The most recent triumphs of AI, like when the programme AlphaGo beat one of the world’s best human Go players, Lee Sedol, in 2016, have been the result of using deep learning to train the neural network in AlphaGo, in addition to the higher processing speed of the hardware. We also hear these processes referred to as self-learning systems and machine learning.
In this method the software is no longer fed with human data but learns to develop its own strategies independently, like a human, through trial and error. The only specifications the developers programme are the technical requirements and a reward for behaviour that leads to the desired result. In reinforcement learning, the AI is its own teacher. This type of learning has proven quicker and more powerful than “supervised learning”, where the software is fed with vast amounts of human data. DeepMind (a subsidiary of Google) has achieved success using both artificial neural networks (deep learning) and reinforcement learning in its Go programme AlphaGo Zero. This combination is referred to as deep reinforcement learning.
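The idea of learning only from a reward signal can be illustrated with a toy example. The sketch below is not DeepMind’s code but a standard textbook technique (tabular Q-learning, one simple form of reinforcement learning): an agent in a five-cell corridor is never shown a single move, only rewarded for reaching the goal, and discovers the winning strategy by trial and error.

```python
# Minimal reinforcement-learning sketch (tabular Q-learning), illustrative only:
# a corridor of 5 cells; the agent starts in cell 0 and is rewarded
# only for reaching cell 4. No moves are ever "fed" to it.
import random

random.seed(0)

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned value of each action

alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Trial and error: mostly exploit what was learned, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt = max(0, min(GOAL, state + ACTIONS[a]))
        reward = 1.0 if nxt == GOAL else 0.0   # reward only for the desired result
        # Update the estimate of how good this action was (the AI is its own teacher)
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy walks straight to the goal
policy = ["left" if q[0] > q[1] else "right" for q in Q[:GOAL]]
print(policy)  # ['right', 'right', 'right', 'right']
```

The developers specify only the environment’s rules and the reward, exactly as the paragraph above describes; the strategy itself is discovered by the agent. Systems like AlphaGo Zero combine this principle with deep neural networks in place of the small value table used here.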
This is the point in time at which artificial intelligence would surpass human intelligence. It would be reached if AI grew rapidly by itself through independent learning processes. This would accelerate technological progress so immensely that the future of humanity would be unforeseeable beyond this point. From then on, further development would be driven mainly by AI and no longer by humans.
The term is closely associated with the theories and ideas of transhumanism (a philosophical line of thought that seeks to overcome the limits of human capability, whether intellectual, physical or psychological, through the use of technology).
Some proponents of this line of thought assume that the associated technological progress could considerably extend the human lifespan, or even lead to biological immortality.