Progress and change are natural and fundamental human needs. Yet it is our approach to modern technologies, and how we handle them, that decides whether their effect on us as humans is beneficial or devastating.
Our world has arrived at an unprecedented, critical point in human history. We are demolishing our planet's ecosystems; we are tampering with the biochemistry of life, pursuing cloning and genetic engineering with far too little caution; and we have nuclear energy at our fingertips, and with it the knowledge needed to destroy our whole planet. Furthermore, the modern technology we have created threatens our ability to use it smartly and wisely, to stay ahead of it.
Artificial intelligence (AI) is currently developing rapidly, as we now have access to the necessary processing power, storage capacity, data and algorithms. Many people, backed by abundant resources, are working to accelerate development in this area. Prominent technology leaders such as Elon Musk, Bill Gates and Steve Wozniak have raised their voices in concern worldwide, having realised that the issue of safety is being worryingly neglected in the development of AI.
Enormous pressure exists to be right at the forefront of technological development and to remain competitive. This is a disconcerting strategy when the technologies in question could be so dangerous that their use could potentially put an end to humanity entirely.
What is certain is that AI will not only completely change how all people live and work; it will also force us to consider what it means to be human, what our values are and what ethical standards we follow. A fundamental question is:
Which instructions do we use to programme algorithms? If we give them rules centred on winning and losing, everything becomes a competition. What ethics do we teach AI? As the saying goes: to someone holding a hammer, everything looks like a nail.
The key point is that we still have no answer to these questions. For thousands of years, we humans have lived predominantly by the winner/loser and hammer/nail principles, and time and again this has led us in circles back to the same problems. Our old ways have shown over millennia that they do not work, leading repeatedly to war, exploitation, suffering and the suppression of human rights.
AI development therefore leaves us no choice but to ask ourselves, collectively, some new questions:
The future of humanity will largely depend on whether we take the leap from thinking of our individual self (I win, you lose) to a collective self (I win, you win); in other words, whether we manage to conceive of ourselves as something that also includes everybody else.
If we wage war against another nation, this also brings suffering and terror to our own nation; if we clear the rainforests on another continent, this also has negative consequences on our side of the globe; if we throw a stone at our neighbour's car, that stone will one day be thrown at us.
Some AI developers have stopped believing in the continued development of human intelligence, viewing biological progress as slow-moving and stagnant. This leads them to focus even more determinedly on developing artificial intelligence.
It is for this reason that Tom Chi, the inventor of Google Glass and an employee at Google Brain, once warned: "If we do not develop our consciousness faster than technology in the next years, we are done."
Vision United World (VUW) will cooperate with scientists, psychologists, philosophers, researchers and others to implement, lead and support projects that boost human intelligence and expand our consciousness. At the same time, VUW seeks to raise awareness about AI and to raise its voice within the international community in support of technologies developed in harmony with ethical thinking.
For our future and that of generations to come, Vision United World is striving for a united world where there can be freedom, peace and dignity for all.
Lolita Aufmuth, Founder