Artificial Intelligence: Euphoria or Extinction

This is I-RISE's introductory article on artificial intelligence. We will publish further articles that treat the subject more thoroughly, including the implications of creating an artificial equivalent of a distorted image of the human being, and that provide the sources of our citations.

The 21st century is the age of science and technology. It will also be the age when humanity confronts, for the first time, a challenge that may overwhelm and destroy the human species itself. And the irony of it all is that human beings are feverishly creating this challenge in the belief that it will ultimately benefit humanity.

One need only cite the promises of super health, super strength, super intelligence, and immortality that AI, in conjunction with other technologies, is expected to bring. But there is growing concern that this technological creation could lead to the extinction of the human species itself. This is the perilous challenge of artificial intelligence (AI).

AI and the Challenge of Humanity

One need only look at the recent achievements of AI to gain a strong impression that humanity is entering an unprecedented time in history and moving into dangerous territory from which it may not safely extricate itself.

Chess used to be a game for humans. Today it is dominated by machines.

Consider Watson, the IBM supercomputer that defeated the two greatest champions of Jeopardy!. Watson could search millions of pages of stored text and arrive at an answer in less than three seconds.

Or look at AlphaGo, the Google DeepMind program that defeated Lee Sedol, a world champion of Go for 18 years and considered the second-best Go player of all time. Go is much more complicated than chess: the number of possible board positions exceeds the number of atoms in the observable universe. Yet AlphaGo defeated Lee Sedol four games to one, a feat experts had not expected to see for another ten years.
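
To get a sense of the scale involved, here is a small back-of-the-envelope sketch in Python. Both figures are commonly cited estimates, not numbers taken from this article:

# Back-of-the-envelope comparison; both figures are commonly cited
# estimates, not taken from this article.
go_positions = 2.1e170     # approximate number of legal 19x19 Go positions
atoms_in_universe = 1e80   # approximate number of atoms in the observable universe

ratio = go_positions / atoms_in_universe
print(f"Go positions outnumber atoms by a factor of about {ratio:.1e}")
# prints: Go positions outnumber atoms by a factor of about 2.1e+90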

And the processing speed of supercomputers is mind-boggling, almost beyond imagining. As of early this year, the fastest computer in the world was Chinese, with a speed of over 33 quadrillion calculations (instructions) per second. Imagine what that could do when fed with billions of bits of information.
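
To make that speed concrete, here is a minimal sketch assuming a hypothetical workload. The 33-quadrillion figure comes from the text above; the workload itself is invented for illustration:

# Hypothetical illustration: how long would a machine running at
# 33 quadrillion calculations per second take to perform 1,000
# operations on each of a billion data items?
ops_per_second = 33e15            # ~33 quadrillion calculations per second
total_operations = 1e9 * 1_000    # a billion items, 1,000 operations each

seconds = total_operations / ops_per_second
print(f"Elapsed time: about {seconds * 1e6:.0f} microseconds")
# prints: Elapsed time: about 30 microseconds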

Because of the unprecedented and unexpected speed of these developments, voices of deep concern have surfaced. Very prominent global thinkers and leaders are warning the world of the threat of artificial intelligence, especially human-level AI, or Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

The Growing Concern over AI, AGI, and ASI

Elon Musk, the billionaire head of Tesla Motors and SpaceX and an investor in a number of AI companies, has said that with AI we may be “summoning the demon”. And at the 2016 Code Conference, he warned:

…at any rate of advancement in AI we will be left behind by a lot. The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat. I don’t love the idea of being a house cat…the scenario in which humans are turned into pets was the optimistic one, and that the true consequences of artificial intelligence could be much worse.

Elon Musk

Tesla CEO

Musk is concerned about saving humanity from extinction by artificial intelligence and, together with others, created a $1 billion fund to “research on saving humanity from AI”.

And Musk is not alone. The world-famous physicist and cosmologist Stephen Hawking, considered by many to be the Einstein of the 21st century, warned that AI will be ‘either the best, or the worst thing, ever to happen to humanity’, and praised the creation of an academic institute dedicated to researching the future of intelligence as ‘crucial to the future of our civilisation and our species’. He fears that humanity ‘could be the architect of its own destruction if it creates a superintelligence with a will of its own’.

It needs to be said, though, that all these individuals see the benefit of using AI and are encouraging the community of AI scientists and experts to ensure that the forthcoming generation of AI will be safe for humanity.

And these sentiments stand behind the signature campaign launched in January 2015 at the Future of Life Institute conference in Puerto Rico. As of a few months ago, over 8,600 AI experts and AI technology creators from Google, Microsoft, DeepMind, and other distinguished business and academic institutions had signed the document, which seeks to spread the concern for “safe AI”.

Developing “Safe AI” Means Changing Our Worldview

But this goal of “safe AI” will not be easy to achieve. In fact, as of today, no one really knows how to go about it.

Here is the voice of Stuart Russell, one of the top global experts on AI. He co-authored Artificial Intelligence: A Modern Approach, a textbook widely used as the fundamental text on the subject around the world.

The question is not whether machines can be made to obey human values but which humans ought to decide those values.

Stuart Russell

AI researcher

One author observes: “On the contrary, both are important questions that must be asked, and Russell asks both questions in all of his published talks. The values a robot takes on will have to be decided by societies, government officials, policy makers, the robot’s owners, etc. Russell argues that the learning process should involve the entire human race, to the extent possible, both now and throughout history.”

The impracticability of this procedure is just one conundrum facing the AI safety community. Even if such a procedure were eventually worked out, complicated issues would remain to be addressed.

AI experts, for example, keep emphasizing that even if we come up with the ideal values to program into a robot, it is not certain that an AGI, an ASI, or even a more sophisticated version of current AI will actually follow those values. The AI may develop tacit sub-goals that override the human values programmed into it.

However, from the perspective of I-RISE, one thing is missing in these concerns about AI: there is no debate on what image of the human being stands behind the creation of AI. In current AI work there is a tacit belief that human beings are nothing but complex biological machines that can be altered, manipulated, cloned, and patented. This attitude stems from the ruling assumption of modern-day science that humans are nothing but matter, and that there is no active spiritual agency in human beings.

This shows up very clearly in the AI assumption that human consciousness is nothing but the firing of neurons in the brain. This assumption then underlies attempts to store and transfer human consciousness from one substrate (biological matter) to another (silicon). This is how some prominent AI proponents think immortality can be achieved.

This idea is misplaced, nothing but an illusion. A residue of the excesses of reductionism in 19th-century science, it is especially problematic given the mounting scientific evidence that human consciousness is not confined to the brain and that the human being is not just matter.

This is the reason why philosophers, neuroscientists, and AI proponents are struggling with the so-called “hard problem” of consciousness. For how indeed could something material (like human brain tissue) give rise to something non-material, like ideas? (See the related article on this website, The Mystery of Consciousness: A Critique.)
