What Impact Will Artificial Intelligence Have on Our Future?

    The prospect of artificial intelligence excites and repulses people in equal measure. Will it bring us a kind of paradise, or a techno-hell? To get a clearer handle on what might happen and when, it’s best to divide AI into three categories:

  1. Artificial Narrow Intelligence
  2. Artificial General Intelligence
  3. Artificial Superintelligence

    The first of these is “Artificial Narrow Intelligence”, or weak AI. This kind of AI is already used in many programs; it’s the kind of AI that uses big data and complex algorithms to arrange your Facebook timeline or beat you at chess. Our lives, infrastructure, and financial markets are already very dependent on it.

    The next step up the AI ladder is “Artificial General Intelligence”, or strong AI. This is an intelligence that can think as well as we can, and we are probably about 30 years away from it. The difficulty with creating strong AI lies in building machines that are good at things which are easy for us humans but which machines have always struggled with. Oddly, it’s much easier to build a machine that can do advanced calculus than one that can get milk from the fridge, recognize a familiar face, or walk up the stairs. Our brains are brilliant at so-called “everyday tasks” like decoding three-dimensional images, working out people’s motivations, and spotting sarcasm. We’re still very far ahead of machines in those areas.

ASIMO is a humanoid robot developed by Honda and is designed to be a multi-functional mobile assistant

    Some scientists doubt we’ll ever see strong AI, but the majority of AI experts today seem to think we will get there in the coming decades. If you’re under 35, you will most likely live to see the strong AI age. So what will happen to us once we’ve succeeded in creating an intelligence like our own? Well, the rivalry will be extremely short-lived, because the key point about strong AI is that it will be able to learn and upgrade itself on its own, without instructions. This is what makes it so revolutionary and so different from almost any machine we have ever built. The machine will be given a certain amount of intelligence, but it can then build on this as it learns and develops, learning by trial and error with an almost unlimited capacity to acquire skills. This means there’ll be no reason for AI to stop getting smarter once it reaches the human level. The more intelligent a system becomes, the more it can learn and do. This cycle equates to an exponential growth in intelligence that would leave us astonished, but also dominated and very scared. It might not take very long, maybe only a few months, before the machine is smarter than its creator. This is when it gets very exciting. This point is referred to as “The Singularity”, and it is where we begin to see the third type of AI, “Artificial Superintelligence”.
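The compounding described here can be sketched as a toy calculation. Note that the 10% gain per improvement cycle is an arbitrary illustrative assumption, not a prediction; the point is only that any constant per-cycle gain compounds into exponential growth:

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumption: once a system can upgrade itself, each improvement
# cycle multiplies its capability by a fixed factor.

def capability_after(cycles, start=1.0, gain_per_cycle=0.10):
    """Capability after `cycles` rounds of self-improvement,
    with a constant fractional gain compounded each round."""
    level = start
    for _ in range(cycles):
        level *= (1.0 + gain_per_cycle)
    return level

# A modest 10% gain per cycle doubles capability in about 8 cycles
# and grows it more than a hundredfold within 50.
print(round(capability_after(8), 2))   # ≈ 2.14
print(round(capability_after(50), 1))  # ≈ 117.4
```

Under this (very crude) model, the growth curve looks flat for a long time and then climbs steeply, which is why the jump from human-level to far-beyond-human intelligence could feel sudden.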

    Technically, this is any AI that surpasses human levels of intelligence. AI that reaches this level would soon be way ahead of us, and responses such as “Well, let’s just turn it off” might be as useless as trying to drink soup with a fork. Some, including Bill Gates, Stephen Hawking, and Elon Musk, are so worried that they believe we’re unlikely ever to be able to control any superintelligence we create. Artificial minds will pursue their own aims, and these aims may not coincide with ours. A machine wouldn’t specifically want to kill us, but its goals might make it willing to cause our extinction if that were necessary to achieve them. It’s tempting to assume that anything intelligent will naturally develop human values like empathy and respect for life, but this cannot be guaranteed. And given that we find it impossible to agree among ourselves about what’s right and wrong, how could we possibly program a computer with a morality that could reliably be considered sound? That’s the pessimistic angle, but there is, of course, a more cheerful one.

From left to right: Stephen Hawking, Elon Musk, and Bill Gates.

    According to the optimists, in a world of artificial superintelligence, machines will still be our servants: we’ll give them some basic rules, such as never killing or doing harm, and they’ll then set about solving all the things that have long puzzled us. An immediate priority for a superintelligence would be to help us create virtually free energy, in turn dramatically decreasing the price of almost everything. We would enter the era that Google’s chief futurologist, Ray Kurzweil, has described as “abundance”. Everything that has a price today would drop to almost nothing, and working for money would come to an end. The real challenge would be avoiding misery amid all this abundance.

    The solution to this “abundance” would be to develop a type of AI that’s been called AEI, or artificial emotional intelligence. This AEI would help us with the emotional, psychological, and philosophical side of things: understanding our own minds, mastering our emotions, discovering our true talents, and finding the people with whom we might form good and satisfying relationships. Many of the psychological mistakes that let us waste our lives could be prevented. Instead of being trapped by our insecurities and inconsistencies, we’d be guided toward a more compassionate, happier, and wiser future.

    We need to build up the wisdom to control which way we are heading, and part of that means thinking very realistically about things that, today, still seem fictional. Humans are a toolmaking species; we are on the brink of creating tools like no other, so the trick is going to be to stay close to the underlying ancient purpose of every tool, which is to help us do something more effectively.


“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” – Edsger W. Dijkstra
