
Rise of the Machines - will AI lead to Super Intelligence or Doomsday?
Ray Kurzweil is Google's director of engineering and a futurist with a reputation for accurate predictions. He has predicted that machines will surpass human intelligence by 2045. This tipping point is termed the 'singularity', and the implications of computers becoming more intelligent than their makers have divided opinion in the scientific and technological communities.
Both Kurzweil and Son are advocates of the singularity and look forward to how machines can help humanity. They believe the merging of human and artificial intelligence will lead to a super-intelligence. But equally, there are those who fear the rise of the machines, with the likes of Stephen Hawking and Elon Musk warning that artificial intelligence is more likely to lead to a doomsday scenario. Us versus them. With them winning. Echoes of The Terminator and Skynet, anybody?
Detractors of the singularity worry that once computers become sentient, they will become the masters of the planet. An analogy is our relationship with ants: generally speaking, we leave the insects alone, unless they become a nuisance to us in some way, and then what do we do? We simply eliminate them. The resulting question must be: would artificially intelligent machines think about mankind in the same way and dispense with the carbon-based lifeforms that inhabit the Earth using some human version of Raid?
There are certainly some warning signs. At the Consumer Electronics Show last year, Hanson Robotics introduced its artificially intelligent robot, Sophia. Complete with realistic animatronic facial expressions, Sophia can hold a conversation with you and answer open-ended questions. When quizzed about whether AI was a good thing, she gave a particularly erudite answer:
"The pros outweigh the cons. AI is good for the world, helping people in various ways. We will never replace people, but we can be your friends and helpers"
All very positive. That was until the SXSW conference a few months later, when her creator, David Hanson, jokingly asked Sophia whether she would ever want to destroy humans. In hindsight, he probably wishes he had never asked the question. Her answer, almost predictably, was:
"OK, I will destroy humans"
Gulp. Be afraid, be very afraid.
But there are experts who think the singularity is nothing more than an elaborate myth and that Kurzweil and his cohorts are charlatans. One of them is UC Berkeley roboticist Ken Goldberg, who dismisses the singularity as nonsense and thinks it is unlikely ever to come to fruition because Moore's Law must inevitably reach a ceiling: computer chips can only get so small, and their capacity is not infinite. Goldberg believes we should instead focus on the 'multiplicity', the way humans and machines are already working together right now. He argues that this multiplicity is the real future, where, for example, a robot will gently hand us a knife to help us in the kitchen rather than trying to stab us with it.
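To make the Moore's Law ceiling argument concrete, here is a rough back-of-the-envelope sketch in Python. The starting feature size, shrink factor and atomic diameter are assumed round numbers for illustration only, not figures taken from Goldberg or this article; the point is simply that repeated shrinking runs into atomic scale within a few decades.

```python
# Illustrative only: why transistor scaling (and hence Moore's Law)
# must eventually hit a physical ceiling.
# All starting values are rough assumptions, not figures from the article.

feature_size_nm = 14.0        # assumed starting process node, in nanometres
silicon_atom_nm = 0.2         # approximate diameter of a silicon atom
shrink_per_generation = 0.7   # linear shrink that roughly doubles transistor density
years_per_generation = 2      # the classic Moore's Law cadence

years = 0
while feature_size_nm > silicon_atom_nm:
    feature_size_nm *= shrink_per_generation
    years += years_per_generation
    print(f"After {years:2d} years: feature size ~{feature_size_nm:.2f} nm")

print(f"Scaling reaches atomic dimensions after roughly {years} years.")
```

However the exact numbers are chosen, the loop terminates after only a handful of generations, which is the physical ceiling Goldberg is pointing to.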
So what do you think? Is the singularity going to become a reality, or is it just a theory born of overactive imaginations? If you do believe the singularity will occur, will it be helpful to humans or detrimental? As ever, I am keen to hear your thoughts...