2012-11-28

Shifting Gears: Robotics, Cybernetics, Artificial Intelligence and All Points In-Between

Man Vs. Machine: Cambridge University To Launch Center For The Study Of Existential Risk

By Roxanne Palmer
November 26 2012 6:11 PM
Source:
IBTimes

Come to Cambridge University if you want to live.

If all goes according to plan, the venerable British institution will soon be home to the Center for the Study of Existential Risk, a multidisciplinary research center that will focus on issues that pose a threat to humanity.

The center will investigate a wide range of apocalyptic scenarios, ranging from runaway nanotechnology to extreme weather events caused by climate change to the rise of superintelligent and hostile artificial intelligence. Basically, if it can appear in a work of science fiction or a Michael Bay film, it's fair game.

“Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future,” the founders wrote in April.

The architects of this doomsday academy are Cambridge philosopher Huw Price, Cambridge cosmology and astrophysics professor Martin Rees, and Skype founder Jaan Tallinn.

In August, Price and Tallinn wrote a piece for The Conversation speculating on the dangerous possibilities of artificial intelligence.

Computers can already play chess better than humans, and it seems almost inevitable that machines will continue to improve in analytical power until they match -- and likely exceed -- the capacity of the human brain. But beating people at chess, while a bit wounding to the ego of our species, isn't exactly threatening.

However, “the greatest concerns stem from the possibility that computers might take over domains that are critical to controlling the speed and direction of technological progress itself,” Price and Tallinn wrote.

If machines surpass humans in the ability to write computer programs, there could be an “intelligence explosion.” Humanity would no longer be in the driver's seat of technological progress, and we could only marvel at what the machines make.

While one could hope that smart machines wouldn't necessarily be hostile, there's no guarantee that they would even take notice of humans, let alone work with them or be kind to them.

Wary pessimists say that “almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history -- a history we share with higher animals, but not with computer programs, such as artificial intelligences,” the pair wrote.

If the machines take over, even if there is no conflict between us and them, humans will still have to deal with the hard fact of losing our place at the top of the pyramid. But there is no current framework for investigating or formulating a plan to deal with this shift.

“A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later,” Price and Tallinn say.

Experts to Study Whether Robots Will Exterminate Humanity

How close are we to a Skynet takeover?

Paul Joseph Watson
November 27, 2012
Source:
Infowars.com

Experts at the prestigious University of Cambridge will conduct research into the “extinction-level risks” posed to humanity by artificially intelligent robots.

The Cambridge Project for Existential Risk is dedicated to “ensuring that our own species has a long-term future” by studying the risks posed by AI, nanotechnology and biotechnology.

The scientists said that to dismiss concerns of a potential robot uprising would be “dangerous,” reports the BBC.

The project was co-founded by Huw Price, Bertrand Russell Professor of Philosophy at Cambridge, Martin Rees, Emeritus Professor of Cosmology & Astrophysics at Cambridge, and Jaan Tallinn, the co-founder of Skype.

It also counts amongst its advisers Max Tegmark, Professor of Physics at MIT, and George M. Church, Professor of Genetics at Harvard Medical School.

An article written by Tallinn and Price warns that artificially intelligent computers or robots could take over “the speed and direction of technological progress itself,” and shape the environment of planet Earth to their own ends while displaying about as much concern for humanity as we do for a bug on the windscreen.

Far from being confined to works of science fiction such as the Terminator films, the threat posed by a potential future “rise of the robots” has never been closer to reality.

The study echoes the predictions of respected author, inventor and futurist Ray Kurzweil, renowned for his deadly accurate technological forecasts.

In his 1999 book The Age of Spiritual Machines, Kurzweil predicted that after 2029, the elite would come closer to their goal of technological singularity – man merging with machine – and that by the end of the century, the entire planet will be run by artificially intelligent computer systems which are smarter than the entire human race combined – similar to the Skynet system fictionalized in the Terminator franchise.

Amidst the debate, the fact that the US military under DARPA is already developing robots for the express purpose of killing people has been largely overlooked by futurists.

As we have previously highlighted, the drone and automated robot technology being developed by the likes of DARPA is geared towards having machines take the role of police officers and soldiers in pursuing and engaging “insurgents” on American soil.

Experts like Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, have warned that DARPA’s robots represent “an incredible technical achievement, but it’s unfortunate that it’s going to be used to kill people.”

The Department of Defense recently issued a new policy directive attempting to “reassure” people that artificially intelligent cyborgs wouldn’t be used to murder people after Human Rights Watch called for an international ban on “killer robots”.

Policy directive 3000.09 states: “Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator.”
