Author Topic: Top Scientists, Experts and Philosophers Warn of Dangers of Artificial Intellige  (Read 1346 times)

Offline Reginald Hudlin

  • Landlord
  • Honorary Wakandan
  • *****
  • Posts: 9561
    • View Profile
Top Scientists, Experts and Philosophers Warn of Dangers of Artificial Intelligence
by Mark Newton ⋅ Posted on January 15th, 2015 at 7:03am

Writer, cynic, Walter Mitty day-dreamer. Email: mark@moviepilot.com

If it is ever achieved - and the current consensus is that it will be - the creation of a fully-fledged artificial intelligence could be the most significant milestone in the history of humanity.

The technology could operate on a level currently inaccessible to humans and potentially reap major rewards, but it also carries massive dangers.

This is why several notable scientists, industry experts and technicians have banded together to deliver an open letter to the artificial intelligence research community. They're not asking for the research to stop, merely for some kind of oversight to mitigate the risks.


The open letter, which was devised by the Future of Life Institute and contains names such as Stephen Hawking, Elon Musk, Skype co-founder Jaan Tallinn, George Church, and Nick Bostrom, states:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The open letter also argued that AI research is currently too preoccupied with simply making AI happen, rather than with arriving at it in the best possible way. It states researchers need to "focus research not only on making AI more capable, but also on maximizing the societal benefit of AI."


The letter also linked to a document of research priorities that raised certain issues and outlined important factors that must be taken into account, including:

Verification - "Did I build this system right?"
Validity - "Did I build the right system?"
Security - "Is this system safe from manipulation?"
Control - "Ok, I built the system wrong, can I fix it?"
What are the dangers of Artificial Intelligence?


Firstly, as mentioned in the opening, a super-intelligence would be unlike anything else humanity has ever created, and could affect the world in ways no other technology - including the wheel, the internal combustion engine and the internet - ever has. One of the co-signers of the letter, Nick Bostrom, a member of Oxford University's prestigious philosophy faculty, has stated:

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different.
Bostrom also claims the biggest issue isn't necessarily a Skynet scenario in which robots attempt to kill off humanity or launch nuclear strikes, but one in which a small elite group controls a super-intelligence. In this way, a super-intelligence could be pre-programmed with human prejudices against others.


AI might be the only technology capable of wiping us out

Similarly, an error in programming could result in unforeseen consequences. He posits that a super-intelligence dedicated to the mundane task of manufacturing paper clips (and nothing else) could break beyond expected limits in order to maximize its output of paper clips. In this sense, capitalistic sensibilities of increasing production, output and profit need to be contained within an ethical framework. This is something humans do innately, but it would have to be carefully programmed into an AI. As The Atlantic states, if the robots kill us, it's because it's their job and we've programmed them that way.

But could robots really wipe out humanity? Well, Stuart Armstrong, a philosopher and Research Fellow at Oxford's Future of Humanity Institute, thinks it's possible. In fact, AI might be the only technology capable of wiping us out.

One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk... First of all forget about the Terminator. The robots are basically just armoured bears and we might have fears from our evolutionary history but the really scary thing would be an intelligence that would be actually smarter than us – more socially adept... When they can become better at politics, at economics, potentially at technological research.
Furthermore, the resources required to develop an AI mean the feat will only be available to states and major corporations - entities with express agendas and attitudes towards certain people. Would the US allow its super-intelligence to benefit all the world's population on an objective basis? What if that means the AI diverts resources away from the US? Perhaps even to unfriendly states? What if that benevolence falls foul of US foreign policy?

The same is also true for corporations like Google. What if their AI suggests decreasing its profits in exchange for increasing social support? Would Google really allow that? In that sense, can we actually create a truly non-prejudiced AI?

But there is one more, terrifying, conclusion. An artificial intelligence could rob us of our most important possession: humanity.

More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.
What are the potential benefits?


But it is not all doom and gloom. AI can also potentially rid the world of many of our current problems.

It has been suggested that a fully-fledged super-intelligence could aid the development of space travel, unlock the secrets of creation, answer our fundamental questions, eliminate age and disease, calculate the best possible solution to issues, and if coupled with nano-technology, end environmental destruction and "unnecessary suffering of all kinds".

These are all lofty and worthwhile goals, but they still rest on one major issue - that a benevolent AI is developed. Bostrom claims the only solution is to build a super-intelligence which is fundamentally and irreversibly imbued with a sense of respect towards ALL humans (regardless of race, creed or political leanings) and perhaps even all sentient life.


But once again, if the machine is created by inherently flawed humans, and by organizations whose express agendas are often subtly, if not explicitly, contrary to benevolence for all, is this possible?

Could an AI actually result in a world where social, economic and political divisions are more pronounced? One where those with access to super-intelligence (and its benefits) are divided from those without? This would no longer be a simple division of the 'First' and 'Third World' or 'Developed' and 'Developing countries,' but something of a much greater magnitude, and potentially, danger.

Offline Battle

  • Honorary Wakandan
  • *****
  • Posts: 6914
  • M.A.X. Commander
    • View Profile
Top Scientists, Experts and Philosophers Warn of Dangers of Artificial Intelligence
by Mark Newton ⋅ Posted on January 15th, 2015 at 7:03am


If it is ever achieved - and the current consensus is that it will be - the creation of a fully-fledged artificial intelligence could be the most significant milestone in the history of humanity.




I'll believe it when I see advanced Artificial Intelligence in action.

Offline Battle

  • Honorary Wakandan
  • *****
  • Posts: 6914
  • M.A.X. Commander
    • View Profile
Even BILL GATES fears Artificial Intelligence



Artificial intelligence will start out as a help but will become "strong enough to be a concern", Bill Gates has said in his latest question-and-answer session on Reddit.

In response to a question about whether machine super intelligence will become an existential threat, Gates said that he was “in the camp that is concerned about super intelligence”.

“First the machines will do a lot of jobs for us and not be super intelligent,” he wrote. “That should be positive if we manage it well.”

ROBOTRON: 2084


“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.”


Would You Like To Know More?
http://www.msn.com/en-us/money/technology/bill-gates-artificial-intelligence-will-become-strong-and-threaten-us/ar-AA8ISUS#image=1



------------------------------


Then again, this is the same guy who, at a 1981 computer trade show, reportedly proclaimed of the IBM PC's 640K memory limit, "640K ought to be enough for anybody." "sure, Bill..."

Offline Battle

  • Honorary Wakandan
  • *****
  • Posts: 6914
  • M.A.X. Commander
    • View Profile

Marvelous Marv


Marvin Minsky was a pioneer, someone who was thinking one step ahead of anyone else. He was a founding father of artificial intelligence and computer science, and one of its most thoughtful scientists, inspiring generations of researchers.


He died earlier this year [2016] on January 24 of a cerebral hemorrhage.

After studying mathematics at Harvard and Princeton, Minsky joined the MIT faculty in 1958.

Minsky started working on artificial intelligence in the 1950s, long before the invention of personal computers or the Internet. He co-founded the Artificial Intelligence Group at MIT with John McCarthy, another computer science hero who coined the term “artificial intelligence.”

Some experts have said the field of artificial intelligence is currently experiencing something of a golden age, with deep-learning neural networks making advances in both speech and image recognition.

In one of his last interviews, with MIT Technology Review last year, Prof Minsky said there had been "very little growth in artificial intelligence" in the past decade, adding current work had been "mostly attempting to improve systems that aren't very good and haven't improved much in two decades".*

By contrast, he said, "the 1950s and 1960s were wonderful - something new every week".




Would You Like to Know More?
http://techcrunch.com/2016/01/26/marvin-minsky-artificial-intelligence-and-computer-science-visionary-dies-at-88/
http://www.bbc.com/news/technology-35409119






*He's soooo-o-o-o correct! :)

Offline Tanksleyd

  • Honorary Wakandan
  • *****
  • Posts: 1816
    • View Profile
The last day my parents could make me go to church
The preacher gave a sermon
That changed my life
Genesis 2:17