  #11  
08-07-2010, 11:37 PM
hamandcheese
 
Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This is a great example of what I was referring to regarding the problems of a normative AI. I agree that the I, Robot notion of AI rising up and overthrowing us is fallaciously anthropocentric: the thirst for power is a human phenomenon, so we shouldn't expect an AI to share it unless we program it that way.

Yet we will have to give it the concepts of power and oppression, among other important moral concepts, so that it can actually apply them in answering normative questions. Or will we just cleverly design it to consider moral concepts from an ironic distance?

To me this all suggests an implicit moral anti-realism, if not nihilism, on Eliezer's part. Saying 'we simply won't program the concepts of power, selfishness, etc. into the AI' implies that those concepts are not necessary concepts, and certainly not transcendent or objective ones that the AI could acquire through its own accelerating intelligence and introspection.
__________________
Abstract Minutiae blog