Bloggingheads Community > Diavlog comments
08-07-2010, 10:37 PM
hamandcheese
Join Date: Nov 2008
Location: Nova Scotia
Posts: 48
Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This is a great example of what I was referring to, regarding the problems of a normative AI. I agree that the I, Robot notion of AI rising up and overthrowing us is fallaciously anthropocentric: it's a human phenomenon to thirst for power, so we shouldn't expect an AI to do so unless we program it that way.

Yet we will have to give it the ideas of power and oppression, and other important moral concepts, so that it can actually apply them in answering normative questions. Or will we just cleverly design it to regard moral concepts with an ironic distance?

To me this all suggests a type of implicit moral anti-realism, if not nihilism, on Eliezer's part. By saying "we simply won't program the concepts of power, selfishness, etc. into the AI," he implies that those concepts are not necessary concepts, and certainly not transcendent or objective concepts that the AI could acquire through its own accelerating intelligence and introspection.
Abstract Minutiae blog
