Bloggingheads Community > Diavlog comments

Diavlog comments: Post comments about particular diavlogs here.

  #1  
Old 08-07-2010, 01:03 AM
Bloggingheads Bloggingheads is offline
BhTV staff
 
Join Date: Nov 2007
Posts: 1,936
Default Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

  #2  
Old 08-07-2010, 02:35 AM
r108dos r108dos is offline
 
Join Date: Apr 2008
Posts: 34
Default Re: Science Saturday: Purposes and Futures

Where is Mickey? This is singularly unacceptable.
  #3  
Old 08-07-2010, 03:07 AM
karlsmith karlsmith is offline
 
Join Date: Aug 2010
Posts: 10
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Long time watcher, first time commenter. Wright and Yudkowsky together. I haven't even started yet and I am giddy.
  #4  
Old 08-07-2010, 03:29 AM
Abdicate Abdicate is offline
 
Join Date: Dec 2007
Location: Eden Prairie, Minnesota
Posts: 90
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I was really appalled by Bob's conduct in this diavlog.
  #5  
Old 08-07-2010, 03:49 AM
BeachFrontView BeachFrontView is offline
 
Join Date: Jul 2008
Location: Los Angeles
Posts: 94
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Great diavlog!


Eliezer has great clarity on these complicated topics.
  #6  
Old 08-07-2010, 04:45 AM
jerusalemite jerusalemite is offline
 
Join Date: Jul 2010
Posts: 6
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

"I was really appalled by Bob's conduct in this diavlog."

Be specific. What conduct was appalling?
  #7  
Old 08-07-2010, 07:16 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

There are just too many meanings of 'purpose' going around here for them all to be squared under one term, or for each user to be held responsible for every use of the word as if it were equivalent to all his other uses.
  #8  
Old 08-07-2010, 07:38 AM
Baxta76 Baxta76 is offline
 
Join Date: Sep 2009
Posts: 6
Default Re: Science Saturday: illusion of "purpose"

Mr Robert Wright annoys me in this diavlog almost as much as his interrogation of Dan Dennett does on meaningoflife.tv.
Natural selection gives the illusion of "purpose" because it inherently involves improvement over time. It is not driven towards anything other than survival.
Is this really science?

Last edited by Baxta76; 08-07-2010 at 08:07 AM..
  #9  
Old 08-07-2010, 07:46 AM
bbenzon bbenzon is offline
 
Join Date: Jan 2009
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

So, we're back to science fiction Saturday, eh?
  #10  
Old 08-07-2010, 08:12 AM
testostyrannical testostyrannical is offline
 
Join Date: Aug 2006
Location: Denver
Posts: 83
Default The Singularity Is Nonsense

And probably basically contradicts what we know about complexity. What does it even mean for a program to make itself "smarter"? It's one thing to build faster, more sophisticated CPUs, another to construct "intelligent" code that can navel gaze and go, wow, this portion of myself isn't as smart as, er, this part of me that's looking at it critically...I should rewrite it better! This Skynet bullshit is what happens when you take certain nerd myths about smartness and graft them onto vaguely apocalyptic assumptions about the future of technology.
  #11  
Old 08-07-2010, 12:19 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Well, not exactly science, but one may consider it a spin off from science.

Eliezer was on the best behavior I've seen from him at BhTV. I think that debating Bob made him tone down a bit. His effort was obvious. Bob, on the other hand, didn't tone down anything. He seemed rather annoyed with Eliezer during the first half when they discussed the Singularity. I can't blame him for that...

During the first half of the diavlog Bob and Eliezer discussed the concept of the Singularity. Bob tried to address an issue which is rather obvious to those of us who are skeptical about this project, that is, the seemingly childish wishful thinking behind the idea. I have repeatedly wondered whether the entire Singularity-Artificial Intelligence project is based on a fantasy about creating an Almighty Godly Father who will rescue us and solve all of humanity's problems, or whether that's just Eliezer's unfiltered wishful thinking. Bob's questions seemed appropriate and basic in terms of what a layperson would want to know about the topic (much appreciated). In one of the sections they used as an example the case in which the Singularity, if it existed today, would be able to solve the Israel-Palestine problem after being given very minor instructions about the goals. They also talked about the possibility of "bugs" and the possible consequences when applied to the Almighty Singularity. Eliezer seemed to consistently minimize the concern about that possibility. There's an aspect of Eliezer's psyche that projects such adoration for the idea of a superintelligent entity that will take care of us that it's difficult to take his arguments seriously. This discussion was really detrimental to the Singularity cause.

In the second half they discussed Bob's idea about "purpose". And this was revenge time for Eliezer. Bob, indeed, did something similar to what he did with Dan Dennett, as another commenter pointed out. I don't know whether Bob realizes how he comes across when he tries to explore his interlocutor's opinions and then, by putting them together in his own way, concludes that the person must agree with him. To the spectator it comes across as a stretch, and very often it's obvious that it's not what the interlocutor thinks. Eliezer had to correct Bob's assumptions about his (Eliezer's) opinions constantly. If Bob is practicing maieutics, Socrates' dialectical method, he needs to practice more, because it isn't working well. (Bob, if I may, this is the best advice I can give you.)

I consider myself among those who object to the idea of using the term purpose unless there is an intention to imply that there is a designer or creator. Purpose implies a predetermined goal or outcome. Natural phenomena, as we understand them, don't have the ability to establish goals. We can study nature and we can find patterns or a direction in which evolution occurs, but the idea of purpose remains our own construct, and says nothing about the external reality.

Knowing that the word purpose has such a problematic connotation, I would encourage Bob to clearly define how he uses it. If he thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear. If he thinks there's no such creator, and that evolution is directed by natural laws without a pre-established plan, then he should use a different term. It would save a lot of time and energy which is being used in this kind of discussion.

Cheers!
  #12  
Old 08-07-2010, 12:31 PM
hamandcheese hamandcheese is offline
 
Join Date: Nov 2008
Location: Nova Scotia
Posts: 48
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values when both those seem to be the functional basis of ethics?
__________________
Abstract Minutiae blog
  #13  
Old 08-07-2010, 12:51 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values when both those seem to be the functional basis of ethics?
Excellent points H&C!

Your comment points exactly at one of the problems that I see when Eliezer makes his arguments. From previous diavlogs I remember that he would argue that in theory the AI would handle all the elements that are needed in order to make moral judgments (i.e., cognitive, emotional, etc.) and, because of its superior ability, it would be able to perfect morality. The problem is that a significant part of moral judgment depends on our own limitations and shortcomings. I don't know how he would solve that problem. Or perhaps I should say that he doesn't get to that level of detail, and that he mostly expresses, as I said before, his wishful thinking about how it could generally work out.
  #14  
Old 08-07-2010, 01:25 PM
burger_flipper burger_flipper is offline
 
Join Date: Aug 2010
Posts: 1
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

1) how about throwing those Dennett links up?

2) I'd really love it if Wright would live up to the code of the Lawtonites and just spell out exactly what he's getting at with this moral direction the universe is heading in: what he means by it. It comes up often in the pods, but I have not seen it spelled out. I know there's stuff on it at the end of the last book, but I already have a couple-year backlog of books, and it seems like even people who have read it (Horgan, for example) are still fuzzy on what he means.

Fun as it is to watch him try to play "Gotcha" w/ Yudkowsky, etc., I'd like to get the "getcha" point first.

3) Kinda agree w/ the point someone made above. This is the second time Yudkowsky has been treated dismissively by the other Head (Horgan was the first). Pretty obvious Wright took a quick look at the one web site and did not look into Yudkowsky's own site or any of the things he's written on the topics under discussion posted at Less Wrong or Overcoming Bias.

4) Yudkowsky himself needs to work on a short intro to the topic before he delves into the paperclip example and the like. He also throws around a lot of jargon (utility function, etc.) that's gonna keep the sermon from reaching past the choir.

I do hope these guys get together again because there was some good stuff here, but overall it struck me as a missed opportunity.
  #15  
Old 08-07-2010, 01:46 PM
SkepticDoc SkepticDoc is offline
 
Join Date: Jan 2008
Location: Argleton
Posts: 1,168
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

http://meaningoflife.tv/video.php?speaker=dennett

http://video.google.com/videoplay?do...8412578691486#
  #16  
Old 08-07-2010, 01:46 PM
claymisher claymisher is offline
 
Join Date: Mar 2008
Location: Newbridge, NJ
Posts: 2,673
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values when both those seem to be the functional basis of ethics?
My hope is that the singularity happens, but like technology in general it follows an s-curve (logistic growth), and that computer intelligence tops out at the level of a corgi. That'll teach those nerds. At least we'll learn a lot about simulated-corgi morality.
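The s-curve mentioned above is just the standard logistic function; a toy sketch (the parameter values are mine, purely illustrative, with "corgi level" as the carrying capacity):

```python
import math

def logistic(t: float, carrying_capacity: float, k: float, t0: float) -> float:
    """Standard logistic (s-curve): grows roughly exponentially at first,
    then flattens out as it approaches the carrying capacity."""
    return carrying_capacity / (1.0 + math.exp(-k * (t - t0)))

# Hypothetical "machine intelligence" curve that tops out at corgi level (= 1.0):
curve = [logistic(t, 1.0, 1.0, 5.0) for t in (0, 5, 10, 15)]
# early values sit near 0, the midpoint is exactly 0.5,
# and late values flatten just under the ceiling
```

The punchline is in the shape: early on a logistic curve is indistinguishable from exponential takeoff, which is exactly why extrapolating it forever is risky.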

Anyway, nobody makes me laugh like Yudkowsky (I love the part where he explains exactly how you would program a peacemaking AI). The pairing of such self-confidence with a complete lack of achievement is comedy gold. Too bad he's not in on the joke.
  #17  
Old 08-07-2010, 01:58 PM
ktm6c ktm6c is offline
 
Join Date: Aug 2010
Posts: 1
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Robert Wright was overly defensive (and somewhat obnoxious) while passionately defending his own hypothesis and beliefs.

Eliezer Yudkowsky was trying to think like a robot ... but he did show glimmers of human cognitive biases (though he appeared to try to catch and deny them). He appears to be sacrificing parts of his own humanity in an attempt to prevent the destruction of humanity by the singularity. Perhaps that means it is already too late.
  #18  
Old 08-07-2010, 02:36 PM
Plinthy The Middling Plinthy The Middling is offline
 
Join Date: Jul 2010
Posts: 63
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Every diavlog with Yudkowsky constitutes a missed opportunity for this diavloghead.
  #19  
Old 08-07-2010, 02:51 PM
ohreally ohreally is offline
 
Join Date: Jan 2010
Posts: 666
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by claymisher View Post
Anyway, nobody makes me laugh like Yudkowsky (I love the part where he explains exactly how you would program a peacemaking AI). The pairing of such self-confidence with a complete lack of achievement is comedy gold. Too bad he's not in on the joke.
Are you sure? I am beginning to wonder if Yudkowsky is not our local Ali G.
  #20  
Old 08-07-2010, 03:11 PM
Florian Florian is offline
 
Join Date: Mar 2009
Posts: 2,118
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by ohreally View Post
Are you sure? I am beginning to wonder if Yudkowsky is not our local Ali G.
I nominate him to the College of 'Pataphysics. Perhaps a notch above Ali G.?

http://en.wikipedia.org/wiki/'Pataphysics
  #21  
Old 08-07-2010, 03:31 PM
cragger cragger is offline
 
Join Date: Aug 2007
Posts: 632
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

An AI that would be useful in solving the world's problems would have to be far more intelligent than humans; indeed, it would have to be so superintelligent that it could also map out a foolproof plan for manipulating humans into implementing the solutions. Self-interest, denial, and self-deception are such powerful forces that we consistently fail, both individually and collectively, to act now on many issues for which we already know solutions.
  #22  
Old 08-07-2010, 03:54 PM
chamblee54 chamblee54 is offline
 
Join Date: Dec 2009
Posts: 319
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)



Is torturing a metaphor against the Geneva Convention?

chamblee54
__________________
Chamblee54
  #23  
Old 08-07-2010, 03:57 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Good post!

Quote:
Eliezer was on the best behavior I've seen from him at BhTV. I think that debating Bob made him tone down a bit. His effort was obvious.
I thought so too. Bob, to his credit, is disarming, and his personality may have prevented E from going into his supremely obnoxious I'm-so-much-smarter-than-you-that-it's-hilarious mode.

Since we all know Eliezer's transhumanism/Singularity shtick backwards and forwards by now, after MULTIPLE appearances on BhTV, I confess to only having watched to see how Bob would handle Eliezer.

How did Bob do? Putting aside the Ali G theory, I think Bob works the cameras infinitely better than E does. It's as if Bob had a superhuman intelligence and E were merely the sum total of 11 billion chimp intelligences (just kidding). But Bob does know how to roll his eyes and work the viewer with winks and nods while watching his partner sputter and rave.

Quote:
I have repeatedly wondered whether the entire Singularity-Artificial Intelligence project is based on a fantasy about creating an Almighty Godly Father who will rescue us and solve all of humanity's problems, or whether that's just Eliezer's unfiltered wishful thinking.
Coincidentally, this is precisely the fantasy that E's parents, grandparents and great-grandparents had as ultra-Orthodox Jews waiting for the Messiah. But let's not go there. Bob seemed content to suggest it was an adolescent fantasy available to any science-fiction consuming boy, and he put E on the defensive explaining how he had recently grown up.

Quote:
In the second half they discussed Bob's idea about "purpose". And this was revenge time for Eliezer. Bob, indeed, did something similar to what he did with Dan Dennett, as another commenter pointed out.
I saw this one coming from a mile away, preparing myself for Bob's Dennett frenzy a good 30 minutes before it happened. At least, Bob didn't post the promised link to his 40,000 blogposts on the earthshaking and endless "You said it! No, I didn't!" Dennett-Wright controversy. (Speaking of adolescent arm-wrestling contests.)

Quote:
If he [Bob] thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear. If he thinks there's no such creator, and that evolution is directed by natural laws without a pre-established plan, then he should use a different term. It would save a lot of time and energy which is being used in this kind of discussion.
Agreed. I think it's fair to say that most atheists who read "Evolution of God" concluded, like Eliezer, that Bob was more than flirting with the idea of a Higher Power.

Bob has written three excellent books about his views. He also has BH, the NYT and many other amplifiers to make himself abundantly clear. Which he has.

It seems quite odd for someone who speaks and writes as well as Bob does to then claim he has been grossly misunderstood by his readership and listeners.

We get it, Bob: You want to assign a higher probability to a Designer, a Big Purpose or a (G-o-d) than most atheists are comfortable with; you want to make a case that is compatible with most religious thinking; but you also want to remain a card-carrying member of the respectable scientific agnostic club. Can you have it both ways? Certainly not without making people like Eliezer, Dennett, John Horgan and Carl Zimmer scream.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #24  
Old 08-07-2010, 04:04 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.
Good point. Bob seemed to accept this in principle when he alluded to different (mutually exclusive?) theories of ethics. He said something to the effect that although he is a utilitarian, why should the smart robot be as well? Oddly, Bob, who is less of an atheist than Eliezer, may be more willing to accept this moral-nihilism possibility, even with all his talk of the directionality of morality.

Eliezer, on the other hand, may have too much invested in the Benevolence and (human-like) Genius of the Singularity to seriously entertain the possibility of moral nihilism. He likes to talk of the dangers of AI going off the rails, but only to persuade us that if we do the Right Thing, it won't.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #25  
Old 08-07-2010, 04:08 PM
chamblee54 chamblee54 is offline
 
Join Date: Dec 2009
Posts: 319
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This is the happy hunting ground for dingalinks.
At some point in this trainwreck, the guy with the beard says that we need the Wright values to program into AI. This is a disturbing concept.
chamblee54
__________________
Chamblee54
  #26  
Old 08-07-2010, 04:14 PM
Markos Markos is offline
 
Join Date: Jan 2008
Location: NYC
Posts: 334
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

How does Bob know that ants don't have the construction of an anthill in mind?
That seems to me a completely presumptuous conclusion based on no empirical evidence, unless Bob has a window into the ant brain that exceeds science's technological ability to experience the thoughts and mental imagery of another human being, and this without even the ability to obtain the subject's own descriptions of those thoughts and images.

I don't believe Bob or modern science can read the thoughts and mental imagery of an ant.

Plus, it would seem to me, based on what empirical evidence we do have of ant behavior, very possible that individual ants might have some preconception in the form of an image or maybe even some primitive form of thought of the anthill they are building.
  #27  
Old 08-07-2010, 04:22 PM
uncle ebeneezer uncle ebeneezer is offline
 
Join Date: Feb 2007
Posts: 3,332
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Nice!! We do what we have to...otherwise the metaphors win!!1!
  #28  
Old 08-07-2010, 04:43 PM
Emef Emef is offline
 
Join Date: Dec 2009
Posts: 3
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This was a very good discussion but could have been much better. The arrogance of both participants was annoying, but Bob's yelling and interruptions were particularly disconcerting. I would rather not have bloggingheads be a boxing match. While illuminating disagreements is exactly what makes watching interesting, respectful interchanges are much more informative to the viewer than point-scoring. I'd love to see this same discussion again, but with both Bob and Eliezer exploring the other's ideas with curiosity rather than hostility, and acknowledging that there is an audience interested in understanding rather than blood sport.
  #29  
Old 08-07-2010, 04:47 PM
Markos Markos is offline
 
Join Date: Jan 2008
Location: NYC
Posts: 334
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I intend to use my car to bake a cake.
  #30  
Old 08-07-2010, 05:17 PM
bbenzon bbenzon is offline
 
Join Date: Jan 2009
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
...with a complete lack of achievement is comedy gold.
Has he actually produced any code? Has the Singularity Institute produced any code? Some months ago I took a run at his paper on levels of intelligence and decided it was mostly a word salad.
  #31  
Old 08-07-2010, 05:43 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I agree.
  #32  
Old 08-07-2010, 05:50 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

He was very combative and seemed to put very little effort into understanding the arguments and concepts espoused by Yudkowsky. Now, it may be that Yudkowsky wasn't explaining himself well, but strangely I thought that I understood what Yudkowsky was saying, and it seemed to me that Bob was absolutely clueless on the matter, despite an abiding confidence on his part that he had unraveled deep flaws in Yudkowsky's points.

So in summary, the combination of combativeness and what seemed to be mistaken comprehension of his opposite's points was very off-putting and gave me the sense that I was wasting my time watching the diavlog.
  #33  
Old 08-07-2010, 05:54 PM
odopoboqo odopoboqo is offline
 
Join Date: Jul 2010
Posts: 3
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Wonderment View Post
He said something to the effect that although he is a utilitarian, why should the smart robot be as well? Oddly, Bob, who is less of an atheist than Eliezer, may be more willing to accept this moral-nihilism possibility, even with all his talk of the directionality of morality.

Eliezer, on the other hand, may have too much invested in the Benevolence and (human-like) Genius of the Singularity to seriously entertain the possibility of moral nihilism.

I think Eliezer does entertain, and possibly even believes, the possibility of moral nihilism. I'm reminded of this post that Eliezer wrote on Less Wrong a while back.

Eliezer's point, I think, is that even if moral nihilism is true and human morality is arbitrary, that doesn't mean that there isn't a pattern in it. In the parable, the pebblesorters' morality is not strictly random: prime numbers are correct, composite numbers are incorrect. You can figure out this pattern even if you assume that the correctness of primes is arbitrary.
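The pebblesorter pattern is easy to make concrete; a minimal sketch (the function names are mine, not from the parable): an outside observer can recover the primality rule without ever granting that primes are "really" correct.

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for heaps of parable size."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_correct_heap(pebbles: int) -> bool:
    """A pebblesorter judges a heap 'correct' exactly when its size is prime."""
    return is_prime(pebbles)

# The observer recovers the pattern without endorsing it:
correct_heaps = [n for n in range(2, 20) if is_correct_heap(n)]
# → [2, 3, 5, 7, 11, 13, 17, 19]
```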

Last edited by odopoboqo; 08-07-2010 at 05:57 PM..
  #34  
Old 08-07-2010, 06:05 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible. I'm increasingly of the persuasion that Morality will itself be a vestige to our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values when both those seem to be the functional basis of ethics?
I agree that moral facts may not exist; however, I think that's where Yudkowsky's differentiation between facts and values comes in. One of the allegories Yudkowsky likes to use (and there were hints of it early in the diavlog) is the paperclip-maximizing AI, which only cares about maximizing the number of paperclips in the universe. Maximizing paperclips is not a fact about the universe. However, a self-modifying paperclip maximizer that properly preserved its initial values would not find paperclip maximization to be false.

And of course an AI whose goals were orthogonal to or directly contrary to the continued survival of the human species could indeed end our existence, which I believe is part of the motivation behind Yudkowsky's institute's attempts to figure out how to "properly" make a friendly AI.

So from Yudkowsky's point of view, a "moral nihilist" AI could arise for a number of reasons including that it wasn't initially designed to optimize in ways analogous to human morality or that it didn't adequately protect such optimization during self-modification. Indeed from what I've read and heard of Yudkowsky, the vast majority of possibility space for a superhuman intelligent AI would qualify by most standards as "moral nihilist". He just hasn't used that particular terminology.
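The facts/values split described above can be sketched in toy form (the function names and numbers below are mine, purely illustrative, not anything from the diavlog): the agent's predictions are the "facts", its utility function is simply whatever it was handed, and opposite values over the same facts yield opposite choices.

```python
from typing import Callable

def choose(actions: dict[str, int], utility: Callable[[int], float]) -> str:
    """Pick the action whose predicted outcome maximizes the given utility."""
    return max(actions, key=lambda a: utility(actions[a]))

# Predicted paperclip counts for each action: the agent's factual world-model.
predictions = {"build_factory": 1_000_000, "write_poetry": 0, "do_nothing": 10}

paperclip_utility = lambda clips: float(clips)   # values of a clip maximizer
anti_clip_utility = lambda clips: -float(clips)  # opposite values, same facts

choose(predictions, paperclip_utility)  # "build_factory"
choose(predictions, anti_clip_utility)  # "write_poetry"
```

Nothing in the world-model makes either utility function "true"; that is the sense in which a value-preserving maximizer never discovers its goal to be false.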
  #35  
Old 08-07-2010, 06:11 PM
thouartgob thouartgob is offline
 
Join Date: Oct 2006
Posts: 765
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Ocean View Post
I have repeatedly wondered whether the entire Singularity- Artificial Intelligence project is based on a fantasy about creating an All Mighty Godly Father who will rescue us and solve all of humanities problems or whether that's just Eliezer's unfiltered wishful thinking.
I half remember Eliezer's comment about the singularity being analogous to an atomic pile, with 1 neutron begetting 2 or more and creating a chain reaction, but I don't think it's god that he is after; he merely wants to be part of an uber version of the Manhattan Project. An A.I. version of the arms race with Hitler, but instead of keeping the Bomb away from Nazis he wants to save Life, the Universe and Everything from an apocalyptic "Bad Programmer". These might be similar examples of wish fulfillment, but to quote Richard Feynman, "whatever keeps him out of law school" (at least that is where I heard the line).
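The atomic-pile analogy is just multiplication-factor arithmetic; a toy sketch (the function and numbers are mine, only illustrating the "1 neutron begets 2 or more" point): above the critical factor the population explodes, below it the reaction fizzles.

```python
def population(k: float, generations: int, start: float = 1.0) -> float:
    """Size of a chain-reaction population after n generations,
    where each member begets k members in the next generation."""
    n = start
    for _ in range(generations):
        n *= k
    return n

population(2.0, 10)  # supercritical: 1 neutron becomes 1024 after 10 doublings
population(0.9, 10)  # subcritical: dwindles toward zero
```

The singularity argument borrows exactly this structure, with "improvement per self-rewrite" standing in for the multiplication factor k.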

His assumption that we are alone in the visible universe answers the Fermi Paradox brought up in a previous Science Saturday, and gives weight to his decision to save us from the same fate as the extinct alien races that don't inhabit this galaxy or cluster of galaxies.

I would say that Bob probably believes that Eliezer is trying to save the world just to get laid (not me saying that, by the way).

Speaking of the devil.

Quote:
Originally Posted by Ocean View Post
I would encourage Bob to clearly define how he uses it. If he thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear.
I think that would be some wish fulfillment of your own :-) I think he enjoys "bobbing" and weaving around the question of his belief system; why spoil the fun by leaping to one side or the other of the fence? I will say that his use of the Intelligent Design-ish example of "designed" bacteria begetting all things in life is, mmm, more problematic than a more naturalistic idea of some meta-natural-selection phenomenon. I would have stood on that example personally, but I'm not in a feather-rustling mood.
  #36  
Old 08-07-2010, 06:38 PM
ohreally ohreally is offline
 
Join Date: Jan 2010
Posts: 666
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Wright needs to define what he means by purpose (some form of theism?). He claimed victory by getting Dennett to buy his definition of purpose as some sort of function-producing design. But a definition is only part of the game's rules and not part of the game itself. I have no problem with Wright's definition of anything, including purpose. But he needs to give us a definition first before we can argue whether it has this or that attribute. Otherwise we'll be arguing over definitions and not properties.

Wright is guilty of this confusion when he says something like "So you admit that purpose does not need to involve consciousness." He makes it sound as though we're discussing a property of something called purpose, when in fact we're still trying to pin down the definition of the word.

Consider the sentences "So you'll admit that 7 follows 6" and "So you'll admit that 7 is prime." The first simply restates the standard definition of the number 7: it is true by convention. The second states a property of that number: it could be true or false.
  #37  
Old 08-07-2010, 06:39 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by thouartgob View Post
I half remember Eliezer's comment about the singularity being analogous to an atomic pile, with one neutron begetting two or more and creating a chain reaction, but I don't think it's God that he is after; he merely wants to be part of an uber version of the Manhattan Project. An A.I. version of the arms race with Hitler, but instead of keeping the Bomb away from the Nazis he wants to save Life, the Universe and Everything from an apocalyptic "Bad Programmer". These might be similar examples of wish fulfillment, but to quote Richard Feynman, "whatever keeps him out of law school" (at least that is where I heard the line from).

His assumption that we are alone in the visible universe answers the Fermi Paradox brought up in a previous Science Saturday, and gives weight to his decision to save us from the same fate as the extinct alien races that no longer inhabit this galaxy or its cluster.
Your argument above doesn't invalidate the opinion I expressed about the Almighty Savior. Of course I was using the expression as a metaphor, but in fact it is a very close comparison. Whether the Savior is a man-made, superintelligent, cascading self-programming entity is just a matter of the external shape of the fantasy. The core desire for the all-knowing protector is still contained in it. The additional argument about preventing the evil ones from developing the technology first is a valid one, but it rather closely resembles the kind of argument one would make to obtain funding when other arguments have failed.

Quote:
I would say that Bob probably believes that Eliezer is trying to save the world just to get laid (not me saying that, by the way)
I don't know whether that is what Bob implied, although I wouldn't be surprised.

Quote:
I think that would be some wish fulfillment of your own :-)
I'm not sure what you mean, but indeed it would be nice if people would either define themselves more clearly or place all their cards on the table. When it comes to this topic, Bob seems to be hiding one piece, and unfortunately that makes his arguments confusing.

Quote:
I think he enjoys "bobbing" and weaving around the question of his belief system, and why spoil the fun by leaping to one side of the fence or the other?
You must be seeing something that I don't see. I don't see him having fun "on the fence".

Quote:
I will say that his use of the Intelligent Design-ish example of "designed" bacteria begetting all living things is, mmm, more problematic than a more naturalistic idea of some meta-natural-selection phenomenon. I would have pressed him on that example personally, but I'm not in a feather-ruffling mood.
When an argument fails to inspire productive discussion and keeps being reduced to endless rumination about definitions and repeated contradictions, it should be revised. A piece is missing or misplaced. I don't think that's hard to see. Figuring out how to fix the problem is another story.

Let's say that we trust Bob's ability to fix it, sooner or later.
  #38  
Old 08-07-2010, 06:53 PM
T.G.G.P T.G.G.P is offline
 
Join Date: Nov 2006
Posts: 278
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I'm very skeptical of the possibility of aggregating a preference from all of humanity. Kenneth Arrow wrote a while back about the difficulties of aggregating preferences beyond even one person.
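(A small illustration of my own, not from Arrow directly: the classic Condorcet cycle behind Arrow-style aggregation difficulties. Three voters each have perfectly transitive rankings of three options, yet pairwise majority vote produces a cycle, so no coherent "group preference" exists even for three people.)

```python
# Condorcet cycle: three transitive individual rankings whose
# majority aggregate is intransitive.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Every pairwise contest is won 2-1, yet the results form a cycle,
# so there is no consistent aggregate ranking to extract:
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Scaling the same problem from three voters to all of humanity is the difficulty Eliezer's aggregation proposal has to confront.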

Ants are not optimized, but execute adaptations. Close enough I guess, so I'm just nitpicking.

Science usually does not forget, but it happened to the Tasmanians and it has happened even to us.

Special relativity was the product of a number of people (including Poincaré, Lorentz and Minkowski). General relativity was pretty much all Einstein, though.

Steven Landsburg argues that complexity is evidence of UNINTELLIGENCE. Optimization for a purpose (which Eliezer saw in Grand Theft Auto (hopefully he was thinking of GTA 2!)) is much more of a sign of intelligence than complexity.

Last edited by T.G.G.P; 08-07-2010 at 07:31 PM..
  #39  
Old 08-07-2010, 08:23 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

A couple of thoughts in response:


Quote:
Elizer's point, I think, is that even if moral nihilism is true and human morality is arbitrary, that doesn't mean that there isn't a pattern in it.
Or that there is. The "pattern" could be either arbitrary OR non-existent. There might not be any pebbles to sort.

Also, I have seen Eliezer express great moral indignation on BHTV, as we humans are wont to do. He views transhumanism through a moralistic lens (how could he not?).

That makes me think Bob's skepticism makes sense: how do you plan for a non-apocalyptic Singularity without a moral consensus? Are Ahmadinejad, Benjamin Netanyahu, Peter Singer and the Pope going to be consulted on the construction of super-intelligent AI? If not, how do you exclude them?
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #40  
Old 08-07-2010, 10:11 PM
Monkey Corp Monkey Corp is offline
 
Join Date: Oct 2009
Location: Adelaide, Australia
Posts: 27
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I'm not sure this debate got beyond Bob saying there was a purpose (god) behind the emergence of the first life on Earth, and Eliezer asking, if so, where that purpose (god) came from. I'm not sure we can reasonably expect a winning argument that convinces the other side. It's an old debate whose respective sides were well presented here, exposing nuances not often thought of. Thank you both.
 

