Bloggingheads Community > Diavlog comments
  #1  
Old 08-07-2010, 01:03 AM
Bloggingheads Bloggingheads is offline
BhTV staff
 
Join Date: Nov 2007
Posts: 1,936
Default Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

  #2  
Old 08-07-2010, 02:35 AM
r108dos r108dos is offline
 
Join Date: Apr 2008
Posts: 34
Default Re: Science Saturday: Purposes and Futures

Where is Mickey? This is singularly unacceptable.
  #3  
Old 08-07-2010, 03:07 AM
karlsmith karlsmith is offline
 
Join Date: Aug 2010
Posts: 10
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Long time watcher, first time commenter. Wright and Yudkowsky together. I haven't even started yet and I am giddy.
  #4  
Old 08-07-2010, 03:29 AM
Abdicate Abdicate is offline
 
Join Date: Dec 2007
Location: Eden Prairie, Minnesota
Posts: 90
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I was really appalled by Bob's conduct in this diavlog.
  #5  
Old 08-07-2010, 05:43 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I agree.
  #6  
Old 08-07-2010, 03:49 AM
BeachFrontView BeachFrontView is offline
 
Join Date: Jul 2008
Location: Los Angeles
Posts: 94
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Great diavlog!


Eliezer has great clarity on these complicated topics.
  #7  
Old 08-07-2010, 04:45 AM
jerusalemite jerusalemite is offline
 
Join Date: Jul 2010
Posts: 6
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

"I was really appalled by Bob's conduct in this diavlog.

Be specific. What conduct was appalling?
  #8  
Old 08-07-2010, 05:50 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

He was very combative and seemed to put very little effort into understanding the arguments and concepts espoused by Yudkowsky. Now, it may be that Yudkowsky wasn't explaining himself well, but strangely I thought that I understood what Yudkowsky was saying and it seemed to me that Bob was absolutely clueless on the matter, despite an abiding confidence on his part that he had unwoven deep flaws in Yudkowsky's points.

So in summary, the combination of combativeness and what seemed to be a mistaken comprehension of his opposite's points was very off-putting and gave me the sense that I was wasting my time watching the diavlog.
  #9  
Old 08-07-2010, 07:16 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

There are just too many meanings of 'purpose' going around here for them all to be squared under one term, with each user held responsible for every use of it as equivalent to every other of his uses.
  #10  
Old 08-07-2010, 07:38 AM
Baxta76 Baxta76 is offline
 
Join Date: Sep 2009
Posts: 6
Default Re: Science Saturday: illusion of "purpose"

Mr Robert Wright annoys me in this diavlog almost as much as his interrogation of Dan Dennett does on meaningoflife.tv.
Natural selection creates the illusion of "purpose" because it inherently involves improvement over time. It is not driven towards anything other than survival.
Is this really science?

Last edited by Baxta76; 08-07-2010 at 08:07 AM..
  #11  
Old 08-07-2010, 07:46 AM
bbenzon bbenzon is offline
 
Join Date: Jan 2009
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

So, we're back to science fiction Saturday, eh?
  #12  
Old 08-07-2010, 08:12 AM
testostyrannical testostyrannical is offline
 
Join Date: Aug 2006
Location: Denver
Posts: 83
Default The Singularity Is Nonsense

And probably basically contradicts what we know about complexity. What does it even mean for a program to make itself "smarter"? It's one thing to build faster, more sophisticated CPUs, another to construct "intelligent" code that can navel gaze and go, wow, this portion of myself isn't as smart as, er, this part of me that's looking at it critically...I should rewrite it better! This Skynet bullshit is what happens when you take certain nerd myths about smartness and graft them onto vaguely apocalyptic assumptions about the future of technology.
  #13  
Old 08-07-2010, 12:19 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Well, not exactly science, but one may consider it a spin off from science.

Eliezer was on the best behavior I've seen from him at BhTV. I think that debating Bob made him tone down a bit. His effort was obvious. Bob, on the other hand, didn't tone down anything. He seemed rather annoyed with Eliezer during the first half, when they discussed the Singularity. I can't blame him for that...

During the first half of the diavlog Bob and Eliezer discussed the concept of the Singularity. Bob tried to address an issue which is rather obvious to those of us who are skeptical about this project, that is, the seemingly childish wishful thinking behind the idea. I have repeatedly wondered whether the entire Singularity/Artificial Intelligence project is based on a fantasy about creating an All Mighty Godly Father who will rescue us and solve all of humanity's problems, or whether that's just Eliezer's unfiltered wishful thinking. Bob's questions seemed appropriate and basic in terms of what a layperson would want to know about the topic (much appreciated). In one of the sections they used as an example the case in which the Singularity, if it existed today, would be able to solve the Israel-Palestine problem after being given very minor instructions about the goals. They also talked about the possibility of "bugs" and the possible consequences when applied to the All Mighty Singularity. Eliezer seemed to consistently minimize the concern about that possibility. There's an aspect of Eliezer's psyche that projects such adoration for the idea of a superintelligent entity that will take care of us that it's difficult to take his arguments seriously. This discussion was really detrimental to the Singularity cause.

In the second half they discussed Bob's idea of "purpose". And this was revenge time for Eliezer. Bob, indeed, did something similar to what he did with Dan Dennett, as another commenter pointed out. I don't know whether Bob realizes how he comes across when he tries to explore his interlocutor's opinions so that, by putting them together in his own way, he can conclude that the person must agree with him. To the spectator it comes across as a stretch, and very often it's obvious that it's not what the interlocutor thinks. Eliezer constantly had to correct Bob's assumptions about his (Eliezer's) opinions. If Bob is practicing maieutics from Socrates' dialectical method, he needs to practice more, because it isn't working well. (Bob, if I may, this is the best advice I can give you.)

I consider myself among those who object to using the term purpose unless there is an intention to imply that there is a designer or creator. Purpose implies a predetermined goal or outcome. Natural phenomena, as we understand them, don't have the ability to establish goals. We can study nature and we can find patterns, or a direction in which evolution occurs, but the idea of purpose remains our own construct, and says nothing about the external reality.

Knowing that the word purpose has such a problematic connotation, I would encourage Bob to clearly define how he uses it. If he thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear. If he thinks there's no such creator, and that evolution is directed by natural laws without a pre-established plan, then he should use a different term. It would save a lot of the time and energy that is being spent on this kind of discussion.

Cheers!
  #14  
Old 08-07-2010, 03:57 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Good post!

Quote:
Eliezer was in the best behavior I've seen him at BhTV. I think that debating Bob made him tone down a bit. His effort was obvious.
I thought so too. Bob, to his credit, is disarming, and his personality may have prevented E from going into his supremely obnoxious I'm-so-much-smarter-than-you-that-it's-hilarious mode.

Since we all know Eliezer's transhumanism/Singularity shtick backwards and forwards by now, after MULTIPLE appearances on BhTV, I confess to only having watched to see how Bob would handle Eliezer.

How did Bob do? Putting aside the Ali G theory, I think Bob works the cameras infinitely better than E does. It's as if Bob had a superhuman intelligence and E were merely the sum total of 11 billion chimp intelligences (just kidding). But Bob does know how to roll his eyes and work the viewer with winks and nods while watching his partner sputter and rave.

Quote:
I have repeatedly wondered whether the entire Singularity/Artificial Intelligence project is based on a fantasy about creating an All Mighty Godly Father who will rescue us and solve all of humanity's problems, or whether that's just Eliezer's unfiltered wishful thinking.
Coincidentally, this is precisely the fantasy that E's parents, grandparents and great-grandparents had as ultra-Orthodox Jews waiting for the Messiah. But let's not go there. Bob seemed content to suggest it was an adolescent fantasy available to any science-fiction-consuming boy, and he put E on the defensive, explaining how he had recently grown up.

Quote:
In the second half they discussed Bob's idea about "purpose". And this was revenge time for Eliezer. Bob, indeed, did something similar to what he did with Dan Dennett, as another commenter pointed out.
I saw this one coming from a mile away, preparing myself for Bob's Dennett frenzy a good 30 minutes before it happened. At least Bob didn't post the promised link to his 40,000 blog posts on the earthshaking and endless "You said it! No, I didn't!" Dennett-Wright controversy. (Speaking of adolescent arm-wrestling contests.)

Quote:
If he [Bob] thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear. If he thinks there's no such creator, and that evolution is directed by natural laws without a pre-established plan, then he should use a different term. It would save a lot of the time and energy that is being spent on this kind of discussion.
Agreed. I think it's fair to say that most atheists who read "The Evolution of God" concluded, like Eliezer, that Bob was more than flirting with the idea of a Higher Power.

Bob has written three excellent books about his views. He also has BH, the NYT and many other amplifiers to make himself abundantly clear. Which he has.

It seems quite odd for someone who speaks and writes as well as Bob does to then claim he has been grossly misunderstood by his readership and listeners.

We get it, Bob: You want to assign a higher probability to a Designer, a Big Purpose or a (G-o-d) than most atheists are comfortable with; you want to make a case that is compatible with most religious thinking; but you also want to remain a card-carrying member of the respectable scientific agnostic club. Can you have it both ways? Certainly not without making people like Eliezer, Dennett, John Horgan and Carl Zimmer scream.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #15  
Old 08-07-2010, 06:11 PM
thouartgob thouartgob is offline
 
Join Date: Oct 2006
Posts: 765
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Ocean View Post
I have repeatedly wondered whether the entire Singularity- Artificial Intelligence project is based on a fantasy about creating an All Mighty Godly Father who will rescue us and solve all of humanities problems or whether that's just Eliezer's unfiltered wishful thinking.
I half remember Eliezer's comment about the singularity being analogous to an atomic pile, with 1 neutron begetting 2 or more and creating a chain reaction, but I don't think it's god that he is after; he merely wants to be part of an uber version of the Manhattan Project. An A.I. version of the arms race with Hitler, but instead of keeping the Bomb away from Nazis he wants to save Life, the Universe and Everything from an apocalyptic "Bad Programmer". These might be similar examples of wish fulfillment, but to quote Richard Feynman, "whatever keeps him out of law school" (at least that is where I heard the line).

His assumption that we are alone in the visible universe answers the Fermi Paradox brought up in a previous Science Saturday, and gives weight to his decision to save us from the same fate as the extinct alien races that don't inhabit this galaxy or its cluster.

I would say that Bob probably believes that Eliezer is trying to save the world just to get laid (not me saying that, by the way).

Speaking of the devil.

Quote:
Originally Posted by Ocean View Post
I would encourage Bob to clearly define how he uses it. If he thinks there is some kind of creator or designer who established a final goal towards which evolution marches, he should make that clear.
I think that would be some wish fulfillment of your own. :-) I think he enjoys "bobbing" and weaving around the question of his belief system, and why spoil the fun by leaping to one side or the other of the fence? I will say that his use of the Intelligent Design-ish example of "designed" bacteria begetting all things in life is, mmmm, more problematic than a more naturalistic idea of some meta-natural-selection phenomenon. I would have stood on that example personally, but I'm not in a feather-ruffling mood.
  #16  
Old 08-07-2010, 06:39 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by thouartgob View Post
I half remember Eliezer's comment about the singularity being analogous to an atomic pile, with 1 neutron begetting 2 or more and creating a chain reaction, but I don't think it's god that he is after; he merely wants to be part of an uber version of the Manhattan Project. An A.I. version of the arms race with Hitler, but instead of keeping the Bomb away from Nazis he wants to save Life, the Universe and Everything from an apocalyptic "Bad Programmer". These might be similar examples of wish fulfillment, but to quote Richard Feynman, "whatever keeps him out of law school" (at least that is where I heard the line).

His assumption that we are alone in the visible universe answers the Fermi Paradox brought up in a previous Science Saturday, and gives weight to his decision to save us from the same fate as the extinct alien races that don't inhabit this galaxy or its cluster.
Your argument above doesn't invalidate the opinion I expressed about the All Mighty Savior. Of course I was using the expression as a metaphor, but in fact it is a very close comparison. Whether the Savior is a man-made, superintelligent, cascading self-programming entity is just a matter of the external shape of the fantasy. The core desire for the all-knowing protector is still contained in it. The additional argument about preventing the evil ones from developing the technology first is a valid one, but it certainly resembles rather closely the kind of argument that one would make to obtain funding when other arguments have failed.

Quote:
I would say that Bob probably believes that Eliezer is trying to save the world just to get laid ( not me saying that by the way )
I don't know whether that is what Bob implied, although I wouldn't be surprised.

Quote:
I think that would be some wish fulfillment of your own :-)
I'm not sure what you mean, but, indeed it would be nice if people would either define themselves more clearly or place all their cards on the table. When it comes to this topic, Bob seems to be hiding one piece, and unfortunately that makes his arguments confusing.

Quote:
I think he enjoys "bobbing" and weaving around the question of his belief system and why spoil the fun by leaping to one side or the other of the fence.
You must be seeing something that I don't see. I don't see him having fun "on the fence".

Quote:
I will say that his use of the Intelligent Design-ish example of "designed" bacteria begetting all things in life is mmmm more problematic than a more naturalistic idea of some meta natural selection phenomenon. I would have stood on that example personally but I'm not in a feather rustling mood.
When an argument fails to inspire productive discussion, and keeps being reduced to endless ruminations about definitions and repeated contradictions, it should be revised. A piece is missing or misplaced. I don't think it's that hard to see that. Figuring out how to fix the problem is another story.

Let's say that we trust Bob's ability to fix it, sooner or later.
  #17  
Old 08-08-2010, 11:29 AM
thouartgob thouartgob is offline
 
Join Date: Oct 2006
Posts: 765
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Ocean View Post
The additional argument about preventing the evil ones from developing the technology first is a valid one, but it certainly resembles rather closely the kind of argument that one would make to obtain funding when other arguments have failed.
The funding argument was something that I had not considered, but it's definitely something I should have. Well, the quest does have the ring of a science fiction story in either case. Title suggestion: "Arms Race to GOD!"


Quote:
Originally Posted by Ocean View Post

I'm not sure what you mean, but, indeed it would be nice if people would either define themselves more clearly or place all their cards on the table. When it comes to this topic, Bob seems to be hiding one piece, and unfortunately that makes his arguments confusing.
Well, I like to think that I meant to make a vaguely humorous suggestion that wanting Bob to elucidate his position, either by better analogies or by just saying he believes in a god, may be an unfulfilled desire, since he is seemingly keeping his cards close to his chest for a reason. Either he is still working things out or he wants to keep people's attention.


Quote:
Originally Posted by Ocean View Post
You must be seeing something that I don't see. I don't see him having fun "on the fence".

Whatever it is, it's working for him. Maybe it's just a slowly evolving epiphany ... evolving using the mechanism of Natural Selection in a Materialist world :-)


Quote:
Originally Posted by Ocean View Post
When an argument fails to inspire productive discussion, and keeps being reduced to endless ruminations about definitions and repeated contradictions, it should be revised. A piece is missing or misplaced. I don't think it's that hard to see that. Figuring out how to fix the problem is another story.

Let's say that we trust Bob's ability to fix it, sooner or later.
Somebody Say Amen
  #18  
Old 08-08-2010, 11:37 AM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by thouartgob View Post
The funding argument was something that I had not considered, but it's definitely something I should have. Well, the quest does have the ring of a science fiction story in either case. Title suggestion: "Arms Race to GOD!"

Well, I like to think that I meant to make a vaguely humorous suggestion that wanting Bob to elucidate his position, either by better analogies or by just saying he believes in a god, may be an unfulfilled desire, since he is seemingly keeping his cards close to his chest for a reason. Either he is still working things out or he wants to keep people's attention.

Whatever it is it's working for him. Maybe it's just a slowly evolving epiphany ... evolving using the mechanism of Natural Selection in a Materialist world :-)

Somebody Say Amen
Amen!
  #19  
Old 08-08-2010, 07:43 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Bob's Mystery God Analogy

I have never quite understood the objections here to Eliezer's "behavior." He has always just seemed like someone with some very specific ideas that he has expressed in ways that have gotten people's attention, and attends to the resulting questions as openly and frankly as he can, nothing more, nothing less.

Other than that, I agree with your assessment of Bob's continuing pursuit of his idea of a possible purpose behind natural selection. Its "purpose" is clearly to point in the direction of a justification for a natural God, and Bob has all but admitted that is his motivation. Moreover, the idea that he is trying to promote is itself among the vaguest concepts I have ever encountered. Every time I finish listening to him press his case to someone, my head is spinning from trying to understand just what in the hell the thing is he is trying to get the person to say is possible. As far as I can tell, Bob insists we need to see the development of life on Earth, up through the internet, as evidence that there is purpose behind the process we already know to have driven that development, merely because, if there were, its role would be analogous to the role natural selection played in eventually producing organisms whose gestation period (or full life cycle?) Bob finds to be analogous to development on Earth writ large. As far as I can tell, Bob wants people to admit that because he can draw what he believes to be a good analogy between two relationships (one between two things we agree exist, and one between a thing we agree exists and one he just posits without describing much at all), the ability to draw that analogy should be seen as evidence that the fourth thing exists. Further, by insisting he is just trying to get admissions that this idea is barely possible ("not utterly laughable," or whatever his standard is), he engages full-bore in the style of pro-God argumentation that people like D'Souza use to get non-believers to admit they aren't perfectly certain and therefore can't deny the possibility of the existence of God. Well, when you get down to it, what couldn't Bob or Dinesh get us all to have to admit we aren't certain isn't the case, if he put his mind to it?

All that said, it was fair for Bob to be very forward in his defense of his ideas here, because Eliezer apparently said he wanted to 'interrogate' him about them, or at least requested the dialogue. This is in contrast to Bob's meaningoflife.tv dialogue with Dennett, where by all appearances Prof. Dennett really didn't have much idea how attached Bob is to his idea, and submitted to Bob's questioning at Bob's request rather than the other way round. Then afterwards, when, on the strength of a series of rickety analogies built on assumptions and stretched definitions, Bob finally got Dennett to cry uncle and say that one could see this chain of reasoning as "evidence" if one so chose, Bob rushed out to publish what was essentially a press release declaring that he had gotten Dennett to recant his worldview. Talk about appalling behavior. I'm not surprised Bob thought better of reposting the interview and aftermath after talking big about it in the diavlog. It's nothing less than a video and written record of what should be seen as the lowest moment of his career, and he is right to be ashamed of it if that's why he chickened out of reposting the links. I like Bob's manner and humor with just about every guest, and what he's created here is brilliant, but when it comes to this idea he developed out of resentment at perceived slights in the elite academy because of his religious beliefs (all admitted), defensiveness gets the better of him, and he becomes someone I want nothing to do with. Luckily, when not directly confronted with it, I am more than happy to put all that out of mind and enjoy his deadpan interactions with Mickey.

Last edited by MikeDrew; 08-08-2010 at 07:48 AM..
  #20  
Old 08-08-2010, 01:30 PM
uncle ebeneezer uncle ebeneezer is offline
 
Join Date: Feb 2007
Posts: 3,332
Default Re: Bob's Mystery God Analogy

Excellent post, Mike. In short: Bob's the best, until he gets on his "purpose-driven" rants, and he starts acting all crazy.
  #21  
Old 08-08-2010, 10:07 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Bob's Mystery God Analogy

Quote:
Bob's the best, until he gets on his "purpose-driven" rants, and he starts acting all crazy.
Nice summation.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #22  
Old 08-09-2010, 01:44 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Re: Bob's Mystery God Analogy

Thanks. That's exactly right.
  #23  
Old 08-09-2010, 12:31 PM
Gilbert Garza Gilbert Garza is offline
 
Join Date: Aug 2010
Location: El Paso, Texas
Posts: 7
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

First time commenting.
I believe Bob said in TEOG that he would have to write another book to give himself a chance to take direction back to purpose or take direction forward to purpose. We'll have to wait and see if and how that goes. His diavlog with Eliezer may have been a testing ground and a search for ideas.
  #24  
Old 08-09-2010, 12:42 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Gilbert Garza View Post
First time commenting.
I believe Bob said in TEOG that he would have to write another book to give himself a chance to take direction back to purpose or take direction forward to purpose. We'll have to wait and see if and how that goes. His diavlog with Eliezer may have been a testing ground and a search for ideas.
Welcome to BhTV's outspoken community!
  #25  
Old 08-10-2010, 10:12 PM
Gilbert Garza Gilbert Garza is offline
 
Join Date: Aug 2010
Location: El Paso, Texas
Posts: 7
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

(What can I say? This is part of me. Can this forum take occasional long broad general commentaries like the following which are likely to be ignored and be discussion dead ends? If not then I will restrict myself to parts of me that no doubt are acceptable.)

Origination and formation (and sometimes information) happen through accidentally advantageous accidental changes which allow created forms, singly or in combination, to survive long enough to reproduce and (in effect) spread (because of better survival rates) these advantaged characteristics in later generations in succeeding populations and environments, and in that way accidentally assure that the process continues. As readers of Robert Wright, we try to refine and extend our perspective and our understanding as we view these processes operationally, in terms of material and technological manifestations of the applications of non-zero-sum actions, transactions, and outcomes at the levels of the interbiological, the intercultural, the intermoral, the intermeme, and the interbeyond. We might wonder whether this process of evolution and inter-evolution, which we would easily characterize as being merely self/it-serving and self/it-perpetuating (in the accidental directions taken and the accidental purposes served), is also effectively serving (or is destined ultimately to serve) a higher (accidental or non-accidental) ultimate purpose. Looking backward and extrapolating forward, what might be the more obvious, more optimistic views on this, and what might be the more obvious, less optimistic views, that we would be willing to state (and for some people risk stating) in introspection and speculation?

At the high end we might wonder if (and even wish that) our existence and progress as creation and creatures were the result (for the purpose and by the will) of a prior instance of creation and creatures which were at that time (and now even more are) of such a nature and character as to make it very easy for us to revere, love, and wish to deservedly be accepted as kin in creation and being with them. Still at the high end, we might wonder if (and wish that) as a mere outcome of the process we and our descendants and the world that we create for ourselves will in time evolve to be populated by beings of such a nature and culture that, looking back, all of what has preceded would seem to have been directed toward this state of existence and being as an ultimate higher purpose. At the low end there is the possibility that our creation will reach a not unremarkable but hardly transcendent peak followed by stasis (as judged by us or by other creations and creatures of some knowable or unknowable kind, origin, and state of being or existence), an occurrence somewhat like music seems to have had in reaching and never again equaling its peak during a very remarkable Classical Period. To help us with our speculation, as readers of Robert Wright, we have in TEOG the benefit of a thoroughly-investigated and highly-developed perspective on the historical record, both from sacred scriptures and narratives and from secular historical records, as they pertain to the evolution of the intercultural, the intermoral, the intermemal, and with (I think) the promise of a development of the interbeyond, and of the Integral of all.
  #26  
Old 08-10-2010, 10:26 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I especially like your optimism at the end.

Peace.
  #27  
Old 08-07-2010, 12:31 PM
hamandcheese hamandcheese is offline
 
Join Date: Nov 2008
Location: Nova Scotia
Posts: 48
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
__________________
Abstract Minutiae blog
  #28  
Old 08-07-2010, 12:51 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
Excellent points H&C!

Your comment points exactly at one of the problems that I see when Eliezer makes his arguments. From previous diavlogs I remember that he would argue that, in theory, the AI would handle all the elements that are needed in order to make moral judgments (i.e., cognitive, emotional, etc.) and, because of its superior ability, it would be able to perfect morality. The problem is that a significant part of moral judgment depends on our own limitations and shortcomings. I don't know how he would solve that problem. Or perhaps I should say that he doesn't get to that level of detail, and that he mostly expresses, as I said before, his wishful thinking about how it could generally work out.
Reply With Quote
  #29  
Old 08-07-2010, 01:46 PM
claymisher claymisher is offline
 
Join Date: Mar 2008
Location: Newbridge, NJ
Posts: 2,673
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
My hope is that the singularity happens but, like technology in general, follows an s-curve (logistic growth), so that computer intelligence tops out at the level of a corgi. That'll teach those nerds. At least we'll learn a lot about simulated-corgi morality.
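The s-curve scenario is easy to make concrete: a logistic curve looks exponential early on and then flattens at a fixed ceiling. A minimal sketch — every constant here is illustrative, and nothing in it is a forecast:

```python
# Logistic (s-curve) growth: near-exponential at first, then
# saturating at a carrying capacity K -- the "tops out" scenario.
# All parameter values are illustrative only.
import math

def logistic(t, K=100.0, r=1.0, t0=10.0):
    """Value at time t of a logistic curve with ceiling K,
    growth rate r, and midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

early = logistic(2) / logistic(1)    # ratio is close to e: exponential regime
late = logistic(30) / logistic(29)   # ratio is close to 1: growth has stalled
print(round(early, 2), round(late, 2))
```

The same curve that looks like runaway growth on the left of the midpoint is indistinguishable from stagnation on the right — which is exactly the corgi-plateau hope.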

Anyway, nobody makes me laugh like Yudkowsky (I love the part where he explains exactly how you would program a peacemaking AI). The pairing of such self-confidence with a complete lack of achievement is comedy gold. Too bad he's not in on the joke.
Reply With Quote
  #30  
Old 08-07-2010, 02:51 PM
ohreally ohreally is offline
 
Join Date: Jan 2010
Posts: 666
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by claymisher View Post
Anyway, nobody makes me laugh like Yudkowsky (I love the part where he explains exactly how you would program a peacemaking AI). The pairing of such self-confidence with a complete lack of achievement is comedy gold. Too bad he's not in on the joke.
Are you sure? I am beginning to wonder if Yudkowsky is not our local Ali G.
Reply With Quote
  #31  
Old 08-07-2010, 03:11 PM
Florian Florian is offline
 
Join Date: Mar 2009
Posts: 2,118
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by ohreally View Post
Are you sure? I am beginning to wonder if Yudkowsky is not our local Ali G.
I nominate him to the college of pataphysics. Perhaps a notch above Ali G.?

http://en.wikipedia.org/wiki/'Pataphysics
Reply With Quote
  #32  
Old 08-07-2010, 03:31 PM
cragger cragger is offline
 
Join Date: Aug 2007
Posts: 632
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

An AI that would be useful in solving the world's problems would have to be far more intelligent than humans; it would have to be so superintelligent that it could also map out a foolproof plan for manipulating humans into implementing the solutions. Self-interest, denial, and self-deception are such powerful forces that we consistently fail to act, both individually and collectively, on many issues for which we already know the solutions.
Reply With Quote
  #33  
Old 08-12-2010, 07:23 PM
Gilbert Garza Gilbert Garza is offline
 
Join Date: Aug 2010
Location: El Paso, Texas
Posts: 7
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This is so obvious that perhaps I am the less-than-knowledgeable reader of Asimov who should be bringing it up: Asimov and other science fiction writers have been all over this robot-mind and robot-mind-creator thing. What we haven't had is Robert Wright working it out as a fairly inevitable (maybe vital, maybe relatively minor) part of the general progression of the substantial and expansive non-zero-sum evolutionary reality — the marvelous track and trace of creation and creatures that has occurred, is occurring, and is yet to occur. Maybe.
Reply With Quote
  #34  
Old 08-07-2010, 05:17 PM
bbenzon bbenzon is offline
 
Join Date: Jan 2009
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
...with a complete lack of achievement is comedy gold.
Has he actually produced any code? Has the Singularity Institute produced any code? Some months ago I took a run at his paper on levels of intelligence and decided it was mostly a word salad.
Reply With Quote
  #35  
Old 08-07-2010, 04:04 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.
Good point. Bob seemed to accept this in principle when he alluded to different (mutually exclusive?) theories of ethics. He said something to the effect that, although he was a utilitarian, why should the smart robot be one as well? Oddly, Bob, who is less of an atheist than Eliezer, may be more willing to accept this moral-nihilism possibility, even with all his talk of the directionality of morality.

Eliezer, on the other hand, may have too much invested in the Benevolence and (human-like) Genius of the Singularity to seriously entertain the possibility of moral nihilism. He likes to talk of the dangers of AI going off the rails, but only to persuade us that if we do the Right Thing, it won't.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
Reply With Quote
  #36  
Old 08-07-2010, 05:54 PM
odopoboqo odopoboqo is offline
 
Join Date: Jul 2010
Posts: 3
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Wonderment View Post
He said something to the effect that, although he was a utilitarian, why should the smart robot be one as well? Oddly, Bob, who is less of an atheist than Eliezer, may be more willing to accept this moral-nihilism possibility, even with all his talk of the directionality of morality.

Eliezer, on the other hand, may have too much invested in the Benevolence and (human-like) Genius of the Singularity to seriously entertain the possibility of moral nihilism.

I think Eliezer does entertain, and possibly even accepts, the possibility of moral nihilism. I'm reminded of this post that Eliezer wrote on Less Wrong a while back.

Eliezer's point, I think, is that even if moral nihilism is true and human morality is arbitrary, that doesn't mean that there isn't a pattern in it. In the parable, the pebblesorters' morality is not strictly random: prime numbers are correct, composite numbers are incorrect. You can figure out this pattern even if you assume that the correctness of primes is arbitrary.
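The pebblesorter pattern is concrete enough to check mechanically: a heap is "correct" exactly when its pebble count is prime, so a plain trial-division test recovers the rule from outside the pebblesorters' value system. A small sketch (my own illustration of the parable, not code from the post):

```python
# The pebblesorters' "arbitrary" standard still has a learnable
# pattern: heaps with a prime number of pebbles are correct,
# composite heaps are incorrect. Trial division recovers the rule.

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

correct_heaps = [2, 3, 5, 7, 11, 13]     # heaps the pebblesorters accept
incorrect_heaps = [4, 6, 8, 9, 10, 12]   # heaps they reject

# The hypothesis "correct = prime" classifies every example right,
# whether or not you grant that primeness *matters*.
assert all(is_prime(n) for n in correct_heaps)
assert not any(is_prime(n) for n in incorrect_heaps)
```

The point carries over: finding the pattern in a value system is a separate question from endorsing it.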

Last edited by odopoboqo; 08-07-2010 at 05:57 PM..
Reply With Quote
  #37  
Old 08-07-2010, 08:23 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

A couple of thoughts in response:


Quote:
Elizer's point, I think, is that even if moral nihilism is true and human morality is arbitrary, that doesn't mean that there isn't a pattern in it.
Or that there is. The "pattern" could be either arbitrary OR non-existent. There might not be any pebbles to sort.

Also, I have seen Eliezer express great moral indignation on BHTV, as we humans are wont to do. He views transhumanism through a moralistic lens (how could he not?).

That makes me think Bob's skepticism makes sense: How do you plan for the non-apocalyptic Singularity without a moral consensus? Are Ahmadinejad, Benyamin Netanyahu, Peter Singer and the Pope going to be consulted on the construction of super-intelligent AI? If not, how do you exclude them?
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
Reply With Quote
  #38  
Old 08-07-2010, 06:05 PM
Meng Bomin Meng Bomin is offline
 
Join Date: Oct 2008
Posts: 57
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
I agree that moral facts may not exist; however, I think that's where Yudkowsky's differentiation between facts and values comes in. One of the allegories Yudkowsky likes to use (and there were hints of it early in the diavlog) is the paperclip-maximizing AI, which only cares about maximizing the number of paperclips in the universe. Maximizing paperclips is not a fact about the universe. However, a self-modifying paperclip maximizer that properly preserved its initial values would not find paperclip maximization to be false.

And of course an AI whose goals were orthogonal to, or directly contrary to, the continued survival of the human species could indeed end our existence, which I believe is part of the motivation behind Yudkowsky's institute's attempts to figure out how to "properly" make a friendly AI.

So, from Yudkowsky's point of view, a "moral nihilist" AI could arise for a number of reasons, including that it wasn't initially designed to optimize in ways analogous to human morality, or that it didn't adequately protect such optimization during self-modification. Indeed, from what I've read and heard of Yudkowsky, the vast majority of the possibility space for a superhumanly intelligent AI would qualify by most standards as "moral nihilist". He just hasn't used that particular terminology.
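The value-preservation point can be sketched as a toy: an agent accepts a rewrite of its goal function only if the candidate ranks test outcomes the same way the current goal does; otherwise the rewrite is value drift. This is purely my illustrative construction, not anything proposed in the diavlog or by the Singularity Institute — `accept_rewrite`, `paperclip_utility`, and the world dictionaries are all invented for the example:

```python
# Toy sketch of value preservation under self-modification: a
# candidate goal function is accepted only if it agrees ordinally
# with the current goal on a battery of test world-states.

def paperclip_utility(world):
    """Current goal: count paperclips in a world-state dict."""
    return world.get("paperclips", 0)

def accept_rewrite(current_goal, candidate_goal, test_worlds):
    """Accept only if both goals rank every pair of test worlds the
    same way (ordinal agreement, so mere rescaling is fine)."""
    for a in test_worlds:
        for b in test_worlds:
            cur = current_goal(a) - current_goal(b)
            cand = candidate_goal(a) - candidate_goal(b)
            if (cur > 0) != (cand > 0) or (cur < 0) != (cand < 0):
                return False  # the candidate reverses a preference
    return True

worlds = [{"paperclips": n, "staples": 10 - n} for n in range(5)]

faster = lambda w: 2 * w.get("paperclips", 0)   # same ranking, rescaled
drifted = lambda w: w.get("staples", 0)         # values have drifted

print(accept_rewrite(paperclip_utility, faster, worlds))   # True
print(accept_rewrite(paperclip_utility, drifted, worlds))  # False
```

The toy also shows why the problem is hard: the check is only as good as the battery of test worlds, and a candidate goal could agree on all of them while diverging elsewhere.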
Reply With Quote
  #39  
Old 08-09-2010, 01:48 PM
Flaw Flaw is offline
 
Join Date: Feb 2008
Posts: 84
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
Bob asks this question.
http://bloggingheads.tv/diavlogs/300...5:01&out=18:17

Eliezer basically says that an AI could extract the moral schema from the population of humans (a brain scan or something clever...); that it could gather what we value and, with longer-term, more powerful thinking, etc., create a world that facilitated our ideals.

Nihilism is avoided because the AI is rooted in the schema found in "humanity".
Reply With Quote
  #40  
Old 08-09-2010, 02:22 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Flaw View Post
Eliezer basically says that an AI could extract the moral schema from the population of humans (a brain scan or something clever...); that it could gather what we value and, with longer-term, more powerful thinking, etc., create a world that facilitated our ideals.

Nihilism is avoided because the AI is rooted in the schema found in "humanity".
Without even getting into the issue of how a brain scan would detect moral values, the above argument assumes that moral values emerge as some kind of average of opinion. Do we know that's the case?
Reply With Quote
 

