Go Back   Bloggingheads Community > Diavlog comments

Diavlog comments Post comments about particular diavlogs here.

  #41  
Old 08-07-2010, 10:25 PM
Epicurus Epicurus is offline
 
Join Date: Aug 2007
Posts: 29
Smile Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This diavlog was thoroughly entertaining and interesting. Bob's dickishness is highly amusing. Keep it up.

Last edited by Epicurus; 08-07-2010 at 10:28 PM..
  #42  
Old 08-07-2010, 10:27 PM
Unit Unit is offline
 
Join Date: Aug 2008
Posts: 1,713
Default The deepest point ever made

This is a beautiful and extremely deep point that Bob makes here.
  #43  
Old 08-07-2010, 11:26 PM
Unit Unit is offline
 
Join Date: Aug 2008
Posts: 1,713
Default Re: The deepest point ever made

Quote:
Originally Posted by heatfish View Post
No it's not close to a deep point. Human cooperation is a hallmark of our species. Ants cooperate to build an ant hill. Ta-da!

I thought Bob was at his strident best when attempting to defend his religious convictions, which is really what all his deflections were about.

I would really like to hear someone interview Eliezer intelligently so we could appreciate the full stretch of his thinking. Bob, sadly, chose to interject and disrupt the train of thought and discussion at every turn.

Bob turned what could have been an interesting discussion of his grand religious/spiritual beliefs into the commentary version of nano-mush.

heatfish
I find ants' cooperation mind-blowing. It just goes to show what the power of local incentives and division of labor can produce: something much bigger than any individual ant. The point is not to focus on the simplicity of a single ant; the point is that complexity emerges out of millions of agents interacting under simple rules.
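That emergence point can even be made concrete with a toy simulation. This is purely illustrative; the `ant_trail` helper, its rule set, and all its parameters are invented for the sketch, not a model of real ants:

```python
import random

def ant_trail(n_cells=20, n_ants=50, steps=200, deposit=1.0, evaporation=0.02, seed=0):
    """Each ant follows one local rule: step toward whichever neighboring cell
    holds more pheromone (ties broken randomly), then deposit pheromone where
    it lands. No ant knows about 'trails'; any trail emerges from the field."""
    rng = random.Random(seed)
    pheromone = [0.0] * n_cells          # pheromone level per cell (ring world)
    ants = [rng.randrange(n_cells) for _ in range(n_ants)]
    for _ in range(steps):
        for i, pos in enumerate(ants):
            left, right = (pos - 1) % n_cells, (pos + 1) % n_cells
            if pheromone[left] != pheromone[right]:
                ants[i] = left if pheromone[left] > pheromone[right] else right
            else:
                ants[i] = rng.choice((left, right))
            pheromone[ants[i]] += deposit
        pheromone = [p * (1 - evaporation) for p in pheromone]  # evaporation
    return pheromone

field = ant_trail()
# The positive feedback tends to pile pheromone into a few cells: a "trail"
# that no individual ant planned.
print(max(field), sum(field) / len(field))
```

Whether this resembles real ant behavior is beside the point; it shows simple local rules plus feedback producing structure that no single agent intended.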
  #44  
Old 08-07-2010, 11:34 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: The deepest point ever made

Quote:
Originally Posted by Unit View Post
I find ants' cooperation mind-blowing. It just goes to show what the power of local incentives and division of labor can produce: something much bigger than any individual ant. The point is not to focus on the simplicity of a single ant; the point is that complexity emerges out of millions of agents interacting under simple rules.
And this is what man-made AI accomplishes when you write about ants; isn't it telling?

Last edited by Ocean; 03-06-2011 at 12:07 PM..
  #45  
Old 08-07-2010, 11:37 PM
hamandcheese hamandcheese is offline
 
Join Date: Nov 2008
Location: Nova Scotia
Posts: 48
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

This is a great example of what I was referring to regarding the problems of a normative AI. I agree that the I, Robot notion of AI rising up and overthrowing us is fallaciously anthropocentric: it's a human phenomenon to thirst for power, so we shouldn't expect an AI to do so unless we program it that way.

Yet we will have to give it the ideas of power and oppression, and other important moral concepts, so that it can actually apply them in answering normative questions. Or will we just cleverly design it to consider moral concepts with an ironic distance?

To me this all suggests a type of implicit moral anti-realism, if not nihilism, on Eliezer's part. Saying 'we simply won't program the concepts of power, selfishness, etc. into the AI' implies that those concepts are not necessary concepts, and certainly not transcendent or objective concepts that it could acquire through its own accelerating intelligence and introspection.
__________________
Abstract Minutiae blog
  #46  
Old 08-08-2010, 12:07 AM
Furcas Furcas is offline
 
Join Date: Aug 2009
Posts: 2
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Eliezer has written quite a bit about metaethics:

http://wiki.lesswrong.com/wiki/Metaethics_sequence
  #47  
Old 08-08-2010, 12:45 AM
AemJeff AemJeff is offline
 
Join Date: Feb 2007
Posts: 7,750
Default Related

There's been far too much Eliezer bashing here, I think. Here's Vernor Vinge, who probably knows about as much about the topic of the singularity as anybody:

http://cdn.itconversations.com/ITC.A...2005.09.17.mp3
__________________
-A. E. M. Jeff (Eponym)
Magnets - We know how they work!

Last edited by AemJeff; 08-08-2010 at 12:52 AM..
  #48  
Old 08-08-2010, 02:11 AM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Wow

I gotta say, the number of times Bob invoked in defense of his ideas the argument "What if people had just told Darwin that God created everything, so shut up already?" came awfully close to earning the 40-point Galileo penalty.

I think Bob was a good interlocutor for Eliezer's work. I was not bothered by his tone or whatever in the way some other commenters were. A little sarcasm is a perfectly good thing, both for highlighting weak points and for making the proponent strengthen or clarify his argument. But man, when it came to answering critiques of his own project, I thought he was off-puttingly defensive. Ultimately, he came across as persuasive most of all for the idea that I should not accept his belief system.

To the extent that I understand it, I mean -- I've been exposed to it numerous times over the past few years, and I still cannot say with much confidence or any specificity what it actually is. I am often inclined to wonder if Bob is merely recapitulating R. Daneel Olivaw and R. Giskard Reventlov's Zeroth Law thinking, but that's probably not it.

[Added] I am not saying I would not like it to be true that there is some overarching purpose/direction to all we see of evolution and history. I am saying that I am much closer to Eliezer's view here than I am to Bob's: Seeing a watch in the desert, or a cactus, does easily suggest that there must be something larger, that we can also comprehend, that allows us to talk about how that watch or cactus might have come to be; but the whole mess (ecosystem)? That, it appears, is just as easily explained by "shit happens," plus the idea that our brains impose the idea of The Existence of Something Larger on emergent phenomena, as it does suggest the requirement that there actually be something Larger.
__________________
Brendan

Last edited by bjkeefe; 08-08-2010 at 02:26 AM..
  #49  
Old 08-08-2010, 02:16 AM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Related

Quote:
Originally Posted by AemJeff View Post
There's been far too much Eliezer bashing here, I think.
Yeah, but there always is. I will repeat what I've said before, in his defense: I think, if nothing else, Eliezer is very probably right that something like an AI-related Singularity is not too far off, and so it makes sense to start thinking about this now, because it is virtually certain that many people will come up with ways to implement imperfect aspects of the idea, and we do risk unhappiness from those efforts. I am glad he is doing what he is doing, and I think he is laying useful groundwork.

Quote:
Here's Vernor Vinge, who probably knows about as much about the topic of the singularity as anybody:

http://cdn.itconversations.com/ITC.A...2005.09.17.mp3
Thanks for the link. I will download and listen later.
__________________
Brendan
  #50  
Old 08-08-2010, 07:43 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Bob's Mystery God Analogy

I have never quite understood the objections here to Eliezer's "behavior." He has always just seemed like someone with some very specific ideas, who has expressed them in ways that have gotten people's attention and who attends to the resulting questions as openly and frankly as he can, nothing more, nothing less.

Other than that, I agree with your assessment of Bob's continuing pursuit of his idea of possible purpose behind natural selection. Its "purpose" is clearly to point in the direction of a justification for a natural God, and Bob has all but admitted that is his motivation. Moreover, the idea that he is trying to promote is itself among the vaguest concepts I have ever encountered. Every time I finish listening to him press his case to someone, my head is spinning from trying to understand just what in the hell the thing is he is trying to get the person to say is possible. As far as I can tell, Bob insists we need to see the development of life on Earth, up through the internet, as evidence that there is purpose behind the process we already know to have driven that development, merely because, if there were such a purpose, its role would be analogous to the role natural selection played in eventually producing organisms, whose gestation period (or full life cycle?) Bob finds to be analogous to development on Earth writ large. As far as I can tell, Bob wants people to admit that because he can draw what he believes to be a good analogy between two relationships (one between two things we agree exist, and one between a thing we agree exists and a thing he just posits without describing much at all), the ability to draw that analogy should be seen as evidence that the fourth thing exists. Further, by insisting he is just trying to get admissions that this idea is barely possible ("not utterly laughable," or whatever his standard is), he engages full-bore in the style of pro-God argumentation that people like D'Souza use to get non-believers to admit they aren't perfectly certain and therefore can't deny the possibility of the existence of God. Well, when you get down to it, what couldn't Bob or Dinesh get us all to admit we aren't certain isn't the case, if he put his mind to it?

All that said, it was fair for Bob to be very forward in his defense of his ideas here, because Eliezer apparently said he wanted to 'interrogate' him about them, or at least requested the dialogue. This is in contrast to Bob's meaningoflife.tv dialogue with Dennett, where by all appearances Prof. Dennett really didn't have much idea how attached Bob is to his idea, and submitted to Bob's questioning at Bob's request rather than the other way round. Then afterwards, when, on the strength of a series of rickety analogies built on assumptions and stretched definitions, Bob finally got Dennett to cry uncle and say that one could see this chain of reasoning as "evidence" if one so chose, Bob rushed out to publish what was essentially a press release declaring that he had gotten Dennett to recant his worldview. Talk about appalling behavior. I'm not surprised Bob thought better of reposting the interview and aftermath after talking big about it in the diavlog. It's nothing less than a video and written record of what should be seen as the lowest moment of his career, and he is right to be ashamed of them if that's why he chickened out of reposting the links. I like Bob's manner and humor with just about every guest, and what he's created here is brilliant, but when it comes to this idea he developed out of resentment at perceived slights in the elite academy because of his religious beliefs (all admitted), defensiveness gets the better of him, and he becomes someone I want nothing to do with. Luckily, when not directly confronted with it, I am more than happy to put all that out of mind and enjoy his deadpan interactions with Mickey.

Last edited by MikeDrew; 08-08-2010 at 07:48 AM..
  #51  
Old 08-08-2010, 11:29 AM
thouartgob thouartgob is offline
 
Join Date: Oct 2006
Posts: 765
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Ocean View Post
The addition of the argument about preventing the evil ones from developing the technology first, is a valid one, but it certainly resembles rather closely the kind of argument that one would make to obtain funding when other arguments have failed.
The funding argument was something that I had not considered, but it's definitely something I should have. Well, the quest does have the ring of a science fiction story in either case. Title suggestion: "Arms Race to GOD!"


Quote:
Originally Posted by Ocean View Post

I'm not sure what you mean, but, indeed it would be nice if people would either define themselves more clearly or place all their cards on the table. When it comes to this topic, Bob seems to be hiding one piece, and unfortunately that makes his arguments confusing.
Well, I like to think that I meant to make a vaguely humorous suggestion: that wanting Bob to elucidate his position, either by better analogies or by just saying he believes in a god, may be an unfulfilled desire, since he is seemingly keeping his cards close to his chest for a reason. Either he is still working things out or he wants to keep people's attention.


Quote:
Originally Posted by Ocean View Post
You must be seeing something that I don't see. I don't see him having fun "on the fence".

Whatever it is, it's working for him. Maybe it's just a slowly evolving epiphany ... evolving using the mechanism of Natural Selection in a Materialist world :-)


Quote:
Originally Posted by Ocean View Post
When an argument fails to inspire productive discussion, and keeps being reduced to endless ruminations about definitions and repeated contradictions, it should be revised. A piece is missing or misplaced. I don't think it's that hard to see that. Figuring out how to fix the problem is another story.

Let's say that we trust Bob's ability to fix it, sooner or later.
Somebody Say Amen
  #52  
Old 08-08-2010, 11:37 AM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by thouartgob View Post
The funding argument was something that I had not considered, but it's definitely something I should have. Well, the quest does have the ring of a science fiction story in either case. Title suggestion: "Arms Race to GOD!"

Well, I like to think that I meant to make a vaguely humorous suggestion: that wanting Bob to elucidate his position, either by better analogies or by just saying he believes in a god, may be an unfulfilled desire, since he is seemingly keeping his cards close to his chest for a reason. Either he is still working things out or he wants to keep people's attention.

Whatever it is, it's working for him. Maybe it's just a slowly evolving epiphany ... evolving using the mechanism of Natural Selection in a Materialist world :-)

Somebody Say Amen
Amen!
  #53  
Old 08-08-2010, 01:30 PM
uncle ebeneezer uncle ebeneezer is offline
 
Join Date: Feb 2007
Posts: 3,332
Default Re: Bob's Mystery God Analogy

Excellent post, Mike. In short: Bob's the best, until he gets on his "purpose-driven" rants, and he starts acting all crazy.
  #54  
Old 08-08-2010, 01:43 PM
propagandhi propagandhi is offline
 
Join Date: Jun 2009
Location: Brooklyn, NY
Posts: 45
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I'm not sure what all the fuss is about regarding Bob and Eliezer's conduct during the diavlog. It seemed to me like two intellectuals debating a complex topic; both were frustrated at times, and both found humor in it at times. I think that their candor was pure bloggingheads. I can't say I'm clear enough on all the subjects to make perfect sense of everything, but I can say that Bob's arguments for "purpose" have swayed me a bit. I'm not anywhere near as sympathetic and moderate about religious belief as Bob, and I certainly don't think his "purpose" should be equated with divinity, but it's been enough to cause disagreement between my father and me (he refuses to accept any evidence of directionality, and sounds much like Michael Shermer in his refutation of Nonzero).
  #55  
Old 08-08-2010, 03:59 PM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by propagandhi View Post
I'm not sure what all the fuss is about regarding Bob and Eliezer's conduct during the diavlog. It seemed to me like two intellectuals debating a complex topic; both were frustrated at times, and both found humor in it at times. I think that their candor was pure bloggingheads. I can't say I'm clear enough on all the subjects to make perfect sense of everything, but I can say that Bob's arguments for "purpose" have swayed me a bit. I'm not anywhere near as sympathetic and moderate about religious belief as Bob, and I certainly don't think his "purpose" should be equated with divinity, but it's been enough to cause disagreement between my father and me (he refuses to accept any evidence of directionality, and sounds much like Michael Shermer in his refutation of Nonzero).
That's a well-written piece by Shermer. Thanks for the link, especially as I gather you do not wholly embrace it.
__________________
Brendan
  #56  
Old 08-08-2010, 05:07 PM
ciphergoth ciphergoth is offline
 
Join Date: Aug 2010
Posts: 1
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Bloggingheads would greatly benefit from software which measured how long each speaker spoke, how often each interrupted the other, and how often each yielded to the other in response to an interruption.
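The bookkeeping for such a tool would be simple once you had speaker-labeled time segments for a diavlog. A minimal sketch, where the `diavlog_stats` helper, the segment format, and the sample numbers are all made up for illustration:

```python
from collections import Counter

def diavlog_stats(segments):
    """segments: list of (speaker, start_sec, end_sec) tuples in time order.
    Returns per-speaker talk time, and a count of interruptions, where an
    'interruption' means a new speaker starting before the previous segment ends."""
    talk = Counter()
    interruptions = Counter()
    prev_speaker, prev_end = None, 0.0
    for speaker, start, end in segments:
        talk[speaker] += end - start
        if prev_speaker is not None and speaker != prev_speaker and start < prev_end:
            interruptions[speaker] += 1
        prev_speaker, prev_end = speaker, end
    return dict(talk), dict(interruptions)

segments = [
    ("Bob", 0.0, 30.0),
    ("Eliezer", 28.0, 60.0),   # starts before Bob finishes: an interruption
    ("Bob", 60.0, 90.0),
]
talk, cuts = diavlog_stats(segments)
print(talk)   # seconds spoken per speaker
print(cuts)   # interruption counts per speaker
```

The hard part, of course, would be producing the speaker-labeled segments in the first place, not tallying them; "yielding to an interruption" would also need a fuzzier definition than this sketch uses.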
  #57  
Old 08-08-2010, 05:40 PM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by ciphergoth View Post
Bloggingheads would greatly benefit from software which measured how long each speaker spoke, how often each interrupted the other, and how often each yielded to the other in response to an interruption.
I fail to see how this would be of benefit to anyone, except for fussbudgets who have an overdeveloped -- and immature -- sense of Fairness. And what would you ask for in v2.0? A counter for uhs and likes, with Dire Consequences to follow for all those who crossed the Threshold Of Unacceptability?

Either a conversation works for you or it doesn't. And often, a conversation works for me even when (maybe even because of) one person talks appreciably more. And as for interruptions, it seems to me that the one being interrupted is almost the only one whose opinion matters.
__________________
Brendan

Last edited by bjkeefe; 08-08-2010 at 06:38 PM.. Reason: make a verb agree
  #58  
Old 08-08-2010, 06:05 PM
erudyte42 erudyte42 is offline
 
Join Date: Jun 2010
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Emef View Post
This was a very good discussion but could have been much better. The arrogance of both participants was annoying, but Bob's yelling and interruptions were particularly disconcerting. I would rather not have bloggingheads be a boxing match. While illuminating disagreements is exactly what makes watching interesting, respectful interchanges are much more informative to the viewer than point-scoring. I'd love to see this same discussion again, but with both Bob and Eliezer exploring the other's ideas with curiosity rather than hostility, and acknowledging that there is an audience interested in understanding rather than blood sport.
Ditto that.
  #59  
Old 08-08-2010, 06:34 PM
ledocs ledocs is offline
 
Join Date: Sep 2007
Location: France, Earth
Posts: 1,165
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Bob's use of praeteritio here could have been more subtle,

bloggingheads.tv/diavlogs/30013?in=23:18&out=23:41


but this clip might be useful for demonstrating the rhetorical technique to KGB agents.
__________________
ledocs

Last edited by ledocs; 08-09-2010 at 09:39 AM..
  #60  
Old 08-08-2010, 06:37 PM
erudyte42 erudyte42 is offline
 
Join Date: Jun 2010
Posts: 20
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

I have come across two dialogs about the transhumanism/AI vision, and I find it to be transparently silly (obviously so in a previous discussion with Massimo Pigliucci). I'm not convinced it's science, and I doubt I will listen to any more of it. Nevertheless, given that the discussion took place, I thought Bob struck the right balance by allowing Eliezer to talk about it, but treating it with calm analysis and discerning skepticism.

However, it seemed to me that when the conversation got to Eliezer's criticisms, Bob Wright allowed himself to get emotionally defensive, and I don't personally find this a good recipe for informative debate or a good video. Also, by trying to deny that the argument from 'integrated functionality' is different and stronger for an organism than for the ecology, Bob made his own case seem less reasonable. I personally think that there is enough directionality in the ecology to require some level of explanation, and a reasonable case to be made about a direction/objective for the ecology as a whole, regardless of whether 'purpose' is the right word or not. But I found that all that emotional defensiveness prevented Bob from presenting us with the best scientific case for that proposition.

Kudos to Eliezer; with such a silly vision he still managed to look like a reasonable debater.
  #61  
Old 08-08-2010, 06:40 PM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by ledocs View Post
Bob's use of praeteritio here could have been more subtle,

bloggingheads.tv/diavlogs/30013?in=23:18&out=23:41


but this clip might be useful for demonstrating the rhetorical technique to KGB agents.
Link fix.

And yeah, that was more than a little heavy-handed, wasn't it?
__________________
Brendan
Reply With Quote
  #62  
Old 08-08-2010, 07:01 PM
erudyte42 erudyte42 is offline
 
Join Date: Jun 2010
Posts: 20
Default Re: Wow

BJkeefe:

My reaction was close to yours, and I think the 'God created everything, so shut up, Darwin' comment was not analogous here.

I find BW pretty good most of the time, but I am struck by how much variation in 'reasonableness of argument' there is.
  #63  
Old 08-08-2010, 07:10 PM
ohreally ohreally is offline
 
Join Date: Jan 2010
Posts: 666
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by propagandhi View Post
I can say that Bob's arguments for "purpose" have swayed me a bit.
I wouldn't mind being swayed if I understood what he means by purpose. Shermer seems to interpret it as Wright saying that social progress via nonzero-sumness was necessary and not contingent. In other words, purpose means that the Internet had to happen; slavery had to end; democracy had to spread; etc. That it was not just accidental.

Philosophy has something interesting to say about such questions. And it is this. On the surface, whether, say, the concept of democracy had to emerge is a question that might be hard to answer but certainly seems compelling enough and easy to grasp. After all, couldn't things have happened otherwise? Wright says no. Shermer says yes. Who's right, who's wrong? Good question.

Well, one of the few contributions of philosophy is to say that it might be very wise to stay away from such questions. (Which is why I hope Shermer's interpretation of Wright's definition of purpose is wrong.)

The question is metaphysical. Aristotle, Kant, Quine, and Kripke all lost sleep over it. Why is it tough? Several reasons: first, it's impossible to imagine that we lived in a world with no concept of democracy, simply because if we do not have the concept of democracy then the concept of having "no concept of democracy" is meaningless, since the word democracy cannot be defined by us. So we need to build possible worlds (a slippery slope -- Aristotle believed that all past truths were necessary), where initial conditions and coin flips may vary. But if so, to prove or disprove contingency requires probabilities, that is, a quantitative analysis. To establish or rule out contingency requires empirical work. But since the concept of democracy requires consciousness, there's not a chance of ever collecting the necessary empirical evidence. At any rate, simply to assert necessity on the basis of one ultra-simplistic game-theoretical principle is wrong. Perhaps best to stay away from metaphysics altogether.

But again maybe Wright will tell us what he meant by purpose.

Last edited by ohreally; 08-08-2010 at 09:12 PM..
  #64  
Old 08-08-2010, 08:31 PM
AemJeff AemJeff is offline
 
Join Date: Feb 2007
Posts: 7,750
Default Re: Related

Quote:
Originally Posted by bjkeefe View Post
Yeah, but there always is. I will repeat what I've said before, in his defense: I think, if nothing else, Eliezer is very probably right that something like an AI-related Singularity is not too far off, and so it makes sense to start thinking about this now, because it is virtually certain that many people will come up with ways to implement imperfect aspects of the idea, and we do risk unhappiness from those efforts. I am glad he is doing what he is doing, and I think he is laying useful groundwork.



Thanks for the link. I will download and listen later.
There's not much that's new there (it dates from 2005) but it is a pretty clear presentation, and I get the feeling that, for a lot of people, Eliezer is close to being the only advocate of the idea they've had the opportunity to hear in detail.
__________________
-A. E. M. Jeff (Eponym)
Magnets - We know how they work!
  #65  
Old 08-08-2010, 10:07 PM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Bob's Mystery God Analogy

Quote:
Bob's the best, until he gets on his "purpose-driven" rants, and he starts acting all crazy.
Nice summation.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #66  
Old 08-09-2010, 12:35 AM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Related

Quote:
Originally Posted by AemJeff View Post
There's not much that's new there (it dates from 2005) but it is a pretty clear presentation, and I get the feeling that, for a lot of people, Eliezer is close to being the only advocate of the idea they've had the opportunity to hear in detail.
Still haven't gotten around to listening, sorry to say.

Meantime, what do you make of this, perhaps as it pertains to overall progress in AI? Robert Fortner: "Rest in Peas: The Unrecognized Death of Speech Recognition" (via).
__________________
Brendan
  #67  
Old 08-09-2010, 01:43 AM
T.G.G.P T.G.G.P is offline
 
Join Date: Nov 2006
Posts: 278
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by bjkeefe View Post
I fail to see how this would be of benefit to anyone, except for fussbudgets who have an overdeveloped -- and immature -- sense of Fairness. And what would you ask for in v2.0? A counter for uhs and likes, with Dire Consequences to follow for all those who crossed the Threshold Of Unacceptability?

Either a conversation works for you or it doesn't. And often, a conversation works for me even when (maybe even because of) one person talks appreciably more. And as for interruptions, it seems to me that the one being interrupted is almost the only one whose opinion matters.
Maybe it wouldn't make that much of a difference but more stats is always cool.
  #68  
Old 08-09-2010, 01:44 AM
MikeDrew MikeDrew is offline
 
Join Date: May 2008
Posts: 110
Default Re: Bob's Mystery God Analogy

Thanks. That's exactly right.
  #69  
Old 08-09-2010, 01:52 AM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Related

Quote:

Meantime, what do you make of this, perhaps as it pertains to overall progress in AI? Robert Fortner: "Rest in Peas: The Unrecognized Death of Speech Recognition"
A lot of people have argued that machine speech is, in principle, impossible. To put it in other terms, if a machine speaks or truly understands language, it's no longer a machine; it's a person.

To engineer a language-competent machine you'd have to engineer a person.

No machine, by definition, will ever pass the Turing Test. If it passes, it ain't a machine.

I wouldn't rule out building non-human persons; I just think we're totally clueless at this point about how it would happen.

Certainly Kurzweil, guru of Eliezer, was wrong, or at least on the wrong track:

Quote:
Kurzweil looked at the trajectory he had helped carve and prophesied that machines would inevitably become intelligent and then spiritual.
For real AI, my best guess is that you'd have to build a brain that thinks it has a body, interests, emotions, family, etc. From scratch. The Brain might have to live in a virtual world. The ethical issues would be enormous. I think I would come out on the side of the Luddites, but of course they would not prevail.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
  #70  
Old 08-09-2010, 02:08 AM
eliharrigan eliharrigan is offline
 
Join Date: Aug 2010
Posts: 1
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Keep the fight alive Bob. Just a little less coffee next time...
  #71  
Old 08-09-2010, 02:17 AM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sarah
Posts: 21,798
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by T.G.G.P View Post
Maybe it wouldn't make that much of a difference but more stats is always cool.
Hee hee! You got me there.
__________________
Brendan
  #72  
Old 08-09-2010, 02:55 AM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sa家h
Posts: 21,798
Default Re: Related

Quote:
Originally Posted by Wonderment View Post
A lot of people have argued that machine speech is, in principle, impossible. To put it in other terms, if a machine speaks or truly understands language, it's no longer a machine; it's a person.

To engineer a language-competent machine you'd have to engineer a person.
I don't believe that. I'm interested that the speech recognition problem has so far resisted solution, and I think it suggests that some parts of the general AI problem are likely not to be as easy to solve as we might think. However, I don't see, in principle, why one brilliant insight couldn't come from out of the blue and make it a snap for a machine to recognize speech as well as a human. We do have rather amazing brains, but they are not infinitely powerful machines.

Quote:
Originally Posted by Wonderment View Post
No machine, by definition, will ever pass the Turing Test. If it passes, it ain't a machine.
TT arguments rarely go anywhere but in circles, but for the record, your statements are simply not true. The test is merely a way to define whether a machine can be said to have intelligence. (And it's actually even less than that.)

I'd add that, for a long time now, we have been able to build machines that will fool many people into believing they were communicating with other people, if you don't require that the communication be by speech. Speech recognition is not a necessary condition as Turing defined the test and as it is generally understood to this day -- the only requirement is communication in a natural language.

Quote:
Originally Posted by Wonderment View Post
I wouldn't rule out building non-human persons; I just think we're totally clueless at this point about how it would happen.
That may well be, but in the meantime, we get better at all sorts of pieces. For example, visual recognition has progressed much more than I would have once thought, and I used to do something related for a living. And to tie this back to why I think what Eliezer is doing is useful: it's the uneven progress along the many paths on the quest for this (perhaps mythical) "non-human person" that worries me more. To put it melodramatically, what if it turns out that once a machine is built that can reprogram itself, it decides it doesn't care whether it can understand us?

Quote:
Originally Posted by Wonderment View Post
Certainly Kurzweil, guru of Eliezer, was wrong, or at least on the wrong track:

Quote:
Kurzweil looked at the trajectory he had helped carve and prophesied that machines would inevitably become intelligent and then spiritual.
That's a pretty sketchy example of quoting, Wha ... Wonderment. Here's a bit more of it:

Quote:
Speech recognition pioneer Ray Kurzweil piloted computing a long way down the path toward artificial intelligence. His software programs first recognized printed characters, then images and finally spoken words. Quite reasonably, Kurzweil looked at the trajectory he had helped carve and prophesied that machines would inevitably become intelligent and then spiritual.
Note that even the author of the post, who is arguing that speech recognition is stalled, cast what Kurzweil predicted as a reasonable extrapolation. Also note the links -- if you click them, you'll note that the word choices are taken from Kurzweil's book titles. I hate to have to be the one to break this to you, but sometimes, book titles are a bit more lurid than the book's content, to attract attention. Shocking, I know.

Quote:
Originally Posted by Wonderment View Post
For real AI, my best guess is that you'd have to build a brain that thinks it has a body, interests, emotions, family, etc. From scratch. The Brain might have to live in a virtual world. The ethical issues would be enormous. I think I would come out on the side of the Luddites, but of course they would not prevail.
I don't agree with this at all, and I don't think many people who work in this field do, either. But in any case, as I said above, I don't worry so much about getting to some sort of Compleat AI as I do about misapplication of partial accomplishments along that path.
__________________
Brendan

Last edited by bjkeefe; 08-09-2010 at 03:00 AM..
Reply With Quote
  #73  
Old 08-09-2010, 03:47 AM
Wonderment Wonderment is offline
 
Join Date: Jul 2007
Location: Southern California
Posts: 5,694
Default Re: Related

Quote:
However, I don't see, in principle, why one brilliant insight couldn't come from out of the blue and make it a snap for a machine to recognize speech as well as a human. We do have rather amazing brains, but they are not infinitely powerful machines.
Well, I suppose anything could come out of the blue, but there's nothing I'm aware of that suggests we are even remotely on the right track. There are plenty of people who haven't given up, nor should they, but so far nothing earthshaking is on the horizon.

I agree that brains are not infinitely powerful machines. Certainly I can imagine brains bigger and smarter than human brains. I'm just skeptical that we can create them any time soon. Yes, in 100 years or 500 years (an obvious nanosecond on the evolutionary scale) anything could happen. Just ask the little boy in the Steven Spielberg movie "AI." But that's one of the problems with transhumanism -- a blurring of the science fiction line, a lot of assertions and predictions without much substance.

Quote:
And to tie this back to why I think what Eliezer is doing is useful: it's the uneven progress along the many paths on the quest for this (perhaps mythical) "non-human person" that worries me more. To put it melodramatically, what if it turns out that once a machine is built that can reprogram itself, it decides it doesn't care whether it can understand us?
I don't really get how it could "decide" anything, but I won't get into a Bob debate on words like "purpose."

I do understand how we could create a doomsday machine, so I'll stipulate that Eliezer's prophetic warnings are worthy of consideration. I don't see, however, how he is doing more than this guy.

I don't mean that entirely facetiously. Eliezer seems to be playing both Pollyanna and Jeremiah. On the one hand, he is deliriously optimistic about attaining immortality (Kurzweil is even more blatantly obsessed); on the other, he sounds like Chicken Little.

I would think that you of all people would be skeptical of narratives that sound suspiciously like religious fables.
__________________
Seek Peace and Pursue it
בקש שלום ורדפהו
Busca la paz y síguela
--Psalm 34:15
Reply With Quote
  #74  
Old 08-09-2010, 06:39 AM
Florian Florian is offline
 
Join Date: Mar 2009
Posts: 2,118
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by ohreally View Post
But again maybe Wright will tell us what he meant by purpose.
That is the crux! Unfortunately, neither speaker (to the extent that I could penetrate their meandering arguments) seems to have a firm grasp on the notion of purpose, which can bear both a subjective and an objective sense. We human beings obviously have (subjective) purposes. We set purposes (goals, ends) for ourselves, both individually and collectively, and we devise the appropriate means to attain them. This banal truth has been known ever since Socrates contrasted scientific (causal) explanation with explanation in terms of purposes or goals (tele), which we cannot help using in conducting our daily lives as members of political communities. The idea of objective purposes in nature or in history is, to say the least, much more problematic. Such purposes, if they exist, presuppose a godlike point of view on the universe or nature (the in-itself) that may be inaccessible to human reason or only accessible in retrospect, as in your example of the "inevitability" of democracy. If history has a purpose or a direction (towards democracy or whatever...), we can only know this fact after it has occurred. Or as Hegel said, the owl of Minerva only takes its flight at dusk....

Modern physics, beginning with Galileo and Descartes, ruled out purposes in the explanation of natural phenomena, i.e. "final causes" in the language of Aristotle. Darwin thought that he could do the same in the explanation of evolution. Darwin was probably wrong, but until a super Darwin comes along, looking for purposes in nature seems to me a fool's errand. History offers a much more fertile ground for the discovery of purpose(s).
Reply With Quote
  #75  
Old 08-09-2010, 12:31 PM
Gilbert Garza Gilbert Garza is offline
 
Join Date: Aug 2010
Location: El Paso, Texas
Posts: 7
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

First time commenting.
I believe Bob said in TEOG that he would have to write another book to give himself a chance to take direction back to purpose or take direction forward to purpose. We'll have to wait and see if and how that goes. His diavlog with Eliezer may have been a testing ground and a search for ideas.
Reply With Quote
  #76  
Old 08-09-2010, 12:42 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Gilbert Garza View Post
First time commenting.
I believe Bob said in TEOG that he would have to write another book to give himself a chance to take direction back to purpose or take direction forward to purpose. We'll have to wait and see if and how that goes. His diavlog with Eliezer may have been a testing ground and a search for ideas.
Welcome to BhTV's outspoken community!
Reply With Quote
  #77  
Old 08-09-2010, 01:48 PM
Flaw Flaw is offline
 
Join Date: Feb 2008
Posts: 84
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by hamandcheese View Post
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if they do, they may be universally false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself be a vestige of our incarnate stupidity.

How can AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?
Bob asks this question.
http://bloggingheads.tv/diavlogs/300...5:01&out=18:17

Eliezer basically says that an AI could extract the moral schema from the population of humans (brain scan or something clever...); that it could gather what we value and, with long-term, more powerful thinking, etc., create a world that facilitated our ideals.

Nihilism is avoided because the AI is rooted in the schema found in "humanity".
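To make the aggregation idea concrete, here's a toy sketch in Python. Everything in it is made up for illustration -- the value categories, the weights, and especially the assumption that "gathering what we value" reduces to a simple average over the population:

```python
# Toy sketch of "extract values from many humans, then aggregate."
# The value names and weights are invented; real value extraction
# would obviously be nothing this simple.

def aggregate_values(scans):
    """Average each person's (hypothetical) value weights across the population."""
    totals = {}
    for scan in scans:
        for value, weight in scan.items():
            totals[value] = totals.get(value, 0.0) + weight
    return {value: total / len(scans) for value, total in totals.items()}

population = [
    {"fairness": 1.0, "liberty": 0.5},
    {"fairness": 0.5, "liberty": 1.0},
]
print(aggregate_values(population))  # {'fairness': 0.75, 'liberty': 0.75}
```

Whether a straight average is even the right aggregation rule is, of course, exactly the kind of thing that's up for debate.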
Reply With Quote
  #78  
Old 08-09-2010, 02:22 PM
Ocean Ocean is offline
 
Join Date: Jun 2008
Location: US Northeast
Posts: 6,784
Default Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)

Quote:
Originally Posted by Flaw View Post
Eliezer basically says that an AI could extract the moral schema from the population of humans (brain scan or something clever...); that it could gather what we value and, with long-term, more powerful thinking, etc., create a world that facilitated our ideals.

Nihilism is avoided because the AI is rooted in the schema found in "humanity".
Without even getting into the issue of how a brain scan would detect moral values, the above argument assumes that moral values emerge as some kind of average of opinion. Do we know that's the case?
Reply With Quote
  #79  
Old 08-09-2010, 03:17 PM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sa®ah
Posts: 21,798
Default Re: Related

Quote:
Originally Posted by Wonderment View Post
Well, I suppose anything could come out of the blue, but there's nothing I'm aware of that suggests we are even remotely on the right track.
That seems fair, except for the minor quibble that extensively exploring some tracks and showing they lack promise helps narrow the rest of the possible choices. A lot of times, progress is like this -- long periods of time spent chipping away and going down blind alleys, and then all of a sudden, boom. Think, for example, what searching the Internet was like before Google, and how much and how fast things changed after they were up and running. I don't know if you remember back in the mid-90s, but searching absolutely sucked, everyone knew it and complained about it all the time, and no one knew what to do about it.

Now, evidently, this (providing satisfactory-to-humans search results nearly all the time, based on human-entered phrases) is an easier problem; on the other hand, once solved, any problem seems easy in retrospect.

Quote:
[...] I agree that brains are not infinitely powerful machines. Certainly I can imagine brains bigger and smarter than human brains. I'm just skeptical that we can create them any time soon. Yes, in 100 years or 500 years (an obvious nanosecond on the evolutionary scale) anything could happen. Just ask the little boy in the Steven Spielberg movie "AI." But that's one of the problems with transhumanism -- a blurring of the science fiction line, a lot of assertions and predictions without much substance.
I take your point, sort of. I'd say two things, though.

First, if we suppose that your "100 years" figure is right, we can probably expect preliminary findings over the next few decades. At risk of beating this drum to death, this is primarily what concerns me -- incompletely understood ideas, especially as pertains to the ramifications of willy-nilly application and attempts at monetization.

Second, I think it is unfair of you to bandy about the word transhumanism as you do. It seems evident that you are trying to belittle Eliezer by associating him with a bunch of lay enthusiasts.

Also, I reject your pejorative use of "science fiction" -- look at the whole history of SF and see how many things have been correctly anticipated. And I am not just talking about predictions of specific technologies; I'm talking about useful contemplations of societal effects under the assumption of the existence of certain technologies. I suspect you do not dismiss other genres of literature so sweepingly, precisely because the novelists and the poets often have a lot of worthwhile things to say about these larger issues.

Quote:
I don't really get how it could "decide" anything, but I won't get into a Bob debate on words like "purpose."
Sure, it's easy to over-anthropomorphize, and it's hard to speak colloquially/extemporaneously without suggesting that you're doing that. But, come on, you know what I mean when I say a machine "decides." If you want to be pedantic, I mean "at some if-then-else point in a decision tree specified by stored algorithms."
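Or, to spell out that pedantry in a few lines of Python -- with the condition, threshold, and "actions" invented purely for illustration:

```python
# What "the machine decides" cashes out to, pedantically: a branch
# in a stored algorithm. The goal, threshold, and action strings
# here are all hypothetical.

def decide(goal_progress, threshold=0.5):
    """Return the machine's next action at one if-then-else point."""
    if goal_progress >= threshold:
        return "keep current strategy"
    else:
        return "reallocate effort elsewhere"

print(decide(0.9))  # keep current strategy
print(decide(0.1))  # reallocate effort elsewhere
```

No homunculus required -- just stored state and a conditional.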

Quote:
I do understand how we could create a doomsday machine, so I'll stipulate that Eliezer's prophetic warnings are worthy of consideration. I don't see, however, how he is doing more than this guy.
So far as I can tell, he's trying his best to think rigorously about these issues. He's not just painting a belief on a sign and calling it good.

Quote:
I don't mean that entirely facetiously. Eliezer seems to be playing both Pollyanna and Jeremiah. On the one hand, he is deliriously optimistic about attaining immortality (Kurzweil is even more blatantly obsessed); on the other, he sounds like Chicken Little.
Seems to me you're being hyperbolic about someone who is thinking clearly enough to recognize that there are potential upsides and downsides to an anticipated new set of technologies.

And again, you seem to be indulging unwarrantedly in guilt-by-association tactics. So far as I know, Eliezer does not consider himself a disciple of Kurzweil to the extent that he's proselytizing the latter's every last thought, so why bring him up?

Quote:
I would think that you of all people would be skeptical of narratives that sound suspiciously like religious fables.
Ah. Now you're trying to paint me into the same corner you've been trying to paint Eliezer into, using the same cheap brush.

Look, W, just because some people, somewhere, won't shut up about how wonderful life is going to be when they upload themselves into the Cloud or whatever doesn't mean that I either subscribe to their enthusiasm or their predicted timescales. I don't care about such people -- they're about as relevant and harmful as those who sat around a generation or two ago, reading comic books and dreaming of flying cars. But even if every single one of them were to go quiet tomorrow, there will remain a whole sheaf of issues and potential issues in the field we label as AI, and I don't think it's anything but rational to spend time thinking about them. People, especially people with money and power, are interested in better, faster, smarter machines; they always have been, and they always will be. And just because things aren't proceeding as smoothly as early AI researchers predicted, it doesn't mean we'll never have to deal with any aspect of it.

[Added] To see where I'm coming from on this, consider your own views on a different topic. You believe in, and advocate for, something that most people feel is either unrealistic or very far off in the future: complete abolition of nuclear weapons. I would not say your views (hopes) are religious. I would say you and others who share your views have identified a worthy goal, and that even though it may seem unattainable, it nonetheless serves as encouragement to get smart, energetic people to do work -- like developing better verification systems, say, or sacrificing large chunks of their lives to the brutal incrementalism of diplomacy, to name another example -- that leads down that path.
__________________
Brendan

Last edited by bjkeefe; 08-09-2010 at 03:49 PM..
Reply With Quote
  #80  
Old 08-09-2010, 03:35 PM
bjkeefe bjkeefe is offline
 
Join Date: Jan 2007
Location: Not Real America, according to St. Sa®ah
Posts: 21,798
Default Re: Related

Quote:
Originally Posted by bjkeefe View Post
[...]

Meantime, what do you make of this, perhaps as it pertains to overall progress in AI? Robert Fortner: "Rest in Peas: The Unrecognized Death of Speech Recognition" (via).
On a related note, there's an interesting op-ed by Jaron Lanier* in today's NYT. It reflects what I have been struggling to make clear in my own mind: for example, that speech recognition is not necessarily a necessary condition for achieving AI, and that we need not achieve a Compleat AI before we'll start having to deal with consequences of work done in that pursuit.

Here's how it begins.

Quote:
The First Church of Robotics

THE news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools. But such conclusions aren't just changing how we think about computers -- they are reshaping the basic assumptions of our lives in misguided and ultimately damaging ways.

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn't be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all.
I should make clear that I don't agree with every last one of the views Jaron expresses in the piece. It is, however, a worthwhile read.

==========

* [Added] Who, I forgot to note earlier, is a one-time B'head.
__________________
Brendan

Last edited by bjkeefe; 08-09-2010 at 10:33 PM..
Reply With Quote
 

