Re: Science Saturday: Purposes and Futures (Robert Wright & Eliezer Yudkowsky)
Has Eliezer ever considered that the AI might run its computations and come back in support of moral nihilism? Moral facts may not exist, and if moral claims purport to state such facts, those claims may all be false. How can we trust artificial intelligence with our normative ends when normativity itself may be strictly unintelligible? I'm increasingly of the persuasion that morality will itself prove a vestige of our incarnate stupidity.
How can an AI transcend cognitive bias and irrational heuristic thinking and still have moral values, when both of those seem to be the functional basis of ethics?