Originally Posted by Bloggin' Noggin
Tastes are "brute" desires. I like vanilla ice cream and I don't like chocolate ice cream. If you ask why this is so, I may be able to provide a reason -- but this reason will be a "reason" only in a causal sense.
But to value something -- say justice or kindness (take an example of something you actually value) -- is NOT just to have this kind of brute taste for justice or kindness. It is rather to regard the justice or kindness as a good thing, whose goodness justifies one's desire for it (or if one doesn't currently desire it, then justifies an attempt to acquire the desire for it). Suppose you know that your taste for vanilla will change tomorrow to a taste for chocolate. What is the rational thing to do?--lay in a store of chocolate ice cream for tomorrow. But if I'm told that my desire to be kind will change tomorrow into a desire to be wantonly cruel, I would not today regard it as reasonable to lay in a store of whips or put myself in a position of power so as to be as cruel as possible to as many as possible tomorrow.
I think that you are selling the expected-utility-maximizing framework short. It can capture this distinction.
Your taste for vanilla ice cream is not a "brute" taste in the sense of having no justification. Rather, you value vanilla ice cream because it provides a certain kind of pleasure. What you're really interested in is getting that pleasure. Right now, vanilla gives you that pleasure, so you value vanilla. But, in fact, your actions are guided by a desire to keep that particular kind of pleasure rolling in. Now, in your hypothetical, you know that you will tomorrow prefer chocolate over vanilla. You then infer that the course of action that will keep the pleasure-train running is to sell off all but a day's supply of vanilla, and to stock up on chocolate.
In contrast, you value justice in and of itself. Moreover, you currently value future justice. Yes, the future-you won't value justice in his present (which is your future). Nonetheless, the present-you does want there to be justice in the future-you's present. This is why present-you does not want to accommodate future-you's disregard for justice.
Now, consider that pleasure that vanilla currently gives you, but which chocolate will give you tomorrow. Do you value that pleasure in and of itself? Maybe. It's certainly closer to being intrinsically valuable than vanilla itself is. But I want to point out that, even if you value that pleasure in and of itself, it might be only current or near-term pleasure that you value in that way. You might currently place no significant value on far-future pleasure, even if you currently place value on far-future justice.
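To make that asymmetry concrete, here is a toy sketch of a utility function that discounts future pleasure steeply while counting future justice at full weight. The function, the discount rates, and the numbers are my illustrative assumptions, not anything from the discussion above:

```python
# Toy utility function over a stream of (pleasure, justice) amounts,
# one pair per period. Future pleasure is discounted steeply;
# future justice is not discounted at all. All numbers are illustrative.

def utility(stream, pleasure_discount=0.5, justice_discount=1.0):
    """stream: list of (pleasure, justice) pairs, one per period."""
    total = 0.0
    for t, (pleasure, justice) in enumerate(stream):
        total += (pleasure_discount ** t) * pleasure
        total += (justice_discount ** t) * justice
    return total

# Same total amounts of pleasure and justice, but deferring the
# pleasure two periods cuts its contribution from 10 to 2.5,
# while the justice two periods out still counts in full:
print(utility([(10, 0), (0, 0), (0, 5)]))  # 15.0
print(utility([(0, 0), (0, 0), (10, 5)]))  # 7.5
```

So an agent with this utility function acts to secure pleasure now and justice always, which is roughly the pattern of valuation being described.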
All of these subtleties can be expressed with a suitable utility function. One might object at this point that the expected-utility-maximizing framework is too flexible. If it can accommodate all of these possibilities, then does it have any predictive content at all?
No, it doesn't really, not in and of itself. It's best thought of as a language that is so expressive that it can describe just about any conceivable system of values, or at least any system of values that could be acted upon in a coherent and consistent way.
This is why Kranton says that they can embed their model within the traditional expected-utility-maximizing framework. Their contribution is to turn our attention to utility functions that are better approximations to ones that humans in fact have — namely, utility functions that contain terms for social disapprobation.
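For a sense of what such a term could look like, here is a minimal sketch of my own devising (it is not Akerlof and Kranton's actual specification): the standard material payoff, minus a penalty that grows with how far an action deviates from the norms of one's group.

```python
# Toy disapprobation-augmented utility: material payoff minus a cost
# proportional to how far the action deviates from the group norm.
# The functional form and weights are illustrative assumptions.

def identity_utility(payoff, action, norm, disapprobation_weight=2.0):
    """Utility = material payoff - social-disapprobation cost,
    modeled here as distance from the group's norm."""
    return payoff - disapprobation_weight * abs(action - norm)

# A norm-violating action with a higher material payoff can still be
# dispreferred once disapprobation is priced in:
print(identity_utility(payoff=12, action=3, norm=1))  # 8.0
print(identity_utility(payoff=10, action=1, norm=1))  # 10.0
```

The point of the sketch is just that the disapprobation term is one more argument of an ordinary utility function, which is what makes the embedding in the standard framework straightforward.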