
Efficient Compromising

by Tilman Börgers and Peter Postl

September 2004

Tilman Börgers’ research was financially supported by the ESRC through the “Centre for Economic Learning and Social Evolution” (ELSE) at University College London.

Department of Economics and ELSE, University College London, Gower Street, London WC1E 6BT, United Kingdom, [email protected].

Department of Economics, University of Birmingham, Birmingham B15 2TT, United Kingdom, [email protected].


Abstract

Two agents have to choose one of three alternatives. Their ordinal rankings of these alternatives are commonly known among them. The rankings are diametrically opposed to each other. Ex ante efficiency requires that they reach a compromise, that is choose the alternative which they both rank second, if and only if the sum of their von Neumann Morgenstern utilities from this alternative exceeds the sum of utilities when either agent’s most preferred alternative is chosen. We assume that the von Neumann Morgenstern utilities are independent and privately observed random variables, and ask whether there are incentive compatible mechanisms which elicit utilities and implement efficient decisions. Our first main result is that no such mechanisms exist if the distribution of agents’ types has a density with full support. Our second main result provides a second-best decision rule for the case that types are uniformly distributed. We find that the compromise is less frequently chosen than ex-ante efficiency requires.


1. Introduction

You and your partner disagree about which restaurant to go to. You prefer the Italian restaurant over the English restaurant, and the English restaurant over the Chinese restaurant. But your partner has exactly the opposite preferences. Should you compromise by going to the English restaurant, or should you go to a restaurant that one of you likes best? The answer to this question presumably depends on how strongly each partner prefers his favorite restaurant over the compromise, and how strongly he prefers the compromise over the bottom ranked alternative. Is there a way of finding out the partners’ strengths of preference, or will they, for example, necessarily overstate the importance of seeing their first choice implemented? This is the question which this paper addresses.

The answer to our question depends, of course, on what we mean by “strength of preference”. One interpretation could be that the strength of preference is equal to the amount of money that an agent is willing to pay in order to obtain one outcome rather than another. If this were what we had in mind, then one could try to elicit the strength of the partners’ preferences by introducing a mechanism that obliges any partner whose favorite restaurant is chosen to pay compensation to the other.

Here, we want to abstract from such side payments as they seem inappropriate in many situations. Spouses, for example, rarely pay money to each other to resolve conflicts. When initially conceiving of this paper, we had another situation in mind in which money payments are typically not made: voting. Optimal voting rules, if there are more than two candidates, need to elicit, in some sense, the “strength of preference” for candidates, yet voters are typically not asked to offer payments in conjunction with their votes. The problem that we study here is a simplified version of the voting problem.

If side payments are ruled out, what do we mean by “strength of preferences”, and how can we elicit them? We mean in this paper by strength of preference the von Neumann Morgenstern utility of different alternatives. If we evaluate different mechanisms from an ex ante perspective (Holmström and Myerson (1983)), then von Neumann Morgenstern utilities have to be taken into account when resolving conflicts. How can we elicit von Neumann Morgenstern utilities truthfully? By exposing agents to risk. Agents’ choices among lotteries indicate their von Neumann Morgenstern utilities. If agents play a game with incomplete information, then they are almost always automatically exposed to risk. Their choices can then reveal their utilities.

We develop this theme in a simple stylized example with two agents and three alternatives. We assume that it is commonly known that the agents’ rankings of the alternatives are diametrically opposed. Their von Neumann Morgenstern utilities for the alternatives are, however, not known.

To implement efficient decisions, these utilities need to be elicited, as it is optimal to implement the compromise if and only if the sum of the agents’ utilities of the compromise is larger than the sum of their utilities when either agent’s most preferred alternative is chosen.

Our first main analytical result is that this decision rule, to which we refer as first-best, is not incentive compatible, and can therefore not be implemented, if the distribution of von Neumann Morgenstern utilities has a density with full support. We complement this observation with a study of second-best decision rules. This study is restricted to the case that agents’ types are uniformly distributed, or follow some related distributions. For these special cases we find that the second-best decision rule picks the compromise less frequently than ex-ante efficiency would require.

One motivation for our paper is that mechanisms for efficient compromising are potentially relevant to many areas of conflict, such as labor relations or international negotiations. A second motivation was already mentioned above: we are interested in the application of the theory of Bayesian mechanism design to voting. The current study is a first and limited step in that direction. Traditionally, the literature on voting has either studied strategic behavior under specific voting rules, or the design of voting rules using solution concepts that rely on weak informational assumptions, such as dominant strategies (Gibbard (1973), Satterthwaite (1975), Dutta, Peters and Sen (2004)), or undominated strategies (Börgers (1991)). Our purpose here is to explore the theory of voting with stronger informational assumptions, which are, however, frequently made in other areas of incentive theory. A third motivation for this paper is that it is a case study in Bayesian mechanism design without transferrable utility. Much of the literature on Bayesian mechanism design has relied on the assumption of transferrable utility. It seems worthwhile to explore what happens if this assumption is relaxed.

It turns out that the setting that we study, although formally without transferrable utility, is closely related to models of mechanism design for public goods with transferrable utility as studied by D’Aspremont and Gérard-Varet (1979), Güth and Hellwig (1986), Rob (1989), Mailath and Postlewaite (1990), Hellwig (2003), and Norman (forthcoming). These papers all consider settings in which there are two goods, a public good, and “money.” Agents’ preferences are assumed to be additive in the quantity of the public good that is provided and “money”. In our setting there is no “money”. However, for each agent the probability with which their most preferred alternative is chosen serves in some sense as “money”. The public good is the probability with which the compromise is implemented. Agents “pay” for an increased probability of the compromise by giving up probability of their most preferred alternative. Agents’ preferences are additive in the “real good” and “money” because they are von Neumann Morgenstern preferences over lotteries, which are additive in probabilities.

The details of the analogy between our work and the literature on mechanism design for public goods will be explained later. Two points deserve emphasis. Firstly, an important difference between our work and the established public goods literature is that agents, in our model, face a budget constraint, which is absent from traditional models. The budget constraint arises from boundaries on the amount of probability which agents can surrender: for instance, it cannot be larger than one.

The second difference is that our model does not feature individual rationality constraints. Most, though not all, of the previous literature on public goods has postulated individual rationality constraints (see the discussion in Hellwig (2003)). Although in our setting there is no “outside option” which would guarantee agents a minimum utility, a lower boundary for agents’ expected utility nevertheless easily follows from the facts that there is only a finite number of allocation decisions, and that there is an upper boundary for the “payments” which agents can make. Thus the budget constraint has a similar effect as an individual rationality constraint.

In the light of the above discussions, it becomes intuitively plausible that it is not possible to implement the first best in our setting. Analogous results have been obtained for the public goods setting by Rob (1989), Mailath and Postlewaite (1990), and Hellwig (2003). The analysis of the second best in our setting is more involved than in the established public-goods literature because of the difficulty of taking the implicit budget constraint into account. For this reason our analysis of second-best rules is restricted to the case of uniformly distributed types.

This paper is organized as follows. In Section 2 we introduce our model.

Section 3 contains a characterization of incentive compatibility and a detailed discussion of the analogy between our setting and the public goods problem. In Section 4 we derive further characterizations of incentive compatible decision rules, which form the basis for our later results. In Section 5 we consider welfare properties of decision rules, and define formally first-best and second-best decision rules. Section 6 proves the impossibility of implementing first-best decision rules. Section 7 deals with second-best rules in the case of general type distributions, but considers a relaxed optimization problem in which some constraints are disregarded. Section 8 then brings the omitted incentive constraints back into play, but only for the uniform distribution case, and some other special distributions. Section 9 concludes.

2. The Model

There are two agents i = 1,2 who must collectively choose one alternative from the set {A, B, C}. Agent 1 prefers A over B, and B over C. Agent 2 prefers C over B, and B over A. These preferences are common knowledge among the two agents.

Each agent i has a von Neumann Morgenstern utility function ui : {A, B, C} → ℝ. We normalize utilities so that u1(A) = u2(C) = 1 and u1(C) = u2(A) = 0. These features of the von Neumann Morgenstern utility functions are common knowledge among the two agents.1

1The normalization of agents’ utilities that we have introduced in this paragraph is not entirely innocuous. It will be discussed towards the end of this section, where we also sketch an alternative model without this normalization. We will then argue that the analysis of the alternative model is equivalent to the analysis of the model with normalization.


For i = 1,2 we write ti for ui(B). We refer to ti as “player i’s type”. We assume that ti is a random variable which is only observed by agent i. The two players’ types are stochastically independent, and they are identically distributed with cumulative distribution function G. We assume that G has support [0,1], that it has a density g, and that g(t) > 0 for all t ∈ [0,1]. The joint distribution of (t1, t2) is common knowledge among the agents.

A decision rule f is a function f : [0,1]² → ∆({A, B, C}) where ∆({A, B, C}) is the set of probability distributions over {A, B, C}. We write fA(t1, t2) for the probability which f(t1, t2) assigns to alternative A, and we define fB(t1, t2) and fC(t1, t2) analogously.

Given any decision rule, denote for every ti ∈ [0,1] by pi(ti) the conditional probability that the alternative that agent i likes best is implemented, conditional on agent i’s type being ti, i.e.:

p1(t1) = ∫₀¹ fA(t1, t2)g(t2)dt2   and   p2(t2) = ∫₀¹ fC(t1, t2)g(t1)dt1.

Denote by qi(ti) the probability that the compromise is implemented, conditional on agent i’s type being ti, i.e. for i = 1,2:

qi(ti) = ∫₀¹ fB(t1, t2)g(tj)dtj   where j ≠ i.

Finally, we denote by Ui(ti) agent i’s expected utility, conditional on being type ti, that is:

Ui(ti) = pi(ti) + qi(ti)ti.
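To fix ideas, the interim quantities just defined can be computed numerically. The sketch below assumes uniformly distributed types (so g ≡ 1) and a hypothetical decision rule, namely: choose the compromise exactly when t1 + t2 > 1 and split the remaining probability evenly between A and C. The rule and all function names are our illustration, not constructions from the paper.

```python
# Numerical sketch of the interim quantities p1, q1, U1 under uniform
# types. The decision rule below is a hypothetical illustration.

N = 2000  # grid points for the Riemann sum over the opponent's type

def f_B(t1, t2):
    # probability of the compromise B: here, 1 whenever t1 + t2 > 1
    return 1.0 if t1 + t2 > 1 else 0.0

def f_A(t1, t2):
    # agent 1's favorite gets half of the remaining probability
    return 0.5 * (1.0 - f_B(t1, t2))

def p1(t1):
    # interim probability of A given t1 (g is the uniform density)
    return sum(f_A(t1, (k + 0.5) / N) for k in range(N)) / N

def q1(t1):
    # interim probability of the compromise given t1
    return sum(f_B(t1, (k + 0.5) / N) for k in range(N)) / N

def U1(t1):
    # interim expected utility: U1(t1) = p1(t1) + q1(t1) * t1
    return p1(t1) + q1(t1) * t1
```

For this particular rule one gets q1(t) = t and p1(t) = (1 − t)/2, so for example U1(0.7) = 0.15 + 0.7 · 0.7 = 0.64.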

We emphasize two aspects of the model described in this section. The first is that the model is symmetric. This is intended to reflect that ex ante there is no known reason to systematically bias the decision rule in favor of one of the agents.


The second aspect of our model that we emphasize is the normalization of von Neumann Morgenstern utilities so that the utility of the most preferred alternative is 1, and the utility of the least preferred alternative is 0. This normalization is not without loss of generality, because it is assumed to apply for all types, and because agents’ ex ante expected utility, before they know their type, will be used to evaluate decision rules. From the ex ante point of view, if all utilities of a type were multiplied by 0.5, say, this would provide a reason to attach less weight to this type’s utilities.

Because the normalization of utilities is potentially problematic, we consider the following alternative model in which utilities are not normalized.

Suppose each agent has a two-dimensional type (ti, τi) ∈ [0,1]², and that the vector of utilities of the type (ti, τi) is τi times the vector of utilities of type ti in our model, i.e. the utility of the most preferred alternative is τi, the utility of the compromise B is tiτi, and the utility of the least preferred alternative is 0. Intuitively, ti determines the agent’s interim utility if he finds himself in the circumstances which lead him to be this type, and τi reflects the importance which the agent attaches to these circumstances ex ante.

Note that the normalization of the utility of the least preferred alternative to zero is irrelevant. A constant could be added, and neither the interim incentives nor the ex ante trade-offs between different type combinations would be affected.

Observe that the agents’ interim incentives only depend on ti, not on τi. Therefore, a mechanism will be able to extract information about τi only if agent i is indifferent between different alternatives, and makes his choice among these alternatives dependent on τi. If we neglect this rather fragile possibility, then the collective choice can only depend on ti, but not on τi. Agents’ interim incentives and ex-ante expected utilities are then the same as in the model in which the random variable τi is replaced by a deterministic constant that is equal to the expected value of τi.2 One can think of the model in this paper as having been constructed in this way.

A more thorough discussion of the normalization of utility in mechanism design is provided by Hortala-Vallve (2004). Without making assumptions about how players resolve indifferences, he shows in a related model that it is impossible for an incentive compatible social choice function to condition on privately observed variables that do not affect interim incentives.

3. Incentive Compatibility

Because types are privately observed, a decision rule can be implemented in practice if and only if it is incentive compatible.3

Definition 1 A decision rule f is incentive compatible if for i = 1,2 and for any types ti, t′i ∈ [0,1] we have:

pi(ti) + qi(ti)ti ≥ pi(t′i) + qi(t′i)ti.

The following simple lemma is key for understanding how an incentive compatible rule incentivises agents to reveal their true von Neumann Morgenstern utility of the compromise. Because the proof of this Lemma is familiar from the literature on Bayesian incentive compatibility, we omit it.

Lemma 1 A decision rule f is incentive compatible if and only if for i = 1,2 we have:

2Recall that we have assumed that ti and τi are stochastically independent.

3The following definition implicitly assumes that the mechanism which is used to implement the decision rule is a direct one. By the “revelation principle” this is without loss of generality.


(i) qi is monotonically increasing in ti;

(ii) for any two types ti, t′i ∈ [0,1] with ti < t′i:

−t′i(qi(t′i) − qi(ti)) ≤ pi(t′i) − pi(ti) ≤ −ti(qi(t′i) − qi(ti)).

The first item in this Lemma says that the probability of the compromise, conditional on an agent’s type, increases as this agent’s utility of the compromise increases. Where is this probability taken from? The second item in Lemma 1 indicates that at least some of the probability has to be taken from the probability assigned to the agent’s most preferred alternative. The inequality in the second item in Lemma 1 provides a lower and an upper boundary for the change in the probability of the most preferred alternative. Both of these boundaries are negative.

It is intuitive that the probability of the most preferred alternative must decrease. If the additional probability for the compromise were only taken from the agent’s least preferred alternative, then the agent would have an incentive to report a higher utility for the compromise than he actually has. The agent has to pay for a higher probability of the compromise with a lower probability of his most preferred alternative.

The boundaries in item (ii) of Lemma 1 are such that among two types the higher type prefers to pay the price and obtain a higher probability of the compromise, whereas the lower type prefers not to pay the price. If we divide all sides in this sequence of inequalities by t′i − ti, and take the limit for ti → t′i, then, assuming differentiability, we obtain the condition:

−ti dqi(ti)/dti = dpi(ti)/dti,

which is the standard local indifference condition which, in the differentiable case, is necessary and sufficient for incentive compatibility.
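The bounds in item (ii) can be checked mechanically. The sketch below uses a hypothetical pair of interim functions, q(t) = t with p(t) = (1 − t²)/2, which satisfies the local indifference condition dp/dt = −t · dq/dt; whether a feasible joint decision rule generates these interim values is a separate question.

```python
# Check of the bounds in part (ii) of Lemma 1 for a hypothetical pair
# of interim probabilities satisfying the local indifference condition
# dp/dt = -t dq/dt: q(t) = t and p(t) = (1 - t**2) / 2.

def q(t):
    return t                       # monotonically increasing (item (i))

def p(t):
    return (1.0 - t * t) / 2.0     # falls as q rises: the agent "pays"

def bounds_hold(t_lo, t_hi):
    # item (ii): -t_hi * (q(t_hi) - q(t_lo)) <= p(t_hi) - p(t_lo)
    #                                        <= -t_lo * (q(t_hi) - q(t_lo))
    dq = q(t_hi) - q(t_lo)
    dp = p(t_hi) - p(t_lo)
    return -t_hi * dq <= dp <= -t_lo * dq
```

Here dp = −(t_lo + t_hi) · dq / 2, which always lies between the two boundaries, so the check succeeds for any pair of types.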


Lemma 1 suggests a parallel between our work and the theory of Bayesian mechanism design for public goods (D’Aspremont and Gérard-Varet (1979), Güth and Hellwig (1986), Rob (1989), Mailath and Postlewaite (1990), Hellwig (2003) and Norman (forthcoming)). We can view the probability with which the compromise is chosen as the quantity of a public good without exclusion that is consumed by both agents. Each agent’s private type determines the agent’s valuation of the public good. Agents pay for the public good with a reduced probability of their most preferred alternative.

To make this analogy more precise let us define somewhat arbitrarily the outcome in which each of the two extreme alternatives A and C is chosen with probability 0.5 as the default outcome. For every agent i define mi(t1, t2) to be the difference between the default probability of this agent’s most preferred alternative, and the one with which the agent’s most preferred alternative is chosen by a given decision rule if the types are (t1, t2):

m1(t1, t2) ≡ 0.5 − fA(t1, t2)
m2(t1, t2) ≡ 0.5 − fC(t1, t2)

for all (t1, t2) ∈ [0,1]². We can think of mi(t1, t2) as the payment made by agent i if types are (t1, t2). The probability of the compromise is then:

fB(t1, t2) = m1(t1, t2) + m2(t1, t2)

for all (t1, t2) ∈ [0,1]². We can think of this probability as the quantity of a public good that is produced if types are (t1, t2). The above equation shows that the public good is produced with a one-to-one technology where the quantity produced equals the sum of agents’ payments. The quantity of the public good can obviously not be more than one, and we might model this by assuming that the public good’s marginal costs rise to infinity once the quantity exceeds one.


Our model is then isomorphic to the traditional set-up for Bayesian mechanism design for public goods, except that we have to respect a budget constraint: For every i ∈ {1,2} and every (t1, t2) ∈ [0,1]² we must have:

mi(t1, t2) ∈ [−0.5, +0.5].

It is this ex post budget constraint that distinguishes our set-up from the public-good models that have been investigated in earlier literature.

The budget constraint has no impact on the characterization presented in Lemma 1. This characterization is, with the above re-interpretation, the exact equivalent to incentive compatibility characterizations that apply in the traditional set-up. However, the budget constraint will play an important role in what follows below.
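The reinterpretation can be made concrete. The sketch below computes the “payments” mi for a hypothetical decision rule and confirms that they add up to fB and stay within the ex post budget [−0.5, +0.5]; the rule and the function names are our own illustration.

```python
# "Payments" relative to the default lottery in which A and C are each
# chosen with probability 0.5. The decision rule is a hypothetical
# illustration.

def rule(t1, t2):
    fB = 1.0 if t1 + t2 > 1 else 0.0    # quantity of the "public good"
    fA = fC = (1.0 - fB) / 2.0
    return fA, fB, fC

def payments(t1, t2):
    fA, fB, fC = rule(t1, t2)
    m1 = 0.5 - fA    # probability of A that agent 1 gives up
    m2 = 0.5 - fC    # probability of C that agent 2 gives up
    return m1, m2
```

For every type profile, m1 + m2 equals fB (the one-to-one production technology) and each mi lies in [−0.5, +0.5] (the ex post budget constraint).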

A second feature that distinguishes our set-up from the traditional public goods set-up is that there is no individual rationality constraint in our model.

In the public goods context, and in other related contexts, one is typically interested in characterizing all decision rules that are incentive compatible and individually rational. But in our model there is no natural role for individual rationality.

The two differences between our context and the traditional set-up neutralize each other to some extent. Specifically, even though there is no individual rationality constraint, there is a lower boundary for the interim expected utility of the agents because there is only a finite number of alternatives, and agents cannot be asked to pay more than their budget allows.

We shall return to this point in Section 4.

4. Characterizations of Incentive Compatible Decision Rules

We now provide further characterizations of incentive compatibility. The following characterization describes incentive compatibility in terms of properties of the interim expected utility. This result is standard in related settings4, and therefore we omit the proof.

Lemma 2 A decision rule f is incentive compatible if and only if for every agent i = 1,2:

(i) qi is monotonically increasing in ti;

(ii) for every ti ∈ [0,1] such that qi is continuous at ti: U′i(ti) = qi(ti).

We can use this result to obtain a formula that links the interim expected probabilities of each agent’s favorite alternative to the interim expected prob- abilities of the compromise.

Lemma 3 A decision rule f is incentive compatible if and only if for every agent i = 1,2:

(i) qi is monotonically increasing in ti;

(ii) pi(ti) = pi(1) + qi(1) − qi(ti)ti − ∫_{ti}^1 qi(si)dsi for all ti ∈ [0,1].

Proof: We show that condition (ii) of Lemma 3 is equivalent to condition (ii) of Lemma 2:

U′i(ti) = qi(ti) for all continuity points of qi

⇔ Ui(ti) = Ui(1) − ∫_{ti}^1 qi(si)dsi for all ti ∈ [0,1]

⇔ pi(ti) + qi(ti)ti = pi(1) + qi(1) − ∫_{ti}^1 qi(si)dsi for all ti ∈ [0,1]

⇔ pi(ti) = pi(1) + qi(1) − qi(ti)ti − ∫_{ti}^1 qi(si)dsi for all ti ∈ [0,1].

Q.E.D.

4See, for example, Section 5.1.1 of Krishna (2002).
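Lemma 3 can also be read constructively: pick a monotone q, set pi(1) = 0 as for a regular rule, and let the formula in (ii) deliver p. The sketch below does this for the hypothetical choice q(t) = t² and then verifies truthful reporting directly; whether a feasible joint decision rule generates these interim values is a separate question.

```python
# Constructing interim probabilities from the formula in Lemma 3
# (with p(1) = 0, as for a regular rule) and checking incentive
# compatibility directly. q(t) = t**2 is a hypothetical choice.

def q(t):
    return t * t                      # monotonically increasing

def integral_q(a, b, n=2000):
    # midpoint Riemann sum of q over [a, b]
    h = (b - a) / n
    return sum(q(a + (k + 0.5) * h) for k in range(n)) * h

def p(t):
    # Lemma 3 (ii) with p(1) = 0:
    # p(t) = q(1) - q(t) * t - integral of q from t to 1
    return q(1.0) - q(t) * t - integral_q(t, 1.0)

def report_payoff(true_t, reported_t):
    # interim utility from reporting reported_t when the type is true_t
    return p(reported_t) + q(reported_t) * true_t
```

In closed form p(t) = (2/3)(1 − t³); the payoff p(r) + q(r)t is maximized at r = t, so truthtelling is optimal against these interim probabilities.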


Next, we seek a result which tells us which functions that assign to every pair of types a probability of the compromise can be part of an incentive compatible decision rule. As the probability of the compromise in our model is the equivalent of the production decision in the public goods model, it seems natural to seek such a result. A characterization for incentive compatible production rules plays a crucial role in the study of Bayesian incentive compatibility in the public goods model. Unfortunately, the only result of this type that seems available in our model is substantially weaker than the result that holds in the public goods model, and this difference between our model and the public goods model is crucial. We therefore develop this point now in detail.

Suppose a compromise rule fB is given, and assume that it satisfies the monotonicity requirement in part (i) of Lemma 1, 2 or 3. Can we choose interim expected probabilities of A and C so as to make the decision rule incentive compatible? The equation in part (ii) of Lemma 3 is almost sufficient for this purpose. The only difficulty is that the right hand side of this equation refers to pi(1). We don’t know this probability if we know only fB. However, it is easy to see that when studying incentive-compatible decision rules it is without loss of generality to restrict attention to rules for which pi(1) = 0 for i = 1,2. The reason is that, if agent i is of type ti = 1, then instead of assigning probability to agent i’s most preferred alternative, the decision rule may as well assign probability to the compromise B. Agent i is indifferent between the compromise and his most preferred alternative.

To make the argument of the previous paragraph precise, we introduce the following two definitions:

Definition 2 An incentive compatible decision rule f is called regular if t1 = 1 ⇒ fA(t1, t2) = 0 and t2 = 1 ⇒ fC(t1, t2) = 0.


Definition 3 Two decision rules f and f̃ are called “interim equivalent” if pi(ti) = p̃i(ti) and qi(ti) = q̃i(ti) for every i = 1,2 and every type ti ∈ [0,1].

The next lemma provides the precise sense in which it is without loss of generality for us to restrict attention to regular decision rules.

Lemma 4 For every incentive compatible decision rule f there is a regular incentive compatible decision rule f̃ that is interim equivalent to f.

Proof: Let f be an incentive compatible decision rule. Define f̃ to be the decision rule that satisfies:

(i) If ti < 1 for i = 1,2 then:

f̃(t1, t2) = f(t1, t2)

(ii) If t1 = 1 but t2 < 1 then:

f̃A(t1, t2) = 0
f̃B(t1, t2) = fA(t1, t2) + fB(t1, t2)
f̃C(t1, t2) = fC(t1, t2)

(iii) If t1 < 1 but t2 = 1 then:

f̃A(t1, t2) = fA(t1, t2)
f̃B(t1, t2) = fB(t1, t2) + fC(t1, t2)
f̃C(t1, t2) = 0

(iv) If t1 = t2 = 1 then:

f̃B(t1, t2) = 1

It is evident that the decision rule f̃ is interim equivalent to f. It remains to show that f̃ is incentive compatible. Note first that the move from f to f̃ does not affect agent i’s incentives if agent i is of type ti = 1. Such an agent’s interim utilities under f̃ are the same as they are under f, whether the agent tells the truth or lies. Consider now an agent i with type ti < 1. The agent’s incentives to pretend to be some other type t′i < 1 are by construction not affected. Finally, note that such an agent’s incentives to pretend to be type t′i = 1 have been negatively affected. Probability that was previously assigned to the agent’s most preferred alternative has now been shifted to the compromise, which the agent by assumption values less than his most preferred alternative.

Thus, the agent will find it optimal to report his type truthfully.

Q.E.D.
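The construction in the proof is mechanical and easy to sketch. The code below applies cases (i)-(iv) to a hypothetical rule f whose fA need not vanish at t1 = 1, producing a regular rule; the rule itself and all names are our illustration.

```python
# Sketch of the regularization in Lemma 4: at a type of 1, probability
# on that agent's favorite alternative is shifted to the compromise B.
# The rule f below is a hypothetical illustration.

def f(t1, t2):
    fB = 1.0 if t1 + t2 > 1 else 0.0
    fA = fC = (1.0 - fB) / 2.0     # fA(1, t2) need not be 0 here
    return fA, fB, fC

def f_tilde(t1, t2):
    fA, fB, fC = f(t1, t2)
    if t1 == 1.0:                  # cases (ii) and (iv)
        fB, fA = fB + fA, 0.0
    if t2 == 1.0:                  # cases (iii) and (iv)
        fB, fC = fB + fC, 0.0
    return fA, fB, fC
```

Only the measure-zero boundary {ti = 1} changes, so the interim probabilities pi and qi are unaffected: the transformed rule is interim equivalent to f.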

Now we return to the question of which compromise rules fB can be part of an incentive compatible decision rule. Restricting attention to regular incentive compatible decision rules, we obtain the following result:

Lemma 5 Consider a function f̂B : [0,1]² → [0,1]. For i = 1,2 define the interim expected value of f̂B to be: q̂i(ti) ≡ ∫₀¹ f̂B(t1, t2)g(tj)dtj for all ti ∈ [0,1], where j ≠ i. If there is a regular incentive compatible decision rule f = (fA, fB, fC) such that fB = f̂B, then for i = 1,2:

(i) q̂i(ti) is monotonically increasing in ti;

(ii) ∫₀¹ ∫₀¹ f̂B(t1, t2) ( t1 + G(t1)/g(t1) + t2 + G(t2)/g(t2) − 1 ) g(t1)g(t2)dt1dt2 = q̂1(1) + q̂2(1) − 1.

Condition (i) in Lemma 5 is, of course, the same as condition (i) in Lemmas 1-3. To obtain condition (ii) we take the formula in part (ii) of Lemma 3, calculate the ex-ante expected probability of the agents’ most preferred alternatives, and write down the equation that the sum of these ex-ante expected probabilities plus the ex-ante expected probability of the compromise must be equal to 1. The details are as follows.

Proof: The necessity of (i) was already shown in Lemma 1. To see why condition (ii) is necessary, suppose f is a regular incentive compatible decision rule as described in the Lemma, and note first that the fact that probabilities add up to one implies:

∫₀¹ p1(t1)g(t1)dt1 + ∫₀¹ p2(t2)g(t2)dt2 + ∫₀¹ ∫₀¹ fB(t1, t2)g(t1)g(t2)dt1dt2 = 1.

Next we observe that condition (ii) in Lemma 3 implies for regular decision rules:

pi(ti) = qi(1) − qi(ti)ti − ∫_{ti}^1 qi(si)dsi.

Using this formula we can calculate the expected value of pi(ti):

∫₀¹ pi(ti)g(ti)dti
= ∫₀¹ ( qi(1) − qi(ti)ti − ∫_{ti}^1 qi(si)dsi ) g(ti)dti
= qi(1) − ∫₀¹ qi(ti)ti g(ti)dti − ∫₀¹ ∫_{ti}^1 qi(si)dsi g(ti)dti
= qi(1) − ∫₀¹ qi(ti)ti g(ti)dti − ∫₀¹ qi(ti)G(ti)dti
= qi(1) − ∫₀¹ qi(ti) ( ti + G(ti)/g(ti) ) g(ti)dti.

Now we can substitute these expressions into the equation with which we started:

q1(1) − ∫₀¹ q1(t1) ( t1 + G(t1)/g(t1) ) g(t1)dt1 + q2(1) − ∫₀¹ q2(t2) ( t2 + G(t2)/g(t2) ) g(t2)dt2 + ∫₀¹ ∫₀¹ fB(t1, t2)g(t1)g(t2)dt1dt2 = 1.


Re-arranging terms, and recalling that fB = f̂B and that qi(ti) = q̂i(ti), yields the assertion.

Q.E.D.
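The necessary condition (ii) can be evaluated numerically. The sketch below does so under uniform types (G(t) = t, g(t) = 1) for the first-best compromise rule fB = 1 whenever t1 + t2 > 1: the left-hand side comes out near 5/6, short of q̂1(1) + q̂2(1) − 1 = 1, so no regular incentive compatible rule implements this fB, in line with the impossibility result proved in Section 6. The computation is our own check, not the paper's proof.

```python
# Numerical evaluation of condition (ii) of Lemma 5 under uniform
# types (G(t) = t, g(t) = 1) for the first-best compromise rule.
# Our own check: the equality fails, so this f_B cannot come from a
# regular incentive compatible rule.

N = 1000
grid = [(k + 0.5) / N for k in range(N)]

def f_B(t1, t2):
    return 1.0 if t1 + t2 > 1 else 0.0

# left-hand side: double integral of
#   f_B(t1, t2) * (t1 + G(t1)/g(t1) + t2 + G(t2)/g(t2) - 1)
# which under the uniform distribution is f_B * (2 t1 + 2 t2 - 1)
lhs = sum(f_B(t1, t2) * (2.0 * t1 + 2.0 * t2 - 1.0)
          for t1 in grid for t2 in grid) / (N * N)

q1_at_1 = sum(f_B(1.0, t2) for t2 in grid) / N  # interim q1(1) = 1
rhs = 2.0 * q1_at_1 - 1.0                       # q1(1) + q2(1) - 1
```

Since lhs ≈ 5/6 < 1 = rhs, the ex-ante "budget" equation cannot be satisfied by any regular incentive compatible rule with this compromise rule.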

Condition (ii) in Lemma 5 is the analogue of the ex-ante balanced budget constraint in mechanism design for public goods problems. In the public goods problem, budgets that are balanced ex ante can also be balanced ex post, as observed in Theorem 1 of Mailath and Postlewaite (1990). Therefore, the result of Lemma 5, in the public goods context, is an if and only if result.

In our context, however, Lemma 5 gives only a necessary, not a sufficient condition for incentive compatibility. To prove that ex-ante budget balance implies ex-post budget balance, Mailath and Postlewaite use a formula similar to one used in Cramton, Gibbons and Klemperer (1987, proof of Lemma 4). Using this formula in our context would lead to a violation of the agents’ budget constraints. It is thus the fact that in our model agents have budget constraints that makes it impossible to turn Lemma 5 into an if and only if result.

To obtain necessary and sufficient conditions, we might also seek to apply a result similar to Proposition 3.1 of Border (1991).5 Border’s result provides necessary and sufficient conditions that the functions pi have to satisfy if they can be obtained as the interim expected values of functions fA and fC.6 We would need to generalize Border’s result to a setting in which the function fB is pre-determined. This seems complicated. Our paper shows that some

5Border’s motivation for proving this result stems from the theory of optimal auctions with risk averse buyers. It generalizes earlier results obtained by Maskin and Riley (1984) and Matthews (1984) in the context of that particular application. We are grateful to Mark Armstrong for drawing our attention to the connection between our work and this literature.

6Border’s result assumes symmetry: p1 = p2, and fA(t, t′) = fC(t′, t).


headway can be made even if one works only with the necessary condition in Lemma 5.

5. Normative Properties of Decision Rules

We calculate the expected welfare associated with a decision rule f using a utilitarian welfare criterion. This corresponds to the evaluation of the ex ante expected utility of an agent who does not know whether he will be agent 1 or 2, and who does not yet know his type. We assume that the probability of being either agent 1 or agent 2 is equal to 1/2. We can then omit the probability weights when calculating ex ante expected utility and simply consider an unweighted sum.

Definition 4 The ex ante expected utility associated with decision rule f is:

∫₀¹ U1(t1)g(t1)dt1 + ∫₀¹ U2(t2)g(t2)dt2.

The ex ante expected utility associated with a decision rule f can equivalently be written as:

1 + ∫₀¹ ∫₀¹ fB(t1, t2)(t1 + t2 − 1)g(t1)g(t2)dt1dt2.

In this formula, we might as well omit the constant 1, which is what we shall do below.
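The equivalence of the two welfare expressions rests on the pointwise identity fA + fC + fB(t1 + t2) = 1 + fB(t1 + t2 − 1), and can be confirmed numerically. The sketch below does so under uniform types for a hypothetical decision rule.

```python
# Check that the sum of ex ante expected utilities equals
# 1 + E[ f_B(t1, t2) * (t1 + t2 - 1) ] under uniform types.
# The decision rule is a hypothetical illustration.

N = 400
grid = [(k + 0.5) / N for k in range(N)]

def rule(t1, t2):
    fB = 1.0 if t1 + t2 > 1 else 0.0
    fA = fC = (1.0 - fB) / 2.0
    return fA, fB, fC

total_u = 0.0    # direct sum: agent 1 gets fA + fB*t1, agent 2 gets fC + fB*t2
total_fb = 0.0   # accumulates f_B * (t1 + t2 - 1)
for t1 in grid:
    for t2 in grid:
        fA, fB, fC = rule(t1, t2)
        total_u += (fA + fB * t1) + (fC + fB * t2)
        total_fb += fB * (t1 + t2 - 1.0)

lhs = total_u / (N * N)          # ex ante expected utility, Definition 4
rhs = 1.0 + total_fb / (N * N)   # the rewritten formula
```

For this rule both expressions come out near 7/6 under the uniform distribution, and they agree up to floating-point error because the identity holds type profile by type profile.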

From the above formula it is obvious that the decision rules f that maximize ex ante expected utility among all decision rules are those that are “first-best” in the sense of the following definition.

Definition 5 A decision rule f is called first-best if with probability 1 we have:

t1 + t2 > 1 ⇒ fB(t1, t2) = 1 and t1 + t2 < 1 ⇒ fB(t1, t2) = 0.

(21)

Note that there are many first-best decision rules. The reason is firstly that Definition 5 requires the listed conditions to be true with probability one, but notalways. The reason is secondly, and more importantly, that the above definition does not restrict the probabilities with which alternatives A and C are chosen if the compromise is not implemented.

A second normative property that will play a role in this paper is the following symmetry condition.

Definition 6 A decision rule f is called symmetric if for all t, t0 [0,1]we have: fA(t, t0) = fC(t0, t).

Our interest will be in those decision rules that maximize ex ante expected utility among all incentive compatible rules. We define:

Definition 7 A decision rule f is called second-best if it yields the largest ex ante expected utility among all incentive compatible decision rules.

The following simple result shows that when considering whether first- best rules are incentive compatible, or when investigating second-best rules, there is no loss of generality in considering symmetric and regular decision rules only.

Lemma 6 For every incentive compatible decision rulef there is a symmet- ric and regular incentive compatible decision rule f0 which yields the same ex ante expected utility as f.

Proof: Letf be an incentive compatible decision rule. Define first f to be the rule which results if agents are first asked to reveal their types, then a fair

(22)

coin is tossed and, if head comes up f is applied, but if tails comes up f is applied except that the roles of the agents are reversed, i.e. agent 2 now plays the role of agent 1, and vice versa. Because f is incentive compatible, both agents would have no incentive to distort their preferences if they knew the outcome of the coin toss in advance. Therefore they also have no incentive to distort preferences ex ante, and f is incentive compatible. It is obvious that the new rule f is symmetric, and that it has the same ex ante expected utility as f. Now replace f by its regular equivalent f0, as described in Lemma 4. Note that the new rule is still symmetric. Therefore, f0 has all the properties asserted in Lemma 6.

Q.E.D.

6. Impossibility of Implementing First-Best Rules

We now investigate whether first-best decision rules are incentive com- patible.

Proposition 1 No first-best decision rule is incentive compatible.

We prove Proposition 1 by showing that a first-best decision rule that is incentive compatible must violate condition (ii) in Lemma 5. This means that the ex-ante probability of the compromise, and the ex-ante probabilities of alternatives A and C, as implied by incentive compatibility, don’t add up to one. The ex-ante balanced budget constraint, that is in our setting necessary, although not sufficient, for incentive compatibility, is violated.

Note that the structure of our argument parallels the arguments behind well-known impossibility results, such as that of Myerson and Satterthwaite (1983) for bilateral trade, and the results of Rob (1989), Mailath and Postle- waite (1990), Hellwig (2003) and Norman (forthcoming) for public goods.

(23)

The most important difference between the structure of our results, and the results in earlier papers, is that in those papers the boundary condition which makes it possible to pin down agents’ payments results from an individual rationality constraint, whereas in our paper it results from the fact that the first-best outcome in state ti = 1 is such that each agent’s “budget” is ex- hausted, and therefore utility cannot be shifted through side payments.

Proof: We assume thatfis a first-best decision rule that is incentive compat- ible. Without loss of generality, we assume that f is symmetric and regular.

We then deduce that f violates condition (ii) in Lemma 5. By Lemma 5 we then have a contradiction.

We consider first the right hand side of the equation in condition (ii). A first best decision rule satisfies for every agent i with probability 1:

qi(ti) = 1−G(1−ti).

We wish to show that this equality needs to hold for ti = 1, i.e. that we need to have: qi(1) = 1. The proof is indirect. Suppose qi(1) < 1. Because f is regular, we have pi(1) = 0. Hence, Ui(1) = qi(1) <1. Because qi(ti) = 1−G(1−ti) with probability 1, there is a sequence {tni}n∈N of elements of [0,1] such that limn→∞tni = 1, andqi(tni) = 1−G(1−tni) for all n∈N. Hence limn→∞qi(tni) = 1. Therefore, agent 1 will find it advantageous to pretend to be of type tni for sufficiently large n. Thus, the rule is not incentive compatible. We conclude that qi(1) = for both i = 1,2, and hence that the right hand side of the equation in condition (ii) of Lemma 5 equals 1.

When considering the left hand side of the equation in condition (ii) we can assume without loss of generality that the condition that defines first best holds always, and not just with probability 1. This is because the left hand side of the equation only contains integrals of the decision rule probabilities, and therefore values that are taken on sets of measure zero don’t matter.

(24)

The left hand side of the equation in condition (ii) of Lemma 5 can be written as the sum of three integrals:

X

i∈{1,2}

µZ 1

0

Z 1

0

fB(t1, t2) µ

ti+ G(ti) g(ti)

g(t1)g(t2)dt1dt2

Z 1

0

Z 1

0

fB(t1, t2)g(t1)g(t2)dt1dt2 We evaluate these three integrals separately.

We begin with the first two integrals. Let i∈ {1,2}, and assume j 6=i.

Z 1

0

Z 1

0

fB(t1, t2) µ

ti+ G(ti) g(ti)

g(ti)g(tj)dtjdti

= Z 1

0

Z 1

0

fB(t1, t2)(tig(ti) +G(ti))g(tj)dtjdti

= Z 1

0

(tig(ti) +G(ti))(1−G(1−ti))dti

= Z 1

0

tig(ti)dti+ Z 1

0

G(ti)dti Z 1

0

(tig(ti) +G(ti))G(1−ti)dti

=

·

tiG(ti)

¸1

0

Z 1

0

G(ti)dti+ Z 1

0

G(ti)dti Z 1

0

(tig(ti) +G(ti))G(1−ti)dti

= 1 Z 1

0

(tig(ti) +G(ti))G(1−ti)dti. Note also that:

Z 1

0

Z 1

0

fB(t1, t2)g(ti)g(tj)dtjdti

= Z 1

0

(1−G(1−ti))g(ti)dti

= 1 Z 1

0

g(ti)G(1−ti)dti.

Our objective is now to prove that the left hand side of condition (ii) in Lemma 3 is smaller than the right hand side, i.e.:

12 Z 1

0

(tig(ti) +G(ti))G(1−ti)dti+ Z 1

0

g(ti)G(1−ti)dti < 1 Z 1

0

(2ti1)G(1−ti)g(ti)dti+ 2 Z 1

0

G(ti)G(1−ti)dti > 0.

(25)

Denote the first integral on the left hand side of this inequality by I:

I Z 1

0

(2ti1)G(1−ti)g(ti)dti. Integration by parts yields:

I =

·

(2ti1)G(1−ti)G(ti)

¸1

0

Z 1

0

G(ti) (2G(1−ti)(2ti1)g(1−ti))dti

= −2 Z 1

0

G(ti)G(1−ti)dti+ Z 1

i

(2ti 1)g(1−ti)G(ti)dti.

A change of variables, setting τi = 1−ti, allows us to rewrite the second integral on the right hand side as follows:

Z 1

i

(2ti1)g(1−ti)G(ti)dti = Z 1

0

(2τi1)g(τi)G(1−τi)dτi. Thus, replacing τi again by ti, we find:

I = −2

Z 1

0

G(ti)G(1−ti)dti Z 1

0

(2ti1)g(ti)G(1−ti)dti

= −2 Z 1

0

G(ti)G(1−ti)dti−I, which implies

I = Z 1

0

G(ti)G(1−ti)dti.

Substituting this into the inequality which we want to prove, we obtain:

−2 Z 1

0

G(ti)G(1−ti)dti+ 2 Z 1

0

G(ti)G(1−ti)dti >0 Z 1

0

G(ti)G(1−ti)dti > 0, which is obviously true.

Q.E.D.

(26)

7. An Auxiliary Second-Best Problem

In the light of Proposition 1, our next objective is to investigate second- best decision rules. Finding general characteristics of second best rules is hard. To see the difficulty, suppose we attempted to determine simultane- ously all three functions,fA,fBandfC that make up the second-best decision rule. The set of constraints that we would have to take into account would consist of the incentive compatibility constraints and the constraints that for each vector of types (t1, t2), the probabilities fA(t1, t2), fB(t1, t2), and fC(t1, t2), have to be non-negative, and have to add up to one. This latter set of constraints is difficult to handle.

We shall take a different approach, and initially focus on the function fB, that determines the probability of the compromise B, only. Recall that ex-ante expected welfare depends on this function only. Focusing on this function is analogous to an approach that is often pursued in mechanism design in other context. This approach is to first determine welfare relevant decisions, and to supplement the analysis later with a determination of pay- ment rules that make the efficiency relevant part of the collective decision incentive compatible.

The constraints that we shall take account of when investigating the second-best compromise rule fB are those listed in Lemma 5. We emphasize that these constraints are necessary, but not sufficient for the function fB to be part of an incentive compatible decision rule. This differentiates our study from the traditional study of the public goods problem, where the anal- ogous constraints are necessary and sufficient. In our context, we are thus considering a problem in which some constraints are omitted. If the solution happens to satisfy also these other constraints, then it will be second best.

However, if it violates these other constraints, then the second best will be

(27)

different. For some special cases we shall find in the next section that the omitted constraints are satisfied. However, for the general case, we have not been able to show this.

The optimization problem that we are considering to determine a candi- date solution fB is as follows:

Auxiliary Optimization Problem: Choose fB so as to maximize Z 1

0

Z 1

0

fB(t1, t2)(t1+t21)g(t1)g(t2)dt1dt2 subject to:

(i) qi(ti) is weakly monotonically increasing in ti for every i∈ {1,2};

(ii) Z 1

0

Z 1

0

fB(t1, t2) µ

t1+G(t1)

g(t1) +t2+G(t2) g(t2) 1

g(t1)g(t2)dt1dt2

= q1(1) +q2(1)1

To simplify our analysis of the auxiliary optimization problem, we make the following assumption:

Monotonicity Assumption: G(t)/g(t) is a monotonically increasing func- tion of t.

Under this assumption a solution to the auxiliary optimization problem that satisfies condition (ii) above also satisfies the monotonicity condition (i). A standard application of Lagrange multipliers leads to the following result.

We omit the proof of this result.

Referenzen

ÄHNLICHE DOKUMENTE

An essential ingredient in the proof of Theorem 1.1 is a new sum-product type result which shows that a certain expander involving products and shifts gives superquadratic growth..

When the photon buffers that are used for the computation of the light distribution in the scene in the second rendering step are calculated, it must be taken into consideration

In the second part of the paper we show that in our model there is no subgame-perfect equilibrium that involves rationing. The reason behind this result is that the monopolist

The types of governance that we conceive share one vital feature: they are radical departures from the centralized state. However, they diffuse authority in contrasting ways. The

Although I have been discussing them as alternative models of the public sector, as indeed they are, the patterns of reform that have shaped contemporary government have a number

The first step in the argument sounds nothing but logical, the second, however, is not easily to reconcile with the AG‘s observation that ―the EAEC rules are only aimed at

If the length of the patent is ¢, it is clear that the share of competitively supplied products a depends on the growth rate g: To see the relationship between a, g; and ¢; note

One more feature is that beginning in the second half of 1996, the degree of market segmentation in European Russia is approximately twice that in Russia excluding