Efficient Compromising*

by Tilman Börgers† and Peter Postl‡

September 2004

* Tilman Börgers' research was financially supported by the ESRC through the "Centre for Economic Learning and Social Evolution" (ELSE) at University College London.

† Department of Economics and ELSE, University College London, Gower Street, London WC1E 6BT, United Kingdom, [email protected].

‡ Department of Economics, University of Birmingham, Birmingham B15 2TT, United Kingdom, [email protected].

Abstract

Two agents have to choose one of three alternatives. Their ordinal rankings of these alternatives are commonly known among them. The rankings are diametrically opposed to each other. Ex ante efficiency requires that they reach a compromise, that is, choose the alternative which they both rank second, if and only if the sum of their von Neumann Morgenstern utilities from this alternative exceeds the sum of utilities when either agent's most preferred alternative is chosen. We assume that the von Neumann Morgenstern utilities are independent and privately observed random variables, and ask whether there are incentive compatible mechanisms which elicit utilities and implement efficient decisions. Our first main result is that no such mechanisms exist if the distribution of agents' types has a density with full support. Our second main result provides a second-best decision rule for the case that types are uniformly distributed. We find that the compromise is less frequently chosen than ex-ante efficiency requires.

1. Introduction

You and your partner disagree about which restaurant to go to. You prefer the Italian restaurant over the English restaurant, and the English restaurant over the Chinese restaurant. But your partner has exactly the opposite preferences. Should you compromise by going to the English restaurant, or should you go to a restaurant that one of you likes best? The answer to this question presumably depends on how strongly each partner prefers his favorite restaurant over the compromise, and how strongly he prefers the compromise over the bottom ranked alternative. Is there a way of finding out the partners' strengths of preference, or will they, for example, necessarily overstate the importance of seeing their first choice implemented? This is the question which this paper addresses.

The answer to our question depends, of course, on what we mean by "strength of preference". One interpretation could be that the strength of preference is equal to the amount of money that an agent is willing to pay in order to obtain one outcome rather than another. If this were what we had in mind, then one could try to elicit the strength of the partners' preferences by introducing a mechanism that obliges any partner whose favorite restaurant is chosen to pay compensation to the other.

Here, we want to abstract from such side payments as they seem inappropriate in many situations. Spouses, for example, rarely pay money to each other to resolve conflicts. When initially conceiving of this paper, we had another situation in mind in which money payments are typically not made: voting. Optimal voting rules, if there are more than two candidates, need to elicit, in some sense, the "strength of preference" for candidates, yet voters are typically not asked to offer payments in conjunction with their votes. The problem that we study here is a simplified version of the voting problem.

If side payments are ruled out, what do we mean by "strength of preferences", and how can we elicit them? In this paper, we mean by strength of preference the von Neumann Morgenstern utility of different alternatives. If we evaluate different mechanisms from an ex ante perspective (Holmström and Myerson (1983)), then von Neumann Morgenstern utilities have to be taken into account when resolving conflicts. How can we elicit von Neumann Morgenstern utilities truthfully? By exposing agents to risk. Agents' choices among lotteries indicate their von Neumann Morgenstern utilities. If agents play a game with incomplete information, then they are almost always automatically exposed to risk. Their choices can then reveal their utilities.

We develop this theme in a simple stylized example with two agents and three alternatives. We assume that it is commonly known that the agents’ rankings of the alternatives are diametrically opposed. Their von Neumann Morgenstern utilities for the alternatives are, however, not known.

To implement efficient decisions, these utilities need to be elicited, as it is optimal to implement the compromise if and only if the sum of the agents' utilities of the compromise is larger than the sum of their utilities when either agent's most preferred alternative is chosen.

Our first main analytical result is that this decision rule, to which we refer as first-best, is not incentive compatible, and can therefore not be implemented, if the distribution of von Neumann Morgenstern utilities has a density with full support. We complement this observation with a study of second-best decision rules. This study is restricted to the case that agents' types are uniformly distributed, or follow some related distributions. For these special cases we find that the second-best decision rule picks the compromise less frequently than ex-ante efficiency would require.

One motivation for our paper is that mechanisms for efficient compromising are potentially relevant to many areas of conflict, such as labor relations or international negotiations. A second motivation was already mentioned above: we are interested in the application of the theory of Bayesian mechanism design to voting. The current study is a first and limited step in that direction. Traditionally, the literature on voting has either studied strategic behavior under specific voting rules, or the design of voting rules using solution concepts that rely on weak informational assumptions, such as dominant strategies (Gibbard (1973), Satterthwaite (1975), Dutta, Peters and Sen (2004)), or undominated strategies (Börgers (1991)). Our purpose here is to explore the theory of voting with stronger informational assumptions, which are, however, frequently made in other areas of incentive theory. A third motivation for this paper is that it is a case study in Bayesian mechanism design without transferrable utility. Much of the literature on Bayesian mechanism design has relied on the assumption of transferrable utility. It seems worthwhile to explore what happens if this assumption is relaxed.

It turns out that the setting that we study, although formally without transferrable utility, is closely related to models of mechanism design for public goods with transferrable utility as studied by D'Aspremont and Gérard-Varet (1979), Güth and Hellwig (1986), Rob (1989), Mailath and Postlewaite (1990), Hellwig (2003), and Norman (forthcoming). These papers all consider settings in which there are two goods, a public good and "money." Agents' preferences are assumed to be additive in the quantity of the public good that is provided and in "money". In our setting there is no "money". However, for each agent the probability with which their most preferred alternative is chosen serves in some sense as "money". The public good is the probability with which the compromise is implemented. Agents "pay" for an increased probability of the compromise by giving up probability of their most preferred alternative. Agents' preferences are additive in the "real good" and "money" because they are von Neumann Morgenstern preferences over lotteries, which are additive in probabilities.

The details of the analogy between our work and the literature on mechanism design for public goods will be explained later. Two points deserve emphasis. Firstly, an important difference between our work and the established public goods literature is that agents, in our model, face a budget constraint, which is absent from traditional models. The budget constraint arises from boundaries on the amount of probability which agents can surrender: for instance, it cannot be larger than one.

The second difference is that our model does not feature individual rationality constraints. Most, though not all, of the previous literature on public goods has postulated an individual rationality constraint (see the discussion in Hellwig (2003)). Although in our setting there is no "outside option" which would guarantee agents a minimum utility, a lower boundary for agents' expected utility nevertheless easily follows from the facts that there is only a finite number of allocation decisions, and that there is an upper boundary for the "payments" which agents can make. Thus the budget constraint has a similar effect as an individual rationality constraint.

In the light of the above discussion, it becomes intuitively plausible that it is not possible to implement the first best in our setting. Analogous results have been obtained for the public goods setting by Rob (1989), Mailath and Postlewaite (1990), and Hellwig (2003). The analysis of the second best in our setting is more involved than in the established public-goods literature because of the difficulty of taking the implicit budget constraint into account. For this reason our analysis of second-best rules is restricted to the case of uniformly distributed types.

This paper is organized as follows. In Section 2 we introduce our model. Section 3 contains a characterization of incentive compatibility and a detailed discussion of the analogy between our setting and the public goods problem. In Section 4 we derive further characterizations of incentive compatible decision rules, which form the basis for our later results. In Section 5 we consider welfare properties of decision rules, and formally define first-best and second-best decision rules. Section 6 proves the impossibility of implementing first-best decision rules. Section 7 deals with second-best rules in the case of general type distributions, but considers a relaxed optimization problem in which some constraints are disregarded. Section 8 then brings the omitted incentive constraints back into play, but only for the uniform distribution case and some other special distributions. Section 9 concludes.

2. The Model

There are two agents i = 1, 2 who must collectively choose one alternative from the set {A, B, C}. Agent 1 prefers A over B, and B over C. Agent 2 prefers C over B, and B over A. These preferences are common knowledge among the two agents.

Each agent i has a von Neumann Morgenstern utility function u_i : {A, B, C} → ℝ. We normalize utilities so that u_1(A) = u_2(C) = 1 and u_1(C) = u_2(A) = 0. These features of the von Neumann Morgenstern utility functions are common knowledge among the two agents.^{1}

^{1} The normalization of agents' utilities that we have introduced in this paragraph is not entirely innocuous. It will be discussed towards the end of this section, where we also sketch an alternative model without this normalization. We will then argue that the analysis of the alternative model is equivalent to the analysis of the model with normalization.

For i = 1, 2 we write t_i for u_i(B). We refer to t_i as "player i's type". We assume that t_i is a random variable which is only observed by agent i. The two players' types are stochastically independent, and they are identically distributed with cumulative distribution function G. We assume that G has support [0,1], that it has a density g, and that g(t) > 0 for all t ∈ [0,1]. The joint distribution of (t_1, t_2) is common knowledge among the agents.

A decision rule f is a function f : [0,1]^2 → ∆({A, B, C}), where ∆({A, B, C}) is the set of probability distributions over {A, B, C}. We write f_A(t_1, t_2) for the probability which f(t_1, t_2) assigns to alternative A, and we define f_B(t_1, t_2) and f_C(t_1, t_2) analogously.

Given any decision rule, denote for every t_i ∈ [0,1] by p_i(t_i) the conditional probability that the alternative that agent i likes best is implemented, conditional on agent i's type being t_i, i.e.:

p_1(t_1) = ∫_0^1 f_A(t_1, t_2) g(t_2) dt_2   and   p_2(t_2) = ∫_0^1 f_C(t_1, t_2) g(t_1) dt_1.

Denote by q_i(t_i) the probability that the compromise is implemented, conditional on agent i's type being t_i, i.e. for i = 1, 2:

q_i(t_i) = ∫_0^1 f_B(t_1, t_2) g(t_j) dt_j   where j ≠ i.

Finally, we denote by U_i(t_i) agent i's expected utility, conditional on being type t_i, that is:

U_i(t_i) = p_i(t_i) + q_i(t_i) t_i.
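As a numerical illustration of these definitions, the sketch below computes p_1, q_1 and U_1 by midpoint-rule integration. The decision rule used (choose the compromise B whenever t_1 + t_2 ≥ 1, otherwise randomize evenly between A and C) and the uniform type distribution (g ≡ 1) are illustrative assumptions, not objects derived in this paper.

```python
# Illustrative sketch: interim probabilities under a hypothetical rule.
# Assumptions (not from the paper): uniform types, g(t) = 1 on [0,1], and
# the rule "choose B iff t1 + t2 >= 1, else randomize evenly over A and C".

def f(t1, t2):
    """Return the lottery (f_A, f_B, f_C) chosen at types (t1, t2)."""
    if t1 + t2 >= 1.0:
        return (0.0, 1.0, 0.0)
    return (0.5, 0.0, 0.5)

def interim(t1, n=20_000):
    """Midpoint-rule integrals over t2: returns (p_1(t1), q_1(t1))."""
    p1 = q1 = 0.0
    for k in range(n):
        t2 = (k + 0.5) / n
        fA, fB, _ = f(t1, t2)
        p1 += fA / n      # p_1(t1) = int_0^1 f_A(t1, t2) g(t2) dt2
        q1 += fB / n      # q_1(t1) = int_0^1 f_B(t1, t2) g(t2) dt2
    return p1, q1

p1, q1 = interim(0.7)
U1 = p1 + q1 * 0.7        # U_1(t1) = p_1(t1) + q_1(t1) * t1
print(round(p1, 4), round(q1, 4), round(U1, 4))
```

For this particular rule, q_1(t_1) = t_1 and p_1(t_1) = (1 − t_1)/2, so the printed values should be close to 0.15, 0.7 and 0.64.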

We emphasize two aspects of the model described in this section. The first is that the model is symmetric. This is intended to reflect that ex ante there is no known reason to systematically bias the decision rule in favor of one of the agents.

The second aspect of our model that we emphasize is the normalization of von Neumann Morgenstern utilities so that the utility of the most preferred alternative is 1, and the utility of the least preferred alternative is 0. This normalization is not without loss of generality, because it is assumed to apply for all types, and because agents’ ex ante expected utility, before they know their type, will be used to evaluate decision rules. From the ex ante point of view, if all utilities of a type were multiplied by 0.5, say, this would provide a reason to attach less weight to this type’s utilities.

Because the normalization of utilities is potentially problematic, we consider the following alternative model in which utilities are not normalized.

Suppose each agent has a two-dimensional type (t_i, τ_i) ∈ [0,1]^2, and that the vector of utilities of the type (t_i, τ_i) is τ_i times the vector of utilities of type t_i in our model, i.e. the utility of the most preferred alternative is τ_i, the utility of the compromise B is t_i τ_i, and the utility of the least preferred alternative is 0. Intuitively, t_i determines the agent's interim utility if he finds himself in the circumstances which lead him to be this type, and τ_i reflects the importance which the agent attaches to these circumstances ex ante.

Note that the normalization of the utility of the least preferred alternative to zero is irrelevant. A constant could be added, and neither the interim incentives nor the ex ante trade-offs between different type combinations would be affected.

Observe that the agents' interim incentives only depend on t_i, not on τ_i. Therefore, a mechanism will be able to extract information about τ_i only if agent i is indifferent between different alternatives, and makes his choice among these alternatives dependent on τ_i. If we neglect this rather fragile possibility, then the collective choice can only depend on t_i, but not on τ_i. Agents' interim incentives and ex-ante expected utilities are then the same as in the model in which the random variable τ_i is replaced by a deterministic constant that is equal to the expected value of τ_i.^{2} One can think of the model in this paper as having been constructed in this way.

A more thorough discussion of the normalization of utility in mechanism design is provided by Hortala-Vallve (2004). Without making assumptions about how players resolve indifferences, he shows in a related model that it is impossible for an incentive compatible social choice function to condition on privately observed variables that do not affect interim incentives.

3. Incentive Compatibility

Because types are privately observed, a decision rule can be implemented in
practice if and only if it is incentive compatible.^{3}

Definition 1 A decision rule f is incentive compatible if for i = 1, 2 and for any types t_i, t_i' ∈ [0,1] we have:

p_i(t_i) + q_i(t_i) t_i ≥ p_i(t_i') + q_i(t_i') t_i.

The following simple lemma is key to understanding how an incentive compatible rule incentivises agents to reveal their true von Neumann Morgenstern utility of the compromise. Because the proof of this lemma is familiar from the literature on Bayesian incentive compatibility, we omit it.

Lemma 1 A decision rule f is incentive compatible if and only if for i = 1, 2 we have:

(i) q_i is monotonically increasing in t_i;

(ii) for any two types t_i, t_i' ∈ [0,1] with t_i < t_i':

−t_i' (q_i(t_i') − q_i(t_i)) ≤ p_i(t_i') − p_i(t_i) ≤ −t_i (q_i(t_i') − q_i(t_i)).

^{2} Recall that we have assumed that t_i and τ_i are stochastically independent.

^{3} The following definition implicitly assumes that the mechanism which is used to implement the decision rule is a direct one. By the "revelation principle" this is without loss of generality.

The first item in this Lemma says that the probability of the compromise, conditional on an agent's type, increases as this agent's utility of the compromise increases. Where is this probability taken from? The second item in Lemma 1 indicates that at least some of the probability has to be taken from the probability assigned to the agent's most preferred alternative. The inequality in the second item of Lemma 1 provides a lower and an upper boundary for the change in the probability of the most preferred alternative. Both of these boundaries are negative.

It is intuitive that the probability of the most preferred alternative must decrease. If the additional probability for the compromise were only taken from the agent's least preferred alternative, then the agent would have an incentive to report a higher utility for the compromise than he actually has. The agent has to pay for a higher probability of the compromise with a lower probability of his most preferred alternative.

The boundaries in item (ii) of Lemma 1 are such that among two types the higher type prefers to pay the price and obtain a higher probability of the compromise, whereas the lower type prefers not to pay the price. If we divide all sides in this sequence of inequalities by t_i' − t_i, and take the limit for t_i → t_i', then, assuming differentiability, we obtain the condition:

−t_i dq_i(t_i)/dt_i = dp_i(t_i)/dt_i,

which is the standard local indifference condition which, in the differentiable case, is necessary and sufficient for incentive compatibility.

Lemma 1 suggests a parallel between our work and the theory of Bayesian mechanism design for public goods (D'Aspremont and Gérard-Varet (1979), Güth and Hellwig (1986), Rob (1989), Mailath and Postlewaite (1990), Hellwig (2003) and Norman (forthcoming)). We can view the probability with which the compromise is chosen as the quantity of a public good without exclusion that is consumed by both agents. Each agent's private type determines the agent's valuation of the public good. Agents pay for the public good with a reduced probability of their most preferred alternative.

To make this analogy more precise let us define, somewhat arbitrarily, the outcome in which each of the two extreme alternatives A and C is chosen with probability 0.5 as the default outcome. For every agent i define m_i(t_1, t_2) to be the difference between the default probability of this agent's most preferred alternative, and the probability with which the agent's most preferred alternative is chosen by a given decision rule if the types are (t_1, t_2):

m_1(t_1, t_2) ≡ 0.5 − f_A(t_1, t_2)
m_2(t_1, t_2) ≡ 0.5 − f_C(t_1, t_2)

for all (t_1, t_2) ∈ [0,1]^2. We can think of m_i(t_1, t_2) as the payment made by agent i if types are (t_1, t_2). The probability of the compromise is then:

f_B(t_1, t_2) = m_1(t_1, t_2) + m_2(t_1, t_2)

for all (t_1, t_2) ∈ [0,1]^2. We can think of this probability as the quantity of a public good that is produced if types are (t_1, t_2). The above equation shows that the public good is produced with a one-to-one technology where the quantity produced equals the sum of agents' payments. The quantity of the public good can obviously not be more than one, and we might model this by assuming that the public good's marginal costs rise to infinity once the quantity exceeds one.

Our model is then isomorphic to the traditional set-up for Bayesian mechanism design for public goods, except that we have to respect a budget constraint: for every i ∈ {1, 2} and every (t_1, t_2) ∈ [0,1]^2 we must have:

m_i(t_1, t_2) ∈ [−0.5, +0.5].

It is this ex post budget constraint that distinguishes our set-up from the public-good models that have been investigated in earlier literature.
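The payment reinterpretation can be made concrete in a few lines of code. The rule f below (with f_B = min(t_1, t_2) and the remainder split evenly between A and C) is a hypothetical illustration; the point is only that the identities f_B = m_1 + m_2 and m_i ∈ [−0.5, +0.5] hold mechanically for any rule of this shape.

```python
# Sketch of the public-good reinterpretation: relative to the default
# outcome (A and C each with probability 0.5), agent i's "payment" m_i is
# the probability of his favorite alternative that he gives up, and the
# "quantity of public good" is the probability of the compromise B.
# The decision rule f below is a hypothetical illustration.

def f(t1, t2):
    """(f_A, f_B, f_C) with f_B = min(t1, t2), rest split evenly."""
    fB = min(t1, t2)
    rest = 1.0 - fB
    return (rest / 2, fB, rest / 2)

def payments(t1, t2):
    fA, fB, fC = f(t1, t2)
    m1 = 0.5 - fA             # m_1(t1, t2) = 0.5 - f_A(t1, t2)
    m2 = 0.5 - fC             # m_2(t1, t2) = 0.5 - f_C(t1, t2)
    return m1, m2, fB

for t1 in (0.0, 0.3, 1.0):
    for t2 in (0.0, 0.8, 1.0):
        m1, m2, fB = payments(t1, t2)
        # one-to-one technology: quantity produced = sum of payments
        assert abs(fB - (m1 + m2)) < 1e-12
        # ex post budget constraint: m_i in [-0.5, +0.5]
        assert -0.5 <= m1 <= 0.5 and -0.5 <= m2 <= 0.5
print("public-good identity and budget constraint hold on the grid")
```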

The budget constraint has no impact on the characterization presented in Lemma 1. This characterization is, with the above reinterpretation, exactly equivalent to incentive compatibility characterizations that apply in the traditional set-up. However, the budget constraint will play an important role in what follows below.

A second feature that distinguishes our set-up from the traditional public goods set-up is that there is no individual rationality constraint in our model. In the public goods context, and in other related contexts, one is typically interested in characterizing all decision rules that are incentive compatible and individually rational. But in our model there is no natural role for individual rationality.

The two differences between our context and the traditional set-up neutralize each other to some extent. Specifically, even though there is no individual rationality constraint, there is a lower boundary for the interim expected utility of the agents because there is only a finite number of alternatives, and agents cannot be asked to pay more than their budget allows. We shall return to this point in Section 4.

4. Characterizations of Incentive Compatible Decision Rules

We now provide further characterizations of incentive compatibility. The first describes incentive compatibility in terms of properties of the interim expected utility. This result is standard in related settings^{4}, and therefore we omit the proof.

Lemma 2 A decision rule f is incentive compatible if and only if for every agent i = 1, 2:

(i) q_i is monotonically increasing in t_i;

(ii) for every t_i ∈ [0,1] such that q_i is continuous at t_i:

U_i'(t_i) = q_i(t_i).

We can use this result to obtain a formula that links the interim expected probabilities of each agent's favorite alternative to the interim expected probabilities of the compromise.

Lemma 3 A decision rule f is incentive compatible if and only if for every agent i = 1, 2:

(i) q_i is monotonically increasing in t_i;

(ii) p_i(t_i) = p_i(1) + q_i(1) − q_i(t_i) t_i − ∫_{t_i}^1 q_i(s_i) ds_i for all t_i ∈ [0,1].

Proof: We show that condition (ii) of Lemma 3 is equivalent to condition (ii) of Lemma 2:

U_i'(t_i) = q_i(t_i) for all continuity points of q_i
⇔ U_i(t_i) = U_i(1) − ∫_{t_i}^1 q_i(s_i) ds_i for all t_i ∈ [0,1]
⇔ p_i(t_i) + q_i(t_i) t_i = p_i(1) + q_i(1) − ∫_{t_i}^1 q_i(s_i) ds_i for all t_i ∈ [0,1]
⇔ p_i(t_i) = p_i(1) + q_i(1) − q_i(t_i) t_i − ∫_{t_i}^1 q_i(s_i) ds_i for all t_i ∈ [0,1].

Q.E.D.

^{4} See, for example, Section 5.1.1 of Krishna (2002).
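The equivalence between the envelope condition of Lemma 2 and the integral formula of Lemma 3 can be checked numerically. In the sketch below, the choice q(t) = t² is an arbitrary monotone example, and p(1) = 0 is imposed as for a regular rule; neither is taken from the paper's analysis.

```python
# Numerical check: build p from the Lemma 3 formula (with p(1) = 0) for an
# assumed monotone q(t) = t**2, then verify the Lemma 2 envelope condition
# U'(t) = q(t) by a central finite difference.

def q(t):
    return t * t

def integral_q(a, b, n=10_000):
    """Midpoint rule for the integral of q over [a, b]."""
    h = (b - a) / n
    return sum(q(a + (k + 0.5) * h) for k in range(n)) * h

def p(t):
    # Lemma 3 (ii) with p(1) = 0:
    # p(t) = q(1) - q(t) * t - int_t^1 q(s) ds
    return q(1.0) - q(t) * t - integral_q(t, 1.0)

def U(t):
    return p(t) + q(t) * t

h = 1e-5
for t in (0.2, 0.5, 0.9):
    dU = (U(t + h) - U(t - h)) / (2 * h)   # central difference for U'(t)
    assert abs(dU - q(t)) < 1e-4
print("envelope condition U'(t) = q(t) verified at sample points")
```

For q(t) = t² the formula gives U(t) = 2/3 + t³/3, whose derivative is indeed t².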

Next, we seek a result which tells us which functions that assign to every pair of types a probability of the compromise can be part of an incentive compatible decision rule. As the probability of the compromise in our model is the equivalent of the production decision in the public goods model, it seems natural to seek such a result. A characterization of incentive compatible production rules plays a crucial role in the study of Bayesian incentive compatibility in the public goods model. Unfortunately, the only result of this type that seems available in our model is substantially weaker than the result that holds in the public goods model, and this difference between our model and the public goods model is crucial. We therefore develop this point now in detail.

Suppose a compromise rule f_B is given, and assume that it satisfies the monotonicity requirement in part (i) of Lemmas 1, 2 or 3. Can we choose interim expected probabilities of A and C so as to make the decision rule incentive compatible? The equation in part (ii) of Lemma 3 is almost sufficient for this purpose. The only difficulty is that the right hand side of this equation refers to p_i(1). We don't know this probability if we know only f_B. However, it is easy to see that when studying incentive compatible decision rules it is without loss of generality to restrict attention to rules for which p_i(1) = 0 for i = 1, 2. The reason is that, if agent i is of type t_i = 1, then instead of assigning probability to agent i's most preferred alternative, the decision rule may as well assign probability to the compromise B. Agent i is indifferent between the compromise and his most preferred alternative.

To make the argument of the previous paragraph precise, we introduce the following two definitions:

Definition 2 An incentive compatible decision rule f is called regular if t_1 = 1 ⇒ f_A(t_1, t_2) = 0 and t_2 = 1 ⇒ f_C(t_1, t_2) = 0.

Definition 3 Two decision rules f and f̃ are called "interim equivalent" if p_i(t_i) = p̃_i(t_i) and q_i(t_i) = q̃_i(t_i) for every i = 1, 2 and every type t_i ∈ [0,1].

The next lemma provides the precise sense in which it is without loss of generality for us to restrict attention to regular decision rules.

Lemma 4 For every incentive compatible decision rule f there is a regular incentive compatible decision rule f̃ that is interim equivalent to f.

Proof: Let f be an incentive compatible decision rule. Define f̃ to be the decision rule that satisfies:

(i) If t_i < 1 for i = 1, 2 then:

f̃(t_1, t_2) = f(t_1, t_2)

(ii) If t_1 = 1 but t_2 < 1 then:

f̃_A(t_1, t_2) = 0
f̃_B(t_1, t_2) = f_A(t_1, t_2) + f_B(t_1, t_2)
f̃_C(t_1, t_2) = f_C(t_1, t_2)

(iii) If t_1 < 1 but t_2 = 1 then:

f̃_A(t_1, t_2) = f_A(t_1, t_2)
f̃_B(t_1, t_2) = f_B(t_1, t_2) + f_C(t_1, t_2)
f̃_C(t_1, t_2) = 0

(iv) If t_1 = t_2 = 1 then:

f̃_B(t_1, t_2) = 1

It is evident that the decision rule f̃ is interim equivalent to f. It remains to show that f̃ is incentive compatible. Note first that the move from f to f̃ does not affect agent i's incentives if agent i is of type t_i = 1. Such an agent's interim utilities under f̃ are the same as they are under f, whether the agent tells the truth or lies. Consider now an agent i with type t_i < 1. The agent's incentives to pretend to be some other type t_i' < 1 are by construction not affected. Finally, note that such an agent's incentives to pretend to be type t_i' = 1 have been negatively affected: probability that was previously assigned to the agent's most preferred alternative has now been shifted to the compromise, which the agent by assumption values less than his most preferred alternative. Thus, the agent will find it optimal to report his type truthfully.

Q.E.D.
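The construction in the proof of Lemma 4 can be sketched as code. The input rule f below is a hypothetical illustration; the function f_tilde merges the four cases of the proof into two probability shifts, and we check that the result is a lottery and is regular.

```python
# Sketch of the regularization in Lemma 4: probability that f assigns to an
# agent's favorite alternative when that agent's type equals 1 is shifted
# to the compromise B.  The input rule f is a hypothetical illustration.

def f(t1, t2):
    """(f_A, f_B, f_C); a hypothetical decision rule."""
    fB = (t1 + t2) / 4
    rest = 1.0 - fB
    return (rest / 2, fB, rest / 2)

def f_tilde(t1, t2):
    fA, fB, fC = f(t1, t2)
    if t1 == 1.0:             # cases (ii) and (iv): shift f_A into f_B
        fB, fA = fB + fA, 0.0
    if t2 == 1.0:             # cases (iii) and (iv): shift f_C into f_B
        fB, fC = fB + fC, 0.0
    return (fA, fB, fC)

for t1 in (0.0, 0.5, 1.0):
    for t2 in (0.25, 1.0):
        fA, fB, fC = f_tilde(t1, t2)
        assert abs(fA + fB + fC - 1.0) < 1e-12      # still a lottery
        if t1 == 1.0:
            assert fA == 0.0                        # regularity, agent 1
        if t2 == 1.0:
            assert fC == 0.0                        # regularity, agent 2
print("f_tilde is a regular decision rule on the sample grid")
```

At t_1 = t_2 = 1 the two shifts together give f̃_B = f_A + f_B + f_C = 1, reproducing case (iv).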

Now we return to the question of which compromise rules f_B can be part of an incentive compatible decision rule. Restricting attention to regular incentive compatible decision rules, we obtain the following result:

Lemma 5 Consider a function f̂_B : [0,1]^2 → [0,1]. For i = 1, 2 define the interim expected value of f̂_B to be q̂_i(t_i) ≡ ∫_0^1 f̂_B(t_i, t_j) g(t_j) dt_j for all t_i ∈ [0,1], where j ≠ i. If there is a regular incentive compatible decision rule f = (f_A, f_B, f_C) such that f_B = f̂_B, then for i = 1, 2:

(i) q̂_i(t_i) is monotonically increasing in t_i;

(ii) ∫_0^1 ∫_0^1 f̂_B(t_1, t_2) ( t_1 + G(t_1)/g(t_1) + t_2 + G(t_2)/g(t_2) − 1 ) g(t_1) g(t_2) dt_1 dt_2 = q̂_1(1) + q̂_2(1) − 1.

Condition (i) in Lemma 5 is, of course, the same as condition (i) in Lemmas 1-3. To obtain condition (ii) we take the formula in part (ii) of Lemma 3, calculate the ex-ante expected probability of each agent's most preferred alternative, and write down the equation stating that the sum of these ex-ante expected probabilities plus the ex-ante expected probability of the compromise must be equal to 1. The details are as follows.

Proof: The necessity of (i) was already shown in Lemma 1. To see why condition (ii) is necessary, suppose f is a regular incentive compatible decision rule as described in the Lemma, and note first that the fact that probabilities add up to one implies:

∫_0^1 p_1(t_1) g(t_1) dt_1 + ∫_0^1 p_2(t_2) g(t_2) dt_2 + ∫_0^1 ∫_0^1 f_B(t_1, t_2) g(t_1) g(t_2) dt_1 dt_2 = 1.

Next we observe that condition (ii) in Lemma 3 implies for regular decision rules:

p_i(t_i) = q_i(1) − q_i(t_i) t_i − ∫_{t_i}^1 q_i(s_i) ds_i.

Using this formula we can calculate the expected value of p_i(t_i):

∫_0^1 p_i(t_i) g(t_i) dt_i = ∫_0^1 ( q_i(1) − q_i(t_i) t_i − ∫_{t_i}^1 q_i(s_i) ds_i ) g(t_i) dt_i
= q_i(1) − ∫_0^1 q_i(t_i) t_i g(t_i) dt_i − ∫_0^1 ∫_{t_i}^1 q_i(s_i) ds_i g(t_i) dt_i
= q_i(1) − ∫_0^1 q_i(t_i) t_i g(t_i) dt_i − ∫_0^1 q_i(t_i) G(t_i) dt_i
= q_i(1) − ∫_0^1 q_i(t_i) ( t_i + G(t_i)/g(t_i) ) g(t_i) dt_i,

where the third equality follows from changing the order of integration in the double integral.
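The change of order of integration invoked in the third equality, ∫_0^1 [∫_{t}^1 q(s) ds] g(t) dt = ∫_0^1 q(s) G(s) ds, can be verified numerically. The sketch below assumes the uniform distribution (g ≡ 1, G(t) = t) and uses q(t) = t² as an arbitrary monotone test function; both choices are illustrative assumptions.

```python
# Numerical check of the change-of-order-of-integration step:
#   int_0^1 [ int_t^1 q(s) ds ] g(t) dt  =  int_0^1 q(s) G(s) ds.
# Assumptions: uniform distribution (g = 1, G(t) = t), q(t) = t**2.

def q(s):
    return s * s

N = 2_000
h = 1.0 / N
mid = [(k + 0.5) * h for k in range(N)]

# left-hand side: outer midpoint sum over t, inner midpoint sum over s > t
lhs = 0.0
for t in mid:
    inner = sum(q(s) for s in mid if s > t) * h
    lhs += inner * h          # g(t) = 1

# right-hand side: int_0^1 q(s) * G(s) ds with G(s) = s
rhs = sum(q(s) * s for s in mid) * h

assert abs(lhs - rhs) < 1e-3
print(round(lhs, 3), round(rhs, 3))
```

For this test function both sides equal ∫_0^1 s³ ds = 1/4 up to discretization error.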

Now we can substitute these expressions into the equation with which we started:

q_1(1) − ∫_0^1 q_1(t_1) ( t_1 + G(t_1)/g(t_1) ) g(t_1) dt_1
+ q_2(1) − ∫_0^1 q_2(t_2) ( t_2 + G(t_2)/g(t_2) ) g(t_2) dt_2
+ ∫_0^1 ∫_0^1 f_B(t_1, t_2) g(t_1) g(t_2) dt_1 dt_2 = 1.

Re-arranging terms, and recalling that f_B = f̂_B and that q_i(t_i) = q̂_i(t_i), yields the assertion.

Q.E.D.

Condition (ii) in Lemma 5 is the analogue of the ex-ante balanced budget
constraint in mechanism design for public goods problems. In the public
goods problem a budget that is balanced ex ante can also be balanced ex post, as
observed in Theorem 1 of Mailath and Postlewaite (1990). Therefore, the
result of Lemma 5, in the public goods context, is an *if and only if* result.

In our context, however, Lemma 5 gives only a *necessary, not a* *sufficient*
condition for incentive compatibility. To prove that ex-ante budget balance
implies ex-post budget balance, Mailath and Postlewaite use a formula simi-
lar to one used in Cramton, Gibbons and Klemperer (1987, proof of Lemma
4). Using this formula in our context would lead to a violation of the agents'
budget constraints. It is thus the fact that in our model agents have budget
constraints that makes it impossible to turn Lemma 5 into an *if and only if*
result.

To obtain necessary and sufficient conditions, we might also seek to apply
a result similar to Proposition 3.1 of Border (1991).^{5} Border's result provides
necessary and sufficient conditions that the functions $p_i$ have to satisfy if they
can be obtained as the interim expected values of functions $f_A$ and $f_C$.^{6}
We would need to generalize Border's result to a setting in which the function
$f_B$ is pre-determined. This seems complicated. Our paper shows that some
headway can be made even if one works only with the necessary condition in Lemma 5.

^{5}Border's motivation for proving this result stems from the theory of optimal auctions with risk averse buyers. It generalizes earlier results obtained by Maskin and Riley (1984) and Matthews (1984) in the context of that particular application. We are grateful to Mark Armstrong for drawing our attention to the connection between our work and this literature.

^{6}Border's result assumes symmetry: $p_1 = p_2$, and $f_A(t, t') = f_C(t', t)$.

5. Normative Properties of Decision Rules

We calculate the expected welfare associated with a decision rule
$f$ using a utilitarian welfare criterion. This corresponds to evaluating
the ex ante expected utility of an agent who does not know whether he will
be agent 1 or 2, and who does not yet know his type. We assume that the
probability of being either agent 1 or agent 2 is equal to 1/2. We can then
omit the probability weights when calculating ex ante expected utility and
simply consider an unweighted sum.

Definition 4 *The ex ante expected utility associated with decision rule* $f$ *is:*

$$\int_0^1 U_1(t_1)g(t_1)\,dt_1 + \int_0^1 U_2(t_2)g(t_2)\,dt_2.$$

The ex ante expected utility associated with a decision rule $f$ can equivalently be written as:

$$1 + \int_0^1\int_0^1 f_B(t_1,t_2)(t_1 + t_2 - 1)g(t_1)g(t_2)\,dt_1\,dt_2.$$

In this formula, we might as well omit the constant 1, which is what we shall do below.

From the above formula it is obvious that the decision rules $f$ that maximize ex ante expected utility among all decision rules are those that are "first-best" in the sense of the following definition.

Definition 5 *A decision rule* $f$ *is called* first-best *if with probability 1 we have:*

$$t_1 + t_2 > 1 \;\Rightarrow\; f_B(t_1,t_2) = 1 \quad\text{and}\quad t_1 + t_2 < 1 \;\Rightarrow\; f_B(t_1,t_2) = 0.$$

Note that there are many first-best decision rules. The reason is firstly
that Definition 5 requires the listed conditions to hold *with probability
one*, but not always. The reason is secondly, and more importantly, that the
above definition does not restrict the probabilities with which alternatives $A$
and $C$ are chosen if the compromise is *not* implemented.
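To make the welfare level attained by first-best rules concrete: under the uniform distribution ($g \equiv 1$), the welfare integral (with the constant 1 omitted) attained by any first-best rule is $\int_0^1\int_0^1 \max(t_1+t_2-1,0)\,dt_1\,dt_2 = 1/6$. The following sketch is our own illustration of this uniform special case, not part of the paper.

```python
# Welfare attained by any first-best rule under the uniform distribution:
# f_B(t1, t2) = 1 exactly when t1 + t2 > 1, so the welfare integral is
# the double integral of max(t1 + t2 - 1, 0) over the unit square.
def first_best_welfare(n=800):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t1 = (i + 0.5) * h
        for j in range(n):
            t2 = (j + 0.5) * h
            total += max(t1 + t2 - 1.0, 0.0) * h * h
    return total

w = first_best_welfare()
print(w)  # ≈ 1/6
```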

A second normative property that will play a role in this paper is the following symmetry condition.

Definition 6 *A decision rule* $f$ *is called* symmetric *if for all* $t, t' \in [0,1]$ *we have:* $f_A(t, t') = f_C(t', t)$.

Our interest will be in those decision rules that maximize ex ante expected utility among all incentive compatible rules. We define:

Definition 7 *A decision rule* *f* *is called* second-best *if it yields the largest*
*ex ante expected utility among all incentive compatible decision rules.*

The following simple result shows that when considering whether first-best rules are incentive compatible, or when investigating second-best rules, there is no loss of generality in considering symmetric and regular decision rules only.

Lemma 6 *For every incentive compatible decision rule* $f$ *there is a symmetric and regular incentive compatible decision rule* $f'$ *which yields the same ex ante expected utility as* $f$.

Proof: Let $f$ be an incentive compatible decision rule. Define first $f^*$ to be
the rule which results if agents are first asked to reveal their types, then a fair
coin is tossed and, if heads comes up, $f$ is applied, but if tails comes up, $f$ is
applied except that the roles of the agents are reversed, i.e. agent 2 now plays
the role of agent 1, and vice versa. Because $f$ is incentive compatible, both
agents would have no incentive to distort their preferences if they knew the
outcome of the coin toss in advance. Therefore they also have no incentive
to distort preferences ex ante, and $f^*$ is incentive compatible. It is obvious
that the new rule $f^*$ is symmetric, and that it has the same ex ante expected
utility as $f$. Now replace $f^*$ by its regular equivalent $f'$, as described in
Lemma 4. Note that the new rule is still symmetric. Therefore, $f'$ has all
the properties asserted in Lemma 6.

Q.E.D.

6. Impossibility of Implementing First-Best Rules

We now investigate whether first-best decision rules are incentive compatible.

Proposition 1 *No first-best decision rule is incentive compatible.*

We prove Proposition 1 by showing that a first-best decision rule that
is incentive compatible must violate condition (ii) in Lemma 5. This means
that the ex-ante probability of the compromise, and the ex-ante probabilities
of alternatives $A$ and $C$, as implied by incentive compatibility, don't add
up to one. The ex-ante balanced budget constraint, which in our setting is
necessary, although not sufficient, for incentive compatibility, is violated.

Note that the structure of our argument parallels the arguments behind well-known impossibility results, such as that of Myerson and Satterthwaite (1983) for bilateral trade, and the results of Rob (1989), Mailath and Postlewaite (1990), Hellwig (2003) and Norman (forthcoming) for public goods.

The most important difference between the structure of our results and the
results in earlier papers is that in those papers the boundary condition which
makes it possible to pin down agents' payments results from an individual
rationality constraint, whereas in our paper it results from the fact that the
first-best outcome in state $t_i = 1$ is such that each agent's "budget" is
exhausted, and therefore utility cannot be shifted through side payments.

Proof: We assume that $f$ is a first-best decision rule that is incentive compatible. Without loss of generality, we assume that $f$ is symmetric and regular.

We then deduce that *f* violates condition (ii) in Lemma 5. By Lemma 5 we
then have a contradiction.

We consider first the right hand side of the equation in condition (ii). A
first-best decision rule satisfies for every agent $i$ with probability 1:

$$q_i(t_i) = 1 - G(1 - t_i).$$

We wish to show that this equality must also hold at $t_i = 1$, i.e. that we
need to have $q_i(1) = 1$. The proof is indirect. Suppose $q_i(1) < 1$. Because
$f$ is regular, we have $p_i(1) = 0$. Hence $U_i(1) = q_i(1) < 1$. Because
$q_i(t_i) = 1 - G(1-t_i)$ with probability 1, there is a sequence $\{t_i^n\}_{n\in\mathbb{N}}$
of elements of $[0,1]$ such that $\lim_{n\to\infty} t_i^n = 1$ and $q_i(t_i^n) = 1 - G(1-t_i^n)$ for all
$n\in\mathbb{N}$. Hence $\lim_{n\to\infty} q_i(t_i^n) = 1$. Therefore, agent $i$ with type $t_i = 1$ will find it
advantageous to pretend to be of type $t_i^n$ for sufficiently large $n$: by reporting
$t_i^n$ he obtains at least $q_i(t_i^n)$, which eventually exceeds $q_i(1) = U_i(1)$. Thus, the rule is not incentive
compatible. We conclude that $q_i(1) = 1$ for both
$i = 1,2$, and hence that the right hand side of the equation in condition (ii) of Lemma 5 equals 1.
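For the uniform distribution, the first-best interim probability $q_i(t_i) = 1 - G(1-t_i)$ simplifies to $q_i(t_i) = t_i$, which can be confirmed numerically. The sketch below is our own illustration, not part of the paper.

```python
# Interim compromise probability of a first-best rule under the uniform
# distribution: q_i(t) = integral of 1{t + s > 1} over s in [0, 1], which
# should equal 1 - G(1 - t) = t.
def q_first_best(t, n=100000):
    """Midpoint-grid measure of {s in [0,1] : t + s > 1}."""
    h = 1.0 / n
    return sum(h for k in range(n) if t + (k + 0.5) * h > 1.0)

checks = [(t, q_first_best(t)) for t in (0.25, 0.5, 0.9)]
for t, qv in checks:
    print(t, qv)  # qv ≈ t in every case
```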

When considering the left hand side of the equation in condition (ii) we can assume without loss of generality that the condition that defines first best holds always, and not just with probability 1. This is because the left hand side of the equation only contains integrals of the decision rule probabilities, and therefore values that are taken on sets of measure zero don’t matter.

The left hand side of the equation in condition (ii) of Lemma 5 can be written as the sum of three integrals:

$$\sum_{i\in\{1,2\}} \left( \int_0^1\int_0^1 f_B(t_1,t_2)\left(t_i + \frac{G(t_i)}{g(t_i)}\right) g(t_1)g(t_2)\,dt_1\,dt_2 \right) - \int_0^1\int_0^1 f_B(t_1,t_2)g(t_1)g(t_2)\,dt_1\,dt_2.$$

We evaluate these three integrals separately.

We begin with the first two integrals. Let $i\in\{1,2\}$, and assume $j \neq i$.

$$\begin{aligned}
\int_0^1\int_0^1 f_B(t_1,t_2)\left(t_i + \frac{G(t_i)}{g(t_i)}\right)g(t_i)g(t_j)\,dt_j\,dt_i
&= \int_0^1\int_0^1 f_B(t_1,t_2)\bigl(t_i g(t_i) + G(t_i)\bigr)g(t_j)\,dt_j\,dt_i \\
&= \int_0^1 \bigl(t_i g(t_i) + G(t_i)\bigr)\bigl(1 - G(1-t_i)\bigr)\,dt_i \\
&= \int_0^1 t_i g(t_i)\,dt_i + \int_0^1 G(t_i)\,dt_i - \int_0^1 \bigl(t_i g(t_i) + G(t_i)\bigr)G(1-t_i)\,dt_i \\
&= \Bigl[t_i G(t_i)\Bigr]_0^1 - \int_0^1 G(t_i)\,dt_i + \int_0^1 G(t_i)\,dt_i - \int_0^1 \bigl(t_i g(t_i) + G(t_i)\bigr)G(1-t_i)\,dt_i \\
&= 1 - \int_0^1 \bigl(t_i g(t_i) + G(t_i)\bigr)G(1-t_i)\,dt_i.
\end{aligned}$$
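This chain of equalities can be checked numerically in the uniform case ($g \equiv 1$, $G(t) = t$), where the inner integral of $f_B$ over $t_j$ equals $q_i(t_i) = t_i$, so the first line reduces to $\int_0^1 2t_i \cdot t_i\,dt_i$ and the last line to $1 - \int_0^1 2t_i(1-t_i)\,dt_i$; both equal $2/3$. The sketch below is our own illustration.

```python
# Uniform-case check of the equality chain: g = 1, G(t) = t, first-best f_B.
# Inner integral of f_B over t_j equals q_i(t_i) = t_i, so the double integral
# reduces to ∫ (t*g(t) + G(t)) * t dt = ∫ 2t² dt; the closed form is
# 1 - ∫ (t*g(t) + G(t)) * G(1-t) dt = 1 - ∫ 2t(1-t) dt.
def riemann01(f, n=100000):
    """Midpoint Riemann sum of f over [0, 1]."""
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

direct = riemann01(lambda t: 2.0 * t * t)
closed = 1.0 - riemann01(lambda t: 2.0 * t * (1.0 - t))

print(direct, closed)  # both ≈ 2/3
```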
Note also that:

$$\int_0^1\int_0^1 f_B(t_1,t_2)g(t_i)g(t_j)\,dt_j\,dt_i = \int_0^1 \bigl(1 - G(1-t_i)\bigr)g(t_i)\,dt_i = 1 - \int_0^1 g(t_i)G(1-t_i)\,dt_i.$$

Our objective is now to prove that the left hand side of the equation in condition (ii) of Lemma 5 is smaller than the right hand side, i.e.:

$$1 - 2\int_0^1 \bigl(t_i g(t_i) + G(t_i)\bigr)G(1-t_i)\,dt_i + \int_0^1 g(t_i)G(1-t_i)\,dt_i < 1$$

$$\Leftrightarrow\quad \int_0^1 (2t_i - 1)G(1-t_i)g(t_i)\,dt_i + 2\int_0^1 G(t_i)G(1-t_i)\,dt_i > 0.$$

Denote the first integral on the left hand side of this inequality by $I$:

$$I \equiv \int_0^1 (2t_i - 1)G(1-t_i)g(t_i)\,dt_i.$$

Integration by parts yields:

$$\begin{aligned}
I &= \Bigl[(2t_i - 1)G(1-t_i)G(t_i)\Bigr]_0^1 - \int_0^1 G(t_i)\bigl(2G(1-t_i) - (2t_i - 1)g(1-t_i)\bigr)\,dt_i \\
&= -2\int_0^1 G(t_i)G(1-t_i)\,dt_i + \int_0^1 (2t_i - 1)g(1-t_i)G(t_i)\,dt_i,
\end{aligned}$$

where the boundary term vanishes because $G(0) = 0$.

A change of variables, setting $\tau_i = 1 - t_i$, allows us to rewrite the second integral on the right hand side as follows:

$$\int_0^1 (2t_i - 1)g(1-t_i)G(t_i)\,dt_i = -\int_0^1 (2\tau_i - 1)g(\tau_i)G(1-\tau_i)\,d\tau_i.$$

Thus, replacing $\tau_i$ again by $t_i$, we find:

$$I = -2\int_0^1 G(t_i)G(1-t_i)\,dt_i - I,$$

which implies

$$I = -\int_0^1 G(t_i)G(1-t_i)\,dt_i.$$

_{i}Substituting this into the inequality which we want to prove, we obtain:

*−2*
Z _{1}

0

*G(t** _{i}*)G(1

*−t*

*)dt*

_{i}*+ 2*

_{i}*−*Z

_{1}

0

*G(t** _{i}*)G(1

*−t*

*)dt*

_{i}

_{i}*>*0

*⇔*Z

_{1}

0

*G(t**i*)G(1*−t**i*)dt*i* *>* 0,
which is obviously true.

Q.E.D.
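The contradiction in this proof can be exhibited numerically for the uniform distribution ($g \equiv 1$, $G(t) = t$): the left hand side of condition (ii) under a first-best rule evaluates to $1 - 2\int_0^1 2t(1-t)\,dt + \int_0^1 (1-t)\,dt = 5/6$, strictly below the right hand side of 1. The sketch below is our own illustration, not part of the paper.

```python
# Uniform-case evaluation of the inequality just established: the left hand
# side of condition (ii) under a first-best rule is
# 1 - 2*∫(t*g + G)*G(1-t)dt + ∫g*G(1-t)dt, with g = 1 and G(t) = t.
def riemann01(f, n=100000):
    """Midpoint Riemann sum of f over [0, 1]."""
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

lhs = (1.0
       - 2.0 * riemann01(lambda t: 2.0 * t * (1.0 - t))
       + riemann01(lambda t: 1.0 - t))

print(lhs)  # ≈ 5/6, strictly below the right hand side of 1
```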

7. An Auxiliary Second-Best Problem

In the light of Proposition 1, our next objective is to investigate second-best
decision rules. Finding general characteristics of second-best rules is
hard. To see the difficulty, suppose we attempted to determine simultaneously
all three functions $f_A$, $f_B$ and $f_C$ that make up the second-best decision rule. The set of constraints that we would have to take into account would consist of the incentive compatibility constraints and the constraints that, for each vector of types $(t_1, t_2)$, the probabilities $f_A(t_1,t_2)$, $f_B(t_1,t_2)$, and $f_C(t_1,t_2)$ have to be non-negative, and have to add up to one. This latter set of constraints is difficult to handle.

We shall take a different approach, and initially focus only on the function
$f_B$ that determines the probability of the compromise $B$. Recall that ex-ante expected welfare depends on this function only. Focusing on this function is analogous to an approach that is often pursued in mechanism design in other contexts. This approach is to first determine the welfare-relevant decisions, and to supplement the analysis later with a determination of payment rules that make the efficiency-relevant part of the collective decision incentive compatible.

The constraints that we shall take account of when investigating the
second-best compromise rule $f_B$ are those listed in Lemma 5. We emphasize
that these constraints are necessary, but not sufficient, for the function $f_B$
to be part of an incentive compatible decision rule. This differentiates our study from the traditional study of the public goods problem, where the analogous constraints are necessary and sufficient. In our context, we are thus considering a problem in which some constraints are omitted. If the solution happens to satisfy these other constraints as well, then it will be second-best.
However, if it violates these other constraints, then the second best will be
different. For some special cases we shall find in the next section that the omitted constraints are satisfied. However, for the general case, we have not been able to show this.

The optimization problem that we are considering to determine a candidate solution $f_B$ is as follows:

Auxiliary Optimization Problem: *Choose* $f_B$ *so as to maximize*

$$\int_0^1\int_0^1 f_B(t_1,t_2)(t_1 + t_2 - 1)g(t_1)g(t_2)\,dt_1\,dt_2$$

*subject to:*

*(i)* $q_i(t_i)$ *is weakly monotonically increasing in* $t_i$ *for every* $i\in\{1,2\}$;

*(ii)*
$$\int_0^1\int_0^1 f_B(t_1,t_2)\left(t_1 + \frac{G(t_1)}{g(t_1)} + t_2 + \frac{G(t_2)}{g(t_2)} - 1\right)g(t_1)g(t_2)\,dt_1\,dt_2 = q_1(1) + q_2(1) - 1.$$
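To see what the auxiliary problem looks like in a concrete case, the following sketch (our own illustration, not part of the paper) assumes the uniform distribution, so that $G(t)/g(t) = t$, and evaluates the objective and constraint (ii) for a candidate rule, here the first-best $f_B(t_1,t_2) = 1\{t_1 + t_2 > 1\}$. The candidate violates the constraint ($5/6 \neq 1$), consistent with Proposition 1.

```python
# Evaluate the auxiliary problem's objective and constraint (ii) for the
# uniform distribution (G(t)/g(t) = t) and a candidate rule, here the
# first-best f_B(t1, t2) = 1{t1 + t2 > 1}; constraint (ii) fails (5/6 vs 1).
def grid_integral(h_fn, n=800):
    """Midpoint double integral of h_fn over the unit square."""
    step = 1.0 / n
    total = 0.0
    for i in range(n):
        t1 = (i + 0.5) * step
        for j in range(n):
            t2 = (j + 0.5) * step
            total += h_fn(t1, t2) * step * step
    return total

f_B = lambda t1, t2: 1.0 if t1 + t2 > 1.0 else 0.0

objective = grid_integral(lambda t1, t2: f_B(t1, t2) * (t1 + t2 - 1.0))
lhs_ii = grid_integral(lambda t1, t2: f_B(t1, t2) * (2.0*t1 + 2.0*t2 - 1.0))
rhs_ii = 1.0 + 1.0 - 1.0   # q_1(1) = q_2(1) = 1 for a first-best rule

print(objective, lhs_ii, rhs_ii)  # ≈ 0.1667, ≈ 0.8333, 1.0
```

The same routine can be reused to evaluate any other candidate $f_B$ against the constraint.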

To simplify our analysis of the auxiliary optimization problem, we make the following assumption:

Monotonicity Assumption: $G(t)/g(t)$ *is a monotonically increasing function of* $t$.

Under this assumption a solution to the auxiliary optimization problem that satisfies condition (ii) above also satisfies the monotonicity condition (i). A standard application of Lagrange multipliers leads to the following result.

We omit the proof of this result.