
Markovian Assignment Rules∗

Francis Bloch
Ecole Polytechnique and Brown University

and

David Cantala
Colegio de Mexico

May 2008

VERY PRELIMINARY AND INCOMPLETE

Abstract

We analyze dynamic assignment problems where agents successively receive different objects (positions, offices, etc.). A finite set of n vertically differentiated indivisible objects are assigned to n agents who live n periods. At each period, a new agent enters society, and the oldest agent retires, leaving his object to be reassigned. A Markovian assignment rule specifies the probability that agents receive objects, and generates a finite Markov chain over the set of assignments. We define independent assignment rules (where the assignment of an object to an agent is independent of the objects currently held by the other agents), efficient assignment rules (where there does not exist another assignment rule with larger expected surplus), and analyze the dynamic properties of the Markov chains generated by assignment rules. When agents are homogeneous, we characterize independent convergent assignment rules, and provide sufficient conditions for irreducibility and ergodicity. When agents draw their types at random, we prove that independence and efficiency are incompatible, and study the class of assignment rules which satisfy a property of quasi-convergence.

JEL Classification Numbers: C78, D73, M51

Keywords: dynamic assignment, finite Markov chains, seniority, promotion rules

∗We thank Hervé Moulin and William Thomson for useful discussions on dynamic assignment rules. They should not be blamed for the shortcomings of this very preliminary draft.


1 Introduction

Economic models of matching and assignment are essentially static, and only consider assignments one at a time. In Gale and Shapley (1962)'s marriage problem, divorce is not allowed and men and women are married forever; in the roommate problem, college students are assigned to dorms every year, independently of their previous year's assignment; in the assignment problem (Gale and Shapley (1971)), workers and firms negotiate their contracts irrespective of their history; and in the school choice problem (Abdulkadiroglu and Sonmez (2003)), pupils are admitted to schools independently of their previous schooling history.2 Models of assignment of indivisible goods, like Shapley and Scarf (1974)'s house allocation problem, also focus on a single, static assignment. The history is entirely captured by the current ownership or tenancy structure (Abdulkadiroglu and Sonmez (1999)), and successive reassignments of houses are not considered.3

However, there exist situations where assignment rules are dynamic, and successive assignments cannot be analyzed separately. For example, consider the assignment of positions to civil servants in centralized systems. In France, teachers who want to transfer are reassigned to high schools according to a complex priority system, which takes into account current position, seniority and on-the-job seniority, as illustrated in Table 1.4 In India, officers of the Indian Administrative Service are also reassigned according to their seniority, current position, career history and rank at the entrance exam to the IAS (Iyer and Mani (2008)). More generally, successive job assignments inside organizations are decided according to complex rules putting weight on seniority, performance and career history.

Priority systems based on seniority seem to be prevalent in many different settings. Airline pilots and flight attendants get to choose their flight assignments according to seniority.5 Assignment of subsidized housing to potential tenants often gives priority to agents who have the highest seniority on the waiting list. In many industries, seniority rules govern priorities for layoffs and promotions (see for example the historical account given by Lee (2004)).

2For a survey of models of two-sided matching, see the excellent monograph by Roth and Sotomayor (1990).

3See Thomson (2007) for an exhaustive account of the literature on the allocation of indivisible goods.

4The exact priority rules are published every year by the French Ministry of Education. For the 2008 rules, see for example www.ac-rouen.fr/rectorat/profession mutation/inter bareme.pdf.

5This emphasis on seniority explains the difficulties in merging pilots' seniority lists when two airline companies contemplate a merger. Northwest Airlines and Delta have recently faced this problem, as pilot associations of the two airlines have so far been unable to agree on a common seniority list. See "NWA pilots set merger conditions," Minneapolis Star Tribune, January 18, 2008.

In other situations, assignment rules do not favor the agents with highest seniority. For example, in order to minimize moving costs, offices of retiring employees are likely to be reassigned to newcomers. This rule favors agents with the lowest seniority, and we will term it the "replacement rule". Alternatively, offices and positions can be reassigned at random. Random assignments based on uniform distributions, termed "uniform rules", are used in a wide range of contexts and deserve a special study.

Dynamic assignment rules differ from static assignment rules in two important ways. First, in a dynamic setting, agents are not only characterized by their current preferences, but also by their history (past assignments and past preferences). Assignment rules can use this information, and condition the allocation of objects on characteristics summarizing the agent's history, like seniority or on-the-job seniority. Second, in a dynamic setting, the set of agents to whom objects are allocated is not constant. Agents enter and leave the pool of agents to whom objects are allocated. For example, every year, some civil servants retire while others are recruited. Given that the number of positions is fixed, retiring civil servants free their positions, which can be reallocated to other agents, and so on, until all positions are filled. Similarly, if the number of offices is fixed, the allocation of offices in organizations depends on the flow of agents entering and leaving the organization. Agents leaving the organization free their offices, which can be reallocated sequentially until the agents entering the organization are assigned an office.

In this paper, we analyze dynamic assignment rules to allocate a fixed number of objects (offices or positions) to a dynamic set of agents entering and leaving society. In order to simplify the analysis, we assume that all agents have the same preferences over the objects, which can thus be ranked according to their desirability.6 We identify a state of the society with an assignment of objects to agents, where agents are distinguished by their seniority. A Markovian assignment rule specifies how objects are allocated to agents according to the current assignment of objects in society.

We focus attention in this paper on the dynamic properties of the finite Markov chains generated by different assignment rules. We study which assignment rules are convergent (every assignment leads to a unique absorbing state), ergodic (the long run behavior of the chain is independent of the initial assignment) and irreducible (all assignments occur with positive probability in the invariant distribution). We also define different notions of independence, specifying how the assignment of object j to agent i depends on the objects currently held by the other agents. Finally, we consider a static notion of efficiency requiring that there does not exist an alternative assignment rule which generates an expected total surplus at least as large (and sometimes strictly larger) at every state.

6This is of course a very strong simplifying assumption, but it can often be justified. For example, civil servants often have congruent preferences over positions, ranking the importance of various positions in the same way. Similarly, most agents will agree on the ranking of offices according to their sizes and locations.

We discuss four specific Markovian assignment rules. The seniority rule allocates object j to the oldest agent who holds an object smaller than j. The rank rule allocates object j to the agent holding object j − 1. The uniform rule allocates object j with equal probability to all agents who currently hold objects smaller than j. Finally, the replacement rule allocates the object of the agent leaving society to the agent entering society.

Our main result shows that any convergent rule satisfying independence must be a weighted combination of the rank and seniority rules. Convergence in our setting can be understood as a condition of equity across cohorts. In a convergent Markov chain, the absorbing state will eventually be reached, and agents entering society at different dates will experience the exact same history. Hence, our analysis gives support to the rank and seniority rules as rules satisfying both a natural notion of independence and a condition of equity.

Ergodic and irreducible rules cannot be characterized as easily. However, we show that any rule which allocates object j to the agent currently holding object j − 1 with positive probability is ergodic. On the other hand, rules which do not allow for transitions across two different assignments µ and µ′ such that µ(n) = µ′(n) = n, like the replacement rule, are not ergodic, and different initial conditions will lead to different long run behaviors of the Markov chain.

We also provide a sufficient condition for irreducibility of independent chains, in terms of the graph of intercommunicability of different states. This condition states that, for an independent assignment rule, the chain is irreducible whenever every object is reassigned to the entering agent with positive probability and the undirected graph formed by all pairs (i, j) of objects such that an agent holding object i receives object j with positive probability is connected. This condition will always be satisfied when the probability of allocating object j to the entering agent and to the agent holding object j − 1 is positive, and also when the probability of allocating object j to both the entering agent and the agent holding object 1 is positive.

In the second part of the paper, we explore the properties of assignment rules where agents are heterogeneous, and draw a type at random before entering society. We first show that independence becomes a very strong condition when agents are heterogeneous, as it implies that assignments are independent of the agents' types. Assuming that the surplus is a supermodular function of the quality of the object and the agents' ordered types, efficiency requires an assortative matching between objects and types. We can thus study whether or not efficient assignment rules exist, and first show that there exists an incompatibility between independence and efficiency.

When agents' types are drawn at random, the evolution of states is governed by two simultaneous forces: the exogenous draw of type profiles and the endogenous evolution of assignments. We define quasi-convergent assignment rules as rules for which the unique recurrent set only admits one assignment per type profile, thereby generalizing convergent rules. This generalization captures the same equity consideration as before, as any two agents born at different points in time in the same society will experience the same history if the assignment rule is quasi-convergent. We can easily characterize quasi-convergent independent rules. Whether there exist assignment rules satisfying both our notions of equity and efficiency (quasi-convergent efficient rules) remains an open question that we hope to settle shortly.

1.1 Related literature

We situate our paper with respect to the existing literature in economics and operations research. As noted above, economic models of assignment do not allow for the type of dynamic changes in the population and objects that we consider here. The papers in this literature most closely related to ours are those by Moulin and Stong (2002, 2003), which consider axiomatic rules to allocate balls entering in succession into multicolor urns. As in our model, the state of the urns varies over time, and the allocation depends on the current state and influences the transition across states. However, in all other respects, the problems we study are very different.

To the best of our knowledge, the literature in operations research and computer science on queuing and assignment has not considered the problem we study in this paper.7 Typically, problems of queuing in operations research do not model preferences of agents over the servers, do not identify agents who enter the system repeatedly and whose utility is defined over a sequence of assignments, and do not impose, as we do, individual rationality conditions stating that an agent cannot be assigned a lower object than the one he currently holds.

7See, for example, Kleinrock (1975) and (1976) for a simple exposition of assignment models in queuing theory and computer science.

In management, the literature on Markov manpower systems also analyzes the allocation of agents to vertically differentiated positions as a Markov process.8 But the focus of the literature is very different from our approach. Studies of manpower planning investigate how planners can achieve a fixed target (in terms of the sizes of different grades in the hierarchy of the organization) by controlling promotion and recruitment flows. By contrast, our study is based on the structure and properties of allocation rules which determine the reassignment of objects at every period.

In personnel and labor economics, seniority rules for promotion and layoffs have been analyzed, both theoretically and empirically (see Lazear (1995)). Theoretical models emphasize seniority promotion rules as a way to provide incentives to workers to acquire firm-specific human capital (Carmichael (1983)). Empirical studies of internal labor markets, like Chiappori, Salanie and Valentin (1999)'s study of promotions and careers in a large firm in France, provide a more complex and nuanced view of the effects of seniority, human capital acquisition and innate abilities on career histories.

2 The Model

2.1 Markovian assignment rules

We consider a society I of n agents indexed by their age, i = 1, 2, ..., n, and n vertically differentiated indivisible goods, indexed by j = 1, 2, ..., n, in the set J. Time is discrete and runs as t = 1, 2, .... Each agent lives for exactly n periods, and at each date t, one agent dies and leaves the society whereas another agent enters the society.

Definition 1 An assignment µ is a mapping from I to J assigning to every agent i the object j = µ(i). Given that I and J have the same cardinality, an assignment can be identified with a permutation over the finite set {1, 2, ..., n}.

A state of the society is totally described by the assignment of objects to agents. Hence there are n! states in a society of n agents, each corresponding to a different permutation over the finite set {1, 2, ..., n}.

8See the books by Bartholomew (1982) and Vajda (1978) for a review of the early contributions, and the recent paper by Nilakantan and Ragavhandra (2005) for an account of recent work.


We now consider the dynamical structure of the society. Let µt denote the state of the economy at date t. At the beginning of period t + 1, the oldest agent alive in period t relinquishes object µt(n) and leaves the society. A new agent enters with no object (by convention, we denote the null object by 0), and agents i = 2, 3, ..., n retain the objects they were assigned in the previous period, µt(i − 1). A new assignment will then be chosen, to allocate object µt(n) to one of the current members of society. The assignment of object µt(n) will in turn free a new object to be reassigned, etc. The cascade of assignments will end when the new agent is assigned an object.
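The one-period dynamic just described can be sketched in code. This is a minimal illustration in our own encoding, not code from the paper: an assignment mu is a list where mu[i-1] is the object held by the agent of age i, and pick(j, nu) is a deterministic rule naming the agent who receives the freed object j.

```python
def step(mu, pick):
    """One period: the retiree frees mu[n-1], every agent ages by one,
    and the freed object cascades until the entrant is assigned."""
    nu = [0] + mu[:-1]      # ages shift; the entrant holds the null object 0
    freed = mu[-1]          # the retiring agent relinquishes mu(n)
    while freed != 0:       # cascade ends once the entrant receives an object
        i = pick(freed, nu)                   # agent receiving the freed object
        nu[i - 1], freed = freed, nu[i - 1]   # swap in; his old object is freed
    return nu

# Example with the rank rule (object j goes to the holder of j - 1):
rank = lambda j, nu: nu.index(j - 1) + 1
print(step([2, 1, 3], rank))   # [1, 3, 2]
```

The while loop terminates because the freed object strictly decreases at every swap, ending at the null object once the entrant is served.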

Definition 2 A truncated assignment ν, given some object j, is a mapping from I \ {1} to J \ {j}, reflecting the assignment of the objects in J \ {j} to the n − 1 oldest agents in the society.

We focus on assignment rules which only depend on the current truncated assignment ν of objects to agents, and not on the entire history of assignments in the society. Assignment rules which possess this Markovian property are called Markovian assignment rules. An assignment rule allocates any object j to those agents who currently possess an object k < j. We suppose that agents cannot be forced to give back their current object. The assignment rule must thus satisfy an individual rationality condition, and cannot assign to agent i an object j < ν(i). Formally, we define:

Definition 3 A Markovian assignment rule is a collection of vectors αj(ν) in Rn for j = 1, 2, ..., n satisfying: αj(ν, i) ≥ 0 for all i and ∑_{i: ν(i)<j} αj(ν, i) = 1. The number αj(ν, i) denotes the probability that agent i receives object j given the truncated assignment ν.

2.2 Examples of Markovian assignment rules

We now describe four different Markovian assignment rules which have a simple interpretation.

The seniority rule assigns object j to the oldest agent with an object smaller than j: αj(ν, i) = 1 if and only if i = max{k | ν(k) < j}.

The rank rule assigns object j to the agent who currently owns object j − 1: αj(ν, i) = 1 if and only if ν(i) = j − 1.

The uniform rule assigns object j to all agents who own objects smaller than j with equal probability: αj(ν, i) = 1/|{k | ν(k) < j}| for all i such that ν(i) < j.

The replacement rule assigns object j to the entering agent: αj(ν, i) = 1 if and only if i = 1.
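The four rules can be written directly as probability functions. The sketch below uses our own encoding, not the paper's: nu is a list of length n where nu[i-1] is the object held by the agent of age i, with nu[0] == 0 for the entrant, and each function returns αj(ν, i).

```python
def seniority_rule(j, nu, i):
    # object j goes to the oldest agent holding an object smaller than j
    oldest = max(k for k in range(1, len(nu) + 1) if nu[k - 1] < j)
    return 1.0 if i == oldest else 0.0

def rank_rule(j, nu, i):
    # object j goes to the agent currently holding object j - 1
    return 1.0 if nu[i - 1] == j - 1 else 0.0

def uniform_rule(j, nu, i):
    # object j goes with equal probability to every eligible agent
    eligible = [k for k in range(1, len(nu) + 1) if nu[k - 1] < j]
    return 1.0 / len(eligible) if i in eligible else 0.0

def replacement_rule(j, nu, i):
    # object j always goes to the entering agent
    return 1.0 if i == 1 else 0.0
```

With nu = [0, 1, 2] and object j = 3, the seniority and rank rules both pick the agent of age 3, the uniform rule gives each of the three eligible agents probability 1/3, and the replacement rule picks the entrant.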


Notice that some common rules are not Markovian. For example, rules based on on-the-job seniority require information about the number of periods during which agent i has owned object j, information which cannot be recovered from the current assignment ν.

2.3 Independent assignment rules

A Markovian assignment rule may condition the assignment of object j to agent i on the objects currently held by the other agents (the truncated assignment ν). A simple property of Markovian assignment rules is independence, stating that the assignment of object j to agent i does not depend on the objects currently held by the other agents:

Definition 4 A Markovian assignment rule α satisfies independence if and only if, for any j, for any i, and for any ν, ν′ such that ν(i) = ν′(i), αj(ν, i) = αj(ν′, i).

The independence property is appealing because it states that an agent's assignment only depends on his characteristics (age and object currently held) and not on the characteristics of the other agents. A stronger property, strong independence, states that an agent's assignment is independent of his age:

Definition 5 A Markovian assignment rule α satisfies strong independence if and only if, for any j, for any i, k, and for any ν, ν′ such that ν(i) = ν′(k), αj(ν, i) = αj(ν′, k).

If an assignment rule is strongly independent, it is fully characterized by the probability of assigning object j to an agent holding object k < j. Abusing notation, we will denote this probability by αj(k). Notice that the rank, uniform and replacement rules are all strongly independent.

The seniority rule is not independent, but satisfies a weaker independence property, stating that the assignment probability αj(ν, i) only depends on the truncated assignment of objects for agents who currently hold objects smaller than j and are thus eligible to receive object j. Formally:

Definition 6 A Markovian assignment rule α satisfies weak independence if and only if, for any j, for any i, and for any ν, ν′ such that ν(k) = ν′(k) for all k such that ν(k) < j, αj(ν, i) = αj(ν′, i).


The following Lemma characterizes assignment rules satisfying independence, and highlights the gap between independence and strong independence.

Lemma 1 If a Markovian rule α satisfies independence, then for any j < n, any ν, ν′ and any i, k such that ν(i) = ν′(k), αj(ν, i) = αj(ν′, k). Furthermore, for any ν, ν′ such that ν(i) = ν′(j) and ν(j) = ν′(i), αn(ν, i) + αn(ν, j) = αn(ν′, i) + αn(ν′, j).

Lemma 1 shows that if a Markovian assignment rule satisfies independence, the assignment of any object j < n is strongly independent, and fully determined by the probabilities αj(k) of assigning object j to an agent currently holding object k < j. However, this property does not hold for the assignment of the highest object, n. For the assignment of the last object, the only constraint imposed by independence is that, for any two assignments which only differ in the positions of i and j, the total probability assigned to agents i and j be constant. As the following simple example shows, there exist assignment rules satisfying independence which allocate object n with different probabilities to two agents of different ages holding the same object.

Example 1 Let n = 3. Consider the assignment of object 3 and the two truncated assignments ν(2) = 1, ν(3) = 2 and ν′(2) = 2, ν′(3) = 1. Independence puts no restriction on the assignment rule α3, as there is no agent i for which ν(i) = ν′(i). Now, we must have α3(ν, 1) + α3(ν, 2) + α3(ν, 3) = 1 = α3(ν′, 1) + α3(ν′, 2) + α3(ν′, 3). This implies that the assignment rules satisfying independence are characterized by three numbers, α3(ν, 1), α3(ν, 2) and α3(ν′, 2), but it does not imply that α3(ν, 2) = α3(ν′, 3) nor that α3(ν′, 2) = α3(ν, 3).

2.4 Convergent, irreducible and ergodic assignmentrules

Starting with any assignment µ0, any Markovian assignment rule α generates a finite Markov chain over the set of assignments. More precisely, we can define the probability of reaching state µ′ from state µ, p(µ′|µ), as follows:

Consider the sequence of agents i0 = n + 1, i1 = µ′−1(µ(i0 − 1)), ..., im = µ′−1(µ(im−1 − 1)), ..., iM = 1. This sequence of agents corresponds to the unique sequence of reallocations of goods through which society moves from assignment µ to assignment µ′. First, the good held by the last agent at date t, µ(n), is assigned to agent i1 = µ′−1(µ(n)). Then the good held by agent i1 at period t + 1 (or by agent i1 − 1 at period t) is reallocated to agent i2 = µ′−1(µ(i1 − 1)), etc. The process continues for a finite number of steps until a good is assigned to agent iM = 1, after which no other good can be reallocated.

The probability of reaching µ′ from µ is thus simply the probability that the sequence of reallocations of goods between agents i0, ..., iM is realized:

p(µ′|µ) = ∏_{m=0}^{M−1} α_{µ(i_m − 1)}(ν^m, i_{m+1})     (1)

where ν^m(i) = µ(i − 1) for i ≠ i_t, t = 1, 2, ..., m, and ν^m(i) = µ′(i) for i = i_t, t = 1, 2, ..., m.

Having defined the Markov chain over assignments, we now consider the dynamic properties of this chain, and relate it to the Markovian assignment rules. The following definitions are borrowed from classical books on finite Markov chains (Kemeny and Snell (1960), Isaacson and Madsen (1976)).

Definition 7 Two states i and j intercommunicate if there exists a path in the Markov chain from i to j and a path from j to i.

Definition 8 A set of states C is closed if, for any states i ∈ C, k /∈ C, the transition probability from i to k is zero.

Definition 9 A recurrent set is a closed set of states such that all states in the set intercommunicate. If the recurrent set is a singleton, it is called an absorbing state.

Definition 10 A Markovian assignment rule α is convergent if the induced Markov chain is convergent (it admits a unique absorbing state, and any initial assignment converges to the absorbing state).

Definition 11 A Markovian assignment rule α is irreducible if the induced Markov chain is irreducible (the only recurrent set is the entire state set).

Definition 12 A Markovian assignment rule α is ergodic if the induced Markov chain is ergodic (has a unique recurrent set).9

9This definition of ergodicity does not agree with the definition given by Isaacson and Madsen (1976), who also require all recurrent states to be aperiodic, so that an invariant distribution exists, nor with Kemeny and Snell (1960)'s definition, where an ergodic Markov chain is defined by the fact that the only recurrent set is the entire state set. For lack of better terminology, we call ergodic a finite Markov chain such that the long run behavior of the chain (whether it is a cycle or an invariant distribution) is independent of the initial conditions.


We finally define a dynamic notion of equity, based on the idea that two agents born at different dates must be treated identically in the long run.

Definition 13 A Markovian assignment rule α is time invariant if there exists T > 0 such that any two agents i and i′ entering society at dates t, t′ > T experience the same history: µt+τ(i) = µt′+τ(i′) for τ = 0, 1, ..., n − 1.

The following Lemma is immediate and does not require a proof:

Lemma 2 A Markovian assignment rule α is time invariant if and only if any recurrent set of the induced Markov chain is an absorbing state.

2.5 Markovian assignment rules among three agents

In this Section, we completely characterize the Markovian assignment rules among three agents. If n = 3, there are six possible assignments, defined by:

µ1 : (1, 2, 3)

µ2 : (1, 3, 2)

µ3 : (2, 1, 3)

µ4 : (2, 3, 1)

µ5 : (3, 1, 2)

µ6 : (3, 2, 1) .

Given the constraint that ∑_{i: ν(i)<j} αj(ν, i) = 1, not all transitions among states can occur with positive probability, and the transition matrix must display the following zero entries:

P =

∗ ∗ ∗ 0 ∗ 0
∗ 0 ∗ 0 0 0
∗ ∗ 0 ∗ 0 ∗
∗ 0 0 0 0 0
0 ∗ 0 ∗ 0 0
0 ∗ 0 0 0 0

Alternatively, Figure 1 below illustrates the transitions between states in the Markov process induced by an assignment rule putting positive probability on all feasible transitions:
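The zero pattern above can be checked mechanically. The sketch below is our own code, not the paper's: it enumerates every realization of the reassignment cascade to obtain the exact transition distribution from each state, here under the uniform rule, whose support is exactly the set of feasible transitions.

```python
def transition_probs(mu, rule):
    """Exact distribution over next-period assignments from mu, obtained
    by enumerating every realization of the reassignment cascade."""
    n, out = len(mu), {}
    def cascade(nu, freed, p):
        if freed == 0:                         # the entrant has been served
            out[tuple(nu)] = out.get(tuple(nu), 0.0) + p
            return
        for i in range(1, n + 1):
            q = rule(freed, nu, i)             # alpha_j(nu, i)
            if q > 0:
                nxt = list(nu)
                nxt[i - 1] = freed
                cascade(nxt, nu[i - 1], p * q)
    cascade([0] + list(mu[:-1]), mu[-1], 1.0)  # ages shift, mu(n) is freed
    return out

def uniform(j, nu, i):                         # the uniform rule
    eligible = [k for k in range(1, len(nu) + 1) if nu[k - 1] < j]
    return 1.0 / len(eligible) if nu[i - 1] < j else 0.0

states = [(1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1)]
P = [[transition_probs(s, uniform).get(t, 0.0) for t in states]
     for s in states]
```

Each row of P sums to one, and the positive entries of P reproduce the starred positions of the feasibility pattern above.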

We now examine in turn the four assignment rules described above.


2.5.1 The seniority rule

The seniority rule is represented by the transition matrix:

P =

1 0 0 0 0 0
1 0 0 0 0 0
1 0 0 0 0 0
1 0 0 0 0 0
0 1 0 0 0 0
0 1 0 0 0 0

The transitions between states are represented in Figure 2, which shows that the seniority rule is in fact convergent.

2.5.2 The rank rule

The rank rule is represented by the transition matrix:

P =

1 0 0 0 0 0
1 0 0 0 0 0
0 1 0 0 0 0
1 0 0 0 0 0
0 1 0 0 0 0
0 1 0 0 0 0

The transitions between states are represented in Figure 3, which shows that the rank rule is convergent.

2.5.3 The uniform rule

The uniform rule is represented by the transition matrix:

P =

1/6 1/3 1/6  0  1/3  0
1/2  0  1/2  0   0   0
1/3 1/6  0  1/6  0  1/3
 1   0   0   0   0   0
 0  1/2  0  1/2  0   0
 0   1   0   0   0   0

The transitions between states are identical to those of Figure 1. The Markov chain is irreducible and the invariant distribution10 is given by

10Any irreducible finite Markov chain admits a unique invariant distribution, which can be computed by solving the equation πP = π in π. (See e.g. Isaacson and Madsen (1976), Theorem III.2.2, p. 69.)


p1 = 27/84 ≈ 0.32, p2 = 21/84 = 0.25, p3 = 15/84 ≈ 0.18, p4 = 7/84 ≈ 0.08, p5 = 9/84 ≈ 0.11, p6 = 5/84 ≈ 0.06.
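The invariant distribution can be verified in exact arithmetic (a sketch in our own code, with the n = 3 uniform-rule transition matrix hard-coded): the distribution below is the exact solution of πP = π for this matrix.

```python
from fractions import Fraction as F

# Transition matrix of the uniform rule for n = 3 (rows and columns
# ordered mu_1, ..., mu_6 as in the text), in exact rational arithmetic.
P = [
    [F(1,6), F(1,3), F(1,6), F(0), F(1,3), F(0)],
    [F(1,2), F(0),   F(1,2), F(0), F(0),   F(0)],
    [F(1,3), F(1,6), F(0),   F(1,6), F(0), F(1,3)],
    [F(1),   F(0),   F(0),   F(0), F(0),   F(0)],
    [F(0),   F(1,2), F(0),   F(1,2), F(0), F(0)],
    [F(0),   F(1),   F(0),   F(0), F(0),   F(0)],
]

pi = [F(k, 84) for k in (27, 21, 15, 7, 9, 5)]   # candidate distribution

# Check stationarity pi P = pi exactly, column by column.
assert all(sum(pi[i] * P[i][j] for i in range(6)) == pi[j] for j in range(6))
assert sum(pi) == 1
print([float(p) for p in pi])
```

Since the chain is irreducible (and aperiodic, as µ1 has a positive self-loop), this stationary distribution is unique.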

2.5.4 The replacement rule

The replacement rule is represented by the transition matrix:

P =

0 0 0 0 1 0
0 0 1 0 0 0
0 0 0 0 0 1
1 0 0 0 0 0
0 0 0 1 0 0
0 1 0 0 0 0

The Markov chain is not ergodic. As shown in Figure 4, there are two cyclical recurrent sets of period 3: {µ1, µ5, µ4} and {µ2, µ3, µ6}.

3 Dynamic properties of Markovian assignment rules

3.1 Convergent Markovian assignment rules

We first characterize convergent assignment rules. Notice that, by construction, an agent is never reassigned an object of lower value than the object he currently holds. Hence, for any i = 1, ..., n − 1, µt+1(i + 1) ≥ µt(i). If an assignment µ is an absorbing state, we must have

µ(i + 1) = µt(i + 1) = µt+1(i + 1) ≥ µt(i) = µ(i). (2)

Hence, at an absorbing state, the assignment must be monotone, assigning higher objects to older agents. The only monotone assignment is the identity assignment ι, for which ι(i) = i for all i = 1, ..., n. Hence, the only candidate absorbing state is the identity assignment ι. This observation also shows that an assignment rule is time invariant if and only if it is convergent.

Proposition 1 Both the seniority and rank assignment rules are convergent.


Proposition 1 shows that both the seniority and rank rules are convergent and that the absorbing state is reached in at most n periods. Furthermore, a careful inspection of the proof of the Proposition reveals that any Markovian assignment rule which can be written as a convex combination of the rank and seniority rules is also convergent. However, the seniority and rank rules (and their convex combinations) are not the only convergent rules. A complete characterization of convergent assignment rules is difficult, because the condition guaranteeing that the identity assignment is absorbing only pins down the assignment rule for the truncated assignments νj, where νj(i) = i − 1 for i ≤ j and νj(i) = i for i > j, but does not impose any conditions for other assignments. When assignments are independent of the assignments of the other agents, progress can be made, and the next Theorem characterizes the one-parameter family of independent convergent rules.
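The convergence claim can be checked by brute force on small societies. The sketch below is our own code, with n = 4: it runs the deterministic rank and seniority rules from every initial assignment and checks that the identity assignment is reached within n periods.

```python
from itertools import permutations

def step(mu, pick):
    # one period: ages shift, the retiree frees mu[-1], and pick(j, nu)
    # names the agent receiving each freed object until the entrant is served
    nu, freed = [0] + list(mu[:-1]), mu[-1]
    while freed != 0:
        i = pick(freed, nu)
        nu[i - 1], freed = freed, nu[i - 1]
    return tuple(nu)

def rank(j, nu):        # the holder of object j - 1 receives object j
    return nu.index(j - 1) + 1

def seniority(j, nu):   # the oldest eligible agent receives object j
    return max(i for i in range(1, len(nu) + 1) if nu[i - 1] < j)

n = 4
for pick in (rank, seniority):
    for start in permutations(range(1, n + 1)):
        mu = start
        for _ in range(n):
            mu = step(mu, pick)
        assert mu == tuple(range(1, n + 1))   # the absorbing identity state
```

The loop silently succeeds: from all 24 initial assignments, both rules are absorbed by the identity assignment in at most n = 4 periods.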

Theorem 1 An assignment rule α is independent and convergent if and only if αj(j − 1) = 1 for all j < n, αn(ν, n) = 1 if ν(n) = n − 1, and there exists λ ∈ [0, 1] such that αn(ν, n) = λ and αn(ν, ν−1(n − 1)) = 1 − λ if ν(n) ≠ n − 1.

Theorem 1 characterizes the family of independent and convergent assignment rules as rules which allocate any object j < n according to the rank rule, and allocate object n according to a convex combination of the rank and seniority rules. If, in addition, we require the assignment rule to be strongly independent, then since αn(ν, n) = 1 when ν(n) = n − 1, we must have αn(n − 1) = 1, so that:

Corollary 1 The only strongly independent, convergent assignment rule is the rank rule.

3.2 Ergodic assignment rules

We first recall some definitions of special permutations.

Definition 14 A permutation π from a set of n elements to itself is a cycle, denoted κ, if π(1) = n and π(i) = i − 1 for all i = 2, ..., n.

Definition 15 A permutation π from a set of n elements to itself is an (i, j) transposition, denoted τi,j, if π(i) = j, π(j) = i and π(k) = k for all k ≠ i, j. As a shorthand, we will denote any (1, i) transposition by τi.

Using these definitions, we can decompose the evolution of the Markov chain as a composition of cycles and transpositions. Consider an initial state µ at period t, and a succession of reassignments i0, ..., iM. The state µ′ at period t + 1 is obtained by (i) first applying a cycle, which lets object µ(n) be assigned to the entering agent, (ii) then applying a transposition between agent 1 and agent i1, assigning object µ(n) to i1, (iii) then applying a transposition between agent 1 and agent i2, assigning object µ(i1) to agent i2, and so on. Hence, we may write:

µ′ = µ ◦ κ ◦ τi1 ◦ ... ◦ τim ◦ ... ◦ τ1,

where it is understood that τ1, the identity permutation, is just added for the sake of completeness, and to show that the composition of permutations ends.
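This decomposition can be made concrete with a small numerical check. In the sketch below (our own encoding, not from the paper), κ is the aging cycle that hands object µ(n) to the entering agent, and for a hypothetical reassignment chain with i1 = 4 followed by the entering agent, the composed permutation reproduces the next assignment.

```python
def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); works for age->age and age->object maps
    return {i: p[q[i]] for i in q}

n = 4
kappa = {1: n, **{i: i - 1 for i in range(2, n + 1)}}   # the aging cycle

def tau(i):
    # the (1, i) transposition tau_i on ages {1, ..., n}
    t = {k: k for k in range(1, n + 1)}
    t[1], t[i] = i, 1
    return t

mu = {1: 2, 2: 4, 3: 1, 4: 3}   # ages -> objects; the retiree frees object 3
# hypothetical chain: object 3 goes to agent 4 (holding 1), then object 1
# goes to the entering agent, so mu' = mu ∘ kappa ∘ tau_4 ∘ tau_1
mu_next = compose(compose(compose(mu, kappa), tau(4)), tau(1))
assert mu_next == {1: 1, 2: 2, 3: 4, 4: 3}
```

The final τ1 is the identity, exactly as in the text; dropping it leaves the result unchanged.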

We are now ready to provide a simple characterization of ergodic assignment rules, based on the accessibility of an assignment in which the highest object is assigned to the oldest agent.

Theorem 2 An assignment rule α is ergodic if and only if there exists an assignment µ′ with µ′(n) = n such that, for all assignments µ with µ(n) = n, the permutation µ−1(µ′) can be decomposed into a sequence of permutations, µ−1(µ′) = π1 ◦ ... ◦ πm ◦ ... ◦ πM, such that each πm is either a cycle or a (1, i) transposition and, if it is a (1, i) transposition, α(µ◦...◦πm−1)−1(1)(νm−1, i) > 0, where νm−1(j) = (µ ◦ ... ◦ πm−1)(j) for all j = 2, ..., n.

Theorem 2 is based on the simple observation that any recurrent set must contain an assignment for which µ(n) = n, so that in order to check ergodicity, one only needs to check that there exists an assignment assigning the highest object to the oldest agent which can be reached from any assignment assigning the highest object to the oldest agent. This condition is always violated for the replacement rule, for which the set of states can be decomposed into n cycles, each cycle containing a single assignment such that µ(n) = n, and with no path between the cycles.
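The failure of ergodicity for the replacement rule is immediate to verify numerically. Under that rule the freed object always goes to the entering agent, so in our encoding the next state is simply µ ∘ κ; the sketch below (not from the paper) shows that the state space splits into disjoint cycles of length n.

```python
n = 4
kappa = {1: n, **{i: i - 1 for i in range(2, n + 1)}}   # the aging cycle

def replacement_step(mu):
    # replacement rule: the freed object goes to the entering agent,
    # so the new assignment is mu ∘ kappa
    return {i: mu[kappa[i]] for i in range(1, n + 1)}

mu = {1: 2, 2: 4, 3: 1, 4: 3}
orbit = set()
for _ in range(2 * n):
    orbit.add(tuple(mu[i] for i in range(1, n + 1)))
    mu = replacement_step(mu)
assert len(orbit) == n          # the orbit is a cycle of exactly n states
```

Each such cycle is a distinct recurrent set, so the chain has several recurrent sets and cannot be ergodic.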

Theorem 2 does not pin down a simple condition guaranteeing the existence of a path in the Markov chain from an assignment µ to an assignment µ′ with µ(n) = µ′(n) = n. A simple sufficient condition is that any object i is assigned with positive probability to an agent of age i holding object i − 1:

Corollary 2 Suppose that αi(ν, i) > 0 whenever ν(i) = i − 1. Then the assignment rule α is ergodic.

Corollary 2 generalizes our result on the convergence of the rank and seniority rules, by showing that any assignment rule which assigns object i to agent i with positive probability when he holds object i − 1 (a condition satisfied both by the rank and seniority rules) must be ergodic. Furthermore, if the condition of Corollary 2 is satisfied, then it is possible to reach the identity assignment ι from itself, so that the period of the recurrent state ι is equal to one. As all states in a recurrent set must have the same period (Isaacson and Masden (1976), Theorem II.2.2, p. 54), all states in the unique recurrent set are aperiodic. Hence, the Markov chain is ergodic in the stronger sense of Isaacson and Masden (1976), and admits a unique invariant distribution.

The sufficient condition identified in Corollary 2 is not necessary. As the following four-player example shows, a Markovian assignment rule may be ergodic even when it allows some "gaps" (situations where the probability of assigning object j to the agent holding object j − 1 is equal to zero).

Example 2 Let n = 4. Consider the strongly independent assignment rule α4(3) = 1, α3(1) = 1, α2(1) = 1, α1(0) = 1.

Let all states such that µ(4) = 4 be ordered as in Subsection 2.5. In addition, define the states:

µ7 : (1, 3, 4, 2)

µ8 : (1, 2, 4, 3)

µ9 : (1, 4, 3, 2)

µ10 : (1, 4, 2, 3) .

Figure 5 illustrates the transitions between these states and shows that there exists a path leading to the identity matching from any other state, proving that the assignment rule is ergodic.
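Since the rule of Example 2 is deterministic, its ergodicity can also be verified exhaustively. The sketch below (our own encoding, not from the paper) iterates the induced map from each of the 24 states and checks that the identity assignment is reached from all of them.

```python
from itertools import permutations

# Example 2: object j always goes to the holder of object TARGET[j]
TARGET = {4: 3, 3: 1, 2: 1, 1: 0}

def step(mu):
    nu = {1: 0, 2: mu[1], 3: mu[2], 4: mu[3]}   # retire, age, enter
    j = mu[4]                                   # freed object
    while True:
        i = next(a for a, o in nu.items() if o == TARGET[j])
        nu[i], j = j, nu[i]
        if i == 1:                              # chain ends at the entrant
            return nu

identity = {1: 1, 2: 2, 3: 3, 4: 4}
for p in permutations(range(1, 5)):
    mu = dict(zip(range(1, 5), p))
    for _ in range(30):                         # enough steps to enter the cycle
        if mu == identity:
            break
        mu = step(mu)
    assert mu == identity                       # identity reached from every state
```

All 24 states lead to the identity matching, so the chain has a unique recurrent set even though α3(2) = 0.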

3.3 Irreducible assignment rules

In this Subsection, we characterize irreducible assignment rules, which generate irreducible finite Markov chains, where any state can be reached from any other state.

Theorem 3 An assignment rule α is irreducible if and only if
(i) for all j and all truncated assignments ν of objects in J \ j, αj(ν, 1) > 0, and
(ii) for all assignments µ, µ′ such that µ(n) = µ′(n) = n, the permutation µ−1(µ′) can be decomposed into a sequence of permutations, µ−1(µ′) = π1 ◦ ... ◦ πm ◦ ... ◦ πM, such that each πm is either a cycle or a (1, i) transposition; if πm is a (1, i) transposition, both πm−1 and πm+1 are cycles, and α(µ◦...◦πm−1)−1(1)(νm−1, i) > 0, where νm−1(j) = (µ ◦ ... ◦ πm−1)(j) for all j = 2, ..., n.

Theorem 3 provides a characterization of irreducible assignment rules which relies on two conditions: (i) assumes that replacement (the allocation of any object to the entering agent) occurs with positive probability at all states; (ii) assumes that any two assignments which allocate the highest object to the oldest agent are related through a sequence of elementary permutations, with cycles and (1, i) transpositions such that any transposition in the sequence is followed by a cycle.

At first glance, condition (ii) may appear to be a mere rephrasing of the irreducibility condition, guaranteeing that any state can be reached from any state. However, condition (ii) is weaker than the irreducibility condition, as it only applies to a set of states of cardinality (n − 1)! rather than n!. Condition (ii) also focuses attention on a special sequence of "elementary permutations" rather than arbitrary assignments. When condition (i) is satisfied, any path from a state µ to a state µ′ can be generated through elementary permutations. Hence, in the direction of sufficiency, requiring that the states can be reached through elementary permutations is not more demanding than requiring that the states can be reached through any arbitrary reassignment. In the direction of necessity, checking that there is no elementary permutation leading from one state to another is easier than checking that states cannot be reached through any reassignment. Furthermore, the description of elementary permutations will serve as a building block for the analysis of irreducible assignment rules satisfying independence.

Theorem 4 For any independent assignment rule α, consider the graph G(α) defined over the nodes {1, 2, ..., n − 1} by gi,j = 1 if and only if either αj(i) > 0 or αi(j) > 0. Any independent Markovian assignment rule α such that αj(0) > 0 for all j ≥ 1, and for which the graph G(α) is connected, is irreducible.

Theorem 4 provides a simple sufficient condition to check whether an independent assignment rule is irreducible. This condition is satisfied when the set of states for which transitions occur with positive probability is rich enough. For example, it is always satisfied for the uniform assignment rule, where αj(i) > 0 for all i ≤ j; when the probability of assigning object j to an agent holding j − 1 is positive, αj(j − 1) > 0 (in which case the graph G(α) is a connected line); or if the probability of assigning object j to the agent holding object 1 is positive for all j, αj(1) > 0 (in which case the graph G(α) is a connected star with 1 as the hub).
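The connectivity test of Theorem 4 is straightforward to implement. The sketch below (our own encoding, not from the paper) builds G(α) for a hypothetical independent rule with αj(0) > 0 and αj(j − 1) > 0 for all j, whose graph is the connected line 1-2-...-(n−1).

```python
from collections import deque

n = 5
# support of the rule: alpha[j] = set of objects i with alpha_j(i) > 0
# (hypothetical rule: each object may go to the entrant or to the holder of j-1)
alpha = {j: {0, j - 1} for j in range(1, n + 1)}

def g_connected(alpha, n):
    # nodes 1..n-1; edge (i, j) iff alpha_j(i) > 0 or alpha_i(j) > 0
    adj = {v: set() for v in range(1, n)}
    for j, support in alpha.items():
        for i in support:
            if 1 <= i <= n - 1 and 1 <= j <= n - 1 and i != j:
                adj[i].add(j)
                adj[j].add(i)
    seen, queue = {1}, deque([1])        # breadth-first search from node 1
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w)
            queue.append(w)
    return seen == set(adj)

assert g_connected(alpha, n)             # the line 1-2-3-4 is connected
```

By contrast, a rule whose support consists only of the entering agent gives an edgeless (disconnected) graph, and the sufficient condition fails.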

However, as shown by the following example, the condition is not necessary. There exist irreducible assignment rules for which the graph G(α) is not connected.

Example 3 Let n = 4. Consider the strongly independent assignment rule α1(0) = 1, α2(0) = 1, α3(0) = α3(1) = 1/2, α4(0) = α4(1) = α4(2) = α4(3) = 1/4.

In this Example, the graph G(α) only contains the link (1, 3) and is not connected. However, all assignments with µ(4) = 4 intercommunicate, as illustrated in Figure 6, which uses the same ordering of three-player assignments as that used in Subsection 2.5.

4 Markovian assignment rules among heterogeneous agents

In this Section, we extend the model by allowing for heterogeneity across agents. More precisely, we suppose that agents independently draw types (or abilities) which affect the value of the surplus generated in any match. Assuming that surpluses are supermodular functions of objects and types, efficient assignments require assigning higher objects to agents with higher types. Of course, this requirement conflicts with the use of simple seniority and rank rules, and the object of the analysis is to characterize richer classes of rules, which take into account agents' types as well as their histories.

4.1 A model with heterogeneous agents

Let K = {1, 2, ..., m} be a finite ordered set of types indexed by k. At every period, the type of the entering agent is drawn according to an independent draw from a finite probability distribution q(k). The sets of objects and types are ordered in such a way that the surplus obtained by matching an agent of type k with an object j, σ(k, j), is strictly supermodular: if k′ > k and j′ > j,

σ(k′, j′) + σ(k, j) > σ(k′, j) + σ(k, j′).

Hence, total surplus in society will be maximized by assigning objects of higher quality to agents with higher types.
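This assortativity property of supermodular surpluses can be checked by brute force. In the sketch below (a hypothetical surplus σ(k, j) = kj and a hypothetical type profile, not from the paper), the maximal total surplus over all assignments coincides with the surplus of an assortative assignment.

```python
from itertools import permutations

n = 4

def sigma(k, j):
    # k * j is a strictly supermodular surplus function (hypothetical choice)
    return k * j

theta = [2, 1, 3, 1]     # hypothetical types of agents 1..n

# brute-force optimal assignment of objects 1..n to the agents
best = max(permutations(range(1, n + 1)),
           key=lambda mu: sum(sigma(k, j) for k, j in zip(theta, mu)))

# assortative benchmark: give higher objects to higher types (ties by index)
order = sorted(range(n), key=lambda i: (theta[i], i))
assortative = [0] * n
for obj, i in enumerate(order, start=1):
    assortative[i] = obj

assert sum(sigma(k, j) for k, j in zip(theta, best)) \
       == sum(sigma(k, j) for k, j in zip(theta, assortative))
```

With ties among equal types, several assignments attain the optimum, so the check compares total surpluses rather than the assignments themselves.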

A state s is now defined both by an assignment µ and a type profile θ, s = (µ, θ), where

• The assignment µ is a one-to-one mapping from the set I of agents to the set J of objects,

• The type profile θ is a mapping from the set I of agents to the set K of types.

An assignment rule α is now a collection of mappings αj(ν, θ, i), defining the probability of assigning object j to agent i given the truncated assignment ν and the type profile θ. As before, the assignment rule must satisfy:

∑_{i | ν(i) < j} αj(ν, θ, i) = 1.

As before, we define an assignment rule to be independent if the assignment of object j to agent i only depends on the characteristics of agent i: αj(ν, θ, i) = αj(ν′, θ′, i) whenever ν(i) = ν′(i) and θ(i) = θ′(i). Similarly, an assignment rule is weakly independent if αj(ν, θ, i) = αj(ν′, θ′, i) whenever ν(k) = ν′(k) and θ(k) = θ′(k) for all k such that ν(k) < j. Finally, an assignment rule is strongly independent if the assignment of object j to agent i only depends on the type and object currently held by agent i: αj(ν, θ, i) = αj(ν′, θ′, k) whenever ν(i) = ν′(k) and θ(i) = θ′(k).

The following lemma shows that, if an assignment rule is independent, the assignment cannot depend on an agent's type.

Lemma 3 Let α be an independent assignment rule among heterogeneous agents. Then, for any θ, θ′, any j, ν and i, αj(ν, θ, i) = αj(ν, θ′, i).

With heterogeneous players, independence thus puts a strong restriction on the assignment rule, and limits the set of rules to those rules which satisfy independence for homogeneous players (e.g. the rank or uniform rules, which do not take into account players' types).

Finally, we may now define new rules, using the additional dimension given by agents' types:

Definition 16 The type-seniority rule is defined by αj(ν, θ, i) = 1 if θ(i) ≥ θ(k) for all k such that ν(k) ≤ j, and i > l for all l such that θ(l) = θ(i) and ν(l) ≤ j.

Definition 17 The type-rank rule is defined by αj(ν, θ, i) = 1 if θ(i) ≥ θ(k) for all k such that ν(k) ≤ j, and ν(i) > ν(l) for all l such that θ(l) = θ(i) and ν(l) ≤ j.

Definition 18 The type-uniform rule is defined by αj(ν, θ, i) = 1/|{l | θ(l) = θ(i)}| if θ(i) ≥ θ(k) for all k such that ν(k) ≤ j.

All these rules use a lexicographic ordering: they first select the set of agents of highest type who may receive the object. If this set contains more than one agent, the rules use a tie-breaking rule (seniority, rank, or uniform distribution) to allocate the object.
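The shared lexicographic structure of Definitions 16-18 can be sketched as follows (our own encoding; the truncated assignment, type profile, and the convention that agents holding an object below j are eligible are assumptions of the sketch, not statements from the paper):

```python
import random

def eligible(nu, j):
    # agents holding an object below j may receive object j (our convention)
    return [i for i, o in nu.items() if o < j]

def top_type(nu, theta, j):
    best = max(theta[i] for i in eligible(nu, j))
    return [i for i in eligible(nu, j) if theta[i] == best]

def type_seniority(nu, theta, j):
    return max(top_type(nu, theta, j))                      # oldest agent

def type_rank(nu, theta, j):
    return max(top_type(nu, theta, j), key=lambda i: nu[i]) # highest object

def type_uniform(nu, theta, j):
    return random.choice(top_type(nu, theta, j))            # uniform tie-break

nu = {1: 0, 2: 3, 3: 1, 4: 2}       # hypothetical truncated assignment, object 4 freed
theta = {1: 1, 2: 2, 3: 2, 4: 1}    # hypothetical type profile
assert type_seniority(nu, theta, 4) == 3   # oldest among top-type agents {2, 3}
assert type_rank(nu, theta, 4) == 2        # agent 2 holds the higher object
```

The three rules differ only in the final tie-breaking line, which mirrors the lexicographic description in the text.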

4.2 Markov chains with heterogeneous agents

Given an assignment rule α and a probability distribution q over types, we can compute the transition probability from state s to state s′:

• First, the conditional probability of type profile θ′ given type profile θ is given by:

q(θ′|θ) = q(k) if θ′(1) = k and θ′(i) = θ(i − 1) for i = 2, ..., n,
q(θ′|θ) = 0 otherwise.

• Second, given the new type profile θ′ and the assignment rule α, construct sequences of reassignments as in Subsection 2.4 to obtain p(µ′|µ).
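The first step above is a shift register on type profiles: the entering agent's type is drawn from q, and every other agent keeps the type of the agent one period younger. A minimal sketch (our own encoding, not from the paper):

```python
import random

def next_profile(theta, q):
    """One period of the type process: the entering agent's type is drawn
    from q; every other agent inherits the type of the agent one age younger."""
    k = random.choices(list(q), weights=list(q.values()))[0]
    return (k,) + theta[:-1]        # position 0 holds the youngest agent

q = {1: 0.5, 2: 0.5}                # hypothetical type distribution
theta = (1, 2, 2, 1)                # types by age, youngest first
theta2 = next_profile(theta, q)
assert theta2[1:] == theta[:-1]     # old types shift up by one age
assert theta2[0] in q               # the new type lies in the support of q
```

Only profiles related by this one-position shift have q(θ′|θ) > 0, which is exactly the condition in the display above.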

When agents are heterogeneous, total surplus varies with the assignment, and different assignment rules result in different total surpluses. We thus define a notion of efficiency of assignment rules, based on the following (static) criterion.

Definition 19 An assignment rule α is efficient if there does not exist another assignment rule α′ such that, for all states s = (µ, θ),

∑_{θ′} ∑_{µ′} ∑_i q(θ′|θ) p′(µ′|µ) σ(θ′(i), µ′(i)) ≥ ∑_{θ′} ∑_{µ′} ∑_i q(θ′|θ) p(µ′|µ) σ(θ′(i), µ′(i)),

with strict inequality for some state.

Some remarks are in order. First, because the assignment rule α can be made conditional on the type profile θ′, the definitions of efficiency ex ante and at the interim stage (after the type profile θ′ has been drawn) are equivalent.11 Second, this definition of efficiency is static, and only considers total surplus at the next step of the Markov chain, and not the surplus generated by the two assignment rules α and α′ along the entire path of the Markov chain. Third, by using this definition we impose the same constraint on the assignment rules α and α′, and in particular, we do not consider efficiency-improving reassignments which would violate the individual rationality condition, namely the fact that an agent holding object j cannot be reassigned an object of value smaller than j.

11 This is reminiscent of the equivalence between the definitions of Bayesian equilibrium using ex ante or interim calculations. See Fudenberg and Tirole (1991), p. 215.

Finally, while the notions of ergodic and irreducible assignment rules are well defined, it is clear that when agents are heterogeneous, assignment rules are never convergent. The random drawing of the type of the entering agent every period introduces a source of randomness in the Markov chain which prevents the existence of absorbing states. However, distinguishing between the two sources of randomness (one linked to the exogenous drawing of the type of the entering agent every period, and one to the dynamics of reassignments), we propose the following notion of quasi-convergence.

Definition 20 A Markovian assignment rule α is quasi-convergent if the induced Markov chain has a unique recurrent set S of m^n states such that, for any distinct s, s′ in S, θ(s) ≠ θ(s′).

In words, a quasi-convergent Markov chain ultimately settles in a recurrent set where a single assignment arises for every type profile θ. When there is a unique type, this definition is of course equivalent to convergence to a unique absorbing state. It is also related to the following extension of the notion of time invariance.

Definition 21 A Markov chain with heterogeneous agents is time-invariant if there exists T > 0 such that any two agents born at t, t′ > T facing the same societies experience the same history: for any two agents i and i′ entering society at dates t and t′ such that θt+τ = θt′+τ for τ = 0, 1, ..., n − 1, we have µt+τ(i) = µt′+τ(i′) for τ = 0, 1, ..., n − 1.

Time-invariant and quasi-convergent assignment rules are related by the following observation, which extends Lemma 2:

Lemma 4 A Markovian assignment rule with heterogeneous agents is time-invariant if and only if any recurrent set S of the induced Markov chain contains exactly m^n states such that, for any distinct s, s′ in S, θ(s) ≠ θ(s′).

4.3 Quasi-convergent, efficient and independent assignment rules

When agents are heterogeneous, as noted above, independence of assignment rules is a very strong restriction. Our first result shows that this restriction is incompatible with efficiency.

Theorem 5 Suppose that the set K contains at least two types. Then there is no assignment rule satisfying independence and efficiency.

The existence and characterization of independent and quasi-convergent assignment rules is a direct consequence of Lemma 3 and Theorem 1. Because independent assignment rules are independent of agents' types, the family of independent quasi-convergent assignment rules is exactly identical to the family of independent, convergent assignment rules for identical players. Hence the characterization of Theorem 1 remains valid, and independent, quasi-convergent assignment rules are convex combinations of the seniority and rank rules.

The next step will be to study the existence of efficient, quasi-convergent assignment rules.

5 Proofs

Proof of Lemma 1: Consider two assignments ν and ν′ such that ν(i) = ν′(k), ν′(i) = ν(k) = n, and ν(l) = ν′(l) for all l ≠ i, k. For any j < n, αj(ν′, i) = αj(ν, k) = 0. For any j, by independence, αj(ν, l) = αj(ν′, l). Now, as ∑_{i, ν(i)<j} αj(ν, i) = ∑_{i, ν′(i)<j} αj(ν′, i), we conclude that αj(ν, i) = αj(ν′, k).

Next, consider two assignments ν, ν′ such that ν(i) = ν′(j), ν(j) = ν′(i). As ∑_k αn(ν, k) = ∑_k αn(ν′, k), and, by independence, αn(ν, k) = αn(ν′, k) for all k ≠ i, j, we obtain αn(ν, i) + αn(ν, j) = αn(ν′, i) + αn(ν′, j).

Proof of Proposition 1: We first check that the identity assignment is indeed an absorbing state. A necessary and sufficient condition for this to occur is that:

∏_j αj(νj, j) = 1, (3)

where νj(i) = i − 1 for i ≤ j and νj(i) = i for i > j. Both the seniority and rank assignment rules satisfy this condition, as j is at the same time the oldest agent eligible to receive object j and the agent with the highest-ranked object in the matching νj.

Next we show that, starting from any initial state µ, there exists a time t at which the Markov chain is absorbed into the identity assignment ι.

In the rank rule, if µ(n) = k, all objects j = 1, 2, ..., k are reassigned to the agents sequentially. In particular, at period 1, object 1 will be reassigned to the entering agent, so that µ1(1) = 1. At period 2, object 2 is reassigned to agent 2 (who currently holds object 1) and object 1 is reassigned to the entering agent, so that µ2(1) = 1 and µ2(2) = 2. Following this argument, it is easy to see that µn = ι.

In the seniority rule, notice that the entering agent can never receive any object but object 1. Similarly, if ν(1) = 1, αk(ν, 1) = 0 for all k > 2, and more generally if ν(i) = i, αk(ν, i) = 0 for all k > i + 1.

The preceding argument shows that, starting from any µ at period 0, in the seniority rule µ1(1) = 1. Furthermore, at period 2, object 1 must be reassigned to the entering agent, so that µ2(1) = 1 and µ2(2) = 2. We thus conclude that µn = ι, namely, the Markov chain is absorbed into the identity assignment in at most n periods.

Proof of Theorem 1: By Proposition 1, the rank and seniority rules are convergent, so that any rule α which is a convex combination of the seniority and rank rules is also convergent.

Next suppose that the rule α satisfies independence and is convergent. Because it is convergent, the identity assignment is an absorbing state, so that

αj(νj, j) = 1.

By independence, from Lemma 1, αj(j − 1) = αj(νj, j) = 1 for all j < n. Furthermore, by independence again, from Lemma 1, for any two assignments ν, ν′ which only differ in the positions of two agents, the total probability of assigning object n to the two agents is constant. As αn(νn, n) + αn(νn, k) = 1 for all k < n, we conclude that, for all ν,

αn(ν, n) + αn(ν, ν−1(n − 1)) = 1.

Next construct two different truncated assignments ν and ν′ such that ν(n) = i, ν′(n) = j and ν−1(n − 1) = ν′−1(n − 1) = k. By independence, αn(ν, ν−1(n − 1)) = αn(ν′, ν′−1(n − 1)), so that αn(ν, n) = αn(ν′, n). Applying independence again, for any ν, ν′ such that ν(n) = i < n − 1 and ν′(n) = j < n − 1, we have:

αn(ν, n) = αn(ν′, n) = λ,

so that

αn(ν, ν−1(n − 1)) = 1 − λ

for any ν such that ν(n) ≠ n − 1, establishing the result.

Proof of Theorem 2: Suppose first that the condition holds. Because object n is reassigned at least every n periods, and can only be reassigned when µ(n) = n, any recurrent set must contain an assignment for which µ(n) = n. Suppose by contradiction that there are two recurrent sets, each containing an assignment where µ(n) = n, denoted µ1 and µ2. If the condition holds, there exists an assignment µ′ with µ′(n) = n which can be reached from both µ1 and µ2, contradicting the fact that µ1 and µ2 belong to two distinct recurrent sets.

Conversely, suppose that the condition is violated and that there is a single recurrent set. There must exist one assignment µ′ with µ′(n) = n in the recurrent set. However, if the condition is violated, there exists another assignment µ with µ(n) = n such that there is no path in the Markov chain from µ to µ′, contradicting the fact that there is a single recurrent set.

Proof of Corollary 2: We will show the existence of a path to the identity assignment ι. Because object 1 is reassigned at least every n periods, there exists a time t at which µt(1) = 1. Then, as α2(νt, 2) > 0, we can construct a path where µt+1(1) = 1 and µt+1(2) = 2. Repeating the argument, we eventually reach the identity assignment.

Proof of Theorem 3: (Sufficiency) Consider two assignments µ and µ′. We will exhibit a path from µ to µ′. First observe that, because αj(ν, 1) > 0 for all j, by successively applying cycles κ, one eventually reaches an assignment µ0 for which µ0(n) = n. Similarly, there exists an assignment µ1 for which µ1(n) = n, and such that, by successively applying cycles, one reaches assignment µ′ from µ1.

By condition (ii), there exists a sequence of permutations from µ0 to µ1 such that, at each step, with positive probability, the object held by the last player is assigned to player i, and the object held by player i to the entering player. Hence, one can construct a path between µ0 and µ1, concluding the sufficiency part of the proof.

(Necessity) Suppose first that condition (i) is violated, i.e. there exist j and a truncated assignment ν of objects in J \ j such that αj(ν, 1) = 0. Consider the assignment µ such that µ(1) = j and µ(i) = ν(i) for i = 2, ..., n. For this assignment to be reached, it must be that object j is assigned to the entering player with positive probability when all other players hold the objects given by the truncated assignment ν. Hence, if αj(ν, 1) = 0, assignment µ can never be reached from any other state, contradicting the fact that the Markov chain is irreducible.

Next suppose that condition (ii) is violated. We will first show that any reassignment from a matching µ to a matching µ′, π = µ−1(µ′), can be decomposed into a sequence of cycles and (1, i) transpositions such that, if πm is a (1, i) transposition, then πm−1 and πm+1 are cycles. Let i0 = n + 1, i1 = µ′−1(µ(i0 − 1)), ..., im = µ′−1(µ(im−1 − 1)), ..., iM = 1 be the sequence of reassignments from µ to µ′.

Consider the following sequence of permutations:

• First assign object µ(n) to agent i1 and object µ(i1 − 1) to the entering agent (apply κ ◦ τi1);

• Then, during n − 1 periods, assign the object held by the last agent to the entering agent (apply κn−1).

After this first cycle of n permutations, we have that, for all j ≠ i1 − 1, n, (π1 ◦ ... ◦ πn)(j) = j. For i1 − 1, we have (π1 ◦ ... ◦ πn)(i1 − 1) = π1((π2 ◦ ... ◦ πn)(i1 − 1)) = π1(i1) = n. For n, we have (π1 ◦ ... ◦ πn)(n) = π1((π2 ◦ ... ◦ πn)(n)) = π1(1) = i1 − 1. Hence, after the first cycle of n permutations, we have µn(j) = µ(j) for all j ≠ i1 − 1, n, µn(i1 − 1) = µ(n) = µ′(i1) and µn(n) = µ(i1 − 1).

In the second cycle, we allocate object µ(i1 − 1) to agent i2, by applying a cycle followed by a transposition τi2, and then apply a cycle during n − 1 periods, κn−1. The process is repeated until we assign object µ(iM−1) to the entering agent.

Next, if condition (i) is satisfied, it must be that all cycles can be generated with positive probability. Furthermore, if state µ′ can be reached from state µ through some reassignment, we must have αµ(im−1)(νm, im+1) > 0 for all m, so that the transpositions τim occur with positive probability. Hence, if state µ′ can be reached from state µ by some arbitrary reassignment, it must also be reached through a sequence of permutations alternating cycles and transpositions τim, as in the statement of the Theorem.

Hence, if for any sequence of permutations alternating cycles and transpositions, state µ′ cannot be reached from state µ, we conclude that there is no path in the Markov chain from µ to µ′, and the Markov chain induced by the assignment rule is not irreducible.

Proof of Theorem 4: Given Theorem 3, we need to show that there exists a path from any assignment µ such that µ(n) = n to any assignment µ′ such that µ′(n) = n. The mapping µ′−1 ◦ µ is a permutation over {1, ..., n} which leaves the last element invariant. As any permutation can be decomposed into a sequence of transpositions, the induced permutation over the elements of {1, ..., n − 1} can be decomposed into a sequence of transpositions τim,im+1, m = 1, ..., M − 1, where 1 ≤ im ≤ n − 1 for all m:

µ′ = µ ◦ τi1,i2 ◦ ... ◦ τim,im+1 ◦ ... ◦ τiM−1,iM.

We now show that there exists a path in the Markov chain corresponding to this sequence of transpositions. Consider first the transposition τi1,i2. Let j1 = µ(i1) and j2 = µ(i2), and suppose without loss of generality that j1 ≥ j2. Because the graph G(α) is connected, there exists a sequence j1 = ĵ1, ..., ĵq, ..., ĵQ = j2 such that αĵq(ĵq+1) > 0 for all q = 1, ..., Q − 1. Let îq = µ−1(ĵq) be the agent holding object ĵq in µ. We can decompose the transposition τi1,i2 as:

τi1,i2 = τî1,î2 ◦ ... ◦ τîQ−2,îQ−1 ◦ τîQ−1,îQ ◦ τîQ−1,îQ−2 ◦ ... ◦ τî2,î1 ≡ χ.

To check this equality, notice that, for any i not included in the sequence î1, ..., îQ, τi1,i2(i) = i = χ(i). Furthermore,

χ(î1) = τî1,î2 ◦ ... ◦ τîQ−2,îQ−1 ◦ τîQ−1,îQ ◦ τîQ−1,îQ−2 ◦ ... ◦ τî2,î1(î1)
= τî1,î2 ◦ ... ◦ τîQ−2,îQ−1(îQ)
= îQ
= i2.

Similarly,

χ(îQ) = τî1,î2 ◦ ... ◦ τîQ−2,îQ−1 ◦ τîQ−1,îQ ◦ τîQ−1,îQ−2 ◦ ... ◦ τî2,î1(îQ)
= τî1,î2 ◦ ... ◦ τîQ−2,îQ−1(îQ−1)
= î1
= i1.

Finally, for îq, q ≠ 1, Q,

χ(îq) = τî1,î2 ◦ ... ◦ τîQ−2,îQ−1 ◦ τîQ−1,îQ ◦ τîQ−1,îQ−2 ◦ ... ◦ τî2,î1(îq)
= τî1,î2 ◦ ... ◦ τîQ−2,îQ−1 ◦ τîQ−1,îQ(îq−1)
= τî1,î2 ◦ ... ◦ τîq−1,îq(îq−1)
= îq.
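The identity τi1,i2 = χ, composing transpositions down the path and back, can also be sanity-checked numerically. In the sketch below (our own encoding, with a hypothetical path; permutations act on {0, ..., n − 1}), the composed chain equals the transposition of the endpoints.

```python
def t(a, b, n):
    # the transposition of a and b on {0, ..., n-1}, as a list
    p = list(range(n))
    p[a], p[b] = p[b], p[a]
    return p

def comp(p, q):
    # (p ∘ q)(x) = p[q[x]]
    return [p[x] for x in q]

n = 6
path = [0, 3, 1, 5]                      # hypothetical path of holders in G(alpha)
seq = [t(a, b, n) for a, b in zip(path, path[1:])]               # walk down ...
seq += [t(a, b, n) for a, b in zip(path[-2:0:-1], path[-3::-1])] # ... and back

chi = list(range(n))
for perm in seq:                         # left-to-right = outermost first
    chi = comp(chi, perm)
assert chi == t(path[0], path[-1], n)    # equals the endpoint transposition
```

Intermediate nodes of the path are moved down one step and then back, so only the two endpoints are exchanged, matching the case-by-case verification above.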

We now construct a path from µ to µ ◦ τi1,i2. We first apply the cycle κ for n − i1 + 1 periods, so that (µ ◦ κn−i1+1)(1) = j1.

If i2 ≤ i1 − 1, then j2 = (µ ◦ κn−i1+1)(i2 + n − i1 + 1), and, because αj1(j2) > 0, we can apply the transposition τi2+n−i1+1 to obtain (µ ◦ κn−i1+1 ◦ τi2+n−i1+1)(1) = j2 and (µ ◦ κn−i1+1 ◦ τi2+n−i1+1)(i2 + n − i1 + 1) = j1. Applying the cycle κ again for i1 − 1 periods, we finally have: (µ ◦ κn−i1+1 ◦ τi2+n−i1+1 ◦ κi1−1)(i1) = j2, (µ ◦ κn−i1+1 ◦ τi2+n−i1+1 ◦ κi1−1)(i2) = j1 and (µ ◦ κn−i1+1 ◦ τi2+n−i1+1 ◦ κi1−1)(i) = µ(i) for all i ≠ i1, i2.

If i2 ≥ i1 + 1, then j2 = (µ ◦ κn−i1+1)(i2 − i1 + 1), and we now apply the transposition τi2−i1+1 followed by i1 − 1 cycles to finally obtain µ ◦ τi1,i2.

A similar construction can be applied to construct a path from µ to any composition of µ with a sequence of transpositions, concluding the proof of the Theorem.

Proof of Lemma 3: Consider two type profiles θ, θ′ such that θ(k) = θ′(k) for all k ≠ i and θ(i) ≠ θ′(i). For any j and any ν, ∑_{k | ν(k)<j} αj(ν, θ, k) = ∑_{k | ν(k)<j} αj(ν, θ′, k) = 1. By independence, αj(ν, θ, k) = αj(ν, θ′, k) for any k ≠ i, so that αj(ν, θ, i) = αj(ν, θ′, i) for any ν, j. By independence again, for any θ′′ such that θ′′(i) = θ′(i), αj(ν, θ′, i) = αj(ν, θ′′, i), concluding the proof of the Lemma.

Proof of Theorem 5: We distinguish between two cases. (a) Consider first an independent rule α which is not the replacement rule. Then there exist an object j, an agent i > 1 and a truncated assignment ν such that αj(ν, i) > 0 (where we have used Lemma 3 to eliminate the dependence of the assignment on the type). Let θ be such that θ(1) = m and θ(i) = 1 for i = 2, ..., n.

Consider the following alternative assignment rule α′:

α′j′(ν′, θ′, i′) = αj′(ν′, θ′, i′) if j′ ≠ j or i′ ≠ i or ν′ ≠ ν or θ′ ≠ θ,
α′j(ν, θ, i) = 0,
α′j(ν, θ, 1) = αj(ν, θ, 1) + αj(ν, θ, i).

The assignment rule α′ agrees with the assignment rule α everywhere except for the assignment of object j with truncated assignment ν and type profile θ, where α′ shifts the positive weight assigned to agent i to the entering agent.

Now consider a state s. If the type profile at state s does not satisfy θ(i) = 1 for i = 1, ..., n − 1, the two assignment rules are identical. Similarly, if the assignment µ at state s is such that the assignment rule α does not put positive weight on a path where the truncated reassignment ν is reached, the two assignment rules are identical.

Otherwise, whenever the type profile θ(1) = m, θ(i) = 1, i = 2, ..., n is drawn, assignment rule α results in a total surplus

∑_{i≠1} σ(1, µ(i)) + σ(m, µ(1)),

where by construction µ(1) < j. Assignment rule α′ instead leads to a total surplus

∑_{i≠1} σ(1, µ(i)) + σ(m, j).

The difference between the two surpluses is given by:

σ(m, j) + σ(1, µ(1)) − σ(m, µ(1)) − σ(1, j),

which, by strict supermodularity of the surplus function σ, is positive. For any other type profile the two assignments are identical, showing that the assignment rule α is dominated at state s by the assignment rule α′, and cannot be efficient.

We now consider case (b), where the assignment rule α is the replacement rule. Consider an assignment rule α′ which only differs from α for a type profile θ such that θ(i) = m for i = 2, ..., n and θ(1) = 1. Hence, by the same reasoning as above, the replacement rule is strictly dominated by a rule which shifts the weight αj(ν, θ, 1) to an agent with the highest type, say agent n. This shows that the replacement rule cannot be efficient.

6 References

1. Abdulkadiroglu, A. and T. Sonmez (1999) "House allocation with existing tenants," Journal of Economic Theory 88, 233-260.

2. Abdulkadiroglu, A. and T. Sonmez (2003) "School choice: A mechanism design approach," American Economic Review 93, 729-747.

3. Bartholomew, J. (1982) Stochastic Models for Social Processes, New York: Wiley.

4. Carmichael, L. (1983) "Firm-specific human capital and promotion ladders," Bell Journal of Economics 14, 251-258.

5. Chiappori, P.A., B. Salanie and J. Valentin (1999) "Early starters versus late beginners," Journal of Political Economy 107, 731-760.

6. Fudenberg, D. and J. Tirole (1991) Game Theory, Cambridge, MA: MIT Press.

7. Gale, D. and L. Shapley (1962) "College admissions and the stability of marriage," American Mathematical Monthly 69, 9-14.

8. Isaacson, D. and R. Masden (1976) Markov Chains: Theory and Applications, New York: John Wiley.

9. Iyer, L. and A. Mani (2008) "Traveling agents? Political change and bureaucratic turnover in India," mimeo., Harvard Business School and University of Warwick.

10. Kemeny, J. and L. Snell (1960) Finite Markov Chains, New York: Van Nostrand.

11. Kleinrock, L. (1975) Queuing Systems I. Theory, New York: Wiley.

12. Kleinrock, L. (1976) Queuing Systems II. Computer Applications, New York: Wiley.

13. Lazear, E. (1995) Personnel Economics, Cambridge, MA: The MIT Press.

14. Lee, S. (2004) "Seniority as an employment norm: The case of layoffs and promotions in the US employment relationship," Socio-Economic Review 2, 65-86.

15. Moulin, H. and R. Strong (2002) "Fair queuing and other probabilistic allocation methods," Mathematics of Operations Research 27, 1-31.

16. Moulin, H. and R. Strong (2003) "Filling a multicolor urn: An axiomatic analysis," Games and Economic Behavior 45, 242-269.

17. Nilakantan, K. and B.G. Ragavendhra (2005) "Control aspects in proportionality Markov manpower systems," Applied Mathematical Modelling 29, 85-116.

18. Roth, A. and M. Sotomayor (1990) Two-sided Matching: A Study in Game-Theoretic Modeling and Analysis, Cambridge: Cambridge University Press.

19. Shapley, L. and H. Scarf (1974) "On cores and indivisibilities," Journal of Mathematical Economics 1, 23-37.

20. Thomson, W. (2007) "Fair allocation rules," mimeo., University of Rochester.

21. Vajda, S. (1978) Mathematics of Manpower Planning, Chichester: John Wiley.

7 Tables and Figures

Seniority: 7 points per "echelon" (on average 3 years of service); + 49 points after 25 years of service
On-the-job seniority: 10 points per year; + 25 points every 4 years
Current job: 50 points if first assignment; if assigned in a "violent" high school, + 300 points after 4 years
Family circumstances: 150.2 points if spouse is transferred; + 75 points per child

Table 1: Priority points for high school teachers in France

Figure 1: Markov process for three agents

Figure 2: Seniority assignment for three agents

Figure 3: Rank assignment for three agents

Figure 4: Replacement assignment for three agents

Figure 5: Transitions between states for Example 2

Figure 6: Transitions between states for Example 3
