Efficient approximately fair item allocation

When allocating objects among people with different preferences, two major goals are Pareto efficiency and fairness. Since the objects are indivisible, a fair allocation may not exist. For example, when there is a single house and two people, every allocation of the house will be unfair to one person. Therefore, several common approximations have been studied, such as maximin-share fairness (MMS), envy-freeness up to one item (EF1), proportionality up to one item (PROP1), and equitability up to one item (EQ1). The problem of efficient approximately fair item allocation is to find an allocation that is both Pareto-efficient (PE) and satisfies one of these fairness notions. The problem was first presented in 2016 and has attracted considerable attention since then.

Setting
There is a finite set of objects, denoted by M. There are n agents. Each agent i has a value-function Vi, that assigns a value to each subset of objects. The goal is to partition M into n subsets, X1,...,Xn, and give each subset Xi to agent i, such that the allocation is both Pareto-efficient and approximately fair. There are various notions of approximate fairness.

Efficient approximately envy-free allocation
An allocation is called envy-free (EF) if every agent believes that the value of his/her share is at least as high as that of any other agent. It is called envy-free up to one item (EF1) if, for every two agents i and j, there is a set of at most one item whose removal from the bundle of j ensures that i does not envy j. Formally:

 * $$\forall i,j: \exists Y\subseteq X_j: |Y|\leq 1, V_i(X_i) \geq V_i(X_j\setminus Y)$$

Some early algorithms could find an approximately fair allocation that satisfies a weak form of efficiency, but not PE. This raised the question of finding an allocation that is both PE and EF1.
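For additive valuations, the EF1 condition above can be checked directly from the definition. A minimal sketch (the function name and data layout are illustrative, not from any specific paper):

```python
def is_ef1(valuations, bundles):
    """Check EF1 for additive valuations.
    valuations[i][o] = value of object o to agent i;
    bundles[i] = list of objects held by agent i."""
    n = len(bundles)
    for i in range(n):
        value = lambda bundle: sum(valuations[i][o] for o in bundle)
        own = value(bundles[i])
        for j in range(n):
            if i == j:
                continue
            # Removing the item of j that i values most is the best choice of Y.
            best = max((valuations[i][o] for o in bundles[j]), default=0)
            if own < value(bundles[j]) - best:
                return False
    return True

# Two agents, one house (object 0): the allocation is EF1 but not envy-free.
print(is_ef1([[10], [10]], [[0], []]))  # True
```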
 * The round-robin procedure returns a complete EF1 allocation with additive utilities. The envy-graph procedure returns a complete EF1 allocation for arbitrary monotone preference relations. Both are guaranteed to return an allocation with no envy-cycles. However, the allocation is not guaranteed to be Pareto-efficient.
 * The Approximate-CEEI mechanism returns a partial EF1 allocation for arbitrary preference relations. It is PE w.r.t. the allocated objects, but not PE w.r.t. all objects (since some objects may remain unallocated).
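The round-robin procedure mentioned above can be sketched as follows for additive valuations (the agent ordering and tie-breaking rule are illustrative choices, not fixed by the procedure):

```python
def round_robin(valuations, n_objects):
    """Agents take turns picking their most-valued remaining object.
    valuations[i][o] = value of object o to agent i. Returns the bundles."""
    n = len(valuations)
    remaining = set(range(n_objects))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        i = turn % n
        # Agent i picks the remaining object it values most.
        pick = max(remaining, key=lambda o: valuations[i][o])
        bundles[i].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

print(round_robin([[5, 1, 3], [2, 4, 6]], 3))  # [[0, 1], [2]]
```

The returned allocation is EF1 but, as the text notes, not necessarily Pareto-efficient.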

Maximum Nash Welfare rule
Caragiannis, Kurokawa, Moulin, Procaccia, Shah and Wang were the first to prove the existence of a PE+EF1 allocation. They proved that, when all agents have positive additive utilities, every allocation that maximizes the product of utilities (also known as the Nash welfare) is EF1. Since it is obvious that the maximizing allocation is PE, the existence of PE+EF1 allocation follows.
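On small instances, the max-product (Nash welfare) allocation can be found by brute-force enumeration, which makes it easy to verify the EF1 claim experimentally. A minimal sketch (exponential in the number of objects; function names are illustrative):

```python
from itertools import product

def max_nash_welfare(valuations, n_objects):
    """Enumerate all allocations and return one maximizing the product of
    utilities. valuations[i][o] = value of object o to agent i (additive)."""
    n = len(valuations)
    best, best_bundles = -1, None
    for assignment in product(range(n), repeat=n_objects):
        bundles = [[o for o in range(n_objects) if assignment[o] == i]
                   for i in range(n)]
        utils = [sum(valuations[i][o] for o in bundles[i]) for i in range(n)]
        nash = 1
        for u in utils:
            nash *= u
        if nash > best:
            best, best_bundles = nash, bundles
    return best_bundles

# Here the max-product allocation gives each agent its favorite good.
print(max_nash_welfare([[3, 1], [1, 3]], 2))  # [[0], [1]]
```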

While the max-product allocation has desirable properties, it cannot be computed in polynomial time: finding a max-product allocation is NP-hard and even APX-hard. This led to various works attempting to approximate the maximum product, with improving approximation factors:

 * Cole and Gkatzelis presented a 2.89-factor approximation.
 * Anari, Gharan, Saberi and Singh presented an e-factor approximation.
 * Cole, Devanur, Gkatzelis, Jain, Mai, Vazirani and Yazdanbod presented a 2-factor approximation.

However, these approximations do not guarantee EF1. Some more recent algorithms guarantee both approximate max-product and fairness:
 * Barman, Krishnamurthy and Vaish present an algorithm that guarantees PE, (1+epsilon)-EF1 and a 1.45 approximation to the max product, in pseudopolynomial time (see the increasing-price algorithm below).
 * Garg and McGlaughlin present an algorithm that guarantees PE, PROP1, and a 2-approximation to the max product, in strongly polynomial time.
 * Caragiannis, Gravin and Huang present an algorithm that guarantees EFX, PROP1, and a 2.9-approximation to the max product, by discarding some goods (they also show existence of a 2-approximate allocation).

The max-product solution is particularly appealing when the valuations are binary (the value of each item is either 0 or 1):


 * Amanatidis, Birmpas, Filos-Ratsikas, Hollender and Voudouris prove that with binary valuations the max-product solution is not only EF1 but also EFX (envy-free up to any good). This holds whenever the value of each item can take one of two values - not necessarily 0 or 1. With general additive valuations, max-product does not imply EFX but implies a natural generalization of it.
 * Halpern, Procaccia, Psomas and Shah prove that with binary valuations the max-product rule with lexicographic tie-breaking can be computed in polynomial time, and it is also group-strategyproof.

Non-additive valuations
If the agents' utilities are not additive, the max-product solution is not necessarily EF1; but if the agents' utilities are at least submodular, the max-product solution satisfies a weaker property called Marginal-Envy-Freeness except-1-item (MEF1): each agent i values his bundle at least as much as the marginal utility he would gain by adding the bundle of j, with at most one item removed from it, to his own bundle. Formally:
 * $$\forall i,j: \exists Y\subseteq X_j: |Y|\leq 1, V_i(X_i) \geq V_i(X_i \cup (X_j\setminus Y)) - V_i(X_i)$$
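The MEF1 condition can be checked directly against a generic set-valuation function, which also covers non-additive (e.g. submodular) utilities. A sketch with illustrative names; the "coverage" valuation in the example is a standard submodular function:

```python
def is_mef1(value_fns, bundles):
    """value_fns[i] maps a frozenset of objects to agent i's value.
    Checks: V_i(X_i) >= V_i(X_i | (X_j - Y)) - V_i(X_i) for some |Y| <= 1."""
    n = len(bundles)
    for i in range(n):
        Vi, Xi = value_fns[i], frozenset(bundles[i])
        for j in range(n):
            if i == j:
                continue
            Xj = frozenset(bundles[j])
            # Y is either empty or a single item of X_j.
            candidates = [Xj] + [Xj - {y} for y in Xj]
            if not any(Vi(Xi) >= Vi(Xi | rest) - Vi(Xi) for rest in candidates):
                return False
    return True

# Submodular "coverage" valuation: value = number of distinct features covered.
features = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}}
cover = lambda S: len(set().union(*(features[o] for o in S)) if S else set())
print(is_mef1([cover, cover], [[0, 1], [2]]))  # True
```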

Similar approximations have been found for more general utility functions:


 * Bei, Garg, Hoefer and Mehlhorn and Anari, Mai, Gharan and Vazirani study markets with multiple units of each item-kind, where the valuations are separable piecewise-linear concave. This means that the utility of a bundle with different item-kinds is the sum of utilities for each single item-kind (this is the meaning of "separable"), but for each item-kind, the valuation has decreasing marginal utilities (this is the meaning of "piecewise-linear concave"). They give a 2-approximation to the max-product.
 * Ortega studies a multi-unit market with binary valuations. He proves that the egalitarian rule is Lorenz dominant (a property stronger than leximin-optimality), unique in utilities, and group-strategyproof.
 * Garg, Hoefer and Mehlhorn study budget-additive valuations - a subclass of submodular utilities. They give a (2.404 + ε)-approximation to the max-product in time polynomial in the input size and 1/ε.
 * Benabbou, Chakraborty, Igarashi and Zick study submodular utilities with binary marginal gains (i.e., each item adds either 0 or 1 to the value of each bundle). They prove that, with such valuations, both the max-product and the leximin allocations are EF1 and maximize the utilitarian welfare (sum of utilities).
 * Babaioff, Ezra and Feige also study submodular utilities with binary ("dichotomous") marginal gains. They present a deterministic truthful mechanism that outputs a Lorenz dominant allocation, which is hence EFX and max-product.

Mixed valuations
Martin and Walsh show that, with "mixed manna" (additive valuations that may be both positive and negative), maximizing the product of utilities (or minimizing the product of minus the utilities) does not ensure EF1. They also prove that an EFX3 allocation may not exist even with identical utilities. However, with tertiary utilities, EFX and PO allocations, or EFX3 and PO allocations, always exist; and with identical utilities, EFX and PO allocations always exist. For these cases there are polynomial-time algorithms.

Increasing price algorithm
Barman, Krishnamurthy and Vaish presented a pseudo-polynomial time algorithm for finding PE+EF1 allocations for positive additive valuations. They proved the following results.


 * The algorithm finds a PE+EF1 allocation in time O(poly(m,n,vmax)), where m is the number of objects, n the number of agents, and vmax the largest value of an item (all the valuations are integers).
 * The same algorithm provides a 1.45 approximation to the maximum Nash welfare.
 * The analysis also proves the existence of an allocation which is both EF1 and fractionally Pareto-optimal (fPO).

Basic concepts
Their algorithm is based on the notion of competitive equilibrium in a Fisher market. It uses the following concepts.


 * Approximate EF1 allocation: Given a constant e > 0, an allocation is e-EF1 if it satisfies the EF1 condition up to a multiplicative constant of (1+e). Formally:
 * $$\forall i,j: \exists Y\subseteq X_j: |Y|\leq 1, (1+e)\cdot V_i(X_i) \geq V_i(X_j\setminus Y)$$
 * Price-vector: a vector assigning a price (a real number) to each item.
 * Bang-for-buck ratio: for an agent i and an object j, the ratio of the agent's value for the object to the object's price: vij / pj.
 * Maximum bang-for-buck (MBB) set: for an agent i, it is the set of objects maximizing his bang-for-buck ratio (given a price-vector p).
 * MBB allocation: an allocation in which each agent receives only objects from his MBB set. In an MBB allocation, each agent maximizes his utility given his budget (but MBB allocation is a stronger condition). The first welfare theorem implies that an MBB allocation is fractionally Pareto optimal.
 * Price-envy-free (pEF) allocation: an allocation X in which, for every two agents i, j, the price of Xi (called the "income" of i) is at least as large as the price of Xj. This implies that all incomes are identical. Moreover, an allocation that is both MBB and pEF is envy-free, since each agent maximizes his utility given his income, and all other agents have the same income.
 * Price-envy-free-except-one (pEF1) allocation: an allocation in which, for every two agents i, j, the price p(Xi) is at least as large as the price of Xj without its most expensive item. This does not imply that the incomes are identical. However, an allocation that is both MBB and pEF1 is also EF1.
 * e-pEF1 allocation, for some constant e > 0: an allocation in which, for every two agents i, j, the product (1+e)·p(Xi) is at least as large as p(Xj) without its most expensive item. Note that the e-pEF1 condition is weaker when e is larger. In particular, a pEF1 allocation is e-pEF1 for every e > 0, but the opposite is not true. An allocation that is both MBB and e-pEF1 is also e-EF1.
 * Least-spender: Given an allocation and a price-vector, it is the agent i such that p(Xi) is smallest (ties are broken based on some prespecified ordering of the agents). Note that an allocation is e-pEF1 iff the e-pEF1 condition is satisfied for the least spender (as agent i).
 * MBB Hierarchy of agent i (given allocation X and price-vector p): a tree-structure built in the following way.
 * Put agent i at the root (this is called level 0).
 * Connect to agent i all objects in his MBB set (given the price-vector p).
 * Connect to each object o the agent owning it in X, if it is not yet in the tree (this is called level 1)
 * Connect to each agent at level 1 all objects in his MBB set that are not yet in the tree.
 * Continue adding agents and objects alternately in a similar way: for each agent, add his MBB objects; for each item, add its owning agent.
 * Violator (given allocation X and price-vector p): an agent h that violates the pEF1 condition w.r.t. the least-spender i. So the price of Xh, even when the most expensive item is removed from it, is higher than p(Xi). Similarly, e-violator is an agent that violates the e-pEF1 condition w.r.t. the least-spender.
 * Path-violator (given allocation X and price-vector p, and the MBB hierarchy): an agent h that appears in the MBB hierarchy of the least-spender i, and partially violates the pEF1 condition w.r.t. i. In more detail: suppose there is a path, along the edges of the MBB hierarchy, from the least-spender i to object o, and then an edge from object o to agent h (this means that Xh contains o). If p(Xh \ {o}) > p(Xi), then h is a path-violator. Note that every violator is a path-violator, but the opposite is not true. Similarly, if p(Xh \ {o}) > (1+e)·p(Xi), then h is an e-path-violator.
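Several of these concepts translate directly into code. A sketch computing MBB sets, the least-spender, and the e-pEF1 check applied to the least spender, for additive valuations (names and data layout are illustrative):

```python
def mbb_set(valuation, prices):
    """Objects maximizing an agent's bang-for-buck ratio v_j / p_j."""
    ratios = [v / p for v, p in zip(valuation, prices)]
    best = max(ratios)
    return {j for j, r in enumerate(ratios) if r == best}

def least_spender(bundles, prices):
    """Agent whose bundle has the smallest total price (ties: lowest index)."""
    return min(range(len(bundles)),
               key=lambda i: (sum(prices[o] for o in bundles[i]), i))

def is_eps_pef1(bundles, prices, eps):
    """e-pEF1 holds iff it holds with the least spender i in the role of agent i:
    (1+e) * p(X_i) >= p(X_j minus its most expensive item) for all j."""
    i = least_spender(bundles, prices)
    pi = sum(prices[o] for o in bundles[i])
    for j in range(len(bundles)):
        if j == i or not bundles[j]:
            continue
        pj = sum(prices[o] for o in bundles[j])
        if (1 + eps) * pi < pj - max(prices[o] for o in bundles[j]):
            return False
    return True
```

For example, with prices [2, 3, 4] and bundles [[0], [1, 2]], the allocation is not pEF1 (e = 0) but is 1-pEF1.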

Algorithm
Given a parameter e, the algorithm aims to find an allocation that is both fPO and 3e-pEF1. It proceeds in several phases.

Phase 1: Construct an initial MBB allocation+price (X, p).


 * One way to do it is to give each object o to the agent i who values it the most (breaking ties arbitrarily), and set the price of o to vi,o. This guarantees that, for each agent, the bang-for-buck of all objects in his bundle is exactly 1, and the bang-for-buck of all objects in other bundles is at most 1. Hence the allocation is MBB, so it is also fPO.
 * If the allocation is 3e-pEF1, return it; otherwise proceed to Phase 2.
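Phase 1 can be sketched directly: each object goes to an agent who values it most, priced at that value, so every owned object has bang-for-buck exactly 1 for its owner. (Illustrative sketch for integer additive valuations; ties broken by lowest agent index.)

```python
def initial_mbb_allocation(valuations):
    """valuations[i][o]: integer value of object o to agent i.
    Returns (bundles, prices) forming an MBB allocation."""
    n, m = len(valuations), len(valuations[0])
    bundles = [[] for _ in range(n)]
    prices = [0] * m
    for o in range(m):
        # Give o to an agent valuing it most (ties: lowest index)...
        winner = max(range(n), key=lambda i: valuations[i][o])
        bundles[winner].append(o)
        # ...and price it at that value, so the winner's bang-for-buck is 1.
        prices[o] = valuations[winner][o]
    return bundles, prices

print(initial_mbb_allocation([[5, 1], [2, 4]]))  # ([[0], [1]], [5, 4])
```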

Phase 2: Remove price-envy within MBB hierarchy:


 * Construct the MBB hierarchy of the least-spender, given the current (X, p).
 * For L = 1,2,...:
 * For each agent h in level L of the tree:
 * If h is an e-path-violator along the path:  i → ... → h'  → o → h, then transfer object o from agent h to agent h' (note that the allocation remains MBB). Restart Phase 2.
 * Once there are no more e-path-violators:
 * If the allocation is 3e-pEF1, return it; otherwise proceed to Phase 3.
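The MBB hierarchy underlying Phase 2 is a breadth-first alternation between agents and objects. A self-contained sketch (additive valuations; the owner lookup and MBB computation are inlined here, and all names are illustrative):

```python
from collections import deque

def mbb_hierarchy(root, valuations, bundles, prices):
    """Return the sets of agents and objects reachable from agent `root`
    by alternating agent -> MBB object -> owning agent."""
    owner = {o: i for i, b in enumerate(bundles) for o in b}
    def mbb(i):
        ratios = [valuations[i][o] / prices[o] for o in range(len(prices))]
        best = max(ratios)
        return [o for o, r in enumerate(ratios) if r == best]
    seen_agents, seen_objs = {root}, set()
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for o in mbb(i):              # connect agent i to its MBB objects
            if o in seen_objs:
                continue
            seen_objs.add(o)
            h = owner.get(o)          # connect each object to its owner
            if h is not None and h not in seen_agents:
                seen_agents.add(h)
                queue.append(h)
    return seen_agents, seen_objs
```

With valuations [[5, 5], [2, 4]], bundles [[0], [1]] and prices [5, 4], agent 0's only MBB object is 1 (ratio 1.25), owned by agent 1, so the hierarchy contains both agents.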

Phase 3: Increase the prices. Increase the prices of all objects in the MBB hierarchy by the same multiplicative factor, until one of the following three things happen:


 * 1) The identity of the least-spender changes. This may happen if some agent outside the hierarchy (whose items do not increase in price) becomes the least-spender. In this case we restart at Phase 2.
 * 2) A new object gets added to the MBB hierarchy. This may happen since, when the prices of objects in the hierarchy increase by a factor r, the BB ratio of these objects decreases by a factor r, while the BB ratio of objects outside the hierarchy remains the same. Therefore, when r is sufficiently large, some outside object will become MBB for some hierarchy agent. In this case, too, we restart at Phase 2.
 * 3) The current allocation X becomes 3e-pEF1. This may happen since, when the prices of objects in the hierarchy increase, the income of the least-spender increases, while the income of agents outside the hierarchy remains constant. Therefore, when r is sufficiently large, it is possible that the 3e-pEF1 condition is satisfied w.r.t. the least-spender. In this case, we return the allocation X and the new price p.

Proof of correctness
First, assume that the above algorithm is executed on an instance in which all values are powers of (1+e), for some fixed e>0.


 * The first challenge is to prove that the algorithm terminates. It can be proved that, when all valuations are powers of (1+e), the algorithm terminates in time O(poly(m, n, 1/e, ln(Vmax))), where Vmax is the largest value of an object to an agent.
 * The initial allocation is an MBB allocation, and all operations maintain this condition. Therefore, the returned allocation is MBB, so it is also fPO.
 * By the termination conditions, whenever the algorithm terminates, the returned allocation is 3e-pEF1, so it is also 3e-EF1.

Now, assume that we have an instance with general valuations. We run the above algorithm on a rounded instance, where each valuation is rounded upwards to the nearest power of (1+e). Note that for each agent i and object o, the rounded value Vi '(o) is bounded between Vi(o) and (1+e)Vi(o).
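The rounding step is a one-liner: each value is rounded upward to the nearest power of (1+e), so the rounded value lies between v and (1+e)·v. (Illustrative sketch; with floating-point arithmetic, `log` can misround values that are exactly powers of 1+e.)

```python
import math

def round_up_to_power(v, eps):
    """Round v upward to the nearest power of (1 + eps)."""
    k = math.ceil(math.log(v, 1 + eps))
    return (1 + eps) ** k

v, eps = 10, 0.5
r = round_up_to_power(v, eps)
assert v <= r <= (1 + eps) * v   # rounding changes each value by at most (1+eps)
```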


 * By the previous paragraph, the resulting allocation is fPO and 3e-EF1 with respect to the rounded instance.
 * For every e sufficiently small (particularly, less than $$1/(6 m^3 V_{\max}^4)$$), an allocation that is fPO for the rounded instance is PO for the original instance.
 * By combining the 3e-EF1 guarantee for the rounded instance, with the bound for rounding, we get that the returned allocation satisfies the following approximate-EF1 condition:
 * $$\forall i,j: \exists Y\subseteq X_j: |Y|\leq 1, (1+e)\cdot (1+3e)\cdot V_i(X_i) \geq V_i(X_j\setminus Y)$$
 * For sufficiently small e, the product (1+e)·(1+3e) = 1+4e+3e^2 is at most 1+7e (since 3e^2 ≤ 3e whenever e ≤ 1). Therefore, an allocation that is 3e-EF1 for the rounded instance is 7e-EF1 for the original instance.
 * Since the original valuations are integers, when e is sufficiently small, a 7e-EF1 allocation is also EF1.
 * Thus, the resulting allocation is PO+EF1 for the original instance.

Generalized Adjusted Winner
Aziz, Caragiannis, Igarashi and Walsh extended the condition of EF1 to mixed valuations (objects can have both positive and negative utilities). They presented a generalization of the adjusted winner procedure for finding a PO+EF1 allocation for two agents in time $$O(m^2)$$.

Efficient approximately proportional allocation
An allocation of objects is proportional (PROP) if every agent values his/her share at least 1/n of the value of all items. It is called proportional up to one item (PROP1) if, for every agent i, there is at most one item whose addition to the bundle of i raises its value, in i's eyes, to at least 1/n of the total. Formally, for all i (where M is the set of all goods):


 * $$\exists Y\subseteq M \setminus X_i: |Y|\leq 1, V_i(X_i \cup Y) \geq V_i(M)/n$$.
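For additive valuations over goods, PROP1 can be checked directly from this definition; adding the single most valuable unowned good is the best choice of Y. A minimal sketch (illustrative names):

```python
def is_prop1(valuations, bundles):
    """valuations[i][o] = value of object o to agent i (goods, additive)."""
    n, m = len(bundles), len(valuations[0])
    for i in range(n):
        own = sum(valuations[i][o] for o in bundles[i])
        total = sum(valuations[i])
        # Best single good outside i's bundle (the optimal choice of Y).
        outside = [valuations[i][o] for o in range(m)
                   if o not in set(bundles[i])]
        best = max(outside, default=0)
        if n * (own + best) < total:
            return False
    return True

# Two agents, one house: the empty-handed agent is satisfied
# by hypothetically adding the house to its bundle.
print(is_prop1([[10], [10]], [[0], []]))  # True
```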

The PROP1 condition was introduced by Conitzer, Freeman and Shah in the context of fair public decision making. They proved that, in this case, a PE+PROP1 allocation always exists.

Since every EF1 allocation is PROP1, a PE+PROP1 allocation exists in indivisible item allocation too; the question is whether such allocations can be found by faster algorithms than the ones for PE+EF1.

Barman and Krishnamurthy presented a strongly-polynomial-time algorithm finding a PE+PROP1 allocation for goods (objects with positive utility).

Branzei and Sandomirskiy extended the condition of PROP1 to chores (objects with negative utility). Formally, for all i:


 * $$\exists Y\subseteq X_i: |Y|\leq 1, V_i(X_i \setminus Y) \geq V_i(M)/n$$.

They presented an algorithm finding a PE+PROP1 allocation of chores. The algorithm is strongly polynomial-time if either the number of objects or the number of agents (or both) are fixed.

Aziz, Caragiannis, Igarashi and Walsh extended the condition of PROP1 to mixed valuations (objects can have both positive and negative utilities). In this setting, an allocation is called PROP1 if, for each agent i, if we remove one negative item from i's bundle, or add one positive item to i's bundle, then i's utility is at least 1/n of the total. Their Generalized Adjusted Winner algorithm finds a PE+EF1 allocation for two agents; such an allocation is also PROP1.

Aziz, Moulin and Sandomirskiy presented a strongly polynomial-time algorithm for finding an allocation that is fractionally PE (stronger than PE) and PROP1, with general mixed valuations, even if the number of agents or objects is not fixed, and even if the agents have different entitlements.

Efficient approximately equitable allocation
An allocation of objects is called equitable (EQ) if the subjective value of all agents is the same. The motivation for studying this notion comes from experiments showing that human subjects prefer equitable allocations to envy-free ones. An allocation is called equitable up to one item (EQ1) if, for every two agents i and j, if at most one item is removed from the bundle of j, then the subjective value of i is at least that of j. Formally, for all i, j:
 * $$\exists Y\subseteq X_j: |Y|\leq 1, V_i(X_i) \geq V_j(X_j\setminus Y)$$

A stronger notion is equitable up to any item (EQx): for every two agents i and j, if any single item is removed from the bundle of j, then the subjective value of i is at least that of j. Formally, for all i, j:
 * $$\forall y\in X_j: V_i(X_i) \geq V_j(X_j\setminus \{y\})$$

EQx allocations were first studied by Gourves, Monnot and Tlilane, who used a different term: "near jealousy-free". They proved that a partial EQx allocation always exists, even with the additional requirement that the union of all allocated goods is a basis of a given matroid. They used an algorithm similar to the envy-graph procedure. Suksompong proved that an EQ1 allocation exists even with the additional requirement that all bundles must be contiguous subsets of a line.
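Both conditions can be checked directly; note that, unlike EF1, the right-hand side uses agent j's own valuation of j's bundle. A sketch for additive valuations (illustrative names):

```python
def subjective(valuations, bundles, i):
    """Agent i's value for its own bundle."""
    return sum(valuations[i][o] for o in bundles[i])

def is_eq1(valuations, bundles):
    """EQ1: V_i(X_i) >= V_j(X_j - Y) for some |Y| <= 1, for all i, j."""
    n = len(bundles)
    for i in range(n):
        vi = subjective(valuations, bundles, i)
        for j in range(n):
            if i == j:
                continue
            vj = subjective(valuations, bundles, j)
            # Removing j's (self-)most-valued item is the optimal choice of Y.
            drop = max((valuations[j][o] for o in bundles[j]), default=0)
            if vi < vj - drop:
                return False
    return True

def is_eqx(valuations, bundles):
    """EQx: V_i(X_i) >= V_j(X_j - {y}) for every y in X_j."""
    n = len(bundles)
    for i in range(n):
        vi = subjective(valuations, bundles, i)
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            vj = subjective(valuations, bundles, j)
            # Worst case: only j's (self-)least-valued item is removed.
            drop = min(valuations[j][o] for o in bundles[j])
            if vi < vj - drop:
                return False
    return True
```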

Freeman, Sikdar, Vaish and Xia proved the following stronger results:


 * When all valuations are strictly positive, a PE+EQx allocation always exists, and there is a pseudopolynomial time algorithm that finds a PE+EQ1 allocation.
 * When some valuations may be zero, even a PE+EQ1 allocation may not exist, and deciding whether a PE+EQ/EQ1/EQx exists is strongly NP-hard.
 * A PE+EQ1+EF1 allocation may not exist. Deciding whether it exists is strongly NP-hard in general, but polynomial-time solvable with binary (0 or 1) valuations.

Algorithms for a small number of agents
Bredereck, Kaczmarczyk, Knop and Niedermeier study a setting where there are few agents (small n) and few item-types (small m), the utility per item-type is upper-bounded (by $$u_{\max}$$), but there can be many items of each type. For this setting, they prove the following meta-theorem (Theorem 2): given an efficiency criterion E and a fairness criterion F, if $$n+u_{\max}+m$$ is fixed, then it is possible to decide in polynomial time whether there exists an allocation that is both E-efficient and F-fair, as long as E and F satisfy the following properties:


 * Fairness: The question of whether an F-fair allocation exists can be modeled by an integer linear program with a fixed dimension.
 * Efficiency: The question of whether there exists an allocation that E-dominates another given allocation can be modeled by an integer program whose dual tree-depth, and the absolute value of the largest coefficient, are upper-bounded by some function $$f(n,u_{\max})$$.

Then, they prove that several common fairness and efficiency criteria satisfy these properties, including:

 * Fairness notions: envy-freeness, EF1, EFx, graph-EF (where agents envy only their neighbors on a fixed graph), egalitarian, maximin-share, and average-group-envy-freeness (where each group of agents values its share as the arithmetic mean of the members' utilities).
 * Efficiency notions: Pareto-efficiency, graph Pareto-efficiency (where Pareto-domination considers only exchanges between neighbors on a fixed graph), and group Pareto-efficiency. An allocation X is k-group-Pareto-efficient (GPEk) if there is no other allocation Y that is at least as good (by the arithmetic mean of utilities) for all groups of size k, and strictly better for at least one group of size k. Note that GPE1 is equivalent to Pareto-efficiency, and GPEn is equivalent to utilitarian-maximality, since for the grand group of size n the arithmetic-mean utility is proportional to the sum of all agents' utilities. For all k, GPEk+1 implies GPEk.

The runtime of their algorithm is polynomial in the input size (in bits) times $$d^{2.5 d}$$, where d is the number of variables in the resulting ILP: $$d = m(n+1) + m(n+1)\cdot(4\cdot (4n\cdot u_{\max}+1)^n)^{m(n+1)}$$.

They later developed more practical algorithms for some of these problems.