Combinatorial participatory budgeting

Combinatorial participatory budgeting, also called indivisible participatory budgeting or budgeted social choice, is a problem in social choice. There are several candidate projects, each of which has a fixed cost. There is a fixed budget, which cannot cover all of these projects. Each voter has different preferences regarding the projects. The goal is to find a budget-allocation: a subset of the projects, with total cost at most the budget, that will be funded. Combinatorial participatory budgeting is the most common form of participatory budgeting.

Combinatorial PB can be seen as a generalization of committee voting: committee voting is a special case of PB in which the "cost" of each candidate is 1 and the "budget" is the committee size. This assumption is often called the unit-cost assumption. The setting in which the projects are divisible (i.e., each project can receive any amount of money) is also called portioning or fractional social choice.

PB rules have other applications besides proper budgeting. For example:


 * Selecting validators in consensus protocols, such as blockchains;
 * Selecting web pages that should be displayed in response to user queries;
 * Locating public facilities;
 * Improving the quality of genetic algorithms.

Welfare-maximization rules
One class of rules aims to maximize a given social welfare function. In particular, the utilitarian rule aims to find a budget-allocation that maximizes the sum of agents' utilities. With cardinal voting, finding a utilitarian budget-allocation requires solving a knapsack problem. It is NP-hard, but can be solved reasonably fast in practice. There are also greedy algorithms that attain a constant-factor approximation of the maximum welfare.
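A greedy utilitarian rule of this kind can be sketched as follows: fund projects in decreasing order of total utility per unit of cost. This is a generic utility-per-cost heuristic, not any specific published algorithm, and all names are illustrative.

```python
def greedy_utilitarian(costs, utils, budget):
    """Greedy sketch for utilitarian PB: fund projects in decreasing
    order of total utility per unit of cost, skipping projects that no
    longer fit in the remaining budget.
    costs: {project: cost}; utils: one {project: utility} dict per voter."""
    total = {p: sum(u.get(p, 0) for u in utils) for p in costs}
    funded, left = [], budget
    for p in sorted(costs, key=lambda p: total[p] / costs[p], reverse=True):
        if costs[p] <= left:
            funded.append(p)
            left -= costs[p]
    return funded

# Example: two voters, three projects, budget 3.
costs = {'x': 2, 'y': 1, 'z': 2}
utils = [{'x': 3}, {'y': 2, 'z': 1}]
print(greedy_utilitarian(costs, utils, 3))  # ['y', 'x']
```

Note that such greedy rules only approximate the maximum welfare; an exact solution requires solving the underlying knapsack problem.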

With approval voting, there is an additional complication: we have to map the agents' approvals to utilities. Such a map is known as a satisfaction function. Several satisfaction functions are known:

 * Chamberlin-Courant satisfaction assumes that an agent's utility is 1 if at least one of his approved projects is funded, and 0 otherwise.
 * Cardinality-based satisfaction assumes that an agent's utility equals the number of his approved projects that are funded.
 * Cost-based satisfaction assumes that an agent's utility equals the total cost of his approved projects that are funded.
 * Decreasing-normalized-satisfaction (DNS) functions are satisfaction functions that are weakly increasing with the cost, but increase at a rate no faster than the cost. Cardinality-based satisfaction is one extreme, in which satisfaction does not change with the cost; cost-based satisfaction is the other extreme, in which satisfaction grows exactly like the cost. In between, there are satisfaction functions such as the square root or the logarithm of the cost.
 * Approval-based satisfaction assumes that there is a function sat that maps a set of projects to a positive number, and an agent's utility equals sat(approved-funded-projects). All previous satisfaction functions are special cases of approval-based satisfaction functions.

Talmon and Faliszewski study greedy algorithms for cardinality-based, cost-based and Chamberlin-Courant utilities, as resolute (single-valued) rules. Baumeister, Boes and Seeger complement their work by studying irresolute (multi-valued) rules, and introduce hybrid greedy rules.
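The satisfaction functions above can be stated compactly in code. The following is a minimal sketch for a single approval voter; the function and parameter names are illustrative.

```python
import math

def satisfaction(approved, funded, costs, kind="cardinality"):
    """Utility of one approval voter under the satisfaction functions
    described above (names are illustrative, not a standard API)."""
    won = approved & funded          # approved projects that got funded
    if kind == "chamberlin-courant":
        return 1 if won else 0
    if kind == "cardinality":
        return len(won)
    if kind == "cost":
        return sum(costs[p] for p in won)
    if kind == "sqrt-cost":          # an intermediate DNS function
        return sum(math.sqrt(costs[p]) for p in won)
    raise ValueError(kind)

costs = {'x': 4, 'y': 1, 'z': 9}
approved, funded = {'x', 'y', 'z'}, {'x', 'y'}
print(satisfaction(approved, funded, costs, "cardinality"))  # 2
print(satisfaction(approved, funded, costs, "cost"))         # 5
print(satisfaction(approved, funded, costs, "sqrt-cost"))    # 3.0
```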

Sreedurga, Bhardwaj and Narahari study the egalitarian rule, which aims to maximize the smallest utility of an agent. They prove that finding an egalitarian budget-allocation is NP-hard, but give pseudo-polynomial time and polynomial-time algorithms when some natural parameters are fixed. They propose an algorithm that achieves an additive approximation for restricted spaces of instances, and show that it gives exact optimal solutions on real-world datasets. They also prove that the egalitarian rule satisfies a new fairness axiom, which they call maximal coverage.

Annick Laruelle studies welfare maximization under weak ordinal voting, where a scoring rule is used to translate rankings to utilities. She studies greedy approximations to the utilitarian welfare and to the Chamberlin-Courant welfare. She tests three algorithms on real data from the PB in Portugalete in 2018; the results show that the algorithm including project costs in the ballot performs better than the other two.

Knapsack budgeting
Fluschnik, Skowron, Triphaus and Wilker study maximization of utilitarian welfare, Chamberlin-Courant welfare, and Nash welfare, assuming cardinal utilities.

The budgeting method most common in practice is a greedy solution to a variant of the knapsack problem: the projects are ordered in decreasing order of the number of votes they received, and selected one-by-one until the budget is exhausted. Alternatively, if the number of projects is sufficiently small, the knapsack problem may be solved exactly, by selecting a subset of projects that maximizes the total happiness of the citizens. Since this method (called "individually-best knapsack") might be unfair to minority groups, Fluschnik et al. suggest two fairer alternatives:


 * Diverse knapsack - maximizing the number of citizens for whom at least one preferred item is funded (similarly to the Chamberlin-Courant rule for multiwinner voting).
 * Nash-optimal knapsack - maximizing the product of the citizens' utilities.

Unfortunately, for general utility domains, both these rules are NP-hard to compute. However, diverse-knapsack is polynomially-solvable in specific preference domains, or when the number of voters is small.
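For small instances, the different knapsack objectives can be compared by exhaustive search over all feasible subsets. The following is a brute-force sketch (exponential in the number of projects); names are illustrative.

```python
from itertools import combinations
from math import prod

def best_allocation(costs, utils, budget, objective):
    """Brute-force search over all feasible project subsets (practical
    only for a handful of projects), maximizing one of three objectives."""
    def value(S):
        per_voter = [sum(u.get(p, 0) for p in S) for u in utils]
        if objective == "utilitarian":
            return sum(per_voter)
        if objective == "diverse":   # Chamberlin-Courant: count covered voters
            return sum(1 for v in per_voter if v > 0)
        if objective == "nash":      # product of the voters' utilities
            return prod(per_voter)
        raise ValueError(objective)
    feasible = [set(c) for r in range(len(costs) + 1)
                for c in combinations(costs, r)
                if sum(costs[p] for p in c) <= budget]
    return max(feasible, key=value)

# Two voters like a, b; one voter likes only c; budget for two projects.
costs = {'a': 1, 'b': 1, 'c': 1}
utils = [{'a': 1, 'b': 1}, {'a': 1, 'b': 1}, {'c': 1}]
print(sorted(best_allocation(costs, utils, 2, "utilitarian")))  # ['a', 'b']
d = best_allocation(costs, utils, 2, "diverse")
print('c' in d, len(d))  # True 2 -- diverse knapsack covers the minority voter
```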

Proportional approval voting
Proportional approval voting is a rule for multiwinner voting, which was adapted to PB by Pierczynski, Peters and Skowron. It chooses a budget-allocation that maximizes the total score, which is defined by a harmonic function of the cardinality-based satisfaction. It is not very popular, since in the context of PB it does not provide the strong proportionality guarantees that it provides in the context of multiwinner voting.

Sequential purchase rules
In sequential purchase rules, each candidate project should be "bought" by the voters, using some virtual currency. Several such rules are known.

Sequential Phragmen rule
The sequential Phragmen rule generalizes Phragmen's rules for committee elections. Los, Christoff and Grossi describe it for approval ballots as a continuous process:


 * Each voter starts with 0 virtual money, and receives money at a constant rate of 1 per second.
 * At each time t, we define a not-yet-selected project x as affordable if the total money held by voters who approve x is at least the cost of x.
 * At the first time in which some project is affordable, we choose one affordable project y arbitrarily. We add y to the budget-allocation, and reset the virtual money of the voters who approve y (as they "used" their virtual money to fund y).
 * Voters keep earning virtual money and funding projects, until the next fundable project would bring the total cost over the total available budget; at that point, the algorithm stops.
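The continuous process above can be simulated event-by-event: instead of advancing time in small steps, compute for each remaining project the time at which it would become affordable and jump directly to the earliest such event. The following is a minimal sketch for approval ballots, assuming every project has at least one approver; names are illustrative.

```python
def sequential_phragmen(costs, approvers, budget):
    """Event-driven sketch of the continuous Phragmen process.
    approvers: {project: set of voter ids}. Each voter's balance grows
    at rate 1; funding a project resets its supporters' balances."""
    voters = set().union(*approvers.values())
    balance = {v: 0.0 for v in voters}
    funded, spent = [], 0.0
    remaining = set(costs)
    while remaining:
        # time until a project's supporters can jointly cover its cost
        def time_needed(p):
            have = sum(balance[v] for v in approvers[p])
            return max(0.0, (costs[p] - have) / len(approvers[p]))
        p = min(remaining, key=time_needed)
        if spent + costs[p] > budget:
            break  # the next fundable project would exceed the budget
        dt = time_needed(p)
        for v in voters:
            balance[v] += dt           # everyone keeps earning money
        for v in approvers[p]:
            balance[v] = 0.0           # supporters spend their money on p
        funded.append(p)
        spent += costs[p]
        remaining.remove(p)
    return funded

# Voters 1,2 approve x; voter 3 approves y; x is funded first (at t=0.5).
print(sequential_phragmen({'x': 1, 'y': 1}, {'x': {1, 2}, 'y': {3}}, 2))
# ['x', 'y']
```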

Maximin support rule
This rule is an adaptation of the sequential Phragmen rule, which allows a redistribution of the loads in each round. It was first introduced as a multiwinner voting rule by Sanchez-Fernandez, Fernandez-Garcia, Fisteus and Brill. It was adapted to PB by Aziz, Lee and Talmon (though they call it 'Phragmen's rule'). They also present an efficient algorithm to compute it.

Method of equal shares
This method generalizes the method of equal shares for committee elections. The generalization to PB with cardinal ballots was done by Pierczynski, Peters and Skowron.


 * Each voter starts with B/n virtual money, where B is the available budget and n is the number of voters.
 * A project x is called r-affordable if it is possible to cover its cost by taking money from agents such that each agent i pays min(current-money-of-i, r*ui(x)). That is: each agent participates in funding x in proportion to ui(x). The number r represents the "price per unit of utility" (note that the utilities are normalized to the range [0,1]).
 * In the special case of approval ballots, the utilities are 0 or 1, so a project is r-affordable if it is possible to cover its cost by taking money from agents who approve x, such that each agent i pays min(current-money-of-i, r). Agents with less than r money pay only their current balance.
 * We iteratively add to the budget-allocation an r-affordable project for the smallest possible value of r, and decrease the virtual balances of the agents who support this project.
 * When no more r-affordable projects can be found, for any r, the process stops.
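The steps above can be sketched in code for the special case of approval ballots. This is a minimal illustration (not a reference implementation); the smallest r for each project is found by letting the poorest supporters pay their full balance and the rest pay r each.

```python
def equal_shares(costs, approvers, budget):
    """Sketch of the method of equal shares for approval ballots:
    repeatedly fund the project that is r-affordable for the smallest r,
    charging each approving voter min(balance, r)."""
    voters = set().union(*approvers.values())
    balance = {v: budget / len(voters) for v in voters}
    funded = []
    remaining = set(costs)
    while True:
        best = None  # (r, project)
        for p in remaining:
            supp = sorted(approvers[p], key=lambda v: balance[v])
            if sum(balance[v] for v in supp) < costs[p]:
                continue  # not affordable for any r
            # poorest supporters pay their whole balance; once the
            # remaining (k - i) supporters can split the rest equally,
            # that split is the smallest feasible r
            left, k, r = costs[p], len(supp), None
            for i, v in enumerate(supp):
                if balance[v] * (k - i) >= left:
                    r = left / (k - i)
                    break
                left -= balance[v]
            if r is not None and (best is None or r < best[0]):
                best = (r, p)
        if best is None:
            break
        r, p = best
        for v in approvers[p]:
            balance[v] -= min(balance[v], r)
        funded.append(p)
        remaining.remove(p)
    return funded

# Both voters approve x; each also approves a private project.
costs = {'x': 1, 'y': 1, 'z': 1}
approvers = {'x': {1, 2}, 'y': {1}, 'z': {2}}
print(equal_shares(costs, approvers, 2))  # ['x']
```

Note that the sketch stops with money left over (each voter still holds 0.5); practical implementations of equal shares add a completion step to use the remaining budget.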

Other rules
Shapiro and Talmon present a polynomial-time algorithm for finding a budget-allocation satisfying the Condorcet criterion: no proposed change to the selected budget-allocation has majority support among the voters, i.e., the selected budget-allocation is at least as good as any other proposed budget according to a majority of the voters. Their algorithm uses Schwartz sets.

Skowron, Slinko, Szufa and Talmon present a rule called Minimal Transfers over Costs, that is particularly suited for cumulative voting. It can be seen as an adaptation of Single transferable vote.

Aziz and Lee present a rule called expanding approvals rule, that is particularly suited for weak-ordinal ballots. Pierczynski, Peters and Skowron present a variant of the method of equal shares for weak-ordinal ballots, and show that it is an expanding approvals rule.

Fairness considerations
An important consideration in budgeting is to be fair to both majority and minority groups. To illustrate the challenge, suppose that 51% of the population live in the north and 49% live in the south; suppose there are 10 projects in the north and 10 projects in the south, each of them costs 1 unit, and the available budget is 10. The rules currently in use, such as knapsack budgeting, will choose the 10 projects in the north, and no projects in the south, which is unfair to the southerners.

To partially address this issue, many municipalities perform a separate PB process in each district, to guarantee that each district receives proportional representation. But this introduces other problems. For example, projects on the boundary of two districts can be voted on only by the residents of one district, and thus may not be funded even if they are supported by many people from the other district. Additionally, projects without a specific location, which benefit the entire city, cannot be handled. Moreover, some relevant groups are not geographic, such as parents or bike-riders.

The notion of fairness to groups is formally captured by extending the justified representation criteria from multiwinner voting. The idea of these criteria is that, if there is a sufficiently large group of voters who all agree on a sufficiently large group of projects, then these projects should receive a sufficiently large part of the budget. Formally, given a group N of voters and a set P of projects, we define:


 * N can afford P if $$|N|\cdot B / n \geq \text{cost}(P)$$, that is: N is sufficiently large to fund the projects in P with their proportional share of the budget.
 * The potential utility of N from P is $$\sum_{x\in P} \min_{i\in N} u_i(x)$$. In particular, in case of approval ballots and cardinal satisfaction, the potential utility is simply the number of projects in P approved by all members in N.
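These two definitions translate directly into code. The following is a minimal sketch in which utilities are given as dicts; approval ballots correspond to 0/1 utilities.

```python
def can_afford(N, P, costs, budget, n):
    """N can afford P iff |N| * B / n >= cost(P)."""
    return len(N) * budget / n >= sum(costs[p] for p in P)

def potential_utility(N, P, utils):
    """Sum over projects in P of the minimum utility among members of N."""
    return sum(min(utils[i].get(p, 0) for i in N) for p in P)

costs = {'x': 1, 'y': 1}
utils = {1: {'x': 1}, 2: {'x': 1, 'y': 1}}
print(can_afford({1, 2}, {'x'}, costs, budget=2, n=4))   # True
print(potential_utility({1, 2}, {'x', 'y'}, utils))      # 1
```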

Based on these definitions, many fairness notions have been defined; see Rey and Maly for a taxonomy of the various fairness notions. Below, the chosen budget-allocation (the set of projects chosen to be funded) is denoted by X.

Strong extended justified representation
Strong extended justified representation (SEJR) means that, for every group N of voters that can afford a set P of projects, the utility of every member of N from X is at least as high as the potential utility of N from P. In particular, with approval ballots and cardinal satisfaction, if N can afford P and all members in N approve P, then for each member i in N, at least |P| projects approved by i should be funded.

This property is too strong, even in the special case of approval ballots and unit-cost projects (committee elections). For example, suppose n=4 and B=2. There are three unit-cost projects {x, y, z}. The approval ballots are: {1:x, 2:y, 3:z, 4:xyz}. The group N={1,4} can afford P={x}, and their potential utility from {x} is 1; similarly, {2,4} can afford {y}, and {3,4} can afford {z}. Therefore, SEJR requires that the utility of each of the 4 agents be at least 1. This can only be done by funding all 3 projects; but the budget is sufficient for only 2 projects. Note that this holds for any satisfaction function.
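The impossibility in this example is small enough to verify by brute force, assuming approval (0/1) utilities:

```python
from itertools import combinations

# The instance above: budget 2, three unit-cost projects x, y, z.
approvals = {1: {'x'}, 2: {'y'}, 3: {'z'}, 4: {'x', 'y', 'z'}}

# SEJR would require every voter's utility to be at least 1, i.e. every
# voter has an approved funded project; but only 2 of 3 projects fit.
satisfying = [S for S in combinations('xyz', 2)
              if all(approvals[v] & set(S) for v in approvals)]
print(satisfying)  # [] -- no feasible allocation satisfies SEJR
```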

Fully justified representation
Fully justified representation (FJR) means that, for every group N of voters who can afford a set P of projects, the utility of at least one member of N from X is at least as high as the potential utility of N from P. In particular, with approval ballots and cardinal satisfaction, if N can afford P, and every member in N approves at least k elements of P, then for at least one member i in N, at least k projects approved by i should be funded.

The "at least one member" clause may make the FJR property seem weak. But note that it should hold for every group N of voters who can afford some set P of projects, so it implies fairness guarantees for many individual voters.

An FJR budget-allocation always exists. For example, in the instance above, the allocation {x,y} satisfies FJR: each of the groups {1,4}, {2,4} and {3,4} can afford a single project with potential utility 1, and each of these groups contains a member whose utility from {x,y} is at least 1. The existence proof is based on a rule called Greedy Cohesive Rule (GCR):


 * Iterate over all $$2^n$$ groups of voters. For each group N, search for a set P of projects such that N can afford P and, subject to this, the potential utility of N from P is maximum.
 * If such a pair (N,P) is found, all projects in P are funded, all voters in N are removed, and the process repeats.
 * If no pair (N,P) is found, the algorithm stops.
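The steps above can be sketched directly, with cardinal utilities given as dicts. This is an exponential-time illustration, not an optimized implementation; for simplicity it stops when no group can gain positive potential utility, and all names are illustrative.

```python
from itertools import chain, combinations

def greedy_cohesive_rule(costs, utils, budget):
    """Exponential-time sketch of GCR: repeatedly find a (group, set)
    pair where the group can afford the set, maximizing the group's
    potential utility; fund the set and remove the group."""
    n = len(utils)
    active = set(range(n))
    funded = []

    def subsets(items):
        return chain.from_iterable(combinations(items, r)
                                   for r in range(1, len(items) + 1))

    while active:
        best = None  # (potential utility, group, project set)
        for N in subsets(sorted(active)):
            afford = len(N) * budget / n
            for P in subsets(sorted(set(costs) - set(funded))):
                if sum(costs[p] for p in P) > afford:
                    continue  # N cannot afford P
                u = sum(min(utils[i].get(p, 0) for i in N) for p in P)
                if best is None or u > best[0]:
                    best = (u, N, P)
        if best is None or best[0] <= 0:
            break  # no group gains positive utility (a simplification)
        _, N, P = best
        funded.extend(P)
        active -= set(N)
    return funded

# Two voters, each approving a different unit-cost project, budget 2.
print(greedy_cohesive_rule({'a': 1, 'b': 1}, [{'a': 1}, {'b': 1}], 2))
# ['a', 'b']
```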

It is easy to see that GCR always selects a feasible budget-allocation: whenever it funds a set P of projects, it removes a set N of voters satisfying $$|N|\cdot B / n \geq \text{cost}(P)$$. The total number of voters removed is at most n; hence, the total cost of projects added is at most $$n\cdot B / n = B$$.

To see that GCR satisfies FJR, consider any group N who can afford a set P, and has potential utility u(N,P). Let i be the member of N who was removed first. Voter i was removed as a member in some other voter-group M, who could afford a set Q, with potential utility u(M,Q). When M was removed, N was available; so the algorithm order implies that u(M,Q) ≥ u(N,P). Since the entire Q is funded, each agent in M - including agent i - receives utility at least u(M,Q), which is at least u(N,P). So the FJR condition is satisfied for i. Note that the proof holds even for non-additive monotone utilities.

GCR runs in time exponential in n. Indeed, finding an FJR budget-allocation is NP-hard even when there is a single voter. The proof is by reduction from the knapsack problem. Given a knapsack instance, define a PB instance with a single voter, in which the budget is the knapsack capacity, and for each item with weight w and value v, there is a project with cost w and utility v. Let P be the optimal solution to the knapsack instance. Since cost(P)=weight(P) is at most the budget, P is affordable by the single voter. Therefore, his utility in an FJR budget-allocation should be at least value(P). Therefore, finding an FJR budget-allocation yields a solution to the knapsack instance. The same hardness holds even with approval ballots and cost-based satisfaction, by reduction from the subset sum problem.

Extended justified representation
Extended justified representation (EJR) is a property slightly weaker than FJR. It means that the FJR condition should apply only to groups that are sufficiently "cohesive". In particular, with approval ballots, if N can afford P, and every member in N approves all elements of P, then for at least one member i in N, the satisfaction from i's approved projects in X should be at least as high as the satisfaction from P. In particular:


 * with cardinality-based satisfaction, this means that at least |P| projects approved by i should be funded;
 * with cost-based satisfaction, this means that some projects approved by i, with total cost at least cost(P), should be funded.

Since FJR implies EJR, an EJR budget-allocation always exists. However, similar to FJR, it is NP-hard to find an EJR allocation. The NP-hardness holds even with approval ballots, for any satisfaction function that is strictly increasing with the cost. But with cardinality-based satisfaction and approval ballots, the method of equal shares finds an EJR budget allocation.

Moreover, checking whether a given budget-allocation satisfies EJR is coNP-hard even with unit costs.

It is an open question, whether an EJR or an FJR budget-allocation can be found in time polynomial in n and B (that is, pseudopolynomial time).

Extended justified representation up to one project
EJR up-to one project (EJR-1) means that, for every group N of voters who can afford a set P of projects, there exists at least one member i in N such that one of the following holds:


 * The utility of i from X is at least the potential utility of N from P, or -
 * There exists a project y in P such that, the utility of i from X+y is strictly larger than the potential utility of N from P.

With cardinal ballots, EJR-1 is weaker than EJR; with approval ballots and cardinality-based satisfaction, EJR-1 is equivalent to EJR. This is because all projects' utilities are 0 or 1. Therefore, if adding a single project makes i's utility strictly larger than u(N,P), then without this single project, i's utility is at least u(N,P).

Pierczynski, Skowron and Peters prove that the method of equal shares, which runs in polynomial time, always finds an EJR-1 budget allocation; hence, with approval ballots and cardinality-based satisfaction, it always finds an EJR budget allocation (even for non-unit costs).

EJR up-to any project (EJR-x) means that, for every group N of voters who can afford a set P of projects, there exists at least one member i in N such that, for every unfunded project y in P, the utility of i from X+y is strictly larger than the potential utility of N from P. Clearly, EJR implies EJR-x, which implies EJR-1. Brill, Forster, Lackner, Maly and Peters prove that, for approval ballots and for any satisfaction function with decreasing normalized satisfaction (DNS), if the method of equal shares is applied with that satisfaction function, the outcome is EJR-x.

However, it may not be possible to satisfy EJR-x or even EJR-1 simultaneously for different satisfaction functions: there are instances in which no budget-allocation satisfies EJR-1 simultaneously for both cost-satisfaction and cardinality-satisfaction.

Proportional justified representation
Proportional justified representation (PJR) means that, for every group N of voters who can afford a set P of projects, the group-utility of N from the budget-allocation - defined as $$\sum_{x \text{ is funded }} \max_{i\in N} u_i(x)$$ - is at least the potential utility of N from P. In particular, with approval ballots, if N can afford P, and every member in N approves all elements of P, then the satisfaction from the set of all funded projects that are approved by at least one member of N should be at least as high as the satisfaction from P. In particular:


 * with cardinality-based satisfaction, this means that at least |P| projects from the union of approval sets of all members in N should be funded;
 * with cost-based satisfaction, this means that projects of total cost at least cost(P), from the union of approval sets of all members in N, should be funded (PJR for approval ballots with cost-based satisfaction is equivalent to the property called BPJR-L by Aziz, Lee and Talmon).

Since EJR implies PJR, a PJR budget-allocation always exists. However, similar to EJR, it is NP-hard to find a PJR allocation even for a single voter (using the same reduction from knapsack). Moreover, checking whether a given budget-allocation satisfies PJR is coNP-hard even with unit costs and approval ballots.

Analogously to EJR-x, one can define PJR-x, which means PJR up to any project. Brill, Forster, Lackner, Maly and Peters prove that, for approval ballots, the sequential Phragmen rule, the maximin-support rule, and the method of equal shares with cardinality-based satisfaction all guarantee PJR-x simultaneously for every DNS satisfaction function.

Local JR conditions
Aziz, Lee and Talmon present local variants of the above JR criteria, that can be satisfied in polynomial time. For each of these criteria, they also present a weaker variant where, instead of the external budget-limit B, the denominator is W, which is the actual amount used for funding. Since usually W<B, the W-variants are easier to satisfy than their B-variants.

Ordinal JR conditions
Aziz and Lee extend the justified-representation notions to weak-ordinal ballots, which contain approval ballots as a special case. They extend the notion of a cohesive group to a solid coalition, and define two incomparable proportionality notions: Comparative Proportionality for Solid Coalitions (CPSC) and Inclusion Proportionality for Solid Coalitions (IPSC). CPSC may not always exist, but IPSC always exists and can be found in polynomial time. Equal shares satisfies PSC – a weaker notion than both IPSC and CPSC.

Core fairness
One way to assess both fairness and stability of budget-allocations is to check whether any given group of voters could attain a higher utility by taking their share of the budget and dividing it in a different way. This is captured by the notion of core from cooperative game theory. Formally, a budget-allocation X is in the weak core if there is no group N of voters and an alternative budget-allocation Z, with total cost at most $$|N|\cdot B / n $$, such that all members of N strictly prefer Z to X.

Core fairness is stronger than FJR, which is stronger than EJR. To see the relation between these conditions, note that FJR requires only that, for each group N of voters who can afford a set P of projects, the utility of at least one member of N from X is at least as high as the potential utility of N from P; unlike EJR, it does not require that N be cohesive. The weak core is stronger still, since a blocking group may use its budget-share in any alternative way, not only to fund a commonly-valued set P.

For the setting of divisible PB and cardinal ballots, there are efficient algorithms for computing a core budget-allocation for some natural classes of utility functions.

However, for indivisible PB, the weak core might be empty even with unit costs. For example: suppose there are 6 voters and 6 unit-cost projects, and the budget is 3. The utilities satisfy the following inequalities:


 * u1(a) > u1(b) > 0;   u2(b) > u2(c) > 0;    u3(c) > u3(a) > 0;
 * u4(d) > u4(e) > 0;   u5(e) > u5(f) > 0;     u6(f) > u6(d) > 0.

All other utilities are 0. Any feasible budget-allocation contains either at most one project from {a,b,c} or at most one project from {d,e,f}. W.l.o.g. suppose the former and, by symmetry, suppose that b and c are not funded. Then voters 2 and 3 can take their proportional share of the budget (which is 1) and fund project c, which will give both of them a higher utility. Note that the above example requires only 3 utility values (e.g. 2, 1, 0).

With only 2 utility values (i.e., approval ballots), it is an open question whether a weak-core allocation always exists, with or without unit costs; both with cardinality-satisfaction and cost-satisfaction.

Some approximations to the core can be attained: equal shares attains a multiplicative approximation of $$4 \log (2 \frac{u_{\max}}{u_{\min}})$$. Munagala, Shen, Wang and Wang prove that, for arbitrary monotone utilities, a 67-approximate core allocation exists and can be computed in polynomial time. For additive utilities, a 9.27-approximate core allocation exists, but it is not known if it can be computed in polynomial time.

Jiang, Munagala and Wang consider a different notion of approximation called entitlement-approximation; they prove that a 32-approximate core by this notion always exists.

Priceability
Priceability means that it is possible to assign a fixed budget to each voter, and split each voter's budget among candidates he approves, such that each elected candidate is 'bought' by the voters who approve him, and no unelected candidate can be bought by the remaining money of the voters who approve him. MES can be viewed as an implementation of Lindahl equilibrium in the discrete model, with the assumption that the customers sharing an item must pay the same price for the item. The definition is the same for cardinal ballots as for approval ballots.

A priceable allocation is computed by the rules of equal shares (for cardinal ballots), Sequential Phragmen (for approval ballots), and maximin support (for approval ballots).

With approval ballots, priceability implies PJR-x for cost-based satisfaction. Moreover, a slightly stronger priceability notion implies PJR-x simultaneously for all DNS satisfaction functions. This stronger notion is satisfied by equal shares with cardinality satisfaction, sequential Phragmen, and maximin support.

Laminar fairness
Laminar fairness is a condition on instances of a specific structure, called laminar instances. A special case of a laminar instance is an instance in which the population is partitioned into two or more disjoint groups, such that each group supports a disjoint set of projects. Equal shares and sequential Phragmen are laminar-proportional with unit costs, but not with general costs.

Fair share
Maly, Rey, Endriss and Lackner defined a new fairness notion for PB with approval ballots, that depends only on equality of resources, and not on a particular satisfaction function. The idea was first presented by Ronald Dworkin. They explain the rationale behind this new notion as follows: "we do not aim for a fair distribution of satisfaction, but instead we strive to invest the same effort into satisfying each voter. The advantage is that the amount of resources spent is a quantity we can measure objectively." They define the share of an agent i from the set P of funded projects as: $$\text{share}(P,i) := \sum_{x \in P\cap A_i} \frac{\text{cost}(x)}{|\{ j: x\in A_j \}|}$$. Intuitively, this quantity represents the amount of resources that society put into satisfying i. For each funded project x, the cost of x contributes equally to all agents who approve x. As an example, suppose the budget is 8, there are three projects x,y,z with costs 6,2,2, and four agents with approval ballots xy, xy, x, z.

 * If {x,y} is selected, then the share of voters 1,2 is 6/3+2/2=3; the share of voter 3 is 6/3=2; and the share of voter 4 is 0.
 * If {x,z} is selected, then the share of voters 1,2,3 is 6/3=2, and the share of voter 4 is 2/1=2, so all voters get the same share.

A budget-allocation satisfies fair share (FS) if the share of each agent is at least min(B/n, share(Ai,i)). Obviously, a fair-share allocation may not exist; for example, when there are two agents, each of whom wants a different project, but the budget suffices for only one project. Moreover, even a fair-share up-to one project (FS-1) allocation might not exist. For example, suppose B=5, there are 3 projects of cost 3 each, and the approval ballots are xy, yz, zx. The fair share is 5/3. But in any feasible allocation, at most one project is funded, so there is an agent with no approved funded project. For this agent, even adding one project would increase his share only to 3/2=1.5, which is less than 5/3. Checking whether an FS or an FS-1 allocation exists is NP-hard. On practical instances from pabulib, it is possible to give each agent between 45% and 75% of their fair share; MES rules give a larger fraction than sequential Phragmen.
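The share function can be computed directly. The snippet below is a minimal sketch that reproduces the arithmetic of the bullets above (ballots xy, xy, x, z with costs 6, 2, 2).

```python
def share(funded, approvals, costs, voter):
    """share(P, i): each funded project's cost is split equally among
    all voters who approve it; sum over i's approved funded projects."""
    return sum(costs[p] / sum(1 for j in approvals if p in approvals[j])
               for p in funded if p in approvals[voter])

costs = {'x': 6, 'y': 2, 'z': 2}
approvals = {1: {'x', 'y'}, 2: {'x', 'y'}, 3: {'x'}, 4: {'z'}}
print(share({'x', 'y'}, approvals, costs, 1))  # 3.0 (= 6/3 + 2/2)
print(share({'x', 'y'}, approvals, costs, 4))  # 0
print(share({'x', 'z'}, approvals, costs, 4))  # 2.0
```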

A weaker relaxation, called local fair-share (Local-FS), requires that, for every unfunded project y, there exists at least one agent i who approves y and has share(X+y, i) > B/n. Local-FS can be satisfied by a variant of the method of equal shares in which the contribution of each agent i to funding a project x is proportional to share({x},i), rather than to ui(x).

Another relaxation is the Extended Justified Share (EJS): it means that, for any group of agents N who can afford a set of projects P, such that every member in N approves all elements of P, there is at least one member i in N for whom share(X,i) ≥ share(P,i). It looks similar to EJR, but they are independent: there are instances in which some allocations are EJS and not EJR, while other allocations are EJR and not EJS. An EJS allocation always exists and can be found by the exponential-time Greedy Cohesive Rule, in time $$O(n\cdot 2^m)$$; finding an EJS allocation is NP-hard. But the above variant of MES satisfies EJS up-to one project (EJS-1). It is open whether EJS up-to any project (EJS-x) can be satisfied in polynomial time.

District fairness
District fairness is a fairness notion that focuses on the pre-specified districts in a city. Each district i deserves a budget Bi (a part of the entire city budget), which is usually proportional to the population size in the district. In many cities, there is a separate PB process in each district. It may be more efficient to do a single city-wide PB process, but it is important to do so in a way that does not harm the districts. Thus, a city-wide budget-allocation is district fair if it gives each district i at least the welfare it could get by an optimal allocation of Bi.

Hershkowitz, Kahng, Peters and Procaccia study the problem of welfare maximization subject to district fairness. They show that finding an optimal deterministic allocation is NP-hard, but finding an optimal randomized allocation that is district-fair in expectation can be done efficiently. Moreover, if it is allowed to overspend (by up to 65%), it is possible to find an allocation that maximizes social welfare and guarantees district-fairness up-to one project.

Monotonicity properties
It is natural to expect that, when some parameters of the PB instance change, the outcome of a PB rule would change in a predictable way. In particular:


 * Discount monotonicity says that, if a rule selects project x, and x becomes cheaper, and all other data does not change, the rule would still select x.
 * Limit monotonicity (inspired by resource monotonicity and house monotonicity) says that, if a rule selects project x, and the total budget increases, then the rule would still select x.
 * Merging monotonicity says that, if a rule selects a set X of projects, and all these projects merge into a single project y (such that cost(y)=cost(X), and every agent approves y iff he approves all projects in X), then the rule selects y.
 * Splitting monotonicity says that, if a rule selects a project x, and this project splits into a set of projects Y (such that cost(Y)=cost(x), and every agent approves x iff he approves all projects in Y), then the rule selects at least one project from Y.

Monotonicity properties have been studied for welfare-maximization rules and for their greedy variants.

Strategic properties
A PB rule is called strategyproof if no voter can increase his utility by reporting false preferences. With unit costs, the rule that maximizes the utilitarian welfare (choosing the B projects with the largest number of approvals) is strategyproof. This is not necessarily true with general costs. Goel, Krishnaswamy, Sakshuwong and Aitamurto define an approximation of strategyproofness, called strategyproofness up-to one project: no voter can increase his utility by more than the satisfaction from a single project. They prove that, with approval ballots and cost-based satisfaction, the greedy algorithm that selects projects by the number of approvals is strategyproof up-to one project. The result does not hold for cardinality-based satisfaction.

The utilitarian rule is not proportional even with unit costs and approval ballots. Indeed, even in committee voting, there is a fundamental tradeoff between strategyproofness and proportionality; see multiwinner approval voting.

Constraints on the allocation
Often, there are constraints that forbid some subsets of projects from being the outcome of PB. For example:


 * some projects are incompatible and cannot be funded together;
 * some projects depend on other projects.

Rey, Endriss and de Haan develop a general framework to handle any constraints that can be described by propositional logic, by encoding PB instances as judgement aggregation. Their framework allows dependency constraints as well as category constraints, with possibly overlapping categories.

Fain, Munagala and Shah study a generalization of PB: allocating indivisible public goods, with possible constraints on the allocation. They consider matroid constraints, matching constraints, and packing constraints (which correspond to budget constraints).

Jain, Sornat, Talmon and Zehavi assume that projects are partitioned into disjoint categories, and there is a budget limit on each category, in addition to the general budget limit. They study the computational complexity of maximizing the social welfare subject to these constraints. In general the problem is hard, but efficient algorithms are given for settings with few categories.
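For small instances, the category-constrained problem can be solved by exhaustive search. The sketch below is illustrative only (all names are assumptions); the efficient algorithms in the paper use more involved techniques.

```python
from itertools import combinations

def best_feasible(costs, utils, category, budget, cat_budgets):
    """Exhaustively maximize total utility subject to the overall
    budget and a separate budget for each (disjoint) category.

    category[p] is the category of project p;
    cat_budgets[c] is the budget limit of category c.
    """
    projects = list(costs)
    best, best_util = [], -1
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            if sum(costs[p] for p in subset) > budget:
                continue
            # check every per-category budget limit
            ok = all(
                sum(costs[p] for p in subset if category[p] == c) <= b
                for c, b in cat_budgets.items()
            )
            if ok:
                u = sum(utils[p] for p in subset)
                if u > best_util:
                    best, best_util = list(subset), u
    return best, best_util
```

The per-category limits can exclude subsets that fit the overall budget: two cheap projects in the same category may be jointly infeasible even when their total cost is well below the overall limit.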

Patel, Khan and Louis also assume that projects are partitioned into disjoint categories, with both upper and lower quotas on each category. They present approximation algorithms using dynamic programming.

Chen, Lackner and Maly assume that projects belong to possibly-overlapping categories, with upper and lower quotas on each category.

Motamed, Soeteman, Rey and Endriss show how to handle categorical constraints by reduction to PB with multiple resources.

Extensions
Recently, several extensions of the basic PB model have been studied.

The shortlisting stage
Rey, Endriss and de Haan consider an important stage that occurs, in real-life PB implementations, before the voting stage: choosing a short list of projects that will be presented to the voters. They model this shortlisting stage as a multiwinner voting process in which there is no limit on the total size or cost of the outcome. They analyze several rules that can be used in this stage, to guarantee diversity of the selected projects. They also analyze possible strategic manipulations in the shortlisting stage.

Repeated PB
Lackner, Maly and Rey note that, in reality, PB is not a one-time process, but a repeated process that occurs annually. They extend some fairness notions from perpetual voting to PB. In particular, they assume that voters are partitioned into types, and try to achieve fairness to types over time.

Non-additive utilities
Jain, Sornat and Talmon assume that the projects may be substitute goods or complementary goods, and therefore the utility an agent receives from a set of projects is not necessarily the sum of utilities of each project. They analyze the computational complexity of welfare maximization in this extended setting. In this work, the interaction structure between the projects is fixed and identical for all voters; Jain, Talmon and Bulteau extend the model further by allowing voters to specify individual interaction structures.

Non-fixed costs
Lu and Boutilier consider a model of budgeted social choice, which is very similar to PB. In their setting, the cost of each project is the sum of a fixed cost, and a variable cost that increases with the number of agents "assigned" to the project. Motamed, Soeteman, Rey and Endriss consider multi-dimensional costs, e.g. costs in terms of money, time, and other resources. They extend some fairness properties and strategic properties to this setting, and consider the computational complexity of welfare maximization.

Uncertain costs
Baumeister, Boes and Laussmann assume that the cost of each project is uncertain: it is described by a probability distribution, and the actual cost is revealed only when the project is completed. To reduce risk, projects can be implemented one after the other, so that if the first project costs too much, some later projects can be removed; but this might cause some projects to be implemented very late. They show that it is impossible to both maintain a low risk of over-spending and guarantee that all projects are completed on time. They adapt the fairness criteria, as well as the method of equal shares, to this setting.

Different degrees of funding
Some projects can be funded in different degrees. For example, a new building for youth activities could have 1, 2 or 3 floors; it can be small or large; it can be built from wood or stone; etc. This can be seen as a middle ground between indivisible PB (which allows only two levels) and divisible PB (which allows continuously many levels). Formally, each project j can be implemented in any degree between 0 and tj, where 0 means "not implemented at all" and tj is the maximum possible implementation. Each degree of implementation has a cost. The ballots are ranged-approval ballots: each voter gives, for each project, a minimum and a maximum amount of money that should be put into this project.

Sreedurga considers utilitarian welfare maximization in this setting. He considers four satisfaction functions:


 * Cardinality-based satisfaction assumes that a voter's utility equals the number of projects whose funding lies between his minimum and maximum degree.
 * Cost-based satisfaction assumes that an agent's utility equals the total cost of the projects whose funding lies between his minimum and maximum degree.
 * Capped-cost-based satisfaction assumes that an agent's utility equals the total cost of the projects funded above his minimum degree, where funding above his maximum degree is capped at the maximum degree.
 * Distance-based disutility assumes that an agent's utility is negative: for each project, it equals minus the distance between the project's actual funding and the agent's approval range for that project.
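The first two satisfaction functions can be written down directly. A minimal sketch, where degrees are integers and the data format is an assumption:

```python
def cardinality_satisfaction(funding, min_deg, max_deg):
    """Number of projects whose funded degree lies inside the voter's
    approved range [min_deg[j], max_deg[j]]."""
    return sum(1 for j, f in funding.items()
               if min_deg[j] <= f <= max_deg[j])

def cost_satisfaction(funding, min_deg, max_deg, cost):
    """Total cost of the projects whose funded degree lies inside the
    voter's approved range; cost[j][d] is the cost of degree d of j."""
    return sum(cost[j][f] for j, f in funding.items()
               if min_deg[j] <= f <= max_deg[j])
```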

For cardinality-based satisfaction, maximizing the utilitarian welfare can be done in polynomial time by dynamic programming. For the other satisfaction functions, welfare maximization is NP-hard, but it can be computed in pseudo-polynomial time or approximated by an FPTAS, and it is fixed-parameter tractable with respect to some natural parameters.
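The pseudo-polynomial algorithms in this setting are variants of the multiple-choice knapsack dynamic program, which picks exactly one degree per project. A minimal sketch, assuming integer costs and utility values coming from some satisfaction function (all names are assumptions):

```python
def max_welfare_by_degrees(degree_costs, degree_utils, budget):
    """Dynamic program over (projects processed, remaining budget).

    degree_costs[j][d] and degree_utils[j][d] give the cost and total
    utility of implementing project j at degree d; degree 0 must have
    cost 0, meaning "not implemented at all".  Integer costs assumed.
    """
    # dp[b] = best achievable utility with budget at most b
    dp = [0] * (budget + 1)
    for costs_j, utils_j in zip(degree_costs, degree_utils):
        new_dp = [float("-inf")] * (budget + 1)
        for b in range(budget + 1):
            # choose exactly one degree of project j
            for c, u in zip(costs_j, utils_j):
                if c <= b and dp[b - c] + u > new_dp[b]:
                    new_dp[b] = dp[b - c] + u
        dp = new_dp
    return max(dp)
```

The table has size (number of projects) x (budget), so the running time is polynomial in the numeric value of the budget but not in its bit length, which is what "pseudo-polynomial" means here.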

Additionally, Sreedurga defines several monotonicity and consistency axioms for this setting. He shows that each welfare-maximization rule satisfies some of these axioms, but no rule satisfies all of them.

Open-source platforms for participatory budgeting

 * PBStanford
 * Decidim
 * AdhocracyPlus