An important problem on social information sites is the recovery of the ground truth from individual reports when the experts are in the minority. The wisdom of the crowd, i.e., the collective opinion of a group of individuals, fails in such a scenario. However, the surprisingly popular (SP) algorithm~\cite{prelec2017solution} can recover the ground truth even when the experts are in the minority, by additionally eliciting prediction reports from individuals -- their beliefs about the reports of others. Several recent works have extended the surprisingly popular algorithm to an equivalent voting rule (SP-voting) to recover the ground truth ranking over a set of m alternatives. However, we do not yet fully understand when SP-voting can recover the ground truth ranking, and if so, how many samples (votes and predictions) it needs. We answer this question by proposing two rank-order models and analyzing the sample complexity of SP-voting under these models. In particular, we propose concentric mixtures of Mallows and Plackett-Luce models with G (\geq 2) groups. Our models generalize previously proposed concentric mixtures of Mallows models with 2 groups, and we highlight the importance of G > 2 groups by identifying three distinct groups (expert, intermediate, and non-expert) in existing datasets. Next, we provide conditions on the parameters of the underlying models under which SP-voting can recover ground-truth rankings with high probability, and derive the corresponding sample complexities. We complement the theoretical results by evaluating SP-voting on simulated and real datasets.
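For readers unfamiliar with the SP mechanism, below is a minimal illustrative sketch of the original binary SP rule that SP-voting generalizes (a toy under stated assumptions, not the ranking rule analyzed in this paper; all names are hypothetical):

```python
from statistics import mean

def surprisingly_popular(votes, predictions):
    """Basic SP for a yes/no question (in the spirit of Prelec et al., 2017).

    votes: list of bool, each respondent's own answer ("yes" = True).
    predictions: list of float, each respondent's predicted fraction
        of the crowd answering "yes".
    Returns True iff "yes" is the surprisingly popular answer.
    """
    actual_yes = mean(1.0 if v else 0.0 for v in votes)
    predicted_yes = mean(predictions)
    # "Yes" is surprisingly popular if its actual frequency exceeds
    # the crowd's average prediction of that frequency.
    return actual_yes > predicted_yes

# Example: only 40% vote "yes", but the crowd expected just 25%, so
# "yes" is surprisingly popular and is returned as the answer.
print(surprisingly_popular(
    votes=[True, True, False, False, False] * 2,
    predictions=[0.3, 0.2, 0.25, 0.2, 0.3] * 2))
```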
Two-sided matching markets have demonstrated significant impact in many real-world applications, including school choice, medical residency placement, electric vehicle charging, ride sharing, and recommender systems. However, traditional models often assume that preferences are known, which is not always the case in modern markets, where preferences are unknown and must be learned. For example, in online markets a company may not know its preferences over all job applicants a priori. Recent research has modeled matching markets as a multi-armed bandit (MAB) problem and has primarily focused on optimizing welfare for one side of the market, often resulting in pessimal welfare for the other side. In this paper, we adopt a welfarist approach for both sides of the market, focusing on two metrics: (1) utilitarian welfare and (2) Rawlsian welfare, while maintaining market stability. We propose epoch-based Explore-Then-Commit (ETC) algorithms and provide an analysis of their regret bounds. Finally, we conduct simulated experiments to evaluate both welfare and market stability.
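For intuition, here is an illustrative sketch of the classical single-agent Explore-Then-Commit template that epoch-based, two-sided variants build on (a toy under simplifying assumptions such as Gaussian rewards and a known horizon; not the paper's algorithm):

```python
import random

def explore_then_commit(arm_means, m, horizon):
    """Classical single-agent ETC template (illustrative sketch only).

    arm_means: true mean rewards, used here only to simulate pulls.
    m: number of exploration pulls per arm.
    """
    K = len(arm_means)
    est = [0.0] * K
    # Exploration phase: pull each arm m times and record empirical means.
    for k in range(K):
        est[k] = sum(random.gauss(arm_means[k], 1.0) for _ in range(m)) / m
    best = max(range(K), key=lambda k: est[k])
    # Commit phase: play the empirically best arm for the remaining rounds.
    reward = sum(random.gauss(arm_means[best], 1.0)
                 for _ in range(horizon - m * K))
    return best, reward

print(explore_then_commit(arm_means=[0.2, 0.5, 0.9], m=50, horizon=1000))
```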
In fair division of indivisible items, domain restriction has played a key role in escaping negative results and providing structural insights into the computational and axiomatic boundaries of fairness. One notable subdomain of additive preferences, the lexicographic domain, has yielded several positive results in dealing with goods, chores, and mixtures thereof. However, the majority of work within this domain considers only strict linear orders over items, which do not allow the modeling of more expressive preferences that contain indifferences (ties). We investigate the most prominent fairness notions of envy-freeness up to any (EFX) or some (EF1) item under weakly lexicographic preferences. For the goods-only setting, we develop an algorithm that can be customized to guarantee EF1, EFX, maximin share (MMS), or a combination thereof, along with the efficiency notion of Pareto optimality (PO). From a conceptual perspective, we propose techniques such as preference graphs and potential envy that are of independent interest when dealing with ties. Finally, we demonstrate the challenges of dealing with chores and highlight key algorithmic and axiomatic differences between finding EFX solutions for chores and for goods. Nevertheless, we show that there is an algorithm that always returns an EF1 and PO allocation for chores-only instances.
Fair division provides a rich computational and mathematical framework for the allocation of indivisible goods, which has given rise to a large number of fairness concepts and their relaxations. In recent years, much attention has been given to theoretical and computational aspects of various fairness concepts. Nonetheless, which fairness concept is in practice perceived to be fairer by individuals is not well understood. We consider two conceptually different relaxations of envy-freeness and investigate how fair individuals perceive the induced allocations to be. In particular, we examine envy-freeness up to one good (EF1), a well-studied relaxation based on the counterfactual thinking that any pairwise envy can be eliminated by the hypothetical removal of a single good from the envied agent's bundle. In contrast, a recently proposed epistemic notion, namely envy-freeness up to k hidden goods (HEF-k), provides a relaxation by hiding information about a small subset of k goods. Through various crowdsourcing experiments, we empirically demonstrate that allocations achieved by withholding information are perceived to be fairer compared to two variants of EF1. We further show that these results hold even when controlling for parameters such as the size of an instance, effort, and the balancedness of bundles.
We initiate the study of matching roommates and rooms wherein the preferences of agents over other agents and rooms are complementary and represented by Leontief utilities. In this setting, 2n agents must be paired up and assigned to n rooms. Each agent has cardinal valuations over the rooms as well as compatibility values over all other agents. Under Leontief utilities, an agent’s utility for a matching is the minimum of the two values. We focus on the tradeoff between maximizing utilitarian social welfare and strategyproofness. Our main result shows that---in stark contrast to the additive case---under binary Leontief utilities, there exist strategyproof mechanisms that maximize social welfare. We further devise a strategyproof mechanism that implements such a welfare-maximizing algorithm and is parameterized by the number of agents. Along the way, we highlight several possibility and impossibility results, and give upper and lower bounds on welfare with and without strategyproofness.
supersedes AAAI-23 paper below.
We initiate the study of fairness among classes of agents in online bipartite matching where there is a given set of offline vertices (aka agents) and another set of vertices (aka items) that arrive online and must be matched irrevocably upon arrival. In this setting, agents are partitioned into classes and the matching is required to be fair with respect to the classes. We adapt popular fairness notions (e.g. envy-freeness, proportionality, and maximin share) and their relaxations to this setting and study deterministic algorithms for matching indivisible items (leading to integral matchings) and for matching divisible items (leading to fractional matchings). For matching indivisible items, we propose an adaptive-priority-based algorithm, MATCH-AND-SHIFT, prove that it achieves (1/2)-approximation of both class envy-freeness up to one item and class maximin share fairness, and show that each guarantee is tight. For matching divisible items, we design a water-filling-based algorithm, EQUAL-FILLING, that achieves (1-1/e)-approximation of class envy-freeness and class proportionality; we prove (1-1/e) to be tight for class proportionality and establish a 3/4 upper bound on class envy-freeness. Finally, we discuss several challenges in designing randomized algorithms that achieve reasonable fairness approximation ratios. Nonetheless, we build upon EQUAL-FILLING to design a randomized algorithm for matching indivisible items, EQUAL-FILLING-OCS, which achieves (0.593)-approximation of class proportionality.
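To give a flavor of the water-filling idea behind EQUAL-FILLING, below is a simplified, illustrative sketch of a single fractional step (hypothetical helper names; it abstracts away per-agent matching within classes and all approximation details): an arriving item is split among the classes that can receive it so that the lowest fill levels rise first.

```python
def water_fill(levels, eligible, amount=1.0, eps=1e-12):
    """One water-filling step: split `amount` of an arriving divisible item
    among the eligible classes so that their fill levels rise together,
    lowest first. levels: dict class -> current fill (mutated in place).
    Assumes `eligible` is non-empty."""
    remaining = amount
    while remaining > eps:
        lo = min(levels[c] for c in eligible)
        lowest = [c for c in eligible if levels[c] <= lo + eps]
        higher = [levels[c] for c in eligible if levels[c] > lo + eps]
        # Raise the lowest classes to the next level, or spend what is left.
        step = min(min(higher) - lo if higher else float("inf"),
                   remaining / len(lowest))
        for c in lowest:
            levels[c] += step
        remaining -= step * len(lowest)
    return levels

levels = {"A": 0.0, "B": 0.5, "C": 2.0}
water_fill(levels, eligible=["A", "B"])
print(levels)  # {'A': 0.75, 'B': 0.75, 'C': 2.0}
```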
supersedes AAMAS-23 paper below.
The classical house allocation problem involves assigning n houses (or items) to n agents according to their preferences. A key criterion in such problems is satisfying some fairness constraints such as envy-freeness. We consider a generalization of this problem, called Graphical House Allocation, wherein the agents are placed along the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of envy values over all edges of the social graph. We focus on graphical house allocation with identical valuations. When agents have identical and evenly-spaced valuations, our problem reduces to the well-studied Minimum Linear Arrangement problem. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting is a departure from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths, cycles, stars, cliques, and complete bipartite graphs; we also obtain fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, complete bipartite graphs, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call splittability, which results in efficient parameterized algorithms for finding optimal allocations.
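For intuition, here is a small illustrative sketch of the objective (hypothetical names): with identical valuations, the aggregate envy of an allocation is simply the sum of value gaps along the edges, and tiny instances can be solved exhaustively.

```python
from itertools import permutations

def aggregate_envy(values, edges, assignment):
    """Total envy of an allocation in graphical house allocation with
    identical valuations: the sum of |v(h_u) - v(h_w)| over all edges
    (u, w). assignment[i] = index of the house given to vertex i."""
    return sum(abs(values[assignment[u]] - values[assignment[w]])
               for u, w in edges)

def brute_force_optimum(values, edges):
    """Exhaustive search; only viable for tiny graphs, since the problem
    is NP-hard even on disjoint unions of paths."""
    n = len(values)
    return min(permutations(range(n)),
               key=lambda a: aggregate_envy(values, edges, a))

# A path on 4 vertices with house values 0, 1, 3, 6: the sorted order
# along the path is optimal, with aggregate envy 1 + 2 + 3 = 6.
edges = [(0, 1), (1, 2), (2, 3)]
best = brute_force_optimum([0, 1, 3, 6], edges)
print(best, aggregate_envy([0, 1, 3, 6], edges, best))
```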
We consider the problem of recovering the ground truth ordering (ranking, top-k, or others) over a large number of alternatives. The wisdom of the crowd is a heuristic approach based on Condorcet's jury theorem that addresses this problem through collective opinions. This approach fails to recover the ground truth when the majority of the crowd is misinformed. The surprisingly popular (SP) algorithm is an alternative approach that is able to recover the ground truth even when experts are in the minority. The SP algorithm requires the voters to predict other voters' reports in the form of a full probability distribution over all rankings of alternatives. However, when the number of alternatives, m, is large, eliciting the prediction report or even the vote over m alternatives might be too costly. In this paper, we design a scalable alternative to the SP algorithm which only requires eliciting partial preferences from the voters, and propose new variants of the SP algorithm. In particular, we propose two versions---Aggregated-SP and Partial-SP---that ask voters to report their vote and prediction on a subset of size k (<< m) in terms of the top alternative, a partial rank, or an approval set. Through a large-scale crowdsourcing experiment on MTurk, we show that both of our approaches outperform conventional preference aggregation algorithms for the recovery of ground truth rankings, when measured in terms of Kendall-Tau distance and Spearman's $\rho$. We further analyze the collected data and demonstrate that voters' behavior in the experiment, including the minority of experts and the SP phenomenon, can be correctly simulated by a concentric mixture of Mallows models. Finally, we provide theoretical bounds on the sample complexity of SP algorithms with partial rankings to demonstrate the theoretical guarantees of the proposed methods.
Two-sided matching markets describe a large class of problems wherein participants from one side of the market must be matched to those from the other side according to their preferences. In many real-world applications (e.g. content matching or online labor markets), the knowledge about preferences may not be readily available and must be learned, i.e., one side of the market (aka agents) may not know their preferences over the other side (aka arms). Recent research on online settings has focused primarily on welfare optimization aspects (i.e. minimizing the overall regret) while paying little attention to the game-theoretic properties such as the stability of the final matching. In this paper, we exploit the structure of stable solutions to devise algorithms that improve the likelihood of finding stable solutions. We initiate the study of the sample complexity of finding a stable matching, and provide theoretical bounds on the number of samples needed to reach a stable matching with high probability. Finally, our empirical results demonstrate intriguing tradeoffs between stability and optimality of the proposed algorithms, further complementing our theoretical findings.
We initiate the study of fair distribution of delivery tasks among a set of agents wherein delivery jobs are placed along the vertices of a graph. Our goal is to fairly distribute delivery costs (modeled as a submodular function) among a fixed set of agents while satisfying some desirable notions of economic efficiency. We adapt well-established fairness concepts---such as envy-freeness up to one item (EF1) and minimax share (MMS)---to our setting and show that fairness is often incompatible with the efficiency notion of social optimality. Yet, we characterize instances that admit fair and socially optimal solutions by exploiting graph structures. We further show that achieving fairness along with Pareto optimality is computationally intractable. Nonetheless, we design an XP algorithm (parameterized by the number of agents) for finding MMS and Pareto optimal solutions on every instance, and show that the same algorithm can be modified to find efficient solutions along with EF1, when such solutions exist. We complement our theoretical results by experimentally analyzing the price of fairness on randomly generated graph structures.
The widespread adoption of Artificial Intelligence (AI) systems has profoundly reshaped decision-making in social, political, and commercial contexts. This paper explores the critical issue of fairness in AI-driven decision-making, particularly in allocating resources and tasks. By examining recent advancements and key questions in computational social choice, I highlight challenges and prospects in designing fair systems for collective decision-making that are scalable, adaptable to intricate environments, and aligned with complex and diverse human preferences.
Matching markets consist of two disjoint sets of agents, where each agent has a preference list over agents on the other side. The primary objective is to find a stable matching between the agents such that no unmatched pair of agents prefer each other to their matched partners. The incompatibility between stability and strategy-proofness in this domain gives rise to a variety of strategic behaviors by agents, which in turn may influence the resulting matching. In this paper, we discuss fundamental properties of stable matchings, review essential structural observations, survey key results in manipulation algorithms and their game-theoretical aspects, and, more importantly, highlight a series of open research questions.
Fairness is one of the most desirable societal principles in collective decision-making. It has been extensively studied in the past decades for its axiomatic properties and has received substantial attention from the multiagent systems community in recent years for its theoretical and computational aspects in algorithmic decision-making. However, these studies are often not sufficiently rich to capture the intricacies of human perception of fairness in ambivalent real-world settings. We argue that fair solutions should not only be deemed desirable by social planners (designers), but should also be governed by human and societal cognition, consider perceived outcomes based on human judgement, and be verifiable. We discuss how achieving this goal requires a broad transdisciplinary approach ranging from computing and AI to behavioral economics and human-AI interaction. In doing so, we identify shortcomings and long-term challenges in the current fair division literature, describe recent efforts to address them, and, more importantly, highlight a series of open research directions.
The Graphical House Allocation (GHA) problem asks: how can n houses (each with a fixed non-negative value) be assigned to the vertices of an undirected graph G, so as to minimize the sum of absolute differences along the edges of G? This problem generalizes the classical Minimum Linear Arrangement problem, as well as the well-known House Allocation Problem from Economics. Recent work has studied the computational aspects of GHA and observed that the problem is NP-hard and inapproximable even on particularly simple classes of graphs, such as vertex disjoint unions of paths. However, the dependence of any approximations on the structural properties of the underlying graph had not been studied. In this work, we give a nearly complete characterization of the approximability of GHA. We present algorithms to approximate the optimal envy on general graphs, trees, planar graphs, bounded-degree graphs, and bounded-degree planar graphs. For each of these graph classes, we then prove matching lower bounds, showing that in each case, no significant improvement can be attained unless P = NP. We also present general approximation ratios as a function of structural parameters of the underlying graph, such as treewidth; these match the tight upper bounds in general, and are significantly better approximations for many natural subclasses of graphs. Finally, we investigate the special case of bounded-degree trees in some detail. We first refute a conjecture by Hosseini et al. [2023] about the structural properties of exact optimal allocations on binary trees by means of a counterexample on a depth-3 complete binary tree. This refutation, together with our hardness results on trees, might suggest that approximating the optimal envy even on complete binary trees is infeasible. Nevertheless, we present a linear-time algorithm that attains a 3-approximation on complete binary trees.
Fair distribution of indivisible tasks with non-positive valuations (aka chores) has given rise to a large body of work in recent years. A popular approximate fairness notion is envy-freeness up to one item (EF1), which requires that any pairwise envy can be eliminated by the removal of a single item. While an EF1 and Pareto optimal (PO) allocation of goods always exists and can be computed via several well-known algorithms, even the existence of such solutions for chores remains open, to date. We take an epistemic approach utilizing information asymmetry by introducing dubious chores -- items that inflict no cost on receiving agents, but are perceived as costly by others. On a technical level, dubious chores provide a more fine-grained approximation of envy-freeness -- compared to relaxations such as EF1 -- which enables progress towards addressing open problems on the existence and computation of EF1 and PO. In particular, we show that finding allocations with an optimal number of dubious chores is computationally hard even for highly restricted classes of valuations. Nonetheless, we prove the existence of envy-free and PO allocations for n agents with only 2n−2 dubious chores and strengthen it to n−1 dubious chores in four special classes of valuations. Our experimental analysis demonstrates that baseline algorithms only require a relatively small number of dubious chores to achieve envy-freeness in practice.
The classic house allocation problem is primarily concerned with finding a matching between a set of agents and a set of houses that guarantees some notion of economic efficiency (e.g. utilitarian welfare). While recent works have shifted focus to achieving fairness (e.g. minimizing the number of envious agents), they often come at a notable cost to efficiency notions such as utilitarian or egalitarian welfare. We investigate the trade-offs between these welfare measures and several natural fairness measures that rely on the number of envious agents, the total (aggregate) envy of all agents, and the maximum total envy of an agent. In particular, by focusing on envy-free allocations, we first show that, should one exist, finding an envy-free allocation with maximum utilitarian or egalitarian welfare is computationally tractable. We highlight a rather stark contrast between utilitarian and egalitarian welfare by showing that finding utilitarian welfare maximizing allocations that minimize the aforementioned fairness measures can be done in polynomial time, while their egalitarian counterparts remain intractable (for the most part) even under binary valuations. We complement our theoretical findings by giving insights into the relationship between the different fairness measures and conducting empirical analysis.
We study the fair allocation of mixtures of indivisible goods and chores under lexicographic preferences---a subdomain of additive preferences. A prominent fairness notion for allocating indivisible items is envy-freeness up to any item (EFX). Yet, its existence and computation have remained a notable open problem. By identifying a class of instances with ``terrible chores'', we show that determining the existence of an EFX allocation is NP-complete. This result immediately implies the intractability of EFX under additive preferences. Nonetheless, we propose a natural subclass of lexicographic preferences for which an EFX and Pareto optimal (PO) allocation is guaranteed to exist and can be computed efficiently for any mixed instance. Focusing on two weaker fairness notions, we investigate finding EF1 and PO allocations for special instances with terrible chores, and show that MMS and PO allocations can be computed efficiently for any mixed instance with lexicographic preferences.
Invited journal track of JAIR-22 paper below.
In fair division of indivisible goods, l-out-of-d maximin share (MMS) is the value that an agent can guarantee by partitioning the goods into d bundles and choosing the l least preferred bundles. Most existing works aim to guarantee to all agents a constant fraction of their 1-out-of-n MMS. But this guarantee is sensitive to small perturbations in agents' cardinal valuations. We consider a more robust approximation notion, which depends only on the agents' ordinal rankings of bundles. We prove the existence of l-out-of-(l+1/2)n MMS allocations of goods for any integer l ≥ 1, and present a polynomial-time algorithm that finds a 1-out-of-3n/2 MMS allocation when l = 1. We further develop an algorithm that provides a weaker ordinal approximation to MMS for any l > 1.
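As a worked illustration of the l-out-of-d MMS value used above, here is a brute-force sketch (exponential in the number of goods, purely for intuition on tiny instances; names are hypothetical):

```python
from itertools import product

def mms_value(valuation, d, l=1):
    """Brute-force l-out-of-d MMS value of one agent: maximize, over all
    partitions of the goods into d bundles, the total value of the l
    least-valued bundles."""
    m = len(valuation)
    best = float("-inf")
    for labels in product(range(d), repeat=m):
        bundles = [0] * d
        for good, b in enumerate(labels):
            bundles[b] += valuation[good]
        best = max(best, sum(sorted(bundles)[:l]))
    return best

# Goods worth [4, 3, 2, 2, 1]: the 1-out-of-3 MMS is 4, achieved by the
# partition {4} | {3, 1} | {2, 2}, whose least-valued bundle is worth 4.
print(mms_value([4, 3, 2, 2, 1], d=3))  # 4
```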
We study fair allocation of indivisible goods and chores among agents with lexicographic preferences -- a subclass of additive valuations. In sharp contrast to the goods-only setting, we show that an allocation satisfying envy-freeness up to any item (EFX) could fail to exist for a mixture of objective goods and chores. To our knowledge, this negative result provides the first counterexample for EFX over (any subdomain of) additive valuations. To complement this non-existence result, we identify a class of instances with (possibly subjective) mixed items where an EFX and Pareto optimal allocation always exists and can be efficiently computed. When the fairness requirement is relaxed to maximin share (MMS), we show positive existence and computation for any mixed instance. More broadly, our work examines the existence and computation of fair and efficient allocations both for mixed items as well as chores-only instances, and highlights the additional difficulty of these problems vis-à-vis their goods-only counterparts.
The classical house allocation problem involves assigning n houses (or items) to n agents according to their preferences. A key criterion in such problems is satisfying some fairness constraints such as envy-freeness. We consider a generalization of this problem wherein the agents are placed along the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of all pairwise envy values over all edges in a social graph. When agents have identical and evenly spaced valuations, our problem reduces to the well-studied problem of minimum linear arrangement. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting is a departure from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths or cliques, and fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call separability, which results in efficient parameterized algorithms for finding optimal allocations.
We initiate the study of fairness among classes of agents in online bipartite matching where there is a given set of offline vertices (aka agents) and another set of vertices (aka items) that arrive online and must be matched irrevocably upon arrival. In this setting, agents are partitioned into a set of classes and the matching is required to be fair with respect to the classes. We adopt popular fairness notions (e.g. envy-freeness, proportionality, and maximin share) and their relaxations to this setting and study deterministic and randomized algorithms for matching indivisible items (leading to integral matchings) and for matching divisible items (leading to fractional matchings). For matching indivisible items, we propose an adaptive-priority-based algorithm, MATCH-AND-SHIFT, prove that it achieves (1/2)-approximation of both class envy-freeness up to one item and class maximin share fairness, and show that each guarantee is tight. For matching divisible items, we design a water-filling-based algorithm, EQUAL-FILLING, that achieves (1-1/e)-approximation of class envy-freeness and class proportionality; we prove (1-1/e) to be tight for class proportionality and establish a 3/4 upper bound on class envy-freeness.
With the advent of online educational platforms and the advances in pedagogical technologies, self-directed learning has emerged as one of the most popular modes of learning. Distance education---elevated by the COVID-19 pandemic---involves methods of instruction through a variety of remote activities which often rely on educational videos for mastery. In the absence of direct student engagement, the asynchronous nature of remote activities may deteriorate the quality of education for learners. Students often have an illusion of skill acquisition after watching videos, which results in overestimation of their abilities and skills. We focus on the efficacy of skill acquisition through interactive technologies and assess their impact on computational thinking in comparison with delivery through other traditional media (e.g. videos and texts). In particular, we investigate the relationship between actual learning, perception of learning, and confidence among adult learners. Our results reveal intriguing observations about the role of interactivity and visualization and their implications for the pedagogical design of self-directed learning modules.
In fair division of indivisible goods, l-out-of-d maximin share (MMS) is the value that an agent can guarantee by partitioning the goods into d bundles and choosing the l least preferred bundles. Most existing works aim to guarantee to all agents a constant fraction of their 1-out-of-n MMS. But this guarantee is sensitive to small perturbations in agents' cardinal valuations. We consider a more robust approximation notion, which depends only on the agents' ordinal rankings of bundles. We prove the existence of l-out-of-(l+1/2)n MMS allocations of goods for any integer l ≥ 1, and present a polynomial-time algorithm that finds a 1-out-of-3n/2 MMS allocation when l = 1. We further develop an algorithm that provides a weaker ordinal approximation to MMS for any l > 1.
For object reallocation problems, if preferences are strict but otherwise unrestricted, the Top Trading Cycles rule (TTC) is the leading rule: It is the only rule satisfying efficiency, individual rationality, and strategy-proofness. However, on the subdomain of single-peaked preferences, Bade (2019) defines a new rule, the "crawler", which also satisfies these three properties. (i) The crawler selects an allocation by "visiting" agents in a specific order. A natural "dual" rule can be defined by proceeding in the reverse order. Our first theorem states that the crawler and its dual are actually the same. (ii) Single-peakedness of a preference profile may in fact hold for more than one order and its reverse. Our second theorem states that the crawler is invariant to the choice of the order. (iii) For object allocation problems (as opposed to reallocation problems), we define a probabilistic version of the crawler by choosing an endowment profile at random according to a uniform distribution, and applying the original definition. Our third theorem states that this rule is the same as the "random priority rule".
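For reference, here is a compact illustrative sketch of the standard TTC rule discussed above (the crawler's visiting procedure is different and is not shown; names are hypothetical and complete preference lists are assumed):

```python
def top_trading_cycles(prefs, endowment):
    """Standard TTC sketch for object reallocation.
    prefs: agent -> ranked list of houses; endowment: agent -> owned house."""
    house_owner = {h: a for a, h in endowment.items()}
    assignment = {}
    active = set(prefs)
    while active:
        # Every active agent points at the owner of their favorite house
        # among the houses still held by active agents.
        point = {a: house_owner[next(h for h in prefs[a]
                                     if house_owner[h] in active)]
                 for a in active}
        # Walk the pointer graph until an agent repeats: that is a cycle.
        walk, a = [], next(iter(active))
        while a not in walk:
            walk.append(a)
            a = point[a]
        cycle = walk[walk.index(a):]
        # Agents on the cycle trade: each receives the house they point at.
        for a in cycle:
            assignment[a] = endowment[point[a]]
        active -= set(cycle)
    return assignment

# Agents 1 and 2 want each other's houses; agent 3 keeps its own.
print(top_trading_cycles(
    prefs={1: ["h2", "h1", "h3"], 2: ["h1", "h2", "h3"], 3: ["h1", "h2", "h3"]},
    endowment={1: "h1", 2: "h2", 3: "h3"}))  # 1 gets h2, 2 gets h1, 3 keeps h3
```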
Strategic behavior in two-sided matching markets has been traditionally studied in a "one-sided" manipulation setting where the agent who misreports is also the intended beneficiary. Our work investigates "two-sided" manipulation of the deferred acceptance algorithm where the misreporting agent and the manipulator (or beneficiary) are on different sides. Specifically, we generalize the recently proposed accomplice manipulation model (where a man misreports on behalf of a woman) along two complementary dimensions: (a) the two for one model, with a pair of misreporting agents (man and woman) and a single beneficiary (the misreporting woman), and (b) the one for all model, with one misreporting agent (man) and a coalition of beneficiaries (all women). Our main contribution is to develop polynomial-time algorithms for finding an optimal manipulation in both settings. We obtain these results despite the fact that an optimal one for all strategy fails to be inconspicuous, while it is unclear whether an optimal two for one strategy satisfies the inconspicuousness property. We also study the conditions under which stability of the resulting matching is preserved. Experimentally, we show that two-sided manipulations are more frequently available and offer better quality matches than their one-sided counterparts.
We study the problem of fairly allocating a set of m indivisible chores (items with non-positive value) to n agents. We consider the desirable fairness notion of 1-out-of-d maximin share (MMS) -- the minimum value that an agent can guarantee by partitioning items into d bundles and receiving the least valued bundle -- and focus on ordinal approximation of MMS that aims at finding the largest d <= n for which 1-out-of-d MMS allocation exists. Our main contribution is a polynomial-time algorithm for 1-out-of-floor(2n/3) MMS allocation, and a proof of existence of 1-out-of-floor(3n/4) MMS allocation of chores. Furthermore, we show how to use recently-developed algorithms for bin-packing to approximate the latter bound up to a logarithmic factor in polynomial time.
The stable matching problem sets the economic foundation of several practical applications ranging from school choice and medical residency to ridesharing and refugee placement. It is concerned with finding a matching between two disjoint sets of agents wherein no pair of agents prefer each other to their matched partners. The Deferred Acceptance (DA) algorithm is an elegant procedure that guarantees a stable matching for any input; however, its outcome may be unfair as it always favors one side by returning a matching that is optimal for one side (say men) and pessimal for the other side (say women). A desirable fairness notion is minimizing the sex-equality cost, i.e. the difference between the total rankings of both sides. Computing such stable matchings is a strongly NP-hard problem, which raises the question of what tractable algorithms to adopt in practice. We conduct a series of empirical evaluations on the properties of sex-equal stable matchings when preferences of agents on both sides are correlated. Our empirical results suggest that under correlated preferences, the DA algorithm returns stable matchings with low sex-equality cost, which further confirms its broad use in many practical applications.
The deferred acceptance algorithm is an elegant solution to the stable matching problem that guarantees optimality and truthfulness for one side of the market. Despite these desirable guarantees, it is susceptible to strategic misreporting of preferences by the agents on the other side. We study a novel model of strategic behavior under the deferred acceptance algorithm: manipulation through an accomplice. Here, an agent on the proposed-to side (say, a woman) partners with an agent on the proposing side---an accomplice---to manipulate on her behalf (possibly at the expense of worsening his match). We show that the optimal manipulation strategy for an accomplice consists of promoting exactly one woman in his true list (i.e., an inconspicuous manipulation). This structural result immediately gives a polynomial-time algorithm for computing an optimal accomplice manipulation. We also study the conditions under which the manipulated matching is stable with respect to the true preferences. Our experimental results show that accomplice manipulation outperforms self manipulation both in terms of the frequency of occurrence as well as the quality of matched partners.
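As background, below is a standard textbook-style sketch of the men-proposing deferred acceptance procedure being manipulated (the manipulation strategies themselves are not shown; complete preference lists are assumed):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Gale-Shapley).
    men_prefs / women_prefs: dict mapping each agent to a complete ranked
    list of the other side. Returns the men-optimal stable matching."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                      # woman -> tentatively matched man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)            # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))  # {'m1': 'w2', 'm2': 'w1'}
```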
The maximin share (MMS) guarantee is a desirable fairness notion for allocating indivisible goods. While MMS allocations do not always exist, several approximation techniques have been developed to ensure that all agents receive a fraction of their maximin share. We focus on an alternative approximation notion, based on the population of agents, that seeks to guarantee MMS for a fraction of agents. We show that no optimal approximation algorithm can satisfy more than a constant number of agents, and discuss the existence and computation of MMS for all but one agent and its relation to approximate MMS guarantees. We then prove the existence of allocations that guarantee MMS for 2/3 of agents, and devise a polynomial time algorithm that achieves this bound for up to nine agents. A key implication of our result is the existence of allocations that guarantee the value that an agent receives by partitioning the goods into 3n/2 bundles, improving the best known guarantee when goods are partitioned into 2n-2 bundles. Finally, we provide empirical experiments using synthetic data.
The wisdom of the crowd has long become the de facto approach for eliciting information from individuals or experts in order to predict the ground truth. However, classical democratic approaches for aggregating individual votes only work when the opinion of the majority of the crowd is relatively accurate. A clever recent approach, surprisingly popular voting, elicits additional information from the individuals, namely their prediction of other individuals' votes, and provably recovers the ground truth even when experts are in the minority. This approach works well when the goal is to pick the correct option from a small list, but when the goal is to recover a true ranking of the alternatives, a direct application of the approach requires eliciting too much information. We explore practical techniques for extending the surprisingly popular algorithm to ranked voting via partial votes and predictions, and for designing robust aggregation rules. We experimentally demonstrate that even a little prediction information helps surprisingly popular voting outperform classical approaches.
Envy-freeness up to any good (EFX) provides a strong and intuitive guarantee of fairness in the allocation of indivisible goods. But whether such allocations always exist or whether they can be efficiently computed remains an important open question. We study the existence and computation of EFX in conjunction with various other economic properties under lexicographic preferences---a well-studied preference model in artificial intelligence and economics. In sharp contrast to the known results for additive valuations, we not only prove the existence of EFX and Pareto optimal allocations, but in fact provide an algorithmic characterization of these two properties. We also characterize the mechanisms that are, in addition, strategyproof, non-bossy, and neutral. When the efficiency notion is strengthened to rank-maximality, we obtain non-existence and computational hardness results, and show that tractability can be restored when EFX is relaxed to another well-studied fairness notion called maximin share guarantee (MMS).
We study the classic problem of matching n agents to n objects, where the agents have ranked preferences over the objects. We focus on two popular desiderata from the matching literature: Pareto optimality and rank-maximality. Instead of asking the agents to report their complete preferences, our goal is to learn a desirable matching from partial preferences, specifically a matching that is necessarily Pareto optimal (NPO) or necessarily rank-maximal (NRM) under any completion of the partial preferences. We focus on the top-k model in which agents reveal a prefix of their preference rankings. We design efficient algorithms to check if a given matching is NPO or NRM, and to check whether such a matching exists given top-k partial preferences. We also study online algorithms to elicit partial preferences adaptively, and prove bounds on their competitive ratio.
Stable matching models are widely used in market design, school admission, and donor organ exchange. The classic Deferred Acceptance (DA) algorithm guarantees a stable matching that is optimal for one side (say men) and pessimal for the other (say women). A sex-equal stable matching aims at providing a fair solution to this problem. We demonstrate that under a class of correlated preferences, the DA algorithm either returns a sex-equal solution or has a very low sex-equality cost.
The number of introductory programming learners is increasing worldwide. Delivering feedback to these learners is important to support their progress; however, traditional methods of delivering feedback do not scale to thousands of programs. We identify several opportunities to improve a recent data-driven technique for analyzing individual program statements. These statements are grouped based on their semantic intent and usually differ in their actual implementation and syntax. The existing technique groups statements that are semantically close and considers as outliers those statements that reduce the cohesiveness of the clusters. Unfortunately, this approach leads to many statements being considered outliers. We propose to reduce the number of outliers through a new clustering algorithm that processes vertices based on density. Our experiments on six real-world introductory programming assignments show that we are able to reduce the number of outliers and, therefore, increase the total coverage of the programs under evaluation.
We initiate the study of multi-layered cake cutting with the goal of fairly allocating multiple divisible resources (layers of a cake) among a set of agents. The key requirement is that each agent can only utilize a single resource at each time interval. Several real-life applications exhibit such restrictions on overlapping pieces; for example, assigning time intervals over multiple facilities and resources or assigning shifts to medical professionals. We investigate the existence and computation of envy-free and proportional allocations. We show that envy-free allocations that are both feasible and contiguous are guaranteed to exist for up to three agents with two types of preferences, when the number of layers is two. We also show that envy-free feasible allocations where each agent receives a polynomially bounded number of intervals exist for any number of agents and layers under mild conditions on agents' preferences. We further devise an algorithm for computing proportional allocations for any number of agents and layers.
Envy-freeness up to one good (EF1) is a well-studied fairness notion for indivisible goods that addresses pairwise envy by the removal of at most one good. In the worst case, each pair of agents might require the (hypothetical) removal of a different good, resulting in a weak aggregate guarantee. We study allocations that are nearly envy-free in aggregate, and define a novel fairness notion based on information withholding. Under this notion, an agent can withhold (or hide) some of the goods in its bundle and reveal the remaining goods to the other agents. We observe that in practice, envy-freeness can be achieved by withholding only a small number of goods overall. We show that finding allocations that withhold an optimal number of goods is computationally hard even for highly restricted classes of valuations. In contrast to the worst-case results, our experiments on synthetic and real-world preference data show that existing algorithms for finding EF1 allocations withhold a close-to-optimal amount of information.
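For concreteness, here is a small illustrative checker for the EF1 condition under additive valuations (a sketch with hypothetical names; the hiding-based notion studied here instead asks whether envy vanishes once a few goods are withheld):

```python
def is_ef1(valuations, allocation):
    """Check EF1 for additive valuations: every agent i, for every other
    agent j, must not envy j's bundle after hypothetically removing the
    single good in j's bundle that i values most.
    valuations[i][g]: agent i's value for good g; allocation[i]: i's goods."""
    n = len(allocation)
    for i in range(n):
        v_own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            v_other = sum(valuations[i][g] for g in allocation[j])
            best = max(valuations[i][g] for g in allocation[j])
            if v_own < v_other - best:
                return False
    return True

# Two agents, four goods; agent 0 holds goods {0, 1}, agent 1 holds {2, 3}.
# Agent 0 envies (6 vs 8), but removing one good fixes it: 8 - 4 <= 6.
vals = [[5, 1, 4, 4], [1, 1, 3, 6]]
print(is_ef1(vals, [[0, 1], [2, 3]]))  # True
```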
Fair division is a subfield of multiagent systems related to object distribution. When objects are indivisible, the Maximin Share Guarantee (MMS) is a desirable fairness notion; however, MMS allocations are not guaranteed to exist. In their absence, a relaxation of MMS is always guaranteed to exist. We show that there exists a family of instances for which this relaxation fails to guarantee the MMS value for all but a small constant number of agents.
MatchU is a web-based platform that offers an interactive framework for forming mutually beneficial relationships, deciding how to distribute resources, or resolving conflicts through a suite of matching algorithms rooted in economics and artificial intelligence. In this paper, we discuss MatchU's vision, solutions, and future directions.
Game-based learning has emerged as an innovative learning technique that can increase student motivation, emotional involvement and enjoyment. Our study examines the effectiveness of game-based learning in planning education. Specifically, we explore the impact of gamification on planning students’ perception of learning, engagement and teamwork. Two lectures in an undergraduate planning course were delivered using two different methods of teaching (one traditional lecture-style, one game-based). Feedback was gathered through an online questionnaire and semi-structured interviews. Results show that students favored and were more engaged in the game-based lecture. Finally, we contend that gamification is particularly well suited for planning education.
We study the problem of allocating multiple objects to agents without transferable utilities, where each agent may receive more than one object according to a quota. Under lexicographic preferences, we characterize the set of strategyproof, non-bossy, and neutral quota mechanisms and show that under a mild Pareto efficiency condition, serial dictatorship quota mechanisms are the only mechanisms satisfying these properties. We then extend quota mechanisms to randomized settings, and show that the random serial dictatorship quota mechanisms (RSDQ) are envyfree, strategyproof, and ex post efficient for any number of agents and objects and any quota system, proving that the well-studied Random Serial Dictatorship (RSD) satisfies envyfreeness when preferences are lexicographic.
A complex family is a particular case of family structures in certain animal groups that arises from cooperative breeding. Inspired by this phenomenon, we study the problem of complex family formation from a game-theoretical perspective. While these structures may arise in various domains, such as search and rescue and robotics, the standard types of coalitional games (e.g. simple games) do not capture the observed intricacies of such complex formations. In this paper, we propose a characteristic function game for complex family structures and investigate the stability of coalitions and the existence of the core under various conditions. We provide theoretical bounds on the existence of complex families and the size of coalitions. Furthermore, we empirically examine the proposed framework through an ecological case study and show that our results are consistent with the observed family formations, shedding light on the causes of complex family compositions in the past.
We introduce a new manipulation strategy available to women in the men-proposing stable matching, called manipulation through an accomplice. In this strategy, a woman can team up with a potential male ``accomplice'' who manipulates on her behalf to obtain a better match for her. We investigate the stability of the matching obtained after this manipulation, provide an algorithm to compute such strategies, and show its benefit compared to single-woman manipulation strategies.
supersedes AAMAS-15 and EXPLORE-15 papers below.
We investigate the problem of opinion aggregation in a social network regarding an objective outcome. Agents receive independent noisy signals relating to the outcome, but may converse with their neighbors in the network before opinions are aggregated, resulting in incorrect opinions gaining prominence in the network. Recent work has shown that, in the general case, there is no procedure for inferring the correct outcome that incorporates information from the connections between agents (i.e. the structure of the social network). We develop a new approach for inferring the true outcome that can benefit from the additional information provided by the social network, under the simple assumption that agents will more readily convert to the true opinion than to a false one. Our proposed approach is computationally efficient, and provides significantly more accurate inference in many domains, which we demonstrate via both simulated and real-world datasets. We also theoretically characterize the properties that are necessary for our approach to perform well. Finally, we extend our approach to directed social networks, and cases with many alternatives, and outline areas for future research.
Game-based learning has received significant attention in educational pedagogy as an effective way of increasing student motivation and engagement. The majority of the work in this area has been focused on digital games or games involving technology. We focus on the use of traditional game design in improving student engagement and perception of learning in teaching computer science concepts in higher education. In addition, as part of an interdisciplinary effort, we discuss the interplay between game-based learning in higher education and disciplinary cultures, addressing the lack of empirical evidence on the impact of game design on learning outcomes, engagement, and students' perception of learning.
This book examines the increasing popularity of creativity and play in tertiary learning, and how it can be harnessed to enhance the student experience at university. While play is often misunderstood as something ‘trivial’ and associated with early years education, the editors and contributors argue that play contributes to social and human development and relations at a fundamental level. This volume invalidates the commonly held assumption that play is only for children, drawing together numerous case studies from higher education that demonstrate how researchers, students and managers can benefit from play as a means of liberating thought, overturning obstacles and discovering fresh approaches to persistent challenges. This diverse and wide-ranging edited collection unites play theory and practice to address the gulf in research on this fascinating topic. It will be of interest and value to educators, students and scholars of play and creativity, as well as practitioners and academic leaders looking to incorporate play into the curriculum.
The use of gameful activities in education has been widely celebrated in recent years as an effective pedagogical method in engaging students, exciting cognitive abilities, and promoting mastery. Despite the popularity of game-based learning, to date, little has been done to analyze the impacts of introducing such interventions on students and instructors alike. We focus on hybrid teaching strategies that blend educational games with instructional scaffolding in introductory computer science teaching. We assess the effectiveness of incorporating these teaching strategies by leveraging various empirical evaluation techniques and study their impacts from three different dimensions: students' point of view, instructors' perspective, and students' performance. In addition, we establish correlations between students' approaches to learning and game-based learning, and further discuss how learning concentration and curiosity relate to students' perception of game-based activities.
supersedes AAMAS-16 and EXPLORE-16 papers below.
One-sided matching mechanisms are fundamental for assigning a set of indivisible objects to a set of self-interested agents when monetary transfers are not allowed. Two widely-studied randomized mechanisms in multiagent settings are the Random Serial Dictatorship (RSD) and the Probabilistic Serial Rule (PS). Both mechanisms require only that agents specify ordinal preferences and have a number of desirable economic and computational properties. However, the induced outcomes of the mechanisms are often incomparable and thus there are challenges when it comes to deciding which mechanism to adopt in practice. In this paper, we first consider the space of general ordinal preferences and provide empirical results on the (in)comparability of RSD and PS. We analyze their respective economic properties under general and lexicographic preferences. We then instantiate utility functions with the goal of gaining insights on the manipulability, efficiency, and envyfreeness of the mechanisms under different risk-attitude models. Our results hold under various preference distribution models, which further confirms the broad use of RSD in most practical applications.
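As a reference point, the following is a minimal illustrative sketch of RSD (the PS rule, in which agents "eat" their favorite objects fractionally over time, is not shown); names are hypothetical and complete preference lists are assumed:

```python
import random

def random_serial_dictatorship(prefs, objects):
    """RSD sketch: draw a uniformly random order over agents; each agent
    in turn takes their most-preferred remaining object.
    prefs: dict agent -> ranked list of objects."""
    remaining = set(objects)
    assignment = {}
    order = list(prefs)
    random.shuffle(order)
    for agent in order:
        # The agent picks the best object still available on their list.
        pick = next(o for o in prefs[agent] if o in remaining)
        assignment[agent] = pick
        remaining.remove(pick)
    return assignment

print(random_serial_dictatorship(
    {"a": ["x", "y", "z"], "b": ["x", "z", "y"], "c": ["y", "x", "z"]},
    ["x", "y", "z"]))
```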
Artificial Intelligence techniques such as agent-based modeling and probabilistic reasoning have shown promise in modeling complex biological systems and testing ecological hypotheses through simulation. We develop an agent-based model of Arctic foxes from Medniy Island while utilizing Probabilistic Graphical Models to capture the conditional dependencies between the random variables. Such models provide valuable insights in analyzing factors behind catastrophic degradation of this population and in revealing evolutionary mechanisms of its persistence in high-density environment. Using empirical data from studies in Medniy Island, we create a realistic model of Arctic foxes as agents, and study their survival and population dynamics under a variety of conditions.
We present a game-theoretic analysis of optimal solutions for interactions between censors and censorship resistance systems (CRSs) by focusing on the data channel used by the CRS to smuggle clients’ data past the censors. This analysis leverages the inherent errors (false positives and negatives) made by the censor when trying to classify traffic as either non-circumvention traffic or as CRS traffic, as well as the underlying rate of CRS traffic. We identify Nash equilibrium solutions for several simple censorship scenarios and then extend those findings to more complex scenarios where we find that the deployment of a censorship apparatus does not qualitatively change the equilibrium solutions, but rather only affects the amount of traffic a CRS can support before being blocked. By leveraging these findings, we describe a general framework for exploring and identifying optimal strategies for the censorship circumventor, in order to maximize the amount of CRS traffic not blocked by the censor. We use this framework to analyze several scenarios with multiple data-channel protocols used as cover for the CRS. We show that it is possible to gain insights through this framework even without perfect knowledge of the censor’s (secret) values for the parameters in their utility function.
Gamification focuses on the application of game mechanics and gameful thinking in non-game contexts to engage users in solving problems or carrying out tasks. This interactive workshop will explore the theoretical and psychological relationship between games and learning, with particular focus on Bloom's taxonomy of learning and the relation between its affective domain and gamified learning. The workshop then introduces various elements of gameful design and a variety of gamification methods that can be used in a university classroom. Participants will learn strategies to incorporate gamification into a variety of learning environments and will have the opportunity to design game-based learning events that can be used in undergraduate lectures. Individuals from all disciplines can participate and benefit from techniques to identify and learn from a diverse set of strategies and methods that can be adopted to use in different disciplines and levels of teaching. Finally, the workshop provides an opportunity for participants to dive deeper and discuss strengths, weaknesses, and possible threats in gamified learning.
For one-sided matching problems, two widely studied mechanisms are the Random Serial Dictatorship (RSD) and the Probabilistic Serial Rule (PS). The induced outcomes of these two mechanisms are often incomparable and thus there are challenges when it comes to deciding which mechanism to adopt in practice. Working in the space of general preferences, we provide empirical results on the (in)comparability of RSD and PS and analyze their economic properties.
For one-sided matching problems, two widely studied mechanisms are the Random Serial Dictatorship (RSD) and the Probabilistic Serial Rule (PS). Both mechanisms require only that agents specify ordinal preferences and have a number of desirable economic and computational properties. However, the induced outcomes of the mechanisms are often incomparable and thus there are challenges when it comes to deciding which mechanism to adopt in practice. In this paper, working in the space of general ordinal preferences, we provide empirical results on the (in)comparability of RSD and PS and analyze their respective economic properties. We then instantiate utility functions for agents, consistent with the ordinal preferences, with the goal of gaining insights on the manipulability, efficiency, and envyfreeness of the mechanisms under different risk attitude models.
We consider the problem of allocating indivisible goods using the leading notion of fairness in economics: the competitive equilibrium from equal incomes. Focusing on two major classes of valuations, namely perfect substitutes and perfect complements, we establish the computational properties of algorithms operating in this framework. For the class of valuations with perfect complements, our algorithm yields a surprisingly succinct characterization of instances that admit a competitive equilibrium from equal incomes.
We consider the problem of repeatedly matching a set of alternatives to a set of agents with dynamic ordinal preferences. Despite a recent focus on designing one-shot matching mechanisms in the absence of monetary transfers, little attention has been paid to the strategic behavior of agents in sequential assignment problems. We formulate a generic dynamic matching problem via a sequential stochastic matching process. We design a mechanism based on random serial dictatorship (RSD) that, given any history of preferences and matching decisions, guarantees global stochastic strategyproofness while satisfying desirable local properties. We further investigate the notion of envyfreeness in such sequential settings.
We consider the problem of repeatedly matching a set of alternatives to a set of agents in the absence of monetary transfer. We propose a generic framework for evaluating sequential matching mechanisms with dynamic preferences, and show that unlike single-shot settings, the random serial dictatorship mechanism is manipulable.
In certain voting problems, a hidden ground truth is inferred by aggregating the opinions of an electorate. We propose a novel model of these underlying social interactions, and derive maximum likelihood estimators for the ground truth in these models, given the social network and votes. We also evaluate these new estimators, as well as existing ones, on a class of simulated social networks.
In certain voting problems, a central authority must infer a hidden ground truth by aggregating the opinions of an electorate. When individual assessments are drawn i.i.d. and are correct with probability p > 0.5, aggregating enough votes will yield the ground truth with high probability. However, in reality, voters’ opinions are often influenced by those of their friends, and in certain social networks this interaction may cause the aggregated opinions to be misleading. A center may use the network structure to recover the original distribution of votes from the post-interaction opinions of voters, and so recover the ground truth in spite of confounding discussions. In this paper, we consider a series of novel models of these underlying social interactions and derive maximum likelihood estimators for the ground truth in these models, given the social network and the votes. We also evaluate these new estimators, as well as existing ones, on common classes of social networks, and derive analytical bounds on the estimators’ performance in different environments. In all, we provide important insights into voting in the presence of peer influence.
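For intuition, in the i.i.d. baseline without social interaction the maximum likelihood estimator reduces to (weighted) majority voting, as in the classical Nitzan-Paroush setting; the Python sketch below illustrates that baseline only, not the network-aware estimators developed in the paper.

    import math

    def mle_ground_truth(votes, p):
        """MLE of a binary ground truth from i.i.d. votes, each correct with
        probability p > 0.5: compare the log-likelihoods of the two
        hypotheses, which reduces to a simple majority vote."""
        assert p > 0.5
        ll = {h: sum(math.log(p if v == h else 1 - p) for v in votes)
              for h in (0, 1)}
        return max(ll, key=ll.get)

    def weighted_majority_mle(votes, accuracies):
        """With voter-specific accuracies q_i > 0.5, the MLE weights voter
        i's vote by log(q_i / (1 - q_i)) (the Nitzan-Paroush weighting)."""
        score = sum((1 if v == 1 else -1) * math.log(q / (1 - q))
                    for v, q in zip(votes, accuracies))
        return 1 if score >= 0 else 0

    print(mle_ground_truth([1, 1, 0, 1], p=0.7))               # -> 1
    print(weighted_majority_mle([0, 0, 1], [0.9, 0.6, 0.55]))  # -> 0

Once votes are correlated through a social network, this simple counting argument breaks down, which is exactly what motivates the model-based estimators above.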
Culture plays a significant role in human civilizations as a key determinant of relationships and organization formation; however, its role, key properties, and mechanisms are not yet fully understood. This work explores culture and cultural modelling from a complex systems, multi-dimensional, and multi-agency standpoint. The need for performing such modelling and simulation is evident since in-vivo organizational experiments are costly, not easily generalizable, and require lengthy analyses that may not be feasible in critical situations. Exploring the role and influence of culture on organizations is the aim of this chapter, whereby definitions, dimensions, and experiments are introduced in order to show the evolution and emergence of culture as a complex, distributed, social system. This work contributes to culture studies by (a) adding to the literature of culture as a complex system, (b) presenting a new seven-dimensional model to describe and encapsulate culture, and (c) simulating cultural interactions using a multi-agent system of high-functioning agents that achieve an equilibrium of beliefs.
This paper considers a novel approach to scalable multiagent resource allocation in dynamic settings. We propose an approximate solution in which each resource consumer is represented by an independent MDP-based agent that models expected utility using an average model of its expected access to resources, given only limited information about all other agents. A global auction-based mechanism is proposed for allocations based on expected regret. We assume truthful bidding and a cooperative coordination mechanism, as we are considering healthcare scenarios. We illustrate the performance of our coordinated MDP approach against a Monte Carlo-based planning algorithm intended for large-scale applications, as well as other approaches suitable for allocating medical resources. The evaluations show that the global utility value across all consumer agents is closer to optimal when using our algorithms under certain time constraints, with low computational cost. As such, we offer a promising approach for addressing complex resource allocation problems that arise in healthcare settings.
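A stylized Python sketch of the bidding and allocation step, with invented agent names and numbers, is given below; in the paper the two expected-utility estimates come from each agent's average-model MDP, which we abbreviate here to two scalars.

    def regret_bid(eu_with, eu_without):
        """A consumer agent's bid: the expected regret of not receiving the
        resource, under its average-model estimates of expected utility."""
        return max(0.0, eu_with - eu_without)

    def allocate(bids, capacity):
        """Cooperative auction: award the available units to the agents with
        the highest expected-regret bids (ties broken arbitrarily)."""
        return set(sorted(bids, key=bids.get, reverse=True)[:capacity])

    # Hypothetical estimates: (expected utility with, without) the resource.
    estimates = {"icu_agent": (9.0, 4.0),
                 "er_agent": (7.5, 7.0),
                 "ward_agent": (6.0, 2.0)}
    bids = {a: regret_bid(*e) for a, e in estimates.items()}
    print(allocate(bids, capacity=2))   # -> {'icu_agent', 'ward_agent'}

Bidding regret rather than raw utility is what lets the auction prioritize agents for whom the resource makes the largest marginal difference, as in the example, where er_agent values the resource highly but barely benefits from it.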
The billiard ball model (BBM) demonstrates the ability to build reversible logic circuits with no energy waste. However, physical limitations complicate the construction of such models. In this paper, we provide a brief overview of the different gates in the billiard ball model and then propose an approach to creating a universal Toffoli gate in the BBM. We discuss the complexities of building larger Toffoli gates and ways to connect Toffoli gates across multiple levels to represent arbitrary logic functions.
This paper addresses the problem of norm adaptation using Bayesian reinforcement learning. We are concerned with the effectiveness of adding prior domain knowledge when facing environments with different settings, as well as with the speed of adapting to a new environment. Individuals develop their normative framework via interaction with their surrounding environment (including other individuals). An agent acquires domain-dependent knowledge in a certain environment and later reuses it in different settings. This work is novel in that it represents normative behaviors as probabilities over belief sets. We propose a two-level learning framework in which agents learn the values of normative actions and, once confident in them, feed these values back into their belief sets as prior knowledge. Developing a prior belief set about a certain domain can improve an agent’s learning process when adjusting its norms to a new environment’s dynamics. Our evaluation shows that a normative agent, having been trained in an initial environment, is able to adjust its beliefs about the dynamics and behavioral norms of a new environment, and therefore converges to the optimal policy more quickly, especially in the early stages of learning.
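The toy Python sketch below captures the two-level idea under our own simplifications: level one is ordinary tabular Q-learning, and level two promotes only well-visited state-action values (our crude stand-in for the paper's confidence over belief sets) into a prior for the next environment.

    from collections import defaultdict

    class NormativeAgent:
        """Toy two-level learner: level one is tabular Q-learning; level two
        promotes well-visited (confidently estimated) state-action values
        into a prior that initializes the agent in a new environment."""
        def __init__(self, alpha=0.1, gamma=0.95, prior=None):
            self.q = defaultdict(float, prior or {})
            self.visits = defaultdict(int)
            self.alpha, self.gamma = alpha, gamma

        def update(self, s, a, r, s_next, actions):
            """Standard Q-learning backup for the transition (s, a, r, s_next)."""
            self.visits[(s, a)] += 1
            target = r + self.gamma * max(self.q[(s_next, b)] for b in actions)
            self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

        def extract_prior(self, min_visits=50):
            """Keep only state-action values backed by enough experience;
            a crude stand-in for the paper's confidence over belief sets."""
            return {sa: v for sa, v in self.q.items()
                    if self.visits[sa] >= min_visits}

    # Hypothetical usage: carry norms learned in environment A into environment B.
    # agent_b = NormativeAgent(prior=agent_a.extract_prior())

The warm-started agent begins with non-zero value estimates for the actions it was confident about, which is what produces the faster early-stage convergence described above.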
Multiagent resource allocation under uncertainty raises various computational challenges in terms of efficiency, such as intractability, communication cost, and preference representation. To date, most approaches do not provide efficient solutions for dynamic environments, where temporal constraints pose particular challenges. We propose two techniques to cope with such settings: auctions, to allocate fairly according to preferences, and MDPs, to address stochasticity. This research seeks to determine the ideal combination of the two methods to handle a wide range of allocation problems with reduced computation and communication cost between agents.
Bug assignment in large software projects is typically a time-consuming and tedious task; effective assignment requires that bug triagers hold significant contextual information about both the reported bugs and the pool of available developers. In this paper, we propose an auction-based multiagent mechanism for assigning bugs to developers that is intended to minimize backlogs and overall bug lifetime. In this approach, developers and triagers are both modeled as intelligent software agents working on behalf of individuals in a multiagent environment. Upon receiving a bug report, triager agents auction off the bug and collect the requests. Developer agents compute their bids as a function of the developer's profile, preferences, current schedule of assigned bugs, and estimated time-to-fix of the bug. This value is then sent to the triager agent for the final decision. We use the Eclipse and Firefox bug repositories to validate our approach; our studies suggest that the proposed auction-based multiagent mechanism can improve the bug assignment process compared to currently practiced methods. In particular, we found a 16% improvement in the number of fixed bugs compared to the historical data, based on a sample of 213,000 bug reports over a period of 6 years.
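A stripped-down illustration of the bidding logic, with a hypothetical scoring function and invented numbers (the paper derives bids from the developer's profile, preferences, current schedule, and estimated time-to-fix), might look as follows:

    def developer_bid(profile_match, workload, est_time_to_fix, deadline_slack):
        """Hypothetical scoring: bids rise with the profile/bug match and the
        schedule slack, and fall with current workload and time-to-fix."""
        return profile_match * deadline_slack / (1.0 + workload + est_time_to_fix)

    def triager_assign(bids):
        """The triager agent awards the bug to the highest bidder."""
        return max(bids, key=bids.get)

    bids = {"alice": developer_bid(0.9, workload=3, est_time_to_fix=2.0, deadline_slack=5.0),
            "bob":   developer_bid(0.6, workload=1, est_time_to_fix=1.0, deadline_slack=5.0)}
    print(triager_assign(bids))   # 'bob': the lighter schedule outweighs the weaker match

The example shows the mechanism's key property: the best-matched developer is not always the winner once backlog and time-to-fix enter the bid.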
In this paper we review the Billiard Ball Model (BBM) introduced by Toffoli and Fredkin. A previous approach to designing reversible networks based on the BBM is shown to ignore physical realities. We prove that some logic functions cannot be realized without additional control balls; for example, realizing the logical OR operation requires at least three control balls. We show how reversible Toffoli gates can be constructed in this model. Finally, a Toffoli gate module is proposed that can be used in a cascade of gates and thus implement arbitrary reversible functions.
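Independent of its billiard-ball realization, the logical behavior of the Toffoli gate targeted by both BBM papers above can be stated in a few lines; the Python sketch below (ours, for illustration) checks reversibility and shows why the gate is universal for Boolean logic.

    def toffoli(a, b, c):
        """Toffoli (controlled-controlled-NOT) gate: flips the target bit c
        iff both control bits a and b are 1. Reversible and universal."""
        return a, b, c ^ (a & b)

    # Reversibility: applying the gate twice recovers every 3-bit input.
    for x in range(8):
        bits = ((x >> 2) & 1, (x >> 1) & 1, x & 1)
        assert toffoli(*toffoli(*bits)) == bits

    # Universality: AND and NOT fall out with fixed ancilla/control bits.
    AND = lambda a, b: toffoli(a, b, 0)[2]   # target starts at 0 -> a AND b
    NOT = lambda a: toffoli(1, 1, a)[2]      # both controls set -> NOT a
    print(AND(1, 1), NOT(0))                 # 1 1

The fixed ancilla bits in the AND and NOT constructions play the same role as the control balls discussed above: extra inputs are the price of embedding irreversible Boolean functions into a reversible gate.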