Differentially private analysis of graphs

Differentially private analysis of graphs studies algorithms for computing accurate graph statistics while preserving differential privacy. Such algorithms are used for data represented in the form of a graph where nodes correspond to individuals and edges correspond to relationships between them. For example, edges could correspond to friendships, sexual relationships, or communication patterns. A party that has collected sensitive graph data can process it with a differentially private algorithm and publish the algorithm's output. The goal of differentially private analysis of graphs is to design algorithms that compute accurate global information about a graph while preserving the privacy of the individuals whose data it stores.

Variants
Differential privacy imposes a restriction on the algorithm. Intuitively, it requires that the algorithm has roughly the same output distribution on neighboring inputs. If the input is a graph, there are two natural notions of neighboring inputs, edge neighbors and node neighbors, which yield two natural variants of differential privacy for graph data.

Let $$\epsilon$$ be a positive real number and $$\mathcal{A}$$ be a randomized algorithm that takes a graph as input and returns an output from a set $$\mathcal{O}$$. The algorithm $$\mathcal{A}$$ is $$\epsilon$$-differentially private if, for all neighboring graphs $$G_1$$ and $$G_2$$ and all subsets $$S$$ of $$\mathcal{O}$$,

$$\Pr[\mathcal{A}(G_1) \in S] \leq e^{\epsilon} \times \Pr[\mathcal{A}(G_2) \in S],$$

where the probability is taken over the randomness used by the algorithm.
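A standard way to satisfy this definition for a real-valued graph statistic $$f$$ is the Laplace mechanism, which releases

$$\mathcal{A}(G) = f(G) + \mathrm{Lap}(\Delta/\epsilon),$$

where $$\Delta = \max |f(G_1) - f(G_2)|$$ over all neighboring graphs $$G_1, G_2$$ is the sensitivity of $$f$$. The released value has density proportional to $$\exp(-\epsilon|x - f(G)|/\Delta)$$, and for any point $$x$$ the triangle inequality bounds the ratio of densities on neighboring inputs:

$$\frac{p_{G_1}(x)}{p_{G_2}(x)} = \exp\!\left(\frac{\epsilon\,(|x - f(G_2)| - |x - f(G_1)|)}{\Delta}\right) \leq \exp\!\left(\frac{\epsilon\,|f(G_1) - f(G_2)|}{\Delta}\right) \leq e^{\epsilon},$$

which is exactly the guarantee required above. The two notions of neighboring graphs lead to different values of $$\Delta$$ for the same statistic, and hence to different amounts of noise.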

Edge differential privacy
Two graphs are edge neighbors if they differ in one edge. An algorithm is $$\epsilon$$-edge-differentially private if, in the definition above, the notion of edge neighbors is used. Intuitively, an edge differentially private algorithm has similar output distributions on any pair of graphs that differ in one edge, thus protecting changes to graph edges.
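As a minimal sketch of how an edge differentially private release can work, the following Python code (illustrative; the graph representation and function names are this example's own) releases a noisy edge count. Since edge neighbors differ in exactly one edge, the sensitivity of the edge count is 1, so Laplace noise of scale $$1/\epsilon$$ suffices.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_edge_count(edges: set, epsilon: float) -> float:
    """Release the number of edges with epsilon-edge-differential privacy.

    Edge neighbors differ in one edge, so the sensitivity of the edge
    count is 1 and Laplace noise of scale 1/epsilon suffices.
    """
    return len(edges) + laplace_noise(1.0 / epsilon)

# A small graph: a triangle on the vertices 0, 1, 2, with edges
# stored as frozensets so that {0, 1} and {1, 0} are the same edge.
triangle = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]}
release = noisy_edge_count(triangle, epsilon=0.5)
```

The released value is the true count 3 perturbed by Laplace noise of scale 2; smaller values of $$\epsilon$$ give stronger privacy but noisier answers.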

Node differential privacy
Two graphs are node neighbors if one can be obtained from the other by deleting a node and its adjacent edges. An algorithm is $$\epsilon$$-node-differentially private if, in the definition above, the notion of node neighbors is used. Intuitively, a node differentially private algorithm has similar output distributions on any pair of graphs that differ in one node and the edges adjacent to it, thus protecting all information pertaining to each individual. Node differential privacy gives a stronger privacy guarantee than edge differential privacy.
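The price of this stronger guarantee can be seen on the same edge-count statistic. A sketch, under the assumption that graphs have at most $$n$$ nodes (the function name is this example's own): deleting one node can remove up to $$n - 1$$ edges, so the node sensitivity of the edge count is $$n - 1$$ and the Laplace noise must grow accordingly.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def node_private_edge_count(n: int, edges: set, epsilon: float) -> float:
    """Release the number of edges with epsilon-node-differential privacy
    on graphs with at most n nodes.

    Removing one node deletes up to n - 1 edges, so the node sensitivity
    of the edge count is n - 1, and the noise scale is (n - 1)/epsilon
    rather than the 1/epsilon needed for edge privacy.
    """
    return len(edges) + laplace_noise((n - 1) / epsilon)

# A path on 3 nodes: 0 - 1 - 2, released with node privacy.
path = {frozenset(e) for e in [(0, 1), (1, 2)]}
release = node_private_edge_count(3, path, epsilon=0.5)
```

The noise scale grows linearly with $$n$$, which is one reason why designing accurate node differentially private algorithms is substantially harder; much of the research below is about achieving better accuracy than this naive bound.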

Research history
The first edge differentially private algorithm was designed by Nissim, Raskhodnikova, and Smith. The distinction between edge and node differential privacy was first discussed by Hay, Miklau, and Jensen. However, it took several years before the first node differentially private algorithms were published in Blocki et al., Kasiviswanathan et al., and Chen and Zhou. In all three papers, the algorithms release a single statistic, such as a triangle count or counts of other subgraphs. Raskhodnikova and Smith gave the first node differentially private algorithm for releasing a vector, specifically, the degree count and the degree distribution.