-----
- Let $G(n,p)$ denote the [[Erdos-Renyi random graph model]].
- Let $c$ denote the [[expectation|expected]] [[mean degree]] of $G(n,p)$, $c \neq 1$.

> [!proposition] Proposition. ([[size of small components in Erdos-Renyi random graph model]])
> The [[expectation|expected size]] of the [[component]] to which a randomly chosen node in a [[small component]] of $G(n,p)$ belongs is $\langle s \rangle =\frac{1}{1-c+cS},$ where $S$ denotes the [[characterization of giant component in Erdos-Renyi random graph model|size of the giant component]] (as a fraction of $n$), if it exists, and $0$ otherwise.
> ![[CleanShot 2023-10-29 at [email protected]|300]]
> The [[expectation|expected degree]] in the [[small component]]s is $\langle k \rangle_{\text{small}}=(1-S)c$ when $n$ is large.
>
> The [[degree distribution]] within a [[small component]] (i.e., the fraction of nodes in a [[small component]] having [[degree]] $k$) is $e^{-c}c^{k}(1-S)^{k-1} / k!$.

> [!proof]- Proof. ([[size of small components in Erdos-Renyi random graph model]])
> Recall that [[small components of Erdos-Renyi random graph model are trees]]. Consider a node $i$ in a [[small component]] $A$ of $G(n,p)$, as below.
> ![[CleanShot 2023-10-29 at [email protected]|300]]
> Node $i$'s edges lead to $k$ mutually disjoint (since $A$ is a [[tree]]) [[subgraph|subnetworks]], where $k$ is the [[degree]] of $i$. The size of the [[component of a graph|component]] to which $i$ belongs is the sum of the sizes $t_{1}, \dots, t_{k}$ of the subnetworks reachable along its edges, plus one. Then $A$ has size $s=1+ \sum_{m=1}^{k}t_{m}. \ (*)$
> We now compute the mean size of a small component by averaging this expression over many different nodes in the small components of the [[network]]. First, we compute the average size of the component to which a node of [[degree]] $k$ (e.g., $i$) belongs.
> Taking the average of both sides of $(*)$ and using linearity of [[expectation]] yields $\langle s \rangle_{k}= 1 + \sum_{m=1}^{k} \langle t_{m} \rangle = 1 + k\langle t \rangle,$ where the last equality holds because the neighbors are statistically equivalent, so each subnetwork has the same average size $\langle t \rangle$.
> Now we average this expression over small-component nodes of any degree: $\langle s \rangle = 1 + \langle k \rangle_{\text{small}} \langle t \rangle , \ (1)$ where $\langle k \rangle_{\text{small}}$ is the average degree of a node in a small component.
> We proceed to find $\langle k \rangle_{\text{small}}$ and $\langle t \rangle$.
> The [[giant component]] (if it exists) fills a fraction $S$ of the [[network]]; if it doesn't exist, $S=0$ and the computation is unchanged. So the small components combined fill a fraction $1-S$ of the [[network]]; hence $(1-S)n$ nodes in total belong to small components. Each of these is connected to any of the others with the usual [[probability]] $p$, hence $\langle k \rangle_{\text{small}}= [(1-S)n - 1]p=[(1-S)n - 1] \frac{c}{n-1} \approx (1-S)c, \ \ (2)$ where we use [[Erdos-Renyi random graph model#^784c89|this expression for the mean degree]].
> What about $\langle t \rangle$? We will use a *cavity method*.
> ![[CleanShot 2023-10-29 at [email protected]|300]]
> Consider the [[network]] obtained by removing $i$. Note that as $n \to \infty$, properties such as the sizes of the [[giant component]] and the [[small component]]s are preserved. In this modified [[network]], what were previously the subnetworks containing the neighbors $n_{1},n_{2},\dots$ of node $i$ (the shaded regions) are now [[small component]]s in their own right. Since $G(n,p)$ and $G(n-1,p)$ behave identically in the [[function limit]] $n\to \infty$, the average size $\langle t \rangle$ of the components to which $n_{1},n_{2},\dots$ belong is simply equal to the average size of any small component, i.e., $\langle t \rangle=\langle s \rangle$.
> Putting this together with $(1)$ and $(2)$, we get $\langle s \rangle=1+(1-S)c \langle s \rangle$, so $[1-(1-S)c]\langle s \rangle = 1$ and hence $\langle s \rangle = \frac{1}{1-c+cS}$, as desired.
>
> It remains to derive the [[degree distribution]] within the [[small component]]s.
>
> a) First consider a node of [[degree]] $k$ in $G(n,p)$. It belongs to a [[small component]] if none of its $k$ neighbors is in the [[giant component]]; this happens with [[probability]] $(1-S)^{k}$.
>
> b) Hence via [[Bayes' Theorem]] we can conclude that
> $$\begin{align}
> \mathbb{P}(\text{has degree }k \mid \text{in SC}) &= \frac{\mathbb{P}(\text{in SC} \mid \text{has degree }k)\,\mathbb{P}(\text{has degree }k)}{\mathbb{P}(\text{in SC})} \\
> &= \frac{(1-S)^{k}c^{k}e^{-c}}{k!\,(1-S)} \\
> &= e^{-c}c^{k}(1-S)^{k-1} / k!,
> \end{align}$$
> as claimed, where we use that the [[degree distribution]] of the [[Erdos-Renyi random graph model]] is [[Poisson random variable|Poisson]] when we take $n \to \infty$, and that $\mathbb{P}(\text{in SC}) = 1-S$.

-----
#### References

> [!backlink]
> ```dataview
> TABLE rows.file.link as "Further Reading"
> FROM [[]]
> FLATTEN file.tags as Tag
> WHERE Tag = "#definition" OR Tag = "#theorem" OR Tag = "#MOC" OR Tag = "#proposition" OR Tag = "#axiom"
> GROUP BY Tag
> ```

> [!frontlink]
> ```dataview
> TABLE rows.file.link as "Further Reading"
> FROM outgoing([[]])
> FLATTEN file.tags as Tag
> WHERE Tag = "#definition" OR Tag = "#theorem" OR Tag = "#MOC" OR Tag = "#proposition" OR Tag = "#axiom"
> GROUP BY Tag
> ```
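
The proposition can be checked numerically. The sketch below is a minimal Monte Carlo experiment, not part of the original argument; the function names `giant_fraction` and `component_sizes` are made up for illustration. It solves the standard fixed-point equation $S = 1 - e^{-cS}$ for the giant-component fraction (from the [[characterization of giant component in Erdos-Renyi random graph model]]), and samples subcritical $G(n,p)$ with $c<1$, where $S=0$ and the proposition predicts $\langle s \rangle = 1/(1-c)$.

```python
import math
import random

def giant_fraction(c, iters=200):
    # Iterate the fixed-point equation S = 1 - exp(-c*S).
    # Starting from S = 1 converges to the positive root for c > 1
    # and to 0 for c < 1.
    S = 1.0
    for _ in range(iters):
        S = 1 - math.exp(-c * S)
    return S

def component_sizes(n, p, rng):
    # Sample G(n, p) and return, for each node, the size of the
    # component containing it (union-find with path halving).
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    counts = {}
    for v in range(n):
        r = find(v)
        counts[r] = counts.get(r, 0) + 1
    return [counts[find(v)] for v in range(n)]

rng = random.Random(0)
n, c = 1000, 0.5                      # subcritical: c < 1, so S = 0
sizes = []
for _ in range(5):                    # average over a few graphs
    sizes += component_sizes(n, c / (n - 1), rng)
mean_s = sum(sizes) / len(sizes)      # empirical <s>, node-averaged

print(giant_fraction(2.0))            # supercritical example, c = 2
print(mean_s)                         # theory: 1/(1 - c) = 2 for c = 0.5
```

For $c = 2$ the fixed point is $S \approx 0.797$, giving $\langle s \rangle = 1/(1-2+2S) \approx 1.68$; the simulated node-averaged component size for $c = 0.5$ should land close to the predicted value $2$.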