----
> [!theorem] Theorem. ([[characterization of giant component in Erdos-Renyi random graph model]])
> The [[Erdos-Renyi random graph model]] $G(n,p)$ with $n$ nodes possesses a [[giant component]] if and only if its [[mean degree]] $c$ exceeds $1$. When it exists, said [[giant component]] is unique (under mild assumptions); its size as a fraction of $n$ is the nonzero solution of $S=1-e^{-cS}.$
> ![[CleanShot 2023-10-28 at 19.16.57.jpg]]

> [!proof]- Proof. ([[characterization of giant component in Erdos-Renyi random graph model]])
> Fix $p$ and consider $G(n,p)$.
>
> **$\to$.** Suppose there is a [[giant component]] in the [[network]], meaning that as $n$ grows, the average size of the largest [[component of a graph|component]] grows with it, and hence the largest component occupies a constant fraction of the whole [[network]]. We will calculate said fraction in the [[function limit]] $n\to \infty$. Denote by $u$ the
> - [[expectation|expected]] fraction of nodes in the [[random graph]] that do not belong to the [[giant component]]
> - [[probability distribution|probability]] that a node $i$ in the [[network]] chosen uniformly at [[random variable|random]] does not belong to the [[giant component]].
> These notions are equivalent, but often it is nicer to think about one than the other.
>
> This means that for every other node $j$ in the [[network]], either:
> 1. $i$ is not connected to $j$; this happens with [[probability distribution|probability]] $1-p$.
> 2. $i$ is connected to $j$ (probability $p$) but $j$ is not in the [[giant component]] (probability $u$); this happens with [[probability distribution|probability]] $pu$.
> [[the exclusion rule|Hence the probability]] that $i$ is not connected to the [[giant component]] via node $j$ is $1-p+p u$. Since edges are placed independently, [[multiplication principle|it follows that]] the probability $u$ of $i$ not being connected to the [[giant component]] at all must be $(1-p+p u)^{n-1}$.
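As a numerical aside (not part of the proof), the finite-$n$ probability $(1-p+pu)^{n-1}$ with $p=c/(n-1)$ approaches $e^{-c(1-u)}$ as $n$ grows, which is exactly the approximation used in the next step. The values $c=2$, $u=0.3$ below are arbitrary illustrative choices:

```python
from math import exp

c, u = 2.0, 0.3                # arbitrary illustrative values
limit = exp(-c * (1 - u))      # the n -> infinity limit e^{-c(1-u)}

for n in (10, 100, 10_000, 1_000_000):
    p = c / (n - 1)
    finite = (1 - p + p * u) ** (n - 1)   # exact probability for finite n
    print(n, round(finite, 6), round(abs(finite - limit), 6))
```

The gap shrinks roughly like $1/n$, so the approximation is harmless in the limit.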
>
> We proceed to manipulate this expression for $u$ towards the theorem statement. Rearranging and substituting $p=\frac{c}{n-1}$ yields $u=\left[ 1-\frac{c}{n-1} (1-u) \right]^{n-1},$
> then we take $\ln$ on both sides (or just directly apply the $(1-x)^{n}$ approximation [[power series|here]]): $\ln u=(n-1)\ln \left[ 1- \frac{c}{n-1} (1-u) \right] \approx -(n-1) \frac{c}{n-1} (1-u)=-c(1-u).$
> Now exponentiation yields $u = e^{-c(1-u)}$
> and we conclude that the fraction $S=1-u$ of nodes in the [[giant component]] satisfies $S = 1- e^{-cS}. \quad (*)$
> This equation has no closed-form solution for $S$ in general, so in practice we solve it numerically.
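For concreteness, a minimal numerical sketch (illustrative, not part of the proof; the function name is my own): simple fixed-point iteration on $S \mapsto 1-e^{-cS}$ recovers the solution of $(*)$:

```python
from math import exp

def giant_component_fraction(c, iterations=1000):
    """Approximate the solution of S = 1 - exp(-c*S) by fixed-point iteration.

    Starting from S = 0.5, the iterates converge to the nonzero solution
    when c > 1 and decay to 0 when c <= 1.
    """
    S = 0.5
    for _ in range(iterations):
        S = 1.0 - exp(-c * S)
    return S

print(giant_component_fraction(2.0))   # roughly 0.797: giant component spans ~80% of nodes
print(giant_component_fraction(0.5))   # essentially 0: no giant component
```

This matches the picture below: above $c=1$ a nonzero fixed point appears, below it the iteration collapses to $S=0$.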
>
> ![[CleanShot 2023-10-28 at 19.16.57.jpg|300]]
>
> We see graphically that the equation $S=1-e^{-cS}$ has only the trivial solution $S=0$ *iff* $c\leq 1$. To make this precise, let $c \leq 1$. The [[derivative]] $\frac{d}{dS}[1-e^{-cS}]=ce^{-cS}$ monotonically decreases on $[0, \infty)$, so for $S>0$ it is strictly below its value at $S=0$, which is $c$. Hence $\frac{d}{dS}[1-e^{-cS}] < c \leq 1=\frac{d}{dS}[S] \text{ for all } S>0.$
> Both $y=S$ and $y=1-e^{-cS}$ are [[continuous]], increasing functions sharing the initial value $y(0)=0$, so the strict slope inequality gives $1-e^{-cS} < S$ for all $S>0$, and no $S>0$ can solve $(*)$.
>
> If $c>1$, the slope of $1-e^{-cS}$ at $S=0$ is $c>1$, so $1-e^{-cS}>S$ for small $S>0$; since $1-e^{-cS}<1<S$ for $S>1$, continuity forces a crossing, i.e. $(*)$ has a nontrivial solution. The transition falls exactly where the two [[derivative]]s match at $S=0$, that is, at $c=1$. (See textbook.)
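This crossing picture is easy to check numerically; a small sketch (illustrative only, with arbitrary sample values of $c$):

```python
from math import exp

def gap(S, c):
    """(1 - e^{-cS}) - S; a nonzero solution of (*) is a sign change of this gap."""
    return 1.0 - exp(-c * S) - S

# c <= 1: the curve stays strictly below y = S for every S > 0
assert all(gap(k / 100, 1.0) < 0 for k in range(1, 301))

# c > 1: above y = S near the origin, below it farther out, so a root lies between
print(gap(0.01, 1.5), gap(2.0, 1.5))
```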
>
> Hence there is no [[giant component]] when $c\leq1$.
>
> **$\leftarrow.$** We now want to show that $c>1$ guarantees the existence and uniqueness of a [[giant component]].
>
> **Existence.** Let $c>1$. Let us think about the formation of the [[giant component]]. Obtain a 'small' connected set $A$ of $s$ nodes somewhere in the [[network]] (as $n \to \infty$, $A$ exists with [[probability]] $1$ provided $p \neq 0$).
> ![[CleanShot 2023-10-29 at [email protected]|300]]
> The $s$ nodes of $A$ can be divided into (see the figure above)
> - a *core*: nodes that only have connections inside of $A$;
> - a *periphery*: nodes that have at least one neighbor outside of $A$.
>
> We say that we $\text{grow}$ $A$ when we add to it those nodes immediately adjacent to its periphery.
>
> The [[expectation|expected]] number of connections a node $i$ in the periphery has among the $n-s$ nodes outside of $A$ [[indicator random variable|is]] $\sum_{n-s \text{ terms}} 1 \cdot p=(n-s)p=c\frac{n-s}{n-1} \approx c$, assuming (arbitrarily) large $n$. This means that the expected size of the new periphery when we $\text{grow}$ $A$ is $c \cdot |(\text{old periphery})|$. We may repeat the $\text{grow}$ operation indefinitely: after $N$ applications the expected periphery size has been multiplied by $c^{N}$, which shrinks exponentially if $c <1$ and grows exponentially if $c > 1$. In the latter case the process can grow a component whose size is comparable to that of the whole [[network]], a [[giant component]].
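This back-of-the-envelope growth argument can be compared with a direct simulation; a minimal sketch (parameter choices $n=1000$, $c=2$, and the seed are arbitrary) that samples $G(n,p)$ and measures the largest-component fraction, which should land near the solution $S \approx 0.80$ of $(*)$ for $c=2$:

```python
import random

def largest_component_fraction(n, c, seed=0):
    """Sample G(n, p) with p = c/(n-1) and return |largest component| / n."""
    rng = random.Random(seed)
    p = c / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):                    # flip a p-coin for each potential edge
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for start in range(n):                # DFS over the components
        if seen[start]:
            continue
        seen[start] = True
        stack, size = [start], 0
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        best = max(best, size)
    return best / n

print(largest_component_fraction(1000, 2.0))   # typically near S ~ 0.80
print(largest_component_fraction(1000, 0.5))   # tiny largest component: subcritical
```

For $c<1$ the same code returns a vanishingly small fraction, matching the exponential decay of the periphery.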
>
> **Uniqueness.** See the textbook, page 354.
>
----
#### References
> [!backlink]
> ```dataview
> TABLE rows.file.link as "Further Reading"
> FROM [[]]
> FLATTEN file.tags
> GROUP BY file.tags as Tag
> ```

> [!frontlink]
> ```dataview
> TABLE rows.file.link as "Further Reading"
> FROM outgoing([[]])
> FLATTEN file.tags as Tag
> WHERE Tag = "#definition" OR Tag = "#theorem" OR Tag = "#MOC" OR Tag = "#proposition" OR Tag = "#axiom"
> GROUP BY Tag
> ```