<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
{{Alert-warning|This article was automatically generated from a tex file and may contain conversion errors. If permitted, you may login and edit this article to improve the conversion. }}
We would like to end this chapter with something still advanced, in relation with representations, but more analytic and refreshing, featuring some probability.
It is about formal graphs <math>X_N</math> and their formal symmetry groups <math>G_N\subset S_N</math> that we want to talk about, in the <math>N\to\infty</math> limit, a bit as the physicists do. But, how to do this? Not clear at all, because while it is easy to talk about series of graphs <math>X=(X_N)</math>, in an intuitive way, the corresponding symmetry groups <math>G_N\subset S_N</math> are not necessarily compatible with each other, so as to form an axiomatizable object <math>G=(G_N)</math>.
Long story short, we are into potentially difficult mathematics here, and as a more concrete question that we can attempt to solve, we have:
\begin{problem}
What families of groups <math>G_N\subset S_N</math> have compatible representation theory invariants, in the <math>N\to\infty</math> limit? And then, can we use this in order to talk about “good” families of graphs <math>X_N</math>, whose symmetry groups <math>G_N=G(X_N)</math> are of this type?
\end{problem}
But probably too much talking, let us get to work. The simplest graphs are undoubtedly the empty graphs and the simplices, with symmetry group <math>S_N</math>, so as a first question, we would like to know whether the symmetric groups <math>S_N</math> themselves have compatible representation theory invariants, with some nice <math>N\to\infty</math> asymptotics to work out.
And here, surprise, or miracle, the answer is indeed yes, with the result, which is something very classical, remarkable, and beautiful, being as follows:
{{proofcard|Theorem|theorem-1|The probability for a random <math>\sigma\in S_N</math> to have no fixed points is
<math display="block">
P\simeq\frac{1}{e}
</math>
in the <math>N\to\infty</math> limit, where <math>e=2.7182\ldots</math> is the usual constant from analysis. More generally, the main character of <math>S_N</math>, which counts the fixed points, given by
<math display="block">
\chi=\sum_i\sigma_{ii}
</math>
via the standard embedding <math>S_N\subset O_N</math>, follows the Poisson law <math>p_1</math>, in the <math>N\to\infty</math> limit. Even more generally, the truncated characters of <math>S_N</math>, given by
<math display="block">
\chi_t=\sum_{i=1}^{[tN]}\sigma_{ii}
</math>
with <math>t > 0</math>, follow the Poisson laws <math>p_t</math>, in the <math>N\to\infty</math> limit.
|Obviously, there are many things going on here. The idea is as follows:
(1) In order to prove the first assertion, which is the key one, and probably the most puzzling one too, we will use the inclusion-exclusion principle. Let us set:
<math display="block">
S_N^k=\left\{\sigma\in S_N\Big|\sigma(k)=k\right\}
</math>
The set of permutations having no fixed points, called derangements, is then:
<math display="block">
X_N=\left(\bigcup_kS_N^k\right)^c
</math>
Now the inclusion-exclusion principle tells us that we have:
<math display="block">
\begin{eqnarray*}
|X_N|
&=&\left|\left(\bigcup_kS_N^k\right)^c\right|\\
&=&|S_N|-\sum_k|S_N^k|+\sum_{k < l}|S_N^k\cap S_N^l|-\ldots+(-1)^N\sum_{k_1 < \ldots < k_N}|S_N^{k_1}\cap\ldots\cap S_N^{k_N}|\\
&=&N!-N(N-1)!+\binom{N}{2}(N-2)!-\ldots+(-1)^N\binom{N}{N}(N-N)!\\
&=&\sum_{r=0}^N(-1)^r\binom{N}{r}(N-r)!
\end{eqnarray*}
</math>
Thus, the probability that we are interested in, for a random permutation <math>\sigma\in S_N</math> to have no fixed points, is given by the following formula:
<math display="block">
P
=\frac{|X_N|}{N!}
=\sum_{r=0}^N\frac{(-1)^r}{r!}
</math>
Since on the right we have the expansion of <math>1/e</math>, this gives the result.
(2) Let us construct now the main character of <math>S_N</math>, as in the statement. The permutation matrices being given by <math>\sigma_{ij}=\delta_{i\sigma(j)}</math>, we have the following formula:
<math display="block">
\chi(\sigma)
=\sum_i\delta_{\sigma(i)i}
=\#\left\{i\in\{1,\ldots,N\}\Big|\sigma(i)=i\right\}
</math>
In order to establish now the asymptotic result in the statement, regarding these characters, we must prove the following formula, for any <math>r\in\mathbb N</math>, in the <math>N\to\infty</math> limit:
<math display="block">
P(\chi=r)\simeq\frac{1}{r!e}
</math>
We already know, from (1), that this formula holds at <math>r=0</math>. In the general case now, we have to count the permutations <math>\sigma\in S_N</math> having exactly <math>r</math> fixed points. Now since having such a permutation amounts to choosing <math>r</math> points among <math>1,\ldots,N</math>, and then permuting the <math>N-r</math> points left, with no fixed points allowed, we have:
<math display="block">
\begin{eqnarray*}
\#\left\{\sigma\in S_N\Big|\chi(\sigma)=r\right\}
&=&\binom{N}{r}\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}\\
&=&\frac{N!}{r!(N-r)!}\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}\\
&=&N!\times\frac{1}{r!}\times\frac{\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}}{(N-r)!}
\end{eqnarray*}
</math>
By dividing everything by <math>N!</math>, we obtain from this the following formula:
<math display="block">
\frac{\#\left\{\sigma\in S_N\Big|\chi(\sigma)=r\right\}}{N!}=\frac{1}{r!}\times\frac{\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}}{(N-r)!}
</math>
Now by using the computation at <math>r=0</math>, which we already have from (1), it follows that with <math>N\to\infty</math> we have the following estimate:
<math display="block">
P(\chi=r)
\simeq\frac{1}{r!}\cdot P(\chi=0)
\simeq\frac{1}{r!}\cdot\frac{1}{e}
</math>
Thus, we obtain as limiting measure the Poisson law of parameter 1, as stated.
(3) Finally, let us construct the truncated characters of <math>S_N</math>, as in the statement. As before in the case <math>t=1</math>, we have the following computation, coming from definitions:
<math display="block">
\chi_t(\sigma)
=\sum_{i=1}^{[tN]}\delta_{\sigma(i)i}
=\#\left\{i\in\{1,\ldots,[tN]\}\Big|\sigma(i)=i\right\}
</math>
As before, in the proofs of (1) and (2), we obtain by inclusion-exclusion that:
<math display="block">
\begin{eqnarray*}
P(\chi_t=0)
&=&\frac{1}{N!}\sum_{r=0}^{[tN]}(-1)^r\sum_{k_1 < \ldots < k_r\leq[tN]}|S_N^{k_1}\cap\ldots\cap S_N^{k_r}|\\
&=&\frac{1}{N!}\sum_{r=0}^{[tN]}(-1)^r\binom{[tN]}{r}(N-r)!\\
&=&\sum_{r=0}^{[tN]}\frac{(-1)^r}{r!}\cdot\frac{[tN]!(N-r)!}{N!([tN]-r)!}
\end{eqnarray*}
</math>
Now with <math>N\to\infty</math>, we obtain from this the following estimate:
<math display="block">
P(\chi_t=0)
\simeq\sum_{r=0}^{[tN]}\frac{(-1)^r}{r!}\cdot t^r
\simeq e^{-t}
</math>
More generally, by counting the permutations <math>\sigma\in S_N</math> having exactly <math>r</math> fixed points among <math>1,\ldots,[tN]</math>, as in the proof of (2), we obtain:
<math display="block">
P(\chi_t=r)\simeq\frac{t^r}{r!e^t}
</math>
Thus, we obtain in the limit a Poisson law of parameter <math>t</math>, as stated.}}
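These asymptotics are easy to test numerically. Here is a minimal Python sketch (all function names are ours, for illustration only), comparing the inclusion-exclusion formula, a Monte Carlo estimate over random permutations, and the Poisson law <math>p_1</math>:

```python
import math
import random

def derangement_probability(N):
    # Inclusion-exclusion formula from the proof: P = sum_{r=0}^N (-1)^r / r!
    return sum((-1) ** r / math.factorial(r) for r in range(N + 1))

def poisson_mass(t, r):
    # Poisson law p_t: mass at r is t^r / (r! e^t)
    return math.exp(-t) * t ** r / math.factorial(r)

def fixed_points(sigma):
    return sum(1 for i, j in enumerate(sigma) if i == j)

random.seed(0)
N, trials = 50, 20000
counts = [fixed_points(random.sample(range(N), N)) for _ in range(trials)]
empirical = [counts.count(r) / trials for r in range(4)]
print(derangement_probability(N))  # close to 1/e = 0.3678...
print(empirical)                   # close to [p_1(0), p_1(1), p_1(2), p_1(3)]
```

Already at <math>N=50</math> the alternating sum agrees with <math>1/e</math> to machine precision, and the empirical frequencies match the Poisson masses up to the usual Monte Carlo error.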
Escalating difficulties, let us discuss now the hyperoctahedral group <math>H_N</math>. Here the result is more technical, getting us into more advanced probability, as follows:
{{proofcard|Theorem|theorem-2|For the hyperoctahedral group <math>H_N\subset O_N</math>, the law of the truncated character <math>\chi=g_{11}+\ldots+g_{ss}</math> with <math>s=[tN]</math> is, in the <math>N\to\infty</math> limit, the measure
<math display="block">
b_t=e^{-t}\sum_{r=-\infty}^\infty\delta_r\sum_{p=0}^\infty\frac{(t/2)^{|r|+2p}}{(|r|+p)!p!}
</math>
called Bessel law of parameter <math>t > 0</math>.
|We regard <math>H_N</math> as being the symmetry group of the graph <math>I_N=\{I^1,\ldots,I^N\}</math> formed by <math>N</math> segments. The diagonal coefficients are then given by:
<math display="block">
u_{ii}(g)=\begin{cases}
\ \ 0&\mbox{if }g\mbox{ moves }I^i\\
\ \ 1&\mbox{if }g\mbox{ fixes }I^i\\
-1&\mbox{if }g\mbox{ returns }I^i
\end{cases}
</math>
Let us denote by <math>F_g,R_g</math> the number of segments among <math>\{I^1,\ldots,I^s\}</math> which are fixed, respectively returned, by an element <math>g\in H_N</math>. With this notation, we have:
<math display="block">
u_{11}+\ldots+u_{ss}=F_g-R_g
</math>
Let us denote by <math>P_N</math> the probabilities computed over <math>H_N</math>. The density of the law of the variable <math>u_{11}+\ldots+u_{ss}</math> at a point <math>r\geq 0</math> is then given by the following formula:
<math display="block">
D(r)
=P_N(F_g-R_g=r)
=\sum_{p=0}^\infty P_N(F_g=r+p,R_g=p)
</math>
Assume first that we are in the case <math>t=1</math>. We have the following computation:
<math display="block">
\begin{eqnarray*}
\lim_{N\to\infty}D(r)
&=&\lim_{N\to\infty}\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}P_N(F_g+R_g=r+2p)\\
&=&\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}\frac{1}{e(r+2p)!}\\
&=&\frac{1}{e}\sum_{p=0}^\infty\frac{(1/2)^{r+2p}}{(r+p)!p!}
\end{eqnarray*}
</math>
The general case <math>0 < t\leq 1</math> follows by performing some modifications in the above computation. Indeed, the asymptotic density can be computed as follows:
<math display="block">
\begin{eqnarray*}
\lim_{N\to\infty}D(r)
&=&\lim_{N\to\infty}\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}P_N(F_g+R_g=r+2p)\\
&=&\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}\frac{t^{r+2p}}{e^t(r+2p)!}\\
&=&e^{-t}\sum_{p=0}^\infty\frac{(t/2)^{r+2p}}{(r+p)!p!}
\end{eqnarray*}
</math>
Together with <math>D(-r)=D(r)</math>, this gives the formula in the statement.}}
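As a sanity check on this theorem, one can simulate <math>H_N</math> directly, viewing an element as a permutation with independent signs, so that <math>\chi=F_g-R_g</math> is a signed count of fixed segments. The following Python sketch (our own naming conventions) compares the empirical law at <math>t=1</math> with the masses of <math>b_1</math>:

```python
import math
import random

def bessel_mass(t, r, terms=60):
    # Mass of b_t at the integer r, truncating the series over p
    r = abs(r)
    return math.exp(-t) * sum(
        (t / 2) ** (r + 2 * p) / (math.factorial(r + p) * math.factorial(p))
        for p in range(terms)
    )

def character(N):
    # chi(g) = F_g - R_g: each fixed point of the underlying permutation
    # carries an independent sign, +1 (fixed segment) or -1 (returned segment)
    sigma = random.sample(range(N), N)
    return sum(random.choice((1, -1)) for i in range(N) if sigma[i] == i)

random.seed(0)
N, trials = 60, 20000
values = [character(N) for _ in range(trials)]
total = sum(bessel_mass(1, r) for r in range(-30, 31))
print(total)                     # the masses sum to 1
print(bessel_mass(1, 0))         # b_1({0}) = e^{-1} I_0(1) = 0.4657...
print(values.count(0) / trials)  # empirical counterpart
```

The masses of <math>b_t</math> are <math>e^{-t}I_{|r|}(t)</math>, with <math>I_r</math> being the modified Bessel functions of the first kind, which is where the name comes from.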
In order to further discuss now all this, and extend the above results, we will need a number of standard probabilistic preliminaries. We have the following notion:
{{defncard|label=|id=|Associated to any compactly supported positive measure <math>\nu</math> on <math>\mathbb C</math>, not necessarily of mass <math>1</math>, is the probability measure
<math display="block">
p_\nu=\lim_{n\to\infty}\left(\left(1-\frac{t}{n}\right)\delta_0+\frac{1}{n}\nu\right)^{*n}
</math>
where <math>t=\mathrm{mass}(\nu)</math>, called compound Poisson law.}}
In what follows we will be mainly interested in the case where the measure <math>\nu</math> is discrete, as is for instance the case for <math>\nu=t\delta_1</math> with <math>t > 0</math>, which produces the Poisson laws. The following standard result allows one to detect compound Poisson laws:
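In the basic case <math>\nu=t\delta_1</math>, the measure under the convolution sign is a Bernoulli law, its <math>n</math>-th convolution power is the binomial law <math>B(n,t/n)</math>, and the compound Poisson limit is the classical Poisson limit theorem. A quick numerical illustration (function names are ours):

```python
import math

def binomial_mass(n, p, k):
    # Mass at k of the binomial law B(n, p)
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_mass(t, k):
    # Mass at k of the Poisson law p_t
    return math.exp(-t) * t ** k / math.factorial(k)

# ((1 - t/n) delta_0 + (t/n) delta_1)^{*n} is the binomial law B(n, t/n),
# which converges to the Poisson law p_t in the n -> infinity limit
t, n = 2.0, 10 ** 6
gap = max(abs(binomial_mass(n, t / n, k) - poisson_mass(t, k)) for k in range(10))
print(gap)  # of order 1/n
```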
{{proofcard|Proposition|proposition-1|For <math>\nu=\sum_{i=1}^st_i\delta_{z_i}</math> with <math>t_i > 0</math> and <math>z_i\in\mathbb C</math>, we have
<math display="block">
F_{p_\nu}(y)=\exp\left(\sum_{i=1}^st_i(e^{iyz_i}-1)\right)
</math>
where <math>F</math> denotes the Fourier transform.
|Let <math>\mu_n</math> be the Bernoulli measure appearing under the convolution sign, in the above definition of the compound Poisson laws. We have then the following Fourier transform computation:
<math display="block">
\begin{eqnarray*}
F_{\mu_n}(y)=\left(1-\frac{t}{n}\right)+\frac{1}{n}\sum_{i=1}^st_ie^{iyz_i}
&\implies&F_{\mu_n^{*n}}(y)=\left(\left(1-\frac{t}{n}\right)+\frac{1}{n}\sum_{i=1}^st_ie^{iyz_i}\right)^n\\
&\implies&F_{p_\nu}(y)=\exp\left(\sum_{i=1}^st_i(e^{iyz_i}-1)\right)
\end{eqnarray*}
</math>
Thus, we have obtained the formula in the statement.}}
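The convergence behind this proposition can also be observed numerically, by comparing the <math>n</math>-th power of the Bernoulli transform with its exponential limit, at large <math>n</math>. A short Python sketch, with our own encoding of <math>\nu</math> as a list of <math>(t_i,z_i)</math> pairs:

```python
import cmath

def F_bernoulli_power(y, pairs, n):
    # Fourier transform of mu_n^{*n}, where mu_n = (1 - t/n) delta_0 + (1/n) nu
    t = sum(ti for ti, _ in pairs)
    base = (1 - t / n) + sum(ti * cmath.exp(1j * y * zi) for ti, zi in pairs) / n
    return base ** n

def F_compound(y, pairs):
    # Limiting Fourier transform, as in the proposition
    return cmath.exp(sum(ti * (cmath.exp(1j * y * zi) - 1) for ti, zi in pairs))

pairs = [(0.5, 1.0), (0.5, -1.0)]  # nu = t * epsilon, with t = 1
y = 0.7
print(abs(F_bernoulli_power(y, pairs, 10 ** 6) - F_compound(y, pairs)))  # O(1/n)
```

The chosen <math>\nu</math> is the centered Bernoulli law <math>\varepsilon</math>, which is the measure relevant to the Bessel laws, discussed below.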
Getting back now to the Poisson and Bessel laws, we have:
{{proofcard|Theorem|theorem-3|The Poisson and Bessel laws are compound Poisson laws,
<math display="block">
p_t=p_{t\delta_1}\quad,\quad b_t=p_{t\varepsilon}
</math>
where <math>\delta_1</math> is the Dirac mass at <math>1</math>, and <math>\varepsilon</math> is the centered Bernoulli law, <math>\varepsilon=(\delta_1+\delta_{-1})/2</math>.
|We have two assertions here, the idea being as follows:
(1) The first assertion, regarding the Poisson law <math>p_t</math>, is clear from the above definition of the compound Poisson laws, which for <math>\nu=t\delta_1</math> takes the following form:
<math display="block">
p_\nu=\lim_{n\to\infty}\left(\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\,\delta_1\right)^{*n}
</math>
But this is a usual Poisson limit, producing the Poisson law <math>p_t</math>, as desired.
(2) Regarding the second assertion, concerning the Bessel law <math>b_t</math>, by further building on the various formulae from the above theorem on <math>H_N</math> and its proof, we have:
<math display="block">
F_{b_t}(y)=\exp\left(t\left(\frac{e^{iy}+e^{-iy}}{2}-1\right)\right)
</math>
On the other hand, the formula in the above proposition gives, for <math>\nu=t\varepsilon</math>, the same Fourier transform formula. Thus, with <math>\nu=t\varepsilon</math> we have <math>p_\nu=b_t</math>, as claimed.}}
More generally now, and in answer to the problem raised at the beginning of this section, for the groups <math>H_N^s</math> from chapter 9 the asymptotic truncated characters follow the laws <math>b_t^s=p_{t\varepsilon_s}</math>, with <math>\varepsilon_s</math> being the uniform measure on the <math>s</math>-th roots of unity, and if you are really picky about what happens with <math>N\to\infty</math>, these are the only solutions. For details on this, you can check my book <ref name="ba1">T. Banica, Linear algebra and group theory (2024).</ref>.
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Graphs and their symmetries|eprint=2406.03664|class=math.CO}}
==References==
{{reflist}}