Symmetric groups

[math] \newcommand{\mathds}{\mathbb}[/math]

11a. Character laws

We develop in what follows some general theory for the compact subgroups [math]G\subset U_N[/math], usually taken finite, with our main example being the symmetric group [math]S_N\subset O_N[/math]. Let us start with a definition that we have already met in chapter 9, namely:

Definition

A representation of a finite group [math]G[/math] is a group morphism

[[math]] u:G\to U_N [[/math]]
into a unitary group. The character of such a representation is the function

[[math]] \chi:G\to\mathbb C\quad,\quad g\to Tr(u_g) [[/math]]
where [math]Tr[/math] is the usual, unnormalized trace of the [math]N\times N[/math] matrices.

As explained in chapter 9, the simplest case of all this, namely [math]N=1[/math], is of particular interest. Here the representations coincide with their characters, and are by definition the group morphisms as follows, called characters of the group:

[[math]] \chi:G\to\mathbb T [[/math]]

These characters form an abelian group [math]\widehat{G}[/math], and when [math]G[/math] itself is abelian, the correspondence [math]G\to\widehat{G}[/math] is a duality, in the sense that it maps [math]\widehat{G}\to G[/math] as well. Moreover, a more detailed study shows that we have in fact an isomorphism [math]G\simeq\widehat{G}[/math], with this being something quite subtle, related at the same time to the structure theorem for the finite abelian groups, [math]G\simeq\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math], and to the Fourier transforms over such groups.


In what follows we will be interested in the general case, [math]N\in\mathbb N[/math]. It is technically convenient to assume that the representation [math]\pi:G\to U_N[/math] is faithful, by replacing if necessary [math]G[/math] with its image. Thus, we are led to the following definition:

Definition

The main character of a compact group [math]G\subset U_N[/math] is the map

[[math]] \chi:G\to\mathbb C\quad,\quad g\to Tr(g) [[/math]]
which associates to the group elements, viewed as unitary matrices, their trace.

We will see in a moment some motivations for the study of these characters. From a naive viewpoint, which is ours at the present stage, we want to do some linear algebra with our group elements [math]g\in U_N[/math], and we have several choices here, as follows:


  • A first idea would be to look at the determinant, [math]\det g\in\mathbb T[/math]. However, this is usually not a very interesting quantity, for instance because [math]g\in O_N[/math] implies [math]\det g=\pm1[/math]. Also, for groups like [math]SO_N,SU_N[/math], this determinant is by definition 1.
  • A second idea would be to try to compute eigenvalues and eigenvectors for the group elements [math]g\in G[/math], and then solve diagonalization questions for these elements. However, all this is quite complicated, so this idea is not good either.
  • Thus, we are left with looking at the trace, [math]Tr(g)\in\mathbb C[/math]. We will see soon that this is a very reasonable choice, with the mathematics being at the same time non-trivial, doable, and interesting, for a number of reasons.


Before starting our study, let us mention as well the more advanced reasons leading to the study of characters. The idea here is that a given finite or compact group [math]G[/math] can have several representations [math]\pi:G\to U_N[/math], and these representations can be studied via their characters [math]\chi_\pi:G\to\mathbb C[/math], with a well-known and deep theorem basically stating that [math]\pi[/math] can be recovered from its character [math]\chi_\pi[/math]. We will be back to this later.


As a basic result regarding the characters, we have:

Theorem

Given a compact group [math]G\subset U_N[/math], its main character [math]\chi:G\to\mathbb C[/math] is a central function, in the sense that it satisfies the following condition:

[[math]] \chi(gh)=\chi(hg) [[/math]]
Equivalently, [math]\chi[/math] is constant on the conjugacy classes of [math]G[/math].


Show Proof

This is clear from the fact that the trace of matrices satisfies:

[[math]] Tr(AB)=Tr(BA) [[/math]]

Thus, we are led to the conclusion in the statement.

As before, there is some interesting mathematics behind all this. We will prove later, when doing representation theory, that any central function [math]f:G\to\mathbb C[/math] appears as a linear combination of characters [math]\chi_\pi:G\to\mathbb C[/math] of representations [math]\pi:G\to U_N[/math].


In order to work out now some examples, let us get back now to our main examples of finite groups, constructed in chapter 9, namely:

[[math]] \mathbb Z_N\subset D_N\subset S_N\subset H_N [[/math]]

We will do in what follows some character computations for these groups, which are all quite elementary, or at least not requiring very advanced theory.


Let us start with the following result, which covers [math]\mathbb Z_N\subset D_N\subset S_N[/math], or rather tells us what is to be done with these groups, in relation with their main characters:

Proposition

For the symmetric group, regarded as a group of permutation matrices, [math]S_N\subset O_N[/math], the main character counts the number of fixed points:

[[math]] \chi(\sigma)=\#\left\{i\in\{1,\ldots,N\}\Big|\sigma(i)=i\right\} [[/math]]
The same goes for any [math]G\subset S_N[/math], regarded as a matrix group via [math]G\subset S_N\subset O_N[/math].


Show Proof

This is indeed clear from definitions, because the diagonal entries of the permutation matrices correspond to the fixed points of the permutation.
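Since this is something very concrete, here is a quick numerical sketch, in Python, verifying the proposition by brute force; the helper names below are our own, not from the text.

```python
# Sketch (our own helper names): for every permutation sigma of {0,...,N-1},
# the trace of the associated permutation matrix equals the number of fixed
# points of sigma, as in the proposition above.
from itertools import permutations

def perm_matrix(sigma):
    # permutation matrix with entry 1 at position (sigma(j), j)
    N = len(sigma)
    return [[1 if sigma[j] == i else 0 for j in range(N)] for i in range(N)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

N = 4
for sigma in permutations(range(N)):
    fixed = sum(1 for i in range(N) if sigma[i] == i)
    assert trace(perm_matrix(sigma)) == fixed
```

The check runs over all of [math]S_4[/math]; the diagonal entry at [math](i,i)[/math] is nonzero precisely when [math]\sigma(i)=i[/math], which is the content of the proof.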

Summarizing, we are left with counting fixed points. For the simplest possible group, namely the cyclic group [math]\mathbb Z_N\subset S_N[/math], the computation is as follows:

Proposition

The main character of [math]\mathbb Z_N\subset O_N[/math] is given by:

[[math]] \chi(g)=\begin{cases} 0&{\rm if}\ g\neq1\\ N&{\rm if}\ g=1 \end{cases} [[/math]]
Thus, at the probabilistic level, we have the following formula,

[[math]] law(\chi)=\left(1-\frac{1}{N}\right)\delta_0+\frac{1}{N}\delta_N [[/math]]
telling us that the main character [math]\chi[/math] follows a rescaled Bernoulli law.


Show Proof

The first formula is clear, because the cyclic permutation matrices have 0 on the diagonal, and so 0 as trace, unless the matrix is the identity, having trace [math]N[/math]. As for the second formula, this is a probabilistic reformulation of the first one.

For the dihedral group now, which is the next one in our hierarchy, the computation is more interesting, and the final answer is no longer uniform in [math]N[/math], as follows:

Proposition

For the dihedral group [math]D_N\subset S_N[/math] we have:

[[math]] law(\chi)=\begin{cases} \left(\frac{3}{4}-\frac{1}{2N}\right)\delta_0+\frac{1}{4}\delta_2+\frac{1}{2N}\delta_N&(N\ even)\\ &\\ \left(\frac{1}{2}-\frac{1}{2N}\right)\delta_0+\frac{1}{2}\delta_1+\frac{1}{2N}\delta_N&(N\ odd) \end{cases} [[/math]]


Show Proof

The dihedral group [math]D_N[/math] consists indeed of:


(1) [math]N[/math] symmetries, each having [math]1[/math] fixed point when [math]N[/math] is odd, and having [math]0[/math] or [math]2[/math] fixed points, distributed [math]50-50[/math], when [math]N[/math] is even.


(2) [math]N[/math] rotations, each having [math]0[/math] fixed points, except for the identity, which is technically a rotation too, and which has [math]N[/math] fixed points.


Thus, we are led to the formulae in the statement.
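These dihedral formulae can be double-checked by brute force, as in the following Python sketch; the helper names and the concrete realization of [math]D_N[/math] inside [math]S_N[/math] are our own.

```python
# Brute-force sketch (helper names ours): realize D_N inside S_N via rotations
# i -> i+k and reflections i -> k-i of the vertices {0,...,N-1}, tally fixed
# points, and compare with the two formulae in the statement.
from collections import Counter
from fractions import Fraction

def dihedral_law(N):
    elements = set()
    for k in range(N):
        elements.add(tuple((i + k) % N for i in range(N)))  # rotations
        elements.add(tuple((k - i) % N for i in range(N)))  # reflections
    counts = Counter(sum(1 for i in range(N) if g[i] == i) for g in elements)
    return {f: Fraction(c, 2 * N) for f, c in counts.items()}

# N even: (3/4 - 1/2N) at 0, 1/4 at 2, 1/2N at N
assert dihedral_law(6) == {0: Fraction(2, 3), 2: Fraction(1, 4), 6: Fraction(1, 12)}
# N odd: (1/2 - 1/2N) at 0, 1/2 at 1, 1/2N at N
assert dihedral_law(5) == {0: Fraction(2, 5), 1: Fraction(1, 2), 5: Fraction(1, 10)}
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the comparison.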

Regarding now the symmetric group [math]S_N[/math] itself, the permutations having no fixed points at all are called derangements, and a classical question in combinatorics is that of counting them. The answer, which is something remarkable, and very beautiful, is as follows:

Theorem

The probability for a random permutation [math]\sigma\in S_N[/math] to be a derangement is given by the following formula:

[[math]] P=1-\frac{1}{1!}+\frac{1}{2!}-\ldots+(-1)^{N-1}\frac{1}{(N-1)!}+(-1)^N\frac{1}{N!} [[/math]]
Thus we have the following asymptotic formula, in the [math]N\to\infty[/math] limit,

[[math]] P\simeq\frac{1}{e} [[/math]]
where [math]e=2.7182\ldots[/math] is the usual constant from analysis.


Show Proof

This is something very classical, which is best viewed by using the inclusion-exclusion principle. Consider indeed the following sets:

[[math]] S_N^i=\left\{\sigma\in S_N\Big|\sigma(i)=i\right\} [[/math]]

The set of permutations having no fixed points is then:

[[math]] X_N=\left(\bigcup_i S_N^i\right)^c [[/math]]

In order to compute now the cardinality [math]|X_N|[/math], consider as well the following sets, depending on indices [math]i_1 \lt \ldots \lt i_k[/math], obtained by taking intersections:

[[math]] S_N^{i_1\ldots i_k}=S_N^{i_1}\bigcap\ldots\bigcap S_N^{i_k} [[/math]]

Observe that we have the following formula:

[[math]] S_N^{i_1\ldots i_k}=\left\{\sigma\in S_N\Big|\sigma(i_1)=i_1,\ldots,\sigma(i_k)=i_k\right\} [[/math]]

The inclusion-exclusion principle tells us that we have:

[[math]] \begin{eqnarray*} &&|X_N|\\ &=&|S_N|-\sum_i|S_N^i|+\sum_{i \lt j}|S_N^i\cap S_N^j|-\ldots+(-1)^N\sum_{i_1 \lt \ldots \lt i_N}|S_N^{i_1}\cap\ldots\cap S_N^{i_N}|\\ &=&|S_N|-\sum_i|S_N^i|+\sum_{i \lt j}|S_N^{ij}|-\ldots+(-1)^N\sum_{i_1 \lt \ldots \lt i_N}|S_N^{i_1\ldots i_N}| \end{eqnarray*} [[/math]]


Thus, the probability that we are interested in is given by:

[[math]] \begin{eqnarray*} P &=&\frac{1}{N!}\left(|S_N|-\sum_i|S_N^i|+\sum_{i \lt j}|S_N^{ij}|-\ldots+(-1)^N\sum_{i_1 \lt \ldots \lt i_N}|S_N^{i_1\ldots i_N}|\right)\\ &=&\frac{1}{N!}\sum_{k=0}^N(-1)^k\sum_{i_1 \lt \ldots \lt i_k}|S_N^{i_1\ldots i_k}|\\ &=&\frac{1}{N!}\sum_{k=0}^N(-1)^k\sum_{i_1 \lt \ldots \lt i_k}(N-k)!\\ &=&\frac{1}{N!}\sum_{k=0}^N(-1)^k\binom{N}{k}(N-k)!\\ &=&\sum_{k=0}^N\frac{(-1)^k}{k!}\\ &=&1-\frac{1}{1!}+\frac{1}{2!}-\ldots+(-1)^{N-1}\frac{1}{(N-1)!}+(-1)^N\frac{1}{N!} \end{eqnarray*} [[/math]]


Since on the right we have the expansion of [math]\frac{1}{e}[/math], we obtain:

[[math]] P\simeq\frac{1}{e} [[/math]]

Thus, we are led to the conclusion in the statement.
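The inclusion-exclusion sum above is easy to test numerically; the following Python sketch, with helper names of our own, compares it against a brute-force count of derangements, and against [math]1/e[/math].

```python
# Sketch (helper names ours): the inclusion-exclusion sum from the proof,
# compared with a brute-force count of derangements over S_N, and with 1/e.
import math
from itertools import permutations

def derangement_probability(N):
    # P = sum_{k=0}^N (-1)^k / k!
    return sum((-1) ** k / math.factorial(k) for k in range(N + 1))

N = 7
brute = sum(1 for s in permutations(range(N))
            if all(s[i] != i for i in range(N)))
assert abs(brute / math.factorial(N) - derangement_probability(N)) < 1e-12
assert abs(derangement_probability(N) - 1 / math.e) < 1e-4
```

Already at [math]N=7[/math] the alternating sum agrees with [math]1/e[/math] to four decimal places, the error being of order [math]1/(N+1)![/math].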

The above result is something remarkable, and there are many versions and generalizations of it. We will discuss this gradually, in what follows, all this being key material. To start with, in terms of characters, the above result reformulates as follows:

Theorem

For the symmetric group, the probability for the main character

[[math]] \chi:S_N\to\mathbb N [[/math]]
to vanish is given by the following formula:

[[math]] P(\chi=0)=1-\frac{1}{1!}+\frac{1}{2!}-\ldots+(-1)^{N-1}\frac{1}{(N-1)!}+(-1)^N\frac{1}{N!} [[/math]]
Thus we have the following asymptotic formula, in the [math]N\to\infty[/math] limit,

[[math]] P(\chi=0)\simeq\frac{1}{e} [[/math]]
where [math]e=2.7182\ldots[/math] as usual.


Show Proof

This follows indeed by combining Proposition 11.4, which tells us that [math]\chi[/math] counts the number of fixed points, with Theorem 11.7.

Let us discuss now, more generally, what happens when counting permutations having exactly [math]k[/math] fixed points. The result here, extending Theorem 11.7, is as follows:

Theorem

The probability for a random permutation [math]\sigma\in S_N[/math] to have exactly [math]k[/math] fixed points, with [math]k\in\mathbb N[/math], is given by the following formula:

[[math]] P=\frac{1}{k!}\left(1-\frac{1}{1!}+\frac{1}{2!}-\ldots+(-1)^{N-k}\frac{1}{(N-k)!}\right) [[/math]]
Thus we have the following approximation formula,

[[math]] P\simeq\frac{1}{ek!} [[/math]]
in the [math]N\to\infty[/math] limit.


Show Proof

We already know, from Theorem 11.7, that this formula holds at [math]k=0[/math]. In the general case now, we have to count the permutations [math]\sigma\in S_N[/math] having exactly [math]k[/math] fixed points. Since having such a permutation amounts to choosing [math]k[/math] points among [math]1,\ldots,N[/math], and then permuting the [math]N-k[/math] points left, without fixed points allowed, we have:

[[math]] \begin{eqnarray*} \#\left\{\sigma\in S_N\Big|\chi(\sigma)=k\right\} &=&\binom{N}{k}\#\left\{\sigma\in S_{N-k}\Big|\chi(\sigma)=0\right\}\\ &=&\frac{N!}{k!(N-k)!}\#\left\{\sigma\in S_{N-k}\Big|\chi(\sigma)=0\right\}\\ &=&N!\times\frac{1}{k!}\times\frac{\#\left\{\sigma\in S_{N-k}\Big|\chi(\sigma)=0\right\}}{(N-k)!} \end{eqnarray*} [[/math]]


Now by dividing everything by [math]N![/math], we obtain from this the following formula:

[[math]] \frac{\#\left\{\sigma\in S_N\Big|\chi(\sigma)=k\right\}}{N!}=\frac{1}{k!}\times\frac{\#\left\{\sigma\in S_{N-k}\Big|\chi(\sigma)=0\right\}}{(N-k)!} [[/math]]

By using now the computation at [math]k=0[/math], that we already have, from Theorem 11.7, it follows that with [math]N\to\infty[/math] we have the following estimate:

[[math]] \begin{eqnarray*} P(\chi=k) &\simeq&\frac{1}{k!}\cdot P(\chi=0)\\ &\simeq&\frac{1}{k!}\cdot\frac{1}{e} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
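The counting argument above, namely "choose the [math]k[/math] fixed points, then derange the rest", can be verified by brute force, as in the following Python sketch, with helper names of our own.

```python
# Sketch (helper names ours): the number of permutations of S_N with exactly
# k fixed points is C(N,k) times the number of derangements of N-k points;
# we tally this by brute force and compare with the asymptotic 1/(e k!).
import math
from itertools import permutations
from collections import Counter

N = 7
counts = Counter(sum(1 for i in range(N) if s[i] == i)
                 for s in permutations(range(N)))
for k in range(N + 1):
    # exact count: choose the k fixed points, then derange the rest
    derangements = sum((-1) ** j * math.comb(N - k, j) * math.factorial(N - k - j)
                       for j in range(N - k + 1))
    assert counts.get(k, 0) == math.comb(N, k) * derangements
    # asymptotic comparison, with a loose tolerance at this small N
    assert abs(counts.get(k, 0) / math.factorial(N)
               - 1 / (math.e * math.factorial(k))) < 0.05
```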

As before with derangements, we can reformulate what we found in terms of the main character, and we obtain in this way the following statement:

Theorem

For the symmetric group, the distribution of the main character

[[math]] \chi:S_N\to\mathbb N [[/math]]
is given by the following formula:

[[math]] P(\chi=k)=\frac{1}{k!}\left(1-\frac{1}{1!}+\frac{1}{2!}-\ldots+(-1)^{N-k}\frac{1}{(N-k)!}\right) [[/math]]
Thus we have the following asymptotic formula, in the [math]N\to\infty[/math] limit,

[[math]] P(\chi=k)\simeq\frac{1}{ek!} [[/math]]
with [math]e=2.7182\ldots[/math] being the usual constant from analysis.


Show Proof

This follows indeed by combining Proposition 11.4, which tells us that [math]\chi[/math] counts the number of fixed points, with Theorem 11.9.

11b. Poisson limits

In order to best interpret the above results, and get some advanced insight into the structure of [math]S_N[/math], we will need some probability theory, coming as the “discrete” counterpart of the theory developed in chapter 6, for the Gaussian laws. We first have:

Definition

The Poisson law of parameter [math]1[/math] is the following measure,

[[math]] p_1=\frac{1}{e}\sum_k\frac{\delta_k}{k!} [[/math]]
and the Poisson law of parameter [math]t \gt 0[/math] is the following measure,

[[math]] p_t=e^{-t}\sum_k\frac{t^k}{k!}\,\delta_k [[/math]]
with the letter “p” standing for Poisson.

Observe that these laws have indeed mass 1, as they should, due to the following well-known formula, which is the foundational formula of calculus:

[[math]] e^t=\sum_k\frac{t^k}{k!} [[/math]]

We will see in a moment why these measures appear a bit everywhere, in discrete contexts, the reasons behind this coming from the Poisson Limit Theorem (PLT). Let us first develop some general theory. We first have:

Theorem

We have the following formula, for any [math]s,t \gt 0[/math],

[[math]] p_s*p_t=p_{s+t} [[/math]]
so the Poisson laws form a convolution semigroup.


Show Proof

We know that the convolution of Dirac masses is given by [math]\delta_k*\delta_l=\delta_{k+l}[/math], and by using this formula and the binomial formula, we obtain:

[[math]] \begin{eqnarray*} p_s*p_t &=&e^{-s}\sum_k\frac{s^k}{k!}\,\delta_k*e^{-t}\sum_l\frac{t^l}{l!}\,\delta_l\\ &=&e^{-s-t}\sum_{k,l}\frac{s^kt^l}{k!l!}\delta_{k+l}\\ &=&e^{-s-t}\sum_n\delta_n\sum_{k+l=n}\frac{s^kt^l}{k!l!}\\ &=&e^{-s-t}\sum_n\frac{\delta_n}{n!}\sum_{k+l=n}\frac{n!}{k!l!}s^kt^l\\ &=&e^{-s-t}\sum_n\frac{(s+t)^n}{n!}\,\delta_n\\ &=&p_{s+t} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
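The semigroup formula [math]p_s*p_t=p_{s+t}[/math] can also be checked numerically, by convolving truncated mass functions, as in the following Python sketch, with helper names of our own.

```python
# Numerical sketch of p_s * p_t = p_{s+t} (helper names ours): convolve two
# truncated Poisson mass functions and compare with the Poisson(s+t) weights.
import math

def poisson_pmf(t, kmax):
    return [math.exp(-t) * t ** k / math.factorial(k) for k in range(kmax + 1)]

def convolve(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

s, t, kmax = 0.7, 1.3, 40
conv = convolve(poisson_pmf(s, kmax), poisson_pmf(t, kmax))
target = poisson_pmf(s + t, kmax)
for k in range(20):
    assert abs(conv[k] - target[k]) < 1e-9
```

For [math]k[/math] well below the truncation cutoff, the convolution weights agree with the [math]p_{s+t}[/math] weights term by term, exactly as in the binomial computation above.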

We have as well the following result:

Theorem

The Poisson laws appear as exponentials

[[math]] p_t=\sum_k\frac{t^k(\delta_1-\delta_0)^{*k}}{k!} [[/math]]
with respect to the convolution of measures [math]*[/math].


Show Proof

By using the binomial formula, the measure on the right is:

[[math]] \begin{eqnarray*} \mu &=&\sum_k\frac{t^k}{k!}\sum_{p+q=k}(-1)^q\frac{k!}{p!q!}\delta_p\\ &=&\sum_kt^k\sum_{p+q=k}(-1)^q\frac{\delta_p}{p!q!}\\ &=&\sum_p\frac{t^p\delta_p}{p!}\sum_q\frac{(-1)^q}{q!}\\ &=&\frac{1}{e}\sum_p\frac{t^p\delta_p}{p!}\\ &=&p_t \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.

As in the continuous case, for the normal laws, our main tool for dealing with the Poisson laws will be the Fourier transform. The formula here is as follows:

Theorem

The Fourier transform of [math]p_t[/math] is given by

[[math]] F_{p_t}(x)=\exp\left((e^{ix}-1)t\right) [[/math]]
for any [math]t \gt 0[/math].


Show Proof

We know that the Fourier transform of a variable [math]f[/math] is given, by definition, by the formula [math]F_f(x)=\mathbb E(e^{ixf})[/math]. We therefore obtain the following formula:

[[math]] \begin{eqnarray*} F_{p_t}(x) &=&e^{-t}\sum_k\frac{t^k}{k!}F_{\delta_k}(x)\\ &=&e^{-t}\sum_k\frac{t^k}{k!}\,e^{ikx}\\ &=&e^{-t}\sum_k\frac{(e^{ix}t)^k}{k!}\\ &=&\exp(-t)\exp(e^{ix}t)\\ &=&\exp\left((e^{ix}-1)t\right) \end{eqnarray*} [[/math]]


Thus, we have reached the formula in the statement.

Observe that we obtain in this way another proof for the convolution semigroup property of the Poisson laws, that we established above, by using the fact, that we know from chapter 6, that the logarithm of the Fourier transform linearizes the convolution.


We can now establish the Poisson Limit Theorem (PLT), as follows:

Theorem

We have the following convergence, in moments,

[[math]] \left(\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\delta_1\right)^{*n}\to p_t [[/math]]
for any [math]t \gt 0[/math].


Show Proof

Let us denote by [math]\mu_n[/math] the measure under the convolution sign:

[[math]] \mu_n=\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\delta_1 [[/math]]

We have then the following computation:

[[math]] \begin{eqnarray*} F_{\delta_r}(x)=e^{irx} &\implies&F_{\mu_n}(x)=\left(1-\frac{t}{n}\right)+\frac{t}{n}e^{ix}\\ &\implies&F_{\mu_n^{*n}}(x)=\left(\left(1-\frac{t}{n}\right)+\frac{t}{n}e^{ix}\right)^n\\ &\implies&F_{\mu_n^{*n}}(x)=\left(1+\frac{(e^{ix}-1)t}{n}\right)^n\\ &\implies&F_{\mu_n^{*n}}(x)\to\exp\left((e^{ix}-1)t\right) \end{eqnarray*} [[/math]]


Thus, we obtain the Fourier transform of [math]p_t[/math], as desired.
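The convergence in the PLT can be illustrated numerically: the [math]n[/math]-fold convolution in the statement is the binomial law [math]B(n,t/n)[/math], whose weights can be compared directly with the Poisson weights. The following Python sketch, with helper names of our own, does this.

```python
# Numerical sketch of the PLT (helper names ours): the n-fold convolution of
# (1-t/n) delta_0 + (t/n) delta_1 is the binomial law B(n, t/n), whose first
# weights approach the Poisson(t) mass function as n grows.
import math

def binomial_weights(t, n, kmax):
    p = t / n
    return [math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(kmax + 1)]

t, n = 1.5, 2000
w = binomial_weights(t, n, 10)
for k in range(10):
    poisson = math.exp(-t) * t ** k / math.factorial(k)
    assert abs(w[k] - poisson) < 5e-3
```

By Le Cam's inequality the total variation distance here is at most [math]t^2/n[/math], which explains the tight agreement already at [math]n=2000[/math].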

There are of course many other things that can be said about the PLT, including examples and illustrations, and more technical results regarding the convergence, and we refer here to any standard probability book, such as Feller [1] or Durrett [2]. In what follows, we will be rather doing more combinatorics. To start with, we have:

Theorem

The moments of [math]p_1[/math] are the Bell numbers,

[[math]] M_k(p_1)=|P(k)| [[/math]]
where [math]P(k)[/math] is the set of partitions of [math]\{1,\ldots,k\}[/math].


Show Proof

The moments of [math]p_1[/math] are given by the following formula:

[[math]] M_k=\frac{1}{e}\sum_s\frac{s^k}{s!} [[/math]]

We have the following recurrence formula for these moments:

[[math]] \begin{eqnarray*} M_{k+1} &=&\frac{1}{e}\sum_s\frac{(s+1)^{k+1}}{(s+1)!}\\ &=&\frac{1}{e}\sum_s\frac{(s+1)^k}{s!}\\ &=&\frac{1}{e}\sum_s\frac{s^k}{s!}\left(1+\frac{1}{s}\right)^k\\ &=&\frac{1}{e}\sum_s\frac{s^k}{s!}\sum_r\binom{k}{r}s^{-r}\\ &=&\sum_r\binom{k}{r}\cdot\frac{1}{e}\sum_s\frac{s^{k-r}}{s!}\\ &=&\sum_r\binom{k}{r}M_{k-r} \end{eqnarray*} [[/math]]


Let us try now to find a recurrence for the Bell numbers:

[[math]] B_k=|P(k)| [[/math]]

A partition of [math]\{1,\ldots,k+1\}[/math] appears by choosing [math]r[/math] neighbors for [math]1[/math], among the [math]k[/math] numbers available, and then partitioning the [math]k-r[/math] elements left. Thus, we have:

[[math]] B_{k+1}=\sum_r\binom{k}{r}B_{k-r} [[/math]]

Thus, the numbers [math]M_k[/math] satisfy the same recurrence as the numbers [math]B_k[/math]. Regarding now the initial values, for the moments of [math]p_1[/math], these are:

[[math]] M_0=1\quad,\quad M_1=1 [[/math]]

Indeed, the formula [math]M_0=1[/math] is clear, and the formula [math]M_1=1[/math] follows from:

[[math]] \begin{eqnarray*} M_1 &=&\frac{1}{e}\sum_s\frac{s}{s!}\\ &=&\frac{1}{e}\sum_s\frac{1}{(s-1)!}\\ &=&\frac{1}{e}\times e\\ &=&1 \end{eqnarray*} [[/math]]


Now by using the above recurrence we obtain from this:

[[math]] M_2=\sum_r\binom{1}{r}M_{1-r}=M_1+M_0=1+1=2 [[/math]]

Thus, we can say that the initial values for the moments are:

[[math]] M_1=1\quad,\quad M_2=2 [[/math]]

As for the Bell numbers, here the initial values are as follows:

[[math]] B_1=1\quad,\quad B_2=2 [[/math]]

Thus the initial values coincide, and so these numbers are equal, as stated.
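Both sides of the theorem are easy to compute numerically; the following Python sketch, with helper names of our own, compares the truncated moment series of [math]p_1[/math] with the Bell numbers produced by the recurrence from the proof.

```python
# Numerical sketch (helper names ours): the truncated series for the moments
# of p_1, against the Bell numbers computed via the recurrence in the proof,
# B_{k+1} = sum_r C(k,r) B_{k-r}.
import math

def poisson1_moment(k, smax=60):
    return sum(s ** k / math.factorial(s) for s in range(smax)) / math.e

def bell(n):
    B = [1]  # B_0 = 1, the empty partition
    for k in range(n):
        B.append(sum(math.comb(k, r) * B[k - r] for r in range(k + 1)))
    return B[n]

assert [bell(n) for n in range(6)] == [1, 1, 2, 5, 15, 52]
for k in range(8):
    assert abs(poisson1_moment(k) - bell(k)) < 1e-6
```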

More generally, we have the following result:

Theorem

The moments of [math]p_t[/math] are given by

[[math]] M_k(p_t)=\sum_{\pi\in P(k)}t^{|\pi|} [[/math]]
where [math]|.|[/math] is the number of blocks.


Show Proof

Observe first that the formula in the statement generalizes the one in Theorem 11.16, because at [math]t=1[/math] we obtain, as we should:

[[math]] \begin{eqnarray*} M_k(p_1) &=&\sum_{\pi\in P(k)}1^{|\pi|}\\ &=&|P(k)|\\ &=&B_k \end{eqnarray*} [[/math]]


In general now, the moments of [math]p_t[/math] with [math]t \gt 0[/math] are given by:

[[math]] M_k=e^{-t}\sum_s\frac{t^ss^k}{s!} [[/math]]

We have the following recurrence formula for the moments of [math]p_t[/math]:

[[math]] \begin{eqnarray*} M_{k+1} &=&e^{-t}\sum_s\frac{t^{s+1}(s+1)^{k+1}}{(s+1)!}\\ &=&e^{-t}\sum_s\frac{t^{s+1}(s+1)^k}{s!}\\ &=&e^{-t}\sum_s\frac{t^{s+1}s^k}{s!}\left(1+\frac{1}{s}\right)^k\\ &=&e^{-t}\sum_s\frac{t^{s+1}s^k}{s!}\sum_r\binom{k}{r}s^{-r}\\ &=&\sum_r\binom{k}{r}\cdot e^{-t}\sum_s\frac{t^{s+1}s^{k-r}}{s!}\\ &=&t\sum_r\binom{k}{r}M_{k-r} \end{eqnarray*} [[/math]]


As for the initial values of these moments, these are as follows:

[[math]] M_1=t\quad,\quad M_2=t+t^2 [[/math]]

On the other hand, consider the numbers in the statement, namely:

[[math]] S_k=\sum_{\pi\in P(k)}t^{|\pi|} [[/math]]

Since a partition of [math]\{1,\ldots,k+1\}[/math] appears by choosing [math]r[/math] neighbors for [math]1[/math], among the [math]k[/math] numbers available, and then partitioning the [math]k-r[/math] elements left, we have:

[[math]] S_{k+1}=t\sum_r\binom{k}{r}S_{k-r} [[/math]]

As for the initial values of these numbers, these are as follows:

[[math]] S_1=t\quad,\quad S_2=t+t^2 [[/math]]

Thus the initial values coincide, so these numbers are the moments, as stated.
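The partition sum in the statement can be organized by the number of blocks: it equals [math]\sum_bS_{kb}t^b[/math], with [math]S_{kb}[/math] the Stirling numbers of the second kind. The following Python sketch, with helper names of our own, checks this against the truncated moment series of [math]p_t[/math].

```python
# Numerical sketch (helper names ours): sum_{pi in P(k)} t^{|pi|} equals
# sum_b S(k,b) t^b, with S(k,b) the Stirling numbers of the second kind,
# and this matches the truncated series for the moments of p_t.
import math

def stirling2(k, b):
    # number of partitions of {1,...,k} into exactly b blocks
    if k == 0:
        return 1 if b == 0 else 0
    if b == 0:
        return 0
    return b * stirling2(k - 1, b) + stirling2(k - 1, b - 1)

def poisson_moment(t, k, smax=80):
    return math.exp(-t) * sum(t ** s * s ** k / math.factorial(s)
                              for s in range(smax))

t = 0.5
for k in range(7):
    rhs = sum(stirling2(k, b) * t ** b for b in range(k + 1))
    assert abs(poisson_moment(t, k) - rhs) < 1e-8
```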

Observe the analogy with the moment formulae for [math]g_t[/math] and [math]G_t[/math], discussed before. To be more precise, the moments for the main laws come from partitions, as follows:

[[math]] p_t\to P\quad,\quad g_t\to P_2\quad,\quad G_t\to\mathcal P_2 [[/math]]

We will be back later with some more conceptual explanations for these results.

11c. Truncated characters

With the above probabilistic preliminaries done, let us get back now to finite groups, and compute laws of characters. As a first piece of good news, our main result so far, namely Theorem 11.10, reformulates into something very simple, as follows:

Theorem

For the symmetric group [math]S_N\subset O_N[/math] we have

[[math]] \chi\sim p_1 [[/math]]
in the [math]N\to\infty[/math] limit.


Show Proof

This is indeed a reformulation of Theorem 11.10, which tells us that with [math]N\to\infty[/math] we have the following estimate:

[[math]] P(\chi=k)\simeq\frac{1}{ek!} [[/math]]

But, according to our definition of the Poisson laws, this tells us precisely that the asymptotic law of the main character [math]\chi[/math] is [math]p_1[/math], the Poisson law of parameter [math]1[/math], as stated.

An interesting question now is that of recovering all the Poisson laws [math]p_t[/math], by using group theory. In order to do this, let us formulate the following definition:

Definition

Given a closed subgroup [math]G\subset U_N[/math], the function

[[math]] \chi_t:G\to\mathbb C\quad,\quad \chi_t(g)=\sum_{i=1}^{[tN]}g_{ii} [[/math]]
is called the main truncated character of [math]G[/math], of parameter [math]t\in(0,1][/math].

As before with the plain characters, there is some general theory behind this definition, and we will discuss this later on, systematically, in chapters 13-16 below.


Getting back now to the symmetric groups, we first have the following result:

Proposition

Consider the symmetric group [math]S_N[/math], regarded as the permutation group of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math]. This picture provides us with an embedding

[[math]] S_N\subset O_N [[/math]]
the coordinate functions for the permutations being as follows:

[[math]] g_{ij}=\chi\left(\sigma\in S_N\Big|\sigma(j)=i\right) [[/math]]
In this picture, the truncated characters count the number of partial fixed points

[[math]] \chi_t(\sigma)=\#\left\{ i\in\{1,\ldots,[tN]\}\Big|\sigma(i)=i\right\} [[/math]]
with respect to the truncation parameter [math]t\in(0,1][/math].


Show Proof

All this is clear from definitions, with the formula for the coordinates being clear from the definition of the embedding [math]S_N\subset O_N[/math], and with the character formulae following from it, by summing over [math]i=j[/math]. To be more precise, we have:

[[math]] \begin{eqnarray*} \chi_t(\sigma) &=&\sum_{i=1}^{[tN]}\sigma_{ii}\\ &=&\sum_{i=1}^{[tN]}\delta_{\sigma(i)i}\\ &=&\#\left\{i\in\{1,\ldots,[tN]\}\Big|\sigma(i)=i\right\} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusions in the statement.

Regarding now the asymptotic laws of the truncated characters, the result here, generalizing everything that we have so far, is as follows:

Theorem

For the symmetric group [math]S_N\subset O_N[/math] we have

[[math]] \chi_t\sim p_t [[/math]]
in the [math]N\to\infty[/math] limit, for any [math]t\in(0,1][/math].


Show Proof

We already know from Theorem 11.18 that the result holds at [math]t=1[/math]. In general, the proof is similar, the idea being as follows:


(1) Consider indeed the following sets, as in the proof of Theorem 11.18, or rather as in the proof of Theorem 11.7, leading to Theorem 11.18:

[[math]] S_N^i=\left\{\sigma\in S_N\Big|\sigma(i)=i\right\} [[/math]]

The set of permutations having no fixed points among [math]1,\ldots,[tN][/math] is then:

[[math]] X_N=\left(\bigcup_{i\leq[tN]}S_N^i\right)^c [[/math]]

In order to compute now the cardinality [math]|X_N|[/math], consider as well the following sets, depending on indices [math]i_1 \lt \ldots \lt i_k[/math], obtained by taking intersections:

[[math]] S_N^{i_1\ldots i_k}=S_N^{i_1}\bigcap\ldots\bigcap S_N^{i_k} [[/math]]

As before in the proof of Theorem 11.18, we obtain by inclusion-exclusion that:

[[math]] \begin{eqnarray*} P(\chi_t=0) &=&\frac{1}{N!}\sum_{k=0}^{[tN]}(-1)^k\sum_{i_1 \lt \ldots \lt i_k\leq[tN]}|S_N^{i_1\ldots i_k}|\\ &=&\frac{1}{N!}\sum_{k=0}^{[tN]}(-1)^k\sum_{i_1 \lt \ldots \lt i_k\leq[tN]}(N-k)!\\ &=&\frac{1}{N!}\sum_{k=0}^{[tN]}(-1)^k\binom{[tN]}{k}(N-k)!\\ &=&\sum_{k=0}^{[tN]}\frac{(-1)^k}{k!}\cdot\frac{[tN]!(N-k)!}{N!([tN]-k)!} \end{eqnarray*} [[/math]]


With [math]N\to\infty[/math], we obtain from this the following estimate:

[[math]] \begin{eqnarray*} P(\chi_t=0) &\simeq&\sum_{k=0}^{[tN]}\frac{(-1)^k}{k!}\cdot t^k\\ &=&\sum_{k=0}^{[tN]}\frac{(-t)^k}{k!}\\ &\simeq&e^{-t} \end{eqnarray*} [[/math]]


(2) More generally now, by counting the permutations [math]\sigma\in S_N[/math] having exactly [math]k[/math] fixed points among [math]1,\ldots,[tN][/math], as in the proof of Theorem 11.9, our claim is that we get:

[[math]] P(\chi_t=k)\simeq\frac{t^k}{k!e^t} [[/math]]

We already know from (1) that this formula holds at [math]k=0[/math]. In the general case now, we have to count the permutations [math]\sigma\in S_N[/math] having exactly [math]k[/math] fixed points among [math]1,\ldots,[tN][/math]. Since having such a permutation amounts to choosing [math]k[/math] points among [math]1,\ldots,[tN][/math], and then permuting the [math]N-k[/math] points left, without fixed points among [math]1,\ldots,[tN][/math] allowed, we obtain the following formula, where [math]s\in(0,1][/math] is such that [math][s(N-k)]=[tN]-k[/math]:

[[math]] \begin{eqnarray*} \#\left\{\sigma\in S_N\Big|\chi_t(\sigma)=k\right\} &=&\binom{[tN]}{k}\#\left\{\sigma\in S_{N-k}\Big|\chi_s(\sigma)=0\right\}\\ &=&\frac{[tN]!}{k!([tN]-k)!}\#\left\{\sigma\in S_{N-k}\Big|\chi_s(\sigma)=0\right\}\\ &=&\frac{1}{k!}\times\frac{[tN]!(N-k)!}{([tN]-k)!}\times\frac{\#\left\{\sigma\in S_{N-k}\Big|\chi_s(\sigma)=0\right\}}{(N-k)!} \end{eqnarray*} [[/math]]


Now by dividing everything by [math]N![/math], we obtain from this the following formula:

[[math]] \frac{\#\left\{\sigma\in S_N\Big|\chi_t(\sigma)=k\right\}}{N!}=\frac{1}{k!}\times\frac{[tN]!(N-k)!}{N!([tN]-k)!}\times\frac{\#\left\{\sigma\in S_{N-k}\Big|\chi_s(\sigma)=0\right\}}{(N-k)!} [[/math]]

By using now the computation at [math]k=0[/math], that we already have, from (1) above, it follows that with [math]N\to\infty[/math] we have the following estimate:

[[math]] \begin{eqnarray*} P(\chi_t=k) &\simeq&\frac{1}{k!}\times\frac{[tN]!(N-k)!}{N!([tN]-k)!}\cdot P(\chi_s=0)\\ &\simeq&\frac{t^k}{k!}\cdot P(\chi_s=0)\\ &\simeq&\frac{t^k}{k!}\cdot\frac{1}{e^s} \end{eqnarray*} [[/math]]


Now recall that the parameter [math]s\in(0,1][/math] was chosen in the above such that:

[[math]] [s(N-k)]=[tN]-k [[/math]]

Thus in the [math]N\to\infty[/math] limit we have [math]s=t[/math], and so we obtain, as claimed:

[[math]] P(\chi_t=k)\simeq\frac{t^k}{k!}\cdot\frac{1}{e^t} [[/math]]

It follows that we obtain in the limit a Poisson law of parameter [math]t[/math], as stated.
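This asymptotic behavior can be observed in simulation: sample random permutations of a moderately large [math]N[/math], count fixed points among the first [math][tN][/math] coordinates, and compare with the Poisson weights. The following Python sketch, with helper names of our own, does this.

```python
# Monte Carlo sketch (helper names ours): fixed points of a uniformly random
# permutation among the first [tN] coordinates, compared with the Poisson(t)
# weights, at a moderately large N.
import math
import random
from collections import Counter

random.seed(0)
N, t, trials = 200, 0.5, 20000
cut = int(t * N)
counts = Counter()
for _ in range(trials):
    sigma = list(range(N))
    random.shuffle(sigma)
    counts[sum(1 for i in range(cut) if sigma[i] == i)] += 1

for k in range(4):
    empirical = counts.get(k, 0) / trials
    poisson = math.exp(-t) * t ** k / math.factorial(k)
    assert abs(empirical - poisson) < 0.02  # loose Monte Carlo tolerance
```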

11d. Further results

All the above is quite interesting, and is at the core of the theory that we want to develop, so let us further build on all this, with a number of more specialized results on the subject, which will be sometimes research-grade. We will be following [3].


To start with, let us first present a new, instructive proof for the above character results. The point indeed is that we can approach the problems as well directly, by integrating over [math]S_N[/math], and in order to do so, we can use the following result:

Theorem

Consider the symmetric group [math]S_N[/math], with its standard coordinates:

[[math]] g_{ij}=\chi\left(\sigma\in S_N\Big|\sigma(j)=i\right) [[/math]]
The products of these coordinates span the algebra [math]C(S_N)[/math], and the arbitrary integrals over [math]S_N[/math] are given, modulo linearity, by the formula

[[math]] \int_{S_N}g_{i_1j_1}\ldots g_{i_kj_k}=\begin{cases} \frac{(N-|\ker i|)!}{N!}&{\rm if}\ \ker i=\ker j\\ 0&{\rm otherwise} \end{cases} [[/math]]
where [math]\ker i[/math] denotes as usual the partition of [math]\{1,\ldots,k\}[/math] whose blocks collect the equal indices of [math]i[/math], and where [math]|.|[/math] denotes the number of blocks.


Show Proof

The first assertion follows from the Stone-Weierstrass theorem, because the standard coordinates [math]g_{ij}[/math] separate the points of [math]S_N[/math], and so the algebra [math] \lt g_{ij} \gt [/math] that they generate must be equal to the whole function algebra [math]C(S_N)[/math]:

[[math]] \lt g_{ij} \gt =C(S_N) [[/math]]

Regarding now the second assertion, according to the definition of the matrix coordinates [math]g_{ij}[/math], the integrals in the statement are given by:

[[math]] \int_{S_N}g_{i_1j_1}\ldots g_{i_kj_k}=\frac{1}{N!}\#\left\{\sigma\in S_N\Big|\sigma(j_1)=i_1,\ldots,\sigma(j_k)=i_k\right\} [[/math]]

Now observe that the existence of [math]\sigma\in S_N[/math] as above requires:

[[math]] i_m=i_n\iff j_m=j_n [[/math]]

Thus, the above integral vanishes when:

[[math]] \ker i\neq\ker j [[/math]]

Regarding now the case [math]\ker i=\ker j[/math], if we denote by [math]b\in\{1,\ldots,k\}[/math] the number of blocks of this partition [math]\ker i=\ker j[/math], we have [math]N-b[/math] points to be sent bijectively to [math]N-b[/math] points, and so [math](N-b)![/math] solutions, and the integral is [math]\frac{(N-b)!}{N!}[/math], as claimed.
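The integration formula can be verified directly for small [math]N,k[/math], by averaging over all of [math]S_N[/math]; the following Python sketch, with helper names of our own, does this exhaustively.

```python
# Brute-force sketch of the integration formula (helper names ours): for small
# N, k, the average of g_{i_1 j_1} ... g_{i_k j_k} over S_N is (N-b)!/N! when
# ker i = ker j has b blocks, and 0 otherwise.
import math
from itertools import permutations, product

def ker(idx):
    # canonical form of ker: label each entry by the position of its first occurrence
    first = {}
    return tuple(first.setdefault(v, p) for p, v in enumerate(idx))

N, k = 4, 3
perms = list(permutations(range(N)))
for i in product(range(N), repeat=k):
    for j in product(range(N), repeat=k):
        integral = sum(1 for s in perms
                       if all(s[j[m]] == i[m] for m in range(k))) / math.factorial(N)
        if ker(i) == ker(j):
            expected = math.factorial(N - len(set(i))) / math.factorial(N)
        else:
            expected = 0.0
        assert abs(integral - expected) < 1e-12
```

Here [math]g_{ij}(\sigma)=1[/math] precisely when [math]\sigma(j)=i[/math], so the product of coordinates is the indicator of the constraints [math]\sigma(j_m)=i_m[/math], exactly as in the proof.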

As an illustration for the above formula, we can recover the computation of the asymptotic laws of the truncated characters [math]\chi_t[/math]. We have indeed:

Theorem

For the symmetric group [math]S_N\subset O_N[/math], regarded as a compact group of matrices via the standard permutation matrices, the truncated character

[[math]] \chi_t(g)=\sum_{i=1}^{[tN]}g_{ii} [[/math]]
counts the number of fixed points among [math]\{1,\ldots,[tN]\}[/math], and its law with respect to the counting measure becomes, with [math]N\to\infty[/math], a Poisson law of parameter [math]t[/math].


Proof

The first assertion comes from the following formula:

[[math]] g_{ij}=\chi\left(\sigma\Big|\sigma(j)=i\right) [[/math]]

Regarding now the second assertion, we can use here the integration formula in Theorem 11.22. With [math]S_{kb}[/math] being the Stirling numbers, counting the partitions of [math]\{1,\ldots,k\}[/math] having exactly [math]b[/math] blocks, we have indeed the following formula:

[[math]] \begin{eqnarray*} \int_{S_N}\chi_t^k &=&\sum_{i_1\ldots i_k=1}^{[tN]}\int_{S_N}g_{i_1i_1}\ldots g_{i_ki_k}\\ &=&\sum_{\pi\in P(k)}\frac{[tN]!}{([tN]-|\pi|)!}\cdot\frac{(N-|\pi|)!}{N!}\\ &=&\sum_{b=1}^{[tN]}\frac{[tN]!}{([tN]-b)!}\cdot\frac{(N-b)!}{N!}\cdot S_{kb} \end{eqnarray*} [[/math]]


In particular with [math]N\to\infty[/math] we obtain the following formula:

[[math]] \lim_{N\to\infty}\int_{S_N}\chi_t^k=\sum_{b=1}^kS_{kb}t^b [[/math]]

But this is the [math]k[/math]-th moment of the Poisson law [math]p_t[/math], and so we are done.

Summarizing, we have so far a good understanding of our main result, relating the characters of the symmetric group [math]S_N[/math] to the Poisson laws of parameter [math]t\in(0,1][/math], via two different methods. We will see a third proof in a moment, and we will come back to this in chapters 13-16 as well, with a fourth method.


As another result now regarding [math]S_N[/math], here is a useful related formula:

Theorem

We have the law formula

[[math]] {\rm law}(g_{11}+\ldots +g_{ss})=\frac{s!}{N!}\sum_{p=0}^s\frac{(N-p)!}{(s-p)!} \cdot\frac{\left(\delta_1-\delta_0\right)^{*p}}{p!} [[/math]]

where [math]g_{ij}[/math] are the standard coordinates of [math]S_N\subset O_N[/math].


Proof

We have the following moment formula, where [math]m_f[/math] is the number of permutations of [math]\{1,\ldots ,N\}[/math] having exactly [math]f[/math] fixed points in the set [math]\{1,\ldots ,s\}[/math]:

[[math]] \int_{S_N}(g_{11}+\ldots +g_{ss})^k=\frac{1}{N!}\sum_{f=0}^sm_ff^k [[/math]]

Thus the law in the statement, say [math]\nu_{sN}[/math], is the following average of Dirac masses:

[[math]] \nu_{sN}=\frac{1}{N!}\sum_{f=0}^s m_{f}\,\delta_f [[/math]]

Now observe that the permutations contributing to [math]m_f[/math] are obtained by choosing [math]f[/math] points in the set [math]\{1,\ldots ,s\}[/math], then by permuting the remaining [math]N-f[/math] points of [math]\{1,\ldots ,N\}[/math] in such a way that there is no fixed point in [math]\{1,\ldots,s\}[/math]. But these latter permutations are counted by inclusion-exclusion, as follows: we start with all permutations, we subtract those having one fixed point, we add back those having two fixed points, and so on. We obtain in this way:

[[math]] \begin{eqnarray*} \nu_{sN} &=&\frac{1}{N!}\sum_{f=0}^s\begin{pmatrix}s\\ f\end{pmatrix}\left(\sum_{k=0}^{s-f}(-1)^k \begin{pmatrix}s-f\\ k\end{pmatrix}(N-f-k)!\right)\,\delta_f\\ &=&\sum_{f=0}^s\sum_{k=0}^{s-f}(-1)^k\frac{1}{N!}\cdot \frac{s!}{f!(s-f)!}\cdot\frac{(s-f)!(N-f-k)!}{k!(s-f-k)!}\,\delta_f\\ &=&\frac{s!}{N!}\sum_{f=0}^s\sum_{k=0}^{s-f}\frac{(-1)^k(N-f-k)!}{f!k!(s-f-k)!}\,\delta_f \end{eqnarray*} [[/math]]


We can proceed as follows, by using the new index [math]p=f+k[/math]:

[[math]] \begin{eqnarray*} \nu_{sN} &=&\frac{s!}{N!}\sum_{p=0}^s\sum_{k=0}^{p}\frac{(-1)^k (N-p)!}{(p-k)!k!(s-p)!}\,\delta_{p-k}\\ &=&\frac{s!}{N!}\sum_{p=0}^s\frac{(N-p)!}{(s-p)!p!} \sum_{k=0}^{p}(-1)^k\begin{pmatrix}p\\ k\end{pmatrix}\,\delta_{p-k}\\ &=&\frac{s!}{N!}\sum_{p=0}^s\frac{(N-p)!}{(s-p)!}\cdot \frac{\left(\delta_1-\delta_0\right)^{*p}}{p!} \end{eqnarray*} [[/math]]


Here [math]*[/math] denotes the convolution of real measures, and the assertion follows.
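The above formula being exact at any [math]N\in\mathbb N[/math], it can be tested against direct enumeration, at small [math]N,s[/math]. A minimal Python sketch, for illustration only, with helper names being ours:

```python
# Compare the exact law formula with direct fixed-point counting over S_N.
from collections import Counter
from itertools import permutations
from math import comb, factorial

def empirical_law(N, s):
    """Distribution of the number of fixed points among {1,...,s}, over S_N."""
    counts = Counter(sum(1 for x in range(s) if sigma[x] == x)
                     for sigma in permutations(range(N)))
    return {f: c / factorial(N) for f, c in counts.items()}

def formula_law(N, s):
    """Weights of (s!/N!) sum_p (N-p)!/(s-p)! (d_1 - d_0)^{*p} / p!."""
    w = {f: 0.0 for f in range(s + 1)}
    for p in range(s + 1):
        coeff = (factorial(s) / factorial(N)
                 * factorial(N - p) / (factorial(s - p) * factorial(p)))
        # expand (d_1 - d_0)^{*p} = sum_m C(p,m) (-1)^(p-m) d_m
        for m in range(p + 1):
            w[m] += coeff * comb(p, m) * (-1) ** (p - m)
    return w

N, s = 6, 3
emp = empirical_law(N, s)
law = formula_law(N, s)
for f in range(s + 1):
    assert abs(law[f] - emp.get(f, 0.0)) < 1e-12
```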

Observe that the above formula is finer than most of our previous formulae, which were asymptotic, because it is valid at any [math]N\in\mathbb N[/math]. We can use this formula as follows:

Theorem

Let [math]g_{ij}[/math] be the standard coordinates of [math]C(S_N)[/math].

  • [math]g_{11}+\ldots +g_{ss}[/math] with [math]s=o(N)[/math] is, with [math]N\to\infty[/math], a projection of trace [math]s/N[/math].
  • [math]g_{11}+\ldots +g_{ss}[/math] with [math]s=tN+o(N)[/math] is, with [math]N\to\infty[/math], Poisson of parameter [math]t[/math].


Proof

We can indeed use the formula in Theorem 11.24, as follows:


(1) With [math]s[/math] fixed and [math]N\to\infty[/math] we have the following estimate:

[[math]] \begin{eqnarray*} &&{\rm law}(g_{11}+\ldots +g_{ss})\\ &=&\sum_{p=0}^s\frac{(N-p)!}{N!}\cdot\frac{s!}{(s-p)!} \cdot\frac{\left(\delta_1-\delta_0\right)^{*p}}{p!}\\ &=&\delta_0+\frac{s}{N}\,(\delta_1-\delta_0)+O(N^{-2}) \end{eqnarray*} [[/math]]


But the law on the right is that of a projection of trace [math]s/N[/math], as desired.


(2) We have a law formula of the following type:

[[math]] {\rm law}(g_{11}+\ldots +g_{ss})= \sum_{p=0}^sc_p\cdot\frac{(\delta_1-\delta_0)^{*p}}{p!} [[/math]]

The coefficients [math]c_p[/math] can be estimated by using the Stirling formula, as follows:

[[math]] \begin{eqnarray*} c_p &=&\frac{(tN)!}{N!}\cdot\frac{(N-p)!}{(tN-p)!}\\ &\simeq&\frac{(tN)^{tN}}{N^N}\cdot\frac{(N-p)^{N-p}}{(tN-p)^{tN-p}}\\ &=&\left(\frac{tN}{tN-p}\right)^{tN-p} \left( \frac{N-p}{N}\right)^{N-p}\left( \frac{tN}{N}\right)^p \end{eqnarray*} [[/math]]


But the last expression can be estimated by using the definition of the exponentials, and we obtain in this way the following estimate:

[[math]] c_p \simeq e^pe^{-p}t^p =t^p [[/math]]

We can now compute the Fourier transform with respect to a variable [math]y[/math]:

[[math]] \begin{eqnarray*} {\mathcal F}\left( {\rm law}(g_{11}+\ldots +g_{ss})\right) &\simeq&\sum_{p=0}^st^p\cdot\frac{(e^y-1)^p}{p!}\\ &=&e^{t(e^y-1)} \end{eqnarray*} [[/math]]


But this is precisely the Fourier transform of the Poisson law [math]p_t[/math], as explained in Theorem 11.14, and this gives the second assertion.
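As a quick numerical illustration of the estimate [math]c_p\simeq t^p[/math] used above, one can compute [math]c_p[/math] as a telescoping product of [math]p[/math] ratios, which avoids manipulating huge factorials:

```python
# Check numerically that c_p = (tN)!/N! * (N-p)!/(tN-p)! tends to t^p.

def c_p(N, tN, p):
    """(tN)!/N! * (N-p)!/(tN-p)!, written as a product of p ratios."""
    result = 1.0
    for m in range(p):
        result *= (tN - m) / (N - m)
    return result

t, N = 0.5, 1600
for p in range(4):
    assert abs(c_p(N, N // 2, p) - t**p) < 1e-3
```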

Let us discuss now, as an instructive variation of the above, the computation for the alternating group [math]A_N\subset S_N[/math]. We will see that with [math]N\to\infty[/math] nothing changes, this being part of a more general phenomenon, regarding more general reflection groups and their subgroups, that we will further discuss in the next chapter.


Let us start with some algebraic considerations. We first have:

Proposition

For the symmetric group, regarded as the group of permutations of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math], and so as a group of permutation matrices,

[[math]] S_N\subset O_N [[/math]]
the determinant is the signature. The subgroup [math]A_N\subset S_N[/math] given by

[[math]] A_N=S_N\cap SO_N [[/math]]
and called alternating group, consists of the even permutations.


Proof

In this statement the first assertion is clear from the definition of the determinant and of the permutation matrices, and the rest is standard.

Regarding now character computations, the best here is to use an analogue of Theorem 11.22. To be more precise, we have here the following result:

Theorem

Consider the alternating group [math]A_N[/math], regarded as group of permutation matrices, with its standard coordinates:

[[math]] g_{ij}=\chi\left(\sigma\in A_N\Big|\sigma(j)=i\right) [[/math]]
The products of these coordinates span the algebra [math]C(A_N)[/math], and the arbitrary integrals over [math]A_N[/math] are given, modulo linearity, by the formula

[[math]] \int_{A_N}g_{i_1j_1}\ldots g_{i_kj_k}\simeq\begin{cases} \frac{(N-|\ker i|)!}{N!}&{\rm if}\ \ker i=\ker j\\ 0&{\rm otherwise} \end{cases} [[/math]]
with [math]N\to\infty[/math], where [math]\ker i[/math] denotes as usual the partition of [math]\{1,\ldots,k\}[/math] whose blocks collect the equal indices of [math]i[/math], and where [math]|.|[/math] denotes the number of blocks.


Proof

The first assertion follows from the Stone-Weierstrass theorem, because the standard coordinates [math]g_{ij}[/math] separate the points of [math]A_N[/math], and so the algebra [math] \lt g_{ij} \gt [/math] that they generate must be equal to the whole function algebra [math]C(A_N)[/math]:

[[math]] \lt g_{ij} \gt =C(A_N) [[/math]]

Regarding now the second assertion, according to the definition of the standard coordinates [math]g_{ij}[/math], the integrals in the statement are given by:

[[math]] \int_{A_N}g_{i_1j_1}\ldots g_{i_kj_k}=\frac{1}{N!/2}\#\left\{\sigma\in A_N\Big|\sigma(j_1)=i_1,\ldots,\sigma(j_k)=i_k\right\} [[/math]]

Now observe that the existence of [math]\sigma\in A_N[/math] as above requires:

[[math]] i_m=i_n\iff j_m=j_n [[/math]]

Thus, the above integral vanishes when the following holds:

[[math]] \ker i\neq\ker j [[/math]]

Regarding now the case [math]\ker i=\ker j[/math], if we denote by [math]b\in\{1,\ldots,k\}[/math] the number of blocks of this partition [math]\ker i=\ker j[/math], we have [math]N-b[/math] points to be sent bijectively to [math]N-b[/math] points. But when assuming [math]N \gt \gt 0[/math], and more specifically [math]N\geq k+2[/math], so that [math]N-b\geq2[/math], exactly half of the resulting permutations will be even, and so we have [math](N-b)!/2[/math] solutions. Thus, the integral is:

[[math]] \begin{eqnarray*} \int_{A_N}g_{i_1j_1}\ldots g_{i_kj_k} &=&\frac{1}{N!/2}\#\left\{\sigma\in A_N\Big|\sigma(j_1)=i_1,\ldots,\sigma(j_k)=i_k\right\}\\ &=&\frac{(N-b)!/2}{N!/2}\\ &=&\frac{(N-b)!}{N!} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
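As before, this can be tested by enumerating the even permutations, at small [math]N[/math]. A Python sketch, for illustration only, with helper names being ours, and with the equality being exact as soon as [math]N-b\geq2[/math]:

```python
# Brute-force check of the A_N integration formula, for small N.
from itertools import permutations
from math import factorial

def sign(sigma):
    """Signature of a permutation, via the number of inversions."""
    n = len(sigma)
    inv = sum(1 for a in range(n) for b in range(a + 1, n) if sigma[a] > sigma[b])
    return 1 if inv % 2 == 0 else -1

def integral_AN(N, i, j):
    """Average over A_N of the indicator of sigma(j_m) = i_m for all m."""
    even = [s for s in permutations(range(1, N + 1)) if sign(s) == 1]
    count = sum(1 for s in even
                if all(s[jm - 1] == im for im, jm in zip(i, j)))
    return count / len(even)

N = 5
# ker i = ker j, with b = 2 blocks: the integral equals (N-2)!/N! exactly,
# since N - b >= 2 here
assert abs(integral_AN(N, (1, 1, 2), (2, 2, 3)) - factorial(N - 2) / factorial(N)) < 1e-12
# ker i != ker j: the integral vanishes
assert integral_AN(N, (1, 2, 3), (1, 1, 2)) == 0.0
```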

As an illustration for the above formula, we can recover the computation of the asymptotic laws of the truncated characters [math]\chi_t[/math]. We have indeed:

Theorem

For the alternating group [math]A_N\subset O_N[/math], regarded as a compact group of matrices via the standard permutation matrices, the truncated character

[[math]] \chi_t(g)=\sum_{i=1}^{[tN]}g_{ii} [[/math]]
counts the number of fixed points among [math]\{1,\ldots,[tN]\}[/math], and its law with respect to the counting measure becomes, with [math]N\to\infty[/math], a Poisson law of parameter [math]t[/math].


Proof

The first assertion comes from the following formula:

[[math]] g_{ij}=\chi\left(\sigma\Big|\sigma(j)=i\right) [[/math]]

Regarding now the second assertion, we can use here the integration formula in Theorem 11.27. With [math]S_{kb}[/math] being the Stirling numbers, counting the partitions of [math]\{1,\ldots,k\}[/math] having exactly [math]b[/math] blocks, we have the following formula:

[[math]] \begin{eqnarray*} \int_{A_N}\chi_t^k &=&\sum_{i_1\ldots i_k=1}^{[tN]}\int_{A_N}g_{i_1i_1}\ldots g_{i_ki_k}\\ &\simeq&\sum_{\pi\in P(k)}\frac{[tN]!}{([tN]-|\pi|)!}\cdot\frac{(N-|\pi|)!}{N!}\\ &=&\sum_{b=1}^{[tN]}\frac{[tN]!}{([tN]-b)!}\cdot\frac{(N-b)!}{N!}\cdot S_{kb} \end{eqnarray*} [[/math]]


In particular with [math]N\to\infty[/math] we obtain the following formula:

[[math]] \lim_{N\to\infty}\int_{A_N}\chi_t^k=\sum_{b=1}^kS_{kb}t^b [[/math]]

But this is the [math]k[/math]-th moment of the Poisson law [math]p_t[/math], and so we are done.

Summarizing, when passing from the symmetric group [math]S_N[/math] to its subgroup [math]A_N\subset S_N[/math], in what concerns character computations, with [math]N\to\infty[/math] nothing changes. This is actually part of a more general phenomenon, regarding more general types of reflection groups and subgroups, that we will further discuss in the next chapter.


As a conclusion now to all this, we have seen that the truncated characters [math]\chi_t[/math] of the symmetric group [math]S_N[/math] have the Poisson laws [math]p_t[/math] as limiting distributions, with [math]N\to\infty[/math]. Moreover, we have seen several proofs of this fundamental fact, using inclusion-exclusion, direct integration, and convolution exponentials combined with Fourier transforms.


We will keep building on all this in the next chapter, by stating and proving similar results for more general reflection groups [math]G\subset U_N[/math]. Also, we will be back to the symmetric group [math]S_N[/math] and to the Poisson laws in chapters 13-16, with a fourth proof for our results, using representation theory, and a property of [math]S_N[/math] called easiness. More on this later.


General references

Banica, Teo (2024). "Linear algebra and group theory". arXiv:2206.09283 [math.CO].
