Poisson laws

[math] \newcommand{\mathds}{\mathbb}[/math]


2a. Poisson limits

We have seen so far that the centered normal laws [math]g_t[/math] and their complex analogues [math]G_t[/math], which appear from the Central Limit Theorem (CLT), have interesting combinatorial properties, and appear in several group-theoretical and geometric contexts.


We discuss here the discrete counterpart of these results. The mathematics will involve the Poisson laws [math]p_t[/math], which appear via the Poisson Limit Theorem (PLT), and their generalized versions [math]p_\nu[/math], called compound Poisson laws, which appear via the Compound Poisson Limit Theorem (CPLT). Let us start with the following definition:

Definition

The Poisson law of parameter [math]1[/math] is the following measure,

[[math]] p_1=\frac{1}{e}\sum_{k\in\mathbb N}\frac{\delta_k}{k!} [[/math]]
and the Poisson law of parameter [math]t \gt 0[/math] is the following measure,

[[math]] p_t=e^{-t}\sum_{k\in\mathbb N}\frac{t^k}{k!}\,\delta_k [[/math]]
with the letter “p” standing for Poisson.

We are using here, as before, some simplified notations for these laws, which are in tune with the notations [math]g_t,G_t[/math] that we used for the centered Gaussian laws. Observe that our laws have indeed mass 1, as they should, due to the following key formula:

[[math]] e^t=\sum_{k\in\mathbb N}\frac{t^k}{k!} [[/math]]


We will see in a moment why these measures appear almost everywhere in discrete contexts, the reasons for this coming from the Poisson Limit Theorem (PLT). Let us first develop some general theory. We have the following result:

Theorem

We have the following formula, for any [math]s,t \gt 0[/math],

[[math]] p_s*p_t=p_{s+t} [[/math]]
so the Poisson laws form a convolution semigroup.


Show Proof

By using [math]\delta_k*\delta_l=\delta_{k+l}[/math] and the binomial formula, we obtain:

[[math]] \begin{eqnarray*} p_s*p_t &=&e^{-s}\sum_k\frac{s^k}{k!}\,\delta_k*e^{-t}\sum_l\frac{t^l}{l!}\,\delta_l\\ &=&e^{-s-t}\sum_n\delta_n\sum_{k+l=n}\frac{s^kt^l}{k!l!}\\ &=&e^{-s-t}\sum_n\frac{\delta_n}{n!}\sum_{k+l=n}\frac{n!}{k!l!}s^kt^l\\ &=&e^{-s-t}\sum_n\frac{(s+t)^n}{n!}\,\delta_n\\ &=&p_{s+t} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
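For readers who like to double-check such identities numerically, here is a small sketch in Python (the helper names are ours), convolving truncated Poisson weight sequences:

```python
from math import exp, factorial

def poisson_weights(t, K=60):
    # weights of p_t = e^{-t} sum_k t^k/k! delta_k, truncated at k = K
    return [exp(-t) * t**k / factorial(k) for k in range(K + 1)]

def convolve(a, b):
    # convolution of two measures supported on {0, 1, 2, ...}
    out = [0.0] * (len(a) + len(b) - 1)
    for k, x in enumerate(a):
        for l, y in enumerate(b):
            out[k + l] += x * y
    return out

# p_2 * p_3 should match p_5, weight by weight
conv = convolve(poisson_weights(2.0), poisson_weights(3.0))
direct = poisson_weights(5.0, K=120)
err = max(abs(conv[k] - direct[k]) for k in range(40))
print(err)
```

The error printed is at the level of floating-point rounding, as the binomial computation above predicts.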

Next in line, we have the following result, which is fundamental as well:

Theorem

The Poisson laws appear as formal exponentials

[[math]] p_t=\sum_k\frac{t^k(\delta_1-\delta_0)^{*k}}{k!} [[/math]]
with respect to the convolution of measures [math]*[/math].


Show Proof

By using the binomial formula, the measure on the right is:

[[math]] \begin{eqnarray*} \mu &=&\sum_k\frac{t^k}{k!}\sum_{r+s=k}(-1)^s\frac{k!}{r!s!}\delta_r\\ &=&\sum_k\sum_{r+s=k}(-1)^s\frac{t^rt^s}{r!s!}\,\delta_r\\ &=&\sum_r\frac{t^r\delta_r}{r!}\sum_s\frac{(-t)^s}{s!}\\ &=&e^{-t}\sum_r\frac{t^r\delta_r}{r!}\\ &=&p_t \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.

Regarding now the Fourier transform computation, this is as follows:

Theorem

The Fourier transform of [math]p_t[/math] is given by

[[math]] F_{p_t}(y)=\exp\left((e^{iy}-1)t\right) [[/math]]
for any [math]t \gt 0[/math].


Show Proof

We have indeed the following computation:

[[math]] \begin{eqnarray*} F_{p_t}(y) &=&e^{-t}\sum_k\frac{t^k}{k!}F_{\delta_k}(y)\\ &=&e^{-t}\sum_k\frac{t^k}{k!}\,e^{iky}\\ &=&e^{-t}\sum_k\frac{(e^{iy}t)^k}{k!}\\ &=&\exp(-t)\exp(e^{iy}t)\\ &=&\exp\left((e^{iy}-1)t\right) \end{eqnarray*} [[/math]]


Thus, we obtain the formula in the statement.
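As a quick numerical cross-check of this formula, one can compare the series defining the Fourier transform of [math]p_t[/math] with the closed expression, as in the following sketch (function names are ours):

```python
import cmath
from math import exp, factorial

def F_series(t, y, K=80):
    # F_{p_t}(y) computed term by term, from p_t = e^{-t} sum_k t^k/k! delta_k
    return sum(exp(-t) * t**k / factorial(k) * cmath.exp(1j * k * y) for k in range(K))

def F_closed(t, y):
    # the closed formula exp((e^{iy} - 1) t)
    return cmath.exp((cmath.exp(1j * y) - 1) * t)

t, y = 1.7, 0.9
print(abs(F_series(t, y) - F_closed(t, y)))
```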

Observe that the above formula gives an alternative proof for Theorem 2.2, by using the fact that the logarithm of the Fourier transform linearizes the convolution. As another application, we can now establish the Poisson Limit Theorem, as follows:

Theorem (PLT)

We have the following convergence, in moments,

[[math]] \left(\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\,\delta_1\right)^{*n}\to p_t [[/math]]
for any [math]t \gt 0[/math].


Show Proof

Let us denote by [math]\nu_n[/math] the measure under the convolution sign, namely:

[[math]] \nu_n=\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\,\delta_1 [[/math]]


We have the following computation, for the Fourier transform of the limit:

[[math]] \begin{eqnarray*} F_{\delta_r}(y)=e^{iry} &\implies&F_{\nu_n}(y)=\left(1-\frac{t}{n}\right)+\frac{t}{n}\,e^{iy}\\ &\implies&F_{\nu_n^{*n}}(y)=\left(\left(1-\frac{t}{n}\right)+\frac{t}{n}\,e^{iy}\right)^n\\ &\implies&F_{\nu_n^{*n}}(y)=\left(1+\frac{(e^{iy}-1)t}{n}\right)^n\\ &\implies&F(y)=\exp\left((e^{iy}-1)t\right) \end{eqnarray*} [[/math]]


Thus, we obtain indeed the Fourier transform of [math]p_t[/math], as desired.
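Observe that the measure [math]\nu_n^{*n}[/math] is simply the binomial law [math]B(n,t/n)[/math], so the PLT can also be checked directly on weights, as in this sketch:

```python
from math import comb, exp, factorial

# The n-fold convolution of (1 - t/n) delta_0 + (t/n) delta_1 is the
# binomial law B(n, t/n); we compare its weights with those of p_t.
t, n = 2.0, 10**5

def binom_weight(k):
    return comb(n, k) * (t / n)**k * (1 - t / n)**(n - k)

def poisson_weight(k):
    return exp(-t) * t**k / factorial(k)

err = max(abs(binom_weight(k) - poisson_weight(k)) for k in range(8))
print(err)
```

The pointwise error is of order [math]t^2/n[/math], in agreement with the standard quantitative versions of the PLT.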

At the level of moments now, things are quite subtle for the Poisson laws, combinatorially speaking, and more complicated than for the normal laws. We first have the following result, dealing with the simplest case, where the parameter is [math]t=1[/math]:

Theorem

The moments of [math]p_1[/math] are the Bell numbers,

[[math]] M_k(p_1)=|P(k)| [[/math]]
where [math]P(k)[/math] is the set of partitions of [math]\{1,\ldots,k\}[/math].


Show Proof

The moments of [math]p_1[/math] are given by the following formula:

[[math]] M_k=\frac{1}{e}\sum_r\frac{r^k}{r!} [[/math]]


We therefore have the following recurrence formula for these moments:

[[math]] \begin{eqnarray*} M_{k+1} &=&\frac{1}{e}\sum_r\frac{(r+1)^{k+1}}{(r+1)!}\\ &=&\frac{1}{e}\sum_r\frac{r^k}{r!}\left(1+\frac{1}{r}\right)^k\\ &=&\frac{1}{e}\sum_r\frac{r^k}{r!}\sum_s\binom{k}{s}r^{-s}\\ &=&\sum_s\binom{k}{s}\cdot\frac{1}{e}\sum_r\frac{r^{k-s}}{r!}\\ &=&\sum_s\binom{k}{s}M_{k-s} \end{eqnarray*} [[/math]]


With this done, let us try now to find a recurrence for the Bell numbers:

[[math]] B_k=|P(k)| [[/math]]


A partition of [math]\{1,\ldots,k+1\}[/math] appears by choosing [math]s[/math] neighbors for [math]1[/math], among the [math]k[/math] numbers available, and then partitioning the [math]k-s[/math] elements left. Thus, we have:

[[math]] B_{k+1}=\sum_s\binom{k}{s}B_{k-s} [[/math]]


Thus, our moments [math]M_k[/math] satisfy the same recurrence as the numbers [math]B_k[/math]. Regarding now the initial values, the first moment of [math]p_1[/math] is given by:

[[math]] M_1=\frac{1}{e}\sum_r\frac{r}{r!}=1 [[/math]]


Also, by using the above recurrence for the numbers [math]M_k[/math], we obtain from this:

[[math]] M_2 =\sum_s\binom{1}{s}M_{1-s} =1+1 =2 [[/math]]


On the other hand, [math]B_1=1[/math] and [math]B_2=2[/math]. Thus we obtain [math]M_k=B_k[/math], as claimed.
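The identity moments = Bell numbers can be verified numerically, by computing the moments via the series in the proof, and the Bell numbers via the recurrence established above (a sketch, helper names ours):

```python
from math import comb, exp, factorial

def bell_numbers(n):
    # B_0,...,B_n via the recurrence B_{k+1} = sum_s C(k,s) B_{k-s}, with B_0 = 1
    B = [1]
    for k in range(n):
        B.append(sum(comb(k, s) * B[k - s] for s in range(k + 1)))
    return B

def poisson1_moment(k, R=120):
    # M_k = (1/e) sum_r r^k / r!  (the series from the proof)
    return exp(-1) * sum(r**k / factorial(r) for r in range(R))

B = bell_numbers(8)
moments = [round(poisson1_moment(k)) for k in range(1, 9)]
print(moments)
print(B[1:])
```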

More generally now, we have the following result, dealing with the case [math]t \gt 0[/math]:

Theorem

The moments of [math]p_t[/math] with [math]t \gt 0[/math] are given by

[[math]] M_k(p_t)=\sum_{\pi\in P(k)}t^{|\pi|} [[/math]]
where [math]|.|[/math] is the number of blocks.


Show Proof

The moments of the Poisson law [math]p_t[/math] with [math]t \gt 0[/math] are given by:

[[math]] M_k=e^{-t}\sum_r\frac{t^rr^k}{r!} [[/math]]


We have the following recurrence formula for these moments:

[[math]] \begin{eqnarray*} M_{k+1} &=&e^{-t}\sum_r\frac{t^{r+1}(r+1)^{k+1}}{(r+1)!}\\ &=&e^{-t}\sum_r\frac{t^{r+1}r^k}{r!}\left(1+\frac{1}{r}\right)^k\\ &=&e^{-t}\sum_r\frac{t^{r+1}r^k}{r!}\sum_s\binom{k}{s}r^{-s}\\ &=&\sum_s\binom{k}{s}\cdot e^{-t}\sum_r\frac{t^{r+1}r^{k-s}}{r!}\\ &=&t\sum_s\binom{k}{s}M_{k-s} \end{eqnarray*} [[/math]]


Regarding now the initial values, the first moment of [math]p_t[/math] is given by:

[[math]] M_1 =e^{-t}\sum_r\frac{t^rr}{r!} =e^{-t}\sum_r\frac{t^r}{(r-1)!} =t [[/math]]


Now by using the above recurrence we obtain from this:

[[math]] M_2 =t\sum_s\binom{1}{s}M_{1-s} =t(1+t) =t+t^2 [[/math]]


On the other hand, consider the numbers in the statement, namely:

[[math]] S_k=\sum_{\pi\in P(k)}t^{|\pi|} [[/math]]


Since a partition of [math]\{1,\ldots,k+1\}[/math] appears by choosing [math]s[/math] neighbors for [math]1[/math], among the [math]k[/math] numbers available, and then partitioning the [math]k-s[/math] elements left, we have:

[[math]] S_{k+1}=t\sum_s\binom{k}{s}S_{k-s} [[/math]]


As for the initial values of these numbers, these are [math]S_1=t[/math], [math]S_2=t+t^2[/math]. Thus the initial values coincide, and so these numbers are the moments of [math]p_t[/math], as stated.

As a conclusion to all this, the Poisson laws [math]p_t[/math] are now entitled to join the collection of “interesting” probability measures that we have, formed by the real and complex Gaussian laws [math]g_t[/math] and [math]G_t[/math]. Indeed, not only do all these measures appear via key limiting theorems and form convolution semigroups, but at the level of moments, we have:

Theorem

The moments of [math]\mu_t=p_t,g_t,G_t[/math] are given by the formula

[[math]] M_k(\mu_t)=\sum_{\pi\in D(k)}t^{|\pi|} [[/math]]
where [math]D=P,P_2,\mathcal P_2[/math] respectively, and [math]|.|[/math] is the number of blocks.


Show Proof

This follows indeed from Theorem 2.7, and from the results in chapter 1.

The above result raises a whole string of interesting questions. Are there more measures of this type? Is a classification of such measures possible? Are the convolution semigroup results consequences of the moment formula? What about the limiting theorems? And so on. All these questions will be answered, in due time.

2b. Derangements

In relation now with groups, and with pure mathematics in general, let us start with the following well-known, beautiful and fundamental result:

Theorem

The probability for a random [math]\sigma\in S_N[/math] to have no fixed points is

[[math]] P\simeq\frac{1}{e} [[/math]]
in the [math]N\to\infty[/math] limit, where [math]e=2.718\ldots[/math] is the usual constant from analysis.


Show Proof

This is best viewed by using the inclusion-exclusion principle. Let us set:

[[math]] S_N^k=\left\{\sigma\in S_N\Big|\sigma(k)=k\right\} [[/math]]


The set of permutations having no fixed points, called derangements, is then:

[[math]] X_N=\left(\bigcup_kS_N^k\right)^c [[/math]]


Now the inclusion-exclusion principle tells us that we have:

[[math]] \begin{eqnarray*} |X_N| &=&\left|\left(\bigcup_kS_N^k\right)^c\right|\\ &=&|S_N|-\sum_k|S_N^k|+\sum_{k \lt l}|S_N^k\cap S_N^l|-\ldots+(-1)^N\sum_{k_1 \lt \ldots \lt k_N}|S_N^{k_1}\cap\ldots\cap S_N^{k_N}|\\ &=&N!-N(N-1)!+\binom{N}{2}(N-2)!-\ldots+(-1)^N\binom{N}{N}(N-N)!\\ &=&\sum_{r=0}^N(-1)^r\binom{N}{r}(N-r)! \end{eqnarray*} [[/math]]


Thus, the probability that we are interested in, for a random permutation [math]\sigma\in S_N[/math] to have no fixed points, is given by the following formula:

[[math]] P =\frac{|X_N|}{N!} =\sum_{r=0}^N\frac{(-1)^r}{r!} [[/math]]


Since on the right we have the expansion of [math]1/e[/math], this gives the result.
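The convergence to [math]1/e[/math] is in fact very fast, as one can see by brute-force enumeration already for small [math]N[/math] (a sketch; permutations are 0-indexed in the code):

```python
from itertools import permutations
from math import e, factorial

def no_fixed_point_probability(N):
    # exact fraction of derangements in S_N, by brute-force enumeration
    count = sum(all(s[i] != i for i in range(N))
                for s in permutations(range(N)))
    return count / factorial(N)

for N in (4, 6, 8):
    print(N, no_fixed_point_probability(N), 1 / e)
```

Already at [math]N=8[/math] the probability agrees with [math]1/e=0.3678\ldots[/math] to about five decimals, the error being of order [math]1/(N+1)![/math], by the alternating series estimate.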

In order to refine now the above result, as to reach the Poisson laws, we will need some basic notions from group theory. Let us start with the following standard definition:

Definition

Given a closed subgroup [math]G\subset U_N[/math], the function

[[math]] \chi:G\to\mathbb C\quad,\quad \chi(g)=\sum_{i=1}^Ng_{ii} [[/math]]
is called the main character of [math]G[/math].

We will see later a number of motivations for the study of characters, the idea being that a compact group [math]G[/math] can have several representations [math]\pi:G\to U_N[/math], which can be studied via their characters [math]\chi_\pi:G\to\mathbb C[/math]. For the moment, we will not need any kind of abstract motivations, and this because for [math]S_N[/math], we have the following beautiful result:

Theorem

Consider the symmetric group [math]S_N[/math], regarded as the permutation group, [math]S_N\subset O_N[/math], of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math].

  • The main character [math]\chi\in C(S_N)[/math] counts the number of fixed points.
  • The law of [math]\chi\in C(S_N)[/math] becomes Poisson [math](1)[/math], in the [math]N\to\infty[/math] limit.


Show Proof

We have two things to be proved here, the idea being as follows:


(1) The permutation matrices [math]\sigma\in O_N[/math], which give the embedding [math]S_N\subset O_N[/math] in the statement, being given by [math]\sigma_{ij}=\delta_{i\sigma(j)}[/math], we have the following computation:

[[math]] \chi(\sigma) =\sum_i\delta_{\sigma(i)i} =\#\left\{i\in\{1,\ldots,N\}\Big|\sigma(i)=i\right\} [[/math]]


(2) In order to establish now the asymptotic result in the statement, we must prove the following formula, for any [math]r\in\mathbb N[/math], in the [math]N\to\infty[/math] limit:

[[math]] P(\chi=r)\simeq\frac{1}{r!e} [[/math]]


We already know, from Theorem 2.9, that this formula holds at [math]r=0[/math]. In the general case now, we have to count the permutations [math]\sigma\in S_N[/math] having exactly [math]r[/math] fixed points. Since having such a permutation amounts to choosing [math]r[/math] points among [math]1,\ldots,N[/math], and then permuting the [math]N-r[/math] points left, with no fixed points allowed, we have:

[[math]] \begin{eqnarray*} \#\left\{\sigma\in S_N\Big|\chi(\sigma)=r\right\} &=&\binom{N}{r}\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}\\ &=&\frac{N!}{r!(N-r)!}\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}\\ &=&N!\times\frac{1}{r!}\times\frac{\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}}{(N-r)!} \end{eqnarray*} [[/math]]


By dividing everything by [math]N![/math], we obtain from this the following formula:

[[math]] \frac{\#\left\{\sigma\in S_N\Big|\chi(\sigma)=r\right\}}{N!}=\frac{1}{r!}\times\frac{\#\left\{\sigma\in S_{N-r}\Big|\chi(\sigma)=0\right\}}{(N-r)!} [[/math]]


Now by using the computation at [math]r=0[/math], that we already have, from Theorem 2.9, it follows that with [math]N\to\infty[/math] we have the following estimate:

[[math]] P(\chi=r) \simeq\frac{1}{r!}\cdot P(\chi=0) \simeq\frac{1}{r!}\cdot\frac{1}{e} [[/math]]


Thus, we obtain as limiting measure the Poisson law of parameter 1, as stated.
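This can again be illustrated by exact enumeration, computing the full distribution of [math]\chi[/math] over [math]S_8[/math] and comparing with the Poisson [math](1)[/math] weights (a sketch, with 0-indexed permutations):

```python
from itertools import permutations
from math import exp, factorial

# Exact distribution of the number of fixed points chi over S_N, for N = 8,
# compared with the Poisson(1) weights e^{-1}/r!
N = 8
counts = [0] * (N + 1)
for s in permutations(range(N)):
    counts[sum(s[i] == i for i in range(N))] += 1

for r in range(4):
    print(r, counts[r] / factorial(N), exp(-1) / factorial(r))
```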

As a next step, let us try now to generalize what we have, namely Theorem 2.11, as to reach the Poisson laws of arbitrary parameter [math]t \gt 0[/math]. We will need:

Definition

Given a closed subgroup [math]G\subset U_N[/math], the function

[[math]] \chi_t:G\to\mathbb C\quad,\quad \chi_t(g)=\sum_{i=1}^{[tN]}g_{ii} [[/math]]
is called the main truncated character of [math]G[/math], of parameter [math]t\in(0,1][/math].

As before with plain characters, there is some theory behind this definition, and we will discuss this later on, in chapter 4 below. In relation with the present considerations, we actually already met such truncated characters, but in a disguised form, in chapter 1, when talking about [math]O_N,U_N[/math]. Indeed, the results there show that we have:

Proposition

For the orthogonal and unitary groups [math]O_N,U_N[/math], the rescalings

[[math]] \rho_{1/N}=\sqrt{N}\cdot\chi_{1/N} [[/math]]
become respectively real and complex Gaussian, in the [math]N\to\infty[/math] limit.


Show Proof

According to our conventions, given a closed subgroup [math]G\subset U_N[/math], the main character truncated at [math]t=1/N[/math] is simply the first coordinate:

[[math]] \chi_{1/N}(g)=g_{11} [[/math]]


With this remark made, the conclusions from the statement follow from the computations from chapter 1, for the laws of coordinates on [math]O_N,U_N[/math].

Getting back now to the symmetric groups, we have the following result, generalizing Theorem 2.11, and which will be our final say on the subject:

Theorem

Consider the symmetric group [math]S_N[/math], regarded as the permutation group, [math]S_N\subset O_N[/math], of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math].

  • The variable [math]\chi_t[/math] counts the number of fixed points among [math]1,\ldots,[tN][/math].
  • The law of this variable [math]\chi_t[/math] becomes Poisson [math](t)[/math], in the [math]N\to\infty[/math] limit.


Show Proof

We already know from Theorem 2.11 that the results hold at [math]t=1[/math]. In general, the proof is similar, the idea being as follows:


(1) We have indeed the following computation, coming from definitions:

[[math]] \chi_t(\sigma) =\sum_{i=1}^{[tN]}\delta_{\sigma(i)i} =\#\left\{i\in\{1,\ldots,[tN]\}\Big|\sigma(i)=i\right\} [[/math]]


(2) Consider indeed the following sets, as in the proof of Theorem 2.9:

[[math]] S_N^k=\left\{\sigma\in S_N\Big|\sigma(k)=k\right\} [[/math]]


The set of permutations having no fixed points among [math]1,\ldots,[tN][/math] is then:

[[math]] X_N=\left(\bigcup_{k\leq[tN]}S_N^k\right)^c [[/math]]


As before in the proof of Theorem 2.9, we obtain by inclusion-exclusion that:

[[math]] \begin{eqnarray*} P(\chi_t=0) &=&\frac{1}{N!}\sum_{r=0}^{[tN]}(-1)^r\sum_{k_1 \lt \ldots \lt k_r\leq[tN]}|S_N^{k_1}\cap\ldots\cap S_N^{k_r}|\\ &=&\frac{1}{N!}\sum_{r=0}^{[tN]}(-1)^r\binom{[tN]}{r}(N-r)!\\ &=&\sum_{r=0}^{[tN]}\frac{(-1)^r}{r!}\cdot\frac{[tN]!(N-r)!}{N!([tN]-r)!} \end{eqnarray*} [[/math]]


Now with [math]N\to\infty[/math], we obtain from this the following estimate:

[[math]] \begin{eqnarray*} P(\chi_t=0) &\simeq&\sum_{r=0}^{[tN]}\frac{(-1)^r}{r!}\cdot t^r\\ &=&\sum_{r=0}^{[tN]}\frac{(-t)^r}{r!}\\ &\simeq&e^{-t} \end{eqnarray*} [[/math]]


More generally, by counting the permutations [math]\sigma\in S_N[/math] having exactly [math]r[/math] fixed points among [math]1,\ldots,[tN][/math], as in the proof of Theorem 2.11, we obtain:

[[math]] P(\chi_t=r)\simeq\frac{t^r}{r!e^t} [[/math]]


Thus, we obtain in the limit a Poisson law of parameter [math]t[/math], as stated.
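A Monte Carlo illustration of this convergence is easy to set up, by sampling random permutations and counting fixed points among [math]1,\ldots,[tN][/math] (a sketch, with a fixed seed; parameters are of our choosing):

```python
import random
from math import exp, factorial

random.seed(1)
N, t, trials = 200, 0.5, 20000
s = int(t * N)

counts = {}
for _ in range(trials):
    sigma = list(range(N))
    random.shuffle(sigma)
    # number of fixed points among the first s = [tN] points
    r = sum(sigma[i] == i for i in range(s))
    counts[r] = counts.get(r, 0) + 1

for r in range(4):
    print(r, counts.get(r, 0) / trials, exp(-t) * t**r / factorial(r))
```

The empirical frequencies agree with the Poisson [math](t)[/math] weights up to the usual Monte Carlo error, of order [math]1/\sqrt{trials}[/math].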

The above result is quite fundamental, and worth proving a second time, by using an alternative method. We can indeed use the following formula:

Theorem

Consider the symmetric group [math]S_N[/math], with its standard coordinates:

[[math]] u_{ij}=\chi\left(\sigma\in S_N\Big|\sigma(j)=i\right) [[/math]]
We have then the following integration formula,

[[math]] \int_{S_N}u_{i_1j_1}\ldots u_{i_rj_r}=\begin{cases} \frac{(N-|\ker i|)!}{N!}&{\rm if}\ \ker i=\ker j\\ 0&{\rm otherwise} \end{cases} [[/math]]
where [math]\ker i[/math] denotes the partition of [math]\{1,\ldots,r\}[/math] whose blocks collect the equal indices of [math]i[/math], and where [math]|.|[/math] denotes the number of blocks.


Show Proof

Observe first that the above formula computes all the integrals over [math]S_N[/math], and this because the coordinates [math]u_{ij}[/math] separate the points of [math]S_N[/math]. In what regards the proof, according to the definition of [math]u_{ij}[/math], the integrals in the statement are given by:

[[math]] \int_{S_N}u_{i_1j_1}\ldots u_{i_rj_r}=\frac{1}{N!}\#\left\{\sigma\in S_N\Big|\sigma(j_1)=i_1,\ldots,\sigma(j_r)=i_r\right\} [[/math]]


Now observe that the existence of [math]\sigma\in S_N[/math] as above requires:

[[math]] i_k=i_l\iff j_k=j_l [[/math]]


Thus, the integral in the statement vanishes if [math]\ker i\neq\ker j[/math]. As for the case left, namely [math]\ker i=\ker j[/math], if we denote by [math]b\in\{1,\ldots,r\}[/math] the number of blocks of this partition [math]\ker i=\ker j[/math], then we have [math]N-b[/math] points to be sent bijectively to [math]N-b[/math] points, and so [math](N-b)![/math] solutions, so our integral equals [math](N-b)!/N![/math], as claimed.
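The integration formula can be tested by brute force for small [math]N[/math], as in the following sketch (0-indexed permutations, index tuples of our choosing):

```python
from itertools import permutations
from math import factorial

N = 5

def integral(I, J):
    # (1/N!) #{ sigma in S_N : sigma(j_k) = i_k for all k }
    count = sum(all(s[j] == i for i, j in zip(I, J)) for s in permutations(range(N)))
    return count / factorial(N)

def ker(I):
    # the partition ker I, encoded by first occurrences of equal indices
    return tuple(I.index(x) for x in I)

def predicted(I, J):
    # the right-hand side of the integration formula
    if ker(I) != ker(J):
        return 0.0
    b = len(set(I))  # number of blocks of ker I
    return factorial(N - b) / factorial(N)

for I, J in [((0, 0), (1, 1)), ((0, 1), (2, 2)), ((0, 1, 0), (3, 2, 3))]:
    print(I, J, integral(I, J), predicted(I, J))
```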

As an illustration for the above formula, we can now formulate, as promised:

Theorem

For the symmetric group [math]S_N\subset O_N[/math], regarded as group of matrices via the standard permutation matrices, the truncated character

[[math]] \chi_t=\sum_{i=1}^{[tN]}u_{ii} [[/math]]
counts the number of fixed points among [math]\{1,\ldots,[tN]\}[/math], and its law with respect to the counting measure becomes, with [math]N\to\infty[/math], a Poisson law of parameter [math]t[/math].


Show Proof

The first assertion is something trivial, that we already know. Regarding now the second assertion, we can use here Theorem 2.15. With [math]S_{rb}[/math] being the Stirling numbers, counting the partitions of [math]\{1,\ldots,r\}[/math] having [math]b[/math] blocks, we have:

[[math]] \begin{eqnarray*} \int_{S_N}\chi_t^r &=&\sum_{i_1=1}^{[tN]}\ldots\sum_{i_r=1}^{[tN]}\int_{S_N}u_{i_1i_1}\ldots u_{i_ri_r}\\ &=&\sum_{\pi\in P(r)}\frac{[tN]!}{([tN]-|\pi|)!}\cdot\frac{(N-|\pi|)!}{N!}\\ &=&\sum_{b=1}^{[tN]}\frac{[tN]!}{([tN]-b)!}\cdot\frac{(N-b)!}{N!}\cdot S_{rb} \end{eqnarray*} [[/math]]


In particular with [math]N\to\infty[/math] we obtain the following formula:

[[math]] \lim_{N\to\infty}\int_{S_N}\chi_t^r=\sum_{b=1}^rS_{rb}t^b [[/math]]


But this is a Poisson ([math]t[/math]) moment, according to our formula for the moments of [math]p_t[/math], which in terms of Stirling numbers is the above one, and so we are done.

As a conclusion to all this, the Poisson laws [math]p_t[/math] appear to be quite similar to the real and complex Gaussian laws [math]g_t[/math] and [math]G_t[/math], in the sense that:

  • All these laws appear via basic limiting theorems.
  • They form semigroups with respect to convolution.
  • Their moments can be computed by counting certain partitions.
  • There is a relation with pure mathematics as well, involving [math]S_N,O_N,U_N[/math].

All this remains of course to be further discussed. We will be back to this right next, then in chapters 3-4 below, following [1], [2], [3], and then later on as well.

2c. Compound Poisson

We have so far many interesting results regarding [math]p_t,g_t,G_t[/math], on one hand regarding moments, with the sets of partitions [math]P,P_2,\mathcal P_2[/math] being involved, and on the other hand regarding characters, with the groups [math]S_N,O_N,U_N[/math] involved. All this is quite nice, but looks a bit incomplete, and we are led to the following question:

Question

What is the complex analogue of the Poisson law?

To be more precise, for obvious reasons, we would like to have a complex analogue [math]P_t[/math] of the Poisson law [math]p_t[/math], as to be able to draw a nice square diagram, as follows:

[[math]] \xymatrix@R=40pt@C=40pt{ P_t\ar@{-}[r]\ar@{-}[d]&G_t\ar@{-}[d]\\ p_t\ar@{-}[r]&g_t} [[/math]]


All this is quite philosophical, of course. In view of what we have, the first thought goes to moments, and we are led here to the following question, philosophical as well:

Question

Is it possible to talk about the set [math]\mathcal P[/math] of matching partitions?

To be more precise, we would like our moment formula [math]M_k(\mu_t)=\sum_{\pi\in D(k)}t^{|\pi|}[/math] to hold as well for the mysterious law [math]P_t[/math] that we are looking for, with [math]D=\mathcal P[/math]. So, we would like to have a set [math]\mathcal P[/math] of “matching partitions”, as to be able to draw the following diagram:

[[math]] \xymatrix@R=40pt@C=40pt{ \mathcal P\ar@{-}[r]\ar@{-}[d]&\mathcal P_2\ar@{-}[d]\\ P\ar@{-}[r]&P_2} [[/math]]


However, things are quite unclear with [math]\mathcal P[/math]. When all the blocks of [math]\pi\in P[/math] have even size, [math]\pi\in P_{even}[/math], we can probably declare that [math]\pi\in\mathcal P[/math] when the equality [math]\#\circ=\#\bullet[/math] holds in each block. But when [math]\pi\notin P_{even}[/math], it is not clear at all what to do.


Let us record our conclusions in the form of a vague thought, as follows:

Thought

There is probably no complex Poisson law [math]P_t[/math], and no set of matching partitions [math]\mathcal P[/math]. However, we should have diagrams of type

[[math]] \xymatrix@R=40pt@C=40pt{ B_t\ar@{-}[r]\ar@{-}[d]&G_t\ar@{-}[d]\\ b_t\ar@{-}[r]&g_t}\qquad\ \qquad \xymatrix@R=40pt@C=40pt{ \mathcal P_{even}\ar@{-}[r]\ar@{-}[d]&\mathcal P_2\ar@{-}[d]\\ P_{even}\ar@{-}[r]&P_2} [[/math]]

with the laws [math]b_t,B_t[/math], coming from [math]P_{even},\mathcal P_{even}[/math] via the formula

[[math]] M_k(\mu_t)=\sum_{\pi\in D(k)}t^{|\pi|} [[/math]]

to be computed, and being the “true” discrete analogues of [math]g_t,G_t[/math].

So, this is what we have. Of course, all this looks a bit like science fiction, so shall we follow this luminous new way, or not? I would agree with you that rather not.


This being said, there is still a chance for some reasonable theory coming from groups, and we have here, as a complement to Question 2.17 and Question 2.18:

Question

What is the complex analogue of the symmetric group [math]S_N[/math]?

As before with Question 2.17 and Question 2.18, this is something quite philosophical. But the subject is now quite fruitful, because there are plenty of interesting reflection groups [math]G_N\subset U_N[/math], and with a bit of luck, by studying such groups, we can reach an answer to our questions. Let us record this in the form of a second thought, as follows:

Thought

The answers to our questions should come from a diagram of type

[[math]] \xymatrix@R=40pt@C=40pt{ K_N\ar@{-}[r]\ar@{-}[d]&U_N\ar@{-}[d]\\ H_N\ar@{-}[r]&O_N} [[/math]]

with [math]H_N\subset K_N\subset U_N[/math] being certain reflection groups, and with [math]H_N=S_N[/math], ideally.

Here we have used an unknown [math]H_N[/math] instead of the group [math]H_N=S_N[/math] that we are interested in, and this because of Thought 2.19, which is still there, suggesting potential troubles.


Anyway, what to do? Get to work, of course, and in the lack of any clear idea, let us do some character computations for reflection groups [math]G_N\subset U_N[/math]. An obvious choice here is the hyperoctahedral group, whose definition and basic properties are as follows:

Theorem

Consider the hyperoctahedral group [math]H_N\subset O_N[/math], consisting of the various symmetries of the hypercube in [math]\mathbb R^N[/math].

  • [math]H_N[/math] is the symmetry group of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math].
  • [math]H_N[/math] consists of the permutation-like matrices over [math]\{-1,0,1\}[/math].
  • We have the cardinality formula [math]|H_N|=2^NN![/math].
  • We have a crossed product decomposition [math]H_N=S_N\rtimes\mathbb Z_2^N[/math].
  • We have a wreath product decomposition [math]H_N=\mathbb Z_2\wr S_N[/math].


Show Proof

Consider indeed the standard cube in [math]\mathbb R^N[/math], which is by definition centered at 0, and having as vertices the points having coordinates [math]\pm1[/math].


(1) With the above picture of the cube in hand, it is clear that the symmetries of the cube coincide with the symmetries of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math].


(2) Each of the permutations [math]\sigma\in S_N[/math] of the [math]N[/math] coordinate axes of [math]\mathbb R^N[/math] can be further “decorated” by a sign vector [math]\varepsilon\in\{\pm1\}^N[/math], consisting of the possible [math]\pm1[/math] flips which can be applied to each coordinate axis, at the arrival. In matrix terms, this gives the result.


(3) By using the above interpretation of [math]H_N[/math], we have the following formula:

[[math]] |H_N| =|S_N|\cdot|\mathbb Z_2^N| =N!\cdot2^N [[/math]]


(4) We know from (3) that at the level of cardinalities we have [math]|H_N|=|S_N\times\mathbb Z_2^N|[/math], and with a bit more work, we obtain that we have [math]H_N=S_N\rtimes\mathbb Z_2^N[/math], as claimed.


(5) This is simply a reformulation of (4), in terms of wreath products.

Getting back now to our character computations, following [4], we have:

Theorem

For the hyperoctahedral group [math]H_N\subset O_N[/math], the law of the variable [math]\chi=g_{11}+\ldots +g_{ss}[/math] with [math]s=[tN][/math] is, in the [math]N\to\infty[/math] limit, the following measure,

[[math]] b_t=e^{-t}\sum_{r=-\infty}^\infty\delta_r\sum_{p=0}^\infty \frac{(t/2)^{|r|+2p}}{(|r|+p)!p!} [[/math]]
called Bessel law of parameter [math]t \gt 0[/math].


Show Proof

We regard [math]H_N[/math] as being the symmetry group of the graph [math]I_N=\{I^1,\ldots ,I^N\}[/math] formed by [math]N[/math] segments. The diagonal coefficients are then given by:

[[math]] u_{ii}(g)=\begin{cases} 0&{\rm if}\ g\ {\rm moves}\ I^i\\ 1&{\rm if}\ g\ {\rm fixes}\ I^i\\ -1&{\rm if}\ g\ {\rm returns}\ I^i \end{cases} [[/math]]


Let us denote by [math]F_g,R_g[/math] the number of segments among [math]\{I^1,\ldots ,I^s\}[/math] which are fixed, respectively returned by an element [math]g\in H_N[/math]. With this notation, we have:

[[math]] u_{11}+\ldots+u_{ss}=F_g-R_g [[/math]]


We denote by [math]P_N[/math] probabilities computed over [math]H_N[/math]. The density of the law of the variable [math]u_{11}+\ldots+u_{ss}[/math] at a point [math]r\geq 0[/math] is then given by the following formula:

[[math]] D(r) =P_N(F_g-R_g=r) =\sum_{p=0}^\infty P_N(F_g=r+p,R_g=p) [[/math]]


Assume first that we are in the case [math]t=1[/math]. We have the following computation:

[[math]] \begin{eqnarray*} \lim_{N\to\infty}D(r) &=&\lim_{N\to\infty}\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}P_N(F_g+R_g=r+2p)\\ &=&\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}\frac{1}{e(r+2p)!}\\ &=&\frac{1}{e}\sum_{p=0}^\infty\frac{(1/2)^{r+2p}}{(r+p)!p!} \end{eqnarray*} [[/math]]


The general case [math]0 \lt t\leq 1[/math] follows by performing some modifications in the above computation. Indeed, the asymptotic density can be computed as follows:

[[math]] \begin{eqnarray*} \lim_{N\to\infty}D(r) &=&\lim_{N\to\infty}\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}P_N(F_g+R_g=r+2p)\\ &=&\sum_{p=0}^\infty(1/2)^{r+2p}\binom{r+2p}{r+p}\frac{t^{r+2p}}{e^t(r+2p)!}\\ &=&e^{-t}\sum_{p=0}^\infty\frac{(t/2)^{r+2p}}{(r+p)!p!} \end{eqnarray*} [[/math]]


Together with [math]D(-r)=D(r)[/math], this gives the formula in the statement.
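This computation can be illustrated by simulation, using the picture of [math]H_N[/math] as signed permutations from Theorem 2.22: the diagonal coefficient at a fixed point of the underlying permutation is the corresponding sign, and vanishes elsewhere. Here is a sketch (fixed seed, parameters ours):

```python
import random
from math import exp, factorial

random.seed(2)
N, t, trials = 200, 0.5, 20000
s = int(t * N)

def bessel_weight(r, t, P=40):
    # b_t({r}) = e^{-t} sum_p (t/2)^{|r|+2p} / ((|r|+p)! p!)
    return exp(-t) * sum((t / 2)**(abs(r) + 2 * p) / (factorial(abs(r) + p) * factorial(p))
                         for p in range(P))

counts = {}
for _ in range(trials):
    sigma = list(range(N))
    random.shuffle(sigma)
    # a uniform element of H_N is a permutation plus independent signs;
    # chi_t adds the sign over each fixed point among 1,...,s
    chi = sum(random.choice((1, -1)) for i in range(s) if sigma[i] == i)
    counts[chi] = counts.get(chi, 0) + 1

for r in (-1, 0, 1, 2):
    print(r, counts.get(r, 0) / trials, bessel_weight(r, t))
```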

The above result is quite interesting, because the densities that we found there are the following functions, called Bessel functions of the first kind:

[[math]] f_r(t)=\sum_{p=0}^\infty\frac{t^{|r|+2p}}{(|r|+p)!p!} [[/math]]


Due to this fact, the limiting measures are called Bessel laws, as mentioned in Theorem 2.23. Let us study now these Bessel laws. We first have the following result, from [4]:

Theorem

The Bessel laws [math]b_t[/math] have the property

[[math]] b_s*b_t=b_{s+t} [[/math]]
so they form a one-parameter semigroup with respect to convolution.


Show Proof

With [math]f_r(t)[/math] being the Bessel functions of the first kind, we have:

[[math]] b_t=e^{-t}\sum_{r=-\infty}^\infty\delta_r\,f_r(t/2) [[/math]]


The Fourier transform of this measure [math]b_t[/math] is given by:

[[math]] F_{b_t}(y)=e^{-t}\sum_{r=-\infty}^\infty e^{iry}\,f_r(t/2) [[/math]]


We compute now the derivative with respect to the variable [math]t[/math]:

[[math]] F_{b_t}(y)'=-F_{b_t}(y)+\frac{e^{-t}}{2}\sum_{r=-\infty}^\infty e^{iry}\,f_r'(t/2) [[/math]]


On the other hand, the derivative of [math]f_r[/math] with [math]r\geq 1[/math] is given by:

[[math]] \begin{eqnarray*} f_r'(t) &=&\sum_{p=0}^\infty\frac{(r+2p)t^{r+2p-1}}{(r+p)!p!}\\ &=&\sum_{p=0}^\infty\frac{(r+p)t^{r+2p-1}}{(r+p)!p!}+\sum_{p=0}^\infty\frac{p\,t^{r+2p-1}}{(r+p)!p!}\\ &=&\sum_{p=0}^\infty\frac{t^{r+2p-1}}{(r+p-1)!p!}+\sum_{p=1}^\infty\frac{t^{r+2p-1}}{(r+p)!(p-1)!}\\ &=&\sum_{p=0}^\infty\frac{t^{(r-1)+2p}}{((r-1)+p)!p!}+\sum_{p=1}^\infty\frac{t^{(r+1)+2(p-1)}}{((r+1)+(p-1))!(p-1)!}\\ &=&f_{r-1}(t)+f_{r+1}(t) \end{eqnarray*} [[/math]]


This computation works in fact for any [math]r[/math], and we obtain in this way:

[[math]] \begin{eqnarray*} F_{b_t}(y)' &=&-F_{b_t}(y)+\frac{e^{-t}}{2}\sum_{r=-\infty}^\infty e^{iry}(f_{r-1}(t/2)+f_{r+1}(t/2))\\ &=&-F_{b_t}(y)+\frac{e^{-t}}{2}\sum_{r=-\infty}^\infty e^{i(r+1)y}f_r(t/2)+e^{i(r-1)y}f_r(t/2)\\ &=&-F_{b_t}(y)+\frac{e^{iy}+e^{-iy}}{2}\,F_{b_t}(y)\\ &=&\left(\frac{e^{iy}+e^{-iy}}{2}-1\right)F_{b_t}(y) \end{eqnarray*} [[/math]]


By integrating, we obtain from this the following formula:

[[math]] F_{b_t}(y)=\exp\left(\left(\frac{e^{iy}+e^{-iy}}{2}-1\right)t\right) [[/math]]


Thus the log of the Fourier transform is linear in [math]t[/math], and we get the assertion.
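Since everything here is explicit, both the Fourier formula and the semigroup property can be verified numerically. Here is a quick Python sketch, with the truncation bounds pmax, rmax being ad hoc choices of ours:

```python
import cmath
import math

def f(r, t, pmax=60):
    # Bessel function of the first kind, in the convention of the text:
    # f_r(t) = sum_p t^(|r|+2p) / ((|r|+p)! p!)
    r = abs(r)
    return sum(t**(r + 2*p) / (math.factorial(r + p) * math.factorial(p))
               for p in range(pmax))

def fourier_direct(y, t, rmax=40):
    # Fourier transform of b_t = e^{-t} sum_r delta_r f_r(t/2), summed directly
    return math.exp(-t) * sum(cmath.exp(1j * r * y) * f(r, t / 2)
                              for r in range(-rmax, rmax + 1))

def fourier_closed(y, t):
    # Closed formula obtained at the end of the proof
    return cmath.exp(t * ((cmath.exp(1j * y) + cmath.exp(-1j * y)) / 2 - 1))

def weight(r, t):
    # mass of b_t at the integer r
    return math.exp(-t) * f(r, t / 2)

print(abs(fourier_direct(0.7, 1.5) - fourier_closed(0.7, 1.5)))  # essentially 0
print(abs(fourier_direct(0.0, 2.0) - 1))                          # mass of b_t is 1

# semigroup property b_s * b_t = b_{s+t}, checked on the atom at 0
conv = sum(weight(r, 1.0) * weight(-r, 1.5) for r in range(-40, 41))
print(abs(conv - weight(0, 2.5)))  # essentially 0
```

The convolution check uses only the atom at 0; the other atoms can be checked in the same way.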

In order to further discuss all this, and extend the above results, we will need a number of standard probabilistic preliminaries. We have the following notion, extending the Poisson limit theory developed in the beginning of the present chapter:

Definition

Associated to any compactly supported positive measure [math]\nu[/math] on [math]\mathbb C[/math], not necessarily of mass [math]1[/math], is the probability measure

[[math]] p_\nu=\lim_{n\to\infty}\left(\left(1-\frac{t}{n}\right)\delta_0+\frac{1}{n}\nu\right)^{*n} [[/math]]
where [math]t={\rm mass}(\nu)[/math], called the compound Poisson law associated to [math]\nu[/math].

In what follows we will be mainly interested in the case where the measure [math]\nu[/math] is discrete, as is for instance the case for [math]\nu=t\delta_1[/math] with [math]t \gt 0[/math], which produces the Poisson laws. The following standard result allows one to detect compound Poisson laws:

Proposition

For [math]\nu=\sum_{i=1}^st_i\delta_{z_i}[/math] with [math]t_i \gt 0[/math] and [math]z_i\in\mathbb C[/math], we have

[[math]] F_{p_\nu}(y)=\exp\left(\sum_{i=1}^st_i(e^{iyz_i}-1)\right) [[/math]]
where [math]F[/math] denotes the Fourier transform.


Show Proof

Let [math]\eta_n[/math] be the measure in Definition 2.25, under the convolution sign:

[[math]] \eta_n=\left(1-\frac{t}{n}\right)\delta_0+\frac{1}{n}\nu [[/math]]


We have then the following computation:

[[math]] \begin{eqnarray*} F_{\eta_n}(y)=\left(1-\frac{t}{n}\right)+\frac{1}{n}\sum_{i=1}^st_ie^{iyz_i} &\implies&F_{\eta_n^{*n}}(y)=\left(\left(1-\frac{t}{n}\right)+\frac{1}{n}\sum_{i=1}^st_ie^{iyz_i}\right)^n\\ &\implies&F_{p_\nu}(y)=\exp\left(\sum_{i=1}^st_i(e^{iyz_i}-1)\right) \end{eqnarray*} [[/math]]


Thus, we have obtained the formula in the statement.
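The convergence used here can also be watched numerically: for large [math]n[/math], the Fourier transform of [math]\eta_n^{*n}[/math] approaches the compound Poisson formula in the statement. A small Python check, with the values of [math]t_i,z_i[/math] being ad hoc choices of ours:

```python
import cmath

def F_eta_n_power(y, ts, zs, n):
    # Fourier transform of eta_n^{*n}, with eta_n = (1 - t/n) delta_0 + nu/n
    t = sum(ts)
    base = (1 - t / n) + sum(ti * cmath.exp(1j * y * zi)
                             for ti, zi in zip(ts, zs)) / n
    return base**n

def F_compound(y, ts, zs):
    # closed formula from the proposition
    return cmath.exp(sum(ti * (cmath.exp(1j * y * zi) - 1)
                         for ti, zi in zip(ts, zs)))

ts, zs, y = [0.5, 1.0], [1.0, 1j], 0.8
approx = F_eta_n_power(y, ts, zs, 10**6)
exact = F_compound(y, ts, zs)
print(abs(approx - exact))  # small, and tends to 0 as n grows
```

The error here is of order [math]1/n[/math], coming from the usual estimate [math](1+c/n)^n=e^c(1+O(1/n))[/math].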

We have as well the following result, providing an alternative to Definition 2.25, and which will be our formulation here of the Compound Poisson Limit Theorem:

Theorem (CPLT)

For [math]\nu=\sum_{i=1}^st_i\delta_{z_i}[/math] with [math]t_i \gt 0[/math] and [math]z_i\in\mathbb C[/math], we have

[[math]] p_\nu={\rm law}\left(\sum_{i=1}^sz_i\alpha_i\right) [[/math]]
where the variables [math]\alpha_i[/math] are Poisson [math](t_i)[/math], independent.


Show Proof

Let [math]\alpha[/math] be the sum of Poisson variables in the statement, namely:

[[math]] \alpha=\sum_{i=1}^sz_i\alpha_i [[/math]]


By using some standard Fourier transform formulae, we have:

[[math]] \begin{eqnarray*} F_{\alpha_i}(y)=\exp(t_i(e^{iy}-1)) &\implies&F_{z_i\alpha_i}(y)=\exp(t_i(e^{iyz_i}-1))\\ &\implies&F_\alpha(y)=\exp\left(\sum_{i=1}^st_i(e^{iyz_i}-1)\right) \end{eqnarray*} [[/math]]


Thus we have indeed the same formula as in Proposition 2.26, as desired.

Summarizing, we have now a full generalization of the PLT. Getting back now to the Poisson and Bessel laws, with the above formalism in hand, we have:

Theorem

The Poisson and Bessel laws are compound Poisson laws,

[[math]] p_t=p_{t\delta_1}\quad,\quad b_t=p_{t\varepsilon} [[/math]]
where [math]\delta_1[/math] is the Dirac mass at [math]1[/math], and [math]\varepsilon[/math] is the centered Bernoulli law, [math]\varepsilon=(\delta_1+\delta_{-1})/2[/math].


Show Proof

We have two assertions here, the idea being as follows:


(1) The first assertion, regarding the Poisson law [math]p_t[/math], is clear from Definition 2.25, which for [math]\nu=t\delta_1[/math] takes the following form:

[[math]] p_\nu=\lim_{n\to\infty}\left(\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\,\delta_1\right)^{*n} [[/math]]


Indeed, according to the PLT, the limit on the right produces the Poisson law [math]p_t[/math], as desired. Alternatively, the result follows as well from Proposition 2.26, which gives:

[[math]] F_{p_\nu}(y)=\exp\left(t(e^{iy}-1)\right) [[/math]]


But the simplest way of proving the result is by invoking Theorem 2.27, which tells us that for [math]\nu=t\delta_1[/math] we have [math]p_\nu={\rm law}(\alpha)[/math], with [math]\alpha[/math] being Poisson [math](t)[/math].


(2) Regarding the second assertion, concerning [math]b_t[/math], the most convenient here is to use the formula of the Fourier transform found in the proof of Theorem 2.24, namely:

[[math]] F_{b_t}(y)=\exp\left(t\left(\frac{e^{iy}+e^{-iy}}{2}-1\right)\right) [[/math]]


On the other hand, the formula in Proposition 2.26 gives, for [math]\nu=t\varepsilon[/math]:

[[math]] F_{p_\nu}(y)=\exp\left(\frac{t}{2}(e^{iy}-1)+\frac{t}{2}(e^{-iy}-1)\right) [[/math]]


Thus, with [math]\nu=t\varepsilon[/math] we have [math]p_\nu=b_t[/math], as claimed.
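As an illustration of (2), the CPLT tells us that [math]b_t=p_{t\varepsilon}[/math] is the law of a difference of two independent Poisson [math](t/2)[/math] variables, so its density must be [math]e^{-t}f_r(t/2)[/math], with [math]f_r[/math] as before. A Monte Carlo sketch in Python, with the sample size and seed being arbitrary choices:

```python
import math
import random

def f(r, t, pmax=60):
    # f_r(t) = sum_p t^(|r|+2p) / ((|r|+p)! p!), as in the text
    r = abs(r)
    return sum(t**(r + 2*p) / (math.factorial(r + p) * math.factorial(p))
               for p in range(pmax))

def poisson_sample(lam, rng):
    # Knuth's multiplication method, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

rng = random.Random(42)
t, N = 2.0, 200_000
samples = [poisson_sample(t / 2, rng) - poisson_sample(t / 2, rng)
           for _ in range(N)]

emp = {r: sum(1 for x in samples if x == r) / N for r in (0, 1, -2)}
for r in (0, 1, -2):
    print(r, round(emp[r], 3), round(math.exp(-t) * f(r, t / 2), 3))
```

The empirical frequencies match the predicted atoms [math]e^{-t}f_r(t/2)[/math] up to the usual Monte Carlo error.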

As a conclusion to all this, we can add the Bessel laws [math]b_t[/math] to the family of “interesting” probability measures that we have, consisting so far of the real and complex Gaussian laws [math]g_t[/math] and [math]G_t[/math], and the Poisson laws [math]p_t[/math]. Indeed, the measures [math]p_t,b_t,g_t,G_t[/math] all appear via basic limiting theorems, they form convolution semigroups, and they are related to group theory as well, and more specifically to the groups [math]S_N,H_N,O_N,U_N[/math].


Still missing, however, for [math]b_t[/math] is a combinatorial formula for the moments, in the spirit of the formulae that we have for [math]p_t,g_t,G_t[/math]. This is something quite tricky, and the formula is as follows, with [math]P_{even}[/math] standing for the partitions all of whose blocks have even size:

[[math]] M_k(b_t)=\sum_{\pi\in P_{even}(k)}t^{|\pi|} [[/math]]


It is possible to prove this from what we have, for instance by taking the generating function of the above numbers, then converting this series into a Fourier one, with the conclusion that we obtain indeed the Fourier transform [math]F_{b_t}[/math] computed above. However, the computations are quite complex, already in the simplest case [math]t=1[/math], so instead of embarking on this, we will leave it for later, when we will have better tools.
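For small [math]k[/math], the moment formula can at least be verified by computer, against the moments computed directly from the density of [math]b_t[/math]. A brute-force Python sketch, enumerating set partitions, with the truncation bounds being ad hoc choices:

```python
import math

def set_partitions(elems):
    # all partitions of the list elems, built recursively
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                     # first in its own block
        for i in range(len(part)):                 # or first joins a block
            yield part[:i] + [[first] + part[i]] + part[i+1:]

def moment_partitions(k, t):
    # sum of t^|pi| over partitions of {1,...,k} with all blocks of even size
    return sum(t**len(p) for p in set_partitions(list(range(k)))
               if all(len(b) % 2 == 0 for b in p))

def f(r, t, pmax=60):
    r = abs(r)
    return sum(t**(r + 2*p) / (math.factorial(r + p) * math.factorial(p))
               for p in range(pmax))

def moment_density(k, t, rmax=40):
    # k-th moment of b_t, computed directly from its density
    return math.exp(-t) * sum(r**k * f(r, t / 2)
                              for r in range(-rmax, rmax + 1))

for k in (2, 4, 6):
    print(k, moment_partitions(k, 1.0), round(moment_density(k, 1.0), 6))
```

For instance at [math]k=4[/math], [math]t=1[/math], the even partitions of [math]\{1,2,3,4\}[/math] are the one-block partition and the 3 pairings, giving [math]M_4=t+3t^2=4[/math], which is indeed the fourth moment of [math]b_1[/math].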

2d. Bessel laws

Moving ahead, Theorem 2.28 suggests formulating the following definition, which unifies the Poisson laws with the real Bessel laws that we found above:

Definition

The Bessel law of level [math]s\in\mathbb N\cup\{\infty\}[/math] and parameter [math]t \gt 0[/math] is

[[math]] b^s_t=p_{t\varepsilon_s} [[/math]]
with [math]\varepsilon_s[/math] being the uniform measure on the [math]s[/math]-th roots of unity. The measures

[[math]] b_t=b_t^2\quad,\quad B_t=b^\infty_t [[/math]]
are called real Bessel law, and complex Bessel law.

Here we use the same convention as in the continuous case, namely that capital letters stand for complexifications. We will see in a moment that [math]B_t[/math] is indeed a complexification of [math]b_t[/math], in a suitable sense, so that the couple [math]b_t/B_t[/math] stands as the correct “discrete analogue” of the couple [math]g_t/G_t[/math]. Which is something quite interesting, philosophically speaking.


In practice now, we first have to study the measures [math]b^s_t[/math] in our standard way, meaning density, moments, Fourier, semigroup property, limiting theorems, and relation with group theory. In what regards limiting theorems, the measures [math]b^s_t[/math] appear by definition via the CPLT, so done with that. As a consequence of this, however, let us record:

Proposition

The Bessel laws are given by

[[math]] b^s_t={\rm law}\left(\sum_{k=1}^sw^ka_k\right) [[/math]]
where [math]a_1,\ldots,a_s[/math] are Poisson [math](t/s)[/math], independent, and [math]w=e^{2\pi i/s}[/math].


Show Proof

At [math]s=1,2[/math] this is something that we already know, coming from Theorem 2.28 and its proof. In general, this follows from Theorem 2.27.
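As a small check of this representation, with the Poisson parameters being [math]t/s[/math] as above, a short computation using independence shows that [math]E(a)=0[/math] and [math]E|a|^2=t[/math] for [math]s\geq2[/math], and this can be confirmed by Monte Carlo. A Python sketch, with the sample size and seed being arbitrary choices:

```python
import cmath
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def bessel_sample(s, t, rng):
    # a = sum_k w^k a_k, with a_k independent Poisson(t/s)
    w = cmath.exp(2j * math.pi / s)
    return sum(w**k * poisson_sample(t / s, rng) for k in range(1, s + 1))

rng = random.Random(0)
s, t, N = 3, 1.2, 100_000
samples = [bessel_sample(s, t, rng) for _ in range(N)]
mean = sum(samples) / N
mean_abs2 = sum(abs(a)**2 for a in samples) / N
print(abs(mean), mean_abs2)  # ~ 0 and ~ t
```

The identity [math]E|a|^2=t[/math] follows since the cross terms involve [math]\sum_{k}w^k=0[/math].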

Following [5], where the laws [math]b^s_t[/math] were introduced and studied, let us discuss now Fourier transforms and the semigroup property. Consider the level [math]s[/math] exponential function:

[[math]] \exp_sz=\sum_{k=0}^\infty\frac{z^{sk}}{(sk)!} [[/math]]


We have then the following formula, in terms of [math]w=e^{2\pi i/s}[/math]:

[[math]] \exp_sz=\frac{1}{s}\sum_{k=1}^s\exp(w^kz) [[/math]]


Observe that [math]\exp_1=\exp[/math] and [math]\exp_2=\cosh[/math]. We have the following result:

Theorem

The Fourier transform of [math]b^s_t[/math] is given by

[[math]] \log F^s_t(z)=t\left(\exp_sz-1\right) [[/math]]
where [math]\exp_sz[/math] is as above. In particular we have the formula

[[math]] b^s_t*b^s_{t'}=b^s_{t+t'} [[/math]]
so the measures [math]b^s_t[/math] form a one-parameter convolution semigroup.


Show Proof

Consider, as in Proposition 2.30, the variable [math]a=\sum_{k=1}^sw^ka_k[/math]. We have then the following Fourier transform computation:

[[math]] \log F_a(z) =\sum_{k=1}^s\log F_{a_k}(w^kz) =\sum_{k=1}^s\frac{t}{s}\left(\exp(w^kz)-1\right) [[/math]]


But this gives the following formula:

[[math]] \log F_a(z) =t\left(\left(\frac{1}{s}\sum_{k=1}^s\exp(w^kz)\right)-1\right) =t\left(\exp_sz-1\right) [[/math]]


Now since [math]b^s_t[/math] is the law of [math]a[/math], this gives the formula in the statement. As for the last assertion, this comes from the fact that the log of the Fourier transform is linear in [math]t[/math].

Still following [5], we can compute the density of [math]b^s_t[/math], as follows:

Theorem

We have the formula

[[math]] b^s_t=e^{-t}\sum_{c_1=0}^\infty\ldots\sum_{c_s=0}^\infty\frac{1}{c_1!\ldots c_s!}\,\left(\frac{t}{s}\right)^{c_1+\ldots+c_s}\delta\left(\sum_{k=1}^sw^kc_k\right) [[/math]]
where [math]w=e^{2\pi i/s}[/math], and the [math]\delta[/math] symbol is a Dirac mass.


Show Proof

The Fourier transform of the measure on the right is given by:

[[math]] \begin{eqnarray*} F(z) &=&e^{-t}\sum_{c_1=0}^\infty\ldots\sum_{c_s=0}^\infty\frac{1}{c_1!\ldots c_s!}\left(\frac{t}{s}\right)^{c_1+\ldots+c_s}F\delta\left(\sum_{k=1}^sw^kc_k\right)(z)\\ &=&e^{-t}\sum_{c_1=0}^\infty\ldots\sum_{c_s=0}^\infty\frac{1}{c_1!\ldots c_s!}\left(\frac{t}{s}\right)^{c_1+\ldots+c_s}\exp\left(\sum_{k=1}^sw^kc_kz\right)\\ &=&e^{-t}\sum_{r=0}^\infty\left(\frac{t}{s}\right)^r\sum_{\Sigma c_i=r}\frac{\exp\left(\sum_{k=1}^sw^kc_kz\right)}{c_1!\ldots c_s!} \end{eqnarray*} [[/math]]


We multiply now by [math]e^t[/math], and we compute the derivative with respect to [math]t[/math]:

[[math]] \begin{eqnarray*} (e^tF(z))' &=&\sum_{r=1}^\infty\frac{r}{s}\left(\frac{t}{s}\right)^{r-1}\sum_{\Sigma c_i=r}\frac{\exp\left(\sum_{k=1}^sw^kc_kz\right)}{c_1!\ldots c_s!}\\ &=&\frac{1}{s}\sum_{r=1}^\infty\left(\frac{t}{s}\right)^{r-1}\sum_{\Sigma c_i=r}\left(\sum_{l=1}^sc_l\right)\frac{\exp\left(\sum_{k=1}^sw^kc_kz\right)}{c_1!\ldots c_s!}\\ &=&\frac{1}{s}\sum_{r=1}^\infty\left(\frac{t}{s}\right)^{r-1}\sum_{\Sigma c_i=r}\sum_{l=1}^s\frac{\exp\left(\sum_{k=1}^sw^kc_kz\right)}{c_1!\ldots c_{l-1}!(c_l-1)!c_{l+1}!\ldots c_s!}\\ \end{eqnarray*} [[/math]]


By using the variable [math]u=r-1[/math], we obtain in this way:

[[math]] \begin{eqnarray*} (e^tF(z))' &=&\frac{1}{s}\sum_{u=0}^\infty\left(\frac{t}{s}\right)^u\sum_{\Sigma d_i=u}\sum_{l=1}^s\frac{\exp\left(w^lz+\sum_{k=1}^sw^kd_kz\right)}{d_1!\ldots d_s!}\\ &=&\left(\frac{1}{s}\sum_{l=1}^s\exp(w^lz)\right)\left(\sum_{u=0}^\infty\left(\frac{t}{s}\right)^u\sum_{\Sigma d_i=u}\frac{\exp\left(\sum_{k=1}^sw^kd_kz\right)}{d_1!\ldots d_s!}\right)\\ &=&(\exp_sz)(e^tF(z)) \end{eqnarray*} [[/math]]


On the other hand, [math]\Phi(t)=\exp(t\exp_sz)[/math] satisfies the same equation, namely:

[[math]] \Phi'(t)=(\exp_sz)\Phi(t) [[/math]]


Thus we have [math]e^tF(z)=\Phi(t)[/math], which gives the following formula:

[[math]] \begin{eqnarray*} \log F &=&\log(e^{-t}\exp(t\exp_sz))\\ &=&\log(\exp(t(\exp_sz-1)))\\ &=&t(\exp_sz-1) \end{eqnarray*} [[/math]]


Thus, we obtain the formula in the statement.
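Both the density formula and the Fourier formula can be cross-checked by computer, by truncating the indices [math]c_1,\ldots,c_s[/math] and comparing the resulting Fourier transform with [math]\exp(t(\exp_sz-1))[/math], with the convention [math]F_{\delta_x}(z)=e^{xz}[/math] used in the proof above. A Python sketch, with cmax being an ad hoc truncation bound:

```python
import cmath
import math
from itertools import product

def exp_s(s, z, kmax=40):
    return sum(z**(s * k) / math.factorial(s * k) for k in range(kmax))

def bessel_atoms(s, t, cmax=12):
    # atoms and weights of b^s_t, truncating each index c_k at cmax
    w = cmath.exp(2j * math.pi / s)
    atoms = {}
    for cs in product(range(cmax), repeat=s):
        weight = (math.exp(-t) * (t / s)**sum(cs)
                  / math.prod(math.factorial(c) for c in cs))
        atom = sum(w**(k + 1) * c for k, c in enumerate(cs))
        atoms[atom] = atoms.get(atom, 0.0) + weight
    return atoms

s, t = 3, 1.0
atoms = bessel_atoms(s, t)
print(round(sum(atoms.values()), 8))  # total mass, ~ 1

z = 0.4
lhs = sum(wt * cmath.exp(z * a) for a, wt in atoms.items())
rhs = cmath.exp(t * (exp_s(s, z) - 1))
print(abs(lhs - rhs))  # essentially 0
```

The total mass comes out as 1 up to the truncation error, and the two transforms agree, as they should by Theorem 2.31.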

Regarding now the questions which are left, namely moments and relation with groups, these are quite technical, and related. Let us start by discussing the relation with groups. Obviously we need here a generalization of the groups [math]S_N,H_N[/math], involving a parameter [math]s\in\mathbb N\cup\{\infty\}[/math], and the answer to this question is straightforward, as follows:

Definition

The complex reflection group [math]H_N^s\subset U_N[/math] is the group of permutations of [math]N[/math] copies of the [math]s[/math]-simplex. Equivalently, we have

[[math]] H_N^s=M_N(\mathbb Z_s\cup\{0\})\cap U_N [[/math]]
telling us that [math]H_N^s[/math] consists of the permutation-type matrices with [math]s[/math]-th roots of unity as entries. Also equivalently, we have the formula [math]H_N^s=\mathbb Z_s\wr S_N[/math].

Here the equivalence between the various viewpoints on [math]H_N^s[/math] comes as in Theorem 2.22, which corresponds to the case [math]s=2[/math]. In fact, the basic examples are as follows:


(1) [math]s=1[/math]. Here [math]H_N^1=S_N[/math], trivially, no matter which viewpoint we take.


(2) [math]s=2[/math]. Here [math]H_N^2=H_N[/math], with this coming from Theorem 2.22.


(3) [math]s=\infty[/math]. Here [math]H_N^\infty=K_N[/math] is an interesting group, and more on it later.


In general, the groups [math]H_N^s[/math] are well-known in group theory, the idea being that, up to a finite list of exceptional examples, the complex reflection groups are exactly these groups [math]H_N^s[/math], and their versions [math]H_N^{sd}[/math], obtained by adding the supplementary condition [math](\det U)^d=1[/math].
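For small [math]N,s[/math] the group [math]H_N^s[/math] can be enumerated by computer, confirming the matrix description above, as well as the count [math]|H_N^s|=s^NN![/math], coming from the wreath product formula [math]H_N^s=\mathbb Z_s\wr S_N[/math]. A Python sketch:

```python
import cmath
import math
from itertools import permutations, product

def reflection_group(N, s):
    # all N x N permutation-type matrices with s-th roots of unity as entries
    w = cmath.exp(2j * math.pi / s)
    group = []
    for sigma in permutations(range(N)):
        for phases in product(range(s), repeat=N):
            U = [[w**phases[j] if i == sigma[j] else 0 for j in range(N)]
                 for i in range(N)]
            group.append(U)
    return group

def is_unitary(U, eps=1e-10):
    # check that the rows of U are orthonormal, i.e. U U* = 1
    N = len(U)
    for i in range(N):
        for j in range(N):
            dot = sum(U[i][k] * U[j][k].conjugate() for k in range(N))
            if abs(dot - (1 if i == j else 0)) > eps:
                return False
    return True

N, s = 3, 3
G = reflection_group(N, s)
print(len(G), s**N * math.factorial(N))  # both 162
print(all(is_unitary(U) for U in G))     # True
```

At [math]s=1[/math] the same code simply enumerates the permutation matrices, that is, [math]S_N[/math].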


In relation with the Bessel laws, we have the following result, from [5]:

Theorem

For the complex reflection group [math]H_N^s[/math] we have, with [math]N\to\infty[/math]:

[[math]] \chi_t\sim b^s_t [[/math]]
Moreover, the asymptotic moments of this variable are the numbers

[[math]] M_k(b_t^s)=\sum_{\pi\in P^s(k)}t^{|\pi|} [[/math]]
where [math]P^s(k)[/math] are the partitions of [math]\{1,\ldots,k\}[/math] satisfying [math]\#\circ=\#\bullet\,(s)[/math], that is, having as many [math]\circ[/math] as [math]\bullet[/math], modulo [math]s[/math], in each block.


Show Proof

This is something quite long, that we will discuss in detail in chapters 3-4 below, when systematically doing representation theory, the idea being as follows:


(1) At [math]s=1[/math] the reflection group is [math]H_N^1=S_N[/math], the Bessel law is the Poisson law, [math]b_t^1=p_t[/math], and the formula [math]\chi_t\sim p_t[/math] with [math]N\to\infty[/math] is something that we know. As for the moment formula, where [math]P^1=P[/math], this is something that we know too.


(2) At [math]s=2[/math] the reflection group is [math]H_N^2=H_N[/math], the Bessel law is [math]b_t^2=b_t[/math], and the formula [math]\chi_t\sim b_t[/math] with [math]N\to\infty[/math] is something that we know. As for the moment formula, where [math]P^2=P_{even}[/math], this is something more technical, which remains to be discussed.


(3) At [math]s=\infty[/math] the reflection group is [math]H_N^\infty=K_N[/math], the Bessel law is [math]b_t^\infty=B_t[/math], and the formula [math]\chi_t\sim B_t[/math] with [math]N\to\infty[/math] is something that can be proved as for [math]S_N,H_N[/math]. As for the moment formula, where [math]P^\infty=\mathcal P_{even}[/math], this remains to be discussed.


(4) In the general case, where [math]s\in\mathbb N\cup\{\infty\}[/math], the formula [math]\chi_t\sim b_t^s[/math] with [math]N\to\infty[/math] can be established a bit like for [math]S_N,H_N[/math], and the moment formula is something quite technical. We will discuss both questions in chapter 4 below, using more advanced tools.
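In the simplest case [math]s=1[/math], [math]t=1[/math], the theorem says that the number of fixed points of a uniformly random permutation of a large set is asymptotically Poisson [math](1)[/math], and this is easy to see experimentally. A Monte Carlo sketch, with the sample size and seed being arbitrary choices:

```python
import math
import random

rng = random.Random(1)
N, trials = 100, 50_000
counts = {}
for _ in range(trials):
    perm = list(range(N))
    rng.shuffle(perm)
    fixed = sum(1 for i, x in enumerate(perm) if i == x)
    counts[fixed] = counts.get(fixed, 0) + 1

for k in range(4):
    print(k, round(counts.get(k, 0) / trials, 3),
          round(math.exp(-1) / math.factorial(k), 3))
```

The empirical frequencies match the Poisson [math](1)[/math] weights [math]e^{-1}/k![/math] up to Monte Carlo error.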

All the above is very nice, theoretically speaking, and we can now answer the various philosophical questions raised in the beginning of this section, as follows:

Conclusion

There is no complex Poisson law [math]P_t[/math], no set of matching partitions [math]\mathcal P[/math], and no complex analogue of [math]S_N[/math]. However, we have diagrams

[[math]] \xymatrix@R=40pt@C=40pt{ B_t\ar@{-}[r]\ar@{-}[d]&G_t\ar@{-}[d]\\ b_t\ar@{-}[r]&g_t}\qquad\ \qquad \xymatrix@R=40pt@C=40pt{ \mathcal P_{even}\ar@{-}[r]\ar@{-}[d]&\mathcal P_2\ar@{-}[d]\\ P_{even}\ar@{-}[r]&P_2}\qquad\ \qquad \xymatrix@R=40pt@C=40pt{ K_N\ar@{-}[r]\ar@{-}[d]&U_N\ar@{-}[d]\\ H_N\ar@{-}[r]&O_N} [[/math]]

with the laws [math]b_t,B_t[/math] being related to [math]P_{even},\mathcal P_{even}[/math] and to the groups [math]H_N,K_N[/math] in the standard way, and these laws [math]b_t,B_t[/math] are the true discrete analogues of [math]g_t,G_t[/math].

Summarizing, the laws that we have studied so far in this book, namely the Gaussian and Poisson laws and their various versions, all have interesting combinatorics. All the above was an introduction to this combinatorics, following the classical theory, and [5], [4], [2] and related papers. We will be back to these laws and results on numerous occasions.


Let us also mention that all the above is in fact just half of the story, because all 4 measures in Conclusion 2.35 are of “classical” nature, and there will be 4 more measures, appearing as “free versions” of these. So, expect our final result on the subject to be a cube formed by 8 measures, coming with accompanying cubes of partitions and groups. But more on this later, after some substantial work, towards the end of this book.

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].

References

  1. T. Banica and B. Collins, Integration over compact quantum groups, Publ. Res. Inst. Math. Sci. 43 (2007), 277--302.
  2. B. Collins and P. Śniady, Integration with respect to the Haar measure on unitary, orthogonal and symplectic groups, Comm. Math. Phys. 264 (2006), 773--795.
  3. D. Weingarten, Asymptotic behavior of group integrals in the limit of infinite rank, J. Math. Phys. 19 (1978), 999--1001.
  4. T. Banica, J. Bichon and B. Collins, The hyperoctahedral quantum group, J. Ramanujan Math. Soc. 22 (2007), 345--384.
  5. T. Banica, S.T. Belinschi, M. Capitaine and B. Collins, Free Bessel laws, Canad. J. Math. 63 (2011), 3--37.