Poisson limits


11a. Poisson limits

We have seen that free probability leads to two key limiting theorems, namely the free analogues of the CLT and CCLT. The limiting measures are the Wigner semicircle laws [math]\gamma_t[/math] and the Voiculescu circular laws [math]\Gamma_t[/math]. Together with the Gaussian laws [math]g_t[/math] and [math]G_t[/math] coming from the classical CLT and CCLT, these laws form a square diagram, as follows:

[[math]] \xymatrix@R=50pt@C=50pt{ \gamma_t\ar@{-}[r]\ar@{-}[d]&\Gamma_t\ar@{-}[d]\\ g_t\ar@{-}[r]&G_t } [[/math]]


Motivated by this, in this chapter we develop more free limiting theorems. First, we will find a free analogue of the PLT, with the corresponding limiting measures, which appear as free analogues of the Poisson laws [math]p_t[/math], being the Marchenko-Pastur laws [math]\pi_t[/math]. This will lead to an extension of the above square diagram into a rectangle, as follows:

[[math]] \xymatrix@R=50pt@C=50pt{ \pi_t\ar@{-}[r]\ar@{-}[d]&\gamma_t\ar@{-}[r]\ar@{-}[d]&\Gamma_t\ar@{-}[d]\\ p_t\ar@{-}[r]&g_t\ar@{-}[r]&G_t } [[/math]]


More generally, we will find a free analogue of the compound Poisson limit theorem (CPLT), which we know from chapter 2. At the level of the philosophy, and of the above diagram, there are no complex analogues of [math]p_t,\pi_t[/math]. However, by using certain measures found via the classical and free CPLT, namely the real and purely complex Bessel laws [math]b_t,B_t[/math] discussed in chapter 2, and their free analogues [math]\beta_t,\mathfrak B_t[/math] to be discussed here, we will be able to modify and then fold the diagram, so as to complete it into a cube, as follows:

[[math]] \xymatrix@R=20pt@C=22pt{ &\mathfrak B_t\ar@{-}[rr]\ar@{-}[dd]&&\Gamma_t\ar@{-}[dd]\\ \beta_t\ar@{-}[rr]\ar@{-}[dd]\ar@{-}[ur]&&\gamma_t\ar@{-}[dd]\ar@{-}[ur]\\ &B_t\ar@{-}[rr]\ar@{-}[uu]&&G_t\ar@{-}[uu]\\ b_t\ar@{-}[uu]\ar@{-}[ur]\ar@{-}[rr]&&g_t\ar@{-}[uu]\ar@{-}[ur] } [[/math]]


This is of course quite nice, theoretically speaking, because it leads to a kind of 3D orientation inside classical and free probability, which is something very useful.


Getting started now, we would first like to have a free analogue of the Poisson Limit Theorem (PLT). Although this is elementary, given what we have, it was not done by Voiculescu himself, does not appear in the foundational book [1], and was only explained later, in the book of Hiai and Petz [2]. The statement is as follows:

Theorem (Free PLT)

The following limit converges, for any [math]t \gt 0[/math],

[[math]] \lim_{n\to\infty}\left(\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\delta_1\right)^{\boxplus n} [[/math]]
and we obtain the Marchenko-Pastur law of parameter [math]t[/math],

[[math]] \pi_t=\max(1-t,0)\delta_0+\frac{\sqrt{4t-(x-1-t)^2}}{2\pi x}\,dx [[/math]]
also called free Poisson law of parameter [math]t[/math].


Show Proof

Consider the measure in the statement, under the convolution sign:

[[math]] \eta=\left(1-\frac{t}{n}\right)\delta_0+\frac{t}{n}\delta_1 [[/math]]


The Cauchy transform of this measure is easy to compute, and is given by:

[[math]] G_\eta(\xi)=\left(1-\frac{t}{n}\right)\frac{1}{\xi}+\frac{t}{n}\cdot\frac{1}{\xi-1} [[/math]]


In order to prove the result, we want to compute the following [math]R[/math]-transform:

[[math]] R =R_{\eta^{\boxplus n}}(y) =nR_\eta(y) [[/math]]


According to the formula of [math]G_\eta[/math], the equation for this function [math]R[/math] is as follows:

[[math]] \left(1-\frac{t}{n}\right)\frac{1}{1/y+R/n}+\frac{t}{n}\cdot\frac{1}{1/y+R/n-1}=y [[/math]]


By multiplying both sides by [math]n/y[/math], this equation can be written as:

[[math]] \frac{t+yR}{1+yR/n}=\frac{t}{1+yR/n-y} [[/math]]


With [math]n\to\infty[/math] things simplify, and we obtain the following formula:

[[math]] t+yR=\frac{t}{1-y} [[/math]]


Thus we have the following formula, for the [math]R[/math]-transform that we are interested in:

[[math]] R=\frac{t}{1-y} [[/math]]


But this gives the result, since [math]R_{\pi_t}[/math] is elementary to compute from what we have, by “doubling” the results for the Wigner law [math]\gamma_t[/math], and is given by the same formula.
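As a quick sanity check of this computation, one can verify symbolically that [math]R(z)=t/(1-z)[/math] is compatible with the quadratic equation [math]\xi G^2+1=(\xi+1-t)G[/math] satisfied by the Cauchy transform of [math]\pi_t[/math], which we will establish later, in the proof of Proposition 11.10. A minimal sympy sketch, with the variable names being ours:

```python
import sympy as sp

z, t = sp.symbols('z t')

# candidate R-transform of pi_t, as found above
R = t/(1 - z)

# the inverse Cauchy transform is K(z) = R(z) + 1/z, so plugging
# xi = K(z), G(xi) = z into the quadratic of G must give zero
xi = R + 1/z
print(sp.simplify(xi*z**2 + 1 - (xi + 1 - t)*z))  # prints: 0
```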

As in the continuous case, most of the basic theory of [math]\pi_t[/math] was already done before, in chapters 6-7, with all this partly coming from the theory of [math]SO_3[/math], at [math]t=1[/math]. One thing which was missing there, however, was the understanding of how the law [math]\pi_t[/math], with parameter [math]t \gt 0[/math], exactly appears out of [math]\pi_1[/math]. We can now solve this question:

Theorem

The Marchenko-Pastur laws have the property

[[math]] \pi_s\boxplus\pi_t=\pi_{s+t} [[/math]]
so they form a [math]1[/math]-parameter semigroup with respect to free convolution.


Show Proof

This follows either from Theorem 11.1, or from the fact that the [math]R[/math]-transform of [math]\pi_t[/math], computed in the proof of Theorem 11.1, is linear in [math]t[/math].

All this is very nice, conceptually speaking, and we can now summarize the various discrete probability results that we have, classical and free, as follows:

Theorem

The Poisson laws [math]p_t[/math] and the Marchenko-Pastur laws [math]\pi_t[/math], given by

[[math]] p_t=e^{-t}\sum_k\frac{t^k}{k!}\,\delta_k [[/math]]

[[math]] \pi_t=\max(1-t,0)\delta_0+\frac{\sqrt{4t-(x-1-t)^2}}{2\pi x}\,dx [[/math]]
have the following properties:

  • They appear via the PLT, and the free PLT.
  • They form semigroups with respect to [math]*[/math] and [math]\boxplus[/math].
  • Their transforms are [math]\log F_{p_t}(x)=t(e^{ix}-1)[/math], [math]R_{\pi_t}(x)=t/(1-x)[/math].
  • Their moments are [math]M_k=\sum_{\pi\in D(k)}t^{|\pi|}[/math], with [math]D=P,NC[/math].


Show Proof

These are all results that we already know, from here and from the previous chapters. To be more precise:


(1) The PLT is from chapter 2, and the FPLT is from here.


(2) The semigroup properties are from chapter 2, and from here.


(3) The formula for [math]F_{p_t}[/math] is from chapter 2, and the one for [math]R_{\pi_t}[/math], from here.


(4) The moment formulae follow from the formulae of functional transforms.
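Regarding (4), in the free case the noncrossing partitions of [math]\{1,\ldots,k\}[/math] with [math]b[/math] blocks are counted by the Narayana numbers, and the moment formula for [math]\pi_t[/math] can be tested numerically against the density. A minimal Python sketch, assuming numpy and scipy are available, with the helper names being ours:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

t = 0.5  # any t > 0; for t < 1 the atom at 0 does not contribute to moments k >= 1

def mp_moment(k):
    # k-th moment of pi_t, integrating the density on [(1-sqrt t)^2, (1+sqrt t)^2]
    a, b = (1 - np.sqrt(t))**2, (1 + np.sqrt(t))**2
    return quad(lambda x: x**k*np.sqrt(4*t - (x - 1 - t)**2)/(2*np.pi*x), a, b)[0]

def nc_moment(k):
    # sum of t^{|pi|} over noncrossing partitions of {1,...,k}, via the
    # Narayana count N(k,b) = C(k,b)C(k,b-1)/k of such partitions with b blocks
    return sum(comb(k, b)*comb(k, b - 1)//k*t**b for b in range(1, k + 1))

for k in range(1, 6):
    print(k, round(mp_moment(k), 6), round(nc_moment(k), 6))  # the columns agree
```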

We can in fact merge this with our previous continuous results, and we obtain:

Theorem

The moments of the various central limiting measures, namely

[[math]] \xymatrix@R=45pt@C=45pt{ \pi_t\ar@{-}[r]\ar@{-}[d]&\gamma_t\ar@{-}[r]\ar@{-}[d]&\Gamma_t\ar@{-}[d]\\ p_t\ar@{-}[r]&g_t\ar@{-}[r]&G_t } [[/math]]
are always given by the same formula, involving partitions, namely

[[math]] M_k=\sum_{\pi\in D(k)}t^{|\pi|} [[/math]]
where the sets of partitions [math]D(k)[/math] in question are respectively

[[math]] \xymatrix@R=45pt@C=45pt{ NC\ar[d]&NC_2\ar[d]\ar[l]&\mathcal{NC}_2\ar[l]\ar[d]\\ P&P_2\ar[l]&\mathcal P_2\ar[l] } [[/math]]
and where [math]|.|[/math] is the number of blocks.


Show Proof

This follows indeed by putting together the various results that we have, from chapter 10 for the square on the right, and from here for the edge on the left.

We will see later some more conceptual explanations for all this, featuring classical and free cumulants, classical and free quantum groups, and more.


Moving ahead now, let us try to find a free analogue of the CPLT. We will follow the CPLT material from chapter 2, by performing modifications where needed, so as to replace everywhere classical probability with free probability. Let us start with the following straightforward definition, similar to the one from the classical case:

Definition

Associated to any compactly supported positive measure [math]\rho[/math] on [math]\mathbb C[/math] is the probability measure

[[math]] \pi_\rho=\lim_{n\to\infty}\left(\left(1-\frac{c}{n}\right)\delta_0+\frac{1}{n}\rho\right)^{\boxplus n} [[/math]]
where [math]c=mass(\rho)[/math], called compound free Poisson law.

In what follows we will be mostly interested in the case where [math]\rho[/math] is discrete, as is for instance the case for the measure [math]\rho=t\delta_1[/math] with [math]t \gt 0[/math], which produces the free Poisson laws. The following result allows one to detect compound free Poisson laws:

Proposition

For a discrete measure, written as

[[math]] \rho=\sum_{i=1}^sc_i\delta_{z_i} [[/math]]
with [math]c_i \gt 0[/math] and [math]z_i\in\mathbb C[/math], we have the following formula,

[[math]] R_{\pi_\rho}(y)=\sum_{i=1}^s\frac{c_iz_i}{1-yz_i} [[/math]]
where [math]R[/math] denotes as usual the Voiculescu [math]R[/math]-transform.


Show Proof

In order to prove this result, let [math]\eta_n[/math] be the measure appearing in Definition 11.5, under the free convolution sign, namely:

[[math]] \eta_n=\left(1-\frac{c}{n}\right)\delta_0+\frac{1}{n}\rho [[/math]]


The Cauchy transform of [math]\eta_n[/math] is then given by the following formula:

[[math]] G_{\eta_n}(\xi)=\left(1-\frac{c}{n}\right)\frac{1}{\xi}+\frac{1}{n}\sum_{i=1}^s\frac{c_i}{\xi-z_i} [[/math]]


Consider now the [math]R[/math]-transform of the measure [math]\eta_n^{\boxplus n}[/math], which is given by:

[[math]] R_{\eta_n^{\boxplus n}}(y)=nR_{\eta_n}(y) [[/math]]


By using the general theory of the [math]R[/math]-transform, from chapter 9, the above formula of [math]G_{\eta_n}[/math] shows that the equation for [math]R=R_{\eta_n^{\boxplus n}}[/math] is as follows:

[[math]] \begin{eqnarray*} &&\left(1-\frac{c}{n}\right)\frac{1}{1/y+R/n}+\frac{1}{n}\sum_{i=1}^s\frac{c_i}{1/y+R/n-z_i}=y\\ &\implies&\left(1-\frac{c}{n}\right)\frac{1}{1+yR/n}+\frac{1}{n}\sum_{i=1}^s\frac{c_i}{1+yR/n-yz_i}=1 \end{eqnarray*} [[/math]]


Now multiplying by [math]n[/math], then rearranging the terms, and letting [math]n\to\infty[/math], we get:

[[math]] \begin{eqnarray*} \frac{c+yR}{1+yR/n}=\sum_{i=1}^s\frac{c_i}{1+yR/n-yz_i} &\implies&c+yR_{\pi_\rho}(y)=\sum_{i=1}^s\frac{c_i}{1-yz_i}\\ &\implies&R_{\pi_\rho}(y)=\sum_{i=1}^s\frac{c_iz_i}{1-yz_i} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
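As a quick symbolic check of the last implication above, here is a minimal sympy sketch, for [math]s=2[/math] atoms, with symbolic weights and positions:

```python
import sympy as sp

y, c1, c2, z1, z2 = sp.symbols('y c1 c2 z1 z2')
c = c1 + c2  # total mass of rho

# R(y) = sum_i c_i z_i/(1 - y z_i) must solve c + y R = sum_i c_i/(1 - y z_i)
R = c1*z1/(1 - y*z1) + c2*z2/(1 - y*z2)
print(sp.simplify(c + y*R - c1/(1 - y*z1) - c2/(1 - y*z2)))  # prints: 0
```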

We have as well the following result, providing an alternative to Definition 11.5, and which, together with Definition 11.5, can be thought of as being the free CPLT:

Theorem

For a discrete measure, written as

[[math]] \rho=\sum_{i=1}^sc_i\delta_{z_i} [[/math]]
with [math]c_i \gt 0[/math] and [math]z_i\in\mathbb C[/math], we have the formula

[[math]] \pi_\rho={\rm law}\left(\sum_{i=1}^sz_i\alpha_i\right) [[/math]]
where the variables [math]\alpha_i[/math] are free Poisson[math](c_i)[/math], free.


Show Proof

Let [math]\alpha[/math] be the sum of free Poisson variables in the statement:

[[math]] \alpha=\sum_{i=1}^sz_i\alpha_i [[/math]]


In order to prove the result, we will show that the [math]R[/math]-transform of [math]\alpha[/math] is given by the formula in Proposition 11.6. By using the standard dilation property of the [math]R[/math]-transform, namely [math]R_{\lambda a}(y)=\lambda R_a(\lambda y)[/math], we have the following computation:

[[math]] \begin{eqnarray*} R_{\alpha_i}(y)=\frac{c_i}{1-y} &\implies&R_{z_i\alpha_i}(y)=\frac{c_iz_i}{1-yz_i}\\ &\implies&R_\alpha(y)=\sum_{i=1}^s\frac{c_iz_i}{1-yz_i} \end{eqnarray*} [[/math]]


Thus we have the same formula as in Proposition 11.6, and we are done.

All the above is quite general, and in practice, in order to obtain concrete results, the simplest measures that we can use as “input” for the free CPLT are the same measures as those that we used in the classical case, namely the measures of type [math]\rho=t\varepsilon_s[/math], with [math]t \gt 0[/math], and with [math]\varepsilon_s[/math] being the uniform measure on the [math]s[/math]-th roots of unity. We discuss this in what follows, by following the classical material from chapter 2, and the paper [3].


Let us also mention that we already met in fact the compound free Poisson laws in chapters 7-8, when discussing the asymptotic distributions of the block-modified Wishart matrices. We will clarify this as well, at the end of the present chapter.

11b. Bessel laws

As mentioned above, for various reasons, including the construction of the “standard cube” discussed in the beginning of this chapter, we are interested in the applications of the free CPLT with the “simplest” input measures, with these simplest measures being those of type [math]\rho=t\varepsilon_s[/math], with [math]t \gt 0[/math], and with [math]\varepsilon_s[/math] being the uniform measure on the [math]s[/math]-th roots of unity. We are led in this way to the following class of measures:

Definition

The Bessel and free Bessel laws, depending on parameters [math]s\in\mathbb N\cup\{\infty\}[/math] and [math]t \gt 0[/math], are the following compound Poisson and free Poisson laws,

[[math]] b^s_t=p_{t\varepsilon_s}\quad,\quad \beta^s_t=\pi_{t\varepsilon_s} [[/math]]
with [math]\varepsilon_s[/math] being the uniform measure on the [math]s[/math]-th roots of unity. In particular:

  • At [math]s=1[/math] we recover the Poisson laws [math]p_t,\pi_t[/math].
  • At [math]s=2[/math] we have the real Bessel laws [math]b_t,\beta_t[/math].
  • At [math]s=\infty[/math] we have the complex Bessel laws [math]B_t,\mathfrak B_t[/math].

The terminology here comes from the fact, which we know from chapter 2, that the density of the measure [math]b_t[/math], appearing at [math]s=2[/math], is a Bessel function of the first kind. This was something first discovered in [4], and we refer to that paper, and to the subsequent literature, including [3], for more comments on this phenomenon.


Our next task will be that of upgrading our results about the free Poisson law [math]\pi_t[/math] to this setting, using a parameter [math]s\in\mathbb N\cup\{\infty\}[/math]. First, we have the following result:

Theorem

The free Bessel laws have the property

[[math]] \beta^s_t\boxplus\beta^s_{t'}=\beta^s_{t+t'} [[/math]]
so they form a [math]1[/math]-parameter semigroup with respect to free convolution.


Show Proof

This follows indeed from the fact that the [math]R[/math]-transform of [math]\beta^s_t[/math] is linear in [math]t[/math], which is something that we already know, from the above.

Let us discuss now, following the paper [3], some more advanced aspects of the free Bessel laws. Given a real probability measure [math]\mu[/math], one can ask whether the convolution powers [math]\mu^{\boxtimes s}[/math] and [math]\mu^{\boxplus t}[/math] exist, for various values of the parameters [math]s,t \gt 0[/math]. For the free Poisson law, the answer to these questions is as follows:

Proposition

The free convolution powers of the free Poisson law

[[math]] \pi^{\boxtimes s}\quad,\quad \pi^{\boxplus t} [[/math]]
exist for any positive values of the parameters, [math]s,t \gt 0[/math].


Show Proof

We have two measures to be studied, the idea being as follows:


(1) The free Poisson law [math]\pi[/math] is by definition the [math]t=1[/math] particular case of the free Poisson law of parameter [math]t[/math], or Marchenko-Pastur law of parameter [math]t \gt 0[/math], given by:

[[math]] \pi_t=\max (1-t,0)\delta_0+\frac{\sqrt{4t-(x-1-t)^2}}{2\pi x}\,dx [[/math]]


The Cauchy transform of this measure is given by:

[[math]] G(\xi)=\frac{(\xi+1-t)+\sqrt{(\xi+1-t)^2-4\xi}}{2\xi} [[/math]]


We can compute now the [math]R[/math] transform, by proceeding as follows:

[[math]] \begin{eqnarray*} \xi G^2+1=(\xi+1-t)G &\implies&Kz^2+1=(K+1-t)z\\ &\implies&Rz^2+z+1=(R+1-t)z+1\\ &\implies&Rz=R-t\\ &\implies&R=t/(1-z) \end{eqnarray*} [[/math]]


The last expression being linear in [math]t[/math], the measures [math]\pi_t[/math] form a semigroup with respect to free convolution. Thus we have [math]\pi_t=\pi^{\boxplus t}[/math], which proves the second assertion.


(2) Regarding now the measure [math]\pi^{\boxtimes s}[/math], there is no explicit formula for its density. However, we can prove that this measure exists, by using some abstract results. Indeed, we have the following computation for the [math]S[/math] transform of [math]\pi_t[/math]:

[[math]] \begin{eqnarray*} \xi G^2+1=(\xi+1-t)G &\implies&zf^2+1=(1+z-zt)f\\ &\implies&z(\psi+1)^2+1=(1+z-zt)(\psi+1)\\ &\implies&\chi(z+1)^2+1=(1+\chi-\chi t)(z+1)\\ &\implies&\chi(z+1)(t+z)=z\\ &\implies&S=1/(t+z) \end{eqnarray*} [[/math]]


In particular at [math]t=1[/math] we have the following formula:

[[math]] S(z)=\frac{1}{1+z} [[/math]]


Thus the [math]\Sigma[/math] transform of [math]\pi[/math], which is by definition [math]\Sigma(z)=S(z/(1-z))[/math], is given by:

[[math]] \Sigma(z)=1-z [[/math]]


On the other hand, it is well-known from the general theory of the [math]S[/math]-transform that the [math]\Sigma[/math] transforms of the probability measures which are [math]\boxtimes[/math]-infinitely divisible are the functions of the form [math]\Sigma(z)=e^{v(z)}[/math], where [math]v:\mathbb C-[0,\infty)\to\mathbb C[/math] is analytic, satisfying:

[[math]] v(\bar{z})=\bar{v}(z)\quad,\quad v(\mathbb C^+)\subset\mathbb C^- [[/math]]


Now in the case of the free Poisson law, the function [math]v(z)=\log (1-z)[/math] satisfies these properties, and we are led to the conclusion in the statement. See [3].
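As a sanity check for the above [math]S[/math]-transform computation, one can verify symbolically that [math]S=1/(t+z)[/math], which corresponds to [math]\chi(z)=zS(z)/(1+z)=z/((1+z)(t+z))[/math], is compatible with the quadratic equation appearing in the chain above. A minimal sympy sketch:

```python
import sympy as sp

z, t = sp.symbols('z t')

# chi corresponding to S = 1/(t+z), via chi(z) = z*S(z)/(1+z)
chi = z/((1 + z)*(t + z))

# the chain above gives chi*(z+1)^2 + 1 = (1 + chi - chi*t)*(z+1)
print(sp.simplify(chi*(z + 1)**2 + 1 - (1 + chi - chi*t)*(z + 1)))  # prints: 0
```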

Getting now towards the free Bessel laws, we have the following remarkable identity, in relation with the above convolution powers of [math]\pi[/math], also established in [3]:

Theorem

We have the formula

[[math]] \pi^{\boxtimes s-1}\boxtimes\pi^{\boxplus t} =((1-t)\delta_0+t\delta_1)\boxtimes\pi^{\boxtimes s} [[/math]]
valid for any [math]s\geq 1[/math], and any [math]t\in (0,1][/math].


Show Proof

We know from the previous proof that the [math]S[/math] transform of the free Poisson law [math]\pi[/math] is given by the following formula:

[[math]] S_1(z)=\frac{1}{1+z} [[/math]]


We also know from there that the [math]S[/math] transform of [math]\pi^{\boxplus t}[/math] is given by:

[[math]] S_t(z)=\frac{1}{t+z} [[/math]]


Thus the measure on the left in the statement has the following [math]S[/math] transform:

[[math]] S(z)=\frac{1}{(1+z)^{s-1}}\cdot\frac{1}{t+z} [[/math]]


The [math]S[/math] transform of [math]\alpha_t=(1-t)\delta_0+t\delta_1[/math] can be computed as follows:

[[math]] \begin{eqnarray*} f=1+tz/(1-z) &\implies&\psi=tz/(1-z)\\ &\implies&z=t\chi/(1-\chi)\\ &\implies&\chi=z/(t+z)\\ &\implies& S=(1+z)/(t+z) \end{eqnarray*} [[/math]]


Thus the measure on the right in the statement has the following [math]S[/math] transform:

[[math]] S(z)=\frac{1}{(1+z)^s}\cdot\frac{1+z}{t+z} [[/math]]


Thus the [math]S[/math] transforms of our two measures are the same, and we are done.

The relation with the free Bessel laws, as previously defined, comes from:

Theorem

The free Bessel law is the real probability measure [math]\beta^s_t[/math], with

[[math]] (s,t)\in (0,\infty)\times(0,\infty)-(0,1)\times (1,\infty) [[/math]]
defined concretely as follows:

  • For [math]s\geq 1[/math] we set [math]\beta^s_t=\pi^{\boxtimes s-1}\boxtimes\pi^{\boxplus t}[/math].
  • For [math]t\leq 1[/math] we set [math]\beta^s_t=((1-t)\delta_0+t\delta_1)\boxtimes\pi^{\boxtimes s}[/math].


Show Proof

This follows indeed from the above results. To be more precise, these results show that the measures constructed in the statement exist indeed, and coincide with the free Bessel laws, as previously defined, as compound free Poisson laws.

In view of the above, we can regard the free Bessel law [math]\beta^s_t[/math] as being a natural two-parameter generalization of the free Poisson law [math]\pi[/math], in connection with Voiculescu's free convolution operations [math]\boxtimes[/math] and [math]\boxplus[/math]. Observe that we have the following formulae:

[[math]] \begin{cases} \beta^s_1=\pi^{\boxtimes s}\\ \beta^1_t=\pi^{\boxplus t} \end{cases} [[/math]]


As a comment here, concerning the precise range of the parameters [math](s,t)[/math], the above results can probably be improved. The point is that the measure [math]\beta^s_t[/math] still exists for certain points in the critical rectangle [math](0,1)\times (1,\infty)[/math], but not for all of them. To be more precise, the known numeric checks for this question, discussed in [3], show that the critical values of [math](s,t)[/math] tend to form an algebraic curve contained in [math](0,1)\times (1,\infty)[/math], having [math]s=1[/math] as an asymptote. However, the case we are most interested in is [math]t\in (0,1][/math], and here there is no problem, because [math]\beta^s_t[/math] exists for any [math]s \gt 0[/math]. Thus, we will stop this discussion here.


As before, following [3], we have the following result:

Proposition

The Stieltjes transform of [math]\beta^s_t[/math] satisfies:

[[math]] f=1+zf^s(f+t-1) [[/math]]
In particular at [math]t=1[/math] we have the formula [math]f=1+zf^{s+1}[/math].


Show Proof

We have the following computation:

[[math]] \begin{eqnarray*} S=\frac{1}{(1+z)^{s-1}}\cdot\frac{1}{t+z} &\implies&\chi=\frac{z}{(1+z)^s}\cdot\frac{1}{t+z}\\ &\implies&z=\frac{\psi}{(1+\psi)^s}\cdot\frac{1}{t+\psi}\\ &\implies&z=\frac{f-1}{f^s}\cdot\frac{1}{t+f-1} \end{eqnarray*} [[/math]]


Thus, we obtain the equation in the statement.

At [math]t=1[/math], we have in fact the following result, also from [3], which is more explicit:

Theorem

The Stieltjes transform of [math]\beta^s_1[/math] with [math]s\in\mathbb N[/math] is given by

[[math]] f(z)=\sum_{p\in NC_s}z^{k(p)} [[/math]]
where [math]NC_s[/math] is the set of noncrossing partitions all of whose blocks have size a multiple of [math]s[/math], and where [math]k:NC_s\to\mathbb N[/math] is the normalized length.


Show Proof

With the notation [math]C_k=\# NC_s(k)[/math], where [math]NC_s(k)\subset NC_s[/math] consists of the partitions of [math]\{1,\ldots,sk\}[/math] belonging to [math]NC_s[/math], the sum on the right is:

[[math]] f(z)=\sum_kC_{k}z^k [[/math]]


For a given partition [math]p\in NC_s(k+1)[/math] we can consider the last [math]s[/math] legs of the first block, and make cuts at the right of them. This gives a decomposition of [math]p[/math] into [math]s+1[/math] partitions in [math]NC_s[/math], and we obtain in this way the following recurrence formula for the numbers [math]C_k[/math]:

[[math]] C_{k+1}=\sum_{\Sigma k_i=k}C_{k_0}\ldots C_{k_s} [[/math]]


By multiplying now by [math]z^{k+1}[/math], and then summing over [math]k[/math], we obtain that the generating series of these numbers [math]C_k[/math] satisfies the following equation:

[[math]] f-1=zf^{s+1} [[/math]]


But this is the equation found in Proposition 11.13, so we obtain the result.
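For the skeptics, the count of such partitions can also be checked by brute force, at least for small [math]sk[/math]. A minimal Python sketch, assuming nothing beyond the definitions, with the helper names being ours:

```python
from itertools import combinations

def nc_partitions(points):
    # all noncrossing partitions of an ordered list of points: choose the block
    # of the first point; the other blocks must fit inside the created gaps
    if not points:
        yield []
        return
    rest = points[1:]
    for r in range(len(rest) + 1):
        for idx in combinations(range(len(rest)), r):
            block = [points[0]] + [rest[i] for i in idx]
            gaps, prev = [], -1
            for i in list(idx) + [len(rest)]:
                gaps.append(rest[prev + 1:i])
                prev = i
            for sub in glue(gaps):
                yield [block] + sub

def glue(gaps):
    # cartesian product of noncrossing partitions of each gap
    if not gaps:
        yield []
        return
    for p in nc_partitions(gaps[0]):
        for q in glue(gaps[1:]):
            yield p + q

s = 2
for k in range(1, 4):
    count = sum(1 for p in nc_partitions(list(range(s*k)))
                if all(len(B) % s == 0 for B in p))
    print(k, count)  # prints 1, 3, 12: the first coefficients of f, at s=2
```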

Next, still following [3], we have the following result, dealing with the case [math]t \gt 0[/math]:

Theorem

The Stieltjes transform of [math]\beta^s_t[/math] with [math]s\in\mathbb N[/math] is given by:

[[math]] f(z)=\sum_{p\in NC_s}z^{k(p)}t^{b(p)} [[/math]]
where [math]k,b:NC_s\to\mathbb N[/math] are the normalized length, and the number of blocks.


Show Proof

With notations from the previous proof, let [math]F_{kb}[/math] be the number of partitions in [math]NC_s(k)[/math] having [math]b[/math] blocks, and set [math]F_{kb}=0[/math] for other integer values of [math]k,b[/math]. All sums will be over integer indices [math]\geq 0[/math]. The sum on the right in the statement is then:

[[math]] f(z)=\sum_{kb}F_{kb}z^kt^b [[/math]]


The recurrence formula for the numbers [math]C_k[/math] in the previous proof becomes:

[[math]] \sum_bF_{k+1,b}=\sum_{\Sigma k_i=k}\sum_{b_i}F_{k_0b_0}\ldots F_{k_sb_s} [[/math]]


In this formula, each term contributes to [math]F_{k+1,b}[/math] with [math]b=\Sigma b_i[/math], except for those of the form [math]F_{00}F_{k_1b_1}\ldots F_{k_sb_s}[/math], which contribute to [math]F_{k+1,b+1}[/math]. We get:

[[math]] \begin{eqnarray*} F_{k+1,b}&=&\sum_{\Sigma k_i=k}\sum_{\Sigma b_i=b}F_{k_0b_0}\ldots F_{k_sb_s}\\ &+&\sum_{\Sigma k_i=k}\sum_{\Sigma b_i=b-1}F_{k_1b_1}\ldots F_{k_sb_s}\\ &-&\sum_{\Sigma k_i=k}\sum_{\Sigma b_i=b}F_{k_1b_1}\ldots F_{k_sb_s} \end{eqnarray*} [[/math]]


This gives the following formula for the polynomials [math]P_k=\sum_bF_{kb}t^b[/math]:

[[math]] P_{k+1}=\sum_{\Sigma k_i=k}P_{k_0}\ldots P_{k_s}+(t-1)\sum_{\Sigma k_i=k}P_{k_1}\ldots P_{k_s} [[/math]]


Consider now the following generating function:

[[math]] f=\sum_kP_kz^k [[/math]]


In terms of this generating function, we get the following equation:

[[math]] f-1=zf^{s+1}+(t-1)zf^s [[/math]]


But this is the same as the equation of the Stieltjes transform of [math]\beta^s_t[/math], namely:

[[math]] f=1+zf^s(f+t-1) [[/math]]

Thus, we are led to the conclusion in the statement.

Let us discuss now the computation of the moments of the free Bessel laws. The idea will be that of expressing these moments in terms of generalized binomial coefficients. We recall that the coefficient corresponding to [math]\alpha\in\mathbb R[/math], [math]k\in\mathbb N[/math] is:

[[math]] \binom{\alpha}{k}=\frac{\alpha(\alpha-1)\ldots(\alpha-k+1)}{k!} [[/math]]


We denote by [math]m_1,m_2,m_3,\ldots[/math] the sequence of moments of a given probability measure. With this convention, we first have the following result, from [3]:

Theorem

The moments of [math]\beta^s_1[/math] with [math]s \gt 0[/math] are

[[math]] m_k=\frac{1}{sk+1}\binom{sk+k}{k} [[/math]]
which are the Fuss-Catalan numbers.


Show Proof

In the case [math]s\in\mathbb N[/math], we know that we have [math]m_k=\# NC_s(k)[/math]. The formula in the statement follows then by counting such partitions. In the general case [math]s \gt 0[/math], observe first that the Fuss-Catalan number in the statement is a polynomial in [math]s[/math]:

[[math]] \frac{1}{sk+1}\binom{sk+k}{k}=\frac{(sk+2)(sk+3)\ldots(sk+k)}{k!} [[/math]]


Thus, in order to pass from the case [math]s\in\mathbb N[/math] to the case [math]s \gt 0[/math], it is enough to check that the [math]k[/math]-th moment of [math]\beta^s_1[/math] is analytic in [math]s[/math]. But this is clear from the equation [math]f=1+zf^{s+1}[/math] of the Stieltjes transform of [math]\beta^s_1[/math], and this gives the result.
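This closed formula can be tested against the functional equation [math]f=1+zf^{s+1}[/math], solved order by order. A minimal Python sketch, with the helper names being ours:

```python
from math import comb

def poly_mul(a, b, K):
    # truncated product of two power series, given as coefficient lists
    c = [0]*(K + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j <= K:
                c[i + j] += ai*bj
    return c

def stieltjes_coeffs(s, K):
    # solve f = 1 + z*f^(s+1) order by order: the z^(k+1) coefficient of f
    # equals the z^k coefficient of f^(s+1), which uses lower-order data only
    C = [1] + [0]*K
    for k in range(K):
        g = [1] + [0]*K
        for _ in range(s + 1):
            g = poly_mul(g, C, K)
        C[k + 1] = g[k]
    return C

s, K = 3, 6
print(stieltjes_coeffs(s, K)[1:])
print([comb(s*k + k, k)//(s*k + 1) for k in range(1, K + 1)])  # same numbers
```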

We have as well the following result, which deals with the general case [math]t \gt 0[/math]:

Theorem

The moments of [math]\beta^s_t[/math] with [math]s \gt 0[/math] are

[[math]] m_k=\sum_{b=1}^k\frac{1}{b}\binom{k-1}{b-1}\binom{sk}{b-1}t^b [[/math]]
which are the Fuss-Narayana numbers.


Show Proof

In the case [math]s\in\mathbb N[/math], we know from the above that we have the following formula, where [math]F_{kb}[/math] is the number of partitions in [math]NC_s(k)[/math] having [math]b[/math] blocks:

[[math]] m_k=\sum_bF_{kb}t^b [[/math]]


With this observation in hand, the formula in the statement follows by counting such partitions, with this count being well-known. This result can then be extended to any parameter [math]s \gt 0[/math], by using a standard complex variable argument, as before. See [3].
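As before, this can be tested against the functional equation of the Stieltjes transform, [math]f=1+zf^s(f+t-1)[/math], now solved order by order with polynomial coefficients in [math]t[/math]. A minimal sympy sketch, with the helper names being ours:

```python
import sympy as sp
from math import comb

z, t = sp.symbols('z t')
s, K = 2, 4

# solve f = 1 + z*f^s*(f+t-1) by fixed-point iteration on truncated series;
# each iteration fixes one more coefficient, so K+1 iterations suffice
f = sp.Integer(1)
for _ in range(K + 1):
    f = sp.expand(1 + z*f**s*(f + t - 1)).series(z, 0, K + 1).removeO()

for k in range(1, K + 1):
    fuss_narayana = sum(sp.Rational(1, b)*comb(k - 1, b - 1)*comb(s*k, b - 1)*t**b
                        for b in range(1, k + 1))
    print(k, sp.expand(f.coeff(z, k) - fuss_narayana))  # prints: 0 each time
```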

In the case [math]s\notin\mathbb N[/math], the moments of [math]\beta^s_t[/math] can be further expressed in terms of gamma functions. In the case [math]s=1/2[/math], the result, also from [3], is as follows:

Theorem

The moments of [math]\beta^{1/2}_1[/math] are given by the following formulae:

[[math]] m_{2p}=\frac{1}{p+1}\binom{3p}{p} [[/math]]

[[math]] m_{2p-1}=\frac{2^{-4p+3}p}{(6p-1)(2p+1)}\cdot\frac{p!(6p)!}{(2p)!(2p)!(3p)!} [[/math]]


Show Proof

According to our various results above, the even moments of the free Bessel law [math]\beta^s_1[/math] with [math]s=n-1/2[/math], [math]n\in\mathbb N[/math], are given by:

[[math]] \begin{eqnarray*} m_{2p} &=&\frac{1}{(n-1/2)(2p)+1}\binom{(n+1/2)2p}{2p}\\ &=&\frac{1}{(2n-1)p+1}\binom{(2n+1)p}{2p} \end{eqnarray*} [[/math]]


With [math]n=1[/math] we get the formula in the statement. Now for the odd moments, we can use here the following well-known identity:

[[math]] \binom{m-1/2}{k}=\frac{4^{-k}}{k!}\cdot\frac{(2m)!}{m!}\cdot\frac{(m-k)!}{(2m-2k)!} [[/math]]


With [math]m=2np+p-n[/math] and [math]k=2p-1[/math] we get:

[[math]] \begin{eqnarray*} m_{2p-1} &=&\frac{1}{(n-1/2)(2p-1)+1}\binom{(n+1/2)(2p-1)}{2p-1}\\ &=&\frac{2}{(2n-1)(2p-1)+2}\binom{(2np+p-n)-1/2}{2p-1}\\ &=&\frac{2^{-4p+3}}{(2p-1)!}\cdot\frac{(4np+2p-2n)!}{(2np+p-n)!}\cdot\frac{(2np-p-n+1)!}{(4np-2p-2n+3)!} \end{eqnarray*} [[/math]]


In particular with [math]n=1[/math] we obtain:

[[math]] \begin{eqnarray*} m_{2p-1} &=&\frac{2^{-4p+3}}{(2p-1)!}\cdot\frac{(6p-2)!}{(3p-1)!}\cdot\frac{p!}{(2p+1)!}\\ &=&\frac{2^{-4p+3}(2p)}{(2p)!}\cdot\frac{(6p)!(3p)}{(3p)!(6p-1)6p}\cdot\frac{p!}{(2p)!(2p+1)} \end{eqnarray*} [[/math]]


But this gives the formula in the statement.
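Since the general Fuss-Catalan formula [math]m_k=\frac{1}{sk+1}\binom{sk+k}{k}[/math] still makes sense at [math]s=1/2[/math], via gamma functions, the above closed formula for the odd moments can be checked numerically against it. A minimal Python sketch, with the helper names being ours:

```python
from math import gamma, factorial

def binom_gamma(a, k):
    # generalized binomial coefficient, via the gamma function
    return gamma(a + 1)/(gamma(k + 1)*gamma(a - k + 1))

def odd_moment_general(p):
    # m_k = binom(sk+k, k)/(sk+1) at s = 1/2, k = 2p-1
    k = 2*p - 1
    return binom_gamma(1.5*k, k)/(0.5*k + 1)

def odd_moment_closed(p):
    # the closed formula from the statement
    return (2.0**(-4*p + 3)*p/((6*p - 1)*(2*p + 1))
            *factorial(p)*factorial(6*p)
            /(factorial(2*p)**2*factorial(3*p)))

for p in range(1, 5):
    print(p, odd_moment_general(p), odd_moment_closed(p))  # the pairs agree
```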

There are many other interesting things, of both combinatorial and complex analytic nature, that can be said about the free Bessel laws, their moments and their densities, and we refer here to [3]. Also, there is as well a relation with the combinatorics of the intermediate subfactors, and the Fuss-Catalan algebra of Bisch and Jones [5]. All this is a bit technical, and we will be back to it later, when talking about subfactors.


In what follows we will rather focus on the free Bessel laws that we are truly interested in, namely those appearing at [math]s=1,2,\infty[/math]. We will be particularly interested in the cases [math]s=2,\infty[/math], which can be thought of as being “fully real” and “purely complex”.


Also, instead of insisting on combinatorics and complex analysis, we will rather discuss the question of finding matrix models for the free Bessel laws, which is of key importance, in view of the various random matrix considerations from chapters 5-8.

11c. The standard cube

Let us get back now to the fundamental question, mentioned in the beginning of this chapter, of arranging the main probability measures that we know, classical and free, into a cube, so as to have a kind of 3D orientation, inside probability at large. For this purpose, we will need the following result, coming from the above study:

Theorem

The moments of [math]\beta^s_t[/math] are the numbers

[[math]] M_k=\sum_{\pi\in NC^s(k)}t^{|\pi|} [[/math]]
where [math]NC^s[/math] are the noncrossing partitions satisfying [math]\#\circ=\#\bullet(s)[/math] in each block.


Show Proof

At [math]t=1[/math] the formula to be proved is as follows:

[[math]] M_k(\beta^s_1)=|NC^s(k)| [[/math]]


But this can be proved by using Theorem 11.14, via the bijection between the set [math]NC_s[/math] there and the set [math]NC^s[/math] here. At [math]t \gt 0[/math] now, the formula to be proved is as follows:

[[math]] M_k(\beta^s_t)=\sum_{\pi\in NC^s(k)}t^{|\pi|} [[/math]]


But this can be proved again by doing some computations, or by using Theorem 11.15, via the bijection between the set [math]NC_s[/math] there and the set [math]NC^s[/math] here.

At the combinatorial level, this is quite interesting, and we have:

Theorem

The various classical and free central limiting measures,

[[math]] \xymatrix@R=45pt@C=45pt{ \beta^s_t\ar@{-}[r]\ar@{-}[d]&\gamma_t\ar@{-}[r]\ar@{-}[d]&\Gamma_t\ar@{-}[d]\\ b^s_t\ar@{-}[r]&g_t\ar@{-}[r]&G_t } [[/math]]
have moments always given by the same formula, involving partitions, namely

[[math]] M_k=\sum_{\pi\in D(k)}t^{|\pi|} [[/math]]
where the sets of partitions [math]D(k)[/math] in question are respectively

[[math]] \xymatrix@R=50pt@C=50pt{ NC^s\ar[d]&NC_2\ar[d]\ar[l]&\mathcal{NC}_2\ar[l]\ar[d]\\ P^s&P_2\ar[l]&\mathcal P_2\ar[l]} [[/math]]
and where [math]|.|[/math] is the number of blocks.


Show Proof

This follows by putting together the various moment results that we have, namely those from chapter 10, and those from Theorem 11.19.

The above result is quite nice, and is complete as well, containing all the moment results that we have established so far, throughout this book. However, forgetting about being as general as possible, we can in fact do better. Nothing in life is better than having some 3D orientation, and as a main application of the above, we can modify a bit the above diagram, so as to have a nice-looking cube, as follows:

Theorem

The moments of the main central limiting measures,

[[math]] \xymatrix@R=20pt@C=22pt{ &\mathfrak B_t\ar@{-}[rr]\ar@{-}[dd]&&\Gamma_t\ar@{-}[dd]\\ \beta_t\ar@{-}[rr]\ar@{-}[dd]\ar@{-}[ur]&&\gamma_t\ar@{-}[dd]\ar@{-}[ur]\\ &B_t\ar@{-}[rr]\ar@{-}[uu]&&G_t\ar@{-}[uu]\\ b_t\ar@{-}[uu]\ar@{-}[ur]\ar@{-}[rr]&&g_t\ar@{-}[uu]\ar@{-}[ur] } [[/math]]
are always given by the same formula, involving partitions, namely

[[math]] M_k=\sum_{\pi\in D(k)}t^{|\pi|} [[/math]]
where the sets of partitions [math]D(k)[/math] in question are respectively

[[math]] \xymatrix@R=20pt@C=5pt{ &\mathcal{NC}_{even}\ar[dl]\ar[dd]&&\ \ \ \mathcal{NC}_2\ \ \ \ar[ll]\ar[dd]\ar[dl]\\ NC_{even}\ar[dd]&&NC_2\ar[ll]\ar[dd]\\ &\mathcal P_{even}\ar[dl]&&\mathcal P_2\ar[ll]\ar[dl]\\ P_{even}&&P_2\ar[ll] } [[/math]]
and where [math]|.|[/math] is the number of blocks.


Show Proof

This follows by putting together the various moment results that we have. To be more precise, the result follows from Theorem 11.20, by restricting attention on the left to the cases [math]s=2,\infty[/math], which can be thought of as being “fully real” and “purely complex”, and then folding the 8-measure diagram into a cube, as above.

The above cube, which is something very nice, will basically keep us busy for the rest of this book. Among others, we will see later more conceptual explanations for it.


Importantly, we will find as well an axiomatization for all this, with the result, called “Ground Zero theorem”, stating that, when imposing a number of strong combinatorial axioms, only the above cube, which is obviously rock-solid, survives. More later.

11d. Matrix models

We discuss here the relation between the above free PLT theory and the random matrices. As a starting point, the free Poisson laws [math]\pi_t[/math] that we found in the above, via the free PLT, coincide with the Marchenko-Pastur laws, shown in chapter 7 to appear as limiting laws for the complex Wishart matrices. This is certainly nice, conceptually speaking, but the point is that we can now truly improve the Marchenko-Pastur result from chapter 7, with an asymptotic freeness statement added, as follows:

Theorem

Given a family of sequences of complex Wishart matrices,

[[math]] Z^i_N=Y^i_N(Y^i_N)^*\in M_N(L^\infty(X))\quad,\quad i\in I [[/math]]
with each [math]Y^i_N[/math] being an [math]N\times M[/math] matrix, with entries following the normal law [math]G_1[/math], and with all these entries being pairwise independent, the rescaled sequences of matrices

[[math]] \frac{Z^i_N}{N}\in M_N(L^\infty(X))\quad,\quad i\in I [[/math]]
become, with [math]M=tN\to\infty[/math], Marchenko-Pastur distributed, each following the law [math]\pi_t[/math], and free.


Show Proof

Here the first assertion is the Marchenko-Pastur theorem, and the second assertion follows from the freeness result for the Gaussian matrices, from chapter 10.
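A quick numerical illustration of this, assuming numpy is available, with the normalizations as in the statement, and with the helper names being ours; the last line tests freeness via an alternating product of centered variables, whose trace must vanish asymptotically:

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 500, 2.0
M = int(t*N)

def wishart():
    # Z/N for a complex Wishart matrix, with standard complex Gaussian entries
    Y = (rng.standard_normal((N, M)) + 1j*rng.standard_normal((N, M)))/np.sqrt(2)
    return Y @ Y.conj().T/N

W1, W2 = wishart(), wishart()
ev = np.linalg.eigvalsh(W1)
print([float(np.mean(ev**k)) for k in (1, 2, 3)])
print([t, t + t**2, t + 3*t**2 + t**3])  # first moments of pi_t

# freeness check: for centered free variables, tr(ABAB) must vanish
A, B = W1 - t*np.eye(N), W2 - t*np.eye(N)
print(np.trace(A @ B @ A @ B).real/N)  # close to 0
```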

At a more technical level now, we know from chapters 5-8 that the random matrices provide explicit models for most of the limiting laws appearing in free probability. This is surely an important phenomenon, and in fact, by pushing things a bit, free probability can even be regarded as a theory providing a conceptual framework for random matrix theory. Our goal now, with the standard cube from the previous section in mind, will be that of completing what we know, with matrix models for the free Bessel laws [math]\beta^s_t[/math]. We have two types of models to be investigated, which are both fundamental, as follows:


(1) Multiplicative models. We know from chapters 5-8 that by multiplying two Gaussian matrices we obtain a Wishart matrix, and so a model for the free Poisson law [math]\pi_t[/math]. Following [3], we will generalize here such constructions, by looking at more general products of Gaussian matrices, which will turn out to be related to the laws [math]\beta^s_t[/math].


(2) Block-modified models. We also know from chapters 5-8 that by performing suitable block modifications on a complex Wishart matrix we obtain certain modifications of the free Poisson law [math]\pi_t[/math], which are compound free Poisson laws. We will further discuss here this phenomenon, with the aim of modelling in this way the laws [math]\beta^s_t[/math].


Summarizing, many things to be done, which promise to be quite technical. Let us start with the multiplicative models. We will first restrict attention to the case [math]t=1[/math], since we have [math]\beta^s_t=\pi^{\boxtimes s-1}\boxtimes\pi^{\boxplus t}[/math], and therefore matrix models for [math]\beta^s_t[/math] will follow from matrix models for [math]\pi^{\boxtimes s}[/math]. Following [3], we first have the following result:

Theorem

Let [math]G_1,\ldots,G_s[/math] be a family of [math]N\times N[/math] independent matrices formed by independent centered Gaussian variables, of variance [math]1/N[/math]. Then with

[[math]] M=G_1\ldots G_s [[/math]]
the moments of the spectral distribution of [math]MM^*[/math] converge, up to a normalization, to the corresponding moments of [math]\beta^s_1[/math], as [math]N\to\infty[/math].


Show Proof

We prove this by recurrence. At [math]s=1[/math] it is well-known that [math]MM^*[/math] is a model for [math]\beta^1_1=\pi[/math]. So, assume that the result holds for [math]s-1\geq 1[/math]. We have:

[[math]] \begin{eqnarray*} tr(MM^*)^k &=&tr(G_1\ldots G_sG_s^*\ldots G_1^*)^k\\ &=&tr\big(G_1(G_2\ldots G_sG_s^*\ldots G_1^*G_1)^{k-1}G_2\ldots G_sG_s^*\ldots G_1^*\big) \end{eqnarray*} [[/math]]


We can pass the first [math]G_1[/math] matrix to the right, and we get:

[[math]] \begin{eqnarray*} tr(MM^*)^k &=&tr\big((G_2\ldots G_sG_s^*\ldots G_1^*G_1)^{k-1}G_2\ldots G_sG_s^*\ldots G_1^*G_1\big)\\ &=&tr(G_2\ldots G_sG_s^*\ldots G_1^*G_1)^k\\ &=&tr((G_2\ldots G_sG_s^*\ldots G_2^*)(G_1^*G_1))^k \end{eqnarray*} [[/math]]


We know that [math]G_1^*G_1[/math] is a Wishart matrix, hence is a model for [math]\pi[/math]:

[[math]] G_1^*G_1\sim\pi [[/math]]


Also, we know by recurrence that [math]G_2\ldots G_sG_s^*\ldots G_2^*[/math] gives a matrix model for [math]\beta^{s-1}_1[/math]:

[[math]] G_2\ldots G_sG_s^*\ldots G_2^*\sim \beta^{s-1}_1 [[/math]]


Now since the matrices [math]G_1^*G_1[/math] and [math]G_2\ldots G_sG_s^*\ldots G_2^*[/math] are asymptotically free, their product gives a matrix model for [math]\beta^{s-1}_1\boxtimes\pi=\beta^s_1[/math], and we are done.
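This can be illustrated numerically as follows, assuming numpy is available; the normalization below, complex Gaussian entries of variance [math]1/N[/math], is an assumption of this sketch, chosen so that no extra rescaling is needed, and the helper names are ours:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
s, N, k_max, trials = 2, 300, 4, 10

moms = np.zeros(k_max)
for _ in range(trials):
    M = np.eye(N, dtype=complex)
    for _ in range(s):  # M = G_1 ... G_s, entries of variance 1/N
        G = (rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))/np.sqrt(2*N)
        M = M @ G
    W = M @ M.conj().T
    P = np.eye(N, dtype=complex)
    for k in range(k_max):  # accumulate tr(W), tr(W^2), ...
        P = P @ W
        moms[k] += P.trace().real/N
print(list(moms/trials))
print([comb(s*k + k, k)/(s*k + 1) for k in range(1, k_max + 1)])  # Fuss-Catalan
```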

We should mention that the above result, from [3], has inspired a whole string of extensions and generalizations. We refer here to [3] and the subsequent literature. Again following [3], we have as well the following result, which is of different nature:

Theorem

If [math]W[/math] is a complex Wishart matrix of parameters [math](sN,N)[/math] and

[[math]] D=\begin{pmatrix} 1_N&0&&0\\ 0&w1_N&&0\\ &&\ddots&\\ 0&0&&w^{s-1}1_N \end{pmatrix} [[/math]]
with [math]w=e^{2\pi i/s}[/math] then the moments of the spectral distribution of [math](DW)^s[/math] converge, up to a normalization, to the corresponding moments of [math]\beta^s_1[/math], as [math]N\to\infty[/math].


Show Proof

We use the following complex Wishart matrix formula of Graczyk, Letac and Massam [6], whose proof is via standard combinatorics:

[[math]] E(Tr(DW)^K)=\sum_{\sigma\in S_K}\frac{M^{\gamma(\sigma^{-1}\pi)}}{M^K}\,r_\sigma(D) [[/math]]


Here [math]W[/math] is by definition a complex Wishart matrix of parameters [math](M,N)[/math], and [math]D[/math] is a deterministic [math]M\times M[/math] matrix. As for the right term, this is as follows:

  • [math]\pi[/math] is the cycle [math](1,\ldots,K)[/math].
  • [math]\gamma(\sigma)[/math] is the number of disjoint cycles of [math]\sigma[/math].
  • If we denote by [math]C(\sigma)[/math] the set of such cycles and for any cycle [math]c[/math], by [math]|c|[/math] its length, then the function on the right is given by:
    [[math]] r_\sigma(D)=\prod_{c\in C(\sigma)}Tr(D^{|c|}) [[/math]]

In our situation we have [math]K=sk[/math] and [math]M=sN[/math], and we get:

[[math]] E(Tr(DW)^{sk})= \sum_{\sigma\in S_{sk}}\frac{(sN)^{\gamma(\sigma^{-1}\pi)}}{(sN)^{sk}}\,r_\sigma(D) [[/math]]


Now since [math]D[/math] is uniformly formed by [math]s[/math]-th roots of unity, we have:

[[math]] Tr(D^p)= \begin{cases} sN&\mbox{ if }s|p\\ 0&\mbox{ if }s\nmid p \end{cases} [[/math]]


Thus if we denote by [math]S_{sk}^s[/math] the set of permutations [math]\sigma\in S_{sk}[/math] having the property that all the cycles of [math]\sigma[/math] have length multiple of [math]s[/math], the above formula reads:

[[math]] E(Tr(DW)^{sk})=\sum_{\sigma\in S_{sk}^s}\frac{(sN)^{\gamma(\sigma^{-1}\pi)}}{(sN)^{sk}}\,(sN)^{\gamma(\sigma)} [[/math]]


In terms of the normalized trace [math]tr[/math], we obtain the following formula:

[[math]] E(tr(DW)^{sk})=\sum_{\sigma\in S_{sk}^s}(sN)^{\gamma(\sigma^{-1}\pi)+\gamma(\sigma)-sk-1} [[/math]]


The exponent on the right, say [math]L_\sigma[/math], can be estimated by using the distance on the Cayley graph of [math]S_{sk}[/math], in the following way:

[[math]] \begin{eqnarray*} L_\sigma &=&\gamma(\sigma^{-1}\pi)+\gamma(\sigma)-sk-1\\ &=&(sk-d(\sigma,\pi))+(sk-d(e,\sigma))-sk-1\\ &=&sk-1-(d(e,\sigma)+d(\sigma,\pi))\\ &\leq&sk-1-d(e,\pi)\\ &=&0 \end{eqnarray*} [[/math]]


Now when taking the limit [math]N\to\infty[/math] in the above formula of [math]E(tr(DW)^{sk})[/math], the only terms that count are those coming from permutations [math]\sigma\in S_{sk}^s[/math] having the property [math]L_\sigma=0[/math], each of which contributes a value of [math]1[/math]. We therefore obtain:

[[math]] \begin{eqnarray*} \lim_{N\to\infty}E(tr(DW)^{sk}) &=&\#\{\sigma\in S_{sk}^s\ |\ L_\sigma=0\}\\ &=&\#\{\sigma\in S_{sk}^s\ |\ d(e,\sigma)+d(\sigma,\pi)=d(e,\pi)\}\\ &=&\#\{\sigma\in S_{sk}^s\ |\ \sigma\in [e,\pi]\} \end{eqnarray*} [[/math]]


But this number that we obtained is well-known to be the same as the number of noncrossing partitions of [math]\{1,\ldots,sk\}[/math] having all blocks of size a multiple of [math]s[/math]. Thus we have reached the sets [math]NC_s(k)[/math] from the above, and we are done.

As a consequence of the above random matrix formula, we have the following alternative approach to the free CPLT, in the case of the free Bessel laws, from [3]:

Theorem

The moments of the free Bessel law [math]\beta^s_1[/math] with [math]s\in\mathbb N[/math] coincide with those of the variable

[[math]] \left(\sum_{k=1}^sw^k\alpha_k\right)^s [[/math]]
where [math]\alpha_1,\ldots,\alpha_s[/math] are free random variables, each of them following the free Poisson law of parameter [math]1/s[/math], and [math]w=e^{2\pi i/s}[/math].


Show Proof

This is something that we already know, coming from the combinatorics of the free CPLT, but we can prove this now by using random matrices as well. For this purpose, let [math]G_1,\ldots,G_s[/math] be a family of independent [math]sN\times N[/math] matrices formed by independent, centered complex Gaussian variables, of variance [math]1/(sN)[/math]. The following matrices [math]H_1,\ldots,H_s[/math] are then complex Gaussian and independent as well:

[[math]] H_k=\frac{1}{\sqrt{s}}\sum_{p=1}^sw^{kp}G_p [[/math]]


Thus the following matrix provides a model for the variable [math]\Sigma w^k\alpha_k[/math]:

[[math]] \begin{eqnarray*} M &=&\sum_{k=1}^sw^kH_kH_k^*\\ &=&\frac{1}{s}\sum_{k=1}^s\sum_{p=1}^s\sum_{q=1}^sw^{k+kp-kq}G_pG_q^*\\ &=&\sum_{p=1}^s\sum_{q=1}^s\left(\frac{1}{s}\sum_{k=1}^s\left(w^{1+p-q}\right)^k\right)G_pG_q^*\\ &=&G_1G_2^*+G_2G_3^*+\ldots+G_{s-1}G_s^*+G_sG_1^* \end{eqnarray*} [[/math]]


Now observe that this matrix can be written as follows:

[[math]] \begin{eqnarray*} M &=&\begin{pmatrix}G_1&G_2&\ldots&G_{s-1}&G_s\end{pmatrix} \begin{pmatrix}G_2^*\\ G_3^*\\\vdots\\ G_s^*\\ G_1^*\end{pmatrix}\\ &=&\begin{pmatrix}G_1&G_2&\ldots&G_{s-1}&G_s\end{pmatrix} \begin{pmatrix} 0&1_N&0&\ldots&0\\ 0&0&1_N&\ldots&0\\ &&&\ddots&&\\ 0&0&0&\ldots&1_N\\ 1_N&0&0&\ldots&0 \end{pmatrix} \begin{pmatrix}G_1^*\\ G_2^*\\\vdots\\ G_{s-1}^*\\ G_s^*\end{pmatrix}\\ &=&GOG^* \end{eqnarray*} [[/math]]


In this formula [math]G=(G_1\ \ldots\ G_s)[/math] is the [math]sN\times sN[/math] Gaussian matrix obtained by concatenating [math]G_1,\ldots,G_s[/math], and [math]O[/math] is the matrix in the middle. But this latter matrix is of the form [math]O=UDU^*[/math] with [math]U[/math] unitary, and we have:

[[math]] M=GUDU^*G^* [[/math]]


Now since [math]GU[/math] is a Gaussian matrix, [math]M[/math] has the same law as the following matrix:

[[math]] M'=GDG^* [[/math]]


By using this, we obtain the following moment formula:

[[math]] \begin{eqnarray*} E\left(\left(\sum_{l=1}^sw^l\alpha_l\right)^{sk}\right) &=&\lim_{N\to \infty}E(tr(M^{sk}))\\ &=&\lim_{N\to\infty}E(tr(GDG^*)^{sk})\\ &=&\lim_{N\to\infty}E(tr(D(G^*G))^{sk}) \end{eqnarray*} [[/math]]


Thus with [math]W=G^*G[/math] we get the result.
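The key algebraic identity in the above proof, namely [math]\sum_kw^kH_kH_k^*=G_1G_2^*+\ldots+G_sG_1^*[/math], is an exact finite-dimensional statement, which can be checked numerically as follows; square matrices are used for simplicity, the identity being dimension-independent:

```python
import numpy as np

rng = np.random.default_rng(0)
s, N = 3, 4
w = np.exp(2j*np.pi/s)

# any matrices of equal shape will do, for this purely algebraic identity
G = [rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)) for _ in range(s)]
H = [sum(w**(k*p)*G[p - 1] for p in range(1, s + 1))/np.sqrt(s)
     for k in range(1, s + 1)]

M1 = sum(w**k*H[k - 1] @ H[k - 1].conj().T for k in range(1, s + 1))
M2 = sum(G[p - 1] @ G[p % s].conj().T for p in range(1, s + 1))  # G_1G_2^*+...+G_sG_1^*
print(np.allclose(M1, M2))  # prints: True
```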

Summarizing, we have applications to random matrices, and random matrix models for all the 8 basic probability laws appearing via limiting theorems. As already mentioned, the above results, from [3], have inspired a whole string of extensions and generalizations. We refer here to [3] and the subsequent literature.


As a last topic regarding the free CPLT, which is perhaps the most important, let us review now the results regarding the block-modified Wishart matrices from chapter 8, with free probability tools. We will see in particular that the laws obtained there are free combinations of free Poisson laws, or compound free Poisson laws.


Consider a complex Wishart matrix of parameters [math](dn,dm)[/math]. In other words, we start with a [math]dn\times dm[/math] matrix [math]Y[/math] having independent complex [math]G_1[/math] entries, and we set:

[[math]] W=YY^* [[/math]]


This matrix has size [math]dn\times dn[/math], and is best thought of as being a [math]d\times d[/math] array of [math]n\times n[/math] matrices. We will be interested here in the study of the block-modified versions of [math]W[/math], obtained by applying to the [math]n\times n[/math] blocks a given linear map, as follows:

[[math]] \varphi:M_n(\mathbb C)\to M_n(\mathbb C) [[/math]]


We recall from chapter 8 that we have the following asymptotic moment formula, extending the usual moment computation for the Wishart matrices:

Theorem

The asymptotic moments of a block-modified Wishart matrix

[[math]] \widetilde{W}=(id\otimes\varphi)W [[/math]]
with parameters [math]d,m,n\in\mathbb N[/math], as above, are given by the formula

[[math]] \lim_{d\to\infty}M_e\left(\frac{\widetilde{W}}{d}\right)=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\gamma_e)(\Lambda) [[/math]]
where [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] is the square matrix associated to [math]\varphi:M_n(\mathbb C)\to M_n(\mathbb C)[/math].


Show Proof

This is something that we know well from chapter 8, coming from the Wick formula, and with the correspondence between linear maps [math]\varphi:M_n(\mathbb C)\to M_n(\mathbb C)[/math] and square matrices [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] being as well explained there.

As explained in chapter 8, it is possible to further build on the above result, with some concrete applications, by doing some combinatorics and calculus. That combinatorics and calculus was something a bit ad-hoc in the context of chapter 8, and congratulations of course for having survived that. With the free probability theory that we learned so far, we can now clarify all this. Following [7], [8], we first have the following result:

Proposition

Given a square matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math], having distribution

[[math]] \rho=law(\Lambda) [[/math]]
the moments of the compound free Poisson law [math]\pi_{mn\rho}[/math] are given by

[[math]] M_e(\pi_{mn\rho})=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]
for any choice of the extra parameter [math]m\in\mathbb N[/math].


Show Proof

This can be proved in several ways, as follows:


(1) A first method is by a straightforward computation, based on the general formula of the [math]R[/math]-transform of the compound free Poisson laws, given in the above, and we will leave the computations here, which are all elementary, as an instructive exercise.


(2) Another method, originally used in [8], is by using the well-known fact, that we will discuss in a moment, in chapter 12 below, that the free cumulants of [math]\pi_{mn\rho}[/math] coincide with the moments of [math]mn\rho[/math]. Thus, these free cumulants are given by:

[[math]] \begin{eqnarray*} \kappa_e(\pi_{mn\rho}) &=&M_e(mn\rho)\\ &=&mn\cdot M_e(\Lambda)\\ &=&mn\cdot (M^\gamma_e\otimes M^\gamma_e)(\Lambda) \end{eqnarray*} [[/math]]


By using now Speicher's free moment-cumulant formula, from [9], [10], to be explained in chapter 12 below as well, this gives the result.

We can see now an obvious similarity with the formula in Theorem 11.26. In order to exploit this similarity, once again by following [8], let us introduce:

Definition

We call a square matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] multiplicative when

[[math]] (M^\sigma_e\otimes M^\gamma_e)(\Lambda)=(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]
holds for any [math]p\in\mathbb N[/math], any exponents [math]e_1,\ldots,e_p\in\{1,*\}[/math], and any [math]\sigma\in NC_p[/math].

This notion is something quite technical, but we will see many examples in what follows. For instance, the square matrices [math]\Lambda[/math] coming from the basic linear maps [math]\varphi[/math] appearing in chapter 8 are all multiplicative. Now with the above notion in hand, we can formulate an asymptotic result regarding the block-modified Wishart matrices, as follows:

Theorem

Consider a block-modified Wishart matrix

[[math]] \widetilde{W}=(id\otimes\varphi)W [[/math]]
and assume that the matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] associated to [math]\varphi[/math] is multiplicative. Then

[[math]] \frac{\widetilde{W}}{d}\sim\pi_{mn\rho} [[/math]]
holds, in moments, in the [math]d\to\infty[/math] limit, where [math]\rho=law(\Lambda)[/math].


Show Proof

By comparing the moment formulae in Theorem 11.26 and in Proposition 11.27, we conclude that the asymptotic formula [math]\frac{\widetilde{W}}{d}\sim\pi_{mn\rho}[/math] is equivalent to the following equality, which should hold for any [math]p\in\mathbb N[/math], and any exponents [math]e_1,\ldots,e_p\in\{1,*\}[/math]:

[[math]] \sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\gamma_e)(\Lambda)=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]


Now by assuming that [math]\Lambda[/math] is multiplicative, in the sense of Definition 11.28, these two sums are trivially equal, and this gives the result.

Summarizing, we have now a much better understanding of what is going on with the block-modified Wishart matrices, and in particular with what exactly is behind Theorem 11.29. For the continuation of all this, we refer to [11], [7], [8] and the subsequent literature on the subject, including the more recent papers [12], [13], [14].


In what concerns us, we will rather navigate in what follows towards quantum algebra, but we will be back to random matrix questions on several occasions, and notably in chapter 16 below, in the context of a catch-all final discussion, regarding the relation between Voiculescu's free probability and Jones' subfactor theory.

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].

References

  1. D.V. Voiculescu, K.J. Dykema and A. Nica, Free random variables, AMS (1992).
  2. F. Hiai and D. Petz, The semicircle law, free random variables and entropy, AMS (2000).
  3. T. Banica, S.T. Belinschi, M. Capitaine and B. Collins, Free Bessel laws, Canad. J. Math. 63 (2011), 3--37.
  4. T. Banica, J. Bichon and B. Collins, The hyperoctahedral quantum group, J. Ramanujan Math. Soc. 22 (2007), 345--384.
  5. D. Bisch and V.F.R. Jones, Algebras associated to intermediate subfactors, Invent. Math. 128 (1997), 89--157.
  6. P. Graczyk, G. Letac and H. Massam, The complex Wishart distribution and the symmetric group, Ann. Statist. 31 (2003), 287--309.
  7. T. Banica and I. Nechita, Asymptotic eigenvalue distributions of block-transposed Wishart matrices, J. Theoret. Probab. 26 (2013), 855--869.
  8. T. Banica and I. Nechita, Block-modified Wishart matrices and free Poisson laws, Houston J. Math. 41 (2015), 113--134.
  9. A. Nica and R. Speicher, Lectures on the combinatorics of free probability, Cambridge Univ. Press (2006).
  10. R. Speicher, Multiplicative functions on the lattice of noncrossing partitions and free convolution, Math. Ann. 298 (1994), 611--628.
  11. G. Aubrun, Partial transposition of random states and non-centered semicircular distributions, Random Matrices Theory Appl. 1 (2012), 125--145.
  12. O. Arizmendi, I. Nechita and C. Vargas, On the asymptotic distribution of block-modified random matrices, J. Math. Phys. 57 (2016), 1--27.
  13. M. Fukuda and P. Śniady, Partial transpose of random quantum states: exact formulas and meanders, J. Math. Phys. 54 (2013), 1--31.
  14. J.A. Mingo and M. Popa, Freeness and the transposes of unitarily invariant random matrices, J. Funct. Anal. 271 (2016), 883--921.