
8c. Basic computations



Once again by following [1], [2], let us introduce, as a solution to the questions mentioned above, the following technical notion:

Definition

We call a square matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] multiplicative when

[[math]] (M^\sigma_e\otimes M^\gamma_e)(\Lambda)=(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]
holds for any [math]p\in\mathbb N[/math], any exponents [math]e_1,\ldots,e_p\in\{1,*\}[/math], and any [math]\sigma\in NC(p)[/math], where [math]\gamma\in NC(p)[/math] denotes the one-block partition.

This notion is something quite technical, but we will see many examples in what follows. For instance, the square matrices [math]\Lambda[/math] coming from the basic linear maps [math]\varphi[/math] appearing in Definition 8.2 are all multiplicative. More on this later.


Regarding now the output measure, which we want to compute, this can only appear as some kind of modification of the Marchenko-Pastur law [math]\pi_t[/math]. In order to discuss such modifications, recall from chapter 7 the following key formula:

[[math]] R_{\pi_t}(\xi)=\frac{t}{1-\xi} [[/math]]

To be more precise, this is something that we used in chapter 7, when dealing with the block-transposed Wishart matrices. But this suggests formulating:

Definition

A measure [math]\mu[/math] having as [math]R[/math]-transform a function of type

[[math]] R_\mu(\xi)=\sum_{i=1}^s\frac{c_iz_i}{1-\xi z_i} [[/math]]
with [math]c_i \gt 0[/math] and [math]z_i\in\mathbb R[/math], will be called a modified Marchenko-Pastur law.

All this might seem a bit mysterious, but we are into difficult mathematics here, so we will use the above notion as stated, and we will understand later what is behind our computations. By anticipating a bit, however, the situation is as follows:


(1) As a first comment on the above notion, there is an obvious similarity here with the theory of the compound Poisson laws from chapter 2.


(2) The truth is that [math]\pi_t[/math] is the free Poisson law of parameter [math]t[/math], and the modified Marchenko-Pastur laws introduced above are the general compound free Poisson laws.


(3) Also, the mysterious [math]R[/math]-transform used above is the Voiculescu [math]R[/math]-transform [3], which is the analogue of the log of the Fourier transform in free probability.
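Regarding (2), the formula [math]R_{\pi_t}(\xi)=t/(1-\xi)[/math] says that all free cumulants of [math]\pi_t[/math] are equal to [math]t[/math], and so the moments of [math]\pi_t[/math] are given by [math]M_p=\sum_{\sigma\in NC(p)}t^{|\sigma|}[/math]. Grouping the noncrossing partitions by their number of blocks, these moments are Narayana polynomials, and this can be double-checked by direct enumeration. Here is a small sketch in Python, with the helper routines being illustrative, not from the text:

```python
from itertools import combinations
from math import comb

def set_partitions(elems):
    """All set partitions of a list, each given as a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        # insert `first` into each existing block, or as a new singleton
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_noncrossing(part):
    """True when no a < b < c < d has a,c and b,d in two different blocks."""
    for B1, B2 in combinations(part, 2):
        for a, c in combinations(sorted(B1), 2):
            for b, d in combinations(sorted(B2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def NC(p):
    """The noncrossing partitions of {0,...,p-1}."""
    return [part for part in set_partitions(list(range(p))) if is_noncrossing(part)]

def narayana(p, k):
    """Number of noncrossing partitions of p points having k blocks."""
    return comb(p, k) * comb(p, k - 1) // p

# M_p(pi_t) = sum over NC(p) of t^{|sigma|} = sum_k narayana(p,k) t^k
for p in range(1, 7):
    counts = {}
    for part in NC(p):
        counts[len(part)] = counts.get(len(part), 0) + 1
    assert counts == {k: narayana(p, k) for k in range(1, p + 1)}
```

In particular [math]|NC(p)|[/math] is the Catalan number, so at [math]t=1[/math] the moments of [math]\pi_t[/math] are the Catalan numbers.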


More on all this later, in chapters 9-12 below, when systematically doing free probability. Based on this analogy, however, we can label our modified Marchenko-Pastur laws, in the same way as we labelled in chapter 2 the compound Poisson laws, as follows:

Definition

We denote by [math]\pi_\rho[/math] the modified Marchenko-Pastur law satisfying

[[math]] R_{\pi_\rho}(\xi)=\sum_{i=1}^s\frac{c_iz_i}{1-\xi z_i} [[/math]]
with [math]c_i \gt 0[/math] and [math]z_i\in\mathbb R[/math], with [math]\rho[/math] being the following measure,

[[math]] \rho=\sum_{i=1}^sc_i\delta_{z_i} [[/math]]
which is a discrete positive measure in the complex plane, not necessarily of mass [math]1[/math].
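Expanding each term of the above [math]R[/math]-transform as a geometric series gives, in the standard convention [math]R_\mu(\xi)=\sum_{q\geq1}\kappa_q(\mu)\xi^{q-1}[/math], the free cumulants [math]\kappa_q(\pi_\rho)=\sum_ic_iz_i^q=\int z^q\,d\rho(z)[/math]. Combined with the free moment-cumulant formula, to be discussed systematically in chapters 9-12, the moments can then be computed by the usual first-block recursion over noncrossing partitions. Here is a sketch, under those standard conventions:

```python
def moments_from_cumulants(kappa, pmax):
    """Moments M_0..M_pmax from free cumulants, via the first-block recursion
    M_p = sum_{k=1}^p kappa(k) * sum_{i_1+...+i_k = p-k, i_j >= 0} M_{i_1}...M_{i_k},
    which encodes the summation over noncrossing partitions."""
    M = [1]  # M_0 = 1
    for p in range(1, pmax + 1):
        M.append(sum(kappa(k) * _sum_products(M, p - k, k) for k in range(1, p + 1)))
    return M

def _sum_products(M, total, parts):
    """Sum of M_{i_1}...M_{i_parts} over weak compositions i_1+...+i_parts = total."""
    if parts == 0:
        return 1 if total == 0 else 0
    return sum(M[i] * _sum_products(M, total - i, parts - 1) for i in range(total + 1))

def cumulant_of_rho(c, z):
    """kappa_q for the modified Marchenko-Pastur law pi_rho, rho = sum_i c_i delta_{z_i}."""
    return lambda q: sum(ci * zi ** q for ci, zi in zip(c, z))

# rho = t delta_1 recovers the free Poisson law pi_t, with kappa_q = t for all q
t = 3
assert moments_from_cumulants(lambda q: t, 4) == [1, t, t + t**2,
                                                  t + 3*t**2 + t**3,
                                                  t + 6*t**2 + 6*t**3 + t**4]

# a genuinely compound example: rho = 2 delta_{-1} + delta_1
kappa = cumulant_of_rho([2, 1], [-1, 1])
M = moments_from_cumulants(kappa, 2)
assert M[1] == -1      # kappa_1 = -2 + 1
assert M[2] == 4       # kappa_2 + kappa_1^2 = 3 + 1
```

The first check confirms that the Narayana polynomials appear for [math]\rho=t\delta_1[/math], in agreement with the free Poisson interpretation of [math]\pi_t[/math] mentioned above.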

Getting back now to the block-modified Wishart matrices, and to the formula in Theorem 8.10, the above abstract notions, from Definition 8.11 and from Definition 8.12, are exactly what we need for further improving all this. Again by following [1], [2], we have the following result, substantially building on Theorem 8.10:

Theorem

Consider a block-modified Wishart matrix

[[math]] \widetilde{W}=(id\otimes\varphi)W [[/math]]
and assume that the matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] associated to [math]\varphi[/math] is multiplicative. Then

[[math]] \frac{\widetilde{W}}{d}\sim\pi_{mn\rho} [[/math]]
holds, in moments, in the [math]d\to\infty[/math] limit, where [math]\rho=law(\Lambda)[/math].


Show Proof

This is something quite tricky, using all the above:


(1) Our starting point is the asymptotic moment formula found in Theorem 8.10, for an arbitrary block-modified Wishart matrix, namely:

[[math]] \lim_{d\to\infty}M_e\left(\frac{\widetilde{W}}{d}\right)=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\gamma_e)(\Lambda) [[/math]]


(2) Since our modification matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] was assumed to be multiplicative, in the sense of Definition 8.11, this formula reads:

[[math]] \lim_{d\to\infty}M_e\left(\frac{\widetilde{W}}{d}\right)=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]


(3) On the other hand, a bit of calculus and combinatorics show that, in the context of Definition 8.12, given a square matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math], having distribution [math]\rho=law(\Lambda)[/math], the moments of the modified Marchenko-Pastur law [math]\pi_{mn\rho}[/math] are given by the following formula, for any choice of the extra parameter [math]m\in\mathbb N[/math]:

[[math]] M_e(\pi_{mn\rho})=\sum_{\sigma\in NC_p}(mn)^{|\sigma|}(M^\sigma_e\otimes M^\sigma_e)(\Lambda) [[/math]]


(4) The point now is that with this latter formula in hand, our previous asymptotic moment formula for the block-modified Wishart matrix [math]\widetilde{W}[/math] simply reads:

[[math]] \lim_{d\to\infty}M_e\left(\frac{\widetilde{W}}{d}\right)=M_e(\pi_{mn\rho}) [[/math]]


Thus we have indeed [math]\widetilde{W}/d\sim\pi_{mn\rho}[/math], in the [math]d\to\infty[/math] limit, as stated.

All the above was of course a bit technical, but we will come back to this later, with further details, once we have a better understanding of the [math]R[/math]-transform, of the free Poisson limit theorem, and of the other things hidden in all the above. In any case, welcome to free probability. Or perhaps to theoretical physics. The above theorem was our first free probability result in this book, with many others to follow.


Let us now work out some explicit consequences of Theorem 8.14, by using the modified easy linear maps from Definition 8.6. We recall from there that any modified easy linear map [math]\varphi_\pi[/math] can be viewed as a “block-modification” map, as follows:

[[math]] \varphi_\pi:M_{N^s}(\mathbb C)\to M_{N^s}(\mathbb C) [[/math]]


In order to verify that the corresponding matrices [math]\Lambda_\pi[/math] are multiplicative, we will need to check that all the functions [math]\varphi(\sigma,\tau)=(M^\sigma_e\otimes M^\tau_e)(\Lambda_\pi)[/math] have the following property:

[[math]] \varphi(\sigma,\gamma)=\varphi(\sigma,\sigma) [[/math]]


For this purpose, we can use the following result, coming from [2]:

Proposition

The following functions [math]\varphi:NC(p)\times NC(p)\to\mathbb R[/math] are multiplicative, in the sense that they satisfy the condition [math]\varphi(\sigma,\gamma)=\varphi(\sigma,\sigma)[/math]:

  • [math]\varphi(\sigma,\tau)=|\sigma\tau^{-1}|-|\tau|[/math].
  • [math]\varphi(\sigma,\tau)=|\sigma\tau|-|\tau|[/math].
  • [math]\varphi(\sigma,\tau)=|\sigma\wedge\tau|-|\tau|[/math].


Show Proof

All this is elementary, and can be proved as follows:


(1) This follows from the following computation, using the standard formula [math]|\sigma|+|\sigma\gamma^{-1}|=p+1[/math], valid for any [math]\sigma\in NC(p)[/math]:

[[math]] \varphi_1(\sigma,\gamma) =|\sigma\gamma^{-1}|-1 =(p+1-|\sigma|)-1 =p-|\sigma| =\varphi_1(\sigma,\sigma) [[/math]]


(2) This follows indeed from the following computation:

[[math]] \varphi_2(\sigma,\gamma) =|\sigma\gamma|-1 =|\sigma^2|-|\sigma| =\varphi_2(\sigma,\sigma) [[/math]]


(3) This follows indeed from the following computation:

[[math]] \varphi_3(\sigma,\gamma) =|\gamma|-|\gamma| =0 =|\sigma|-|\sigma| =\varphi_3(\sigma,\sigma) [[/math]]


Thus, we are led to the conclusions in the statement.
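Since Proposition 8.15 is purely combinatorial, it can be double-checked by computer, by embedding [math]NC(p)[/math] into the symmetric group, with each block becoming a cycle in increasing order, and with [math]\gamma[/math] being the full cycle. Here is a sketch, with illustrative helper routines:

```python
from itertools import combinations

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_noncrossing(part):
    for B1, B2 in combinations(part, 2):
        for a, c in combinations(sorted(B1), 2):
            for b, d in combinations(sorted(B2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def NC(p):
    return [part for part in set_partitions(list(range(p))) if is_noncrossing(part)]

def to_perm(part, p):
    """Embed a partition into S_p: each block {b_1<...<b_k} becomes the cycle b_1->...->b_k->b_1."""
    perm = list(range(p))
    for block in part:
        b = sorted(block)
        for i, x in enumerate(b):
            perm[x] = b[(i + 1) % len(b)]
    return tuple(perm)

def compose(s, t):  # (s o t)(x) = s(t(x))
    return tuple(s[t[x]] for x in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for x, y in enumerate(s):
        inv[y] = x
    return tuple(inv)

def cycles(s):
    """Number of cycles; for an embedded partition this is its number of blocks."""
    seen, count = set(), 0
    for x in range(len(s)):
        if x not in seen:
            count += 1
            while x not in seen:
                seen.add(x)
                x = s[x]
    return count

def check_identities(pmax):
    for p in range(2, pmax + 1):
        gamma = tuple((x + 1) % p for x in range(p))  # the full cycle, |gamma| = 1
        for part in NC(p):
            s = to_perm(part, p)
            # (1): |sigma gamma^{-1}| - |gamma| = p - |sigma| = phi_1(sigma, sigma)
            assert cycles(compose(s, inverse(gamma))) - 1 == p - cycles(s)
            # (2): |sigma gamma| - |gamma| = |sigma^2| - |sigma|
            assert cycles(compose(s, gamma)) - 1 == cycles(compose(s, s)) - cycles(s)
    return True

check_identities(6)
```

Identity (3) needs no such check, the common coarsening of [math]\sigma[/math] and the one-block partition [math]\gamma[/math] being [math]\gamma[/math] itself.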

We can get back now to the easy modification maps, and we have:

Proposition

The partitions [math]\pi\in P_{even}(2,2)[/math] are as follows,

[[math]] \pi_1=\begin{bmatrix}\circ&\bullet\\ \circ&\bullet\end{bmatrix}\quad,\quad \pi_2=\begin{bmatrix}\circ&\bullet\\ \bullet&\circ\end{bmatrix}\quad,\quad \pi_3=\begin{bmatrix}\circ&\circ\\ \bullet&\bullet\end{bmatrix}\quad,\quad \pi_4=\begin{bmatrix}\circ&\circ\\ \circ&\circ\end{bmatrix} [[/math]]
with the associated linear maps [math]\varphi_\pi:M_n(\mathbb C)\to M_n(\mathbb C)[/math] being as follows:

[[math]] \varphi_1(A)=A\quad,\quad \varphi_2(A)=A^t\quad,\quad \varphi_3(A)=Tr(A)1\quad,\quad \varphi_4(A)=A^\delta [[/math]]
The corresponding matrices [math]\Lambda_\pi[/math] are all multiplicative, in the sense of Definition 8.11.


Show Proof

The first part of the statement is something that we already know, from Theorem 8.7. In order to prove the last assertion, recall from Theorem 8.7 that the associated square matrices, appearing via [math]\Lambda_{ab,cd}=\varphi(e_{ac})_{bd}[/math], are given by:

[[math]] \Lambda^1_{ab,cd}=\delta_{ab}\delta_{cd}\quad,\quad \Lambda^2_{ab,cd}=\delta_{ad}\delta_{bc}\quad,\quad \Lambda^3_{ab,cd}=\delta_{ac}\delta_{bd}\quad,\quad \Lambda^4_{ab,cd}=\delta_{abcd} [[/math]]


Since these matrices are all self-adjoint, we can assume that all the exponents are 1 in Definition 8.11, and the multiplicativity condition there becomes:

[[math]] (M^\sigma\otimes M^\gamma)(\Lambda)=(M^\sigma\otimes M^\sigma)(\Lambda) [[/math]]


In order to check this condition, observe that for the above 4 matrices, we have:

[[math]] \begin{eqnarray*} (M^\sigma\otimes M^\tau)(\Lambda_1)&=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1\ldots i_p}\delta_{i_{\sigma(1)}i_{\tau(1)}}\ldots\delta_{i_{\sigma(p)}i_{\tau(p)}}=n^{|\sigma\tau^{-1}|-|\sigma|-|\tau|}\\ (M^\sigma\otimes M^\tau)(\Lambda_2)&=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1\ldots i_p}\delta_{i_1i_{\sigma\tau(1)}}\ldots\delta_{i_pi_{\sigma\tau(p)}}=n^{|\sigma\tau|-|\sigma|-|\tau|}\\ (M^\sigma\otimes M^\tau)(\Lambda_3)&=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1\ldots i_p}\sum_{j_1\ldots j_p}\delta_{i_1i_{\sigma(1)}}\delta_{j_1j_{\tau(1)}}\ldots\delta_{i_pi_{\sigma(p)}}\delta_{j_pj_{\tau(p)}}=1\\ (M^\sigma\otimes M^\tau)(\Lambda_4)&=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1\ldots i_p}\delta_{i_1i_{\sigma(1)}i_{\tau(1)}}\ldots\delta_{i_pi_{\sigma(p)}i_{\tau(p)}}=n^{|\sigma\wedge\tau|-|\sigma|-|\tau|} \end{eqnarray*} [[/math]]


By using now the results in Proposition 8.15, this gives the result.
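The four closed formulas above can themselves be verified by brute force for small [math]n,p[/math], by summing the Kronecker deltas directly and comparing with the cycle and block counts on the right, under the same embedding of [math]NC(p)[/math] into the symmetric group as before. Here is a sketch, with illustrative helper names:

```python
from itertools import combinations, product

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_noncrossing(part):
    for B1, B2 in combinations(part, 2):
        for a, c in combinations(sorted(B1), 2):
            for b, d in combinations(sorted(B2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def NC(p):
    return [part for part in set_partitions(list(range(p))) if is_noncrossing(part)]

def to_perm(part, p):
    perm = list(range(p))
    for block in part:
        b = sorted(block)
        for i, x in enumerate(b):
            perm[x] = b[(i + 1) % len(b)]
    return tuple(perm)

def compose(s, t):
    return tuple(s[t[x]] for x in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for x, y in enumerate(s):
        inv[y] = x
    return tuple(inv)

def cycles(s):
    seen, count = set(), 0
    for x in range(len(s)):
        if x not in seen:
            count += 1
            while x not in seen:
                seen.add(x)
                x = s[x]
    return count

def join_blocks(part1, part2, p):
    """Number of blocks of the partition generated by part1 and part2 together."""
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for part in (part1, part2):
        for block in part:
            b = sorted(block)
            for x in b[1:]:
                parent[find(x)] = find(b[0])
    return len({find(x) for x in range(p)})

def check_traces(n, p):
    """Compare the delta sums behind (M^sigma x M^tau)(Lambda_i) with the closed formulas."""
    for P1 in NC(p):
        for P2 in NC(p):
            s, t = to_perm(P1, p), to_perm(P2, p)
            indices = list(product(range(n), repeat=p))
            # Lambda_1: sum of delta(i_{sigma(k)}, i_{tau(k)}) = n^{|sigma tau^{-1}|}
            S1 = sum(all(i[s[k]] == i[t[k]] for k in range(p)) for i in indices)
            assert S1 == n ** cycles(compose(s, inverse(t)))
            # Lambda_2: sum of delta(i_k, i_{sigma tau(k)}) = n^{|sigma tau|}
            st = compose(s, t)
            S2 = sum(all(i[k] == i[st[k]] for k in range(p)) for i in indices)
            assert S2 == n ** cycles(st)
            # Lambda_4: sum of delta(i_k = i_{sigma(k)} = i_{tau(k)}) = n^{|sigma wedge tau|}
            S4 = sum(all(i[k] == i[s[k]] == i[t[k]] for k in range(p)) for i in indices)
            assert S4 == n ** join_blocks(P1, P2, p)
    return True

check_traces(2, 3)
check_traces(3, 3)
```

The [math]\Lambda_3[/math] line needs no check, both sides of it being equal to [math]n^{|\sigma|+|\tau|}[/math] before normalization.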

Summarizing, the partitions [math]\pi\in P_{even}(2,2)[/math] provide us with some concrete input for Theorem 8.14. The point now is that, when using this input, we obtain the main known computations for the block-modified Wishart matrices, from [4], [5], [6], [7]:

Theorem

The asymptotic distribution results for the block-modified Wishart matrices coming from the partitions [math]\pi_1,\pi_2,\pi_3,\pi_4\in P_{even}(2,2)[/math] are as follows:

  • Marchenko-Pastur: [math]\frac{1}{d}W\sim\pi_t[/math], where [math]t=m/n[/math].
  • Aubrun type: [math]\frac{1}{d}(id\otimes t)W\sim\pi_\nu[/math], with [math]\nu=\frac{m(n-1)}{2}\delta_{-1}+\frac{m(n+1)}{2}\delta_1[/math].
  • Collins-Nechita one: [math]n(id\otimes tr(.)1)W\sim\pi_t[/math], where [math]t=mn[/math].
  • Collins-Nechita two: [math]\frac{1}{d}(id\otimes(.)^\delta)W\sim\pi_m[/math].


Show Proof

All these results follow from Theorem 8.14, with the maps [math]\varphi_1,\varphi_2,\varphi_3,\varphi_4[/math] in Proposition 8.16 producing the 4 matrices in the statement, modulo some rescalings, and with the computation of the corresponding distributions being as follows:


(1) Here [math]\Lambda=\sum_{ac}e_{ac}\otimes e_{ac}[/math], and so [math]\Lambda=nP[/math], where [math]P[/math] is the rank one projection on [math]\sum_ae_a\otimes e_a\in\mathbb C^n\otimes\mathbb C^n[/math]. Thus we have the following formula, which gives the result:

[[math]] \rho=\frac{n^2-1}{n^2}\delta_0+\frac{1}{n^2}\delta_n [[/math]]


(2) Here [math]\Lambda=\sum_{ac}e_{ac}\otimes e_{ca}[/math] is the flip operator, [math]\Lambda(e_c\otimes e_a)=e_a\otimes e_c[/math]. Thus [math]\rho=\frac{n-1}{2n}\delta_{-1}+\frac{n+1}{2n}\delta_1[/math], and so we have the following formula, which gives the result:

[[math]] mn\rho=\frac{m(n-1)}{2}\delta_{-1}+\frac{m(n+1)}{2}\delta_1 [[/math]]


(3) Here [math]\Lambda=\sum_{ab}e_{aa}\otimes e_{bb}[/math] is the identity matrix, [math]\Lambda=1[/math]. Thus in this case we have the following formula, which gives [math]\pi_{mn\rho}=\pi_{mn}[/math], and so [math]n\widetilde{W}\sim\pi_{mn}[/math], as claimed:

[[math]] \rho=\delta_1 [[/math]]


(4) Here [math]\Lambda=\sum_ae_{aa}\otimes e_{aa}[/math] is the orthogonal projection on [math]span(e_a\otimes e_a)\subset\mathbb C^n\otimes\mathbb C^n[/math]. Thus we have the following formula, which gives the result:

[[math]] \rho=\frac{n-1}{n}\delta_0+\frac{1}{n}\delta_1 [[/math]]


Summarizing, we have proved all the assertions in the statement.
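As a final check, the four measures [math]\rho=law(\Lambda)[/math] used in the above proof can be confirmed by diagonalizing the matrices [math]\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_4[/math] from Proposition 8.16 directly, the law being taken with respect to the normalized trace on [math]M_n(\mathbb C)\otimes M_n(\mathbb C)[/math]. Here is a numerical sketch using numpy:

```python
import numpy as np
from collections import Counter

def E(n, a, c):
    """The matrix unit e_{ac} in M_n(C)."""
    M = np.zeros((n, n))
    M[a, c] = 1.0
    return M

def spectrum(L):
    """Eigenvalues with multiplicities, rounded to absorb float noise."""
    return Counter(np.round(np.linalg.eigvalsh(L), 8).tolist())

n = 4
L1 = sum(np.kron(E(n, a, c), E(n, a, c)) for a in range(n) for c in range(n))  # n x rank-one projection
L2 = sum(np.kron(E(n, a, c), E(n, c, a)) for a in range(n) for c in range(n))  # the flip operator
L3 = np.eye(n * n)                                                             # the identity
L4 = sum(np.kron(E(n, a, a), E(n, a, a)) for a in range(n))                    # projection on span(e_a x e_a)

# rho_1 = (n^2-1)/n^2 delta_0 + 1/n^2 delta_n
assert spectrum(L1) == Counter({0.0: n * n - 1, float(n): 1})
# rho_2 = (n-1)/(2n) delta_{-1} + (n+1)/(2n) delta_1
assert spectrum(L2) == Counter({-1.0: n * (n - 1) // 2, 1.0: n * (n + 1) // 2})
# rho_3 = delta_1
assert spectrum(L3) == Counter({1.0: n * n})
# rho_4 = (n-1)/n delta_0 + 1/n delta_1
assert spectrum(L4) == Counter({0.0: n * n - n, 1.0: n})
```

Dividing the multiplicities by [math]n^2[/math] recovers exactly the measures [math]\rho[/math] from (1-4) above.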

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].

References

  1. T. Banica and I. Nechita, Asymptotic eigenvalue distributions of block-transposed Wishart matrices, J. Theoret. Probab. 26 (2013), 855--869.
  2. T. Banica and I. Nechita, Block-modified Wishart matrices and free Poisson laws, Houston J. Math. 41 (2015), 113--134.
  3. D.V. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66 (1986), 323--346.
  4. G. Aubrun, Partial transposition of random states and non-centered semicircular distributions, Random Matrices Theory Appl. 1 (2012), 125--145.
  5. B. Collins and I. Nechita, Random quantum channels I: graphical calculus and the Bell state phenomenon, Comm. Math. Phys. 297 (2010), 345--370.
  6. B. Collins and I. Nechita, Random quantum channels II: entanglement of random subspaces, Rényi entropy estimates and additivity problems, Adv. Math. 226 (2011), 1181--1201.
  7. V.A. Marchenko and L.A. Pastur, Distribution of eigenvalues in certain sets of random matrices, Mat. Sb. 72 (1967), 507--536.