6d. Unitary groups



We discuss here an alternative interpretation of the limiting laws [math]\gamma_t[/math] that we found above, by using Lie groups, the idea being that the standard semicircle law [math]\gamma_1[/math], and more generally all the laws [math]\gamma_t[/math], naturally appear in connection with the group [math]SU_2[/math].


This is something quite natural, and good to know, and it will be useful for us later on. In particular, this fact can be used as an alternative to both Stieltjes inversion and cheating, in order to establish the Wigner theorem.


Let us start with the following fundamental group theory result, coming as a complement to the general theory for compact groups developed in chapter 4:

Theorem

We have the following formula,

[[math]] SU_2=\left\{\begin{pmatrix}\alpha&\beta\\ -\bar{\beta}&\bar{\alpha}\end{pmatrix}\ \Big|\ |\alpha|^2+|\beta|^2=1\right\} [[/math]]
which makes [math]SU_2[/math] isomorphic to the unit sphere [math]S^1_\mathbb C\subset\mathbb C^2[/math].


Show Proof

Consider an arbitrary [math]2\times 2[/math] matrix, written as follows:

[[math]] U=\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix} [[/math]]


Assuming that we have [math]\det U=1[/math], the inverse of this matrix is then given by:

[[math]] U^{-1}=\begin{pmatrix}\delta&-\beta\\ -\gamma&\alpha\end{pmatrix} [[/math]]


On the other hand, assuming [math]U\in U_2[/math], the inverse must be the adjoint:

[[math]] U^{-1}=\begin{pmatrix}\bar{\alpha}&\bar{\gamma}\\ \bar{\beta}&\bar{\delta}\end{pmatrix} [[/math]]


We conclude that our matrix must be of the following special form:

[[math]] U=\begin{pmatrix}\alpha&\beta\\ -\bar{\beta}&\bar{\alpha}\end{pmatrix} [[/math]]


Now since the determinant is 1, we must have [math]|\alpha|^2+|\beta|^2=1[/math], so we are done with one direction. As for the converse, this is clear, the matrices in the statement being unitaries, and of determinant 1, and so being elements of [math]SU_2[/math]. Finally, we have:

[[math]] S^1_\mathbb C=\left\{(\alpha,\beta)\in\mathbb C^2\ \Big|\ |\alpha|^2+|\beta|^2=1\right\} [[/math]]


Thus, the final assertion in the statement holds as well.
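As a quick numerical illustration, not part of the proof, the following Python sketch draws a random point [math](\alpha,\beta)[/math] on the complex unit sphere and checks that the resulting matrix is unitary, with determinant [math]1[/math]. Matrices are encoded as nested tuples, and all names are our own:

```python
# Sketch: verify Theorem 6.20 numerically, for a random point on the sphere S^1_C.
import random

# random (alpha, beta) with |alpha|^2 + |beta|^2 = 1, via normalized Gaussians
z = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
n = (abs(z[0]) ** 2 + abs(z[1]) ** 2) ** 0.5
alpha, beta = z[0] / n, z[1] / n

# the matrix in the statement, encoded as a nested tuple
U = ((alpha, beta), (-beta.conjugate(), alpha.conjugate()))

# det U = |alpha|^2 + |beta|^2 = 1
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
assert abs(det - 1) < 1e-10

# U U^* = 1, so U is unitary, and together with det U = 1, U is in SU_2
Ustar = ((U[0][0].conjugate(), U[1][0].conjugate()),
         (U[0][1].conjugate(), U[1][1].conjugate()))
prod = tuple(tuple(sum(U[i][k] * Ustar[k][j] for k in range(2)) for j in range(2))
             for i in range(2))
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-10
           for i in range(2) for j in range(2))
print("U is unitary, with det U = 1")
```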

Next, we have the following useful reformulation of Theorem 6.20:

Theorem

We have the formula

[[math]] SU_2=\left\{\begin{pmatrix}p+iq&r+is\\ -r+is&p-iq\end{pmatrix}\ \Big|\ p^2+q^2+r^2+s^2=1\right\} [[/math]]
which makes [math]SU_2[/math] isomorphic to the unit real sphere [math]S^3_\mathbb R\subset\mathbb R^4[/math].


Show Proof

We recall from Theorem 6.20 that we have:

[[math]] SU_2=\left\{\begin{pmatrix}\alpha&\beta\\ -\bar{\beta}&\bar{\alpha}\end{pmatrix}\ \Big|\ |\alpha|^2+|\beta|^2=1\right\} [[/math]]


Now let us write our parameters [math]\alpha,\beta\in\mathbb C[/math], which belong to the complex unit sphere [math]S^1_\mathbb C\subset\mathbb C^2[/math], in terms of their real and imaginary parts, as follows:

[[math]] \alpha=p+iq\quad,\quad \beta=r+is [[/math]]


In terms of [math]p,q,r,s\in\mathbb R[/math], our formula for a generic matrix [math]U\in SU_2[/math] reads:

[[math]] U=\begin{pmatrix}p+iq&r+is\\ -r+is&p-iq\end{pmatrix} [[/math]]


As for the condition to be satisfied by the parameters [math]p,q,r,s\in\mathbb R[/math], this comes from the condition [math]|\alpha|^2+|\beta|^2=1[/math] satisfied by [math]\alpha,\beta\in\mathbb C[/math], which reads:

[[math]] p^2+q^2+r^2+s^2=1 [[/math]]


Thus, we are led to the conclusion in the statement. Regarding now the last assertion, recall that the unit sphere [math]S^3_\mathbb R\subset\mathbb R^4[/math] is given by:

[[math]] S^3_\mathbb R=\left\{(p,q,r,s)\ \Big|\ p^2+q^2+r^2+s^2=1\right\} [[/math]]


Thus, we have an isomorphism of compact spaces [math]SU_2\simeq S^3_\mathbb R[/math], as claimed.

Here is yet another useful reformulation of our main result so far, regarding [math]SU_2[/math], obtained by further building on the parametrization from Theorem 6.21:

Theorem

We have the following formula,

[[math]] SU_2=\left\{p\beta_1+q\beta_2+r\beta_3+s\beta_4\ \Big|\ p^2+q^2+r^2+s^2=1\right\} [[/math]]
where [math]\beta_1,\beta_2,\beta_3,\beta_4[/math] are the following matrices,

[[math]] \beta_1=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\quad,\quad \beta_2=\begin{pmatrix}i&0\\ 0&-i\end{pmatrix}\quad,\quad \beta_3=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\quad,\quad \beta_4=\begin{pmatrix}0&i\\ i&0\end{pmatrix} [[/math]]
called Pauli spin matrices.


Show Proof

We recall from Theorem 6.21 that the group [math]SU_2[/math] can be parametrized by the real sphere [math]S^3_\mathbb R\subset\mathbb R^4[/math], in the following way:

[[math]] SU_2=\left\{\begin{pmatrix}p+iq&r+is\\ -r+is&p-iq\end{pmatrix}\ \Big|\ p^2+q^2+r^2+s^2=1\right\} [[/math]]


But this gives the formula in the statement, with the Pauli matrices [math]\beta_1,\beta_2,\beta_3,\beta_4[/math] being the coefficients of [math]p,q,r,s[/math], in this parametrization.

The above result is often the most convenient one, when dealing with [math]SU_2[/math]. This is because the Pauli matrices have a number of remarkable properties, as follows:

Proposition

The Pauli matrices multiply according to the following formulae,

[[math]] \beta_2^2=\beta_3^2=\beta_4^2=-1 [[/math]]

[[math]] \beta_2\beta_3=-\beta_3\beta_2=\beta_4 [[/math]]

[[math]] \beta_3\beta_4=-\beta_4\beta_3=\beta_2 [[/math]]

[[math]] \beta_4\beta_2=-\beta_2\beta_4=\beta_3 [[/math]]
they conjugate according to the following rules,

[[math]] \beta_1^*=\beta_1,\ \beta_2^*=-\beta_2,\ \beta_3^*=-\beta_3,\ \beta_4^*=-\beta_4 [[/math]]
and they form an orthonormal basis of [math]M_2(\mathbb C)[/math], with respect to the scalar product

[[math]] \lt x,y \gt =tr(xy^*) [[/math]]
with [math]tr:M_2(\mathbb C)\to\mathbb C[/math] being the normalized trace of [math]2\times 2[/math] matrices, [math]tr=Tr/2[/math].


Show Proof

The first two assertions, regarding the multiplication and conjugation rules for the Pauli matrices, follow from some elementary computations. As for the last assertion, this follows by using these rules. Indeed, the fact that the Pauli matrices are pairwise orthogonal follows from computations of the following type, for [math]i\neq j[/math]:

[[math]] \lt \beta_i,\beta_j \gt =tr(\beta_i\beta_j^*) =tr(\pm\beta_i\beta_j) =tr(\pm\beta_k) =0 [[/math]]


As for the fact that the Pauli matrices have norm 1, this follows from:

[[math]] \lt \beta_i,\beta_i \gt =tr(\beta_i\beta_i^*) =tr(\pm\beta_i^2) =tr(\beta_1) =1 [[/math]]


Thus, we are led to the conclusion in the statement.
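The rules above are easy to check by hand, but here is a small Python sketch doing all the verifications at once, with matrices encoded as nested tuples, and with all names being our own:

```python
# Sketch: verify the multiplication, conjugation and orthonormality rules
# for the Pauli matrices, as stated in Proposition 6.23.
I = 1j

def mul(x, y):  # product of 2x2 matrices
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def adj(x):  # adjoint, i.e. conjugate transpose
    return tuple(tuple(x[j][i].conjugate() for j in range(2)) for i in range(2))

def scal(x, c):  # scalar multiple
    return tuple(tuple(c * x[i][j] for j in range(2)) for i in range(2))

def tr(x):  # normalized trace, tr = Tr/2
    return (x[0][0] + x[1][1]) / 2

b1 = ((1, 0), (0, 1))
b2 = ((I, 0), (0, -I))
b3 = ((0, 1), (-1, 0))
b4 = ((0, I), (I, 0))

# multiplication rules
assert mul(b2, b2) == mul(b3, b3) == mul(b4, b4) == scal(b1, -1)
assert mul(b2, b3) == b4 and mul(b3, b2) == scal(b4, -1)
assert mul(b3, b4) == b2 and mul(b4, b3) == scal(b2, -1)
assert mul(b4, b2) == b3 and mul(b2, b4) == scal(b3, -1)

# conjugation rules
assert adj(b1) == b1
assert adj(b2) == scal(b2, -1) and adj(b3) == scal(b3, -1) and adj(b4) == scal(b4, -1)

# orthonormality with respect to <x,y> = tr(xy^*)
basis = [b1, b2, b3, b4]
for i in range(4):
    for j in range(4):
        assert tr(mul(basis[i], adj(basis[j]))) == (1 if i == j else 0)
print("all Pauli rules verified")
```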

Now back to probability, we can recover our semicircular measures, as follows:

Theorem

The main character of [math]SU_2[/math] follows the law

[[math]] \gamma_1=\frac{1}{2\pi}\sqrt{4-x^2}dx [[/math]]
which is the Wigner law of parameter [math]1[/math].


Show Proof

This follows from Theorem 6.21, by identifying [math]SU_2[/math] with the sphere [math]S^3_\mathbb R[/math], the variable [math]\chi=2Re(\alpha)=2p[/math] being semicircular. Indeed, let us write, as in Theorem 6.21:

[[math]] SU_2=\left\{\begin{pmatrix}p+iq&r+is\\ -r+is&p-iq\end{pmatrix}\ \Big|\ p^2+q^2+r^2+s^2=1\right\} [[/math]]


In this picture, the main character is given by the following formula:

[[math]] \chi\begin{pmatrix}p+iq&r+is\\ -r+is&p-iq\end{pmatrix}=2p [[/math]]


We are therefore left with computing the law of the following variable:

[[math]] p\in C(S^3_\mathbb R) [[/math]]


For this purpose, we can use the moment method. Let us recall from chapter 1 that the polynomial integrals over the real spheres are given by the following formula:

[[math]] \int_{S^{N-1}_\mathbb R}x_1^{k_1}\ldots x_N^{k_N}\,dx=\frac{(N-1)!!k_1!!\ldots k_N!!}{(N+\Sigma k_i-1)!!} [[/math]]


In our case, where [math]N=4[/math], we obtain the following moment formula:

[[math]] \begin{eqnarray*} \int_{S^3_\mathbb R}p^{2k} &=&\frac{3!!(2k)!!}{(2k+3)!!}\\ &=&2\cdot\frac{3\cdot5\cdot7\ldots (2k-1)}{2\cdot4\cdot6\ldots (2k+2)}\\ &=&2\cdot\frac{(2k)!}{2^kk!2^{k+1}(k+1)!}\\ &=&\frac{1}{4^k}\cdot\frac{1}{k+1}\binom{2k}{k}\\ &=&\frac{C_k}{4^k} \end{eqnarray*} [[/math]]


Thus the variable [math]2p\in C(S^3_\mathbb R)[/math] follows the Wigner semicircle law [math]\gamma_1[/math], as claimed.
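The moment computation above can be cross-checked exactly, by implementing the sphere integration formula with the double factorial convention [math]m!!=(m-1)(m-3)(m-5)\ldots[/math], ending at [math]1[/math] or [math]2[/math], which is our reading of the chapter 1 convention. A Python sketch, with all names our own:

```python
# Sketch: check that the sphere integral of p^{2k} over S^3 equals C_k/4^k,
# using the formula (N-1)!!(2k)!!/(N+2k-1)!! with N=4, and the convention
# m!! = (m-1)(m-3)(m-5)..., ending at 1 or 2.
from math import comb

def dfact(m):
    # m!! = (m-1)(m-3)(m-5)..., product of every other integer below m
    p, i = 1, m - 1
    while i >= 2:
        p *= i
        i -= 2
    return p

def sphere_moment(k):
    # integral of p^{2k} over S^3, i.e. N=4, exponents (2k,0,0,0)
    return dfact(4 - 1) * dfact(2 * k) / dfact(4 + 2 * k - 1)

for k in range(8):
    catalan = comb(2 * k, k) // (k + 1)  # Catalan number C_k
    assert abs(sphere_moment(k) - catalan / 4 ** k) < 1e-12
print("moments of p over S^3 match C_k / 4^k")
```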

Summarizing, we have managed to recover the Wigner semicircle law [math]\gamma_1[/math] out of purely geometric considerations, involving the real sphere [math]S^3_\mathbb R[/math] and the special complex rotation group [math]SU_2[/math]. Moreover, with a change of variable, our results extend to [math]\gamma_t[/math] with [math]t \gt 0[/math]. And this is quite interesting, philosophically, and also makes an interesting connection with the Lie group material from chapter 4, which remains to be further investigated.


Finally, as the physicists say, there is no [math]SU_2[/math] without [math]SO_3[/math], so let us discuss as well the computation for [math]SO_3[/math], which we will certainly need later. Let us start with:

Proposition

The adjoint action [math]SU_2\curvearrowright M_2(\mathbb C)[/math], given by [math]T_U(A)=UAU^*[/math], leaves invariant the following real vector subspace of [math]M_2(\mathbb C)[/math],

[[math]] \mathbb R^4=span_\mathbb R(\beta_1,\beta_2,\beta_3,\beta_4) [[/math]]
and we obtain in this way a group morphism [math]SU_2\to GL_4(\mathbb R)[/math].


Show Proof

We have two assertions to be proved, as follows:


(1) We must first prove that, with [math]E\subset M_2(\mathbb C)[/math] being the real vector space in the statement, we have the following implication:

[[math]] U\in SU_2,A\in E\implies UAU^*\in E [[/math]]


But this is clear from the multiplication rules for the Pauli matrices, from Proposition 6.23. Indeed, let us write our matrices [math]U,A[/math] as follows:

[[math]] U=x\beta_1+y\beta_2+z\beta_3+t\beta_4 [[/math]]

[[math]] A=a\beta_1+b\beta_2+c\beta_3+d\beta_4 [[/math]]


We know that the coefficients [math]x,y,z,t[/math] and [math]a,b,c,d[/math] are all real, due to [math]U\in SU_2[/math] and [math]A\in E[/math]. The point now is that when computing [math]UAU^*[/math], by using the various rules from Proposition 6.23, we obtain a matrix of the same type, namely a combination of [math]\beta_1,\beta_2,\beta_3,\beta_4[/math], with real coefficients. Thus, we have [math]UAU^*\in E[/math], as desired.


(2) In order to conclude, let us identify [math]E\simeq\mathbb R^4[/math], by using the basis [math]\beta_1,\beta_2,\beta_3,\beta_4[/math]. The result found in (1) shows that we have a correspondence as follows:

[[math]] SU_2\to M_4(\mathbb R)\quad,\quad U\to (T_U)_{|E} [[/math]]


Now observe that for any [math]U\in SU_2[/math] and any [math]A\in M_2(\mathbb C)[/math] we have:

[[math]] T_{U^*}T_U(A)=U^*UAU^*U=A [[/math]]


Thus [math]T_{U^*}=T_U^{-1}[/math], and so the correspondence that we found can be written as:

[[math]] SU_2\to GL_4(\mathbb R)\quad,\quad U\to (T_U)_{|E} [[/math]]


But this is a group morphism, due to the following computation:

[[math]] T_UT_V(A)=UVAV^*U^*=T_{UV}(A) [[/math]]


Thus, we are led to the conclusion in the statement.
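Here is a numerical illustration of this invariance, as a sketch with all names our own: we decompose [math]UAU^*[/math] in the Pauli basis via the scalar product [math]\lt x,y \gt =tr(xy^*)[/math], and check that the four coefficients come out real:

```python
# Sketch: for random U in SU_2 and random A in E = span_R(Pauli basis),
# check that T_U(A) = U A U^* again lies in E, i.e. has real coefficients.
import random

I = 1j

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def adj(x):
    return tuple(tuple(x[j][i].conjugate() for j in range(2)) for i in range(2))

def tr(x):  # normalized trace tr = Tr/2
    return (x[0][0] + x[1][1]) / 2

b1, b2, b3, b4 = ((1, 0), (0, 1)), ((I, 0), (0, -I)), ((0, 1), (-1, 0)), ((0, I), (I, 0))
basis = [b1, b2, b3, b4]

def lin(cs):  # linear combination sum_i cs[i] * beta_i
    return tuple(tuple(sum(c * b[i][j] for c, b in zip(cs, basis)) for j in range(2))
                 for i in range(2))

# random U in SU_2: real coefficients on the unit sphere of R^4
x = [random.gauss(0, 1) for _ in range(4)]
n = sum(v * v for v in x) ** 0.5
U = lin([v / n for v in x])

# random A in E: arbitrary real coefficients
A = lin([random.gauss(0, 1) for _ in range(4)])

B = mul(mul(U, A), adj(U))  # T_U(A) = U A U^*
coeffs = [tr(mul(B, adj(b))) for b in basis]  # <B, beta_i>, via orthonormality
assert all(abs(c.imag) < 1e-9 for c in coeffs)

# reconstructing B from the real parts recovers B exactly
rec = lin([c.real for c in coeffs])
assert all(abs(rec[i][j] - B[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("UAU* stays in the real span of the Pauli matrices")
```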

The point now is that Proposition 6.25 can be improved as follows:

Proposition

The adjoint action [math]SU_2\curvearrowright M_2(\mathbb C)[/math], given by

[[math]] T_U(A)=UAU^* [[/math]]
leaves invariant the following real vector subspace of [math]M_2(\mathbb C)[/math],

[[math]] F=span_\mathbb R(\beta_2,\beta_3,\beta_4) [[/math]]
and we obtain in this way a group morphism [math]SU_2\to SO_3[/math].


Show Proof

We can do this in several steps, as follows:


(1) Our first claim is that the group morphism [math]SU_2\to GL_4(\mathbb R)[/math] constructed in Proposition 6.25 is in fact a morphism [math]SU_2\to O_4[/math]. In order to prove this, recall the following formula, valid for any [math]U\in SU_2[/math], from the proof of Proposition 6.25:

[[math]] T_{U^*}=T_U^{-1} [[/math]]


We want to prove that the matrices [math]T_U\in GL_4(\mathbb R)[/math] are orthogonal, and in view of the above formula, it is enough to prove that we have:

[[math]] T_{U^*}=(T_U)^t [[/math]]


So, let us prove this. For any two matrices [math]A,B\in E[/math], we have:

[[math]] \begin{eqnarray*} \lt T_{U^*}(A),B \gt &=& \lt U^*AU,B \gt \\ &=&tr(U^*AUB^*)\\ &=&tr(AUB^*U^*) \end{eqnarray*} [[/math]]


On the other hand, we have as well the following formula:

[[math]] \begin{eqnarray*} \lt (T_U)^t(A),B \gt &=& \lt A,T_U(B) \gt \\ &=& \lt A,UBU^* \gt \\ &=&tr(A(UBU^*)^*)\\ &=&tr(AUB^*U^*) \end{eqnarray*} [[/math]]


Thus we have indeed [math]T_{U^*}=(T_U)^t[/math], which proves our [math]SU_2\to O_4[/math] claim.


(2) In order now to finish, recall that we have by definition [math]\beta_1=1[/math], as a matrix. Thus, the action of [math]SU_2[/math] on the vector [math]\beta_1\in E[/math] is given by:

[[math]] T_U(\beta_1)=U\beta_1U^*=UU^*=1=\beta_1 [[/math]]

We conclude that [math]\beta_1\in E[/math] is invariant under [math]SU_2[/math], and by orthogonality the following subspace of [math]E[/math] must be invariant as well under the action of [math]SU_2[/math]:

[[math]] \beta_1^\perp=span_\mathbb R(\beta_2,\beta_3,\beta_4) [[/math]]


Now if we call this subspace [math]F[/math], and we identify [math]F\simeq\mathbb R^3[/math] by using the basis [math]\beta_2,\beta_3,\beta_4[/math], we obtain by restriction to [math]F[/math] a morphism of groups as follows:

[[math]] SU_2\to O_3 [[/math]]


But since this morphism is continuous and [math]SU_2[/math] is connected, its image must be connected too. Now since the target group decomposes as [math]O_3=SO_3\sqcup(-SO_3)[/math], and [math]1\in SU_2[/math] gets mapped to [math]1\in SO_3[/math], the whole image must lie inside [math]SO_3[/math], and we are done.

The above result is quite interesting, because we will see in a moment that the morphism [math]SU_2\to SO_3[/math] constructed there is surjective. Thus, we will have a way of parametrizing the elements [math]V\in SO_3[/math] by elements [math]U\in SU_2[/math], and so ultimately by parameters [math](p,q,r,s)\in S^3_\mathbb R[/math]. In order to work out all this, let us start with the following result, coming as a continuation of Proposition 6.25, independently of Proposition 6.26:

Proposition

With respect to the standard basis [math]\beta_1,\beta_2,\beta_3,\beta_4[/math] of the vector space [math]\mathbb R^4=span(\beta_1,\beta_2,\beta_3,\beta_4)[/math], the morphism [math]T:SU_2\to GL_4(\mathbb R)[/math] is given by:

[[math]] T_U=\begin{pmatrix} 1&0&0&0\\ 0&p^2+q^2-r^2-s^2&2(qr-ps)&2(pr+qs)\\ 0&2(ps+qr)&p^2+r^2-q^2-s^2&2(rs-pq)\\ 0&2(qs-pr)&2(pq+rs)&p^2+s^2-q^2-r^2 \end{pmatrix} [[/math]]
Thus, when looking at [math]T[/math] as a group morphism [math]SU_2\to O_4[/math], what we have in fact is a group morphism [math]SU_2\to O_3[/math], and even [math]SU_2\to SO_3[/math].


Show Proof

With notations from Proposition 6.25 and its proof, and writing the generic element of [math]SU_2[/math] as [math]U=p\beta_1+q\beta_2+r\beta_3+s\beta_4[/math], let us first look at the action [math]L:SU_2\curvearrowright\mathbb R^4[/math] by left multiplication, [math]L_U(A)=UA[/math]. We have:

[[math]] L_U=\begin{pmatrix} p&-q&-r&-s\\ q&p&-s&r\\ r&s&p&-q\\ s&-r&q&p \end{pmatrix} [[/math]]


Similarly, in what regards now the action [math]R:SU_2\curvearrowright\mathbb R^4[/math] by right multiplication, [math]R_U(A)=AU^*[/math], the corresponding matrix is given by:

[[math]] R_U=\begin{pmatrix} p&q&r&s\\ -q&p&-s&r\\ -r&s&p&-q\\ -s&-r&q&p \end{pmatrix} [[/math]]


Now by composing, the matrix of the adjoint action [math]T_U=R_UL_U[/math] in the statement is:

[[math]] \begin{eqnarray*} T_U &=&R_UL_U\\ &=&\begin{pmatrix} p&q&r&s\\ -q&p&-s&r\\ -r&s&p&-q\\ -s&-r&q&p \end{pmatrix} \begin{pmatrix} p&-q&-r&-s\\ q&p&-s&r\\ r&s&p&-q\\ s&-r&q&p \end{pmatrix}\\ &=&\begin{pmatrix} 1&0&0&0\\ 0&p^2+q^2-r^2-s^2&2(qr-ps)&2(pr+qs)\\ 0&2(ps+qr)&p^2+r^2-q^2-s^2&2(rs-pq)\\ 0&2(qs-pr)&2(pq+rs)&p^2+s^2-q^2-r^2 \end{pmatrix} \end{eqnarray*} [[/math]]


Thus, we have the formula in the statement, and this gives the result.
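As a sanity check, here is a sketch with all names our own: the [math]4\times4[/math] matrix in the statement, built from a random point [math](p,q,r,s)\in S^3_\mathbb R[/math], should be orthogonal, with the lower [math]3\times3[/math] block of determinant [math]1[/math]:

```python
# Sketch: verify that the explicit 4x4 matrix T_U from Proposition 6.27
# is orthogonal, and that its 3x3 block lands in SO_3.
import random

# random (p,q,r,s) on the unit sphere of R^4, via normalized Gaussians
p, q, r, s = [random.gauss(0, 1) for _ in range(4)]
n = (p * p + q * q + r * r + s * s) ** 0.5
p, q, r, s = p / n, q / n, r / n, s / n

T = [
    [1, 0, 0, 0],
    [0, p*p + q*q - r*r - s*s, 2*(q*r - p*s), 2*(p*r + q*s)],
    [0, 2*(p*s + q*r), p*p + r*r - q*q - s*s, 2*(r*s - p*q)],
    [0, 2*(q*s - p*r), 2*(p*q + r*s), p*p + s*s - q*q - r*r],
]

# T^t T = 1, so T is orthogonal
for i in range(4):
    for j in range(4):
        dot = sum(T[k][i] * T[k][j] for k in range(4))
        assert abs(dot - (1 if i == j else 0)) < 1e-10

# determinant of the 3x3 block (the Euler-Rodrigues matrix) equals 1
M = [row[1:] for row in T[1:]]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
     - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
     + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert abs(det - 1) < 1e-10
print("T_U is orthogonal, and its 3x3 block lies in SO_3")
```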

We can now formulate a famous result, due to Euler-Rodrigues, as follows:

Theorem

We have the Euler-Rodrigues formula

[[math]] U=\begin{pmatrix} p^2+q^2-r^2-s^2&2(qr-ps)&2(pr+qs)\\ 2(ps+qr)&p^2+r^2-q^2-s^2&2(rs-pq)\\ 2(qs-pr)&2(pq+rs)&p^2+s^2-q^2-r^2 \end{pmatrix} [[/math]]
with [math]p^2+q^2+r^2+s^2=1[/math], for the generic elements of [math]SO_3[/math].


Show Proof

We know from the above that we have a group morphism [math]SU_2\to SO_3[/math], given by the formula in the statement, and the problem now is that of proving that this is a double cover map, in the sense that it is surjective, and with kernel [math]\{\pm1\}[/math].


(1) Regarding the kernel, this is elementary to compute, as follows:

[[math]] \begin{eqnarray*} \ker(SU_2\to SO_3) &=&\left\{U\in SU_2\Big|T_U(A)=A,\forall A\in E\right\}\\ &=&\left\{U\in SU_2\Big|UA=AU,\forall A\in E\right\}\\ &=&\left\{U\in SU_2\Big|U\beta_i=\beta_iU,\forall i\right\}\\ &=&\{\pm1\} \end{eqnarray*} [[/math]]


(2) Thus, we are done with this, and as a side remark here, this result shows that our morphism [math]SU_2\to SO_3[/math] is ultimately a morphism as follows:

[[math]] PU_2\subset SO_3\quad,\quad PU_2=SU_2/\{\pm1\} [[/math]]


Here [math]P[/math] stands for “projective”, and it is possible to say more about the construction [math]G\to PG[/math], which can be performed for any subgroup [math]G\subset U_N[/math]. But we will not get here into this, our next goal being anyway that of proving that we have [math]PU_2=SO_3[/math].


(3) We must prove now that the morphism [math]SU_2\to SO_3[/math] is surjective. This is something non-trivial, and there are several proofs for this, as follows:


-- A first proof is by using Lie theory. To be more precise, the tangent spaces at [math]1[/math] of both [math]SU_2[/math] and [math]SO_3[/math] can be explicitly computed, by doing some linear algebra, and the morphism [math]SU_2\to SO_3[/math] follows to be surjective around 1, and then globally.


-- Another proof is via representation theory. Indeed, the representations of [math]SU_2[/math] and [math]SO_3[/math] can be explicitly computed, and follow to be subject to very similar formulae, called Clebsch-Gordan rules, and this shows that [math]SU_2\to SO_3[/math] is surjective.


-- Yet another advanced proof, which is actually quite borderline for what can be called “proof”, is by using the ADE/McKay classification of the subgroups [math]G\subset SO_3[/math], which shows that there is no room strictly inside [math]SO_3[/math] for something as big as [math]PU_2[/math].


(4) Thus, done with this, one way or another. Alternatively, a more pedestrian proof for the surjectivity of the morphism [math]SU_2\to SO_3[/math] is based on the fact that any rotation [math]U\in SO_3[/math] has an axis, and we will leave the computations here as an instructive exercise.

Now back to probability, let us formulate the following definition:

Definition

The standard Marchenko-Pastur law [math]\pi_1[/math] is given by:

[[math]] f\sim\gamma_1\implies f^2\sim\pi_1 [[/math]]
That is, [math]\pi_1[/math] is the law of the square of a variable following the semicircle law [math]\gamma_1[/math].

Here [math]\pi_1[/math] is indeed well-defined, because a compactly supported probability measure is uniquely determined by its moments. More explicitly now, we have:

Proposition

The density of the Marchenko-Pastur law, supported on [math][0,4][/math], is

[[math]] \pi_1=\frac{1}{2\pi}\sqrt{4x^{-1}-1}\,dx [[/math]]
and the moments of this measure are the Catalan numbers.


Show Proof

There are several proofs here, the simplest being by cheating. Indeed, the moments of [math]\pi_1[/math] can be computed with the change of variable [math]x=4\cos^2t[/math], as follows:

[[math]] \begin{eqnarray*} M_k &=&\frac{1}{2\pi}\int_0^4\sqrt{4x^{-1}-1}\,x^kdx\\ &=&\frac{1}{2\pi}\int_0^{\pi/2}\frac{\sin t}{\cos t}\cdot(4\cos^2t)^k\cdot 8\cos t\sin t\,dt\\ &=&\frac{4^{k+1}}{\pi}\int_0^{\pi/2}\cos^{2k}t\sin^2t\,dt\\ &=&\frac{4^{k+1}}{\pi}\cdot\frac{\pi}{2}\cdot\frac{(2k)!!2!!}{(2k+3)!!}\\ &=&2\cdot 4^k\cdot\frac{(2k)!/2^kk!}{2^{k+1}(k+1)!}\\ &=&C_k \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
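The moments can also be cross-checked numerically, via the same substitution [math]x=4\cos^2t[/math], which turns [math]M_k[/math] into [math]\frac{4^{k+1}}{\pi}\int_0^{\pi/2}\cos^{2k}t\sin^2t\,dt[/math], evaluated below by a plain midpoint Riemann sum. A Python sketch, with all names our own:

```python
# Sketch: check numerically that the Marchenko-Pastur moments are the
# Catalan numbers, using the substitution x = 4cos^2(t).
from math import pi, cos, sin, comb

def mp_moment(k, steps=100000):
    # midpoint Riemann sum for (4^{k+1}/pi) * int_0^{pi/2} cos^{2k}t sin^2 t dt
    h = (pi / 2) / steps
    total = sum(cos((j + 0.5) * h) ** (2 * k) * sin((j + 0.5) * h) ** 2
                for j in range(steps))
    return 4 ** (k + 1) / pi * total * h

for k in range(6):
    catalan = comb(2 * k, k) // (k + 1)  # Catalan number C_k
    assert abs(mp_moment(k) - catalan) < 1e-3
print("Marchenko-Pastur moments match the Catalan numbers")
```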

We can do now the character computation for [math]SO_3[/math], as follows:

Theorem

The main character of [math]SO_3[/math], modified by adding [math]1[/math] to it, given in standard Euler-Rodrigues coordinates by

[[math]] \chi=4p^2 [[/math]]
follows a squared semicircle law, or Marchenko-Pastur law [math]\pi_1[/math].


Show Proof

This follows by using the quotient map [math]SU_2\to SO_3[/math], and the result for [math]SU_2[/math]. Indeed, by using the Euler-Rodrigues formula, in the context of Theorem 6.24 and its proof, the main character of [math]SO_3[/math], modified by adding [math]1[/math] to it, is given by:

[[math]] \chi=(3p^2-q^2-r^2-s^2)+1=4p^2 [[/math]]


Now recall from the proof of Theorem 6.24 that we have:

[[math]] 2p\sim\gamma_1 [[/math]]


On the other hand, a quick comparison between the moment formulae for the Wigner and Marchenko-Pastur laws, which are very similar, shows that we have:

[[math]] f\sim\gamma_1\implies f^2\sim\pi_1 [[/math]]


Thus, with [math]f=2p[/math], we obtain the result in the statement.
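Here is a Monte Carlo illustration of this result, as a sketch with all names our own: sampling [math](p,q,r,s)[/math] uniformly on [math]S^3_\mathbb R[/math] via normalized Gaussians, the moments of [math]\chi=4p^2[/math] should approach the Catalan numbers [math]1,2,5,\ldots[/math], which are the moments of [math]\pi_1[/math]:

```python
# Sketch: Monte Carlo check that chi = 4p^2 has approximately Catalan moments,
# for (p,q,r,s) uniform on the sphere S^3.
import random
from math import comb

random.seed(1)  # fixed seed, for reproducibility
samples = 100000
sums = [0.0, 0.0, 0.0]
for _ in range(samples):
    g = [random.gauss(0, 1) for _ in range(4)]
    chi = 4 * g[0] ** 2 / sum(v * v for v in g)  # chi = 4p^2, point uniform on S^3
    for k in range(3):
        sums[k] += chi ** (k + 1)
moments = [m / samples for m in sums]

# Catalan numbers C_1, C_2, C_3 = 1, 2, 5
for k in range(3):
    catalan = comb(2 * (k + 1), k + 1) // (k + 2)
    assert abs(moments[k] - catalan) < 0.2
print("moments of chi = 4p^2 approximate 1, 2, 5")
```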

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].