guide:484ce543f8: Difference between revisions

<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
{{Alert-warning|This article was automatically generated from a tex file and may contain conversion errors. If permitted, you may login and edit this article to improve the conversion. }}


With the above basic combinatorial study done, let us now discuss a number of more advanced results regarding the Voiculescu circular laws <math>\Gamma_t</math>, which are of multiplicative nature, and quite often have no classical counterpart. Things here will be quite technical, and what follows is rather an introduction to the subject.
In order to deal with multiplicative questions for free random variables, we need results regarding the multiplicative free convolution operation <math>\boxtimes</math>. Let us recall from chapter 9 that we have the following result:
{{defncard|label=|id=|We have a free convolution operation <math>\boxtimes</math>, constructed as follows:
<ul><li> For abstract distributions, via <math>\mu_a\boxtimes\mu_b=\mu_{ab}</math>, with <math>a,b</math> free.
</li>
<li> For real measures, via <math>\mu_a\boxtimes\mu_b=\mu_{\sqrt{a}b\sqrt{a}}</math>, with <math>a,b</math> self-adjoint and free.
</li>
</ul>}}
All this is quite tricky, as explained in chapter 9. The idea is that, while (1) is straightforward, (2) is not, and comes by considering the variable <math>c=\sqrt{a}b\sqrt{a}</math>, which unlike <math>ab</math> is always self-adjoint, and whose moments are given by:
<math display="block">
\begin{eqnarray*}
tr(c^k)
&=&tr[(\sqrt{a}b\sqrt{a})^k]\\
&=&tr[\sqrt{a}ba\ldots ab\sqrt{a}]\\
&=&tr[\sqrt{a}\cdot \sqrt{a}ba\ldots ab]\\
&=&tr[(ab)^k]
\end{eqnarray*}
</math>
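As a sanity check, this trace identity can be verified numerically in a matrix model, the matrix algebras <math>M_N(\mathbb C)</math> with the normalized trace being the simplest examples of the present setting. The following sketch, assuming numpy is available, checks <math>tr[(\sqrt{a}b\sqrt{a})^k]=tr[(ab)^k]</math> for a random positive matrix <math>a</math> and a random self-adjoint matrix <math>b</math>:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Random positive matrix a, and random self-adjoint matrix b.
g = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
a = g @ g.conj().T + np.eye(N)          # positive definite
h = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
b = (h + h.conj().T) / 2                # self-adjoint

# Positive square root of a, via its spectral decomposition.
w, v = np.linalg.eigh(a)
sqrt_a = (v * np.sqrt(w)) @ v.conj().T

c = sqrt_a @ b @ sqrt_a                 # the self-adjoint variable c
for k in range(1, 6):
    lhs = np.trace(np.linalg.matrix_power(c, k))
    rhs = np.trace(np.linalg.matrix_power(a @ b, k))
    assert np.allclose(lhs, rhs)
```

Of course, this only illustrates the trace manipulation, which is purely algebraic; freeness plays no role here.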
As a remark here, observe that we have used in the above, and actually for the first time since talking about freeness, the trace property, namely:
<math display="block">
tr(ab)=tr(ba)
</math>
This is quite interesting, philosophically speaking, because in the operator algebra world there are many interesting examples of subalgebras <math>A\subset B(H)</math> coming with natural linear forms <math>\varphi:A\to\mathbb C</math> which are continuous and positive, but which are not traces. See <ref name="bla">B. Blackadar, Operator algebras: theory of C<math>^*</math>-algebras and von Neumann algebras, Springer (2006).</ref>. It is possible to do a bit of free probability on such algebras, but not much.
Quite remarkably, the free multiplicative convolution operation <math>\boxtimes</math> can be linearized, in analogy with what happens for the usual multiplicative convolution <math>\times</math>, and the additive operations <math>*,\boxplus</math> as well. We have here the following result, due to Voiculescu <ref name="vo3">D.V. Voiculescu, Multiplication of certain noncommuting random variables, ''J. Operator Theory'' '''18''' (1987), 223--235.</ref>:
{{proofcard|Theorem|theorem-1|The free multiplicative convolution operation <math>\boxtimes</math> for the real probability measures <math>\mu\in\mathcal P(\mathbb R)</math> can be linearized as follows:
<ul><li> Start with the sequence of moments <math>M_k</math>, then compute the moment generating function, or Stieltjes transform of the measure:
<math display="block">
f(z)=1+M_1z+M_2z^2+M_3z^3+\ldots
</math>
</li>
<li> Perform the following operations to the Stieltjes transform:
<math display="block">
\psi(z)=f(z)-1
</math>
<math display="block">
\psi(\chi(z))=z
</math>
<math display="block">
S(z)=\left(1+\frac{1}{z}\right)\chi(z)
</math>
</li>
<li> Then <math>\log S</math> linearizes the free multiplicative convolution, <math>S_{\mu\boxtimes\nu}=S_\mu S_\nu</math>.
</li>
</ul>
|There are several proofs here, with the original proof of Voiculescu <ref name="vo3">D.V. Voiculescu, Multiplication of certain noncommuting random variables, ''J. Operator Theory'' '''18''' (1987), 223--235.</ref> being quite similar to the proof of the <math>R</math>-transform theorem, using free Fock space models, then with a proof by Haagerup <ref name="haa">U. Haagerup, On Voiculescu's R and S transforms for free non-commuting random variables, ''Fields Inst. Comm.'' '''12''' (1997), 127--148.</ref>, obtained by further improving on this, and finally with the proof from the book of Nica and Speicher <ref name="nsp">A. Nica and R. Speicher, Lectures on the combinatorics of free probability, Cambridge Univ. Press (2006).</ref>, using pure combinatorics. The proof of Haagerup <ref name="haa">U. Haagerup, On Voiculescu's R and S transforms for free non-commuting random variables, ''Fields Inst. Comm.'' '''12''' (1997), 127--148.</ref>, which is the most in tune with the present book, is as follows:
(1) According to our conventions from Definition 10.17, we want to prove that, given noncommutative variables <math>a,b</math> which are free, we have the following formula:
<math display="block">
S_{\mu_{ab}}(z)=S_{\mu_a}(z)S_{\mu_b}(z)
</math>
(2) For this purpose, consider the orthogonal shifts <math>S,T</math> on the free Fock space, as in chapter 9. By using the algebraic arguments from chapter 9, from the proof of the <math>R</math>-transform theorem, we can assume as there that our variables have a special form, which fits our present objectives, namely the following one:
<math display="block">
a=(1+S)f(S^*)\quad,\quad
b=(1+T)g(T^*)
</math>
Our claim, which will prove the theorem, is that we have the following formulae, for the <math>S</math>-transforms of the various variables involved:
<math display="block">
S_{\mu_a}(z)=\frac{1}{f(z)}\quad,\quad
S_{\mu_b}(z)=\frac{1}{g(z)}\quad,\quad
S_{\mu_{ab}}(z)=\frac{1}{f(z)g(z)}
</math>
(3) Let us first compute <math>S_{\mu_a}</math>. We know that we have <math>a=(1+S)f(S^*)</math>, with <math>S</math> being the shift on <math>l^2(\mathbb N)</math>. Given <math>|z| < 1</math>, consider the following vector:
<math display="block">
p=\sum_{k\geq0}z^ke_k
</math>
The shift and its adjoint act on this vector in the following way:
<math display="block">
Sp=\sum_{k\geq0}z^ke_{k+1}=\frac{p-e_0}{z}
</math>
<math display="block">
S^*p=\sum_{k\geq1}z^ke_{k-1}=zp
</math>
Thus <math>f(S^*)p=f(z)p</math>, and we deduce from this that we have:
<math display="block">
\begin{eqnarray*}
ap
&=&(1+S)f(z)p\\
&=&f(z)(p+Sp)\\
&=&f(z)\left(p+\frac{p-e_0}{z}\right)\\
&=&\left(1+\frac{1}{z}\right)f(z)p-\frac{f(z)}{z}e_0
\end{eqnarray*}
</math>
By dividing everything by <math>(1+1/z)f(z)</math>, this formula becomes:
<math display="block">
\frac{z}{1+z}\cdot\frac{1}{f(z)}\,ap=p-\frac{e_0}{1+z}
</math>
We can write this latter formula in the following way:
<math display="block">
\left(1-\frac{z}{1+z}\cdot\frac{1}{f(z)}\,a\right)p=\frac{e_0}{1+z}
</math>
Now by inverting, we obtain from this the following formula:
<math display="block">
\left(1-\frac{z}{1+z}\cdot\frac{1}{f(z)}\,a\right)^{-1}e_0=(1+z)p
</math>
(4) But this gives us the formula of <math>S_{\mu_a}</math>. Indeed, consider the following function:
<math display="block">
\rho(z)=\frac{z}{1+z}\cdot\frac{1}{f(z)}
</math>
With this notation, the formula that we found in (3) becomes:
<math display="block">
(1-\rho(z)a)^{-1}e_0=(1+z)p
</math>
By using this, in terms of <math>\varphi(T)=\langle Te_0,e_0\rangle</math>, we obtain:
<math display="block">
\begin{eqnarray*}
\varphi\left((1-\rho(z)a)^{-1}\right)
&=&\langle(1-\rho(z)a)^{-1}e_0,e_0\rangle\\
&=&\langle(1+z)p,e_0\rangle\\
&=&1+z
\end{eqnarray*}
</math>
Thus the above function <math>\rho</math> is the inverse of the following function:
<math display="block">
\psi(z)=\varphi\left(\frac{1}{1-za}\right)-1
</math>
But this latter function is the <math>\psi</math> function from the statement, and so <math>\rho</math> is the function <math>\chi</math> from the statement, and we can finish our computation, as follows:
<math display="block">
\begin{eqnarray*}
S_{\mu_a}(z)
&=&\frac{1+z}{z}\cdot\rho(z)\\
&=&\frac{1+z}{z}\cdot\frac{z}{1+z}\cdot\frac{1}{f(z)}\\
&=&\frac{1}{f(z)}
\end{eqnarray*}
</math>
(5) A similar computation, or just a symmetry argument, gives <math>S_{\mu_b}(z)=1/g(z)</math>. In order to compute now <math>S_{\mu_{ab}}(z)</math>, we use a similar trick. Consider the following vector of <math>l^2(\mathbb N*\mathbb N)</math>, with the primes and double primes referring to the two copies of <math>\mathbb N</math>:
<math display="block">
q=e_0+\sum_{k\geq1}(e_1'+e_1''+e_1'\otimes e_1'')^{\otimes k}
</math>
The adjoints of the shifts <math>S,T</math> act as follows on this vector:
<math display="block">
S^*q=z(1+T)q\quad,\quad T^*q=zq
</math>
By using these formulae, we have the following computation:
<math display="block">
\begin{eqnarray*}
abq
&=&(1+S)f(S^*)(1+T)g(T^*)q\\
&=&(1+S)f(S^*)(1+T)g(z)q\\
&=&g(z)(1+S)f(S^*)(1+T)q
\end{eqnarray*}
</math>
In order to compute the last term, observe that we have:
<math display="block">
\begin{eqnarray*}
S^*(1+T)q
&=&(S^*+S^*T)q\\
&=&S^*q\\
&=&z(1+T)q
\end{eqnarray*}
</math>
Thus <math>f(S^*)(1+T)q=f(z)(1+T)q</math>, and back to our computation, we have:
<math display="block">
\begin{eqnarray*}
abq
&=&g(z)(1+S)f(z)(1+T)q\\
&=&f(z)g(z)(1+S)(1+T)q\\
&=&f(z)g(z)\left(\frac{1+z}{z}\cdot q-\frac{e_0}{z}\right)
\end{eqnarray*}
</math>
Now observe that we can write this formula as follows:
<math display="block">
\left(1-\frac{z}{1+z}\cdot\frac{1}{f(z)g(z)}\cdot ab\right)q=\frac{e_0}{1+z}
</math>
By inverting, we obtain from this the following formula:
<math display="block">
\left(1-\frac{z}{1+z}\cdot\frac{1}{f(z)g(z)}\cdot ab\right)^{-1}e_0=(1+z)q
</math>
(6) But this formula that we obtained is similar to the formula that we obtained at the end of (3) above. Thus, we can use the same argument as in (4), and we obtain:
<math display="block">
S_{\mu_{ab}}(z)=\frac{1}{f(z)g(z)}
</math>
We are therefore done with the computations, and this finishes the proof.}}
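As an illustration of the algorithm in the statement, the three steps can be run symbolically on a concrete law. The sketch below, assuming sympy is available, uses the measure <math>\mu=(1-t)\delta_0+t\delta_1</math>, whose moments are <math>M_k=t</math> for all <math>k\geq1</math>, and recovers the known formula <math>S_\mu(z)=\frac{z+1}{z+t}</math>:

```python
import sympy as sp

z, w, t = sp.symbols('z w t', positive=True)

# Step 1: moments M_k = t give the Stieltjes transform f(z) = 1 + t(z + z^2 + ...).
f = 1 + t * z / (1 - z)

# Step 2: psi, its functional inverse chi, and the S-transform.
psi = sp.simplify(f - 1)
chi = sp.solve(sp.Eq(psi.subs(z, w), z), w)[0]   # psi(chi(z)) = z
S = sp.simplify((1 + 1/z) * chi)

# The known S-transform of (1-t)delta_0 + t delta_1.
assert sp.simplify(S - (z + 1)/(t + z)) == 0
```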
Getting back now to the circular variables, let us look at the polar decomposition of such variables. In order to discuss this, let us start with a well-known result:
{{proofcard|Theorem|theorem-2|We have the following results:
<ul><li> Any matrix <math>T\in M_N(\mathbb C)</math> has a polar decomposition, <math>T=U|T|</math>.
</li>
<li> Assuming <math>T\in A\subset M_N(\mathbb C)</math>, we have <math>U,|T|\in A</math>.
</li>
<li> Any operator <math>T\in B(H)</math> has a polar decomposition, <math>T=U|T|</math>.
</li>
<li> Assuming <math>T\in A\subset B(H)</math>, we have <math>U,|T|\in\bar{A}</math>, weak closure.
</li>
</ul>
|All this is standard, the idea being as follows:
(1) In each case under consideration, the first observation is that the matrix or general operator <math>T^*T</math>, being positive, has a square root:
<math display="block">
|T|=\sqrt{T^*T}
</math>
(2) With this square root extracted, in the invertible case we can compare the action of <math>T</math> and <math>|T|</math>, and we conclude that we have <math>T=U|T|</math>, with <math>U</math> being a unitary. In the general, non-invertible case, a similar analysis leads to the conclusion that we have as well <math>T=U|T|</math>, but with <math>U</math> being this time a partial isometry.
(3) In what regards now algebraic and topological aspects, in finite dimensions the extraction of the square root, and so the polar decomposition itself, takes place over the matrix blocks of the ambient algebra <math>A\subset M_N(\mathbb C)</math>, and so takes place inside <math>A</math> itself.
(4) In infinite dimensions however, we must take the weak closure, an illustrating example here being the functions <math>f\in A</math> belonging to the algebra <math>A=C(X)</math>, represented on <math>H=L^2(X)</math>, whose polar decomposition leads into the bigger algebra <math>\bar{A}=L^\infty(X)</math>.}}
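To illustrate the finite-dimensional statements (1,2), here is a short numerical sketch, assuming numpy is available, which extracts <math>|T|=\sqrt{T^*T}</math> via the spectral theorem, and then recovers the unitary <math>U</math>, in the generic invertible case:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
T = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# |T| = sqrt(T^* T), extracted via the spectral decomposition of T^* T.
w, v = np.linalg.eigh(T.conj().T @ T)
absT = (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# A generic Gaussian matrix is invertible, so U = T |T|^{-1} is a unitary.
U = T @ np.linalg.inv(absT)

assert np.allclose(U @ absT, T)                  # T = U|T|
assert np.allclose(U.conj().T @ U, np.eye(N))    # U is unitary
```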
Summarizing, we have a basic linear algebra result, regarding the polar decomposition of the usual matrices, and in infinite dimensions pretty much the same happens, with the only subtlety coming from the fact that the ambient operator algebra <math>A\subset B(H)</math> must be taken weakly closed. We will be back to this, with more details, in chapter 15 below, when talking about such algebras <math>A\subset B(H)</math>, which are called von Neumann algebras.
In connection with our probabilistic questions, we first have the following result:
{{proofcard|Proposition|proposition-1|The polar decomposition of semicircular variables is <math>s=eq</math>, with the variables <math>e,q</math> being as follows:
<ul><li> <math>e</math> has moments <math>1,0,1,0,1,\ldots</math>
</li>
<li> <math>q</math> is quarter-circular.
</li>
<li> <math>e,q</math> are independent.
</li>
</ul>
|It is enough to prove the result in a model of our choice, and the best choice here is the most straightforward model for the semicircular variables, namely:
<math display="block">
s=x\in L^\infty\Big([-2,2],\gamma_1\Big)
</math>
To be more precise, we endow the interval <math>[-2,2]</math> with the probability measure <math>\gamma_1</math>, and we consider here the variable <math>s=x=(x\to x)</math>, which is trivially semicircular. The polar decomposition of this variable is then <math>s=eq</math>, with <math>e,q</math> being as follows:
<math display="block">
e=\mathrm{sgn}(x)\quad,\quad
q=|x|
</math>
Now since <math>e</math> has moments <math>1,0,1,0,1,\ldots\,</math>, and also <math>q</math> is quarter-circular, and finally <math>e,q</math> are independent, this gives the result in our model, and so in general.}}
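The moment computations invoked at the end of the proof can be checked numerically, in the above model. The following sketch, assuming numpy is available, integrates against the semicircle density <math>\gamma_1</math> on <math>[-2,2]</math>, and verifies the first moments of <math>e=\mathrm{sgn}(x)</math> and <math>q=|x|</math>, the even moments of the quarter-circular law being the Catalan numbers <math>1,2,\ldots</math>, as in the semicircular case:

```python
import numpy as np

# Discretize the semicircle density gamma_1 = sqrt(4 - x^2)/(2 pi) on [-2, 2].
x = np.linspace(-2, 2, 200001)
dens = np.sqrt(4 - x**2) / (2 * np.pi)
dx = x[1] - x[0]

def moment(g):
    # Riemann-sum approximation of the integral of g against gamma_1.
    return np.sum(g * dens) * dx

# e = sgn(x) has moments 1, 0, 1, 0, ...
assert abs(moment(np.sign(x))) < 1e-6
assert abs(moment(np.sign(x)**2) - 1) < 1e-4

# q = |x| has the quarter-circular even moments 1, 2 (Catalan numbers).
assert abs(moment(np.abs(x)**2) - 1) < 1e-4
assert abs(moment(np.abs(x)**4) - 2) < 1e-4
```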
Less trivial now is the following result, due to Voiculescu <ref name="vo4">D.V. Voiculescu, Limit laws for random matrices and free products, ''Invent. Math.'' '''104''' (1991), 201--220.</ref>:
{{proofcard|Theorem|theorem-3|The polar decomposition of circular variables is <math>c=uq</math>, with the variables <math>u,q</math> being as follows:
<ul><li> <math>u</math> is a Haar unitary.
</li>
<li> <math>q</math> is quarter-circular.
</li>
<li> <math>u,q</math> are free.
</li>
</ul>
|This is something which looks quite similar to Proposition 10.20, but which is more difficult. It can however be proved, via various techniques:
(1) The original proof, by Voiculescu in <ref name="vo4">D.V. Voiculescu, Limit laws for random matrices and free products, ''Invent. Math.'' '''104''' (1991), 201--220.</ref>, uses Gaussian random matrix models for the circular variables. We will discuss this proof at the end of the present chapter, after developing the needed Gaussian random matrix model technology.
(2) A second proof, obtained by pure combinatorics, in the spirit of Theorem 10.13, regarding the free Wick formula, and of Theorem 10.18, regarding the <math>S</math>-transform, or rather in the spirit of the underlying combinatorics of these results, is the one in <ref name="nsp">A. Nica and R. Speicher, Lectures on the combinatorics of free probability, Cambridge Univ. Press (2006).</ref>.
(3) Finally, there is as well a third proof, from <ref name="ba1">T. Banica, On the polar decomposition of circular variables, ''Integral Equations Operator Theory'' '''24''' (1996), 372--377.</ref>, more in the spirit of the free Fock space proofs for the <math>R</math> and <math>S</math> transform results, from <ref name="vo2">D.V. Voiculescu, Addition of certain noncommuting random variables, ''J. Funct. Anal.'' '''66''' (1986), 323--346.</ref>, <ref name="vo3">D.V. Voiculescu, Multiplication of certain noncommuting random variables, ''J. Operator Theory'' '''18''' (1987), 223--235.</ref>, using a suitable generalization of the free Fock spaces. We will discuss this proof right below.}}
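In relation with the random matrix proof mentioned in (1), part of the statement can at least be tested numerically, at the level of moments. The following sketch, assuming numpy is available, samples a normalized complex Gaussian (Ginibre) matrix, which models a circular variable, and checks that its singular values, namely the spectrum of <math>|c|</math>, have moments close to the quarter-circular ones:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000

# Complex Gaussian matrix, normalized so that the entries have variance 1/N.
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Singular values of X = eigenvalues of |X| = sqrt(X^* X).
s = np.linalg.svd(X, compute_uv=False)

# Quarter-circular even moments: E[q^2] = 1, E[q^4] = 2 (Catalan numbers).
assert abs(np.mean(s**2) - 1) < 0.05
assert abs(np.mean(s**4) - 2) < 0.1
```

This only tests the law of <math>q=|c|</math>; the Haar unitarity of the polar part, and the freeness, are of course beyond such a moment check.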
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Calculus and applications|eprint=2401.00911|class=math.CO}}
==References==
{{reflist}}

Latest revision as of 19:39, 21 April 2025
