
8d. Further results



We now develop some general theory for the partitions [math]\pi\in P_{even}(2s,2s)[/math], with [math]s\in\mathbb N[/math]. Let us begin with a reformulation of Definition 8.6, in terms of square matrices:

Proposition 8.18

Given [math]\pi\in P(2s,2s)[/math], the square matrix [math]\Lambda_\pi\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math] associated to the linear map [math]\varphi_\pi:M_n(\mathbb C)\to M_n(\mathbb C)[/math], with [math]n=N^s[/math], is given by:

[[math]] (\Lambda_\pi)_{a_1\ldots a_s,b_1\ldots b_s,c_1\ldots c_s,d_1\ldots d_s}= \delta_\pi\begin{pmatrix}a_1&\ldots&a_s&c_1&\ldots&c_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix} [[/math]]
In addition, we have [math]\Lambda_\pi^*=\Lambda_{\pi^\circ}[/math], where [math]\pi\to\pi^\circ[/math] is the blockwise middle symmetry.


Proof

The formula for [math]\Lambda_\pi[/math] follows from the formula of [math]\varphi_\pi[/math] from Definition 8.6, by using our standard convention [math]\Lambda_{ab,cd}=\varphi(e_{ac})_{bd}[/math]. Regarding now the second assertion, observe that with [math]\pi\to\pi^\circ[/math] being as above, for any multi-indices [math]a,b,c,d[/math] we have:

[[math]] \delta_\pi\begin{pmatrix}c_1&\ldots&c_s&a_1&\ldots&a_s\\ d_1&\ldots&d_s&b_1&\ldots&b_s\end{pmatrix} =\delta_{\pi^\circ}\begin{pmatrix}a_1&\ldots&a_s&c_1&\ldots&c_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix} [[/math]]


Since [math]\Lambda_\pi[/math] is real, we conclude we have the following formula:

[[math]] (\Lambda_\pi^*)_{ab,cd}=(\Lambda_\pi)_{cd,ab}=(\Lambda_{\pi^\circ})_{ab,cd} [[/math]]


This being true for any [math]a,b,c,d[/math], we obtain [math]\Lambda_\pi^*=\Lambda_{\pi^\circ}[/math], as claimed.
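
As a quick numerical illustration, the above conventions can be encoded in a few lines of Python, with our own choice of [math]0[/math]-indexed legs: the upper row consists of the positions [math]0,\ldots,2s-1[/math], carrying [math]a_1\ldots a_sc_1\ldots c_s[/math], and the lower row of the positions [math]2s,\ldots,4s-1[/math], carrying [math]b_1\ldots b_sd_1\ldots d_s[/math]. The sketch below builds [math]\Lambda_\pi[/math] from [math]\delta_\pi[/math] and checks [math]\Lambda_\pi^*=\Lambda_{\pi^\circ}[/math] on a small example:

<syntaxhighlight lang="python">
# Numerical sketch of the above proposition, with our own 0-indexed encoding of partitions.
import itertools
import numpy as np

def delta(pi, upper, lower):
    """delta_pi = 1 iff the 4s index values are constant on each block of pi."""
    vals = list(upper) + list(lower)
    return 1 if all(len({vals[p] for p in block}) == 1 for block in pi) else 0

def Lambda(pi, N, s):
    """(Lambda_pi)_{ab,cd} = delta_pi(a_1..a_s c_1..c_s ; b_1..b_s d_1..d_s), with n = N^s."""
    n = N ** s
    idx = list(itertools.product(range(N), repeat=s))
    L = np.zeros((n * n, n * n))
    for ia, a in enumerate(idx):
        for ib, b in enumerate(idx):
            for ic, c in enumerate(idx):
                for id_, d in enumerate(idx):
                    L[ia * n + ib, ic * n + id_] = delta(pi, a + c, b + d)
    return L

def middle_symmetry(pi, s):
    """pi -> pi^circ: swap the first s legs with the last s legs, on each of the two rows."""
    def sw(p):
        row, col = divmod(p, 2 * s)
        return row * 2 * s + (col + s) % (2 * s)
    return [frozenset(sw(p) for p in block) for block in pi]

# Example at s=1, N=2: pi joins the legs a_1, c_1, b_1 and leaves d_1 alone.
# Lambda_pi is real, so the adjoint is just the transpose.
pi = [frozenset({0, 1, 2}), frozenset({3})]
assert np.allclose(Lambda(pi, 2, 1).T, Lambda(middle_symmetry(pi, 1), 2, 1))
</syntaxhighlight>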

In order to compute now the generalized [math]*[/math]-moments of [math]\Lambda_\pi[/math], we first have:

Proposition 8.19

With [math]\pi\in P(2s,2s)[/math] and [math]\Lambda_\pi[/math] being as above, we have

[[math]] \begin{eqnarray*} (M_\sigma^e\otimes M_\tau^e)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^s}\sum_{j_1^1\ldots j_p^s} \delta_{\pi^{e_1}}\begin{pmatrix}i_1^1&\ldots&i_1^s&i_{\sigma(1)}^1&\ldots&i_{\sigma(1)}^s\\ j_1^1&\ldots&j_1^s&j_{\tau(1)}^1&\ldots&j_{\tau(1)}^s\end{pmatrix}\\ &&\hskip62mm\vdots\\ &&\hskip31mm\delta_{\pi^{e_p}}\begin{pmatrix}i_p^1&\ldots&i_p^s& i_{\sigma(p)}^1&\ldots&i_{\sigma(p)}^s\\ j_p^1&\ldots&j_p^s&j_{\tau(p)}^1&\ldots&j_{\tau(p)}^s\end{pmatrix} \end{eqnarray*} [[/math]]
with the exponents [math]e_1,\ldots,e_p\in\{1,*\}[/math] at left corresponding to [math]e_1,\ldots,e_p\in\{1,\circ\}[/math] at right.


Proof

In multi-index notation, the general formula for the generalized [math]*[/math]-moments for a tensor product square matrix [math]\Lambda\in M_n(\mathbb C)\otimes M_n(\mathbb C)[/math], with [math]n=N^s[/math], is:

[[math]] \begin{eqnarray*} (M_\sigma^e\otimes M_\tau^e)(\Lambda) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^s}\sum_{j_1^1\ldots j_p^s} \Lambda^{e_1}_{i_1^1\ldots i_1^sj_1^1\ldots j_1^s,i_{\sigma(1)}^1\ldots i_{\sigma(1)}^sj_{\tau(1)}^1\ldots j_{\tau(1)}^s}\\ &&\hskip52mm\vdots\\ &&\hskip30mm\Lambda^{e_p}_{i_p^1\ldots i_p^sj_p^1\ldots j_p^s,i_{\sigma(p)}^1\ldots i_{\sigma(p)}^sj_{\tau(p)}^1\ldots j_{\tau(p)}^s} \end{eqnarray*} [[/math]]


By using now the formulae in Proposition 8.18 for the matrix entries of [math]\Lambda_\pi[/math], and of its adjoint matrix [math]\Lambda_\pi^*=\Lambda_{\pi^\circ}[/math], this gives the formula in the statement.
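
The above formula can be transcribed directly into a small numerical routine, say as follows, with [math]\sigma,\tau\in NC(p)[/math] encoded as the [math]0[/math]-indexed permutations having the blocks as cycles, and with the adjoints already taken in the input list of matrices:

<syntaxhighlight lang="python">
# Sketch of the generalized *-moment above: Lams = [Lambda^{e_1}, ..., Lambda^{e_p}]
# is a list of numpy matrices acting on C^n (x) C^n, indexed as in the previous sketch.
import itertools

def cycle_count(perm):
    """|sigma| = number of blocks, i.e. of cycles of the associated permutation."""
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

def star_moment(Lams, sigma, tau, n):
    """(M_sigma^e (x) M_tau^e)(Lambda), normalized by n^(|sigma|+|tau|)."""
    p = len(sigma)
    total = 0.0
    for i in itertools.product(range(n), repeat=p):
        for j in itertools.product(range(n), repeat=p):
            prod = 1.0
            for x in range(p):
                prod *= Lams[x][i[x] * n + j[x], i[sigma[x]] * n + j[tau[x]]]
                if prod == 0.0:
                    break
            total += prod
    return total / n ** (cycle_count(sigma) + cycle_count(tau))
</syntaxhighlight>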

As a conclusion, the quantities [math](M_\sigma^e\otimes M_\tau^e)(\Lambda_\pi)[/math] that we are interested in can be theoretically computed in terms of [math]\pi[/math], but the combinatorics is quite non-trivial. As explained in [1], some simplifications appear in the symmetric case, [math]\pi=\pi^\circ[/math]. Indeed, for such partitions we can use the following decomposition result:

Proposition 8.20

Each symmetric partition [math]\pi\in P_{even}(2s,2s)[/math] has a finest symmetric decomposition [math]\pi=[\pi_1,\ldots,\pi_R][/math], with the components [math]\pi_t[/math] being of two types, as follows:

  • Symmetric blocks of [math]\pi[/math]. Such a block must have [math]r+r[/math] matching upper legs and [math]v+v[/math] matching lower legs, with [math]r+v \gt 0[/math].
  • Unions [math]\beta\sqcup\beta^\circ[/math] of asymmetric blocks of [math]\pi[/math]. Here [math]\beta[/math] must have [math]r+u[/math] unmatching upper legs and [math]v+w[/math] unmatching lower legs, with [math]r+u+v+w \gt 0[/math].


Proof

Consider indeed the block decomposition of our partition, [math]\pi=[\beta_1,\ldots,\beta_T][/math]. Then [math][\beta_1,\ldots,\beta_T]=[\beta_1^\circ,\ldots,\beta_T^\circ][/math], so each block [math]\beta\in\pi[/math] is either symmetric, [math]\beta=\beta^\circ[/math], or is asymmetric, and disjoint from [math]\beta^\circ[/math], which must be a block of [math]\pi[/math] too. The result follows.

The idea will be that of decomposing over the components of [math]\pi[/math]. First, we have:

Proposition 8.21

For the pairing [math]\eta\in P_{even}(2s,2s)[/math] having horizontal strings,

[[math]] \eta=\begin{bmatrix} a&b&c&\ldots&a&b&c&\ldots\\ \alpha&\beta&\gamma&\ldots&\alpha&\beta&\gamma&\ldots \end{bmatrix} [[/math]]
we have [math](M_\sigma\otimes M_\tau)(\Lambda_\eta)=1[/math], for any [math]p\in\mathbb N[/math], and any [math]\sigma,\tau\in NC(p)[/math].


Proof

As a first observation, the result holds at [math]s=1[/math], due to the computations in the proof of Proposition 8.16. In general, by using Proposition 8.19, we obtain:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\eta) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^s}\sum_{j_1^1\ldots j_p^s}\delta_{i_1^1i_{\sigma(1)}^1}\ldots\delta_{i_1^si_{\sigma(1)}^s}\cdot\delta_{j_1^1j_{\tau(1)}^1}\ldots\delta_{j_1^sj_{\tau(1)}^s}\\ &&\hskip52mm\vdots\\ &&\hskip30mm\delta_{i_p^1i_{\sigma(p)}^1}\ldots\delta_{i_p^si_{\sigma(p)}^s}\cdot\delta_{j_p^1j_{\tau(p)}^1}\ldots\delta_{j_p^sj_{\tau(p)}^s} \end{eqnarray*} [[/math]]


By transposing the two [math]p\times s[/math] matrices of Kronecker symbols, we obtain:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\eta) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^1}\sum_{j_1^1\ldots j_p^1}\delta_{i_1^1i_{\sigma(1)}^1}\ldots\delta_{i_p^1i_{\sigma(p)}^1}\cdot\delta_{j_1^1j_{\tau(1)}^1}\ldots\delta_{j_p^1j_{\tau(p)}^1}\\ &&\hskip52mm\vdots\\ &&\hskip13.5mm\sum_{i_1^s\ldots i_p^s}\sum_{j_1^s\ldots j_p^s}\delta_{i_1^si_{\sigma(1)}^s}\ldots\delta_{i_p^si_{\sigma(p)}^s}\cdot\delta_{j_1^sj_{\tau(1)}^s}\ldots\delta_{j_p^sj_{\tau(p)}^s} \end{eqnarray*} [[/math]]


We can now perform all the sums, and we obtain in this way:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\eta) =\frac{1}{n^{|\sigma|+|\tau|}}(N^{|\sigma|}N^{|\tau|})^s=1 [[/math]]

Thus, the formula in the statement holds indeed.
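
As an illustration, this can be checked numerically at [math]s=1[/math], by reusing the [math]\Lambda_\pi[/math] and moment sketches above, with [math]\eta[/math] consisting there of the upper string [math]\{0,1\}[/math] and the lower string [math]\{2,3\}[/math], in our [math]0[/math]-indexed conventions:

<syntaxhighlight lang="python">
# Quick check of the above statement at s=1, N=2, reusing Lambda and star_moment from above.
eta = [frozenset({0, 1}), frozenset({2, 3})]
L_eta = Lambda(eta, 2, 1)
for sigma in ([0, 1], [1, 0]):      # the two elements of NC(2), as permutations
    for tau in ([0, 1], [1, 0]):
        assert abs(star_moment([L_eta, L_eta], sigma, tau, 2) - 1.0) < 1e-9
</syntaxhighlight>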

We can now perform the decomposition over the components, as follows:

Theorem 8.22

Assuming that [math]\pi\in P_{even}(2s,2s)[/math] is symmetric, [math]\pi=\pi^\circ[/math], we have

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=\prod_{t=1}^R(M_\sigma\otimes M_\tau)(\Lambda_{\pi_t}) [[/math]]
whenever [math]\pi=[\pi_1,\ldots,\pi_R][/math] is a decomposition into symmetric subpartitions, with each [math]\pi_t[/math] being completed with horizontal strings, coming from the standard pairing [math]\eta[/math].


Proof

We use the general formula in Proposition 8.19. In the symmetric case the various [math]e_x[/math] exponents disappear, and we can write the formula there as follows:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi) =\frac{1}{n^{|\sigma|+|\tau|}}\#\left\{i,j\Big|\ker\begin{pmatrix}i_x^1&\ldots&i_x^s&i_{\sigma(x)}^1&\ldots&i_{\sigma(x)}^s\\ j_x^1&\ldots&j_x^s&j_{\tau(x)}^1&\ldots&j_{\tau(x)}^s\end{pmatrix}\leq\pi,\forall x\right\} [[/math]]


The point now is that in this formula, the number of double arrays [math][ij][/math] that we are counting naturally decomposes over the subpartitions [math]\pi_t[/math]. Thus, we have a formula of the following type, with [math]K[/math] being a certain normalization constant:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=K\prod_{t=1}^R(M_\sigma\otimes M_\tau)(\Lambda_{\pi_t}) [[/math]]


Regarding now the precise value of [math]K[/math], our claim is that this is given by:

[[math]] K=\frac{n^{(|\sigma|+|\tau|)R}}{n^{|\sigma|+|\tau|}}\cdot\frac{1}{n^{(|\sigma|+|\tau|)(R-1)}}=1 [[/math]]


Indeed, the fraction on the left comes from the standard [math]\frac{1}{n^{|\sigma|+|\tau|}}[/math] normalizations of all the [math](M_\sigma\otimes M_\tau)(\Lambda)[/math] quantities involved. As for the term on the right, this comes from the contribution of the horizontal strings, which altogether contribute as the strings of the standard pairing [math]\eta\in P_{even}(2s,2s)[/math], counted [math]R-1[/math] times. But, according to Proposition 8.21, the strings of [math]\eta[/math] contribute with an [math]n^{|\sigma|+|\tau|}[/math] factor, and this gives the result.

Summarizing, in the easy case we are led to the study of the partitions [math]\pi\in P_{even}(2s,2s)[/math] which are symmetric, and we have so far a decomposition formula for them.


Let us keep building on the material developed above. Our purpose will be that of converting Theorem 8.22 into an explicit formula, that we can use later on. For this, we have to compute the contributions of the components. First, we have:

Proposition 8.23

For a symmetric partition [math]\pi\in P_{even}(2s,2s)[/math], consisting of one symmetric block, completed with horizontal strings, we have

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{|\lambda|-r|\sigma|-v|\tau|} [[/math]]
where [math]\lambda\in P(p)[/math] is a partition constructed as follows,

[[math]] \lambda=\begin{cases} \sigma\wedge\tau&{\rm if}\ r,v\geq1\\ \sigma&{\rm if}\ r\geq1,v=0\\ \tau&{\rm if}\ r=0,v\geq1 \end{cases} [[/math]]
and where [math]r/v[/math] is half of the number of upper/lower legs of the symmetric block.


Proof

Let us denote by [math]a_1,\ldots,a_r[/math] and [math]b_1,\ldots,b_v[/math] the upper and lower legs of the symmetric block, appearing at left, and by [math]A_1,\ldots,A_{s-r}[/math] and [math]B_1,\ldots,B_{s-v}[/math] the remaining legs, appearing at left as well. With this convention, Proposition 8.19 gives:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^s}\sum_{j_1^1\ldots j_p^s}\prod_x\delta_{i_x^{a_1}\ldots i_x^{a_r}i_{\sigma(x)}^{a_1}\ldots i_{\sigma(x)}^{a_r}j_x^{b_1}\ldots j_x^{b_v}j_{\tau(x)}^{b_1}\ldots j_{\tau(x)}^{b_v}}\\ &&\hskip37mm\delta_{i_x^{A_1}i_{\sigma(x)}^{A_1}}\ldots\ldots\delta_{i_x^{A_{s-r}}i_{\sigma(x)}^{A_{s-r}}}\\ &&\hskip37mm\delta_{j_x^{B_1}j_{\tau(x)}^{B_1}}\ldots\ldots\delta_{j_x^{B_{s-v}}j_{\tau(x)}^{B_{s-v}}} \end{eqnarray*} [[/math]]


If we denote by [math]k_1,\ldots,k_p[/math] the common values of the indices affected by the long Kronecker symbols, coming from the symmetric block, we have then:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{k_1\ldots k_p}\\ &&\sum_{i_1^1\ldots i_p^s}\prod_x\delta_{i_x^{a_1}\ldots i_x^{a_r}i_{\sigma(x)}^{a_1}\ldots i_{\sigma(x)}^{a_r}k_x}\cdot\delta_{i_x^{A_1}i_{\sigma(x)}^{A_1}}\ldots\delta_{i_x^{A_{s-r}}i_{\sigma(x)}^{A_{s-r}}}\\ &&\sum_{j_1^1\ldots j_p^s}\prod_x\delta_{j_x^{b_1}\ldots j_x^{b_v}j_{\tau(x)}^{b_1}\ldots j_{\tau(x)}^{b_v}k_x}\cdot\delta_{j_x^{B_1}j_{\tau(x)}^{B_1}}\ldots\delta_{j_x^{B_{s-v}}j_{\tau(x)}^{B_{s-v}}} \end{eqnarray*} [[/math]]


Let us compute now the contributions of the various [math]i,j[/math] indices involved. If we regard both [math]i,j[/math] as being [math]p\times s[/math] arrays of indices, the situation is as follows:

-- On the [math]a_1,\ldots,a_r[/math] columns of [math]i[/math], the equations are [math]i_x^{a_e}=i_{\sigma(x)}^{a_e}=k_x[/math] for any [math]e,x[/math]. Thus when [math]r\neq0[/math] we must have [math]\ker k\leq\sigma[/math], in order to have solutions, and if this condition is satisfied, the solution is unique. As for the case [math]r=0[/math], here there is no special condition to be satisfied by [math]k[/math], and we have once again a unique solution.

-- On the [math]A_1,\ldots,A_{s-r}[/math] columns of [math]i[/math], the conditions on the indices are the “trivial” ones, examined in the proof of Proposition 8.21. According to the computation there, the total contribution coming from these indices is [math](N^{|\sigma|})^{s-r}=N^{(s-r)|\sigma|}[/math].

-- Regarding now [math]j[/math], the situation is similar, with a unique solution coming from the [math]b_1,\ldots,b_v[/math] columns, provided that the condition [math]\ker k\leq\tau[/math] is satisfied at [math]v\neq0[/math], and with a total [math]N^{(s-v)|\tau|}[/math] contribution coming from the [math]B_1,\ldots,B_{s-v}[/math] columns.

As a conclusion, in order to have solutions [math]i,j[/math], we are led to the condition [math]\ker k\leq\lambda[/math], where [math]\lambda\in\{\sigma\wedge\tau,\sigma,\tau\}[/math] is the partition constructed in the statement. Now by putting everything together, we deduce that we have the following formula:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{\ker k\leq\lambda}N^{(s-r)|\sigma|+(s-v)|\tau|}\\ &=&N^{-s|\sigma|-s|\tau|}N^{|\lambda|}N^{(s-r)|\sigma|+(s-v)|\tau|}\\ &=&N^{|\lambda|-r|\sigma|-v|\tau|} \end{eqnarray*} [[/math]]


Thus, we have obtained the formula in the statement, and we are done.
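
For instance, at [math]s=1[/math], for the [math]4[/math]-block in [math]P_{even}(2,2)[/math], viewed as a single symmetric block with [math]r=v=1[/math], and with no completion by horizontal strings being needed, we obtain [math]\lambda=\sigma\wedge\tau[/math], and the above formula reads:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{|\sigma\wedge\tau|-|\sigma|-|\tau|} [[/math]]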

In the two-block case now, we have a similar result, as follows:

Proposition 8.24

For a symmetric partition [math]\pi\in P_{even}(2s,2s)[/math], consisting of a symmetric union [math]\beta\sqcup\beta^\circ[/math] of two asymmetric blocks, completed with horizontal strings, we have

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{|\lambda|-(r+u)|\sigma|-(v+w)|\tau|} [[/math]]
where [math]r+u[/math] and [math]v+w[/math] represent the number of upper and lower legs of [math]\beta[/math], and where [math]\lambda\in P(p)[/math] is a partition constructed according to the following table,

[[math]] \begin{matrix} ru\backslash vw&&11&10&01&00\\ \\ 11&&\sigma^2\wedge\sigma\tau\wedge\sigma\tau^{-1}&\sigma^2\wedge\sigma\tau^{-1}&\sigma^2\wedge\sigma\tau&\sigma^2\\ 10&&\sigma\tau\wedge\sigma\tau^{-1}&\sigma\tau^{-1}&\sigma\tau&\emptyset\\ 01&&\tau\sigma\wedge\tau^2&\tau\sigma&\tau^{-1}\sigma&\emptyset\\ 00&&\tau^2&\emptyset&\emptyset&- \end{matrix} [[/math]]
with the [math]1/0[/math] indexing symbols standing for the positivity/nullness of the corresponding variables [math]r,u,v,w[/math], and where [math]\emptyset[/math] denotes a formal partition, having [math]0[/math] blocks.


Proof

Let us denote by [math]a_1,\ldots,a_r[/math] and [math]c_1,\ldots,c_u[/math] the upper legs of [math]\beta[/math], by [math]b_1,\ldots,b_v[/math] and [math]d_1,\ldots,d_w[/math] the lower legs of [math]\beta[/math], and by [math]A_1,\ldots,A_{s-r-u}[/math] and [math]B_1,\ldots,B_{s-v-w}[/math] the remaining legs of [math]\pi[/math], not belonging to [math]\beta\sqcup\beta^\circ[/math]. The formula in Proposition 8.19 gives:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}\sum_{i_1^1\ldots i_p^s}\sum_{j_1^1\ldots j_p^s}\prod_x\delta_{i_x^{a_1}\ldots i_x^{a_r}i_{\sigma(x)}^{c_1}\ldots i_{\sigma(x)}^{c_u}j_x^{b_1}\ldots j_x^{b_v}j_{\tau(x)}^{d_1}\ldots j_{\tau(x)}^{d_w}}\\ &&\hskip37mm\delta_{i_x^{c_1}\ldots i_x^{c_u}i_{\sigma(x)}^{a_1}\ldots i_{\sigma(x)}^{a_r}j_x^{d_1}\ldots j_x^{d_w}j_{\tau(x)}^{b_1}\ldots j_{\tau(x)}^{b_v}}\\ &&\hskip37mm\delta_{i_x^{A_1}i_{\sigma(x)}^{A_1}}\ldots\ldots\delta_{i_x^{A_{s-r-u}}i_{\sigma(x)}^{A_{s-r-u}}}\\ &&\hskip37mm\delta_{j_x^{B_1}j_{\tau(x)}^{B_1}}\ldots\ldots\delta_{j_x^{B_{s-v-w}}j_{\tau(x)}^{B_{s-v-w}}} \end{eqnarray*} [[/math]]


We have now two long Kronecker symbols, coming from [math]\beta\sqcup\beta^\circ[/math], and if we denote by [math]k_1,\ldots,k_p[/math] and [math]l_1,\ldots,l_p[/math] the values of the indices affected by them, we obtain:

[[math]] \begin{eqnarray*} &&(M_\sigma\otimes M_\tau)(\Lambda_\pi) =\frac{1}{n^{|\sigma|+|\tau|}}\sum_{k_1\ldots k_p}\sum_{l_1\ldots l_p}\\ &&\hskip20mm\sum_{i_1^1\ldots i_p^s}\prod_x\delta_{i_x^{a_1}\ldots i_x^{a_r}i_{\sigma(x)}^{c_1}\ldots i_{\sigma(x)}^{c_u}k_x}\cdot\delta_{i_x^{c_1}\ldots i_x^{c_u}i_{\sigma(x)}^{a_1}\ldots i_{\sigma(x)}^{a_r}l_x}\cdot\delta_{i_x^{A_1}i_{\sigma(x)}^{A_1}}\ldots\delta_{i_x^{A_{s-r-u}}i_{\sigma(x)}^{A_{s-r-u}}}\\ &&\hskip20mm\sum_{j_1^1\ldots j_p^s}\prod_x\delta_{j_x^{b_1}\ldots j_x^{b_v}j_{\tau(x)}^{d_1}\ldots j_{\tau(x)}^{d_w}k_x}\cdot\delta_{j_x^{d_1}\ldots j_x^{d_w}j_{\tau(x)}^{b_1}\ldots j_{\tau(x)}^{b_v}l_x}\cdot\delta_{j_x^{B_1}j_{\tau(x)}^{B_1}}\ldots\delta_{j_x^{B_{s-v-w}}j_{\tau(x)}^{B_{s-v-w}}} \end{eqnarray*} [[/math]]


Let us compute now the contributions of the various [math]i,j[/math] indices. On the [math]a_1,\ldots,a_r[/math] and [math]c_1,\ldots,c_u[/math] columns of [math]i[/math], regarded as a [math]p\times s[/math] array, the equations are as follows:

[[math]] i_x^{a_e}=i_{\sigma(x)}^{c_f}=k_x\quad,\quad i_x^{c_f}=i_{\sigma(x)}^{a_e}=l_x [[/math]]


If we denote by [math]i_x[/math] the common value of the [math]i_x^{a_e}[/math] indices, when [math]e[/math] varies, and by [math]I_x[/math] the common value of the [math]i_x^{c_f}[/math] indices, when [math]f[/math] varies, these equations simply become:

[[math]] i_x=I_{\sigma(x)}=k_x\quad,\quad I_x=i_{\sigma(x)}=l_x [[/math]]


Thus we have 0 or 1 solutions. To be more precise, depending now on the positivity/nullness of the parameters [math]r,u[/math], we are led to 4 cases, as follows:

Case 11. Here [math]r,u\geq1[/math], and we must have [math]k_x=l_{\sigma(x)},k_{\sigma(x)}=l_x[/math].

Case 10. Here [math]r\geq1,u=0[/math], and we must have [math]k_{\sigma(x)}=l_x[/math].

Case 01. Here [math]r=0,u\geq1[/math], and we must have [math]k_x=l_{\sigma(x)}[/math].

Case 00. Here [math]r=u=0[/math], and there is no condition on [math]k,l[/math].

In what regards now the [math]A_1,\ldots,A_{s-r-u}[/math] columns of [math]i[/math], the conditions on the indices are the “trivial” ones, examined in the proof of Proposition 8.21. According to the computation there, the total contribution coming from these indices is:

[[math]] C_i=(N^{|\sigma|})^{s-r-u}=N^{(s-r-u)|\sigma|} [[/math]]


The study for the [math]j[/math] indices is similar, and we will only record here the final conclusions. First, in what regards the [math]b_1,\ldots,b_v[/math] and [math]d_1,\ldots,d_w[/math] columns of [math]j[/math], the same discussion as above applies, and we have once again 0 or 1 solutions, as follows:

Case 11'. Here [math]v,w\geq1[/math], and we must have [math]k_x=l_{\tau(x)},k_{\tau(x)}=l_x[/math].

Case 10'. Here [math]v\geq1,w=0[/math], and we must have [math]k_{\tau(x)}=l_x[/math].

Case 01'. Here [math]v=0,w\geq1[/math], and we must have [math]k_x=l_{\tau(x)}[/math].

Case 00'. Here [math]v=w=0[/math], and there is no condition on [math]k,l[/math].

As for the [math]B_1,\ldots,B_{s-v-w}[/math] columns of [math]j[/math], the conditions on the indices here are “trivial”, as in Proposition 8.21, and the total contribution coming from these indices is:

[[math]] C_j=(N^{|\tau|})^{s-v-w}=N^{(s-v-w)|\tau|} [[/math]]


Let us put now everything together. First, we must merge the conditions on [math]k,l[/math] found in the cases 00-11 above with those found in the cases 00'-11'. There are [math]4\times4=16[/math] computations to be performed here, and the “generic” computation, corresponding to the merger of case 11 with the case 11', is as follows:

[[math]] \begin{eqnarray*} &&k_x=l_{\sigma(x)},k_{\sigma(x)}=l_x,k_x=l_{\tau(x)},k_{\tau(x)}=l_x\\ &\iff&l_x=k_{\sigma(x)},k_x=l_{\sigma(x)},k_x=l_{\tau(x)},k_x=l_{\tau^{-1}(x)}\\ &\iff&l_x=k_{\sigma(x)},k_x=k_{\sigma^2(x)}=k_{\sigma\tau(x)}=k_{\sigma\tau^{-1}(x)} \end{eqnarray*} [[/math]]


Thus in this case [math]l[/math] is uniquely determined by [math]k[/math], and [math]k[/math] itself must satisfy:

[[math]] \ker k\leq\sigma^2\wedge\sigma\tau\wedge\sigma\tau^{-1} [[/math]]


We conclude that the total contribution of the [math]k,l[/math] indices in this case is:

[[math]] C_{kl}^{11,11}=N^{|\sigma^2\wedge\sigma\tau\wedge\sigma\tau^{-1}|} [[/math]]


In the remaining 15 cases the computations are similar, with some of the above 4 conditions that we started with disappearing. The conclusion is that the total contribution of the [math]k,l[/math] indices is as follows, with [math]\lambda[/math] being the partition in the statement:

[[math]] C_{kl}=N^{|\lambda|} [[/math]]


With this result in hand, we can now finish our computation, as follows:

[[math]] \begin{eqnarray*} (M_\sigma\otimes M_\tau)(\Lambda_\pi) &=&\frac{1}{n^{|\sigma|+|\tau|}}C_{kl}C_iC_j\\ &=&N^{|\lambda|-(r+u)|\sigma|-(v+w)|\tau|} \end{eqnarray*} [[/math]]


Thus, we have obtained the formula in the statement, and we are done.
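
For instance, when [math]\beta[/math] has [math]r=u=1[/math] and [math]v=w=0[/math], that is, one upper leg at left, one upper leg at right, and no lower legs, the table gives [math]\lambda=\sigma^2[/math], and the above formula reads:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{|\sigma^2|-2|\sigma|} [[/math]]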

As a conclusion now to all this, we have the following result:

Theorem 8.25

For a symmetric partition [math]\pi\in P_{even}(2s,2s)[/math], having only one component, in the sense of Proposition 8.20, completed with horizontal strings, we have

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{|\lambda|-r|\sigma|-v|\tau|} [[/math]]
where [math]\lambda\in P(p)[/math] is the partition constructed as in Proposition 8.23 and Proposition 8.24, and where [math]r/v[/math] is half of the total number of upper/lower legs of the component.


Proof

This follows indeed from Proposition 8.23 and Proposition 8.24.

Generally speaking, the formula that we found in Theorem 8.25 does not lead to the multiplicativity condition from Definition 8.11, and this is due to the fact that the various partitions [math]\lambda\in P(p)[/math] constructed in Proposition 8.24 have in general quite complicated combinatorics. To be more precise, we first have the following result:

Proposition 8.26

For a symmetric partition [math]\pi\in P_{even}(2s,2s)[/math] we have

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{f_1+f_2} [[/math]]
where [math]f_1,f_2[/math] are respectively linear combinations of the following quantities:

  • [math]1,|\sigma|,|\tau|,|\sigma\wedge\tau|,|\sigma\tau|,|\sigma\tau^{-1}|,|\tau\sigma|,|\tau^{-1}\sigma|[/math].
  • [math]|\sigma^2|,|\tau^2|,|\sigma^2\wedge\sigma\tau|,|\sigma^2\wedge\sigma\tau^{-1}|,|\tau\sigma\wedge\tau^2|,|\sigma\tau\wedge\sigma\tau^{-1}|,|\sigma^2\wedge\sigma\tau\wedge\sigma\tau^{-1}|[/math].


Proof

This follows indeed by combining Theorem 8.22 and Theorem 8.25, with concrete input from Proposition 8.23 and Proposition 8.24.

In the above result, the partitions in (1) lead to the multiplicativity condition in Definition 8.11, and so to compound free Poisson laws, via Theorem 8.14. However, the partitions in (2) have more complicated combinatorics, which fits neither with Definition 8.11, nor with the finer multiplicativity notions introduced in [1].


Summarizing, in order to extend the 4 basic computations that we have, we must fine-tune our formalism. A natural answer here comes from the following result:

Proposition 8.27

For a partition [math]\pi\in P(2s,2s)[/math], the following are equivalent:

  • [math]\varphi_\pi[/math] is unital modulo scalars, i.e. [math]\varphi_\pi(1)=c1[/math], with [math]c\in\mathbb C[/math].
  • [math][^\mu_\pi]=\mu[/math], where [math]\mu\in P(0,2s)[/math] is the pairing connecting [math]\{i\}-\{i+s\}[/math], and where [math][^\mu_\pi]\in P(0,2s)[/math] is the partition obtained by putting [math]\mu[/math] on top of [math]\pi[/math].

In addition, these conditions are satisfied for the [math]4[/math] partitions in [math]P_{even}(2,2)[/math].


Proof

We use the formula of [math]\varphi_\pi[/math] from Definition 8.6, namely:

[[math]] \varphi_\pi(e_{a_1\ldots a_s,c_1\ldots c_s})=\sum_{b_1\ldots b_s}\sum_{d_1\ldots d_s}\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&c_1&\ldots&c_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}e_{b_1\ldots b_s,d_1\ldots d_s} [[/math]]


By summing over indices [math]a_i=c_i[/math], we obtain the following formula:

[[math]] \varphi_\pi(1)=\sum_{a_1\ldots a_s}\sum_{b_1\ldots b_s}\sum_{d_1\ldots d_s} \delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}e_{b_1\ldots b_s,d_1\ldots d_s} [[/math]]


Let us first find out when [math]\varphi_\pi(1)[/math] is diagonal. In order for this condition to hold, the off-diagonal terms of [math]\varphi_\pi(1)[/math] must all vanish, and so we must have:

[[math]] b\neq d\implies\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}=0,\forall a [[/math]]


Our claim is that for any [math]\pi\in P(2s,2s)[/math] we have the following formula:

[[math]] \sup_{a_1\ldots a_s}\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}=\delta_{[^\mu_\pi]}\begin{pmatrix}b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix} [[/math]]


Indeed, each of the terms of the sup on the left is smaller than the quantity on the right, so [math]\leq[/math] holds. Also, assuming [math]\delta_{[^\mu_\pi]}(bd)=1[/math], we can take [math]a_1,\ldots,a_s[/math] to be the indices appearing on the strings of [math]\mu[/math], and we obtain the following formula:

[[math]] \delta_\pi\begin{pmatrix}a&a\\ b&d\end{pmatrix}=1 [[/math]]


Thus, we have equality. Now with this equality in hand, we conclude that we have:

[[math]] \begin{eqnarray*} &&\varphi_\pi(1)=\varphi_\pi(1)^\delta\\ &\iff&\delta_{[^\mu_\pi]}\begin{pmatrix}b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}=0,\forall b\neq d\\ &\iff&\delta_{[^\mu_\pi]}\begin{pmatrix}b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix}\leq\delta_\mu\begin{pmatrix}b_1&\ldots&b_s&d_1&\ldots&d_s\end{pmatrix},\forall b,d\\ &\iff&\begin{bmatrix}\mu\\ \pi\end{bmatrix}\leq\mu \end{eqnarray*} [[/math]]


Let us investigate now when (1) holds. We already know that [math]\pi[/math] must satisfy [math][^\mu_\pi]\leq\mu[/math], and the remaining conditions, concerning the diagonal terms, are as follows:

[[math]] \sum_{a_1\ldots a_s}\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ b_1&\ldots&b_s&b_1&\ldots&b_s\end{pmatrix}=c,\forall b [[/math]]


As a first observation, the quantity on the left is a decreasing function of [math]\lambda=\ker b[/math]. Now in order for this decreasing function to be constant, we must have:

[[math]] \sum_{a_1\ldots a_s}\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ 1&\ldots&s&1&\ldots&s\end{pmatrix}=\sum_{a_1\ldots a_s}\delta_\pi\begin{pmatrix}a_1&\ldots&a_s&a_1&\ldots&a_s\\ 1&\ldots&1&1&\ldots&1\end{pmatrix} [[/math]]


We conclude that the condition [math][^\mu_\pi]\leq\mu[/math] must be strengthened into [math][^\mu_\pi]=\mu[/math], as claimed. Finally, the last assertion is clear, by using either (1) or (2).
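
As an illustration, the last assertion can be verified numerically at [math]s=1[/math], with our usual [math]0[/math]-indexed conventions, the upper legs [math]0,1[/math] carrying [math]a_1,c_1[/math] and the lower legs [math]2,3[/math] carrying [math]b_1,d_1[/math]:

<syntaxhighlight lang="python">
# Standalone check that phi_pi(1) is scalar, for the 4 partitions of P_even(2,2).
import numpy as np

N = 3

def delta(pi, vals):
    return 1 if all(len({vals[p] for p in block}) == 1 for block in pi) else 0

def phi_of_one(pi):
    """phi_pi(1)_{bd} = sum_a delta_pi(a,a;b,d), as in the formula above."""
    out = np.zeros((N, N))
    for a in range(N):
        for b in range(N):
            for d in range(N):
                out[b, d] += delta(pi, [a, a, b, d])
    return out

four_partitions = [
    [frozenset({0, 1, 2, 3})],                # the 4-block
    [frozenset({0, 1}), frozenset({2, 3})],   # the two horizontal strings
    [frozenset({0, 2}), frozenset({1, 3})],   # the two vertical strings
    [frozenset({0, 3}), frozenset({1, 2})],   # the two crossing strings
]
for pi in four_partitions:
    M = phi_of_one(pi)
    assert np.allclose(M, M[0, 0] * np.eye(N))
</syntaxhighlight>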

In the symmetric case, [math]\pi=\pi^\circ[/math], we have the following result:

Proposition 8.28

Given a partition [math]\pi\in P(2s,2s)[/math] which is symmetric, [math]\varphi_\pi[/math] is unital modulo scalars precisely when each of its symmetric components is of one of the following types,

  • Symmetric blocks with [math]v\leq 1[/math],
  • Unions of asymmetric blocks with [math]r+u=0,v+w=1[/math],
  • Unions of asymmetric blocks with [math]r+u\geq1,v+w\leq1[/math],

with the conventions from Proposition 8.20 for the values of [math]r,u,v,w[/math].


Proof

This follows from what we have, the idea being as follows:


-- We know from Proposition 8.27 that the condition in the statement is equivalent to [math][^\mu_\pi]=\mu[/math], and we can see from this that [math]\pi[/math] satisfies the condition if and only if all the symmetric components of [math]\pi[/math] satisfy the condition. Thus, we must simply check the validity of [math][^\mu_\pi]=\mu[/math] for the partitions in Proposition 8.20, and this gives the result.


-- To be more precise, for the 1-block components the study is trivial, and we are led to (1). Regarding the 2-block components, in the case [math]r+u=0[/math] we must have [math]v+w=1[/math], as stated in (2). Finally, assuming [math]r+u\geq1[/math], when constructing [math][^\mu_\pi][/math] all the legs on the bottom will become connected, and so we must have [math]v+w\leq1[/math], as stated in (3).

Summarizing, the condition that [math]\varphi_\pi[/math] is unital modulo scalars is a natural generalization of what happens for the 4 basic partitions in [math]P_{even}(2,2)[/math], and in the symmetric case, we have a good understanding of such partitions. However, the associated matrices [math]\Lambda_\pi[/math] still fail to be multiplicative, and we must come up with a second condition, coming from:

Theorem 8.29

If [math]\pi\in P(2s,2s)[/math] is symmetric, the following are equivalent:

  • The linear maps [math]\varphi_\pi,\varphi_{\pi^*}[/math] are both unital modulo scalars.
  • The symmetric components have [math]\leq2[/math] upper legs, and [math]\leq2[/math] lower legs.
  • The symmetric components appear as copies of the [math]4[/math] elements of [math]P_{even}(2,2)[/math].


Proof

By applying the results in Proposition 8.28 to the partitions [math]\pi,\pi^*[/math], and by merging these results, we conclude that the equivalence [math](1)\iff(2)[/math] holds indeed. As for the equivalence [math](2)\iff(3)[/math], this is clear from definitions.

Let us put now everything together. The idea will be that of using the partitions found in Theorem 8.29 as an input for Proposition 8.26, and then for the general block-modification machinery developed in the beginning of this chapter. We will need:

Proposition 8.30

The following functions [math]\varphi:NC(p)\times NC(p)\to\mathbb R[/math] are multiplicative, in the sense that they satisfy the condition [math]\varphi(\sigma,\gamma)=\varphi(\sigma,\sigma)[/math]:

  • [math]\varphi(\sigma,\tau)=|\tau\sigma|-|\tau|[/math].
  • [math]\varphi(\sigma,\tau)=|\tau^{-1}\sigma|-|\tau|[/math].


Proof

This follows from some standard combinatorics, the idea being as follows:


(1) We can use here the well-known fact, explained in chapter 7, that the numbers [math]|\gamma\sigma|-1[/math] and [math]|\sigma^2|-|\sigma|[/math] are equal, both counting the number of blocks of [math]\sigma[/math] having even size. Thus we have the following computation, which gives the result:

[[math]] \varphi_1(\sigma,\gamma)=|\gamma\sigma|-1=|\sigma^2|-|\sigma|=\varphi_1(\sigma,\sigma) [[/math]]


(2) Here we can use the well-known formula [math]|\sigma\gamma^{-1}|-1=p-|\sigma|[/math], and the fact that [math]\sigma\gamma^{-1},\gamma^{-1}\sigma[/math] have the same cycle structure as the left and right Kreweras complements of [math]\sigma[/math], and so have the same number of blocks. Thus we have the following computation:

[[math]] \varphi_2(\sigma,\gamma)=|\gamma^{-1}\sigma|-1=p-|\sigma|=\varphi_2(\sigma,\sigma) [[/math]]


But this gives the second formula in the statement, and we are done.
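
Both identities can also be checked by brute force, for small [math]p[/math], by enumerating the noncrossing partitions, viewing each of them as the permutation whose cycles are the blocks, traversed in increasing order, and with [math]\gamma[/math] taken to be the full cycle, as in the computations above. A standalone sketch, in our own encoding:

<syntaxhighlight lang="python">
# Brute-force check of |gamma.sigma| - 1 = |sigma^2| - |sigma| and
# |gamma^{-1}.sigma| - 1 = p - |sigma|, over NC(p) for small p.
import itertools

def set_partitions(p):
    """All partitions of {0,...,p-1}, as lists of blocks."""
    if p == 0:
        yield []
        return
    for smaller in set_partitions(p - 1):
        for k in range(len(smaller)):
            yield smaller[:k] + [smaller[k] + [p - 1]] + smaller[k + 1:]
        yield smaller + [[p - 1]]

def noncrossing(blocks):
    for A, B in itertools.combinations(blocks, 2):
        for a1, a2 in itertools.combinations(sorted(A), 2):
            if any(a1 < b < a2 for b in B) and any(b < a1 or b > a2 for b in B):
                return False
    return True

def perm_of(blocks, p):
    """The permutation having the blocks as cycles, each traversed in increasing order."""
    perm = list(range(p))
    for block in blocks:
        b = sorted(block)
        for x, y in zip(b, b[1:] + b[:1]):
            perm[x] = y
    return perm

def compose(f, g):                  # (f o g)(x) = f(g(x))
    return [f[g[x]] for x in range(len(f))]

def inverse(f):
    inv = [0] * len(f)
    for x, y in enumerate(f):
        inv[y] = x
    return inv

def cycles(perm):
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

for p in range(1, 6):
    gamma = perm_of([list(range(p))], p)        # the full cycle 0 -> 1 -> ... -> p-1 -> 0
    for blocks in set_partitions(p):
        if not noncrossing(blocks):
            continue
        s = perm_of(blocks, p)
        assert cycles(compose(gamma, s)) - 1 == cycles(compose(s, s)) - cycles(s)
        assert cycles(compose(inverse(gamma), s)) - 1 == p - cycles(s)
</syntaxhighlight>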

We can now formulate our main multiplicativity result, as follows:

Proposition 8.31

Assuming that [math]\pi\in P_{even}(2s,2s)[/math] is symmetric, [math]\pi=\pi^\circ[/math], and is such that [math]\varphi_\pi,\varphi_{\pi^*}[/math] are unital modulo scalars, we have a formula of the following type:

[[math]] (M_\sigma\otimes M_\tau)(\Lambda_\pi)=N^{a+b|\sigma|+c|\tau|+d|\sigma\wedge\tau|+e|\sigma\tau|+f|\sigma\tau^{-1}|+g|\tau\sigma|+h|\tau^{-1}\sigma|} [[/math]]
Moreover, the square matrix [math]\Lambda_\pi[/math] is multiplicative, in the sense of Definition 8.11.


Proof

The first assertion follows from Proposition 8.26. Indeed, according to the various results in Theorem 8.29, the list of partitions appearing in Proposition 8.26 (2) disappears in the case where both [math]\varphi_\pi,\varphi_{\pi^*}[/math] are unital modulo scalars, and this gives the result. As for the second assertion, this follows from the formula in the statement, and from the various results in Proposition 8.15 and Proposition 8.30.

As a main consequence, Theorem 8.14 applies, and gives:

Theorem 8.32

Given a partition [math]\pi\in P_{even}(2s,2s)[/math] which is symmetric, [math]\pi=\pi^\circ[/math], and which is such that [math]\varphi_\pi,\varphi_{\pi^*}[/math] are unital modulo scalars, for the corresponding block-modified Wishart matrix [math]\widetilde{W}=(id\otimes\varphi_\pi)W[/math] we have the asymptotic convergence formula

[[math]] m\widetilde{W}\sim\pi_{mn\rho} [[/math]]
in [math]*[/math]-moments, in the [math]d\to\infty[/math] limit, where [math]\rho=law(\Lambda_\pi)[/math].


Proof

This follows by putting together the results that we have. Indeed, due to Proposition 8.31, Theorem 8.14 applies, and gives the convergence result.

Summarizing, we have now an explicit block-modification machinery, valid for certain suitable partitions [math]\pi\in P_{even}(2s,2s)[/math], which improves the previous theory from [1].


As a conclusion to all this, the block modification of the complex Wishart matrices leads, somehow out of nothing, to a whole new world, populated by beasts such as the [math]R[/math]-transform, the modified Marchenko-Pastur laws, and many more. It looks like we have opened Pandora's box. We will see however later, in chapters 9-12 below, that this whole new world, called free probability, is in fact not that much different from ours.

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].

References

  1. T. Banica and I. Nechita, Block-modified Wishart matrices and free Poisson laws, Houston J. Math. 41 (2015), 113--134.