Glow computations



11a. Basic results

We discuss here the computation of the glow of the complex Hadamard matrices, as a continuation of the material from chapter 2, where we discussed the basics of the glow in the real case, and as a continuation as well of the material from chapter 10. As a first motivation for all this, we have the Gale-Berlekamp game [1], [2]. Another motivation comes from the questions regarding the bistochastic matrices, in relation with the Idel-Wolf theorem [3], explained in chapter 10. Finally, we have the question of connecting the defect, and other invariants of the Hadamard matrices, to the glow.


Let us begin by reviewing the few theoretical things that we know about the glow, from chapter 10. The main results there can be summarized as follows:

Theorem

The glow of [math]H\in M_N(\mathbb C)[/math], which is the law [math]\mu\in\mathcal P(\mathbb C)[/math] of the excess

[[math]] E=\sum_{ij}H_{ij} [[/math]]
over the Hadamard equivalence class of [math]H[/math], has the following properties:

  • [math]\mu=\varepsilon\times\mu^+[/math], where [math]\mu^+=law(|E|)[/math].
  • [math]\mu[/math] is invariant under rotations.
  • [math]H\in\sqrt{N}U_N[/math] implies [math]supp(\mu)\subset N\sqrt{N}\,\mathbb D[/math].
  • [math]H\in\sqrt{N}U_N[/math] implies as well [math]N\sqrt{N}\,\mathbb T\subset supp(\mu)[/math].


Show Proof

We already know all this from chapter 10, the idea being as follows:


(1) This follows indeed by using [math]H\to zH[/math] with [math]|z|=1[/math].


(2) This follows from (1), the convolution with [math]\varepsilon[/math] bringing the invariance.


(3) This follows indeed from Cauchy-Schwarz.


(4) This is something highly non-trivial, coming from [3].
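As a quick numerical illustration of these properties — our own sketch, not part of the text — the following NumPy code samples the glow of the Fourier matrix [math]F_4[/math] and checks the Cauchy-Schwarz support bound from (3), along with the mean-zero consequence of the rotational invariance from (2). The function name `sample_glow` is ours.

```python
import numpy as np

def sample_glow(H, samples=5000, seed=0):
    """Sample the excess E = sum_ij a_i H_ij b_j over random switching
    vectors (a, b), uniform on the torus T^N x T^N."""
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    a = np.exp(2j * np.pi * rng.random((samples, N)))
    b = np.exp(2j * np.pi * rng.random((samples, N)))
    return np.einsum('si,ij,sj->s', a, H, b)

# Fourier matrix F_4, a complex Hadamard matrix, so F_4 is in sqrt(N) U_N
N = 4
F4 = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
E = sample_glow(F4)

# (3): the Cauchy-Schwarz support bound, supp(mu) inside N sqrt(N) D
assert np.all(np.abs(E) <= N * np.sqrt(N) + 1e-9)
# (2): rotational invariance forces a mean close to zero
assert abs(np.mean(E)) < 0.3
```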

In what follows we will be mainly interested in the Hadamard matrix case, but since the computations here are quite difficult, let us begin our study with other matrices. It is convenient to normalize our matrices, so as to make them a bit similar to the complex Hadamard ones. To be more precise, consider the [math]2[/math]-norm on the vector space of the complex [math]N\times N[/math] matrices, which is given by the following formula:

[[math]] ||H||_2=\sqrt{\sum_{ij}|H_{ij}|^2} [[/math]]

We will assume in what follows, by multiplying our matrix [math]H\in M_N(\mathbb C)[/math] by a suitable scalar, that this norm takes the same value as for the Hadamard matrices, namely:

[[math]] ||H||_2=N [[/math]]


We know from chapter 2 that in the real case, the real glow is asymptotically Gaussian. In the complex matrix case, we will reach the conclusion that the glow is asymptotically complex Gaussian, with the complex Gaussian distribution being as follows:

Proposition

The complex Gaussian distribution [math]\mathcal C[/math] is the law of the variable

[[math]] z=\frac{1}{\sqrt{2}}(x+iy) [[/math]]
with [math]x,y[/math] being independent standard Gaussian variables. We have

[[math]] \mathbb E(|z|^{2p})=p! [[/math]]
and this moment formula, along with rotational invariance, determines [math]\mathcal C[/math].


Show Proof

This is standard probability theory, with the main result, namely the moment formula in the statement, coming from some routine computations. For more on all this, we refer to any standard probability book, such as Durrett [4].
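As a quick sanity check, not from the text, one can verify the moment formula [math]\mathbb E(|z|^{2p})=p![/math] by simulation; note that [math]|z|^2=(x^2+y^2)/2[/math] is exactly exponentially distributed, which is what makes the factorial moments appear.

```python
import numpy as np
from math import factorial

# Monte Carlo check that E(|z|^{2p}) = p! for z = (x + iy)/sqrt(2)
rng = np.random.default_rng(0)
n = 1_000_000
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

moments = {p: np.mean(np.abs(z) ** (2 * p)) for p in (1, 2, 3)}
for p, m in moments.items():
    assert abs(m - factorial(p)) < 0.2, (p, m)
```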

Finally, we use in what follows the symbol [math]\sim[/math] to denote an equality of distributions. With these conventions, we have the following result, to start with:

Proposition

We have the following computations:

  • For the rescaled identity [math]\widetilde{I}_N=\sqrt{N}I_N[/math] we have
    [[math]] E\sim\sqrt{N}(q_1+\ldots +q_N) [[/math]]
    with [math]q\in\mathbb T^N[/math] random. With [math]N\to\infty[/math] we have [math]E/N\sim\mathcal C[/math].
  • For the flat matrix [math]J_N=(1)_{ij}[/math] we have
    [[math]] E\sim(a_1+\ldots+a_N)(b_1+\ldots+b_N) [[/math]]
    with [math](a,b)\in\mathbb T^N\times\mathbb T^N[/math] random. With [math]N\to\infty[/math] we have [math]E/N\sim\mathcal C\times\mathcal C[/math].


Show Proof

We use Theorem 11.1, and the moment method:


(1) Here we have [math]E=\sqrt{N}\sum_{i}a_ib_i[/math], with [math]a,b\in\mathbb T^N[/math] random. With [math]q_i=a_ib_i[/math] this gives the first assertion. Let us estimate now the moments of [math]|E|^2[/math]. We have:

[[math]] \begin{eqnarray*} \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} &=&N^p\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\\ &=&N^p\int_{\mathbb T^N}\sum_{ij}\frac{q_{i_1}\ldots q_{i_p}}{q_{j_1}\ldots q_{j_p}}\,dq\\ &=&N^p\#\left\{(i,j)\in\{1,\ldots,N\}^p\times\{1,\ldots,N\}^p\Big|[i_1,\ldots,i_p]=[j_1,\ldots,j_p]\right\}\\ &\simeq&N^p\cdot p!N(N-1)\ldots(N-p+1)\\ &\simeq&N^p\cdot p!N^p\\ &=&p!N^{2p} \end{eqnarray*} [[/math]]


Here, and in what follows, the sets between brackets are by definition sets with repetition, and the middle estimate comes from the fact that, with [math]N\to\infty[/math], only the multi-indices [math]i=(i_1,\ldots,i_p)[/math] having distinct entries contribute. But this gives the result.


(2) Here we have the following formula, which gives the first assertion:

[[math]] E =\sum_{ij}a_ib_j =\sum_ia_i\sum_jb_j [[/math]]


Now since [math]a,b\in\mathbb T^N[/math] are independent, so are the quantities [math]\sum_ia_i,\sum_jb_j[/math], so we have:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} =\left(\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\right)^2 \simeq(p!N^{p})^2 [[/math]]


Here we have used the estimate in the proof of (1), and this gives the result.
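The counting identity behind the proof of (1) — the [math]2p[/math]-th moment of [math]|q_1+\ldots+q_N|[/math] equals the number of pairs of multi-indices with [math][i]=[j][/math] — can be checked exactly for small [math]N,p[/math]. Since every Fourier exponent appearing in the expansion lies in [math][-p,p][/math], replacing [math]\mathbb T[/math] by the [math](2p+1)[/math]-th roots of unity leaves the integral unchanged. A small sketch, with our own function names:

```python
import itertools
import numpy as np

def moment_count(N, p):
    """#{(i, j) in {1..N}^p x {1..N}^p : [i] = [j] as multisets}."""
    idx = list(itertools.product(range(N), repeat=p))
    return sum(1 for i in idx for j in idx if sorted(i) == sorted(j))

def moment_integral(N, p):
    """Exact value of int_{T^N} |q_1+...+q_N|^{2p} dq, computed by
    averaging over the s-th roots of unity with s = 2p+1: all monomial
    exponents in the expansion lie in [-p, p], so nothing is lost."""
    s = 2 * p + 1
    roots = np.exp(2j * np.pi * np.arange(s) / s)
    total = sum(abs(sum(q)) ** (2 * p)
                for q in itertools.product(roots, repeat=N))
    return total / s ** N

assert moment_count(3, 3) == round(moment_integral(3, 3))  # both equal 93
```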

As a first conclusion, the glow is intimately related to the basic hypertoral law, namely the law of the variable [math]q_1+\ldots+q_N[/math], with [math]q\in\mathbb T^N[/math] being random. Observe that at [math]N=1[/math] this hypertoral law is the Dirac mass [math]\delta_1[/math], and that at [math]N=2[/math] we obtain the following law:

[[math]] \begin{eqnarray*} law|1+q| &=&law\sqrt{(1+e^{it})(1+e^{-it})}\\ &=&law\sqrt{2+2\cos t}\\ &=&law\left(2\left|\cos\frac{t}{2}\right|\right) \end{eqnarray*} [[/math]]


In general, the law of [math]\sum q_i[/math] is known to be related to the Pólya random walk [5]. Also, as explained for instance in chapter 9, the moments of this law are:

[[math]] \int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq=\sum_{\pi\in P(p)}\binom{p}{\pi}\frac{N!}{(N-|\pi|)!} [[/math]]


As a second conclusion, even under the normalization [math]||H||_2=N[/math], the glow can behave quite differently in the [math]N\to\infty[/math] limit. So, let us restrict now the attention to the complex Hadamard matrices. At [math]N=2[/math] we only have [math]F_2[/math] to be investigated, the result being:

Proposition

For the Fourier matrix [math]F_2[/math] we have

[[math]] |E|^2=4+2Re(\alpha-\beta) [[/math]]
for certain variables [math]\alpha,\beta\in\mathbb T[/math] which are uniform, and independent.


Show Proof

The matrix that we are interested in, namely the Fourier matrix [math]F_2[/math] altered by a vertical switching vector [math](a,b)[/math] and a horizontal switching vector [math](c,d)[/math], is:

[[math]] \widetilde{F}_2=\begin{pmatrix}ac&ad\\bc&-bd\end{pmatrix} [[/math]]


With this notation, we have the following formula:

[[math]] \begin{eqnarray*} |E|^2 &=&|ac+ad+bc-bd|^2\\ &=&4+\frac{ad}{bc}+\frac{bc}{ad}-\frac{bd}{ac}-\frac{ac}{bd} \end{eqnarray*} [[/math]]


For proving that the variables [math]\alpha=\frac{ad}{bc}[/math] and [math]\beta=\frac{bd}{ac}[/math] are independent, we can use the moment method, as follows:

[[math]] \begin{eqnarray*} \int_{\mathbb T^4}\left(\frac{ad}{bc}\right)^p\left(\frac{bd}{ac}\right)^q &=&\int_{\mathbb T}a^{p-q}\int_{\mathbb T}b^{q-p}\int_{\mathbb T}c^{-p-q}\int_{\mathbb T}d^{p+q}\\ &=&\delta_{pq}\delta_{pq}\delta_{p,-q}\delta_{p,-q}\\ &=&\delta_{p,q,0} \end{eqnarray*} [[/math]]


Thus [math]\alpha,\beta[/math] are indeed independent, and we are done.
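Both claims can be confirmed numerically — a sketch of ours, with the mixed-moment computation from the proof rendered as a Monte Carlo check:

```python
import numpy as np

# Check of |E|^2 = 4 + 2 Re(alpha - beta) for F_2, where
# alpha = ad/(bc) and beta = bd/(ac), over random switching vectors
rng = np.random.default_rng(0)
a, b, c, d = np.exp(2j * np.pi * rng.random((4, 10000)))

E = a * c + a * d + b * c - b * d
alpha = a * d / (b * c)
beta = b * d / (a * c)
assert np.allclose(np.abs(E) ** 2, 4 + 2 * np.real(alpha - beta))

# Independence via mixed moments: E(alpha^p beta^q) = delta_{p,q,0}
for p in range(-2, 3):
    for q in range(-2, 3):
        m = np.mean(alpha ** p * beta ** q)
        target = 1.0 if p == q == 0 else 0.0
        assert abs(m - target) < 0.05, (p, q, m)
```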

It is possible of course to derive from this some more concrete formulae, but let us look instead at the case [math]N=3[/math]. Here the matrix that we are interested in is:

[[math]] \widetilde{F}_3=\begin{pmatrix}ad&ae&af\\ bd&wbe&w^2bf\\ cd&w^2ce&wcf\end{pmatrix} [[/math]]


Thus, we would like to compute the law of the following quantity:

[[math]] |E|=|ad+ae+af+bd+wbe+w^2bf+cd+w^2ce+wcf| [[/math]]


The problem is that when trying to compute [math]|E|^2[/math], the terms won't cancel much. More precisely, we have a formula of the following type:

[[math]] |E|^2=9+C_0+C_1w+C_2w^2 [[/math]]


Here the quantities [math]C_0,C_1,C_2[/math] are as follows:

[[math]] \begin{eqnarray*} C_0&=&\frac{ae}{bd}+\frac{ae}{cd}+\frac{af}{bd}+\frac{af}{cd}+\frac{bd}{ae}+\frac{bd}{af} +\frac{be}{cf}+\frac{bf}{ce}+\frac{cd}{ae}+\frac{cd}{af}+\frac{ce}{bf}+\frac{cf}{be}\\ C_1&=&\frac{ad}{bf}+\frac{ad}{ce}+\frac{ae}{bf}+\frac{af}{ce}+\frac{bd}{ce}+\frac{be}{ad} +\frac{be}{af}+\frac{be}{cd}+\frac{cd}{bf}+\frac{cf}{ad}+\frac{cf}{ae}+\frac{cf}{bd}\\ C_2&=&\frac{ad}{be}+\frac{ad}{cf}+\frac{ae}{cf}+\frac{af}{be}+\frac{bd}{cf}+\frac{bf}{ad} +\frac{bf}{ae}+\frac{bf}{cd}+\frac{cd}{be}+\frac{ce}{ad}+\frac{ce}{af}+\frac{ce}{bd} \end{eqnarray*} [[/math]]
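Though unwieldy, this decomposition can at least be confirmed numerically; the sketch below (variable names ours) transcribes the three lists of ratios verbatim and checks the identity on random switching vectors:

```python
import numpy as np

# Numerical confirmation of |E|^2 = 9 + C_0 + C_1 w + C_2 w^2 for F_3
rng = np.random.default_rng(0)
a, b, c, d, e, f = np.exp(2j * np.pi * rng.random((6, 2000)))
w = np.exp(2j * np.pi / 3)

E = a*d + a*e + a*f + b*d + w*b*e + w**2*b*f + c*d + w**2*c*e + w*c*f

C0 = (a*e/(b*d) + a*e/(c*d) + a*f/(b*d) + a*f/(c*d) + b*d/(a*e) + b*d/(a*f)
      + b*e/(c*f) + b*f/(c*e) + c*d/(a*e) + c*d/(a*f) + c*e/(b*f) + c*f/(b*e))
C1 = (a*d/(b*f) + a*d/(c*e) + a*e/(b*f) + a*f/(c*e) + b*d/(c*e) + b*e/(a*d)
      + b*e/(a*f) + b*e/(c*d) + c*d/(b*f) + c*f/(a*d) + c*f/(a*e) + c*f/(b*d))
C2 = (a*d/(b*e) + a*d/(c*f) + a*e/(c*f) + a*f/(b*e) + b*d/(c*f) + b*f/(a*d)
      + b*f/(a*e) + b*f/(c*d) + c*d/(b*e) + c*e/(a*d) + c*e/(a*f) + c*e/(b*d))

assert np.allclose(np.abs(E)**2, np.real(9 + C0 + C1*w + C2*w**2))
```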


In short, all this obviously leads nowhere, and the exact study stops at [math]F_2[/math]. In general now, one idea is that of using Bernoulli-type variables coming from the row sums, a bit as we did in chapter 2 in the real case, the result here being as follows:

Theorem

The glow of [math]H\in M_N(\mathbb C)[/math] is given by the formula

[[math]] law(E)=\int_{a\in\mathbb T^N}B((Ha)_1,\ldots,(Ha)_N) [[/math]]
where the quantities on the right are

[[math]] B(c_1,\ldots,c_N)=law\left(\sum_i\lambda_ic_i\right) [[/math]]
with [math]\lambda\in\mathbb T^N[/math] being random.


Show Proof

This is clear indeed from the following formula:

[[math]] E= \lt a,Hb \gt [[/math]]


To be more precise, when the vector [math]a\in\mathbb T^N[/math] is assumed to be fixed, this variable [math]E[/math] follows the law [math]B((Ha)_1,\ldots,(Ha)_N)[/math] in the statement.
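As an illustration of how the measures [math]B[/math] can be simulated — our own sketch, not from the text — here is a sampler for [math]B(c_1,\ldots,c_N)[/math], with a check of its second moment, which equals [math]\sum_i|c_i|^2[/math] since the [math]\lambda_i[/math] are independent and uniform on the unit circle:

```python
import numpy as np

def sample_B(c, samples=200_000, seed=0):
    """Sample from B(c_1,...,c_N) = law(sum_i lambda_i c_i),
    with lambda uniform on the torus T^N."""
    rng = np.random.default_rng(seed)
    lam = np.exp(2j * np.pi * rng.random((samples, len(c))))
    return lam @ np.asarray(c, dtype=complex)

c = [1.0, 2.0, 1j, -0.5]
x = sample_B(c)
# second moment: E|sum_i lambda_i c_i|^2 = sum_i |c_i|^2 = 6.25 here
assert abs(np.mean(np.abs(x) ** 2) - 6.25) < 0.1
```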

Observe that, in what regards the laws appearing in Theorem 11.5, we can write a formula for them of the following type, with [math]\times[/math] being a multiplicative convolution:

[[math]] B(c_1,\ldots,c_N)=\varepsilon\times\beta(|c_1|,\ldots,|c_N|) [[/math]]


To be more precise, such a formula holds indeed, with the measure [math]\beta(r_1,\ldots,r_N)\in\mathcal P(\mathbb R_+)[/math] with [math]r_1,\ldots,r_N\geq 0[/math] being given by the following formula:

[[math]] \beta(r_1,\ldots,r_N)=law\left|\sum_i\lambda_ir_i\right| [[/math]]


Regarding now the explicit computation of [math]\beta[/math], observe that we have:

[[math]] \beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\frac{\lambda_i}{\lambda_j}\cdot r_ir_j} [[/math]]


Consider now the following variable, which is easily seen, for instance by using the moment method, to be uniform over the projective torus [math]\mathbb T^{N-1}=\mathbb T^N/\mathbb T[/math]:

[[math]] (\mu_1,\mu_2,\ldots,\mu_N)=\left(\frac{\lambda_1}{\lambda_2},\frac{\lambda_2}{\lambda_3},\ldots,\frac{\lambda_N}{\lambda_1}\right) [[/math]]


Now since we have [math]\lambda_i/\lambda_j=\mu_i\mu_{i+1}\ldots\mu_j[/math], with the convention [math]\mu_i\ldots\mu_j=\overline{\mu_j\ldots\mu_i}[/math] for [math]i \gt j[/math], this gives the following formula, with [math]\mu\in\mathbb T^{N-1}[/math] random:

[[math]] \beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\mu_i\mu_{i+1}\ldots\mu_j\cdot r_ir_j} [[/math]]


It is possible to further study the laws [math]\beta[/math] by using this formula. However, in practice, it is more convenient to use the complex measures [math]B[/math] from Theorem 11.5.


Let us end these preliminaries with a discussion of the “arithmetic” version of the problem, which makes the link with the Gale-Berlekamp game [1], [2] and with the work in the real case, from chapter 2. We have the following unifying formalism:

Definition

Given [math]H\in M_N(\mathbb C)[/math] and [math]s\in\mathbb N\cup\{\infty\}[/math], we define a measure

[[math]] \mu_s\in\mathcal P(\mathbb C) [[/math]]
by the following formula, valid for any continuous function [math]\varphi[/math],

[[math]] \int_\mathbb C\varphi(x)d\mu_s(x)=\int_{\mathbb Z^N_s\times\mathbb Z^N_s}\varphi\left(\sum_{ij}a_ib_jH_{ij}\right)d(a,b) [[/math]]
where [math]\mathbb Z_s\subset\mathbb T[/math] is the group of the [math]s[/math]-roots of unity, with the convention [math]\mathbb Z_\infty=\mathbb T[/math].

Observe that at [math]s=\infty[/math] we obtain the measure in Theorem 11.1. Also, at [math]s=2[/math] and for a usual Hadamard matrix, [math]H\in M_N(\pm1)[/math], we obtain the measure from chapter 2. Observe also that for [math]H\in M_N(\pm1)[/math], knowing [math]\mu_2[/math] is the same as knowing the statistics of the number of one entries, [math]|1\in H|[/math]. This follows indeed from the following formula:

[[math]] \begin{eqnarray*} \sum_{ij}H_{ij} &=&|1\in H|-|-1\in H|\\ &=&2|1\in H|-N^2 \end{eqnarray*} [[/math]]


More generally, at [math]s=p[/math] prime, we have the following result:

Theorem

When [math]s[/math] is prime and [math]H\in M_N(\mathbb Z_s)[/math], the statistics of the number of one entries, [math]|1\in H|[/math], can be recovered from that of the total sum, [math]E=\sum_{ij}H_{ij}[/math].


Show Proof

The problem here is of vectorial nature, so given [math]V\in\mathbb Z_s^n[/math], we would like to compare the quantities [math]|1\in V|[/math] and [math]\sum V_i[/math]. Let us write, up to permutations:

[[math]] V=(\underbrace{1\ldots1}_{a_0}\,\,\underbrace{w\ldots w}_{a_1}\,\ldots\ldots\,\underbrace{w^{s-1}\ldots w^{s-1}}_{a_{s-1}}) [[/math]]


We have then [math]|1\in V|=a_0[/math], as well as:

[[math]] \sum V_i=a_0+a_1w+\ldots+a_{s-1}w^{s-1} [[/math]]


We also know that [math]a_0+a_1+\ldots+a_{s-1}=n[/math]. Now when [math]s[/math] is prime, the only ambiguity in recovering [math]a_0[/math] from [math]a_0+a_1w+\ldots+a_{s-1}w^{s-1}[/math] can come from:

[[math]] 1+w+\ldots+w^{s-1}=0 [[/math]]


But since the sum of the numbers [math]a_i[/math] is fixed, [math]a_0+a_1+\ldots+a_{s-1}=n[/math], this ambiguity disappears, and this gives the result.
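The recovery argument can be made algorithmic: for [math]s[/math] prime, [math]\{1,w,\ldots,w^{s-2}\}[/math] is linearly independent over [math]\mathbb Q[/math], so the value [math]\sum_ia_iw^i[/math] determines the vector [math](a_i)[/math] up to adding a constant vector, and the constraint [math]\sum a_i=n[/math] then pins down [math]a_0[/math]. A sketch of this, with all function names ours:

```python
def compositions(n, s):
    """All vectors (a_0, ..., a_{s-1}) of non-negative integers summing to n."""
    if s == 1:
        yield (n,)
        return
    for a0 in range(n + 1):
        for rest in compositions(n - a0, s - 1):
            yield (a0,) + rest

def canonical(a):
    """Complete invariant of sum_i a_i w^i for w = e^{2 pi i/s}, s prime:
    reduce modulo the single relation 1 + w + ... + w^{s-1} = 0 by
    subtracting a_{s-1} from every coefficient."""
    t = a[-1]
    return tuple(x - t for x in a[:-1])

def recover_a0(c, n, s):
    """Recover a_0 from the canonical form c and the total sum n:
    the coefficients are a_i = c_i + t with n = sum(c) + s*t."""
    t = (n - sum(c)) // s
    return c[0] + t

s, n = 5, 4
for a in compositions(n, s):
    assert recover_a0(canonical(a), n, s) == a[0]
```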

11b. Glow moments

Let us investigate now the glow of the complex Hadamard matrices, by using the moment method. We use the moment formula from chapter 10, namely:

Proposition

For [math]H\in M_N(\mathbb T)[/math] the even moments of [math]|E|[/math] are given by

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{[i]=[k],[j]=[l]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}} [[/math]]
where the sets between brackets are by definition sets with repetition.


Show Proof

As explained in chapter 10, with [math]E=\sum_{ij}H_{ij}a_ib_j[/math] we obtain:

[[math]] \begin{eqnarray*} \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} &=&\int_{\mathbb T^N\times\mathbb T^N}\left(\sum_{ijkl}\frac{H_{ij}}{H_{kl}}\cdot\frac{a_ib_j}{a_kb_l}\right)^p\\ &=&\sum_{ijkl}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\int_{\mathbb T^N}\frac{a_{i_1}\ldots a_{i_p}}{a_{k_1}\ldots a_{k_p}}\int_{\mathbb T^N}\frac{b_{j_1}\ldots b_{j_p}}{b_{l_1}\ldots b_{l_p}} \end{eqnarray*} [[/math]]


The integrals on the right being [math]\delta_{[i],[k]}[/math] and [math]\delta_{[j],[l]}[/math], we obtain the result.
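This moment formula can be verified by brute force for small [math]N,p[/math] — a sketch with our own function names, where the left-hand integral is evaluated exactly by averaging over [math](2p+1)[/math]-th roots of unity, since all Fourier exponents involved lie in [math][-p,p][/math]:

```python
import itertools
import numpy as np

def moment_indices(H, p):
    """RHS of the moment formula: sum over [i]=[k], [j]=[l]."""
    N = H.shape[0]
    idx = list(itertools.product(range(N), repeat=p))
    pairs = [(u, v) for u in idx for v in idx if sorted(u) == sorted(v)]
    total = 0
    for i, k in pairs:
        for j, l in pairs:
            term = 1
            for r in range(p):
                term = term * H[i[r], j[r]] / H[k[r], l[r]]
            total += term
    return total.real

def moment_integral(H, p):
    """LHS: the integral over T^N x T^N, exact via (2p+1)-th roots of unity."""
    N = H.shape[0]
    s = 2 * p + 1
    roots = np.exp(2j * np.pi * np.arange(s) / s)
    total = sum(abs(np.array(a) @ H @ np.array(b)) ** (2 * p)
                for a in itertools.product(roots, repeat=N)
                for b in itertools.product(roots, repeat=N))
    return total / s ** (2 * N)

F2 = np.array([[1, 1], [1, -1]], dtype=complex)
assert abs(moment_indices(F2, 2) - moment_integral(F2, 2)) < 1e-8
```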

As a first application, let us investigate the tensor products. We have:

Proposition

The even moments of the variable [math]|E|[/math] for a tensor product

[[math]] L=H\otimes K [[/math]]
are given by the following formula,

[[math]] \int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}} [[/math]]
where the sets between brackets are as usual sets with repetition.


Show Proof

With [math]L=H\otimes K[/math], the formula in Proposition 11.8 reads:

[[math]] \int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{L_{i_1a_1,j_1b_1}\ldots L_{i_pa_p,j_pb_p}}{L_{k_1c_1,l_1d_1}\ldots L_{k_pc_p,l_pd_p}} [[/math]]


But this gives the formula in the statement, and we are done.

Thus, we cannot reconstruct the glow of [math]H\otimes K[/math] from that of [math]H,K[/math], because the indices “get mixed”. We have as well a result regarding the deformations, as follows:

Proposition

The even moments of [math]|E|[/math] for a deformed tensor product

[[math]] L=H\otimes_Q K [[/math]]
are given by the following formula,

[[math]] \int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p} =\sum_{[ia]=[kc],[jb]=[ld]}\frac{Q_{i_1b_1}\ldots Q_{i_pb_p}}{Q_{k_1d_1}\ldots Q_{k_pd_p}}\cdot\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}} [[/math]]
where the sets between brackets are as usual sets with repetition.


Show Proof

As before, we use the formula in Proposition 11.8. We have:

[[math]] L_{ia,jb}=Q_{ib}H_{ij}K_{ab} [[/math]]


Thus, we obtain the following formula for the moments:

[[math]] \begin{eqnarray*} \int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p} &=&\sum_{[ia]=[kc],[jb]=[ld]}\frac{L_{i_1a_1,j_1b_1}\ldots L_{i_pa_p,j_pb_p}}{L_{k_1c_1,l_1d_1}\ldots L_{k_pc_p,l_pd_p}}\\ &=&\sum_{[ia]=[kc],[jb]=[ld]}\frac{Q_{i_1b_1}\ldots Q_{i_pb_p}}{Q_{k_1d_1}\ldots Q_{k_pd_p}}\cdot\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.

The above formulae might look quite complicated, but they have some practical use. Let us go back indeed to a question that we had open since chapter 5, namely classifying the [math]4\times4[/math] complex Hadamard matrices, up to equivalence. We can now formulate:

Theorem

The complex Hadamard matrices at [math]N=4[/math] are, up to equivalence, the following matrices, where [math]s=e^{it}[/math] with [math]t\in[0,\pi/2][/math],

[[math]] F_4^s=\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&s&-1&-s\\ 1&-s&-1&s \end{pmatrix} [[/math]]
and these matrices are distinguished by the third moment of the glow. Alternatively, these matrices are distinguished by the third order term of the glow.


Show Proof

We know from chapter 5 that the [math]4\times4[/math] complex Hadamard matrices are those in the statement, with [math]s\in\mathbb T[/math], and we also know from there that we have equivalences as follows, which in practice means that we can assume [math]s=e^{it}[/math] with [math]t\in[0,\pi/2][/math]:

[[math]] F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}} [[/math]]


It remains to prove that these matrices, namely [math]F_4^s[/math] with [math]s=e^{it}[/math], [math]t\in[0,\pi/2][/math], are not equivalent. For this purpose, let us look at the moments of the glow:


(1) Regarding the first moment, this is not something useful, because we have the following formula, coming from Proposition 11.8, valid for any [math]H\in M_N(\mathbb T)[/math]:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^2=\sum_{i=k,j=l}\frac{H_{ij}}{H_{kl}}=\sum_{ij}\frac{H_{ij}}{H_{ij}}=N^2 [[/math]]


(2) Regarding the second moment, this is not useful either, because once again by using Proposition 11.8, we obtain a formula as follows, for any [math]H\in M_N(\mathbb T)[/math]:

[[math]] \begin{eqnarray*} \int_{\mathbb T^N\times\mathbb T^N}|E|^4 &=&\sum_{[ik]=[mp],[jl]=[nq]}\frac{H_{ij}H_{kl}}{H_{mn}H_{pq}}\\ &=&P(N)+2\sum_{i\neq k,j\neq l}\frac{H_{ij}H_{kl}}{H_{il}H_{kj}}\\ &=&Q(N)+2\sum_{ijkl}\frac{H_{ij}H_{kl}}{H_{il}H_{kj}}\\ &=&Q(N)+2\sum_{ik}| \lt H_i,H_k \gt |^2\\ &=&Q(N)+2N^3 \end{eqnarray*} [[/math]]


To be more precise, here [math]P[/math] is a certain polynomial, not depending on [math]H[/math], collecting the contributions from the “trivial” solutions of [math][ik]=[mp][/math], [math][jl]=[nq][/math], and then [math]Q[/math] is another polynomial, again not depending on [math]H[/math], obtained from [math]P[/math] via a summing trick.


(3) However, when getting to the third moment, or higher, things become interesting. Indeed, the equivalences [math]F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}}[/math] tell us that the [math]p[/math]-th moment of [math]|E|^2[/math] is a degree [math]p[/math] even, symmetric Laurent polynomial in [math]s\in\mathbb T[/math], and a direct computation at [math]p=3[/math], based on the formula in Proposition 11.10, shows that the parameter [math]s\in\mathbb T[/math] can be recaptured, up to identifying [math]\{s,-s,\bar{s},-\bar{s}\}[/math], from the knowledge of this polynomial.


(4) Alternatively, we can say that the parameter [math]s\in\mathbb T[/math] can be recaptured, again up to identifying [math]\{s,-s,\bar{s},-\bar{s}\}[/math], from the knowledge of the third order term of the glow, with this meaning by definition the [math]N^{-2}[/math] factor in the [math]N^{-1}[/math] expansion of the law of [math]|E|/N[/math].

Summarizing, there are some interesting things going on here, which will need some time to be fully understood. So, let us now develop some systematic moment machinery for the glow, along the above lines. Let [math]P(p)[/math] be the set of partitions of [math]\{1,\ldots,p\}[/math], with its standard order relation [math]\leq[/math], which is such that, for any [math]\pi\in P(p)[/math]:

[[math]] \sqcap\hskip-1.6mm\sqcap\ldots\leq\pi\leq|\ |\ldots|\ | [[/math]]

We denote by [math]\mu(\pi,\sigma)[/math] the associated Möbius function, given by:

[[math]] \mu(\pi,\sigma)=\begin{cases} 1&{\rm if}\ \pi=\sigma\\ -\sum_{\pi\leq\tau \lt \sigma}\mu(\pi,\tau)&{\rm if}\ \pi \lt \sigma\\ 0&{\rm if}\ \pi\not\leq\sigma \end{cases} [[/math]]


To be more precise, the Möbius function is defined by recurrence, by using this formula. The main interest in the Möbius function comes from the Möbius inversion formula, which states that the following happens, at the level of the functions on [math]P(p)[/math]:

[[math]] f(\sigma)=\sum_{\pi\leq\sigma}g(\pi) \quad\implies\quad g(\sigma)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)f(\pi) [[/math]]
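These definitions are easy to implement; here is a small sketch of ours of the partition lattice, in the book's convention — [math]\pi\leq\sigma[/math] when [math]\sigma[/math] refines [math]\pi[/math], so that the one-block partition is minimal — together with the recurrence defining [math]\mu[/math] and a check of the Möbius inversion formula:

```python
def set_partitions(n):
    """All set partitions of {0,...,n-1}, as frozensets of frozenset blocks."""
    if n == 0:
        yield frozenset()
        return
    x = n - 1
    for smaller in set_partitions(n - 1):
        for block in smaller:
            yield (smaller - {block}) | {block | {x}}
        yield smaller | {frozenset({x})}

def leq(pi, sigma):
    """Book convention: pi <= sigma when sigma refines pi,
    i.e. every block of sigma sits inside a block of pi."""
    return all(any(b <= c for c in pi) for b in sigma)

def mobius(parts):
    """Mobius function of the lattice, by the defining recurrence."""
    mu = {}
    for sigma in sorted(parts, key=len):      # coarser partitions first
        for pi in parts:
            if pi == sigma:
                mu[pi, sigma] = 1
            elif leq(pi, sigma):
                mu[pi, sigma] = -sum(mu[pi, tau] for tau in parts
                                     if leq(pi, tau) and leq(tau, sigma)
                                     and tau != sigma)
            else:
                mu[pi, sigma] = 0
    return mu

parts = list(set_partitions(4))
mu = mobius(parts)

# Mobius inversion: f(sigma) = sum_{pi <= sigma} g(pi) recovers g
g = {q: hash(q) % 7 - 3 for q in parts}       # arbitrary test function
f = {s: sum(g[q] for q in parts if leq(q, s)) for s in parts}
for s in parts:
    assert sum(mu[q, s] * f[q] for q in parts if leq(q, s)) == g[s]
```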


For [math]\pi\in P(p)[/math] we use the following notation, where [math]b_1,\ldots,b_{|\pi|}[/math] are the block lengths:

[[math]] \binom{p}{\pi}=\binom{p}{b_1\ldots b_{|\pi|}}=\frac{p!}{b_1!\ldots b_{|\pi|}!} [[/math]]


Finally, we use the following notation, where [math]H_1,\ldots,H_N\in\mathbb T^N[/math] are the rows of [math]H[/math]:

[[math]] H_\pi(i)=\bigotimes_{\beta\in\pi}\prod_{r\in\beta}H_{i_r} [[/math]]


With these notations, we have the following result:

Theorem

The glow moments of a matrix [math]H\in M_N(\mathbb T)[/math] are given by

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{\pi\in P(p)}K(\pi)N^{|\pi|}I(\pi) [[/math]]
where the coefficients are given by

[[math]] K(\pi)=\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma} [[/math]]
and where the contributions are given by

[[math]] I(\pi)=\frac{1}{N^{|\pi|}}\sum_{[i]=[j]} \lt H_\pi(i),H_\pi(j) \gt [[/math]]
by using the above notations and conventions.


Show Proof

We know from Proposition 11.8 that the moments are given by:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} =\sum_{[i]=[j],[x]=[y]}\frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}} [[/math]]


With [math]\sigma=\ker x,\rho=\ker y[/math], we deduce that the moments of [math]|E|^2[/math] decompose over partitions, according to a formula as follows:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\int_{\mathbb T^N}\sum_{\sigma,\rho\in P(p)}C(\sigma,\rho) [[/math]]


To be more precise, the contributions are as follows:

[[math]] C(\sigma,\rho)=\sum_{\ker x=\sigma,\ker y=\rho}\delta_{[x],[y]}\sum_{ij} \frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}} \cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}} [[/math]]


We have [math]C(\sigma,\rho)=0[/math] unless [math]\sigma\sim\rho[/math], in the sense that [math]\sigma,\rho[/math] must have the same block structure. The point now is that the sums of type [math]\sum_{\ker x=\sigma}[/math] can be computed by using the Möbius inversion formula. We obtain a formula as follows:

[[math]] C(\sigma,\rho)=\delta_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\prod_{\beta\in\pi}C_{|\beta|}(a) [[/math]]


Here the functions on the right are by definition given by:

[[math]] \begin{eqnarray*} C_r(a) &=&\sum_x\sum_{ij}\frac{H_{i_1x}\ldots H_{i_rx}}{H_{j_1x}\ldots H_{j_rx}}\cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}}\\ &=&\sum_{ij} \lt H_{i_1}\ldots H_{i_r},H_{j_1}\ldots H_{j_r} \gt \cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}} \end{eqnarray*} [[/math]]


Now since there are [math]\binom{p}{\sigma}[/math] partitions having the same block structure as [math]\sigma[/math], we obtain:

[[math]] \begin{eqnarray*} &&\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}\\ &=&\int_{\mathbb T^N}\sum_{\pi\in P(p)}\left(\sum_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\right)\prod_{\beta\in\pi}C_{|\beta|}(a)\\ &=&\sum_{\pi\in P(p)}\left(\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma}\right)\int_{\mathbb T^N}\prod_{\beta\in\pi}C_{|\beta|}(a) \end{eqnarray*} [[/math]]


But this gives the formula in the statement, and we are done.

Let us discuss now the asymptotic behavior of the glow. For this purpose, we first study the coefficients [math]K(\pi)[/math] in Theorem 11.12. We have here the following result:

Proposition

The coefficients appearing in the above, namely

[[math]] K(\pi)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\binom{p}{\sigma} [[/math]]
have the following properties:

  • The function [math]\widetilde{K}(\pi)=\frac{K(\pi)}{p!}[/math] is multiplicative, in the sense that:
    [[math]] \widetilde{K}(\pi\pi')=\widetilde{K}(\pi)\widetilde{K}(\pi') [[/math]]
  • On the one-block partitions, we have:
    [[math]] K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma} [[/math]]
  • We have as well the following formula,
    [[math]] K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^p(-1)^{r-1}(r-1)!C_{pr} [[/math]]
    where the coefficients on the right are given, with the sum being over all [math]a_1,\ldots,a_r\geq1[/math], by:
    [[math]] C_{pr}=\frac{1}{r!}\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2 [[/math]]


Show Proof

This follows from some standard computations, as follows:


(1) We can use here the following formula, which is a well-known property of the Möbius function, which can be proved by recurrence:

[[math]] \mu(\pi\pi',\sigma\sigma')=\mu(\pi,\sigma)\mu(\pi',\sigma') [[/math]]


Now if [math]b_1,\ldots,b_s[/math] and [math]c_1,\ldots,c_t[/math] are the block lengths of [math]\sigma,\sigma'[/math], we obtain, as claimed:

[[math]] \begin{eqnarray*} \widetilde{K}(\pi\pi') &=&\sum_{\pi\pi'\leq\sigma\sigma'}\mu(\pi\pi',\sigma\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\ &=&\sum_{\pi\leq\sigma,\pi'\leq\sigma'}\mu(\pi,\sigma)\mu(\pi',\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\ &=&\widetilde{K}(\pi)\widetilde{K}(\pi') \end{eqnarray*} [[/math]]


(2) We can use here the following formula, which once again is well-known, and can be proved by recurrence on [math]|\sigma|[/math]:

[[math]] \mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)=(-1)^{|\sigma|-1}(|\sigma|-1)! [[/math]]


We therefore obtain, as claimed:

[[math]] K(\sqcap\!\!\sqcap\ldots\sqcap) =\sum_{\sigma\in P(p)}\mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)\binom{p}{\sigma} =\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma} [[/math]]


(3) By using the formula in (2), and summing over [math]r=|\sigma|[/math], we obtain:

[[math]] K(\sqcap\!\!\sqcap\ldots\sqcap) =\sum_{r=1}^p(-1)^{r-1}(r-1)!\sum_{|\sigma|=r}\binom{p}{\sigma} [[/math]]


Now if we denote by [math]a_1,\ldots,a_r[/math] with [math]a_i\geq1[/math] the block lengths of [math]\sigma[/math], then:

[[math]] \binom{p}{\sigma}=\binom{p}{a_1,\ldots,a_r} [[/math]]


On the other hand, given [math]a_1,\ldots,a_r\geq1[/math] with [math]a_1+\ldots+a_r=p[/math], the number of ordered sequences of disjoint blocks of lengths [math]a_1,\ldots,a_r[/math] partitioning [math]\{1,\ldots,p\}[/math] is [math]\binom{p}{a_1,\ldots,a_r}[/math], and each partition [math]\sigma[/math] with [math]|\sigma|=r[/math] arises from exactly [math]r![/math] such ordered sequences. Thus:

[[math]] \sum_{|\sigma|=r}\binom{p}{\sigma}=\frac{1}{r!}\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2 [[/math]]


Thus, we are led to the conclusion in the statement.
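The agreement between the formulae in (2) and (3) can be cross-checked by brute force for small [math]p[/math]. In the sketch below (helper names ours), [math]C_{pr}[/math] is computed as [math]1/r![/math] times the sum of squared multinomials over ordered compositions, the [math]r![/math] accounting for the orderings of the blocks:

```python
from itertools import product
from math import factorial

def set_partitions(n):
    """All set partitions of {0,...,n-1}, as lists of blocks."""
    if n == 0:
        yield []
        return
    for smaller in set_partitions(n - 1):
        for k in range(len(smaller)):
            yield smaller[:k] + [smaller[k] + [n - 1]] + smaller[k + 1:]
        yield smaller + [[n - 1]]

def multinomial(p, lengths):
    out = factorial(p)
    for b in lengths:
        out //= factorial(b)
    return out

def K_one_block_partitions(p):
    """Formula (2): sum over sigma of (-1)^{|s|-1} (|s|-1)! binom(p, sigma)."""
    return sum((-1) ** (len(s) - 1) * factorial(len(s) - 1)
               * multinomial(p, [len(b) for b in s])
               for s in set_partitions(p))

def K_one_block_compositions(p):
    """Formula (3), with C_{pr} = (1/r!) * sum over ordered compositions
    p = a_1 + ... + a_r (a_i >= 1) of binom(p; a_1,...,a_r)^2."""
    total = 0
    for r in range(1, p + 1):
        C = sum(multinomial(p, a) ** 2
                for a in product(range(1, p + 1), repeat=r) if sum(a) == p)
        total += (-1) ** (r - 1) * (factorial(r - 1) * C // factorial(r))
    return total

for p in range(1, 6):
    assert K_one_block_partitions(p) == K_one_block_compositions(p)
```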

Now let us take a closer look at the integrals [math]I(\pi)[/math] from Theorem 11.12, namely:

[[math]] I(\pi)=\frac{1}{N^{|\pi|}}\sum_{[i]=[j]} \lt H_\pi(i),H_\pi(j) \gt [[/math]]


We have here the following result:

Proposition

Consider the one-block partition [math]\sqcap\!\!\sqcap\ldots\sqcap\in P(p)[/math].

  • [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\#\{i,j\in\{1,\ldots,N\}^p|[i]=[j]\}[/math].
  • [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\int_{\mathbb T^N}|\sum_ia_i|^{2p}da[/math].
  • [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\binom{p}{\sigma}\frac{N!}{(N-|\sigma|)!}[/math].
  • [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}C_{pr}\frac{N!}{(N-r)!}[/math], where [math]C_{pr}=\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2[/math].


Show Proof

Once again, these formulae follow from some standard combinatorics:


(1) This follows indeed from the following computation:

[[math]] I(\sqcap\!\!\sqcap\ldots\sqcap) =\sum_{[i]=[j]}\frac{1}{N} \lt H_{i_1}\ldots H_{i_p},H_{j_1}\ldots H_{j_p} \gt =\sum_{[i]=[j]}1 [[/math]]


(2) This follows from the following computation:

[[math]] \int_{\mathbb T^N}\left|\sum_ia_i\right|^{2p} =\int_{\mathbb T^N}\sum_{ij}\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}da =\#\left\{i,j\Big|[i]=[j]\right\} [[/math]]


(3) If we let [math]\sigma=\ker i[/math] in the above formula of [math]I(\sqcap\!\!\sqcap\ldots\sqcap)[/math], we obtain:

[[math]] I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j]\right\} [[/math]]


Now since there are [math]\frac{N!}{(N-|\sigma|)!}[/math] choices for the multi-index [math]i[/math], and then [math]\binom{p}{\sigma}[/math] choices for the multi-index [math]j[/math], this gives the result.


(4) If we set [math]r=|\sigma|[/math], the formula in (3) becomes:

[[math]] I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}\frac{N!}{(N-r)!}\sum_{\sigma\in P(p),|\sigma|=r}\binom{p}{\sigma} [[/math]]


Now since each partition [math]\sigma\in P(p)[/math] with [math]|\sigma|=r[/math] and block lengths [math]b_1,\ldots,b_r[/math] arises from exactly [math]r![/math] ordered sequences of disjoint blocks, with [math]\binom{p}{b_1,\ldots,b_r}[/math] such sequences for each composition [math]p=b_1+\ldots+b_r[/math], the sum on the right is given by:

[[math]] \sum_{\sigma\in P(p),|\sigma|=r}\binom{p}{\sigma}=\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2 [[/math]]


Thus, we are led to the conclusion in the statement.
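Formulas (1) and (3) can be cross-checked by brute force for small [math]N,p[/math] — our own code, where in (3) the terms with [math]|\sigma| \gt N[/math] vanish and are skipped:

```python
import itertools
from math import factorial

def set_partitions(n):
    """All set partitions of {0,...,n-1}, as lists of blocks."""
    if n == 0:
        yield []
        return
    for smaller in set_partitions(n - 1):
        for k in range(len(smaller)):
            yield smaller[:k] + [smaller[k] + [n - 1]] + smaller[k + 1:]
        yield smaller + [[n - 1]]

def I_count(N, p):
    """Formula (1): #{ i, j in {1..N}^p : [i] = [j] }."""
    idx = list(itertools.product(range(N), repeat=p))
    return sum(1 for i in idx for j in idx if sorted(i) == sorted(j))

def I_formula(N, p):
    """Formula (3): sum over sigma of binom(p, sigma) * N!/(N-|sigma|)!."""
    total = 0
    for sigma in set_partitions(p):
        if len(sigma) > N:
            continue  # no injective choice of |sigma| distinct indices
        m = factorial(p)
        for b in sigma:
            m //= factorial(len(b))
        total += m * factorial(N) // factorial(N - len(sigma))
    return total

for N, p in [(2, 3), (3, 3), (4, 2)]:
    assert I_count(N, p) == I_formula(N, p)
```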

In general, the integrals [math]I(\pi)[/math] can be estimated as follows:

Proposition

Let [math]H\in M_N(\mathbb T)[/math], having its rows pairwise orthogonal.

  • [math]I(|\,|\,\ldots|)=N^p[/math].
  • [math]I(|\,|\,\ldots|\ \pi)=N^aI(\pi)[/math], for any [math]\pi\in P(p-a)[/math].
  • [math]|I(\pi)|\lesssim p!N^p[/math], for any [math]\pi\in P(p)[/math].


Show Proof

This is something elementary, as follows:


(1) Since the rows of [math]H[/math] are pairwise orthogonal, we have:

[[math]] \begin{eqnarray*} I(|\,|\ldots|) &=&\sum_{[i]=[j]}\prod_{r=1}^p\delta_{i_r,j_r}\\ &=&\sum_{[i]=[j]}\delta_{ij}\\ &=&\sum_i1\\ &=&N^p \end{eqnarray*} [[/math]]


(2) This follows by the same computation as the above one for (1).


(3) We have indeed the following estimate:

[[math]] \begin{eqnarray*} |I(\pi)| &\leq&\sum_{[i]=[j]}\prod_{\beta\in\pi}1\\ &=&\sum_{[i]=[j]}1\\ &=&\#\left\{i,j\in\{1,\ldots,N\}^p\Big|[i]=[j]\right\}\\ &\simeq&p!N^p \end{eqnarray*} [[/math]]


Thus we have obtained the formula in the statement, and we are done.

We now have all the needed ingredients for a universality result:

Theorem

The glow of a complex Hadamard matrix [math]H\in M_N(\mathbb T)[/math] is given by:

[[math]] \frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-\binom{p}{2}N^{-1}+O(N^{-2}) [[/math]]
In particular, [math]E/N[/math] becomes complex Gaussian in the [math]N\to\infty[/math] limit.


Show Proof

We use the moment formula in Theorem 11.12, namely:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{\pi\in P(p)}K(\pi)N^{|\pi|}I(\pi) [[/math]]


By using Proposition 11.15 (3), we conclude that only the partitions having [math]p[/math] blocks or [math]p-1[/math] blocks contribute to the first two orders, so:

[[math]] \begin{eqnarray*} \int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} &=&K(|\,|\ldots|)N^pI(|\,|\ldots|)\\ &+&\binom{p}{2}K(\sqcap|\ldots|)N^{p-1}I(\sqcap|\ldots|)\\ &+&O(N^{2p-2}) \end{eqnarray*} [[/math]]


Now by dividing by [math]N^{2p}[/math] and then by using the various formulae in Proposition 11.13, Proposition 11.14 and Proposition 11.15, we obtain, as claimed:

[[math]] \int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p} =p! -\binom{p}{2}\frac{p!}{2}\cdot\frac{2N-1}{N^2} +O(N^{-2}) [[/math]]


Finally, since the law of [math]E[/math] is invariant under centered rotations in the complex plane, this moment formula gives as well the last assertion.

Summarizing, the complex glow of the complex Hadamard matrices appears to have similar properties to the real glow of the real Hadamard matrices.
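The low moments behind such results can also be checked by direct expansion, without any partition combinatorics: since [math]\int_\mathbb Ta^m\bar{a}^nda=\delta_{mn}[/math], the Haar integral of [math]|E|^{2p}[/math] equals the sum, over multi-index pairs with [math][i]=[k][/math] and [math][j]=[l][/math], of the corresponding products of entries. In particular [math]\int|E|^2=\sum_{ij}|H_{ij}|^2=N^2[/math], so [math]|E/N|^2[/math] has mean exactly [math]1[/math]. A brute-force sketch of this expansion (using [math]F_4[/math] as a test matrix; the helper names are ours):

```python
import cmath
from itertools import product

def fourier(N):
    """The Fourier matrix F_N, with entries w^{ij}, w = e^{2 pi i / N}."""
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (i * j) for j in range(N)] for i in range(N)]

def glow_moment(H, p):
    """Exact Haar integral of |E|^{2p} over T^N x T^N, for E = sum a_i b_j H_{ij}:
    expanding the power, the integral of a_{i_1}...a_{i_p} conj(a_{k_1}...a_{k_p})
    is 1 when [i] = [k] as multisets, and 0 otherwise, and similarly for b."""
    N = len(H)
    idx = list(product(range(N), repeat=p))
    total = 0
    for i in idx:
        for k in idx:
            if sorted(i) != sorted(k):
                continue
            for j in idx:
                for l in idx:
                    if sorted(j) != sorted(l):
                        continue
                    term = 1
                    for r in range(p):
                        term *= H[i[r]][j[r]] * H[k[r]][l[r]].conjugate()
                    total += term
    return total
```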

11c. Fourier matrices

Let us study now the glow of the Fourier matrices, [math]F=F_G[/math]. We use the following standard formulae, which all come from definitions:

[[math]] F_{ix}F_{iy}=F_{i,x+y}\quad,\quad \overline{F}_{ix}=F_{i,-x}\quad,\quad \sum_xF_{ix}=N\delta_{i0} [[/math]]


We first have the following result:

Proposition

For a Fourier matrix [math]F_G[/math] we have

[[math]] I(\pi)=\#\left\{i,j\Big|[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\} [[/math]]
with all the indices, and with the sums on the right, taken inside [math]G[/math].


Show Proof

The basic components of the integrals [math]I(\pi)[/math] are given by:

[[math]] \begin{eqnarray*} \frac{1}{N}\left\langle\prod_{r\in\beta}F_{i_r},\prod_{r\in\beta}F_{j_r}\right\rangle &=&\frac{1}{N}\left\langle F_{\sum_{r\in\beta}i_r},F_{\sum_{r\in\beta}j_r}\right\rangle\\ &=&\delta_{\sum_{r\in\beta}i_r,\sum_{r\in\beta}j_r} \end{eqnarray*} [[/math]]


But this gives the formula in the statement, and we are done.
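The equality between the inner-product computation and the counting formula can be verified numerically for small cases. The sketch below, for [math]G=\mathbb Z_N[/math] and with a partition encoded as a list of blocks of positions, is an illustration of ours, not part of the original argument:

```python
import cmath
from itertools import product

def fourier(N):
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (i * j) for j in range(N)] for i in range(N)]

def I_analytic(N, pi):
    """I(pi) for F_N, computed from the row-product inner products in the proof."""
    F = fourier(N)
    p = sum(len(beta) for beta in pi)
    total = 0
    for i in product(range(N), repeat=p):
        for j in product(range(N), repeat=p):
            if sorted(i) != sorted(j):
                continue
            val = 1
            for beta in pi:
                u, v = [1] * N, [1] * N
                for r in beta:
                    for x in range(N):
                        u[x] *= F[i[r]][x]
                        v[x] *= F[j[r]][x]
                # (1/N) <prod of rows i_r, prod of rows j_r> = delta of the sums
                val *= sum(ux * vx.conjugate() for ux, vx in zip(u, v)) / N
            total += val
    return total

def I_count(N, pi):
    """I(pi) from the counting formula in the statement, indices taken in Z_N."""
    p = sum(len(beta) for beta in pi)
    return sum(1 for i in product(range(N), repeat=p)
                 for j in product(range(N), repeat=p)
                 if sorted(i) == sorted(j) and all(
                     sum(i[r] for r in beta) % N == sum(j[r] for r in beta) % N
                     for beta in pi))
```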

We have the following interpretation of the above integrals:

Proposition

For any partition [math]\pi[/math] we have the formula

[[math]] I(\pi)=\int_{\mathbb T^N}\prod_{\beta\in\pi}\left(\frac{1}{N^2}\sum_{ij}|H_{ij}|^{2|\beta|}\right)da [[/math]]
where [math]H=F^*AF[/math], with [math]F=F_G[/math] and [math]A=diag(a_0,\ldots,a_{N-1})[/math].


Show Proof

We have the following computation:

[[math]] \begin{eqnarray*} H=F^*AF &\implies&|H_{xy}|^2=\sum_{ij}\frac{F_{iy}F_{jx}}{F_{ix}F_{jy}}\cdot\frac{a_i}{a_j}\\ &\implies&|H_{xy}|^{2p}=\sum_{ij}\frac{F_{j_1x}\ldots F_{j_px}}{F_{i_1x}\ldots F_{i_px}}\cdot\frac{F_{i_1y}\ldots F_{i_py}}{F_{j_1y}\ldots F_{j_py}}\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}\\ &\implies&\sum_{xy}|H_{xy}|^{2p}=\sum_{ij}\left| \lt F_{i_1}\ldots F_{i_p},F_{j_1}\ldots F_{j_p} \gt \right|^2\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}} \end{eqnarray*} [[/math]]


But this gives the formula in the statement, and we are done.

We must estimate now the quantities [math]I(\pi)[/math]. We first have the following result:

Proposition

For [math]F_G[/math] we have the estimate

[[math]] I(\pi)=b_1!\ldots b_{|\pi|}!N^p+O(N^{p-1}) [[/math]]
where the numbers [math]b_1,\ldots,b_{|\pi|}[/math] with [math]b_1+\ldots+b_{|\pi|}=p[/math] are the block lengths of [math]\pi[/math].


Show Proof

With [math]\sigma=\ker i[/math] we obtain:

[[math]] I(\pi)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\} [[/math]]


The number of choices for [math]i[/math] satisfying [math]\ker i=\sigma[/math] is:

[[math]] \frac{N!}{(N-|\sigma|)!}\simeq N^{|\sigma|} [[/math]]


Then, the number of choices for [math]j[/math] satisfying [math][i]=[j][/math] is:

[[math]] \binom{p}{\sigma}=O(1) [[/math]]


We conclude that the main contribution comes from the following partition:

[[math]] \sigma=|\,|\ldots| [[/math]]


Thus, we have the following formula:

[[math]] I(\pi)=\#\left\{i,j\Big|\ker i=|\,|\ldots|,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\}+O(N^{p-1}) [[/math]]


Now [math]\ker i=|\,|\ldots|[/math] tells us that [math]i[/math] must have distinct entries, and there are [math]\frac{N!}{(N-p)!}\simeq N^p[/math] choices for such multi-indices [math]i[/math]. Regarding the indices [math]j[/math], the main contribution comes from those obtained from [math]i[/math] by permuting the entries over the blocks of [math]\pi[/math], and there are [math]b_1!\ldots b_{|\pi|}![/math] choices here. Thus, we are led to the conclusion in the statement.
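This leading-order behaviour is easy to probe numerically over [math]G=\mathbb Z_N[/math]. In the sketch below (helper names ours), we use the fact that [math][i]=[j][/math] forces [math]j[/math] to be a rearrangement of [math]i[/math], so only the at most [math]p![/math] distinct permutations of each [math]i[/math] need to be scanned; for the particular partition tested, the singleton block forces [math]i_3=j_3[/math], and the count is exactly [math]N^2(2N-1)[/math]:

```python
from itertools import permutations, product
from math import factorial

def I_count(N, blocks):
    """Brute-force I(pi) for G = Z_N: since [i] = [j] forces j to be a
    permutation of i, only the distinct rearrangements of i are scanned."""
    p = sum(len(b) for b in blocks)
    count = 0
    for i in product(range(N), repeat=p):
        for j in set(permutations(i)):
            if all(sum(i[r] for r in b) % N == sum(j[r] for r in b) % N
                   for b in blocks):
                count += 1
    return count

def leading_term(N, blocks):
    """The predicted leading term b_1! ... b_s! N^p."""
    term = N ** sum(len(b) for b in blocks)
    for b in blocks:
        term *= factorial(len(b))
    return term
```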

At the second order now, the estimate is as follows:

Proposition

For [math]F_G[/math] we have the formula

[[math]] \frac{I(\pi)}{b_1!\ldots b_s!N^p}=1+\left(\sum_{i \lt j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}-\frac{1}{2}\sum_i\binom{b_i}{2}\right)N^{-1}+O(N^{-2}) [[/math]]
where [math]b_1,\ldots,b_s[/math] are the block lengths of [math]\pi\in P(p)[/math].


Show Proof

Let us define the “non-arithmetic” part of [math]I(\pi)[/math] as follows:

[[math]] I^\circ(\pi)=\#\left\{i,j\Big|[i_r|r\in\beta]=[j_r|r\in\beta],\forall\beta\in\pi\right\} [[/math]]


We then have the following formula:

[[math]] I^\circ(\pi)=\prod_{\beta\in\pi}\#\left\{i,j\in\{1,\ldots,N\}^{|\beta|}\Big|[i]=[j]\right\}=\prod_{\beta\in\pi}I(\beta) [[/math]]


Also, Proposition 11.19 shows that we have the following estimate:

[[math]] I(\pi)=I^\circ(\pi)+O(N^{p-1}) [[/math]]


Our claim now is that we have the following formula:

[[math]] \frac{I(\pi)-I^\circ(\pi)}{b_1!\ldots b_s!N^p}=\sum_{i \lt j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{-1}+O(N^{-2}) [[/math]]


Indeed, according to Proposition 11.19, we have a formula of the following type:

[[math]] I(\pi)=I^\circ(\pi)+I^1(\pi)+O(N^{p-2}) [[/math]]


More precisely, this formula holds indeed, with [math]I^1(\pi)[/math] coming from [math]i_1,\ldots,i_p[/math] distinct, [math][i]=[j][/math], and with one constraint of type:

[[math]] \sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r\quad,\quad [i_r|r\in\beta]\neq[j_r|r\in\beta] [[/math]]

Now observe that for a two-block partition [math]\pi=(a,b)[/math] this constraint is implemented, up to permutations which leave invariant the blocks of [math]\pi[/math], as follows:

[[math]] \begin{matrix} i_1\ldots i_c&k_1\ldots k_{a-c}&&j_1\ldots j_c&l_1\ldots l_{b-c}\\ \underbrace{j_1\ldots j_c}_c&\underbrace{k_1\ldots k_{a-c}}_{a-c}&&\underbrace{i_1\ldots i_c}_c&\underbrace{l_1\ldots l_{b-c}}_{b-c} \end{matrix} [[/math]]


Let us compute now [math]I^1(a,b)[/math]. We cannot have [math]c=0,1[/math], and once [math]c\geq2[/math] is given, we have [math]\binom{a}{c},\binom{b}{c}[/math] choices for the positions of the [math]i,j[/math] variables in the upper row, then [math]N^{p-1}+O(N^{p-2})[/math] choices for the variables in the upper row, and then finally we have [math]a!b![/math] permutations which can produce the lower row. We therefore obtain:

[[math]] I^1(a,b)=a!b!\sum_{c\geq2}\binom{a}{c}\binom{b}{c}N^{p-1}+O(N^{p-2}) [[/math]]


In the general case now, a similar discussion applies. Indeed, the constraint of type [math]\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r[/math] with [math][i_r|r\in\beta]\neq[j_r|r\in\beta][/math] cannot affect [math]\leq1[/math] blocks, because we are not in the non-arithmetic case, and cannot affect either [math]\geq3[/math] blocks, because affecting [math]\geq3[/math] blocks would require [math]\geq2[/math] constraints. Thus this condition affects exactly [math]2[/math] blocks, and if we let [math]i \lt j[/math] be the indices in [math]\{1,\ldots,s\}[/math] corresponding to these 2 blocks, we obtain:

[[math]] I^1(\pi)=b_1!\ldots b_s!\sum_{i \lt j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{p-1}+O(N^{p-2}) [[/math]]


But this proves the above claim. Let us estimate now [math]I(\sqcap\!\!\sqcap\ldots\sqcap)[/math]. We have:

[[math]] \begin{eqnarray*} &&I(\sqcap\!\!\sqcap\ldots\sqcap)\\ &=&p!\frac{N!}{(N-p)!}+\binom{p}{2}\frac{p!}{2}\cdot\frac{N!}{(N-p+1)!}+O(N^{p-2})\\ &=&p!N^p\left(1-\binom{p}{2}N^{-1}+O(N^{-2})\right)+\binom{p}{2}\frac{p!}{2}N^{p-1}+O(N^{p-2})\\ &=&p!N^p\left(1-\frac{1}{2}\binom{p}{2}N^{-1}+O(N^{-2})\right) \end{eqnarray*} [[/math]]


Now recall that we have:

[[math]] I^\circ(\pi)=\prod_{\beta\in\pi}I(\beta) [[/math]]


We therefore obtain:

[[math]] I^\circ(\pi)=b_1!\ldots b_s!N^p\left(1-\frac{1}{2}\sum_i\binom{b_i}{2}N^{-1}+O(N^{-2})\right) [[/math]]


By plugging this quantity into the above estimate, we obtain the result.
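The order-one coefficient in this estimate can be tested on small block patterns. For block lengths [math](3,1)[/math] it equals [math]-3/2[/math]; and indeed, the singleton block forces [math]i_4=j_4[/math], so the exact count over [math]\mathbb Z_N[/math] is [math]N^2(6N^2-9N+4)[/math] (a derivation of ours, used as an assumption in the check below), whose expansion [math]1-\frac{3}{2}N^{-1}+\frac{2}{3}N^{-2}[/math] matches. For block lengths [math](2,2)[/math] the predicted coefficient vanishes:

```python
from itertools import permutations, product
from math import comb

def I_count(N, blocks):
    """Brute-force I(pi) over Z_N, scanning the rearrangements j of each i."""
    p = sum(len(b) for b in blocks)
    count = 0
    for i in product(range(N), repeat=p):
        for j in set(permutations(i)):
            if all(sum(i[r] for r in b) % N == sum(j[r] for r in b) % N
                   for b in blocks):
                count += 1
    return count

def order_one_coeff(lengths):
    """The coefficient of 1/N predicted by the estimate above."""
    s = len(lengths)
    cross = sum(comb(lengths[a], c) * comb(lengths[b], c)
                for a in range(s) for b in range(a + 1, s)
                for c in range(2, max(lengths) + 1))
    return cross - sum(comb(b, 2) for b in lengths) / 2
```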

In order to estimate the glow, we will need the explicit formula of [math]I(\sqcap\sqcap)[/math]:

Proposition

For [math]F_G[/math] with [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math] we have the formula

[[math]] I(\sqcap\sqcap)=N(4N^3-11N+2^e+7) [[/math]]
where [math]e\in\{0,1,\ldots,k\}[/math] is the number of even numbers among [math]N_1,\ldots,N_k[/math].


Show Proof

The conditions defining the quantities [math]I(\pi)[/math] are as follows:

[[math]] \sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r [[/math]]


We use the fact that, when dealing with these conditions, one can always erase some of the variables [math]i_r,j_r[/math], so as to reduce to the “purely arithmetic” case, namely:

[[math]] \{i_r|r\in\beta\}\cap\{j_r|r\in\beta\}=\emptyset [[/math]]


We deduce from this that we have:

[[math]] I(\sqcap\sqcap)=I^\circ(\sqcap\sqcap)+I^{ari}(\sqcap\sqcap) [[/math]]


Let us compute now [math]I^{ari}(\sqcap\sqcap)[/math]. There are 3 contributions to this quantity, namely:


(1) \underline{Case [math](^{iijj}_{jjii})[/math]}, with [math]i\neq j[/math], [math]2i=2j[/math]. Since [math]2(i_1,\ldots,i_k)=2(j_1,\ldots,j_k)[/math] corresponds to the collection of conditions [math]2i_r=2j_r[/math], inside [math]\mathbb Z_{N_r}[/math], which each have 1 or 2 solutions, depending on whether [math]N_r[/math] is odd or even, the contribution here is:

[[math]] \begin{eqnarray*} I^{ari}_1(\sqcap\sqcap) &=&\#\{i\neq j|2i=2j\}\\ &=&\#\{i,j|2i=2j\}-\#\{i,j|i=j\}\\ &=&2^eN-N\\ &=&(2^e-1)N \end{eqnarray*} [[/math]]


(2) \underline{Case [math](^{iijk}_{jkii})[/math]}, with [math]i,j,k[/math] distinct, [math]2i=j+k[/math]. The contribution here is:

[[math]] \begin{eqnarray*} I^{ari}_2(\sqcap\sqcap) &=&4\#\{i,j,k\ {\rm distinct}|2i=j+k\}\\ &=&4\#\{i\neq j|2i-j\neq i,j\}\\ &=&4\#\{i\neq j|2i\neq 2j\}\\ &=&4(\#\{i,j|i\neq j\}-\#\{i\neq j|2i=2j\})\\ &=&4(N(N-1)-(2^e-1)N)\\ &=&4N(N-2^e) \end{eqnarray*} [[/math]]


(3) \underline{Case [math](^{ijkl}_{klij})[/math]}, with [math]i,j,k,l[/math] distinct, [math]i+j=k+l[/math]. The contribution here is:

[[math]] \begin{eqnarray*} I^{ari}_3(\sqcap\sqcap) &=&4\#\{i,j,k,l\ {\rm distinct}|i+j=k+l\}\\ &=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq i,j,k\}\\ &=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq k\}\\ &=&4\#\{i,j,k\ {\rm distinct}|i\neq 2k-j\} \end{eqnarray*} [[/math]]


We can split this quantity over two cases, [math]2j\neq 2k[/math] and [math]2j=2k[/math], and we obtain:

[[math]] \begin{eqnarray*} I^{ari}_3(\sqcap\sqcap) &=&4(\#\{i,j,k\ {\rm distinct}|2j\neq 2k,i\neq 2k-j\}\\ &&+\#\{i,j,k\ {\rm distinct}|2j=2k,i\neq 2k-j\}) \end{eqnarray*} [[/math]]


The point now is that in the first case, [math]2j\neq 2k[/math], the numbers [math]j,k,2k-j[/math] are distinct, while in the second case, [math]2j=2k[/math], we simply have [math]2k-j=j[/math]. Thus, we obtain:

[[math]] \begin{eqnarray*} I^{ari}_3(\sqcap\sqcap) &=&4\left(\sum_{j\neq k,2j\neq 2k}\#\{i|i\neq j,k,2k-j\}+\sum_{j\neq k,2j=2k}\#\{i|i\neq j,k\}\right)\\ &=&4(N(N-2^e)(N-3)+N(2^e-1)(N-2))\\ &=&4N(N(N-3)-2^e(N-3)+2^e(N-2)-(N-2))\\ &=&4N(N^2-4N+2^e+2) \end{eqnarray*} [[/math]]


We can now compute the arithmetic part. This is given by:

[[math]] \begin{eqnarray*} I^{ari}(\sqcap\sqcap) &=&(2^e-1)N+4N(N-2^e)+4N(N^2-4N+2^e+2)\\ &=&N(2^e-1+4(N-2^e)+4(N^2-4N+2^e+2))\\ &=&N(4N^2-12N+2^e+7) \end{eqnarray*} [[/math]]


Thus the integral to be computed is given by:

[[math]] \begin{eqnarray*} I(\sqcap\sqcap) &=&N^2(2N-1)^2+N(4N^2-12N+2^e+7)\\ &=&N(4N^3-4N^2+N+4N^2-12N+2^e+7)\\ &=&N(4N^3-11N+2^e+7) \end{eqnarray*} [[/math]]


Thus we have reached the formula in the statement, and we are done.
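The closed formula [math]I(\sqcap\sqcap)=N(4N^3-11N+2^e+7)[/math] lends itself to a brute-force check over small groups, including a genuinely non-cyclic one. The sketch below (helper names ours) encodes group elements as tuples and tests both parities of [math]e[/math]:

```python
from itertools import permutations, product

def I_two_blocks(Ns):
    """I for the two-block partition {1,2},{3,4} over G = Z_{N_1} x ... x Z_{N_k}:
    pairs (i, j) of 4-tuples of group elements, with [i] = [j] and equal block sums."""
    G = list(product(*[range(n) for n in Ns]))
    def add(x, y):
        return tuple((a + b) % n for a, b, n in zip(x, y, Ns))
    count = 0
    for i in product(G, repeat=4):
        for j in set(permutations(i)):
            if add(i[0], i[1]) == add(j[0], j[1]) and add(i[2], i[3]) == add(j[2], j[3]):
                count += 1
    return count

def predicted(Ns):
    """The closed formula N(4N^3 - 11N + 2^e + 7) from the statement."""
    N = 1
    for n in Ns:
        N *= n
    e = sum(1 for n in Ns if n % 2 == 0)
    return N * (4 * N ** 3 - 11 * N + 2 ** e + 7)
```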

11d. Universality

We have the following asymptotic result:

Theorem

The glow of [math]F_G[/math], with [math]|G|=N[/math], is given by

[[math]] \frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+O(N^{-4}) [[/math]]

with the coefficients being as follows:

[[math]] K_1=\binom{p}{2}\quad,\quad K_2=\binom{p}{2}\frac{3p^2+p-8}{12}\quad,\quad K_3=\binom{p}{3}\frac{p^3+4p^2+p-18}{8} [[/math]]
Thus, the rescaled complex glow is asymptotically complex Gaussian,

[[math]] \frac{E}{N}\sim\mathcal C [[/math]]
and we have in fact universality at least up to order [math]3[/math].


Show Proof

We use the following quantities:

[[math]] \widetilde{K}(\pi)=\frac{K(\pi)}{p!}\quad,\quad \widetilde{I}(\pi)=\frac{I(\pi)}{N^p} [[/math]]


These are subject to the following formulae:

[[math]] \widetilde{K}(\pi|\ldots|)=\widetilde{K}(\pi)\quad,\quad \widetilde{I}(\pi|\ldots|)=\widetilde{I}(\pi) [[/math]]


Consider as well the following quantities:

[[math]] J(\sigma)=\binom{p}{\sigma}\widetilde{K}(\sigma)\widetilde{I}(\sigma) [[/math]]


In terms of these quantities, we have:

[[math]] \begin{eqnarray*} \frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} &=&J(\emptyset)\\ &+&N^{-1}J(\sqcap)\\ &+&N^{-2}\left(J(\sqcap\!\sqcap)+J(\sqcap\sqcap)\right)\\ &+&N^{-3}\left(J(\sqcap\!\!\sqcap\!\!\sqcap)+J(\sqcap\!\!\sqcap\sqcap)+J(\sqcap\sqcap\sqcap)\right)\\ &+&O(N^{-4}) \end{eqnarray*} [[/math]]


We have the following formulae:

[[math]] \widetilde{K}_0=1 [[/math]]

[[math]] \widetilde{K}_1=1 [[/math]]

[[math]] \widetilde{K}_2=\frac{1}{2}-1=-\frac{1}{2} [[/math]]


[[math]] \widetilde{K}_3=\frac{1}{6}-\frac{3}{2}+2=\frac{2}{3} [[/math]]

[[math]] \widetilde{K}_4=\frac{1}{24}-\frac{4}{6}-\frac{3}{4}+\frac{12}{2}-6=-\frac{11}{8} [[/math]]


Regarding now the numbers [math]C_{pr}[/math] in Proposition 11.19, these are given by:

[[math]] C_{p1}=1 [[/math]]

[[math]] C_{p2}=\frac{1}{2}\binom{2p}{p}-1 [[/math]]

[[math]] \vdots [[/math]]

[[math]] C_{p,p-1}=\frac{p!}{2}\binom{p}{2} [[/math]]

[[math]] C_{pp}=p! [[/math]]


We deduce that we have the following formulae:

[[math]] I(|)=N [[/math]]

[[math]] I(\sqcap)=N(2N-1) [[/math]]

[[math]] I(\sqcap\!\sqcap)=N(6N^2-9N+4) [[/math]]

[[math]] I(\sqcap\!\!\sqcap\!\!\sqcap)=N(24N^3-72N^2+82N-33) [[/math]]
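These four values can be checked by direct enumeration. The sketch below reads [math]|,\sqcap,\sqcap\!\sqcap,\sqcap\!\!\sqcap\!\!\sqcap[/math] as the one-block partitions with [math]1,2,3,4[/math] legs (an assumption of ours about the notation), in which case [math]I[/math] reduces to counting pairs of multi-indices with equal multisets, the sum condition being implied:

```python
from itertools import permutations, product

def I_one_block(N, p):
    """#{(i, j) in [N]^p x [N]^p : [i] = [j]}; for a one-block partition the
    sum condition is implied by the equality of multisets."""
    return sum(len(set(permutations(i)))
               for i in product(range(N), repeat=p))
```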


By using Proposition 11.20 and Proposition 11.21, we obtain the following formula:

[[math]] \begin{eqnarray*} \frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} &=&1-\frac{1}{2}\binom{p}{2}(2N^{-1}-N^{-2})+\frac{2}{3}\binom{p}{3}(6N^{-2}-9N^{-3})\\ &+&3\binom{p}{4}N^{-2}-33\binom{p}{4}N^{-3}-40\binom{p}{5}N^{-3}\\ &-&15\binom{p}{6}N^{-3}+O(N^{-4}) \end{eqnarray*} [[/math]]


But this gives the formulae of [math]K_1,K_2,K_3[/math] in the statement, and we are done.

It is possible to compute the next term as well, the result being as follows:

Theorem

Let [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math] be a finite abelian group, and set:

[[math]] N=N_1\ldots N_k [[/math]]
Then the glow of the associated Fourier matrix [math]F_G[/math] is given by

[[math]] \frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+K_4N^{-4}+O(N^{-5}) [[/math]]
where the quantities [math]K_1,K_2,K_3,K_4[/math] are given by

[[math]] \begin{eqnarray*} K_1&=&\binom{p}{2}\\ K_2&=&\binom{p}{2}\frac{3p^2+p-8}{12}\\ K_3&=&\binom{p}{3}\frac{p^3+4p^2+p-18}{8}\\ K_4&=&\frac{8}{3}\binom{p}{3}+\frac{3}{4}\left(121+\frac{2^e}{N}\right)\binom{p}{4}+416\binom{p}{5}+\frac{2915}{2}\binom{p}{6}+40\binom{p}{7}+105\binom{p}{8} \end{eqnarray*} [[/math]]
where [math]e\in\{0,1,\ldots,k\}[/math] is the number of even numbers among [math]N_1,\ldots,N_k[/math].


Show Proof

This is something that we already know, up to order 3, and the next coefficient [math]K_4[/math] can be computed in a similar way, based on results that we already have.

The passage to Theorem 11.23 is quite interesting, because it shows that the glow of the Fourier matrices [math]F_G[/math] is not polynomial in [math]N=|G|[/math]. When restricting attention to the usual Fourier matrices [math]F_N[/math], the glow up to order 4 is polynomial separately in [math]N[/math] odd and in [math]N[/math] even, but it is not clear what happens at higher orders. An interesting question here is that of computing the complex glow of the Walsh matrices. Indeed, for the Walsh matrices the integrals [math]I(\pi)[/math], and hence the glow itself, might be polynomial in [math]N[/math].


General references

Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].

References

  1. P.C. Fishburn and N.J.A. Sloane, The solution to Berlekamp's switching game, Discrete Math. 74 (1989), 263--290.
  2. R. Roth and K. Viswanathan, On the hardness of decoding the Gale-Berlekamp code, IEEE Trans. Inform. Theory 54 (2008), 1050--1060.
  3. M. Idel and M.M. Wolf, Sinkhorn normal form for unitary matrices, Linear Algebra Appl. 471 (2015), 76--84.
  4. R. Durrett, Probability: theory and examples, Cambridge Univ. Press (1990).
  5. G. Pólya, Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz, Math. Ann. 84 (1921), 149--160.