<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
{{Alert-warning|This article was automatically generated from a tex file and may contain conversion errors. If permitted, you may login and edit this article to improve the conversion. }}
===11a. Basic results===
We discuss here the computation of the glow of the complex Hadamard matrices, as a continuation of the material from chapter 2, where we discussed the basics of the glow in the real case, and also as a continuation of the material from chapter 10. As a first motivation for all this, we have the Gale-Berlekamp game <ref name="fsl">P.C. Fishburn and N.J.A. Sloane, The solution to Berlekamp's switching game, ''Discrete Math.'' '''74''' (1989), 263--290.</ref>, <ref name="rvi">R. Roth and K. Viswanathan, On the hardness of decoding the Gale-Berlekamp code, ''IEEE Trans. Inform. Theory'' '''54''' (2008), 1050--1060.</ref>. Another motivation comes from the questions regarding the bistochastic matrices, in relation with the Idel-Wolf theorem <ref name="iwo">M. Idel and M.M. Wolf, Sinkhorn normal form for unitary matrices, ''Linear Algebra Appl.'' '''471''' (2015), 76--84.</ref>, explained in chapter 10. Finally, we have the question of connecting the defect, and other invariants of the Hadamard matrices, to the glow.
Let us begin by reviewing the few theoretical things that we know about the glow, from chapter 10. The main results there can be summarized as follows:
{{proofcard|Theorem|theorem-1|The glow of <math>H\in M_N(\mathbb C)</math>, which is the law <math>\mu\in\mathcal P(\mathbb C)</math> of the excess
<math display="block">
E=\sum_{ij}H_{ij}
</math>
over the Hadamard equivalence class of <math>H</math>, has the following properties:
<ul><li> <math>\mu=\varepsilon\times\mu^+</math>, where <math>\mu^+=law(|E|)</math>.
</li>
<li> <math>\mu</math> is invariant under rotations.
</li>
<li> <math>H\in\sqrt{N}U_N</math> implies <math>supp(\mu)\subset N\sqrt{N}\,\mathbb D</math>.
</li>
<li> <math>H\in\sqrt{N}U_N</math> implies as well <math>N\sqrt{N}\,\mathbb T\subset supp(\mu)</math>.
</li>
</ul>
|We already know all this from chapter 10, the idea being as follows:
(1) This follows indeed by using <math>H\to zH</math> with <math>|z|=1</math>.
(2) This follows from (1), the convolution with <math>\varepsilon</math> bringing the invariance.
(3) This follows indeed from Cauchy-Schwarz.
(4) This is something highly non-trivial, coming from <ref name="iwo"/>.}}
In what follows we will be mainly interested in the Hadamard matrix case, but since the computations here are quite difficult, let us begin our study with other matrices. It is convenient to normalize our matrices, so as to make them a bit similar to the complex Hadamard ones. To be more precise, consider the <math>2</math>-norm on the vector space of the complex <math>N\times N</math> matrices, which is given by the following formula:
<math display="block">
||H||_2=\sqrt{\sum_{ij}|H_{ij}|^2}
</math>
We will assume in what follows, by multiplying our matrix <math>H\in M_N(\mathbb C)</math> by a suitable scalar, that this norm takes the same value as for the Hadamard matrices, namely:
<math display="block">
||H||_2=N
</math>
We know from chapter 2 that in the real case, the real glow is asymptotically Gaussian. In the complex matrix case, we will reach the conclusion that the glow is asymptotically complex Gaussian, with the complex Gaussian distribution being as follows:
{{proofcard|Proposition|proposition-1|The complex Gaussian distribution <math>\mathcal C</math> is the law of the variable
<math display="block">
z=\frac{1}{\sqrt{2}}(x+iy)
</math>
with <math>x,y</math> being independent standard Gaussian variables. We have
<math display="block">
\mathbb E(|z|^{2p})=p!
</math>
and this moment formula, along with rotational invariance, determines <math>\mathcal C</math>.
|This is standard probability theory, with the main result, namely the moment formula in the statement, coming from some routine computations. For more on all this, we refer to any standard probability book, such as Durrett <ref name="dur">R. Durrett, Probability: theory and examples, Cambridge Univ. Press (1990).</ref>.}}
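Numerically, the moment formula can be illustrated as follows (a Monte Carlo sketch; the sample size and seed are our own choices, not part of the text):

```python
import numpy as np
from math import factorial

# Sample z = (x + iy)/sqrt(2), with x, y independent standard Gaussians,
# and compare E|z|^(2p) with p! for small p (Monte Carlo, so approximate).
rng = np.random.default_rng(0)
n = 10**6
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

moments = [np.mean(np.abs(z) ** (2 * p)) for p in range(1, 4)]
print(moments)  # close to [1, 2, 6]
```

The agreement improves like <math>1/\sqrt{n}</math>, as usual for Monte Carlo estimates.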
Finally, we use in what follows the symbol <math>\sim</math> to denote an equality of distributions. With these conventions, we have the following result, to start with:
{{proofcard|Proposition|proposition-2|We have the following computations:
<ul><li> For the rescaled identity <math>\widetilde{I}_N=\sqrt{N}I_N</math> we have
<math display="block">
E\sim\sqrt{N}(q_1+\ldots +q_N)
</math>
with <math>q\in\mathbb T^N</math> random. With <math>N\to\infty</math> we have <math>E/N\sim\mathcal C</math>.
</li>
<li> For the flat matrix <math>J_N=(1)_{ij}</math> we have
<math display="block">
E\sim(a_1+\ldots+a_N)(b_1+\ldots+b_N)
</math>
with <math>(a,b)\in\mathbb T^N\times\mathbb T^N</math> random. With <math>N\to\infty</math> we have <math>E/N\sim\mathcal C\times\mathcal C</math>.
</li>
</ul>
|We use Theorem 11.1, and the moment method:
(1) Here we have <math>E=\sqrt{N}\sum_{i}a_ib_i</math>, with <math>a,b\in\mathbb T^N</math> random. With <math>q_i=a_ib_i</math> this gives the first assertion. Let us estimate now the moments of <math>|E|^2</math>. We have:
<math display="block">
\begin{eqnarray*}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&N^p\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\\
&=&N^p\int_{\mathbb T^N}\sum_{ij}\frac{q_{i_1}\ldots q_{i_p}}{q_{j_1}\ldots q_{j_p}}\,dq\\
&=&N^p\#\left\{(i,j)\in\{1,\ldots,N\}^p\times\{1,\ldots,N\}^p\Big|[i_1,\ldots,i_p]=[j_1,\ldots,j_p]\right\}\\
&\simeq&N^p\cdot p!N(N-1)\ldots(N-p+1)\\
&\simeq&N^p\cdot p!N^p\\
&=&p!N^{2p}
\end{eqnarray*}
</math>
Here, and in what follows, the sets between brackets are by definition sets with repetition, and the middle estimate comes from the fact that, with <math>N\to\infty</math>, only the multi-indices <math>i=(i_1,\ldots,i_p)</math> having distinct entries contribute. But this gives the result.
(2) Here we have the following formula, which gives the first assertion:
<math display="block">
E
=\sum_{ij}a_ib_j
=\sum_ia_i\sum_jb_j
</math>
Now since <math>a,b\in\mathbb T^N</math> are independent, so are the quantities <math>\sum_ia_i,\sum_jb_j</math>, so we have:
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
=\left(\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq\right)^2
\simeq(p!N^{p})^2
</math>
Here we have used the estimate in the proof of (1), and this gives the result.}}
As a conclusion, the glow is intimately related to the basic hypertoral law, namely the law of the variable <math>q_1+\ldots+q_N</math>, with <math>q\in\mathbb T^N</math> being random. Observe that at <math>N=1</math> this hypertoral law is the Dirac mass <math>\delta_1</math>, and that at <math>N=2</math> we obtain the following law:
<math display="block">
\begin{eqnarray*}
law|1+q|
&=&law\sqrt{(1+e^{it})(1+e^{-it})}\\
&=&law\sqrt{2+2\cos t}\\
&=&law\left(2\cos\frac{t}{2}\right)
\end{eqnarray*}
</math>
In general, the law of <math>\sum q_i</math> is known to be related to the Pòlya random walk <ref name="pol">G. Pòlya, \"Uber eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz, ''Math. Ann.'' '''84''' (1921), 149--160.</ref>. Also, as explained for instance in chapter 9, the moments of this law are: | |||
<math display="block"> | |||
\int_{\mathbb T^N}|q_1+\ldots+q_N|^{2p}dq=\sum_{\pi\in P(p)}\binom{p}{\pi}\frac{N!}{(N-|\pi|)!} | |||
</math> | |||
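This moment formula can be cross-checked by brute force for small <math>N,p</math>: the integral equals the number of pairs of multi-indices with equal multisets, which in turn matches the partition sum. A sketch, with our own helper names:

```python
from itertools import product
from math import factorial

def set_partitions(p):
    # all partitions of {0, ..., p-1}, as lists of blocks
    if p == 0:
        yield []
        return
    for rest in set_partitions(p - 1):
        for k in range(len(rest)):          # add p-1 to an existing block
            yield rest[:k] + [rest[k] + [p - 1]] + rest[k + 1:]
        yield rest + [[p - 1]]              # or open a new block

def moment_by_count(N, p):
    # #{(i, j) : [i] = [j] as multisets} = integral of |q_1+...+q_N|^(2p)
    return sum(sorted(i) == sorted(j)
               for i in product(range(N), repeat=p)
               for j in product(range(N), repeat=p))

def moment_by_partitions(N, p):
    total = 0
    for pi in set_partitions(p):
        multi = factorial(p)
        for block in pi:
            multi //= factorial(len(block))          # binom(p, pi)
        total += multi * factorial(N) // factorial(N - len(pi))
    return total

print(moment_by_count(4, 3), moment_by_partitions(4, 3))  # 256 256
```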
As a second conclusion, even under the normalization <math>||H||_2=N</math>, the glow can behave quite differently in the <math>N\to\infty</math> limit. So, let us now restrict our attention to the complex Hadamard matrices. At <math>N=2</math> we only have <math>F_2</math> to be investigated, the result being:
{{proofcard|Proposition|proposition-3|For the Fourier matrix <math>F_2</math> we have
<math display="block">
|E|^2=4+2Re(\alpha-\beta)
</math>
for certain variables <math>\alpha,\beta\in\mathbb T</math> which are uniform, and independent.
|The matrix that we are interested in, namely the Fourier matrix <math>F_2</math> altered by a vertical switching vector <math>(a,b)</math> and a horizontal switching vector <math>(c,d)</math>, is:
<math display="block">
\widetilde{F}_2=\begin{pmatrix}ac&ad\\bc&-bd\end{pmatrix}
</math>
With this notation, we have the following formula:
<math display="block">
\begin{eqnarray*}
|E|^2
&=&|ac+ad+bc-bd|^2\\
&=&4+\frac{ad}{bc}+\frac{bc}{ad}-\frac{bd}{ac}-\frac{ac}{bd}
\end{eqnarray*}
</math>
To prove that the variables <math>\alpha=\frac{ad}{bc}</math> and <math>\beta=\frac{bd}{ac}</math> are independent, we can use the moment method, as follows:
<math display="block">
\begin{eqnarray*}
\int_{\mathbb T^4}\left(\frac{ad}{bc}\right)^p\left(\frac{bd}{ac}\right)^q
&=&\int_{\mathbb T}a^{p-q}\int_{\mathbb T}b^{q-p}\int_{\mathbb T}c^{-p-q}\int_{\mathbb T}d^{p+q}\\
&=&\delta_{pq}\delta_{pq}\delta_{p,-q}\delta_{p,-q}\\
&=&\delta_{p,q,0}
\end{eqnarray*}
</math>
Thus <math>\alpha,\beta</math> are indeed independent, and we are done.}}
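The formula for <math>|E|^2</math> above can be double-checked numerically, on random points of the torus; a small sketch:

```python
import numpy as np

# Check |ac + ad + bc - bd|^2 = 4 + 2 Re(alpha - beta), with
# alpha = ad/(bc) and beta = bd/(ac), on random unimodular a, b, c, d.
rng = np.random.default_rng(1)
ok = True
for _ in range(100):
    a, b, c, d = np.exp(2j * np.pi * rng.random(4))
    lhs = abs(a*c + a*d + b*c - b*d) ** 2
    rhs = 4 + 2 * ((a*d)/(b*c) - (b*d)/(a*c)).real
    ok = ok and np.isclose(lhs, rhs)
print(ok)  # True
```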
It is possible of course to derive from this some more concrete formulae, but let us look instead at the case <math>N=3</math>. Here, with <math>w=e^{2\pi i/3}</math>, the matrix that we are interested in is:
<math display="block">
\widetilde{F}_3=\begin{pmatrix}ad&ae&af\\ bd&wbe&w^2bf\\ cd&w^2ce&wcf\end{pmatrix}
</math>
Thus, we would like to compute the law of the following quantity:
<math display="block">
|E|=|ad+ae+af+bd+wbe+w^2bf+cd+w^2ce+wcf|
</math>
The problem is that when trying to compute <math>|E|^2</math>, the terms won't cancel much. More precisely, we have a formula of the following type:
<math display="block">
|E|^2=9+C_0+C_1w+C_2w^2
</math>
Here the quantities <math>C_0,C_1,C_2</math> are as follows:
<math display="block">
\begin{eqnarray*}
C_0&=&\frac{ae}{bd}+\frac{ae}{cd}+\frac{af}{bd}+\frac{af}{cd}+\frac{bd}{ae}+\frac{bd}{af}
+\frac{be}{cf}+\frac{bf}{ce}+\frac{cd}{ae}+\frac{cd}{af}+\frac{ce}{bf}+\frac{cf}{be}\\
C_1&=&\frac{ad}{bf}+\frac{ad}{ce}+\frac{ae}{bf}+\frac{af}{ce}+\frac{bd}{ce}+\frac{be}{ad}
+\frac{be}{af}+\frac{be}{cd}+\frac{cd}{bf}+\frac{cf}{ad}+\frac{cf}{ae}+\frac{cf}{bd}\\
C_2&=&\frac{ad}{be}+\frac{ad}{cf}+\frac{ae}{cf}+\frac{af}{be}+\frac{bd}{cf}+\frac{bf}{ad}
+\frac{bf}{ae}+\frac{bf}{cd}+\frac{cd}{be}+\frac{ce}{ad}+\frac{ce}{af}+\frac{ce}{bd}
\end{eqnarray*}
</math>
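The few cancellations which do appear can be checked numerically: in the expansion of <math>|E|^2</math>, the cross terms between entries lying in the same row or in the same column cancel, due to <math>1+w+w^2=0</math>, and what survives are precisely the <math>36</math> cross terms between entries in different rows and different columns, as collected in <math>C_0,C_1,C_2</math>. A sketch:

```python
import numpy as np

# Deformed F_3, with switching vectors (a,b,c) and (d,e,f), w = e^(2*pi*i/3).
rng = np.random.default_rng(2)
w = np.exp(2j * np.pi / 3)
a, b, c, d, e, f = np.exp(2j * np.pi * rng.random(6))
M = np.array([[a*d,      a*e,      a*f],
              [b*d,    w*b*e, w**2*b*f],
              [c*d, w**2*c*e,    w*c*f]])

E2 = abs(M.sum()) ** 2
# cross terms between entries in different rows AND different columns;
# the within-row and within-column cross terms cancel (1 + w + w^2 = 0)
cross = sum(M[i, j] * np.conj(M[k, l])
            for i in range(3) for j in range(3)
            for k in range(3) for l in range(3)
            if i != k and j != l)
print(np.isclose(E2, 9 + cross.real))  # True
```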
In short, all this obviously leads nowhere, and the exact study stops at <math>F_2</math>. In general now, one idea is that of using Bernoulli-type variables coming from the row sums, a bit as we did in chapter 2 in the real case, the result here being as follows:
{{proofcard|Theorem|theorem-2|The glow of <math>H\in M_N(\mathbb C)</math> is given by the formula
<math display="block">
law(E)=\int_{a\in\mathbb T^N}B((Ha)_1,\ldots,(Ha)_N)
</math>
where the quantities on the right are
<math display="block">
B(c_1,\ldots,c_N)=law\left(\sum_i\lambda_ic_i\right)
</math>
with <math>\lambda\in\mathbb T^N</math> being random.
|This is clear indeed from the following formula:
<math display="block">
E= < a,Hb >
</math>
To be more precise, when the vector <math>a\in\mathbb T^N</math> is assumed to be fixed, this variable <math>E</math> follows the law <math>B((Ha)_1,\ldots,(Ha)_N)</math> in the statement.}}
Observe that, regarding the laws appearing in Theorem 11.5, we can write for them a formula of the following type, with <math>\times</math> being a multiplicative convolution:
<math display="block">
B(c_1,\ldots,c_N)=\varepsilon\times\beta(|c_1|,\ldots,|c_N|)
</math>
To be more precise, such a formula holds indeed, with the measure <math>\beta(r_1,\ldots,r_N)\in\mathcal P(\mathbb R_+)</math> with <math>r_1,\ldots,r_N\geq 0</math> being given by the following formula:
<math display="block">
\beta(r_1,\ldots,r_N)=law\left|\sum_i\lambda_ir_i\right|
</math>
Regarding now the explicit computation of <math>\beta</math>, observe that we have:
<math display="block">
\beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\frac{\lambda_i}{\lambda_j}\cdot r_ir_j}
</math>
Consider now the following variable, which is easily seen, for instance by using the moment method, to be uniform over the projective torus <math>\mathbb T^{N-1}=\mathbb T^N/\mathbb T</math>:
<math display="block">
(\mu_1,\mu_2,\ldots,\mu_N)=\left(\frac{\lambda_1}{\lambda_2},\frac{\lambda_2}{\lambda_3},\ldots,\frac{\lambda_N}{\lambda_1}\right)
</math>
Now since we have <math>\lambda_i/\lambda_j=\mu_i\mu_{i+1}\ldots\mu_j</math>, with the convention <math>\mu_i\ldots\mu_j=\overline{\mu_j\ldots\mu_i}</math> for <math>i > j</math>, this gives the following formula, with <math>\mu\in\mathbb T^{N-1}</math> random:
<math display="block">
\beta(r_1,\ldots,r_N)=law\sqrt{\sum_{ij}\mu_i\mu_{i+1}\ldots\mu_j\cdot r_ir_j}
</math>
It is possible to further study the laws <math>\beta</math> by using this formula. However, in practice, it is more convenient to use the complex measures <math>B</math> from Theorem 11.5.
Let us end these preliminaries with a discussion of the “arithmetic” version of the problem, which makes the link with the Gale-Berlekamp game <ref name="fsl"/>, <ref name="rvi"/> and with the work in the real case, from chapter 2. We have the following unifying formalism:
{{defncard|label=|id=|Given <math>H\in M_N(\mathbb C)</math> and <math>s\in\mathbb N\cup\{\infty\}</math>, we define a measure
<math display="block">
\mu_s\in\mathcal P(\mathbb C)
</math>
by the following formula, valid for any continuous function <math>\varphi</math>,
<math display="block">
\int_\mathbb C\varphi(x)d\mu_s(x)=\int_{\mathbb Z^N_s\times\mathbb Z^N_s}\varphi\left(\sum_{ij}a_ib_jH_{ij}\right)d(a,b)
</math>
where <math>\mathbb Z_s\subset\mathbb T</math> is the group of <math>s</math>-th roots of unity, with the convention <math>\mathbb Z_\infty=\mathbb T</math>.}}
Observe that at <math>s=\infty</math> we obtain the measure in Theorem 11.1. Also, at <math>s=2</math> and for a usual Hadamard matrix, <math>H\in M_N(\pm1)</math>, we obtain the measure from chapter 2. Observe also that for <math>H\in M_N(\pm1)</math>, knowing <math>\mu_2</math> is the same as knowing the statistics of the number of one entries, <math>|1\in H|</math>. This follows indeed from the following formula:
<math display="block">
\begin{eqnarray*}
\sum_{ij}H_{ij}
&=&|1\in H|-|-1\in H|\\
&=&2|1\in H|-N^2
\end{eqnarray*}
</math>
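In code, for a random sign matrix, this reads (a trivial but reassuring check):

```python
import numpy as np

# For H in M_N(+-1): sum of entries = 2 * (number of +1 entries) - N^2.
rng = np.random.default_rng(3)
N = 7
H = rng.choice([-1, 1], size=(N, N))
print(H.sum() == 2 * np.count_nonzero(H == 1) - N**2)  # True
```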
More generally, at <math>s=p</math> prime, we have the following result:
{{proofcard|Theorem|theorem-3|When <math>s</math> is prime and <math>H\in M_N(\mathbb Z_s)</math>, the statistics of the number of one entries, <math>|1\in H|</math>, can be recovered from that of the total sum, <math>E=\sum_{ij}H_{ij}</math>.
|The problem here is of vectorial nature, so given <math>V\in\mathbb Z_s^n</math>, we would like to compare the quantities <math>|1\in V|</math> and <math>\sum V_i</math>. Let us write, up to permutations:
<math display="block">
V=(\underbrace{1\ldots1}_{a_0}\,\,\underbrace{w\ldots w}_{a_1}\,\ldots\ldots\,\underbrace{w^{s-1}\ldots w^{s-1}}_{a_{s-1}})
</math>
We have then <math>|1\in V|=a_0</math>, as well as:
<math display="block">
\sum V_i=a_0+a_1w+\ldots+a_{s-1}w^{s-1}
</math>
We also know that <math>a_0+a_1+\ldots+a_{s-1}=n</math>. Now when <math>s</math> is prime, the only ambiguity in recovering <math>a_0</math> from <math>a_0+a_1w+\ldots+a_{s-1}w^{s-1}</math> can come from:
<math display="block">
1+w+\ldots+w^{s-1}=0
</math>
But since the sum of the numbers <math>a_i</math> is fixed, <math>a_0+a_1+\ldots+a_{s-1}=n</math>, this ambiguity disappears, and this gives the result.}}
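The recovery argument can be tested by enumeration, say at <math>s=3</math>. Here the normal form below subtracts <math>\min_ia_i</math> from all entries, which implements exactly the ambiguity <math>1+w+\ldots+w^{s-1}=0</math>; with the total <math>n</math> fixed, distinct vectors must then stay distinct:

```python
from itertools import product

# For s prime, two vectors (a_0,...,a_{s-1}) give the same value of
# a_0 + a_1 w + ... + a_{s-1} w^(s-1) iff they differ by a multiple of
# (1,1,...,1); with the total n fixed, that multiple must be zero.
def normal_form(a):
    m = min(a)
    return tuple(x - m for x in a)

s, n = 3, 6
seen = set()
count = 0
for a in product(range(n + 1), repeat=s):
    if sum(a) == n:
        count += 1
        seen.add(normal_form(a))
print(count == len(seen))  # True: no two vectors with total n collide
```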
===11b. Glow moments===
Let us investigate now the glow of the complex Hadamard matrices, by using the moment method. We use the moment formula from chapter 10, namely:
{{proofcard|Proposition|proposition-4|For <math>H\in M_N(\mathbb T)</math> the even moments of <math>|E|</math> are given by
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{[i]=[k],[j]=[l]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}
</math>
where the sets between brackets are by definition sets with repetition.
|As explained in chapter 10, with <math>E=\sum_{ij}H_{ij}a_ib_j</math> we obtain:
<math display="block">
\begin{eqnarray*}
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
&=&\int_{\mathbb T^N\times\mathbb T^N}\left(\sum_{ijkl}\frac{H_{ij}}{H_{kl}}\cdot\frac{a_ib_j}{a_kb_l}\right)^p\\
&=&\sum_{ijkl}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\int_{\mathbb T^N}\frac{a_{i_1}\ldots a_{i_p}}{a_{k_1}\ldots a_{k_p}}\int_{\mathbb T^N}\frac{b_{j_1}\ldots b_{j_p}}{b_{l_1}\ldots b_{l_p}}
\end{eqnarray*}
</math>
The integrals on the right being <math>\delta_{[i],[k]}</math> and <math>\delta_{[j],[l]}</math>, we obtain the result.}}
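This formula can be verified on small matrices by comparing it with an exact evaluation of the torus integral, obtained by averaging each variable over <math>M</math>-th roots of unity with <math>M > 2p</math>, which integrates Laurent monomials of degree less than <math>M</math> exactly. A sketch, for <math>H=F_2</math> and <math>p=2</math>:

```python
import numpy as np
from itertools import product

# Compare the combinatorial sum over [i]=[k], [j]=[l] with the exact
# torus integral of |E|^(2p), computed on M-th roots of unity, M > 2p.
H = np.array([[1, 1], [1, -1]], dtype=complex)   # F_2
N, p, M = 2, 2, 5
roots = np.exp(2j * np.pi * np.arange(M) / M)

combinatorial = 0
for i, j, k, l in product(product(range(N), repeat=p), repeat=4):
    if sorted(i) == sorted(k) and sorted(j) == sorted(l):
        num = np.prod([H[i[r], j[r]] for r in range(p)])
        den = np.prod([H[k[r], l[r]] for r in range(p)])
        combinatorial += num / den

integral = np.mean([abs(np.array(a) @ H @ np.array(b)) ** (2 * p)
                    for a in product(roots, repeat=N)
                    for b in product(roots, repeat=N)])
print(combinatorial.real, integral)  # the two values agree
```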
As a first application, let us investigate the tensor products. We have:
{{proofcard|Proposition|proposition-5|The even moments of the variable <math>|E|</math> for a tensor product
<math display="block">
L=H\otimes K
</math>
are given by the following formula,
<math display="block">
\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}}
</math>
where the sets between brackets are as usual sets with repetition.
|With <math>L=H\otimes K</math>, the formula in Proposition 11.8 reads:
<math display="block">
\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}=\sum_{[ia]=[kc],[jb]=[ld]}\frac{L_{i_1a_1,j_1b_1}\ldots L_{i_pa_p,j_pb_p}}{L_{k_1c_1,l_1d_1}\ldots L_{k_pc_p,l_pd_p}}
</math>
But this gives the formula in the statement, and we are done.}}
Thus, we cannot reconstruct the glow of <math>H\otimes K</math> from that of <math>H,K</math>, because the indices “get mixed”. We have as well a result regarding the deformations, as follows:
{{proofcard|Proposition|proposition-6|The even moments of <math>|E|</math> for a deformed tensor product
<math display="block">
L=H\otimes_Q K
</math>
are given by the following formula,
<math display="block">
\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}
=\sum_{[ia]=[kc],[jb]=[ld]}\frac{Q_{i_1b_1}\ldots Q_{i_pb_p}}{Q_{k_1d_1}\ldots Q_{k_pd_p}}\cdot\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}}
</math>
where the sets between brackets are as usual sets with repetition.
|As before, we use the formula in Proposition 11.8. We have:
<math display="block">
L_{ia,jb}=Q_{ib}H_{ij}K_{ab}
</math>
Thus, we obtain the following formula for the moments:
<math display="block">
\begin{eqnarray*}
\int_{\mathbb T^{NM}\times\mathbb T^{NM}}|E|^{2p}
&=&\sum_{[ia]=[kc],[jb]=[ld]}\frac{L_{i_1a_1,j_1b_1}\ldots L_{i_pa_p,j_pb_p}}{L_{k_1c_1,l_1d_1}\ldots L_{k_pc_p,l_pd_p}}\\
&=&\sum_{[ia]=[kc],[jb]=[ld]}\frac{Q_{i_1b_1}\ldots Q_{i_pb_p}}{Q_{k_1d_1}\ldots Q_{k_pd_p}}\cdot\frac{H_{i_1j_1}\ldots H_{i_pj_p}}{H_{k_1l_1}\ldots H_{k_pl_p}}\cdot\frac{K_{a_1b_1}\ldots K_{a_pb_p}}{K_{c_1d_1}\ldots K_{c_pd_p}}
\end{eqnarray*}
</math>
Thus, we are led to the conclusion in the statement.}}
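The entry formula <math>L_{ia,jb}=Q_{ib}H_{ij}K_{ab}</math> can be sanity-checked in code; the sketch below, with our own indexing conventions, also verifies that the deformed tensor product remains complex Hadamard for a random unimodular parameter matrix <math>Q</math>:

```python
import numpy as np

# Build L = H x_Q K with L[(i,a),(j,b)] = Q[i,b] * H[i,j] * K[a,b]
# and check that it is complex Hadamard for random unimodular Q.
rng = np.random.default_rng(4)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
w = np.exp(2j * np.pi / 3)
F3 = np.array([[w ** (i * j) for j in range(3)] for i in range(3)])

N, Mm = 2, 3
Q = np.exp(2j * np.pi * rng.random((N, Mm)))
L = np.zeros((N * Mm, N * Mm), dtype=complex)
for i in range(N):
    for a in range(Mm):
        for j in range(N):
            for b in range(Mm):
                L[i * Mm + a, j * Mm + b] = Q[i, b] * F2[i, j] * F3[a, b]

print(np.allclose(np.abs(L), 1),                             # unimodular
      np.allclose(L @ L.conj().T, N * Mm * np.eye(N * Mm)))  # orthogonal rows
```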
The above formulae might look quite complicated, but they have some practical use. Let us go back indeed to a question that we had open since chapter 5, namely classifying the <math>4\times4</math> complex Hadamard matrices, up to equivalence. We can now formulate:
{{proofcard|Theorem|theorem-4|The complex Hadamard matrices at <math>N=4</math> are, up to equivalence, the following matrices, where <math>s=e^{it}</math> with <math>t\in[0,\pi/2]</math>,
<math display="block">
F_4^s=\begin{pmatrix}
1&1&1&1\\
1&-1&1&-1\\
1&s&-1&-s\\
1&-s&-1&s
\end{pmatrix}
</math>
and these matrices are distinguished by the third moment of the glow. Alternatively, these matrices are distinguished by the third order term of the glow.
|We know from chapter 5 that the <math>4\times4</math> complex Hadamard matrices are those in the statement, with <math>s\in\mathbb T</math>, and we also know from there that we have equivalences as follows, which in practice means that we can assume <math>s=e^{it}</math> with <math>t\in[0,\pi/2]</math>:
<math display="block">
F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}}
</math>
It remains to prove that these matrices, namely <math>F_4^s</math> with <math>s=e^{it}</math>, <math>t\in[0,\pi/2]</math>, are pairwise inequivalent. For this purpose, let us look at the moments of the glow:
(1) Regarding the first moment, this is not something useful, because we have the following formula, coming from Proposition 11.8, valid for any <math>H\in M_N(\mathbb T)</math>:
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^2=\sum_{i=k,j=l}\frac{H_{ij}}{H_{kl}}=\sum_{ij}\frac{H_{ij}}{H_{ij}}=N^2
</math>
(2) Regarding the second moment, this is something not useful either, because once again by using Proposition 11.8, we obtain a formula as follows, for any <math>H\in M_N(\mathbb T)</math>:
<math display="block">
\begin{eqnarray*}
\int_{\mathbb T^N\times\mathbb T^N}|E|^4
&=&\sum_{[ik]=[mp],[jl]=[nq]}\frac{H_{ij}H_{kl}}{H_{mn}H_{pq}}\\
&=&P(N)+2\sum_{i\neq k,j\neq l}\frac{H_{ij}H_{kl}}{H_{il}H_{kj}}\\
&=&Q(N)+2\sum_{ijkl}\frac{H_{ij}H_{kl}}{H_{il}H_{kj}}\\
&=&Q(N)+2\sum_{ik}| < H_i,H_k > |^2\\
&=&Q(N)+2N^3
\end{eqnarray*}
</math>
To be more precise, here <math>P</math> is a certain polynomial, not depending on <math>H</math>, collecting the contributions from the “trivial” solutions of <math>[ik]=[mp]</math>, <math>[jl]=[nq]</math>, and then <math>Q</math> is another polynomial, again not depending on <math>H</math>, obtained from <math>P</math> via a summing trick.
(3) However, when getting to the third moment, or higher, things become interesting. Indeed, the equivalences <math>F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}}</math> tell us that the <math>p</math>-th moment of <math>|E|^2</math> is a degree <math>p</math> even, symmetric Laurent polynomial in <math>s\in\mathbb T</math>, and a direct computation at <math>p=3</math>, based on the formula in Proposition 11.10, shows that the parameter <math>s\in\mathbb T</math> can be recaptured, up to identifying <math>\{s,-s,\bar{s},-\bar{s}\}</math>, from the knowledge of this polynomial.
(4) Alternatively, we can say that the parameter <math>s\in\mathbb T</math> can be recaptured, again up to identifying <math>\{s,-s,\bar{s},-\bar{s}\}</math>, from the knowledge of the third order term of the glow, with this meaning by definition the <math>N^{-2}</math> factor in the <math>N^{-1}</math> expansion of the law of <math>|E|/N</math>.}}
Summarizing, there are some interesting things going on here, which will actually need some time to be fully understood. So, let us develop now some systematic moment machinery for the glow, along the above lines. Let <math>P(p)</math> be the set of partitions of <math>\{1,\ldots,p\}</math>, with its standard order relation <math>\leq</math>, which is such that, for any <math>\pi\in P(p)</math>:
<math display="block">
\sqcap\hskip-1.6mm\sqcap\ldots\sqcap\leq\pi\leq|\ |\ldots|\ |
</math>
We denote by <math>\mu(\pi,\sigma)</math> the associated Möbius function, given by:
<math display="block">
\mu(\pi,\sigma)=\begin{cases}
1&{\rm if}\ \pi=\sigma\\
-\sum_{\pi\leq\tau < \sigma}\mu(\pi,\tau)&{\rm if}\ \pi < \sigma\\
0&{\rm if}\ \pi\not\leq\sigma
\end{cases}
</math>
To be more precise, the Möbius function is defined by recurrence, by using this formula. The main interest in the Möbius function comes from the Möbius inversion formula, which states that the following happens, at the level of the functions on <math>P(p)</math>:
<math display="block">
f(\sigma)=\sum_{\pi\leq\sigma}g(\pi)
\quad\implies\quad g(\sigma)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)f(\pi)
</math>
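The recurrence and the inversion formula can be implemented directly; a sketch, using the order convention above, in which the one-block partition is minimal and the all-singletons partition is maximal:

```python
# Partitions of {0,...,p-1}, the text's order (sigma >= pi when sigma
# refines pi), the Mobius function via the defining recurrence, and a
# check of the Mobius inversion formula on an arbitrary test function.
def set_partitions(p):
    if p == 0:
        return [frozenset()]
    out = []
    for rest in set_partitions(p - 1):
        blocks = list(rest)
        for k in range(len(blocks)):
            out.append(frozenset(blocks[:k] + [blocks[k] | {p - 1}]
                                 + blocks[k + 1:]))
        out.append(rest | {frozenset({p - 1})})
    return out

def leq(pi, sigma):
    # pi <= sigma when sigma refines pi
    return all(any(c <= b for b in pi) for c in sigma)

def mobius(pi, sigma):
    if pi == sigma:
        return 1
    if not leq(pi, sigma):
        return 0
    return -sum(mobius(pi, tau) for tau in P
                if leq(pi, tau) and leq(tau, sigma) and tau != sigma)

p = 3
P = set_partitions(p)
g = {sigma: len(sigma) ** 2 for sigma in P}     # an arbitrary test function
f = {sigma: sum(g[pi] for pi in P if leq(pi, sigma)) for sigma in P}
inverted = {sigma: sum(mobius(pi, sigma) * f[pi]
                       for pi in P if leq(pi, sigma)) for sigma in P}
bottom = frozenset({frozenset(range(p))})           # one-block partition
top = frozenset({frozenset({i}) for i in range(p)}) # all singletons
print(inverted == g, mobius(bottom, top))  # True 2
```

The value <math>\mu(\sqcap\!\!\sqcap\ldots\sqcap,|\,|\ldots|)=2</math> at <math>p=3</math> agrees with the closed formula <math>(-1)^{|\sigma|-1}(|\sigma|-1)!</math> used later in this chapter.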
For <math>\pi\in P(p)</math> we use the following notation, where <math>b_1,\ldots,b_{|\pi|}</math> are the block lengths:
<math display="block">
\binom{p}{\pi}=\binom{p}{b_1\ldots b_{|\pi|}}=\frac{p!}{b_1!\ldots b_{|\pi|}!}
</math>
Finally, we use the following notation, where <math>H_1,\ldots,H_N\in\mathbb T^N</math> are the rows of <math>H</math>:
<math display="block">
H_\pi(i)=\bigotimes_{\beta\in\pi}\prod_{r\in\beta}H_{i_r}
</math>
With these notations, we have the following result:
{{proofcard|Theorem|theorem-5|The glow moments of a matrix <math>H\in M_N(\mathbb T)</math> are given by
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{\pi\in P(p)}K(\pi)N^{|\pi|}I(\pi)
</math>
where the coefficients are given by
<math display="block">
K(\pi)=\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma}
</math>
and where the contributions are given by
<math display="block">
I(\pi)=\frac{1}{N^{|\pi|}}\sum_{[i]=[j]} < H_\pi(i),H_\pi(j) >
</math>
by using the above notations and conventions.
|We know from Proposition 11.8 that the moments are given by:
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}
=\sum_{[i]=[j],[x]=[y]}\frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}}
</math>
With <math>\sigma=\ker x,\rho=\ker y</math>, we deduce that the moments of <math>|E|^2</math> decompose over partitions, according to a formula as follows:
<math display="block">
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\int_{\mathbb T^N}\sum_{\sigma,\rho\in P(p)}C(\sigma,\rho)
</math>
To be more precise, the contributions are as follows:
<math display="block">
C(\sigma,\rho)=\sum_{\ker x=\sigma,\ker y=\rho}\delta_{[x],[y]}\sum_{ij}
\frac{H_{i_1x_1}\ldots H_{i_px_p}}{H_{j_1y_1}\ldots H_{j_py_p}}
\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}
</math>
We have <math>C(\sigma,\rho)=0</math> unless <math>\sigma\sim\rho</math>, in the sense that <math>\sigma,\rho</math> must have the same block structure. The point now is that the sums of type <math>\sum_{\ker x=\sigma}</math> can be computed by using the Möbius inversion formula. We obtain a formula as follows:
<math display="block">
C(\sigma,\rho)=\delta_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\prod_{\beta\in\pi}C_{|\beta|}(a)
</math>
Here the functions on the right are by definition given by:
<math display="block">
\begin{eqnarray*}
C_r(a)
&=&\sum_x\sum_{ij}\frac{H_{i_1x}\ldots H_{i_rx}}{H_{j_1x}\ldots H_{j_rx}}\cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}}\\
&=&\sum_{ij} < H_{i_1}\ldots H_{i_r},H_{j_1}\ldots H_{j_r} > \cdot\frac{a_{i_1}\ldots a_{i_r}}{a_{j_1}\ldots a_{j_r}}
\end{eqnarray*}
</math>
Now since there are <math>\binom{p}{\sigma}</math> partitions having the same block structure as <math>\sigma</math>, we obtain:
<math display="block">
\begin{eqnarray*}
&&\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}\\
&=&\int_{\mathbb T^N}\sum_{\pi\in P(p)}\left(\sum_{\sigma\sim\rho}\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\right)\prod_{\beta\in\pi}C_{|\beta|}(a)\\
&=&\sum_{\pi\in P(p)}\left(\sum_{\sigma\in P(p)}\mu(\pi,\sigma)\binom{p}{\sigma}\right)\int_{\mathbb T^N}\prod_{\beta\in\pi}C_{|\beta|}(a)
\end{eqnarray*}
</math>
But this gives the formula in the statement, and we are done.}}
Let us discuss now the asymptotic behavior of the glow. For this purpose, we first study the coefficients <math>K(\pi)</math> in Theorem 11.12. We have here the following result:
{{proofcard|Proposition|proposition-7|The coeffients appearing in the above, namely | |||
<math display="block"> | |||
K(\pi)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)\binom{p}{\sigma} | |||
</math> | |||
have the following properties: | |||
<ul><li> The function <math>\widetilde{K}(\pi)=\frac{K(\pi)}{p!}</math> is multiplicative, in the sense that: | |||
<math display="block"> | |||
\widetilde{K}(\pi\pi')=\widetilde{K}(\pi)\widetilde{K}(\pi') | |||
</math> | |||
</li> | |||
<li> On the one-block partitions, we have: | |||
<math display="block"> | |||
K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma} | |||
</math> | |||
</li> | |||
<li> We have as well the following fomula, | |||
<math display="block"> | |||
K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^p(-1)^{r-1}(r-1)!C_{pr} | |||
</math> | |||
where the coefficients on the right are given by: | |||
<math display="block"> | |||
C_{pr}=\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2 | |||
</math> | |||
</li> | |||
</ul> | |||
|This follows from some standard computations, as follows: | |||
(1) We can use here the following formula, which is a well-known property of the Möbius function, which can be proved by recurrence: | |||
<math display="block"> | |||
\mu(\pi\pi',\sigma\sigma')=\mu(\pi,\sigma)\mu(\pi',\sigma') | |||
</math> | |||
Now if <math>b_1,\ldots,b_s</math> and <math>c_1,\ldots,c_t</math> are the block lengths of <math>\sigma,\sigma'</math>, we obtain, as claimed: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
\widetilde{K}(\pi\pi') | |||
&=&\sum_{\pi\pi'\leq\sigma\sigma'}\mu(\pi\pi',\sigma\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\ | |||
&=&\sum_{\pi\leq\sigma,\pi'\leq\sigma'}\mu(\pi,\sigma)\mu(\pi',\sigma')\cdot\frac{1}{b_1!\ldots b_s!}\cdot\frac{1}{c_1!\ldots c_t!}\\ | |||
&=&\widetilde{K}(\pi)\widetilde{K}(\pi') | |||
\end{eqnarray*} | |||
</math> | |||
(2) We can use here the following formula, which once again is well-known, and can be proved by recurrence on <math>|\sigma|</math>: | |||
<math display="block"> | |||
\mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)=(-1)^{|\sigma|-1}(|\sigma|-1)! | |||
</math> | |||
We therefore obtain, as claimed: | |||
<math display="block"> | |||
K(\sqcap\!\!\sqcap\ldots\sqcap) | |||
=\sum_{\sigma\in P(p)}\mu(\sqcap\!\!\sqcap\ldots\sqcap,\sigma)\binom{p}{\sigma} | |||
=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma} | |||
</math> | |||
(3) By using the formula in (2), and summing over <math>r=|\sigma|</math>, we obtain: | |||
<math display="block"> | |||
K(\sqcap\!\!\sqcap\ldots\sqcap) | |||
=\sum_{r=1}^p(-1)^{r-1}(r-1)!\sum_{|\sigma|=r}\binom{p}{\sigma} | |||
</math> | |||
Now if we denote by <math>a_1,\ldots,a_r</math> with <math>a_i\geq1</math> the block lengths of <math>\sigma</math>, then: | |||
<math display="block"> | |||
\binom{p}{\sigma}=\binom{p}{a_1,\ldots,a_r} | |||
</math> | |||
On the other hand, given <math>a_1,\ldots,a_r\geq1</math> with <math>a_1+\ldots+a_r=p</math>, there are exactly <math>\binom{p}{a_1,\ldots,a_r}</math> ordered sequences of pairwise disjoint blocks having these lengths, and each partition <math>\sigma</math> with <math>|\sigma|=r</math> arises from exactly <math>r!</math> such sequences. Thus: | |||
<math display="block"> | |||
\sum_{|\sigma|=r}\binom{p}{\sigma}=\frac{1}{r!}\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2 | |||
</math> | |||
Thus, we are led to the conclusion in the statement.}} | |||
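Since the above coefficients will be used repeatedly in what follows, a quick numerical verification is perhaps welcome. The following Python sketch, with helper names of our own choosing, checks the formula in (2) against the one in (3), for small <math>p</math>:

```python
from math import comb, factorial

def set_partitions(s):
    # all partitions of the list s into non-empty blocks
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        for k in range(len(smaller)):
            yield smaller[:k] + [[first] + smaller[k]] + smaller[k+1:]
        yield [[first]] + smaller

def multinomial(p, sizes):
    # binom(p; a_1, ..., a_r)
    out, rem = 1, p
    for a in sizes:
        out *= comb(rem, a)
        rem -= a
    return out

def K_oneblock(p):
    # formula (2): sum over sigma in P(p) of (-1)^{|sigma|-1}(|sigma|-1)! binom(p,sigma)
    return sum((-1)**(len(s)-1) * factorial(len(s)-1)
               * multinomial(p, [len(b) for b in s])
               for s in set_partitions(list(range(p))))

def C(p, r):
    # the coefficients C_{pr}, with the 1/r! normalization
    def comps(n, k):
        if k == 1:
            yield (n,)
            return
        for a in range(1, n - k + 2):
            for rest in comps(n - a, k - 1):
                yield (a,) + rest
    return sum(multinomial(p, c)**2 for c in comps(p, r)) // factorial(r)

# formula (2) agrees with formula (3), for small p
for p in range(1, 7):
    assert K_oneblock(p) == sum((-1)**(r-1) * factorial(r-1) * C(p, r)
                                for r in range(1, p+1))
```

In particular one finds <math>K(\sqcap\!\!\sqcap\ldots\sqcap)=-1,4,-33</math> at <math>p=2,3,4</math>, in agreement with the values <math>\widetilde{K}_2=-1/2</math>, <math>\widetilde{K}_3=2/3</math>, <math>\widetilde{K}_4=-11/8</math> computed later in this chapter.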
Now let us take a closer look at the integrals <math>I(\pi)</math> from Theorem 11.12, namely: | |||
<math display="block"> | |||
I(\pi)=\frac{1}{N^{|\pi|}}\sum_{[i]=[j]}\left\langle H_\pi(i),H_\pi(j)\right\rangle | |||
</math> | |||
We have here the following result: | |||
{{proofcard|Proposition|proposition-8|Consider the one-block partition <math>\sqcap\!\!\sqcap\ldots\sqcap\in P(p)</math>. | |||
<ul><li> <math>I(\sqcap\!\!\sqcap\ldots\sqcap)=\#\{i,j\in\{1,\ldots,N\}^p|[i]=[j]\}</math>. | |||
</li> | |||
<li> <math>I(\sqcap\!\!\sqcap\ldots\sqcap)=\int_{\mathbb T^N}|\sum_ia_i|^{2p}da</math>. | |||
</li> | |||
<li> <math>I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\binom{p}{\sigma}\frac{N!}{(N-|\sigma|)!}</math>. | |||
</li> | |||
<li> <math>I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}C_{pr}\frac{N!}{(N-r)!}</math>, where <math>C_{pr}=\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2</math>. | |||
</li> | |||
</ul> | |||
|Once again, these formulae follow from some standard combinatorics: | |||
(1) This follows indeed from the following computation: | |||
<math display="block"> | |||
I(\sqcap\!\!\sqcap\ldots\sqcap) | |||
=\sum_{[i]=[j]}\frac{1}{N}\left\langle H_{i_1}\ldots H_{i_p},H_{j_1}\ldots H_{j_p}\right\rangle | |||
=\sum_{[i]=[j]}1 | |||
</math> | |||
(2) This follows from the following computation: | |||
<math display="block"> | |||
\int_{\mathbb T^N}\left|\sum_ia_i\right|^{2p}da | |||
=\int_{\mathbb T^N}\sum_{ij}\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}da | |||
=\#\left\{i,j\Big|[i]=[j]\right\} | |||
</math> | |||
(3) If we let <math>\sigma=\ker i</math> in the above formula of <math>I(\sqcap\!\!\sqcap\ldots\sqcap)</math>, we obtain: | |||
<math display="block"> | |||
I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j]\right\} | |||
</math> | |||
Now since there are <math>\frac{N!}{(N-|\sigma|)!}</math> choices for the multi-index <math>i</math>, and then <math>\binom{p}{\sigma}</math> choices for the multi-index <math>j</math>, this gives the result. | |||
(4) If we set <math>r=|\sigma|</math>, the formula in (3) becomes: | |||
<math display="block"> | |||
I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^{p}\frac{N!}{(N-r)!}\sum_{\sigma\in P(p),|\sigma|=r}\binom{p}{\sigma} | |||
</math> | |||
Now since there are exactly <math>\binom{p}{b_1,\ldots,b_r}</math> ordered sequences of disjoint blocks having lengths <math>b_1,\ldots,b_r</math>, and since each partition <math>\sigma\in P(p)</math> with <math>|\sigma|=r</math> arises from exactly <math>r!</math> such sequences, the sum on the right is given by: | |||
<math display="block"> | |||
\sum_{\sigma\in P(p),|\sigma|=r}\binom{p}{\sigma}=\frac{1}{r!}\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2 | |||
</math> | |||
Thus, we are led to the conclusion in the statement.}} | |||
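As an illustration, the count in (1) can be verified by brute force, for small <math>N,p</math>. The sketch below, with names of our own choosing, checks it against the explicit polynomials <math>N(2N-1)</math> and <math>N(6N^2-9N+4)</math>, which follow from item (4):

```python
from itertools import product

def I_oneblock(N, p):
    # number of pairs of multi-indices i, j in {1,...,N}^p with [i] = [j],
    # i.e., having the same entries with the same multiplicities
    return sum(1 for i in product(range(N), repeat=p)
                 for j in product(range(N), repeat=p)
                 if sorted(i) == sorted(j))

for N in range(1, 5):
    assert I_oneblock(N, 2) == N*(2*N - 1)
    assert I_oneblock(N, 3) == N*(6*N**2 - 9*N + 4)
```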
In general, the integrals <math>I(\pi)</math> can be estimated as follows: | |||
{{proofcard|Proposition|proposition-9|Let <math>H\in M_N(\mathbb T)</math>, having its rows pairwise orthogonal. | |||
<ul><li> <math>I(|\,|\,\ldots|)=N^p</math>. | |||
</li> | |||
<li> <math>I(|\,|\,\ldots|\ \pi)=N^aI(\pi)</math>, for any <math>\pi\in P(p-a)</math>. | |||
</li> | |||
<li> <math>|I(\pi)|\lesssim p!N^p</math>, for any <math>\pi\in P(p)</math>. | |||
</li> | |||
</ul> | |||
|This is something elementary, as follows: | |||
(1) Since the rows of <math>H</math> are pairwise orthogonal, we have: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I(|\,|\ldots|) | |||
&=&\sum_{[i]=[j]}\prod_{r=1}^p\delta_{i_r,j_r}\\ | |||
&=&\sum_{[i]=[j]}\delta_{ij}\\ | |||
&=&\sum_i1\\ | |||
&=&N^p | |||
\end{eqnarray*} | |||
</math> | |||
(2) This follows by the same computation as the above one for (1). | |||
(3) We have indeed the following estimate: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
|I(\pi)| | |||
&\leq&\sum_{[i]=[j]}\prod_{\beta\in\pi}1\\ | |||
&=&\sum_{[i]=[j]}1\\ | |||
&=&\#\left\{i,j\in\{1,\ldots,N\}^p\Big|[i]=[j]\right\}\\ | |||
&\simeq&p!N^p | |||
\end{eqnarray*} | |||
</math> | |||
Thus we have obtained the formula in the statement, and we are done.}} | |||
We have now all needed ingredients for a universality result: | |||
{{proofcard|Theorem|theorem-6|The glow of a complex Hadamard matrix <math>H\in M_N(\mathbb T)</math> is given by: | |||
<math display="block"> | |||
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-\binom{p}{2}N^{-1}+O(N^{-2}) | |||
</math> | |||
In particular, <math>E/N</math> becomes complex Gaussian in the <math>N\to\infty</math> limit. | |||
|We use the moment formula in Theorem 11.12, namely: | |||
<math display="block"> | |||
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p}=\sum_{\pi\in P(p)}K(\pi)N^{|\pi|}I(\pi) | |||
</math> | |||
By using Proposition 11.15 (3), we conclude that only the <math>p</math>-block and <math>(p-1)</math>-block partitions contribute at order 2, so: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} | |||
&=&K(|\,|\ldots|)N^pI(|\,|\ldots|)\\ | |||
&+&\binom{p}{2}K(\sqcap|\ldots|)N^{p-1}I(\sqcap|\ldots|)\\ | |||
&+&O(N^{2p-2}) | |||
\end{eqnarray*} | |||
</math> | |||
Now by dividing by <math>N^{2p}</math> and then by using the various formulae in Proposition 11.13, Proposition 11.14 and Proposition 11.15, we obtain, as claimed: | |||
<math display="block"> | |||
\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p} | |||
=p! | |||
-\binom{p}{2}\frac{p!}{2}\cdot\frac{2N-1}{N^2} | |||
+O(N^{-2}) | |||
</math> | |||
Finally, since the law of <math>E</math> is invariant under centered rotations in the complex plane, this moment formula gives as well the last assertion.}} | |||
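This asymptotic behavior can be illustrated by a small Monte Carlo experiment, say for the Fourier matrix <math>F_N</math>. The following Python sketch, with parameters chosen for speed rather than accuracy, estimates the normalized moment at <math>p=2</math>:

```python
import cmath
import random
from math import pi, factorial, comb

random.seed(1234)
N, p, samples = 16, 2, 4000
# Fourier matrix F_N, with entries w^{ik}, w = exp(2*pi*i/N)
F = [[cmath.exp(2j*pi*i*k/N) for k in range(N)] for i in range(N)]

total = 0.0
for _ in range(samples):
    # random switching vectors a, b on the unit torus
    a = [cmath.exp(2j*pi*random.random()) for _ in range(N)]
    b = [cmath.exp(2j*pi*random.random()) for _ in range(N)]
    # excess of the switched matrix (a_i H_ij b_j)
    E = sum(a[i]*F[i][k]*b[k] for k in range(N) for i in range(N))
    total += abs(E/N)**(2*p)

moment = total / (samples * factorial(p))
# prediction: 1 - binom(p,2)/N + O(1/N^2), here 1 - 1/16
assert abs(moment - (1 - comb(p, 2)/N)) < 0.15
```

Of course this only probes the first correction term; the Monte Carlo error at these sample sizes is of the same order of magnitude as the <math>O(N^{-2})</math> terms.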
Summarizing, the complex glow of the complex Hadamard matrices appears to have similar properties to the real glow of the real Hadamard matrices. | |||
===11c. Fourier matrices=== | |||
Let us study now the glow of the Fourier matrices, <math>F=F_G</math>. We use the following standard formulae, which all come from definitions: | |||
<math display="block"> | |||
F_{ix}F_{iy}=F_{i,x+y}\quad,\quad | |||
\overline{F}_{ix}=F_{i,-x}\quad,\quad | |||
\sum_xF_{ix}=N\delta_{i0} | |||
</math> | |||
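For concreteness, these identities can be checked directly for the cyclic group <math>G=\mathbb Z_N</math>, where <math>F_{ix}=w^{ix}</math> with <math>w=e^{2\pi i/N}</math>. The sketch below does this numerically:

```python
import cmath
from math import pi

N = 6
# Fourier matrix of Z_N, with F[i][x] = w^{ix}
F = [[cmath.exp(2j*pi*i*x/N) for x in range(N)] for i in range(N)]

for i in range(N):
    for x in range(N):
        for y in range(N):
            # F_{ix} F_{iy} = F_{i,x+y}
            assert abs(F[i][x]*F[i][y] - F[i][(x+y) % N]) < 1e-9
        # conj(F_{ix}) = F_{i,-x}
        assert abs(F[i][x].conjugate() - F[i][(-x) % N]) < 1e-9
    # sum_x F_{ix} = N delta_{i0}
    assert abs(sum(F[i]) - (N if i == 0 else 0)) < 1e-9
```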
We first have the following result: | |||
{{proofcard|Proposition|proposition-10|For a Fourier matrix <math>F_G</math> we have | |||
<math display="block"> | |||
I(\pi)=\#\left\{i,j\Big|[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\} | |||
</math> | |||
with all the indices, and with the sums at right, taken inside <math>G</math>. | |||
|The basic components of the integrals <math>I(\pi)</math> are given by: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
\frac{1}{N}\left\langle\prod_{r\in\beta}F_{i_r},\prod_{r\in\beta}F_{j_r}\right\rangle | |||
&=&\frac{1}{N}\left\langle F_{\sum_{r\in\beta}i_r},F_{\sum_{r\in\beta}j_r}\right\rangle\\ | |||
&=&\delta_{\sum_{r\in\beta}i_r,\sum_{r\in\beta}j_r} | |||
\end{eqnarray*} | |||
</math> | |||
But this gives the formula in the statement, and we are done.}} | |||
We have the following interpretation of the above integrals: | |||
{{proofcard|Proposition|proposition-11|For any partition <math>\pi</math> we have the formula | |||
<math display="block"> | |||
I(\pi)=\int_{\mathbb T^N}\prod_{\beta\in\pi}\left(\frac{1}{N^2}\sum_{ij}|H_{ij}|^{2|\beta|}\right)da | |||
</math> | |||
where <math>H=F^*AF</math>, with <math>F=F_G</math> and <math>A=diag(a_0,\ldots,a_{N-1})</math>. | |||
|We have the following computation: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
H=F^*AF | |||
&\implies&|H_{xy}|^2=\sum_{ij}\frac{F_{iy}F_{jx}}{F_{ix}F_{jy}}\cdot\frac{a_i}{a_j}\\ | |||
&\implies&|H_{xy}|^{2p}=\sum_{ij}\frac{F_{j_1x}\ldots F_{j_px}}{F_{i_1x}\ldots F_{i_px}}\cdot\frac{F_{i_1y}\ldots F_{i_py}}{F_{j_1y}\ldots F_{j_py}}\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}}\\ | |||
&\implies&\sum_{xy}|H_{xy}|^{2p}=\sum_{ij}\left|\left\langle F_{i_1}\ldots F_{i_p},F_{j_1}\ldots F_{j_p}\right\rangle\right|^2\cdot\frac{a_{i_1}\ldots a_{i_p}}{a_{j_1}\ldots a_{j_p}} | |||
\end{eqnarray*} | |||
</math> | |||
But this gives the formula in the statement, and we are done.}} | |||
We must estimate now the quantities <math>I(\pi)</math>. We first have the following result: | |||
{{proofcard|Proposition|proposition-12|For <math>F_G</math> we have the estimate | |||
<math display="block"> | |||
I(\pi)=b_1!\ldots b_{|\pi|}!N^p+O(N^{p-1}) | |||
</math> | |||
where the numbers <math>b_1,\ldots,b_{|\pi|}</math> with <math>b_1+\ldots+b_{|\pi|}=p</math> are the block lengths of <math>\pi</math>. | |||
|With <math>\sigma=\ker i</math> we obtain: | |||
<math display="block"> | |||
I(\pi)=\sum_{\sigma\in P(p)}\#\left\{i,j\Big|\ker i=\sigma,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\} | |||
</math> | |||
The number of choices for <math>i</math> satisfying <math>\ker i=\sigma</math> is: | |||
<math display="block"> | |||
\frac{N!}{(N-|\sigma|)!}\simeq N^{|\sigma|} | |||
</math> | |||
Then, the number of choices for <math>j</math> satisfying <math>[i]=[j]</math> is: | |||
<math display="block"> | |||
\binom{p}{\sigma}=O(1) | |||
</math> | |||
We conclude that the main contribution comes from the following partition: | |||
<math display="block"> | |||
\sigma=|\,|\ldots| | |||
</math> | |||
Thus, we have the following formula: | |||
<math display="block"> | |||
I(\pi)=\#\left\{i,j\Big|\ker i=|\,|\ldots|,[i]=[j],\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r,\forall\beta\in\pi\right\}+O(N^{p-1}) | |||
</math> | |||
Now <math>\ker i=|\,|\ldots|</math> tells us that <math>i</math> must have distinct entries, and there are <math>\frac{N!}{(N-p)!}\simeq N^p</math> choices for such multi-indices <math>i</math>. Regarding the indices <math>j</math>, the main contribution comes from those obtained from <math>i</math> by permuting the entries over the blocks of <math>\pi</math>, and there are <math>b_1!\ldots b_{|\pi|}!</math> choices here. Thus, we are led to the conclusion in the statement.}} | |||
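As a quick check of this leading-order estimate, consider <math>G=\mathbb Z_N</math> and the partition <math>\pi=\{1,2\}\{3\}\in P(3)</math>, with block lengths <math>2,1</math>. The brute-force count below, a sketch with notations of our own choosing, stays within <math>O(N^2)</math> of the predicted leading term <math>2!\cdot1!\cdot N^3</math>:

```python
from itertools import product

def I_pi(N):
    # count over Z_N of: [i] = [j], i_1+i_2 = j_1+j_2, i_3 = j_3 (mod N)
    return sum(1 for i in product(range(N), repeat=3)
                 for j in product(range(N), repeat=3)
                 if sorted(i) == sorted(j)
                 and (i[0]+i[1]) % N == (j[0]+j[1]) % N
                 and i[2] % N == j[2] % N)

for N in range(1, 7):
    # leading term b_1! b_2! N^p = 2N^3, with an O(N^2) correction
    assert abs(I_pi(N) - 2*N**3) <= 2*N**2
```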
At the second order now, the estimate is as follows: | |||
{{proofcard|Proposition|proposition-13|For <math>F_G</math> we have the formula | |||
<math display="block"> | |||
\frac{I(\pi)}{b_1!\ldots b_s!N^p}=1+\left(\sum_{i < j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}-\frac{1}{2}\sum_i\binom{b_i}{2}\right)N^{-1}+O(N^{-2}) | |||
</math> | |||
where <math>b_1,\ldots,b_s</math> are the block lengths of <math>\pi\in P(p)</math>. | |||
|Let us define the “non-arithmetic” part of <math>I(\pi)</math> as follows: | |||
<math display="block"> | |||
I^\circ(\pi)=\#\left\{i,j\Big|[i_r|r\in\beta]=[j_r|r\in\beta],\forall\beta\in\pi\right\} | |||
</math> | |||
We then have the following formula: | |||
<math display="block"> | |||
I^\circ(\pi)=\prod_{\beta\in\pi}\#\left\{i,j\in\{1,\ldots,N\}^{|\beta|}\Big|[i]=[j]\right\}=\prod_{\beta\in\pi}I(\beta) | |||
</math> | |||
Also, Proposition 11.19 shows that we have the following estimate: | |||
<math display="block"> | |||
I(\pi)=I^\circ(\pi)+O(N^{p-1}) | |||
</math> | |||
Our claim now is that we have the following formula: | |||
<math display="block"> | |||
\frac{I(\pi)-I^\circ(\pi)}{b_1!\ldots b_s!N^p}=\sum_{i < j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{-1}+O(N^{-2}) | |||
</math> | |||
Indeed, according to Proposition 11.19, we have a formula of the following type: | |||
<math display="block"> | |||
I(\pi)=I^\circ(\pi)+I^1(\pi)+O(N^{p-2}) | |||
</math> | |||
More precisely, this formula holds indeed, with <math>I^1(\pi)</math> coming from <math>i_1,\ldots,i_p</math> distinct, <math>[i]=[j]</math>, and with one constraint of type: | |||
<math display="block"> | |||
\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r\quad,\quad | |||
[i_r|r\in\beta]\neq[j_r|r\in\beta] | |||
</math> | |||
Now observe that for a two-block partition <math>\pi=(a,b)</math> this constraint is implemented, up to permutations which leave invariant the blocks of <math>\pi</math>, as follows: | |||
<math display="block"> | |||
\begin{matrix} | |||
i_1\ldots i_c&k_1\ldots k_{a-c}&&j_1\ldots j_c&l_1\ldots l_{b-c}\\ | |||
\underbrace{j_1\ldots j_c}_c&\underbrace{k_1\ldots k_{a-c}}_{a-c}&&\underbrace{i_1\ldots i_c}_c&\underbrace{l_1\ldots l_{b-c}}_{b-c} | |||
\end{matrix} | |||
</math> | |||
Let us compute now <math>I^1(a,b)</math>. We cannot have <math>c=0,1</math>, and once <math>c\geq2</math> is given, we have <math>\binom{a}{c},\binom{b}{c}</math> choices for the positions of the <math>i,j</math> variables in the upper row, then <math>N^{p-1}+O(N^{p-2})</math> choices for the variables in the upper row, and then finally we have <math>a!b!</math> permutations which can produce the lower row. We therefore obtain: | |||
<math display="block"> | |||
I^1(a,b)=a!b!\sum_{c\geq2}\binom{a}{c}\binom{b}{c}N^{p-1}+O(N^{p-2}) | |||
</math> | |||
In the general case now, a similar discussion applies. Indeed, the constraint of type <math>\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r</math> with <math>[i_r|r\in\beta]\neq[j_r|r\in\beta]</math> cannot affect <math>\leq1</math> blocks, because we are not in the non-arithmetic case, and cannot affect either <math>\geq3</math> blocks, because affecting <math>\geq3</math> blocks would require <math>\geq2</math> constraints. Thus this condition affects exactly <math>2</math> blocks, and if we let <math>i < j</math> be the indices in <math>\{1,\ldots,s\}</math> corresponding to these 2 blocks, we obtain: | |||
<math display="block"> | |||
I^1(\pi)=b_1!\ldots b_s!\sum_{i < j}\sum_{c\geq2}\binom{b_i}{c}\binom{b_j}{c}N^{p-1}+O(N^{p-2}) | |||
</math> | |||
But this proves the above claim. Let us estimate now <math>I(\sqcap\!\!\sqcap\ldots\sqcap)</math>. We have: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
&&I(\sqcap\!\!\sqcap\ldots\sqcap)\\ | |||
&=&p!\frac{N!}{(N-p)!}+\binom{p}{2}\frac{p!}{2}\cdot\frac{N!}{(N-p+1)!}+O(N^{p-2})\\ | |||
&=&p!N^p\left(1-\binom{p}{2}N^{-1}+O(N^{-2})\right)+\binom{p}{2}\frac{p!}{2}N^{p-1}+O(N^{p-2})\\ | |||
&=&p!N^p\left(1-\frac{1}{2}\binom{p}{2}N^{-1}+O(N^{-2})\right) | |||
\end{eqnarray*} | |||
</math> | |||
Now recall that we have: | |||
<math display="block"> | |||
I^\circ(\pi)=\prod_{\beta\in\pi}I(\beta) | |||
</math> | |||
We therefore obtain: | |||
<math display="block"> | |||
I^\circ(\pi)=b_1!\ldots b_s!N^p\left(1-\frac{1}{2}\sum_i\binom{b_i}{2}N^{-1}+O(N^{-2})\right) | |||
</math> | |||
By plugging this quantity into the above estimate, we obtain the result.}} | |||
In order to estimate the glow, we will need the explicit formula for <math>I(\sqcap\sqcap)</math>: | |||
{{proofcard|Proposition|proposition-14|For <math>F_G</math> with <math>G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}</math> we have the formula | |||
<math display="block"> | |||
I(\sqcap\sqcap)=N(4N^3-11N+2^e+7) | |||
</math> | |||
where <math>e\in\{0,1,\ldots,k\}</math> is the number of even numbers among <math>N_1,\ldots,N_k</math>. | |||
|The conditions defining the quantities <math>I(\pi)</math> are as follows: | |||
<math display="block"> | |||
\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r | |||
</math> | |||
We use the fact that, when dealing with these conditions, one can always erase some of the variables <math>i_r,j_r</math>, so as to reduce to the “purely arithmetic” case, namely: | |||
<math display="block"> | |||
\{i_r|r\in\beta\}\cap\{j_r|r\in\beta\}=\emptyset | |||
</math> | |||
We deduce from this that we have: | |||
<math display="block"> | |||
I(\sqcap\sqcap)=I^\circ(\sqcap\sqcap)+I^{ari}(\sqcap\sqcap) | |||
</math> | |||
Let us compute now <math>I^{ari}(\sqcap\sqcap)</math>. There are 3 contributions to this quantity, namely: | |||
(1) <u>Case <math>(^{iijj}_{jjii})</math></u>, with <math>i\neq j</math>, <math>2i=2j</math>. Since <math>2(i_1,\ldots,i_k)=2(j_1,\ldots,j_k)</math> corresponds to the collection of conditions <math>2i_r=2j_r</math>, inside <math>\mathbb Z_{N_r}</math>, which each have 1 or 2 solutions, depending on whether <math>N_r</math> is odd or even, the contribution here is: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}_1(\sqcap\sqcap) | |||
&=&\#\{i\neq j|2i=2j\}\\ | |||
&=&\#\{i,j|2i=2j\}-\#\{i,j|i=j\}\\ | |||
&=&2^eN-N\\ | |||
&=&(2^e-1)N | |||
\end{eqnarray*} | |||
</math> | |||
(2) <u>Case <math>(^{iijk}_{jkii})</math></u>, with <math>i,j,k</math> distinct, <math>2i=j+k</math>. The contribution here is: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}_2(\sqcap\sqcap) | |||
&=&4\#\{i,j,k\ {\rm distinct}|2i=j+k\}\\ | |||
&=&4\#\{i\neq j|2i-j\neq i,j\}\\ | |||
&=&4\#\{i\neq j|2i\neq 2j\}\\ | |||
&=&4(\#\{i,j|i\neq j\}-\#\{i\neq j|2i=2j\})\\ | |||
&=&4(N(N-1)-(2^e-1)N)\\ | |||
&=&4N(N-2^e) | |||
\end{eqnarray*} | |||
</math> | |||
(3) <u>Case <math>(^{ijkl}_{klij})</math></u>, with <math>i,j,k,l</math> distinct, <math>i+j=k+l</math>. The contribution here is: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}_3(\sqcap\sqcap) | |||
&=&4\#\{i,j,k,l\ {\rm distinct}|i+j=k+l\}\\ | |||
&=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq i,j,k\}\\ | |||
&=&4\#\{i,j,k\ {\rm distinct}|i+j-k\neq k\}\\ | |||
&=&4\#\{i,j,k\ {\rm distinct}|i\neq 2k-j\} | |||
\end{eqnarray*} | |||
</math> | |||
We can split this quantity over two cases, <math>2j\neq 2k</math> and <math>2j=2k</math>, and we obtain: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}_3(\sqcap\sqcap) | |||
&=&4(\#\{i,j,k\ {\rm distinct}|2j\neq 2k,i\neq 2k-j\}\\ | |||
&&+\#\{i,j,k\ {\rm distinct}|2j=2k,i\neq 2k-j\}) | |||
\end{eqnarray*} | |||
</math> | |||
The point now is that in the first case, <math>2j\neq 2k</math>, the numbers <math>j,k,2k-j</math> are distinct, while in the second case, <math>2j=2k</math>, we simply have <math>2k-j=j</math>. Thus, we obtain: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}_3(\sqcap\sqcap) | |||
&=&4\left(\sum_{j\neq k,2j\neq 2k}\#\{i|i\neq j,k,2k-j\}+\sum_{j\neq k,2j=2k}\#\{i|i\neq j,k\}\right)\\ | |||
&=&4(N(N-2^e)(N-3)+N(2^e-1)(N-2))\\ | |||
&=&4N(N(N-3)-2^e(N-3)+2^e(N-2)-(N-2))\\ | |||
&=&4N(N^2-4N+2^e+2) | |||
\end{eqnarray*} | |||
</math> | |||
We can now compute the arithmetic part. This is given by: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I^{ari}(\sqcap\sqcap) | |||
&=&(2^e-1)N+4N(N-2^e)+4N(N^2-4N+2^e+2)\\ | |||
&=&N(2^e-1+4(N-2^e)+4(N^2-4N+2^e+2))\\ | |||
&=&N(4N^2-12N+2^e+7) | |||
\end{eqnarray*} | |||
</math> | |||
Thus the integral to be computed is given by: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
I(\sqcap\sqcap) | |||
&=&N^2(2N-1)^2+N(4N^2-12N+2^e+7)\\ | |||
&=&N(4N^3-4N^2+N+4N^2-12N+2^e+7)\\ | |||
&=&N(4N^3-11N+2^e+7) | |||
\end{eqnarray*} | |||
</math> | |||
Thus we have reached the formula in the statement, and we are done.}} | |||
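Since the above computation is somewhat long, a numerical verification is welcome. For <math>G=\mathbb Z_N</math> the formula can be checked by brute force, as in the following sketch:

```python
from itertools import product

def I_two_blocks(N):
    # count over Z_N of: [i] = [j], with equal block sums (mod N)
    # for the two blocks {1,2} and {3,4}
    return sum(1 for i in product(range(N), repeat=4)
                 for j in product(range(N), repeat=4)
                 if sorted(i) == sorted(j)
                 and (i[0]+i[1]) % N == (j[0]+j[1]) % N
                 and (i[2]+i[3]) % N == (j[2]+j[3]) % N)

for N in range(1, 5):
    e = 1 if N % 2 == 0 else 0   # number of even N_i, here with k = 1
    assert I_two_blocks(N) == N*(4*N**3 - 11*N + 2**e + 7)
```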
===11d. Universality=== | |||
We have the following asymptotic result: | |||
{{proofcard|Theorem|theorem-7|The glow of <math>F_G</math>, with <math>|G|=N</math>, is given by | |||
<math display="block"> | |||
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+O(N^{-4}) | |||
</math> | |||
with the coefficients being as follows: | |||
<math display="block"> | |||
K_1=\binom{p}{2}\quad,\quad | |||
K_2=\binom{p}{2}\frac{3p^2+p-8}{12}\quad,\quad | |||
K_3=\binom{p}{3}\frac{p^3+4p^2+p-18}{8} | |||
</math> | |||
Thus, the rescaled complex glow is asymptotically complex Gaussian, | |||
<math display="block"> | |||
\frac{E}{N}\sim\mathcal C | |||
</math> | |||
and we have in fact universality at least up to order <math>3</math>. | |||
|We use the following quantities: | |||
<math display="block"> | |||
\widetilde{K}(\pi)=\frac{K(\pi)}{p!}\quad,\quad | |||
\widetilde{I}(\pi)=\frac{I(\pi)}{N^p} | |||
</math> | |||
These are subject to the following formulae: | |||
<math display="block"> | |||
\widetilde{K}(\pi|\ldots|)=\widetilde{K}(\pi)\quad,\quad | |||
\widetilde{I}(\pi|\ldots|)=\widetilde{I}(\pi) | |||
</math> | |||
Consider as well the following quantities: | |||
<math display="block"> | |||
J(\sigma)=\binom{p}{\sigma}\widetilde{K}(\sigma)\widetilde{I}(\sigma) | |||
</math> | |||
In terms of these quantities, we have: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} | |||
&=&J(\emptyset)\\ | |||
&+&N^{-1}J(\sqcap)\\ | |||
&+&N^{-2}\left(J(\sqcap\!\sqcap)+J(\sqcap\sqcap)\right)\\ | |||
&+&N^{-3}\left(J(\sqcap\!\!\sqcap\!\!\sqcap)+J(\sqcap\!\!\sqcap\sqcap)+J(\sqcap\sqcap\sqcap)\right)\\ | |||
&+&O(N^{-4}) | |||
\end{eqnarray*} | |||
</math> | |||
We have the following formulae: | |||
<math display="block"> | |||
\widetilde{K}_0=1 | |||
</math> | |||
<math display="block"> | |||
\widetilde{K}_1=1 | |||
</math> | |||
<math display="block"> | |||
\widetilde{K}_2=\frac{1}{2}-1=-\frac{1}{2} | |||
</math> | |||
<math display="block"> | |||
\widetilde{K}_3=\frac{1}{6}-\frac{3}{2}+2=\frac{2}{3} | |||
</math> | |||
<math display="block"> | |||
\widetilde{K}_4=\frac{1}{24}-\frac{4}{6}-\frac{3}{4}+\frac{12}{2}-6=-\frac{11}{8} | |||
</math> | |||
Regarding now the numbers <math>C_{pr}</math> in Proposition 11.19, these are given by: | |||
<math display="block"> | |||
C_{p1}=1 | |||
</math> | |||
<math display="block"> | |||
C_{p2}=\frac{1}{2}\binom{2p}{p}-1 | |||
</math> | |||
<math display="block"> | |||
\vdots | |||
</math> | |||
<math display="block"> | |||
C_{p,p-1}=\frac{p!}{2}\binom{p}{2} | |||
</math> | |||
<math display="block"> | |||
C_{pp}=p! | |||
</math> | |||
We deduce that we have the following formulae: | |||
<math display="block"> | |||
I(|)=N | |||
</math> | |||
<math display="block"> | |||
I(\sqcap)=N(2N-1) | |||
</math> | |||
<math display="block"> | |||
I(\sqcap\!\sqcap)=N(6N^2-9N+4) | |||
</math> | |||
<math display="block"> | |||
I(\sqcap\!\!\sqcap\!\!\sqcap)=N(24N^3-72N^2+82N-33) | |||
</math> | |||
By using Proposition 11.20 and Proposition 11.21, we obtain the following formula: | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}|E|^{2p} | |||
&=&1-\frac{1}{2}\binom{p}{2}(2N^{-1}-N^{-2})+\frac{2}{3}\binom{p}{3}(6N^{-2}-9N^{-3})\\ | |||
&+&3\binom{p}{4}N^{-2}-33\binom{p}{4}N^{-3}-40\binom{p}{5}N^{-3}\\ | |||
&-&15\binom{p}{6}N^{-3}+O(N^{-4}) | |||
\end{eqnarray*} | |||
</math> | |||
But this gives the formulae of <math>K_1,K_2,K_3</math> in the statement, and we are done.}} | |||
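The final collection of terms is easy to get wrong, so let us record a small check: the <math>N^{-2}</math> and <math>N^{-3}</math> coefficients in the above expansion must reproduce the closed forms of <math>K_2,K_3</math> from the statement. In Python, working with integers so as to avoid rounding, this can be verified as follows:

```python
from math import comb

for p in range(2, 20):
    # N^{-2} coefficient: (1/2)C(p,2) + 4C(p,3) + 3C(p,4), scaled by 24
    K2_times_24 = 12*comb(p, 2) + 96*comb(p, 3) + 72*comb(p, 4)
    # N^{-3} coefficient: 6C(p,3) + 33C(p,4) + 40C(p,5) + 15C(p,6), scaled by 8
    K3_times_8 = 8*(6*comb(p, 3) + 33*comb(p, 4) + 40*comb(p, 5) + 15*comb(p, 6))
    # closed forms from the statement, scaled by the same factors
    assert K2_times_24 == 2*comb(p, 2)*(3*p**2 + p - 8)
    assert K3_times_8 == comb(p, 3)*(p**3 + 4*p**2 + p - 18)
```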
It is possible to compute the next term as well, the result being as follows: | |||
{{proofcard|Theorem|theorem-8|Let <math>G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}</math> be a finite abelian group, and set: | |||
<math display="block"> | |||
N=N_1\ldots N_k | |||
</math> | |||
Then the glow of the associated Fourier matrix <math>F_G</math> is given by | |||
<math display="block"> | |||
\frac{1}{p!}\int_{\mathbb T^N\times\mathbb T^N}\left|\frac{E}{N}\right|^{2p}=1-K_1N^{-1}+K_2N^{-2}-K_3N^{-3}+K_4N^{-4}+O(N^{-5}) | |||
</math> | |||
where the quantities <math>K_1,K_2,K_3,K_4</math> are given by | |||
<math display="block"> | |||
\begin{eqnarray*} | |||
K_1&=&\binom{p}{2}\\ | |||
K_2&=&\binom{p}{2}\frac{3p^2+p-8}{12}\\ | |||
K_3&=&\binom{p}{3}\frac{p^3+4p^2+p-18}{8}\\ | |||
K_4&=&\frac{8}{3}\binom{p}{3}+\frac{3}{4}\left(121+\frac{2^e}{N}\right)\binom{p}{4}+416\binom{p}{5}+\frac{2915}{2}\binom{p}{6}+40\binom{p}{7}+105\binom{p}{8} | |||
\end{eqnarray*} | |||
</math> | |||
where <math>e\in\{0,1,\ldots,k\}</math> is the number of even numbers among <math>N_1,\ldots,N_k</math>. | |||
|This is something that we already know, up to order 3, and the next coefficient <math>K_4</math> can be computed in a similar way, based on results that we already have.}} | |||
The passage to Theorem 11.23 is quite interesting, because it shows that the glow of the Fourier matrices <math>F_G</math> is not polynomial in <math>N=|G|</math>. When restricting the attention to the usual Fourier matrices <math>F_N</math>, the glow up to order 4 is polynomial separately for <math>N</math> odd and for <math>N</math> even, but it is not clear what happens at higher orders. An interesting question here is that of computing the complex glow of the Walsh matrices. Indeed, for the Walsh matrices the integrals <math>I(\pi)</math>, and hence the glow itself, might be polynomial in <math>N</math>. | |||
==General references== | |||
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Invitation to Hadamard matrices|eprint=1910.06911|class=math.CO}} | |||
==References== | |||
{{reflist}} |
Latest revision as of 23:14, 21 April 2025
11a. Basic results
We discuss here the computation of the glow of the complex Hadamard matrices, as a continuation of the material from chapter 2, where we discussed the basics of the glow in the real case, and as a continuation as well of the material from chapter 10. As a first motivation for all this, we have the Gale-Berlekamp game [1], [2]. Another motivation comes from the questions regarding the bistochastic matrices, in relation with the Ideal-Wolf theorem [3], explained in chapter 10. Finally, we have the question of connecting the defect, and other invariants of the Hadamard matrices, to the glow.
Let us begin by reviewing the few theoretical things that we know about the glow, from chapter 10. The main results there can be summarized as follows:
The glow of [math]H\in M_N(\mathbb C)[/math], which is the law [math]\mu\in\mathcal P(\mathbb C)[/math] of the excess
- [math]\mu=\varepsilon\times\mu^+[/math], where [math]\mu^+=law(|E|)[/math].
- [math]\mu[/math] is invariant under rotations.
- [math]H\in\sqrt{N}U_N[/math] implies [math]supp(\mu)\subset N\sqrt{N}\,\mathbb D[/math].
- [math]H\in\sqrt{N}U_N[/math] implies as well [math]N\sqrt{N}\,\mathbb T\subset supp(\mu)[/math].
We already know all this from chapter 10, the idea being as follows:
(1) This follows indeed by using [math]H\to zH[/math] with [math]|z|=1[/math].
(2) This follows from (1), the convolution with [math]\varepsilon[/math] bringing the invariance.
(3) This follows indeed from Cauchy-Schwarz.
(4) This is something highly non-trivial, coming from [3].
In what follows we will be mainly interested in the Hadamard matrix case, but since the computations here are quite difficult, let us begin our study with other matrices. It is convenient to normalize our matrices, as to make them a bit similar to the complex Hadamard ones. To be more precise, consider the [math]2[/math]-norm on the vector space of the complex [math]N\times N[/math] matrices, which is given by the following formula:
We will assume in what follows, by multiplying our matrix [math]H\in M_N(\mathbb C)[/math] by a suitable scalar, that this norm takes the same value as for the Hadamard matrices, namely:
We know from chapter 2 that in the real case, the real glow is asymptotically Gaussian. In the complex matrix case, we will reach to the conclusion that the glow is asymptotically complex Gaussian, with the complex Gaussian distribution being as follows:
The complex Gaussian distribution [math]\mathcal C[/math] is the law of the variable
This is standard probability theory, with the main result, namely the moment formula in the statement, coming from some routine computations. For more on all this, we refer to any standard probability book, such as Durrett [4].
Finally, we use in what follows the symbol [math]\sim[/math] to denote an equality of distributions. With these conventions, we have the following result, to start with:
We have the following computations:
- For the rescaled identity [math]\widetilde{I}_N=\sqrt{N}I_N[/math] we have
[[math]] E\sim\sqrt{N}(q_1+\ldots +q_N) [[/math]]with [math]q\in\mathbb T^N[/math] random. With [math]N\to\infty[/math] we have [math]E/N\sim\mathcal C[/math].
- For the flat matrix [math]J_N=(1)_{ij}[/math] we have
[[math]] E\sim(a_1+\ldots+a_N)(b_1+\ldots+b_N) [[/math]]with [math](a,b)\in\mathbb T^N\times\mathbb T^N[/math] random. With [math]N\to\infty[/math] we have [math]E/N\sim\mathcal C\times\mathcal C[/math].
We use Theorem 11.1, and the moment method:
(1) Here we have [math]E=\sqrt{N}\sum_{i}a_ib_i[/math], with [math]a,b\in\mathbb T^N[/math] random. With [math]q_i=a_ib_i[/math] this gives the first assertion. Let us estimate now the moments of [math]|E|^2[/math]. We have:
Here, and in what follows, the sets between brackets are by defintion sets with repetition, and the middle estimate comes from the fact that, with [math]N\to\infty[/math], only the multi-indices [math]i=(i_1,\ldots,i_p)[/math] having distinct entries contribute. But this gives the result.
(2) Here we have the following formula, which gives the first assertion:
Now since [math]a,b\in\mathbb T^N[/math] are independent, so are the quantities [math]\sum_ia_i,\sum_jb_j[/math], so we have:
Here we have used the estimate in the proof of (1), and this gives the result.
As a conclusion, the glow is intimately related to the basic hypertoral law, namely the law of the variable [math]q_1+\ldots+q_N[/math], with [math]q\in\mathbb T^N[/math] being random. Observe that at [math]N=1[/math] this hypertoral law is the Dirac mass [math]\delta_1[/math], and that at [math]N=2[/math] we obtain the following law:
In general, the law of [math]\sum q_i[/math] is known to be related to the Pòlya random walk [5]. Also, as explained for instance in chapter 9, the moments of this law are:
As a second conclusion, even under the normalization [math]||H||_2=N[/math], the glow can behave quite differently in the [math]N\to\infty[/math] limit. So, let us restrict now the attention to the complex Hadamard matrices. At [math]N=2[/math] we only have [math]F_2[/math] to be invesigated, the result being:
For the Fourier matrix [math]F_2[/math] we have
The matrix that we interested in, namely the Fourier matrix [math]F_2[/math] altered by a vertical switching vector [math](a,b)[/math] and an horizontal switching vector [math](c,d)[/math], is:
With this notation, we have the following formula:
For proving that the variables [math]\alpha=\frac{ad}{bc}[/math] and [math]\beta=\frac{bd}{ac}[/math] are independent, we can use the moment method, as follows:
Thus [math]\alpha,\beta[/math] are indeed independent, and we are done.
It is possible of course to derive from this some more concrete formulae, but let us look instead at the case [math]N=3[/math]. Here the matrix that we are interested in is:
Thus, we would like to compute the law of the following quantity:
The problem is that when trying to compute [math]|E|^2[/math], the terms won't cancel much. More precisely, we have a formula of the following type:
Here the quantities [math]C_0,C_1,C_2[/math] are as follows:
In short, all this obviously leads nowhere, and the exact study stops at [math]F_2[/math]. In general now, one idea is that of using Bernoulli-type variables coming from the row sums, a bit as we did in chapter 2 in the real case, the result here being as follows:
The glow of [math]H\in M_N(\mathbb C)[/math] is given by the formula
This is clear indeed from the following formula:
To be more precise, when the vector [math]a\in\mathbb T^N[/math] is assumed to be fixed, this variable [math]E[/math] follows the law [math]B((Ha)_1,\ldots,(Ha)_N)[/math] in the statement.
Observe that, in what regards the laws appearing in Theorem 11.5, we can write a formula for them of the following type, with [math]\times[/math] being a multiplicative convolution:
To be more precise, such a formula holds indeed, with the measure [math]\beta(r_1,\ldots,r_N)\in\mathcal P(\mathbb R_+)[/math] with [math]r_1,\ldots,r_N\geq 0[/math] being given by the following formula:
Regarding now the explicit computation of [math]\beta[/math], observe that we have:
Consider now the following variable, which is easily seen, for instance by using the moment method, to be uniform over the projective torus [math]\mathbb T^{N-1}=\mathbb T^N/\mathbb T[/math]:
Now since we have [math]\lambda_i/\lambda_j=\mu_i\mu_{i+1}\ldots\mu_j[/math], with the convention [math]\mu_i\ldots\mu_j=\overline{\mu_j\ldots\mu_i}[/math] for [math]i \gt j[/math], this gives the following formula, with [math]\mu\in\mathbb T^{N-1}[/math] random:
It is possible to further study the laws [math]\beta[/math] by using this formula. However, in practice, it is more convenient to use the complex measures [math]B[/math] from Theorem 11.5.
Let us end these preliminaries with a discussion of the “arithmetic” version of the problem, which makes the link with the Gale-Berlekamp game [1], [2] and with the work in the real case, from chapter 2. We have the following unifying formalism:
Given [math]H\in M_N(\mathbb C)[/math] and [math]s\in\mathbb N\cup\{\infty\}[/math], we define a measure
Observe that at [math]s=\infty[/math] we obtain the measure in Theorem 11.1. Also, at [math]s=2[/math] and for a usual Hadamard matrix, [math]H\in M_N(\pm1)[/math], we obtain the measure from chapter 2. Observe also that for [math]H\in M_N(\pm1)[/math], knowing [math]\mu_2[/math] is the same as knowing the statistics of the number of one entries, [math]|1\in H|[/math]. This follows indeed from the following formula:
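Indeed, since each entry of [math]H\in M_N(\pm1)[/math] is [math]\pm1[/math], the excess satisfies [math]E=|1\in H|-(N^2-|1\in H|)=2|1\in H|-N^2[/math]. A quick sketch confirming this on a random sign matrix, with the matrix size [math]N=6[/math] being an arbitrary choice:

```python
import random

# For H in M_N(+-1), each entry is 1 or -1, so the excess satisfies
# E = |1 in H| - (N^2 - |1 in H|) = 2|1 in H| - N^2,
# hence knowing E is the same as knowing the number of 1 entries.

random.seed(7)
N = 6
H = [[random.choice([1, -1]) for _ in range(N)] for _ in range(N)]

E = sum(sum(row) for row in H)
ones = sum(row.count(1) for row in H)

assert E == 2 * ones - N * N
```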
More generally, at [math]s=p[/math] prime, we have the following result:
When [math]s[/math] is prime and [math]H\in M_N(\mathbb Z_s)[/math], the statistics of the number of one entries, [math]|1\in H|[/math], can be recovered from that of the total sum, [math]E=\sum_{ij}H_{ij}[/math].
The problem here is of vectorial nature, so given [math]V\in\mathbb Z_s^n[/math], we would like to compare the quantities [math]|1\in V|[/math] and [math]\sum V_i[/math]. Let us write, up to permutations:
We have then [math]|1\in V|=a_0[/math], as well as:
We also know that [math]a_0+a_1+\ldots+a_{s-1}=n[/math]. Now when [math]s[/math] is prime, the only ambiguity in recovering [math]a_0[/math] from [math]a_0+a_1w+\ldots+a_{s-1}w^{s-1}[/math] can come from:
But since the sum of the numbers [math]a_i[/math] is fixed, [math]a_0+a_1+\ldots+a_{s-1}=n[/math], this ambiguity disappears, and this gives the result.
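A small sketch illustrating this argument, with [math]s=5[/math] and [math]n=6[/math] chosen for the sake of the example: for [math]s[/math] prime the powers [math]1,w,\ldots,w^{s-2}[/math] are linearly independent over the rationals, so equality of the sums [math]\sum a_iw^i[/math] is equivalent to equality of the canonical forms obtained via [math]1+w+\ldots+w^{s-1}=0[/math], and the code checks by brute force that such a canonical form, together with the fixed total [math]n[/math], determines the count vector:

```python
from itertools import product

# For s prime, 1 + w + ... + w^{s-1} = 0 is the only rational relation among
# the powers of w = e^{2 pi i/s}, so a_0 + a_1 w + ... + a_{s-1} w^{s-1}
# is determined by the differences a_i - a_{s-1}; together with the fixed
# total a_0 + ... + a_{s-1} = n, this pins down every a_i.

s, n = 5, 6  # s prime, n = total number of entries

def canonical(a):
    """Canonical form of sum a_i w^i, reducing via 1 + w + ... + w^{s-1} = 0."""
    return tuple(a[i] - a[-1] for i in range(s - 1))

seen = {}
for a in product(range(n + 1), repeat=s):
    if sum(a) != n:
        continue
    key = canonical(a)
    # two count vectors with the same total and the same sum a_i w^i coincide
    assert seen.setdefault(key, a) == a
print("injective:", len(seen), "count vectors, all distinguished")
```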
11b. Glow moments
Let us investigate now the glow of the complex Hadamard matrices, by using the moment method. We use the moment formula from chapter 10, namely:
For [math]H\in M_N(\mathbb T)[/math] the even moments of [math]|E|[/math] are given by
As explained in chapter 10, with [math]E=\sum_{ij}H_{ij}a_ib_j[/math] we obtain:
The integrals on the right being [math]\delta_{[i],[k]}[/math] and [math]\delta_{[j],[l]}[/math], we obtain the result.
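As a sanity check, this moment formula can be verified numerically on a small example. Since every exponent of each [math]a_i,b_j[/math] in [math]|E|^{2p}[/math] lies in [math][-p,p][/math], the torus integral is reproduced exactly by averaging over [math](2p+1)[/math]-th roots of unity; the sketch below, with [math]H=F_2[/math] and [math]p=2[/math] chosen for illustration, compares this with the combinatorial sum over indices satisfying [math][i]=[k][/math], [math][j]=[l][/math]:

```python
import itertools, cmath

# Check, for H = F_2 and p = 2, that the 2p-th moment of |E| over the torus
# equals the combinatorial sum over multi-indices with [i]=[k], [j]=[l].
N, p = 2, 2
H = [[1, 1], [1, -1]]  # the Fourier matrix F_2

# Torus integral: exponents of each a_i, b_j in |E|^{2p} lie in [-p, p],
# so averaging over M-th roots of unity with M = 2p+1 is exact.
M = 2 * p + 1
roots = [cmath.exp(2j * cmath.pi * k / M) for k in range(M)]
lhs = 0.0
for ab in itertools.product(roots, repeat=2 * N):
    a, b = ab[:N], ab[N:]
    E = sum(H[i][j] * a[i] * b[j] for i in range(N) for j in range(N))
    lhs += abs(E) ** (2 * p)
lhs /= M ** (2 * N)

# Combinatorial side: sum of H_{i1 j1}...H_{ip jp} conj(H_{k1 l1}...H_{kp lp})
# over multi-indices with [i]=[k] and [j]=[l], equality as multisets.
words = list(itertools.product(range(N), repeat=p))
rhs = 0
for i, j, k, l in itertools.product(words, repeat=4):
    if sorted(i) == sorted(k) and sorted(j) == sorted(l):
        term = 1
        for r in range(p):
            term *= H[i[r]][j[r]] * H[k[r]][l[r]]  # entries are real here
        rhs += term

assert abs(lhs - rhs) < 1e-6
print("moment check:", lhs, "=", rhs)
```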
As a first application, let us investigate the tensor products. We have:
The even moments of the variable [math]|E|[/math] for a tensor product
With [math]L=H\otimes K[/math], the formula in Proposition 11.8 reads:
But this gives the formula in the statement, and we are done.
Thus, we cannot reconstruct the glow of [math]H\otimes K[/math] from that of [math]H,K[/math], because the indices “get mixed”. We have as well a result regarding the deformations, as follows:
The even moments of [math]|E|[/math] for a deformed tensor product
As before, we use the formula in Proposition 11.8. We have:
Thus, we obtain the following formula for the moments:
Thus, we are led to the conclusion in the statement.
The above formulae might look quite complicated, but they have some practical use. Let us go back indeed to a question that we had open since chapter 5, namely classifying the [math]4\times4[/math] complex Hadamard matrices, up to equivalence. We can now formulate:
The complex Hadamard matrices at [math]N=4[/math] are, up to equivalence, the following matrices, where [math]s=e^{it}[/math] with [math]t\in[0,\pi/2][/math],
We know from chapter 5 that the [math]4\times4[/math] complex Hadamard matrices are those in the statement, with [math]s\in\mathbb T[/math], and we also know from there that we have equivalences as follows, which in practice means that we can assume [math]s=e^{it}[/math] with [math]t\in[0,\pi/2][/math]:
It remains to prove that these matrices, namely [math]F_4^s[/math] with [math]s=e^{it}[/math], [math]t\in[0,\pi/2][/math], are pairwise non-equivalent. For this purpose, let us look at the moments of the glow:
(1) Regarding the first moment, this is not something useful, because we have the following formula, coming from Proposition 11.8, valid for any [math]H\in M_N(\mathbb T)[/math]:
(2) Regarding the second moment, this is something not useful either, because once again by using Proposition 11.8, we obtain a formula as follows, for any [math]H\in M_N(\mathbb T)[/math]:
To be more precise, here [math]P[/math] is a certain polynomial, not depending on [math]H[/math], collecting the contributions from the “trivial” solutions of [math][ik]=[mp][/math], [math][jl]=[nq][/math], and then [math]Q[/math] is another polynomial, again not depending on [math]H[/math], obtained from [math]P[/math] via a summing trick.
(3) However, when getting to the third moment, or higher, things become interesting. Indeed, the equivalences [math]F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}}[/math] tell us that the [math]p[/math]-th moment of [math]|E|^2[/math] is a degree [math]p[/math] even, symmetric Laurent polynomial in [math]s\in\mathbb T[/math], and a direct computation at [math]p=3[/math], based on the formula in Proposition 11.10, shows that the parameter [math]s\in\mathbb T[/math] can be recaptured, up to identifying [math]\{s,-s,\bar{s},-\bar{s}\}[/math], from the knowledge of this polynomial.
(4) Alternatively, we can say that the parameter [math]s\in\mathbb T[/math] can be recaptured, again up to identifying [math]\{s,-s,\bar{s},-\bar{s}\}[/math], from the knowledge of the third order term of the glow, with this meaning by definition the [math]N^{-2}[/math] factor in the [math]N^{-1}[/math] expansion of the law of [math]|E|/N[/math].
Summarizing, there are some interesting things going on here, which will need some time to be fully understood. So, let us develop now some systematic moment machinery for the glow, along the above lines. Let [math]P(p)[/math] be the set of partitions of [math]\{1,\ldots,p\}[/math], with its standard order relation [math]\leq[/math], which is such that, for any [math]\pi\in P(p)[/math]:
We denote by [math]\mu(\pi,\sigma)[/math] the associated Möbius function, given by:
To be more precise, the Möbius function is defined by recurrence, by using this formula. The main interest in the Möbius function comes from the Möbius inversion formula, which states that the following happens, at the level of the functions on [math]P(p)[/math]:
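To make the recurrence concrete, here is a short sketch computing the Möbius function of the partition lattice [math]P(p)[/math] directly from its defining recurrence, and checking the classical value [math]\mu(\hat0,\hat1)=(-1)^{p-1}(p-1)![/math] at [math]p=4[/math]; the representation of partitions as sorted lists of blocks is just an implementation choice:

```python
from math import factorial

def partitions(s):
    """All set partitions of the list s, as lists of blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for part in partitions(rest):
        for k in range(len(part)):  # put `first` into an existing block
            yield part[:k] + [[first] + part[k]] + part[k + 1:]
        yield [[first]] + part      # or into a new singleton block

def leq(sigma, tau):
    """Refinement order: every block of sigma sits inside a block of tau."""
    return all(any(set(b) <= set(c) for c in tau) for b in sigma)

p = 4
P = [sorted(sorted(b) for b in part) for part in partitions(list(range(p)))]

memo = {}
def mobius(sigma, tau):
    """Mobius function of P(p): mu(pi,pi) = 1, and the values mu(pi,sigma)
    for pi < sigma are forced by the sum-to-zero recurrence."""
    key = (repr(sigma), repr(tau))
    if key not in memo:
        if sigma == tau:
            memo[key] = 1
        else:
            memo[key] = -sum(mobius(sigma, rho) for rho in P
                             if leq(sigma, rho) and leq(rho, tau) and rho != tau)
    return memo[key]

bottom = [[i] for i in range(p)]  # all singletons
top = [list(range(p))]            # one block
assert mobius(bottom, top) == (-1) ** (p - 1) * factorial(p - 1)  # = -6
```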
For [math]\pi\in P(p)[/math] we use the following notation, where [math]b_1,\ldots,b_{|\pi|}[/math] are the block lengths:
Finally, we use the following notation, where [math]H_1,\ldots,H_N\in\mathbb T^N[/math] are the rows of [math]H[/math]:
With these notations, we have the following result:
The glow moments of a matrix [math]H\in M_N(\mathbb T)[/math] are given by
We know from Proposition 11.8 that the moments are given by:
With [math]\sigma=\ker x,\rho=\ker y[/math], we deduce that the moments of [math]|E|^2[/math] decompose over partitions, according to a formula as follows:
To be more precise, the contributions are as follows:
We have [math]C(\sigma,\rho)=0[/math] unless [math]\sigma\sim\rho[/math], in the sense that [math]\sigma,\rho[/math] must have the same block structure. The point now is that the sums of type [math]\sum_{\ker x=\sigma}[/math] can be computed by using the Möbius inversion formula. We obtain a formula as follows:
Here the functions on the right are by definition given by:
Now since there are [math]\binom{p}{\sigma}[/math] partitions having the same block structure as [math]\sigma[/math], we obtain:
But this gives the formula in the statement, and we are done.
Let us discuss now the asymptotic behavior of the glow. For this purpose, we first study the coefficients [math]K(\pi)[/math] in Theorem 11.12. We have here the following result:
The coefficients appearing in the above, namely
- The function [math]\widetilde{K}(\pi)=\frac{K(\pi)}{p!}[/math] is multiplicative, in the sense that:
[[math]] \widetilde{K}(\pi\pi')=\widetilde{K}(\pi)\widetilde{K}(\pi') [[/math]]
- On the one-block partitions, we have:
[[math]] K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}(-1)^{|\sigma|-1}(|\sigma|-1)!\binom{p}{\sigma} [[/math]]
- We have as well the following formula,
[[math]] K(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^p(-1)^{r-1}(r-1)!C_{pr} [[/math]]
where the coefficients on the right are given by:
[[math]] C_{pr}=\sum_{p=a_1+\ldots+a_r}\binom{p}{a_1,\ldots,a_r}^2 [[/math]]
This follows from some standard computations, as follows:
(1) We can use here the following formula, a well-known property of the Möbius function, which can be proved by recurrence:
Now if [math]b_1,\ldots,b_s[/math] and [math]c_1,\ldots,c_t[/math] are the block lengths of [math]\sigma,\sigma'[/math], we obtain, as claimed:
(2) We can use here the following formula, which once again is well-known, and can be proved by recurrence on [math]|\sigma|[/math]:
We therefore obtain, as claimed:
(3) By using the formula in (2), and summing over [math]r=|\sigma|[/math], we obtain:
Now if we denote by [math]a_1,\ldots,a_r[/math] with [math]a_i\geq1[/math] the block lengths of [math]\sigma[/math], then:
On the other hand, given [math]a_1,\ldots,a_r\geq1[/math] with [math]a_1+\ldots+a_r=p[/math], the number of partitions [math]\sigma[/math] having these numbers as block lengths is:
Thus, we are led to the conclusion in the statement.
Now let us take a closer look at the integrals [math]I(\pi)[/math] from Theorem 11.12, namely:
We have here the following result:
Consider the one-block partition [math]\sqcap\!\!\sqcap\ldots\sqcap\in P(p)[/math].
- [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\#\{i,j\in\{1,\ldots,N\}^p|[i]=[j]\}[/math].
- [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\int_{\mathbb T^N}|\sum_ia_i|^{2p}da[/math].
- [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{\sigma\in P(p)}\binom{p}{\sigma}\frac{N!}{(N-|\sigma|)!}[/math].
- [math]I(\sqcap\!\!\sqcap\ldots\sqcap)=\sum_{r=1}^pC_{pr}\frac{N!}{r!(N-r)!}[/math], where [math]C_{pr}=\sum_{p=b_1+\ldots+b_r}\binom{p}{b_1,\ldots,b_r}^2[/math].
Once again, these formulae follow from some standard combinatorics:
(1) This follows indeed from the following computation:
(2) This follows from the following computation:
(3) If we let [math]\sigma=\ker i[/math] in the above formula of [math]I(\sqcap\!\!\sqcap\ldots\sqcap)[/math], we obtain:
Now since there are [math]\frac{N!}{(N-|\sigma|)!}[/math] choices for the multi-index [math]i[/math], and then [math]\binom{p}{\sigma}[/math] choices for the multi-index [math]j[/math], this gives the result.
(4) If we set [math]r=|\sigma|[/math], the formula in (3) becomes:
Now since there are exactly [math]\binom{p}{b_1,\ldots,b_r}[/math] partitions [math]\sigma\in P(p)[/math] having [math]b_1,\ldots,b_r[/math] as block lengths, the sum on the right is given by:
Thus, we are led to the conclusion in the statement.
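The equality between the counting formula (1) and the integral (2) can also be confirmed numerically; as before, since all exponents lie in [math][-p,p][/math], averaging over [math](2p+1)[/math]-th roots of unity evaluates the torus integral exactly. A sketch with [math]N=3[/math], [math]p=3[/math], these values being an arbitrary small test case:

```python
import itertools, cmath

# Check that #{(i,j) in {1..N}^p x {1..N}^p : [i]=[j]} equals the torus
# integral of |a_1 + ... + a_N|^{2p}, for N = 3, p = 3.
N, p = 3, 3

# counting side: pairs of words with the same multiset of entries
words = list(itertools.product(range(N), repeat=p))
count = sum(1 for i in words for j in words if sorted(i) == sorted(j))

# integral side, evaluated exactly on a grid of M-th roots of unity
M = 2 * p + 1
roots = [cmath.exp(2j * cmath.pi * k / M) for k in range(M)]
integral = 0.0
for a in itertools.product(roots, repeat=N):
    integral += abs(sum(a)) ** (2 * p)
integral /= M ** N

assert abs(integral - count) < 1e-6
print(count)  # -> 93, the value of I(one-block partition) at N = 3, p = 3
```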
In general, the integrals [math]I(\pi)[/math] can be estimated as follows:
Let [math]H\in M_N(\mathbb T)[/math], having its rows pairwise orthogonal.
- [math]I(|\,|\,\ldots|)=N^p[/math].
- [math]I(|\,|\,\ldots|\ \pi)=N^aI(\pi)[/math], for any [math]\pi\in P(p-a)[/math].
- [math]|I(\pi)|\lesssim p!N^p[/math], for any [math]\pi\in P(p)[/math].
This is something elementary, as follows:
(1) Since the rows of [math]H[/math] are pairwise orthogonal, we have:
(2) This follows by the same computation as the above one for (1).
(3) We have indeed the following estimate:
Thus we have obtained the formula in the statement, and we are done.
We have now all needed ingredients for a universality result:
The glow of a complex Hadamard matrix [math]H\in M_N(\mathbb T)[/math] is given by:
We use the moment formula in Theorem 11.12, namely:
By using Proposition 11.15 (3), we conclude that only the [math]p[/math]-block and [math](p-1)[/math]-block partitions contribute at order 2, so:
Now by dividing by [math]N^{2p}[/math] and then by using the various formulae in Proposition 11.13, Proposition 11.14 and Proposition 11.15, we obtain, as claimed:
Finally, since the law of [math]E[/math] is invariant under centered rotations in the complex plane, this moment formula gives as well the last assertion.
Summarizing, the complex glow of the complex Hadamard matrices appears to have similar properties to the real glow of the real Hadamard matrices.
11c. Fourier matrices
Let us study now the glow of the Fourier matrices, [math]F=F_G[/math]. We use the following standard formulae, which all come from definitions:
We first have the following result:
For a Fourier matrix [math]F_G[/math] we have
The basic components of the integrals [math]I(\pi)[/math] are given by:
But this gives the formula in the statement, and we are done.
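These basic components reduce to character sums, which vanish unless the argument is trivial. For instance over a cyclic group, with [math]N=7[/math] being an arbitrary choice:

```python
import cmath

# Orthogonality of characters of Z_N: for w = e^{2 pi i/N},
# sum_j w^{aj} equals N when a = 0 (mod N), and 0 otherwise.
# These are the basic building blocks of the integrals I(pi) for F_G.
N = 7
w = cmath.exp(2j * cmath.pi / N)

for a in range(N):
    s = sum(w ** (a * j) for j in range(N))
    expected = N if a % N == 0 else 0
    assert abs(s - expected) < 1e-9
print("character sums over Z_%d check out" % N)
```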
We have the following interpretation of the above integrals:
For any partition [math]\pi[/math] we have the formula
We have the following computation:
But this gives the formula in the statement, and we are done.
We must estimate now the quantities [math]I(\pi)[/math]. We first have the following result:
For [math]F_G[/math] we have the estimate
With [math]\sigma=\ker i[/math] we obtain:
The number of choices for [math]i[/math] satisfying [math]\ker i=\sigma[/math] is:
Then, the number of choices for [math]j[/math] satisfying [math][i]=[j][/math] is:
We conclude that the main contribution comes from the following partition:
Thus, we have the following formula:
Now [math]\ker i=|\,|\ldots|[/math] tells us that [math]i[/math] must have distinct entries, and there are [math]\frac{N!}{(N-p)!}\simeq N^p[/math] choices for such multi-indices [math]i[/math]. Regarding the indices [math]j[/math], the main contribution comes from those obtained from [math]i[/math] by permuting the entries over the blocks of [math]\pi[/math], and there are [math]b_1!\ldots b_{|\pi|}![/math] choices here. Thus, we are led to the conclusion in the statement.
At the second order now, the estimate is as follows:
For [math]F_G[/math] we have the formula
Let us define the “non-arithmetic” part of [math]I(\pi)[/math] as follows:
We then have the following formula:
Also, Proposition 11.19 shows that we have the following estimate:
Our claim now is that we have the following formula:
Indeed, according to Proposition 11.19, we have a formula of the following type:
More precisely, this formula holds indeed, with [math]I^1(\pi)[/math] coming from [math]i_1,\ldots,i_p[/math] distinct, [math][i]=[j][/math], and with one constraint of type:
Now observe that for a two-block partition [math]\pi=(a,b)[/math] this constraint is implemented, up to permutations which leave invariant the blocks of [math]\pi[/math], as follows:
Let us compute now [math]I^1(a,b)[/math]. We cannot have [math]c=0,1[/math], and once [math]c\geq2[/math] is given, we have [math]\binom{a}{c},\binom{b}{c}[/math] choices for the positions of the [math]i,j[/math] variables in the upper row, then [math]N^{p-1}+O(N^{p-2})[/math] choices for the variables in the upper row, and then finally we have [math]a!b![/math] permutations which can produce the lower row. We therefore obtain:
In the general case now, a similar discussion applies. Indeed, the constraint of type [math]\sum_{r\in\beta}i_r=\sum_{r\in\beta}j_r[/math] with [math][i_r|r\in\beta]\neq[j_r|r\in\beta][/math] cannot affect [math]\leq1[/math] blocks, because a constraint supported by a single block would put us back in the arithmetic case, and cannot affect [math]\geq3[/math] blocks either, because affecting [math]\geq3[/math] blocks would require [math]\geq2[/math] constraints. Thus this condition affects exactly [math]2[/math] blocks, and if we let [math]i \lt j[/math] be the indices in [math]\{1,\ldots,s\}[/math] corresponding to these 2 blocks, we obtain:
But this proves the above claim. Let us estimate now [math]I(\sqcap\!\!\sqcap\ldots\sqcap)[/math]. We have:
Now recall that we have:
We therefore obtain:
By plugging this quantity into the above estimate, we obtain the result.
In order to estimate the glow, we will need the explicit formula of [math]I(\sqcap\sqcap)[/math]:
For [math]F_G[/math] with [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math] we have the formula
The conditions defining the quantities [math]I(\pi)[/math] are as follows:
We use the fact that, when dealing with these conditions, one can always erase some of the variables [math]i_r,j_r[/math], so as to reduce to the “purely arithmetic” case, namely:
We deduce from this that we have:
Let us compute now [math]I^{ari}(\sqcap\sqcap)[/math]. There are 3 contributions to this quantity, namely:
(1) Case [math](^{iijj}_{jjii})[/math], with [math]i\neq j[/math], [math]2i=2j[/math]. Since [math]2(i_1,\ldots,i_k)=2(j_1,\ldots,j_k)[/math] corresponds to the collection of conditions [math]2i_r=2j_r[/math], inside [math]\mathbb Z_{N_r}[/math], each of which has 1 or 2 solutions, depending on whether [math]N_r[/math] is odd or even, the contribution here is:
(2) Case [math](^{iijk}_{jkii})[/math], with [math]i,j,k[/math] distinct, [math]2i=j+k[/math]. The contribution here is:
(3) Case [math](^{ijkl}_{klij})[/math], with [math]i,j,k,l[/math] distinct, [math]i+j=k+l[/math]. The contribution here is:
We can split this quantity over two cases, [math]2j\neq 2k[/math] and [math]2j=2k[/math], and we obtain:
The point now is that in the first case, [math]2j\neq 2k[/math], the numbers [math]j,k,2k-j[/math] are distinct, while in the second case, [math]2j=2k[/math], we simply have [math]2k-j=j[/math]. Thus, we obtain:
We can now compute the arithmetic part. This is given by:
Thus the integral to be computed is given by:
Thus we have reached the formula in the statement, and we are done.
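The case counts entering the above proof can be checked by brute force on small groups. For instance in case (1), the observation that [math]2d=0[/math] has 1 or 2 solutions per component gives [math]N(2^e-1)[/math] pairs [math]i\neq j[/math] with [math]2i=2j[/math], where [math]e[/math] is the number of even [math]N_r[/math]; the groups tested below are arbitrary small examples:

```python
from itertools import product

# Brute-force check of case (1): pairs i != j in G with 2i = 2j.
# Componentwise, 2d = 0 in Z_{N_r} has 1 solution for N_r odd and 2 for
# N_r even, so #{d : 2d = 0} = 2^e with e = #even N_r, and the number of
# pairs is N * (2^e - 1), where N = |G|.
def case1_count(sizes):
    G = list(product(*[range(n) for n in sizes]))
    return sum(1 for i in G for j in G
               if i != j and all((2 * (x - y)) % n == 0
                                 for x, y, n in zip(i, j, sizes)))

for sizes in [(4, 3), (2, 6), (5,), (8,)]:
    N = 1
    for n in sizes:
        N *= n
    e = sum(1 for n in sizes if n % 2 == 0)
    assert case1_count(sizes) == N * (2 ** e - 1)
```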
11d. Universality
We have the following asymptotic result:
The glow of [math]F_G[/math], with [math]|G|=N[/math], is given by
with the coefficients being as follows:
We use the following quantities:
These are subject to the following formulae:
Consider as well the following quantities:
In terms of these quantities, we have:
We have the following formulae:
Regarding now the numbers [math]C_{pr}[/math] in Proposition 11.19, these are given by:
We deduce that we have the following formulae:
By using Proposition 11.20 and Proposition 11.21, we obtain the following formula:
But this gives the formulae of [math]K_1,K_2,K_3[/math] in the statement, and we are done.
It is possible to compute the next term as well, the result being as follows:
Let [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math] be a finite abelian group, and set:
This is something that we already know, up to order 3, and the next coefficient [math]K_4[/math] can be computed in a similar way, based on results that we already have.
The passage to Theorem 11.23 is quite interesting, because it shows that the glow of the Fourier matrices [math]F_G[/math] is not polynomial in [math]N=|G|[/math]. When restricting the attention to the usual Fourier matrices [math]F_N[/math], the glow up to order 4 is polynomial both in [math]N[/math] odd, and in [math]N[/math] even, but it is not clear what happens at higher order. An interesting question here is that of computing the complex glow of the Walsh matrices. Indeed, for the Walsh matrices the integrals [math]I(\pi)[/math], and hence the glow itself, might be polynomial in [math]N[/math].
General references
Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].
References
- P.C. Fishburn and N.J.A. Sloane, The solution to Berlekamp's switching game, Discrete Math. 74 (1989), 263--290.
- R. Roth and K. Viswanathan, On the hardness of decoding the Gale-Berlekamp code, IEEE Trans. Inform. Theory 54 (2008), 1050--1060.
- M. Idel and M.M. Wolf, Sinkhorn normal form for unitary matrices, Linear Algebra Appl. 471 (2015), 76--84.
- R. Durrett, Probability: theory and examples, Cambridge Univ. Press (1990).
- G. Pólya, Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz, Math. Ann. 84 (1921), 149--160.