Complex matrices



5a. Basic theory

We have seen that the Hadamard matrices [math]H\in M_N(\pm1)[/math] are very interesting objects. In what follows, we will be interested in their complex versions:

Definition

A complex Hadamard matrix is a square matrix whose entries belong to the unit circle in the complex plane,

[[math]] H\in M_N(\mathbb T) [[/math]]
and whose rows are pairwise orthogonal, with respect to the scalar product of [math]\mathbb C^N[/math].

Here, and in what follows, the scalar product is the usual one on [math]\mathbb C^N[/math], taken to be linear in the first variable and antilinear in the second one:

[[math]] \lt x,y \gt =\sum_ix_i\bar{y}_i [[/math]]


As basic examples of complex Hadamard matrices, we have the real Hadamard matrices, [math]H\in M_N(\pm1)[/math], which have sizes [math]N\in\{2\}\cup4\mathbb N[/math]. Here is now a new, motivating example, with [math]w=e^{2\pi i/3}[/math], which appears at the forbidden size value [math]N=3[/math]:

[[math]] F_3=\begin{pmatrix}1&1&1\\ 1&w&w^2\\ 1&w^2&w\end{pmatrix} [[/math]]


And here is another example, which appears at [math]N=4[/math], and whose combinatorics is different from that of the unique [math]4\times4[/math] real Hadamard matrix, [math]W_4\sim K_4[/math]:

[[math]] F_4=\begin{pmatrix} 1&1&1&1\\ 1&i&-1&-i\\ 1&-1&1&-1\\ 1&-i&-1&i \end{pmatrix} [[/math]]
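
For readers who want to experiment, here is a minimal numerical check of Definition 5.1 on these two examples, written in Python with numpy; the helper name is ours, and not something from the text.

```python
import numpy as np

def is_complex_hadamard(H, tol=1e-10):
    """Check Definition 5.1: unimodular entries, and pairwise orthogonal rows."""
    H = np.asarray(H, dtype=complex)
    N = H.shape[0]
    unimodular = np.allclose(np.abs(H), 1, atol=tol)
    # pairwise orthogonality of the rows amounts to H H^* = N 1_N
    orthogonal = np.allclose(H @ H.conj().T, N * np.eye(N), atol=tol)
    return unimodular and orthogonal

w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])
F4 = np.array([[1, 1, 1, 1], [1, 1j, -1, -1j], [1, -1, 1, -1], [1, -1j, -1, 1j]])
print(is_complex_hadamard(F3), is_complex_hadamard(F4))   # True True
```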


We will see that there are many other examples, and in particular that there are such matrices at any [math]N\in\mathbb N[/math], which in addition can be chosen to be circulant. Thus, the HC and CHC problematics will disappear in the general complex setting. And we will also see that many other questions about the real Hadamard matrices [math]H\in M_N(\pm1)[/math] become far more clear, and sometimes even solvable, when passing to the complex case.


Before anything, however, let us recommend some reading. Although the field of complex numbers [math]\mathbb C[/math] is something very familiar in mathematics, and there are plenty of good reasons for sometimes using it instead of the field of real numbers [math]\mathbb R[/math], when it comes to matrices things are more tricky. Why, after all, look at [math]M_N(\mathbb C)[/math]?


The answer to this question comes from physics, and more specifically from quantum mechanics. Remember Newton, Leibniz and others, who started talking about functions, derivatives, integrals, and all sorts of other things that we now learn in 1st year at the university, motivated by classical mechanics? Well, pretty much the same happened with Heisenberg, Schrödinger, Dirac and others, who all of a sudden started to talk about complex matrices, motivated by quantum mechanics. And these complex matrices are now part of the mathematical landscape too, starting with the 3rd year or so.


So, quantum mechanics. This is, and we repeat, something that you need to know a bit, in order to love the complex matrices, and appreciate the remainder of this book. Standard places for learning it are the books of Feynman [1], Griffiths [2], Weinberg [3]. There are some delightful good old books as well, if you prefer, such as Dirac [4], von Neumann [5], Weyl [6]. And for more fancy stuff, if you're really into action, teaching you how to win a war by totally paralyzing the enemy, with a powerful quantum computer, go with Bengtsson-Życzkowski [7], Nielsen-Chuang [8], Watrous [9].


Getting back now to the complex Hadamard matrices, although these originate in a 1962 paper by Butson [10], motivated by pure mathematics, their study only really took off in the 90s, under the influence of people like Haagerup [11], Jones [12], Popa [13], all mathematicians interested in quantum mechanics. Later on physicists joined too, of course. And so again, conclusion to this, to be kept in mind: quantum mechanics.


In what follows we will take Definition 5.1 as it is, as a nice and natural mathematical definition, which is fully motivated, mathematically speaking, by the few remarks made afterwards. Let us start our study of the complex Hadamard matrices by extending some basic results from the real case, from chapter 1. First, we have:

Proposition

The set formed by the [math]N\times N[/math] complex Hadamard matrices is the real algebraic manifold

[[math]] X_N=M_N(\mathbb T)\cap\sqrt{N}U_N [[/math]]
where [math]U_N[/math] is the unitary group, the intersection being taken inside [math]M_N(\mathbb C)[/math].


Proof

Let [math]H\in M_N(\mathbb T)[/math]. Then [math]H[/math] is Hadamard if and only if its rescaling [math]U=H/\sqrt{N}[/math] belongs to the unitary group [math]U_N[/math], that is, precisely when [math]H\in X_N[/math], as claimed.

We should mention that the above manifold [math]X_N[/math], while appearing by definition as an intersection of smooth manifolds, is very far from being smooth. We will be back to this, later on. As a basic consequence now of the above result, we have:

Proposition

Let [math]H\in M_N(\mathbb C)[/math] be an Hadamard matrix.

  • The columns of [math]H[/math] must be pairwise orthogonal.
  • The matrices [math]H^t,\bar{H},H^*\in M_N(\mathbb C)[/math] are Hadamard as well.


Proof

We use the well-known fact that if a matrix is unitary, [math]U\in U_N[/math], then so is its complex conjugate [math]\bar{U}=(\bar{U}_{ij})[/math], the inversion formulae being as follows:

[[math]] U^*=U^{-1}\quad,\quad U^t=\bar{U}^{-1} [[/math]]


Thus the unitary group [math]U_N[/math] is stable under the following operations:

[[math]] U\to U^t\quad,\quad U\to\bar{U}\quad,\quad U\to U^* [[/math]]


It follows that the algebraic manifold [math]X_N[/math] constructed in Proposition 5.2 is stable as well under these operations. But this gives all the assertions.

Let us introduce now the following equivalence notion for the complex Hadamard matrices, taking into account some basic operations which can be performed:

Definition

Two complex Hadamard matrices are called equivalent, and we write [math]H\sim K[/math], when it is possible to pass from [math]H[/math] to [math]K[/math] via the following operations:

  • Permuting the rows, or permuting the columns.
  • Multiplying the rows or columns by numbers in [math]\mathbb T[/math].

Also, we say that [math]H[/math] is dephased when its first row and column consist of [math]1[/math] entries.

The same remarks as in the real case apply. First of all, we have not taken into account the results in Proposition 5.3 when formulating the above definition, because the operations [math]H\to H^t,\bar{H},H^*[/math] are far more subtle than those in (1,2) above.


Regarding the equivalence, there is a certain group [math]G[/math] acting there, made of two copies of [math]S_N[/math], one for the rows and one for the columns, and of two copies of [math]\mathbb T^N[/math], once again one for the rows, and one for the columns. It is possible to be a bit more explicit here, with a formula for [math]G[/math] and so on, but we will not need this, in what follows next.


Observe that, up to the above equivalence relation, any complex Hadamard matrix [math]H\in M_N(\mathbb T)[/math] can be put in dephased form. Moreover, the dephasing operation is unique, if we allow only the operations (2) in Definition 5.4, namely row and column multiplications by numbers in [math]\mathbb T[/math]. In what follows, “dephasing the matrix” will have precisely this meaning, namely dephasing by using the operations (2) in Definition 5.4.
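
Here is a short Python sketch of the dephasing operation just described, on a randomly scrambled copy of [math]F_3[/math]; the function name is ours. Since dephasing by the operations (2) alone is unique, the scrambled matrix dephases back to [math]F_3[/math] itself.

```python
import numpy as np

def dephase(H):
    """Dephase a complex Hadamard matrix, using only row and column
    multiplications by numbers in T."""
    H = np.asarray(H, dtype=complex)
    H = H / H[:, [0]]    # divide each row by its first entry: first column becomes 1
    H = H / H[[0], :]    # divide each column by its first entry: first row becomes 1
    return H

w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])

rng = np.random.default_rng(0)
a = np.exp(2j * np.pi * rng.random(3))   # random phases for the rows
b = np.exp(2j * np.pi * rng.random(3))   # random phases for the columns
K = np.diag(a) @ F3 @ np.diag(b)

print(np.allclose(dephase(K), F3))   # True: F3 is already in dephased form
```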


Regarding analytic aspects, once again in analogy with the study from the real case, we can locate the complex Hadamard matrices inside [math]M_N(\mathbb T)[/math], as follows:

Theorem

Given a matrix [math]H\in M_N(\mathbb T)[/math], we have

[[math]] |\det(H)|\leq N^{N/2} [[/math]]
with equality precisely when [math]H[/math] is Hadamard.


Proof

By using the basic properties of the determinant, as in the real case, we have indeed the following estimate, valid for any vectors [math]H_1,\ldots,H_N\in\mathbb T^N[/math]:

[[math]] \begin{eqnarray*} |\det(H_1,\ldots,H_N)| &\leq&||H_1||\times\ldots\times||H_N||\\ &=&(\sqrt{N})^N \end{eqnarray*} [[/math]]


Moreover, again as in the real case, the equality situation appears precisely when our vectors [math]H_1,\ldots,H_N\in\mathbb T^N[/math] are pairwise orthogonal, and this gives the result.

From a “dual” point of view, the question of locating [math]X_N[/math] inside [math]\sqrt{N}U_N[/math], once again via analytic methods, makes sense as well, and we have here the following result:

Theorem

Given a matrix [math]U\in U_N[/math] we have

[[math]] ||U||_1\leq N\sqrt{N} [[/math]]
with equality precisely when [math]H=\sqrt{N}U[/math] is Hadamard.


Proof

We have indeed the following estimate, valid for any [math]U\in U_N[/math]:

[[math]] \begin{eqnarray*} ||U||_1 &=&\sum_{ij}|U_{ij}|\\ &\leq&N\left(\sum_{ij}|U_{ij}|^2\right)^{1/2}\\ &=&N\sqrt{N} \end{eqnarray*} [[/math]]


The equality case holds when the numbers [math]|U_{ij}|[/math] are all equal, and so when [math]|U_{ij}|=1/\sqrt{N}[/math], for any [math]i,j[/math]. But this amounts to saying that the rescaled matrix [math]H=\sqrt{N}U[/math] must satisfy [math]H\in M_N(\mathbb T)[/math], as desired.

The above Cauchy-Schwarz estimate can be improved with a Hölder estimate, the conclusion being that the rescaled Hadamard matrices maximize the [math]p[/math]-norm on [math]U_N[/math] at any [math]p\in[1,2)[/math], and minimize it at any [math]p\in(2,\infty][/math]. We will be back to this.
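
As a quick numerical illustration of Theorem 5.6, here is a comparison between the rescaled Fourier matrix [math]F_6/\sqrt{6}[/math] and a few random unitaries, in Python with numpy; this is only a sanity check of the 1-norm bound, under our own choice of random unitaries, obtained via a QR decomposition.

```python
import numpy as np

N = 6
rng = np.random.default_rng(1)

def random_unitary(n):
    # QR decomposition of a complex Gaussian matrix gives a random unitary
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

F = np.array([[np.exp(2j * np.pi * i * j / N) for j in range(N)] for i in range(N)])
norm1 = lambda U: np.abs(U).sum()

print(norm1(F / np.sqrt(N)))                              # = N*sqrt(N) ~ 14.70
print(max(norm1(random_unitary(N)) for _ in range(100)))  # typically well below that
```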

5b. Fourier matrices

At the level of the examples now, we have the following basic construction:

Theorem

The Fourier matrix, [math]F_N=(w^{ij})[/math] with [math]w=e^{2\pi i/N}[/math], which in standard matrix form, with indices [math]i,j=0,1,\ldots,N-1[/math], is as follows,

[[math]] F_N=\begin{pmatrix} 1&1&1&\ldots&1\\ 1&w&w^2&\ldots&w^{N-1}\\ 1&w^2&w^4&\ldots&w^{2(N-1)}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&w^{N-1}&w^{2(N-1)}&\ldots&w^{(N-1)^2} \end{pmatrix} [[/math]]
is a complex Hadamard matrix, in dephased form.


Proof

By using the standard fact that the averages of complex numbers correspond to barycenters, we conclude that the scalar products between the rows of [math]F_N[/math] are:

[[math]] \begin{eqnarray*} \lt R_a,R_b \gt &=&\sum_jw^{aj}w^{-bj}\\ &=&\sum_jw^{(a-b)j}\\ &=&N\delta_{ab} \end{eqnarray*} [[/math]]


Thus [math]F_N[/math] is indeed a complex Hadamard matrix. As for the fact that [math]F_N[/math] is dephased, this follows from our convention [math]i,j=0,1,\ldots,N-1[/math], which was chosen precisely for this purpose.
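
In Python, the Fourier matrix [math]F_N[/math] and the orthogonality of its rows can be checked in a few lines, as a complement to the above proof; the function name is ours.

```python
import numpy as np

def fourier_matrix(N):
    """The Fourier matrix F_N = (w^{ij}), with w = e^{2 pi i / N}."""
    w = np.exp(2j * np.pi / N)
    i, j = np.meshgrid(range(N), range(N), indexing="ij")
    return w ** (i * j)

for N in range(2, 10):
    F = fourier_matrix(N)
    # rows pairwise orthogonal, each of norm sqrt(N): F F^* = N 1_N
    print(N, np.allclose(F @ F.conj().T, N * np.eye(N)))
```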

As an obvious consequence of the above result, there is no analogue of the HC in the complex case. We will see later on, in chapter 9 below, that the Fourier matrix [math]F_N[/math] can be put in circulant form, so there is no analogue of the CHC either, in this setting. As a first classification result now, in the complex case, we have:

Proposition

The Fourier matrices [math]F_2,F_3[/math], which are given by

[[math]] F_2=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\quad,\quad F_3=\begin{pmatrix}1&1&1\\ 1&w&w^2\\ 1&w^2&w\end{pmatrix} [[/math]]
with [math]w=e^{2\pi i/3}[/math] are the only Hadamard matrices at [math]N=2,3[/math], up to equivalence.


Proof

The proof at [math]N=2[/math] is similar to the proof from the real case, from chapter 1. Indeed, given an Hadamard matrix [math]H\in M_2(\mathbb T)[/math], we can dephase it, as follows:

[[math]] \begin{pmatrix}a&b\\c&d\end{pmatrix} \to\begin{pmatrix}1&1\\\bar{a}c&\bar{b}d\end{pmatrix} \to\begin{pmatrix}1&1\\1&a\bar{b}\bar{c}d\end{pmatrix} [[/math]]


Thus, by dephasing, we obtain the matrix [math]F_2[/math]. Regarding now the case [math]N=3[/math], consider an Hadamard matrix [math]H\in M_3(\mathbb T)[/math], assumed to be in dephased form:

[[math]] H=\begin{pmatrix}1&1&1\\ 1&x&y\\ 1&z&t\end{pmatrix} [[/math]]


The orthogonality conditions between the rows of this matrix read:

[[math]] (1\perp2)\quad:\quad x+y=-1 [[/math]]

[[math]] (1\perp3)\quad:\quad z+t=-1 [[/math]]

[[math]] \ \ \ \,(2\perp3)\quad:\quad x\bar{z}+y\bar{t}=-1 [[/math]]


In order to process this, consider an arbitrary equation of the following type:

[[math]] p+q=-1\quad,\quad p,q\in\mathbb T [[/math]]


This equation tells us that the triangle having vertices at [math]1,p,q[/math] must be equilateral, and so that we must have [math]\{p,q\}=\{w,w^2\}[/math], with [math]w=e^{2\pi i/3}[/math]. By using this fact, for the first two equations, we conclude that we must have:

[[math]] \{x,y\}=\{w,w^2\}\quad,\quad \{z,t\}=\{w,w^2\} [[/math]]


As for the third equation, this gives [math]x\neq z[/math]. Thus, [math]H[/math] is either the Fourier matrix [math]F_3[/math], or the matrix obtained from [math]F_3[/math] by permuting the last two columns, and we are done.

In order to deal now with the case [math]N=4[/math], we already know, from our study in the real case, that we will need tensor products. So, let us formulate:

Definition

The tensor product of complex Hadamard matrices is given, in double indices, by [math](H\otimes K)_{ia,jb}=H_{ij}K_{ab}[/math]. In other words, we have the formula

[[math]] H\otimes K= \begin{pmatrix} H_{11}K&\ldots&H_{1M}K\\ \vdots&&\vdots\\ H_{M1}K&\ldots&H_{MM}K \end{pmatrix} [[/math]]
by using the lexicographic order on the double indices.

Here the fact that [math]H\otimes K[/math] is indeed Hadamard comes from the fact that its rows [math]R_{ia}[/math] are pairwise orthogonal, as shown by the following computation:

[[math]] \begin{eqnarray*} \lt R_{ia},R_{kc} \gt &=&\sum_{jb}H_{ij}K_{ab}\cdot \bar{H}_{kj}\bar{K}_{cb}\\ &=&\sum_jH_{ij}\bar{H}_{kj}\sum_bK_{ab}\bar{K}_{cb}\\ &=&M\delta_{ik}\cdot N\delta_{ac}\\ &=&MN\delta_{ia,kc} \end{eqnarray*} [[/math]]
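
With the lexicographic convention on the double indices, the tensor product above is simply the Kronecker product of the two matrices, so it can be checked numerically as follows; this is a sketch in Python with numpy.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])

H = np.kron(F2, F3)   # (H (x) K)_{ia,jb} = H_{ij} K_{ab}, lexicographic double indices
N = H.shape[0]
print(np.allclose(np.abs(H), 1))                     # unimodular entries
print(np.allclose(H @ H.conj().T, N * np.eye(N)))    # pairwise orthogonal rows
```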


In order to advance now, our first task will be that of tensoring the Fourier matrices. We have here the following statement, refining and generalizing Theorem 5.7:

Theorem

Given a finite abelian group [math]G[/math], with dual group [math]\widehat{G}=\{\chi:G\to\mathbb T\}[/math], consider the Fourier coupling [math]\mathcal F_G:G\times\widehat{G}\to\mathbb T[/math], given by [math](i,\chi)\to\chi(i)[/math].

  • Via the standard isomorphism [math]G\simeq\widehat{G}[/math], this Fourier coupling can be regarded as a square matrix, [math]F_G\in M_G(\mathbb T)[/math], which is a complex Hadamard matrix.
  • In the case of the cyclic group [math]G=\mathbb Z_N[/math] we obtain in this way, via the standard identification [math]\mathbb Z_N=\{1,\ldots,N\}[/math], the Fourier matrix [math]F_N[/math].
  • In general, when using a decomposition [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math], the corresponding Fourier matrix is given by [math]F_G=F_{N_1}\otimes\ldots\otimes F_{N_k}[/math].


Proof

This follows indeed from some basic facts from group theory:


(1) With the identification [math]G\simeq\widehat{G}[/math] made, our matrix is given by [math](F_G)_{i\chi}=\chi(i)[/math], and the scalar products between the rows are then, as desired:

[[math]] \begin{eqnarray*} \lt R_i,R_j \gt &=&\sum_\chi\chi(i)\overline{\chi(j)}\\ &=&\sum_\chi\chi(i-j)\\ &=&|G|\cdot\delta_{ij} \end{eqnarray*} [[/math]]


(2) This follows from the well-known and elementary fact that, via the identifications [math]\mathbb Z_N=\widehat{\mathbb Z_N}=\{1,\ldots,N\}[/math], the Fourier coupling here is as follows, with [math]w=e^{2\pi i/N}[/math]:

[[math]] (i,j)\to w^{ij} [[/math]]


(3) We use here the following well-known formula, for the duals of products:

[[math]] \widehat{H\times K}=\widehat{H}\times\widehat{K} [[/math]]


At the level of the corresponding Fourier couplings, we obtain from this:

[[math]] F_{H\times K}=F_H\otimes F_K [[/math]]


Now by decomposing [math]G[/math] into cyclic groups, as in the statement, and by using (2) for the cyclic components, we obtain the formula in the statement.
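
As an illustration of (3), here is a small Python computation, for [math]G=\mathbb Z_2\times\mathbb Z_3[/math]: the character-table matrix, with elements and characters both ordered lexicographically, coincides with [math]F_2\otimes F_3[/math]. The helper names are ours.

```python
import numpy as np
from itertools import product

def fourier_matrix(N):
    w = np.exp(2j * np.pi / N)
    return np.array([[w ** (i * j) for j in range(N)] for i in range(N)])

def fourier_of_group(orders):
    """Fourier coupling of Z_{N_1} x ... x Z_{N_k}: rows indexed by group elements,
    columns by characters, both taken in lexicographic order."""
    elems = list(product(*[range(N) for N in orders]))
    return np.array([[np.prod([np.exp(2j * np.pi * a * g / N)
                               for a, g, N in zip(chi, elem, orders)])
                      for chi in elems]
                     for elem in elems])

FG = fourier_of_group((2, 3))
print(np.allclose(FG, np.kron(fourier_matrix(2), fourier_matrix(3))))   # True
```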

As a first application of the above result, we have:

Proposition

The Walsh matrix, [math]W_N[/math] with [math]N=2^n[/math], which is given by

[[math]] W_N=\begin{pmatrix}1&1\\1&-1\end{pmatrix}^{\otimes n} [[/math]]
is the Fourier matrix of the finite abelian group [math]K_N=\mathbb Z_2^n[/math].


Proof

We know that the first Walsh matrix is a Fourier matrix:

[[math]] W_2=F_2=F_{K_2} [[/math]]


Now by taking tensor powers we obtain from this that we have, for any [math]N=2^n[/math]:

[[math]] W_N =W_2^{\otimes n} =F_{K_2}^{\otimes n} =F_{K_2^n} =F_{K_N} [[/math]]


Thus, we are led to the conclusion in the statement.

By getting back to classification, we will need the following result of Di\c t\u a [14]:

Theorem

If [math]H\in M_M(\mathbb T)[/math] and [math]K\in M_N(\mathbb T)[/math] are Hadamard, then so are the following two matrices, for any choice of a parameter matrix [math]Q\in M_{M\times N}(\mathbb T)[/math]:

  • [math]H\otimes_QK\in M_{MN}(\mathbb T)[/math], given by [math](H\otimes_QK)_{ia,jb}=Q_{ib}H_{ij}K_{ab}[/math].
  • [math]H\!\!{\ }_Q\!\otimes K\in M_{MN}(\mathbb T)[/math], given by [math](H\!\!{\ }_Q\!\otimes K)_{ia,jb}=Q_{ja}H_{ij}K_{ab}[/math].

These are called right and left Diţă deformations of [math]H\otimes K[/math], with parameter [math]Q[/math].


Proof

These results follow from the same computations as in the usual tensor product case, the idea being that the [math]Q[/math] parameters will cancel:


(1) The rows [math]R_{ia}[/math] of the matrix [math]H\otimes_QK[/math] are indeed pairwise orthogonal, because:

[[math]] \begin{eqnarray*} \lt R_{ia},R_{kc} \gt &=&\sum_{jb}Q_{ib}H_{ij}K_{ab}\cdot\bar{Q}_{kb}\bar{H}_{kj}\bar{K}_{cb}\\ &=&M\delta_{ik}\sum_bK_{ab}\bar{K}_{cb}\\ &=&M\delta_{ik}\cdot N\delta_{ac}\\ &=&MN\delta_{ia,kc} \end{eqnarray*} [[/math]]


(2) The rows [math]L_{ia}[/math] of the matrix [math]H\!\!{\ }_Q\!\otimes K[/math] are orthogonal as well, because:

[[math]] \begin{eqnarray*} \lt L_{ia},L_{kc} \gt &=&\sum_{jb}Q_{ja}H_{ij}K_{ab}\cdot\bar{Q}_{jc}\bar{H}_{kj}\bar{K}_{cb}\\ &=&N\delta_{ac}\sum_jH_{ij}\bar{H}_{kj}\\ &=&N\delta_{ac}\cdot M\delta_{ik}\\ &=&MN\delta_{ia,kc} \end{eqnarray*} [[/math]]


Thus, both the matrices in the statement are Hadamard, as claimed.
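
Here is a direct Python implementation of the two deformations, as a sketch (the function names are ours), together with a numerical check that the output is Hadamard for a random parameter matrix [math]Q\in M_{2\times3}(\mathbb T)[/math].

```python
import numpy as np

def dita_right(H, K, Q):
    """(H (x)_Q K)_{ia,jb} = Q_{ib} H_{ij} K_{ab}, lexicographic double indices."""
    M, N = H.shape[0], K.shape[0]
    return np.array([[Q[i, b] * H[i, j] * K[a, b]
                      for j in range(M) for b in range(N)]
                     for i in range(M) for a in range(N)])

def dita_left(H, K, Q):
    """(H _Q(x) K)_{ia,jb} = Q_{ja} H_{ij} K_{ab}."""
    M, N = H.shape[0], K.shape[0]
    return np.array([[Q[j, a] * H[i, j] * K[a, b]
                      for j in range(M) for b in range(N)]
                     for i in range(M) for a in range(N)])

w = np.exp(2j * np.pi / 3)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])

rng = np.random.default_rng(2)
Q = np.exp(2j * np.pi * rng.random((2, 3)))   # random parameter matrix in M_{2x3}(T)

for L in (dita_right(F2, F3, Q), dita_left(F2, F3, Q)):
    print(np.allclose(L @ L.conj().T, 6 * np.eye(6)))   # True, True
```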

As a first observation, when the parameter matrix is the all-one matrix [math]\mathbb I\in M_{M\times N}(\mathbb T)[/math], we obtain in this way the usual tensor product of our matrices:

[[math]] H\otimes_{\mathbb I}K =H\!\!{\ }_{\mathbb I}\!\otimes K =H\otimes K [[/math]]


As a non-trivial example now, let us compute the right deformations of the Walsh matrix [math]W_4=F_2\otimes F_2[/math], with arbitrary parameter matrix [math]Q=(^p_r{\ }^q_s)[/math]. We have:

[[math]] \begin{eqnarray*} F_2\otimes_QF_2 &=&\begin{pmatrix} 1&1\\ 1&-1 \end{pmatrix} \otimes_{\begin{pmatrix} p&q\\ r&s \end{pmatrix}} \begin{pmatrix} 1&1\\ 1&-1 \end{pmatrix}\\ &=&\begin{pmatrix} p&q&p&q\\ p&-q&p&-q\\ r&s&-r&-s\\ r&-s&-r&s \end{pmatrix} \end{eqnarray*} [[/math]]


This follows indeed by carefully working out what happens, by using the lexicographic order on the double indices, as explained in chapter 1. To be more precise, the usual tensor product [math]W_4=F_2\otimes F_2[/math] appears as follows:

[[math]] W_4= \begin{pmatrix} ia\backslash jb&&00&01&10&11\\ \\ 00&&1&1&1&1\\ 01&&1&-1&1&-1\\ 10&&1&1&-1&-1\\ 11&&1&-1&-1&1 \end{pmatrix} [[/math]]


The corresponding values of the parameters [math]Q_{ib}[/math] to be inserted are as follows:

[[math]] (Q_{ib})=\begin{pmatrix} ia\backslash jb&&00&01&10&11\\ \\ 00&&Q_{00}&Q_{01}&Q_{00}&Q_{01}\\ 01&&Q_{00}&Q_{01}&Q_{00}&Q_{01}\\ 10&&Q_{10}&Q_{11}&Q_{10}&Q_{11}\\ 11&&Q_{10}&Q_{11}&Q_{10}&Q_{11} \end{pmatrix} [[/math]]


With the notation [math]Q=(^p_r{\ }^q_s)[/math], this latter matrix becomes:

[[math]] (Q_{ib})=\begin{pmatrix} ia\backslash jb&&00&01&10&11\\ \\ 00&&p&q&p&q\\ 01&&p&q&p&q\\ 10&&r&s&r&s\\ 11&&r&s&r&s \end{pmatrix} [[/math]]


Now by pointwise multiplying this latter matrix with the matrix [math]W_4[/math] given above, we obtain the announced formula for the deformed tensor product [math]F_2\otimes_QF_2[/math].


As for the left deformations of [math]W_4=F_2\otimes F_2[/math], once again with arbitrary parameter matrix [math]Q=(^p_r{\ }^q_s)[/math], these are given by a similar formula, as follows:

[[math]] \begin{eqnarray*} F_2\!\!{\ }_Q\!\otimes F_2 &=&\begin{pmatrix} 1&1\\ 1&-1 \end{pmatrix} \!{\ }_{\begin{pmatrix} p&q\\ r&s \end{pmatrix}}\!\otimes \begin{pmatrix} 1&1\\ 1&-1 \end{pmatrix}\\ &=&\begin{pmatrix} p&p&r&r\\ q&-q&s&-s\\ p&p&-r&-r\\ q&-q&-s&s \end{pmatrix} \end{eqnarray*} [[/math]]


Observe that this latter matrix is the transpose of [math]F_2\otimes_QF_2[/math]. However, this is something accidental, coming from the fact that [math]F_2[/math], and so [math]W_4[/math] as well, are self-transpose.


With the above constructions in hand, we have the following result:

Theorem

The only complex Hadamard matrices at [math]N=4[/math] are, up to the standard equivalence relation, the matrices

[[math]] F_4^s=\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&s&-1&-s\\ 1&-s&-1&s \end{pmatrix} [[/math]]
with [math]s\in\mathbb T[/math], which appear as right Di\c t\u a deformations of [math]W_4=F_2\otimes F_2[/math]. Moreover,

[[math]] F_4^s\sim F_4^{-s}\sim F_4^{\bar{s}}\sim F_4^{-\bar{s}} [[/math]]
so we can assume, up to equivalence, that we have [math]s=e^{it}[/math] with [math]t\in[0,\pi/2][/math].


Proof

There are several things to be done here, the idea being as follows:


(1) First of all, the matrix [math]F_4^s[/math] is indeed Hadamard, appearing from the construction in Theorem 5.12, assuming that the parameter matrix [math]Q\in M_2(\mathbb T)[/math] is dephased:

[[math]] Q=\begin{pmatrix}1&1\\1&s\end{pmatrix} [[/math]]


Observe also that, conversely, any right Di\c t\u a deformation of [math]W_4=F_2\otimes F_2[/math] is of this form. Indeed, if we consider such a deformation, with general parameter matrix [math]Q=(^p_r{\ }^q_s)[/math] as above, by dephasing we obtain an equivalence with [math]F_4^{s'}[/math], where [math]s'=ps/qr[/math]:

[[math]] \begin{eqnarray*} \begin{pmatrix} p&q&p&q\\ p&-q&p&-q\\ r&s&-r&-s\\ r&-s&-r&s \end{pmatrix} &\to& \begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ r/p&s/q&-r/p&-s/q\\ r/p&-s/q&-r/p&s/q \end{pmatrix}\\ &\to& \begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&ps/qr&-1&-ps/qr\\ 1&-ps/qr&-1&ps/qr \end{pmatrix} \end{eqnarray*} [[/math]]


(2) Summarizing, in what regards the first assertion, we must prove that any complex Hadamard matrix [math]H\in M_4(\mathbb T)[/math] is equivalent to one of the matrices [math]F_4^s[/math]. But this follows by using the same arguments as in the proof from the real case, from chapter 1, at [math]N=4[/math], and from the proof of Proposition 5.8. Indeed, let us first dephase our matrix:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&a&b&c\\ 1&d&e&f\\ 1&g&h&i\end{pmatrix} [[/math]]


We use now the fact, coming from plane geometry, that the solutions [math]x,y,z,t\in\mathbb T[/math] of the equation [math]x+y+z+t=0[/math] are as follows, with [math]p,q\in\mathbb T[/math]:

[[math]] \{x,y,z,t\}=\{p,q,-p,-q\} [[/math]]

In our case, we have [math]1+a+d+g=0[/math], and so up to a permutation of the last 3 rows, our matrix must look as follows, for a certain [math]s\in\mathbb T[/math]:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&e&f\\ 1&-s&h&i\end{pmatrix} [[/math]]


(3) In the case [math]s=\pm1[/math] we can permute the middle two columns, then repeat the same reasoning, and we end up with the matrix in the statement.


(4) In the case [math]s\neq\pm1[/math] we have [math]1+s+e+f=0[/math], and so [math]-1\in\{e,f\}[/math]. Up to a permutation of the last columns, we can assume [math]e=-1[/math], and our matrix becomes:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&h&i\end{pmatrix} [[/math]]


Similarly, from [math]1-s+h+i=0[/math] we deduce that [math]-1\in\{h,i\}[/math]. In the case [math]h=-1[/math] our matrix must look as follows, and we are led to the matrix in the statement:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&-1&i\end{pmatrix} [[/math]]


As for the remaining case [math]i=-1[/math], here our matrix must look as follows:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&-1&b&c\\ 1&s&-1&-s\\ 1&-s&h&-1\end{pmatrix} [[/math]]


We obtain from the last column [math]c=s[/math], then from the second row [math]b=-s[/math], then from the third column [math]h=s[/math], and so our matrix must be as follows:

[[math]] H=\begin{pmatrix}1&1&1&1\\ 1&-1&-s&s\\ 1&s&-1&-s\\ 1&-s&s&-1\end{pmatrix} [[/math]]


But, in order for the second and third row to be orthogonal, we must have [math]s\in\mathbb R[/math], and so [math]s=\pm1[/math], which contradicts our above assumption [math]s\neq\pm1[/math].


(5) Thus, we are done with the proof of the main assertion. Regarding now the second assertion, observe first that by permuting the last two rows we have:

[[math]] F_4^s=\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&s&-1&-s\\ 1&-s&-1&s \end{pmatrix} \sim\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&-s&-1&s\\ 1&s&-1&-s \end{pmatrix} =F_4^{-s} [[/math]]


Also, by starting with [math]F_4^{\bar{s}}[/math] and multiplying the last three rows by [math]-1,s,-s[/math], then interchanging the first two columns, and the last two columns, we have:

[[math]] F_4^{\bar{s}} =\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&\bar{s}&-1&-\bar{s}\\ 1&-\bar{s}&-1&\bar{s} \end{pmatrix} \sim\begin{pmatrix} 1&1&1&1\\ -1&1&-1&1\\ s&1&-s&-1\\ -s&1&s&-1 \end{pmatrix} \sim\begin{pmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&s&-1&-s\\ 1&-s&-1&s \end{pmatrix} =F_4^s [[/math]]


Thus, we are led to the final conclusion in the statement too.

As a comment here, Theorem 5.13 does not close the discussion at [math]N=4[/math], because we would still like to prove that the matrices [math]F_4^s[/math] are non-equivalent, up to the identification of [math]\{s,-s,\bar{s},-\bar{s}\}[/math]. However, this is something undoable with bare hands, so we must use a trick. To be more precise, we would like to have an invariant which distinguishes the matrices [math]F_4^s[/math], and a natural candidate here is the “complex glow”, which should be by definition the law, over the equivalence class, of the following quantity, called excess:

[[math]] E(H)=\sum_{ij}H_{ij} [[/math]]


We will discuss this later, in chapters 10-11, but as an advertisement for the material there, let us mention that the quantities to look at are the moments [math]\int|E|^{2p}[/math], which are Laurent polynomials in [math]s\in\mathbb T[/math], and with [math]p=3[/math] doing the job. More on this later.
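
For the curious reader, here is a brute-force Python sketch of these glow moments, under our reading of the above description, namely that the glow is the law of [math]E[/math] over independent uniform phases multiplying the rows and the columns. The phase average is expanded combinatorially, so this is only usable for small [math]N[/math] and [math]p[/math]; the values printed at [math]p=3[/math] are the ones which, according to the above, should separate the classes.

```python
import numpy as np
from itertools import product, permutations

def glow_moment(H, p):
    """Exact value of int |E|^{2p}, where E is the excess of diag(a) H diag(b),
    averaged over independent uniform phases a_i, b_j (brute-force expansion)."""
    N = H.shape[0]
    total = 0
    for ivec in product(range(N), repeat=p):
        for jvec in product(range(N), repeat=p):
            z = np.prod([H[i, j] for i, j in zip(ivec, jvec)])
            for kvec in set(permutations(ivec)):       # same row indices, reordered
                for lvec in set(permutations(jvec)):   # same column indices, reordered
                    total += z * np.prod([np.conj(H[k, l]) for k, l in zip(kvec, lvec)])
    return total.real

def F4s(s):
    return np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                     [1, s, -1, -s], [1, -s, -1, s]], dtype=complex)

print(glow_moment(F4s(1j), 1))           # = N^2 = 16, for any matrix with entries in T
for s in (1, 1j, np.exp(1j * np.pi / 4)):
    print(s, glow_moment(F4s(s), 3))     # p = 3 moments, as functions of s
```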

5c. Haagerup theorem

At [math]N=5[/math] now, the situation is considerably more complicated, with [math]F_5[/math] being the only matrix. The key technical result here, due to Haagerup [11], is as follows:

Proposition

Given an Hadamard matrix [math]H\in M_5(\mathbb T)[/math], chosen dephased,

[[math]] H=\begin{pmatrix} 1&1&1&1&1\\ 1&a&x&*&*\\ 1&y&b&*&*\\ 1&*&*&*&*\\ 1&*&*&*&* \end{pmatrix} [[/math]]
the numbers [math]a,b,x,y[/math] must satisfy the equation [math](x-y)(x-ab)(y-ab)=0[/math].


Proof

This is something quite surprising, and tricky, the proof in [11] being as follows. Let us look at the upper 3-row truncation of [math]H[/math], which is of the following form:

[[math]] H'=\begin{pmatrix} 1&1&1&1&1\\ 1&a&x&p&q\\ 1&y&b&r&s \end{pmatrix} [[/math]]


By using the orthogonality of the rows, we have:

[[math]] (1+a+x)(1+\bar{b}+\bar{y})(1+\bar{a}y+b\bar{x}) =-(p+q)(\bar{r}+\bar{s})(\bar{p}r+\bar{q}s) [[/math]]


On the other hand, by using [math]p,q,r,s\in\mathbb T[/math], we have:

[[math]] \begin{eqnarray*} (p+q)(\bar{r}+\bar{s})(\bar{p}r+\bar{q}s) &=&(r+p\bar{q}s+\bar{p}qr+s)(\bar{r}+\bar{s})\\ &=&1+p\bar{q}\bar{r}s+\bar{p}q+\bar{r}s+r\bar{s}+p\bar{q}+\bar{p}qr\bar{s}+1\\ &=&2Re(1+p\bar{q}+\bar{r}s+p\bar{q}\bar{r}s)\\ &=&2Re[(1+p\bar{q})(1+\bar{r}s)] \end{eqnarray*} [[/math]]


We conclude that we have the following formula, involving [math]a,b,x,y[/math] only:

[[math]] (1+a+x)(1+\bar{b}+\bar{y})(1+\bar{a}y+b\bar{x})\in\mathbb R [[/math]]


Now this is a product of type [math](1+\alpha)(1+\beta)(1+\gamma)[/math], with the first summand being 1, and with the last summand, namely [math]\alpha\beta\gamma[/math], being real as well, as shown by the above general [math]p,q,r,s\in\mathbb T[/math] computation. Thus, when expanding, we are left with:

[[math]] \begin{eqnarray*} &&(a+x)+(\bar{b}+\bar{y})+(\bar{a}y+b\bar{x})+(a+x)(\bar{b}+\bar{y})\\ &+&(a+x)(\bar{a}y+b\bar{x})+(\bar{b}+\bar{y})(\bar{a}y+b\bar{x})\in\mathbb R \end{eqnarray*} [[/math]]


By expanding all the products, our formula looks as follows:

[[math]] \begin{eqnarray*} &&a+x+\bar{b}+\bar{y}+\bar{a}y+b\bar{x}+a\bar{b}+a\bar{y}+\bar{b}x+x\bar{y}\\ &+&y+ab\bar{x}+\bar{a}xy+b+\bar{a}\bar{b}y+\bar{x}+\bar{a}+b\bar{x}\bar{y}\in\mathbb R \end{eqnarray*} [[/math]]


By removing from this all terms of type [math]z+\bar{z}[/math], we are left with:

[[math]] a\bar{b}+x\bar{y}+ab\bar{x}+\bar{a}\bar{b}y+\bar{a}xy+b\bar{x}\bar{y}\in\mathbb R [[/math]]


Now by getting back to our Hadamard matrix, all this remains true when transposing it, which amounts to interchanging [math]x\leftrightarrow y[/math]. Thus, we have as well:

[[math]] a\bar{b}+\bar{x}y+ab\bar{y}+\bar{a}\bar{b}x+\bar{a}xy+b\bar{x}\bar{y}\in\mathbb R [[/math]]


By subtracting now the two equations that we have, we obtain:

[[math]] x\bar{y}-\bar{x}y+ab(\bar{x}-\bar{y})+\bar{a}\bar{b}(y-x)\in\mathbb R [[/math]]


Now observe that this number, say [math]Z[/math], is purely imaginary, because [math]\bar{Z}=-Z[/math]. Thus our equation reads [math]Z=0[/math]. On the other hand, we have the following formula:

[[math]] \begin{eqnarray*} abxyZ &=&abx^2-aby^2+a^2b^2(y-x)+xy(y-x)\\ &=&(y-x)(a^2b^2+xy-ab(x+y))\\ &=&(y-x)(ab-x)(ab-y) \end{eqnarray*} [[/math]]


Thus, our equation [math]Z=0[/math] corresponds to the formula in the statement.
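
As a numerical sanity check of this relation, one can scramble [math]F_5[/math] into a random equivalent Hadamard matrix, dephase it, and verify that the above product vanishes; here is a sketch in Python.

```python
import numpy as np

N = 5
w = np.exp(2j * np.pi / N)
F5 = np.array([[w ** (i * j) for j in range(N)] for i in range(N)])

rng = np.random.default_rng(3)
P = np.eye(N)[rng.permutation(N)]                  # random row permutation
Q = np.eye(N)[rng.permutation(N)]                  # random column permutation
D1 = np.diag(np.exp(2j * np.pi * rng.random(N)))   # random row phases
D2 = np.diag(np.exp(2j * np.pi * rng.random(N)))   # random column phases
H = D1 @ P @ F5 @ Q @ D2                           # equivalent to F5, hence Hadamard

H = H / H[:, [0]]                                  # dephase
H = H / H[[0], :]
a, x, y, b = H[1, 1], H[1, 2], H[2, 1], H[2, 2]
print(abs((x - y) * (x - a * b) * (y - a * b)))    # ~ 0, up to rounding
```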

We are led in this way to the following theorem, also from Haagerup [11]:

Theorem

The only Hadamard matrix at [math]N=5[/math] is the Fourier matrix,

[[math]] F_5=\begin{pmatrix} 1&1&1&1&1\\ 1&w&w^2&w^3&w^4\\ 1&w^2&w^4&w&w^3\\ 1&w^3&w&w^4&w^2\\ 1&w^4&w^3&w^2&w \end{pmatrix} [[/math]]
with [math]w=e^{2\pi i/5}[/math], up to the standard equivalence relation for such matrices.


Proof

Assume that we have an Hadamard matrix [math]H\in M_5(\mathbb T)[/math], chosen dephased, and written as in Proposition 5.14, with emphasis on the upper left [math]2\times2[/math] subcorner:

[[math]] H=\begin{pmatrix} 1&1&1&1&1\\ 1&a&x&*&*\\ 1&y&b&*&*\\ 1&*&*&*&*\\ 1&*&*&*&* \end{pmatrix} [[/math]]


(1) We know from Proposition 5.14, applied to [math]H[/math] itself, and to the matrix obtained from [math]H[/math] by interchanging its second and third columns, which is still dephased, that the entries [math]a,b,x,y[/math] must satisfy the following equations:

[[math]] (a-b)(a-xy)(b-xy)=0 [[/math]]

[[math]] (x-y)(x-ab)(y-ab)=0 [[/math]]


Our first claim is that, by doing some combinatorics, we can actually obtain from this [math]a=b[/math] and [math]x=y[/math], up to the equivalence relation for the Hadamard matrices:

[[math]] H\sim\begin{pmatrix} 1&1&1&1&1\\ 1&a&x&*&*\\ 1&x&a&*&*\\ 1&*&*&*&*\\ 1&*&*&*&* \end{pmatrix} [[/math]]


Indeed, the above two equations lead to 9 possible cases, the first of which is, as desired, [math]a=b[/math] and [math]x=y[/math]. As for the remaining 8 cases, here again things are determined by 2 parameters, and in practice, we can always permute the first 3 rows and 3 columns, and then dephase our matrix, so that it takes the above special form.


(2) With this result in hand, the combinatorics of the scalar products between the first 3 rows, and between the first 3 columns as well, becomes something which is quite simple to investigate. By doing a routine study here, and then completing it with a study of the lower right [math]2\times2[/math] corner as well, we are led to 2 possible cases, as follows:

[[math]] H\sim\begin{pmatrix} 1&1&1&1&1\\ 1&a&b&c&d\\ 1&b&a&d&c\\ 1&c&d&a&b\\ 1&d&c&b&a \end{pmatrix}\quad,\quad H\sim\begin{pmatrix} 1&1&1&1&1\\ 1&a&b&c&d\\ 1&b&a&d&c\\ 1&c&d&b&a\\ 1&d&c&a&b \end{pmatrix} [[/math]]


(3) Our claim now is that the first case is in fact not possible. Indeed, we must have:

[[math]] \begin{eqnarray*} a+b+c+d&=&-1\\ 2Re(a\bar{b})+2Re(c\bar{d})&=&-1\\ 2Re(a\bar{c})+2Re(b\bar{d})&=&-1\\ 2Re(a\bar{d})+2Re(b\bar{c})&=&-1 \end{eqnarray*} [[/math]]


Now since [math]|Re(x)|\leq1[/math] for any [math]x\in\mathbb T[/math], we deduce from the second equation that:

[[math]] Re(a\bar{b})\leq 1/2 [[/math]]


In other words, the arc length between [math]a,b[/math] satisfies:

[[math]] \theta(a,b)\geq\pi/3 [[/math]]


The same argument applies to [math]c,d[/math], and to the other pairs of numbers in the last 2 equations. Now since our equations are invariant under permutations of [math]a,b,c,d[/math], we can assume that [math]a,b,c,d[/math] are ordered in this way on the unit circle, and by the above, separated by [math]\geq\pi/3[/math] arc lengths. But this tells us that we have the following inequalities:

[[math]] \theta(a,c)\geq 2\pi/3\quad,\quad \theta(b,d)\geq 2\pi/3 [[/math]]


These two inequalities give the following estimates:

[[math]] Re(a\bar{c})\leq-1/2\quad,\quad Re(b\bar{d})\leq-1/2 [[/math]]


But these estimates contradict the third equation. Thus, our claim is proved.


(4) Summarizing, we have proved so far that our matrix must be as follows:

[[math]] H\sim\begin{pmatrix} 1&1&1&1&1\\ 1&a&b&c&d\\ 1&b&a&d&c\\ 1&c&d&b&a\\ 1&d&c&a&b \end{pmatrix} [[/math]]


We are now in a position to finish. The orthogonality equations are as follows:

[[math]] \begin{eqnarray*} a+b+c+d&=&-1\\ 2Re(a\bar{b})+2Re(c\bar{d})&=&-1\\ a\bar{c}+c\bar{b}+b\bar{d}+d\bar{a}&=&-1 \end{eqnarray*} [[/math]]


The third equation can be written in the following equivalent form:

[[math]] \begin{eqnarray*} Re[(a+b)(\bar{c}+\bar{d})]&=&-1\\ Im[(a-b)(\bar{c}-\bar{d})]&=&0 \end{eqnarray*} [[/math]]


By using now [math]a,b,c,d\in\mathbb T[/math], we obtain from this:

[[math]] \frac{a+b}{a-b}\in i\mathbb R\quad,\quad \frac{c+d}{c-d}\in i\mathbb R [[/math]]


Thus we can find [math]s,t\in\mathbb R[/math] such that:

[[math]] a+b=is(a-b)\quad,\quad c+d=it(c-d) [[/math]]


By plugging in these values, our system of equations simplifies, as follows:

[[math]] \begin{eqnarray*} (a+b)+(c+d)&=&-1\\ |a+b|^2+|c+d|^2&=&3\\ (a+b)(\bar{c}+\bar{d})&=&-1 \end{eqnarray*} [[/math]]


Now observe that the last equation implies in particular that we have:

[[math]] |a+b|^2\cdot|c+d|^2=1 [[/math]]


Thus [math]|a+b|^2,|c+d|^2[/math] must be roots of the following polynomial:

[[math]] X^2-3X+1=0 [[/math]]


But this gives the following equality of sets:

[[math]] \Big\{|a+b|\,,\,|c+d|\Big\}=\left\{\frac{\sqrt{5}+1}{2}\,,\,\frac{\sqrt{5}-1}{2}\right\} [[/math]]


This is good news, because we are now into 5-th roots of unity. To be more precise, we have 2 cases to be considered, the first one being as follows, with [math]z\in\mathbb T[/math]:

[[math]] a+b=\frac{\sqrt{5}+1}{2}\,z\quad,\quad c+d=-\frac{\sqrt{5}-1}{2}\,z [[/math]]


From [math]a+b+c+d=-1[/math] we obtain [math]z=-1[/math], and by using this we obtain [math]b=\bar{a}[/math], [math]d=\bar{c}[/math]. Thus we have the following formulae:

[[math]] Re(a)=\cos(4\pi/5)\quad,\quad Re(c)=\cos(2\pi/5) [[/math]]


We conclude that we have [math]H\sim F_5[/math], as claimed. As for the second case, with [math]a,b[/math] and [math]c,d[/math] interchanged, this leads to [math]H\sim F_5[/math] as well.

5d. Further matrices

At [math]N=6[/math] now, the situation becomes very complicated, with lots of “exotic” solutions, and with the structure of the Hadamard manifold [math]X_6[/math] being not understood yet, despite years of efforts. In fact, [math]X_6[/math] looks as complicated as the real algebraic manifolds can get. The simplest examples of Hadamard matrices at [math]N=6[/math] are as follows:

Theorem

We have the following basic Hadamard matrices, at [math]N=6[/math]:

  • The Fourier matrix [math]F_6[/math].
  • The Diţă deformations of [math]F_2\otimes F_3[/math] and of [math]F_3\otimes F_2[/math].
  • The Haagerup matrix [math]H_6^q[/math].
  • The Tao matrix [math]T_6[/math].


Proof

All this is elementary, the idea, and formulae of the matrices, being as follows:


(1) This is something that we know well.


(2) Consider indeed the dephased Diţă deformations of [math]F_2\otimes F_3[/math] and [math]F_3\otimes F_2[/math]:

[[math]] F_6^{(rs)}=F_2 \otimes_{\begin{pmatrix} 1&1&1\\ 1&r&s \end{pmatrix}} F_3\qquad,\qquad F_6^{(^r_s)}=F_3 \otimes_{\begin{pmatrix} 1&1\\ 1&r\\ 1&s \end{pmatrix}}F_2 [[/math]]


Here [math]r,s[/math] are two parameters on the unit circle, [math]r,s\in\mathbb T[/math]. In matrix form:

[[math]] F_6^{(rs)}=\begin{pmatrix} 1&1&1&&1&1&1\\ 1&w&w^2&&1&w&w^2\\ 1&w^2&w&&1&w^2&w\\ \\ 1&r&s&&-1&-r&-s\\ 1&wr&w^2s&&-1&-wr&-w^2s\\ 1&w^2r&ws&&-1&-w^2r&-ws \end{pmatrix} [[/math]]


As for the other deformation, this is given by:

[[math]] F_6^{(^r_s)} =\begin{pmatrix} 1&1&&1&1&&1&1\\ 1&-1&&1&-1&&1&-1\\ \\ 1&r&&w&wr&&w^2&w^2r\\ 1&-r&&w&-wr&&w^2&-w^2r\\ \\ 1&s&&w^2&w^2s&&w&ws\\ 1&-s&&w^2&-w^2s&&w&-ws \end{pmatrix} [[/math]]


(3) The matrix here, from Haagerup's paper [11], is as follows, with [math]q\in\mathbb T[/math]:

[[math]] H_6^q=\begin{pmatrix} 1&1&1&1&1&1\\ 1&-1&i&i&-i&-i\\ 1&i&-1&-i&q&-q\\ 1&i&-i&-1&-q&q\\ 1&-i&\bar{q}&-\bar{q}&i&-1\\ 1&-i&-\bar{q}&\bar{q}&-1&i \end{pmatrix} [[/math]]


(4) The matrix here, from Tao's paper [15], is as follows, with [math]w=e^{2\pi i/3}[/math]:

[[math]] T_6=\begin{pmatrix} 1&1&1&1&1&1\\ 1&1&w&w&w^2&w^2\\ 1&w&1&w^2&w^2&w\\ 1&w&w^2&1&w&w^2\\ 1&w^2&w^2&w&1&w\\ 1&w^2&w&w^2&w&1 \end{pmatrix} [[/math]]


Observe that both [math]H_6^q[/math] and [math]T_6[/math] are indeed complex Hadamard matrices.
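
This last claim can be checked numerically; here is a quick Python verification of the matrices [math]H_6^q[/math] and [math]T_6[/math] as displayed above, for a random [math]q\in\mathbb T[/math].

```python
import numpy as np

def is_hadamard(H):
    N = H.shape[0]
    return np.allclose(np.abs(H), 1) and np.allclose(H @ H.conj().T, N * np.eye(N))

w = np.exp(2j * np.pi / 3)
q = np.exp(2j * np.pi * 0.123)    # any point on the unit circle
qb = np.conj(q)

H6q = np.array([
    [1,   1,   1,   1,   1,   1],
    [1,  -1,  1j,  1j, -1j, -1j],
    [1,  1j,  -1, -1j,   q,  -q],
    [1,  1j, -1j,  -1,  -q,   q],
    [1, -1j,  qb, -qb,  1j,  -1],
    [1, -1j, -qb,  qb,  -1,  1j],
])

T6 = np.array([
    [1, 1,    1,    1,    1,    1],
    [1, 1,    w,    w,    w**2, w**2],
    [1, w,    1,    w**2, w**2, w],
    [1, w,    w**2, 1,    w,    w**2],
    [1, w**2, w**2, w,    1,    w],
    [1, w**2, w,    w**2, w,    1],
])

print(is_hadamard(H6q), is_hadamard(T6))   # True True
```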

The matrices in Theorem 5.16 are “regular”, in the sense that the scalar products between rows appear in the simplest possible way, namely from vanishing sums of roots of unity, possibly rotated by a scalar. We will be back to this in chapter 6 below, with a result stating that these matrices are the only regular ones, at [math]N=6[/math].


In the non-regular case now, there are many known constructions at [math]N=6[/math]. Here is one such construction, found by Björck and Fröberg in [16]:

Proposition

The following is a complex Hadamard matrix,

[[math]] BF_6=\begin{pmatrix} 1&ia&-a&-i&-\bar{a}&i\bar{a}\\ i\bar{a}&1&ia&-a&-i&-\bar{a}\\ -\bar{a}&i\bar{a}&1&ia&-a&-i\\ -i&-\bar{a}&i\bar{a}&1&ia&-a\\ -a&-i&-\bar{a}&i\bar{a}&1&ia\\ ia&-a&-i&-\bar{a}&i\bar{a}&1 \end{pmatrix} [[/math]]
where [math]a\in\mathbb T[/math] is one of the roots of [math]a^2+(\sqrt{3}-1)a+1=0[/math].


Proof

The matrix in the statement is circulant, in the sense that the rows appear by cyclically permuting the first row. Thus, we only have to check that the first row is orthogonal to the other 5 rows. But this follows from [math]a^2+(\sqrt{3}-1)a+1=0[/math].
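
Here is a short numerical verification, in Python: one root of the quadratic is computed explicitly, then the circulant matrix built from the first row above is checked to be Hadamard.

```python
import numpy as np

c = np.sqrt(3) - 1
a = (-c + 1j * np.sqrt(4 - c**2)) / 2        # one root of a^2 + (sqrt(3)-1)a + 1 = 0
print(abs(a))                                # 1.0, up to rounding: a lies in T

row = np.array([1, 1j * a, -a, -1j, -np.conj(a), 1j * np.conj(a)])
BF6 = np.array([np.roll(row, k) for k in range(6)])     # circulant matrix
print(np.allclose(BF6 @ BF6.conj().T, 6 * np.eye(6)))   # True
```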

The obvious question here is how Björck and Fröberg were able to construct the above matrix. This was done via some general theory for the circulant Hadamard matrices, and some computer simulations. We will discuss this in chapter 9 below.


Further study in the [math]N=6[/math] case leads to fairly complicated things, and we have here, as an illustrating example, the following result of Beauchamp-Nicoara [17]:

Theorem

The self-adjoint [math]6\times6[/math] Hadamard matrices are, up to equivalence

[[math]] BN_6^q= \begin{pmatrix} 1&1&1&1&1&1\\ 1&-1&\bar{x}&-y&-\bar{x}&y\\ 1&x&-1&t&-t&-x\\ 1&-\bar{y}&\bar{t}&-1&\bar{y}&-\bar{t}\\ 1&-x&-\bar{t}&y&1&\bar{z}\\ 1&\bar{y}&-\bar{x}&-t&z&1 \end{pmatrix} [[/math]]
with [math]x,y,z,t\in\mathbb T[/math] depending on a parameter [math]q\in\mathbb T[/math], in a complicated way.


Proof

The study here can be done via a lot of work, the equations being:

[[math]] \begin{eqnarray*} x&=&\frac{1+2q+q^2-\sqrt{2}\sqrt{1+2q+2q^3+q^4}}{1+2q-q^2}\\ y&=&q\\ z&=&\frac{1+2q-q^2}{q(-1+2q+q^2)}\\ t&=&\frac{1+2q+q^2-\sqrt{2}\sqrt{1+2q+2q^3+q^4}}{-1+2q+q^2} \end{eqnarray*} [[/math]]


All this is quite technical, and we refer here to [17].

There are many other examples at [math]N=6[/math], and no classification is known. For a recent discussion on this subject, we refer to the survey paper of Tadej-Życzkowski [18].


Let us discuss now the case [math]N=7[/math]. We will restrict the attention to the case where the combinatorics comes from roots of unity. We use the following result of Szöllősi [19]:

Theorem

If [math]H\in M_N(\pm 1)[/math] with [math]N\geq 8[/math] is a dephased symmetric Hadamard matrix, and

[[math]] w=\frac{(1\pm i\sqrt{N-5})^2}{N-4} [[/math]]
then the following procedure yields a complex Hadamard matrix [math]M\in M_{N-1}(\mathbb T)[/math]:

  • Erase the first row and column of [math]H[/math].
  • Replace all diagonal [math]1[/math] entries with [math]-w[/math].
  • Replace all off-diagonal [math]-1[/math] entries with [math]w[/math].


Proof

We know from chapter 1 that the scalar product between any two rows of [math]H[/math], normalized as there, appears as follows:

[[math]] \begin{eqnarray*} P &=&\frac{N}{4}\cdot1\cdot1+\frac{N}{4}\cdot1\cdot(-1)+\frac{N}{4}\cdot(-1)\cdot1+\frac{N}{4}\cdot(-1)\cdot(-1)\\ &=&0 \end{eqnarray*} [[/math]]


Let us now perform the above operations (1,2,3), in reverse order. When replacing [math]-1\to w[/math], all across the matrix, the above scalar product becomes:

[[math]] \begin{eqnarray*} P' &=&\frac{N}{4}\cdot1\cdot1+\frac{N}{4}\cdot1\cdot\bar{w}+\frac{N}{4}\cdot w\cdot1+\frac{N}{4}\cdot w\cdot\bar{w}\\ &=&\frac{N}{2}(1+Re(w)) \end{eqnarray*} [[/math]]


Now when adjusting the diagonal via [math]w\to-1[/math] back, and [math]1\to-w[/math], this amounts to adding the quantity [math]-2(1+Re(w))[/math] to our product. Thus, our product becomes:

[[math]] \begin{eqnarray*} P'' &=&\left(\frac{N}{2}-2\right)(1+Re(w))\\ &=&\frac{N-4}{2}\left(1+\frac{6-N}{N-4}\right)\\ &=&1 \end{eqnarray*} [[/math]]


Finally, erasing the first row and column amounts to subtracting 1 from our scalar product. Thus, our scalar product becomes [math]P'''=1-1=0[/math], and we are done.
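
The construction is easy to implement; here is a sketch in Python (the function name is ours), applied to the Walsh matrix [math]W_8[/math], with the conclusion that the output is indeed a [math]7\times7[/math] complex Hadamard matrix.

```python
import numpy as np

def szollosi(H):
    """Szollosi construction: H is a dephased symmetric real Hadamard matrix,
    of order N >= 8; returns the resulting (N-1) x (N-1) complex matrix."""
    N = H.shape[0]
    w = (1 + 1j * np.sqrt(N - 5)) ** 2 / (N - 4)
    M = H[1:, 1:].astype(complex)            # (1) erase the first row and column
    diag = np.eye(N - 1, dtype=bool)
    ones_on_diag = diag & (M == 1)
    minus_off_diag = (~diag) & (M == -1)
    M[ones_on_diag] = -w                     # (2) diagonal 1 entries -> -w
    M[minus_off_diag] = w                    # (3) off-diagonal -1 entries -> w
    return M

F2 = np.array([[1, 1], [1, -1]])
W8 = np.kron(np.kron(F2, F2), F2)            # dephased, symmetric, N = 8
P7 = szollosi(W8)
print(np.allclose(np.abs(P7), 1))                      # unimodular entries
print(np.allclose(P7 @ P7.conj().T, 7 * np.eye(7)))    # rows pairwise orthogonal
```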

Observe that the number [math]w[/math] in the above statement is a root of unity precisely at [math]N=8[/math], where the only matrix satisfying the conditions in the statement is the Walsh matrix [math]W_8[/math]. So, let us apply, as in [19], the above construction to this matrix, namely:

[[math]] W_8=\begin{pmatrix} 1&1&1&1&1&1&1&1\\ 1&-1&1&-1&1&-1&1&-1\\ 1&1&-1&-1&1&1&-1&-1\\ 1&-1&-1&1&1&-1&-1&1\\ 1&1&1&1&-1&-1&-1&-1\\ 1&-1&1&-1&-1&1&-1&1\\ 1&1&-1&-1&-1&-1&1&1\\ 1&-1&-1&1&-1&1&1&-1 \end{pmatrix} [[/math]]


We obtain in this way the following matrix:

[[math]] W_8'=\begin{pmatrix} *&*&*&*&*&*&*&*\\ *&-1&1&w&1&w&1&w\\ *&1&-1&w&1&1&w&w\\ *&w&w&-w&1&w&w&1\\ *&1&1&1&-1&w&w&w\\ *&w&1&w&w&-w&w&1\\ *&1&w&w&w&w&-w&1\\ *&w&w&1&w&1&1&-1 \end{pmatrix} [[/math]]


The Hadamard matrix obtained in this way, by deleting the [math]*[/math] entries, is the Petrescu matrix [math]P_7[/math], found in [20]. Thus, we have the following result:

Theorem

[math]P_7[/math] is the unique matrix formed by roots of unity that can be obtained by the Szöllősi construction. It appears at [math]N=8[/math], from [math]H=W_8[/math]. Its formula is

[[math]] (P_7)_{ijk,abc}= \begin{cases} -w&{\rm if}\ (ijk)=(abc),\ ia+jb+kc=0(2)\\ w&{\rm if}\ (ijk)\neq(abc),\ ia+jb+kc\neq 0(2)\\ (-1)^{ia+jb+kc}&{\rm otherwise} \end{cases} [[/math]]
where [math]w=e^{2\pi i/3}[/math], and with the indices belonging to the set [math]\{0,1\}^3-\{(0,0,0)\}[/math].


Proof

We know that the Szöllősi construction maps [math]W_8\to P_7[/math]. Since the formula of the Fourier matrix [math]F_2[/math] is [math](F_2)_{ij}=(-1)^{ij}[/math], the formula of the Walsh matrix [math]W_8[/math] is:

[[math]] (W_8)_{ijk,abc}=(-1)^{ia+jb+kc} [[/math]]


But this gives the formula in the statement.

Now observe that we are in the quite special situation [math]H=F_2\otimes K[/math], with [math]K[/math] being dephased and symmetric. Thus, we can search for a one-parameter affine deformation [math]K(q)[/math] which is dephased and symmetric, and then build the following matrix:

[[math]] H(q)=\begin{pmatrix}K(q)&K\\ K&-K(\bar{q})\end{pmatrix} [[/math]]


In our case, such a deformation [math]K(q)=W_4(q)[/math] can be obtained by putting the [math]q[/math] parameters in the [math]2\times 2[/math] middle block. Now by performing the Szöllősi construction, with the parameters [math]q,\bar{q}[/math] left untouched, we obtain the parametric Petrescu matrix [20]:

Theorem

The following is a complex Hadamard matrix,

[[math]] P_7^q =\begin{pmatrix} -q&q&w&1&w&1&w\\ q&-q&w&1&1&w&w\\ w&w&-w&1&w&w&1\\ 1&1&1&-1&w&w&w\\ w&1&w&w&-\bar{q}w&\bar{q}w&1\\ 1&w&w&w&\bar{q}w&-\bar{q}w&1\\ w&w&1&w&1&1&-1 \end{pmatrix} [[/math]]
where [math]w=e^{2\pi i/3}[/math], and [math]q\in\mathbb T[/math].


Proof

This follows from the above considerations, or from a direct verification of the orthogonality of the rows, which uses either [math]1-1=0[/math], or [math]1+w+w^2=0[/math].
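
Such a direct verification is easily automated; here is a Python sketch checking the matrix above for a few values of [math]q\in\mathbb T[/math].

```python
import numpy as np

def P7q(q):
    w = np.exp(2j * np.pi / 3)
    qb = np.conj(q)
    return np.array([
        [-q,  q,  w,  1,       w,       1,  w],
        [ q, -q,  w,  1,       1,       w,  w],
        [ w,  w, -w,  1,       w,       w,  1],
        [ 1,  1,  1, -1,       w,       w,  w],
        [ w,  1,  w,  w, -qb * w,  qb * w,  1],
        [ 1,  w,  w,  w,  qb * w, -qb * w,  1],
        [ w,  w,  1,  w,       1,       1, -1],
    ])

for q in (1, 1j, np.exp(2j * np.pi * 0.37)):
    H = P7q(q)
    print(np.allclose(H @ H.conj().T, 7 * np.eye(7)))   # True, for every q in T
```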

Observe that the above matrix [math]P_7^q[/math] has the property of being “regular”, in the sense that the scalar products between rows appear from vanishing sums of roots of unity, possibly rotated by a scalar. We will be back to this in the next chapter, with the conjectural statement that [math]F_7,P_7^q[/math] are the only regular Hadamard matrices at [math]N=7[/math].

General references

Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].

References

  1. R.P. Feynman, R.B. Leighton and M. Sands, The Feynman lectures on physics III: quantum mechanics, Caltech (1966).
  2. D.J. Griffiths and D.F. Schroeter, Introduction to quantum mechanics, Cambridge Univ. Press (2018).
  3. S. Weinberg, Lectures on quantum mechanics, Cambridge Univ. Press (2012).
  4. P.A.M. Dirac, Principles of quantum mechanics, Oxford Univ. Press (1930).
  5. J. von Neumann, Mathematical foundations of quantum mechanics, Princeton Univ. Press (1955).
  6. H. Weyl, The theory of groups and quantum mechanics, Princeton Univ. Press (1931).
  7. I. Bengtsson and K. Życzkowski, Geometry of quantum states, Cambridge Univ. Press (2006).
  8. M.A. Nielsen and I.L. Chuang, Quantum computation and quantum information, Cambridge Univ. Press (2000).
  9. J. Watrous, The theory of quantum information, Cambridge Univ. Press (2018).
  10. A.T. Butson, Generalized Hadamard matrices, Proc. Amer. Math. Soc. 13 (1962), 894--898.
  11. U. Haagerup, Orthogonal maximal abelian [math]*[/math]-subalgebras of the [math]n\times n[/math] matrices and cyclic [math]n[/math]-roots, in “Operator algebras and quantum field theory”, International Press (1997), 296--323.
  12. V.F.R. Jones, Planar algebras I (1999).
  13. S. Popa, Orthogonal pairs of [math]*[/math]-subalgebras in finite von Neumann algebras, J. Operator Theory 9 (1983), 253--268.
  14. P. Diţă, Some results on the parametrization of complex Hadamard matrices, J. Phys. A 37 (2004), 5355--5374.
  15. T. Tao, Fuglede's conjecture is false in 5 and higher dimensions, Math. Res. Lett. 11 (2004), 251--258.
  16. G. Björck and R. Fröberg, A faster way to count the solutions of inhomogeneous systems of algebraic equations, with applications to cyclic [math]n[/math]-roots, J. Symbolic Comput. 12 (1991), 329--336.
  17. K. Beauchamp and R. Nicoara, Orthogonal maximal abelian [math]*[/math]-subalgebras of the [math]6\times 6[/math] matrices, Linear Algebra Appl. 428 (2008), 1833--1853.
  18. W. Tadej and K. Życzkowski, A concise guide to complex Hadamard matrices, Open Syst. Inf. Dyn. 13 (2006), 133--177.
  19. F. Szöllősi, Exotic complex Hadamard matrices and their equivalence, Cryptogr. Commun. 2 (2010), 187--198.
  20. M. Petrescu, Existence of continuous families of complex Hadamard matrices of certain prime dimensions and related results, Ph.D. Thesis, UCLA (1997).