Revision as of 23:15, 21 April 2025 by Bot

Generalizations



15a. Unitary entries

We have seen in the previous chapter that associated to any complex Hadamard matrix [math]H\in M_N(\mathbb C)[/math] is a certain quantum permutation group [math]G\subset S_N^+[/math], which describes the symmetries of the matrix. The main example for this construction [math]H\to G[/math] is, as it normally should, [math]F_N\to\mathbb Z_N[/math], and more generally, [math]F_G\to G[/math]. Moreover, we have seen that all this is related to interesting questions from operator algebras, making a potential link with mathematical physics. We discuss here two extensions of the construction [math]H\to G[/math], which are both quite interesting, each having its own set of motivations, as follows:


(1) A first idea is that of using Hadamard matrices with noncommutative entries, [math]H\in M_N(A)[/math], with [math]A[/math] being a [math]C^*[/math]-algebra. The motivation here comes from the continuous families of complex Hadamard matrices, where [math]A=C(X)[/math], and also from all sorts of other constructions involving the complex Hadamard matrices, such as the MUB.


(2) A second idea is that of using partial Hadamard matrices (PHM), with usual complex entries, [math]H\in M_{M\times N}(\mathbb C)[/math]. Here the motivation comes from the theory of the PHM, developed at various places in this book, and also from the theory of the resulting symmetry-encoding objects [math]G[/math], which are certain interesting quantum semigroups.


Technically speaking now, looking at (1) and (2) above certainly suggests that there is room for some unification here, by talking about partial complex Hadamard matrices with noncommutative entries. However, this is something quite theoretical, which has not been done yet. So this is again an interesting question to be put on your to-do list. With the warning however that, before going head-first into any kind of generalization, you should have some clear motivations, preferably coming from physics. Without clear motivations, if you just want to generalize the construction [math]H\to G[/math], you will most likely end up with some terribly complicated and abstract piece of algebra, having no uses.


Back to work now, let us begin by discussing (1). Let [math]A[/math] be an arbitrary [math]C^*[/math]-algebra. For most of the applications [math]A[/math] will be a commutative algebra, [math]A=C(X)[/math] with [math]X[/math] being a compact space, or a matrix algebra, [math]A=M_K(\mathbb C)[/math] with [math]K\in\mathbb N[/math]. We will sometimes consider, as a joint generalization, the random matrix algebras [math]A=M_K(C(X))[/math]. Two row or column vectors over [math]A[/math], say [math]a=(a_1,\ldots,a_N)[/math] and [math]b=(b_1,\ldots,b_N)[/math], both written horizontally, are called orthogonal when:

[[math]] \sum_ia_ib_i^*=\sum_ia_i^*b_i=0 [[/math]]


Observe that, by applying the involution, we have as well:

[[math]] \sum_ib_ia_i^*=\sum_ib_i^*a_i=0 [[/math]]


With this orthogonality notion in hand, we can formulate:

Definition

An Hadamard matrix over an arbitrary [math]C^*[/math]-algebra [math]A[/math] is a square matrix [math]H\in M_N(A)[/math] such that:

  • All the entries of [math]H[/math] are unitaries, [math]H_{ij}\in U(A)[/math].
  • These entries commute on all rows and all columns of [math]H[/math].
  • The rows and columns of [math]H[/math] are pairwise orthogonal.

As a first remark, in the simplest case [math]A=\mathbb C[/math] the unitary group is the unit circle in the complex plane, [math]U(\mathbb C)=\mathbb T[/math], and we obtain the usual complex Hadamard matrices. In the general commutative case, [math]A=C(X)[/math] with [math]X[/math] a compact space, our Hadamard matrix must be formed of “fibers”, one for each point [math]x\in X[/math]. Therefore, we obtain:

Proposition

The Hadamard matrices [math]H\in M_N(A)[/math] over a commutative algebra [math]A=C(X)[/math] are exactly the families of complex Hadamard matrices of type

[[math]] H=\left\{H^x\Big|x\in X\right\} [[/math]]
with [math]H^x[/math] depending continuously on the parameter [math]x\in X[/math].


Show Proof

This follows indeed by combining the above two observations. Observe that, when we wrote [math]A=C(X)[/math] in the above statement, we used the Gelfand theorem.
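As a concrete illustration of the above proposition, here is a minimal numerical sketch, assuming numpy: we take the standard one-parameter family of [math]4\times 4[/math] complex Hadamard matrices, parametrized by a point [math]q[/math] on the unit circle, and check fiberwise that each member is indeed complex Hadamard. The particular family used here is an illustrative choice, not the only one.

```python
import numpy as np

# Fiberwise check of a continuous family {H^q | |q| = 1} of complex
# Hadamard matrices, viewed as a single Hadamard matrix over A = C(T).

def family_member(q):
    """A 4x4 complex Hadamard matrix H^q, for |q| = 1 (a standard
    one-parameter deformation, used here for illustration)."""
    i = 1j
    return np.array([
        [1,    1,  1,    1],
        [1,  i*q, -1, -i*q],
        [1,   -1,  1,   -1],
        [1, -i*q, -1,  i*q],
    ], dtype=complex)

def is_complex_hadamard(H):
    """Unimodular entries, and H H* = N 1_N."""
    N = H.shape[0]
    unimodular = np.allclose(np.abs(H), 1)
    orthogonal = np.allclose(H @ H.conj().T, N * np.eye(N))
    return unimodular and orthogonal

# every fiber of the family is a complex Hadamard matrix
fibers_ok = all(is_complex_hadamard(family_member(np.exp(1j * t)))
                for t in np.linspace(0, 2 * np.pi, 25))
```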

Let us comment now on the above axioms. For [math]U,V\in U(A)[/math] the commutation relation [math]UV=VU[/math] implies as well the following commutation relations:

[[math]] UV^*=V^*U\quad,\quad U^*V=VU^*\quad,\quad U^*V^*=V^*U^* [[/math]]


Thus the axiom (2) tells us that the [math]C^*[/math]-algebras [math]R_1,\ldots,R_N[/math] and [math]C_1,\ldots,C_N[/math] generated by the rows and the columns of [math]H[/math] must all be commutative. In view of this, we will be particularly interested in what follows in the following type of matrices:

Definition

An Hadamard matrix [math]H\in M_N(A)[/math] is called “non-classical” if the [math]C^*[/math]-algebra generated by its coefficients is not commutative.

Let us comment now on the axiom (3). According to our definition of orthogonality there are 4 sets of relations to be satisfied, namely for any [math]i\neq k[/math] we must have:

[[math]] \begin{eqnarray*} \sum_jH_{ij}H_{kj}^* &=&\sum_jH_{ij}^*H_{kj}\\ &=&\sum_jH_{ji}H_{jk}^*\\ &=&\sum_jH_{ji}^*H_{jk}\\ &=&0 \end{eqnarray*} [[/math]]


Now since by axiom (1) all the entries [math]H_{ij}[/math] are known to be unitaries, we can replace this formula by the following more general equation, valid for any [math]i,k[/math]:

[[math]] \begin{eqnarray*} \sum_jH_{ij}H_{kj}^* &=&\sum_jH_{ij}^*H_{kj}\\ &=&\sum_jH_{ji}H_{jk}^*\\ &=&\sum_jH_{ji}^*H_{jk}\\ &=&N\delta_{ik} \end{eqnarray*} [[/math]]


The point now is that everything simplifies in terms of the following matrices:

[[math]] H=(H_{ij})\quad,\quad H^*=(H_{ji}^*)\quad,\quad H^t=(H_{ji})\quad,\quad \bar{H}=(H_{ij}^*) [[/math]]


Indeed, the above equations simply read:

[[math]] HH^*=H^*H=H^t\bar{H}=\bar{H}H^t=N1_N [[/math]]


So, let us recall now that a square matrix [math]H\in M_N(A)[/math] is called “biunitary” if both [math]H[/math] and [math]H^t[/math] are unitaries. In the particular case where [math]A[/math] is commutative, [math]A=C(X)[/math], we have “[math]H[/math] unitary [math]\implies[/math] [math]H^t[/math] unitary”, so in this case biunitary means of course unitary. In terms of this notion, we have the following reformulation of Definition 15.1:

Proposition

Assume that [math]H\in M_N(A)[/math] has unitary entries, which commute on all rows and all columns of [math]H[/math]. Then the following are equivalent:

  • [math]H[/math] is Hadamard.
  • [math]H/\sqrt{N}[/math] is biunitary.
  • [math]HH^*=H^t\bar{H}=N1_N[/math].


Show Proof

This basically follows from the above discussion, as follows:


-- We know from definitions that the condition (1) in the statement happens if and only if the axiom (3) in Definition 15.1 is satisfied.


-- By the above discussion, it follows that this axiom (3) in Definition 15.1 is equivalent to the condition (2) in the statement.


-- Regarding now the equivalence with the condition (3) in the statement, this follows from the commutation axiom (2) in Definition 15.1.


-- By putting now everything together, we see that all the conditions in the statement are indeed equivalent.
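In the scalar case [math]A=\mathbb C[/math] the equivalent conditions above can be checked directly; here is a minimal numerical sketch, assuming numpy, for the Fourier matrix [math]F_3[/math]:

```python
import numpy as np

# Scalar-case check of the above equivalences, for H = F_3:
# H / sqrt(N) is biunitary, and H H* = H^t conj(H) = N 1_N.

w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1],
               [1, w, w**2],
               [1, w**2, w]], dtype=complex)
N = 3

U = F3 / np.sqrt(N)
# biunitary: both U and its transpose are unitary
biunitary = (np.allclose(U @ U.conj().T, np.eye(N)) and
             np.allclose(U.T @ U.T.conj().T, np.eye(N)))

# condition (3) in the statement
condition3 = (np.allclose(F3 @ F3.conj().T, N * np.eye(N)) and
              np.allclose(F3.T @ F3.conj(), N * np.eye(N)))
```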

Observe now that if [math]H=(H_{ij})[/math] is Hadamard, then so are the following matrices:

[[math]] \bar{H}=(H_{ij}^*)\quad,\quad H^t=(H_{ji})\quad,\quad H^*=(H_{ji}^*) [[/math]]


In addition, we have the following result:

Proposition

The class of Hadamard matrices [math]H\in M_N(A)[/math] is stable under:

  • Permuting the rows or columns.
  • Multiplying the rows or columns by central unitaries.

When successively combining these two operations, we obtain an equivalence relation on the class of Hadamard matrices [math]H\in M_N(A)[/math].


Show Proof

This is clear from definitions, exactly as in the usual complex Hadamard matrix case. Observe that in the commutative case [math]A=C(X)[/math] any unitary is central, so we can multiply the rows or columns by any unitary. In particular in this case we can always “dephase” the matrix, i.e. assume that its first row and column consist of [math]1[/math] entries. Note that this operation is not allowed in the general case.
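In the scalar case the dephasing operation mentioned above can be implemented directly; here is a minimal sketch, assuming numpy, where we scramble [math]F_3[/math] by arbitrary phases on rows and columns, then dephase, and check that the result is a dephased complex Hadamard matrix:

```python
import numpy as np

def dephase(H):
    """Multiply the columns, then the rows, by unimodular scalars,
    so that the first row and first column consist of 1 entries."""
    H = H / H[0, :]            # make the first row equal to 1
    H = H / H[:, 0][:, None]   # make the first column equal to 1
    return H

w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]], dtype=complex)

# scramble F3 by arbitrary phases on rows and columns
rng = np.random.default_rng(0)
D_r = np.diag(np.exp(2j * np.pi * rng.random(3)))
D_c = np.diag(np.exp(2j * np.pi * rng.random(3)))
H = D_r @ F3 @ D_c

K = dephase(H)
still_hadamard = (np.allclose(np.abs(K), 1) and
                  np.allclose(K @ K.conj().T, 3 * np.eye(3)))
```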

Let us discuss now the tensor product operation. We have here:

Proposition

Let [math]H\in M_N(A)[/math] and [math]K\in M_M(A)[/math] be Hadamard matrices, and assume that [math] \lt H_{ij} \gt [/math] commutes with [math] \lt K_{ab} \gt [/math]. Then the “tensor product”

[[math]] H\otimes K\in M_{NM}(A) [[/math]]
given by [math](H\otimes K)_{ia,jb}=H_{ij}K_{ab}[/math], is an Hadamard matrix.


Show Proof

This follows from definitions, and is as well a consequence of the more general Theorem 15.7 below, which will be proved in full detail.

Following Diţă [1], the deformed tensor products can be constructed as follows:

Theorem

Let [math]H\in M_N(A)[/math] and [math]K\in M_M(A)[/math] be Hadamard matrices, and [math]Q\in M_{N\times M}(U_A)[/math]. Then the “deformed tensor product” [math]H\otimes_QK\in M_{NM}(A)[/math], given by

[[math]] (H\otimes_QK)_{ia,jb}=Q_{ib}H_{ij}K_{ab} [[/math]]
is an Hadamard matrix as well, provided that the entries of [math]Q[/math] commute on rows and columns, and that the algebras [math] \lt H_{ij} \gt [/math], [math] \lt K_{ab} \gt [/math], [math] \lt Q_{ib} \gt [/math] pairwise commute.


Show Proof

First, the entries of [math]L=H\otimes_QK[/math] are unitaries, and its rows are orthogonal:

[[math]] \begin{eqnarray*} \sum_{jb}L_{ia,jb}L_{kc,jb}^* &=&\sum_{jb}Q_{ib}H_{ij}K_{ab}\cdot Q_{kb}^*K_{cb}^*H_{kj}^*\\ &=&N\delta_{ik}\sum_bQ_{ib}K_{ab}\cdot Q_{kb}^*K_{cb}^*\\ &=&N\delta_{ik}\sum_bK_{ab}K_{cb}^*\\ &=&NM\cdot\delta_{ik}\delta_{ac} \end{eqnarray*} [[/math]]


The orthogonality of columns can be checked as follows:

[[math]] \begin{eqnarray*} \sum_{ia}L_{ia,jb}L_{ia,kc}^* &=&\sum_{ia}Q_{ib}H_{ij}K_{ab}\cdot Q_{ic}^*K_{ac}^*H_{ik}^*\\ &=&M\delta_{bc}\sum_iQ_{ib}H_{ij}\cdot Q_{ic}^*H_{ik}^*\\ &=&M\delta_{bc}\sum_iH_{ij}H_{ik}^*\\ &=&NM\cdot\delta_{jk}\delta_{bc} \end{eqnarray*} [[/math]]


For the commutation on rows we use in addition the commutation on rows for [math]Q[/math]:

[[math]] \begin{eqnarray*} L_{ia,jb}L_{kc,jb} &=&Q_{ib}H_{ij}K_{ab}\cdot Q_{kb}H_{kj}K_{cb}\\ &=&Q_{ib}Q_{kb}\cdot H_{ij}H_{kj}\cdot K_{ab}K_{cb}\\ &=&Q_{kb}Q_{ib}\cdot H_{kj}H_{ij}\cdot K_{cb}K_{ab}\\ &=&Q_{kb}H_{kj}K_{cb}\cdot Q_{ib}H_{ij}K_{ab}\\ &=&L_{kc,jb}L_{ia,jb} \end{eqnarray*} [[/math]]


The commutation on columns is similar, using the commutation on columns for [math]Q[/math]:

[[math]] \begin{eqnarray*} L_{ia,jb}L_{ia,kc} &=&Q_{ib}H_{ij}K_{ab}\cdot Q_{ic}H_{ik}K_{ac}\\ &=&Q_{ib}Q_{ic}\cdot H_{ij}H_{ik}\cdot K_{ab}K_{ac}\\ &=&Q_{ic}Q_{ib}\cdot H_{ik}H_{ij}\cdot K_{ac}K_{ab}\\ &=&Q_{ic}H_{ik}K_{ac}\cdot Q_{ib}H_{ij}K_{ab}\\ &=&L_{ia,kc}L_{ia,jb} \end{eqnarray*} [[/math]]


Thus all the axioms are satisfied, and [math]L[/math] is indeed Hadamard.

As a basic example, we have the following construction:

Proposition

The following matrix is Hadamard,

[[math]] M=\begin{pmatrix}x&y&x&y\\ x&-y&x&-y\\ z&t&-z&-t\\ z&-t&-z&t\end{pmatrix} [[/math]]
for any unitaries [math]x,y,z,t[/math] satisfying the following condition:

[[math]] [x,y]=[x,z]=[y,t]=[z,t]=0 [[/math]]


Show Proof

This follows indeed from Theorem 15.7, because we have:

[[math]] \begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\otimes_{\begin{pmatrix}x&y\\ z&t\end{pmatrix}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix} =\begin{pmatrix}x&y&x&y\\ x&-y&x&-y\\ z&t&-z&-t\\ z&-t&-z&t\end{pmatrix} [[/math]]


In addition, the commutation relations in Theorem 15.7 are satisfied indeed.
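As a hands-on check of the above proposition, here is a sketch assuming numpy, over the algebra [math]A=M_2(\mathbb C)[/math]: we take [math]x,y,z,t[/math] to be diagonal unitaries (these automatically satisfy the required commutation relations, since diagonal matrices commute), with the particular phases being an illustrative choice, and verify the Hadamard property from Definition 15.1 directly:

```python
import numpy as np

def d(*phases):
    """A diagonal unitary with the given phases."""
    return np.diag(np.exp(1j * np.array(phases)))

# illustrative diagonal unitaries; any choice works here
x, y, z, t = d(0.1, 0.2), d(0.3, 0.4), d(0.5, 0.6), d(0.7, 0.8)

M = [[ x,  y,  x,  y],
     [ x, -y,  x, -y],
     [ z,  t, -z, -t],
     [ z, -t, -z,  t]]

I2 = np.eye(2)

def rows_orthogonal(M):
    """Check sum_j M_ij M_kj* = N delta_ik over the algebra M_2(C)."""
    N = len(M)
    for i in range(N):
        for k in range(N):
            s = sum(M[i][j] @ M[k][j].conj().T for j in range(N))
            if not np.allclose(s, N * I2 if i == k else 0 * I2):
                return False
    return True

# columns of M, as lists of 2x2 blocks
cols = [[M[i][j] for i in range(4)] for j in range(4)]
is_hadamard = rows_orthogonal(M) and rows_orthogonal(cols)
```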

The usual complex Hadamard matrices were classified by Haagerup in [2] at [math]N=2,3,4,5[/math]. In what follows we investigate the case of the general Hadamard matrices. We use the equivalence relation constructed in Proposition 15.5. We first have:

Proposition

The [math]2\times 2[/math] Hadamard matrices are all classical, and are all equivalent to the Fourier matrix [math]F_2[/math].


Show Proof

Consider indeed an arbitrary [math]2\times 2[/math] Hadamard matrix:

[[math]] H=\begin{pmatrix}A&B\\ C&D\end{pmatrix} [[/math]]


We already know that [math]A,D[/math] each commute with [math]B,C[/math]. Also, we have:

[[math]] AB^*+CD^*=0 [[/math]]


We deduce that [math]A=-CD^*B[/math] commutes with [math]D[/math], and that [math]C=-AB^*D[/math] commutes with [math]B[/math]. Thus our matrix is classical, and since all the unitaries involved are now central, we can dephase our matrix, which must therefore be the Fourier matrix [math]F_2[/math].

Let us discuss now the case [math]N=3[/math]. Here the classification in the classical case uses the key fact that any formula of type [math]a+b+c=0[/math], with [math]|a|=|b|=|c|=1[/math], must be, up to a permutation of terms, a “trivial” formula of the following type, with [math]j=e^{2\pi i/3}[/math]:

[[math]] a+ja+j^2a=0 [[/math]]


Here is the noncommutative analogue of this simple fact:

Proposition

Assume that we have a vanishing sum of unitaries:

[[math]] a+b+c=0 [[/math]]

Then this sum must be of the following special type,

[[math]] a+wa+w^2a=0 [[/math]]
with [math]w[/math] being a unitary satisfying [math]1+w+w^2=0[/math].


Show Proof

Since [math]-c=a+b[/math] is unitary we have the following formula:

[[math]] (a+b)(a+b)^*=1 [[/math]]


Thus we have [math]ab^*+ba^*=-1[/math], and so we obtain:

[[math]] ab^*ba^*+(ba^*)^2=-ba^* [[/math]]


But with [math]w=ba^*[/math] we obtain from this equality that we have:

[[math]] 1+w^2=-w [[/math]]


Thus, we are led to the conclusion in the statement.
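A noncommutative instance of the above proposition can be checked numerically; here is a minimal sketch assuming numpy, with [math]w[/math] a nonscalar unitary satisfying [math]1+w+w^2=0[/math], and [math]a[/math] a unitary not commuting with [math]w[/math]:

```python
import numpy as np

om = np.exp(2j * np.pi / 3)
w = np.diag([om, om.conjugate()])              # unitary, 1 + w + w^2 = 0
a = np.array([[0, 1], [1, 0]], dtype=complex)  # unitary, not commuting with w

terms = [a, w @ a, w @ w @ a]                  # a + wa + w^2 a

all_unitary = all(np.allclose(t @ t.conj().T, np.eye(2)) for t in terms)
vanishing = np.allclose(sum(terms), 0)
relation = np.allclose(np.eye(2) + w + w @ w, 0)
```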

With the above result in hand, we can start the [math]N=3[/math] classification. We first have the following technical result, that we will improve later on:

Proposition

Any [math]3\times 3[/math] Hadamard matrix must be of the form

[[math]] H=\begin{pmatrix}a&b&c\\ ua&uv^*w^2vb&uv^*wvc\\ va&wvb&w^2vc\end{pmatrix} [[/math]]
with [math]w[/math] being subject to the equation [math]1+w+w^2=0[/math].


Show Proof

Consider an arbitrary Hadamard matrix [math]H\in M_3(A)[/math]. We define the unitaries [math]a,b,c,u,v,w[/math] by requiring that the corresponding part of the matrix is exactly as in the statement, as follows:

[[math]] H=\begin{pmatrix}a&b&c\\ ua&x&y\\ va&wvb&z\end{pmatrix} [[/math]]


Let us look first at the scalar product between the first and third row:

[[math]] vaa^*+wvbb^*+zc^*=0 [[/math]]


By simplifying we obtain [math]v+wv+zc^*=0[/math], and by using Proposition 15.10 we conclude that we have [math]1+w+w^2=0[/math], and that [math]zc^*=w^2v[/math], and so [math]z=w^2vc[/math], as claimed. The scalar products of the first column with the second and third ones are:

[[math]] a^*b+a^*u^*x+a^*v^*wvb=0 [[/math]]

[[math]] a^*c+a^*u^*y+a^*v^*w^2vc=0 [[/math]]


By multiplying to the left by [math]va[/math], and to the right by [math]b^*v^*[/math] and [math]c^*v^*[/math], we obtain:

[[math]] 1+vu^*xb^*v^*+w=0 [[/math]]

[[math]] 1+vu^*yc^*v^*+w^2=0 [[/math]]


Now by using Proposition 15.10 again, we obtain:

[[math]] vu^*xb^*v^*=w^2 [[/math]]

[[math]] vu^*yc^*v^*=w [[/math]]


Thus [math]x=uv^*w^2vb[/math] and [math]y=uv^*wvc[/math], and we are done.

We can already deduce now a first classification result, as follows:

Proposition

There is no Hadamard matrix [math]H\in M_3(A)[/math] with self-adjoint entries.


Show Proof

We use Proposition 15.11. Since the entries are self-adjoint unitaries, they are involutions, and we have:

[[math]] a^2=b^2=c^2=u^2=v^2=(uw)^2=(vw)^2=1 [[/math]]


It follows that our matrix is in fact of the following form:

[[math]] H=\begin{pmatrix}a&b&c\\ ua&uwb&uw^2c\\ va&wvb&w^2vc\end{pmatrix} [[/math]]


The commutation between [math]H_{22},H_{23}[/math] reads:

[[math]] \begin{eqnarray*} [uwb,wvb]=0 &\implies&[uw,wv]=0\\ &\implies&uwwv=wvuw\\ &\implies&uvw=vuw^2\\ &\implies&w=1 \end{eqnarray*} [[/math]]


Thus we have reached a contradiction, and we are done.

Let us go back now to the general case. We have the following technical result, which refines Proposition 15.11, and which will be in turn further refined, later on:

Proposition

Any [math]3\times 3[/math] Hadamard matrix must be of the form

[[math]] H=\begin{pmatrix}a&b&c\\ ua&w^2ub&wuc\\ va&wvb&w^2vc\end{pmatrix} [[/math]]
where [math](a,b,c)[/math] and [math](u,v,w)[/math] are triples of commuting unitaries, and [math]1+w+w^2=0[/math].


Show Proof

We use Proposition 15.11. With [math]e=uv^*[/math], the matrix there becomes:

[[math]] H=\begin{pmatrix}a&b&c\\ eva&ew^2vb&ewvc\\ va&wvb&w^2vc\end{pmatrix} [[/math]]


The commutation relation between [math]H_{22},H_{32}[/math] reads:

[[math]] \begin{eqnarray*} [ew^2vb,wvb]=0 &\implies&[ew^2v,wv]=0\\ &\implies&ew^2vwv=wvew^2v\\ &\implies&ew^2v=wvew\\ &\implies&[ew,wv]=0 \end{eqnarray*} [[/math]]


Similarly, the commutation between [math]H_{23},H_{33}[/math] reads:

[[math]] \begin{eqnarray*} [ewvc,w^2vc]=0 &\implies&[ewv,w^2v]=0\\ &\implies&ewvw^2v=w^2vewv\\ &\implies&ewv=w^2vew^2\\ &\implies&[ew^2,w^2v]=0 \end{eqnarray*} [[/math]]


We can rewrite this latter relation by using the formula [math]w^2=-1-w[/math], and then, by further processing it by using the first relation, we obtain:

[[math]] \begin{eqnarray*} [e(1+w),(1+w)v]=0 &\implies&[e,wv]+[ew,v]=0\\ &\implies&2ewv-wve-vew=0\\ &\implies&ewv=\frac{1}{2}(wve+vew) \end{eqnarray*} [[/math]]


We use now the key fact that when an average of two unitaries is unitary, then the three unitaries involved are in fact all equal. This gives:

[[math]] ewv=wve=vew [[/math]]


Thus we obtain [math][w,e]=[w,v]=0[/math], so [math]w,e,v[/math] commute. Our matrix becomes:

[[math]] H=\begin{pmatrix}a&b&c\\ eva&w^2evb&wevc\\ va&wvb&w^2vc\end{pmatrix} [[/math]]


Now by remembering that [math]u=ev[/math], this gives the formula in the statement.

We can now formulate our main classification result, as follows:

Theorem

The [math]3\times 3[/math] Hadamard matrices are all classical, and are all equivalent to the Fourier matrix [math]F_3[/math].


Show Proof

We know from Proposition 15.13 that we can write our matrix in the following way, where [math](a,b,c)[/math] and [math](u,v,w)[/math] pairwise commute, and where [math]1+w+w^2=0[/math]:

[[math]] H=\begin{pmatrix}a&b&c\\ au&buw&cuw^*\\ av&bvw^*&cvw\end{pmatrix} [[/math]]


We also know that [math](a,u,v)[/math], [math](b,uw,vw^*)[/math], [math](c,uw^*,vw)[/math] and [math](ab,ac,bc,w)[/math] have entries which pairwise commute. We first show that [math]uv[/math] is central. Indeed, we have:

[[math]] \begin{eqnarray*} buv &=&buvww^*\\ &=&b(uw)(vw^*)\\ &=&(uw)(vw^*)b\\ &=&uvb \end{eqnarray*} [[/math]]


Similarly, [math]cuv=uvc[/math]. It follows that we may suppose that [math]uv[/math] is a scalar, and since our relations are homogeneous, we may in fact assume that [math]u=v^*[/math]. Let us prove now that we have [math][abc,vw^*]=0[/math]. Indeed, we have the following computation:

[[math]] \begin{eqnarray*} abc &=&a(bc)ww^*\\ &=&aw(bc)w^*\\ &=&av(wv^*)bcw^*\\ &=&avb(wv^*)cw^*\\ &=&v(ab)wv^*cw^*\\ &=&vw(ab)v^*cw^*\\ &=&vw(ab)w(w^*v^*)cw^*\\ &=&vw^2(ab)c(w^*v^*)w^*\\ &=&vw^*abcv^*w \end{eqnarray*} [[/math]]


We know also that [math][b,vw^*]=0[/math]. Hence [math][ac,vw^*]=0[/math]. But [math][ac,w^*]=0[/math]. Hence [math][ac,v]=0[/math]. But [math][a,v]=0[/math]. Hence [math][c,v]=0[/math]. But [math][c,vw]=0[/math]. So [math][c,w]=0[/math]. But [math][bc,w]=0[/math]. So [math][b,w]=0[/math]. But [math][b,v^*w]=0[/math] and [math][ab,w]=0[/math], so respectively [math][b,v]=0[/math] and [math][a,w]=0[/math]. Thus all operators [math]a,b,c,v,w[/math] pairwise commute, and we are done.

At [math]N=4[/math] now, the classification work for the usual complex Hadamard matrices uses the fact that an equation of type [math]a+b+c+d=0[/math] with [math]|a|=|b|=|c|=|d|=1[/math] must be, up to a permutation of the terms, a “trivial” equation of the following form:

[[math]] a-a+b-b=0 [[/math]]


In our setting, however, we have for instance:

[[math]] \begin{pmatrix}a&0\\ 0&x\end{pmatrix}+\begin{pmatrix}-a&0\\ 0&y\end{pmatrix}+\begin{pmatrix}b&0\\ 0&-x\end{pmatrix}+\begin{pmatrix}-b&0\\ 0&-y\end{pmatrix}=0 [[/math]]


It is probably possible to further complicate this kind of identity, and this makes the [math]N=4[/math] classification a quite difficult task. As for the case [math]N=5[/math] or higher, things here are most likely very complicated, and we will stop our classification work here.
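The [math]2\times 2[/math] block identity displayed above can be verified numerically; here is a minimal sketch assuming numpy, with illustrative unimodular values for [math]a,b,x,y[/math], checking that the four summands are unitaries summing to zero, with no two of them being opposite to each other, so that the vanishing sum is not of the trivial form [math]a-a+b-b=0[/math]:

```python
import numpy as np

def u(s, t):
    """The diagonal unitary diag(s, t), for |s| = |t| = 1."""
    return np.diag([s, t]).astype(complex)

# illustrative unimodular scalars
a, b = 1.0, 1j
x, y = np.exp(0.3j), np.exp(0.7j)

terms = [u(a, x), u(-a, y), u(b, -x), u(-b, -y)]

all_unitary = all(np.allclose(T @ T.conj().T, np.eye(2)) for T in terms)
vanishing = np.allclose(sum(terms), 0)
# no summand is the opposite of another one
nontrivial = all(not np.allclose(terms[i], -terms[j])
                 for i in range(4) for j in range(4) if i != j)
```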

15b. Quantum groups

With the above basic theory developed, let us get now to the point where we wanted to get. The generalized Hadamard matrices produce quantum groups, as follows:

Theorem

If [math]H\in M_N(A)[/math] is Hadamard, the following matrices [math]P_{ij}\in M_N(A)[/math] form altogether a magic matrix [math]P=(P_{ij})[/math], over the algebra [math]M_N(A)[/math]:

[[math]] (P_{ij})_{ab}=\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^* [[/math]]
Thus, we can let [math]\pi:C(S_N^+)\to M_N(A)[/math] be the representation associated to [math]P[/math], mapping [math]u_{ij}\to P_{ij}[/math], and then factorize this representation as follows,

[[math]] \pi:C(S_N^+)\to C(G)\to M_N(A) [[/math]]
with the closed subgroup [math]G\subset S_N^+[/math] chosen minimal.


Show Proof

The magic condition can be checked in three steps, as follows:


(1) Let us first check that each [math]P_{ij}[/math] is a projection, i.e. that we have [math]P_{ij}=P_{ij}^*=P_{ij}^2[/math]. Regarding the first condition, namely [math]P_{ij}=P_{ij}^*[/math], this simply follows from:

[[math]] \begin{eqnarray*} (P_{ij})_{ba}^* &=&\frac{1}{N}(H_{ib}H_{jb}^*H_{ja}H_{ia}^*)^*\\ &=&\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*\\ &=&(P_{ij})_{ab} \end{eqnarray*} [[/math]]


As for the second condition, [math]P_{ij}=P_{ij}^2[/math], this follows from the fact that all the entries [math]H_{ij}[/math] are assumed to be unitaries, i.e. follows from axiom (1) in Definition 15.1:

[[math]] \begin{eqnarray*} (P_{ij}^2)_{ab} &=&\sum_c(P_{ij})_{ac}(P_{ij})_{cb}\\ &=&\frac{1}{N^2}\sum_cH_{ia}H_{ja}^*H_{jc}H_{ic}^*H_{ic}H_{jc}^*H_{jb}H_{ib}^*\\ &=&\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*\\ &=&(P_{ij})_{ab} \end{eqnarray*} [[/math]]


(2) Let us check now the fact that the entries of [math]P[/math] sum up to 1 on each row. For this purpose we use the equality [math]H^*H=N1_N[/math], coming from the axiom (3), which gives:

[[math]] \begin{eqnarray*} (\sum_jP_{ij})_{ab} &=&\frac{1}{N}\sum_jH_{ia}H_{ja}^*H_{jb}H_{ib}^*\\ &=&\frac{1}{N}H_{ia}(H^*H)_{ab}H_{ib}^*\\ &=&\delta_{ab}H_{ia}H_{ib}^*\\ &=&\delta_{ab} \end{eqnarray*} [[/math]]


(3) Finally, let us check that the entries of [math]P[/math] sum up to 1 on each column. This is the tricky check, because it involves, besides axiom (1) and the formula [math]H^t\bar{H}=N1_N[/math] coming from axiom (3), the commutation on the columns of [math]H[/math], coming from axiom (2):

[[math]] \begin{eqnarray*} (\sum_iP_{ij})_{ab} &=&\frac{1}{N}\sum_iH_{ia}H_{ja}^*H_{jb}H_{ib}^*\\ &=&\frac{1}{N}\sum_iH_{ja}^*H_{ia}H_{ib}^*H_{jb}\\ &=&\frac{1}{N}H_{ja}^*(H^t\bar{H})_{ab}H_{jb}\\ &=&\delta_{ab}H_{ja}^*H_{jb}\\ &=&\delta_{ab} \end{eqnarray*} [[/math]]


Thus [math]P[/math] is indeed a magic matrix in the above sense, and we are done.

As an illustration, consider a usual Hadamard matrix [math]H\in M_N(\mathbb C)[/math]. If we denote its rows by [math]H_1,\ldots,H_N[/math] and we consider the vectors [math]\xi_{ij}=H_i/H_j[/math], then we have:

[[math]] \xi_{ij}=\left(\frac{H_{i1}}{H_{j1}},\ldots,\frac{H_{iN}}{H_{jN}}\right) [[/math]]


Thus the orthogonal projection on this vector [math]\xi_{ij}[/math] is given by:

[[math]] \begin{eqnarray*} (P_{\xi_{ij}})_{ab} &=&\frac{1}{||\xi_{ij}||^2}(\xi_{ij})_a\overline{(\xi_{ij})_b}\\ &=&\frac{1}{N}H_{ia}H_{ja}^*H_{jb}H_{ib}^*\\ &=&(P_{ij})_{ab} \end{eqnarray*} [[/math]]


We conclude that we have [math]P_{ij}=P_{\xi_{ij}}[/math] for any [math]i,j[/math], so our construction from Theorem 15.15 is compatible with the construction for the usual complex Hadamard matrices.
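The magic matrix construction of the above theorem can be checked numerically in this scalar case; here is a minimal sketch assuming numpy, building the projections [math]P_{ij}[/math] from [math]H=F_3[/math] and verifying the magic condition:

```python
import numpy as np

# (P_ij)_{ab} = (1/N) H_{ia} conj(H_{ja}) H_{jb} conj(H_{ib}), for H = F_3

w = np.exp(2j * np.pi / 3)
H = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]], dtype=complex)
N = H.shape[0]

P = np.empty((N, N, N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        for a in range(N):
            for b in range(N):
                P[i, j, a, b] = (H[i, a] * H[j, a].conjugate()
                                 * H[j, b] * H[i, b].conjugate()) / N

# each P_ij is an orthogonal projection
projections = all(np.allclose(P[i, j], P[i, j].conj().T) and
                  np.allclose(P[i, j] @ P[i, j], P[i, j])
                  for i in range(N) for j in range(N))

# the P_ij sum up to the identity on each row and column
row_sums = all(np.allclose(sum(P[i, j] for j in range(N)), np.eye(N))
               for i in range(N))
col_sums = all(np.allclose(sum(P[i, j] for i in range(N)), np.eye(N))
               for j in range(N))
```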


Let us discuss now the computation of the quantum permutation groups associated to the deformed tensor products of Hadamard matrices. This is actually something that we have not discussed in chapter 14, when talking about the usual Hadamard models, so the results below are relevant even in the case of these usual models. Let us begin with a study of the associated magic unitary. We have here the following result:

Proposition

The magic unitary associated to [math]H\otimes_QK[/math] is given by

[[math]] P_{ia,jb}=R_{ij}\otimes\frac{1}{N}(Q_{ic}Q_{jc}^*Q_{jd}Q_{id}^*\cdot K_{ac}K_{bc}^*K_{bd}K_{ad}^*)_{cd} [[/math]]
where [math]R_{ij}[/math] is the magic unitary matrix associated to [math]H[/math].


Show Proof

With standard conventions for deformed tensor products and for double indices, the entries of [math]L=H\otimes_QK[/math] are by definition the following elements:

[[math]] L_{ia,jb}=Q_{ib}H_{ij}K_{ab} [[/math]]


Thus the projections [math]P_{ia,jb}[/math] constructed in Theorem 15.15 are given by:

[[math]] \begin{eqnarray*} (P_{ia,jb})_{kc,ld} &=&\frac{1}{MN}L_{ia,kc}L_{jb,kc}^*L_{jb,ld}L_{ia,ld}^*\\ &=&\frac{1}{MN}(Q_{ic}H_{ik}K_{ac})(Q_{jc}H_{jk}K_{bc})^*(Q_{jd}H_{jl}K_{bd})(Q_{id}H_{il}K_{ad})^*\\ &=&\frac{1}{MN}(Q_{ic}Q_{jc}^*Q_{jd}Q_{id}^*)(H_{ik}H_{jk}^*H_{jl}H_{il}^*)(K_{ac}K_{bc}^*K_{bd}K_{ad}^*) \end{eqnarray*} [[/math]]


In terms now of the standard matrix units [math]e_{kl},e_{cd}[/math], we have:

[[math]] \begin{eqnarray*} &&P_{ia,jb}\\ &=&\frac{1}{MN}\sum_{kcld}e_{kl}\otimes e_{cd}\otimes(Q_{ic}Q_{jc}^*Q_{jd}Q_{id}^*)(H_{ik}H_{jk}^*H_{jl}H_{il}^*)(K_{ac}K_{bc}^*K_{bd}K_{ad}^*)\\ &=&\frac{1}{MN}\sum_{kcld}\left(e_{kl}\otimes 1\otimes H_{ik}H_{jk}^*H_{jl}H_{il}^*\right)(1\otimes e_{cd}\otimes Q_{ic}Q_{jc}^*Q_{jd}Q_{id}^*\cdot K_{ac}K_{bc}^*K_{bd}K_{ad}^*) \end{eqnarray*} [[/math]]


Since the quantities on the right commute, this gives the formula in the statement.

In order to investigate the Diţă deformations, we use:

Definition

Let [math]C(S_M^+)\to A[/math] and [math]C(S_N^+)\to B[/math] be Hopf algebra quotients, with fundamental corepresentations denoted [math]u,v[/math]. We let

[[math]] A*_wB=A^{*N}*B/ \lt [u_{ab}^{(i)},v_{ij}]=0 \gt [[/math]]
with the Hopf algebra structure making [math]w_{ia,jb}=u_{ab}^{(i)}v_{ij}[/math] a corepresentation.

The fact that we have indeed a Hopf algebra follows from the fact that [math]w[/math] is magic. In terms of quantum groups, if [math]A=C(G)[/math], [math]B=C(H)[/math], we write [math]A*_wB=C(G\wr_*H)[/math]:

[[math]] C(G)*_wC(H)=C(G\wr_*H) [[/math]]


The [math]\wr_*[/math] operation is the free analogue of [math]\wr[/math], the usual wreath product, and we refer for instance to [3] for more on this. With this convention, we have the following result:

Theorem

The representation associated to [math]L=H\otimes_QK[/math] factorizes as

[[math]] \xymatrix{C(S_{NM}^+)\ar[rr]^{\pi_L}\ar[rd]&&M_{NM}(\mathbb C)\\&C(S_M^+\wr_*G_H)\ar[ur]&} [[/math]]
and so the quantum group associated to [math]L[/math] appears as a subgroup [math]G_L\subset S_M^+\wr_*G_H[/math].


Show Proof

We use the formula in Proposition 15.16. In order to simplify the writing, we agree to replace expressions of type [math]H_{ia}H_{ja}^*H_{jb}H_{ib}^*[/math] by fractions, as follows, keeping in mind that the variables are only subject to the commutation relations in Definition 15.1:

[[math]] \frac{H_{ia}H_{jb}}{H_{ja}H_{ib}} [[/math]]


Our claim is that the factorization can be indeed constructed, as follows:

[[math]] U_{ab}^{(i)}=\sum_jP_{ia,jb}\quad,\quad V_{ij}=\sum_aP_{ia,jb} [[/math]]


Indeed, we have three verifications to be made, as follows:


(1) We must prove that the elements [math]V_{ij}=\sum_aP_{ia,jb}[/math] do not depend on [math]b[/math], and generate a copy of [math]C(G_H)[/math]. But if we denote by [math](R_{ij})[/math] the magic matrix for [math]H[/math], we have indeed:

[[math]] \begin{eqnarray*} V_{ij} &=&\frac{1}{N}\left(\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot\frac{H_{ik}H_{jl}}{H_{il}H_{jk}}\cdot\delta_{cd}\right)_{kc,ld}\\ &=&((R_{ij})_{kl}\delta_{cd})_{kc,ld}\\ &=&R_{ij}\otimes 1 \end{eqnarray*} [[/math]]


(2) We prove now that for any [math]i[/math], the elements [math]U_{ab}^{(i)}=\sum_jP_{ia,jb}[/math] form a magic matrix. Since [math]P=(P_{ia,jb})[/math] is magic, the elements [math]U_{ab}^{(i)}=\sum_jP_{ia,jb}[/math] are self-adjoint, and we have [math]\sum_bU_{ab}^{(i)}=\sum_{bj}P_{ia,jb}=1[/math]. The fact that each [math]U_{ab}^{(i)}[/math] is an idempotent follows from:

[[math]] \begin{eqnarray*} &&((U_{ab}^{(i)})^2)_{kc,ld}\\ &=&\frac{1}{N^2M^2}\sum_{mejn}\frac{Q_{ic}Q_{je}}{Q_{ie}Q_{jc}}\cdot\frac{H_{ik}H_{jm}}{H_{im}H_{jk}}\cdot\frac{K_{ac}K_{be}}{K_{ae}K_{bc}}\cdot\frac{Q_{ie}Q_{nd}}{Q_{id}Q_{ne}}\cdot\frac{H_{im}H_{nl}}{H_{il}H_{nm}}\cdot\frac{K_{ae}K_{bd}}{K_{ad}K_{be}}\\ &=&\frac{1}{NM^2}\sum_{ejn}\frac{Q_{ic}Q_{je}Q_{nd}}{Q_{jc}Q_{id}Q_{ne}}\cdot\frac{H_{ik}H_{nl}}{H_{jk}H_{il}}\delta_{jn}\cdot\frac{K_{ac}K_{bd}}{K_{bc}K_{ad}}\\ &=&\frac{1}{NM^2}\sum_{ej}\frac{Q_{ic}Q_{je}Q_{jd}}{Q_{jc}Q_{id}Q_{je}}\cdot\frac{H_{ik}H_{jl}}{H_{jk}H_{il}}\cdot\frac{K_{ac}K_{bd}}{K_{bc}K_{ad}}\\ &=&\frac{1}{NM}\sum_j\frac{Q_{ic}Q_{jd}}{Q_{jc}Q_{id}}\cdot\frac{H_{ik}H_{jl}}{H_{jk}H_{il}}\cdot\frac{K_{ac}K_{bd}}{K_{bc}K_{ad}}\\ &=&(U_{ab}^{(i)})_{kc,ld} \end{eqnarray*} [[/math]]


Finally, the condition [math]\sum_aU_{ab}^{(i)}=1[/math] can be checked as follows:

[[math]] \begin{eqnarray*} \sum_aU_{ab}^{(i)} &=&\frac{1}{N}\left(\sum_j\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot\frac{H_{ik}H_{jl}}{H_{il}H_{jk}}\cdot\delta_{cd}\right)_{kc,ld}\\ &=&\frac{1}{N}\left(\sum_j\frac{H_{ik}H_{jl}}{H_{il}H_{jk}}\cdot\delta_{cd}\right)_{kc,ld}\\ &=&1 \end{eqnarray*} [[/math]]


(3) It remains to prove that we have [math]U_{ab}^{(i)}V_{ij}=V_{ij}U_{ab}^{(i)}=P_{ia,jb}[/math]. First, we have:

[[math]] \begin{eqnarray*} (U_{ab}^{(i)}V_{ij})_{kc,ld} &=&\frac{1}{N^2M}\sum_{mn}\frac{Q_{ic}Q_{nd}}{Q_{id}Q_{nc}}\cdot\frac{H_{ik}H_{nm}}{H_{im}H_{nk}}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\cdot\frac{H_{im}H_{jl}}{H_{il}H_{jm}}\\ &=&\frac{1}{NM}\sum_n\frac{Q_{ic}Q_{nd}}{Q_{id}Q_{nc}}\cdot\frac{H_{ik}H_{jl}}{H_{nk}H_{il}}\delta_{nj}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\\ &=&\frac{1}{NM}\cdot\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot\frac{H_{ik}H_{jl}}{H_{jk}H_{il}}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\\ &=&(P_{ia,jb})_{kc,ld} \end{eqnarray*} [[/math]]


The remaining computation is similar, as follows:

[[math]] \begin{eqnarray*} (V_{ij}U_{ab}^{(i)})_{kc,ld} &=&\frac{1}{N^2M}\sum_{mn}\frac{H_{ik}H_{jm}}{H_{im}H_{jk}}\cdot\frac{Q_{ic}Q_{nd}}{Q_{id}Q_{nc}}\cdot\frac{H_{im}H_{nl}}{H_{il}H_{nm}}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\\ &=&\frac{1}{NM}\sum_n\frac{Q_{ic}Q_{nd}}{Q_{id}Q_{nc}}\cdot\frac{H_{ik}H_{nl}}{H_{jk}H_{il}}\delta_{jn}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\\ &=&\frac{1}{NM}\cdot\frac{Q_{ic}Q_{jd}}{Q_{id}Q_{jc}}\cdot\frac{H_{ik}H_{jl}}{H_{jk}H_{il}}\cdot\frac{K_{ac}K_{bd}}{K_{ad}K_{bc}}\\ &=&(P_{ia,jb})_{kc,ld} \end{eqnarray*} [[/math]]


Thus we have checked all the relations, and we are done.

In general, the problem of further factorizing the above representation is a quite difficult one, and this even in the case of the usual Hadamard matrices. For a number of results here, which are however quite specialized, we refer to [4] and related papers.

15c. Partial permutations

Let us discuss now another generalization of the construction [math]H\to G[/math], which is independent from the one above. The idea, following [5], will be that of looking at the partial Hadamard matrices (PHM), and their connection with the partial permutations. Let us start with the following standard definition:

Definition

A partial permutation of [math]\{1,\ldots,N\}[/math] is a bijection

[[math]] \sigma:X\simeq Y [[/math]]
between two subsets of the index set, as follows:

[[math]] X,Y\subset\{1,\ldots,N\} [[/math]]
We denote by [math]\widetilde{S}_N[/math] the set formed by such partial permutations.

We have [math]S_N\subset\widetilde{S}_N[/math], and the embedding [math]u:S_N\subset M_N(0,1)[/math] given by the standard permutation matrices can be extended to an embedding [math]u:\widetilde{S}_N\subset M_N(0,1)[/math], as follows:

[[math]] u_{ij}(\sigma)= \begin{cases} 1&{\rm if}\ \sigma(j)=i\\ 0&{\rm otherwise} \end{cases} [[/math]]


By looking at the image of this embedding, we see that [math]\widetilde{S}_N[/math] is in bijection with the matrices [math]M\in M_N(0,1)[/math] having at most one 1 entry on each row and column. In analogy now with Wang's theory in [6], we have the following definition:
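As a quick computer illustration of this correspondence, here is a small Python sketch; the encoding of a partial permutation as a dictionary, and the helper name pp_matrix, are of course ours, for illustration purposes only:

```python
import numpy as np

def pp_matrix(sigma, N):
    # 0/1 matrix of a partial permutation, given as a dict {j: i}
    # meaning sigma(j) = i; the entry u[i, j] is 1 precisely when sigma(j) = i
    u = np.zeros((N, N), dtype=int)
    for j, i in sigma.items():
        u[i, j] = 1
    return u

# a partial permutation of {0,...,3}: sends 0 -> 2 and 1 -> 0, undefined elsewhere
sigma = {0: 2, 1: 0}
u = pp_matrix(sigma, 4)

# at most one 1 on each row and each column, as required
assert u.sum(axis=0).max() <= 1
assert u.sum(axis=1).max() <= 1
```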

Definition

A submagic matrix is a matrix [math]u\in M_N(A)[/math] whose entries are projections, which are pairwise orthogonal on rows and columns. We let [math]C(\widetilde{S}_N^+)[/math] be the universal [math]C^*[/math]-algebra generated by the entries of an [math]N\times N[/math] submagic matrix.

Here the fact that the algebra [math]C(\widetilde{S}_N^+)[/math] is indeed well-defined is clear. As a first observation, this algebra has a comultiplication, given by the following formula:

[[math]] \Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj} [[/math]]


This algebra has as well a counit, given by the following formula:

[[math]] \varepsilon(u_{ij})=\delta_{ij} [[/math]]


Thus [math]\widetilde{S}_N^+[/math] is a quantum semigroup, and we have maps as follows, with the bialgebras at left corresponding to the quantum semigroups at right:

[[math]] \begin{matrix} C(\widetilde{S}_N^+)&\to&C(S_N^+)\\ \\ \downarrow&&\downarrow\\ \\ C(\widetilde{S}_N)&\to&C(S_N) \end{matrix} \quad \quad \quad:\quad \quad\quad \begin{matrix} \widetilde{S}_N^+&\supset&S_N^+\\ \\ \cup&&\cup\\ \\ \widetilde{S}_N&\supset&S_N \end{matrix} [[/math]]


The relation of all this with the PHM is immediate, appearing as follows:

Theorem

If [math]H\in M_{M\times N}(\mathbb T)[/math] is a PHM, with rows denoted [math]H_1,\ldots,H_M\in\mathbb T^N[/math], then the following matrix of rank one projections is submagic:

[[math]] P_{ij}=Proj\left(\frac{H_i}{H_j}\right) [[/math]]

Thus [math]H[/math] produces a representation [math]\pi_H:C(\widetilde{S}_M^+)\to M_N(\mathbb C)[/math], given by [math]u_{ij}\to P_{ij}[/math], that we can factorize through [math]C(G)[/math], with the quantum semigroup [math]G\subset\widetilde{S}_M^+[/math] chosen minimal.


Show Proof

We have indeed the following computation, for the rows:

[[math]] \begin{eqnarray*} \Big\langle\frac{H_i}{H_j},\frac{H_i}{H_k}\Big\rangle &=&\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{kl}}{H_{il}}\\ &=&\sum_l\frac{H_{kl}}{H_{jl}}\\ &=& \lt H_k,H_j \gt \\ &=&N\delta_{jk} \end{eqnarray*} [[/math]]


The verification for the columns is similar, as follows:

[[math]] \begin{eqnarray*} \left \lt \frac{H_i}{H_j},\frac{H_k}{H_j}\right \gt &=&\sum_l\frac{H_{il}}{H_{jl}}\cdot\frac{H_{jl}}{H_{kl}}\\ &=&\sum_l\frac{H_{il}}{H_{kl}}\\ &=&N\delta_{ik} \end{eqnarray*} [[/math]]


Regarding now the last assertion, we can indeed factorize our representation as indicated, with the existence and uniqueness of the bialgebra [math]C(G)[/math], with the minimality property as above, being obtained by taking the quotient of [math]C(\widetilde{S}_M^+)[/math] by a suitable ideal. See [5].
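As a numerical illustration of the above submagic property, here is a small Python sketch, using as input the upper [math]2\times3[/math] submatrix of the Fourier matrix [math]F_3[/math], which is a PHM; the helper names here are ours:

```python
import numpy as np

def proj(v):
    # rank-one projection onto the span of a nonzero vector v
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return v @ v.conj().T / (v.conj().T @ v)

N = 3
w = np.exp(2j * np.pi / N)
H = np.array([[1, 1, 1], [1, w, w**2]])   # upper 2 x 3 submatrix of F_3, a PHM
M = H.shape[0]

# the matrix P_ij = Proj(H_i / H_j), with H_i / H_j taken entrywise
P = [[proj(H[i] / H[j]) for j in range(M)] for i in range(M)]

for i in range(M):
    for j in range(M):
        assert np.allclose(P[i][j] @ P[i][j], P[i][j])    # each P_ij is a projection
        for k in range(M):
            if k != j:
                assert np.allclose(P[i][j] @ P[i][k], 0)  # orthogonality on rows
                assert np.allclose(P[j][i] @ P[k][i], 0)  # orthogonality on columns
```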

Summarizing, we have a generalization of the [math]H\to G[/math] construction from chapter 14. The very first problem is that of deciding under which exact assumptions our construction is in fact “classical”. In order to explain the answer here, we will need:

Definition

A pre-Latin square is a square matrix

[[math]] L\in M_M(1,\ldots,N) [[/math]]
having the property that its entries are distinct on each row and each column.

Given such a pre-Latin square [math]L[/math], to any [math]x\in\{1,\ldots,N\}[/math] we can associate the partial permutation [math]\sigma_x\in\widetilde{S}_M[/math] given by the following formula:

[[math]] \sigma_x(j)=i\iff L_{ij}=x [[/math]]


With this construction in hand, we denote by [math]G\subset\widetilde{S}_M[/math] the semigroup generated by these partial permutations [math]\sigma_1,\ldots,\sigma_N[/math], and call it the semigroup associated to [math]L[/math]. Also, given an orthogonal basis [math]\xi=(\xi_1,\ldots,\xi_N)[/math] of [math]\mathbb C^N[/math], we can construct a submagic matrix [math]P\in M_M(M_N(\mathbb C))[/math], according to the following formula:

[[math]] P_{ij}=Proj(\xi_{L_{ij}}) [[/math]]
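In order to get familiar with this construction, one can verify by computer that any such matrix [math]P[/math] is indeed submagic; here is a small Python sketch of this verification, on a [math]2\times2[/math] pre-Latin square, with the helper names being ours:

```python
import numpy as np

def proj(v):
    # rank-one projection onto the span of a nonzero vector v
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return v @ v.conj().T / (v.conj().T @ v)

# a small pre-Latin square: entries distinct on each row and each column
L = [[1, 2], [3, 1]]
xi = np.eye(3)                 # standard orthonormal basis of C^3

# the submagic matrix P_ij = Proj(xi_{L_ij}), with 1-based labels in L
P = [[proj(xi[L[i][j] - 1]) for j in range(2)] for i in range(2)]

# submagic: projections pairwise orthogonal on rows and on columns
for i in range(2):
    for j in range(2):
        for k in range(2):
            if k != j:
                assert np.allclose(P[i][j] @ P[i][k], 0)
                assert np.allclose(P[j][i] @ P[k][i], 0)
```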


With these notations, we have the following result, from [5]:

Theorem

If [math]H\in M_{M\times N}(\mathbb T)[/math] is a PHM, the following are equivalent:

  • The semigroup [math]G\subset\widetilde{S}_M^+[/math] is classical, i.e. [math]G\subset\widetilde{S}_M[/math].
  • The projections [math]P_{ij}=Proj(H_i/H_j)[/math] pairwise commute.
  • The vectors [math]H_i/H_j\in\mathbb T^N[/math] are pairwise proportional, or orthogonal.
  • The submagic matrix [math]P=(P_{ij})[/math] comes from a pre-Latin square [math]L[/math].

In addition, if so is the case, [math]G[/math] is the semigroup associated to [math]L[/math].


Show Proof

This is something standard, as follows:


[math](1)\iff(2)[/math] is clear from definitions.


[math](2)\iff(3)[/math] comes from the fact that two rank 1 projections commute precisely when their images coincide, or are orthogonal.


[math](3)\iff(4)[/math] is clear again.


As for the last assertion, this is something standard, coming from Gelfand duality, which allows us to compute the Hopf image, in combinatorial terms. See [5].

We call “classical” the matrices in Theorem 15.23, and these are the matrices that we will study now. Let us begin with a study at [math]M=2[/math]. We use the following convention, where [math]\tau[/math] is the transposition, [math]ij[/math] is the partial permutation [math]i\to j[/math], and [math]\emptyset[/math] is the null map:

[[math]] \widetilde{S}_2=\{id,\tau,11,12,21,22,\emptyset\} [[/math]]


With this convention, we have the following result:

Proposition

A partial Hadamard matrix [math]H\in M_{2\times N}(\mathbb T)[/math], in dephased form

[[math]] H=\begin{pmatrix}1&\ldots&1\\ \lambda_1&\ldots&\lambda_N\end{pmatrix} [[/math]]
is of classical type when one of the following happens:

  • Either [math]\lambda_i=\pm w[/math], for some [math]w\in\mathbb T[/math], in which case [math]G=\{id,\tau\}[/math].
  • Or [math]\sum_i\lambda_i^2=0[/math], in which case [math]G=\{id,11,12,21,22,\emptyset\}[/math].


Show Proof

With [math]1=(1,\ldots,1)[/math] and [math]\lambda=(\lambda_1,\ldots,\lambda_N)[/math], the matrix formed by the vectors [math]H_i/H_j[/math] is [math]\begin{pmatrix}1&\lambda\\ \bar{\lambda}&1\end{pmatrix}[/math]. Since [math]1\perp\lambda,\bar{\lambda}[/math], we just have to compare [math]\lambda,\bar{\lambda}[/math], and we have two cases:


(1) Case [math]\lambda\sim\bar{\lambda}[/math]. This means that we have [math]\lambda^2\sim1[/math], and so [math]\lambda_i=\pm w[/math], for some complex number [math]w\in\mathbb T[/math]. In this case the associated pre-Latin square is [math]L=\begin{pmatrix}1&2\\ 2&1\end{pmatrix}[/math], and the partial permutations [math]\sigma_x[/math] associated to [math]L[/math], as above, are as follows:

[[math]] \sigma_1=id\quad,\quad \sigma_2=\tau [[/math]]


We obtain from this that we have, as claimed:

[[math]] G= \lt id,\tau \gt =\{id,\tau\} [[/math]]


(2) Case [math]\lambda\perp\bar{\lambda}[/math]. This means [math]\sum_i\lambda_i^2=0[/math]. In this case the associated pre-Latin square is [math]L=\begin{pmatrix}1&2\\ 3&1\end{pmatrix}[/math], and the associated partial permutations [math]\sigma_x[/math] are given by:

[[math]] \sigma_1=id\quad,\quad \sigma_2=21\quad,\quad \sigma_3=12 [[/math]]


The semigroup generated by these partial permutations is:

[[math]] G= \lt id,21,12 \gt =\{id,11,12,21,22,\emptyset\} [[/math]]


Thus, we are led to the conclusion in the statement.
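The two semigroups in the above statement can be recovered by computer, by generating the closure of the partial permutations [math]\sigma_x[/math] under composition; here is a small Python sketch of this, with helper names that are ours, for illustration purposes:

```python
def compose(s, r):
    # composition of partial permutations, as dicts {j: i}: (s.r)(j) = s(r(j))
    return {j: s[r[j]] for j in r if r[j] in s}

def semigroup(gens):
    # closure of a set of partial permutations under composition
    canon = lambda s: tuple(sorted(s.items()))
    G = {canon(g) for g in gens}
    while True:
        new = {canon(compose(dict(a), dict(b))) for a in G for b in G} - G
        if not new:
            return [dict(g) for g in G]
        G |= new

def sigmas(L, M):
    # sigma_x(j) = i  <=>  L[i][j] = x
    vals = sorted({L[i][j] for i in range(M) for j in range(M)})
    return [{j: i for i in range(M) for j in range(M) if L[i][j] == x}
            for x in vals]

# case (1): pre-Latin square [[1,2],[2,1]], giving G = {id, tau}
G1 = semigroup(sigmas([[1, 2], [2, 1]], 2))
assert len(G1) == 2

# case (2): pre-Latin square [[1,2],[3,1]], giving G = {id, 11, 12, 21, 22, empty}
G2 = semigroup(sigmas([[1, 2], [3, 1]], 2))
assert len(G2) == 6
```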

The matrices in (1) are, modulo equivalence, those which are real. As for the matrices in (2), these are parametrized by the solutions [math]\lambda\in\mathbb T^N[/math] of the following equations:

[[math]] \sum_i\lambda_i=\sum_i\lambda_i^2=0 [[/math]]


In general, it is quite unclear how to deal with these equations. Observe however that, as a basic example here, we have the upper [math]2\times N[/math] submatrix of [math]F_N[/math], with [math]N\geq3[/math]. We refer to [7], [5] and related papers, for more on these questions.
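Regarding this basic example, the two equations can be checked numerically for the upper [math]2\times N[/math] submatrix of [math]F_N[/math], with [math]N\geq3[/math], as follows; this is a small Python sketch of ours:

```python
import numpy as np

# lambda = second row of the dephased upper 2 x N submatrix of F_N
for N in range(3, 10):
    w = np.exp(2j * np.pi / N)
    lam = w ** np.arange(N)
    assert abs(lam.sum()) < 1e-9          # sum of lambda_i, the PHM condition
    assert abs((lam ** 2).sum()) < 1e-9   # sum of lambda_i^2, the case (2) condition
```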

15d. Fourier matrices

Let us discuss now in detail the truncated Fourier matrix case. First, we have the following result, that we already know from chapter 14, but that we will present here with a complete proof, as an illustration for Theorem 15.23:

Proposition

The Fourier matrix, which is as follows, with [math]w=e^{2\pi i/N}[/math],

[[math]] F_N=(w^{ij}) [[/math]]
is of classical type, and the associated group [math]G\subset S_N[/math] is the cyclic group [math]\mathbb Z_N[/math].


Show Proof

Since [math]H=F_N[/math] is a square matrix, the associated semigroup [math]G\subset\widetilde{S}_N^+[/math] must be a quantum group, [math]G\subset S_N^+[/math]. We must prove that we have [math]G=\mathbb Z_N[/math]. Let us set:

[[math]] \rho=(1,w,w^2,\ldots,w^{N-1}) [[/math]]


The rows of [math]H[/math] are then given by [math]H_i=\rho^i[/math], and so we have:

[[math]] \frac{H_i}{H_j}=\rho^{i-j} [[/math]]


We conclude that [math]H[/math] is indeed of classical type, coming from the Latin square [math]L_{ij}=j-i[/math] and from the following orthogonal basis:

[[math]] \xi=(1,\rho^{-1},\rho^{-2},\ldots,\rho^{1-N}) [[/math]]


We have [math]G= \lt \sigma_1,\ldots,\sigma_N \gt [/math], where [math]\sigma_x\in S_N[/math] is given by:

[[math]] \sigma_x(j)=i\iff L_{ij}=x [[/math]]


Now from [math]L_{ij}=j-i[/math] we obtain [math]\sigma_x(j)=j-x[/math], and so:

[[math]] G=\{\sigma_1,\ldots,\sigma_N\}\simeq\mathbb Z_N [[/math]]


Thus, we are led to the conclusion in the statement.
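This cyclic group structure can be double-checked by computer; in the following small Python sketch, which is ours, the relation [math]\sigma_x\sigma_y=\sigma_{x+y}[/math] is verified directly:

```python
def compose(s, r):
    # (s.r)(j) = s(r(j)), for partial permutations given as dicts {j: i}
    return {j: s[r[j]] for j in r if r[j] in s}

N = 6
# sigma_x(j) = j - x (mod N), coming from the Latin square L_ij = j - i
sigmas = [{j: (j - x) % N for j in range(N)} for x in range(N)]

# the set {sigma_0, ..., sigma_{N-1}} is closed under composition,
# with sigma_x sigma_y = sigma_{x+y}: this is the cyclic group Z_N
for x in range(N):
    for y in range(N):
        assert compose(sigmas[x], sigmas[y]) == sigmas[(x + y) % N]
```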

We will be interested in what follows in the truncated Fourier matrices. Let [math]F_{M,N}[/math] be the upper [math]M\times N[/math] submatrix of [math]F_N[/math], and [math]G_{M,N}\subset\widetilde{S}_M[/math] be the associated semigroup. The simplest case is that when [math]M[/math] is small, and we have here the following result:

Theorem

In the [math]N \gt 2M-2[/math] regime, [math]G_{M,N}\subset\widetilde{S}_M[/math] is formed by the maps

[[math]] \begin{matrix}\\ \\ \\ \sigma=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{ \circ&\circ&\circ&\circ\ar[dll]&\circ\ar[dll]&\circ\ar[dll]&\circ\\ \circ&\circ&\circ&\circ&\circ&\circ&\circ} [[/math]]
that is, [math]\sigma:I\simeq J[/math], [math]\sigma(j)=j-x[/math], with [math]I,J\subset\{1,\ldots,M\}[/math] intervals, independently of [math]N[/math].


Show Proof

For [math]\widetilde{H}=F_N[/math] the associated Latin square is circulant, given by:

[[math]] \widetilde{L}_{ij}=j-i [[/math]]


Thus, the pre-Latin square that we are interested in is given by:

[[math]] L=\begin{pmatrix} 0&1&2&\ldots&M-1\\ N-1&0&1&\ldots&M-2\\ N-2&N-1&0&\ldots&M-3\\ \ldots\\ N-M+1&N-M+2&N-M+3&\ldots&0 \end{pmatrix} [[/math]]


Observe that, due to our [math]N \gt 2M-2[/math] assumption, we have [math]N-M+1 \gt M-1[/math], and so the entries above the diagonal are distinct from those below the diagonal. Let us compute now the partial permutations [math]\sigma_x\in\widetilde{S}_M[/math] given by:

[[math]] \sigma_x(j)=i\iff L_{ij}=x [[/math]]


We have [math]\sigma_0=id[/math], and then [math]\sigma_1,\sigma_2,\ldots,\sigma_{M-1}[/math] are as follows:

[[math]] \begin{matrix}\\ \\ \sigma_1=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ&\circ\ar[dl]&\circ\ar[dl]&\circ\ar[dl]&\circ\ar[dl]\\ \circ&\circ&\circ&\circ&\circ} [[/math]]

[[math]] \begin{matrix}\\ \\ \sigma_2=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ&\circ&\circ\ar[dll]&\circ\ar[dll]&\circ\ar[dll]\\ \circ&\circ&\circ&\circ&\circ} [[/math]]

[[math]] \vdots [[/math]]

[[math]] \begin{matrix}\\ \\ \sigma_{M-1}=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ&\circ&\circ&\circ&\circ\ar[dllll]\\ \circ&\circ&\circ&\circ&\circ} [[/math]]


Observe that we have the following formulae, for these maps:

[[math]] \sigma_2=\sigma_1^2 [[/math]]

[[math]] \sigma_3=\sigma_1^3 [[/math]]

[[math]] \vdots [[/math]]

[[math]] \sigma_{M-1}=\sigma_1^{M-1} [[/math]]


As for the remaining partial permutations, these are given by:

[[math]] \sigma_{N-1}=\sigma_1^{-1} [[/math]]

[[math]] \sigma_{N-2}=\sigma_2^{-1} [[/math]]

[[math]] \vdots [[/math]]

[[math]] \sigma_{N-M+1}=\sigma_{M-1}^{-1} [[/math]]


The corresponding diagrams are as follows:

[[math]] \begin{matrix}\\ \\ \sigma_{N-1}=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ\ar[dr]&\circ\ar[dr]&\circ\ar[dr]&\circ\ar[dr]&\circ\\ \circ&\circ&\circ&\circ&\circ} [[/math]]

[[math]] \begin{matrix}\\ \\ \sigma_{N-2}=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ\ar[drr]&\circ\ar[drr]&\circ\ar[drr]&\circ&\circ\\ \circ&\circ&\circ&\circ&\circ} [[/math]]

[[math]] \vdots [[/math]]

[[math]] \begin{matrix}\\ \\ \sigma_{N-M+1}=\\ \end{matrix} \xymatrix@R=5mm@C=1mm{ \circ\ar[drrrr]&\circ&\circ&\circ&\circ\\ \circ&\circ&\circ&\circ&\circ} [[/math]]


We conclude that we have the following generation result:

[[math]] G_{M,N}= \lt \sigma_1 \gt [[/math]]


Now if we denote by [math]G_{M,N}'[/math] the semigroup in the statement, we have [math]\sigma_1\in G_{M,N}'[/math], and so we have an inclusion as follows:

[[math]] G_{M,N}\subset G_{M,N}' [[/math]]


The reverse inclusion can be established as follows:


(1) Assume first that [math]\sigma\in G_{M,N}'[/math], [math]\sigma:I\simeq J[/math] has the property [math]M\in I,J[/math]:

[[math]] \begin{matrix}\\ \\ \\ \sigma=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{ \circ&\circ&\circ&\circ&\circ\ar[d]&\circ\ar[d]&\circ\ar[d]\\ \circ&\circ&\circ&\circ&\circ&\circ&\circ} [[/math]]


Then we can write [math]\sigma=\sigma_{N-k}\sigma_k[/math], with [math]k=M-|I|[/math], so we have [math]\sigma\in G_{M,N}[/math].


(2) Assume now that [math]\sigma\in G_{M,N}'[/math], [math]\sigma:I\simeq J[/math] has just the property [math]M\in I[/math] or [math]M\in J[/math]:

[[math]] \begin{matrix}\\ \\ \\ \sigma'=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{ \circ&\circ&\circ&\circ&\circ\ar[dlll]&\circ\ar[dlll]&\circ\ar[dlll]\\ \circ&\circ&\circ&\circ&\circ&\circ&\circ} [[/math]]

[[math]] \begin{matrix}\\ \\ \\ \sigma''=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{ \circ&\circ&\circ&\circ\ar[dr]&\circ\ar[dr]&\circ\ar[dr]&\circ\\ \circ&\circ&\circ&\circ&\circ&\circ&\circ} [[/math]]


In this case we have as well [math]\sigma\in G_{M,N}[/math], because [math]\sigma[/math] appears from one of the maps in (1) by adding a “slope”, which can be obtained by composing with a suitable map [math]\sigma_k[/math].


(3) Assume now that [math]\sigma\in G_{M,N}'[/math], [math]\sigma:I\simeq J[/math] is arbitrary:

[[math]] \begin{matrix}\\ \\ \\ \sigma=\ \ \\ \end{matrix}\xymatrix@R=10mm@C=2mm{ \circ&\circ&\circ&\circ\ar[dll]&\circ\ar[dll]&\circ\ar[dll]&\circ\\ \circ&\circ&\circ&\circ&\circ&\circ&\circ} [[/math]]


Then we can write [math]\sigma=\sigma'\sigma''[/math] with [math]\sigma':L\simeq J[/math], [math]\sigma'':I\simeq L[/math], where [math]L[/math] is an interval satisfying [math]|L|=|I|=|J|[/math] and [math]M\in L[/math], and since [math]\sigma',\sigma''\in G_{M,N}[/math] by (2), we are done.
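The above description of [math]G_{M,N}[/math] can be verified by computer, at small values of the parameters; here is a small Python sketch of ours, at [math]M=4,N=8[/math], which generates the semigroup from the pre-Latin square and compares it with the interval maps in the statement:

```python
def compose(s, r):
    # (s.r)(j) = s(r(j)), for partial permutations given as dicts {j: i}
    return {j: s[r[j]] for j in r if r[j] in s}

def semigroup(gens):
    # closure of a set of partial permutations under composition
    canon = lambda s: tuple(sorted(s.items()))
    G = {canon(g) for g in gens}
    while True:
        new = {canon(compose(dict(a), dict(b))) for a in G for b in G} - G
        if not new:
            return [dict(g) for g in G]
        G |= new

M, N = 4, 8                                   # regime N > 2M - 2
L = [[(j - i) % N for j in range(M)] for i in range(M)]
vals = {L[i][j] for i in range(M) for j in range(M)}
gens = [{j: i for i in range(M) for j in range(M) if L[i][j] == x}
        for x in vals]
G = semigroup(gens)

# expected elements: sigma(j) = j - x between intervals I, J, plus the null map
expected = [{}]
for a in range(M):
    for b in range(a, M):                     # domain interval {a, ..., b}
        k = b - a + 1
        for c in range(M - k + 1):            # image interval {c, ..., c+k-1}
            expected.append({a + t: c + t for t in range(k)})

canon = lambda s: tuple(sorted(s.items()))
assert {canon(s) for s in G} == {canon(e) for e in expected}
```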

Summarizing, we have so far complete results at [math]N=M[/math], and at [math]N \gt 2M-2[/math]. In the remaining regime, [math]M \lt N\leq2M-2[/math], the semigroup [math]G_{M,N}\subset\widetilde{S}_M[/math] looks quite hard to compute, and for the moment there are only partial results regarding it. For a partial permutation [math]\sigma:I\simeq J[/math] with [math]|I|=|J|=k[/math], set [math]\kappa(\sigma)=k[/math]. We have:

Theorem

The following semigroup components, with [math]k \gt 2M-N[/math],

[[math]] G_{M,N}^{(k)}=\left\{\sigma\in G_{M,N}\Big|\kappa(\sigma)=k\right\} [[/math]]

are, in the [math]M \lt N\leq2M-2[/math] regime, the same as those in the [math]N \gt 2M-2[/math] regime.


Show Proof

In the [math]M \lt N\leq2M-2[/math] regime the pre-Latin square that we are interested in has as usual 0 on the diagonal, and then takes its entries from the following set, in a uniform way from each of the 3 components:

[[math]] S=\{1,\ldots,N-M\}\cup\{N-M+1,\ldots,M-1\}\cup\{M,\ldots,N-1\} [[/math]]


Here is an illustrating example, at [math]M=6,N=8[/math]:

[[math]] L=\begin{pmatrix} \mathbf{0}&1&2&\mathbf{3}&\mathbf{4}&\mathbf{5}\\ 7&\mathbf{0}&1&2&\mathbf{3}&\mathbf{4}\\ 6&7&\mathbf{0}&1&2&\mathbf{3}\\ \mathbf{5}&6&7&\mathbf{0}&1&2\\ \mathbf{4}&\mathbf{5}&6&7&\mathbf{0}&1\\ \mathbf{3}&\mathbf{4}&\mathbf{5}&6&7&\mathbf{0} \end{pmatrix} [[/math]]


The point now is that [math]\sigma_1,\ldots,\sigma_{N-M}[/math] are given by the same formulae as those in the proof of Theorem 15.26, then [math]\sigma_{N-M+1},\ldots,\sigma_{M-1}[/math] all satisfy [math]\kappa(\sigma)=2M-N[/math], and finally [math]\sigma_M,\ldots,\sigma_{N-1}[/math] are once again given by the formulae in the proof of Theorem 15.26. Now since we have [math]\kappa(\sigma\rho)\leq\min(\kappa(\sigma),\kappa(\rho))[/math], adding the maps [math]\sigma_{N-M+1},\ldots,\sigma_{M-1}[/math] to the semigroup [math]G_{M,N}\subset\widetilde{S}_M[/math] computed in the proof of Theorem 15.26 won't change the [math]G_{M,N}^{(k)}[/math] components of this semigroup at [math]k \gt 2M-N[/math], and this gives the result.
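Here too a computer verification is possible, at small parameter values; the following Python sketch of ours compares the components [math]G_{M,N}^{(k)}[/math] at [math]M=4[/math], between the [math]N=6[/math] and [math]N=8[/math] regimes, with the threshold being [math]2M-N=2[/math] at [math]N=6[/math]:

```python
def compose(s, r):
    # (s.r)(j) = s(r(j)), for partial permutations given as dicts {j: i}
    return {j: s[r[j]] for j in r if r[j] in s}

def semigroup(gens):
    # closure of a set of partial permutations under composition
    canon = lambda s: tuple(sorted(s.items()))
    G = {canon(g) for g in gens}
    while True:
        new = {canon(compose(dict(a), dict(b))) for a in G for b in G} - G
        if not new:
            return [dict(g) for g in G]
        G |= new

def G_MN(M, N):
    # semigroup of the truncated Fourier matrix F_{M,N}, via its pre-Latin square
    L = [[(j - i) % N for j in range(M)] for i in range(M)]
    vals = {L[i][j] for i in range(M) for j in range(M)}
    return semigroup([{j: i for i in range(M) for j in range(M) if L[i][j] == x}
                      for x in vals])

canon = lambda s: tuple(sorted(s.items()))
comp = lambda G, k: {canon(s) for s in G if len(s) == k}   # the component kappa = k

M = 4
A = G_MN(M, 6)    # regime M < N <= 2M - 2, with threshold 2M - N = 2
B = G_MN(M, 8)    # regime N > 2M - 2

# the components with kappa(sigma) = k > 2M - N coincide in the two regimes
for k in [3, 4]:
    assert comp(A, k) == comp(B, k)
```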

General references

Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].

References

  1. P. Diță, Some results on the parametrization of complex Hadamard matrices, J. Phys. A 37 (2004), 5355--5374.
  2. U. Haagerup, Orthogonal maximal abelian [math]*[/math]-subalgebras of the [math]n\times n[/math] matrices and cyclic [math]n[/math]-roots, in “Operator algebras and quantum field theory”, International Press (1997), 296--323.
  3. T. Banica, Introduction to quantum groups, Springer (2023).
  4. T. Banica and J. Bichon, Random walk questions for linear quantum groups, Int. Math. Res. Not. 24 (2015), 13406--13436.
  5. T. Banica and A. Skalski, The quantum algebra of partial Hadamard matrices, Linear Algebra Appl. 469 (2015), 364--380.
  6. S. Wang, Quantum symmetry groups of finite spaces, Comm. Math. Phys. 195 (1998), 195--211.
  7. T. Banica, D. Özteke and L. Pittau, Isolated partial Hadamard matrices and related topics, Open Syst. Inf. Dyn. 25 (2018), 1--27.