Partial matrices
4a. Partial matrices
In this chapter we discuss a number of more specialized questions in the real case, regarding the square or rectangular submatrices of the Hadamard matrices [math]H\in M_N(\pm1)[/math], and some related classes of square or rectangular real matrices. There are many things to be done here, going in various directions, and our plan will be as follows:
(1) We will first review the material from chapter 1 regarding the partial Hadamard matrices, with some further algebraic results, and with a few analytic things added too, inspired from the theory developed in the square matrix case in chapters 2-3.
(2) Then, we will get into the question of counting the partial Hadamard matrices [math]H\in M_{M\times N}(\pm1)[/math], at small values of [math]M[/math], and with [math]N\to\infty[/math]. This is a question having no square counterpart, and following de Launey-Levin [1], interesting things can be said.
(3) Finally, we will go back to the square matrix case, and present some results from [2] regarding the square submatrices of the usual Hadamard matrices [math]H\in M_N(\pm1)[/math], making the connection with the almost Hadamard matrices from chapter 3.
All in all, many things to be done. Let us mention right away that the most important thing in all this is (2), with the counting result of de Launey and Levin in [1] being something truly remarkable, and providing a viable alternative to the whole HC problematics, developed by countless people since the papers of Sylvester [3] and Hadamard [4].
Getting started now, let us begin by reviewing what we know about the partial Hadamard matrices, from chapter 1. The definition of these matrices is as follows:
A partial Hadamard matrix (PHM) is a rectangular matrix [math]H\in M_{M\times N}(\pm1)[/math] whose rows are pairwise orthogonal.
The motivating examples are the usual Hadamard matrices [math]H\in M_N(\pm1)[/math], and their various [math]M\times N[/math] submatrices, with [math]M\leq N[/math]. However, there are as well examples which are not of this form, and the PHM are interesting combinatorial objects, on their own.
Following the study from the square matrix case, we first have:
The set [math]Y_{M,N}[/math] of the [math]M\times N[/math] partial Hadamard matrices is [math]Y_{M,N}=M_{M\times N}(\pm1)\cap\sqrt{N}O_{M,N}[/math], where [math]O_{M,N}[/math] is the set of matrices [math]U\in M_{M\times N}(\mathbb R)[/math] satisfying [math]UU^t=1_M[/math].
This follows exactly as in the square matrix case. Indeed, given a rectangular matrix [math]U\in M_{M\times N}(\mathbb R)[/math] having rows [math]R_1,\ldots,R_M\in\mathbb R^N[/math], we have:
Thus, the condition [math]UU^t=1_M[/math] expresses the fact that the vectors [math]R_1,\ldots,R_M[/math] are pairwise orthogonal, and of norm 1, and this gives the formula in the statement.
As a remark here, at [math]M=1[/math] we have of course [math]Y_{1,N}=M_{1\times N}(\pm1)[/math], and this because of an automatic inclusion [math]M_{1\times N}(\pm1)\subset\sqrt{N}O_{1,N}[/math]. Indeed, given [math]H\in M_{1\times N}(\pm1)[/math], the matrix [math]U=H/\sqrt{N}[/math] satisfies [math]UU^t=\frac{1}{N}\cdot N=1[/math], and so we have [math]H\in\sqrt{N}O_{1,N}[/math], as claimed.
In general, the space [math]O_{M,N}[/math] appearing above can be thought of as being a joint generalization of the unit sphere [math]S^{N-1}[/math], which appears at [math]M=1[/math], and of the orthogonal group [math]O_N[/math], which appears in the square case, [math]M=N[/math]. Based on this analogy, the space [math]O_{M,N}[/math] has several useful interpretations, which can be summarized as follows:
The space [math]O_{M,N}[/math] has the following properties:
- Its elements are the transposes of the isometries [math]g:\mathbb R^M\to\mathbb R^N[/math].
- It is the space of vectors [math]R_1,\ldots,R_M\in S^{N-1}[/math] which are pairwise orthogonal.
- It is also a homogeneous space, given by [math]O_{M,N}\simeq O_N/O_{N-M}[/math].
- It is also the space determined by the first [math]M[/math] rows of coordinates on [math]O_N[/math].
All this is standard algebra and geometry, the idea being as follows:
(1) Each matrix [math]U\in M_{M\times N}(\mathbb R)[/math] determines a linear map [math]f:\mathbb R^N\to\mathbb R^M[/math], given by [math]f(x)=Ux[/math], whose transpose is the linear map [math]g:\mathbb R^M\to\mathbb R^N[/math] given by [math]g(x)=U^tx[/math]. Now observe that for any two vectors [math]x,y\in\mathbb R^M[/math] we have:
Thus the condition [math]UU^t=1[/math] is equivalent to the following condition:
But this latter condition tells us that [math]g[/math] must be an isometry, as desired.
(2) This follows from the fact, established in the proof of Proposition 4.2, that the condition [math]UU^t=1_M[/math] tells us that the row vectors [math]R_1,\ldots,R_M\in\mathbb R^N[/math] of our matrix [math]U\in M_{M\times N}(\mathbb R)[/math] must be pairwise orthogonal, and of norm 1.
(3) Since the condition [math]UU^t=1[/math] defining [math]O_{M,N}[/math] implies [math](UA^t)(UA^t)^t=1[/math], for any orthogonal matrix [math]A\in O_N[/math], we have an action, as follows:
Let us compute now the stabilizer of the following particular element:
Given an orthogonal matrix [math]A\in O_N[/math], we have the following formula:
Thus [math]U=UA^t[/math] means that the matrix [math]A^t\in O_N[/math] must be of the following form:
Now since [math]A^t[/math] is orthogonal, it must be of the following form, with [math]B\in O_{N-M}[/math]:
Thus the stabilizer is [math]O_{N-M}[/math], and we obtain [math]O_{M,N}\simeq O_N/O_{N-M}[/math].
(4) This follows from some basic functional analysis. Consider indeed the algebra [math]C(O_N)[/math] of continuous functions [math]f:O_N\to\mathbb C[/math]. By Stone-Weierstrass, this algebra is generated by the coordinate functions [math]u_{ij}:O_N\to\mathbb C[/math], which are given by:
Consider now the following closed subalgebra of the algebra [math]C(O_N)[/math]:
We have then [math]A\simeq C(O_{M,N})[/math], coming from the homogeneous space result in (3).
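As a quick numerical illustration of the above properties, here is a sketch in Python, using numpy, with the variable names being ours: the first [math]M[/math] rows of a random orthogonal matrix give an element of [math]O_{M,N}[/math], whose transpose is then an isometry.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 7

# A random element of O_{M,N}: the first M rows of a random
# orthogonal matrix, obtained via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
U = Q.T[:M, :]

# (2): the rows R_1,...,R_M are pairwise orthogonal unit vectors,
# which is the same as U U^t = 1_M.
assert np.allclose(U @ U.T, np.eye(M))

# (1): the transpose g(x) = U^t x is an isometry R^M -> R^N.
x = rng.standard_normal(M)
assert np.isclose(np.linalg.norm(U.T @ x), np.linalg.norm(x))
```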
Let us discuss now, as a continuation of the study from the square matrix case, some basic analytic aspects. In what regards the 1-norm bound, we have the following result:
Given a matrix [math]U\in O_{M,N}[/math] we have [math]||U||_1\leq M\sqrt{N}[/math], with equality precisely when the rescaled matrix [math]H=\sqrt{N}U[/math] is partial Hadamard.
We have indeed the following estimate, valid for any [math]U\in O_{M,N}[/math]:
In this estimate the equality case holds when [math]|U_{ij}|=1/\sqrt{N}[/math] for any [math]i,j[/math]. But this amounts to saying that the rescaled matrix [math]H=\sqrt{N}U[/math] must satisfy [math]H\in M_{M\times N}(\pm1)[/math], and so that this rescaled matrix must be partial Hadamard, as claimed.
Observe that in terms of the rescaled matrix [math]H\in\sqrt{N}O_{M,N}[/math], the inequality found above reformulates as [math]||H||_1\leq MN[/math], with equality precisely when [math]H[/math] is partial Hadamard. Thus, in analogy with the square matrix case, we can formulate:
A matrix [math]H\in\sqrt{N}O_{M,N}[/math] is called:
- Almost PHM, when it locally maximizes the [math]1[/math]-norm on [math]\sqrt{N}O_{M,N}[/math].
- Optimal almost PHM, when it maximizes the [math]1[/math]-norm on [math]\sqrt{N}O_{M,N}[/math].
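As an illustration of the above 1-norm bound [math]||U||_1\leq M\sqrt{N}[/math], here is a small numerical check, as a sketch using numpy, with the test matrices being our choices: a generic element of [math]O_{M,N}[/math] stays below the bound, while a rescaled partial Hadamard matrix saturates it.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 4

# Generic element of O_{M,N}: first M rows of a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
U = Q.T[:M, :]
assert np.abs(U).sum() <= M * np.sqrt(N) + 1e-9   # ||U||_1 <= M sqrt(N)

# Equality case: a rescaled 2x4 partial Hadamard matrix.
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1]])
V = H / np.sqrt(N)
assert np.allclose(V @ V.T, np.eye(M))              # V is in O_{2,4}
assert np.isclose(np.abs(V).sum(), M * np.sqrt(N))  # bound is attained
```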
Some similar estimates hold for the [math]p[/math]-norms, with [math]p\neq2[/math]. The whole subject, while being potentially quite interesting, is for the moment largely unexplored. So, let us turn instead to algebra. Still following the study from the square case, let us formulate:
Two PHM are called equivalent when we can pass from one to the other by permuting the rows or columns, or multiplying rows or columns by [math]-1[/math]. Also:
- We say that a PHM is in dephased form when its first row and its first column consist of [math]1[/math] entries.
- We say that a PHM is in standard form when it is dephased, with the [math]1[/math] entries moved to the left as much as possible, by proceeding from top to bottom.
Unlike in the square case, where the standard form is generally not used, putting a rectangular matrix in standard form is something quite useful, in practice. As an illustration, here is a result that we already know, from chapter 1, regarding the partial Hadamard matrices in standard form, at small values of [math]M[/math]:
The standard form of dephased PHM at [math]M=2,3,4[/math] is
This is something that we know from chapter 1, the idea being that the [math]M=2[/math] result is obvious, that the [math]M=3[/math] result follows from the orthogonality conditions between the rows, and that the [math]M=4[/math] result follows from the [math]M=3[/math] result.
At [math]M=5[/math] and higher the situation is more complicated, and we will be back to this. For the moment, let us stay with [math]M=4[/math]. We can fine-tune our result, as follows:
The [math]4\times N[/math] partial Hadamard matrices are of the form
Let [math]H\in M_{4\times N}(\pm1)[/math] be as in Proposition 4.7. The matrix formed by the [math]a[/math] type columns, one from each block, is equivalent to [math]W_4[/math], via a permutation of columns:
Also, the matrix formed by the [math]b[/math] type columns, one from each block, is equivalent to [math]K_4[/math], via a first column sign switch, plus a certain permutation of the columns:
Thus, just by performing operations on the columns, we obtain, as desired:
In order to prove now the last assertion, we must prove that we have:
But this can be seen by performing a sign switch on the last row, and then permuting the columns. Equivalently, we can start with the original matrix, in standard form, and perform a sign switch on the last row. The matrix becomes:
Now by putting this matrix in standard form, we obtain:
Thus [math]a,b[/math] got interchanged, and this gives the result.
At [math]M=5[/math] now, as already mentioned above, the combinatorics becomes quite complicated, and we will see in a moment that there are [math]5\times N[/math] partial Hadamard matrices which do not complete into Hadamard matrices. We first have the following result:
The [math]5\times N[/math] partial Hadamard matrices are of the form
This is something that we already worked out at [math]N=8[/math], in chapter 1, in both of the cases that can appear, namely [math]a=2,b=0[/math] and [math]a=1,b=1[/math]. The proof in general is similar, via some routine computations, with the equations in the statement coming by processing the orthogonality conditions between the 5th row and the first 4 rows.
As a first observation, the equations in the above statement can be written in the following more convenient form:
Now observe that the matrix of this system is as follows:
Thus, the system can be written as follows:
Thus, we are led into parity and positivity questions, regarding the vectors [math]r_t=\sum_i(v_i)_t[/math] and [math]s_t=\sum_j(v_j)_t[/math]. It is possible to further go along these lines, but the structure of the [math]5\times N[/math] partial Hadamard matrices remains something quite complicated. As an explicit consequence of our study, however, we have the following result:
Consider an arbitrary [math]4\times N[/math] partial Hadamard matrix, written as
This follows from Proposition 4.9, because with the notations there, the condition [math]b=0[/math] implies that the system there is simply:
Since [math]W_4[/math] is invertible, the solution is [math]r=0[/math]. Now observe that, by definition of the numbers [math]r_i[/math], we have [math]r_i=a(2)[/math] for any [math]i[/math]. Thus, we must have [math]a=0(2)[/math], and since we have [math]a=N/4[/math], this gives [math]N=0(8)[/math], as desired. The proof in the case [math]a=0[/math] is similar.
In general, the full classification of all the possible [math]5\times N[/math] completions of a given [math]4\times N[/math] partial Hadamard matrix is something quite difficult, and we have already seen this at [math]N=8[/math], where a careful study is needed, the result being as follows:
The two [math]4\times 8[/math] partial Hadamard matrices, namely
This is something that we already know, from chapter 1.
At [math]N=12[/math] now, we have only one matrix to be studied, which is as follows, and with at least 8 solutions to the completion problem, coming from the Paley matrix [math]P_{12}[/math]:
Generally speaking, all this leads to quite complicated algebra and combinatorics. We refer to Hall [5], Ito [6] and Verheiden [7] for more on the combinatorics of the PHM. Finally, let us end this discussion with an elementary result, from [8]:
For a partial Hadamard matrix [math]H\in M_{(N-1)\times N}(\pm1)[/math], with rows [math]R_1,\ldots,R_{N-1}[/math] and columns [math]C_1,\ldots,C_N[/math], the following are equivalent:
- [math]H[/math] is completable into an [math]N\times N[/math] Hadamard matrix.
- [math]|\det H^{(j)}|[/math] is independent from [math]j[/math], where [math]H^{(j)}[/math] is obtained from [math]H[/math] by removing [math]C_j[/math].
- [math]|\det H^{(j)}|=N^{N/2-1}[/math] for any [math]j[/math], where [math]H^{(j)}[/math] is as above.
Moreover, if these conditions hold, the completion is obtained by setting
This follows from some basic linear algebra, the idea being as follows:
[math](1)\iff(2)[/math]. Consider the following vector, having integer entries:
Our claim is that we have the following equality of vector spaces:
Indeed, if we denote by [math]H_i[/math] the square matrix obtained from [math]H[/math] by adding a first row equal to [math]R_i[/math], then we have the following computation, which proves our claim:
But this gives [math](1)\iff(2)[/math], since the existence of a completion is equivalent to the fact that [math]span(R_1,\ldots,R_{N-1})^\perp[/math] contains a vector with all entries having absolute value [math]1[/math].
[math](1)\implies(3)[/math]. Write [math]c=|\det H^{(j)}|[/math] and let [math]M\in M_N(\pm1)[/math] be the Hadamard matrix completing [math]H[/math]. The proof of [math](1)\iff(2)[/math] above shows that the last row of [math]M[/math] must be the vector [math]c^{-1} Z[/math]. Also, since the matrix [math]M\in M_N(\pm1)[/math] is Hadamard, we have:
Thus, it remains to compute this determinant by expansion with respect to the last row, and the computation here gives:
But this means that we have [math]c= N^{N/2-1}[/math], which proves the implication [math](1)\implies(3)[/math], and also proves the last assertion of our theorem.
[math](3)\implies(2)[/math]. This is something obvious, and so we are done.
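The above completion procedure can be illustrated numerically, as follows. This is a sketch using numpy, with the cofactor signs [math](-1)^j[/math] below being our convention, the completion being unique up to a global sign.

```python
import numpy as np

# 3x4 partial Hadamard matrix: the first three rows of the Walsh matrix W_4.
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1]])
N = 4

# Determinants of the square matrices H^{(j)}, obtained by removing C_j.
dets = [np.linalg.det(np.delete(H, j, axis=1)) for j in range(N)]

# (3): |det H^{(j)}| = N^{N/2-1}, independently of j.
c = N ** (N / 2 - 1)
assert all(np.isclose(abs(d), c) for d in dets)

# Completion row, up to a global sign: Z_j = (-1)^j det H^{(j)} / c.
Z = np.array([(-1) ** j * dets[j] for j in range(N)]) / c
assert np.allclose(np.abs(Z), 1)   # entries are +-1
assert np.allclose(H @ Z, 0)       # orthogonal to all rows of H
```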
We will be back to the algebraic properties of the PHM on several occasions in this book, but directly in the complex matrix case, or sometimes in the general root of unity case, where more things can be said. In relation with the real case, of particular interest will be the material in chapter 15 below, where, following [8], we will associate a quantum semigroup of partial permutations of [math]\{1,\ldots,N\}[/math] to each such matrix, real or complex.
4b. Counting results
Let us try now to count the partial Hadamard matrices [math]H\in M_{M\times N}(\pm1)[/math]. This is an easy task at [math]M=2,3,4[/math], where the answer is as follows:
The number of PHM at [math]M=2,3,4[/math] is
We use the structure results for the PHM in standard form at [math]M\leq4[/math] found above, which are as follows, with the numbers [math]a,b\in\mathbb N[/math] satisfying [math]a+b=N/4[/math]:
But this gives the formulae in the statement, with the multinomial coefficients counting the matrices whose first row consists of 1 entries only, obtained by permuting the columns of the above solutions, and with the [math]2^N[/math] factors accounting for the possible sign switches on the columns.
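As an illustration, at [math]M=2[/math] the orthogonality of the two rows forces them to agree in exactly [math]N/2[/math] columns, so the count is [math]2^N\binom{N}{N/2}[/math]. Here is a brute-force verification at [math]N=4[/math], as a quick sanity check:

```python
from itertools import product
from math import comb

N = 4
rows = list(product([1, -1], repeat=N))

# Brute-force count of the 2xN partial Hadamard matrices.
count = sum(1 for r1 in rows for r2 in rows
            if sum(a * b for a, b in zip(r1, r2)) == 0)

# 2^N choices for the first row, times binom(N, N/2) for the
# positions where the second row agrees with the first.
assert count == 2 ** N * comb(N, N // 2)
```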
In order to convert the above result into [math]N\to\infty[/math] estimates, we will need the following technical result regarding the multinomial coefficients, from Richmond-Shallit [9]:
We have the estimate
This is proved by Richmond and Shallit in [9] at [math]p=2[/math], and the proof in the general case, [math]p\in\mathbb N[/math], is similar, the idea being as follows:
(1) In order to do some analysis, we agree to use the convention [math]x!=\Gamma(x+1)[/math] for [math]x \gt 0[/math] real. Since the multinomial coefficient in the statement attains its maximum when the numbers [math]n_i[/math] are all equal, it is natural to make a change of variables, as follows:
Observe that, since we have [math]n_1+\ldots+n_k=N[/math], the numbers [math]x_1,\ldots,x_k[/math] satisfy:
(2) Let us first estimate, in terms of the numbers [math]x_1,\ldots,x_k[/math], the multinomial coefficient in the statement. By using the Taylor formula [math]\log(1+y)\simeq y-y^2/2[/math], we obtain:
By multiplying by [math]n_i[/math], this gives the following estimate:
Now by further subtracting [math]n_i[/math], we obtain the following estimate:
(3) We are now ready to estimate the multinomial coefficient in the statement. By summing over [math]i[/math], and using [math]x_1+\ldots+x_k=0[/math], the formula found above gives:
By using the Stirling formula [math]n!\simeq e^{n\log n-n}\sqrt{2\pi n}[/math], we obtain from this:
Thus, the multinomial coefficient in the statement is:
(4) Raising now to the power [math]p[/math] gives the following formula:
Getting now to what we want to do, the point is that, by using the above estimate for the summands, we can estimate their sum by a multiple integral, as follows:
(5) We are almost there. By doing the calculus, as explained in [9], this gives:
Thus we have obtained the formula in the statement, and we are done.
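The simplest instance of the above estimate, at [math]p=2[/math], [math]k=2[/math], reads [math]\sum_k\binom{N}{k}^2=\binom{2N}{N}\simeq 4^N/\sqrt{\pi N}[/math], and can be checked numerically, as follows:

```python
from math import comb, pi, sqrt

# p = 2, k = 2 instance: sum of squared binomials vs its asymptotics.
N = 200
exact = sum(comb(N, k) ** 2 for k in range(N + 1))
approx = 4 ** N / sqrt(pi * N)

# The ratio tends to 1 as N grows; at N = 200 it is within 1%.
assert abs(exact / approx - 1) < 0.01
```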
The above formula is something very useful, that we will heavily use in what follows. Getting back now to the PHM, we have the following result:
The probability for a random [math]H\in M_{M\times N}(\pm1)[/math] to be a PHM is
Since there are exactly [math]2^{MN}[/math] sign matrices of size [math]M\times N[/math], the probability [math]P_M[/math] for a random [math]H\in M_{M\times N}(\pm1)[/math] to be a PHM is given by:
With this formula in hand, the result follows from Proposition 4.13, by using the estimates for sums of multinomial coefficients from Theorem 4.14.
4c. Asymptotic count
In their remarkable paper [1], de Launey and Levin were able to count the PHM, in the asymptotic limit [math]N\in 4\mathbb N[/math], [math]N\to\infty[/math]. Their method is based on:
The probability for a random [math]H\in M_{M\times N}(\pm1)[/math] to be partial Hadamard equals the probability for a length [math]N[/math] random walk, with increments drawn from [math]\{T(e)\,|\,e\in\mathbb Z_2^M\}[/math], where [math]T(e)=(e_i\bar{e}_j)_{i \lt j}[/math], to return to the origin.
Indeed, with [math]T(e)=(e_i\bar{e}_j)_{i \lt j}[/math], a matrix [math]X=[e_1,\ldots,e_N]\in M_{M\times N}(\mathbb Z_2)[/math] is partial Hadamard precisely when [math]T(e_1)+\ldots+T(e_N)=0[/math]. But this gives the result.
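This equivalence can be checked by brute force, at small [math]M,N[/math]. Here is a sketch in Python, written multiplicatively over [math]\pm1[/math] rather than additively over [math]\mathbb Z_2[/math], which amounts to the same thing:

```python
from itertools import product, combinations

M, N = 3, 4

def T(e):
    # T(e) = (e_i e_j)_{i<j}, the increment attached to a column e
    return tuple(e[i] * e[j] for i, j in combinations(range(M), 2))

cols = list(product([1, -1], repeat=M))

phm_count = 0
for X in product(cols, repeat=N):          # all M x N sign matrices
    H = list(zip(*X))                      # rows of the matrix
    phm = all(sum(a * b for a, b in zip(H[i], H[j])) == 0
              for i, j in combinations(range(M), 2))
    # does the walk T(e_1) + ... + T(e_N) return to the origin?
    returns = all(sum(t) == 0 for t in zip(*map(T, X)))
    assert phm == returns
    phm_count += phm
```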
As explained in [1], the above probability can be indeed computed, and we have:
The probability for a random [math]H\in M_{M\times N}(\pm1)[/math] to be PHM is
According to Proposition 4.16, we have:
By using the Fourier inversion formula we have, with [math]D=\binom{M}{2}[/math]:
After many non-trivial computations, this leads to the result. See [1].
All this is quite interesting, because it provides a viable alternative to the HC problematics. To be more precise, after long decades of work on the HC, the conclusion that emerges is that this is probably an analytic question, at least in the [math]N\gg0[/math] regime. The thing to be done would be that of conjecturing something of type [math]C_N\simeq f(N)[/math] about the asymptotics of the number [math]C_N[/math] of the [math]N\times N[/math] Hadamard matrices, with [math]f(N)[/math] being some kind of known function, and then proving this conjecture, with [math]C_N \gt 0[/math] coming as a consequence. But, no one knows what the conjecture of type [math]C_N\simeq f(N)[/math] should be.
In contrast to this, the work of de Launey and Levin [1] explained above puts us on a clear track, in order to deal with such questions. Indeed, when enlarging the attention to the partial Hadamard matrices [math]H\in M_{M\times N}(\pm1)[/math], we do have their counting result, at any [math]M\in\mathbb N[/math], and in the [math]N\to\infty[/math] limit, as a non-trivial and rock-solid starting point, and the problem is that of slowly fine-tuning their methods, as to get towards asymptotic counting results in the square matrix case, [math]M=N[/math]. But this is a quite tough mix of probability and combinatorics, and no one managed so far to go beyond [1].
4d. Square submatrices
Following now [2], and some previous work of Koukouvinos, Mitrouli, Seberry [10] and Szöllősi [11], let us discuss another topic, namely the square submatrices of the usual, square Hadamard matrices. We will see that all this is related, in a quite subtle way, to the notion of almost Hadamard matrix (AHM), discussed in chapter 3. Let us start with some basic linear algebra. We will need the following standard result:
Any matrix [math]D\in M_N(\mathbb R)[/math] can be written as [math]D=UP[/math], with [math]U\in O_N[/math] orthogonal, and with [math]P\in M_N(\mathbb R)[/math] positive. Moreover:
- If [math]D[/math] is invertible, then [math]U[/math] is uniquely determined, and we write:
[[math]] U=Pol(D) [[/math]]
- If [math]D=V\Delta W^t[/math] with [math]V,W[/math] being orthogonal and [math]\Delta[/math] being diagonal is the singular value decomposition of [math]D[/math], then [math]Pol(D)=VW^t[/math].
All this is very standard, and can be found in any linear algebra book, one method for instance being that of deducing (2), and then the whole result, from the singular value decomposition theorem for the matrices [math]D\in M_N(\mathbb R)[/math].
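In practice, the polar decomposition can be computed exactly as in (2), from the singular value decomposition. Here is a sketch using numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.standard_normal((4, 4))

# Singular value decomposition D = V Delta W^t gives Pol(D) = V W^t,
# with positive part P = W Delta W^t.
V, Delta, Wt = np.linalg.svd(D)
U = V @ Wt
P = Wt.T @ np.diag(Delta) @ Wt

assert np.allclose(U @ U.T, np.eye(4))          # U is orthogonal
assert np.allclose(U @ P, D)                    # D = U P
assert np.all(np.linalg.eigvalsh(P) >= -1e-9)   # P is positive
```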
We start analyzing the square submatrices of the Hadamard matrices. By permuting rows and columns, we can always reduce the problem to the following situation:
[math]D\in M_d(\pm1)[/math] is called a submatrix of [math]H\in M_N(\pm1)[/math] if we have [math]H=\begin{pmatrix}A&B\\ C&D\end{pmatrix}[/math], with [math]A\in M_r(\pm1)[/math], where [math]r=N-d[/math].
Observe that any [math]D\in M_2(\pm1)[/math] having distinct columns appears as a submatrix of [math]W_4[/math], and that any [math]D\in M_2(\pm1)[/math] appears as a submatrix of [math]W_8[/math]. In fact, we have:
Let [math]D\in M_d(\pm1)[/math] be an arbitrary sign matrix.
- If [math]D[/math] has distinct columns, then [math]D[/math] appears as a submatrix of [math]W_N[/math], with [math]N=2^d[/math].
- In general, [math]D[/math] appears as submatrix of [math]W_M[/math], with [math]M=2^{d+[\log_2d]}[/math].
This is something elementary, as follows:
(1) Set [math]N=2^d[/math]. If we use length [math]d[/math] bit strings [math]x,y\in\{0,1\}^d[/math] as indices, then the Walsh matrix is given by [math](W_N)_{xy}=(-1)^{\langle x,y\rangle}[/math].
Let [math]\widetilde{W}_N\in M_{d\times N}(\pm1)[/math] be the submatrix of [math]W_N[/math] having as row indices the strings of the following type:
Then for [math]i\in\{1,\ldots,d\}[/math] and [math]y\in\{0,1\}^d[/math], we have:
Thus the columns of [math]\widetilde{W}_N[/math] are the [math]N[/math] elements of [math]\{\pm 1\}^d[/math], which gives the result.
(2) Set [math]R=2^{[\log_2d]}\geq d[/math]. Since the first row of [math]W_R[/math] contains only ones, [math]W_R\otimes W_N[/math] contains as a submatrix [math]R[/math] copies of [math]\widetilde{W}_N[/math], in which [math]D[/math] can be embedded, as desired.
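The proof of (1) can be illustrated in code: keeping the rows of [math]W_N[/math] indexed by the basis strings [math]e_i[/math], whose entries are [math](-1)^{y_i}[/math], the columns of the resulting [math]d\times N[/math] submatrix exhaust [math]\{\pm1\}^d[/math]. A sketch in Python:

```python
from itertools import product

d = 3
N = 2 ** d   # W_N has entries (-1)^{<x,y>}, with x, y in {0,1}^d

strings = list(product([0, 1], repeat=d))

# Rows of W_N indexed by the basis strings e_i = (0,..,1,..,0):
# the (e_i, y) entry of W_N is (-1)^{y_i}.
W_tilde = [[(-1) ** y[i] for y in strings] for i in range(d)]

# The N columns of this d x N submatrix are exactly the
# N elements of {+-1}^d, so any sign matrix with distinct
# columns embeds into it, up to a permutation of columns.
cols = {tuple(col) for col in zip(*W_tilde)}
assert cols == set(product([1, -1], repeat=d))
```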
Let us go back now to Definition 4.19, and try to relate the matrices [math]A,D[/math] appearing there. The following result, due to Szöllősi [11], is a first one in this direction:
Assuming that a square matrix
- The singular values of [math]A,D[/math] are identical, up to [math]|r-d|[/math] values of [math]1[/math].
- [math]\det A=\det U\cdot\overline{\det D}[/math], so in particular, [math]|\det A|=|\det D|[/math].
Here is a simplified proof. From the unitarity of [math]U[/math] we get:
(1) This follows from the first two equations, and from the well-known fact that the matrices [math]CC^*,C^*C[/math] have the same eigenvalues, up to [math]|r-d|[/math] values of [math]0[/math].
(2) By using the above unitarity equations, we have:
The result follows then by taking determinants.
We want to find a formula for the polar decomposition of [math]D[/math]. Let us introduce:
Associated to any [math]A\in M_r(\pm1)[/math] are the matrices
Observe that, in terms of the polar decomposition [math]A=VP[/math], we have:
The idea now will be that, under the assumptions of Theorem 4.21, the polar parts of the matrices [math]A,D[/math] appearing there should be related by a simple formula, with the passage [math]Pol(A)\to Pol(D)[/math] involving the above matrices [math]X_A,Y_A[/math].
In what follows we will focus on the case where [math]U\in U_N[/math] is replaced by [math]U=H/\sqrt{N}[/math] with [math]H\in M_N(\pm1)[/math] Hadamard. In the non-singular case, following [2], we have:
Assuming that a square matrix
where [math]E=CX_AB[/math] and [math]S=B^tY_AB[/math].
Since [math]H[/math] is Hadamard, we can use the formulae coming from:
We start from the singular value decomposition of [math]A[/math]:
Here [math]V,X \in O_r[/math] and [math]s_i\in(0,||A||][/math]. From [math]AA^t+BB^t=NI_r[/math] we get:
Thus, the singular value decomposition of [math]B[/math] is as follows, with [math]Y\in O_d[/math]:
Similarly, from [math]A^tA+C^tC=NI_r[/math] we deduce the singular value decomposition for [math]C[/math], the result being that there exists an orthogonal matrix [math]\widetilde{Z}\in O_d[/math] such that:
From [math]B^tB+D^tD=NI_d[/math] we obtain:
Thus the polar decomposition of [math]D[/math] reads:
Let [math]Z=UY[/math]. By using the orthogonality relation [math]CA^t+DB^t=0_{d \times r}[/math], we obtain:
From the assumptions of our theorem, we have the following inequality:
Thus [math]Z^t\widetilde Z = I_r \oplus Q[/math], for some orthogonal matrix [math]Q \in O_d[/math]. Plugging [math]\widetilde Z = Z(I_r \oplus Q)[/math] in the singular value decomposition formula for [math]C[/math], we obtain:
To summarize, we have found [math]V,X \in O_r[/math] and [math]Y,Z \in O_d[/math] such that:
Now with [math]U,T,E,S[/math] defined as in the statement, we obtain:
Thus we have [math]E=CX_AB[/math], as claimed. Also, we have:
Thus, we have as well [math]S=B^tY_AB[/math], as claimed, and we are done.
Observe that, in the above statement, in the case where the size of the upper left block satisfies [math]r \lt \sqrt N[/math], the condition [math]||A|| \lt \sqrt N[/math] is automatically satisfied. Our claim now is that all this is related to the notion of almost Hadamard matrix, from chapter 3. To be more precise, still following [2], let us introduce the following notion:
A sign matrix [math]S\in M_N(\pm1)[/math] is called an almost Hadamard sign pattern (AHP) if it appears as [math]S_{ij}=sgn(H_{ij})[/math], for a certain almost Hadamard matrix [math]H\in M_N(\mathbb R)[/math].
Observe that, due to the theory in chapter 3, if a sign matrix [math]S[/math] is an AHP, then there exists a unique almost Hadamard matrix [math]H[/math] such that [math]S_{ij}=sgn(H_{ij})[/math], namely:
Getting back to Proposition 4.23, let us try to find out when [math]D[/math] is AHP. For this purpose, we must estimate the quantity [math]||E||_\infty=\max_{ij}|E_{ij}|[/math], and we have here:
Assuming that a matrix
- [math]||E||_\infty\leq\frac{r\sqrt{r}}{\sqrt{r}+\sqrt{N}}[/math] when [math]A[/math] is Hadamard.
- [math]||E||_\infty\leq\frac{r^2c\sqrt{N}}{N-r^2}[/math] if [math]r^2 \lt N[/math], with [math]c=||Pol(A)-\frac{A}{\sqrt{N}}||_\infty[/math].
- [math]||E||_\infty\leq\frac{r^2(1+\sqrt{N})}{N-r^2}[/math] if [math]r^2 \lt N[/math].
We use the basic fact that for two rectangular matrices which are multipliable, [math]X\in M_{p\times r}(\mathbb C)[/math] and [math]Y\in M_{r\times q}(\mathbb C)[/math], we have the following estimate:
Thus, according to Proposition 4.23, we have:
(1) If [math]A[/math] is Hadamard, [math]AA^t =rI_r[/math], [math]Pol(A)=A/\sqrt{r}[/math] and thus:
We therefore obtain from this:
But this gives the result.
(2) According to the definition of [math]X_A[/math], we have:
We therefore obtain the following estimate:
Now by using [math]||A^tA||_\infty\leq r[/math], we obtain:
Thus we have the following estimate:
But this gives the result.
(3) This follows from (2), because:
The proof is now complete.
Following [2], we can now state and prove a main result, as follows:
Assume that a matrix
- If [math]A[/math] is Hadamard, and [math]N \gt r(r-1)^2[/math], then [math]D[/math] is AHP.
- If [math]N \gt \frac{r^2}{4}(x+\sqrt{x^2+4})^2[/math], where [math]x=r||Pol(A)-\frac{A}{\sqrt{N}}||_\infty[/math], then [math]D[/math] is AHP.
- If [math]N \gt \frac{r^2}{4}(r+\sqrt{r^2+8})^2[/math], then [math]D[/math] is AHP.
This follows from the various estimates that we have, as follows:
(1) This follows from Proposition 4.25 (1), because:
(2) This follows from Proposition 4.25 (2), because:
Indeed, this is equivalent to:
Here the value of [math]x[/math] is as follows:
(3) This follows from Proposition 4.25 (3), because:
Indeed, this is equivalent to:
But this gives the result.
As a technical comment, for [math]A\in M_r(\pm1)[/math] Hadamard, Proposition 4.25 (2) gives:
Thus [math]||E||_\infty \lt 1[/math] for [math]N \gt r^3[/math], which is slightly weaker than Theorem 4.26 (1).
In view of the results above, it is convenient to make the following convention:
We denote by [math]\{x\}_{m \times n} \in M_{m \times n}(\mathbb R)[/math] the all-[math]x[/math] matrix, and by
Modulo equivalence, the [math]\pm1[/math] matrices of size [math]r=1,2[/math] are as follows:
In the cases [math](1)[/math] and [math](2)[/math] above, where the matrix [math]A[/math] is invertible, the spectral properties of their complementary matrices are as follows:
For the [math]N\times N[/math] Hadamard matrices of type
For [math]A\in M_r(\pm1)[/math] Hadamard, the quantities in Definition 4.22 are:
These formulae follow indeed from the following equalities:
(1) Using the notation introduced in Definition 4.27, we have here:
Since the matrix [math]A_{(1)}=[+][/math] is Hadamard we have:
We therefore obtain that:
Similarly, we obtain that:
(2) Using the orthogonality of the first two rows of [math]H_{(2)}[/math], we find that the matrices [math]D_{00}[/math] and [math]D_{11}[/math] have size [math]N/2-1[/math]. Since the matrix [math]A_{(2)}=\begin{pmatrix}+&+\\ +&-\end{pmatrix}[/math] is Hadamard we have:
But this gives the following formula:
Similarly, we obtain the following formula:
Thus, we have obtained the formulae in the statement.
We refer to [2] for more on all the above.
General references
Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].
References
- W. de Launey and D.A. Levin, A Fourier-analytic approach to counting partial Hadamard matrices, Cryptogr. Commun. 2 (2010), 307--334.
- T. Banica, I. Nechita and J.M. Schlenker, Submatrices of Hadamard matrices: complementation results, Electron. J. Linear Algebra 27 (2014), 197--212.
- J.J. Sylvester, Thoughts on inverse orthogonal matrices, simultaneous sign-successions, and tesselated pavements in two or more colours, with applications to Newton's rule, ornamental tile-work, and the theory of numbers, Phil. Mag. 34 (1867), 461--475.
- J. Hadamard, Résolution d'une question relative aux déterminants, Bull. Sci. Math. 2 (1893), 240--246.
- M. Hall, Integral matrices [math]A[/math] for which [math]AA^T=mI[/math], in “Number Theory and Algebra”, Academic Press (1977), 119--134.
- N. Ito, Hadamard Graphs I, Graphs Combin. 1 (1985), 57--64.
- E. Verheiden, Integral and rational completions of combinatorial matrices, J. Combin. Theory Ser. A 25 (1978) 267--276.
- T. Banica and A. Skalski, The quantum algebra of partial Hadamard matrices, Linear Algebra Appl. 469 (2015), 364--380.
- L.B. Richmond and J. Shallit, Counting abelian squares, Electron. J. Combin. 16 (2009), 1--9.
- C. Koukouvinos, M. Mitrouli and J. Seberry, An algorithm to find formulae and values of minors for Hadamard matrices, Linear Algebra Appl. 330 (2001), 129--147.
- F. Szöllősi, Exotic complex Hadamard matrices and their equivalence, Cryptogr. Commun. 2 (2010), 187--198.