6d. Partial matrices


As already mentioned after Conjecture 6.11, one way of getting away from the above algebraic difficulties is by doing [math]N \gt \gt 0[/math] analysis for the partial Hadamard matrices, with counting results in the spirit of those of de Launey-Levin [1]. Let us start with:

Definition

A partial Butson matrix (PBM) is a matrix

[[math]] H\in M_{M\times N}(\mathbb Z_q) [[/math]]

having its rows pairwise orthogonal, where [math]\mathbb Z_q\subset\mathbb C^\times[/math] is the group of [math]q[/math]-th roots of unity.

Two PBMs are called equivalent if one can pass from one to the other by permuting the rows and columns, or by multiplying the rows and columns by numbers in [math]\mathbb Z_q[/math]. Up to this equivalence, we can assume that [math]H[/math] is dephased, in the sense that its first row consists of [math]1[/math] entries only. We can also put [math]H[/math] in “standard form”, as follows:

Definition

We say that a partial Butson matrix [math]H\in M_{M\times N}(\mathbb Z_q)[/math] is in standard form if the low powers of

[[math]] w=e^{2\pi i/q} [[/math]]
are moved to the left as much as possible, by proceeding from top to bottom.

Let us first try to understand the case [math]M=2[/math]. Here a dephased partial Butson matrix [math]H\in M_{2\times N}(\mathbb Z_q)[/math] must look as follows, with [math]\lambda_i\in\mathbb Z_q[/math] satisfying [math]\lambda_1+\ldots+\lambda_N=0[/math]:

[[math]] H=\begin{pmatrix}1&\ldots&1\\ \lambda_1&\ldots&\lambda_N\end{pmatrix} [[/math]]


With [math]q=p_1^{k_1}\ldots p_s^{k_s}[/math], we must have, according to Lam and Leung [2]:

[[math]] N\in p_1\mathbb N+\ldots+p_s\mathbb N [[/math]]
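This obstruction can be verified by brute force at small parameters. The sketch below, with a hypothetical helper `admits_vanishing_sum`, checks numerically which lengths [math]N[/math] admit a vanishing sum of [math]q[/math]-th roots of unity, at [math]q=6[/math] and [math]q=5[/math]:

```python
from itertools import combinations_with_replacement
import cmath

def admits_vanishing_sum(q, N, tol=1e-9):
    # Is there some length-N multiset of q-th roots of unity summing to zero?
    w = [cmath.exp(2j * cmath.pi * k / q) for k in range(q)]
    return any(abs(sum(w[k] for k in c)) < tol
               for c in combinations_with_replacement(range(q), N))

# q = 6 = 2*3: admissible lengths lie in 2N + 3N, i.e. all N >= 2
print([N for N in range(1, 9) if admits_vanishing_sum(6, N)])
# q = 5 prime: admissible lengths are the multiples of 5
print([N for N in range(1, 9) if admits_vanishing_sum(5, N)])
```

For [math]q=6[/math] this prints all lengths [math]2,\ldots,8[/math], illustrating that the obstruction disappears past [math]N=p_1p_2[/math].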


Observe however that at [math]s\geq 2[/math] this obstruction disappears at [math]N\geq p_1p_2[/math]. With this discussion made, let us now get into the prime power case. We have:

Proposition

When [math]q=p^k[/math] is a prime power, the standard form of the dephased partial Butson matrices at [math]M=2[/math] is

[[math]] H=\begin{pmatrix} 1&1&\ldots&1&\ldots&\ldots&1&1&\ldots&1\\ \underbrace{1}_{a_1}&\underbrace{w}_{a_2}&\ldots&\underbrace{w^{q/p-1}}_{a_{q/p}}&\ldots&\ldots&\underbrace{w^{q-q/p}}_{a_1}&\underbrace{w^{q-q/p+1}}_{a_2}&\ldots&\underbrace{w^{q-1}}_{a_{q/p}} \end{pmatrix} [[/math]]
where [math]w=e^{2\pi i/q}[/math] and where [math]a_1,\ldots,a_{q/p}\in\mathbb N[/math] are multiplicities, summing up to [math]N/p[/math].


Proof

Indeed, it is well-known that for [math]q=p^k[/math] the solutions of [math]\lambda_1+\ldots+\lambda_N=0[/math] with [math]\lambda_i\in\mathbb Z_q[/math] are, up to permutations of the terms, exactly those in the statement.
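As an illustration, at [math]q=4[/math] the proposition predicts that the vanishing sums of length [math]N[/math] correspond to multiplicity vectors [math](a_1,a_2,a_1,a_2)[/math] with [math]a_1+a_2=N/2[/math]. The brute-force sketch below (hypothetical names) confirms the resulting count at [math]N=6[/math]:

```python
from itertools import product
from math import factorial

q, N = 4, 6
roots = [1, 1j, -1, -1j]  # the 4th roots of unity i^k

# Brute-force count of rows in Z_4^N whose entries sum to zero
brute = sum(abs(sum(roots[k] for k in row)) < 1e-9
            for row in product(range(q), repeat=N))

# Count predicted by the standard form: multiplicities (a, N/2-a, a, N/2-a)
def multinom(parts):
    out = factorial(sum(parts))
    for x in parts:
        out //= factorial(x)
    return out

predicted = sum(multinom([a, N // 2 - a, a, N // 2 - a])
                for a in range(N // 2 + 1))
print(brute, predicted)  # the two counts agree
```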

Now with Proposition 6.28 in hand, we can prove:

Theorem

When [math]q=p^k[/math] is a prime power, the probability for a randomly chosen [math]H\in M_{2\times N}(\mathbb Z_q)[/math], with [math]N\in p\mathbb N[/math], [math]N\to\infty[/math], to be partial Butson is:

[[math]] P_2\simeq\sqrt{\frac{p^{2-\frac{q}{p}}q^{q-\frac{q}{p}}}{(2\pi N)^{q-\frac{q}{p}}}} [[/math]]


Proof

First, the probability [math]P_M[/math] for a random [math]H\in M_{M\times N}(\mathbb Z_q)[/math] to be PBM is:

[[math]] P_M=\frac{1}{q^{MN}}\#PBM_{M\times N} [[/math]]


Thus, according to Proposition 6.28, we have the following formula:

[[math]] \begin{eqnarray*} P_2 &=&\frac{1}{q^N}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N}{\underbrace{a_1\ldots a_1}_p\ldots\ldots\underbrace{a_{q/p}\ldots a_{q/p}}_p}\\ &=&\frac{1}{q^N}\binom{N}{\underbrace{N/p\ldots N/p}_p}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N/p}{a_1\ldots a_{q/p}}^p\\ &=&\frac{1}{p^N}\binom{N}{\underbrace{N/p\ldots N/p}_p}\times\frac{1}{(q/p)^N}\sum_{a_1+\ldots +a_{q/p}=N/p}\binom{N/p}{a_1\ldots a_{q/p}}^p \end{eqnarray*} [[/math]]


Now by using the Stirling formula for the left term, and the basic multinomial sum estimate from chapter 4 with [math]s=q/p[/math] and [math]n=N/p[/math] for the right term, we obtain:

[[math]] \begin{eqnarray*} P_2 &=&\sqrt{\frac{p^p}{(2\pi N)^{p-1}}}\times\sqrt{\frac{(q/p)^{\frac{q}{p}(p-1)}}{p^{\frac{q}{p}-1}(2\pi N/p)^{(\frac{q}{p}-1)(p-1)}}}\\ &=&\sqrt{\frac{p^{p-\frac{q}{p}(p-1)-\frac{q}{p}+1+(\frac{q}{p}-1)(p-1)}q^{\frac{q}{p}(p-1)}}{(2\pi N)^{p-1+(\frac{q}{p}-1)(p-1)}}}\\ &=&\sqrt{\frac{p^{2-\frac{q}{p}}q^{q-\frac{q}{p}}}{(2\pi N)^{q-\frac{q}{p}}}} \end{eqnarray*} [[/math]]


Thus we have obtained the formula in the statement, and we are done.
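As a numerical sanity check, note that at [math]q=p[/math] prime we have [math]q/p=1[/math], so the estimate reduces to [math]P_2\simeq\sqrt{p^p/(2\pi N)^{p-1}}[/math], while the exact probability is [math]\binom{N}{N/p,\ldots,N/p}/p^N[/math]. The sketch below (hypothetical names) compares the two:

```python
from math import comb, pi, sqrt

def exact_P2(p, N):
    # For q = p prime, the second row must carry each p-th root of unity
    # exactly N/p times, so #PBM = N!/((N/p)!)^p and P_2 = #PBM / p^N
    count, m = 1, N
    for _ in range(p):
        count *= comb(m, N // p)
        m -= N // p
    return count / p**N

p, N = 3, 3000
ratio = exact_P2(p, N) / sqrt(p**p / (2 * pi * N)**(p - 1))
print(ratio)  # tends to 1 as N grows
```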

Let us discuss now the case where [math]M=2[/math], and [math]q=p_1^{k_1}p_2^{k_2}[/math] has two prime factors. We first examine the simplest such case, namely [math]q=p_1p_2[/math], with [math]p_1,p_2[/math] primes:

Proposition

When [math]q=p_1p_2[/math] is a product of distinct primes, the standard form of the dephased partial Butson matrices at [math]M=2[/math] is

[[math]] H=\begin{pmatrix} 1&1&\ldots&1&\ldots&\ldots&1&1&\ldots&1\\ \underbrace{1}_{A_{11}}&\underbrace{w}_{A_{12}}&\ldots&\underbrace{w^{p_2-1}}_{A_{1p_2}}&\ldots&\ldots&\underbrace{w^{q-p_2}}_{A_{p_11}}&\underbrace{w^{q-p_2+1}}_{A_{p_12}}&\ldots&\underbrace{w^{q-1}}_{A_{p_1p_2}} \end{pmatrix} [[/math]]
where [math]w=e^{2\pi i/q}[/math], and [math]A\in M_{p_1\times p_2}(\mathbb N)[/math] is of the form [math]A_{ij}=B_i+C_j[/math], with [math]B_i,C_j\in\mathbb N[/math].


Proof

We use the fact that for [math]q=p_1p_2[/math] any vanishing sum of [math]q[/math]-th roots of unity decomposes as a sum of cycles. Now if we denote by [math]B_i,C_j\in\mathbb N[/math] the multiplicities of the various [math]p_2[/math]-cycles and [math]p_1[/math]-cycles, then we must have [math]A_{ij}=B_i+C_j[/math], as claimed.

Regarding now the matrices of type [math]A_{ij}=B_i+C_j[/math], when taking them over integers, [math]B_i,C_j\in\mathbb Z[/math], these form a lattice of rank [math]d=p_1+p_2-1[/math]. Given [math]A\in M_{p_1\times p_2}(\mathbb Z)[/math], the “test” for deciding if we have [math]A_{ij}=B_i+C_j[/math] or not is:

[[math]] A_{ij}+A_{kl}=A_{il}+A_{kj} [[/math]]
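Over [math]\mathbb Z[/math], this decomposability test can be sketched in code; the helper name below is hypothetical, and the condition used is the equivalent “all [math]2\times 2[/math] cross sums agree” criterion:

```python
def decomposes_over_Z(A):
    # A[i][j] = B[i] + C[j] for some integer vectors B, C
    # <=> all 2x2 cross sums agree, i.e. rows differ by a constant
    rows, cols = len(A), len(A[0])
    return all(A[i][j] + A[k][l] == A[i][l] + A[k][j]
               for i in range(rows) for k in range(rows)
               for j in range(cols) for l in range(cols))

print(decomposes_over_Z([[1, 2, 4], [3, 4, 6]]))  # B=(0,2), C=(1,2,4): True
print(decomposes_over_Z([[1, 2, 4], [3, 4, 7]]))  # no such B, C: False
```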


The problem comes of course from the assumption [math]B_i,C_j\geq0[/math], which is quite a subtle one. In what follows we restrict attention to the case [math]p_1=2[/math]. Here we have:

Theorem

For [math]q=2p[/math] with [math]p\geq 3[/math] prime, [math]P_2[/math] equals the probability for a random walk on [math]\mathbb Z^p[/math] to end up on the diagonal, i.e. at a position of type [math](t,\ldots,t)[/math], with [math]t\in\mathbb Z[/math].


Proof

According to Proposition 6.30, we must understand the structure of the matrices [math]A\in M_{2\times p}(\mathbb N)[/math] which decompose as follows, with [math]B_i,C_j\geq0[/math]:

[[math]] A_{ij}=B_i+C_j [[/math]]


But this is an easy task, because depending on how [math]A_{11}[/math] compares to [math]A_{21}[/math], we have 3 types of solutions, as follows:

[[math]] \begin{pmatrix} a_1&\ldots&a_p\\ a_1&\ldots&a_p \end{pmatrix}\quad,\quad \begin{pmatrix} a_1&\ldots&a_p\\ a_1+t&\ldots&a_p+t \end{pmatrix}\quad,\quad \begin{pmatrix} a_1+t&\ldots&a_p+t\\ a_1&\ldots&a_p \end{pmatrix} [[/math]]


Here [math]a_i\geq0[/math] and [math]t\geq1[/math]. Now since cases 2,3 contribute in the same way, we obtain:

[[math]] \begin{eqnarray*} P_2 &=&\frac{1}{(2p)^N}\sum_{2\Sigma a_i=N}\binom{N}{a_1,a_1,\ldots,a_p,a_p}\\ &+&\frac{2}{(2p)^N}\sum_{t\geq1}\sum_{2\Sigma a_i+pt=N}\binom{N}{a_1,a_1+t,\ldots,a_p,a_p+t} \end{eqnarray*} [[/math]]


We can write this formula in a more compact way, as follows:

[[math]] P_2=\frac{1}{(2p)^N}\sum_{t\in\mathbb Z}\sum_{2\Sigma a_i+p|t|=N}\binom{N}{a_1,a_1+|t|,\ldots,a_p,a_p+|t|} [[/math]]


Now since the sum on the right, when rescaled by [math]\frac{1}{(2p)^N}[/math], is exactly the probability for a random walk on [math]\mathbb Z^p[/math] to end up at [math](t,\ldots,t)[/math], this gives the result.
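This identification can be double-checked by brute force at small [math]N[/math]. The sketch below (hypothetical names, [math]q=6[/math], [math]p=3[/math]) compares the direct probability of a vanishing sum of [math]6[/math]th roots of unity with the walk-formula count:

```python
from itertools import product
from math import factorial
import cmath

q, p, N = 6, 3, 6
w = [cmath.exp(2j * cmath.pi * k / q) for k in range(q)]

# Direct count: second rows in Z_6^N whose entries sum to zero
brute = sum(abs(sum(w[k] for k in row)) < 1e-9
            for row in product(range(q), repeat=N)) / q**N

# Walk formula: multiplicity vectors (a_1, a_1+|t|, ..., a_p, a_p+|t|)
def multinom(parts):
    out = factorial(sum(parts))
    for x in parts:
        out //= factorial(x)
    return out

walk = 0
for t in range(-N // p, N // p + 1):
    r = N - p * abs(t)
    if r < 0 or r % 2:
        continue
    for a in product(range(r // 2 + 1), repeat=p):
        if sum(a) == r // 2:
            walk += multinom([x for ai in a for x in (ai, ai + abs(t))])
walk /= q**N

print(brute, walk)  # the two probabilities agree
```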

According to the above result we have [math]P_2=\sum_{t\in\mathbb Z}P_2^{(t)}[/math], where [math]P_2^{(t)}[/math] with [math]t\in\mathbb Z[/math] is the probability for a random walk on [math]\mathbb Z^p[/math] to end up at [math](t,\ldots,t)[/math]. By using the basic binomial sum estimate of Richmond-Shallit [3], explained in chapter 4, we obtain:

[[math]] \begin{eqnarray*} P_2^{(0)} &=&\frac{1}{(2p)^N}\binom{N}{N/2}\sum_{a_1+\ldots+a_p=N/2}\binom{N/2}{a_1,\ldots,a_p}^2\\ &\simeq&\sqrt{\frac{2}{\pi N}}\times\sqrt{\frac{p^p}{2^{p-1}(\pi N)^{p-1}}}\\ &=&2\sqrt{\left(\frac{p}{2\pi N}\right)^p} \end{eqnarray*} [[/math]]


Regarding now the probability [math]P_2^{(t)}[/math] of ending up at [math](t,\ldots,t)[/math], in principle for small [math]t[/math] this can be estimated by using a modification of the method in [3]. However, it is not clear how to compute the full diagonal return probability in Theorem 6.31.


Let us discuss now the exponents [math]q=3p[/math]. The same method as in the proof of Theorem 6.31 works, with the “generic” solution for [math]A[/math] being as follows:

[[math]] A=\begin{pmatrix} a_1&\ldots&a_p\\ a_1+t&\ldots&a_p+t\\ a_1+s+t&\ldots&a_p+s+t\\ \end{pmatrix} [[/math]]


More precisely, this type of solution, with [math]s,t\geq1[/math], must be counted 6 times, then its [math]s=0,t\geq1[/math] and [math]s\geq1,t=0[/math] particular cases must be counted 3 times each, and finally the [math]s=t=0[/math] case must be counted once. Observe that the [math]s=t=0[/math] contribution is:

[[math]] \begin{eqnarray*} P_3^{(0,0)} &=&\frac{1}{(3p)^N}\binom{N}{N/3,N/3,N/3}\sum_{a_1+\ldots+a_p=N/3}\binom{N/3}{a_1,\ldots,a_p}^3\\ &\simeq&\sqrt{\frac{27}{(2\pi N)^2}}\times\sqrt{\frac{p^{2p}}{3^{p-1}(2\pi N/3)^{2(p-1)}}}\\ &=&3\sqrt{3^p}\left(\frac{p}{2\pi N}\right)^p \end{eqnarray*} [[/math]]

Finally, regarding arbitrary exponents with two prime factors, we have:

Proposition

When [math]q=p_1^{k_1}p_2^{k_2}[/math] has exactly two prime factors, the dephased partial Butson matrices at [math]M=2[/math] are indexed by the solutions of

[[math]] A_{ij,xy}=B_{ijy}+C_{jxy} [[/math]]
with [math]B_{ijy},C_{jxy}\in\mathbb N[/math], with [math]i\in\mathbb Z_{p_1}[/math], [math]j\in\mathbb Z_{p_1^{k_1-1}}[/math], [math]x\in\mathbb Z_{p_2}[/math], [math]y\in\mathbb Z_{p_2^{k_2-1}}[/math].


Proof

We follow the method in the proof of Proposition 6.30. First, according to Lam-Leung [2], for [math]q=p_1^{k_1}p_2^{k_2}[/math] any vanishing sum of [math]q[/math]-th roots of unity decomposes as a sum of cycles. Let us first work out a simple particular case, namely [math]q=4p[/math]. Here the multiplicity matrices [math]A\in M_{4\times p}(\mathbb N)[/math] appear as follows:

[[math]] A=\begin{pmatrix}B_1&\ldots&B_1\\ B_2&\ldots&B_2\\ B_3&\ldots&B_3\\ B_4&\ldots&B_4\end{pmatrix}+ \begin{pmatrix}C_1&\ldots&C_p\\ D_1&\ldots&D_p\\ C_1&\ldots&C_p\\ D_1&\ldots&D_p\end{pmatrix} [[/math]]


Thus, if we use double binary indices for the elements of [math]\{1,2,3,4\}[/math], the condition is:

[[math]] A_{ij,x}=B_{ij}+C_{jx} [[/math]]


The same method works for any exponent of type [math]q=p_1^{k_1}p_2^{k_2}[/math], the formula being:

[[math]] A_{i_1\ldots i_{k_1},x_1\ldots x_{k_2}}=B_{i_1\ldots i_{k_1},x_2\ldots x_{k_2}}+C_{i_2\ldots i_{k_1},x_1\ldots x_{k_2}} [[/math]]


But this gives the formula in the statement, and we are done.

At [math]M=3[/math] now, we first restrict attention to the case where [math]q=p[/math] is prime. In this case, the general result in Proposition 6.28 becomes simply:

[[math]] H=\begin{pmatrix} 1&1&\ldots&1\\ \underbrace{1}_a&\underbrace{w}_a&\ldots&\underbrace{w^{p-1}}_a \end{pmatrix} [[/math]]


We call a matrix [math]A\in M_p(\mathbb N)[/math] “tristochastic” if the sums on its rows, columns and diagonals are all equal. Here, and in what follows, we call “diagonals” the main diagonal, and its [math]p-1[/math] translates to the right, obtained by using modulo [math]p[/math] indices. With this convention, here is now the result at [math]M=3[/math]:

Proposition

For [math]p[/math] prime, the standard form of dephased PBM at [math]M=3[/math] is

[[math]] H=\begin{pmatrix} 1&1&\ldots&1&\ldots&\ldots&1&1&\ldots&1\\ 1&1&\ldots&1&\ldots&\ldots&w^{p-1}&w^{p-1}&\ldots&w^{p-1}\\ \underbrace{1}_{A_{11}}&\underbrace{w}_{A_{12}}&\ldots&\underbrace{w^{p-1}}_{A_{1p}}&\ldots&\ldots&\underbrace{1}_{A_{p1}}&\underbrace{w}_{A_{p2}}&\ldots&\underbrace{w^{p-1}}_{A_{pp}} \end{pmatrix} [[/math]]
where [math]w=e^{2\pi i/p}[/math] and where [math]A\in M_p(\mathbb N)[/math] is tristochastic, with sums [math]N/p[/math].


Proof

Consider a dephased matrix [math]H\in M_{3\times N}(\mathbb Z_p)[/math], written in standard form as in the statement. Then the orthogonality conditions between the rows are as follows:


[math]1\perp 2[/math] means [math]A_{11}+\ldots+A_{1p}=A_{21}+\ldots+A_{2p}=\ldots\ldots=A_{p1}+\ldots+A_{pp}[/math].


[math]1\perp 3[/math] means [math]A_{11}+\ldots+A_{p1}=A_{12}+\ldots+A_{p2}=\ldots\ldots=A_{1p}+\ldots+A_{pp}[/math].


[math]2\perp 3[/math] means [math]A_{11}+\ldots+A_{pp}=A_{12}+\ldots+A_{p1}=\ldots\ldots=A_{1p}+\ldots+A_{p,p-1}[/math].


Thus [math]A[/math] must have constant sums on rows, columns and diagonals, as claimed.
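These orthogonality conditions can be checked numerically: the sketch below (hypothetical names) builds the standard-form matrix associated to an arbitrarily chosen circulant, hence tristochastic, matrix [math]A[/math] at [math]p=3[/math], and verifies that its rows are pairwise orthogonal:

```python
import cmath

p = 3
w = cmath.exp(2j * cmath.pi / p)
# A circulant matrix of type (a,b,c)/(b,c,a)/(c,a,b): tristochastic, sums 6
A = [[2, 1, 3],
     [1, 3, 2],
     [3, 2, 1]]
N = sum(map(sum, A))  # N = 18

# Columns of the standard-form matrix H: (1, w^i, w^j), multiplicity A[i][j]
cols = [(1, w**i, w**j)
        for i in range(p) for j in range(p)
        for _ in range(A[i][j])]
rows = list(zip(*cols))

inner = lambda u, v: sum(x * y.conjugate() for x, y in zip(u, v))
print([abs(inner(rows[r], rows[s])) < 1e-9
       for r in range(3) for s in range(r + 1, 3)])  # [True, True, True]
```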

It is quite unclear how to deal with the tristochastic matrices with bare hands. For the moment, let us just record a few elementary results:

Proposition

For [math]p=2,3[/math], the standard form of the dephased PBM at [math]M=3[/math] is respectively as follows, with [math]w=e^{2\pi i/3}[/math] and [math]a+b+c=N/3[/math] at [math]p=3[/math]:

[[math]] H=\begin{pmatrix}+&+&+&+\\+&+&-&-\\\underbrace{+}_{N/4}&\underbrace{-}_{N/4}&\underbrace{+}_{N/4}&\underbrace{-}_{N/4}\end{pmatrix} [[/math]]

[[math]] H=\begin{pmatrix} 1&1&1&1&1&1&1&1&1\\ 1&1&1&w&w&w&w^2&w^2&w^2\\ \underbrace{1}_a&\underbrace{w}_b&\underbrace{w^2}_c&\underbrace{1}_b&\underbrace{w}_c&\underbrace{w^2}_a&\underbrace{1}_c&\underbrace{w}_a&\underbrace{w^2}_b \end{pmatrix} [[/math]]
Also, for [math]p\geq 3[/math] prime and [math]N\in p\mathbb N[/math], there is at least one partial Butson matrix [math]H\in M_{3\times N}(\mathbb Z_p)[/math].


Proof

The idea is that the [math]p=2[/math] assertion follows from Proposition 6.33, and from the fact that the [math]2\times 2[/math] tristochastic matrices are as follows:

[[math]] A=\begin{pmatrix}a&a\\a&a\end{pmatrix} [[/math]]


As for the [math]p=3[/math] assertion, once again the idea is that this follows from Proposition 6.33, and from the fact that the [math]3\times 3[/math] tristochastic matrices are as follows:

[[math]] A=\begin{pmatrix}a&b&c\\ b&c&a\\ c&a&b\end{pmatrix} [[/math]]


Indeed, the [math]p=2[/math] assertion is clear. Regarding now the [math]p=3[/math] assertion, consider an arbitrary [math]3\times 3[/math] bistochastic matrix, written as follows:

[[math]] A=\begin{pmatrix}a&b&n-a-b\\ d&c&n-c-d\\ n-a-d&n-b-c&*\end{pmatrix} [[/math]]


Here [math]*=a+b+c+d-n[/math], but we won't use this value, because one of the 3 diagonal equations is redundant anyway. With these notations in hand, the conditions are:

[[math]] b+(n-c-d)+(n-a-d)=n [[/math]]

[[math]] (n-a-b)+d+(n-b-c)=n [[/math]]


Now since subtracting these equations gives [math]b=d[/math], we obtain the result. Regarding now the last assertion, consider the following [math]p\times p[/math] permutation matrix:

[[math]] A=\begin{pmatrix} 1&&&&\\ &&&&1\\ &&&1&\\ &&\ldots&&\\ &1&&& \end{pmatrix} [[/math]]


Since this matrix is tristochastic for any odd [math]p\geq 3[/math], each diagonal meeting exactly one nonzero entry, and since multiplying it by [math]N/p[/math] produces a tristochastic matrix with sums [math]N/p[/math], this gives the result.

Regarding now the asymptotic count, we have here:

Theorem

For [math]p=2,3[/math], the probability for a randomly chosen

[[math]] H\in M_{3\times N}(\mathbb Z_p) [[/math]]
with [math]N\in p\mathbb N[/math], [math]N\to\infty[/math], to be partial Butson is respectively given by

[[math]] P_3^{(2)}\simeq\begin{cases} \frac{16}{\sqrt{(2\pi N)^3}}&{\rm if}\ N\in4\mathbb N\\ 0&{\rm if}\ N\notin 4\mathbb N\end{cases} [[/math]]
at [math]p=2[/math], and

[[math]] P_3^{(3)}\simeq\frac{243\sqrt{3}}{(2\pi N)^3} [[/math]]
at [math]p=3[/math]. In addition, we have [math]P_3^{(p)} \gt 0[/math] for any [math]N\in p\mathbb N[/math], for any [math]p\geq 3[/math] prime.


Proof

According to Proposition 6.34, and then to the Stirling formula, we have:

[[math]] \begin{eqnarray*} P_3^{(2)} &=&\frac{1}{4^N}\binom{N}{N/4,N/4,N/4,N/4}\\ &\simeq&\frac{16}{\sqrt{(2\pi N)^3}} \end{eqnarray*} [[/math]]


Similarly, by using the basic estimate with [math]s=p=3[/math], [math]n=N/3[/math], we have:

[[math]] \begin{eqnarray*} P_3^{(3)} &=&\frac{1}{9^N}\sum_{a+b+c=N/3}\binom{N}{a,b,c,b,c,a,c,a,b}\\ &=&\frac{1}{3^N}\binom{N}{N/3,N/3,N/3}\times\frac{1}{3^N}\sum_{a+b+c=N/3}\binom{N/3}{a,b,c}^3\\ &\simeq&\frac{3\sqrt{3}}{2\pi N}\cdot\sqrt{\frac{81}{(2\pi N/3)^4}}\\ &=&\frac{243\sqrt{3}}{(2\pi N)^3} \end{eqnarray*} [[/math]]


Finally, the last assertion is clear from the corresponding assertion in Proposition 6.34.
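At [math]p=2[/math] the estimate is easy to test numerically, since by the above the exact probability is a central multinomial coefficient divided by [math]4^N[/math]. A sketch (hypothetical names):

```python
from math import comb, pi, sqrt

def exact_P3_2(N):
    # p = 2: by the proof above, P_3 = multinomial(N; N/4,...,N/4) / 4^N
    assert N % 4 == 0
    count, m = 1, N
    for _ in range(4):
        count *= comb(m, N // 4)
        m -= N // 4
    return count / 4**N

N = 4000
ratio = exact_P3_2(N) / (16 / sqrt((2 * pi * N)**3))
print(ratio)  # tends to 1 as N grows
```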

It is possible to establish a few more results in this direction, making interesting connections with probability. However, the main question regarding the partial Butson matrices remains that of adapting the asymptotic counting methods of de Launey-Levin [1] to the root of unity case. As a preliminary observation here, we have:

Proposition

The probability [math]P_M[/math] for a random [math]H\in M_{M\times N}(\mathbb Z_q)[/math] to be partial Butson equals the probability for a length [math]N[/math] random walk with increments drawn from

[[math]] E=\left\{(e_i\bar{e}_j)_{i \lt j}\Big|e\in\mathbb Z_q^M\right\} [[/math]]
regarded as a subset of [math]\mathbb Z_q^{\binom{M}{2}}[/math], to return to the origin.


Proof

Indeed, with [math]T(e)=(e_i\bar{e}_j)_{i \lt j}[/math], a matrix [math]X=[e_1,\ldots,e_N]\in M_{M\times N}(\mathbb Z_q)[/math] is partial Butson if and only if the following condition is satisfied:

[[math]] T(e_1)+\ldots+T(e_N)=0 [[/math]]


But this leads to the conclusion in the statement.

Observe now that, according to the above result, we have:

[[math]] \begin{eqnarray*} P_M &=&\frac{1}{q^{(M-1)N}}\#\left\{\xi_1,\ldots,\xi_N\in E\Big|\sum_i\xi_i=0\right\}\\ &=&\frac{1}{q^{(M-1)N}}\sum_{\xi_1,\ldots,\xi_N\in E}\delta_{\sum\xi_i,0} \end{eqnarray*} [[/math]]


The problem is to continue this computation, in the spirit of the Fourier-analytic approach of [1]. More precisely, the next step at [math]q=2[/math], which is the key one, is the following application of the Fourier inversion formula:

[[math]] \delta_{\sum\xi_i,0}=\frac{1}{(2\pi)^D}\int_{[-\pi,\pi]^D}e^{i \lt \lambda,\sum\xi_i \gt }d\lambda [[/math]]


Here [math]D=\binom{M}{2}[/math]. The problem is that this formula works when [math]\sum\xi_i[/math] is real, as is the case in the context of [1], but not when [math]\sum\xi_i[/math] is complex, as is the case in Proposition 6.36. As before with other open questions, this is a good question for you, reader.
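As a sanity check on the real case, at [math]D=1[/math] and for an integer [math]s[/math] the inversion formula reads [math]\delta_{s,0}=\frac{1}{2\pi}\int_{-\pi}^\pi e^{i\lambda s}d\lambda[/math], which the sketch below (hypothetical helper) evaluates by a Riemann sum:

```python
import cmath
from math import pi

def delta_via_fourier(s, steps=10000):
    # Riemann sum for (1/(2*pi)) * integral of e^{i*lambda*s} over [-pi, pi]
    h = 2 * pi / steps
    total = sum(cmath.exp(1j * (-pi + k * h) * s) * h for k in range(steps))
    return (total / (2 * pi)).real

print(round(delta_via_fourier(0)), round(delta_via_fourier(3)))  # → 1 0
```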

General references

Banica, Teo (2024). "Invitation to Hadamard matrices". arXiv:1910.06911 [math.CO].

References

  1. W. de Launey and D.A. Levin, A Fourier-analytic approach to counting partial Hadamard matrices, Cryptogr. Commun. 2 (2010), 307--334.
  2. T.Y. Lam and K.H. Leung, On vanishing sums of roots of unity, J. Algebra 224 (2000), 91--109.
  3. L.B. Richmond and J. Shallit, Counting abelian squares, Electron. J. Combin. 16 (2009), 1--9.