15a. Commuting squares
In this chapter and in the next one we discuss a number of more specialized aspects of subfactor theory, making the link with several advanced topics, such as quantum groups, noncommutative geometry, free probability, and more. We will mainly insist on the connections with quantum groups, and with the material from chapters 7-8.
A first question, to be discussed in the present chapter, is the explicit construction of subfactors by using some suitable combinatorial data, encoded in a structure called “commuting square”. Let us start with the following definition:
A commuting square in the sense of subfactor theory is a commuting diagram of finite dimensional algebras with traces, as follows,
This notion is in fact something that we already talked about, in chapter 14, when discussing the classification of the finite depth subfactors, following the work of Ocneanu [1], [2] and Popa [3], [4]. To be more precise, it is possible to prove that any finite depth subfactor of [math]R[/math] appears from a commuting square, and vice versa. And as a well-known consequence of this, the subfactors of [math]R[/math] having index [math] \lt 4[/math], which are all of finite depth, can be shown to be classified by ADE diagrams. But more on this later.
Getting back now to Definition 15.1 as it is, something quite simple, not obviously subfactor related, the idea is that there are many examples of such commuting squares, always coming from subtle combinatorial data. As an illustration of this principle, we have for instance commuting squares associated to the complex Hadamard matrices, that we met in chapter 11, in the maximal abelian subalgebra (MASA) context. In order to discuss this, let us recall from there that, following Popa [5], we have:
Up to conjugation by a unitary, the pairs of orthogonal MASAs in the simplest factor, namely [math]M_N(\mathbb C)[/math], are as follows,
Any maximal abelian subalgebra in [math]M_N(\mathbb C)[/math] being conjugate to [math]\Delta[/math], we can assume, up to conjugation by a unitary, that we have, with [math]U\in U_N[/math]:
But a straightforward computation, explained in chapter 11, shows that the orthogonality condition reformulates as [math]|U_{ij}|=1/\sqrt{N}[/math], which gives the result.
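As a reminder, here is a sketch of that computation, with [math]tr[/math] being the normalized trace, and with the orthogonality of the two MASAs meaning [math]tr(ab)=tr(a)tr(b)[/math], for [math]a,b[/math] in the respective algebras:

```latex
% With a = D = diag(d_1,\ldots,d_N) in \Delta, and b = UEU^*,
% where E = diag(e_1,\ldots,e_N), the two quantities to be compared are:
\begin{align*}
tr(D\cdot UEU^*) &= \frac{1}{N}\sum_{ij}d_i\,|U_{ij}|^2\,e_j\\
tr(D)\,tr(E)     &= \frac{1}{N^2}\sum_{ij}d_i\,e_j
\end{align*}
% Since the scalars d_i, e_j are arbitrary, these two quantities coincide
% precisely when |U_{ij}|^2 = 1/N, for any i,j, as claimed.
```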
As explained in chapter 11, while being something quite trivial, this result remains fundamental, surprising, and very interesting, making the link between the general theory of von Neumann algebras, usually associated to rather lugubrious functional analysis computations, and the complex Hadamard matrices, which are totally opposite beasts, belonging to a wild area of linear algebra and combinatorics. As an illustration here, check the following matrix out, with [math]w=e^{2\pi i/N}[/math]:
This matrix, which is obviously a very beautiful one, I hope you agree with me, is called the Fourier matrix, and is the most basic example of a complex Hadamard matrix. As explained in chapter 11, this is the matrix of the Fourier transform over the cyclic group [math]\mathbb Z_N[/math], and by taking tensor products of such matrices, we obtain the matrices of the Fourier transforms over arbitrary finite abelian groups [math]G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}[/math]:
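As an elementary numerical illustration of all this, here is a quick pure Python check, with the helper names below being of course just illustrative, that the Fourier matrix [math]F_N=(w^{ij})[/math] is indeed a complex Hadamard matrix, in the sense that its entries have modulus 1, and its rows are pairwise orthogonal:

```python
# Quick check that the Fourier matrix F_N = (w^{ij}), with w = e^{2pi*i/N},
# is a complex Hadamard matrix: entries of modulus 1, pairwise orthogonal rows.
# Pure Python sketch, with illustrative helper names.
import cmath

def fourier_matrix(N):
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (i * j) for j in range(N)] for i in range(N)]

def is_complex_hadamard(H, tol=1e-9):
    N = len(H)
    # all entries must be on the unit circle
    if any(abs(abs(x) - 1) > tol for row in H for x in row):
        return False
    # rows must satisfy <row_i, row_j> = N * delta_ij
    return all(abs(sum(H[i][k] * H[j][k].conjugate() for k in range(N))
                   - (N if i == j else 0)) < tol
               for i in range(N) for j in range(N))

print(all(is_complex_hadamard(fourier_matrix(N)) for N in range(2, 9)))  # → True
```

Tensor products of such matrices, corresponding to the arbitrary finite abelian groups, can be checked in exactly the same way.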
But the story does not stop here, with basic discrete Fourier analysis. The complex Hadamard matrices, which can be thought of as being “generalized Fourier matrices”, can be far wilder than that. And among others, above everything, we have:
\begin{conjecture}[Hadamard Conjecture]
There is a real Hadamard matrix
for any [math]N\in 4\mathbb N[/math]. \end{conjecture} Here the condition at the end comes from the fact that, assuming [math]N\geq3[/math], the orthogonality conditions between the first 3 rows give [math]N\in 4\mathbb N[/math]. Observe that the Fourier matrices solve this conjecture only at values [math]N=2^k[/math], by tensoring [math]F_2\in M_2(\pm1)[/math] with itself. For anything else, [math]N=12,20,24,28,36,40,44,48,52,\ldots\,[/math], all sorts of clever constructions are needed, whose complexity grows with [math]N[/math], and with open questions at [math]N \gt 666[/math].
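Regarding the values [math]N=2^k[/math], the construction by tensoring, mentioned above, can be sketched in pure Python as follows, again with the helper names being illustrative:

```python
# Real Hadamard matrices at N = 2^k, obtained by tensoring F_2 with itself,
# as explained above. Pure Python sketch, with illustrative helper names.

def tensor(A, B):
    # Kronecker product of two square matrices, given as lists of rows
    n, m = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

F2 = [[1, 1], [1, -1]]

def walsh(k):
    # k-fold tensor power of F_2, a 2^k x 2^k matrix over {-1, 1}
    H = [[1]]
    for _ in range(k):
        H = tensor(H, F2)
    return H

def is_real_hadamard(H):
    N = len(H)
    entries_ok = all(x in (1, -1) for row in H for x in row)
    rows_ok = all(sum(H[i][k] * H[j][k] for k in range(N)) == (N if i == j else 0)
                  for i in range(N) for j in range(N))
    return entries_ok and rows_ok

print(is_real_hadamard(walsh(4)))  # 16x16 matrix → True
```

The smallest value not covered by this construction is of course [math]N=12[/math], the first entry in the above list of problematic values.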
And the conjecture is more than 100 years old, seemingly undoable. Which puts us in a quite delicate situation with our general von Neumann algebra philosophy:
(1) Generally speaking, classical mathematics looks simpler than quantum mathematics, because you start learning one in high school, and the other one in graduate school. And exactly the same goes with classical mechanics vs quantum mechanics.
(2) At a more advanced level, however, classical mathematics turns out to be something extremely complicated, wild and unpredictable, with all sorts of notorious no-go areas, such as the Riemann Hypothesis, the Jacobian Conjecture, and so on.
(3) Also at the more advanced level, quantum mathematics, like von Neumann algebras, while certainly difficult, looks plainly doable. Open problems always end up being solved, and you can always dismiss the few no-go areas as being “uninteresting”.
(4) And so, we have here evidence that quantum mathematics, while being something complicated of course, is probably simpler than classical mathematics. Again, things difficult, but peaceful horizons, with no black holes like the Riemann Hypothesis.
(5) Which agrees with what happens in physics too, where advanced classical mechanics is the hell on Earth, as opposed to quantum mechanics, where the landscape is rather relaxed, with beautiful results promised to everyone willing to give a serious try.
And so, what to do with these Hadamard matrices, which come via Theorem 15.2 to perturb our philosophy? All of a sudden, our von Neumann algebra theory, or even its foundations, have a hole in them. Job for us to find a way of dealing with these beasts in a conceptual way, and then either solving Conjecture 15.3, or dismissing it as being “uninteresting”. In what regards the first task, subfactors come to the rescue, via:
Given a complex Hadamard matrix [math]H\in M_N(\mathbb C)[/math], the diagram formed by the associated pair of orthogonal maximal abelian subalgebras of [math]M_N(\mathbb C)[/math],
where [math]\Delta\subset M_N(\mathbb C)[/math] are the diagonal matrices, is a commuting square.
The expectation [math]E_\Delta:M_N(\mathbb C)\to\Delta[/math] is the operation [math]M\to M_\Delta[/math] which consists in keeping the diagonal, and erasing the rest. Consider now the other expectation:
It is better to identify this with the following expectation, with [math]U=H/\sqrt{N}[/math]:
This must be given by a formula of type [math]M\to UX_\Delta U^*[/math], with [math]X[/math] satisfying:
The scalar products being given by [math]\langle a,b\rangle=tr(ab^*)[/math], this condition reads:
Thus [math]X=U^*MU[/math], and the formulae of our two expectations are as follows:
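In terms of formulae, the above derivation can be sketched as follows, with [math]tr[/math] being as usual the normalized trace:

```latex
% The expectation onto U\Delta U^* must be of the form M \to UX_\Delta U^*,
% with X = X(M) determined by the orthogonality condition:
\left\langle M-UX_\Delta U^*,\,UDU^*\right\rangle=0\quad,\quad\forall\,D\in\Delta
% With \langle a,b\rangle = tr(ab^*), and by using the unitarity of U:
tr\left((U^*MU-X_\Delta)D^*\right)=0\quad,\quad\forall\,D\in\Delta
% That is, (U^*MU)_\Delta = X_\Delta, so we can take X = U^*MU, and we obtain:
E_\Delta(M)=M_\Delta\quad,\quad E_{U\Delta U^*}(M)=U(U^*MU)_\Delta U^*
```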
With these formulae in hand, we have the following computation:
As for the other composition, the computation here is similar, as follows:
Thus, we have indeed a commuting square, as claimed.
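The above computations can be verified numerically too. Here is a pure Python sketch, at [math]N=3[/math] and for the Fourier matrix, checking that composing the two expectations gives the expectation onto the scalars, [math]M\to tr(M)1[/math], with the helper names being illustrative:

```python
# Numerical check of the commuting square condition, for H = F_3: composing
# the two expectations gives the expectation onto the scalars, M -> tr(M)1.
# Pure Python sketch, with illustrative helper names.
import cmath
import random

def mat_mul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def adjoint(A):
    N = len(A)
    return [[A[j][i].conjugate() for j in range(N)] for i in range(N)]

def diag_part(M):
    N = len(M)
    return [[M[i][j] if i == j else 0 for j in range(N)] for i in range(N)]

def tr(M):
    # normalized trace
    return sum(M[i][i] for i in range(len(M))) / len(M)

N = 3
w = cmath.exp(2j * cmath.pi / N)
U = [[w ** (i * j) / N ** 0.5 for j in range(N)] for i in range(N)]  # U = H/sqrt(N)

def E_delta(M):
    return diag_part(M)

def E_U(M):
    # expectation onto U Delta U^*, namely M -> U (U^* M U)_Delta U^*
    X = mat_mul(mat_mul(adjoint(U), M), U)
    return mat_mul(mat_mul(U, diag_part(X)), adjoint(U))

random.seed(0)
M = [[complex(random.random(), random.random()) for _ in range(N)] for _ in range(N)]

out = E_delta(E_U(M))
ok = all(abs(out[i][j] - (tr(M) if i == j else 0)) < 1e-9
         for i in range(N) for j in range(N))
print(ok)  # → True
```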
To summarize our discussion so far, we had a big scare coming from Popa's Theorem 15.2, but Theorem 15.4, also due to Popa [5], puts our von Neumann algebra theory back on track. We are doing things which are certainly difficult, but somehow “trivial”, meaning never undoable in the long run, and the feared Hadamard matrices are simply particular cases of commuting squares. And so, further studying commuting squares will tell us what is interesting and what is not, regarding these matrices, and so on.
Getting back now to Definition 15.1 as it is, there are many other explicit examples of commuting squares, all coming from subtle combinatorial data, and more on this later. So, leaving aside examples for now, let us explain the connection with subfactors. For this purpose, consider an arbitrary commuting square, as in Definition 15.1:
The point is that, under some suitable extra mild assumptions, any such square [math]C[/math] produces a subfactor of the hyperfinite [math]{\rm II}_1[/math] factor [math]R[/math]. Indeed, by performing the basic construction, in finite dimensions, we obtain a whole array, as follows:
To be more precise, by performing the basic construction in both possible directions, namely to the right and upwards, we obtain a whole array of finite dimensional algebras with traces, that we can denote [math](C_{ij})_{i,j\geq0}[/math], as above. Once this is done, we can further consider the von Neumann algebras obtained in the limit, via GNS construction, on each vertical and horizontal line, and denote them [math]A_i,B_j[/math], as above.
With this convention, we have the following result, due to Ocneanu [1], [2]:
In the context of the above diagram, the limiting von Neumann algebras [math]A_i,B_j[/math] are all isomorphic to the hyperfinite [math]{\rm II}_1[/math] factor [math]R[/math], and:
- [math]A_0\subset A_1[/math] is a subfactor, and [math]\{A_i\}[/math] is the Jones tower for it.
- The corresponding planar algebra is given by [math]A_0'\cap A_k=C_{01}'\cap C_{k0}[/math].
- A similar result holds for the “horizontal” subfactor [math]B_0\subset B_1[/math].
This is something very standard. The factoriality of the limiting von Neumann algebras [math]A_i,B_j[/math] comes as a consequence of the general commutant computation in (2), which is independent of it, and the hyperfiniteness of these same algebras is clear by definition. The idea for the rest is as follows:
(1) This is somewhat clear from definitions, or rather from a quick verification of the basic construction axioms, as formulated in chapter 13, because the tower of algebras [math]\{A_i\}[/math] appears by definition as the [math]j\to\infty[/math] limit of the towers of algebras [math]\{C_{ij}\}[/math], which are all Jones towers. Thus the limiting tower [math]\{A_i\}[/math] is also a Jones tower.
(2) This is the non-trivial result, called Ocneanu compactness theorem, and whose proof is by doing some linear algebra. To be more precise, in one sense the result is clear, because by definition of the algebras [math]\{A_i\}[/math], we have inclusions as follows:
In the other sense things are more tricky, mixing standard linear algebra with some functional analysis too, and we refer here to Ocneanu's lecture notes [1], [2].
(3) This follows from (1,2), by transposing the whole diagram. Indeed, given a commuting square as in Definition 15.1, its transpose is a commuting square as well:
Thus we can apply (1,2) above to this commuting square, and we obtain in this way Jones tower and planar algebra results for the “horizontal” subfactor [math]B_0\subset B_1[/math].
In relation with the examples of commuting squares that we have so far, namely those coming from the complex Hadamard matrices, via Theorem 15.4, we can now upgrade that result into something more conceptual, due to Jones [6], as follows:
Given a complex Hadamard matrix [math]H\in M_N(\mathbb C)[/math], the diagram formed by the associated pair of orthogonal maximal abelian subalgebras, namely
is a commuting square in the sense of subfactor theory, and the associated planar algebra [math]P=(P_k)[/math] is given by the following formula, in terms of [math]H[/math] itself,
- [math]T^\circ=id\otimes T\otimes id[/math].
- [math]G_{ia}^{jb}=\sum_kH_{ik}\bar{H}_{jk}\bar{H}_{ak}H_{bk}[/math].
- [math]G^k_{i_1\ldots i_k,j_1\ldots j_k}=G_{i_ki_{k-1}}^{j_kj_{k-1}}\ldots G_{i_2i_1}^{j_2j_1}[/math].
We have several assertions here, the idea being as follows:
(1) The fact that we have indeed a commuting square is something quite elementary, that we already know, from Theorem 15.4.
(2) The computation of the associated planar algebra, directly in terms of [math]H[/math], is something which is definitely possible, thanks to the formula in Theorem 15.5 (2).
(3) As for the precise formula of the planar algebra, which emerges by doing the computation, we will be back to it, with full details, later on.
(4) The point indeed is that we want to first develop some better methods in dealing with the Hadamard matrices, and leave the computation of [math]P[/math] for later.
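In the meantime, as an illustration, a quick computation shows that the tensor [math]G[/math] from Theorem 15.6, when evaluated on the Fourier matrix [math]H=F_N[/math], collapses to a Kronecker-type symbol, since the sum defining it becomes [math]\sum_kw^{(i-j-a+b)k}[/math]. Here is a pure Python check of this:

```python
# The tensor G_{ia}^{jb} = sum_k H_{ik} conj(H_{jk}) conj(H_{ak}) H_{bk},
# computed for the Fourier matrix H = F_N. Here it collapses to
# G_{ia}^{jb} = N when i - j - a + b = 0 (mod N), and to 0 otherwise,
# since the sum becomes sum_k w^{(i-j-a+b)k}. Pure Python sketch.
import cmath

N = 5
w = cmath.exp(2j * cmath.pi / N)
H = [[w ** (i * k) for k in range(N)] for i in range(N)]

def G(i, a, j, b):
    return sum(H[i][k] * H[j][k].conjugate() * H[a][k].conjugate() * H[b][k]
               for k in range(N))

ok = all(abs(G(i, a, j, b) - (N if (i - j - a + b) % N == 0 else 0)) < 1e-8
         for i in range(N) for a in range(N) for j in range(N) for b in range(N))
print(ok)  # → True
```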
Summarizing, we have so far an interesting combinatorial notion, that of a commuting square, and a method of producing subfactors and planar algebras out of it. We will further explore all the possibilities that this opens up, in what follows:
(1) In the remainder of this chapter we will keep working on the Hadamard matrix problem, following [7] and subsequent papers. This might look of course a bit like mania, focusing just like that on a single class of commuting squares, but we are strongly motivated by all that has been said after Theorem 15.2 and Conjecture 15.3, with this being a matter of life and death to us. And don't worry, we will learn in this way useful techniques, which will apply to other commuting squares too. Also, following Jones [8], [6] and others, all this is potentially related to some interesting physics too.
(2) And in chapter 16 below we will go back to general commuting squares, and to their more traditional usage, for classification problems for small index subfactors.
General references
Banica, Teo (2024). "Principles of operator algebras". arXiv:2208.03600 [math.OA].
References
- A. Ocneanu, Quantized groups, string algebras and Galois theory for algebras, London Math. Soc. Lect. Notes 136 (1988), 119--172.
- A. Ocneanu, Quantum symmetry, differential geometry of finite graphs, and classification of subfactors, Univ. Tokyo Seminar Notes (1990).
- S. Popa, Classification of subfactors: the reduction to commuting squares, Invent. Math. 101 (1990), 19--43.
- S. Popa, Classification of amenable subfactors of type II, Acta Math. 172 (1994), 163--255.
- S. Popa, Orthogonal pairs of [math]*[/math]-subalgebras in finite von Neumann algebras, J. Operator Theory 9 (1983), 253--268.
- V.F.R. Jones, Planar algebras I (1999).
- T. Banica, Subfactors associated to compact Kac algebras, Integral Equations Operator Theory 39 (2001), 1--14.
- V.F.R. Jones, On knot invariants related to some statistical mechanical models, Pacific J. Math. 137 (1989), 311--334.