Subfactor theory


13a. The Jones tower

In this last part of the present book we discuss the basics of Jones' subfactor theory [1], [2], [3], [4], [5]. The idea is that subfactors are quite subtle objects, generalizing various algebraic and combinatorial constructions from chapters 5-8, and coming from the functional analysis and operator theory considerations from chapters 9-12. Their study will bring us into a lot of advanced mathematics, mixing algebra, geometry, analysis and probability, with everything having a modern physics flavor, often in relation with considerations from advanced statistical mechanics and quantum mechanics.


We recall that a [math]{\rm II}_1[/math] factor is a von Neumann algebra [math]A\subset B(H)[/math] which has trivial center, [math]Z(A)=\mathbb C[/math], is infinite dimensional, and has a trace [math]tr:A\to\mathbb C[/math]. For a number of reasons, ranging from simple and intuitive to fairly advanced, explained in chapters 9-12, such algebras are at the core of the whole von Neumann algebra theory.


The world of [math]{\rm II}_1[/math] factors is a bit similar to the world of the usual matrix algebras [math]M_N(\mathbb C)[/math], which are actually called type [math]{\rm I}[/math] factors, in the sense that it is “self-sufficient”, with no need to go further than that. In particular, a nice representation theory for such [math]{\rm II}_1[/math] factors can be obtained by staying inside the class of [math]{\rm II}_1[/math] factors, and we have the following definition to start with, which will keep us busy for the rest of this book:

Definition

A subfactor is an inclusion of [math]{\rm II}_1[/math] factors [math]A\subset B[/math].

We will see later some examples of such inclusions, along with motivations for their study. In order to get started now, the first thing to be done with such an inclusion is that of defining its index, as a quantity of the following type:

[[math]] [B:A]=\dim_AB [[/math]]


Since both [math]A,B[/math] are infinite dimensional algebras, this is not exactly obvious. In addition, in view of our previous experience with the [math]{\rm II}_1[/math] factors, and notably with their “continuous dimension” features, we can only expect the index to range as follows:

[[math]] [B:A]\in[1,\infty] [[/math]]


In order to discuss this, let us recall from chapter 10 that given a representation of a [math]{\rm II}_1[/math] factor [math]A\subset B(H)[/math], we can construct a number as follows, called coupling constant, which for the standard form, where [math]H=L^2(A)[/math], takes the value [math]1[/math], and which in general measures how far [math]A\subset B(H)[/math] is from the standard form:

[[math]] \dim_AH\in(0,\infty] [[/math]]


Getting now to the subfactors, in the sense of Definition 13.1, we have the following construction, that we know as well from chapter 10:

Theorem

Given a subfactor [math]A\subset B[/math], the number

[[math]] N=\frac{\dim_AH}{\dim_BH}\in[1,\infty] [[/math]]
is independent of the ambient Hilbert space [math]H[/math], and is called index.


Show Proof

This is something that we know from chapter 10, the idea being that the independence of the index from the choice of the ambient Hilbert space [math]H[/math] comes from the various basic properties of the coupling constant.

There are many examples of subfactors, and we will discuss this gradually, in what follows. Following Jones [1], the most basic examples of subfactors are as follows:

Proposition

Assuming that [math]G[/math] is a compact group, acting on a [math]{\rm II}_1[/math] factor [math]P[/math] in a minimal way, in the sense that we have

[[math]] (P^G)'\cap P=\mathbb C [[/math]]
and that [math]H\subset G[/math] is a closed subgroup of finite index, we have a subfactor

[[math]] P^G\subset P^H [[/math]]
having index [math]N=[G:H][/math], called Jones subfactor.


Show Proof

This is something standard, the idea being that the factoriality of [math]P^G,P^H[/math] comes from the minimality of the action, and that the index formula is clear. We will be back with full details about this in a moment, directly in a more general setting.

In order to study the subfactors, let us start with the following standard result:

Proposition

Given a subfactor [math]A\subset B[/math], there is a unique linear map

[[math]] E:B\to A [[/math]]
which is positive, unital, trace-preserving and satisfies the following condition:

[[math]] E(a_1xa_2)=a_1E(x)a_2 [[/math]]
for any [math]a_1,a_2\in A[/math] and [math]x\in B[/math]. This map is called conditional expectation from [math]B[/math] onto [math]A[/math].


Show Proof

We make use of the standard representation of the [math]{\rm II}_1[/math] factor [math]B[/math], with respect to its unique trace [math]tr:B\to\mathbb C[/math], as constructed in chapter 10:

[[math]] B\subset L^2(B) [[/math]]


If we denote by [math]\Omega[/math] the standard cyclic and separating vector of [math]L^2(B)[/math], we have an identification [math]\overline{A\Omega}=L^2(A)[/math]. Consider now the following orthogonal projection:

[[math]] e:L^2(B)\to L^2(A) [[/math]]


It follows from definitions that we have an inclusion as follows:

[[math]] e(B\Omega)\subset A\Omega [[/math]]

Thus [math]e[/math] induces by restriction a certain linear map [math]E:B\to A[/math]. This linear map [math]E[/math] and the orthogonal projection [math]e[/math] are then related by:

[[math]] exe=E(x)e [[/math]]


But this shows that the linear map [math]E[/math] satisfies the various conditions in the statement, namely positivity, unitality, trace preservation and bimodule property. As for the uniqueness assertion, this follows by using the same argument, applied backwards, the idea being that a map [math]E[/math] as in the statement must come from the projection [math]e[/math].
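Here is a minimal numerical illustration of the above formulas, in a finite dimensional toy model rather than for true [math]{\rm II}_1[/math] factors, with all choices being ours: [math]B=M_2(\mathbb C)[/math] with its normalized trace, [math]A\subset B[/math] the diagonal subalgebra, [math]E[/math] the cutting onto the diagonal, and [math]e[/math] realized as a [math]4\times4[/math] matrix acting on [math]L^2(B)[/math], so that the formula [math]exe=E(x)e[/math] can be checked directly.

```python
import numpy as np

# Toy model: B = M_2(C) with normalized trace, A = diagonal subalgebra of B.
# L^2(B) has orthonormal basis {sqrt(2) E_11, sqrt(2) E_12, sqrt(2) E_21, sqrt(2) E_22}
# with respect to <x,y> = tr(y*x), where tr is the normalized trace.

E_units = [[np.outer(np.eye(2)[i], np.eye(2)[j]) for j in range(2)] for i in range(2)]
basis = [np.sqrt(2) * E_units[i][j] for i in range(2) for j in range(2)]
tr = lambda x: np.trace(x) / 2

def vec(x):
    """Coordinates of x in L^2(B), in the orthonormal basis above."""
    return np.array([tr(b.conj().T @ x) for b in basis])

def left_mult(x):
    """Matrix of the left multiplication operator b -> xb, acting on L^2(B)."""
    return np.column_stack([vec(x @ b) for b in basis])

def cond_exp(x):
    """Trace-preserving conditional expectation E: M_2(C) -> diagonal matrices."""
    return np.diag(np.diag(x))

# Jones projection e: L^2(B) -> L^2(A), with L^2(A) spanned by the diagonal matrix units
e = np.diag([1.0, 0.0, 0.0, 1.0])

x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(e @ left_mult(x) @ e, left_mult(cond_exp(x)) @ e))  # exe = E(x)e
print(np.isclose(tr(cond_exp(x)), tr(x)))                             # E preserves the trace
a = np.diag([5.0, 7.0])                                               # an element of A
print(np.allclose(e @ left_mult(a), left_mult(a) @ e))                # A commutes with e
```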

Following Jones [1], we will be interested in what follows in the orthogonal projection [math]e:L^2(B)\to L^2(A)[/math] producing the expectation [math]E:B\to A[/math], rather than in [math]E[/math] itself:

Definition

Associated to any subfactor [math]A\subset B[/math] is the orthogonal projection

[[math]] e:L^2(B)\to L^2(A) [[/math]]
producing the conditional expectation [math]E:B\to A[/math] via the following formula:

[[math]] exe=E(x)e [[/math]]
This projection is called Jones projection for the subfactor [math]A\subset B[/math].

Quite remarkably, the subfactor [math]A\subset B[/math], as well as its commutant, can be recovered from the knowledge of this projection, in the following way:

Proposition

Given a subfactor [math]A\subset B[/math], with Jones projection [math]e[/math], we have

[[math]] A=B\cap\{e\}' [[/math]]

[[math]] A'= \lt B',e \gt [[/math]]
as equalities of von Neumann algebras, acting on the space [math]L^2(B)[/math].


Show Proof

These formulae basically follow from [math]exe=E(x)e[/math], as follows:


(1) Let us first prove that we have [math]A\subset B\cap\{e\}'[/math]. Given [math]x\in A[/math], we have:

[[math]] xe=E(x)e=exe [[/math]]

[[math]] x^*e=E(x^*)e=ex^*e [[/math]]


Thus, we obtain, as desired, that [math]x[/math] commutes with [math]e[/math]:

[[math]] ex =(x^*e)^* =(ex^*e)^* =exe =xe [[/math]]


(2) Let us prove now that [math]B\cap\{e\}'\subset A[/math]. Assuming [math]ex=xe[/math], we have:

[[math]] E(x)e=exe=xe^2=xe [[/math]]


We conclude from this that we have the following equality:

[[math]] (E(x)-x)\Omega=(E(x)-x)e\Omega=0 [[/math]]


Now since [math]\Omega[/math] is separating for [math]B[/math] we have, as desired:

[[math]] x=E(x)\in A [[/math]]


(3) In order to prove now [math]A'= \lt B',e \gt [/math], observe that we have:

[[math]] A =B\cap\{e\}' =B''\cap\{e\}' =(B'\cup\{e\})' [[/math]]


Now by taking commutants, we obtain [math]A'=(B'\cup\{e\})''= \lt B',e \gt [/math], as desired.

Still following Jones [1], we are now ready to formulate a key definition:

Definition

Associated to any subfactor [math]A\subset B[/math] is the basic construction

[[math]] A\subset_eB\subset C [[/math]]
with [math]C= \lt B,e \gt [/math] being the algebra generated by [math]B[/math] and by the Jones projection

[[math]] e:L^2(B)\to L^2(A) [[/math]]
acting on the Hilbert space [math]L^2(B)[/math].

The idea in what follows will be that [math]B\subset C[/math] appears as a kind of “reflection” of [math]A\subset B[/math], and also that the basic construction can be iterated, with all this leading to nontrivial results. Let us start by further studying the basic construction:

Theorem

Given a subfactor [math]A\subset B[/math] having finite index,

[[math]] [B:A] \lt \infty [[/math]]
the basic construction [math]A\subset_eB\subset C[/math] has the following properties:

  • [math]C=JA'J[/math].
  • [math]C=\overline{B+BeB}[/math].
  • [math]C[/math] is a [math]{\rm II}_1[/math] factor.
  • [math][C:B]=[B:A][/math].
  • [math]eCe=Ae[/math].
  • [math]tr(e)=[B:A]^{-1}[/math].
  • [math]tr(xe)=tr(x)[B:A]^{-1}[/math], for any [math]x\in B[/math].


Show Proof

All this is standard, the idea being as follows:


(1) We have [math]JB'J=B[/math] and [math]JeJ=e[/math], which gives:

[[math]] \begin{eqnarray*} JA'J &=&J \lt B',e \gt J\\ &=& \lt JB'J,JeJ \gt \\ &=& \lt B,e \gt \\ &=&C \end{eqnarray*} [[/math]]


(2) This follows from the fact that the vector space [math]B+BeB[/math] is closed under multiplication, and from the fact that we have [math]exe=E(x)e[/math].


(3) This follows from the fact, known from chapter 10, that our finite index assumption [math][B:A] \lt \infty[/math] is equivalent to the fact that [math]A'[/math] is a factor. But this is in turn equivalent to the fact that [math]C=JA'J[/math] is a factor, as desired.


(4) We have indeed the following computation:

[[math]] \begin{eqnarray*} [C:B] &=&\frac{\dim_BL^2(B)}{\dim_CL^2(B)}\\ &=&\frac{1}{\dim_CL^2(B)}\\ &=&\frac{1}{\dim_{JA'J}L^2(B)}\\ &=&\frac{1}{\dim_{A'}L^2(B)}\\ &=&\dim_AL^2(B)\\ &=&[B:A] \end{eqnarray*} [[/math]]


(5) This follows indeed from (2) and from the formula [math]exe=E(x)e[/math].


(6) We have the following computation:

[[math]] \begin{eqnarray*} 1 &=&\dim_AL^2(A)\\ &=&\dim_A(eL^2(B))\\ &=&tr_{A'}(e)\dim_A(L^2(B))\\ &=&tr_{A'}(e)[B:A] \end{eqnarray*} [[/math]]


Now since [math]C=JA'J[/math] and [math]JeJ=e[/math], we obtain from this, as desired:

[[math]] \begin{eqnarray*} tr(e) &=&tr_{JA'J}(JeJ)\\ &=&tr_{A'}(e)\\ &=&[B:A]^{-1} \end{eqnarray*} [[/math]]


(7) We already know from (6) that the formula in the statement holds for [math]x=1[/math]. In order to discuss the general case, observe first that for [math]x,y\in A[/math] we have:

[[math]] tr(xye)=tr(yex)=tr(yxe) [[/math]]


Thus the linear map [math]x\to tr(xe)[/math] is a trace on [math]A[/math], and by uniqueness of the trace on [math]A[/math], we must have, for a certain constant [math]c \gt 0[/math]:

[[math]] tr(xe)=c\cdot tr(x) [[/math]]


Now by using (6) we obtain [math]c=[B:A]^{-1}[/math], so we have proved the formula in the statement for [math]x\in A[/math]. The passage to the general case [math]x\in B[/math] can be done as follows:

[[math]] \begin{eqnarray*} tr(xe) &=&tr(exe)\\ &=&tr(E(x)e)\\ &=&tr(E(x))c\\ &=&tr(x)c \end{eqnarray*} [[/math]]


Thus, we have proved the formula in the statement, in general.

The above result is quite interesting, so let us now perform the basic construction twice, and see what we get. The result here, which is more technical, is as follows:

Proposition

Associated to [math]A\subset B[/math] is the double basic construction

[[math]] A\subset_eB\subset_fC\subset D [[/math]]
with [math]e,f[/math] being the following orthogonal projections,

[[math]] e:L^2(B)\to L^2(A) [[/math]]

[[math]] f:L^2(C)\to L^2(B) [[/math]]
having the following properties:

[[math]] fef=[B:A]^{-1}f [[/math]]

[[math]] efe=[B:A]^{-1}e [[/math]]


Show Proof

We have two formulae to be proved, the idea being as follows:


(1) The first formula is clear, because we have:

[[math]] \begin{eqnarray*} fef &=&E(e)f\\ &=&tr(e)f\\ &=&[B:A]^{-1}f \end{eqnarray*} [[/math]]


(2) Regarding now the second formula, it is enough to check it on the dense subset [math](B+BeB)\Omega[/math]. Thus, we must show that for any [math]x,y,z\in B[/math], we have:

[[math]] efe(x+yez)\Omega=[B:A]^{-1}e(x+yez)\Omega [[/math]]


For this purpose, we will prove that we have, for any [math]x,y,z\in B[/math]:

[[math]] efex\Omega=[B:A]^{-1}ex\Omega [[/math]]

[[math]] efeyez\Omega=[B:A]^{-1}eyez\Omega [[/math]]


The first formula can be established as follows:

[[math]] \begin{eqnarray*} efex\Omega &=&efexf\Omega\\ &=&eE(ex)f\Omega\\ &=&eE(e)xf\Omega\\ &=&[B:A]^{-1}exf\Omega\\ &=&[B:A]^{-1}ex\Omega \end{eqnarray*} [[/math]]


The second formula can be established as follows:

[[math]] \begin{eqnarray*} efeyez\Omega &=&efeyezf\Omega\\ &=&eE(eyez)f\Omega\\ &=&eE(eye)zf\Omega\\ &=&eE(E(y)e)zf\Omega\\ &=&eE(y)E(e)zf\Omega\\ &=&[B:A]^{-1}eE(y)zf\Omega\\ &=&[B:A]^{-1}eyezf\Omega\\ &=&[B:A]^{-1}eyez\Omega \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.

We can in fact perform the basic construction by recurrence, and we obtain:

Theorem

Associated to any subfactor [math]A_0\subset A_1[/math] is the Jones tower

[[math]] A_0\subset_{e_1}A_1\subset_{e_2}A_2\subset_{e_3}A_3\subset\ldots\ldots [[/math]]
with the Jones projections having the following properties:

  • [math]e_i^2=e_i=e_i^*[/math].
  • [math]e_ie_j=e_je_i[/math] for [math]|i-j|\geq2[/math].
  • [math]e_ie_{i\pm1}e_i=[A_1:A_0]^{-1}e_i[/math].
  • [math]tr(we_{n+1})=[A_1:A_0]^{-1}tr(w)[/math], for any word [math]w\in \lt e_1,\ldots,e_n \gt [/math].


Show Proof

This follows from Theorem 13.8 and Proposition 13.9, because the triple and higher basic constructions do not in fact need any further study, the same computations repeating themselves. See Jones [1].

13b. Temperley-Lieb

The relations found in Theorem 13.10 are in fact well-known, from the standard theory of the Temperley-Lieb algebra. This algebra, discovered by Temperley and Lieb in the context of statistical mechanics [6], has a very simple definition, as follows:

Definition

The Temperley-Lieb algebra of index [math]N\in[1,\infty)[/math] is defined as

[[math]] TL_N(k)=span(NC_2(k,k)) [[/math]]
with product given by vertical concatenation, with the rule

[[math]] \bigcirc=\sqrt{N} [[/math]]
for the closed circles that might appear when concatenating.

In other words, the algebra [math]TL_N(k)[/math], depending on parameters [math]k\in\mathbb N[/math] and [math]N\in[1,\infty)[/math], is the formal linear span of the pairings [math]\pi\in NC_2(k,k)[/math]. The product operation is obtained by linearity, for the pairings which span [math]TL_N(k)[/math] this being the usual vertical concatenation, with the conventions that things go “from top to bottom”, and that each circle that might appear when concatenating is replaced by a scalar factor, equal to [math]\sqrt{N}[/math].
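To make the concatenation rule concrete, here is a small computational sketch, with our own encoding of a pairing in [math]NC_2(k,k)[/math] as a dictionary matching [math]2k[/math] points: the product of two diagrams is computed by following strands through the glued middle row, and each closed circle that gets erased contributes a factor [math]\sqrt{N}[/math], as per the convention above. For instance [math]\varepsilon_1\varepsilon_1=\sqrt{N}\varepsilon_1[/math], with exactly one circle appearing.

```python
import numpy as np

def cupcap(i, k):
    """The diagram epsilon_{i+1} in TL_N(k): a cap joining the top points i, i+1, a cup
    joining the bottom points i, i+1, and vertical strings elsewhere. Points 0..k-1 are
    the top row, k..2k-1 the bottom row; the dict sends each point to its partner."""
    p = {i: i + 1, i + 1: i, k + i: k + i + 1, k + i + 1: k + i}
    for j in range(k):
        if j not in (i, i + 1):
            p[j], p[k + j] = k + j, j
    return p

def concatenate(p, q, k):
    """Vertical concatenation, p drawn above q: the bottom row of p is glued to the top
    row of q. Returns (number of closed circles erased, resulting pairing), the outer
    points being the top row of p (labels 0..k-1) and the bottom row of q (k..2k-1)."""
    pp = {('p', a): ('p', b) for a, b in p.items()}
    qq = {('q', a): ('q', b) for a, b in q.items()}
    def neighbours(x):
        tag, a = x
        out = [pp[x] if tag == 'p' else qq[x]]
        if tag == 'p' and a >= k: out.append(('q', a - k))   # gluing of the middle rows
        if tag == 'q' and a < k:  out.append(('p', a + k))
        return out
    loops, result, seen = 0, {}, set()
    for start in list(pp) + list(qq):
        if start in seen:
            continue
        comp, stack = {start}, [start]
        while stack:                                         # walk the whole strand
            for y in neighbours(stack.pop()):
                if y not in comp:
                    comp.add(y); stack.append(y)
        seen |= comp
        ends = [a for (t, a) in comp if (t == 'p' and a < k) or (t == 'q' and a >= k)]
        if ends:
            result[ends[0]], result[ends[1]] = ends[1], ends[0]
        else:
            loops += 1                                       # a closed circle
    return loops, result

# epsilon_1 . epsilon_1 = sqrt(N) . epsilon_1: exactly one circle appears
k, N = 3, 5.0
eps1 = cupcap(0, k)
loops, prod = concatenate(eps1, eps1, k)
print(loops, prod == eps1, np.sqrt(N) ** loops)   # 1 True 2.236...
```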


In order to make the connection with subfactors, let us start with:

Proposition

The Temperley-Lieb algebra [math]TL_N(k)[/math] is generated by the diagrams

[[math]] \varepsilon_1={\ }^\cup_\cap\quad,\quad \varepsilon_2=|\!{\ }^\cup_\cap\quad,\quad \varepsilon_3=||\!{\ }^\cup_\cap\quad,\quad \ldots [[/math]]
which are all multiples of projections, in the sense that their rescaled versions

[[math]] e_i=N^{-1/2}\varepsilon_i [[/math]]
satisfy the abstract projection relations [math]e_i^2=e_i=e_i^*[/math].


Show Proof

We have two assertions here, the idea being as follows:


(1) The fact that the algebra [math]TL_N(k)[/math] is indeed generated by the sequence of diagrams [math]\varepsilon_1,\varepsilon_2,\varepsilon_3,\ldots[/math] follows by drawing pictures, and more specifically by graphically decomposing each basis element [math]\pi\in NC_2(k,k)[/math] as a product of such elements [math]\varepsilon_i[/math].


(2) Regarding now the projection assertion, when composing [math]\varepsilon_i[/math] with itself we obtain [math]\varepsilon_i[/math] itself, times a circle, which is worth [math]\sqrt{N}[/math]. Thus, according to our multiplication conventions, we have:

[[math]] \varepsilon_i^2=\sqrt{N}\varepsilon_i [[/math]]


Also, when turning upside-down [math]\varepsilon_i[/math], we obtain [math]\varepsilon_i[/math] itself. Thus, according to our involution convention for the Temperley-Lieb algebra, we have:

[[math]] \varepsilon_i^*=\varepsilon_i [[/math]]


We conclude that the rescalings [math]e_i=N^{-1/2}\varepsilon_i[/math] satisfy [math]e_i^2=e_i=e_i^*[/math], as desired.

As a second result now, making the link with Theorem 13.10, we have:

Proposition

The standard generators [math]e_i=N^{-1/2}\varepsilon_i[/math] of the Temperley-Lieb algebra [math]TL_N(k)[/math] have the following properties, where [math]tr[/math] is the trace obtained by closing the diagrams:

  • [math]e_ie_j=e_je_i[/math] for [math]|i-j|\geq2[/math].
  • [math]e_ie_{i\pm1}e_i=N^{-1}e_i[/math].
  • [math]tr(we_{n+1})=N^{-1}tr(w)[/math], for any word [math]w\in \lt e_1,\ldots,e_n \gt [/math].


Show Proof

This follows indeed by doing some elementary computations with diagrams, in the spirit of those performed in the proof of Proposition 13.12. Indeed:


(1) This is clear from the definition of the diagrams [math]\varepsilon_i[/math].


(2) This is clear as well from the definition of the diagrams [math]\varepsilon_i[/math].


(3) This is something which is clear too, from the definition of [math]\varepsilon_{n+1}[/math].
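Here is a minimal numerical check of these relations, using the standard tensor product model, which is not discussed in the text: the rescaled generator [math]e_i=N^{-1/2}\varepsilon_i[/math] acts on [math](\mathbb C^n)^{\otimes k}[/math] as the rank one projection onto the vector [math]\xi=n^{-1/2}\sum_ae_a\otimes e_a[/math], placed at the legs [math]i,i+1[/math]. In this model a closed circle is worth [math]n[/math], so we are at index [math]N=n^2[/math], and the relations above, which are also those of Theorem 13.10, and of Theorem 13.8 (6,7), can be verified directly. The code and its names are ours.

```python
import numpy as np

n, k = 3, 4                                    # model: (C^n)^{(x) k}, index N = n^2
N = n * n
xi = np.eye(n).reshape(n * n) / np.sqrt(n)     # xi = n^{-1/2} sum_a e_a (x) e_a
P = np.outer(xi, xi)                           # rank one projection onto xi

def jones_projection(i):
    """e_i = the projection P placed at the legs i, i+1 of (C^n)^{(x) k}, i = 1, ..., k-1."""
    return np.kron(np.kron(np.eye(n ** (i - 1)), P), np.eye(n ** (k - i - 1)))

e = {i: jones_projection(i) for i in range(1, k)}
tr = lambda x: np.trace(x) / n ** k            # normalized trace, playing the Markov role

print(all(np.allclose(e[i] @ e[i], e[i]) for i in e))        # e_i^2 = e_i (= e_i^*)
print(np.allclose(e[1] @ e[3], e[3] @ e[1]))                 # e_i e_j = e_j e_i, |i-j| >= 2
print(np.allclose(e[1] @ e[2] @ e[1], e[1] / N))             # e_1 e_2 e_1 = N^{-1} e_1
w = e[1] @ e[2]
print(np.isclose(tr(w @ e[3]), tr(w) / N))                   # tr(w e_3) = N^{-1} tr(w)
print(np.isclose(tr(e[1]), 1 / N))                           # tr(e_1) = N^{-1}
```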

With the above results in hand, we can now reformulate our main finding about subfactors, namely Theorem 13.10, into something more conceptual, as follows:

Theorem

Given a finite index subfactor [math]A_0\subset A_1[/math], with Jones tower

[[math]] A_0\subset_{e_1}A_1\subset_{e_2}A_2\subset_{e_3}A_3\subset\ldots\ldots [[/math]]
the sequence of Jones projections [math]e_1,e_2,e_3,\ldots\in B(H)[/math] produces a representation

[[math]] TL_N\subset B(H) [[/math]]
of the Temperley-Lieb algebra of index [math]N=[A_1:A_0][/math].


Show Proof

The idea here is that Theorem 13.10, coming from the study of the basic construction, tells us that the Jones projections [math]e_1,e_2,e_3,\ldots\in B(H)[/math] behave algebraically exactly as the rescaled diagrams [math]e_i=N^{-1/2}\varepsilon_i[/math], with the diagrams [math]\varepsilon_1,\varepsilon_2,\varepsilon_3,\ldots\in TL_N[/math] being given by:

[[math]] \varepsilon_1={\ }^\cup_\cap\quad,\quad \varepsilon_2=|\!{\ }^\cup_\cap\quad,\quad \varepsilon_3=||\!{\ }^\cup_\cap\quad,\quad \ldots [[/math]]


But these diagrams generate [math]TL_N[/math], and so we have an embedding [math]TL_N\subset B(H)[/math], where [math]H[/math] is the Hilbert space where our subfactor [math]A_0\subset A_1[/math] lives, as claimed.

Before going further, with some examples, more theory, and consequences of Theorem 13.14, let us make the following key observation, also from Jones [1]:

Theorem

Given a finite index subfactor [math]A_0\subset A_1[/math], the graded algebra

[[math]] P=(P_k) [[/math]]
formed by the sequence of higher relative commutants

[[math]] P_k=A_0'\cap A_k [[/math]]
contains the copy of the Temperley-Lieb algebra constructed above:

[[math]] TL_N\subset P [[/math]]
This graded algebra [math]P=(P_k)[/math] is called “planar algebra” of the subfactor.


Show Proof

As a first observation, since the Jones projection [math]e_1:L^2(A_1)\to L^2(A_0)[/math] commutes with [math]A_0[/math], and belongs to [math]A_2= \lt A_1,e_1 \gt [/math], as previously established in the above, we have:

[[math]] e_1\in P_2 [[/math]]


By translation we obtain from this that we have, for any [math]k\in\mathbb N[/math]:

[[math]] e_1,\ldots,e_{k-1}\in P_k [[/math]]


Thus we have indeed an inclusion of graded algebras [math]TL_N\subset P[/math], as claimed.

The point with the above result, which among other things explains the terminology at the end, is that, in the context of Theorem 13.14, the planar algebra structure of [math]TL_N[/math], obtained by composing diagrams, extends into an abstract planar algebra structure on [math]P[/math]. See [3]. We will discuss all this, with full details, in the next chapter.

13c. Basic examples

Let us discuss now some basic examples of subfactors, with concrete illustrations for all the above notions, constructions, and general theory. These examples will all come from group actions [math]G\curvearrowright P[/math], which are assumed to be minimal, in the sense that:

[[math]] (P^G)'\cap P=\mathbb C [[/math]]

We will not provide proofs for the next few results to follow, the idea being that these constructions can be unified, and that we would like to keep the proofs for the unifications only. As a starting point, we have the following result, that we already know:

Proposition

Assuming that [math]G[/math] is a compact group, acting minimally on a [math]{\rm II}_1[/math] factor [math]P[/math], and that [math]H\subset G[/math] is a subgroup of finite index, we have a subfactor

[[math]] P^G\subset P^H [[/math]]
having index [math]N=[G:H][/math], called Jones subfactor.


Show Proof

This is something that we know, the idea being that the factoriality of [math]P^G,P^H[/math] comes from the minimality of the action, and that the index formula is clear.

Along the same lines, we have the following result:

Proposition

Assuming that [math]G[/math] is a finite group, acting minimally on a [math]{\rm II}_1[/math] factor [math]P[/math], we have a subfactor as follows,

[[math]] P\subset P\rtimes G [[/math]]
having index [math]N=|G|[/math], called Ocneanu subfactor.


Show Proof

This is standard as well, the idea being that the factoriality of [math]P\rtimes G[/math] comes from the minimality of the action, and that the index formula is clear.

We have as well a third result of the same type, as follows:

Proposition

Assuming that [math]G[/math] is a compact group, acting minimally on a [math]{\rm II}_1[/math] factor [math]P[/math], and that [math]G\to PU_n[/math] is a projective representation, we have a subfactor

[[math]] P^G\subset (M_n(\mathbb C) \otimes P)^G [[/math]]
having index [math]N=n^2[/math], called Wassermann subfactor.


Show Proof

As before, the idea is that the factoriality of [math]P^G,(M_n(\mathbb C)\otimes P)^G[/math] comes from the minimality of the action, and the index formula is clear.

The above subfactors look quite related, and indeed they are, due to:

Theorem

The Jones, Ocneanu and Wassermann subfactors are all of the same nature, and can be written as follows,

[[math]] \left( P^G\subset P^H\right)\,\simeq\, \left( ({\mathbb C}\otimes P)^G\subset (l^\infty(G/H)\otimes P)^G\right) [[/math]]

[[math]] \left( P\subset P\rtimes G\right)\,\simeq\, \left( (l^\infty (G)\otimes P)^G\subset ({\mathcal L} (l^2(G))\otimes P)^G\right) [[/math]]

[[math]] \left( P^G\subset (M_n(\mathbb C) \otimes P)^G\right)\,\simeq\, \left( ({\mathbb C}\otimes P)^G\subset (M_n(\mathbb C)\otimes P)^G\right) [[/math]]
with standard identifications for the various tensor products and fixed point algebras.


Show Proof

This is something very standard, modulo all kinds of standard identifications. We will explain all this more in detail later, after unifying these subfactors.

In order to unify now the above constructions of subfactors, the idea is quite clear. Given a compact group [math]G[/math], acting minimally on a [math]{\rm II}_1[/math] factor [math]P[/math], and an inclusion of finite dimensional algebras [math]B_0\subset B_1[/math], endowed as well with an action of [math]G[/math], we would like to construct a kind of generalized Wassermann subfactor, as follows:

[[math]] (B_0\otimes P)^G\subset (B_1\otimes P)^G [[/math]]


In order to do this, we must talk first about the finite dimensional algebras [math]B[/math], and about inclusions of such algebras [math]B_0\subset B_1[/math]. Let us start with the following definition:

Definition

Associated to any finite dimensional algebra [math]B[/math] is its canonical trace, obtained by composing the left regular representation with the trace of [math]\mathcal L(B)[/math]:

[[math]] tr:B\subset\mathcal L(B)\to\mathbb C [[/math]]
We say that an inclusion of finite dimensional algebras [math]B_0\subset B_1[/math] is Markov if it commutes with the canonical traces of [math]B_0,B_1[/math].

In what regards the first notion, that of the canonical trace, this is something that we know well, from chapter 5. Indeed, as explained there, we can formally write [math]B=C(X)[/math], with [math]X[/math] being a finite quantum space, and the canonical trace [math]tr:B\to\mathbb C[/math] is then precisely the integration with respect to the “counting measure” on [math]X[/math].


In what regards the second notion, that of a Markov inclusion, this is something very natural too, and as a first example here, any inclusion of type [math]\mathbb C\subset B[/math] is Markov. In general, if we write [math]B_0=C(X_0)[/math] and [math]B_1=C(X_1)[/math], then the inclusion [math]B_0\subset B_1[/math] must come from a certain fibration [math]X_1\to X_0[/math], and the inclusion [math]B_0\subset B_1[/math] is Markov precisely when the fibration [math]X_1\to X_0[/math] commutes with the respective counting measures.
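Both notions can be made concrete by a small computation, under the standard conventions for multimatrix algebras, which are not detailed in the text: for [math]B=\oplus_iM_{n_i}(\mathbb C)[/math] the canonical trace gives the weight [math]n_i/\sum_jn_j^2[/math] to a minimal projection in the [math]i[/math]-th block, an inclusion [math]B_0\subset B_1[/math] is encoded by its matrix of multiplicities [math]\Lambda[/math], and the Markov condition says that the canonical trace of [math]B_1[/math] restricts to the canonical trace of [math]B_0[/math]. The encoding and the helper names below are ours; also computed is the squared norm of [math]\Lambda[/math], which for such a Markov inclusion is the standard formula for the index.

```python
import numpy as np

def canonical_trace_weights(blocks):
    """Canonical trace of B = (+)_i M_{n_i}(C): weight of a minimal projection in block i."""
    blocks = np.array(blocks, dtype=float)
    return blocks / np.sum(blocks ** 2)

def is_markov(blocks0, blocks1, L):
    """Markov condition: the canonical trace of B_1 restricts to that of B_0.
    L[i, j] = multiplicity of the i-th block of B_0 inside the j-th block of B_1."""
    L = np.array(L, dtype=float)
    restricted = L @ canonical_trace_weights(blocks1)
    return np.allclose(restricted, canonical_trace_weights(blocks0))

# Example 1: C (+) C inside M_2(C), with multiplicities (1,1)^T: Markov, of index 2
blocks0, blocks1, L = [1, 1], [2], [[1], [1]]
print(is_markov(blocks0, blocks1, L))                          # True
print(round(np.linalg.norm(np.array(L), 2) ** 2, 6))           # 2.0, the index [B_1:B_0]

# Example 2: C (+) C inside C (+) M_2(C), with multiplicities [[1,1],[0,1]]: not Markov
print(is_markov([1, 1], [1, 2], [[1, 1], [0, 1]]))             # False
```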


We will be back to Markov inclusions and their various properties on several occasions, in what follows. For our next purposes here, we just need the following result:

Proposition

Given a Markov inclusion of finite dimensional algebras [math]B_0\subset B_1[/math] we can perform the basic construction for it, so as to obtain a Jones tower

[[math]] B_0\subset_{e_1}B_1\subset_{e_2}B_2\subset_{e_3}B_3\subset\ldots\ldots [[/math]]
exactly as we did in the above for the inclusions of [math]{\rm II}_1[/math] factors.


Show Proof

This is something quite routine, by following the computations in the above, from the case of the [math]{\rm II}_1[/math] factors, and with everything extending well. It is of course possible to do something more general here, unifying the constructions for the inclusions of [math]{\rm II}_1[/math] factors [math]A_0\subset A_1[/math], and for the Markov inclusions of finite dimensional algebras [math]B_0\subset B_1[/math], but we will not need this degree of generality, in what follows.
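As an illustration of how this iteration looks at the level of dimension vectors, here is a small sketch, using the standard convention, not proved in the text, that the basic construction of an inclusion with matrix [math]\Lambda[/math] produces an inclusion with matrix [math]\Lambda^t[/math]: the dimension vectors along the tower are obtained by alternately applying [math]\Lambda^t[/math] and [math]\Lambda[/math], and the total dimension gets multiplied by the index at each step. The example and names are ours.

```python
import numpy as np

def jones_tower_dims(blocks0, L, steps=5):
    """Dimension vectors of B_0, B_1, B_2, ... for the Markov inclusion with matrix L.
    Along the tower one has n_{k+1} = M_k n_k, with M_k alternating between L^T and L."""
    L = np.array(L, dtype=int)
    dims = [np.array(blocks0, dtype=int)]
    mats = [L.T, L]
    for s in range(steps):
        dims.append(mats[s % 2] @ dims[-1])
    return dims

# C (+) C inside M_2(C): the tower is M_2, M_2 (+) M_2, M_4, M_4 (+) M_4, ...
for d in jones_tower_dims([1, 1], [[1], [1]]):
    print(d, "total dimension", int(np.sum(d ** 2)))
# total dimensions 2, 4, 8, 16, 32, 64: each step multiplies by the index [B_1:B_0] = 2
```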

With these ingredients in hand, getting back now to the Jones, Ocneanu and Wassermann subfactors, from Theorem 13.19, the point is that these constructions can be unified, and then further studied, the final result on the subject being as follows:

Theorem

Let [math]G[/math] be a compact group, and [math]G\to Aut(P)[/math] be a minimal action on a [math]{\rm II}_1[/math] factor. Consider a Markov inclusion of finite dimensional algebras

[[math]] B_0\subset B_1 [[/math]]
and let [math]G\to Aut(B_1)[/math] be an action which leaves invariant [math]B_0[/math], and which is such that its restrictions to the centers of [math]B_0[/math] and [math]B_1[/math] are ergodic. We have then a subfactor

[[math]] (B_0\otimes P)^G\subset (B_1\otimes P)^G [[/math]]
of index [math]N=[B_1:B_0][/math], called generalized Wassermann subfactor, whose Jones tower is

[[math]] (B_1\otimes P)^G\subset(B_2\otimes P)^G\subset(B_3\otimes P)^G\subset\ldots [[/math]]
where [math]\{ B_i\}_{i\geq 1}[/math] are the algebras in the Jones tower for [math]B_0\subset B_1[/math], with the canonical actions of [math]G[/math] coming from the action [math]G\to Aut(B_1)[/math], and whose planar algebra is given by:

[[math]] P_k=(B_0'\cap B_k)^G [[/math]]
These subfactors generalize the Jones, Ocneanu and Wassermann subfactors.


Show Proof

There are several things to be proved, the idea being as follows:


(1) As before on various occasions, the idea is that the factoriality of the algebras [math](B_i\otimes P)^G[/math] comes from the minimality of the action [math]G\to Aut(P)[/math], and that the index formula is clear as well, from the definition of the coupling constant and of the index.


(2) Regarding the Jones tower assertion, the precise thing to be checked here is that if [math]A\subset B\subset C[/math] is a basic construction, then so is the following sequence of inclusions:

[[math]] (A\otimes P)^G\subset(B\otimes P)^G\subset(C\otimes P)^G [[/math]]


But this is something standard, which follows by verifying the basic construction conditions. We will be back to this in a moment, directly in a more general setting.


(3) Next, regarding the planar algebra assertion, we have to prove here that for any indices [math]i\leq j[/math], we have the following equality between subalgebras of [math]B_j\otimes P[/math]:

[[math]] ((B_i\otimes P)^G)'\cap(B_j\otimes P)^G=(B_i'\cap B_j)^G\otimes 1 [[/math]]


But this is something which is routine too, following Wassermann [7], and we will be back to this in a moment, with full details, directly in a more general setting.


(4) Finally, the last assertion, regarding the main examples of such subfactors, which are those of Jones, Ocneanu, Wassermann, follows from Theorem 13.19.

In addition to the Jones, Ocneanu and Wassermann subfactors, discussed and unified in the above, we have the Popa subfactors, which are constructed as follows:

Proposition

Given a discrete group [math]\Gamma= \lt g_1,\ldots,g_n \gt [/math], acting faithfully via outer automorphisms on a [math]{\rm II}_1[/math] factor [math]Q[/math], we have the following “diagonal” subfactor

[[math]] \left\{ \begin{pmatrix} g_1(q)\\ &\ddots\\ && g_n(q) \end{pmatrix} \Big| q\in Q\right\} \subset M_n(Q) [[/math]]
having index [math]N=n^2[/math], called Popa subfactor.


Show Proof

This is something standard, a bit as for the Jones, Ocneanu and Wassermann subfactors, with the result basically coming from the work of Popa, who was the main user of such subfactors. We will present in a moment a more general result in this direction, involving discrete quantum groups, along with a complete proof.

In order to unify now Theorem 13.22 and Proposition 13.23, observe that the diagonal subfactors can be written in the following way, by using a group dual:

[[math]] (Q\rtimes\Gamma)^{\widehat{\Gamma}}\subset(M_n(\mathbb C)\otimes (Q\rtimes\Gamma))^{\widehat{\Gamma}} [[/math]]


Here the group dual [math]\widehat{\Gamma}[/math] acts on [math]P=Q\rtimes\Gamma[/math] via the dual of the action [math]\Gamma\subset Aut (Q)[/math], and on [math]M_n(\mathbb C)[/math] via the adjoint action of the following representation:

[[math]] \oplus g_i:\widehat{\Gamma}\to U_n [[/math]]


Summarizing, we are led into quantum groups. Our plan in what follows will be that of discussing the quantum extension of Theorem 13.22, covering the diagonal subfactors as well, and this time with full details, and with examples and illustrations as well.


We follow [8], where this extension of the Wassermann construction [7] was developed. Let us start our discussion with some basic theory. We first have:

Definition

A coaction of a Woronowicz algebra [math]A[/math] on a finite von Neumann algebra [math]P[/math] is an injective morphism [math]\Phi:P\to P\otimes A''[/math] satisfying the following conditions:

  • Coassociativity: [math](\Phi\otimes id)\Phi=(id\otimes\Delta)\Phi[/math].
  • Trace equivariance: [math](tr\otimes id)\Phi=tr(.)1[/math].
  • Smoothness: [math]\overline{\mathcal P}^{\,w}=P[/math], where [math]\mathcal P=\Phi^{-1}(P\otimes_{alg}\mathcal A)[/math].

The above conditions come from what happens in the commutative case, [math]A=C(G)[/math], where they correspond to the usual associativity, trace equivariance and smoothness of the corresponding action [math]G\curvearrowright P[/math]. Along the same lines, we have as well:

Definition

A coaction [math]\Phi:P\to P\otimes A''[/math] as above is called:

  • Ergodic, if the algebra [math]P^\Phi=\left\{p\in P\big|\Phi(p)=p\otimes1\right\}[/math] reduces to [math]\mathbb C[/math].
  • Faithful, if the span of [math]\left\{(f\otimes id)\Phi(P)\big|f\in P_*\right\}[/math] is dense in [math]A''[/math].
  • Minimal, if it is faithful, and satisfies [math](P^\Phi)'\cap P=\mathbb C[/math].

Observe that the minimality of the action implies in particular that the fixed point algebra [math]P^\Phi[/math] is a factor. Thus, we are getting here to the case that we are interested in, actions producing factors, via their fixed point algebras. More on this later.


In order to prove our subfactor results, we need some general theory regarding the minimal actions. Following Wassermann [7], let us start with the following definition:

Definition

Let [math]\Phi:P\to P\otimes A''[/math] be a coaction. An eigenmatrix for a corepresentation [math]u\in B(H)\otimes A[/math] is an element [math]M\in B(H)\otimes P[/math] satisfying:

[[math]] (id\otimes\Phi)M=M_{12}u_{13} [[/math]]
A coaction is called semidual if each corepresentation has a unitary eigenmatrix.

As a basic example here, the canonical coaction [math]\Delta:A\to A\otimes A[/math] is semidual. We will prove in what follows, following the work of Wassermann in the usual compact group case, that the minimal coactions of Woronowicz algebras are semidual. We first have:

Proposition

If [math]\Phi:P\to P\otimes A''[/math] is a minimal coaction and [math]u\in Irr(A)[/math] is a corepresentation, then [math]u[/math] has a unitary eigenmatrix precisely when [math]P^u\neq\{ 0\}[/math].


Show Proof

Given [math]u\in M_n(A)[/math], consider the following unitary corepresentation:

[[math]] u^+=(n\otimes 1)\oplus u= \begin{pmatrix}1&0\\ 0&u\end{pmatrix} \in M_2(M_n(\mathbb C)\otimes\mathcal A) =M_2(\mathbb C)\otimes M_n(\mathbb C)\otimes\mathcal A [[/math]]


It is then routine to check, exactly as in [7], with the computation being explained in [8], that if the following algebra is a factor, then [math]u[/math] has a unitary eigenmatrix:

[[math]] X_u=(M_2(\mathbb C)\otimes M_n(\mathbb C)\otimes P)^{\pi_{u^+}} [[/math]]


So, let us prove that [math]X_u[/math] is a factor. For this purpose, let [math]x\in Z(X_u)[/math]. We have then [math]1\otimes1\otimes P^\Phi\subset X_u[/math], and from the irreducibility of the inclusion [math]P^\Phi\subset P[/math] we obtain that:

[[math]] x\in M_2(\mathbb C)\otimes M_n(\mathbb C)\otimes 1 [[/math]]


On the other hand, we have the following formula:

[[math]] \begin{eqnarray*} X_u\cap M_2(\mathbb C)\otimes M_n(\mathbb C)\otimes 1 &=&(M_2(\mathbb C)\otimes M_n(\mathbb C))^{i_{u^+}}\otimes 1\\ &=&End(u^+)\otimes 1 \end{eqnarray*} [[/math]]


Since our corepresentation [math]u[/math] was chosen to be irreducible, it follows that [math]x[/math] must be of the following form, with [math]y\in M_n(\mathbb C)[/math], and with [math]\lambda\in\mathbb C[/math]:

[[math]] x=\begin{pmatrix}y&0\\0&\lambda I\end{pmatrix}\otimes 1 [[/math]]


Now let us pick a nonzero element [math]p\in P^u[/math], and write:

[[math]] \Phi(p)=\sum_{ij}p_{ij}\otimes u_{ij} [[/math]]


Then [math]\Phi(p_{ij})=\sum_kp_{kj}\otimes u_{ki}[/math] for any [math]i,j[/math], and so each column of [math](p_{ij})_{ij}[/math] is a [math]u[/math]-eigenvector. Choose such a nonzero column [math]l[/math] and let [math]m^i[/math] be the matrix having the [math]i[/math]-th row equal to [math]l[/math], and being zero elsewhere. Then [math]m^i[/math] is a [math]u[/math]-eigenmatrix for any [math]i[/math], and this implies that:

[[math]] \begin{pmatrix}0&m^i\\0&0\end{pmatrix}\in X_u [[/math]]


The commutation relation of this matrix with [math]x[/math] is as follows:

[[math]] \begin{pmatrix}y&0\\0&\lambda I\end{pmatrix} \begin{pmatrix}0&m^i\\0&0\end{pmatrix}= \begin{pmatrix}0&m^i\\0&0\end{pmatrix} \begin{pmatrix}y&0\\0&\lambda I\end{pmatrix} [[/math]]


But this gives [math](y-\lambda I)m^i=0[/math]. Now by definition of [math]m^i[/math], this shows that the [math]i[/math]-th column of [math]y-\lambda I[/math] is zero. Thus [math]y-\lambda I=0[/math], and so [math]x=\lambda 1[/math], as desired.

We can now prove a main result about minimal coactions, as follows:

Theorem

The minimal coactions are semidual.


Show Proof

Let [math]K[/math] be the set of finite dimensional unitary corepresentations of [math]A[/math] which have unitary eigenmatrices. Then, according to the above, the following happen:


(1) [math]K[/math] is stable under taking tensor products. Indeed, if [math]M,N[/math] are unitary eigenmatrices for [math]u,w[/math], then [math]M_{13}N_{23}[/math] is a unitary eigenmatrix for [math]u\otimes w[/math].


(2) [math]K[/math] is stable under taking sums. Indeed, if [math]M_i[/math] are unitary eigenmatrices for [math]u_i[/math], then [math]diag(M_i)[/math] is a unitary eigenmatrix for [math]\oplus u_i[/math].


(3) [math]K[/math] is stable under subtractions. Indeed, if [math]M[/math] is an eigenmatrix for [math]U=\oplus_{i=1}^nu_i[/math], then the first [math]\dim(u_1)[/math] columns of [math]M[/math] are formed by elements of [math]P^{u_1}[/math], the next [math]\dim(u_2)[/math] columns of [math]M[/math] are formed by elements of [math]P^{u_2}[/math], and so on. Now if [math]M[/math] is unitary, it is in particular invertible, so all [math]P^{u_i}[/math] are different from [math]\{0\}[/math], and we may conclude that we can indeed subtract corepresentations from [math]U[/math], by using Proposition 13.27.


(4) [math]K[/math] is stable under complex conjugation. Indeed, by the above results we may restrict attention to irreducible corepresentations. Now if [math]u\in Irr(A)[/math] has a nonzero eigenmatrix [math]M[/math] then [math]\overline{M}[/math] is an eigenmatrix for [math]\overline{u}[/math]. By Proposition 13.27 we obtain from this that [math]P^{\overline{u}}\neq\{0\}[/math], and we may conclude by using again Proposition 13.27.


With this in hand, by using Peter-Weyl, we obtain the result. See [8].

Let us construct now the fixed point subfactors. We first have:

Proposition

Consider a Woronowicz algebra [math]A=(A,\Delta,S)[/math], and denote by [math]A_\sigma[/math] the Woronowicz algebra [math](A,\sigma\Delta ,S)[/math], where [math]\sigma[/math] is the flip. Given coactions

[[math]] \beta:B\to B\otimes A [[/math]]

[[math]] \pi:P\to P\otimes A_\sigma [[/math]]
with [math]B[/math] being finite dimensional, the following linear map, while not being multiplicative in general, is coassociative with respect to the comultiplication [math]\sigma\Delta[/math] of [math]A_\sigma[/math],

[[math]] \beta\odot\pi:B\otimes P\to B\otimes P\otimes A_\sigma [[/math]]

[[math]] b\otimes p\to \pi (p)_{23}((id\otimes S)\beta(b))_{13} [[/math]]
and its fixed point space, which is by definition the following linear space,

[[math]] (B\otimes P)^{\beta\odot\pi}=\left\{x\in B\otimes P\Big|(\beta\odot\pi )x=x\otimes 1\right\} [[/math]]
is then a von Neumann subalgebra of [math]B\otimes P[/math].


Show Proof

This is something standard, which follows from a straightforward algebraic verification, explained in [8]. As mentioned in the statement, note that the tensor product coaction [math]\beta\odot\pi[/math] is not multiplicative in general. See [8].

Our first task is to investigate the factoriality of such algebras, and we have here:

Theorem

If [math]\beta:B\to B\otimes A[/math] is a coaction and [math]\pi:P\to P\otimes A_\sigma[/math] is a minimal coaction, then the following conditions are equivalent:

  • The von Neumann algebra [math](B\otimes P)^{\beta\odot\pi}[/math] is a factor.
  • The coaction [math]\beta[/math] is centrally ergodic, [math]Z(B)\cap B^\beta=\mathbb C[/math].


Show Proof

This is something standard, from [8], the idea being as follows:


(1) Our first claim, whose proof is a routine verification explained in [8], based on the semiduality of the minimal coaction [math]\pi[/math], known from Theorem 13.28, is that the following diagram is a non-degenerate commuting square:

[[math]] \begin{matrix} P&\subset&B\otimes P\\ \cup &\ &\cup \\ P^\pi&\subset&(B\otimes P)^{\beta\odot\pi} \end{matrix} [[/math]]


(2) In order to prove now the result, it is enough to check the following equality, between von Neumann subalgebras of the algebra [math]B\otimes P[/math]:

[[math]] Z((B\otimes P)^{\beta\odot\pi})=(Z(B)\cap B^\beta)\otimes 1 [[/math]]


So, let [math]x[/math] be in the algebra on the left. Then [math]x[/math] commutes with [math]1\otimes P^\pi[/math], so it has to be of the form [math]b\otimes 1[/math]. Thus [math]x[/math] commutes with [math]1\otimes P[/math]. But [math]x[/math] commutes with [math](B\otimes P)^{\beta\odot\pi}[/math], and from the non-degeneracy of the above square, [math]x[/math] commutes with [math]B\otimes P[/math], and in particular with [math]B\otimes 1[/math]. Thus we have [math]b\in Z(B)\cap B^\beta[/math]. As for the other inclusion, this is obvious.

In view of the above result, we can talk about subfactors of type [math](B_0\otimes P)^G\subset(B_1\otimes P)^G[/math]. In order to investigate such subfactors, we will need the following technical result:

Proposition

Consider two commuting squares, as follows:

[[math]] \begin{matrix} F&\subset&E&\subset&D\\ \cup&&\cup&&\cup\\ A&\subset&B&\subset&C\\ \end{matrix} [[/math]]

  • If the square on the left and the big square are non-degenerate, then so is the square on the right.
  • If both squares are non-degenerate, [math]F\subset E\subset D[/math] is a basic construction, and the Jones projection [math]e\in D[/math] for this basic construction belongs to [math]C[/math], then the square on the right is the basic construction for the square on the left.


Show Proof

We have several things to be proved, the idea being as follows:


(1) This assertion is clear from the following computation:

[[math]] D ={\overline{sp}^{\,w}\,}CF ={\overline{sp}^{\,w}\,}CBF ={\overline{sp}^{\,w}\,} CE [[/math]]


(2) Let [math]\Psi :D\to C[/math] be the expectation. By non-degeneracy, we have that:

[[math]] E={\overline{sp}^w\,} FB={\overline{sp}^w\,} BF [[/math]]


We also have [math]D={\overline{sp}^w\,} EeE[/math] by the basic construction, so we get that:

[[math]] \begin{eqnarray*} C &=&\Psi(D)\\ &=&\Psi({\overline{sp}^{\,w}\,}EeE)\\ &=&\Psi({\overline{sp}^{\,w}\,}BFeFB)\\ &=&\Psi({\overline{sp}^{\,w}\,}BeFB)\\ &=&{\overline{sp}^{\,w}\,}Be\Psi(F)B\\ &=&{\overline{sp}^{\,w}\,}BeAB\\ &=&{\overline{sp}^{\,w}\,}BeB \end{eqnarray*} [[/math]]


Thus the algebra [math]C[/math] is generated by [math]B[/math] and [math]e[/math], and this gives the result.

Next in line, we have the following key technical result:

Proposition

If [math]\beta:B\to B\otimes A[/math] is a coaction then

[[math]] \begin{matrix} A&\subset&B\otimes A\\ \cup&&\uparrow\beta\\ \mathbb C&\subset&B\\ \end{matrix} [[/math]]
is a non-degenerate commuting square.


Show Proof

From the [math]\beta[/math]-equivariance of the trace we get that the embedding [math]\beta[/math] on the right preserves the traces, so that the above is a commuting diagram of finite von Neumann algebras. From the formula of the expectation [math]E_{\beta}=(id\otimes\int_A)\beta[/math] we get that this diagram is a commuting square. Choose now an orthonormal basis [math]\{b_i\}[/math] of [math]B[/math], write [math]\beta:b_i\to\sum_jb_j\otimes u_{ji}[/math], and consider the corresponding unitary corepresentation:

[[math]] u_\beta=\sum e_{ij}\otimes u_{ij} [[/math]]


Then for any [math]k[/math] and any [math]a\in A[/math] we have the following computation:

[[math]] \begin{eqnarray*} \sum_i\beta(b_i)(1\otimes u_{ki}^*a) &=&\sum_{ij}b_j\otimes u_{ji}u_{ki}^*a\\ &=&\sum_{ij}b_j\otimes \delta_{jk}a\\ &=&b_k\otimes a \end{eqnarray*} [[/math]]


Thus our commuting square is non-degenerate, as claimed.
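For a concrete instance of this non-degeneracy, here is a small numerical sketch in the simplest classical case, which is our own illustration rather than something from the text: [math]G=\mathbb Z_2[/math] acting on [math]B=C(\mathbb Z_2)[/math] by translation, with [math]A=C(G)[/math], so that [math]B\otimes A[/math] is just the algebra of functions on [math]\mathbb Z_2\times\mathbb Z_2[/math]. The products [math]\beta(b)(1\otimes a)[/math] are checked to span the whole of [math]B\otimes A[/math].

```python
import numpy as np

# G = Z_2 acting on B = C(Z_2) by translation, A = C(G).
# B (x) A = functions on Z_2 x Z_2, with pointwise multiplication.
points = [(x, g) for x in range(2) for g in range(2)]

def beta(f):
    """The coaction beta: B -> B (x) A, namely beta(f)(x, g) = f(x + g)."""
    return np.array([f[(x + g) % 2] for (x, g) in points])

def one_tensor(a):
    """The element 1 (x) a of B (x) A, for a in A = C(Z_2)."""
    return np.array([a[g] for (x, g) in points])

delta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]       # delta functions on Z_2

products = [beta(b) * one_tensor(a) for b in delta for a in delta]
print(np.linalg.matrix_rank(np.array(products)))           # 4 = dim(B (x) A): non-degenerate
```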

Getting now to the generalized Wassermann subfactors, we first have:

Proposition

Given a Markov inclusion of finite dimensional algebras [math]B_0\subset B_1[/math], construct its Jones tower, and denote it as follows:

[[math]] B_0\subset B_1\subset_{e_1}B_2= \lt B_1,e_1 \gt \subset_{e_2}B_3= \lt B_2,e_2 \gt \subset_{e_3}\ldots [[/math]]
If [math]\beta_1:B_1\to B_1\otimes A[/math] is a coaction/anticoaction leaving [math]B_0[/math] invariant then there exists a unique sequence [math]\{\beta_i\}_{i\geq 0}[/math] of coactions/anticoactions

[[math]] \beta_i:B_i\to B_i\otimes A [[/math]]
such that each [math]\beta_i[/math] extends [math]\beta_{i-1}[/math] and leaves invariant the Jones projection [math]e_{i-1}[/math].


Show Proof

By taking opposite inclusions we see that the assertion for anticoactions is equivalent to the one for coactions, that we will prove now. The uniqueness is clear from [math]B_i= \lt B_{i-1},e_{i-1} \gt [/math]. For the existence, we can apply Proposition 13.32 to:

[[math]] \begin{matrix} A&\subset&B_0\otimes A&\subset&B_1\otimes A\\ \cup&&\uparrow\beta_0&&\uparrow\beta_1\\ \mathbb C&\subset&B_0&\subset&B_1 \end{matrix} [[/math]]


Indeed, we get in this way that the square on the right is a non-degenerate commuting square. Now by performing basic constructions on it, we get a sequence as follows:

[[math]] \begin{matrix} B_0\otimes A&\subset&B_1\otimes A&\subset&B_2\otimes A&\subset&B_3\otimes A&\subset&\ldots\\ \uparrow\beta_0&&\uparrow\beta_1&&\uparrow\beta_2&&\uparrow\beta_3\\ B_0&\subset&B_1&\subset&B_2&\subset&B_3&\subset&\ldots \end{matrix} [[/math]]


It is easy to see from definitions that the [math]\beta_i[/math] are coactions, that they extend each other, and that they leave invariant the Jones projections. But this gives the result.

With the above technical results in hand, we can now formulate our main theorem regarding the fixed point subfactors, of the most possible general type, as follows:

Theorem

Let [math]G[/math] be a compact quantum group, and [math]G\to Aut(P)[/math] be a minimal action on a [math]{\rm II}_1[/math] factor. Consider a Markov inclusion of finite dimensional algebras

[[math]] B_0\subset B_1 [[/math]]
and let [math]G\to Aut(B_1)[/math] be an action which leaves invariant [math]B_0[/math] and which is such that its restrictions to the centers of [math]B_0[/math] and [math]B_1[/math] are ergodic. We have then a subfactor

[[math]] (B_0\otimes P)^G\subset (B_1\otimes P)^G [[/math]]
of index [math]N=[B_1:B_0][/math], called generalized Wassermann subfactor, whose Jones tower is

[[math]] (B_1\otimes P)^G\subset(B_2\otimes P)^G\subset(B_3\otimes P)^G\subset\ldots [[/math]]
where [math]\{ B_i\}_{i\geq 1}[/math] are the algebras in the Jones tower for [math]B_0\subset B_1[/math], with the canonical actions of [math]G[/math] coming from the action [math]G\to Aut(B_1)[/math], and whose planar algebra is given by:

[[math]] P_k=(B_0'\cap B_k)^G [[/math]]
These subfactors generalize the Jones, Ocneanu, Wassermann and Popa subfactors.


Show Proof

We have several things to be proved, the idea being as follows:


(1) The first part of the statement, regarding the factoriality, the index and the Jones tower assertions, is something that follows exactly as in the classical group case.


(2) In order to prove now the planar algebra assertion, consider the following diagram, with [math]i \lt j[/math] being arbitrary integers:

[[math]] \begin{matrix} P&\subset&B_i\otimes P&\subset&B_j\otimes P\\ \cup&&\cup&&\cup\\ P^\pi&\subset&(B_i\otimes P)^{\beta_i\odot\pi}&\subset&(B_j\otimes P)^{\beta_j\odot\pi} \end{matrix} [[/math]]


We know from the proof of Theorem 13.30 that the big square and the square on the left are both non-degenerate commuting squares. Thus Proposition 13.31 applies, and shows that the square on the right is a non-degenerate commuting square.


(3) Consider now the following sequence of non-degenerate commuting squares:

[[math]] \begin{matrix} B_0\otimes P&\subset&B_1\otimes P&\subset&B_2\otimes P&\subset&\ldots\\ \cup&&\cup&&\cup&&\\ (B_0\otimes P)^{\beta_0\odot\pi}&\subset&(B_1\otimes P)^{\beta_1\odot\pi}&\subset&(B_2\otimes P)^{\beta_2\odot\pi}&\subset&\ldots \end{matrix} [[/math]]


Since the Jones projections live in the lower line, Proposition 13.31 applies and shows that this is a sequence of basic constructions for non-degenerate commuting squares. In particular the lower line is a sequence of basic constructions, as desired.


(4) Finally, we already know from Theorem 13.22 that our construction generalizes the Jones, Ocneanu and Wassermann subfactors. As for the Popa subfactors, the result here follows from the discussion made after Proposition 13.23.

13d. The index theorem

Let us go back now to the arbitrary subfactors, with Theorem 13.14 being our main result. As an interesting consequence of the above results, somehow contradicting the “continuous geometry” philosophy that has been going on so far, in relation with the [math]{\rm II}_1[/math] factors, we have the following surprising result, also from Jones' original paper [1]:

Theorem

The index of subfactors [math]A\subset B[/math] is “quantized” in the [math][1,4][/math] range,

[[math]] N\in\left\{4\cos^2\left(\frac{\pi}{n}\right)\Big|n\geq3\right\}\cup[4,\infty] [[/math]]
with the obstruction coming from the existence of the representation [math]TL_N\subset B(H)[/math].


Show Proof

This comes from the basic construction, and more specifically from the combinatorics of the Jones projections [math]e_1,e_2,e_3,\ldots[/math], the idea being as follows:


(1) In order to best comment on what happens, when iterating the basic construction, let us record the first few values of the numbers in the statement:

[[math]] 4\cos^2\left(\frac{\pi}{3}\right)=1\quad,\quad 4\cos^2\left(\frac{\pi}{4}\right)=2 [[/math]]

[[math]] 4\cos^2\left(\frac{\pi}{5}\right)=\frac{3+\sqrt{5}}{2}\quad,\quad 4\cos^2\left(\frac{\pi}{6}\right)=3 [[/math]]

[[math]] \ldots [[/math]]


(2) When performing a basic construction, we obtain, by trace manipulations on [math]e_1[/math]:

[[math]] N\notin(1,2) [[/math]]


With a double basic construction, we obtain, by trace manipulations on [math] \lt e_1,e_2 \gt [/math]:

[[math]] N\notin\left(2,\frac{3+\sqrt{5}}{2}\right) [[/math]]


With a triple basic construction, we obtain, by trace manipulations on [math] \lt e_1,e_2,e_3 \gt [/math]:

[[math]] N\notin\left(\frac{3+\sqrt{5}}{2},3\right) [[/math]]


Thus, we are led to the conclusion in the statement, by a kind of recurrence, involving a certain family of orthogonal polynomials.


(3) In practice now, the most elegant way of proving the result is by using the fundamental fact, explained in Theorem 13.14, that the sequence of Jones projections [math]e_1,e_2,e_3,\ldots\in B(H)[/math] generates a copy of the Temperley-Lieb algebra of index [math]N[/math]:

[[math]] TL_N\subset B(H) [[/math]]


With this result in hand, we must prove that such a representation cannot exist in index [math]N \lt 4[/math], unless we are in the following special situation:

[[math]] N=4\cos^2\left(\frac{\pi}{n}\right) [[/math]]


But this can be proved by using some suitable trace and positivity manipulations on [math]TL_N[/math], as in (2) above. For full details here, we refer to [9], [1], [10].
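To make the positivity obstruction from (3) more tangible, here is a small numerical sketch of the standard argument, via the Chebyshev-type recurrence [math]\Delta_0=1[/math], [math]\Delta_1=\delta[/math], [math]\Delta_{k+1}=\delta\Delta_k-\Delta_{k-1}[/math], with [math]\delta=\sqrt{N}[/math]: in the usual approach via the Jones-Wenzl projections, which are not constructed in the text, these numbers are positive multiples of the traces of those projections, so a positive trace on [math]TL_N[/math] forces them to stay nonnegative, until one of them possibly vanishes, which happens precisely at [math]\delta=2\cos(\pi/m)[/math], or never, when [math]\delta\geq2[/math]. The check below is heuristic and numerical; for the genuine proof we refer to [9], [10].

```python
import numpy as np

def delta_sequence(N, kmax=60):
    """D_0 = 1, D_1 = d, D_{k+1} = d*D_k - D_{k-1}, with d = sqrt(N)."""
    d = np.sqrt(N)
    D = [1.0, d]
    for _ in range(kmax):
        D.append(d * D[-1] - D[-2])
    return D

def admissible(N, kmax=60, tol=1e-9):
    """Heuristic check: N passes if no D_k goes strictly negative
    before some D_j has (numerically) vanished."""
    for x in delta_sequence(N, kmax):
        if abs(x) < tol:
            return True           # d = 2cos(pi/m): the sequence hits zero
        if x < 0:
            return False          # positivity of the trace would fail
    return True                   # N >= 4: the sequence stays positive

print([round(4 * np.cos(np.pi / m) ** 2, 4) for m in range(3, 8)])   # 1, 2, 2.618, 3, 3.247
print(admissible(3.0), admissible(4 * np.cos(np.pi / 7) ** 2))       # True True
print(admissible(3.1), admissible(3.3))                              # False False
print(admissible(4.5))                                               # True
```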

The above result raises the question of understanding if there are further restrictions on the index of subfactors [math]A\subset B[/math], in the range found there, namely:

[[math]] N\in\left\{4\cos^2\left(\frac{\pi}{n}\right)\Big|n\geq3\right\}\cup[4,\infty] [[/math]]


This question is quite tricky, because it depends on the ambient factor [math]B\subset B(H)[/math], and also on the irreducibility assumption on the subfactor, namely [math]A'\cap B=\mathbb C[/math], which is something quite natural, and can be added to the problem.


All this is quite technical, to be discussed later on, when doing more advanced subfactor theory. In the simplest formulation of the question, the answer is generally “no”, as shown by the following result, also from Jones' original paper [1]:

Theorem

Consider the Murray-von Neumann hyperfinite [math]{\rm II}_1[/math] factor [math]R[/math]. Its subfactors [math]R_0\subset R[/math] are then as follows:

  • They exist for all admissible index values, [math]N\in\left\{4\cos^2\left(\frac{\pi}{n}\right)|n\geq3\right\}\cup[4,\infty][/math].
  • In index [math]N\leq4[/math], they can be realized as irreducible subfactors, [math]R_0'\cap R=\mathbb C[/math].
  • In index [math]N \gt 4[/math], they can be realized as arbitrary subfactors.


Show Proof

This is something quite tricky, worked out in Jones' original paper [1], and requiring some advanced algebra methods, the idea being as follows:


(1) This basically follows by taking a copy of the Temperley-Lieb algebra [math]TL_N[/math], and then building a subfactor out of it, first by constructing a certain inclusion of inductive limits of finite dimensional algebras, [math]\mathcal A\subset\mathcal B[/math], and then by taking the weak closure, which produces copies of the Murray-von Neumann hyperfinite [math]{\rm II}_1[/math] factor, [math]A\simeq B\simeq R[/math].


(2) This follows by examining and fine-tuning the construction in (1), which can be performed as to have control over the relative commutant.


(3) This follows as well from (1), with the proof here being in fact quite simple, based on a projection trick.

As another application now, which is more theoretical, let us go back to the question of defining the index of a subfactor in a purely algebraic manner, which has been open since chapter 10. The answer here, due to Pimsner and Popa [11], is as follows:

Theorem

Any finite index subfactor [math]A\subset B[/math] has an algebraic orthonormal basis, called Pimsner-Popa basis, which is constructed as follows:

  • In integer index, [math]N\in\mathbb N[/math], this is a usual basis, of type [math]\{b_1,\ldots,b_N\}[/math], whose length is exactly the index.
  • In non-integer index, [math]N\notin\mathbb N[/math], this is something of type [math]\{b_1,\ldots,b_n,c\}[/math], having length [math]n+1[/math], with [math]n=[N][/math], and with [math]N-n\in(0,1)[/math] being related to [math]c[/math].


Show Proof

This is something quite technical, which follows from the general theory of the basic construction. We refer here to the paper of Pimsner and Popa [11].
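As a toy illustration of what such a basis does, in the integer index case, one can look at the finite dimensional inclusion [math]\mathbb C\subset M_n(\mathbb C)[/math], with the normalized trace playing the role of the expectation [math]E[/math]: the [math]n^2[/math] elements [math]b_{ij}=\sqrt{n}\,e_{ij}[/math] are orthonormal in the sense that [math]E(b^*b')=\delta_{bb'}[/math], and reproduce any [math]x\in M_n(\mathbb C)[/math] as [math]x=\sum_bbE(b^*x)[/math], the length [math]n^2[/math] being the natural index here. This is not a [math]{\rm II}_1[/math] subfactor, just our own finite dimensional analogue of assertion (1).

```python
import numpy as np

n = 3
tr = lambda x: np.trace(x) / n                  # E = normalized trace: M_n(C) -> C

def unit(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

# Candidate Pimsner-Popa type basis for C inside M_n(C): b_ij = sqrt(n) e_ij
basis = [np.sqrt(n) * unit(i, j) for i in range(n) for j in range(n)]

# Orthonormality: E(b^* b') = delta_{b, b'}
gram = np.array([[tr(b.conj().T @ c) for c in basis] for b in basis])
print(np.allclose(gram, np.eye(n * n)))         # True

# Reconstruction: x = sum_b b E(b^* x), with n^2 terms, n^2 being the index here
x = np.arange(n * n, dtype=float).reshape(n, n)
print(np.allclose(sum(b * tr(b.conj().T @ x) for b in basis), x))    # True
```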

General references

Banica, Teo (2024). "Principles of operator algebras". arXiv:2208.03600 [math.OA].

References

  1. V.F.R. Jones, Index for subfactors, Invent. Math. 72 (1983), 1--25.
  2. V.F.R. Jones, On knot invariants related to some statistical mechanical models, Pacific J. Math. 137 (1989), 311--334.
  3. V.F.R. Jones, Planar algebras I (1999).
  4. V.F.R. Jones, The planar algebra of a bipartite graph, in “Knots in Hellas '98” (2000), 94--117.
  5. V.F.R. Jones, The annular structure of subfactors, Monogr. Enseign. Math. 38 (2001), 401--463.
  6. N.H. Temperley and E.H. Lieb, Relations between the “percolation” and “colouring” problem and other graph-theoretical problems associated with regular planar lattices: some exact results for the “percolation” problem, Proc. Roy. Soc. London 322 (1971), 251--280.
  7. A. Wassermann, Coactions and Yang-Baxter equations for ergodic actions and subfactors, London Math. Soc. Lect. Notes 136 (1988), 203--236.
  8. T. Banica, Subfactors associated to compact Kac algebras, Integral Equations Operator Theory 39 (2001), 1--14.
  9. F.M. Goodman, P. de la Harpe and V.F.R. Jones, Coxeter graphs and towers of algebras, Springer (1989).
  10. V.F.R. Jones and V.S. Sunder, Introduction to subfactors, Cambridge Univ. Press (1997).
  11. M. Pimsner and S. Popa, Entropy and index for subfactors, Ann. Sci. École Norm. Sup. 19 (1986), 57--106.