Representations

[math] \newcommand{\mathds}{\mathbb}[/math]

13a. Basic theory

We have seen so far that some algebraic and probabilistic theory for the finite subgroups [math]G\subset U_N[/math], ranging from elementary to quite advanced, can be developed. We have seen as well a few computations for the continuous compact subgroups [math]G\subset U_N[/math]. In what follows we develop some systematic theory for the arbitrary closed subgroups [math]G\subset U_N[/math], covering both the finite and the infinite case. The main examples that we have in mind, and the questions that we would like to solve for them, are as follows:


  • The orthogonal and unitary groups [math]O_N,U_N[/math]. Here we would like to have an integration formula, and results about character laws, in the [math]N\to\infty[/math] limit.
  • Various versions of [math]O_N,U_N[/math], such as the bistochastic groups [math]B_N,C_N[/math], or the symplectic groups [math]Sp_N[/math], with similar questions to be solved.
  • The reflection groups [math]H_N^{sd}\subset U_N[/math], with results about characters extending, or at least putting in a more conceptual framework, what we already have.


There is a lot of theory to be developed, and we will do this gradually. To be more precise, in this chapter and in the next one we will work out algebraic aspects, and then in the chapter afterwards and in the last one we will use these algebraic techniques, in order to work out probabilistic results, and in particular to answer the above questions. As before, the main notion that we will be interested in is that of a representation:

Definition

A representation of a compact group [math]G[/math] is a continuous group morphism, which can be faithful or not, into a unitary group:

[[math]] u:G\to U_N [[/math]]
The character of such a representation is the function [math]\chi:G\to\mathbb C[/math] given by

[[math]] g\to Tr(u_g) [[/math]]
where [math]Tr[/math] is the usual trace of the [math]N\times N[/math] matrices, [math]Tr(M)=\sum_iM_{ii}[/math].

As a basic example here, for any compact group we always have available the trivial 1-dimensional representation, which is by definition as follows:

[[math]] u:G\to U_1\quad,\quad g\to(1) [[/math]]

At the level of non-trivial examples now, most of the compact groups that we met so far, finite or continuous, naturally appear as closed subgroups [math]G\subset U_N[/math]. In this case, the embedding [math]G\subset U_N[/math] is of course a representation, called fundamental representation:

[[math]] u:G\subset U_N\quad,\quad g\to g [[/math]]

In this situation, there are many other representations of [math]G[/math], which are equally interesting. For instance, we can define the representation conjugate to [math]u[/math], as being:

[[math]] \bar{u}:G\subset U_N\quad,\quad g\to\bar{g} [[/math]]

In order to clarify all this, and see which representations are available, let us first discuss the various operations on the representations. The result here is as follows:

Proposition

The representations of a given compact group [math]G[/math] are subject to the following operations:

  • Making sums. Given representations [math]u,v[/math], having dimensions [math]N,M[/math], their sum is the [math]N+M[/math]-dimensional representation [math]u+v=diag(u,v)[/math].
  • Making products. Given representations [math]u,v[/math], having dimensions [math]N,M[/math], their tensor product is the [math]NM[/math]-dimensional representation [math](u\otimes v)_{ia,jb}=u_{ij}v_{ab}[/math].
  • Taking conjugates. Given a representation [math]u[/math], having dimension [math]N[/math], its complex conjugate is the [math]N[/math]-dimensional representation [math](\bar{u})_{ij}=\bar{u}_{ij}[/math].
  • Spinning by unitaries. Given a representation [math]u[/math], having dimension [math]N[/math], and a unitary [math]V\in U_N[/math], we can spin [math]u[/math] by this unitary, [math]u\to VuV^*[/math].


Show Proof

The fact that the operations in the statement are indeed well-defined, among maps from [math]G[/math] to unitary groups, can be checked as follows:


(1) This follows from the trivial fact that if [math]g\in U_N[/math] and [math]h\in U_M[/math] are two unitaries, then their diagonal sum is a unitary too, as follows:

[[math]] \begin{pmatrix}g&0\\ 0&h\end{pmatrix}\in U_{N+M} [[/math]]

(2) This follows from the fact that if [math]g\in U_N[/math] and [math]h\in U_M[/math] are two unitaries, then [math]g\otimes h\in U_{NM}[/math] is a unitary too. Given unitaries [math]g,h[/math], let us set indeed:

[[math]] (g\otimes h)_{ia,jb}=g_{ij}h_{ab} [[/math]]

This matrix is then a unitary too, as shown by the following computation:

[[math]] \begin{eqnarray*} [(g\otimes h)(g\otimes h)^*]_{ia,jb} &=&\sum_{kc}(g\otimes h)_{ia,kc}((g\otimes h)^*)_{kc,jb}\\ &=&\sum_{kc}(g\otimes h)_{ia,kc}\overline{(g\otimes h)_{jb,kc}}\\ &=&\sum_{kc}g_{ik}h_{ac}\bar{g}_{jk}\bar{h}_{bc}\\ &=&\sum_kg_{ik}\bar{g}_{jk}\sum_ch_{ac}\bar{h}_{bc}\\ &=&\delta_{ij}\delta_{ab} \end{eqnarray*} [[/math]]


(3) This simply follows from the fact that if [math]g\in U_N[/math] is unitary, then so is its complex conjugate, [math]\bar{g}\in U_N[/math], and this is due to the following formula, obtained by conjugating:

[[math]] g^*=g^{-1}\implies g^t=\bar{g}^{-1} [[/math]]

(4) This is clear as well, because if [math]g\in U_N[/math] is unitary, and [math]V\in U_N[/math] is another unitary, then we can spin [math]g[/math] by this unitary, and we obtain a unitary as follows:

[[math]] VgV^*\in U_N [[/math]]

Thus, our operations are well-defined, and this leads to the above conclusions.
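
As a quick numerical illustration of the above checks, here is a small sketch in Python, using numpy, verifying on some random unitaries that the four operations indeed produce unitaries of the appropriate sizes. The helper random_unitary below, based on a QR factorization, is just a testing device, and is not part of the theory:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # the QR factorization of a random complex matrix has a unitary Q factor
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

def is_unitary(m):
    return np.allclose(m @ m.conj().T, np.eye(m.shape[0]))

g, h, V = random_unitary(3), random_unitary(2), random_unitary(3)

# (1) sums: diag(g,h) lies in U_{N+M}
s = np.block([[g, np.zeros((3, 2))], [np.zeros((2, 3)), h]])
# (2) tensor products: (g x h)_{ia,jb} = g_{ij} h_{ab} lies in U_{NM}
t = np.kron(g, h)
# (3) conjugates: the entrywise conjugate of g lies in U_N
c = g.conj()
# (4) spinning: V g V^* lies in U_N
w = V @ g @ V.conj().T

print(all(map(is_unitary, (s, t, c, w))))  # expected output: True
```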

In relation now with characters, we have the following result:

Proposition

We have the following formulae, regarding characters

[[math]] \chi_{u+v}=\chi_u+\chi_v\quad,\quad \chi_{u\otimes v}=\chi_u\chi_v\quad,\quad \chi_{\bar{u}}=\bar{\chi}_u\quad,\quad \chi_{VuV^*}=\chi_u [[/math]]
in relation with the basic operations for the representations.


Show Proof

All these assertions are elementary, by using the following well-known trace formulae, valid for any two square matrices [math]g,h[/math], and any unitary [math]V[/math]:

[[math]] Tr(diag(g,h))=Tr(g)+Tr(h)\quad,\quad Tr(g\otimes h)=Tr(g)Tr(h) [[/math]]

[[math]] Tr(\bar{g})=\overline{Tr(g)}\quad,\quad Tr(VgV^*)=Tr(g) [[/math]]

To be more precise, the first formula is clear from definitions. Regarding now the second formula, the computation here is immediate too, as follows:

[[math]] \begin{eqnarray*} Tr(g\otimes h) &=&\sum_{ia}(g\otimes h)_{ia,ia}\\ &=&\sum_{ia}g_{ii}h_{aa}\\ &=&Tr(g)Tr(h) \end{eqnarray*} [[/math]]


Regarding now the third formula, this is clear from definitions, by conjugating. Finally, regarding the fourth formula, this can be established as follows:

[[math]] Tr(VgV^*)=Tr(gV^*V)=Tr(g) [[/math]]

Thus, we are led to the conclusions in the statement.
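
Here is a similar numerical sketch, this time checking the four trace formulae from the above proof on some random unitaries, again with numpy, and with the same QR-based construction of random unitaries as before:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

g, h, V = random_unitary(3), random_unitary(2), random_unitary(3)
tr = np.trace

checks = [
    np.isclose(tr(np.block([[g, np.zeros((3, 2))], [np.zeros((2, 3)), h]])), tr(g) + tr(h)),
    np.isclose(tr(np.kron(g, h)), tr(g) * tr(h)),
    np.isclose(tr(g.conj()), np.conj(tr(g))),
    np.isclose(tr(V @ g @ V.conj().T), tr(g)),
]
print(all(checks))  # expected output: True
```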

Assume now that we are given a closed subgroup [math]G\subset U_N[/math]. By using the above operations, we can construct a whole family of representations of [math]G[/math], as follows:

Definition

Given a closed subgroup [math]G\subset U_N[/math], its Peter-Weyl representations are the tensor products between the fundamental representation and its conjugate:

[[math]] u:G\subset U_N\quad,\quad \bar{u}:G\subset U_N [[/math]]

We denote these tensor products [math]u^{\otimes k}[/math], with [math]k=\circ\bullet\bullet\circ\ldots[/math] being a colored integer, with the colored tensor powers being defined according to the rules

[[math]] u^{\otimes\circ}=u\quad,\quad u^{\otimes\bullet}=\bar{u}\quad,\quad u^{\otimes kl}=u^{\otimes k}\otimes u^{\otimes l} [[/math]]
and with the convention that [math]u^{\otimes\emptyset}[/math] is the trivial representation [math]1:G\to U_1[/math].

Here are a few examples of such Peter-Weyl representations, namely those coming from the colored integers of length 2, to be often used in what follows:

[[math]] u^{\otimes\circ\circ}=u\otimes u\quad,\quad u^{\otimes\circ\bullet}=u\otimes\bar{u} [[/math]]

[[math]] u^{\otimes\bullet\circ}=\bar{u}\otimes u\quad,\quad u^{\otimes\bullet\bullet}=\bar{u}\otimes\bar{u} [[/math]]
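
For computational purposes, a colored integer [math]k[/math] can be encoded as a word over a two-letter alphabet, and the corresponding Peter-Weyl representation, evaluated at a group element, is then an iterated Kronecker product. Here is a sketch of this, with 'o' standing for [math]\circ[/math] and 'b' for [math]\bullet[/math], and with the empty word giving the trivial representation:

```python
import numpy as np

def peter_weyl(g, word):
    """Evaluate u^{(x)k} at the matrix g, with 'o' meaning u and 'b' meaning bar(u)."""
    out = np.eye(1, dtype=complex)           # empty word: the trivial representation
    for letter in word:
        out = np.kron(out, g if letter == 'o' else g.conj())
    return out

# example: a 3x3 permutation matrix, and the four length-two colored integers
g = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)
for word in ["oo", "ob", "bo", "bb"]:
    print(word, peter_weyl(g, word).shape)   # each of these is a 9x9 unitary
```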

In relation now with characters, we have the following result:

Proposition

The characters of Peter-Weyl representations are given by

[[math]] \chi_{u^{\otimes k}}=(\chi_u)^k [[/math]]
with the colored powers of a variable [math]\chi[/math] being by definition given by

[[math]] \chi^\circ=\chi\quad,\quad \chi^\bullet=\bar{\chi}\quad,\quad \chi^{kl}=\chi^k\chi^l [[/math]]
and with the convention that [math]\chi^\emptyset[/math] equals by definition [math]1[/math].


Show Proof

This follows indeed from the additivity, multiplicativity and conjugation formulae established in Proposition 13.3, via the conventions in Definition 13.4.

Getting back now to our motivations, we can see the interest in the above constructions. Indeed, the joint moments of the main character [math]\chi=\chi_u[/math] and its adjoint [math]\bar{\chi}=\chi_{\bar{u}}[/math] are simply the expectations of the characters of various Peter-Weyl representations:

[[math]] \int_G\chi^k=\int_G \chi_{u^{\otimes k}} [[/math]]

Summarizing, given a closed subgroup [math]G\subset U_N[/math], we would like to understand its Peter-Weyl representations, and compute the expectations of the characters of these representations. In order to do so, let us formulate the following key definition:

Definition

Given a compact group [math]G[/math], and two of its representations,

[[math]] u:G\to U_N\quad,\quad v:G\to U_M [[/math]]
we define the linear space of intertwiners between these representations as being

[[math]] Hom(u,v)=\left\{T\in M_{M\times N}(\mathbb C)\Big|Tu_g=v_gT,\forall g\in G\right\} [[/math]]
and we use the following conventions:

  • We use the notations [math]Fix(u)=Hom(1,u)[/math], and [math]End(u)=Hom(u,u)[/math].
  • We write [math]u\sim v[/math] when [math]Hom(u,v)[/math] contains an invertible element.
  • We say that [math]u[/math] is irreducible, and write [math]u\in Irr(G)[/math], when [math]End(u)=\mathbb C1[/math].

The terminology here is standard, with Hom and End standing for “homomorphisms” and “endomorphisms”, and with Fix standing for “fixed points”. In practice, it is useful to think of the representations of [math]G[/math] as being the objects of some kind of abstract combinatorial structure associated to [math]G[/math], and of the intertwiners between these representations as being the “arrows” between these objects. We have in fact the following result:

Theorem

The following happen:

  • The intertwiners are stable under composition:
    [[math]] T\in Hom(u,v)\ ,\ S\in Hom(v,w) \implies ST\in Hom(u,w) [[/math]]
  • The intertwiners are stable under taking tensor products:
    [[math]] S\in Hom(u,v)\ ,\ T\in Hom(w,t) \implies S\otimes T\in Hom(u\otimes w,v\otimes t) [[/math]]
  • The intertwiners are stable under taking adjoints:
    [[math]] T\in Hom(u,v) \implies T^*\in Hom(v,u) [[/math]]
  • Thus, the Hom spaces form a tensor [math]*[/math]-category.


Show Proof

All this is clear from definitions, the verifications being as follows:


(1) This follows indeed from the following computation, valid for any [math]g\in G[/math]:

[[math]] STu_g=Sv_gT=w_gST [[/math]]

(2) Again, this is clear, because we have the following computation:

[[math]] \begin{eqnarray*} (S\otimes T)(u_g\otimes w_g) &=&Su_g\otimes Tw_g\\ &=&v_gS\otimes t_gT\\ &=&(v_g\otimes t_g)(S\otimes T) \end{eqnarray*} [[/math]]


(3) This follows from the following computation, valid for any [math]g\in G[/math]:

[[math]] \begin{eqnarray*} Tu_g=v_gT &\implies&u_g^*T^*=T^*v_g^*\\ &\implies&T^*v_g=u_gT^* \end{eqnarray*} [[/math]]


(4) This is just a conclusion of (1,2,3), with a tensor [math]*[/math]-category being by definition an abstract beast satisfying these conditions (1,2,3). We will be back to tensor categories later on, in chapter 14 below, with more details on all this.

The above result is quite interesting, because it shows that the combinatorics of a compact group [math]G[/math] is described by a certain collection of linear spaces, which can be in principle investigated by using tools from linear algebra. Thus, what we have here is a “linearization” idea. We will heavily use this idea, in what follows.
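
As an illustration of this linearization idea, in the case of a finite group, given by a list of matrices, the space [math]Hom(u,v)[/math] can be computed concretely as a null space, via the standard vec trick, [math]vec(v_gT-Tu_g)=(I\otimes v_g-u_g^t\otimes I)\,vec(T)[/math], with vec meaning column-stacking. Here is a sketch of this, in Python, with the helpers below being just for illustration, tested on the permutation representation of [math]S_3[/math]:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1                        # the matrix sending e_i to e_{p(i)}
    return m

def hom_space(us, vs, tol=1e-10):
    """A basis of Hom(u,v) = {T : T u_g = v_g T for all g}, for matrix lists us, vs."""
    N, M = us[0].shape[0], vs[0].shape[0]
    rows = [np.kron(np.eye(N), v) - np.kron(u.T, np.eye(M)) for u, v in zip(us, vs)]
    _, s, vh = np.linalg.svd(np.vstack(rows))
    rank = int(np.sum(s > tol))
    return [vh[k].conj().reshape((M, N), order='F') for k in range(rank, M * N)]

# the permutation representation u of S_3, and the trivial representation
group = [perm_matrix(p) for p in permutations(range(3))]
trivial = [np.eye(1) for _ in group]

print(len(hom_space(group, trivial)))   # 1 = dim Hom(u,1), spanned by the all-ones form
print(len(hom_space(group, group)))     # 2 = dim End(u), so u has two irreducible summands
```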

13b. Peter-Weyl theory

In what follows we develop a systematic theory of the representations of the compact groups [math]G[/math], with emphasis on the Peter-Weyl representations, in the closed subgroup case [math]G\subset U_N[/math], that we are mostly interested in. Before starting, some comments:


(1) First of all, all this goes back to Hermann Weyl, who along with Einstein, Poincaré, von Neumann and a few others was part of the last generation of great mathematicians and physicists, knowing everything about mathematics, and everything about physics too. Get to know more about him, and have a look at his books [1], [2], [3].


(2) In what concerns Peter-Weyl theory, which is something quite tricky, this is perhaps best learned for the finite groups first, and for more general groups afterwards. This is how I learned it myself, back when I was a student, by reading Serre [4] for finite groups, and then Woronowicz [5], directly for the compact quantum groups.


(3) Getting back now to the present book, we will be a bit in a hurry with this, because we have so many other things to talk about. So, we will skip the finite group preliminaries, and deal directly with the compact groups. And by using a somewhat rough functional analysis viewpoint. For a complement to all this, we recommend Serre [4].


As a starting point, as a main consequence of Theorem 13.7, we have:

Theorem

Given a representation of a compact group [math]u:G\to U_N[/math], the corresponding linear space of self-intertwiners

[[math]] End(u)\subset M_N(\mathbb C) [[/math]]
is a [math]*[/math]-algebra, with respect to the usual involution of the matrices.


Show Proof

By definition, the space [math]End(u)[/math] is a linear subspace of [math]M_N(\mathbb C)[/math]. We know from Theorem 13.7 (1) that this subspace [math]End(u)[/math] is a subalgebra of [math]M_N(\mathbb C)[/math], and then we know as well from Theorem 13.7 (3) that this subalgebra is stable under the involution [math]*[/math]. Thus, what we have here is a [math]*[/math]-subalgebra of [math]M_N(\mathbb C)[/math], as claimed.

The above result is quite interesting, because it gets us into linear algebra. Indeed, associated to any group representation [math]u:G\to U_N[/math] is now a quite familiar object, namely the algebra [math]End(u)\subset M_N(\mathbb C)[/math]. In order to exploit this fact, we will need a well-known result, complementing the operator algebra theory from chapter 8, namely:

Theorem

Let [math]A\subset M_N(\mathbb C)[/math] be a [math]*[/math]-algebra.

  • The unit decomposes as follows, with [math]p_i\in A[/math] being central minimal projections:
    [[math]] 1=p_1+\ldots+p_k [[/math]]
  • Each of the following linear spaces is a non-unital [math]*[/math]-subalgebra of [math]A[/math]:
    [[math]] A_i=p_iAp_i [[/math]]
  • We have a non-unital [math]*[/math]-algebra sum decomposition, as follows:
    [[math]] A=A_1\oplus\ldots\oplus A_k [[/math]]
  • We have unital [math]*[/math]-algebra isomorphisms as follows, with [math]n_i=rank(p_i)[/math]:
    [[math]] A_i\simeq M_{n_i}(\mathbb C) [[/math]]
  • Thus, we have a [math]*[/math]-algebra isomorphism as follows:
    [[math]] A\simeq M_{n_1}(\mathbb C)\oplus\ldots\oplus M_{n_k}(\mathbb C) [[/math]]

Moreover, the final conclusion holds in fact for any finite dimensional [math]C^*[/math]-algebra.


Show Proof

This is something very standard. Consider indeed an arbitrary [math]*[/math]-algebra of the [math]N\times N[/math] matrices, [math]A\subset M_N(\mathbb C)[/math]. Let us first look at the center of this algebra, [math]Z(A)=A\cap A'[/math]. This center, being a finite dimensional commutative [math]*[/math]-algebra, is of the following form:

[[math]] Z(A)\simeq\mathbb C^k [[/math]]

Consider now the standard basis [math]e_1,\ldots,e_k\in\mathbb C^k[/math], and let [math]p_1,\ldots,p_k\in Z(A)[/math] be the images of these vectors via the above identification. In other words, these elements [math]p_1,\ldots,p_k\in A[/math] are central minimal projections, summing up to 1:

[[math]] p_1+\ldots+p_k=1 [[/math]]

The idea is then that this partition of the unity will eventually lead to the block decomposition of [math]A[/math], as in the statement. We prove this in 4 steps, as follows:


\underline{Step 1}. We first construct the matrix blocks, our claim here being that each of the following linear subspaces of [math]A[/math] is a non-unital [math]*[/math]-subalgebra of [math]A[/math]:

[[math]] A_i=p_iAp_i [[/math]]

But this is clear, because each [math]A_i[/math] is closed under the various non-unital [math]*[/math]-subalgebra operations, thanks to the projection equations [math]p_i^2=p_i^*=p_i[/math].


\underline{Step 2}. We prove now that the above algebras [math]A_i\subset A[/math] are in a direct sum position, in the sense that we have a non-unital [math]*[/math]-algebra sum decomposition, as follows:

[[math]] A=A_1\oplus\ldots\oplus A_k [[/math]]

As with any direct sum question, we have two things to be proved here. First, by using the formula [math]p_1+\ldots+p_k=1[/math] and the projection equations [math]p_i^2=p_i^*=p_i[/math], we conclude that we have the needed generation property, namely:

[[math]] A_1+\ldots+ A_k=A [[/math]]

As for the fact that the sum is indeed direct, this follows as well from the formula [math]p_1+\ldots+p_k=1[/math], and from the projection equations [math]p_i^2=p_i^*=p_i[/math].


\underline{Step 3}. Our claim now, which will finish the proof, is that each of the [math]*[/math]-subalgebras [math]A_i=p_iAp_i[/math] constructed above is in fact a full matrix algebra. To be more precise, with [math]n_i=rank(p_i)[/math], our claim is that we have isomorphisms, as follows:

[[math]] A_i\simeq M_{n_i}(\mathbb C) [[/math]]

In order to prove this claim, recall that the projections [math]p_i\in A[/math] were chosen central and minimal. Thus, the center of each of the algebras [math]A_i[/math] reduces to the scalars:

[[math]] Z(A_i)=\mathbb C [[/math]]

But this shows, either via a direct computation, or via the bicommutant theorem, that each of the algebras [math]A_i[/math] is a full matrix algebra, as claimed.


\underline{Step 4}. We can now obtain the result, by putting together what we have. Indeed, by using the results from Step 2 and Step 3, we obtain an isomorphism as follows:

[[math]] A\simeq M_{n_1}(\mathbb C)\oplus\ldots\oplus M_{n_k}(\mathbb C) [[/math]]

In addition to this, a careful look at the isomorphisms established in Step 3 shows that, at the global level of the algebra [math]A[/math] itself, the above isomorphism simply comes by twisting the following standard multimatrix embedding by a certain unitary matrix [math]U\in U_N[/math]:

[[math]] M_{n_1}(\mathbb C)\oplus\ldots\oplus M_{n_k}(\mathbb C)\subset M_N(\mathbb C) [[/math]]

Now by putting everything together, we obtain the result. Finally, in what regards the last assertion, which we will not really need in what follows, this can be deduced from what we have, by using the GNS theorem from chapter 8. Indeed, assuming that [math]A[/math] is a finite dimensional [math]C^*[/math]-algebra, that theorem gives an embedding as follows:

[[math]] A\subset\mathcal L(A)\simeq M_N(\mathbb C)\quad,\quad N=\dim A [[/math]]

Thus, our algebra is a [math]*[/math]-subalgebra of [math]M_N(\mathbb C)[/math], and we get the result.
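
As a concrete illustration of the above block decomposition, consider the [math]*[/math]-algebra [math]A\subset M_3(\mathbb C)[/math] spanned by the [math]3\times3[/math] permutation matrices of [math]S_3[/math]. The following Python sketch checks numerically that [math]\dim A=5[/math], and that with respect to the two central projections written down by hand below, the blocks have dimensions [math]1[/math] and [math]4[/math], so that [math]A\simeq\mathbb C\oplus M_2(\mathbb C)[/math]:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1
    return m

group = [perm_matrix(p) for p in permutations(range(3))]

# A = span of the permutation matrices, i.e. the image of the group algebra of S_3
dim_A = np.linalg.matrix_rank(np.vstack([g.reshape(1, 9) for g in group]))
print(dim_A)                                  # 5, consistent with C + M_2(C)

# central minimal projections: p1 = average of the group, p2 = its complement
p1 = sum(group) / len(group)                  # this is the all-1/3 matrix
p2 = np.eye(3) - p1

for p in (p1, p2):
    block = np.vstack([(p @ g @ p).reshape(1, 9) for g in group])
    print(np.linalg.matrix_rank(block))       # 1 then 4: the blocks C and M_2(C)
```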

We can now formulate our first Peter-Weyl theorem, as follows:

Theorem (PW1)

Let [math]u:G\to U_N[/math] be a group representation, consider the algebra [math]A=End(u)[/math], and decompose its unit as a sum of minimal projections, which is possible by Theorem 13.9:

[[math]] 1=p_1+\ldots+p_k [[/math]]
The representation [math]u[/math] decomposes then as a direct sum, as follows,

[[math]] u=u_1+\ldots+u_k [[/math]]
with each [math]u_i[/math] being an irreducible representation, obtained by restricting [math]u[/math] to [math]Im(p_i)[/math].


Show Proof

This basically follows from Theorem 13.8 and Theorem 13.9, as follows:


(1) As a first observation, by replacing [math]G[/math] with its image [math]u(G)\subset U_N[/math], we can assume if we want that our representation [math]u[/math] is faithful, [math]G\subset_uU_N[/math]. However, this replacement will not be really needed, and we will keep using [math]u:G\to U_N[/math], as above.


(2) In order to prove the result, we will need some preliminaries. We first associate to our representation [math]u:G\to U_N[/math] the corresponding action map on [math]\mathbb C^N[/math]. If a linear subspace [math]V\subset\mathbb C^N[/math] is invariant, the restriction of the action map to [math]V[/math] is an action map too, which must come from a subrepresentation [math]v\subset u[/math]. This is indeed clear from definitions, with the remark that the unitaries, being isometries, restrict to unitaries on any invariant subspace.


(3) Consider now a projection [math]p\in End(u)[/math]. From [math]pu=up[/math] we obtain that the linear space [math]V=Im(p)[/math] is invariant under [math]u[/math], and so this space must come from a subrepresentation [math]v\subset u[/math]. It is routine to check that the operation [math]p\to v[/math] maps subprojections to subrepresentations, and minimal projections to irreducible representations.


(4) To be more precise here, the condition [math]p\in End(u)[/math] reformulates as follows:

[[math]] pu_g=u_gp\quad,\quad\forall g\in G [[/math]]

As for the condition that [math]V=Im(p)[/math] is invariant, this reformulates as follows:

[[math]] pu_gp=u_gp\quad,\quad\forall g\in G [[/math]]

Thus, we are in need of a technical linear algebra result, stating that for a projection [math]P\in M_N(\mathbb C)[/math] and a unitary [math]U\in U_N[/math], the following happens:

[[math]] PUP=UP\implies PU=UP [[/math]]

(5) But this can be established with some [math]C^*[/math]-algebra know-how, as follows:

[[math]] \begin{eqnarray*} tr[(PU-UP)(PU-UP)^*] &=&tr[(PU-UP)(U^*P-PU^*)]\\ &=&tr[P-PUPU^*-UPU^*P+UPU^*]\\ &=&tr[P-UPU^*-UPU^*+UPU^*]\\ &=&tr[P-UPU^*]\\ &=&0 \end{eqnarray*} [[/math]]


Indeed, by positivity this gives [math]PU-UP=0[/math], as desired.


(6) With these preliminaries in hand, let us decompose the algebra [math]End(u)[/math] as in Theorem 13.9, and then refine this into a decomposition [math]1=p_1+\ldots+p_k[/math] of the unit into minimal projections, by further splitting the unit of each matrix block. If we denote by [math]u_i\subset u[/math] the subrepresentation coming from the vector space [math]V_i=Im(p_i)[/math], then we obtain in this way a decomposition [math]u=u_1+\ldots+u_k[/math], as in the statement, with each [math]u_i[/math] being irreducible, as noted in (3) above.
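
As an illustration of this procedure, here is a sketch of how the permutation representation of [math]S_3[/math] can be split numerically: one computes [math]End(u)[/math] as a null space, then diagonalizes a generic self-adjoint element of it, whose spectral projections are the minimal projections [math]p_i[/math]. This shortcut works here because [math]End(u)[/math] happens to be commutative; in general one would have to further split the matrix blocks:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1
    return m

group = [perm_matrix(p) for p in permutations(range(3))]
N = 3

# End(u) = {T : Tg = gT for all g in S_3}, computed as a null space via the vec trick
A = np.vstack([np.kron(np.eye(N), g) - np.kron(g.T, np.eye(N)) for g in group])
_, s, vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
end_u = [vh[k].conj().reshape((N, N), order='F') for k in range(rank, N * N)]
print(len(end_u))                     # 2, so u splits into two irreducible summands

# End(u) being commutative here, the minimal projections are the spectral
# projections of a generic self-adjoint element of End(u)
x = sum(rng.standard_normal() * t for t in end_u)
a = x + x.conj().T
vals, vecs = np.linalg.eigh(a)
clusters = [[0]]
for k in range(1, N):
    if abs(vals[k] - vals[clusters[-1][-1]]) < 1e-8:
        clusters[-1].append(k)
    else:
        clusters.append([k])
projections = [vecs[:, c] @ vecs[:, c].conj().T for c in clusters]
print([int(round(np.trace(p).real)) for p in projections])   # ranks {1, 2}: trivial + standard
```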

In order to formulate our second Peter-Weyl theorem, we need to talk about coefficients, and smoothness. Things here are quite tricky, and we can proceed as follows:

Definition

Given a closed subgroup [math]G\subset U_N[/math], and a unitary representation [math]v:G\to U_M[/math], the space of coefficients of this representation is:

[[math]] C_v=\left\{f\circ v\Big|f\in M_M(\mathbb C)^*\right\} [[/math]]
In other words, by delinearizing, [math]C_v\subset C(G)[/math] is the following linear space:

[[math]] C_v=span\Big[g\to(v_g)_{ij}\Big] [[/math]]
We say that [math]v[/math] is smooth if its matrix coefficients [math]g\to(v_g)_{ij}[/math] appear as polynomials in the standard matrix coordinates [math]g\to g_{ij}[/math], and their conjugates [math]g\to\overline{g}_{ij}[/math].

As a basic example of coefficient we have, besides the matrix coefficients [math]g\to(v_g)_{ij}[/math], the character, which appears as the diagonal sum of these coefficients:

[[math]] \chi_v(g)=\sum_i(v_g)_{ii} [[/math]]

Regarding the notion of smoothness, things are quite tricky here, the idea being that any closed subgroup [math]G\subset U_N[/math] can be shown to be a Lie group, and that, with this result in hand, a representation [math]v:G\to U_M[/math] is smooth precisely when the condition on coefficients from the above definition is satisfied. All this is quite technical, and we will not get into it. We will simply use Definition 13.11 as such, and further comment on this later on. Here is now our second Peter-Weyl theorem, complementing Theorem 13.10:

Theorem (PW2)

Given a closed subgroup [math]G\subset_uU_N[/math], any of its irreducible smooth representations

[[math]] v:G\to U_M [[/math]]
appears inside a tensor product of the fundamental representation [math]u[/math] and its adjoint [math]\bar{u}[/math].


Show Proof

In order to prove the result, we will use the following three elementary facts, regarding the spaces of coefficients introduced above:


(1) The construction [math]v\to C_v[/math] is functorial, in the sense that it maps subrepresentations into linear subspaces. This is indeed something which is routine to check.


(2) Our smoothness assumption on [math]v:G\to U_M[/math], as formulated in Definition 13.11, means that we have an inclusion of linear spaces as follows:

[[math]] C_v\subset \lt g_{ij} \gt [[/math]]

(3) By definition of the Peter-Weyl representations, as arbitrary tensor products between the fundamental representation [math]u[/math] and its conjugate [math]\bar{u}[/math], we have:

[[math]] \lt g_{ij} \gt =\sum_kC_{u^{\otimes k}} [[/math]]

(4) Now by putting together the observations (2,3) we conclude that we must have an inclusion as follows, for certain exponents [math]k_1,\ldots,k_p[/math]:

[[math]] C_v\subset C_{u^{\otimes k_1}\oplus\ldots\oplus u^{\otimes k_p}} [[/math]]

By using now the functoriality result from (1), we deduce from this that we have an inclusion of representations, as follows:

[[math]] v\subset u^{\otimes k_1}\oplus\ldots\oplus u^{\otimes k_p} [[/math]]

Together with Theorem 13.10, this leads to the conclusion in the statement.

As a conclusion to what we have so far, the problem to be solved is that of splitting the Peter-Weyl representations into sums of irreducible representations.

13c. Haar integration

In order to further advance, and complete the Peter-Weyl theory, we need to talk about integration over [math]G[/math]. In the finite group case the situation is trivial, as follows:

Proposition

Any finite group [math]G[/math] has a unique probability measure which is invariant under left and right translations,

[[math]] \mu(E)=\mu(gE)=\mu(Eg) [[/math]]
and this is the normalized counting measure on [math]G[/math], given by [math]\mu(E)=|E|/|G|[/math].


Show Proof

The invariance condition in the statement gives, with [math]E=\{h\}[/math]:

[[math]] \mu\{h\}=\mu\{gh\}=\mu\{hg\} [[/math]]

Thus [math]\mu[/math] must be the usual counting measure, normalized so as to have mass 1.

In the continuous group case now, the simplest examples, to be studied first, are the compact abelian groups. Here things are standard again, as follows:

Theorem

Given a compact abelian group [math]G[/math], with dual group denoted [math]\Gamma=\widehat{G}[/math], we have an isomorphism of commutative algebras

[[math]] C(G)\simeq C^*(\Gamma) [[/math]]
and via this isomorphism, the functional defined by linearity and the following formula,

[[math]] \int_Gg=\delta_{g1} [[/math]]
for any [math]g\in\Gamma[/math], is the integration with respect to the unique uniform measure on [math]G[/math].


Show Proof

This is something that we basically know, from chapters 8 and 9, coming as a consequence of the general results regarding the abelian groups and the commutative [math]C^*[/math]-algebras developed there. To be more precise, and skipping some details here, the conclusions in the statement can be deduced as follows:


(1) We can either apply the Gelfand theorem, from chapter 8, to the group algebra [math]C^*(\Gamma)[/math], which is commutative, and this gives all the results.


(2) Or, we can use decomposition results for the compact abelian groups from chapter 9, and by reducing things to summands, once again we obtain the results.

Summarizing, we have results in the finite case, and in the compact abelian case, with the remark that the proof in the compact abelian case was quite brief; but this result, coming as an illustration for more general things to follow, is not crucial for us. Let us discuss now the construction of the uniform probability measure in general. This is something quite technical, the idea being that the uniform measure [math]\mu[/math] over [math]G[/math] can be constructed by starting with a faithful probability measure [math]\nu[/math], meaning one having full support, and setting:

[[math]] \mu=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\nu^{*k} [[/math]]

Thus, our next task will be that of proving this result. It is convenient, for this purpose, to work with the integration functionals with respect to the various measures on [math]G[/math], instead of the measures themselves. Let us begin with the following key result:

Proposition

Given a unital positive linear form [math]\varphi:C(G)\to\mathbb C[/math], the limit

[[math]] \int_\varphi f=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}(f) [[/math]]
exists, and for a coefficient of a representation [math]f=(\tau\otimes id)v[/math] we have

[[math]] \int_\varphi f=\tau(P) [[/math]]
where [math]P[/math] is the orthogonal projection onto the [math]1[/math]-eigenspace of [math](id\otimes\varphi)v[/math].


Show Proof

By linearity it is enough to prove the first assertion for functions of the following type, where [math]v[/math] is a Peter-Weyl representation, and [math]\tau[/math] is a linear form:

[[math]] f=(\tau\otimes id)v [[/math]]

Thus we are led into the second assertion, and more precisely we can have the whole result proved if we can establish the following formula, with [math]f=(\tau\otimes id)v[/math]:

[[math]] \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}(f)=\tau(P) [[/math]]

In order to prove this latter formula, observe that we have:

[[math]] \varphi^{*k}(f) =(\tau\otimes\varphi^{*k})v =\tau((id\otimes\varphi^{*k})v) [[/math]]

Let us set [math]M=(id\otimes\varphi)v[/math]. In terms of this matrix, we have:

[[math]] ((id\otimes\varphi^{*k})v)_{i_0i_k} =\sum_{i_1\ldots i_{k-1}}M_{i_0i_1}\ldots M_{i_{k-1}i_k} =(M^k)_{i_0i_k} [[/math]]

Thus we have the following formula, for any [math]k\in\mathbb N[/math]:

[[math]] (id\otimes\varphi^{*k})v=M^k [[/math]]

It follows that our Cesàro limit is given by the following formula:

[[math]] \begin{eqnarray*} \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}(f) &=&\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\tau(M^k)\\ &=&\tau\left(\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^nM^k\right) \end{eqnarray*} [[/math]]


Now since [math]v[/math] is unitary we have [math]||v||=1[/math], and so [math]||M||\leq1[/math]. Thus the last Cesàro limit converges, and equals the orthogonal projection onto the [math]1[/math]-eigenspace of [math]M[/math]:

[[math]] \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^nM^k=P [[/math]]

Thus our initial Cesàro limit converges as well, to [math]\tau(P)[/math], as desired.
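
The key analytic fact used at the end of the above proof, namely that for a matrix satisfying [math]||M||\leq1[/math] the Cesàro means of the powers [math]M^k[/math] converge to the projection onto the [math]1[/math]-eigenspace, can be illustrated numerically as follows. This is just a sketch, with [math]M[/math] being an arbitrary contraction having a known [math]1[/math]-eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(z)                        # a random unitary

# a contraction with eigenvalue 1 of multiplicity one, other eigenvalues of modulus <= 1
d = np.array([1.0, np.exp(2j * np.pi / 3), 0.8j, 0.5])
M = Q @ np.diag(d) @ Q.conj().T

# Cesaro means of the powers of M
n = 5000
cesaro = np.zeros((4, 4), dtype=complex)
power = np.eye(4, dtype=complex)
for k in range(n):
    power = power @ M
    cesaro += power
cesaro /= n

P = np.outer(Q[:, 0], Q[:, 0].conj())         # the projection onto the 1-eigenspace
print(np.max(np.abs(cesaro - P)))             # small, of order 1/n
```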

The point now is that when the linear form [math]\varphi\in C(G)^*[/math] from the above result is chosen to be faithful, we obtain the following finer result:

Proposition

Given a faithful positive unital linear form [math]\varphi\in C(G)^*[/math], the limit

[[math]] \int_\varphi f=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k}(f) [[/math]]
exists, and is independent of [math]\varphi[/math], given on coefficients of representations by

[[math]] \left(id\otimes\int_\varphi\right)v=P [[/math]]
where [math]P[/math] is the orthogonal projection onto the space [math]Fix(v)=\left\{\xi\in\mathbb C^n\big|v\xi=\xi\right\}[/math].


Show Proof

In view of Proposition 13.15, it remains to prove that when [math]\varphi[/math] is faithful, the [math]1[/math]-eigenspace of the matrix [math]M=(id\otimes\varphi)v[/math] equals the space [math]Fix(v)[/math].


[math]\supset[/math]” This is clear, and for any [math]\varphi[/math], because we have the following implication:

[[math]] v\xi=\xi\implies M\xi=\xi [[/math]]

[math]\subset[/math]” Here we must prove that, when [math]\varphi[/math] is faithful, we have:

[[math]] M\xi=\xi\implies v\xi=\xi [[/math]]

For this purpose, assume that we have [math]M\xi=\xi[/math], and consider the following function:

[[math]] f=\sum_i\left(\sum_jv_{ij}\xi_j-\xi_i\right)\left(\sum_kv_{ik}\xi_k-\xi_i\right)^* [[/math]]

We must prove that we have [math]f=0[/math]. Since [math]v[/math] is unitary, we have:

[[math]] \begin{eqnarray*} f &=&\sum_{ijk}v_{ij}v_{ik}^*\xi_j\bar{\xi}_k-\sum_{ij}v_{ij}\xi_j\bar{\xi}_i-\sum_{ik}v_{ik}^*\xi_i\bar{\xi}_k+\sum_i\xi_i\bar{\xi}_i\\ &=&\sum_j|\xi_j|^2-\sum_{ij}v_{ij}\xi_j\bar{\xi}_i-\sum_{ik}v_{ik}^*\xi_i\bar{\xi}_k+\sum_i|\xi_i|^2\\ &=&||\xi||^2- \lt v\xi,\xi \gt -\overline{ \lt v\xi,\xi \gt }+||\xi||^2\\ &=&2(||\xi||^2-Re( \lt v\xi,\xi \gt )) \end{eqnarray*} [[/math]]


By using now our assumption [math]M\xi=\xi[/math], we obtain from this:

[[math]] \begin{eqnarray*} \varphi(f) &=&2\varphi(||\xi||^2-Re( \lt v\xi,\xi \gt ))\\ &=&2(||\xi||^2-Re( \lt M\xi,\xi \gt ))\\ &=&2(||\xi||^2-||\xi||^2)\\ &=&0 \end{eqnarray*} [[/math]]


Now since [math]\varphi[/math] is faithful, this gives [math]f=0[/math], and so [math]v\xi=\xi[/math], as claimed.

We can now formulate a main result, as follows:

Theorem

Any compact group [math]G[/math] has a unique Haar integration, which can be constructed by starting with any faithful positive unital state [math]\varphi\in C(G)^*[/math], and setting:

[[math]] \int_G=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\varphi^{*k} [[/math]]
Moreover, for any representation [math]v[/math] we have the formula

[[math]] \left(id\otimes\int_G\right)v=P [[/math]]
where [math]P[/math] is the orthogonal projection onto [math]Fix(v)=\left\{\xi\in\mathbb C^n\big|v\xi=\xi\right\}[/math].


Show Proof

We can prove this from what we have, in several steps, as follows:


(1) Let us first go back to the general context of Proposition 13.15. Since convolving one more time with [math]\varphi[/math] will not change the Cesàro limit appearing there, the functional [math]\int_\varphi\in C(G)^*[/math] constructed there has the following invariance property:

[[math]] \int_\varphi*\,\varphi=\varphi*\int_\varphi=\int_\varphi [[/math]]

In the case where [math]\varphi[/math] is assumed to be faithful, as in Proposition 13.16, our claim is that we have the following formula, valid this time for any [math]\psi\in C(G)^*[/math]:

[[math]] \int_\varphi*\,\psi=\psi*\int_\varphi=\psi(1)\int_\varphi [[/math]]

Moreover, it is enough to prove this formula on a coefficient of a representation:

[[math]] f=(\tau\otimes id)v [[/math]]

(2) In order to do so, consider the following two matrices:

[[math]] P=\left(id\otimes\int_\varphi\right)v\quad,\quad Q=(id\otimes\psi)v [[/math]]

We have then the following two computations, involving these matrices:

[[math]] \left(\int_\varphi*\,\psi\right)f =\left(\tau\otimes\int_\varphi\otimes\,\psi\right)(v_{12}v_{13}) =\tau(PQ) [[/math]]

[[math]] \left(\psi*\int_\varphi\right)f =\left(\tau\otimes\psi\otimes\int_\varphi\right)(v_{12}v_{13}) =\tau(QP) [[/math]]

Also, regarding the term on the right in our formula in (1), this is given by:

[[math]] \psi(1)\int_\varphi f=\psi(1)\tau(P) [[/math]]

We conclude from all this that our claim is equivalent to the following equality:

[[math]] PQ=QP=\psi(1)P [[/math]]

(3) But this latter equality holds indeed, coming from the fact, that we know from Proposition 13.16, that [math]P=(id\otimes\int_\varphi)v[/math] equals the orthogonal projection onto [math]Fix(v)[/math]. Thus, we have proved our claim in (1), namely that the following formula holds:

[[math]] \int_\varphi*\,\psi=\psi*\int_\varphi=\psi(1)\int_\varphi [[/math]]

(4) In order to finish now, it is convenient to introduce the comultiplication map [math]\Delta:C(G)\to C(G\times G)[/math], defined on the continuous functions [math]f:G\to\mathbb C[/math] on our group by:

[[math]] \Delta(f)(g,h)=f(gh) [[/math]]

With this convention, the convolution of linear forms is given by [math]\varphi*\psi=(\varphi\otimes\psi)\Delta[/math], and the formula that we established above can be written as:

[[math]] \psi\left(\int_\varphi\otimes\, id\right)\Delta =\psi\left(id\otimes\int_\varphi\right)\Delta =\psi\int_\varphi(.)1 [[/math]]

This formula being true for any [math]\psi\in C(G)^*[/math], we can simply delete [math]\psi[/math]. We conclude that the following invariance formula holds indeed, with [math]\int_G=\int_\varphi[/math]:

[[math]] \left(\int_G\otimes\, id\right)\Delta=\left(id\otimes\int_G\right)\Delta=\int_G(.)1 [[/math]]

But this is exactly the left and right invariance formula we were looking for.


(5) Finally, in order to prove the uniqueness assertion, assuming that we have two invariant integrals [math]\int_G,\int_G'[/math], we have, according to the above invariance formula:

[[math]] \left(\int_G\otimes\int_G'\right)\Delta =\left(\int_G'\otimes\int_G\right)\Delta =\int_G(.)1 =\int_G'(.)1 [[/math]]

Thus we have [math]\int_G=\int_G'[/math], and this finishes the proof.
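
As a basic numerical illustration for the above construction, in the simplest case, that of a finite cyclic group [math]\mathbb Z_N[/math], the convolution of measures is the usual cyclic convolution of their weight vectors, and the Cesàro means of the convolution powers of a faithful measure can be seen to converge to the uniform measure, as follows (a sketch, with the starting measure [math]\nu[/math] chosen at random, with full support):

```python
import numpy as np

N = 7
rng = np.random.default_rng(0)
nu = rng.random(N) + 0.01                 # a faithful measure on Z_N: everywhere positive
nu /= nu.sum()

def convolve(a, b):
    # cyclic convolution on Z_N: (a * b)(k) = sum_j a(j) b(k - j)
    return np.array([sum(a[j] * b[(k - j) % N] for j in range(N)) for k in range(N)])

n = 2000
power, cesaro = nu.copy(), np.zeros(N)
for _ in range(n):
    cesaro += power
    power = convolve(power, nu)
mu = cesaro / n
print(np.max(np.abs(mu - 1.0 / N)))       # close to 0: the Cesaro limit is the uniform measure
```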

Summarizing, we can now integrate over [math]G[/math]. As a first application, we have:

Theorem

Given a compact group [math]G[/math], we have the following formula, valid for any unitary group representation [math]v:G\to U_M[/math]:

[[math]] \int_G\chi_v=\dim(Fix(v)) [[/math]]
In particular, in the unitary matrix group case, [math]G\subset_uU_N[/math], the moments of the main character [math]\chi=\chi_u[/math] are given by the following formula:

[[math]] \int_G\chi^k=\dim(Fix(u^{\otimes k})) [[/math]]
Thus, knowing the law of [math]\chi[/math] is the same as knowing the dimensions on the right.


Show Proof

We have three assertions here, the idea being as follows:


(1) Given a unitary representation [math]v:G\to U_M[/math] as in the statement, its character [math]\chi_v[/math] is a coefficient, so we can use the integration formula for coefficients in Theorem 13.17. If we denote by [math]P[/math] the projection onto [math]Fix(v)[/math], that formula gives, as desired:

[[math]] \begin{eqnarray*} \int_G\chi_v &=&Tr(P)\\ &=&\dim(Im(P))\\ &=&dim(Fix(v)) \end{eqnarray*} [[/math]]


(2) This follows from (1), applied to the Peter-Weyl representations, as follows:

[[math]] \begin{eqnarray*} \int_G\chi^k &=&\int_G\chi_u^k\\ &=&\int_G\chi_{u^{\otimes k}}\\ &=&\dim(Fix(u^{\otimes k})) \end{eqnarray*} [[/math]]


(3) This follows from (2), and from the standard fact that a compactly supported probability measure is uniquely determined by its moments.
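
As an illustration for the above moment formula, in the case of a finite group the integral is simply the average over the group, and the dimension on the right can be computed independently, as a common [math]1[/math]-eigenspace. Here is a sketch for the symmetric group [math]S_3\subset U_3[/math], acting by permutation matrices; since this representation is real we have [math]u=\bar{u}[/math], so the colors of the exponent do not matter:

```python
import numpy as np
from itertools import permutations
from functools import reduce

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1
    return m

group = [perm_matrix(p) for p in permutations(range(3))]

def tensor_power(g, k):
    return reduce(np.kron, [g] * k)

for k in range(1, 5):
    # left-hand side: the k-th moment of the main character
    moment = np.mean([np.trace(g) ** k for g in group])
    # right-hand side: dim Fix(u^{(x)k}), as the common fixed space of the matrices g^{(x)k}
    A = np.vstack([tensor_power(g, k) - np.eye(3 ** k) for g in group])
    fix_dim = 3 ** k - np.linalg.matrix_rank(A)
    print(k, moment, fix_dim)             # both sides give 1, 2, 5, 14
```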

As a key remark now, the integration formula in Theorem 13.17 allows the computation for the truncated characters too, because these truncated characters are coefficients as well. To be more precise, all the probabilistic questions about [math]G[/math], regarding characters, or truncated characters, or more complicated variables, require a good knowledge of the integration over [math]G[/math], and more precisely, of the various polynomial integrals over [math]G[/math]:

Definition

Given a closed subgroup [math]G\subset U_N[/math], the quantities

[[math]] I_k=\int_Gg_{i_1j_1}^{e_1}\ldots g_{i_kj_k}^{e_k}\,dg [[/math]]
depending on a colored integer [math]k=e_1\ldots e_k[/math], and on the multi-indices [math]i,j[/math], are called polynomial integrals over [math]G[/math].

As a first observation, the knowledge of these integrals is the same as the knowledge of the integration functional over [math]G[/math]. Indeed, since the coordinate functions [math]g\to g_{ij}[/math] separate the points of [math]G[/math], we can apply the Stone-Weierstrass theorem, and we obtain:

[[math]] C(G)= \lt g_{ij} \gt [[/math]]

Thus, by linearity, the computation of any functional [math]f:C(G)\to\mathbb C[/math], and in particular of the integration functional, reduces to the computation of this functional on the polynomials of the coordinate functions [math]g\to g_{ij}[/math] and their conjugates [math]g\to\bar{g}_{ij}[/math].


By using now Peter-Weyl theory, everything reduces to algebra, as follows:

Theorem

The Haar integration over a closed subgroup [math]G\subset_uU_N[/math] is given on the dense subalgebra of smooth functions by the Weingarten formula

[[math]] \int_Gg_{i_1j_1}^{e_1}\ldots g_{i_kj_k}^{e_k}\,dg=\sum_{\pi,\sigma\in D_k}\delta_\pi(i)\delta_\sigma(j)W_k(\pi,\sigma) [[/math]]
valid for any colored integer [math]k=e_1\ldots e_k[/math] and any multi-indices [math]i,j[/math], where [math]D_k[/math] is a linear basis of [math]Fix(u^{\otimes k})[/math], the associated generalized Kronecker symbols are given by

[[math]] \delta_\pi(i)= \lt \pi,e_{i_1}\otimes\ldots\otimes e_{i_k} \gt [[/math]]
and [math]W_k=G_k^{-1}[/math] is the inverse of the Gram matrix, [math]G_k(\pi,\sigma)= \lt \pi,\sigma \gt [/math].


Show Proof

We know from Peter-Weyl theory that the integrals in the statement form altogether the orthogonal projection [math]P^k[/math] onto the following space:

[[math]] Fix(u^{\otimes k})=span(D_k) [[/math]]

Consider now the following linear map, with [math]D_k[/math] being as in the statement:

[[math]] E(x)=\sum_{\pi\in D_k} \lt x,\pi \gt \pi [[/math]]

By a standard linear algebra computation, it follows that we have [math]P=WE[/math], where [math]W[/math] is the inverse of the restriction of [math]E[/math] to the following space:

[[math]] K=span\left(\pi\Big|\pi\in D_k\right) [[/math]]

But this restriction is precisely the linear map given by the matrix [math]G_k[/math], and so [math]W[/math] itself is the linear map given by the matrix [math]W_k[/math], and this gives the result.

We will be back to this in chapter 16 below, with some concrete applications.
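
As a basic illustration for the Weingarten formula, consider the symmetric group [math]S_3\subset U_3[/math], acting by permutation matrices, with the exponent [math]k=\circ\circ[/math]. A linear basis of [math]Fix(u^{\otimes 2})[/math] is given here by the vectors [math]\xi_1=\sum_ie_i\otimes e_i[/math] and [math]\xi_2=\sum_{ij}e_i\otimes e_j[/math], and the Weingarten prescription can be checked against the direct group average, as follows (a sketch, with this basis hard-coded):

```python
import numpy as np
from itertools import permutations, product

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1
    return m

N = 3
group = [perm_matrix(p) for p in permutations(range(N))]

# basis D_2 of Fix(u^{(x)2}): xi_1 = sum_i e_i (x) e_i, xi_2 = sum_{ij} e_i (x) e_j
xi1 = np.eye(N).reshape(N * N)
xi2 = np.ones(N * N)
D = [xi1, xi2]

G2 = np.array([[v @ w for w in D] for v in D])    # Gram matrix G_2
W2 = np.linalg.inv(G2)                            # Weingarten matrix W_2

def delta(v, i):
    # generalized Kronecker symbol <v, e_{i_1} (x) e_{i_2}>
    return v[i[0] * N + i[1]]

ok = True
for i in product(range(N), repeat=2):
    for j in product(range(N), repeat=2):
        lhs = np.mean([g[i[0], j[0]] * g[i[1], j[1]] for g in group])
        rhs = sum(delta(p, i) * delta(q, j) * W2[a, b]
                  for a, p in enumerate(D) for b, q in enumerate(D))
        ok = ok and np.isclose(lhs, rhs)
print(ok)   # expected output: True
```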

13d. More Peter-Weyl

In order to further develop now the Peter-Weyl theory, which is something very useful, we will need the following result, which is of independent interest:

Proposition

We have a Frobenius type isomorphism

[[math]] Hom(v,w)\simeq Fix(v\otimes\bar{w}) [[/math]]
valid for any two representations [math]v,w[/math].


Show Proof

According to the definitions, we have the following equivalences:

[[math]] \begin{eqnarray*} T\in Hom(v,w) &\iff&Tv=wT\\ &\iff&\sum_jT_{aj}v_{ji}=\sum_bw_{ab}T_{bi},\forall a,i \end{eqnarray*} [[/math]]


On the other hand, we have as well the following equivalences:

[[math]] \begin{eqnarray*} T\in Fix(v\otimes\bar{w}) &\iff&(v\otimes\bar{w})T=T\\ &\iff&\sum_{jb}v_{ij}w_{ab}^*T_{bj}=T_{ai},\forall a,i \end{eqnarray*} [[/math]]


With these formulae in hand, both inclusions follow from the unitarity of [math]v,w[/math].

We can now formulate our third Peter-Weyl theorem, as follows:

Theorem (PW3)

The norm dense [math]*[/math]-subalgebra

[[math]] \mathcal C(G)\subset C(G) [[/math]]
generated by the coefficients of the fundamental representation decomposes as

[[math]] \mathcal C(G)=\bigoplus_{v\in Irr(G)}M_{\dim(v)}(\mathbb C) [[/math]]
with the summands being pairwise orthogonal with respect to the scalar product

[[math]] \lt a,b \gt =\int_Gab^* [[/math]]
where [math]\int_G[/math] is the Haar integration over [math]G[/math].


Show Proof

By combining the previous two Peter-Weyl results, we deduce that we have a linear space decomposition as follows:

[[math]] \mathcal C(G) =\sum_{v\in Irr(G)}C_v =\sum_{v\in Irr(G)}M_{\dim(v)}(\mathbb C) [[/math]]

Thus, in order to conclude, it is enough to prove that for any two irreducible representations [math]v,w\in Irr(G)[/math], the corresponding spaces of coefficients are orthogonal:

[[math]] v\not\sim w\implies C_v\perp C_w [[/math]]

But this follows from Theorem 13.17, via Proposition 13.21. Let us set indeed:

[[math]] P_{ia,jb}=\int_Gv_{ij}w_{ab}^* [[/math]]

Then [math]P[/math] is the orthogonal projection onto the following vector space:

[[math]] Fix(v\otimes\bar{w}) \simeq Hom(v,w) =\{0\} [[/math]]

Thus we have [math]P=0[/math], and this gives the result.

Finally, we have the following result, completing the Peter-Weyl theory:

Theorem (PW4)

The characters of irreducible representations belong to

[[math]] \mathcal C(G)_{central}=\left\{f\in\mathcal C(G)\Big|f(gh)=f(hg),\forall g,h\in G\right\} [[/math]]
called the algebra of smooth central functions on [math]G[/math], and form an orthonormal basis of it.


Show Proof

We have several things to be proved, the idea being as follows:


(1) Observe first that [math]\mathcal C(G)_{central}[/math] is indeed an algebra, which contains all the characters. Conversely, consider a function [math]f\in\mathcal C(G)[/math], written as follows:

[[math]] f=\sum_{v\in Irr(G)}f_v [[/math]]

The condition [math]f\in\mathcal C(G)_{central}[/math] states then that for any [math]v\in Irr(G)[/math], we must have:

[[math]] f_v\in\mathcal C(G)_{central} [[/math]]

But this means precisely that the coefficient [math]f_v[/math] must be a scalar multiple of [math]\chi_v[/math], and so the characters form a basis of [math]\mathcal C(G)_{central}[/math], as stated.


(2) The fact that we have an orthogonal basis follows from Theorem 13.22.


(3) As for the fact that the characters have norm 1, this follows from:

[[math]] \begin{eqnarray*} \int_G\chi_v\chi_v^* &=&\sum_{ij}\int_Gv_{ii}v_{jj}^*\\ &=&\sum_i\frac{1}{\dim(v)}\\ &=&1 \end{eqnarray*} [[/math]]


Here we have used the fact, coming from Theorem 13.22, that the integrals [math]\int_Gv_{ij}v_{kl}^*[/math] form the orthogonal projection onto the following vector space:

[[math]] Fix(v\otimes\bar{v}) \simeq End(v) =\mathbb C1 [[/math]]

Thus, the proof of our theorem is now complete.
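
As an illustration for the above orthonormality property, for the symmetric group [math]S_3[/math] the irreducible characters are those of the trivial, sign and standard [math]2[/math]-dimensional representations, with the latter being the permutation character minus [math]1[/math], and with the integral being the average over the group, so that the orthonormality can be checked as follows (a sketch, with the character values computed directly):

```python
import numpy as np
from itertools import permutations

elems = list(permutations(range(3)))

def sign(p):
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:                   # sort by transpositions, tracking the parity
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

def fixed_points(p):
    return sum(1 for i, j in enumerate(p) if i == j)

chi_triv = np.array([1 for p in elems])
chi_sign = np.array([sign(p) for p in elems])
chi_std  = np.array([fixed_points(p) - 1 for p in elems])    # permutation character minus 1

chars = [chi_triv, chi_sign, chi_std]
gram = np.array([[np.mean(a * b) for b in chars] for a in chars])   # <a,b> = int_G a b*
print(gram)   # the 3x3 identity matrix
```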

As a key observation here, complementing Theorem 13.23, a function [math]f:G\to\mathbb C[/math] is central, in the sense that it satisfies [math]f(gh)=f(hg)[/math], precisely when it satisfies the following condition, saying that it must be constant on conjugacy classes:

[[math]] f(ghg^{-1})=f(h),\forall g,h\in G [[/math]]

Thus, in the finite group case for instance, the algebra of central functions is something which is very easy to compute, and this gives useful information about [math]Rep(G)[/math]. We will not get into this here, but some of our exercises will be about this.


So long for Peter-Weyl theory. As a basic illustration for all this, which clarifies some previous considerations from chapter 9, we have the following result:

Theorem

For a compact abelian group [math]G[/math] the irreducible representations are all [math]1[/math]-dimensional, and form the dual discrete abelian group [math]\widehat{G}[/math].


Show Proof

This is clear from the Peter-Weyl theory, because when [math]G[/math] is abelian any function [math]f:G\to\mathbb C[/math] is central, and so the algebra of central functions is [math]\mathcal C(G)[/math] itself. By comparing with the decomposition in Theorem 13.22, this forces [math]\dim(v)=1[/math] for any [math]v\in Irr(G)[/math], and so the irreducible representations [math]u\in Irr(G)[/math] coincide with their characters [math]\chi_u\in\widehat{G}[/math].

There are also many things that can be said in the finite group case, in relation with central functions, and conjugacy classes. For more here, we recommend Serre [4].


General references

Banica, Teo (2024). "Linear algebra and group theory". arXiv:2206.09283 [math.CO].

References

  1. H. Weyl, The theory of groups and quantum mechanics, Princeton Univ. Press (1931).
  2. H. Weyl, The classical groups: their invariants and representations, Princeton Univ. Press (1939).
  3. H. Weyl, Space, time, matter, Princeton Univ. Press (1918).
  4. J.P. Serre, Linear representations of finite groups, Springer (1977).
  5. S.L. Woronowicz, Compact matrix pseudogroups, Comm. Math. Phys. 111 (1987), 613--665.