11a. Function algebras
It is possible to do many other things by further building on the above material. But, my proposal now would be to slow down, temporarily forget about the graphs [math]X[/math], and focus instead on the finite groups [math]G[/math]. These are our main tools, and before going to war, you have to sharpen your blades.
In order to discuss this, let us first have a second look at the magic unitaries, introduced in chapter 10. We have the following result, summarizing our knowledge from there:
Given a subgroup [math]G\subset S_N\subset O_N[/math], the standard coordinates [math]u_{ij}\in C(G)[/math] generate [math]C(G)[/math], are given by the formula [math]u_{ij}(\sigma)=\delta_{\sigma(j)i}[/math], and form a magic matrix.
This is something that we know from chapter 10, the idea being as follows:
(1) Since the action of the group elements [math]\sigma\in G\subset S_N\subset O_N[/math] on the standard basis of [math]\mathbb R^N[/math] is given by [math]\sigma(e_i)=e_{\sigma(i)}[/math], we have [math]\sigma_{ij}=\delta_{\sigma(j)i}[/math], which gives the formula of [math]u_{ij}[/math].
(2) The fact that our matrix [math]u=(u_{ij})[/math] is indeed magic, in the sense that its entries sum up to 1, on each row and each column, follows from our formula of [math]u_{ij}[/math].
(3) By Stone-Weierstrass we get [math]C(G)= \lt u_{ij} \gt [/math], since the coordinate functions [math]u_{ij}[/math] separate the points of [math]G[/math], in the sense that [math]\sigma\neq\pi[/math] implies [math]\sigma_{ij}\neq\pi_{ij}[/math], for some [math]i,j[/math].
(4) Regarding the action [math]a:X\times G\to X[/math] and the coaction [math]\Phi:C(X)\to C(X)\otimes C(G)[/math], all this looks scary, but is in fact a triviality, as explained in chapter 10.
(5) Finally, in what regards the last assertion, concerning [math]du=ud[/math], this again looks a bit abstract and scary, but is again a triviality, as explained in chapter 10.
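As a quick illustration of (1), (2) and (3), here is a brute-force verification at [math]N=3[/math], with permutations encoded as tuples, and with all helper names being ours:

```python
from itertools import permutations

# Permutation matrices for S_3, via the formula sigma_ij = delta_{sigma(j) i}
# from the proof above (0-indexed: sigma[j] is the image of j).
N = 3

def perm_matrix(sigma):
    return [[1 if sigma[j] == i else 0 for j in range(N)] for i in range(N)]

matrices = [perm_matrix(s) for s in permutations(range(N))]

# (2) magic: 0-1 entries, summing up to 1 on each row and each column
for m in matrices:
    assert all(sum(row) == 1 for row in m)
    assert all(sum(m[i][j] for i in range(N)) == 1 for j in range(N))

# (3) separation: distinct permutations differ in some entry (i, j)
for a in matrices:
    for b in matrices:
        if a != b:
            assert any(a[i][j] != b[i][j] for i in range(N) for j in range(N))
```
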
We have the following result, further building on the above:
The symmetry group [math]G(X)[/math] of a graph [math]X[/math] having [math]N[/math] vertices is given, at the level of the corresponding algebra of functions, by [math]C(G(X))=C(S_N)\big/\big\langle du=ud\big\rangle[/math], with [math]d[/math] being the adjacency matrix of [math]X[/math], and with [math]u[/math] being the magic matrix of standard coordinates.
This follows indeed from Theorem 11.1, and more specifically, is just an abstract reformulation of the last assertion there.
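Classically, the condition [math]du=ud[/math] simply selects the permutation matrices commuting with the adjacency matrix, and [math]G(X)[/math] can be computed by brute force on small graphs. A sketch for the 4-cycle, whose symmetry group is the dihedral group of order 8; the graph choice and helper names are ours:

```python
from itertools import permutations

# G(X) = { sigma in S_N : d sigma = sigma d }, computed by brute force
# for the 4-cycle 0-1-2-3-0, with sigma viewed as a permutation matrix.
N = 4
d = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def perm_matrix(sigma):
    return [[1 if sigma[j] == i else 0 for j in range(N)] for i in range(N)]

G = [s for s in permutations(range(N))
     if matmul(d, perm_matrix(s)) == matmul(perm_matrix(s), d)]
# G is the dihedral group of the square, of order 8
```
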
In order to further build on all this, the idea will be that of getting rid of [math]S_N[/math], or rather of the corresponding algebra of functions [math]C(S_N)[/math], and formulating everything in terms of magic matrices. To be more precise, leaving aside [math]X[/math], we have the following question: \begin{question} With a suitable algebra formalism, do we have
with [math]A[/math] standing for “universal algebra generated by”? \end{question} At first glance, this question might seem overly theoretical and abstract, but the claim is that such things can be useful, in order to deal with graphs. Indeed, assuming a positive answer to this question, by Theorem 11.2 we would conclude that [math]C(G(X))[/math] is the universal algebra generated by the entries of a magic matrix commuting with [math]d[/math].
This is something very nice, and potentially useful, among other things putting under a more conceptual light the various product computations from the previous chapter, done with magic matrices. In a word, we have seen in the previous chapter that the magic matrices can be very useful objects, so let us now go for it, and reformulate everything in terms of them, along the lines of Question 11.3, and of the comments afterwards.
Getting to work now, the algebra [math]C(S_N)[/math] from Question 11.3, that we want to axiomatize via magic matrices, is something very particular, and we have:
\begin{fact}
The function algebra [math]C(S_N)[/math] has the following properties:
(1) It is a complex algebra.
(2) It is finite dimensional.
(3) It is commutative, [math]fg=gf[/math].
(4) It has an involution, given by [math]f^*(x)=\overline{f(x)}[/math].
(5) It has a norm, given by [math]||f||=\sup_x|f(x)|[/math].
\end{fact} So, which of these properties shall we choose, for our axiomatization? Definitely (1), and then common sense would suggest to try the combination (2+3). However, since we will soon need, in this book, algebras which are infinite dimensional, or not commutative, or both, let us go instead with the combination (4+5), for most of our axiomatization work, and then with (2) or (3) added, if needed, towards the end.
In short, trust me here: we need to do some algebra, and this is what we will do. We will learn useful things in what follows, and here are some axioms, to start with:
A [math]C^*[/math]-algebra is a complex algebra [math]A[/math], given with an involution [math]a\to a^*[/math] and a norm [math]a\to||a||[/math], such that:
(1) The norm and involution are related by [math]||aa^*||=||a||^2[/math].
(2) [math]A[/math] is complete, as a metric space, with respect to the norm.
As a first basic class of examples, which are of interest for us, in relation with Question 11.3, we have the algebras of type [math]A=C(X)[/math], with [math]X[/math] being a finite, or more generally a compact space, with the usual involution and norm of functions, namely [math]f^*(x)=\overline{f(x)}[/math] and [math]||f||=\sup_{x\in X}|f(x)|[/math].
Observe that such algebras are commutative, [math]fg=gf[/math], and also that both the conditions in Definition 11.5 are satisfied, with (1) being something trivial, and with (2) coming from the well-known fact that a uniform limit of continuous functions is continuous.
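For a finite space [math]X[/math] such functions can be modeled simply as tuples of complex values, with pointwise operations, and condition (1) of Definition 11.5 can then be verified directly. A minimal sketch, with the sample function being our choice:

```python
# C(X) for a finite space X, here |X| = 3: a function is a list of complex
# values, with pointwise product, conjugation as involution, sup norm.
f = [1 + 2j, -0.5j, 3.0]

norm = lambda g: max(abs(v) for v in g)
star = lambda g: [complex(v).conjugate() for v in g]
mult = lambda g, h: [v * w for v, w in zip(g, h)]

# C*-identity: |f(x) f*(x)| = |f(x)|^2 pointwise, hence ||f f*|| = ||f||^2
lhs = norm(mult(f, star(f)))
rhs = norm(f) ** 2
assert abs(lhs - rhs) < 1e-12
```
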
Interestingly, and of guaranteed interest for many considerations to follow, in this book, as a second basic class of examples we have the matrix algebras [math]A=M_N(\mathbb C)[/math], with the usual involution and norm of the complex matrices, namely [math](a^*)_{ij}=\overline{a_{ji}}[/math] and [math]||a||=\sup_{||x||\leq1}||ax||[/math].
Observe that such algebras are finite dimensional, and also that the two conditions in Definition 11.5 are satisfied, with (1) being a good linear algebra exercise for you, via double inequality, and with (2) being trivial, our algebra being finite dimensional.
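As a hands-on version of that linear algebra exercise, here is a check of the C*-identity [math]||aa^*||=||a||^2[/math] for a sample [math]2\times2[/math] matrix, with the operator norm computed via the eigenvalues of the self-adjoint matrices [math]a^*a[/math] and [math]aa^*[/math]; the sample matrix is our choice:

```python
from math import sqrt

# C*-identity for a 2x2 matrix: ||a|| = sqrt(largest eigenvalue of a* a),
# and for self-adjoint 2x2 matrices the eigenvalues come from trace and
# determinant, via the quadratic formula.
a = [[1, 2], [0, 1]]

def adjoint(m):
    return [[complex(m[j][i]).conjugate() for j in range(2)] for i in range(2)]

def matmul(m, n):
    return [[sum(m[i][k]*n[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def herm_norm(h):
    # operator norm of a self-adjoint 2x2 matrix = max |eigenvalue|
    tr = (h[0][0] + h[1][1]).real
    det = (h[0][0]*h[1][1] - h[0][1]*h[1][0]).real
    disc = sqrt(max(tr*tr - 4*det, 0.0))
    return max(abs((tr + disc)/2), abs((tr - disc)/2))

op_norm = sqrt(herm_norm(matmul(adjoint(a), a)))   # ||a||
lhs = herm_norm(matmul(a, adjoint(a)))             # ||a a*||, a a* being self-adjoint
assert abs(lhs - op_norm**2) < 1e-9
```

Here the check gives [math]||a||=1+\sqrt2[/math], with [math]||aa^*||=||a||^2=3+2\sqrt2[/math].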
Summarizing, we have a good definition, so let us now develop some theory for the [math]C^*[/math]-algebras. Inspired by the matrix algebra examples, we first have:
Given an element [math]a\in A[/math] of a [math]C^*[/math]-algebra, define its spectrum as [math]\sigma(a)=\left\{\lambda\in\mathbb C\Big|a-\lambda\notin A^{-1}\right\}[/math]. We have then the following results:
(1) We have [math]\sigma(ab)\cup\{0\}=\sigma(ba)\cup\{0\}[/math].
(2) We have [math]\sigma(f(a))=f(\sigma(a))[/math], for any [math]f\in\mathbb C(X)[/math] having poles outside [math]\sigma(a)[/math].
(3) The spectrum [math]\sigma(a)[/math] is compact, non-empty, and contained in [math]D_0(||a||)[/math].
(4) The spectra of unitaries [math](u^*=u^{-1})[/math] and self-adjoints [math](a=a^*)[/math] are contained in [math]\mathbb T[/math], respectively in [math]\mathbb R[/math].
(5) The spectral radius of normal elements [math](aa^*=a^*a)[/math] is given by [math]\rho(a)=||a||[/math].
In addition, assuming [math]a\in A\subset B[/math], the spectra of [math]a[/math] with respect to [math]A[/math] and to [math]B[/math] coincide.
Here the assertions (1-5), which are of course formulated a bit informally, are well-known for the matrix algebra [math]A=M_N(\mathbb C)[/math], and the proof in general is similar:
(1) Assuming that [math]1-ab[/math] is invertible, with inverse [math]c[/math], we have [math]abc=cab=c-1[/math], and it follows that [math]1-ba[/math] is invertible too, with inverse [math]1+bca[/math]. Thus [math]\sigma(ab),\sigma(ba)[/math] agree at the point [math]1\in\mathbb C[/math], and by rescaling, replacing for instance [math]a[/math] by [math]a/\lambda[/math], it follows that [math]\sigma(ab),\sigma(ba)[/math] agree at any point [math]\lambda\in\mathbb C^*[/math].
(2) The formula [math]\sigma(f(a))=f(\sigma(a))[/math] is clear for polynomials, [math]f\in\mathbb C[X][/math], by factorizing [math]f-\lambda[/math], with [math]\lambda\in\mathbb C[/math]. Then, the extension to the rational functions is straightforward, because [math]P(a)/Q(a)-\lambda[/math] is invertible precisely when [math]P(a)-\lambda Q(a)[/math] is.
(3) By using [math]1/(1-b)=1+b+b^2+\ldots[/math] for [math]||b|| \lt 1[/math] we obtain that [math]a-\lambda[/math] is invertible for [math]|\lambda| \gt ||a||[/math], and so [math]\sigma(a)\subset D_0(||a||)[/math]. It is also clear that [math]\sigma(a)[/math] is closed, so what we have is a compact set. Finally, assuming [math]\sigma(a)=\emptyset[/math] the function [math]f(\lambda)=\varphi((a-\lambda)^{-1})[/math] is well-defined, for any [math]\varphi\in A^*[/math], and by Liouville we get [math]f=0[/math], contradiction.
(4) Assuming [math]u^*=u^{-1}[/math] we have [math]||u||=1[/math], and so [math]\sigma(u)\subset D_0(1)[/math]. But with [math]f(z)=z^{-1}[/math] we obtain via (2) that we have as well [math]\sigma(u)\subset f(D_0(1))[/math], and this gives [math]\sigma(u)\subset\mathbb T[/math]. As for the result regarding the self-adjoints, this can be obtained from the result for the unitaries, by using (2) with functions of type [math]f(z)=(z+it)/(z-it)[/math], with [math]t\in\mathbb R[/math].
(5) It is routine to check, by integrating quantities of type [math]z^n/(z-a)[/math] over circles centered at the origin, and estimating, that the spectral radius is given by [math]\rho(a)=\lim||a^n||^{1/n}[/math]. But in the self-adjoint case, [math]a=a^*[/math], this gives [math]\rho(a)=||a||[/math], by using exponents of type [math]n=2^k[/math], and then the extension to the general normal case is straightforward.
(6) Regarding now the last assertion, the inclusion [math]\sigma_B(a)\subset\sigma_A(a)[/math] is clear. For the converse, assume [math]a-\lambda\in B^{-1}[/math], and set [math]b=(a-\lambda )^*(a-\lambda )[/math]. We have then:
Thus this difference is an open subset of [math]\mathbb C[/math]. On the other hand [math]b[/math] being self-adjoint, its two spectra are both real, and so is their difference. Thus the two spectra of [math]b[/math] are equal, and in particular [math]b[/math] is invertible in [math]A[/math], and so [math]a-\lambda\in A^{-1}[/math], as desired.
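Regarding (5), the spectral radius formula [math]\rho(a)=\lim||a^n||^{1/n}[/math] can be illustrated numerically on a non-normal matrix, where [math]\rho(a) \lt ||a||[/math]; since all norms on [math]M_2(\mathbb C)[/math] are equivalent, the limit can be computed with the Frobenius norm. The matrix below is our choice:

```python
from math import sqrt

# rho(a) = lim ||a^n||^(1/n) for a = [[1,1],[0,1]]: the only eigenvalue
# is 1, so rho(a) = 1, while ||a|| = (1+sqrt(5))/2 ~ 1.618.
a = [[1, 1], [0, 1]]

def matmul(m, n):
    return [[sum(m[i][k]*n[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matpow(m, p):
    out = [[1, 0], [0, 1]]
    for _ in range(p):
        out = matmul(out, m)
    return out

def frob(m):
    return sqrt(sum(v*v for row in m for v in row))

vals = [frob(matpow(a, n)) ** (1/n) for n in (1, 10, 100, 1000)]
# vals decreases towards rho(a) = 1
```
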
With these ingredients, we can now prove a key result, as follows:
Any commutative [math]C^*[/math]-algebra is of the form [math]A=C(X)[/math], with [math]X={\rm Spec}(A)[/math] appearing as the space of characters [math]\chi:A\to\mathbb C[/math], with the topology making the evaluation maps [math]ev_a:\chi\to\chi(a)[/math] continuous.
This is something quite tricky, the idea being as follows:
(1) Given a commutative [math]C^*[/math]-algebra [math]A[/math], let us define a space [math]X[/math] as in the statement. Then [math]X[/math] is compact, and [math]a\to ev_a[/math] is a morphism of algebras, as follows:
(2) We first prove that [math]ev[/math] is involutive. For this purpose, we use the formula [math]a=\frac{a+a^*}{2}+i\cdot\frac{a-a^*}{2i}[/math], which is similar to the decomposition [math]z=Re(z)+iIm(z)[/math] for the usual complex numbers.
Thus it is enough to prove [math]ev_{a^*}=ev_a^*[/math] for the self-adjoint elements [math]a[/math]. But this is the same as proving that [math]a=a^*[/math] implies that [math]ev_a[/math] is a real function, which is in turn true, by Theorem 11.6 (4), because [math]ev_a(\chi)=\chi(a)[/math] is an element of [math]\sigma(a)[/math], contained in [math]\mathbb R[/math].
(3) Since [math]A[/math] is commutative, each element is normal, so [math]ev[/math] is isometric:
It remains to prove that [math]ev[/math] is surjective. But this follows from the Stone-Weierstrass theorem, because [math]ev(A)[/math] is a closed subalgebra of [math]C(X)[/math], which separates the points.
The above result is something truly remarkable, and we can now formulate:
Given an arbitrary [math]C^*[/math]-algebra [math]A[/math], we can write it as [math]A=C(X)[/math], with [math]X[/math] being a compact quantum space.
Which is very nice, but obviously, a bit off-topic. More on quantum spaces a bit later in this book, and for the moment, just take this as it came, namely a math professor supposed to lecture on something, who ends up lecturing on something else.
More seriously now, the Gelfand theorem that we just learned is something very useful, and getting back to our original Question 11.3, we can answer it, as follows:
The algebra of functions on [math]S_N[/math] has the following presentation, [math]C(S_N)=C^*_{comm}\left((u_{ij})_{i,j=1,\ldots,N}\Big|u={\rm magic}\right)[/math], with the generators [math]u_{ij}[/math] corresponding to the standard coordinates on [math]S_N[/math].
The universal algebra [math]A[/math] in the statement being commutative, by the Gelfand theorem it must be of the form [math]A=C(X)[/math], with [math]X[/math] being a certain compact space. Now since we have coordinates [math]u_{ij}:X\to\mathbb R[/math], we have an embedding [math]X\subset M_N(\mathbb R)[/math]. Also, since we know that these coordinates form a magic matrix, the elements [math]g\in X[/math] must be 0-1 matrices, having exactly one 1 entry on each row and each column, and so [math]X=S_N[/math]. Thus we have proved the first assertion, and the second assertion is clear as well.
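Concretely, the identification in the above proof rests on the fact that the 0-1 magic matrices of size [math]N[/math] are exactly the permutation matrices, [math]N![/math] of them. A brute-force check at [math]N=3[/math]:

```python
from itertools import product

# Enumerate all 0-1 matrices of size 3, and keep the magic ones:
# each row and each column must sum up to exactly 1.
N = 3
magic = []
for bits in product((0, 1), repeat=N*N):
    m = [list(bits[i*N:(i+1)*N]) for i in range(N)]
    if all(sum(row) == 1 for row in m) and \
       all(sum(m[i][j] for i in range(N)) == 1 for j in range(N)):
        magic.append(m)
# len(magic) == 6 == 3!, the permutation matrices of S_3
```
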
In relation now with graphs, we have the following result:
The symmetry group of a graph [math]X[/math] having [math]N[/math] vertices is given, at the level of the corresponding algebra of functions, by [math]C(G(X))=C^*_{comm}\left((u_{ij})_{i,j=1,\ldots,N}\Big|u={\rm magic},\ du=ud\right)[/math].
This follows indeed from Theorem 11.9, by combining it with Theorem 11.2, which tells us that when dealing with a graph [math]X[/math], with adjacency matrix [math]d\in M_N(0,1)[/math], we simply must add the condition [math]du=ud[/math], on the corresponding magic matrix [math]u[/math].
Let us discuss as well what happens in relation with the partial permutations, introduced at the end of chapter 9. We first have the following result:
The algebra of functions on [math]\widetilde{S}_N[/math] has the following presentation, [math]C(\widetilde{S}_N)=C^*_{comm}\left((u_{ij})_{i,j=1,\ldots,N}\Big|u={\rm submagic}\right)[/math], with submagic meaning that the entries are projections, which are pairwise orthogonal on each row and each column.
This is very similar to the proof of Theorem 11.9, with the result again coming from the Gelfand theorem, applied to the universal algebra in the statement.
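At the classical level, [math]\widetilde{S}_N[/math] consists of the partial bijections [math]\sigma:I\to J[/math], with [math]I,J\subset\{1,\ldots,N\}[/math], whose number is [math]\sum_k\binom{N}{k}^2k![/math], and each of which gives a 0-1 submagic matrix, having row and column sums at most 1. A brute-force enumeration at [math]N=3[/math], where this count is [math]1+9+18+6=34[/math]:

```python
from itertools import combinations, permutations

# Partial permutations of {0,1,2}: choose a domain of size k, then an
# injection into {0,1,2}, encoded as a dict. Count: sum_k C(3,k)^2 k!.
N = 3
partial = []
for k in range(N + 1):
    for dom in combinations(range(N), k):
        for img in permutations(range(N), k):
            partial.append(dict(zip(dom, img)))
# len(partial) == 34 for N = 3
```
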
In relation now with graphs, the result, which is quite tricky, is as follows:
Given a graph [math]X[/math] having [math]N[/math] vertices, and adjacency matrix [math]d\in M_N(0,1)[/math], consider its partial automorphism semigroup, given by:
This requires a bit of abstract thinking, the idea being as follows:
(1) To start with, we will use the formula from Theorem 11.11, namely:
(2) Getting now to graphs, the definition of [math]\widetilde{G}(X)[/math] in the statement reformulates as follows, in terms of the usual adjacency relation [math]i-j[/math] for the vertices:
Indeed, this reformulation is something which is clear from definitions.
(3) In view of this, we have the following product computation:
On the other hand, we have as well the following computation:
To be more precise, in the above two formulae the “otherwise” cases include by definition the cases where [math]\sigma(j)[/math], respectively [math]\sigma^{-1}(i)[/math], is undefined.
(4) On the other hand, we have as well the following formulae:
(5) Now by multiplying the above formulae, we obtain the following formulae:
(6) We conclude that the relations in the statement, [math]R_i(du)_{ij}C_j=R_i(ud)_{ij}C_j[/math], when applied to a given element [math]\sigma\in\widetilde{S}_N[/math], correspond to the following condition:
But with [math]i=\sigma(k)[/math], this latter condition reformulates as follows:
Thus we must have [math]\sigma\in\widetilde{G}(X)[/math], and we obtain the presentation result for [math]\widetilde{G}(X)[/math].
(7) Regarding now the second assertion, the simplest counterexample here is the simplex [math]X_N[/math], having [math]N[/math] vertices and edges everywhere. Indeed, the adjacency matrix of this simplex is [math]d=\mathbb I_N-1_N[/math], with [math]\mathbb I_N[/math] being the all-1 matrix, and so the commutation of this matrix with [math]u[/math] corresponds to the fact that [math]u[/math] must be bistochastic. Thus, [math]u[/math] must be in fact magic, and we obtain [math]\bar{G}(X_N)=S_N[/math], which is smaller than [math]\widetilde{G}(X_N)=\widetilde{S}_N[/math].
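The phenomenon in (7) can be seen classically too: for a 0-1 submagic matrix, commutation with [math]d=\mathbb I_N-1_N[/math] forces constant row and column sums, so among the partial permutations only the full permutations and the empty one survive. A check at [math]N=3[/math], with helper names ours:

```python
from itertools import combinations, permutations

# For the simplex, d = all-ones minus identity; we keep the partial
# permutations sigma whose 0-1 submagic matrix commutes with d.
N = 3
d = [[0 if i == j else 1 for j in range(N)] for i in range(N)]

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def pp_matrix(sigma):   # submagic 0-1 matrix of a partial permutation
    return [[1 if sigma.get(j) == i else 0 for j in range(N)] for i in range(N)]

partial = [dict(zip(dom, img))
           for k in range(N + 1)
           for dom in combinations(range(N), k)
           for img in permutations(range(N), k)]

commuting = [s for s in partial
             if matmul(d, pp_matrix(s)) == matmul(pp_matrix(s), d)]
# 7 survivors out of 34: the 6 permutations, plus the empty partial permutation
```
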
Many interesting things can be said here, and we have of course many explicit examples, for the graphs having a small number of vertices. We will be back to this.
General references
Banica, Teo (2024). "Graphs and their symmetries". arXiv:2406.03664 [math.CO].