15d. Partial symmetries



As a last topic for this chapter, let us discuss the construction and main properties of the quantum semigroup [math]\widetilde{G}^+(X)\subset\widetilde{S}_N^+[/math], in analogy with what we know about [math]\widetilde{G}(X)\subset\widetilde{S}_N[/math]. This is something more recent, and potentially interesting too. Let us start with:

Definition

[math]C(\widetilde{S}_N^+)[/math] is the universal [math]C^*[/math]-algebra generated by the entries of a [math]N\times N[/math] submagic matrix [math]u[/math], with comultiplication and counit maps given by

[[math]] \Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj}\quad,\quad \varepsilon(u_{ij})=\delta_{ij} [[/math]]
where “submagic” means formed of projections which are pairwise orthogonal on each row and each column. We call [math]\widetilde{S}_N^+[/math] the semigroup of quantum partial permutations of [math]\{1,\ldots,N\}[/math].
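As an illustration, in the classical setting a partial permutation [math]\sigma[/math] of [math]\{1,\ldots,N\}[/math] gives rise to the 0-1 matrix [math]u_{ij}=1\iff\sigma(j)=i[/math], which is submagic, because scalar 0-1 entries are projections, and two such entries on the same row or column are orthogonal precisely when they are not both 1. Here is a minimal numerical sketch of this check; the helper names partial_permutation_matrix and is_submagic are hypothetical, introduced only for this illustration.

```python
import numpy as np

def partial_permutation_matrix(sigma, N):
    # sigma: dict mapping j -> sigma(j), an injective map defined on a subset of {0,...,N-1}
    u = np.zeros((N, N), dtype=int)
    for j, i in sigma.items():
        u[i, j] = 1          # u_{ij} = 1 exactly when sigma(j) = i
    return u

def is_submagic(u):
    # 0-1 entries, with at most one 1 on each row and on each column
    return (np.isin(u, [0, 1]).all()
            and (u.sum(axis=0) <= 1).all()
            and (u.sum(axis=1) <= 1).all())

sigma = {0: 2, 3: 1}                                        # partial permutation of {0,1,2,3}: 0 -> 2, 3 -> 1
print(is_submagic(partial_permutation_matrix(sigma, 4)))    # True
```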

Observe that the morphisms [math]\Delta,\varepsilon[/math] constructed above satisfy the usual axioms for a comultiplication and a counit, in the bialgebra setting, namely:

[[math]] (\Delta\otimes id)\Delta=(id\otimes \Delta)\Delta [[/math]]

[[math]] (\varepsilon\otimes id)\Delta=(id\otimes\varepsilon)\Delta=id [[/math]]
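In the classical picture, the comultiplication corresponds to the composition of partial permutations: evaluating [math]\Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj}[/math] at a pair [math](\sigma,\tau)[/math] gives the [math](i,j)[/math] entry of the product of the associated 0-1 submagic matrices, which is the matrix of the composite [math]\sigma\tau[/math]. Here is a minimal numerical sketch of this, under the same 0-1 matrix convention as above, with hypothetical helper names.

```python
import numpy as np

def matrix_of(sigma, N):
    # 0-1 submagic matrix of a partial permutation: u_{ij} = 1 iff sigma(j) = i
    u = np.zeros((N, N), dtype=int)
    for j, i in sigma.items():
        u[i, j] = 1
    return u

def compose(sigma, tau):
    # (sigma tau)(j) = sigma(tau(j)), defined when j is in Dom(tau) and tau(j) is in Dom(sigma)
    return {j: sigma[k] for j, k in tau.items() if k in sigma}

N = 4
sigma, tau = {0: 2, 3: 1}, {1: 0, 2: 3}
# Delta(u_ij) evaluated at (sigma, tau) is the (i,j) entry of the matrix product,
# which equals the matrix of the composite partial permutation:
print(np.array_equal(matrix_of(sigma, N) @ matrix_of(tau, N),
                     matrix_of(compose(sigma, tau), N)))    # True
```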


As a conclusion to this, the basic properties of the quantum semigroup [math]\widetilde{S}_N^+[/math] that we constructed in Definition 15.27 can be summarized as follows:

Proposition

We have maps as follows,

[[math]] \begin{matrix} C(S_N^+)&\leftarrow&C(\widetilde{S}_N^+)\\ \\ \downarrow&&\downarrow\\ \\ C(S_N)&\leftarrow&C(\widetilde{S}_N) \end{matrix} \quad \quad \quad:\quad \quad\quad \begin{matrix} S_N^+&\subset&\widetilde{S}_N^+\\ \\ \cup&&\cup\\ \\ S_N&\subset&\widetilde{S}_N \end{matrix} [[/math]]
with the bialgebras at left corresponding to the quantum semigroups at right.


Show Proof

This is clear from the above discussion, and from the well-known fact that projections which sum up to [math]1[/math] are pairwise orthogonal.

We recall from chapter 13 that we have [math]S_N^+\neq S_N[/math], starting from [math]N=4[/math]. At the semigroup level things get interesting starting from [math]N=2[/math], where we have:

Proposition

We have an isomorphism as follows,

[[math]] C(\widetilde{S}_2^+)\simeq\left\{(x,y)\in C^*(D_\infty)\oplus C^*(D_\infty)\Big|\varepsilon(x)=\varepsilon(y)\right\} [[/math]]
where [math]\varepsilon:C^*(D_\infty)\to\mathbb C[/math] is the counit, the isomorphism being given by the formula

[[math]] u=\begin{pmatrix}p\oplus 0&0\oplus r\\0\oplus s&q\oplus 0\end{pmatrix} [[/math]]
where [math]p,q[/math] and [math]r,s[/math] are the standard generators of the two copies of [math]C^*(D_\infty)[/math].


Show Proof

Consider an arbitrary [math]2\times 2[/math] matrix formed by projections:

[[math]] u=\begin{pmatrix}P&R\\S&Q\end{pmatrix} [[/math]]


This matrix is submagic when the following conditions are satisfied:

[[math]] PR=PS=QR=QS=0 [[/math]]


Now observe that these conditions tell us that the non-unital algebras [math]X=\langle P,Q\rangle[/math] and [math]Y=\langle R,S\rangle[/math] must commute, and must satisfy [math]xy=0[/math], for any [math]x\in X,y\in Y[/math]. Thus, if we denote by [math]Z[/math] the universal algebra generated by two projections, we have:

[[math]] C(\widetilde{S}_2^+)\simeq\mathbb C1\oplus Z\oplus Z [[/math]]


Now since we have [math]C^*(D_\infty)=\mathbb C1\oplus Z[/math], we obtain an isomorphism as follows:

[[math]] C(\widetilde{S}_2^+)\simeq\left\{(\lambda+a,\lambda+b)\Big|\lambda\in\mathbb C, a,b\in Z\right\} [[/math]]


Thus, we are led to the conclusion in the statement.

We recall now from chapter 9 that given a graph [math]X[/math] with [math]N[/math] vertices, and adjacency matrix [math]d\in M_N(0,1)[/math], its partial automorphism semigroup is given by:

[[math]] \widetilde{G}(X)=\left\{\sigma\in\widetilde{S}_N\Big|d_{ij}=d_{\sigma(i)\sigma(j)},\ \forall i,j\in Dom(\sigma)\right\} [[/math]]


We have the following formula, from chapter 11, where [math]R=diag(R_i)[/math] and [math]C=diag(C_j)[/math], with [math]R_i,C_j[/math] being the row and column sums of the associated submagic matrix [math]u[/math]:

[[math]] C(\widetilde{G}(X))=C(\widetilde{S}_N)\Big/\Big\langle R(du-ud)C=0\Big\rangle [[/math]]
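In the classical setting, the relations [math]R(du-ud)C=0[/math] indeed recover the condition [math]d_{ij}=d_{\sigma(i)\sigma(j)}[/math] on [math]Dom(\sigma)[/math]. Here is a minimal numerical sketch of this equivalence, testing random graphs and random partial permutations, with hypothetical helper names.

```python
import numpy as np
from itertools import product

def matrix_of(sigma, N):
    # 0-1 submagic matrix: u_{ij} = 1 iff sigma(j) = i
    u = np.zeros((N, N), dtype=int)
    for j, i in sigma.items():
        u[i, j] = 1
    return u

def relation_holds(d, sigma, N):
    # the defining relation R(du - ud)C = 0, with R, C the diagonal matrices
    # formed by the row sums and column sums of u
    u = matrix_of(sigma, N)
    R, C = np.diag(u.sum(axis=1)), np.diag(u.sum(axis=0))
    return not np.any(R @ (d @ u - u @ d) @ C)

def is_partial_automorphism(d, sigma):
    # the classical condition d_{ij} = d_{sigma(i) sigma(j)} for i, j in Dom(sigma)
    return all(d[i, j] == d[sigma[i], sigma[j]] for i, j in product(sigma, sigma))

N, rng = 5, np.random.default_rng(0)
for _ in range(1000):
    d = np.triu(rng.integers(0, 2, (N, N)), 1); d = d + d.T       # random graph on 5 vertices
    dom = rng.choice(N, size=3, replace=False).tolist()
    img = rng.choice(N, size=3, replace=False).tolist()
    sigma = dict(zip(dom, img))                                   # random partial permutation
    assert relation_holds(d, sigma, N) == is_partial_automorphism(d, sigma)
```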


With these results in hand, we are led to the following statement:

Theorem

The following construction, with [math]R,C[/math] being the diagonal matrices formed by the row and column sums of [math]u[/math], produces a subsemigroup [math]\widetilde{G}^+(X)\subset\widetilde{S}_N^+[/math],

[[math]] C(\widetilde{G}^+(X))=C(\widetilde{S}_N^+)\Big/\Big\langle R(du-ud)C=0\Big\rangle [[/math]]
called the semigroup of quantum partial automorphisms of [math]X[/math], whose classical version is [math]\widetilde{G}(X)[/math]. When using instead the relations [math]du=ud[/math], we obtain a quantum semigroup [math]\bar{G}^+(X)\subset\widetilde{G}^+(X)[/math] which can be strictly smaller.


Show Proof

All this is elementary, the idea being as follows:


(1) In order to construct the comultiplication [math]\Delta[/math], consider the following elements:

[[math]] U_{ij}=\sum_ku_{ik}\otimes u_{kj} [[/math]]


By using the fact that [math]u[/math] is submagic, we deduce that we have:

[[math]] R^U_i(dU)_{ij}C^U_j=\Delta(R_i(du)_{ij}C_j) [[/math]]

[[math]] R^U_i(Ud)_{ij}C^U_j=\Delta(R_i(ud)_{ij}C_j) [[/math]]


Thus we can define [math]\Delta[/math] by mapping [math]u_{ij}\to U_{ij}[/math], as desired.


(2) Regarding now [math]\varepsilon[/math], the algebra in the statement indeed admits a morphism [math]\varepsilon[/math] defined by [math]u_{ij}\to\delta_{ij}[/math], because the following relations are trivially satisfied:

[[math]] R_i(d1_N)_{ij}C_j=R_i(1_Nd)_{ij}C_j [[/math]]


(3) Regarding now [math]S[/math], we must prove that we have a morphism [math]S[/math] given by [math]u_{ij}\to u_{ji}[/math]. For this purpose, we know that with [math]R=diag(R_i)[/math] and [math]C=diag(C_j)[/math], we have:

[[math]] R(du-ud)C=0 [[/math]]


Now when transposing this formula, we obtain:

[[math]] C^t(u^td-du^t)R^t=0 [[/math]]


Since [math]C^t,R^t[/math] are respectively the diagonal matrices formed by the row sums and column sums of [math]u^t[/math], we conclude that the relations [math]R(du-ud)C=0[/math] are satisfied by the transpose matrix [math]u^t[/math], and this gives the existence of the subantipode map [math]S[/math].


(4) The fact that we have [math]\widetilde{G}^+(X)_{class}=\widetilde{G}(X)[/math] follows from [math](\widetilde{S}_N^+)_{class}=\widetilde{S}_N[/math].


(5) Finally, the last assertion follows from our similar results from chapter 7, by taking classical versions, the simplest counterexample being the simplex.

As a first result now regarding the correspondence [math]X\to\widetilde{G}^+(X)[/math], we have:

Proposition

For any finite graph [math]X[/math] we have

[[math]] \widetilde{G}^+(X)=\widetilde{G}^+(X^c) [[/math]]
where [math]X^c[/math] is the complementary graph.


Show Proof

The adjacency matrices of a graph [math]X[/math] and of its complement [math]X^c[/math] are related by the following formula, where [math]\mathbb I_N[/math] is the all-1 matrix:

[[math]] d_X+d_{X^c}=\mathbb I_N-1_N [[/math]]


Thus, in order to establish the formula in the statement, we must prove that:

[[math]] R_i(\mathbb I_Nu)_{ij}C_j=R_i(u\mathbb I_N)_{ij}C_j [[/math]]


For this purpose, let us recall that, the matrix [math]u[/math] being submagic, its row sums and column sums [math]R_i,C_j[/math] are projections. By using this fact, we have:

[[math]] R_i(\mathbb I_Nu)_{ij}C_j=R_iC_jC_j=R_iC_j [[/math]]

[[math]] R_i(u\mathbb I_N)_{ij}C_j=R_iR_iC_j=R_iC_j [[/math]]


Thus we have proved our equality, and the conclusion follows.
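The classical version of this statement can also be verified directly: since [math]\sigma[/math] is injective, the condition [math]d_{ij}=d_{\sigma(i)\sigma(j)}[/math] on [math]Dom(\sigma)[/math] is equivalent to the same condition for the complementary adjacency matrix [math]d_{X^c}=\mathbb I_N-1_N-d_X[/math]. Here is a minimal numerical sketch of this, with a hypothetical helper name.

```python
import numpy as np
from itertools import product

def is_partial_automorphism(d, sigma):
    # d_{ij} = d_{sigma(i) sigma(j)} for all i, j in Dom(sigma)
    return all(d[i, j] == d[sigma[i], sigma[j]] for i, j in product(sigma, sigma))

N, rng = 6, np.random.default_rng(1)
for _ in range(1000):
    d = np.triu(rng.integers(0, 2, (N, N)), 1); d = d + d.T        # a graph X
    dc = np.ones((N, N), dtype=int) - np.eye(N, dtype=int) - d     # its complement X^c
    dom = rng.choice(N, size=4, replace=False).tolist()
    img = rng.choice(N, size=4, replace=False).tolist()
    sigma = dict(zip(dom, img))
    assert is_partial_automorphism(d, sigma) == is_partial_automorphism(dc, sigma)
```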

In order to discuss now various aspects of the correspondence [math]X\to\widetilde{G}^+(X)[/math], it is technically convenient to slightly enlarge our formalism, as follows:

Definition

Associated to any complex-colored oriented graph [math]X[/math], with adjacency matrix [math]d\in M_N(\mathbb C)[/math], is its semigroup of partial automorphisms, given by

[[math]] \widetilde{G}(X)=\left\{\sigma\in\widetilde{S}_N\Big|d_{ij}=d_{\sigma(i)\sigma(j)},\ \forall i,j\in Dom(\sigma)\right\} [[/math]]
as well as its quantum semigroup of quantum partial automorphisms, given by

[[math]] C(\widetilde{G}^+(X))=C(\widetilde{S}_N^+)\Big/\Big\langle R(du-ud)C=0\Big\rangle [[/math]]
where [math]R=diag(R_i)[/math], [math]C=diag(C_j)[/math], with [math]R_i,C_j[/math] being the row and column sums of [math]u[/math].

With this notion in hand, following the material in chapter 14, let us now discuss color independence. Let [math]m,\gamma[/math] be the multiplication and comultiplication of [math]\mathbb C^N[/math]:

[[math]] m(e_i\otimes e_j)=\delta_{ij}e_i\quad,\quad \gamma(e_i)=e_i\otimes e_i [[/math]]

We denote by [math]m^{(p)},\gamma^{(p)}[/math] their iterations, given by the following formulae:

[[math]] m^{(p)}(e_{i_1}\otimes\ldots\otimes e_{i_p})=\delta_{i_1\ldots i_p}e_{i_1}\quad,\quad \gamma^{(p)}(e_i)=e_i\otimes\ldots\otimes e_i [[/math]]

Our goal is to use these iterations in the semigroup case, exactly as we did in the quantum group case, in chapter 14. We will need some technical results. Let us start with:

Proposition

We have the following formulae,

[[math]] m^{(p)}u^{\otimes p}=um^{(p)}\quad,\quad u^{\otimes p}\gamma^{(p)}=\gamma^{(p)}u [[/math]]
valid for any submagic matrix [math]u[/math].


Show Proof

We have the following computations, which prove the first formula:

[[math]] m^{(p)}u^{\otimes p}(e_{i_1}\otimes\ldots\otimes e_{i_p}) =\sum_je_j\otimes u_{ji_1}\ldots u_{ji_p} =\delta_{i_1\ldots i_p}\sum_je_j\otimes u_{ji_1} [[/math]]

[[math]] um^{(p)}(e_{i_1}\otimes\ldots\otimes e_{i_p}) =\delta_{i_1\ldots i_p}u(e_{i_1}) =\delta_{i_1\ldots i_p}\sum_je_j\otimes u_{ji_1} [[/math]]


We have as well the following computations, which prove the second formula:

[[math]] u^{\otimes p}\gamma^{(p)}(e_i) =u^{\otimes p}(e_i\otimes\ldots\otimes e_i) =\sum_je_j\otimes\ldots\otimes e_j\otimes u_{ji} [[/math]]

[[math]] \gamma^{(p)}u(e_i) =\gamma^{(p)}\left(\sum_je_j\otimes u_{ji}\right) =\sum_je_j\otimes\ldots\otimes e_j\otimes u_{ji} [[/math]]


Summarizing, we have proved both formulae in the statement.
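In the classical setting, where a submagic matrix is simply a 0-1 matrix having at most one 1 on each row and on each column, and where [math]m^{(p)},\gamma^{(p)}[/math] become [math]N\times N^p[/math] and [math]N^p\times N[/math] matrices, these two identities can also be checked numerically. Here is a minimal sketch of such a check, with hypothetical helper names.

```python
import numpy as np
from functools import reduce
from itertools import product

def iterated_m(N, p):
    # matrix of m^(p): sends e_{i1} x ... x e_{ip} to delta_{i1...ip} e_{i1}
    M = np.zeros((N, N ** p), dtype=int)
    for col, idx in enumerate(product(range(N), repeat=p)):
        if len(set(idx)) == 1:
            M[idx[0], col] = 1
    return M

def kron_power(a, p):
    return reduce(np.kron, [a] * p)

u = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 0, 0]])                                 # a classical submagic matrix, N = 3
for p in (2, 3):
    M = iterated_m(3, p)
    G = M.T                                               # gamma^(p) is the transpose of m^(p)
    assert np.array_equal(M @ kron_power(u, p), u @ M)    # m^(p) u^{(x) p} = u m^(p)
    assert np.array_equal(kron_power(u, p) @ G, G @ u)    # u^{(x) p} gamma^(p) = gamma^(p) u
```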

We will need as well a second technical result, as follows:

Proposition

We have the following formulae, with [math]u,m,\gamma[/math] being as before,

[[math]] m^{(p)}R^{\otimes p}d^{\otimes p}\gamma^{(p)}=Rd^{\times p}\quad,\quad m^{(p)}d^{\otimes p}C^{\otimes p}\gamma^{(p)}=d^{\times p}C [[/math]]
and with [math]\times[/math] being the componentwise, or Hadamard, product of matrices.


Show Proof

We have the following computations, which prove the first formula:

[[math]] m^{(p)}R^{\otimes p}d^{\otimes p}\gamma^{(p)}(e_i) =m^{(p)}R^{\otimes p}d^{\otimes p}(e_i\otimes\ldots\otimes e_i) =\sum_je_j\otimes R_jd_{ji}^p [[/math]]

[[math]] Rd^{\times p}(e_i) =R\left(\sum_je_j\otimes d_{ji}^p\right) =\sum_je_j\otimes R_jd_{ji}^p [[/math]]


We have as well the following computations, which prove the second formula:

[[math]] m^{(p)}d^{\otimes p}C^{\otimes p}\gamma^{(p)}(e_i) =m^{(p)}d^{\otimes p}(e_i\otimes\ldots\otimes e_i\otimes C_i) =\sum_je_j\otimes d_{ji}^pC_i [[/math]]

[[math]] d^{\times p}C(e_i) =d^{\times p}(e_i\otimes C_i) =\sum_je_j\otimes d_{ji}^pC_i [[/math]]


Thus, we have proved both formulae in the statement.

We can now prove a key result, as follows:

Proposition

We have the following formulae, with [math]u,m,\gamma[/math] being as before,

[[math]] m^{(p)}(Rdu)^{\otimes p}\gamma^{(p)}=Rd^{\times p}u\quad,\quad m^{(p)}(udC)^{\otimes p}\gamma^{(p)}=ud^{\times p}C [[/math]]
and with [math]\times[/math] being the componentwise product of matrices.


Show Proof

By using the formulae in Propositions 15.33 and 15.34, we get:

[[math]] \begin{eqnarray*} m^{(p)}(Rdu)^{\otimes p}\gamma^{(p)} &=&m^{(p)}R^{\otimes p}d^{\otimes p}u^{\otimes p}\gamma^{(p)}\\ &=&m^{(p)}R^{\otimes p}d^{\otimes p}\gamma^{(p)}u\\ &=&Rd^{\times p}u \end{eqnarray*} [[/math]]


Once again by using Proposition 15.33 and Proposition 15.34, we have:

[[math]] \begin{eqnarray*} m^{(p)}(udC)^{\otimes p}\gamma^{(p)} &=&m^{(p)}u^{\otimes p}d^{\otimes p}C^{\otimes p}\gamma^{(p)}\\ &=&um^{(p)}d^{\otimes p}C^{\otimes p}\gamma^{(p)}\\ &=&ud^{\times p}C \end{eqnarray*} [[/math]]


Thus, we have proved both formulae in the statement.
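In the classical setting these formulae can also be verified numerically, with [math]R,C[/math] being the 0-1 diagonal matrices of row and column sums of a classical submagic matrix [math]u[/math], and with the componentwise power [math]d^{\times p}[/math] computed entrywise. Here is a minimal sketch, using the same hypothetical helpers as before.

```python
import numpy as np
from functools import reduce
from itertools import product

def iterated_m(N, p):
    # matrix of m^(p): sends e_{i1} x ... x e_{ip} to delta_{i1...ip} e_{i1}
    M = np.zeros((N, N ** p))
    for col, idx in enumerate(product(range(N), repeat=p)):
        if len(set(idx)) == 1:
            M[idx[0], col] = 1
    return M

def kron_power(a, p):
    return reduce(np.kron, [a] * p)

N, p, rng = 3, 3, np.random.default_rng(2)
d = rng.integers(0, 4, (N, N)).astype(float)                 # a colored adjacency matrix
u = np.array([[0., 0., 1.], [1., 0., 0.], [0., 0., 0.]])     # a classical submagic matrix
R, C = np.diag(u.sum(axis=1)), np.diag(u.sum(axis=0))
M = iterated_m(N, p)
G = M.T                                                      # gamma^(p)

assert np.allclose(M @ kron_power(R @ d @ u, p) @ G, R @ (d ** p) @ u)   # = R d^{x p} u
assert np.allclose(M @ kron_power(u @ d @ C, p) @ G, u @ (d ** p) @ C)   # = u d^{x p} C
```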

We can now prove the color independence result, as follows:

Theorem

The quantum semigroups of quantum partial automorphisms of finite graphs are subject to the “independence on the colors” formula

[[math]] \Big[d_{ij}=d_{kl}\iff d'_{ij}=d'_{kl}\Big]\implies\widetilde{G}^+(X)=\widetilde{G}^+(X') [[/math]]
valid for any graphs [math]X,X'[/math], having adjacency matrices [math]d,d'[/math].


Show Proof

Given a matrix [math]d\in M_N(\mathbb C)[/math], consider its color decomposition, which is as follows, with the color components [math]d_c[/math] being by definition 0-1 matrices:

[[math]] d=\sum_{c\in\mathbb C}c\cdot d_c [[/math]]


We want to prove that a given quantum semigroup [math]G[/math] acts on [math](X,d)[/math] if and only if it acts on [math](X,d_c)[/math], for any [math]c\in\mathbb C[/math]. For this purpose, consider the following linear space:

[[math]] E_u=\left\{f\in M_N(\mathbb C)\Big|Rfu=ufC\right\} [[/math]]


In terms of this space, we want to prove that we have:

[[math]] d\in E_u\implies d_c\in E_u,\forall c\in\mathbb C [[/math]]


For this purpose, observe that we have the following implication, as a consequence of the formulae established in Proposition 15.35:

[[math]] Rdu=udC\implies Rd^{\times p}u=ud^{\times p}C [[/math]]


We conclude that we have the following implication:

[[math]] d\in E_u\implies d^{\times p}\in E_u,\forall p\in\mathbb N [[/math]]


But this gives the result, exactly as in chapter 14, via the standard fact that the color components [math]d_c[/math] can be obtained from the componentwise powers [math]d^{\times p}[/math].
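The standard fact invoked here can be illustrated numerically: if the nonzero entries of [math]d[/math] take [math]k[/math] distinct values, then each color component [math]d_c[/math] with [math]c\neq0[/math] is a linear combination of the componentwise powers [math]d^{\times 1},\ldots,d^{\times k}[/math], obtained by interpolating a polynomial with zero constant term. Here is a minimal sketch of this, under these assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.choice([0.0, 1.0, 2.0, 5.0], size=(5, 5))       # a colored adjacency matrix
colors = sorted(set(d.ravel()) - {0.0})                  # the distinct nonzero colors c_1,...,c_k
k = len(colors)

# Solve for a_1,...,a_k with sum_p a_p c_i^p = delta_{i,i0}; the polynomial sum_p a_p x^p
# vanishes at 0, so applied entrywise to d it picks out the color component d_{c_{i0}}.
V = np.array([[c ** p for p in range(1, k + 1)] for c in colors])
for i0, c in enumerate(colors):
    a = np.linalg.solve(V, np.eye(k)[i0])
    reconstructed = sum(a[p] * d ** (p + 1) for p in range(k))
    assert np.allclose(reconstructed, (d == c).astype(float))    # equals the 0-1 matrix d_c
```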

In contrast with what happens for groups or quantum groups, in the semigroup setting we do not have a spectral decomposition result. To be more precise, consider as before the following linear space, associated to a submagic matrix [math]u[/math]:

[[math]] E_u=\left\{d\in M_N(\mathbb C)\Big|Rdu=udC\right\} [[/math]]


It is clear that [math]E_u[/math] is a linear space, containing 1, and stable under the adjoint operation [math]*[/math] too. We also know from Theorem 15.36 that [math]E_u[/math] is stable under color decomposition. However, [math]E_u[/math] is not stable under taking products, and so is not an algebra, in general.


In general, the computation of [math]\widetilde{G}^+(X)[/math] remains a very interesting question. Interesting as well is the question of generalizing all this to the infinite graph case, [math]|X|=\infty[/math], with the key remark that this might be simpler than talking about [math]G^+(X)[/math] with [math]|X|=\infty[/math].

General references

Banica, Teo (2024). "Graphs and their symmetries". arXiv:2406.03664 [math.CO].
