
13a. Quantum spaces

[math] \newcommand{\mathds}{\mathbb}[/math]


Welcome to quantum symmetry. Our purpose in what follows will be to look for hidden, quantum symmetries of graphs, according to the following principle:

Principle

The following happen, in the quantum world:

  • [math]S_N[/math] has a free analogue [math]S_N^+[/math], which is a compact quantum group.
  • This quantum group [math]S_N^+[/math] is infinite, and reminiscent of [math]SO_3[/math], at [math]N\geq4[/math].
  • The passage [math]S_N\to S_N^+[/math] can however be understood, using algebra and probability.
  • [math]S_N^+[/math] is the quantum symmetry group [math]G^+(K_N)[/math] of the complete graph [math]K_N[/math].
  • In fact, any graph [math]X\subset K_N[/math] has a quantum symmetry group [math]G^+(X)\subset S_N^+[/math].
  • [math]G(X)\subset G^+(X)[/math] can be an isomorphism or not, depending on [math]X[/math].
  • [math]G(X)\to G^+(X)[/math] can be understood, via algebra and probability.

Excited about this? We will learn this technology, in this chapter, and in the next one. To be more precise, in this chapter we will talk about Hilbert spaces, operator algebras, quantum spaces, quantum groups, and then about (1), with a look into (2,3) too. And then, in the next chapter, we will talk about (4,5), with a look into (6,7) too.


Getting started now, we already know a bit about operator algebras and quantum spaces from chapter 11, but that material was explained in a hurry, so time now to do things the right way. At the gates of the quantum world are the Hilbert spaces:

Definition

A Hilbert space is a complex vector space [math]H[/math] with a scalar product [math] \lt x,y \gt [/math], which will be linear at left and antilinear at right,

[[math]] \lt \lambda x,y \gt =\lambda \lt x,y \gt \quad,\quad \lt x,\lambda y \gt =\bar{\lambda} \lt x,y \gt [[/math]]
and which is complete with respect to the corresponding norm

[[math]] ||x||=\sqrt{ \lt x,x \gt } [[/math]]
in the sense that any Cauchy sequence [math]\{x_n\}[/math], that is, any sequence having the property [math]||x_n-x_m||\to0[/math] with [math]n,m\to\infty[/math], has a limit, [math]x_n\to x[/math].

Here our convention for the scalar products, written [math] \lt x,y \gt [/math] and being linear at left, is one among others, often used by mathematicians, and also by certain professional quantum physicists, like myself. As further comments now on Definition 13.2, there is some mathematics encapsulated there, needing some discussion. First, we have:

Theorem

Given an index set [math]I[/math], which can be finite or not, the space

[[math]] l^2(I)=\left\{(x_i)_{i\in I}\Big|\sum_i|x_i|^2 \lt \infty\right\} [[/math]]
is a Hilbert space, with scalar product as follows:

[[math]] \lt x,y \gt =\sum_ix_i\bar{y}_i [[/math]]
When [math]I[/math] is finite, [math]I=\{1,\ldots,N\}[/math], we obtain in this way the usual space [math]H=\mathbb C^N[/math].


Show Proof

All this is well-known and routine, the idea being as follows:


(1) We know that [math]l^2(I)\subset\mathbb C^I[/math] is the space of vectors satisfying [math]||x|| \lt \infty[/math]. We want to prove that [math]l^2(I)[/math] is a vector space, that [math] \lt x,y \gt [/math] is a scalar product on it, that [math]l^2(I)[/math] is complete with respect to [math]||.||[/math], and finally that for [math]|I| \lt \infty[/math] we have [math]l^2(I)=\mathbb C^{|I|}[/math].


(2) The last assertion, [math]l^2(I)=\mathbb C^{|I|}[/math] for [math]|I| \lt \infty[/math], is clear, because in this case the sums are finite, so the condition [math]||x|| \lt \infty[/math] is automatic. So, we know at least one thing.


(3) Next, we can use the Cauchy-Schwarz inequality, which is as follows, coming from the positivity of the degree 2 quantity [math]f(t)=||twx+y||^2[/math], with [math]t\in\mathbb R[/math] and [math]w\in\mathbb T[/math]:

[[math]] | \lt x,y \gt |\leq||x||\cdot||y|| [[/math]]


(4) Indeed, with Cauchy-Schwarz in hand, everything is straightforward. We first obtain, by raising to the square and expanding, that for any [math]x,y\in l^2(I)[/math] we have:

[[math]] ||x+y||\leq||x||+||y|| [[/math]]
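Indeed, at the level of the details here, the estimate comes from the following standard expansion, with Cauchy-Schwarz used in the middle step:

[[math]] ||x+y||^2=||x||^2+2\,{\rm Re} \lt x,y \gt +||y||^2\leq||x||^2+2||x||\cdot||y||+||y||^2=\left(||x||+||y||\right)^2 [[/math]]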


(5) Thus [math]l^2(I)[/math] is indeed a vector space, and [math] \lt x,y \gt [/math] is surely a scalar product on it, because all the conditions for a scalar product are trivially satisfied.


(6) Finally, the completeness with respect to [math]||.||[/math] follows in the obvious way, the limit of a Cauchy sequence [math]\{x^n\}[/math] being the vector [math]y=(y_i)[/math] given by [math]y_i=\lim_{n\to\infty}x^n_i[/math].

Going now a bit abstract, we have, more generally, the following result:

Theorem

Given a space [math]X[/math] with a positive measure [math]\mu[/math] on it, the space

[[math]] L^2(X)=\left\{f:X\to\mathbb C\Big|\int_X|f(x)|^2\,d\mu(x) \lt \infty\right\} [[/math]]
with the convention [math]f=g[/math] when [math]||f-g||=0[/math], is a Hilbert space, with scalar product:

[[math]] \lt f,g \gt =\int_Xf(x)\overline{g(x)}\,d\mu(x) [[/math]]
When [math]X=I[/math] is discrete, with [math]\mu(\{x\})=1[/math] for any [math]x\in X[/math], we obtain the previous space [math]l^2(I)[/math].


Show Proof

This is something routine, a remake of Theorem 13.3, as follows:


(1) The proof of the first, and main assertion is something perfectly similar to the proof of Theorem 13.3, by replacing everywhere the sums by integrals.


(2) As for the last assertion, when [math]\mu[/math] is the counting measure all our integrals here become usual sums, and so we recover in this way Theorem 13.3.

As a third and last theorem about Hilbert spaces that we will need, we have:

Theorem

Any Hilbert space [math]H[/math] has an orthonormal basis [math]\{e_i\}_{i\in I}[/math], which is by definition a set of vectors whose span is dense in [math]H[/math], and which satisfy

[[math]] \lt e_i,e_j \gt =\delta_{ij} [[/math]]
with [math]\delta[/math] being a Kronecker symbol. The cardinality [math]|I|[/math] of the index set, which can be finite, countable, or worse, depends only on [math]H[/math], and is called the dimension of [math]H[/math]. We have

[[math]] H\simeq l^2(I) [[/math]]
in the obvious way, mapping [math]\sum\lambda_ie_i\to(\lambda_i)[/math]. The Hilbert spaces with [math]\dim H=|I|[/math] being countable, including [math]l^2(\mathbb N)[/math] and [math]L^2(\mathbb R)[/math], are all isomorphic, and are called separable.


Show Proof

We have many assertions here, the idea being as follows:


(1) In finite dimensions an orthonormal basis [math]\{e_i\}_{i\in I}[/math] can be constructed by starting with any vector space basis [math]\{x_i\}_{i\in I}[/math], and using the Gram-Schmidt procedure. But the same method works in arbitrary dimensions, by using the Zorn lemma.


(2) Regarding [math]L^2(\mathbb R)[/math], here we can argue that, since functions can be approximated by polynomials on compact intervals, the monomials [math]\{x^n\}_{n\in\mathbb N}[/math], called the Weierstrass basis, suitably cut off so as to lie in [math]L^2(\mathbb R)[/math], give a countable family with dense span, that we can orthogonalize afterwards by using Gram-Schmidt.
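To illustrate the Gram-Schmidt procedure appearing in this proof, here is a minimal numerical sketch in Python, with numpy; the function name gram_schmidt, and the choice of sampling the monomials on an interval as a discrete analogue of the above, are ours, for illustration purposes only:

import numpy as np
def gram_schmidt(vectors):
    # orthonormalize linearly independent vectors, with respect to the
    # standard scalar product <x,y> = sum_i x_i conj(y_i), linear at left
    basis = []
    for x in vectors:
        for e in basis:
            x = x - np.vdot(e, x) * e   # subtract the projection <x,e> e
        basis.append(x / np.linalg.norm(x))
    return basis
# discrete analogue of orthogonalizing 1, x, x^2 on an interval
t = np.linspace(-1, 1, 200)
e0, e1, e2 = gram_schmidt([t**0, t**1, t**2])
print(np.round([np.vdot(e0, e1), np.vdot(e0, e2), np.vdot(e1, e2)], 10))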

Moving ahead, now that we know what our vector spaces are, we can talk about the infinite matrices acting on them. And the situation here is as follows:

Theorem

Given a Hilbert space [math]H[/math], consider the linear operators [math]T:H\to H[/math], and for each such operator define its norm by the following formula:

[[math]] ||T||=\sup_{||x||=1}||Tx|| [[/math]]
The operators which are bounded, [math]||T|| \lt \infty[/math], then form a complex algebra [math]B(H)[/math], which is complete with respect to [math]||.||[/math]. When [math]H[/math] comes with a basis [math]\{e_i\}_{i\in I}[/math], we have

[[math]] B(H)\subset\mathcal L(H)\subset M_I(\mathbb C) [[/math]]
where [math]\mathcal L(H)[/math] is the algebra of all linear operators [math]T:H\to H[/math], and [math]\mathcal L(H)\subset M_I(\mathbb C)[/math] is the correspondence [math]T\to M[/math] obtained via the usual linear algebra formulae, namely:

[[math]] T(x)=Mx\quad,\quad M_{ij}= \lt Te_j,e_i \gt [[/math]]
In infinite dimensions, neither of the above two inclusions is an equality.


Show Proof

This is something straightforward, the idea being as follows:


(1) The fact that the bounded operators form indeed a complex algebra [math]B(H)[/math], with [math]||.||[/math] being a norm on it, follows from the following estimates, which are all elementary:

[[math]] ||S+T||\leq||S||+||T||\quad,\quad ||\lambda T||=|\lambda|\cdot||T||\quad,\quad ||ST||\leq||S||\cdot||T|| [[/math]]
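In finite dimensions the norm [math]||T||=\sup_{||x||=1}||Tx||[/math] is the largest singular value of the associated matrix, and the above estimates can be checked numerically. Here is a quick sketch, assuming numpy, on randomly chosen matrices:

import numpy as np
op_norm = lambda M: np.linalg.norm(M, 2)   # largest singular value = sup of ||Mx|| over ||x||=1
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
print(op_norm(S + T) <= op_norm(S) + op_norm(T) + 1e-12)    # ||S+T|| <= ||S|| + ||T||
print(np.isclose(op_norm(2j * T), 2 * op_norm(T)))          # ||lambda T|| = |lambda| ||T||
print(op_norm(S @ T) <= op_norm(S) * op_norm(T) + 1e-12)    # ||ST|| <= ||S|| ||T||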


(2) Regarding now the completeness assertion, if [math]\{T_n\}\subset B(H)[/math] is Cauchy then [math]\{T_nx\}[/math] is Cauchy for any [math]x\in H[/math], so we can define the limit [math]T=\lim_{n\to\infty}T_n[/math] by setting:

[[math]] Tx=\lim_{n\to\infty}T_nx [[/math]]


Let us first check that the map [math]x\to Tx[/math] is linear. We have:

[[math]] \begin{eqnarray*} T(x+y) &=&\lim_{n\to\infty}T_n(x+y)\\ &=&\lim_{n\to\infty}\left(T_n(x)+T_n(y)\right)\\ &=&\lim_{n\to\infty}T_n(x)+\lim_{n\to\infty}T_n(y)\\ &=&T(x)+T(y) \end{eqnarray*} [[/math]]


Similarly, we have [math]T(\lambda x)=\lambda T(x)[/math], and we conclude that [math]T\in\mathcal L(H)[/math].


(3) With this done, it remains to prove now that we have [math]T\in B(H)[/math], and that [math]T_n\to T[/math] in norm. For this purpose, observe that we have:

[[math]] \begin{eqnarray*} ||T_n-T_m||\leq\varepsilon\ ,\ \forall n,m\geq N &\implies&||T_nx-T_mx||\leq\varepsilon\ ,\ \forall||x||=1\ ,\ \forall n,m\geq N\\ &\implies&||T_nx-Tx||\leq\varepsilon\ ,\ \forall||x||=1\ ,\ \forall n\geq N\\ &\implies&||T_Nx-Tx||\leq\varepsilon\ ,\ \forall||x||=1\\ &\implies&||T_N-T||\leq\varepsilon \end{eqnarray*} [[/math]]


But this gives both [math]T\in B(H)[/math], and [math]T_N\to T[/math] in norm, and we are done.


(4) Regarding the embeddings, the correspondence [math]T\to M[/math] in the statement is indeed linear, and its kernel is [math]\{0\}[/math], so we have indeed an embedding as follows, as claimed:

[[math]] \mathcal L(H)\subset M_I(\mathbb C) [[/math]]


In finite dimensions we have an isomorphism, because any [math]M\in M_N(\mathbb C)[/math] determines an operator [math]T:\mathbb C^N\to\mathbb C^N[/math], given by [math] \lt Te_j,e_i \gt =M_{ij}[/math]. However, in infinite dimensions, we have matrices not producing operators, as for instance the all-one matrix.
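Indeed, regarding the all-one matrix [math]M[/math], consider the vector [math]x=(1/n)_{n\geq1}[/math], which belongs to [math]l^2(\mathbb N)[/math] because [math]\sum_n1/n^2 \lt \infty[/math]. Each entry of the would-be vector [math]Mx[/math] is then the harmonic series:

[[math]] (Mx)_i=\sum_{n\geq1}\frac{1}{n}=\infty [[/math]]

Thus [math]Mx[/math] does not make sense as an element of [math]l^2(\mathbb N)[/math], and so [math]M[/math] does not produce a linear operator [math]l^2(\mathbb N)\to l^2(\mathbb N)[/math].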


(5) As for the examples of linear operators which are not bounded, these are more complicated, coming from logic, their construction requiring the axiom of choice, and we will not need them in what follows.

Finally, as a second and last result regarding the operators, we will need:

Theorem

Each operator [math]T\in B(H)[/math] has an adjoint [math]T^*\in B(H)[/math], given by:

[[math]] \lt Tx,y \gt = \lt x,T^*y \gt [[/math]]
The operation [math]T\to T^*[/math] is antilinear, antimultiplicative, involutive, and satisfies:

[[math]] ||T||=||T^*||\quad,\quad ||TT^*||=||T||^2 [[/math]]
When [math]H[/math] comes with a basis [math]\{e_i\}_{i\in I}[/math], the operation [math]T\to T^*[/math] corresponds to

[[math]] (M^*)_{ij}=\overline{M}_{ji} [[/math]]

at the level of the associated matrices [math]M\in M_I(\mathbb C)[/math].


Show Proof

This is standard too, and can be proved in 3 steps, as follows:


(1) The existence of the adjoint operator [math]T^*[/math], given by the formula in the statement, comes from the fact that the function [math]\varphi(x)= \lt Tx,y \gt [/math] is a bounded linear map [math]H\to\mathbb C[/math], so that, by the Riesz representation theorem, we must have a formula as follows, for a certain vector [math]T^*y\in H[/math]:

[[math]] \varphi(x)= \lt x,T^*y \gt [[/math]]


Moreover, since this vector is unique, [math]T^*[/math] is unique too, and we have as well:

[[math]] (S+T)^*=S^*+T^*\quad,\quad (\lambda T)^*=\bar{\lambda}T^*\quad,\quad (ST)^*=T^*S^*\quad,\quad (T^*)^*=T [[/math]]


Observe also that we have indeed [math]T^*\in B(H)[/math], because:

[[math]] \begin{eqnarray*} ||T|| &=&\sup_{||x||=1}\sup_{||y||=1}| \lt Tx,y \gt |\\ &=&\sup_{||y||=1}\sup_{||x||=1}| \lt x,T^*y \gt |\\ &=&||T^*|| \end{eqnarray*} [[/math]]


(2) Regarding now [math]||TT^*||=||T||^2[/math], which is a key formula, observe that we have:

[[math]] ||TT^*|| \leq||T||\cdot||T^*|| =||T||^2 [[/math]]


On the other hand, we have as well the following estimate:

[[math]] \begin{eqnarray*} ||T||^2 &=&\sup_{||x||=1}| \lt Tx,Tx \gt |\\ &=&\sup_{||x||=1}| \lt x,T^*Tx \gt |\\ &\leq&||T^*T|| \end{eqnarray*} [[/math]]


By replacing [math]T\to T^*[/math], and using [math]||T||=||T^*||[/math], we obtain from this [math]||T||^2\leq||TT^*||[/math], as desired.


(3) Finally, when [math]H[/math] comes with a basis, the formula [math] \lt Tx,y \gt = \lt x,T^*y \gt [/math] applied with [math]x=e_i[/math], [math]y=e_j[/math] translates into the formula [math](M^*)_{ij}=\overline{M}_{ji}[/math], as desired.
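As a numerical illustration of the above, here is a small sketch, assuming numpy, checking on a random matrix that the conjugate transpose satisfies the defining property of the adjoint, and the formula [math]||TT^*||=||T||^2[/math]:

import numpy as np
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Tstar = T.conj().T                                   # (M*)_{ij} = conj(M_{ji})
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
scal = lambda a, b: np.sum(a * b.conj())             # <a,b>, linear at left
print(np.isclose(scal(T @ x, y), scal(x, Tstar @ y)))                       # <Tx,y> = <x,T*y>
print(np.isclose(np.linalg.norm(T @ Tstar, 2), np.linalg.norm(T, 2) ** 2))  # ||TT*|| = ||T||^2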

Generally speaking, the theory of bounded operators can be developed in analogy with the theory of the usual matrices, and the main results can be summarized as follows:

Fact

The following happen, extending the spectral theorem for matrices:

  • Any self-adjoint operator, [math]T=T^*[/math], is diagonalizable.
  • More generally, any normal operator, [math]TT^*=T^*T[/math], is diagonalizable.
  • In fact, any family [math]\{T_i\}[/math] of commuting normal operators is diagonalizable.

You might wonder here why we are calling this a Fact, instead of a Theorem. The answer is that this is something quite hard to prove, and in fact not only will we not prove it, but we will also find a way of short-circuiting all this. But more on this in a moment; for now, let us enjoy it. As a consequence of all this, we can formulate as well:

Fact

The following happen, regarding the closed [math]*[/math]-algebras [math]A\subset B(H)[/math]:

  • For [math]A= \lt T \gt [/math] with [math]T[/math] normal, we have [math]A=C(X)[/math], with [math]X=\sigma(T)[/math].
  • In fact, all commutative algebras are of the form [math]A=C(X)[/math], with [math]X[/math] compact.
  • In general, we can write [math]A=C(X)[/math], with [math]X[/math] being a compact quantum space.

To be more precise here, the first assertion is more or less part of the spectral theorems from Fact 13.8, with the spectrum of an operator [math]T\in B(H)[/math] being defined as follows:

[[math]] \sigma(T)=\left\{\lambda\in\mathbb C\Big|T-\lambda\notin B(H)^{-1}\right\} [[/math]]


Regarding the second assertion, if we write [math]A=span(T_i)[/math], then the family [math]\{T_i\}[/math] consists of commuting normal operators, because [math]A[/math] is a commutative [math]*[/math]-algebra, and this leads to the above conclusion, with [math]X[/math] being a certain compact space associated to the family [math]\{T_i\}[/math]. As for the third assertion, which is something important to us, this is rather a philosophical conclusion to all this.
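To get an idea of how the first assertion works, at least in finite dimensions, the continuous functional calculus for a self-adjoint matrix can be implemented by diagonalizing, [math]f(T)=Uf(D)U^*[/math], and [math]f\to f(T)[/math] is then a morphism from the functions on [math]\sigma(T)[/math] to the algebra [math] \lt T \gt [/math]. A minimal numerical sketch, assuming numpy, with the helper functional_calculus being our own naming:

import numpy as np
def functional_calculus(T, f):
    # for T self-adjoint, T = U D U* with U unitary, set f(T) = U f(D) U*,
    # with f applied to the eigenvalues, i.e. to the points of sigma(T)
    d, U = np.linalg.eigh(T)
    return U @ np.diag(f(d)) @ U.conj().T
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T                            # self-adjoint, so sigma(T) lies on the real line
print(np.round(np.linalg.eigvalsh(T), 3))     # the spectrum of T
f, g = np.exp, np.sin                         # functional calculus is multiplicative: (fg)(T) = f(T) g(T)
print(np.allclose(functional_calculus(T, lambda s: f(s) * g(s)),
                  functional_calculus(T, f) @ functional_calculus(T, g)))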


Very good all this, so we have quantum spaces, and you would say that it remains to understand the proofs of all the above, and then we are all set, ready to go ahead with quantum groups, and the rest of our program. However, there is a bug with all this:

Bug

Besides the spectral theorem in infinite dimensions being something tough, the resulting notion of compact quantum space is not very satisfactory, because we cannot define operator algebras [math]A\subset B(H)[/math] with generators and relations, as we would love to.

In short, nice try with the above, but time now to forget all this, and invent something better. And, fortunately, the solution to our problem exists, due to Gelfand, with the starting definition here, that we already met in chapter 11, being as follows:

Definition

A [math]C^*[/math]-algebra is a complex algebra [math]A[/math], having a norm [math]||.||[/math] making it a Banach algebra, and an involution [math]*[/math], related to the norm by the formula

[[math]] ||aa^*||=||a||^2 [[/math]]
which must hold for any [math]a\in A[/math].

As a basic example, the full operator algebra [math]B(H)[/math] is a [math]C^*[/math]-algebra, and so is any norm closed [math]*[/math]-subalgebra [math]A\subset B(H)[/math]. We will see in a moment that a converse of this holds, in the sense that any [math]C^*[/math]-algebra appears as an operator algebra, [math]A\subset B(H)[/math].


But, let us start with finite dimensions. We know that the matrix algebra [math]M_N(\mathbb C)[/math] is a [math]C^*[/math]-algebra, with the usual matrix norm and involution of matrices, namely:

[[math]] ||M||=\sup_{||x||=1}||Mx||\quad,\quad (M^*)_{ij}=\bar{M}_{ji} [[/math]]


More generally, any [math]*[/math]-subalgebra [math]A\subset M_N(\mathbb C)[/math] is automatically closed, and so is a [math]C^*[/math]-algebra. In fact, in finite dimensions, the situation is as follows:

Theorem

The finite dimensional [math]C^*[/math]-algebras are exactly the algebras

[[math]] A=M_{N_1}(\mathbb C)\oplus\ldots\oplus M_{N_k}(\mathbb C) [[/math]]
with norm [math]||(a_1,\ldots,a_k)||=\sup_i||a_i||[/math], and involution [math](a_1,\ldots,a_k)^*=(a_1^*,\ldots,a_k^*)[/math].


Show Proof

This is something that we discussed in chapter 11, the idea being that this comes by splitting the unit of our algebra [math]A[/math] as a sum of central minimal projections, [math]1=p_1+\ldots+p_k[/math]. Indeed, when doing so, each of the [math]*[/math]-algebras [math]A_i=p_iAp_i[/math] turns out to be a matrix algebra, [math]A_i\simeq M_{N_i}(\mathbb C)[/math], and this gives the decomposition in the statement.
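Concretely, such a direct sum can be realized inside a bigger matrix algebra by using block-diagonal matrices, with the norm being then the maximum of the norms of the blocks, as stated. A quick numerical check, assuming numpy and scipy:

import numpy as np
from scipy.linalg import block_diag
rng = np.random.default_rng(3)
a1 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))    # an element of M_2(C)
a2 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))    # an element of M_3(C)
a = block_diag(a1, a2)              # the element (a1,a2) of M_2(C) ⊕ M_3(C), inside M_5(C)
norm = lambda M: np.linalg.norm(M, 2)
print(np.isclose(norm(a), max(norm(a1), norm(a2))))                     # ||(a1,a2)|| = sup ||a_i||
print(np.allclose(a.conj().T, block_diag(a1.conj().T, a2.conj().T)))    # involution is blockwise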

In order to develop more theory, we will need a technical result, as follows:

Theorem

Given an element [math]a\in A[/math] of a [math]C^*[/math]-algebra, define its spectrum as:

[[math]] \sigma(a)=\left\{\lambda\in\mathbb C\Big|a-\lambda\notin A^{-1}\right\} [[/math]]
The following spectral theory results hold, exactly as in the [math]A=B(H)[/math] case:

  • We have [math]\sigma(ab)\cup\{0\}=\sigma(ba)\cup\{0\}[/math].
  • We have [math]\sigma(f(a))=f(\sigma(a))[/math], for any [math]f\in\mathbb C(X)[/math] having poles outside [math]\sigma(a)[/math].
  • The spectrum [math]\sigma(a)[/math] is compact, non-empty, and contained in [math]D_0(||a||)[/math].
  • The spectra of unitaries [math](u^*=u^{-1})[/math] and self-adjoints [math](a=a^*)[/math] are on [math]\mathbb T,\mathbb R[/math].
  • The spectral radius of normal elements [math](aa^*=a^*a)[/math] is given by [math]\rho(a)=||a||[/math].

In addition, assuming [math]a\in A\subset B[/math], the spectra of [math]a[/math] with respect to [math]A[/math] and to [math]B[/math] coincide.


Show Proof

All the above assertions, which are of course formulated a bit informally, are well-known to hold for the full operator algebra [math]A=B(H)[/math], and the proof in general is similar. We refer here to chapter 11, where all this was already discussed.
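As an illustration of assertion (5), which is the key one, for a normal matrix the norm equals the maximum absolute value of the eigenvalues, while for a non-normal matrix, for instance a nilpotent one, this fails. A short numerical sketch, assuming numpy:

import numpy as np
op_norm = lambda M: np.linalg.norm(M, 2)
spec_radius = lambda M: max(abs(np.linalg.eigvals(M)))
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = A + A.conj().T                          # self-adjoint, hence normal: rho(N) = ||N||
J = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent, not normal: rho(J) = 0 < 1 = ||J||
print(np.isclose(spec_radius(N), op_norm(N)))
print(spec_radius(J), op_norm(J))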

With these ingredients, we can now prove a key result of Gelfand, as follows:

Theorem

Any commutative [math]C^*[/math]-algebra [math]A[/math] is of the form

[[math]] A=C(X) [[/math]]
with [math]X=Spec(A)[/math] being the space of Banach algebra characters [math]\chi:A\to\mathbb C[/math].


Show Proof

This is something that we know too from chapter 11, the idea being that with [math]X[/math] as in the statement, we have a morphism of algebras as follows:

[[math]] ev:A\to C(X)\quad,\quad a\to ev_a=[\chi\to\chi(a)] [[/math]]


(1) Quite surprisingly, the fact that [math]ev[/math] is involutive is not trivial. But here we can argue that it is enough to prove that we have [math]ev_{a^*}=ev_a^*[/math] for the self-adjoint elements [math]a[/math], which in turn follows from Theorem 13.13 (4), which shows that we have:

[[math]] ev_a(\chi)=\chi(a)\in\sigma(a)\subset\mathbb R [[/math]]


(2) Next, since [math]A[/math] is commutative, each element is normal, so [math]ev[/math] is isometric:

[[math]] ||ev_a||=\rho(a)=||a|| [[/math]]


It remains to prove that [math]ev[/math] is surjective. But this follows from the Stone-Weierstrass theorem, because [math]ev(A)[/math] is a closed subalgebra of [math]C(X)[/math], which separates the points.
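As a concrete example of the Gelfand correspondence, consider the commutative [math]C^*[/math]-algebra [math]A= \lt u \gt [/math] generated by the cyclic shift matrix [math]u[/math] on [math]\mathbb C^N[/math]. The characters of [math]A[/math] are then indexed by the [math]N[/math]-th roots of unity, each character sending [math]u\to w[/math], and the Gelfand transform of an element is its list of eigenvalues, seen as a function on this [math]N[/math]-point spectrum. A small numerical sketch, assuming numpy, with the names below being ours:

import numpy as np
N = 6
u = np.roll(np.eye(N), 1, axis=0)                     # the cyclic shift, a unitary with u^N = 1
roots = np.exp(2j * np.pi * np.arange(N) / N)         # the N-th roots of unity = sigma(u)
coeffs = np.array([1.0, 2.0, 0.0, -1.0, 0.5, 3.0])    # a generic element a = sum_k c_k u^k of A
a = sum(c * np.linalg.matrix_power(u, k) for k, c in enumerate(coeffs))
# Gelfand transform: ev_a(chi_w) = chi_w(a) = sum_k c_k w^k, one value per character
ev_a = np.array([sum(c * w ** k for k, c in enumerate(coeffs)) for w in roots])
# these values coincide, as a multiset, with the eigenvalues of the matrix a
key = lambda z: (round(z.real, 6), round(z.imag, 6))
print(sorted(map(key, ev_a)) == sorted(map(key, np.linalg.eigvals(a))))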

In view of Theorem 13.14, we can formulate the following definition:

Definition

Given an arbitrary [math]C^*[/math]-algebra [math]A[/math], we can write

[[math]] A=C(X) [[/math]]
and call the abstract space [math]X[/math] a compact quantum space.

In other words, we can define the category of compact quantum spaces [math]X[/math] as being the category of the [math]C^*[/math]-algebras [math]A[/math], with the arrows reversed. A morphism [math]f:X\to Y[/math] corresponds by definition to a morphism [math]\Phi:C(Y)\to C(X)[/math], a product of spaces [math]X\times Y[/math] corresponds by definition to a product of algebras [math]C(X)\otimes C(Y)[/math], and so on.


All this is of course a bit speculative, and as a first true result, we have:

Theorem

The finite quantum spaces are exactly the disjoint unions of type

[[math]] X=M_{N_1}\sqcup\ldots\sqcup M_{N_k} [[/math]]
where [math]M_N[/math] is the finite quantum space given by [math]C(M_N)=M_N(\mathbb C)[/math].


Show Proof

For a compact quantum space [math]X[/math], coming from a [math]C^*[/math]-algebra [math]A[/math] via the formula [math]A=C(X)[/math], being finite can only mean that the following number is finite:

[[math]] |X|=\dim_\mathbb CA \lt \infty [[/math]]


Thus, by using Theorem 13.12, we are led to the conclusion that we must have:

[[math]] C(X)=M_{N_1}(\mathbb C)\oplus\ldots\oplus M_{N_k}(\mathbb C) [[/math]]


But since direct sums of algebras [math]A[/math] correspond to disjoint unions of quantum spaces [math]X[/math], via the correspondence [math]A=C(X)[/math], this leads to the conclusion in the statement.

Finally, at the general level, we have as well the following key result:

Theorem

Any [math]C^*[/math]-algebra appears as an operator algebra:

[[math]] A\subset B(H) [[/math]]
Moreover, when [math]A[/math] is separable, which is usually the case, [math]H[/math] can be taken separable.


Show Proof

This result, called the GNS representation theorem, after Gelfand, Naimark and Segal, comes as a continuation of the Gelfand theorem, the idea being as follows:


(1) Let us first prove that the result holds in the commutative case, [math]A=C(X)[/math]. Here, we can pick a positive measure of full support on [math]X[/math], and construct our embedding as follows:

[[math]] C(X)\subset B(L^2(X))\quad,\quad f\to[g\to fg] [[/math]]
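For instance, when [math]X[/math] is a finite set with its counting measure, we have [math]L^2(X)=\mathbb C^{|X|}[/math], and the multiplication operator by a function [math]f[/math] is simply the diagonal matrix with diagonal entries [math]f(x)[/math], the embedding being isometric. A quick sketch, assuming numpy:

import numpy as np
f = np.array([0.5, -2.0, 1.0 + 1.0j])       # a function on the 3-point space X
M_f = np.diag(f)                            # the multiplication operator g -> fg on L^2(X) = C^3
print(np.isclose(np.linalg.norm(M_f, 2), max(abs(f))))    # operator norm = sup norm of f
print(np.allclose(M_f.conj().T, np.diag(f.conj())))       # adjoint = multiplication by conj(f)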


(2) In general the proof is similar, the idea being that given a [math]C^*[/math]-algebra [math]A[/math] we can construct a Hilbert space [math]H=L^2(A)[/math], with respect to a suitable positive state on [math]A[/math], and then an embedding as above:

[[math]] A\subset B(L^2(A))\quad,\quad a\to[b\to ab] [[/math]]


(3) Finally, the last assertion is clear, because when [math]A[/math] is separable, meaning that it has a countable dense subset, the associated Hilbert space [math]H=L^2(A)[/math] is separable too.

Many other things can be said about the [math]C^*[/math]-algebras, and we recommend here any operator algebra book. But for our purposes here, the above will do.

General references

Banica, Teo (2024). "Graphs and their symmetries". arXiv:2406.03664 [math.CO].