<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
{{Alert-warning|This article was automatically generated from a tex file and may contain conversion errors. If permitted, you may login and edit this article to improve the conversion. }}
As a first true result about graphs, which we will call a theorem not because of the difficulty of its proof, but because of its beauty and usefulness, we have:


{{proofcard|Theorem|theorem-1|A graph <math>X</math>, with vertices labeled <math>1,\ldots,N</math>, is uniquely determined by its adjacency matrix, which is the matrix <math>d\in M_N(0,1)</math> given by:
<math display="block">
d_{ij}=\begin{cases}
1&{\rm if}\ i-j\\
0&{\rm if}\ {i\not\!\!-\,j}
\end{cases}
</math>
Moreover, the matrices <math>d\in M_N(0,1)</math> which can appear in this way, from graphs, are precisely those which are symmetric, and have <math>0</math> on the diagonal.
|We have two things to be proved, the idea being as follows:
(1) Given a graph <math>X</math>, we can construct a matrix <math>d\in M_N(0,1)</math> as in the statement, and this matrix is obviously symmetric, and has <math>0</math> on the diagonal.
(2) Conversely, given a matrix <math>d\in M_N(0,1)</math> which is symmetric, and has <math>0</math> on the diagonal, we can associate to it the graph <math>X</math> having as vertices the numbers <math>1,\ldots,N</math>, and with the edges between these vertices being defined as follows:
<math display="block">
i-j\iff d_{ij}=1
</math>
It is then clear that the adjacency matrix of this graph <math>X</math> is the matrix <math>d</math> itself. Thus, we have established a correspondence <math>X\leftrightarrow d</math>, as in the statement.}}
The above result is very useful for various purposes, a first obvious application being that we can now tell a computer what our favorite graph <math>X</math> is, just by typing in the corresponding adjacency matrix <math>d\in M_N(0,1)</math>, which is something that the computer will surely understand. In fact, computers like 0-1 data, that being the language they internally speak, and when such data comes in matrix form, they even get very happy and excited, and start doing all sorts of computations. But more on this later.
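In this spirit, here is a minimal Python sketch of the correspondence <math>X\leftrightarrow d</math> from the above theorem, with Python being of course just one possible choice of language, and with the function name being mine: given a 0-1 matrix, we verify that it is symmetric with <math>0</math> on the diagonal, then read off the edges <math>i-j</math> from the entries <math>d_{ij}=1</math>:
<syntaxhighlight lang="python">
# Minimal sketch: decode a 0-1 adjacency matrix into its graph, given as a
# list of edges, after checking the two conditions from the theorem.

def edges_from_adjacency(d):
    """Return the edge list of the graph encoded by the 0-1 matrix d."""
    N = len(d)
    for i in range(N):
        if d[i][i] != 0:
            raise ValueError("nonzero diagonal entry")
        for j in range(N):
            if d[i][j] not in (0, 1) or d[i][j] != d[j][i]:
                raise ValueError("entries must be 0-1, and symmetric")
    # vertices are labeled 1,...,N, as in the theorem
    return [(i + 1, j + 1) for i in range(N) for j in range(i + 1, N)
            if d[i][j] == 1]

segment = [[0, 1],
           [1, 0]]
print(edges_from_adjacency(segment))  # [(1, 2)]: a single edge
</syntaxhighlight>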
Speaking of conversion between graphs <math>X</math> and matrices <math>d</math>, this can work for the two of us as well. Here is my matrix <math>d</math>, and it is up to you to tell me what my graph <math>X</math> was:
<math display="block">
d=\begin{pmatrix}
0&1&1\\
1&0&1\\
1&1&0
\end{pmatrix}
</math>
Problem solved, I hope, with the graph in question being a triangle. Here is now another matrix <math>d</math>, in the hope that you will see here a square:
<math display="block">
d=\begin{pmatrix}
0&1&0&1\\
1&0&1&0\\
0&1&0&1\\
1&0&1&0
\end{pmatrix}
</math>
Here is now another matrix <math>d</math>, in the hope that you will see the same square here, but this time with both its diagonals drawn, and you can call that a tetrahedron, too:
<math display="block">
d=\begin{pmatrix}
0&1&1&1\\
1&0&1&1\\
1&1&0&1\\
1&1&1&0
\end{pmatrix}
</math>
By the way, talking about exercises, please allow me to record the solutions to the above exercises, not of course for you, who are young and enthusiastic and must train hard, but rather for me and my colleagues, who are often old and tired. Here they are:
<math display="block">
\xymatrix@R=20pt@C=20pt{
&1\ar@{-}[ddl]\ar@{-}[ddr]&&&1\ar@{-}[dd]\ar@{-}[rr]&&2\ar@{-}[dd]&&1\ar@{-}[dd]\ar@{-}[ddrr]\ar@{-}[rr]&&2\ar@{-}[dd]\ar@{-}[ddll]\\
\\
3\ar@{-}[rr]&&2&&4\ar@{-}[rr]&&3&&4\ar@{-}[rr]&&3
}
</math>
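For double-checking purposes, the decoding of such matrices can be left to the computer as well; here is a sketch of this, run on the square and tetrahedron matrices above, with a stripped-down version of the previous function:
<syntaxhighlight lang="python">
def edges(d):
    # same decoding as in the previous sketch, minus the validity checks
    N = len(d)
    return [(i + 1, j + 1) for i in range(N) for j in range(i + 1, N)
            if d[i][j] == 1]

square      = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
tetrahedron = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]

print(edges(square))       # [(1, 2), (1, 4), (2, 3), (3, 4)]
print(edges(tetrahedron))  # all 6 pairs: the complete graph on 4 vertices
</syntaxhighlight>
And these edge lists match indeed the pictures above.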
And to finish this discussion, at a more advanced level now, here is another matrix <math>d</math>, in the hope that you will see something related to cycling here:
<math display="block">
d=\begin{pmatrix}
0&1&0&1&0&1\\
1&0&1&0&1&0\\
0&1&0&1&0&1\\
1&0&1&0&1&0\\
0&1&0&1&0&1\\
1&0&1&0&1&0
\end{pmatrix}
</math>
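As a comment here, the pattern of this latter matrix is transparent enough to be generated by the computer too. The parity rule below is my own reading of the matrix, offered as a quick sanity check:
<syntaxhighlight lang="python">
# With vertices labeled 1,...,6, the matrix above has d_ij = 1 precisely
# when i + j is odd; this reproduces its alternating 0-1 rows.

N = 6
d = [[(i + j) % 2 for j in range(1, N + 1)] for i in range(1, N + 1)]
for row in d:
    print(row)
</syntaxhighlight>
Thus each vertex is connected to its two neighbors on a hexagon, plus the opposite vertex, and drawing all this gives a hexagon with its three long diagonals, which is the wheel suggested by the cycling hint above.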
All this looks fun, and mathematically relevant too. Based on this, we will agree to sometimes speak directly in terms of <math>d</math>, which is quite practical. Especially for me, since typing a matrix in LaTeX, the computer language used for typesetting math, and the present book, is easier than drawing a graph. Plus, our computer friend will understand too.
Let us now develop some theory, for general graphs. Once the number of vertices is fixed, <math>N\in\mathbb N</math>, what seems to most distinguish the graphs, for instance in connection with the ease or pain of drawing them, is the total number of edges <math>|E|</math>. Equivalently, what distinguishes the graphs is the density of edges at each vertex, given by:
<math display="block">
\rho=\frac{|E|}{N}
</math>
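As a side remark here, since each edge has exactly two endpoints, counting edge-endpoints over all vertices counts each edge twice. Thus, with <math>v_i</math> standing for the number of edges at the vertex <math>i</math>, a quantity to be formalized below, we have:
<math display="block">
\sum_{i=1}^Nv_i=2|E|\quad\implies\quad\frac{1}{N}\sum_{i=1}^Nv_i=\frac{2|E|}{N}=2\rho
</math>
So <math>\rho</math> is, up to a factor of <math>2</math>, the average number of edges at each vertex.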
So, let us take a closer look at this quantity. It is convenient, for a finer study, to formulate our definition as follows:
{{defncard|label=|id=|Given a graph <math>X</math>, with <math>X</math> standing as usual for both the vertex set, and the graph itself, the valence of each vertex <math>i</math> is the number of its neighbors:
<math display="block">
v_i=\#\left\{j\in X\Big|i-j\right\}
</math>
We call <math>X</math> regular when the valence function <math>v:X\to\mathbb N</math> is constant. More specifically, in this case, we say that <math>X</math> is <math>k</math>-regular, with <math>k</math> being the common valence of vertices.}}
At the level of examples, all the graphs pictured above, with our usual convention that a 0-1 matrix means its picture, are regular, with the valence being as follows:
<math display="block">
\begin{matrix}
{\rm Graph}&{\rm Regularity}&{\rm Valence}\\
-&-&-\\
{\rm triangle}&{\rm yes}&2\\
{\rm square}&{\rm yes}&2\\
{\rm tetrahedron}&{\rm yes}&3\\
{\rm hexagonal\ wheel}&{\rm yes}&3
\end{matrix}
</math>
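For verification purposes, here is a short Python check of this table, a sketch only, using the fact, to be proved below, that the valence is the row sum of the adjacency matrix:
<syntaxhighlight lang="python">
# Check of the table: for each adjacency matrix d from the text, the row
# sums are constant, and equal the stated valence.

graphs = {
    "triangle":        [[0,1,1],[1,0,1],[1,1,0]],
    "square":          [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]],
    "tetrahedron":     [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]],
    "hexagonal wheel": [[(i + j) % 2 for j in range(1, 7)] for i in range(1, 7)],
}
for name, d in graphs.items():
    valences = [sum(row) for row in d]
    assert len(set(valences)) == 1  # constant valence: the graph is regular
    print(f"{name} is {valences[0]}-regular")
</syntaxhighlight>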
This leads us into the question: is there any interesting non-regular graph? Certainly yes; think for instance of all those beautiful pictures of snowflakes, such as:
<math display="block">
\xymatrix@R=10pt@C=10pt{
&&&\bullet\ar@{-}[d]\\
&&\bullet\ar@{-}[r]&\bullet\ar@{-}[dd]\ar@{-}[r]&\bullet\\
&\bullet\ar@{-}[d]&&&&\bullet\ar@{-}[d]\\
\bullet\ar@{-}[r]&\bullet\ar@{-}[rr]&&\bullet\ar@{-}[rr]\ar@{-}[dd]&&\bullet\ar@{-}[r]&\bullet\\
&\bullet\ar@{-}[u]&&&&\bullet\ar@{-}[u]\\
&&\bullet\ar@{-}[r]&\bullet\ar@{-}[d]\ar@{-}[r]&\bullet\\
&&&\bullet
}
</math>
Here the valence is 4 at the inner points, and 1 at the endpoints. Graphs like the above one are called “trees” in mathematics, due to the fact that, thinking a bit, if you look at a usual tree from above, say from a helicopter, what you see is a picture like the one above. But more on such graphs, which can be quite complicated, later.
For the moment, let us study the valence function, and the regular graphs. We can do some math, by using the adjacency matrix, as follows:
{{proofcard|Proposition|proposition-1|Given a graph <math>X</math>, with adjacency matrix <math>d</math>:
<ul><li> The valence function <math>v</math> is the row sum function for <math>d</math>.
</li>
<li> Equivalently, <math>v=d\xi</math>, with <math>\xi</math> being the all-<math>1</math> vector.
</li>
<li> <math>X</math> is regular when <math>d</math> is stochastic, meaning with constant row sums.
</li>
<li> Equivalently, <math>X</math> is <math>k</math>-regular, with <math>k\in\mathbb N</math>, when <math>d\xi=k\xi</math>.
</li>
</ul>
|All this looks quite trivial, but let us make some effort, and write down a complete mathematical proof. Regarding (1), this follows from:
<math display="block">
\begin{eqnarray*}
v_i
&=&\sum_{j-i}1\\
&=&\sum_{j-i}1+\sum_{j\not\!\!-\,i}0\\
&=&\sum_jd_{ij}
\end{eqnarray*}
</math>
Regarding (2), this follows from (1), because for any matrix <math>d</math> we have:
<math display="block">
d\xi=\begin{pmatrix}
d_{11}&\ldots&d_{1N}\\
\vdots&&\vdots\\
d_{N1}&\ldots&d_{NN}
\end{pmatrix}
\begin{pmatrix}
1\\
\vdots\\
1\end{pmatrix}
=\begin{pmatrix}
d_{11}+\ldots+d_{1N}\\
\vdots\\
d_{N1}+\ldots+d_{NN}\end{pmatrix}
</math>
Finally, (3) follows from (1), and (4) follows from (2).}}
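As an illustration, here is how (2) and (4) look in practice, in a small numpy sketch, with numpy being one standard choice of linear algebra library, tried on the tetrahedron:
<syntaxhighlight lang="python">
import numpy as np

# the tetrahedron, from the text
d = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
xi = np.ones(4)  # the all-1 vector

v = d @ xi                          # valence vector, as in (2)
print(v)                            # [3. 3. 3. 3.]
print(np.allclose(d @ xi, 3 * xi))  # True: 3-regular, as in (4)
</syntaxhighlight>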
Observe that, in linear algebra terms, (4) above reformulates as follows:
{{proofcard|Theorem|theorem-2|A graph is regular precisely when the all-<math>1</math> vector is an eigenvector of the adjacency matrix. In this case, the valence is the corresponding eigenvalue.
|This is clear indeed from (4) in the above Proposition. As a philosophical comment here, you might wonder how something previously labeled Proposition can suddenly become a Theorem. Welcome to mathematics, which is not an exact science, the general rule being that everything that looks fancy enough can be called a Theorem. In fact, we have already met this phenomenon in the context of Proposition 1.8, which got converted into Advertisement 1.9, and with that advertisement being, you guessed right, an advertisement for a future Theorem, saying exactly the same thing.}}
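Again as an illustration, and as a sketch only, this eigenvector statement can be checked numerically, say on the square, where the valence is <math>k=2</math>:
<syntaxhighlight lang="python">
import numpy as np

# the square, from the text
d = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
xi = np.ones(4)

print(np.allclose(d @ xi, 2 * xi))  # True: xi is an eigenvector, eigenvalue 2
print(np.linalg.eigvalsh(d))        # eigenvalues of d, with the valence 2 among them
</syntaxhighlight>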
The above result is quite interesting, and suggests systematically looking at the eigenvalues and eigenvectors of <math>d</math>. This is indeed a very good idea, but we will keep it for later, starting with chapter 2, once we have a bit more general theory.
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Graphs and their symmetries|eprint=2406.03664|class=math.CO}}
