<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
Let us discuss now some explicit examples of operators, in analogy with what happens in finite dimensions. The most basic examples of linear transformations are the rotations, symmetries and projections. Then, we have certain remarkable classes of linear transformations, such as the positive, self-adjoint and normal ones. In what follows we will develop the basic theory of such transformations, in the present Hilbert space setting.


Let us begin with the rotations. The situation here is quite tricky in arbitrary dimensions, and we have several notions instead of one. We first have the following result:
{{proofcard|Theorem|theorem-1|For a linear operator <math>U\in B(H)</math> the following conditions are equivalent, and if they are satisfied, we say that <math>U</math> is an isometry:
<ul><li> <math>U</math> is a metric space isometry, <math>d(Ux,Uy)=d(x,y)</math>.
</li>
<li> <math>U</math> is a normed space isometry, <math>||Ux||=||x||</math>.
</li>
<li> <math>U</math> preserves the scalar product, <math> < Ux,Uy > = < x,y > </math>.
</li>
<li> <math>U</math> satisfies the isometry condition <math>U^*U=1</math>.
</li>
</ul>
In finite dimensions, we recover in this way the usual unitary transformations.
|The proofs are similar to those in finite dimensions, as follows:
<math>(1)\iff(2)</math> This follows indeed from the formula for the distance, namely:
<math display="block">
d(x,y)=||x-y||
</math>
<math>(2)\iff(3)</math> This is again standard, because we can pass from scalar products to distances, and vice versa, by using <math>||x||=\sqrt{ < x,x > }</math>, and the polarization formula.
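As for the polarization formula, let us record it here as well, say under the convention that the scalar product is linear in the first variable, the verification being a straightforward expansion of the norms on the right:
<math display="block">
 < x,y > =\frac{1}{4}\sum_{k=0}^3i^k||x+i^ky||^2
</math>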
<math>(3)\iff(4)</math> We have indeed the following equivalences, by using the standard formula <math> < Tx,y > = < x,T^*y > </math>, which defines the adjoint operator:
<math display="block">
\begin{eqnarray*}
< Ux,Uy > = < x,y >
&\iff& < x,U^*Uy > = < x,y > \\
&\iff&U^*Uy=y\\
&\iff&U^*U=1
\end{eqnarray*}
</math>
Thus, we are led to the conclusions in the statement.}}
The point now is that the condition <math>U^*U=1</math> does not imply in general <math>UU^*=1</math>, the simplest counterexample here being the shift operator on <math>l^2(\mathbb N)</math>:
{{proofcard|Proposition|proposition-1|The shift operator on the space <math>l^2(\mathbb N)</math>, given by
<math display="block">
S(e_i)=e_{i+1}
</math>
is an isometry, <math>S^*S=1</math>. However, we have <math>SS^*\neq1</math>.
|The adjoint of the shift is given by the following formula:
<math display="block">
S^*(e_i)=\begin{cases}
e_{i-1}&{\rm if}\ i > 0\\
0&{\rm if}\ i=0
\end{cases}
</math>
When composing <math>S</math> and <math>S^*</math> in one order, we obtain the following formula:
<math display="block">
S^*S(e_i)=e_i
</math>
In the other order now, we obtain the following formula:
<math display="block">
SS^*(e_i)=\begin{cases}
e_i&{\rm if}\ i > 0\\
0&{\rm if}\ i=0
\end{cases}
</math>
Summarizing, the compositions are given by the following formulae:
<math display="block">
S^*S=1\quad,\quad
SS^*=Proj(e_0^\perp)
</math>
Thus, we are led to the conclusions in the statement.}}
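As an illustration, the above formulae can also be checked by computer, exactly, on the finitely supported vectors of <math>l^2(\mathbb N)</math>. Here is a minimal sketch in Python, with the dictionary encoding of vectors and the function names being our own choices:
<syntaxhighlight lang="python">
# Finitely supported vectors of l^2(N), encoded as {index: coefficient}.

def shift(v):
    """S(e_i) = e_{i+1}: move every coefficient one index up."""
    return {i + 1: c for i, c in v.items()}

def shift_adjoint(v):
    """S^*(e_i) = e_{i-1} for i > 0, and S^*(e_0) = 0."""
    return {i - 1: c for i, c in v.items() if i > 0}

e0, e5 = {0: 1.0}, {5: 1.0}

# S^*S = 1, on any basis vector:
assert shift_adjoint(shift(e0)) == e0
assert shift_adjoint(shift(e5)) == e5

# SS^* is the projection onto the orthogonal complement of e_0:
assert shift(shift_adjoint(e0)) == {}   # e_0 is killed
assert shift(shift_adjoint(e5)) == e5   # e_i with i > 0 is fixed
</syntaxhighlight>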
As a conclusion, the notion of isometry is not the correct infinite dimensional analogue of the notion of unitary, and the unitary operators must be introduced as follows:
{{proofcard|Theorem|theorem-2|For a linear operator <math>U\in B(H)</math> the following conditions are equivalent, and if they are satisfied, we say that <math>U</math> is a unitary:
<ul><li> <math>U</math> is an isometry, which is invertible.
</li>
<li> <math>U</math>, <math>U^{-1}</math> are both isometries.
</li>
<li> <math>U</math>, <math>U^*</math> are both isometries.
</li>
<li> <math>UU^*=U^*U=1</math>.
</li>
<li> <math>U^*=U^{-1}</math>.
</li>
</ul>
Moreover, the unitary operators form a group <math>U(H)\subset B(H)</math>.
|There are several statements here, the idea being as follows:
(1) The various equivalences in the statement are all clear from definitions, and from the above theorem on isometries, in what regards the various possible notions of isometries which can be used, by using the formula <math>(ST)^*=T^*S^*</math> for the adjoints of the products of operators.
(2) The fact that the products and inverses of unitaries are unitaries is also clear, and we conclude that the unitary operators form a group <math>U(H)\subset B(H)</math>, as stated.}}
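As a quick numerical illustration of all this, in finite dimensions, a random unitary can be produced via a QR decomposition, and the above conditions can be checked on it. Here is a sketch, assuming numpy, with the matrix size and random seed being arbitrary choices of ours:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)   # the Q factor of an invertible matrix is unitary
Ustar = U.conj().T

assert np.allclose(Ustar @ U, np.eye(4))      # U^*U = 1
assert np.allclose(U @ Ustar, np.eye(4))      # UU^* = 1
assert np.allclose(Ustar, np.linalg.inv(U))   # U^* = U^{-1}
</syntaxhighlight>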
Let us discuss now the projections. Modulo the fact that the subspaces <math>K\subset H</math> on which these projections project must be assumed to be closed, in the present setting, the result here is perfectly similar to the one in finite dimensions, as follows:
{{proofcard|Theorem|theorem-3|For a linear operator <math>P\in B(H)</math> the following conditions are equivalent, and if they are satisfied, we say that <math>P</math> is a projection:
<ul><li> <math>P</math> is the orthogonal projection on a closed subspace <math>K\subset H</math>.
</li>
<li> <math>P</math> satisfies the projection equations <math>P^2=P^*=P</math>.
</li>
</ul>
|As in finite dimensions, <math>P</math> is an abstract projection, not necessarily orthogonal, when it is an idempotent, algebraically speaking, in the sense that we have:
<math display="block">
P^2=P
</math>
The point now is that this projection is orthogonal precisely when <math> < Px-x,Py > =0</math> for any <math>x,y</math>, and this condition can be processed as follows:
<math display="block">
\begin{eqnarray*}
< Px-x,Py > =0
&\iff& < P^*Px-P^*x,y > =0\\
&\iff&P^*Px-P^*x=0\\
&\iff&P^*P-P^*=0\\
&\iff&P^*P=P^*
\end{eqnarray*}
</math>
Now observe that by taking adjoints in this formula, we obtain <math>P^*P=P</math>. Thus, we must have <math>P=P^*</math>, and so we have shown that any orthogonal projection must satisfy, as claimed:
<math display="block">
P^2=P^*=P
</math>
Conversely, if this condition is satisfied, <math>P^2=P</math> shows that <math>P</math> is an abstract projection, and <math>P=P^*</math> shows via the above computation that <math>P</math> is indeed orthogonal.}}
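In finite dimensions, the orthogonal projection onto the column space of a matrix having independent columns can be written down explicitly, as <math>P=A(A^*A)^{-1}A^*</math>, and the above equations can be verified on it. Here is a sketch, assuming numpy, with the sizes and seed being our own choices:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))

# Orthogonal projection onto Im(A), namely P = A (A^*A)^{-1} A^*:
P = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T

assert np.allclose(P @ P, P)        # P^2 = P
assert np.allclose(P.conj().T, P)   # P^* = P
</syntaxhighlight>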
There is a relation between the projections and the general isometries, such as the shift <math>S</math> that we met before, and we have the following result:
{{proofcard|Proposition|proposition-2|Given an isometry <math>U\in B(H)</math>, the operator
<math display="block">
P=UU^*
</math>
is a projection, namely the orthogonal projection on <math>Im(U)</math>.
|Assume that we have an isometry, <math>U^*U=1</math>. The fact that <math>P=UU^*</math> is indeed a projection can be checked abstractly, as follows:
<math display="block">
(UU^*)^*=UU^*
</math>
<math display="block">
UU^*UU^*=U(U^*U)U^*=UU^*
</math>
As for the last assertion, observe that for <math>x=Uy\in Im(U)</math> we have <math>Px=UU^*Uy=Uy=x</math>, while for <math>x\perp Im(U)</math> we have <math>U^*x=0</math>, and so <math>Px=0</math>. Thus <math>P</math> is the orthogonal projection on <math>Im(U)</math>, as we already met for the shift.}}
More generally now, along the same lines, and clarifying the whole situation with the unitaries and isometries, we have the following result:
{{proofcard|Theorem|theorem-4|An operator <math>U\in B(H)</math> is a partial isometry, in the usual geometric sense, when the following two operators are projections:
<math display="block">
P=UU^*\quad,\quad
Q=U^*U
</math>
Moreover, the isometries, adjoints of isometries and unitaries are respectively characterized by the conditions <math>Q=1</math>, <math>P=1</math>, <math>P=Q=1</math>.
|The first assertion is a straightforward extension of the above proposition regarding the isometries, and the second assertion follows from the various results regarding isometries established above.}}
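As an illustration, in finite dimensions a partial isometry can be produced by keeping some orthonormal columns of a unitary, and zeroing out the rest, this construction being our own choice here. Here is a sketch, assuming numpy:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
W, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a 5x5 orthogonal matrix
D = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])             # keep 3 of the 5 columns
U = W @ D                                          # a partial isometry

P = U @ U.conj().T   # projection onto the image of U
Q = U.conj().T @ U   # projection onto the complement of the kernel of U
for X in (P, Q):
    assert np.allclose(X @ X, X) and np.allclose(X.conj().T, X)
</syntaxhighlight>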
It is possible to talk as well about symmetries, in the following way:
{{defncard|label=|id=|An operator <math>S\in B(H)</math> is called a symmetry when <math>S^2=1</math>, and a unitary symmetry when one of the following equivalent conditions is satisfied:
<ul><li> <math>S</math> is a unitary, <math>S^*=S^{-1}</math>, and a symmetry as well, <math>S^2=1</math>.
</li>
<li> <math>S</math> satisfies the equations <math>S=S^*=S^{-1}</math>.
</li>
</ul>}}
Here the terminology is a bit non-standard, because even in finite dimensions, <math>S^2=1</math> is not exactly what you would require for a “true” symmetry, as shown by the following transformation, which is a symmetry in our sense, but not a unitary symmetry:
<math display="block">
\begin{pmatrix}0&2\\ 1/2&0\end{pmatrix}\binom{x}{y}=\binom{2y}{x/2}
</math>
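This example can be verified by computer as well, the following being a minimal sketch, assuming numpy:
<syntaxhighlight lang="python">
import numpy as np

S = np.array([[0.0, 2.0], [0.5, 0.0]])

assert np.allclose(S @ S, np.eye(2))         # S^2 = 1, a symmetry in our sense
assert not np.allclose(S.T @ S, np.eye(2))   # S^*S = diag(1/4, 4), so not unitary
</syntaxhighlight>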
Let us study now some larger classes of operators, which are of particular importance, namely the self-adjoint, positive and normal ones. We first have:
{{proofcard|Theorem|theorem-5|For an operator <math>T\in B(H)</math>, the following conditions are equivalent, and if they are satisfied, we call <math>T</math> self-adjoint:
<ul><li> <math>T=T^*</math>.
</li>
<li> <math> < Tx,x > \in\mathbb R</math>, for any <math>x\in H</math>.
</li>
</ul>
In finite dimensions, we recover in this way the usual self-adjointness notion.
|There are several assertions here, the idea being as follows:
<math>(1)\implies(2)</math> This is clear, because we have:
<math display="block">
\begin{eqnarray*}
\overline{ < Tx,x > }
&=& < x,Tx > \\
&=& < T^*x,x > \\
&=& < Tx,x >
\end{eqnarray*}
</math>
<math>(2)\implies(1)</math> In order to prove this, observe that the beginning of the above computation shows that, when assuming <math> < Tx,x > \in\mathbb R</math>, the following happens:
<math display="block">
< Tx,x > = < T^*x,x >
</math>
Thus, in terms of the operator <math>S=T-T^*</math>, we have:
<math display="block">
< Sx,x > =0
</math>
In order to finish, we use a polarization trick. We have the following formula:
<math display="block">
< S(x+y),x+y > = < Sx,x > + < Sy,y > + < Sx,y > + < Sy,x >
</math>
Since the first 3 terms vanish, the sum of the 2 last terms vanishes too. But, by using <math>S^*=-S</math>, coming from <math>S=T-T^*</math>, we can process this latter vanishing as follows:
<math display="block">
\begin{eqnarray*}
< Sx,y >
&=&- < Sy,x > \\
&=& < y,Sx > \\
&=&\overline{ < Sx,y > }
\end{eqnarray*}
</math>
Thus we must have <math> < Sx,y > \in\mathbb R</math>, and with <math>y\to iy</math> we obtain <math> < Sx,y > \in i\mathbb R</math> too, and so <math> < Sx,y > =0</math>. Thus <math>S=0</math>, which gives <math>T=T^*</math>, as desired.
(3) Finally, in what regards finite dimensions, or more generally the case where our Hilbert space comes with a basis, <math>H=l^2(I)</math>, here the condition <math>T=T^*</math> corresponds to the usual self-adjointness condition <math>M=M^*</math> at the level of the associated matrices.}}
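As a numerical illustration of the above equivalence, in finite dimensions, <math> < Tx,x > </math> comes out real for a self-adjoint matrix, and generically not real otherwise. Here is a sketch, assuming numpy, whose vdot function implements a scalar product which is conjugate-linear in the first variable, this convention not affecting the reality check:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T                    # self-adjoint, by construction
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

assert abs(np.vdot(x, T @ x).imag) < 1e-12   # <Tx,x> is real, as predicted
assert abs(np.vdot(x, A @ x).imag) > 1e-12   # fails for a generic, non-self-adjoint A
</syntaxhighlight>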
At the level of the basic examples, the situation is as follows:
{{proofcard|Proposition|proposition-3|The following operators are self-adjoint:
<ul><li> The projections, <math>P^2=P^*=P</math>. In fact, an abstract, algebraic projection is an orthogonal projection precisely when it is self-adjoint.
</li>
<li> The unitary symmetries, <math>S=S^*=S^{-1}</math>. In fact, a unitary is a unitary symmetry precisely when it is self-adjoint.
</li>
</ul>
|These assertions are indeed all clear from definitions.}}
Next in line, we have the notion of positive operator. We have here:
{{proofcard|Theorem|theorem-6|The positive operators, which are the operators <math>T\in B(H)</math> satisfying <math> < Tx,x > \geq0</math> for any <math>x\in H</math>, have the following properties:
<ul><li> They are self-adjoint, <math>T=T^*</math>.
</li>
<li> As examples, we have the projections, <math>P^2=P^*=P</math>.
</li>
<li> More generally, <math>T=S^*S</math> is positive, for any <math>S\in B(H)</math>.
</li>
<li> In finite dimensions, we recover the usual positive operators.
</li>
</ul>
|All these assertions are elementary, the idea being as follows:
(1) This follows from the above theorem characterizing the self-adjoint operators, because <math> < Tx,x > \geq0</math> implies <math> < Tx,x > \in\mathbb R</math>.
(2) This is clear from <math>P^2=P=P^*</math>, because we have:
<math display="block">
\begin{eqnarray*}
< Px,x >
&=& < P^2x,x > \\
&=& < Px,Px > \\
&=&||Px||^2
\end{eqnarray*}
</math>
(3) This follows from a similar computation, namely:
<math display="block">
< S^*Sx,x >
= < Sx,Sx >
=||Sx||^2
</math>
(4) This is well-known, the idea being that the condition <math> < Tx,x > \geq0</math> corresponds to the usual positivity condition <math>A\geq0</math>, at the level of the associated matrix.}}
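As a numerical illustration of the assertion (3), in finite dimensions, the matrix <math>T=S^*S</math> comes out positive, with eigenvalues <math>\geq0</math> up to rounding. Here is a sketch, assuming numpy, with the size and seed being our own choices:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = S.conj().T @ S                    # positive, being of the form S^*S

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.vdot(x, T @ x).real >= 0               # <Tx,x> = ||Sx||^2 >= 0
assert np.all(np.linalg.eigvalsh(T) >= -1e-12)   # eigenvalues >= 0, up to rounding
</syntaxhighlight>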
It is possible to talk as well about strictly positive operators, and we have here:
{{proofcard|Theorem|theorem-7|The strictly positive operators, which are the operators <math>T\in B(H)</math> satisfying <math> < Tx,x >  > 0</math>, for any <math>x\neq0</math>, have the following properties:
<ul><li> They are self-adjoint, <math>T=T^*</math>.
</li>
<li> As examples, <math>T=S^*S</math> is strictly positive, for any <math>S\in B(H)</math> injective.
</li>
<li> In finite dimensions, we recover the usual strictly positive operators.
</li>
</ul>
|As before, all these assertions are elementary, the idea being as follows:
(1) This is something that we know, from the above theorem regarding the positive operators.
(2) This follows from the injectivity of <math>S</math>, because for any <math>x\neq0</math> we have:
<math display="block">
\begin{eqnarray*}
< S^*Sx,x >
&=& < Sx,Sx > \\
&=&||Sx||^2\\
& > &0
\end{eqnarray*}
</math>
(3) This is well-known, the idea being that the condition <math> < Tx,x >  > 0</math> corresponds to the usual strict positivity condition <math>A > 0</math>, at the level of the associated matrix.}}
As a comment, while any strictly positive matrix <math>A > 0</math> is well-known to be invertible, the analogue of this fact does not hold in infinite dimensions, a counterexample here being the following operator on <math>l^2(\mathbb N)</math>, which is strictly positive but not invertible:
<math display="block">
T=\begin{pmatrix}
1\\
&\frac{1}{2}\\
&&\frac{1}{3}\\
&&&\ddots
\end{pmatrix}
</math>
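This phenomenon can also be seen numerically, on the finite truncations of <math>T</math>, which are all invertible, but with the norms of their inverses blowing up. Here is a sketch, assuming numpy, with the truncation sizes being arbitrary choices:
<syntaxhighlight lang="python">
import numpy as np

for n in (5, 50, 500):
    Tn = np.diag(1.0 / np.arange(1, n + 1))          # diag(1, 1/2, ..., 1/n)
    print(n, np.linalg.norm(np.linalg.inv(Tn), 2))   # spectral norm of Tn^{-1} is n
</syntaxhighlight>
Thus, there is no common bound for the inverses of the truncations, in agreement with the fact that <math>T</math> itself has no bounded inverse.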
As a last remarkable class of operators, we have the normal ones. We have here:
{{proofcard|Theorem|theorem-8|For an operator <math>T\in B(H)</math>, the following conditions are equivalent, and if they are satisfied, we call <math>T</math> normal:
<ul><li> <math>TT^*=T^*T</math>.
</li>
<li> <math>||Tx||=||T^*x||</math>, for any <math>x\in H</math>.
</li>
</ul>
In finite dimensions, we recover in this way the usual normality notion.
|There are several assertions here, the idea being as follows:
<math>(1)\implies(2)</math> This is clear, due to the following computation:
<math display="block">
\begin{eqnarray*}
||Tx||^2
&=& < Tx,Tx > \\
&=& < T^*Tx,x > \\
&=& < TT^*x,x > \\
&=& < T^*x,T^*x > \\
&=&||T^*x||^2
\end{eqnarray*}
</math>
<math>(2)\implies(1)</math> This is clear as well, because the above computation shows that, when assuming <math>||Tx||=||T^*x||</math>, the following happens:
<math display="block">
< TT^*x,x > = < T^*Tx,x >
</math>
Thus, in terms of the operator <math>S=TT^*-T^*T</math>, we have:
<math display="block">
< Sx,x > =0
</math>
In order to finish, we use a polarization trick. We have the following formula:
<math display="block">
< S(x+y),x+y > = < Sx,x > + < Sy,y > + < Sx,y > + < Sy,x >
</math>
Since the first 3 terms vanish, the sum of the 2 last terms vanishes too. But, by using <math>S=S^*</math>, coming from <math>S=TT^*-T^*T</math>, we can process this latter vanishing as follows:
<math display="block">
\begin{eqnarray*}
< Sx,y >
&=&- < Sy,x > \\
&=&- < y,Sx > \\
&=&-\overline{ < Sx,y > }
\end{eqnarray*}
</math>
Thus we must have <math> < Sx,y > \in i\mathbb R</math>, and with <math>y\to iy</math> we obtain <math> < Sx,y > \in \mathbb R</math> too, and so <math> < Sx,y > =0</math>. Thus <math>S=0</math>, which gives <math>TT^*=T^*T</math>, as desired.
(3) Finally, in what regards finite dimensions, or more generally the case where our Hilbert space comes with a basis, <math>H=l^2(I)</math>, here the condition <math>TT^*=T^*T</math> corresponds to the usual normality condition <math>MM^*=M^*M</math> at the level of the associated matrices.}}
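As a numerical illustration, in finite dimensions, a matrix of the form <math>T=WDW^*</math> with <math>W</math> unitary and <math>D</math> diagonal complex is normal, without being self-adjoint or unitary in general. Here is a sketch, assuming numpy, with the diagonal entries being our own choices:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
T = W @ np.diag([1 + 1j, 2.0, 3j, 0.5]) @ W.conj().T   # normal, by construction

assert np.allclose(T @ T.conj().T, T.conj().T @ T)     # TT^* = T^*T

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(np.linalg.norm(T @ x), np.linalg.norm(T.conj().T @ x))
</syntaxhighlight>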
Observe that the normal operators generalize both the self-adjoint operators, and the unitaries. We will be back to such operators, on many occasions, in what follows.
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Principles of operator algebras|eprint=2208.03600|class=math.OA}}
