Our goal in what follows will be to prove that any holomorphic function is analytic. This is something quite subtle, which cannot be proved with bare hands, and requires a number of preliminaries. Getting to these preliminaries now, our claim is that a lot of useful knowledge, needed in order to deal with holomorphic functions, can be gained by further studying the analytic functions, and even the usual polynomials <math>P\in\mathbb C[X]</math>.
So, let us further study the polynomials <math>P\in\mathbb C[X]</math>, and other analytic functions. We already know from chapter 5 that in the polynomial case, <math>P\in\mathbb C[X]</math>, some interesting things happen, because any such polynomial has a root, and even <math>\deg(P)</math> roots, by recurrence. Continuing to look at polynomials, with the same methods, we are led to:
{{proofcard|Theorem|theorem-1|Any polynomial <math>P\in\mathbb C[X]</math> satisfies the maximum principle, in the sense that given a domain <math>D</math>, with boundary <math>\gamma</math>, we have:
<math display="block">
\exists x\in\gamma\quad,\quad |P(x)|=\max_{y\in D}|P(y)|
</math>
That is, the maximum of <math>|P|</math> over a domain is attained on its boundary.
|In order to prove this, we can split <math>D</math> into connected components, and then assume that <math>D</math> is connected. Moreover, we can assume that <math>D</math> has no holes, and so is homeomorphic to a disk, and even assume that <math>D</math> itself is a disk. But with this assumption made, the result follows by contradiction, by using the same arguments as in the proof of the existence of a root, from chapter 5. To be more precise, assume <math>\deg P\geq 1</math>, and that the maximum of <math>|P|</math> is attained at the center of a disk <math>D=D(z,r)</math>:
<math display="block">
|P(z)|=\max_{x\in D}|P(x)|
</math>
We can then write <math>P(z+t)\simeq P(z)+ct^k</math> with <math>c\neq0</math>, for <math>t</math> small, and by suitably choosing the argument of <math>t</math> on the unit circle we conclude, exactly as in chapter 5, that the function <math>|P|</math> cannot have a local maximum at <math>z</math>, as stated.}}
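As an illustration of the last step, assuming <math>P(z)\neq0</math>, which we may, since otherwise <math>P</math> would vanish on all of <math>D</math>, let us write <math>c=|c|e^{i\theta}</math> and <math>P(z)=|P(z)|e^{i\alpha}</math>, and choose <math>t=\varepsilon e^{i(\alpha-\theta)/k}</math>, with <math>\varepsilon>0</math> small. Then <math>ct^k=|c|\varepsilon^ke^{i\alpha}</math> points in the same direction as <math>P(z)</math>, and so:
<math display="block">
|P(z+t)|\simeq|P(z)+ct^k|=|P(z)|+|c|\varepsilon^k>|P(z)|
</math>
Thus <math>|P|</math> cannot have a maximum at the center <math>z</math>, which is the desired contradiction.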
A good explanation for the fact that the maximum principle holds for polynomials <math>P\in\mathbb C[X]</math> could be that the values of such a polynomial inside a disk can be recovered from its values on the boundary. And fortunately, this is indeed the case, and we have:
{{proofcard|Theorem|theorem-2|Given a polynomial <math>P\in\mathbb C[X]</math>, and a disk <math>D</math>, with boundary <math>\gamma</math>, we have the following formulae, with the integrations being the normalized, mass <math>1</math> ones:
<ul><li> <math>P</math> satisfies the plain mean value formula <math>P(x)=\int_DP(y)dy</math>.
</li>
<li> <math>P</math> satisfies the boundary mean value formula <math>P(x)=\int_\gamma P(y)dy</math>.
</li>
</ul>
|As a first observation, the two mean value formulae in the statement are equivalent, by restricting the attention to disks <math>D</math>, having as boundaries circles <math>\gamma</math>, and using annuli and polar coordinates for the proof of the equivalence. As for the formulae themselves, these can be checked by direct computation for a disk <math>D</math>, with the formulation in (2) being the most convenient. Indeed, for a monomial <math>P(x)=x^n</math> we have:
<math display="block">
\begin{eqnarray*}
\int_\gamma y^ndy
&=&\frac{1}{2\pi}\int_0^{2\pi}(x+re^{it})^ndt\\
&=&\frac{1}{2\pi}\int_0^{2\pi}\sum_{k=0}^n\binom{n}{k}x^k(re^{it})^{n-k}dt\\
&=&\sum_{k=0}^n\binom{n}{k}x^kr^{n-k}\frac{1}{2\pi}\int_0^{2\pi}e^{i(n-k)t}dt\\
&=&\sum_{k=0}^n\binom{n}{k}x^kr^{n-k}\delta_{kn}\\
&=&x^n
\end{eqnarray*}
</math>
Here we have used the following key identity, valid for any exponent <math>m\in\mathbb Z</math>:
<math display="block">
\begin{eqnarray*}
\frac{1}{2\pi}\int_0^{2\pi}e^{imt}dt
&=&\frac{1}{2\pi}\int_0^{2\pi}\left(\cos(mt)+i\sin(mt)\right)dt\\
&=&\delta_{m0}+i\cdot 0\\
&=&\delta_{m0}
\end{eqnarray*}
</math>
Thus, we have the result for monomials, and the general case follows by linearity.}}
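As an illustration of the first observation in the above proof, here is one way of deducing the plain mean value formula (1) from the boundary formula (2): writing the normalized integral over the disk <math>D=D(x,r)</math> in polar coordinates, as an average of boundary averages over the circles of radius <math>\rho\leq r</math> centered at <math>x</math>, we obtain:
<math display="block">
\begin{eqnarray*}
\int_DP(y)dy
&=&\frac{1}{\pi r^2}\int_0^r\int_0^{2\pi}P(x+\rho e^{it})\,\rho\,dt\,d\rho\\
&=&\frac{1}{\pi r^2}\int_0^r2\pi\rho\left(\frac{1}{2\pi}\int_0^{2\pi}P(x+\rho e^{it})dt\right)d\rho\\
&=&\frac{1}{\pi r^2}\int_0^r2\pi\rho\,P(x)\,d\rho\\
&=&P(x)
\end{eqnarray*}
</math>
Here we have used (2), applied to each circle of radius <math>\rho</math> around <math>x</math>.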
All the above is very nice, but we can in fact do even better, with a more powerful integration formula. Let us start with some preliminaries. We first have:
{{proofcard|Proposition|proposition-1|We can integrate functions <math>f</math> over curves <math>\gamma</math> by setting
<math display="block">
\int_\gamma f(x)dx=\int_a^bf(\gamma(t))\gamma'(t)dt
</math>
with this quantity being independent of the parametrization <math>\gamma:[a,b]\to\mathbb C</math>.
|We must prove that the quantity in the statement is independent of the parametrization. In other words, we must prove that if we pick two different parametrizations <math>\gamma,\eta:[a,b]\to\mathbb C</math> of our curve, then we have the following formula:
<math display="block">
\int_a^bf(\gamma(t))\gamma'(t)dt=\int_a^bf(\eta(t))\eta'(t)dt
</math>
But for this purpose, let us write <math>\gamma=\eta\phi</math>, with <math>\phi:[a,b]\to[a,b]</math> being a certain function, that we can assume to be bijective, via an elementary cut-and-paste argument. By using the chain rule for derivatives, and the change of variable formula, we have:
<math display="block">
\begin{eqnarray*}
\int_a^bf(\gamma(t))\gamma'(t)dt
&=&\int_a^bf(\eta\phi(t))(\eta\phi)'(t)dt\\
&=&\int_a^bf(\eta\phi(t))\eta'(\phi(t))\phi'(t)dt\\
&=&\int_a^bf(\eta(t))\eta'(t)dt
\end{eqnarray*}
</math>
Thus, we are led to the conclusions in the statement.}}
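As a basic example of such an integral, which is behind many of the computations to follow, consider the function <math>f(x)=1/x</math>, integrated over the unit circle, parametrized as <math>\gamma(t)=e^{it}</math>, with <math>t\in[0,2\pi]</math>:
<math display="block">
\int_\gamma\frac{dx}{x}=\int_0^{2\pi}e^{-it}\cdot ie^{it}\,dt=2\pi i
</math>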
The main properties of the above integration method are as follows:
{{proofcard|Proposition|proposition-2|We have the following formula, for a union of paths:
<math display="block">
\int_{\gamma\cup\eta}f(x)dx=\int_\gamma f(x)dx+\int_\eta f(x)dx
</math>
Also, when reversing the path, the integral changes its sign.
|Here the first assertion is clear from definitions, and the second assertion comes from the change of variable formula, by using Proposition 6.24.}}
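Regarding the second assertion, here is one way of checking it: parametrizing the reversed path as <math>\tilde\gamma(t)=\gamma(a+b-t)</math>, with <math>t\in[a,b]</math>, and using the change of variable <math>s=a+b-t</math>, we obtain:
<math display="block">
\int_{\tilde\gamma}f(x)dx=-\int_a^bf(\gamma(a+b-t))\gamma'(a+b-t)dt=-\int_a^bf(\gamma(s))\gamma'(s)ds=-\int_\gamma f(x)dx
</math>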
Getting back now to polynomials, we have the following result:
{{proofcard|Theorem|theorem-3|Any polynomial <math>P\in\mathbb C[X]</math> satisfies the Cauchy formula
<math display="block">
P(x)=\frac{1}{2\pi i}\int_\gamma\frac{P(y)}{y-x}\,dy
</math>
with the integration over <math>\gamma</math> being constructed as above.
|This follows by using abstract arguments and computations similar to those in the proof of Theorem 6.23. Indeed, by linearity we can assume <math>P(x)=x^n</math>. Also, by using a cut-and-paste argument, we can assume that we are on a circle:
<math display="block">
\gamma:[0,2\pi]\to\mathbb C\quad,\quad \gamma(t)=x+re^{it}
</math>
By using now the computation from the proof of Theorem 6.23, we obtain:
<math display="block">
\begin{eqnarray*}
\int_\gamma\frac{y^n}{y-x}\,dy
&=&\int_0^{2\pi}\frac{(x+re^{it})^n}{re^{it}}\,rie^{it}dt\\
&=&i\int_0^{2\pi}(x+re^{it})^ndt\\
&=&i\cdot 2\pi x^n
\end{eqnarray*}
</math>
Thus, we are led to the formula in the statement.}}
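As an illustration, in the particular case of the constant polynomial <math>P=1</math>, the above formula reduces to the following identity, with <math>\gamma</math> being a circle around <math>x</math>, which is worth recording, since it will implicitly reappear several times below:
<math display="block">
\frac{1}{2\pi i}\int_\gamma\frac{dy}{y-x}=1
</math>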
All this is quite interesting, and obviously, we are now into some serious mathematics. Importantly, Theorem 6.22, Theorem 6.23 and Theorem 6.26 provide us with a path for proving the converse of Theorem 6.21. Indeed, if we manage to prove the Cauchy formula for any holomorphic function <math>f:X\to\mathbb C</math>, then it will follow that our function is in fact analytic, and so infinitely differentiable. So, let us start with the following result:
{{proofcard|Theorem|theorem-4|The Cauchy formula, namely
<math display="block">
f(x)=\frac{1}{2\pi i}\int_\gamma\frac{f(y)}{y-x}\,dy
</math>
holds for any holomorphic function <math>f:X\to\mathbb C</math>.
|This is something standard, which can be proved as follows:
(1) Our first claim is that given <math>f\in H(X)</math>, with <math>f'\in C(X)</math>, the integral of <math>f'</math> vanishes on any closed path. Indeed, by using the change of variable formula, for a closed path <math>\gamma:[a,b]\to X</math>, with <math>\gamma(a)=\gamma(b)</math>, we have:
<math display="block">
\begin{eqnarray*}
\int_\gamma f'(x)dx
&=&\int_a^bf'(\gamma(t))\gamma'(t)dt\\
&=&f(\gamma(b))-f(\gamma(a))\\
&=&0
\end{eqnarray*}
</math>
(2) Our second claim is that given <math>f\in H(X)</math> and a triangle <math>\Delta\subset X</math>, with the integral being taken over its boundary, we have:
<math display="block">
\int_\Delta f(x)dx=0
</math>
Indeed, let us call <math>\Delta=ABC</math> our triangle. Now consider the midpoints <math>A',B',C'</math> of the edges <math>BC,CA,AB</math>, and then consider the following smaller triangles:
<math display="block">
\Delta_1=AC'B'\quad,\quad\Delta_2=BA'C'\quad,\quad\Delta_3=CB'A'\quad,\quad\Delta_4=A'B'C'
</math>
These smaller triangles then partition <math>\Delta</math>, and due to our above conventions for the vertex ordering, which produce cancellations when integrating over them, we have:
<math display="block">
\int_\Delta f(x)dx=\sum_{i=1}^4\int_{\Delta_i}f(x)dx
</math>
Thus we can pick, among the triangles <math>\Delta_i</math>, a triangle <math>\Delta^{(1)}</math> such that:
<math display="block">
\left|\int_\Delta f(x)dx\right|\leq 4\left|\int_{\Delta^{(1)}}f(x)dx\right|
</math>
In fact, by repeating the procedure, we obtain triangles <math>\Delta^{(n)}</math> such that:
<math display="block">
\left|\int_\Delta f(x)dx\right|\leq 4^n\left|\int_{\Delta^{(n)}}f(x)dx\right|
</math>
(3) Now let <math>z</math> be the limiting point of these triangles <math>\Delta^{(n)}</math>, and fix <math>\varepsilon > 0</math>. By using the fact that the functions <math>1,x</math> integrate to <math>0</math> over closed paths, which follows from (1), we obtain the following estimate, with <math>n\in\mathbb N</math> being big enough, and <math>L</math> being the perimeter of <math>\Delta</math>:
<math display="block">
\begin{eqnarray*}
\left|\int_{\Delta^{(n)}}f(x)dx\right|
&=&\left|\int_{\Delta^{(n)}}\left(f(x)-f(z)-f'(z)(x-z)\right)dx\right|\\
&\leq&\int_{\Delta^{(n)}}\left|f(x)-f(z)-f'(z)(x-z)\right|dx\\
&\leq&\int_{\Delta^{(n)}}\varepsilon|x-z|dx\\
&\leq&\varepsilon(2^{-n}L)^2
\end{eqnarray*}
</math>
Here the estimate in the third line comes from the differentiability of <math>f</math> at <math>z</math>, valid for <math>n</math> big enough, and the last one from the fact that both <math>|x-z|</math> and the length of the contour are bounded by the perimeter of <math>\Delta^{(n)}</math>, which is <math>2^{-n}L</math>. Now by combining this with the estimate in (2), we obtain <math>\left|\int_\Delta f(x)dx\right|\leq\varepsilon L^2</math>, and since <math>\varepsilon > 0</math> was arbitrary, this proves our claim.
(4) The rest is quite routine. First, we can pass from triangles to boundaries of convex sets, in a straightforward way, with the same conclusion as in (2), namely:
<math display="block">
\int_\gamma f(x)dx=0
</math>
Getting back to what we want to prove, namely the Cauchy formula for an arbitrary holomorphic function <math>f\in H(X)</math>, let <math>x\in X</math>, and consider the following function:
<math display="block">
g(y)=\begin{cases}
\frac{f(y)-f(x)}{y-x}&(y\neq x)\\
f'(x)&(y=x)
\end{cases}
</math>
Now assuming that <math>\gamma</math> encloses a convex set, we can apply what we found, namely vanishing of the integral, to this function <math>g</math>, and we obtain the Cauchy formula for <math>f</math>.
(5) Finally, the extension to general curves is standard, and standard as well is the discussion of what exactly happens at <math>x</math>, in the above proof. See Rudin <ref name="ru2">W. Rudin, Real and complex analysis, McGraw-Hill (1966).</ref>.}}
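As an illustration of the key step (4), here is how the Cauchy formula can be extracted from the vanishing <math>\int_\gamma g(y)dy=0</math>. Observing that on <math>\gamma</math> itself we have <math>g(y)=\frac{f(y)-f(x)}{y-x}</math>, because <math>x</math> does not lie on <math>\gamma</math>, and using the identity <math>\int_\gamma\frac{dy}{y-x}=2\pi i</math>, coming from the polynomial case, we obtain:
<math display="block">
0=\int_\gamma g(y)dy=\int_\gamma\frac{f(y)-f(x)}{y-x}\,dy=\int_\gamma\frac{f(y)}{y-x}\,dy-2\pi i\,f(x)
</math>
Thus we have <math>f(x)=\frac{1}{2\pi i}\int_\gamma\frac{f(y)}{y-x}\,dy</math>, which is the Cauchy formula in the statement.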
As a main application of the Cauchy formula, we have:
{{proofcard|Theorem|theorem-5|The following conditions are equivalent, for a function <math>f:X\to\mathbb C</math>:
<ul><li> <math>f</math> is holomorphic.
</li>
<li> <math>f</math> is infinitely differentiable.
</li>
<li> <math>f</math> is analytic.
</li>
<li> The Cauchy formula holds for <math>f</math>.
</li>
</ul>
|This is routine from what we have, the idea being as follows:
<math>(1)\implies(4)</math> is non-trivial, but we know this from Theorem 6.27.
<math>(4)\implies(3)</math> is something quite straightforward, because we can expand the Cauchy kernel as a power series in the Cauchy formula, and we conclude that our function is indeed analytic, as detailed below.
<math>(3)\implies(2)\implies(1)</math> are both elementary, known from Theorem 6.21.}}
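Regarding <math>(4)\implies(3)</math>, here is a sketch of the expansion which is involved: with <math>\gamma</math> being a circle centered at a point <math>x_0</math>, and with <math>x</math> lying inside it, we can expand the Cauchy kernel as a geometric series, uniformly convergent on <math>\gamma</math>, namely <math>\frac{1}{y-x}=\sum_{n\geq0}\frac{(x-x_0)^n}{(y-x_0)^{n+1}}</math>, and the Cauchy formula becomes:
<math display="block">
f(x)=\sum_{n\geq0}c_n(x-x_0)^n\quad,\quad c_n=\frac{1}{2\pi i}\int_\gamma\frac{f(y)}{(y-x_0)^{n+1}}\,dy
</math>
Thus <math>f</math> is analytic around <math>x_0</math>, as desired.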
As another application of the Cauchy formula, we have:
{{proofcard|Theorem|theorem-6|Any holomorphic function <math>f:X\to\mathbb C</math> satisfies the maximum principle, in the sense that given a domain <math>D</math>, with boundary <math>\gamma</math>, we have:
<math display="block">
\exists x\in\gamma\quad,\quad |f(x)|=\max_{y\in D}|f(y)|
</math>
That is, the maximum of <math>|f|</math> over a domain is attained on its boundary.
|This follows indeed from the Cauchy formula, or rather from the mean value formulae deduced from it below, which show that <math>|f(x)|</math> is bounded by the average of <math>|f|</math> over any small circle around <math>x</math>, and so that <math>|f|</math> cannot have a strict maximum inside <math>D</math>. Observe that the converse is not true, for instance because functions like <math>\bar{x}</math> also satisfy the maximum principle. We will be back to this later, when talking about harmonic functions.}}
As before with polynomials, a good explanation for the fact that the maximum principle holds could be that the values of our function inside a disk can be recovered from its values on the boundary. And fortunately, this is indeed the case, and we have:
{{proofcard|Theorem|theorem-7|Given a holomorphic function <math>f:X\to\mathbb C</math>, and a disk <math>D</math>, with boundary <math>\gamma</math>, the following happen:
<ul><li> <math>f</math> satisfies the plain mean value formula <math>f(x)=\int_Df(y)dy</math>.
</li>
<li> <math>f</math> satisfies the boundary mean value formula <math>f(x)=\int_\gamma f(y)dy</math>.
</li>
</ul>
|As usual, this follows from the Cauchy formula, with of course some care in passing from the integrals constructed as in Proposition 6.24 to the integrals viewed as averages, which are the ones that we refer to in the present statement.}}
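For illustration, here is how the boundary mean value formula in (2) can be deduced from the Cauchy formula, with <math>D=D(x,r)</math>, with the boundary circle parametrized as <math>\gamma(t)=x+re^{it}</math>, and with the integrals on the right being the normalized, average ones:
<math display="block">
f(x)=\frac{1}{2\pi i}\int_0^{2\pi}\frac{f(x+re^{it})}{re^{it}}\,ire^{it}dt=\frac{1}{2\pi}\int_0^{2\pi}f(x+re^{it})dt=\int_\gamma f(y)dy
</math>
As for the plain mean value formula in (1), this follows from (2) by integrating over the circles of radius <math>\rho\leq r</math> around <math>x</math>, in polar coordinates, exactly as in the polynomial case.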
Finally, as yet another application of the Cauchy formula, which is something nice-looking and conceptual, we have the following statement, called the Liouville theorem:
{{proofcard|Theorem|theorem-8|An entire, bounded holomorphic function
<math display="block">
f:\mathbb C\to\mathbb C\quad,\quad |f|\leq M
</math>
must be constant. In particular, knowing <math>f(z)\to0</math> as <math>z\to\infty</math> gives <math>f=0</math>.
|This follows as usual from the Cauchy formula, namely:
<math display="block">
f(x)=\frac{1}{2\pi i}\int_\gamma\frac{f(y)}{y-x}\,dy
</math>
Alternatively, we can view this as a consequence of Theorem 6.30, because given two points <math>x\neq y</math>, we can view the values of <math>f</math> at these points as averages over big disks centered at these points, say <math>D=D_x(R)</math> and <math>E=D_y(R)</math>, with <math>R\gg0</math>:
<math display="block">
f(x)=\int_Df(z)dz\quad,\quad f(y)=\int_Ef(z)dz
</math>
Indeed, the point is that when the radius <math>R</math> goes to <math>\infty</math>, these averages tend to be equal, and so we have <math>f(x)\simeq f(y)</math>, which gives <math>f(x)=f(y)</math> in the limit.}}
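Here is a sketch of the estimate behind this averaging argument, with <math>d=|x-y|</math> being fixed, and with <math>R\to\infty</math>. The set <math>D\setminus E</math> is contained in the annulus <math>R-d<|z-x|\leq R</math>, and so has area at most <math>2\pi Rd</math>, and the same holds for <math>E\setminus D</math>. Thus, viewing the above mean values as usual area integrals divided by <math>\pi R^2</math>, and using <math>|f|\leq M</math>, we obtain:
<math display="block">
|f(x)-f(y)|=\frac{1}{\pi R^2}\left|\int_{D\setminus E}f(z)dz-\int_{E\setminus D}f(z)dz\right|\leq\frac{M\cdot4\pi Rd}{\pi R^2}=\frac{4Md}{R}\to0
</math>
Thus <math>f(x)=f(y)</math>, and since the points <math>x,y</math> were arbitrary, <math>f</math> is constant, as stated.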
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Calculus and applications|eprint=2401.00911|class=math.CO}}
==References==
{{reflist}}