Multiple integrals



13a. Multiple integrals

Welcome to advanced calculus. We have kept the best for the end, and in this whole last part of the present book we will learn how to integrate functions of several variables. The general, fascinating question that we will be interested in is as follows: \begin{question} Given a function [math]f:\mathbb R^N\to\mathbb R[/math], how to compute

[[math]] \int_{\mathbb R^N}f(x)dx_1\ldots dx_N [[/math]]

or at least, what are the rules satisfied by such integrals? \end{question} Here we adopt, somehow by definition, the convention that the above integral is constructed a bit like the one-variable integrals, via Riemann sums as in chapter 4, by using this time divisions of the space [math]\mathbb R^N[/math] into small [math]N[/math]-dimensional cubes. There is of course some theory to be worked out here, with details which are not exactly trivial, but since we have not really done this in chapter 4, we will not do it here either. Let us focus on computations.


At first glance, solving Question 13.1 looks like an easy task, because we can iterate one-variable integrations, which is something that we know well, from chapter 4. For instance the integral of a function [math]f:\mathbb R^2\to\mathbb R[/math] can be computed by using:

[[math]] \int_{\mathbb R^2}f(z)dz=\int_\mathbb R\int_\mathbb Rf(x,y)dxdy [[/math]]


This being said, we are faced right away with a difficulty, when using this method. Indeed, we can use as well the following rival method, yielding the same answer:

[[math]] \int_{\mathbb R^2}f(z)dz=\int_\mathbb R\int_\mathbb Rf(x,y)dydx [[/math]]
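Before worrying about which order is best, we can at least doublecheck that both orders produce the same number. Here is a minimal Python sketch of this, using a plain midpoint Riemann sum on a truncated domain, for a sample function whose integral we know by other means; the helper name `riemann_2d` is ours, chosen for illustration:

```python
import math

def riemann_2d(f, a, b, n, order="xy"):
    # Midpoint Riemann sum of f over the square [a, b]^2,
    # iterating the two variables in the given order.
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u = a + (i + 0.5) * h
            v = a + (j + 0.5) * h
            total += f(u, v) if order == "xy" else f(v, u)
    return total * h * h

# Sample function: f(x, y) = exp(-x^2 - 2y^2), whose integral over the
# plane is sqrt(pi) * sqrt(pi/2) = pi / sqrt(2), by the Gauss formula below.
f = lambda x, y: math.exp(-x * x - 2 * y * y)
I_xy = riemann_2d(f, -5.0, 5.0, 200, "xy")
I_yx = riemann_2d(f, -5.0, 5.0, 200, "yx")
```

Both numbers agree up to rounding, and both approximate [math]\pi/\sqrt{2}[/math] well, the truncation to [math][-5,5]^2[/math] being harmless, since the tails are negligible.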


So, which method is the best? Depends on [math]f[/math], of course. However, things do not stop here, because in certain situations it is better to use polar coordinates, as follows:

[[math]] \int_{\mathbb R^2}f(z)dz=\int_0^{2\pi}\int_0^\infty f(r\cos t,r\sin t)Jdrdt [[/math]]


Here the extra factor is [math]J=dxdy/drdt[/math], which remains to be computed. And for the picture to be complete, we have as well the following fourth formula:

[[math]] \int_{\mathbb R^2}f(z)dz=\int_0^\infty\int_0^{2\pi} f(r\cos t,r\sin t)Jdtdr [[/math]]


In short, you get my point, I hope: things are quite complicated in several variables, with the complications starting already in 2 variables. And actually, if you think a bit about 3 variables, it is quite clear that the above complications can become true nightmares, leading to long nights spent computing integrals, there in 3 variables.


So, what to do? Work and patience, of course, and here is our plan:


(1) In this chapter we will get used to the multiple integrals, with some general rules for their computation, including a rule for computing the above factor [math]J=dxdy/drdt[/math]. Then, we will enjoy all this by computing some integrals over the spheres in [math]\mathbb R^N[/math].


(2) In chapter 14 we will review probability theory, and further develop it, notably with the theory of normal variables. And finally, in chapters 15-16 we will get back to physics, with some sharp results in 3 dimensions, and then in infinite dimensions.


This sounds good, but as a matter of doublechecking what we are doing, and making sure that it is wise indeed, let us ask the cat about what he thinks. And the cat answers: \begin{cat} Not quite sure about your formula

[[math]] \int_\mathbb R\int_\mathbb Rf(x,y)dxdy=\int_\mathbb R\int_\mathbb Rf(x,y)dydx [[/math]]

and I doubt too that you can properly compute [math]J=dxdy/drdt[/math]. Read Rudin. \end{cat} Oh dear. What can I say. Sure I read Rudin, as a Gen X mathematician, but we are now well into the 21st century, and shall I go ahead with heavy measure theory, for having all this properly developed? Not quite sure, I'd rather stick to my plan.


So, ignoring what the cat says, but getting however a copy of Rudin's red book, and not forgetting about Mao's too, let us get back to our plan. As a first goal, we would like to compute the factor [math]J=dxdy/drdt[/math]. Let us start with something that we know, in 1D:

Proposition

We have the change of variable formula

[[math]] \int_a^bf(x)dx=\int_c^df(\varphi(t))\varphi'(t)dt [[/math]]
where [math]c=\varphi^{-1}(a)[/math] and [math]d=\varphi^{-1}(b)[/math].


Show Proof

This follows with [math]f=F'[/math], via the following differentiation rule:

[[math]] (F\circ\varphi)'(t)=F'(\varphi(t))\varphi'(t) [[/math]]


Indeed, by integrating between [math]c[/math] and [math]d[/math], we obtain the result.
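As a quick doublecheck of this formula, here is a minimal Python sketch comparing the two sides numerically, for [math]\varphi(t)=t^2[/math] and [math]f=\sin[/math], with a midpoint rule; the helper `midpoint` is ours:

```python
import math

def midpoint(g, a, b, n=20000):
    # Composite midpoint rule for the integral of g on [a, b].
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Change of variables x = phi(t) = t^2, so c = 1, d = 2 for [a, b] = [1, 4].
lhs = midpoint(math.sin, 1.0, 4.0)
rhs = midpoint(lambda t: math.sin(t * t) * 2 * t, 1.0, 2.0)
exact = math.cos(1.0) - math.cos(4.0)
```

Both sides match the exact value [math]\cos 1-\cos 4[/math], as they should.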

In several variables now, we can only expect the above [math]\varphi'(t)[/math] factor to be replaced by something similar, a sort of “derivative of [math]\varphi[/math], arising as a real number”. But this can only be the Jacobian [math]\det(\varphi'(t))[/math], and with this in mind, we are led to:

Theorem

Given a transformation [math]\varphi=(\varphi_1,\ldots,\varphi_N)[/math], we have

[[math]] \int_Ef(x)dx=\int_{\varphi^{-1}(E)}f(\varphi(t))|J_\varphi(t)|dt [[/math]]
with the [math]J_\varphi[/math] quantity, called Jacobian, being given by

[[math]] J_\varphi(t)=\det\left[\left(\frac{\partial\varphi_i}{\partial t_j}(t)\right)_{ij}\right] [[/math]]

and with this generalizing the formula from Proposition 13.3.


Show Proof

This is something quite tricky, the idea being as follows:


(1) Observe first that this generalizes indeed the change of variable formula in 1 dimension, from Proposition 13.3, the point here being that the absolute value on the derivative appears so as to compensate for the lack of explicit bounds for the integral.


(2) In general now, we can first argue that, the formula in the statement being linear in [math]f[/math], we can assume [math]f=1[/math]. Thus we want to prove [math]vol(E)=\int_{\varphi^{-1}(E)}|J_\varphi(t)|dt[/math], and with [math]D=\varphi^{-1}(E)[/math], this amounts to proving [math]vol(\varphi(D))=\int_D|J_\varphi(t)|dt[/math].


(3) Now since this latter formula is additive with respect to [math]D[/math], it is enough to prove that [math]vol(\varphi(D))=\int_D J_\varphi(t)dt[/math], for small cubes [math]D[/math], and assuming [math]J_\varphi \gt 0[/math]. But this follows by using the definition of the determinant as a volume, as in chapter 9.


(4) The details and computations however are quite non-trivial, and can be found for instance in Rudin [1]. So, please read that. Reading the complete proof of the present theorem from Rudin is part of the standard math experience.

Finally, speaking of Rudin, and getting back to Cat 13.2, there are some deep truths there. But, remember our Advice 1.3, from the beginning of this book. Full rigor does not guarantee that your computation is correct, you always have to doublecheck. So an alternative method is that of using less rigor, and more doublechecks at the end. And this will be our philosophy in what follows, with all our formulae below being correct.

13b. Spherical coordinates

Time now to do some exciting computations, with the technology that we have. In what regards the applications of Theorem 13.4, these often come via:

Proposition

We have polar coordinates in [math]2[/math] dimensions,

[[math]] \begin{cases} x\!\!\!&=\ r\cos t\\ y\!\!\!&=\ r\sin t \end{cases} [[/math]]
the corresponding Jacobian being [math]J=r[/math].


Show Proof

This is elementary, the Jacobian being:

[[math]] \begin{eqnarray*} J &=&\begin{vmatrix} \frac{d(r\cos t)}{dr}&&\frac{d(r\cos t)}{dt}\\ \\ \frac{d(r\sin t)}{dr}&&\frac{d(r\sin t)}{dt} \end{vmatrix}\\ &=&\begin{vmatrix} \cos t&-r\sin t\\ \sin t&r\cos t \end{vmatrix}\\ &=&r\cos^2t+r\sin^2t\\ &=&r \end{eqnarray*} [[/math]]


Thus, we have indeed the formula in the statement.

We can now compute the Gauss integral, which is the best calculus formula ever:

Theorem

We have the following formula,

[[math]] \int_\mathbb Re^{-x^2}dx=\sqrt{\pi} [[/math]]
called Gauss integral formula.


Show Proof

Let [math]I[/math] be the above integral. By using polar coordinates, we obtain:

[[math]] \begin{eqnarray*} I^2 &=&\int_\mathbb R\int_\mathbb Re^{-x^2-y^2}dxdy\\ &=&\int_0^{2\pi}\int_0^\infty e^{-r^2}rdrdt\\ &=&2\pi\int_0^\infty\left(-\frac{e^{-r^2}}{2}\right)'dr\\ &=&2\pi\left[0-\left(-\frac{1}{2}\right)\right]\\ &=&\pi \end{eqnarray*} [[/math]]


Thus, we are led to the formula in the statement.
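Following our philosophy, let us doublecheck this by computing twice. Here is a minimal Python sketch approximating the Gauss integral directly, with a midpoint rule on a truncated domain, the tails beyond [math]|x|=8[/math] being negligible:

```python
import math

# Midpoint rule for the integral of exp(-x^2) over [-8, 8].
n, a, b = 4000, -8.0, 8.0
h = (b - a) / n
I = sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n)) * h
```

The result agrees with [math]\sqrt{\pi}\simeq 1.7725[/math] to high precision.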

Moving now to 3 dimensions, we have here the following result:

Proposition

We have spherical coordinates in [math]3[/math] dimensions,

[[math]] \begin{cases} x\!\!\!&=\ r\cos s\\ y\!\!\!&=\ r\sin s\cos t\\ z\!\!\!&=\ r\sin s\sin t \end{cases} [[/math]]
the corresponding Jacobian being [math]J(r,s,t)=r^2\sin s[/math].


Show Proof

The fact that we have indeed spherical coordinates is clear. Regarding now the Jacobian, this is given by the following formula:

[[math]] \begin{eqnarray*} &&J(r,s,t)\\ &=&\begin{vmatrix} \cos s&-r\sin s&0\\ \sin s\cos t&r\cos s\cos t&-r\sin s\sin t\\ \sin s\sin t&r\cos s\sin t&r\sin s\cos t \end{vmatrix}\\ &=&r^2\sin s\sin t \begin{vmatrix}\cos s&-r\sin s\\ \sin s\sin t&r\cos s\sin t\end{vmatrix} +r\sin s\cos t\begin{vmatrix}\cos s&-r\sin s\\ \sin s\cos t&r\cos s\cos t\end{vmatrix}\\ &=&r\sin s\sin^2 t \begin{vmatrix}\cos s&-r\sin s\\ \sin s&r\cos s\end{vmatrix} +r\sin s\cos^2 t\begin{vmatrix}\cos s&-r\sin s\\ \sin s&r\cos s\end{vmatrix}\\ &=&r\sin s(\sin^2t+\cos^2t)\begin{vmatrix}\cos s&-r\sin s\\ \sin s&r\cos s\end{vmatrix}\\ &=&r\sin s\times 1\times r\\ &=&r^2\sin s \end{eqnarray*} [[/math]]


Thus, we have indeed the formula in the statement.
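As a doublecheck of this computation, here is a minimal Python sketch evaluating the [math]3\times3[/math] determinant of partial derivatives at random points, and comparing it with [math]r^2\sin s[/math]; the function names are ours:

```python
import math, random

def jacobian_spherical(r, s, t):
    # Matrix of partial derivatives of (r cos s, r sin s cos t, r sin s sin t)
    # with respect to (r, s, t), then its 3x3 determinant, by cofactor expansion.
    m = [
        [math.cos(s), -r * math.sin(s), 0.0],
        [math.sin(s) * math.cos(t), r * math.cos(s) * math.cos(t), -r * math.sin(s) * math.sin(t)],
        [math.sin(s) * math.sin(t), r * math.cos(s) * math.sin(t), r * math.sin(s) * math.cos(t)],
    ]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(1)
pts = [(random.uniform(0.1, 3), random.uniform(0.1, math.pi - 0.1),
        random.uniform(0, 2 * math.pi)) for _ in range(100)]
max_err = max(abs(jacobian_spherical(r, s, t) - r * r * math.sin(s)) for r, s, t in pts)
```

The maximal discrepancy over 100 random points is at rounding level.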

Let us work out now the general spherical coordinate formula, in arbitrary [math]N[/math] dimensions. The formula here, which generalizes those at [math]N=2,3[/math], is as follows:

Theorem

We have spherical coordinates in [math]N[/math] dimensions,

[[math]] \begin{cases} x_1\!\!\!&=\ r\cos t_1\\ x_2\!\!\!&=\ r\sin t_1\cos t_2\\ \vdots\\ x_{N-1}\!\!\!&=\ r\sin t_1\sin t_2\ldots\sin t_{N-2}\cos t_{N-1}\\ x_N\!\!\!&=\ r\sin t_1\sin t_2\ldots\sin t_{N-2}\sin t_{N-1} \end{cases} [[/math]]
the corresponding Jacobian being given by the following formula,

[[math]] J(r,t)=r^{N-1}\sin^{N-2}t_1\sin^{N-3}t_2\,\ldots\,\sin^2t_{N-3}\sin t_{N-2} [[/math]]
and with this generalizing the known formulae at [math]N=2,3[/math].


Show Proof

As before, the fact that we have spherical coordinates is clear. Regarding now the Jacobian, also as before, by developing over the last column, we have:

[[math]] \begin{eqnarray*} J_N &=&r\sin t_1\ldots\sin t_{N-2}\sin t_{N-1}\times \sin t_{N-1}J_{N-1}\\ &+&r\sin t_1\ldots \sin t_{N-2}\cos t_{N-1}\times\cos t_{N-1}J_{N-1}\\ &=&r\sin t_1\ldots\sin t_{N-2}(\sin^2 t_{N-1}+\cos^2 t_{N-1})J_{N-1}\\ &=&r\sin t_1\ldots\sin t_{N-2}J_{N-1} \end{eqnarray*} [[/math]]


Thus, we obtain the formula in the statement, by recurrence.

As a comment here, the above convention for spherical coordinates is one among many, designed to best work in arbitrary [math]N[/math] dimensions. Also, in what regards the precise range of the angles [math]t_1,\ldots,t_{N-1}[/math], we will leave this to you, as an instructive exercise.
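As yet another doublecheck, at [math]N=4[/math] the above formula predicts [math]J=r^3\sin^2t_1\sin t_2[/math], and we can verify this numerically, without computing any partial derivatives by hand, via central finite differences and Gaussian elimination. All names in this minimal Python sketch are ours:

```python
import math

def sph(u):
    # (r, t1, t2, t3) -> (x1, x2, x3, x4), spherical coordinates at N = 4.
    r, t1, t2, t3 = u
    s1, s2 = math.sin(t1), math.sin(t2)
    return [r * math.cos(t1), r * s1 * math.cos(t2),
            r * s1 * s2 * math.cos(t3), r * s1 * s2 * math.sin(t3)]

def det4(m):
    # Determinant by Gaussian elimination with partial pivoting.
    m = [row[:] for row in m]
    d = 1.0
    for i in range(4):
        p = max(range(i, 4), key=lambda k: abs(m[k][i]))
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for k in range(i + 1, 4):
            f = m[k][i] / m[i][i]
            for j in range(i, 4):
                m[k][j] -= f * m[i][j]
    return d

u, h = [1.3, 0.7, 1.1, 2.0], 1e-6

def partial(i, j):
    # Central difference approximation of d(x_i)/d(u_j) at the point u.
    up, um = u[:], u[:]
    up[j] += h
    um[j] -= h
    return (sph(up)[i] - sph(um)[i]) / (2 * h)

numeric = det4([[partial(i, j) for j in range(4)] for i in range(4)])
formula = u[0] ** 3 * math.sin(u[1]) ** 2 * math.sin(u[2])
```

The numerical Jacobian matches the formula at the chosen sample point.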


As an application, let us compute the volumes of spheres. For this purpose, we must understand how the products of coordinates integrate over spheres. Let us start with the case [math]N=2[/math]. Here the sphere is the unit circle [math]\mathbb T[/math], and with [math]z=e^{it}[/math] the coordinates are [math]\cos t,\sin t[/math]. We can first integrate arbitrary powers of these coordinates, as follows:

Proposition

We have the following formulae,

[[math]] \int_0^{\pi/2}\cos^pt\,dt=\int_0^{\pi/2}\sin^pt\,dt=\left(\frac{\pi}{2}\right)^{\varepsilon(p)}\frac{p!!}{(p+1)!!} [[/math]]
where [math]\varepsilon(p)=1[/math] if [math]p[/math] is even, and [math]\varepsilon(p)=0[/math] if [math]p[/math] is odd, and where

[[math]] m!!=(m-1)(m-3)(m-5)\ldots [[/math]]
with the product ending at [math]2[/math] if [math]m[/math] is odd, and ending at [math]1[/math] if [math]m[/math] is even.


Show Proof

Let us first compute the integral on the left in the statement:

[[math]] I_p=\int_0^{\pi/2}\cos^pt\,dt [[/math]]


We do this by partial integration. We have the following formula:

[[math]] \begin{eqnarray*} (\cos^pt\sin t)' &=&p\cos^{p-1}t(-\sin t)\sin t+\cos^pt\cos t\\ &=&p\cos^{p+1}t-p\cos^{p-1}t+\cos^{p+1}t\\ &=&(p+1)\cos^{p+1}t-p\cos^{p-1}t \end{eqnarray*} [[/math]]


By integrating between [math]0[/math] and [math]\pi/2[/math], we obtain the following formula:

[[math]] (p+1)I_{p+1}=pI_{p-1} [[/math]]


Thus we can compute [math]I_p[/math] by recurrence, and we obtain:

[[math]] \begin{eqnarray*} I_p &=&\frac{p-1}{p}\,I_{p-2}\\ &=&\frac{p-1}{p}\cdot\frac{p-3}{p-2}\,I_{p-4}\\ &=&\frac{p-1}{p}\cdot\frac{p-3}{p-2}\cdot\frac{p-5}{p-4}\,I_{p-6}\\ &&\vdots\\ &=&\frac{p!!}{(p+1)!!}\,I_{1-\varepsilon(p)} \end{eqnarray*} [[/math]]


But [math]I_0=\frac{\pi}{2}[/math] and [math]I_1=1[/math], so we get the result. As for the second formula, this follows from the first one, with [math]t=\frac{\pi}{2}-s[/math]. Thus, we have proved both formulae in the statement.
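As a doublecheck, here is a minimal Python sketch comparing these formulae with direct numerical integration, for small [math]p[/math]; the helper `dfact` implements our double factorial convention [math]m!!=(m-1)(m-3)\ldots[/math]:

```python
import math

def dfact(m):
    # Our convention: m!! = (m-1)(m-3)(m-5)..., with empty product 1.
    prod, k = 1, m - 1
    while k >= 1:
        prod *= k
        k -= 2
    return prod

def wallis(p):
    # The statement: (pi/2)^eps(p) * p!! / (p+1)!!.
    eps = 1 if p % 2 == 0 else 0
    return (math.pi / 2) ** eps * dfact(p) / dfact(p + 1)

def midpoint(g, n=20000):
    # Midpoint rule on [0, pi/2].
    h = (math.pi / 2) / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

errs = [abs(midpoint(lambda t: math.cos(t) ** p) - wallis(p)) for p in range(7)]
```

For instance [math]p=2[/math] gives [math]\pi/4[/math] both ways, as it should.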

We can now compute the volume of the sphere, as follows:

Theorem

The volume of the unit sphere in [math]\mathbb R^N[/math] is given by

[[math]] V=\left(\frac{\pi}{2}\right)^{[N/2]}\frac{2^N}{(N+1)!!} [[/math]]
with our usual convention [math]N!!=(N-1)(N-3)(N-5)\ldots[/math]


Show Proof

Let us denote by [math]B^+[/math] the positive part of the unit sphere, or rather unit ball [math]B[/math], obtained by cutting this unit ball in [math]2^N[/math] parts. At the level of volumes, we have:

[[math]] V=2^NV^+ [[/math]]


We have the following computation, using spherical coordinates:

[[math]] \begin{eqnarray*} V^+ &=&\int_{B^+}1\\ &=&\int_0^1\int_0^{\pi/2}\ldots\int_0^{\pi/2}r^{N-1}\sin^{N-2}t_1\ldots\sin t_{N-2}\,drdt_1\ldots dt_{N-1}\\ &=&\int_0^1r^{N-1}\,dr\int_0^{\pi/2}\sin^{N-2}t_1\,dt_1\ldots\int_0^{\pi/2}\sin t_{N-2}dt_{N-2}\int_0^{\pi/2}1dt_{N-1}\\ &=&\frac{1}{N}\times\left(\frac{\pi}{2}\right)^{[N/2]}\times\frac{(N-2)!!}{(N-1)!!}\cdot\frac{(N-3)!!}{(N-2)!!}\ldots\frac{2!!}{3!!}\cdot\frac{1!!}{2!!}\cdot1\\ &=&\frac{1}{N}\times\left(\frac{\pi}{2}\right)^{[N/2]}\times\frac{1}{(N-1)!!}\\ &=&\left(\frac{\pi}{2}\right)^{[N/2]}\frac{1}{(N+1)!!} \end{eqnarray*} [[/math]]


Here we have used the following formula, for computing the exponent of [math]\pi/2[/math]:

[[math]] \begin{eqnarray*} \varepsilon(0)+\varepsilon(1)+\varepsilon(2)+\ldots+\varepsilon(N-2) &=&1+0+1+\ldots+\varepsilon(N-2)\\ &=&\left[\frac{N-2}{2}\right]+1\\ &=&\left[\frac{N}{2}\right] \end{eqnarray*} [[/math]]


Thus, we obtain the formula in the statement.
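As a doublecheck, here is a minimal Python sketch comparing the above formula with a plain Monte Carlo estimate of the unit ball volume, at [math]N=5[/math]; all names are ours:

```python
import math, random

def dfact(m):
    # Our convention: m!! = (m-1)(m-3)(m-5)..., with empty product 1.
    prod, k = 1, m - 1
    while k >= 1:
        prod *= k
        k -= 2
    return prod

def ball_volume(N):
    # The formula of the theorem: V = (pi/2)^[N/2] * 2^N / (N+1)!!.
    return (math.pi / 2) ** (N // 2) * 2 ** N / dfact(N + 1)

# Monte Carlo: fraction of random points of the cube [-1, 1]^5 in the ball.
random.seed(0)
N, trials = 5, 200000
hits = sum(1 for _ in range(trials)
           if sum(random.uniform(-1, 1) ** 2 for _ in range(N)) <= 1)
estimate = 2 ** N * hits / trials
```

At [math]N=5[/math] the formula gives [math](\pi/2)^2\cdot 32/15=8\pi^2/15\simeq 5.26[/math], and the Monte Carlo estimate agrees within sampling error.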

As main particular cases of the above formula, we have:

Theorem

The volumes of the low-dimensional spheres are as follows:

  • At [math]N=1[/math], the length of the unit interval is [math]V=2[/math].
  • At [math]N=2[/math], the area of the unit disk is [math]V=\pi[/math].
  • At [math]N=3[/math], the volume of the unit sphere is [math]V=\frac{4\pi}{3}[/math].
  • At [math]N=4[/math], the volume of the corresponding unit sphere is [math]V=\frac{\pi^2}{2}[/math].


Show Proof

Some of these results are well-known, but we can obtain all of them as particular cases of the general formula in Theorem 13.10, as follows:


(1) At [math]N=1[/math] we obtain [math]V=1\cdot\frac{2}{1}=2[/math].


(2) At [math]N=2[/math] we obtain [math]V=\frac{\pi}{2}\cdot\frac{4}{2}=\pi[/math].


(3) At [math]N=3[/math] we obtain [math]V=\frac{\pi}{2}\cdot\frac{8}{3}=\frac{4\pi}{3}[/math].


(4) At [math]N=4[/math] we obtain [math]V=\frac{\pi^2}{4}\cdot\frac{16}{8}=\frac{\pi^2}{2}[/math].

There are many other applications of the above, as we will see in what follows.

13c. Stirling estimates

The formula in Theorem 13.10 is certainly nice, but in practice, we would like to have estimates for these sphere volumes too. For this purpose, we will need:

Theorem

We have the Stirling formula

[[math]] N!\simeq\left(\frac{N}{e}\right)^N\sqrt{2\pi N} [[/math]]
valid in the [math]N\to\infty[/math] limit.


Show Proof

This is something quite tricky, the idea being as follows:


(1) Let us first see what we can get with Riemann sums. We have:

[[math]] \begin{eqnarray*} \log(N!) &=&\sum_{k=1}^N\log k\\ &\approx&\int_1^N\log x\,dx\\ &=&N\log N-N+1 \end{eqnarray*} [[/math]]


By exponentiating, this gives the following estimate, which is not bad:

[[math]] N!\approx\left(\frac{N}{e}\right)^N\cdot e [[/math]]


(2) We can improve our estimate by replacing the rectangles from the Riemann sum approximation of the integral with trapezoids. In practice, this gives the following estimate:

[[math]] \begin{eqnarray*} \log(N!) &=&\sum_{k=1}^N\log k\\ &\approx&\int_1^N\log x\,dx+\frac{\log 1+\log N}{2}\\ &=&N\log N-N+1+\frac{\log N}{2} \end{eqnarray*} [[/math]]


By exponentiating, this gives the following estimate, which gets us closer:

[[math]] N!\approx\left(\frac{N}{e}\right)^N\cdot e\cdot\sqrt{N} [[/math]]


(3) In order to conclude, we must take some kind of mathematical magnifier, and carefully estimate the error made in (2). Fortunately, this mathematical magnifier exists, called Euler-Maclaurin formula, and after some computations, this leads to:

[[math]] N!\simeq\left(\frac{N}{e}\right)^N\sqrt{2\pi N} [[/math]]


(4) However, all this remains a bit complicated, so we would like to present now an alternative approach to (3), which also misses some details, but does the job better, explaining where the [math]\sqrt{2\pi}[/math] factor comes from. First, by partial integration we have:

[[math]] N!=\int_0^\infty x^Ne^{-x}dx [[/math]]


Since the integrand is sharply peaked at [math]x=N[/math], as you can see by computing the derivative of [math]\log(x^Ne^{-x})[/math], this suggests writing [math]x=N+y[/math], and we obtain:

[[math]] \begin{eqnarray*} \log(x^Ne^{-x}) &=&N\log x-x\\ &=&N\log(N+y)-(N+y)\\ &=&N\log N+N\log\left(1+\frac{y}{N}\right)-(N+y)\\ &\simeq&N\log N+N\left(\frac{y}{N}-\frac{y^2}{2N^2}\right)-(N+y)\\ &=&N\log N-N-\frac{y^2}{2N} \end{eqnarray*} [[/math]]


By exponentiating, we obtain from this the following estimate:

[[math]] x^Ne^{-x}\simeq\left(\frac{N}{e}\right)^Ne^{-y^2/2N} [[/math]]


Now by integrating, and using the Gauss formula, we obtain from this:

[[math]] \begin{eqnarray*} N! &=&\int_0^\infty x^Ne^{-x}dx\\ &\simeq&\int_{-N}^N\left(\frac{N}{e}\right)^Ne^{-y^2/2N}\,dy\\ &\simeq&\left(\frac{N}{e}\right)^N\int_\mathbb Re^{-y^2/2N}\,dy\\ &=&\left(\frac{N}{e}\right)^N\sqrt{2N}\int_\mathbb Re^{-z^2}\,dz\\ &=&\left(\frac{N}{e}\right)^N\sqrt{2\pi N} \end{eqnarray*} [[/math]]


Thus, we have proved the Stirling formula, as formulated in the statement.
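As a doublecheck, here is a minimal Python sketch comparing [math]\log(N!)[/math] with the logarithm of the Stirling estimate, via the `lgamma` function, since the factorials themselves quickly overflow; the helper name is ours:

```python
import math

def log_stirling(N):
    # Logarithm of (N/e)^N * sqrt(2*pi*N).
    return N * (math.log(N) - 1) + 0.5 * math.log(2 * math.pi * N)

# Ratios N! / stirling(N), computed in log scale; lgamma(N+1) = log(N!).
ratios = [math.exp(math.lgamma(N + 1) - log_stirling(N)) for N in (10, 100, 1000)]
```

The ratios decrease towards [math]1[/math], consistently with the finer estimate [math]N!\simeq(N/e)^N\sqrt{2\pi N}\left(1+\frac{1}{12N}\right)[/math].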

With the above formula in hand, we have many useful applications, such as:

Proposition

We have the following estimate for binomial coefficients,

[[math]] \binom{N}{K}\simeq\left(\frac{1}{t^t(1-t)^{1-t}}\right)^N\frac{1}{\sqrt{2\pi t(1-t)N}} [[/math]]
in the [math]K\simeq tN\to\infty[/math] limit, with [math]t\in(0,1)[/math]. In particular we have

[[math]] \binom{2N}{N}\simeq\frac{4^N}{\sqrt{\pi N}} [[/math]]
in the [math]N\to\infty[/math] limit, for the central binomial coefficients.


Show Proof

All this is very standard, by using the Stirling formula established above, for the various factorials which appear, the idea being as follows:


(1) This follows from the definition of the binomial coefficients, namely:

[[math]] \begin{eqnarray*} \binom{N}{K} &=&\frac{N!}{K!(N-K)!}\\ &\simeq&\left(\frac{N}{e}\right)^N\sqrt{2\pi N}\left(\frac{e}{K}\right)^K\frac{1}{\sqrt{2\pi K}}\left(\frac{e}{N-K}\right)^{N-K}\frac{1}{\sqrt{2\pi(N-K)}}\\ &=&\frac{N^N}{K^K(N-K)^{N-K}}\sqrt{\frac{N}{2\pi K(N-K)}}\\ &\simeq&\frac{N^N}{(tN)^{tN}((1-t)N)^{(1-t)N}}\sqrt{\frac{N}{2\pi tN(1-t)N}}\\ &=&\left(\frac{1}{t^t(1-t)^{1-t}}\right)^N\frac{1}{\sqrt{2\pi t(1-t)N}} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.


(2) This estimate follows from a similar computation, as follows:

[[math]] \begin{eqnarray*} \binom{2N}{N} &=&\frac{(2N)!}{N!N!}\\ &\simeq&\left(\frac{2N}{e}\right)^{2N}\sqrt{4\pi N}\left(\frac{e}{N}\right)^{2N}\frac{1}{2\pi N}\\ &=&\frac{4^N}{\sqrt{\pi N}} \end{eqnarray*} [[/math]]


Alternatively, we can take [math]t=1/2[/math] in (1), then rescale. Indeed, we have:

[[math]] \begin{eqnarray*} \binom{N}{[N/2]} &\simeq&\left(\frac{1}{(\frac{1}{2})^{1/2}(\frac{1}{2})^{1/2}}\right)^N\frac{1}{\sqrt{2\pi\cdot\frac{1}{2}\cdot\frac{1}{2}\cdot N}}\\ &=&2^N\sqrt{\frac{2}{\pi N}} \end{eqnarray*} [[/math]]


Thus with the change [math]N\to 2N[/math] we obtain the formula in the statement.
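As a doublecheck of the central binomial estimate, here is a minimal Python sketch, again working in log scale to avoid overflow; the helper names are ours:

```python
import math

def log_comb(N):
    # log of the central binomial coefficient (2N choose N), via lgamma.
    return math.lgamma(2 * N + 1) - 2 * math.lgamma(N + 1)

def log_estimate(N):
    # log of 4^N / sqrt(pi * N).
    return N * math.log(4.0) - 0.5 * math.log(math.pi * N)

ratios = [math.exp(log_comb(N) - log_estimate(N)) for N in (10, 100, 1000)]
```

The ratios increase towards [math]1[/math] from below, as they should.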

Summarizing, we have so far complete estimates for the factorials. Regarding now the double factorials, which we will need as well, the result here is as follows:

Proposition

We have the following estimate for the double factorials,

[[math]] N!!\simeq\left(\frac{N}{e}\right)^{N/2}C [[/math]]
with [math]C=\sqrt{2}[/math] for [math]N[/math] even, and [math]C=\sqrt{\pi}[/math] for [math]N[/math] odd. Alternatively, we have

[[math]] (N+1)!!\simeq\left(\frac{N}{e}\right)^{N/2}D [[/math]]
with [math]D=\sqrt{\pi N}[/math] for [math]N[/math] even, and [math]D=\sqrt{2N}[/math] for [math]N[/math] odd.


Show Proof

Once again this is standard, the idea being as follows:


(1) When [math]N=2K[/math] is even, we have the following computation:

[[math]] \begin{eqnarray*} N!! &=&(2K-1)(2K-3)\ldots1\\ &=&\frac{(2K)!}{2^KK!}\\ &\simeq&\frac{1}{2^K}\left(\frac{2K}{e}\right)^{2K}\sqrt{4\pi K}\left(\frac{e}{K}\right)^K\frac{1}{\sqrt{2\pi K}}\\ &=&\left(\frac{2K}{e}\right)^K\sqrt{2}\\ &=&\left(\frac{N}{e}\right)^{N/2}\sqrt{2} \end{eqnarray*} [[/math]]


(2) When [math]N=2K+1[/math] is odd, we have the following computation:

[[math]] \begin{eqnarray*} N!! &=&(2K)(2K-2)\ldots2\\ &=&2^KK!\\ &\simeq&\left(\frac{2K}{e}\right)^K\sqrt{2\pi K}\\ &=&\left(\frac{2K+1}{e}\right)^{K+1/2}\sqrt{\frac{e}{2K+1}}\left(\frac{2K}{2K+1}\right)^K\sqrt{2\pi K}\\ &\simeq&\left(\frac{N}{e}\right)^{N/2}\sqrt{\frac{e}{2K}}\cdot\frac{1}{\sqrt{e}}\cdot\sqrt{2\pi K}\\ &=&\left(\frac{N}{e}\right)^{N/2}\sqrt{\pi} \end{eqnarray*} [[/math]]


(3) Back to the case where [math]N=2K[/math] is even, by using (2) we obtain:

[[math]] \begin{eqnarray*} (N+1)!! &\simeq&\left(\frac{N+1}{e}\right)^{(N+1)/2}\sqrt{\pi}\\ &=&\left(\frac{N+1}{e}\right)^{N/2}\sqrt{\frac{N+1}{e}}\cdot\sqrt{\pi}\\ &=&\left(\frac{N}{e}\right)^{N/2}\left(\frac{N+1}{N}\right)^{N/2}\sqrt{\frac{N+1}{e}}\cdot\sqrt{\pi}\\ &\simeq&\left(\frac{N}{e}\right)^{N/2}\sqrt{e}\cdot\sqrt{\frac{N}{e}}\cdot\sqrt{\pi}\\ &=&\left(\frac{N}{e}\right)^{N/2}\sqrt{\pi N} \end{eqnarray*} [[/math]]


(4) Finally, back to the case where [math]N=2K+1[/math] is odd, by using (1) we obtain:

[[math]] \begin{eqnarray*} (N+1)!! &\simeq&\left(\frac{N+1}{e}\right)^{(N+1)/2}\sqrt{2}\\ &=&\left(\frac{N+1}{e}\right)^{N/2}\sqrt{\frac{N+1}{e}}\cdot\sqrt{2}\\ &=&\left(\frac{N}{e}\right)^{N/2}\left(\frac{N+1}{N}\right)^{N/2}\sqrt{\frac{N+1}{e}}\cdot\sqrt{2}\\ &\simeq&\left(\frac{N}{e}\right)^{N/2}\sqrt{e}\cdot\sqrt{\frac{N}{e}}\cdot\sqrt{2}\\ &=&\left(\frac{N}{e}\right)^{N/2}\sqrt{2N} \end{eqnarray*} [[/math]]


Thus, we have proved the estimates in the statement.
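As a doublecheck, here is a minimal Python sketch comparing the exact double factorials with the first estimate above, in log scale, for one even and one odd value of [math]N[/math]; the helper names are ours:

```python
import math

def log_dfact(m):
    # log of m!! = (m-1)(m-3)..., in our convention.
    total, k = 0.0, m - 1
    while k >= 1:
        total += math.log(k)
        k -= 2
    return total

def log_estimate(N):
    # log of (N/e)^(N/2) * C, with C = sqrt(2) for N even, sqrt(pi) for N odd.
    C = math.sqrt(2) if N % 2 == 0 else math.sqrt(math.pi)
    return (N / 2) * (math.log(N) - 1) + math.log(C)

ratios = [math.exp(log_dfact(N) - log_estimate(N)) for N in (1000, 1001)]
```

Both ratios are close to [math]1[/math], in both parity cases.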

We can now estimate the volumes of the spheres, as follows:

Theorem

The volume of the unit sphere in [math]\mathbb R^N[/math] is given by

[[math]] V\simeq\left(\frac{2\pi e}{N}\right)^{N/2}\frac{1}{\sqrt{\pi N}} [[/math]]
in the [math]N\to\infty[/math] limit.


Show Proof

We use the formula for [math]V[/math] found in Theorem 13.10, namely:

[[math]] V=\left(\frac{\pi}{2}\right)^{[N/2]}\frac{2^N}{(N+1)!!} [[/math]]


In the case where [math]N[/math] is even, the estimate goes as follows:

[[math]] \begin{eqnarray*} V &=&\left(\frac{\pi}{2}\right)^{N/2}\frac{2^N}{(N+1)!!}\\ &\simeq&\left(\frac{\pi}{2}\right)^{N/2}2^N\left(\frac{e}{N}\right)^{N/2}\frac{1}{\sqrt{\pi N}}\\ &=&\left(\frac{2\pi e}{N}\right)^{N/2}\frac{1}{\sqrt{\pi N}} \end{eqnarray*} [[/math]]


In the case where [math]N[/math] is odd, the estimate goes as follows:

[[math]] \begin{eqnarray*} V &=&\left(\frac{\pi}{2}\right)^{(N-1)/2}\frac{2^N}{(N+1)!!}\\ &\simeq&\left(\frac{\pi}{2}\right)^{(N-1)/2}2^N\left(\frac{e}{N}\right)^{N/2}\frac{1}{\sqrt{2N}}\\ &=&\sqrt{\frac{2}{\pi}}\left(\frac{2\pi e}{N}\right)^{N/2}\frac{1}{\sqrt{2N}}\\ &=&\left(\frac{2\pi e}{N}\right)^{N/2}\frac{1}{\sqrt{\pi N}} \end{eqnarray*} [[/math]]


Thus, we are led to the uniform formula in the statement.
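As a doublecheck of this uniform estimate, here is a minimal Python sketch comparing the exact volume formula with the asymptotic one, in log scale, for both parities; the helper names are ours:

```python
import math

def log_dfact(m):
    # log of m!! = (m-1)(m-3)..., in our convention.
    total, k = 0.0, m - 1
    while k >= 1:
        total += math.log(k)
        k -= 2
    return total

def log_V(N):
    # log of (pi/2)^[N/2] * 2^N / (N+1)!!, the exact volume formula.
    return (N // 2) * math.log(math.pi / 2) + N * math.log(2.0) - log_dfact(N + 1)

def log_asym(N):
    # log of (2*pi*e/N)^(N/2) / sqrt(pi*N), the asymptotic estimate.
    return (N / 2) * math.log(2 * math.pi * math.e / N) - 0.5 * math.log(math.pi * N)

ratios = [math.exp(log_V(N) - log_asym(N)) for N in (500, 501, 1000)]
```

All ratios are close to [math]1[/math], for even and odd [math]N[/math] alike.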

Getting back now to our main result so far, Theorem 13.10, we can compute in the same way the area of the sphere, the result being as follows:

Theorem

The area of the unit sphere in [math]\mathbb R^N[/math] is given by

[[math]] A=\left(\frac{\pi}{2}\right)^{[N/2]}\frac{2^N}{(N-1)!!} [[/math]]
with our usual convention for double factorials, namely:

[[math]] N!!=(N-1)(N-3)(N-5)\ldots [[/math]]
In particular, at [math]N=2,3,4[/math] we obtain respectively [math]A=2\pi,4\pi,2\pi^2[/math].


Show Proof

Regarding the first assertion, there is no need to compute again, because the formula in the statement can be deduced from Theorem 13.10, as follows:


(1) We can either use the “pizza” argument from chapter 1, which shows that the area and volume of the sphere in [math]\mathbb R^N[/math] are related by the following formula:

[[math]] A=N\cdot V [[/math]]


Together with the formula in Theorem 13.10 for [math]V[/math], this gives the result.


(2) Or, we can start the computation in the same way as we started the proof of Theorem 13.10, the beginning of this computation being as follows:

[[math]] vol(S^+) =\int_0^{\pi/2}\ldots\int_0^{\pi/2}\sin^{N-2}t_1\ldots\sin t_{N-2}\,dt_1\ldots dt_{N-1} [[/math]]


Now by comparing with the beginning of the proof of Theorem 13.10, the only thing that changes is the following quantity, which now disappears:

[[math]] \int_0^1r^{N-1}\,dr=\frac{1}{N} [[/math]]


Thus, we have [math]vol(S^+)=N\cdot vol(B^+)[/math], and so we obtain the following formula:

[[math]] vol(S)=N\cdot vol(B) [[/math]]


But this means [math]A=N\cdot V[/math], and together with the formula in Theorem 13.10 for [math]V[/math], this gives the result. As for the last assertion, this can be either worked out directly, or deduced from the results for volumes that we have so far, by multiplying by [math]N[/math].
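As a doublecheck, here is a minimal Python sketch evaluating both the area and volume formulae, with our double factorial convention; the helper names are ours:

```python
import math

def dfact(m):
    # Our convention: m!! = (m-1)(m-3)..., with empty product 1.
    prod, k = 1, m - 1
    while k >= 1:
        prod *= k
        k -= 2
    return prod

def volume(N):
    return (math.pi / 2) ** (N // 2) * 2 ** N / dfact(N + 1)

def area(N):
    return (math.pi / 2) ** (N // 2) * 2 ** N / dfact(N - 1)

areas = [area(N) for N in (2, 3, 4)]
```

This recovers [math]A=2\pi,4\pi,2\pi^2[/math], and also [math]A=N\cdot V[/math] at each [math]N[/math], in agreement with the identity [math](N+1)!!=N\cdot(N-1)!![/math], valid in our convention.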

13d. Spherical integrals

Let us discuss now the computation of arbitrary integrals over the sphere. We will need a technical result extending Proposition 13.9, as follows:

Theorem

We have the following formula,

[[math]] \int_0^{\pi/2}\cos^pt\sin^qt\,dt=\left(\frac{\pi}{2}\right)^{\varepsilon(p)\varepsilon(q)}\frac{p!!q!!}{(p+q+1)!!} [[/math]]
where [math]\varepsilon(p)=1[/math] if [math]p[/math] is even, and [math]\varepsilon(p)=0[/math] if [math]p[/math] is odd, and where

[[math]] m!!=(m-1)(m-3)(m-5)\ldots [[/math]]
with the product ending at [math]2[/math] if [math]m[/math] is odd, and ending at [math]1[/math] if [math]m[/math] is even.


Show Proof

We use the same idea as in Proposition 13.9. Let [math]I_{pq}[/math] be the integral in the statement. In order to do the partial integration, observe that we have:

[[math]] \begin{eqnarray*} (\cos^pt\sin^qt)' &=&p\cos^{p-1}t(-\sin t)\sin^qt\\ &+&\cos^pt\cdot q\sin^{q-1}t\cos t\\ &=&-p\cos^{p-1}t\sin^{q+1}t+q\cos^{p+1}t\sin^{q-1}t \end{eqnarray*} [[/math]]


By integrating between [math]0[/math] and [math]\pi/2[/math], we obtain, for [math]p,q \gt 0[/math]:

[[math]] pI_{p-1,q+1}=qI_{p+1,q-1} [[/math]]


Thus, we can compute [math]I_{pq}[/math] by recurrence. When [math]q[/math] is even we have:

[[math]] \begin{eqnarray*} I_{pq} &=&\frac{q-1}{p+1}\,I_{p+2,q-2}\\ &=&\frac{q-1}{p+1}\cdot\frac{q-3}{p+3}\,I_{p+4,q-4}\\ &=&\frac{q-1}{p+1}\cdot\frac{q-3}{p+3}\cdot\frac{q-5}{p+5}\,I_{p+6,q-6}\\ &=&\vdots\\ &=&\frac{p!!q!!}{(p+q)!!}\,I_{p+q} \end{eqnarray*} [[/math]]


But the last term comes from Proposition 13.9, and we obtain the result:

[[math]] \begin{eqnarray*} I_{pq} &=&\frac{p!!q!!}{(p+q)!!}\,I_{p+q}\\ &=&\frac{p!!q!!}{(p+q)!!}\left(\frac{\pi}{2}\right)^{\varepsilon(p+q)}\frac{(p+q)!!}{(p+q+1)!!}\\ &=&\left(\frac{\pi}{2}\right)^{\varepsilon(p)\varepsilon(q)}\frac{p!!q!!}{(p+q+1)!!} \end{eqnarray*} [[/math]]


Observe that this gives the result for [math]p[/math] even as well, by symmetry. Indeed, we have [math]I_{pq}=I_{qp}[/math], by using the following change of variables:

[[math]] t=\frac{\pi}{2}-s [[/math]]


In the remaining case now, where both [math]p,q[/math] are odd, we can use once again the formula [math]pI_{p-1,q+1}=qI_{p+1,q-1}[/math] established above, and the recurrence goes as follows:

[[math]] \begin{eqnarray*} I_{pq} &=&\frac{q-1}{p+1}\,I_{p+2,q-2}\\ &=&\frac{q-1}{p+1}\cdot\frac{q-3}{p+3}\,I_{p+4,q-4}\\ &=&\frac{q-1}{p+1}\cdot\frac{q-3}{p+3}\cdot\frac{q-5}{p+5}\,I_{p+6,q-6}\\ &=&\vdots\\ &=&\frac{p!!q!!}{(p+q-1)!!}\,I_{p+q-1,1} \end{eqnarray*} [[/math]]


In order to compute the last term, observe that we have:

[[math]] \begin{eqnarray*} I_{p1} &=&\int_0^{\pi/2}\cos^pt\sin t\,dt\\ &=&-\frac{1}{p+1}\int_0^{\pi/2}(\cos^{p+1}t)'\,dt\\ &=&\frac{1}{p+1} \end{eqnarray*} [[/math]]


Thus, we can finish our computation in the case [math]p,q[/math] odd, as follows:

[[math]] \begin{eqnarray*} I_{pq} &=&\frac{p!!q!!}{(p+q-1)!!}\,I_{p+q-1,1}\\ &=&\frac{p!!q!!}{(p+q-1)!!}\cdot\frac{1}{p+q}\\ &=&\frac{p!!q!!}{(p+q+1)!!} \end{eqnarray*} [[/math]]


Thus, we obtain the formula in the statement, the exponent of [math]\pi/2[/math] appearing there being [math]\varepsilon(p)\varepsilon(q)=0\cdot 0=0[/math] in the present case, and this finishes the proof.
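As a doublecheck, here is a minimal Python sketch comparing the formula with direct numerical integration, over all small exponents [math]p,q[/math]; the helper names are ours:

```python
import math

def dfact(m):
    # Our convention: m!! = (m-1)(m-3)..., with empty product 1.
    prod, k = 1, m - 1
    while k >= 1:
        prod *= k
        k -= 2
    return prod

def formula(p, q):
    # The statement: (pi/2)^(eps(p)*eps(q)) * p!! q!! / (p+q+1)!!.
    eps = lambda m: 1 if m % 2 == 0 else 0
    return (math.pi / 2) ** (eps(p) * eps(q)) * dfact(p) * dfact(q) / dfact(p + q + 1)

def midpoint(g, n=20000):
    # Midpoint rule on [0, pi/2].
    h = (math.pi / 2) / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

errs = [abs(midpoint(lambda t: math.cos(t) ** p * math.sin(t) ** q) - formula(p, q))
        for p in range(5) for q in range(5)]
```

All 25 cases agree, covering the even/even, even/odd and odd/odd situations from the proof.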

We can now integrate over the spheres, as follows:

Theorem

The polynomial integrals over the unit sphere [math]S^{N-1}_\mathbb R\subset\mathbb R^N[/math], with respect to the normalized, mass [math]1[/math] measure, are given by the following formula,

[[math]] \int_{S^{N-1}_\mathbb R}x_1^{k_1}\ldots x_N^{k_N}\,dx=\frac{(N-1)!!k_1!!\ldots k_N!!}{(N+\Sigma k_i-1)!!} [[/math]]
valid when all exponents [math]k_i[/math] are even. If an exponent [math]k_i[/math] is odd, the integral vanishes.


Show Proof

Assume first that one of the exponents [math]k_i[/math] is odd. We can make then the following change of variables, which shows that the integral in the statement vanishes:

[[math]] x_i\to-x_i [[/math]]


Assume now that all exponents [math]k_i[/math] are even. As a first observation, the result holds indeed at [math]N=2[/math], due to the formula from Theorem 13.17, which reads:

[[math]] \int_0^{\pi/2}\cos^pt\sin^qt\,dt =\left(\frac{\pi}{2}\right)^{\varepsilon(p)\varepsilon(q)}\frac{p!!q!!}{(p+q+1)!!} =\frac{p!!q!!}{(p+q+1)!!} [[/math]]


In the general case now, where the dimension [math]N\in\mathbb N[/math] is arbitrary, the integral in the statement can be written in spherical coordinates, as follows:

[[math]] I=\frac{2^N}{A}\int_0^{\pi/2}\ldots\int_0^{\pi/2}x_1^{k_1}\ldots x_N^{k_N}J\,dt_1\ldots dt_{N-1} [[/math]]


Here [math]A[/math] is the area of the sphere, [math]J[/math] is the Jacobian, and the [math]2^N[/math] factor comes from the restriction to the [math]1/2^N[/math] part of the sphere where all the coordinates are positive. According to Theorem 13.16, the normalization constant in front of the integral is:

[[math]] \frac{2^N}{A}=\left(\frac{2}{\pi}\right)^{[N/2]}(N-1)!! [[/math]]


As for the unnormalized integral, this is given by:

[[math]] \begin{eqnarray*} I'=\int_0^{\pi/2}\ldots\int_0^{\pi/2} &&(\cos t_1)^{k_1} (\sin t_1\cos t_2)^{k_2}\\ &&\vdots\\ &&(\sin t_1\sin t_2\ldots\sin t_{N-2}\cos t_{N-1})^{k_{N-1}}\\ &&(\sin t_1\sin t_2\ldots\sin t_{N-2}\sin t_{N-1})^{k_N}\\ &&\sin^{N-2}t_1\sin^{N-3}t_2\ldots\sin^2t_{N-3}\sin t_{N-2}\\ &&dt_1\ldots dt_{N-1} \end{eqnarray*} [[/math]]


By rearranging the terms, we obtain:

[[math]] \begin{eqnarray*} I' &=&\int_0^{\pi/2}\cos^{k_1}t_1\sin^{k_2+\ldots+k_N+N-2}t_1\,dt_1\\ &&\int_0^{\pi/2}\cos^{k_2}t_2\sin^{k_3+\ldots+k_N+N-3}t_2\,dt_2\\ &&\vdots\\ &&\int_0^{\pi/2}\cos^{k_{N-2}}t_{N-2}\sin^{k_{N-1}+k_N+1}t_{N-2}\,dt_{N-2}\\ &&\int_0^{\pi/2}\cos^{k_{N-1}}t_{N-1}\sin^{k_N}t_{N-1}\,dt_{N-1} \end{eqnarray*} [[/math]]


Now by using the above-mentioned formula at [math]N=2[/math], this gives:

[[math]] \begin{eqnarray*} I' &=&\frac{k_1!!(k_2+\ldots+k_N+N-2)!!}{(k_1+\ldots+k_N+N-1)!!}\left(\frac{\pi}{2}\right)^{\varepsilon(N-2)}\\ &&\frac{k_2!!(k_3+\ldots+k_N+N-3)!!}{(k_2+\ldots+k_N+N-2)!!}\left(\frac{\pi}{2}\right)^{\varepsilon(N-3)}\\ &&\vdots\\ &&\frac{k_{N-2}!!(k_{N-1}+k_N+1)!!}{(k_{N-2}+k_{N-1}+k_N+2)!!}\left(\frac{\pi}{2}\right)^{\varepsilon(1)}\\ &&\frac{k_{N-1}!!k_N!!}{(k_{N-1}+k_N+1)!!}\left(\frac{\pi}{2}\right)^{\varepsilon(0)} \end{eqnarray*} [[/math]]


Now let [math]F[/math] be the part involving the double factorials, and [math]P[/math] be the part involving the powers of [math]\pi/2[/math], so that [math]I'=F\cdot P[/math]. Regarding [math]F[/math], by cancelling terms we have:

[[math]] F=\frac{k_1!!\ldots k_N!!}{(\Sigma k_i+N-1)!!} [[/math]]
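The cancellations behind this can be checked mechanically: the [math]i[/math]-th factor of [math]F[/math] has the form [math]k_i!!\,s!!/(k_i+s+1)!![/math] with [math]s[/math] the corresponding sine exponent, and the inner double factorials telescope. A sketch in exact rational arithmetic, helper names ours:

```python
from fractions import Fraction
from itertools import product
from math import prod

def dfact(p):
    # double factorial, convention p!! = (p-1)(p-3)(p-5)..., empty product = 1
    r, k = 1, p - 1
    while k > 0:
        r *= k
        k -= 2
    return r

def F_product(ks):
    # the double-factorial parts of the N-1 iterated Wallis integrals, multiplied out
    N = len(ks)
    F = Fraction(1)
    for i in range(N - 1):
        s = sum(ks[i + 1:]) + N - 2 - i          # sine exponent of the i-th integral
        F *= Fraction(dfact(ks[i]) * dfact(s), dfact(ks[i] + s + 1))
    return F

def F_closed(ks):
    # the telescoped value F = k_1!!...k_N!! / (Sigma k_i + N - 1)!!
    return Fraction(prod(dfact(k) for k in ks), dfact(sum(ks) + len(ks) - 1))

# all even exponents up to 4, in dimension N = 3: the two expressions agree exactly
for ks in product((0, 2, 4), repeat=3):
    assert F_product(list(ks)) == F_closed(list(ks))
```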


As for [math]P[/math], the exponents [math]\varepsilon(N-2),\varepsilon(N-3),\ldots,\varepsilon(0)[/math] sum up to the number of even integers among [math]0,\ldots,N-2[/math], which is [math][N/2][/math], so [math]P=\left(\frac{\pi}{2}\right)^{[N/2]}[/math]. We can now put everything together, and we obtain:

[[math]] \begin{eqnarray*} I &=&\frac{2^N}{A}\times F\times P\\ &=&\left(\frac{2}{\pi}\right)^{[N/2]}(N-1)!!\times\frac{k_1!!\ldots k_N!!}{(\Sigma k_i+N-1)!!}\times\left(\frac{\pi}{2}\right)^{[N/2]}\\ &=&\frac{(N-1)!!k_1!!\ldots k_N!!}{(\Sigma k_i+N-1)!!} \end{eqnarray*} [[/math]]


Thus, we are led to the conclusion in the statement.
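The whole computation can be verified numerically, by comparing the iterated Wallis integrals in spherical coordinates with the closed formula. A sketch in Python, with all helper names ours, the one-variable integrals being evaluated via the Beta function:

```python
from itertools import product
from math import gamma, isclose, pi, prod

def dfact(p):
    # double factorial, convention p!! = (p-1)(p-3)(p-5)..., empty product = 1
    r, k = 1, p - 1
    while k > 0:
        r *= k
        k -= 2
    return r

def wallis(p, q):
    # int_0^{pi/2} cos^p t sin^q t dt, via the Beta function
    return gamma((p + 1) / 2) * gamma((q + 1) / 2) / (2 * gamma((p + q) / 2 + 1))

def sphere_moment(ks):
    # normalized integral of x_1^{k_1}...x_N^{k_N} over S^{N-1}, all k_i even,
    # computed as (2^N / A) times the product of iterated Wallis integrals
    N = len(ks)
    A = 2 * pi ** (N / 2) / gamma(N / 2)         # area of the unit sphere
    I = 2 ** N / A
    for i in range(N - 1):
        I *= wallis(ks[i], sum(ks[i + 1:]) + N - 2 - i)
    return I

def closed_formula(ks):
    # (N-1)!! k_1!!...k_N!! / (Sigma k_i + N - 1)!!
    N = len(ks)
    return dfact(N - 1) * prod(dfact(k) for k in ks) / dfact(sum(ks) + N - 1)

# spherical-coordinates computation vs closed formula, for small even exponents
for N in (2, 3, 4):
    for ks in product((0, 2, 4), repeat=N):
        assert isclose(sphere_moment(list(ks)), closed_formula(list(ks)))
```

For instance in dimension [math]N=3[/math], the formula recovers the familiar value [math]\int_{S^2}x_1^2\,dx=1/3[/math].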

We have the following useful generalization of the above formula:

Theorem

We have the following integration formula over the sphere [math]S^{N-1}_\mathbb R\subset\mathbb R^N[/math], with respect to the normalized, mass [math]1[/math] measure, valid for any exponents [math]k_i\in\mathbb N[/math],

[[math]] \int_{S^{N-1}_\mathbb R}|x_1^{k_1}\ldots x_N^{k_N}|\,dx=\left(\frac{2}{\pi}\right)^{\Sigma(k_1,\ldots,k_N)}\frac{(N-1)!!k_1!!\ldots k_N!!}{(N+\Sigma k_i-1)!!} [[/math]]
with [math]\Sigma=[odds/2][/math] if [math]N[/math] is odd and [math]\Sigma=[(odds+1)/2][/math] if [math]N[/math] is even, where “odds” denotes the number of odd numbers in the sequence [math]k_1,\ldots,k_N[/math].


Show Proof

As before, the formula holds at [math]N=2[/math], due to Theorem 13.17. In general, the integral in the statement can be written in spherical coordinates, as follows:

[[math]] I=\frac{2^N}{A}\int_0^{\pi/2}\ldots\int_0^{\pi/2}x_1^{k_1}\ldots x_N^{k_N}J\,dt_1\ldots dt_{N-1} [[/math]]


Here [math]A[/math] is the area of the sphere, [math]J[/math] is the Jacobian, and the [math]2^N[/math] factor comes from the restriction to the [math]1/2^N[/math] part of the sphere where all the coordinates are positive. The normalization constant in front of the integral is, as before:

[[math]] \frac{2^N}{A}=\left(\frac{2}{\pi}\right)^{[N/2]}(N-1)!! [[/math]]


As for the unnormalized integral, this can be written as before, as follows:

[[math]] \begin{eqnarray*} I' &=&\int_0^{\pi/2}\cos^{k_1}t_1\sin^{k_2+\ldots+k_N+N-2}t_1\,dt_1\\ &&\int_0^{\pi/2}\cos^{k_2}t_2\sin^{k_3+\ldots+k_N+N-3}t_2\,dt_2\\ &&\vdots\\ &&\int_0^{\pi/2}\cos^{k_{N-2}}t_{N-2}\sin^{k_{N-1}+k_N+1}t_{N-2}\,dt_{N-2}\\ &&\int_0^{\pi/2}\cos^{k_{N-1}}t_{N-1}\sin^{k_N}t_{N-1}\,dt_{N-1} \end{eqnarray*} [[/math]]


Now by using the formula at [math]N=2[/math], written with [math]\delta(p,q)=1-\varepsilon(p)\varepsilon(q)[/math], so that [math]\delta(p,q)=0[/math] when [math]p,q[/math] are both even, and [math]\delta(p,q)=1[/math] otherwise, we get:

[[math]] \begin{eqnarray*} I' &=&\frac{\pi}{2}\cdot\frac{k_1!!(k_2+\ldots+k_N+N-2)!!}{(k_1+\ldots+k_N+N-1)!!}\left(\frac{2}{\pi}\right)^{\delta(k_1,k_2+\ldots+k_N+N-2)}\\ &&\frac{\pi}{2}\cdot\frac{k_2!!(k_3+\ldots+k_N+N-3)!!}{(k_2+\ldots+k_N+N-2)!!}\left(\frac{2}{\pi}\right)^{\delta(k_2,k_3+\ldots+k_N+N-3)}\\ &&\vdots\\ &&\frac{\pi}{2}\cdot\frac{k_{N-2}!!(k_{N-1}+k_N+1)!!}{(k_{N-2}+k_{N-1}+k_N+2)!!}\left(\frac{2}{\pi}\right)^{\delta(k_{N-2},k_{N-1}+k_N+1)}\\ &&\frac{\pi}{2}\cdot\frac{k_{N-1}!!k_N!!}{(k_{N-1}+k_N+1)!!}\left(\frac{2}{\pi}\right)^{\delta(k_{N-1},k_N)} \end{eqnarray*} [[/math]]


In order to compute this quantity, let us denote by [math]F[/math] the part involving the double factorials, and by [math]P[/math] the part involving the powers of [math]\pi/2[/math], so that we have:

[[math]] I'=F\cdot P [[/math]]


Regarding [math]F[/math], there are many cancellations there, and we end up with:

[[math]] F=\frac{k_1!!\ldots k_N!!}{(\Sigma k_i+N-1)!!} [[/math]]


As for [math]P[/math], the [math]\delta[/math] exponents on the right sum up to the following number:

[[math]] \Delta(k_1,\ldots,k_N)=\sum_{i=1}^{N-1}\delta(k_i,k_{i+1}+\ldots+k_N+N-i-1) [[/math]]


In other words, with this notation, the above formula reads:

[[math]] \begin{eqnarray*} I' &=&\left(\frac{\pi}{2}\right)^{N-1}\frac{k_1!!k_2!!\ldots k_N!!}{(k_1+\ldots+k_N+N-1)!!}\left(\frac{2}{\pi}\right)^{\Delta(k_1,\ldots,k_N)}\\ &=&\left(\frac{2}{\pi}\right)^{\Delta(k_1,\ldots,k_N)-N+1}\frac{k_1!!k_2!!\ldots k_N!!}{(k_1+\ldots+k_N+N-1)!!}\\ &=&\left(\frac{2}{\pi}\right)^{\Sigma(k_1,\ldots,k_N)-[N/2]}\frac{k_1!!k_2!!\ldots k_N!!}{(k_1+\ldots+k_N+N-1)!!} \end{eqnarray*} [[/math]]


Here the formula relating [math]\Delta[/math] to [math]\Sigma[/math] follows from a few simple observations, the first being that, for obvious parity reasons, the sequence of [math]\delta[/math] numbers appearing in the definition of [math]\Delta[/math] cannot contain two consecutive zeroes. Together with [math]I=(2^N/A)I'[/math], this gives the formula in the statement.
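The exponent [math]\Sigma[/math] can be sanity-checked numerically, by again comparing the iterated Wallis integrals (which remain valid for odd exponents, since the absolute values restrict everything to the positive part of the sphere) with the closed formula. A sketch, helper names ours:

```python
from itertools import product
from math import gamma, isclose, pi, prod

def dfact(p):
    # double factorial, convention p!! = (p-1)(p-3)(p-5)..., empty product = 1
    r, k = 1, p - 1
    while k > 0:
        r *= k
        k -= 2
    return r

def wallis(p, q):
    # int_0^{pi/2} cos^p t sin^q t dt, via the Beta function
    return gamma((p + 1) / 2) * gamma((q + 1) / 2) / (2 * gamma((p + q) / 2 + 1))

def abs_moment(ks):
    # normalized integral of |x_1^{k_1}...x_N^{k_N}| over S^{N-1}
    N = len(ks)
    A = 2 * pi ** (N / 2) / gamma(N / 2)         # area of the unit sphere
    I = 2 ** N / A
    for i in range(N - 1):
        I *= wallis(ks[i], sum(ks[i + 1:]) + N - 2 - i)
    return I

def Sigma(ks):
    # [odds/2] for N odd, [(odds+1)/2] for N even
    odds = sum(k % 2 for k in ks)
    return odds // 2 if len(ks) % 2 == 1 else (odds + 1) // 2

def closed_formula(ks):
    # (2/pi)^Sigma (N-1)!! k_1!!...k_N!! / (N + Sigma k_i - 1)!!
    N = len(ks)
    return ((2 / pi) ** Sigma(ks) * dfact(N - 1)
            * prod(dfact(k) for k in ks) / dfact(sum(ks) + N - 1))

# the two evaluations agree for all small exponents, odd or even
for N in (2, 3, 4):
    for ks in product(range(4), repeat=N):
        assert isclose(abs_moment(list(ks)), closed_formula(list(ks)))
```

For instance [math]\int_{S^1}|x_1|\,dx=2/\pi[/math], with [math]\Sigma=1[/math] here.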

Let us discuss as well the complex versions of the above results. We have the following variation of the formula in Theorem 13.19, dealing with the complex sphere:

Theorem

We have the following integration formula over the complex sphere [math]S^{N-1}_\mathbb C\subset\mathbb C^N[/math], with respect to the normalized uniform measure,

[[math]] \int_{S^{N-1}_\mathbb C}|z_1|^{2k_1}\ldots|z_N|^{2k_N}\,dz=\frac{(N-1)!k_1!\ldots k_N!}{(N+\sum k_i-1)!} [[/math]]
valid for any exponents [math]k_i\in\mathbb N[/math]. As for the other polynomial integrals in [math]z_1,\ldots,z_N[/math] and their conjugates [math]\bar{z}_1,\ldots,\bar{z}_N[/math], these all vanish.


Show Proof

Consider an arbitrary polynomial integral over [math]S^{N-1}_\mathbb C[/math], containing the same number of plain and conjugated variables, so as not to vanish trivially, written as follows:

[[math]] I=\int_{S^{N-1}_\mathbb C}z_{i_1}\bar{z}_{i_2}\ldots z_{i_{2k-1}}\bar{z}_{i_{2k}}\,dz [[/math]]


By using transformations of type [math]p\to\lambda p[/math] with [math]|\lambda|=1[/math], we see that this integral [math]I[/math] vanishes, unless each [math]z_a[/math] appears as many times as [math]\bar{z}_a[/math] does, and this gives the last assertion. So, assume now that we are in the non-vanishing case. Then the [math]k_a[/math] copies of [math]z_a[/math] and the [math]k_a[/math] copies of [math]\bar{z}_a[/math] produce by multiplication a factor [math]|z_a|^{2k_a}[/math], so we have:

[[math]] I=\int_{S^{N-1}_\mathbb C}|z_1|^{2k_1}\ldots|z_N|^{2k_N}\,dz [[/math]]


Now by using the standard identification [math]S^{N-1}_\mathbb C\simeq S^{2N-1}_\mathbb R[/math], we obtain:

[[math]] \begin{eqnarray*} I &=&\int_{S^{2N-1}_\mathbb R}(x_1^2+y_1^2)^{k_1}\ldots(x_N^2+y_N^2)^{k_N}\,d(x,y)\\ &=&\sum_{r_1\ldots r_N}\binom{k_1}{r_1}\ldots\binom{k_N}{r_N}\int_{S^{2N-1}_\mathbb R}x_1^{2k_1-2r_1}y_1^{2r_1}\ldots x_N^{2k_N-2r_N}y_N^{2r_N}\,d(x,y) \end{eqnarray*} [[/math]]


By using the formula in Theorem 13.19, we obtain:

[[math]] \begin{eqnarray*} &&I\\ &=&\sum_{r_1\ldots r_N}\binom{k_1}{r_1}\ldots\binom{k_N}{r_N}\frac{(2N-1)!!(2r_1)!!\ldots(2r_N)!!(2k_1-2r_1)!!\ldots (2k_N-2r_N)!!}{(2N+2\sum k_i-1)!!}\\ &=&\sum_{r_1\ldots r_N}\binom{k_1}{r_1}\ldots\binom{k_N}{r_N}\frac{2^{N-1}(N-1)!\prod(2r_i)!/(2^{r_i}r_i!)\prod(2k_i-2r_i)!/(2^{k_i-r_i}(k_i-r_i)!)}{2^{N+\sum k_i-1}(N+\sum k_i-1)!}\\ &=&\sum_{r_1\ldots r_N}\binom{k_1}{r_1}\ldots\binom{k_N}{r_N} \frac{(N-1)!(2r_1)!\ldots (2r_N)!(2k_1-2r_1)!\ldots (2k_N-2r_N)!}{4^{\sum k_i}(N+\sum k_i-1)!r_1!\ldots r_N!(k_1-r_1)!\ldots (k_N-r_N)!} \end{eqnarray*} [[/math]]


Now observe that we can rewrite this quantity in the following way:

[[math]] \begin{eqnarray*} &&I\\ &=&\sum_{r_1\ldots r_N}\frac{k_1!\ldots k_N!(N-1)!(2r_1)!\ldots (2r_N)!(2k_1-2r_1)!\ldots (2k_N-2r_N)!}{4^{\sum k_i}(N+\sum k_i-1)!(r_1!\ldots r_N!(k_1-r_1)!\ldots (k_N-r_N)!)^2}\\ &=&\sum_{r_1}\binom{2r_1}{r_1}\binom{2k_1-2r_1}{k_1-r_1}\ldots\sum_{r_N}\binom{2r_N}{r_N}\binom{2k_N-2r_N}{k_N-r_N}\frac{(N-1)!k_1!\ldots k_N!}{4^{\sum k_i}(N+\sum k_i-1)!}\\ &=&4^{k_1}\times\ldots\times 4^{k_N}\times\frac{(N-1)!k_1!\ldots k_N!}{4^{\sum k_i}(N+\sum k_i-1)!}\\ &=&\frac{(N-1)!k_1!\ldots k_N!}{(N+\sum k_i-1)!} \end{eqnarray*} [[/math]]


Thus, we are led to the formula in the statement.
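The complex formula can likewise be checked numerically, by redoing the binomial expansion from the proof, with the real sphere formula from Theorem 13.19 as input. A sketch, helper names ours:

```python
from itertools import product
from math import comb, factorial, isclose, prod

def dfact(p):
    # double factorial, convention p!! = (p-1)(p-3)(p-5)..., empty product = 1
    r, k = 1, p - 1
    while k > 0:
        r *= k
        k -= 2
    return r

def real_moment(ks):
    # Theorem 13.19: normalized integral of x_1^{k_1}...x_M^{k_M} over S^{M-1}_R, all k_i even
    M = len(ks)
    return dfact(M - 1) * prod(dfact(k) for k in ks) / dfact(sum(ks) + M - 1)

def complex_moment(ks):
    # integral of |z_1|^{2k_1}...|z_N|^{2k_N} over S^{N-1}_C, via S^{N-1}_C ~ S^{2N-1}_R:
    # expand each (x_i^2 + y_i^2)^{k_i} binomially and integrate term by term
    total = 0.0
    for rs in product(*(range(k + 1) for k in ks)):
        coeff = prod(comb(k, r) for k, r in zip(ks, rs))
        exps = []
        for k, r in zip(ks, rs):
            exps += [2 * (k - r), 2 * r]
        total += coeff * real_moment(exps)
    return total

def complex_closed(ks):
    # (N-1)! k_1!...k_N! / (N + Sigma k_i - 1)!
    N = len(ks)
    return factorial(N - 1) * prod(factorial(k) for k in ks) / factorial(sum(ks) + N - 1)

# binomial expansion over the real sphere vs closed complex formula
for N in (1, 2, 3):
    for ks in product(range(4), repeat=N):
        assert isclose(complex_moment(list(ks)), complex_closed(list(ks)))
```

For instance at [math]N=1[/math] both sides equal [math]1[/math], as they must, since [math]|z|=1[/math] on the circle.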

We will see applications of all this in the next chapter, where we will obtain some quite conceptual results regarding the spherical coordinates in the [math]N\to\infty[/math] limit, by processing the above formulas with standard tools from probability.

General references

Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].
