15a. Vector products
Good news, done with mathematics, and in the remainder of this book, last 50 pages, we will get into physics. We already met some physics in this book, namely gravity, waves and heat in one dimension, in chapters 3-4, then waves and heat again, in two dimensions, along with some basic electrodynamics, light and early atomic theory, in chapter 8, and finally harmonic functions, this time in arbitrary [math]N[/math] dimensions, in chapter 12.
Obviously, time to have some cleanup here, review what we know, and do more, in a more systematic way. We have divided what we have to say in two parts, as follows:
(1) In the present chapter we will discuss physics in low dimensions, [math]N=2,3,4[/math], with on the menu more classical mechanics, notably with the Coriolis force, then some relativity following Einstein, and finally the basics of electrostatics, following Gauss and others, which are quite interesting, mathematically speaking. We will be mostly interested in [math]N=3[/math], and the common feature of everything that we want to talk about will be the vector product [math]\times[/math], which exists only in 3 dimensions, and is something quite magic.
(2) In the next and final chapter we will discuss physics in infinite dimensions, [math]N=\infty[/math]. Our goal here will be that of getting some quantum mechanics theory started, along the lines suggested at the end of chapter 8, and more specifically, solving the hydrogen atom. There are actually several ways of proceeding here, following Heisenberg, Schrödinger and others, and, at the risk of annoying my cat, who seems to be a big fan of measure theory and Hilbert spaces, we will opt here for the Schrödinger approach, which is elementary.
Getting now to what we want to do in this chapter, namely 3 dimensions, vector products [math]\times[/math], and all sorts of related physics, there is some important mathematical interest in doing this, because in this way we will reach answers to the following question:
\begin{question}
What are the higher dimensional analogues of the formula
[math]\int_a^bf'(x)\,dx=f(b)-f(a)[/math]
that is, of the fundamental theorem of calculus?
\end{question}
And isn't this a fundamental question for us mathematicians, because we have so far in our bag multivariable extensions of all the main theorems of one-variable calculus, except for this one. So, definitely something to be solved, before the end of this book.
So, let us discuss this first. The fundamental theorem of calculus tells us that the integral of a function on an interval [math][a,b][/math] can be suitably recaptured from what happens on the boundary [math]\{a,b\}[/math] of this interval. Obviously, this is something quite magic, and thinking now about what we can expect in [math]N=2,3[/math] or more dimensions, that can only be quite complicated, involving curves, surfaces, solid bodies and so on, with all this vaguely reminding us of all sorts of physics things, such as field lines for gravity, magnets and so on. In short, despite having no formal proof yet for all this, let us formulate:
\begin{answer}
Partial integration in several dimensions most likely means physics, and we will probably only get some partial results here, at [math]N=2,3[/math].
\end{answer}
Of course, all this remains to be confirmed, but assuming that you trust me a bit, here we are now at the plan that we made before, for this chapter. That is, do physics in low dimensions, guided by the beauty of the world surrounding us, and once this physics is done, record some mathematical corollaries too, in relation with Question 15.1.
Before getting started, however, as usual when struggling with pedagogical matters, and other delicate dilemmas, let us ask the cat. But here, unfortunately, no surprise:
\begin{cat}
Read Rudin.
\end{cat}
Thanks cat, and I guess this sort of discussion, which we started in chapter 13, looks more and more cyclic. So, I'll just go my way, and on your side, have a good hunt, and by the way make sure to catch enough mice and birds, using your measure theory and differential forms techniques, because there is a bit of a shortage of cat food today, sorry for that.
Getting started now, here is what we need:
\begin{definition}
The vector product of two vectors in [math]\mathbb R^3[/math] is given by
[math]x\times y=||x||\cdot||y||\cdot\sin\theta\cdot n[/math]
where [math]\theta\in[0,\pi][/math] is the angle between [math]x,y[/math], and where [math]n\in\mathbb R^3[/math] is a unit vector orthogonal to both [math]x,y[/math], chosen according to the right-hand rule. Equivalently, in terms of coordinates,
[math]x\times y=(x_2y_3-x_3y_2,\,x_3y_1-x_1y_3,\,x_1y_2-x_2y_1)[/math]
\end{definition}
Obviously, this definition is something quite subtle, and also something very annoying, because you always need it, and always forget the exact formula. Here are my personal methods. With the first definition, what I always remember is that:
[math]||x\times y||\sim||x||\cdot||y||\quad,\quad x\times x=0\quad,\quad e_1\times e_2=e_3[/math]
So, here is how it works. We are looking for a vector [math]x\times y[/math] whose length is proportional to the lengths of [math]x,y[/math]. But the second formula tells us that the angle [math]\theta[/math] between [math]x,y[/math] must be involved, via [math]\theta=0\implies x\times y=0[/math], and so the proportionality factor can only be [math]\sin\theta[/math]. And with this we are almost there, it is just a matter of choosing the orientation, and this comes from [math]e_1\times e_2=e_3[/math].
As for the second definition, which is the one that I like the most, what I remember here is simply:
[math]x\times y=\begin{vmatrix}e_1&x_1&y_1\\ e_2&x_2&y_2\\ e_3&x_3&y_3\end{vmatrix}[/math]
Indeed, when computing this formal determinant, by developing over the first column, the coefficients that appear are precisely the entries of [math]x\times y[/math]. And with the good middle sign.
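By the way, for readers who like to double-check such formulae on a computer, here is a minimal Python sketch testing that the two descriptions above agree, on a randomly chosen pair of vectors. The random seed and the test vectors are of course arbitrary illustrative choices:

```python
import numpy as np

# A minimal numerical check of the two descriptions of the vector product,
# on a pair of arbitrary, randomly chosen test vectors.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# The coordinate formula, as obtained by developing the formal determinant
# over its first column:
cross = np.array([x[1]*y[2] - x[2]*y[1],
                  x[2]*y[0] - x[0]*y[2],
                  x[0]*y[1] - x[1]*y[0]])
assert np.allclose(cross, np.cross(x, y))

# The geometric description: length ||x|| ||y|| sin(theta), orthogonality to
# both vectors, and orientation fixed by e1 x e2 = e3.
cos_t = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
sin_t = np.sqrt(1 - cos_t**2)
assert np.isclose(np.linalg.norm(cross),
                  np.linalg.norm(x) * np.linalg.norm(y) * sin_t)
assert np.isclose(np.dot(cross, x), 0) and np.isclose(np.dot(cross, y), 0)
assert np.allclose(np.cross([1, 0, 0], [0, 1, 0]), [0, 0, 1])
print("all checks passed")
```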
In practice now, in order to get familiar with the vector products, nothing better than doing some classical mechanics. We have here the following key result:
\begin{theorem}
In the gravitational [math]2[/math]-body problem, the angular momentum
[math]J=x\times p[/math]
with [math]p=mv[/math] being the usual momentum, is conserved.
\end{theorem}

\begin{proof}
There are several things to be said here, the idea being as follows:
(1) First of all, the usual momentum [math]p=mv[/math] is not conserved, because the simplest solution is the circular motion, where the momentum gets turned around. But this suggests precisely that, in order to fix the lack of conservation of the momentum [math]p[/math], what we have to do is to take a vector product with the position [math]x[/math]. Leading to [math]J[/math], as above.
(2) Regarding now the proof, consider indeed a particle [math]m[/math] moving under the gravitational force of a particle [math]M[/math], assumed, as usual, to be fixed at [math]0[/math]. The force is then [math]F=-\frac{GMm}{||x||^3}\,x[/math], which is proportional to [math]x[/math]. By using the fact that for two proportional vectors, [math]p\sim q[/math], we have [math]p\times q=0[/math], we obtain:
[math]\dot J=\dot x\times mv+x\times m\dot v=v\times mv+x\times F=0+0=0[/math]
Now since the derivative of [math]J[/math] vanishes, this quantity is constant, as stated.
\end{proof}
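As before, this can be double-checked on the computer. Here is a minimal Python sketch, integrating the equation of motion with a velocity Verlet scheme, and verifying that [math]J[/math] stays numerically constant along the orbit. The units, masses and initial data below are arbitrary illustrative choices:

```python
import numpy as np

# Minimal sketch: integrate m x'' = -GM x/||x||^3 with velocity Verlet,
# and check that J = x × (m v) stays numerically constant along the orbit.
GM, m, dt, steps = 1.0, 1.0, 1e-3, 20000
acc = lambda x: -GM * x / np.linalg.norm(x)**3

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.8, 0.3])          # a bound, non-circular orbit
J0 = np.cross(x, m * v)

a = acc(x)
for _ in range(steps):
    x = x + v * dt + 0.5 * a * dt**2
    a_new = acc(x)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

J = np.cross(x, m * v)
print("drift in J:", np.linalg.norm(J - J0))        # small, up to integration error
print("x . J0 =", abs(np.dot(x, J0)))               # position stays orthogonal to J0
```

Observe also, in this simulation, that the position [math]x[/math] stays orthogonal to the initial [math]J[/math], in agreement with the planarity property discussed below.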
While the above principle looks like something quite trivial, the mathematics behind it is quite interesting, and has several notable consequences, as follows:
\begin{theorem}
In the context of a [math]2[/math]-body problem, the following happen:
- The fact that the direction of [math]J[/math] is fixed tells us that the trajectory of one body with respect to the other lies in a plane.
- The fact that the magnitude of [math]J[/math] is fixed tells us that the Kepler 2 law holds, namely that equal areas are swept by the position vector [math]Ox[/math] in equal times.
\end{theorem}
\begin{proof}
Both assertions follow from the conservation of [math]J[/math], established above:

(1) Since [math]J=x\times p[/math] is orthogonal to [math]x[/math], and the direction of [math]J[/math] is fixed, the position [math]x[/math] stays in the plane through [math]0[/math] which is orthogonal to [math]J[/math].

(2) The area swept by [math]Ox[/math] during a small time [math]dt[/math] is [math]dA=\frac{1}{2}||x\times v||\,dt=\frac{||J||}{2m}\,dt[/math], and since [math]||J||[/math] is constant, the swept area depends only on the elapsed time, as stated.
\end{proof}
As another basic application of the vector products, still staying with classical mechanics, we have all sorts of useful formulae regarding rotating frames. We first have:
\begin{theorem}
Assume that a [math]3D[/math] body rotates around an axis, with angular speed [math]w[/math]. For a point of the body which is fixed, with position vector [math]x[/math], the usual [math]3D[/math] speed is
[math]v=\omega\times x[/math]
where [math]\omega=wn[/math], with [math]n[/math] unit vector pointing North. When the point moves on the body, with speed [math]\dot x[/math] computed with respect to the body, its speed with respect to a fixed observer [math]O[/math] is
[math]v=\dot x+\omega\times x[/math]
\end{theorem}
\begin{proof}
We have two assertions here, both requiring some 3D thinking, as follows:
(1) Assuming that the point is fixed, the magnitude of [math]\omega\times x[/math] is the good one, due to the following computation, with [math]t[/math] being the angle between [math]\omega,x[/math], and with [math]r=||x||\sin t[/math] being the distance from the point to the axis:
[math]||\omega\times x||=||\omega||\cdot||x||\cdot\sin t=w\cdot||x||\sin t=wr[/math]
As for the orientation of [math]\omega\times x[/math], this is the good one as well, because the North pole rule used above amounts to applying the right-hand rule for finding [math]n[/math], and so [math]\omega[/math], and this right-hand rule was precisely the one used in defining the vector products [math]\times[/math].
(2) Next, when the point moves on the body, the inertial observer [math]O[/math] can compute its speed by using an orthonormal frame [math](u_1,u_2,u_3)[/math] which rotates with the body. Indeed, by writing the position as [math]x=\sum_ix_iu_i[/math], and by using (1) for the vectors [math]u_i[/math], which are fixed on the body, we obtain:
[math]v=\sum_i\dot x_iu_i+\sum_ix_i\dot u_i=\dot x+\sum_ix_i(\omega\times u_i)=\dot x+\omega\times x[/math]
with [math]\dot x=\sum_i\dot x_iu_i[/math] being the speed computed with respect to the body. Thus, we are led to the conclusions in the statement.
\end{proof}
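Again as a quick computer check, here is a minimal Python sketch verifying both formulae, with the rotation axis taken to be the [math]z[/math]-axis, and with the angular speed, the time, and the sample trajectory below being arbitrary illustrative choices:

```python
import numpy as np

# Minimal check of the rotating-frame speed formulae, with omega = w * e3.
w, t, h = 0.7, 1.3, 1e-5
omega = np.array([0.0, 0.0, w])
R = lambda s: np.array([[np.cos(w*s), -np.sin(w*s), 0.0],
                        [np.sin(w*s),  np.cos(w*s), 0.0],
                        [0.0,          0.0,         1.0]])

# (1) A point fixed on the body: its speed should be omega × x.
x0 = np.array([1.0, 2.0, 3.0])
dX = (R(t+h) @ x0 - R(t-h) @ x0) / (2*h)
assert np.allclose(dX, np.cross(omega, R(t) @ x0), atol=1e-6)

# (2) A point moving on the body, with coordinates xr(t) in the rotating frame:
# the speed seen by O should be R(t) (xr'(t) + omega × xr(t)).
xr  = lambda s: np.array([np.cos(s),   np.sin(2*s),   s**2])
dxr = lambda s: np.array([-np.sin(s),  2*np.cos(2*s), 2*s])
dX = (R(t+h) @ xr(t+h) - R(t-h) @ xr(t-h)) / (2*h)
assert np.allclose(dX, R(t) @ (dxr(t) + np.cross(omega, xr(t))), atol=1e-6)
print("speed formulae check out")
```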
Regarding now the acceleration, the result, which is famous, is as follows:
\begin{theorem}
Assuming as before that a [math]3D[/math] body rotates around an axis, with [math]\omega=wn[/math] constant, the acceleration of a moving point on the body, computed by [math]O[/math] as before, is given by
[math]a=\ddot x+2\,\omega\times\dot x+\omega\times(\omega\times x)[/math]
where [math]\dot x,\ddot x[/math] are the speed and acceleration of the point, computed with respect to the body.
\end{theorem}
\begin{proof}
This comes by using twice the formulae in Theorem 15.7, as follows:
[math]a=\left(\frac{d}{dt}+\omega\times\right)\left(\dot x+\omega\times x\right)=\ddot x+\omega\times\dot x+\omega\times\dot x+\omega\times(\omega\times x)=\ddot x+2\,\omega\times\dot x+\omega\times(\omega\times x)[/math]
Here we have used the fact that, according to Theorem 15.7, differentiating with respect to the fixed observer [math]O[/math] amounts to applying [math]\frac{d}{dt}+\omega\times[/math], with [math]\frac{d}{dt}[/math] being the derivative computed with respect to the body, along with the fact that [math]\omega[/math] is constant. Thus, we are led to the conclusion in the statement.
\end{proof}
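And here is the same kind of numerical verification for the acceleration formula, with the same arbitrary setup as before, the acceleration seen by [math]O[/math] being now obtained via a second finite difference:

```python
import numpy as np

# Minimal check of the rotating-frame acceleration formula, same illustrative
# setup as before: compare a second finite difference of X(t) = R(t) xr(t)
# with R(t) (ar + 2 omega × vr + omega × (omega × xr)).
w, t, h = 0.7, 1.3, 1e-3
omega = np.array([0.0, 0.0, w])
R = lambda s: np.array([[np.cos(w*s), -np.sin(w*s), 0.0],
                        [np.sin(w*s),  np.cos(w*s), 0.0],
                        [0.0,          0.0,         1.0]])
xr = lambda s: np.array([np.cos(s),   np.sin(2*s),    s**2])
vr = lambda s: np.array([-np.sin(s),  2*np.cos(2*s),  2*s])
ar = lambda s: np.array([-np.cos(s), -4*np.sin(2*s),  2.0])

X = lambda s: R(s) @ xr(s)
ddX = (X(t+h) - 2*X(t) + X(t-h)) / h**2            # acceleration seen by O
rhs = R(t) @ (ar(t) + 2*np.cross(omega, vr(t))
              + np.cross(omega, np.cross(omega, xr(t))))
assert np.allclose(ddX, rhs, atol=1e-4)
print("acceleration formula checks out")
```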
The truly famous result is actually the one regarding forces, obtained by multiplying everything by a mass [math]m[/math], and writing things the other way around, as follows:
\begin{theorem}
In the rotating frame of the body, a particle of mass [math]m[/math] subject to a force [math]F[/math], and so having acceleration [math]A=F/m[/math] with respect to the fixed frame, moves according to
[math]m\ddot x=F-2m\,\omega\times\dot x-m\,\omega\times(\omega\times x)[/math]
with the derivatives [math]\dot x,\ddot x[/math] being taken, as before, with respect to the rotating frame.
\end{theorem}
Here the second term on the right is called Coriolis force, and the third term is called centrifugal force. These forces are both called apparent, or fictitious, because they do not exist in the inertial frame, but they do exist in the non-inertial frame of reference, as explained above. And with of course the terms centrifugal and centripetal not to be messed up.
In fact, even more famous is the terrestrial application of all this, as follows:
\begin{theorem}
The acceleration of an object of mass [math]m[/math] on the rotating Earth, subject to a force [math]F[/math], is given by
[math]\ddot x=\frac{F}{m}+g-2\,\omega\times\dot x-\omega\times(\omega\times x)[/math]
where [math]g[/math] is the gravitational acceleration, and where [math]\omega=wn[/math], with [math]w[/math] being the angular speed of the Earth, and [math]n[/math] the unit vector pointing North.
\end{theorem}

\begin{proof}
This follows indeed from the above discussion, by assuming that the acceleration [math]A[/math] there comes from the combined effect of a force [math]F[/math], and of the usual gravitational acceleration [math]g[/math].
\end{proof}
We refer to any standard undergraduate mechanics book, such as Feynman [1], Kibble [2] or Taylor [3] for more on the above, including various numerics on what happens here on Earth, the Foucault pendulum, history of all this, and many other things. Let us just mention here, as a basic illustration for all this, that a rock dropped from 100m deviates by about 1cm from its intended target, due to the formula in Theorem 15.9.
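For the record, that rough estimate can be recovered from the standard first-order formula for the eastward Coriolis deflection of a dropped object, [math]d=\frac{1}{3}\,wg\cos(\lambda)\,T^3[/math], with [math]T=\sqrt{2H/g}[/math] being the fall time and [math]\lambda[/math] the latitude. Here is a minimal Python sketch, with the latitude of 45° being an arbitrary illustrative choice, not something fixed by the text:

```python
import numpy as np

# Eastward Coriolis deflection of a rock dropped from height H, to first order
# in the Earth's angular speed, ignoring the smaller centrifugal correction.
g, H, lat = 9.81, 100.0, np.radians(45)   # latitude 45 deg: illustrative choice
w = 7.292e-5                              # angular speed of the Earth, rad/s
T = np.sqrt(2 * H / g)                    # fall time, about 4.5 s
d = w * g * np.cos(lat) * T**3 / 3
print(f"eastward deflection from H = {H:.0f} m: about {100*d:.1f} cm")
# This prints a figure of the order of a centimeter, 1-2 cm depending on the
# latitude, in agreement with the rough estimate mentioned above.
```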
General references
Banica, Teo (2024). "Calculus and applications". arXiv:2401.00911 [math.CO].