<div class="d-none"><math>
\newcommand{\mathds}{\mathbb}</math></div>
{{Alert-warning|This article was automatically generated from a tex file and may contain conversion errors. If permitted, you may login and edit this article to improve the conversion. }}
Good news, we are done with mathematics, and in the remainder of this book, the last 50 pages, we will get into physics. We have already met some physics in this book, namely gravity, waves and heat in one dimension, in chapters 3-4, then waves and heat again, in two dimensions, along with some basic electrodynamics, light and early atomic theory, in chapter 8, and finally harmonic functions, this time in arbitrary <math>N</math> dimensions, in chapter 12.
Obviously, it is time to do some cleanup here, review what we know, and do more, in a more systematic way. We have divided what we have to say in two parts, as follows:
(1) In the present chapter we will discuss physics in low dimensions, <math>N=2,3,4</math>, with on the menu more classical mechanics, notably the Coriolis force, then some relativity following Einstein, and finally the basics of electrostatics, following Gauss and others, which are quite interesting, mathematically speaking. We will be mostly interested in <math>N=3</math>, and the common feature of everything that we want to talk about will be the vector product <math>\times</math>, which exists only in 3 dimensions, and which is something quite magic.
(2) In the next and final chapter we will discuss physics in infinite dimensions, <math>N=\infty</math>. Our goal there will be to get some quantum mechanics started, along the lines suggested at the end of chapter 8, and more specifically, to solve the hydrogen atom. There are actually several ways of proceeding here, following Heisenberg, Schrödinger and others, and at the risk of annoying my cat, who seems to be a big fan of measure theory and Hilbert spaces, we will opt here for the Schrödinger approach, which is elementary.
Getting now to what we want to do in this chapter, namely 3 dimensions, vector products <math>\times</math>, and all sorts of related physics, there is some important mathematical interest in doing this too, because we will in this way reach an answer to the following question:
\begin{question}
What are the higher dimensional analogues of the formula
<math display="block">
\int_a^bF'(x)dx=F(b)-F(a)
</math>
that is, of the fundamental theorem of calculus?
\end{question}
And isn't this a fundamental question for us, mathematicians, because we have so far in our bag multivariable extensions of all the main theorems of one-variable calculus, except for this one. So, this is definitely something to be solved, before the end of this book.
So, let us discuss this first. The fundamental theorem of calculus tells us that the integral of a function on an interval <math>[a,b]</math> can be suitably recaptured from what happens on the boundary <math>\{a,b\}</math> of this interval. Obviously, this is something quite magic, and thinking now about what we can expect in <math>N=2,3</math> or more dimensions, that can only be quite complicated, involving curves, surfaces, solid bodies and so on, with all this vaguely reminiscent of all sorts of physics, such as field lines for gravity, magnets and so on. In short, despite having no formal proof yet for all this, let us formulate:
\begin{answer}
Partial integration in several dimensions most likely means physics, and we will probably only get some partial results here, at <math>N=2,3</math>.
\end{answer}
Of course, all this remains to be confirmed, but assuming that you trust me a bit, here we are now at the plan that we made before, for this chapter. That is, do physics in low dimensions, guided by the beauty of the world surrounding us, and once this physics is done, record some mathematical corollaries too, in relation with Question 15.1.
Before getting started, however, as usual when struggling with pedagogical matters, and other delicate dilemmas, let us ask the cat. But here, unfortunately, no surprise:
\begin{cat}
Read Rudin.
\end{cat}
Thanks cat, and I guess this sort of discussion, which we started in chapter 13, looks more and more cyclic. So, I'll just go my way, and on your side have a good hunt, and by the way make sure to catch enough mice and birds, using your measure theory and differential forms techniques, because there is a bit of a shortage of cat food today, sorry for that.
Getting started now, here is what we need:
{{defncard|label=|id=|The vector product of two vectors in <math>\mathbb R^3</math> is given by
<math display="block">
x\times y=||x||\cdot||y||\cdot\sin\theta\cdot n
</math>
where <math>n\in\mathbb R^3</math> with <math>n\perp x,y</math> and <math>||n||=1</math> is constructed using the right-hand rule:
<math display="block">
\begin{matrix}
\ \ \ \ \ \ \ \ \ \uparrow_{x\times y}\\
\leftarrow_x\\
\swarrow_y
\end{matrix}
</math>
Alternatively, in the usual vertical linear algebra notation for vectors,
<math display="block">
\begin{pmatrix}x_1\\ x_2\\x_3\end{pmatrix}
\times\begin{pmatrix}y_1\\ y_2\\y_3\end{pmatrix}
=\begin{pmatrix}x_2y_3-x_3y_2\\ x_3y_1-x_1y_3\\x_1y_2-x_2y_1\end{pmatrix}
</math>
the rule being that of computing <math>2\times2</math> determinants, with a minus sign for the middle entry.}}
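Since the component formula is easy to misremember, here is a small numerical sanity check, sketched in Python: it implements the component formula above, then verifies the defining properties, namely the length formula involving <math>\sin\theta</math>, the orthogonality, and the right-hand rule on the standard basis. The sample vectors are arbitrary choices.

```python
import math

def cross(x, y):
    """Component formula: 2x2 determinants, with a middle sign change."""
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def norm(x):
    return math.sqrt(sum(t*t for t in x))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Right-hand rule on the basis, and x x x = 0
assert cross(e1, e2) == e3
assert cross(e1, e1) == (0, 0, 0)

# Length formula: ||x x y|| = ||x|| ||y|| sin(theta)
x, y = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)
dot = sum(a*b for a, b in zip(x, y))
theta = math.acos(dot / (norm(x)*norm(y)))
assert abs(norm(cross(x, y)) - norm(x)*norm(y)*math.sin(theta)) < 1e-12

# Orthogonality: x x y is perpendicular to both x and y
assert abs(sum(a*b for a, b in zip(cross(x, y), x))) < 1e-12
assert abs(sum(a*b for a, b in zip(cross(x, y), y))) < 1e-12
```

Note that the first definition alone leaves the sign of <math>n</math> ambiguous; it is exactly the right-hand rule, encoded here by <math>e_1\times e_2=e_3</math>, which pins it down.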
Obviously, this definition is something quite subtle, and also something very annoying, because you always need it, and always forget the formula. Here are my personal methods. With the first definition, what I always remember is that:
<math display="block">
||x\times y||\sim||x||,||y||\quad,\quad
x\times x=0\quad,\quad
e_1\times e_2=e_3
</math>
So, here is how this works. We are looking for a vector <math>x\times y</math> whose length is proportional to those of <math>x,y</math>. But the second formula tells us that the angle <math>\theta</math> between <math>x,y</math> must be involved, via <math>\theta=0</math> producing the zero vector, and so the proportionality factor can only be <math>\sin\theta</math>. And with this we are almost there, it is just a matter of choosing the orientation, and this comes from <math>e_1\times e_2=e_3</math>.
As for the second definition, which is the one I like the most, what I remember here is simply:
<math display="block">
\begin{vmatrix}
1&x_1&y_1\\
1&x_2&y_2\\
1&x_3&y_3
\end{vmatrix}=?
</math>
Indeed, when trying to compute this determinant, by developing over the first column, what you get as coefficients are exactly the entries of <math>x\times y</math>. And with the correct middle sign.
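This mnemonic can be checked in the same spirit: expanding the above determinant along its first column, the three cofactors, with signs <math>+,-,+</math>, are exactly the entries of <math>x\times y</math>. A small sketch, with arbitrary sample vectors:

```python
def det2(a, b, c, d):
    """2x2 determinant | a b ; c d |."""
    return a*d - b*c

def cofactors_first_column(x, y):
    """First-column cofactors of | 1 x1 y1 ; 1 x2 y2 ; 1 x3 y3 |,
    with the alternating signs +, -, + going down the column."""
    c1 =  det2(x[1], y[1], x[2], y[2])   # x2*y3 - x3*y2
    c2 = -det2(x[0], y[0], x[2], y[2])   # x3*y1 - x1*y3
    c3 =  det2(x[0], y[0], x[1], y[1])   # x1*y2 - x2*y1
    return (c1, c2, c3)

x, y = (1, 2, 3), (4, 5, 6)
# Same as the component formula for the vector product
assert cofactors_first_column(x, y) == (2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4)
```

The middle cofactor carries the minus sign of the expansion, which is precisely the "middle sign" in the component formula.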
In practice now, in order to get familiar with the vector products, nothing better than doing some classical mechanics. We have here the following key result:
{{proofcard|Theorem|theorem-1|In the gravitational <math>2</math>-body problem, the angular momentum
<math display="block">
J=x\times p
</math>
with <math>p=mv</math> being the usual momentum, is conserved.
|There are several things to be said here, the idea being as follows:
(1) First of all the usual momentum, <math>p=mv</math>, is not conserved, because the simplest solution is the circular motion, where the momentum gets rotated around. But this suggests precisely that, in order to fix the lack of conservation of the momentum <math>p</math>, what we have to do is to take the vector product with the position <math>x</math>. This leads to <math>J</math>, as above.
(2) Regarding now the proof, consider indeed a particle <math>m</math> moving under the gravitational force of a particle <math>M</math>, assumed, as usual, to be fixed at <math>0</math>. The force, and hence the acceleration <math>a</math>, is then parallel to the position <math>x</math>. By using the fact that for two proportional vectors, <math>p\sim q</math>, we have <math>p\times q=0</math>, we obtain:
<math display="block">
\begin{eqnarray*}
\dot{J}
&=&\dot{x}\times p+x\times\dot{p}\\
&=&v\times mv+x\times ma\\
&=&m(v\times v+x\times a)\\
&=&m(0+0)\\
&=&0
\end{eqnarray*}
</math>
Now since the derivative of <math>J</math> vanishes, this quantity is constant, as stated.}}
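As an illustration, the conservation of <math>J</math> can also be observed numerically. The sketch below integrates <math>\ddot{x}=-x/||x||^3</math>, in units where <math>GM=1</math>, with a symplectic Euler step; this integrator is chosen here because, for a central force, it happens to conserve angular momentum exactly, up to rounding. The initial data is an arbitrary choice, a slightly eccentric, slightly inclined bound orbit.

```python
def cross(x, y):
    return [x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0]]

def simulate(x, v, m=1.0, dt=1e-4, steps=50_000):
    """Integrate x'' = -x/||x||^3 (units GM = 1) with symplectic Euler,
    returning J = m (x cross v) before and after."""
    J0 = [m*c for c in cross(x, v)]
    for _ in range(steps):
        r3 = sum(t*t for t in x) ** 1.5
        v = [vi - dt*xi/r3 for vi, xi in zip(v, x)]   # kick
        x = [xi + dt*vi for xi, vi in zip(x, v)]      # drift
    return J0, [m*c for c in cross(x, v)]

J0, J1 = simulate(x=[1.0, 0.0, 0.1], v=[0.0, 0.9, 0.05])
assert max(abs(a - b) for a, b in zip(J0, J1)) < 1e-8
```

In contrast, the energy is only approximately conserved by this scheme, which makes the exact conservation of <math>J</math> stand out nicely.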
While the above result looks like something quite trivial, the mathematics behind it is quite interesting, and has several notable consequences, as follows:
{{proofcard|Theorem|theorem-2|In the context of a <math>2</math>-body problem, the following happen:
<ul><li> The fact that the direction of <math>J</math> is fixed tells us that the trajectory of one body with respect to the other lies in a plane.
</li>
<li> The fact that the magnitude of <math>J</math> is fixed tells us that the second Kepler law holds, namely that equal areas are swept by <math>Ox</math> in equal times.
</li>
</ul>
|This follows indeed from Theorem 15.5, as follows:
(1) We have by definition <math>J=m(x\times v)</math>, and since a vector product is orthogonal to both the vectors it comes from, we deduce from this that we have:
<math display="block">
J\perp x,v
</math>
But this can be written as follows, with <math>J^\perp</math> standing for the plane orthogonal to <math>J</math>:
<math display="block">
x,v\in J^\perp
</math>
Now since <math>J</math> is fixed by Theorem 15.5, we conclude that both <math>x,v</math>, and in particular the position <math>x</math>, and so the whole trajectory, lie in this fixed plane <math>J^\perp</math>, as claimed.
(2) Conversely now, forget about Theorem 15.5, and assume that the trajectory lies in a certain plane <math>E</math>. Thus <math>x\in E</math>, and by differentiating we get <math>v\in E</math> too, and so <math>x,v\in E</math>. Thus <math>E=J^\perp</math>, and so the direction of <math>J</math>, which is that of <math>E^\perp</math>, is fixed, as claimed.
(3) Regarding now the last assertion, we already know from the various formulae in chapter 11 that the second Kepler law is more or less equivalent to the formula <math>\dot{\theta}=\lambda/r^2</math>. However, the derivation of <math>\dot{\theta}=\lambda/r^2</math> was something tricky, and what we want to prove now is that this appears as a simple consequence of <math>||J||</math> being constant.
(4) In order to do so, let us compute <math>J</math>, according to its definition <math>J=x\times p</math>, but in polar coordinates, which will change everything. Since <math>p=m\dot{x}</math>, we have:
<math display="block">
J=r\begin{pmatrix}\cos\theta\\ \sin\theta\\ 0\end{pmatrix}
\times m\begin{pmatrix}
\dot{r}\cos\theta-r\sin\theta\cdot\dot{\theta}\\
\dot{r}\sin\theta+r\cos\theta\cdot\dot{\theta}\\
0\end{pmatrix}
</math>
Now recall from the definition of the vector product that we have:
<math display="block">
\begin{pmatrix}a\\b\\0\end{pmatrix}
\times\begin{pmatrix}c\\d\\0\end{pmatrix}
=\begin{pmatrix}0\\0\\ad-bc\end{pmatrix}
</math>
Thus <math>J</math> is a vector of the above form, with its last component being:
<math display="block">
\begin{eqnarray*}
J_z
&=&mr\begin{vmatrix}
\cos\theta&\dot{r}\cos\theta-r\sin\theta\cdot\dot{\theta}\\
\sin\theta&\dot{r}\sin\theta+r\cos\theta\cdot\dot{\theta}
\end{vmatrix}\\
&=&mr\cdot r(\cos^2\theta+\sin^2\theta)\dot{\theta}\\
&=&mr^2\dot{\theta}
\end{eqnarray*}
</math>
(5) Now with the above formula in hand, our claim is that the magnitude <math>||J||</math> is constant precisely when <math>\dot{\theta}=\lambda/r^2</math>, for some <math>\lambda\in\mathbb R</math>. Indeed, since the direction of <math>J</math> is fixed, and since its orientation is a binary parameter, which cannot suddenly switch, knowing <math>J</math> is the same as knowing <math>J_z</math>, which in turn is the same as knowing <math>||J||</math>. Thus, our claim is proved, and this leads to the conclusion in the statement.}}
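The key identity in step (4), namely <math>J_z=mr^2\dot{\theta}</math>, can be double-checked numerically, by plugging the polar expressions for <math>x</math> and <math>p</math> into the cross product, for a few arbitrary values of <math>r,\theta,\dot{r},\dot{\theta}</math>:

```python
import math

def Jz_cartesian(r, th, rdot, thdot, m=1.0):
    """z component of J = x cross p, computed from the Cartesian
    expressions of the polar position and velocity."""
    x  = (r*math.cos(th), r*math.sin(th))
    xd = (rdot*math.cos(th) - r*math.sin(th)*thdot,
          rdot*math.sin(th) + r*math.cos(th)*thdot)
    # for planar vectors, the z component of x cross p is x1*p2 - x2*p1
    return x[0]*m*xd[1] - x[1]*m*xd[0]

for (r, th, rdot, thdot) in [(1.0, 0.3, 0.2, 0.7), (2.5, 4.0, -0.1, 0.4)]:
    # agrees with m r^2 thetadot (here m = 1)
    assert abs(Jz_cartesian(r, th, rdot, thdot) - r*r*thdot) < 1e-12
```

The cancellation of the <math>\dot{r}</math> terms is exactly the <math>\cos^2\theta+\sin^2\theta=1</math> simplification from the computation above.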
As another basic application of the vector products, still staying with classical mechanics, we have all sorts of useful formulae regarding rotating frames. We first have:
{{proofcard|Theorem|theorem-3|Assume that a <math>3D</math> body rotates around an axis, with angular speed <math>w</math>. For a fixed point of the body, with position vector <math>x</math>, the usual <math>3D</math> speed is
<math display="block">
v=\omega\times x
</math>
where <math>\omega=wn</math>, with <math>n</math> being the unit vector of the rotation axis, pointing North. When the point moves with respect to the body,
<math display="block">
V=\dot{x}+\omega\times x
</math>
is its speed, as computed by an inertial observer <math>O</math> on the rotation axis.
|We have two assertions here, both requiring some 3D thinking, as follows:
(1) Assuming that the point is fixed on the body, the magnitude of <math>\omega\times x</math> is the correct one, due to the following computation, with <math>\theta</math> being the angle between <math>x</math> and the axis, and <math>r</math> being the distance from the point to the axis:
<math display="block">
||\omega\times x||=w||x||\sin\theta=wr=||v||
</math>
As for the orientation of <math>\omega\times x</math>, this is the correct one as well, because the North pole rule used above amounts to applying the right-hand rule for finding <math>n</math>, and so <math>\omega</math>, and this right-hand rule was precisely the one used in defining the vector products <math>\times</math>.
(2) Next, when the point moves with respect to the body, the inertial observer <math>O</math> can compute its speed by using a frame <math>(u_1,u_2,u_3)</math> which rotates with the body, so that <math>\dot{u}_i=\omega\times u_i</math>, as follows:
<math display="block">
\begin{eqnarray*}
V
&=&\dot{x}_1u_1+\dot{x}_2u_2+\dot{x}_3u_3+x_1\dot{u}_1+x_2\dot{u}_2+x_3\dot{u}_3\\
&=&\dot{x}+(x_1\cdot\omega\times u_1+x_2\cdot\omega\times u_2+x_3\cdot\omega\times u_3)\\
&=&\dot{x}+\omega\times(x_1u_1+x_2u_2+x_3u_3)\\
&=&\dot{x}+\omega\times x
\end{eqnarray*}
</math>
Thus, we are led to the conclusions in the statement.}}
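For a fixed point of the body, the formula <math>v=\omega\times x</math> can also be verified numerically, by differentiating the rotated trajectory with a central difference and comparing with <math>\omega\times x</math>. The sketch below takes the rotation axis to be the <math>z</math> axis; the angular speed and the sample point are arbitrary choices:

```python
import math

w = 0.8                       # angular speed (arbitrary)
omega = (0.0, 0.0, w)         # omega = w n, axis = z axis

def position(t, x0):
    """Position at time t of the body point starting at x0,
    for rotation about the z axis with angular speed w."""
    c, s = math.cos(w*t), math.sin(w*t)
    return (c*x0[0] - s*x0[1], s*x0[0] + c*x0[1], x0[2])

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

x0, t, h = (1.0, 2.0, 3.0), 0.5, 1e-6
x = position(t, x0)
# central-difference numerical derivative of the trajectory
xm, xp = position(t - h, x0), position(t + h, x0)
v_num = tuple((p - m) / (2*h) for p, m in zip(xp, xm))
v_formula = cross(omega, x)
assert all(abs(a - b) < 1e-6 for a, b in zip(v_num, v_formula))
```

Note that <math>\omega\times x</math> here has vanishing <math>z</math> component, as it should: the point moves on a horizontal circle around the axis.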
Regarding now the acceleration, the result here, which is famous, is as follows:
{{proofcard|Theorem|theorem-4|Assuming as before that a <math>3D</math> body rotates around an axis, the acceleration of a moving point on the body, computed by <math>O</math> as before, is given by
<math display="block">
A=a+2\omega\times v+\omega\times(\omega\times x)
</math>
with <math>\omega=wn</math> being as before. In this formula the second term is called Coriolis acceleration, and the third term is called centripetal acceleration.
|This comes by using twice the formulae in Theorem 15.7, together with the fact that the rotation is uniform, so that <math>\dot{\omega}=0</math>, as follows:
<math display="block">
\begin{eqnarray*}
A
&=&\dot{V}+\omega\times V\\
&=&(\ddot{x}+\dot{\omega}\times x+\omega\times\dot{x})+(\omega\times\dot{x}+\omega\times(\omega\times x))\\
&=&\ddot{x}+\omega\times\dot{x}+\omega\times\dot{x}+\omega\times(\omega\times x)\\
&=&a+2\omega\times v+\omega\times(\omega\times x)
\end{eqnarray*}
</math>
Thus, we are led to the conclusion in the statement.}}
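Here is a numerical check of the full formula <math>A=a+2\omega\times v+\omega\times(\omega\times x)</math>, in the same style as before: we pick an arbitrary smooth motion <math>X(t)</math> in the rotating frame, build the inertial trajectory <math>x(t)=R(t)X(t)</math> for a rotation about the <math>z</math> axis, and compare finite differences on both sides. All numbers here are arbitrary illustration choices.

```python
import math

w = 0.7
omega = (0.0, 0.0, w)

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def R(t, X):
    """Rotation about the z axis: rotating-frame to inertial coordinates."""
    c, s = math.cos(w*t), math.sin(w*t)
    return (c*X[0] - s*X[1], s*X[0] + c*X[1], X[2])

def X(t):
    """An arbitrary smooth motion, written in the rotating frame."""
    return (1.0 + 0.2*t, 0.5*math.sin(t), 0.3*t)

def x(t):
    """The corresponding inertial-frame trajectory."""
    return R(t, X(t))

def d1(f, t, h=1e-5):   # central first difference
    return tuple((p - m)/(2*h) for p, m in zip(f(t+h), f(t-h)))

def d2(f, t, h=1e-4):   # central second difference
    return tuple((p - 2*z + m)/h**2
                 for p, z, m in zip(f(t+h), f(t), f(t-h)))

t = 0.9
A   = d2(x, t)           # inertial acceleration
a   = R(t, d2(X, t))     # rotating-frame acceleration, in inertial coords
v   = R(t, d1(X, t))     # rotating-frame velocity, in inertial coords
cor = tuple(2*c for c in cross(omega, v))
cen = cross(omega, cross(omega, x(t)))
rhs = tuple(p + q + s for p, q, s in zip(a, cor, cen))
assert all(abs(p - q) < 1e-4 for p, q in zip(A, rhs))
```

The tolerance is loose only because of the finite-difference error; the identity itself is exact.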
The truly famous result is actually the one regarding forces, obtained by multiplying everything by the mass <math>m</math>, and writing things the other way around, as follows:
<math display="block">
ma=mA-2m\omega\times v-m\omega\times(\omega\times x)
</math>
Here the second term is called Coriolis force, and the third term is called centrifugal force. These forces are both called apparent, or fictitious, because they do not exist in the inertial frame, but they do exist in the non-inertial, rotating frame of reference, as explained above. And of course, the terms centrifugal and centripetal are not to be mixed up.
In fact, even more famous is the terrestrial application of all this, as follows:
{{proofcard|Theorem|theorem-5|The acceleration of an object <math>m</math> subject to a force <math>F</math> is given by
<math display="block">
ma=F-mg-2m\omega\times v-m\omega\times(\omega\times x)
</math>
with <math>g</math> pointing upwards, and with the last terms being the Coriolis and centrifugal forces.
|This follows indeed from the above discussion, by assuming that the acceleration <math>A</math> there comes from the combined effect of a force <math>F</math>, and of the usual gravity <math>g</math>.}}
We refer to any standard undergraduate mechanics book, such as Feynman <ref name="fe1">R.P. Feynman, R.B. Leighton and M. Sands, The Feynman lectures on physics I: mainly mechanics, radiation and heat, Caltech (1963).</ref>, Kibble <ref name="kbe">T. Kibble and F.H. Berkshire, Classical mechanics, Imperial College Press (1966).</ref> or Taylor <ref name="tay">J.R. Taylor, Classical mechanics, Univ. Science Books (2003).</ref> for more on the above, including various numerics on what happens here on Earth, the Foucault pendulum, the history of all this, and many other things. Let us just mention here, as a basic illustration for all this, that a rock dropped from 100m deviates about 1cm from its intended target, due to the formula in Theorem 15.9.
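As a quick check of the order of magnitude just quoted, recall the standard leading-order result for the eastward Coriolis deflection of an object dropped from height <math>h</math>, namely <math>d=\frac{1}{3}\,\omega g\cos\lambda\cdot t^3</math> with <math>t=\sqrt{2h/g}</math>, where <math>\lambda</math> is the latitude. The latitude of <math>45^\circ</math> below is an assumption for illustration; one gets roughly 1.5cm, so "about 1cm" is indeed the right order of magnitude:

```python
import math

w   = 7.292e-5             # Earth's angular speed, rad/s
g   = 9.81                 # m/s^2
h   = 100.0                # drop height, m
lat = math.radians(45)     # mid-latitude (an assumed value)

t = math.sqrt(2*h/g)       # fall time, about 4.5 s
d = w * g * math.cos(lat) * t**3 / 3   # eastward deflection, m
assert 0.005 < d < 0.05    # a centimeter-scale deflection
```

Note the <math>t^3</math> dependence: doubling the drop height multiplies the deflection by <math>2\sqrt{2}</math>, not by <math>2</math>.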
==General references==
{{cite arXiv|last1=Banica|first1=Teo|year=2024|title=Calculus and applications|eprint=2401.00911|class=math.CO}}
==References==
{{reflist}}