Dozensonline > General > Tau Vs. Pi


I suppose I should go ahead and kick this off here before too many other threads get infected with this debate. :)

So, the background is that for thousands of years mathematicians have been fascinated with the ratio of the circumference of a circle to its diameter, 3.1415926... (decimal). It's an irrational and in fact transcendental number. Although it wasn't specifically named until relatively recently (17th-18th century), it was well known, and mathematicians as early as Archimedes mounted efforts to estimate it. The Babylonians may even have estimated it.

It's not surprising that a diameter would be the first rectilinear measure of a circle that people would go to, since it's easy to estimate by feel, or by sliding a ruler over a circle and finding the widest chord length. However, Euclid himself realized that the way to define a circle is as a set of points that are all a constant distance, known as its *radius*, from a given point, known as its center. And of course, the radius of a circle is only half of its diameter. Characterizing a circle as a shape with a constant diameter actually turns out not to be definitive, because there are, in fact, shapes that have constant width ("diameter") that are nevertheless not circles. But finding the center and radius of a circle actually takes a bit of construction with compass and straightedge.

In the 17th and 18th centuries various mathematicians began using \(\pi\) as a symbol to stand for this ratio, and it was finally popularized widely by Leonhard Euler. But at the same time, the development of calculus and the refinement of trigonometry, with its generalization to the unit circle in the complex plane, expanded the importance of the radius as the definitive rectilinear measure of a circle. The radian, the angle subtended by an arc length of one radius, became important as a unit of angular measure. So \(\pi\) wound up being defined as "half the periphery of a circle of radius 1", and the number of radians in a full circle wound up being \(2\pi\) = 6.28318531... (decimal).

Over the last 20 years, a number of mathematicians have called into question whether \(\pi\) is the most convenient constant to use in mathematics when circularity is involved, and have suggested that \(\tau = 2\pi\), the ratio of a circle's circumference to its radius, is a better choice. Most recently, Michael Hartl has posted a Tau Manifesto that attempts to make the case against \(\pi\) and in favor of \(\tau\).

But not everyone agrees with this assessment. This turns out to be a rather controversial subject, and the arguments from both sides have tended to get a bit heated. :) But I'm opening up this thread as a forum where this topic may be discussed in a sober and measured way. I think the ground rules should be that any participants refrain from casting doubt on each other's sanity or character, and limit the discussion to the realms of mathematics, and perhaps the philosophy of mathematics. Also, I think it important that when a participant makes an assertion of truth, they should endeavor to demonstrate their point as rigorously as they can, supporting it with any necessary diagrams and algebra.

So ... release the hounds! :)


This point has been nagging me for a while now. See my little "halvity" parable in this post for the context of this comment from dgiii:

QUOTE (dgoodmaniii @ Jun 8 2012, 01:07 PM)

Except that half-gravity isn't a whole anything. Pi radians, on the other hand, is a whole angle, a full reversal, a straight line; as you pointed out, also known as a straight angle. It's a real thing. So the situations aren't remotely analogous.

They are, in fact, exactly analogous.

An acceleration of 4.903325 m/s² (decimal) -- half of standard gravity -- is a perfectly real thing too. And we can devise situations where, in the proper context, talking about it might be relevant.

But here's the key point: Talking about that particular acceleration would have absolutely no relevance to the situation we were actually considering, which is an object in free fall under gravity. In that context, the quadratic form \[d = g\left(\frac{1}{2} t^2\right)\]is the relevant formula. And, as we know from integral calculus, the \(\frac{1}{2}\) coefficient is an inherent by-product of integrating from a linear form (\(v = gt\)) to a quadratic form, and has nothing to do with any constant such as \(g\) that might be involved. The little game of combining the half with the \(g\) into a mythical \(h\) for "halvity" was just an algebraic trick, one that my fictitious "halvitists" exploited, just so they could avoid doing a division by 2. If you were annoyed by how smugly they congratulated themselves on the "elegance" and "beauty" of their solution, then you would be perfectly justified, and I would consider my little parable successful in its purpose.
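The parable's claim can be checked numerically: the \(\frac{1}{2}\) shows up for *any* acceleration, because it comes from the integration step, not from anything special about \(g\). A minimal Python sketch (the 3-second fall and the step count are arbitrary choices of mine):

```python
# Sketch: the 1/2 in d = g*(1/2 t^2) comes from integrating v = g*t,
# not from anything special about g itself.
g = 9.80665   # standard gravity, m/s^2 (decimal)
t_end = 3.0   # seconds of free fall
steps = 1_000_000
dt = t_end / steps

d = 0.0
for k in range(steps):
    t = (k + 0.5) * dt      # midpoint rule
    d += g * t * dt         # integrate v(t) = g*t

closed_form = g * 0.5 * t_end ** 2
print(d, closed_form)       # the two agree to rounding error
```

Swap any other constant in for `g` and the \(\frac{1}{2}\) still appears.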

Now, the situation with the area of a circle is exactly analogous. Yes, a semicircle or straight angle is a real thing. Yes, it characterizes an amount of rotation equivalent to reversing one's direction. And yes, the number of radians in a semicircle happens to be called \(\pi\). And we can devise situations where, in the proper context, talking about a semicircle might be relevant.

But! -- and this is the key point: Talking about that particular angle would have absolutely no relevance to the situation we were actually considering here, which is calculating the area of a circle. In that context, the quadratic form \[A = \tau\left(\frac{1}{2} r^2\right)\]is the relevant formula. And, just as before, the \(\frac{1}{2}\) coefficient is an inherent by-product of integrating from a linear form (\(C = \tau r\)) to a quadratic form, and has nothing to do with any constant such as \(\tau\) that might be involved.

But really, combining the \(\frac{1}{2}\) with the \(\tau\) to make a \(\pi\) here would have nothing to do with semicircles. It would simply be an algebraic trick, as much of a trick for \(\pi\)-ists as the \(h\) was for the halvitists. All that it would be good for is to avoid doing one division by 2. But it would disconnect this formula from its derivation, disconnect it from intimately related formulas such as the one for the area of a circular sector:\[A = \theta\left(\frac{1}{2} r^2\right)\]in exactly the same way that a halvity form of the gravity formula would disconnect it from the general formula for acceleration:\[d = a\left(\frac{1}{2} t^2\right)\]Whether we're talking halvity or \(\pi\), that algebraic trick obscures what is really going on, making it incrementally harder for students to learn and understand, incrementally harder for even the experienced to gain insight into how such a formula connects with other related formulas and even with other branches of mathematics and science.
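To see the \(\frac{1}{2}\) emerge from the integration rather than from any circle constant, here's a short numerical sketch (radius and step count are arbitrary): summing thin rings of circumference \(\tau r\) recovers \(A = \tau\left(\frac{1}{2} r^2\right)\).

```python
import math

tau = 2 * math.pi

# Build the circle's area from thin rings of circumference tau*r
# and thickness dr; the 1/2 falls out of this integration step.
R = 2.0
steps = 1_000_000
dr = R / steps

area = 0.0
for k in range(steps):
    r = (k + 0.5) * dr      # midpoint of each ring
    area += tau * r * dr    # ring area ~= circumference * thickness

print(area, 0.5 * tau * R ** 2)   # both ~12.566 for R = 2
```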

For instance, if we make this \(\frac{1}{2}\) disappear, so that we have no evidence that an integration step occurred, what are we to make of the \(\frac{1}{3}\) in the formula for the volume of a sphere? \[V_3 = \frac{4\pi}{3} r^3\] In fact, it comes from doing an integration step to a cubic form. But the \(2\) accompanying the \(\pi\) just can't cancel out that \(3\), so the "halvitist" trick isn't available this time. But all it would take would be some creativity in our choice of circle constants:\[V_3 = \frac{2}{3}\tau\ r^3 = 2\ \tau_3\ r^3\]Here I exploit "tertiantau", a circle constant we can associate with the exterior angle of an equilateral triangle (the number of radians in \(120^{\circ}\)). We can use the fact that \(\tau = 3\ \tau_3\), and get the \(3\) to cancel out the \(\frac{1}{3}\), saving ourselves the nuisance of doing that division all the time. But an angle of \(120^{\circ}\) has no more to do with the volume of a sphere, than an angle of \(180^{\circ}\) has to do with the area of a circle.

Seriously though, if you really want to save people unnecessary calculation steps when computing the volumes of a lot of 3-spheres, just compute that coefficient once, locally, and reuse it:\[V_3 = \beta_3\ r^3\]\[\beta_3 = \frac{2}{3}\tau \approx 4.188790205\]But the same idea can apply to the area of the circle, which is the "volume" of the "2-sphere":\[V_2 = \beta_2\ r^2\]\[\beta_2 = \frac{1}{2}\tau \approx 3.141592654\]The fact that the coefficient at this dimension level happens to equal the angle of a semicircle is a pure coincidence. It has nothing to do with semicircles.
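Here's the "compute the coefficient once, locally, and reuse it" idea as a Python sketch (the names `beta_2`/`beta_3` are just my rendering of \(\beta_2\)/\(\beta_3\)):

```python
import math

tau = 2 * math.pi

# Local convenience coefficients -- cached once, not new "constants".
beta_2 = tau / 2        # area coefficient for the circle
beta_3 = 2 * tau / 3    # volume coefficient for the sphere

def circle_area(r):
    return beta_2 * r ** 2

def sphere_volume(r):
    return beta_3 * r ** 3

print(circle_area(1.0))    # = pi
print(sphere_volume(1.0))  # = 4*pi/3
```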

The 'beta' constants make surface area computation more convenient!

Circle

\( \beta_2 r^2\) -- Area

\( 2 \beta_2 r\) -- Perimetre

Sphere

\( \beta_3 r^3\) -- Volume

\( 3 \beta_3 r^2\) -- Surface Area

4D Hypersphere

\( \beta_4 r^4\) -- 'Hypervolume' of 4D space occupied by the hypersphere

\( 4 \beta_4 r^3\) -- Volume of 3D surface of the hypersphere

Expressed in terms of \(\tau\) and \(\pi\), they are

\(\beta_2 = \frac{\tau}{2}\ = \pi\)

\(\beta_3 = \frac{2 \tau}{3}\ = \frac{4 \pi}{3}\)

\(\beta_4 = \frac{\tau^2}{8}\ = \frac{\pi^2}{2}\)

\(\beta_5 = \frac{2 \tau^2}{13}\ = \frac{8 \pi^2}{13}\)

Should we use gamma for the surface coefficients?

\(\gamma_2 = \tau\ = 2 \pi\)

\(\gamma_3 = 2 \tau\ = 4 \pi\)

\(\gamma_4 = \frac{\tau^2}{2}\ = 2 \pi^2 \)

\(\gamma_5 = \frac{2 \tau^2}{3}\ = \frac{8 \pi^2}{3}\)

The odd-numbered dimensions have a consistent 2 in the numerator and the double factorial in the denominator on the \(\tau\) side, and the even-numbered dimensions likewise follow double factorials. That can't be said of the \(\pi\) side, nor of any other rational multiple of \(\tau\)!
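A quick numerical check of the double-factorial pattern (all numerals in the code are decimal, since Python is a decimal creature), comparing against the standard gamma-function closed form for the n-ball volume coefficient:

```python
import math

tau = 2 * math.pi

def double_factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def beta(n):
    # The pattern above: tau^floor(n/2) / n!!, doubled for odd n.
    return tau ** (n // 2) / double_factorial(n) * (n % 2 + 1)

def beta_gamma(n):
    # Standard closed form for the n-ball volume coefficient:
    # pi^(n/2) / Gamma(n/2 + 1).
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(1, 8):
    print(n, beta(n), beta_gamma(n))   # the two columns agree
```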

For the surface area of a sphere, one might suppose that \(\tau\) is half of the 'true' sphere constant \(\gamma_3\), much as \(\pi\) compares with \(\tau\) for the circle. They might say that \(\tau\) represents *only* a hemisphere while \(\gamma_3\) is the *entire* sphere. Then why not use \(\gamma_3\) as far as the surface area of spheres is concerned? But then the \(\tau\)-ists object, citing the area of a spherical cap/sector:

\[A = \tau r h\]

Another win for \(\tau\), this time over its double rather than its half! So how is the surface area of a sphere \(2 \tau r^2\) given the above? That's because for the entire sphere the 'height' is \(h = 2 r\), so it becomes \(2 \tau r^2\) ... it's simply the result of integration through the *entire* diametre instead of simply the radius! Yet it didn't involve any hemisphere. The \(\gamma_3\) seems like just a trick to hide the factor of two, much like how \(\pi\) hides the factor of one-half in the area of a circle.
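The claim that every slice of height \(dz\) contributes \(\tau r\, dz\) (Archimedes' result) can be verified by integrating the surface of revolution directly. A sketch (the cap height 0.4 is an arbitrary choice of mine):

```python
import math

tau = 2 * math.pi

# Numerical check of the cap-area formula A = tau * r * h.
# Integrate the surface of revolution of x = sqrt(r^2 - z^2).
r = 1.0
h = 0.4            # cap height, measured down from the pole z = r
steps = 1_000_000
dz = h / steps

area = 0.0
for k in range(steps):
    z = r - (k + 0.5) * dz            # midpoint of each slice
    x = math.sqrt(r * r - z * z)      # radius of the slice
    ds = (r / x) * dz                 # arc length: sqrt(1 + (dx/dz)^2) dz
    area += tau * x * ds              # band area = circumference * arc length

print(area, tau * r * h)   # the x factors cancel: every slice gives tau*r*dz
```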

So maybe \(\tau\) is the best idea for a circle constant *and* a sphere constant...??


For the odd dimensions we have to go all the way back to Lineland, one dimension, to see where the 2 came from. When you have a center point at zero on the number line, and a radius, that defines a "surface" consisting of just two points that solve the equation:

[dohtml]<table cellspacing=10><tr><td>\[x^2 = r^2\]</td><td>which is the \(n=1\) case of the general equation for an n-sphere:</td><td>\[\sum_{i=1}^{n} x_i^2 = r^2\]</td></tr></table>[/dohtml]

The solutions for \(n=1\) are of course \[x = \pm{}r\] The "volume" on the other hand is the size of the set of points that solves the inequality

[dohtml]<table cellspacing=10><tr><td>\[x^2 \le r^2\]</td><td>which is the \(n=1\) case of the general inequality for an n-ball:</td><td>\[\sum_{i=1}^{n} x_i^2 \le r^2\]</td></tr></table>[/dohtml]

That means to get this "volume" (the length of the line segment) you have to integrate from both \(+r\) and \(-r\) \[\int_0^r \operatorname{d}\!r + \int_{-r}^0 \operatorname{d}\!r = 2 \int_0^r \operatorname{d}\!r = 2r\]This has nothing to do with \(\tau\) or \(\pi\) yet.

It's only when you subsequently add pairs of dimensions orthogonal to this line that you get rotations that involve circle constants. But each pair of dimensions will throw in another power of \(\tau\ r\) to generate the next surface area (2 dimensions up), and then \(\frac{r}{n}\) from an integration step to get the associated volume. The original 2 does explain why, for instance, in three dimensions you have to integrate over two hemispheres, but the 2 itself doesn't have anything to do with the powers of \(\tau\) coming in from the orthogonal dimensions, so there's no need for a new "sphere constant". But powers of \(\tau\) are common to all the dimensions because every time you get a new axis of rotation, you get a new opportunity to integrate a new \(\theta\) from \(0\) to \(\tau\).

As for what these \(\beta_n\) and \(\gamma_n\) coefficients are all about (I would have called the latter \(\alpha_n\) for "area coefficient", to go with \(\beta_n\) for "volume coefficient"), they're just local conveniences for presentation or calculation purposes. They're not meant to be enshrined with any kind of significance other than that, or used anywhere else. The formulas with \(\tau\) that you see in this post are what you want to work with as definitions; or better yet, summary formulas like:

[dohtml]

<table cellspacing=10>

<tr><td>

\[A_n = \alpha_n\ r^{n-1}\]

</td><td><td>

\[V_n = \beta_n\ r^{n}\]

</td></tr>

<tr><td><td align=center>where</td><td></tr>

<tr><td>

\[\alpha_n = \frac{\tau^{\lfloor\frac{n}{2}\rfloor}}{\left(n - 2\right)!!} \times \left(n \operatorname{mod} 2 + 1\right)\]

</td><td><td>

\[\beta_n = \frac{\tau^{\lfloor\frac{n}{2}\rfloor}}{n !!} \times \left(n \operatorname{mod} 2 + 1\right)\]

</td></tr>

</table>

[/dohtml]

with recursive derivation:

[dohtml]

<table cellspacing=10>

<tr><td>

\[\beta_0 = 1\]

<td><td>

\[\alpha_1 = 2\]

<tr>

<td>

\[\alpha_n = \beta_{n-2}\ C\]

<td>

<td>

\[\beta_n = \frac{\alpha_n}{n}\]

</tr>

<tr>

<td>\[C = \tau\ r\]</td></tr>

</table>

[/dohtml]
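The recursion translates almost line-for-line into code. A sketch, working per unit radius so that the circumference factor \(C = \tau r\) reduces to just \(\tau\):

```python
import math

tau = 2 * math.pi

# The recursion above: alpha_n = beta_{n-2} * tau (per unit radius),
# beta_n = alpha_n / n, seeded with beta_0 = 1 and alpha_1 = 2.
def coefficients(n_max):
    beta = {0: 1.0}
    alpha = {1: 2.0}
    beta[1] = alpha[1] / 1
    for n in range(2, n_max + 1):
        alpha[n] = beta[n - 2] * tau
        beta[n] = alpha[n] / n
    return alpha, beta

alpha, beta = coefficients(5)
print(beta[2])   # pi      -> circle area coefficient
print(alpha[3])  # 2*tau   -> sphere surface coefficient (= 4*pi)
print(beta[3])   # 2*tau/3 -> sphere volume coefficient (= 4*pi/3)
```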

EDIT: Happy Decimal Tau Day, by the way, folks! Hartl has updated his Tau Manifesto today and he's incorporated a version of this treatment of n-sphere areas and volumes (with a footnote referencing yours truly). :) In it, he also comes up with names for these coefficients, but he chose \(\tau_n\) for \(\alpha_n\) and \(\sigma_n\) for \(\beta_n\).

EDIT: Actually, it turns out the terminology I've been using is non-standard and not technically precise. I've been using "n-sphere" to refer to both "n-spheres" and "n-balls" interchangeably, and I've been giving them dimension numbers "n" according to the dimensionality of the space they live in. Which I guess is relatively intuitive because no one has called me on it. But technically an n-dimensional space is the home for an (n-1)-sphere and an associated n-ball. The (n-1)-sphere is the outline or surface of the n-ball, the n-ball includes the (n-1)-sphere plus all the interior points. (n-1)-spheres have "surface areas", n-balls have "volumes". They get dimension numbers according to the topological dimension they exhibit, not the space they live in. So for instance, in 2 dimensions the circumference of a circle is the 1-sphere, but the whole enclosed disc is the 2-ball. In 3 dimensions the surface of a sphere is the 2-sphere but the entire spherical volume is the 3-ball. Confusing, and a little counter-intuitive, I know, but that's the way the math folks have set it up. I guess when you read my stuff above, you'll just have to understand that when I say "n-sphere" I mean "n-ball or associated (n-1)-sphere."


QUOTE

EDIT: Happy Decimal Tau Day, by the way, folks!

"Happy" and "decimal" are mutually exclusive, aren't they?

QUOTE

Which I guess is relatively intuitive because no one has called me on it.

No such thing; "intuitive" is just an objective-sounding way of saying "I like it." Your way of referring to them makes sense; it's logically consistent, and consequently people don't have trouble following it. But the notion that one way of referring to something is more "intuitive," in the sense of objectively easier to understand, I find meaningless. Something is either correct or not; it either leads to incorrect conceptions or not.

QUOTE (dgoodmaniii @ Jun 29 2012, 02:22 PM)

No such thing; "intuitive" is just an objective-sounding way of saying "I like it." Your way of referring to them makes sense; it's logically consistent, and consequently people don't have trouble following it. But the notion that one way of referring to something is more "intuitive," in the sense of objectively easier to understand, I find meaningless. Something is either correct or not; it either leads to incorrect conceptions or not.

I think you're parsing my words a little too fine. I didn't use non-standard terminology because it was my *preference*; it was simply the way the subject naturally presented itself to me.

Confusion and "intuitiveness" are purely a matter of context. In the context of thinking about shapes existing within spaces of n dimensions, which is where I was coming from (and I'm supposing most of us were), it seemed natural to number each shape by the dimensionality of the space it lives in.

We're used to talking about a "sphere" as a 3-dimensional object, because our eyes need to be (at least) 3-dimensional in order to visualize it. In the form of a 3-ball it most definitely is 3-dimensional. But its surface or boundary is known as the 2-sphere. That threw me for a bit, until I understood that, topologically, it only had 2 dimensions, because you could imagine 2-dimensional Flatlanders -- or should I say Spherelanders -- living within the surface, and considering it to be their space. They might discover their universe wasn't Euclidean: their "parallel" lines would converge; the sum of the interior angles of a large enough triangle would exceed a semicircle, and increasingly so with size; and if they traveled far enough, they would circumnavigate the universe and return to where they started. From that, they might intuit that their "2-dimensional" universe was actually an object embedded in a higher space with (at least) 3 dimensions, even if they could not imagine what it would look like.

So now I have to go back and patch up my treatment of this subject as best I can. How about this as an explanation of the notation I was using:

\(V_n\) = "volume" of an n-ball

\(A_n\) = "surface-area" of an n-ball

= "volume" of the n-ball's surface or boundary

= "volume" of the associated (n-1)-sphere

= \(S_{n-1}\) = a more standard symbol for this in the literature

From what I'm reading, "volume" seems to be used as the most general term, to describe the "size" of a given shape, thinking of it as a set of points. And then it's a matter of deciding exactly what shape you mean. Just the surface or boundary of something, or all the interior points as well?

QUOTE (Kodegadulo @ Jun 29 2012, 05:03 PM)

I think you're parsing my words a little too fine.

Yes, I'm sure I am; it's just part of my increasingly passionate campaign against the word "intuitive."

I first began to despise this word in the context of user interfaces, with Mac and Windows fanbois throwing back and forth about how "intuitive" their respective interfaces were without anything objective at all to back it up. This led me to believe that "intuitive" really meant whatever an individual was personally accustomed to. However, since then I've come to hate the word even more; it doesn't even mean what people are accustomed to, but rather simply "I like this." It's a way of dressing up a completely subjective opinion in objective clothing.

Most people who use this word don't mean it that way, and I know you didn't; from your context I could tell that what you meant by it was "easy to understand" or something along those lines. But the word is so often abused that I try to push it out of my own conversation, and consequently sometimes react to it in others'. Sorry; this gratuitous thread pollution is now over.

QUOTE (Kodegadulo @ Jun 29 2012, 05:03 PM)

From what I'm reading, "volume" seems to be used as the most general term, to describe the "size" of a given shape, thinking of it as a set of points. And then it's a matter of deciding exactly what shape you mean. Just the surface or boundary of something, or all the interior points as well?

It seems to vary from author to author.

Probably best to just select something for "boundary" and something for "internal space" and stick with it for all dimensions. I think your proposed solution works for that just fine.

There's even more to "Euler Identities" than meets the ... eye. :) I've remarked before how silly it is that there is all this hype over the equation \[e^{i\pi} + 1 = 0\] otherwise known as "the" Euler Identity (so-called, as if there's only one). People marvel over how it combines the "five most important numbers in mathematics" and the operations of "exponentiation, multiplication, and addition" in one equation -- yet they do so without gaining any new insight or understanding of what the equation *means*. It amounts to nothing more than a piece of curious pop numerological mumbo-jumbo. But if we can just peel off the obscurity caused by \(\pi\) there are interesting treasures we can dig up.

First off, as I've noted before, it's far more elucidating to look at the *Euler formula*: \[e^{i\theta} = \cos\theta + i\sin\theta\]which reveals that complex exponentiation is equivalent to a rotation around the unit circle, mapping an angle \(\theta\) of polar coordinates to real and imaginary rectilinear coordinates via the circle functions \(\sin\) and \(\cos\). And in that light, plugging in interesting values for \(\theta\) can be instructive:

[dohtml]

<table cellspacing=10 align=center>

<tr>

<th>__Identity__

<th>__Meaning__

<tr>

<td>

\[e^{i\tau} = 1\]

<td>A rotation of a full turn is unity.

<tr>

<td>

\[e^{i\tau/2} = -1\]

<td>A rotation of a half turn is negation.

<tr>

<td>

\[e^{i\tau/4} = i\]

<td>A rotation of a quarter turn is perpendicular.

<tr>

<td>

\[e^{i\tau\ k} = 1^k = 1\]

<td>Any integer number of whole turns is unity.

</table>

[/dohtml]
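All four identities in the table can be spot-checked with Python's cmath (the 3 whole turns is just an arbitrary choice of k):

```python
import cmath
import math

tau = 2 * math.pi

# Numerical check of the four identities in the table above.
full    = cmath.exp(1j * tau)        # ~  1: a full turn is unity
half    = cmath.exp(1j * tau / 2)    # ~ -1: a half turn is negation
quarter = cmath.exp(1j * tau / 4)    # ~  i: a quarter turn is perpendicular
turns_3 = cmath.exp(1j * tau * 3)    # ~  1: whole turns are unity

print(full, half, quarter, turns_3)
```

The tiny imaginary residues you'll see are just floating-point rounding, not mathematics.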

I find each of these equations interesting and revealing. But for some reason the second of these is considered "ugly" because somehow division by two and negation are "inelegant" operations, so burying the half in a \(\pi\) and doing a bit of algebraic rearrangement to make a negative seem positive is supposed to make the equation more "beautiful" and "elegant". But all that does is mask the true importance of that identity.

What is the significance of a division when it appears within an exponentiation? In other words, what is the meaning of \(z^{1/n}\)? The answer is:\[z^{1/n} = \sqrt[n]{z}\]That means dividing an exponent by \(n\) is the same as taking the \(n\)th root. But the Euler formula reveals that a complex exponent is equivalent to a rotation around the origin in the complex plane. So taking an \(n\)th root of a complex number is equivalent to dividing the angular portion of its polar coordinates by \(n\). What are the consequences of that?

If we start with a full circle of rotation \[e^{i\tau} = 1\] and divide the rotation in half we get \[e^{i\tau \cdot 1/2} = -1 = 1^{1/2} = \sqrt[2]{1}\]in other words, this reveals that the square root of unity is negation. Or rather, *a* square root of unity is negation. Because in fact if we take any number of whole turns\[e^{i\tau\ k} = 1\]and divide their rotations in half\[e^{i\tau \cdot k/2} = 1^{1/2}\] we see that the square roots of unity must include both

[dohtml]<table cellspacing=10 align=center><tr>

<td>\[e^{i\tau \cdot 1/2} = -1\]

<td>

<td>and

<td>

<td>\[e^{i\tau \cdot 2/2} = e^{i\tau} = 1\]

</table>[/dohtml]

We can confirm this by the fact that \[\left(e^{i\tau \cdot 1/2}\right)^2 = \left(-1\right)^2 = 1\] and \[\left(e^{i\tau \cdot 2/2}\right)^2 = \left(1\right)^2 = 1\]

But this means there is something very interesting buried unnoticed in the formula\[e^{i\pi} + 1 = 0\]because we can substitute equivalent terms to yield \[e^{i\tau \cdot 1/2} + e^{i\tau \cdot 2/2} = 0\]in other words, the sum of the square roots of unity, the multiplicative identity, is zero, the additive identity. Let's plot that on a unit circle:

[dohtml]<table align=center><tr><td>

</table>[/dohtml]

Hmm. What about the cube roots of unity? What would those be? And do they also add up to zero? Dividing up whole turns by 3 we get:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td colspan=5 align=center>

\[e^{i\tau\cdot k/3} = 1^{1/3}\]

<tr>

<td>

\[e^{i\tau\cdot 1/3} = -\frac{1}{2} + \frac{\sqrt{3}}{2}i\]

<td>

<td>

\[e^{i\tau\cdot 2/3} = -\frac{1}{2} - \frac{\sqrt{3}}{2}i\]

<td>

<td>

\[e^{i\tau\cdot 3/3} = 1\]

</table>

[/dohtml]

Let's plot them:

[dohtml]<table align=center><tr><td>

</table>[/dohtml]

Does this look familiar? That's right, those are the vertices of the equilateral triangle.

Let's confirm they're actually cube roots:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td>

\[\left(e^{i\tau\cdot 1/3}\right)^3 = \left(-\frac{1}{2} + \frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= \left(-\frac{1}{2}\right)^3 + 3 \left(-\frac{1}{2}\right)^2\left(\frac{\sqrt{3}}{2}i\right) + 3 \left(-\frac{1}{2}\right)\left(\frac{\sqrt{3}}{2}i\right)^2 + \left(\frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= -\frac{1}{8} + \frac{3\sqrt{3}}{8}i + \frac{9}{8} - \frac{3\sqrt{3}}{8}i = \frac{8}{8} = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 2/3}\right)^3 = \left(-\frac{1}{2} - \frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= \left(-\frac{1}{2}\right)^3 + 3 \left(-\frac{1}{2}\right)^2\left(-\frac{\sqrt{3}}{2}i\right) + 3 \left(-\frac{1}{2}\right)\left(-\frac{\sqrt{3}}{2}i\right)^2 + \left(\frac{-\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= -\frac{1}{8} - \frac{3\sqrt{3}}{8}i + \frac{9}{8} + \frac{3\sqrt{3}}{8}i = \frac{8}{8} = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 3/3}\right)^3 = \left(1\right)^3 = 1\]

</table>

[/dohtml]

Yes, that works. And what do they add up to?

[dohtml]

<table cellspacing=10 align=center>

<tr>

<td>

\[e^{i\tau\cdot 1/3} + e^{i\tau\cdot 2/3} + e^{i\tau\cdot 3/3}\]

<tr>

<td>

\[=\left(-\frac{1}{2} + \frac{\sqrt{3}}{2}i\right) + \left(-\frac{1}{2} - \frac{\sqrt{3}}{2}i\right) + 1\]

<tr>

<td>

\[=\left(-\frac{1}{2} - \frac{1}{2}\right) + \left(\frac{\sqrt{3}}{2}i - \frac{\sqrt{3}}{2}i\right) + 1\]

<tr>

<td>

\[=-1 + 0 + 1 = 0\]

</table>

[/dohtml]

They do add up to zero!

How about the fourth roots, where we divide up whole turns by 4? These are a bit easier:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td colspan=8 align=center>

\[e^{i\tau\cdot k/4} = 1^{1/4}\]

<tr>

<td>

\[e^{i\tau\cdot 1/4} = i\]

<td>

<td>

\[e^{i\tau\cdot 2/4} = -1\]

<td>

<td>

\[e^{i\tau\cdot 3/4} = -i\]

<td>

<td>

\[e^{i\tau\cdot 4/4} = 1\]

</table>

[/dohtml]

[dohtml]<table align=center><tr><td>

</table>[/dohtml]

And here we have the vertices of the square (disguised as its alter-ego, the diamond).

Let's confirm they're actually fourth roots:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td>

\[\left(e^{i\tau\cdot 1/4}\right)^4 = \left(i\right)^4 = \left(-1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 2/4}\right)^4 = \left(-1\right)^4 = \left(1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 3/4}\right)^4 = \left(-i\right)^4 = \left(-1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 4/4}\right)^4 = \left(1\right)^4 = \left(1\right)^2 = 1\]

</table>

[/dohtml]

And do they add up to zero?

\[e^{i\tau\cdot 1/4} + e^{i\tau\cdot 2/4} + e^{i\tau\cdot 3/4} + e^{i\tau\cdot 4/4} = i + \left(-1\right) + \left(-i\right) + 1 = \left(i - i\right) + \left(1 - 1\right) = 0 + 0 = 0\]

They do!

Are you beginning to see a pattern here? For every natural number \(n \ge 2\), there are \(n\) complex roots of unity defined as \[e^{i \tau \cdot k/n}\] which divide up whole turns by \(n\) and which correspond to the vertices of the regular \(n\)-gon plotted on the unit circle, and \[\sum_k^n e^{i \tau \cdot k/n} = 0\]You can try it out with the fifth roots and the vertices of the pentagon, using a calculator for those multiples of \(72^\circ\) angles. Or a little more easily with the sixth roots at those sextant angles. But you'll see it works out for every \(n\) you try. It makes sense geometrically: The nth roots of unity are unit vectors centered at the origin and distributed evenly around the circle, so they must counterbalance each other to add up to that center point. Now I just need to find a rigorous proof ... :)

But, bottom line, ask yourself: Would any of this be any clearer or more "beautiful" or "elegant", if it were cast in terms of \(\pi\)? Wouldn't it be incrementally more ugly, and therefore incrementally more obscure, and therefore incrementally harder for students to grasp, if it were encrusted with \(2 \pi\) everywhere? We are dealing with complex numbers plotted on the unit circle. Isn't this just confirmation that the most*fundamental* constant associated with circles, and with radians as the ideal angular measure to use with circle functions such as \(\sin \theta\), \(\cos \theta\), and \(e^{i\theta}\), is the number identified as \(\tau\)? And isn't \(\pi\), at best, just one of many possible numbers derivable from \(\tau\)?

And given all this insight we can derive, isn't \(e^{i\pi} + 1 = 0\)*more* "beautiful", not less, if it's expressed as \(e^{i\tau \cdot 1/2} = -1\)?

First off, as I've noted before, it's far more elucidating to look at the following family of identities:

[dohtml]

<table cellspacing=10 align=center>

<tr>

<th>

<th>

<tr>

<td>

\[e^{i\tau} = 1\]

<td>A rotation of a full turn is unity.

<tr>

<td>

\[e^{i\tau/2} = -1\]

<td>A rotation of a half turn is negation.

<tr>

<td>

\[e^{i\tau/4} = i\]

<td>A rotation of a quarter turn is perpendicular.

<tr>

<td>

\[e^{i\tau\ k} = 1^k = 1\]

<td>Any integer number of whole turns is unity.

</table>

[/dohtml]
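For anyone who wants to check these four identities numerically, here's a minimal sketch using Python's standard `cmath` module. (The variable names and the `1e-12` tolerance are my own arbitrary choices; Python 3.6+ also ships `math.tau` ready-made.)

```python
import cmath

tau = 2 * cmath.pi  # the full-turn constant

# A rotation of a full turn is unity.
assert abs(cmath.exp(1j * tau) - 1) < 1e-12
# A rotation of a half turn is negation.
assert abs(cmath.exp(1j * tau / 2) - (-1)) < 1e-12
# A rotation of a quarter turn is perpendicular: the imaginary unit.
assert abs(cmath.exp(1j * tau / 4) - 1j) < 1e-12
# Any integer number of whole turns is unity.
for k in range(-3, 4):
    assert abs(cmath.exp(1j * tau * k) - 1) < 1e-12
```

The tiny tolerance is only there to absorb floating-point rounding; the identities themselves are exact.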

I find each of these equations interesting and revealing. But for some reason the second of these is considered "ugly", because somehow division by two and negation are "inelegant" operations, so burying the half in a \(\pi\) and doing a bit of algebraic rearrangement to make a negative seem positive is supposed to make the equation more "beautiful" and "elegant". But all that does is mask the true importance of that identity.

What is the significance of a division when it appears within an exponentiation? In other words, what is the meaning of \(z^{1/n}\)? The answer is:\[z^{1/n} = \sqrt[n]{z}\]That means dividing an exponent by \(n\) is the same as taking the \(n\)th root. But the Euler formula reveals that a complex exponent is equivalent to a rotation around the origin in the complex plane. So taking an \(n\)th root of a complex number is equivalent to dividing the angular portion of its polar coordinates by \(n\). What are the consequences of that?

If we start with a full circle of rotation \[e^{i\tau} = 1\] and divide the rotation in half we get \[e^{i\tau \cdot 1/2} = -1 = 1^{1/2} = \sqrt[2]{1}\]in other words, this reveals that the square root of unity is negation. Or rather, *a* square root of unity is negation. Because in fact if we take any number of whole turns\[e^{i\tau k} = 1\]and divide their rotations in half\[e^{i\tau \cdot k/2} = 1^{1/2}\] we see that the square roots of unity must include both

[dohtml]<table cellspacing=10 align=center><tr>

<td>\[e^{i\tau \cdot 1/2} = -1\]

<td>

<td>and

<td>

<td>\[e^{i\tau \cdot 2/2} = e^{i\tau} = 1\]

</table>[/dohtml]

We can confirm this by the fact that \[\left(e^{i\tau \cdot 1/2}\right)^2 = \left(-1\right)^2 = 1\] and \[\left(e^{i\tau \cdot 2/2}\right)^2 = \left(1\right)^2 = 1\]

But this means there is something very interesting buried unnoticed in the formula\[e^{i\pi} + 1 = 0\]because we can substitute equivalent terms to yield \[e^{i\tau \cdot 1/2} + e^{i\tau \cdot 2/2} = 0\]in other words, the sum of the square roots of unity, the multiplicative identity, is zero, the additive identity. Let's plot that on a unit circle:

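In lieu of the plot, the same cancellation can be verified in a couple of lines of Python (the names `half_turn`/`full_turn` are just my labels for the two roots):

```python
import cmath

tau = 2 * cmath.pi

# The half-turn and full-turn rotations are the two square roots of unity...
half_turn = cmath.exp(1j * tau * 1 / 2)   # lands at -1
full_turn = cmath.exp(1j * tau * 2 / 2)   # lands at +1
assert abs(half_turn**2 - 1) < 1e-12
assert abs(full_turn**2 - 1) < 1e-12
# ...and they cancel to the additive identity.
assert abs(half_turn + full_turn) < 1e-12
```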

Hmm. What about the cube roots of unity? What would those be? And do they also add up to zero? Dividing up whole turns by 3 we get:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td colspan=5 align=center>

\[e^{i\tau\cdot k/3} = 1^{1/3}\]

<tr>

<td>

\[e^{i\tau\cdot 1/3} = -\frac{1}{2} + \frac{\sqrt{3}}{2}i\]

<td>

<td>

\[e^{i\tau\cdot 2/3} = -\frac{1}{2} - \frac{\sqrt{3}}{2}i\]

<td>

<td>

\[e^{i\tau\cdot 3/3} = 1\]

</table>

[/dohtml]

Let's plot them:


Does this look familiar? That's right, those are the vertices of the equilateral triangle.

Let's confirm they're actually cube roots:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td>

\[\left(e^{i\tau\cdot 1/3}\right)^3 = \left(-\frac{1}{2} + \frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= \left(-\frac{1}{2}\right)^3 + 3 \left(-\frac{1}{2}\right)^2\left(\frac{\sqrt{3}}{2}i\right) + 3 \left(-\frac{1}{2}\right)\left(\frac{\sqrt{3}}{2}i\right)^2 + \left(\frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= -\frac{1}{8} + \frac{3\sqrt{3}}{8}i + \frac{9}{8} - \frac{3\sqrt{3}}{8}i = \frac{8}{8} = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 2/3}\right)^3 = \left(-\frac{1}{2} - \frac{\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= \left(-\frac{1}{2}\right)^3 + 3 \left(-\frac{1}{2}\right)^2\left(-\frac{\sqrt{3}}{2}i\right) + 3 \left(-\frac{1}{2}\right)\left(-\frac{\sqrt{3}}{2}i\right)^2 + \left(\frac{-\sqrt{3}}{2}i\right)^3\]

<tr>

<td>\[= -\frac{1}{8} - \frac{3\sqrt{3}}{8}i + \frac{9}{8} + \frac{3\sqrt{3}}{8}i = \frac{8}{8} = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 3/3}\right)^3 = \left(1\right)^3 = 1\]

</table>

[/dohtml]

Yes, that works. And what do they add up to?

[dohtml]

<table cellspacing=10 align=center>

<tr>

<td>

\[e^{i\tau\cdot 1/3} + e^{i\tau\cdot 2/3} + e^{i\tau\cdot 3/3}\]

<tr>

<td>

\[=\left(-\frac{1}{2} + \frac{\sqrt{3}}{2}i\right) + \left(-\frac{1}{2} - \frac{\sqrt{3}}{2}i\right) + 1\]

<tr>

<td>

\[=\left(-\frac{1}{2} - \frac{1}{2}\right) + \left(\frac{\sqrt{3}}{2}i - \frac{\sqrt{3}}{2}i\right) + 1\]

<tr>

<td>

\[=-1 + 0 + 1 = 0\]

</table>

[/dohtml]

They do add up to zero!
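Here's the same confirmation done numerically rather than by expanding the cubes by hand; it's just a sketch, with tolerances chosen loosely:

```python
import cmath

tau = 2 * cmath.pi
roots = [cmath.exp(1j * tau * k / 3) for k in (1, 2, 3)]

# The first root matches -1/2 + (sqrt(3)/2)i from the table above.
assert abs(roots[0] - complex(-0.5, 3**0.5 / 2)) < 1e-12
# Each root cubed returns to unity...
for z in roots:
    assert abs(z**3 - 1) < 1e-12
# ...and all three together cancel to zero.
assert abs(sum(roots)) < 1e-12
```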

How about the fourth roots, where we divide up whole turns by 4? These are a bit easier:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td colspan=8 align=center>

\[e^{i\tau\cdot k/4} = 1^{1/4}\]

<tr>

<td>

\[e^{i\tau\cdot 1/4} = i\]

<td>

<td>

\[e^{i\tau\cdot 2/4} = -1\]

<td>

<td>

\[e^{i\tau\cdot 3/4} = -i\]

<td>

<td>

\[e^{i\tau\cdot 4/4} = 1\]

</table>

[/dohtml]


And here we have the vertices of the square (disguised as its alter-ego, the diamond).

Let's confirm they're actually fourth roots:

[dohtml]

<table align=center cellspacing=10>

<tr>

<td>

\[\left(e^{i\tau\cdot 1/4}\right)^4 = \left(i\right)^4 = \left(-1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 2/4}\right)^4 = \left(-1\right)^4 = \left(1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 3/4}\right)^4 = \left(-i\right)^4 = \left(-1\right)^2 = 1\]

<tr>

<td>

\[\left(e^{i\tau\cdot 4/4}\right)^4 = \left(1\right)^4 = \left(1\right)^2 = 1\]

</table>

[/dohtml]

And do they add up to zero?

\[e^{i\tau\cdot 1/4} + e^{i\tau\cdot 2/4} + e^{i\tau\cdot 3/4} + e^{i\tau\cdot 4/4} = i + \left(-1\right) + \left(-i\right) + 1 = \left(i - i\right) + \left(1 - 1\right) = 0 + 0 = 0\]

They do!

Are you beginning to see a pattern here? For every natural number \(n \ge 2\), there are \(n\) complex roots of unity, defined as \[e^{i \tau \cdot k/n}, \quad k = 1, 2, \ldots, n\] which divide up whole turns by \(n\) and which correspond to the vertices of the regular \(n\)-gon plotted on the unit circle, and \[\sum_{k=1}^{n} e^{i \tau \cdot k/n} = 0\]You can try it out with the fifth roots and the vertices of the pentagon, using a calculator for those multiples of \(72^\circ\) angles. Or a little more easily with the sixth roots at those sextant angles. But you'll see it works out for every \(n\) you try. It makes sense geometrically: the \(n\)th roots of unity are unit vectors centered at the origin and distributed evenly around the circle, so they must counterbalance each other to add up to that center point. Now I just need to find a rigorous proof ... :)
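Short of a rigorous proof, the pattern is easy to spot-check by machine for the first several \(n\). A quick sketch using Python's standard `cmath` module (the upper limit of 12 and the `1e-9` tolerance are arbitrary choices of mine):

```python
import cmath

tau = 2 * cmath.pi

# For each n, build the n roots e^(i*tau*k/n) and check both claims:
# every one is an nth root of unity, and together they sum to zero.
for n in range(2, 13):
    roots = [cmath.exp(1j * tau * k / n) for k in range(1, n + 1)]
    assert all(abs(z**n - 1) < 1e-9 for z in roots)
    assert abs(sum(roots)) < 1e-9
```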

But, bottom line, ask yourself: Would any of this be any clearer or more "beautiful" or "elegant", if it were cast in terms of \(\pi\)? Wouldn't it be incrementally more ugly, and therefore incrementally more obscure, and therefore incrementally harder for students to grasp, if it were encrusted with \(2 \pi\) everywhere? We are dealing with complex numbers plotted on the unit circle. Isn't this just confirmation that the most *fundamental* constant associated with circles, and with radians as the ideal angular measure to use with circle functions such as \(\sin \theta\), \(\cos \theta\), and \(e^{i\theta}\), is the number identified as \(\tau\)? And isn't \(\pi\), at best, just one of many possible numbers derivable from \(\tau\)?

And given all this insight we can derive, isn't \(e^{i\pi} + 1 = 0\) *more* "beautiful", not less, if it's expressed as \(e^{i\tau \cdot 1/2} = -1\)?

Ah, here's the proof:

Let \(z = e^{i\tau/n}\), i.e. the first of the \(n\)th roots of unity. Then the \(k\)th of the \(n\)th roots is \(e^{i\tau \cdot k/n} = \left(e^{i\tau/n}\right)^k = z^k\). But that's just the form for a geometric series. So the sum of all \(n\) of the \(n\)th roots is the sum of a geometric series, which is: \[\sum_{k = 0}^{n-1} z^k = \frac{z^n - 1}{z - 1}\](valid because \(z \neq 1\) whenever \(n \ge 2\)). But in this case \(z^n = 1\), so \[\sum_{k = 0}^{n-1} z^k = \frac{z^n - 1}{z - 1} = \frac{1 - 1}{z - 1} = 0\ \ \leftarrow\ \operatorname{QED}\]


QUOTE (Stella☆Sapphire @ Jun 28 2012, 05:16 PM) |

For the surface area of a sphere, one might suppose that \(\tau\) is half of the 'true' sphere constant \(\gamma_3\) much like how \(\pi\) is compared with \(\tau\) for the circle. They might say that \(\tau\) represents *only* a hemisphere while \(\gamma_3\) is the *entire* sphere. Then, why not use \(\gamma_3\) as far as surface area of spheres is concerned? But then the \(\tau\)-ists object, citing the area of a spherical cap/sector: \[A = \tau r h\] Another win for \(\tau\), this time over its double rather than its half! So how is the surface area of a sphere \(2 \tau r^2\) given the above? That's because for the entire sphere the 'height' \(h = 2 r\) so it becomes \(2 \tau r^2\) ... it's simply the result of integration through the *entire* diametre instead of simply the radius! Yet, it didn't involve any hemisphere. The \(\gamma_3\) seems like just a trick to hide the factor of two much like how \(\pi\) hides the factor of one-half in the area of a circle. So maybe \(\tau\) is the best idea for a circle constant and a sphere constant...?? |

So, on further consideration, you (and Hartl) may be right: Those "area coefficients" do deserve their own names. Although I've since absorbed Hartl's latest insight, taking these coefficients to mean "unit n-ball surface areas". So either \(\alpha_3\) or \(A\left(3,1\right)\) should stand for the area of the "unit 3-ball surface" or the "unit (3-1)-sphere". So I don't think it's necessary to make this a "win" for \(\tau\) by usurping the role of this unit surface in the formula for a hemispherical cap:

So, given:

[dohtml]

<table cellspacing=10 align=center>

<tr><td>

\[\alpha_3 = A\left(3, 1\right) = 2\tau\]

<td>

<td>and

<td>

<td>

\[A\left(3,r\right) = A\left(3, 1\right)\ r^2 = 2\tau\ r^2\]

</table>

[/dohtml]

then:

\[\frac{1}{2}\alpha_3\ r\ h = \frac{1}{2} A\left(3,1\right)\ r\ h = \frac{h}{2r}\ A\left(3, r\right)\]

The two formulas on the left give a sense that the cap area is something analogous to a triangular area, with the 1/2 cutting the unit 3-ball surface into hemispheres. The formula on the right gives a sense that the cap area is essentially the whole surface area of the 3-ball with the given radius, times a factor that reflects the cap height as a proportion of the whole diameter of the sphere. This reinforces the finding that the sphere area maps to the circumscribing cylinder area: with every increment in height, we get (perhaps surprisingly) the same increment in sphere area.
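That equal-increment ("hat-box") property can be sanity-checked numerically: treat the cap as a surface of revolution of the profile \(x(z) = \sqrt{r^2 - z^2}\) and integrate. This is just a sketch; the radius, cap height, and slice count are arbitrary test values of mine.

```python
import math

tau = 2 * math.pi
r, h = 2.0, 0.75   # radius and cap height: arbitrary test values

# Surface of revolution for the cap between z = r - h and z = r:
#   A = integral of tau * x(z) * sqrt(1 + x'(z)^2) dz
# The integrand simplifies to the constant tau * r, which is exactly why
# every slice of equal height contributes equal area.
N = 10000
z0, dz = r - h, h / N
area = 0.0
for i in range(N):
    z = z0 + (i + 0.5) * dz            # midpoint of the i-th slice
    x = math.sqrt(r * r - z * z)       # profile radius at height z
    area += tau * x * math.sqrt(1 + (z / x) ** 2) * dz

assert abs(area - tau * r * h) < 1e-6          # cap area = tau * r * h
assert math.isclose(tau * r * (2 * r), 2 * tau * r * r)  # whole sphere: h = 2r
```

Setting \(h = 2r\) in the final line recovers the full sphere area \(2\tau r^2\), as the quote above observes.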

That *is* pretty cool. (Though I still think the Youtube guy's got a point that it probably looks better for the most common applications using \(\eta\).)

Of course, it has nothing to do with what ought to be the basis for our unit of angle, which is really my concern.

It *does* show a situation where using a unit based on the full circle clarifies some relationships, but that's not very controversial; \(\pi\) supporters have always admitted that there are such situations. It's full-circle people who refuse to admit the contrary.

(I'm not using the symbol "\(\tau\)" for this unit anymore because it's terrible; it's already way too overloaded for yet another meaning, especially one as common as what \(\tau\) supporters propose for it. So I'll just call it "the full circle unit" until a reasonable alternative is proposed.)


Then too, if your purpose isn't to relate the whole thing to circles, but something else, you'd be better off with something different. E.g., to demonstrate the nature of the imaginary numbers as opposed to the real ones, you're better off with \(\pi\), because that's the smallest multiplicand for the exponent which will bring the powers back to real numbers.

\[ e^{i\left(\pi/2\right)} = i \]

\[ e^{i\pi} = -1 \]

\[ e^{i\left(3\pi/2\right)} = -i \]

\[ e^{i2\pi} = 1 \]

This amply shows that the means for getting back to real numbers is to use a multiple of \(\pi\); and since for practical applications getting back to real numbers is really what you want to do, that's got an important place in mathematics.

And of course, using \(\pi/2\), or \(\eta\), or whatever you want to call it elucidates some different points pretty clearly, as well.

It reminds me of one of the comments on the \(\eta\) Youtube video: you make some good points, the commenter said, even if some don't have anything to do with circles. Well, so what if they don't? Not everything has to be related to circles all the time. For example, we had some back-and-forth about trigonometry in reference to right angles and trigonometry in reference to circles, and it seemed to be taken for granted that trigonometry with right angles was only useful because of its application to circles. Isn't it just as reasonable to argue that trigonometry is only applicable to circles because you can make the polar coordinates on a circle into the vertices of a right triangle?

In fact, I think that's a pretty good way to teach the extension of trigonometry beyond right triangles, and I'll do it that way with my children when they get there.

There's no need for one-size-fits-all; some purposes are better served by one constant, some by another.


QUOTE (dgoodmaniii) |

Yes, I'm sure I am; it's just part of my increasingly passionate campaign against the word "intuitive." I first began to despise this word in the context of user interfaces, |

I've had the same thoughts within the context of 'intuitive divisibility tests'. How is the decimal digit-sum test (the omega rule) intuitive? I might have come up with it independently, but that wouldn't make it intuitive, only easily hit upon, the way the Thue-Morse sequence has been discovered anew by various people. The fact is, I know of the digit-sum test only because I was taught it at school. All the more so for the alternating digit-sum difference test (the alpha rule): this one I knew nothing of until quite recently, and learnt of it from none other than the DSA FAQs.

They are called 'intuitive' because they are easier to hit upon than tests based on modular arithmetic. But they are not intuitive according to the dictionary definition, because they have to be either taught or discovered through a fair amount of thinking, while intuitive things are grasped immediately from sensory input.

QUOTE |

It's a way of dressing up a completely subjective opinion in objective clothing. |

The most common affliction of the academic world.

Nothing more I can add, so, exiting stage left.

QUOTE (Treisaran @ Jul 1 2012, 01:59 PM) |

The most common affliction of the academic world. |

You mean the people who say "'Science' supports my politics"? I don't like them either.

QUOTE (dgoodmaniii @ Jul 1 2012, 03:51 PM) |

That is pretty cool. |

Are you referring to the \(n\)th roots of unity? I'll assume so for this reply.

QUOTE |

Of course, it has nothing to do with what ought to be the basis for our unit of angle, which is really my concern. |

My intent for this thread is to focus on gaining a clearer understanding of mathematical concepts in general, and the role of a circle constant in mathematics, in specific. Units of angular measure for everyday use are not my concern here; I think that belongs in a different thread, and in fact I think I've already dealt with that in a pretty inclusive way. The only unit of angular measure I'm concerned about here is the radian, because that is the only (convenient) angular unit for higher mathematics.

QUOTE |

(I'm not using the symbol "\(\tau\)" for this unit anymore because it's terrible; it's already way too overloaded for yet another meaning, especially one as common as what \(\tau\) supporters propose for it. So I'll just call it "the full circle unit" until a reasonable alternative is proposed. |

As I've said several times before, I don't consider \(\tau\) to be the symbol for a unit. It is a circle constant, a quantity of importance in the mathematics of circles. It can, however, be read as "1 turn", which may be beneficial if that promotes a more understandable interpretation of the mathematics in which it appears. But that should always be taken as "1 turn's worth of radians."

QUOTE |

It does show a situation where using a unit based on the full circle clarifies some relationships, but that's not very controversial; \(\pi\) supporters have always admitted that there are such situations. It's full-circle people who refuse to admit the contrary. |

My intent for this thread is that it not degenerate into each side accusing the other of taking unreasonable positions, with self-serving claims of rationality for their own views. I am not "refusing to admit" anything, I am

QUOTE (dgoodmaniii @ Jul 1 2012, 04:09 PM) |

Then too, if your purpose isn't to relate the whole thing to circles, but something else, you'd be better off with something different. |

Why? What advantage do we gain by considering these different "relations" in isolation from each other? Don't we derive greater insight by relating the "whole thing" to the "something else" (whatever those happen to be) while at the same time relating

My purpose in this thread is to explore the mathematics of circles and the circle constant(s). Why should it be our "purpose" to undertake the exercise of pretending circles aren't involved in, for instance, trigonometry? Will that really produce any more understanding than if we just keep in mind that there is also an intimate relationship with the unit circle? Why should we turn off our

QUOTE |

E.g., to demonstrate the nature of the imaginary numbers as opposed to the real ones, you're better off with \(\pi\), because that's the smallest multiplicand for the exponent which will bring the powers back to real numbers. |

Doesn't it give richer insight to say the following: "A half turn (\(\tau/2\)) is the minimal positive rotation away from unity that returns you to the real number line, since when applied as the \(\theta\) argument in the complex exponential, it yields \(e^{i\tau/2} = -1\), i.e., negative unity, which is the only point other than unity itself where the real number line intersects the unit circle; moreover, this point is revealed to be the first square root of unity, due to the division of the full rotation exponent by two."

QUOTE |

\[\ldots\] \[ e^{i\pi} = -1 \] \[\ldots\] This amply shows that the means for getting back to real numbers is to use a multiple of \(\pi\); and since for practical applications getting back to real numbers is really what you want to do, that's got an important place in mathematics. |

Complex numbers have very practical applications, especially in electrical engineering. But leave that aside, since I want to focus on pure mathematics in this thread.

What is it we are doing that gets us "away" from the real numbers, and what are we doing to "get back" to them, when we're using the complex Euler exponential? Isn't the geometric interpretation of what we're doing that we're rotating a unit vector? Doesn't it give richer insights to say:

\[e^{i\cdot\frac{\tau}{2}\cdot n} = \pm 1, \quad \text{when } n \in \mathbb{Z}\]

i.e., "an integer number of half turns (any square root of unity) is positive or negative unity"; and:

\[\operatorname{Im}\left(e^{i\cdot\frac{\tau}{2}\cdot n}\right) = \sin \left(\frac{\tau}{2}\cdot n\right) = 0 \iff n \in \mathbb{Z}\]

i.e., "the imaginary portion (i.e., the sine) of a rotation is zero (i.e., the resulting vector is real) if and only if the rotation is some integral number of half-turns."
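Both directions of that "if and only if" are easy to check numerically. A small sketch (the sample values for the non-integer case are arbitrary picks of mine):

```python
import cmath

tau = 2 * cmath.pi

# Whole numbers of half turns land on the real axis, at plus or minus unity...
for n in range(-4, 5):
    z = cmath.exp(1j * (tau / 2) * n)
    assert abs(z.imag) < 1e-12
    assert abs(abs(z.real) - 1) < 1e-12

# ...while fractional counts of half turns leave a nonzero imaginary part.
for n in (0.5, 1.25, 2.9):
    assert abs(cmath.exp(1j * (tau / 2) * n).imag) > 1e-3
```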

QUOTE |

And of course, using \(\pi/2\), or \(\eta\), or whatever you want to call it elucidates some different points pretty clearly, as well. It reminds me of one of the comments on the \(\eta\) Youtube video: you make some good points, the commenter said, even if some don't have anything to do with circles. Well, so what if they don't? Not everything has to be related to circles all the time. For example, we had some back-and-forth about trigonometry in reference to right angles and trigonometry in reference to circles, and it seemed to be taken for granted that trigonometry with right angles was only useful because of its application to circles. Isn't it just as reasonable to argue that trigonometry is only applicable to circle because you can make the polar coordinates on a circle into the vertices of a right triangle? |

I find it quite mentally hobbling to try to look at the mathematics of trigonometry without the unit circle. Despite the name of the field, I think trying to limit one's thinking to triangles is not particularly edifying, whereas the unit circle provides the foundation for understanding most everything in the subject. Indeed, the fundamental trigonometric functions are actually called "circle functions", and the etymologies of their names refer in fact to their relationships to the circle:

**sine** < Neo-Latin, Latin *sinus* "a curve, fold, pocket", a translation of Arabic *jayb*, literally "pocket", by folk etymology < Sanskrit *jiyā, jyā* "chord of an arc", literally "bowstring".

**tangent** < Latin *tangent-* (stem of *tangēns*, present participle of *tangere* "to touch"), in the phrase *līnea tangēns*, "touching line".

**secant** < Latin *secant-* (stem of *secāns*, present participle of *secāre* "to cut"), equivalent to *sec-* verb stem + *-ant-*.

In other words, the sine is the length of a chord (or rather half a chord) inscribed within the circle and sitting "in the pocket" formed by the angle \(\theta\) between the x-axis and the radius line (hypotenuse). The tangent and secant are the lengths of lines that, respectively, are "touching" (tangent to), and "cutting" (secant to) the circle. The proportion of the tangent to the secant is the same as the proportion of the sine to the radius, because tangent, secant, and radius form a right triangle that shares the same \(\theta\) angle with the right triangle formed by sine, radius, and cosine. So these functions are all intimately involved with the circle from their inception.
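The shared-angle proportion claimed above (tangent is to secant as sine is to the unit radius) can be spot-checked for a handful of first-quadrant angles; the sample angles here are arbitrary:

```python
import math

# On the unit circle (radius 1), the right triangle formed by tangent and
# secant shares the angle theta with the one formed by sine and cosine,
# so tangent : secant should equal sine : 1.
for theta in (0.1, 0.4, 0.7, 1.2):
    tangent = math.tan(theta)
    secant = 1 / math.cos(theta)
    assert abs(tangent / secant - math.sin(theta)) < 1e-12
```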

QUOTE |

In fact, I think that's a pretty good way to teach the extension of trigonometry beyond right triangles, and I'll do it that way with my children when they get there. |

I'm not sure, but are you agreeing with me here? It's hard to justify extending trigonometry beyond right triangles without acknowledging that right triangles can only depict angles in the first quadrant, where a "quadrant" is in fact a quarter of a circle. Angles in the second quadrant are obtuse, and it is impossible for a right triangle to include an obtuse angle. The cosines of such angles go negative, and the idea of a triangle with negative base length is problematic. Angles in the third and fourth quadrants are reflex, so their sines go negative, which makes the idea of a triangle with negative height problematic, when considered from the perspective of Euclid's geometry. Euclid considered congruent triangles to still be congruent even when they are flipped horizontally or vertically, so this makes the whole notion of "negative length" problematic. Even the idea that sine and cosine are cyclic functions with a period of four quadrants, or one turn, is difficult to see via triangles alone.

But everything makes sense if you just interpret cosine and sine as x and y coordinates of points on a unit circle (radius=1) centered at the origin in the Cartesian plane; or as the real and imaginary components of complex numbers on a unit circle centered at 0 in the complex plane; with the polar coordinates being radius and \(\theta\), the amount of rotation (or angle).

QUOTE |

There's no need for one-size-fits-all; some purposes are better served by one constant, some by another. |

For the purposes of pure mathematics, I think it still needs to be shown what the symbol \(\pi\) provides in the way of insight that is not adequately covered by \(\tau/2\) meaning "half-turn".

QUOTE |

(Though I still think the Youtube guy's got a point that it probably looks better for the most common applications using \(\eta\).) |

I think it's much more revealing and edifying to characterize a right angle as \(\tau/4\), a quarter-turn; to characterize the point on the unit circle at that angle, \(e^{i\tau\cdot\left(1/4\right)} = i\), as the first of the fourth-roots of unity; and to characterize its four multiples

\[e^{i\tau\cdot\left(1/4\right)} = i\]

\[e^{i\tau\cdot\left(2/4\right)} = -1\]

\[e^{i\tau\cdot\left(3/4\right)} = -i\]

\[e^{i\tau\cdot\left(4/4\right)} = 1\]

as the four fourth roots of unity.

EDIT: I should also add that, considering the n-ball surface area and volume formulas as an example, if replacing \(\tau\) with \(2\pi\) causes them to suffer an increasing build-up of powers of \(2\), then replacing \(\tau\) with \(4\eta\) would inflict an even worse build-up of powers of \(4\) on them.

EDIT: We really ought to ask the question why it should be that division is disparaged over multiplication. Is it just a matter of the notation we use for multiplication being more "streamlined" than the notation for division? \(4\eta\) does

Perhaps it's because the procedure we were taught for calculating a product by hand is a little less complicated than the long division procedure for calculating a quotient. I don't dispute that there should be some consideration for what we put people through if they are forced to do calculations by hand, but these days isn't it just a matter of setting up the right spreadsheet formulas?

But everyday convenience for calculation is not my focus in this thread. What I am focused on is the forms we use for expressing mathematical truths, and how those forms promote or discourage understanding of higher mathematics.

In that regard, we might make the argument that division is inherently a bit more complex than multiplication, because multiplication is commutative but division is not, and because any division can be expressed as multiplication by a reciprocal:\[a \div b = a \times \frac{1}{b}\]suggesting that division is a more derivative operation than multiplication. But if what we are trying to express can be most directly represented using division or reciprocation, why shouldn't we express it that way? Why contrive to use constants that allow us to avoid those operations at all costs?

QUOTE (Kodegadulo @ Jul 2 2012, 06:31 AM) |

My intent for this thread is to focus on gaining a clearer understanding of mathematical concepts in general, and the role of a circle constant in mathematics, in specific. Units of angular measure for everyday use are not my concern here |

All right, then; since my concern is precisely the opposite, I'll keep out.

QUOTE (dgoodmaniii @ Jul 2 2012, 03:58 PM) | ||

All right, then; since my concern is precisely the opposite, I'll keep out. |

They are ...

However, I will urge once again that if you want an everyday unit corresponding to a circle, a straight angle, or a right angle, then call it a Circle, a Straightangle, or a Rightangle. Don't call it a Tau, a Pi, or an Eta.

QUOTE (Kodegadulo @ Jul 2 2012, 06:57 PM) |

However, I will urge once again that if you want an everyday unit corresponding to a circle, a straight angle, or a right angle, then call it a Circle, a Straightangle, or a Rightangle. Don't call it a Tau, a Pi, or an Eta. |

But...but...but...that's what they are!

We can't call it a Circle, or a Straightangle, or a Semicircle, or whatever, because it's too narrow. One Pi equals the arc of a semicircle, for example, which is

A Pi, on the other hand, is exactly what it

If you don't like the name "Pi," okay; but let's not replace it with only something that names only one of multiple examples of what it measures.

I've suggested "Angz," but the objection was rightly raised that it closely resembles "angst," not exactly the emotion we want to encourage when people think of TGM. (Well...not the one

QUOTE (Kodegadulo @ Jul 2 2012, 06:31 AM) |

I find it quite mentally hobbling to try to look at the mathematics of trigonometry without the unit circle. Despite the name of the field, I think trying to limit one's thinking to triangles is not particularly edifying, whereas the unit circle provides the foundation for understanding most everything in the subject.... |

Also, I just wanted to say this: I'm not saying we should limit our understanding of trigonometry to the unit circle, just recognize that trigonometry is built upon the right triangle, and is applicable to circles because any point on the circumference of a circle can be construed as the vertex of a right triangle. That's the way it developed in history for a reason.

QUOTE (dgoodmaniii @ Jul 2 2012, 10:52 PM) | ||

But...but...but...that's what they are! |

Followups to this post need to go to my Rotationels/angulels thread since that's about these everyday angular units and not pure math.

QUOTE (dgoodmaniii @ Jul 2 2012, 11:03 PM) | ||

Also, I just wanted to say this: I'm not saying we should limit our understanding of trigonometry to the unit circle, just recognize that trigonometry is built upon the right triangle, and is applicable to circles because any point on the circumference of a circle can be construed as the vertex of a right triangle. That's the way it developed in history for a reason. |

Right, but the right triangle only gets you so far unless you start playing games with it. Going into the second quadrant you have to flip it horizontally and start thinking of its base as negative, and you have to remember that the angle you mean is actually the supplement of the angle you're measuring with the right triangle. Going into the third quadrant you have to flip the triangle upside down and start thinking of both its base and its height as negative, and remember that the actual angle is the opposite of the one you're measuring. Going into the fourth quadrant you have to flip the triangle back horizontally and now the base is back to positive but the height is still negative, and the actual angle is the explement of the angle you're measuring. Inscribing Cartesian axes and circumscribing the whole thing with a unit circle is the best way to keep it all straight (so to speak :) ), especially for students trying to grok it all.
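All that flipping-and-negating bookkeeping can be spelled out numerically. A small Python sketch (the specific angles are just examples, one per quadrant):

```python
import math

# Quadrant II: the right triangle measures the supplement; the base
# (cosine) must be negated by hand.
theta2 = math.radians(150)
ref2 = math.pi - theta2                      # supplement
assert math.isclose(math.sin(theta2), math.sin(ref2))
assert math.isclose(math.cos(theta2), -math.cos(ref2))

# Quadrant III: both base (cosine) and height (sine) go negative.
theta3 = math.radians(210)
ref3 = theta3 - math.pi
assert math.isclose(math.sin(theta3), -math.sin(ref3))
assert math.isclose(math.cos(theta3), -math.cos(ref3))

# Quadrant IV: the triangle measures the explement; only the height
# (sine) is negated.
theta4 = math.radians(330)
ref4 = math.tau - theta4                     # explement
assert math.isclose(math.sin(theta4), -math.sin(ref4))
assert math.isclose(math.cos(theta4), math.cos(ref4))
```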

QUOTE (Kodegadulo @ Jul 3 2012, 11:27 AM) |

Going into the second quadrant you have to flip it horizontally and start thinking of its base as negative, and you have to remember that the angle you mean is actually the supplement of the angle you're measuring with the right triangle. Going into the third quadrant you have to flip the triangle upside down and start thinking of both its base and its height as negative, and remember that the actual angle is the opposite of the one you're measuring. Going into the fourth quadrant you have to flip the triangle back horizontally and now the base is back to positive but the height is still negative, and the actual angle is the explement of the angle you're measuring. Inscribing Cartesian axes and circumscribing the whole thing with a unit circle is the best way to keep it all straight (so to speak :) ), especially for students trying to grok it all. |

Correct! But you don't introduce students to these things immediately; you introduce it to them with simple right triangles and SOHCAHTOA. You do awesome things like triangulation with it, show them how powerful it is.

Teach them the Laws of Sines, Cosines, and Tangents, which can extend our use of trigonometry beyond right triangles to arbitrary triangles. (The Law of Cosines is especially cool for this, as it extends the Pythagorean theorem to arbitrary triangles. How awesome is that?) Show them the many fascinating applications of this in radio, in surveying, in astronomy, in physics, and so on.

Then, inscribe one in the first quadrant of the Cartesian plane. (We talk about these quadrants as quadrants of a circle; but they're really quadrants of the Cartesian plane, with a right angle's worth of arc inscribed on each. As witnessed by what you have to do to triangles with them.) Show them how in the second quadrant, since the number line is going backwards on the x-axis, you have to make your base measurement negative to get the right values. Show them the same thing in the third quadrant (mutatis mutandis, of course). Then the fourth. Explain to them the patterns (supplementary angles have the same absolute values for their trigonometry functions, and so forth).

Not only does this teach trigonometry in what is evidently the easiest way to learn it---since that's how mankind as a whole learned it, while he was developing it from nothing---but it goes step-by-step from the easier concepts to the hardest.

That's what got me to love trigonometry when I was in school, anyway.

I originally moved this discussion over to the Rotationels/Angulels thread, since it changed focus to talk about angle units. But since it shifted back to definitions of pure mathematics, I need to move it back here:

(Context: dgiii defined "natural" as "arising from nature", which I take to mean, "demanded by inherent properties of the system", contrasting with "obvious" which I take to mean, "immediately available to human perception." And I was agreeing with him that by that definition, the radian was the most "natural" angle unit for trigonometry and higher mathematics, even though it was not "obvious" to early mathematicians.)


QUOTE (dgoodmaniii @ Jul 4 2012, 05:23 AM) | ||

Eh...I don't think so. The choice of radius or diameter is arbitrary; we use what's most convenient. Usually, we find, the radius is. (But not always; in the Law of Sines, for example, the equation is often needlessly written with a term of \(2r\), when a simple term of \(d\) would be equivalent, transparent, and simpler.) |

It's "simpler" in terms of brevity of this particular formula, but it's actually more complex in terms of the number of concepts that need to be maintained in order to understand it. If you understand what the radius of a circle is, then you are done, that's all you need in order to understand where the \(2r\) came from. But to understand where the \(d\) comes from, you have to write down the equality \(d = 2r\). In fact, this is a complicating factor, not a simplifying one, as I'll explain below.

QUOTE |

There's nothing about the nature of a circle which makes a radius prior to a diameter; we could define a circle just as well as "a continuous |

This is just wrong. To define a circle, the choice of radius vs. diameter is not at all arbitrary. As I state in the top post of this thread, a circle is defined as a curve of constant radius, where the radius is a distance from a center point. Euclid recognized this. On the other hand, it is not definitive to call a circle a curve of constant diameter. The diameter of a curve is defined as the largest distance that can be formed between two opposite parallel lines tangent to its boundary. The width of a curve is the smallest such distance. For a circle, the diameter happens to be the length of a chord passing through its center, and twice its radius. The diameter of a circle equals its width, and both are constant all around. But for curves in general, "diameter" is not defined in terms of a center point. And "constant diameter/width" is not definitive for a circle, because in fact there are an infinite number of curves that share this property. For instance the Reuleaux triangle:

This figure, although evidently having corners, maintains a constant width throughout its perimeter, and therefore can roll just as easily as a circular wheel. It just doesn't roll around its center.
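The constant-width claim is easy to confirm numerically. A Python sketch (sampling the three arcs and measuring the width in many directions; unit side length is an arbitrary choice):

```python
import math

s = 1.0                          # side length = the constant width
rho = s / math.sqrt(3)           # circumradius of the underlying equilateral triangle
verts = [(rho * math.cos(a), rho * math.sin(a))
         for a in (math.pi / 2, math.pi / 2 + math.tau / 3, math.pi / 2 + 2 * math.tau / 3)]

# Boundary: three circular arcs of radius s, each centered on one vertex
# and facing the opposite side.
pts = []
for cx, cy in verts:
    base = math.atan2(-cy, -cx)  # direction from the vertex toward the centroid
    for t in range(200):
        a = base - math.pi / 6 + (math.pi / 3) * t / 199
        pts.append((cx + s * math.cos(a), cy + s * math.sin(a)))

# Width in direction phi = spread of the projections onto (cos phi, sin phi).
for k in range(180):
    phi = math.pi * k / 180
    proj = [x * math.cos(phi) + y * math.sin(phi) for x, y in pts]
    assert abs((max(proj) - min(proj)) - s) < 1e-3  # constant width, up to sampling error
```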

Now I notice that you were very careful with your definition to use two endpoints of a line going through a center, rotating both of them about that center. They wind up maintaining a constant distance from each other, and thus a constant width for the resulting curve. But that is by virtue of the fact that they must both maintain a constant radius from that center, in order for the result to be a circle.

But we do not need to use two such points to produce a circle. We could just take one of them, and rotate it around that center, keeping a constant radius throughout, until we return to the original location. This is the simpler procedure, as every geometry student who has ever used a compass knows. Can you imagine a double compass with three arms, the middle one planted on a center, and the other two holding pencils? You would have to painstakingly calibrate them so they both bore the same radius from the center arm, otherwise the two curves drawn simultaneously might not link up properly half-way through to form a continuous circle.

And that brings up another point: If you manage to keep both radii constant and equal to each other, each pencil would not need to return to their original positions; they each would only need to trace out half a circle -- a semicircle, the arc of a straight angle. However, if you botched the job with your double compass, and they didn't link up, and you continued rotating around to the start, you'd wind up with two concentric circles. Why would anyone choose such an ungainly procedure for constructing a circle, when one radius and one pencil would suffice?

But if we go that route, why stop at rotating 2 points? We could also produce a circle by taking the vertices of an equilateral triangle, and rotate them about the center of the triangle (found by the intersection of lines bisecting each of its interior angles). Each point would only need to traverse a third of the circle -- a tertiant arc. Or how about 4 points? Rotate a square about its center, and each of its vertices trace out the arc of a quadrant or right angle. Or we could use the vertices of a pentagon, or a hexagon, or indeed any regular polygon rotated around its center.

But a definition for a mathematical object should be the simplest description that uniquely identifies it. For a circle, the simplest description involves a center and a constant radius, and can be implemented by rotating a single point around that center while maintaining that constant radius. Any procedure involving more than one point just introduces needless complications, which Occam's razor would eliminate.

Thus the idea of a radius is more "natural" for a circle, because it's demanded by inherent properties of the circle. The diameter is less "natural", because considering it opens up a can of worms, in terms of complicating factors. It would be better to consider the diameter as a derivative property, rather than a definitive one. But it is the more "obvious" property, because it's more directly accessible to human perception. When we hold a circular coin, we can feel its diameter, because our fingers can approximate those two parallel tangent lines. And its width/diameter will feel constant as we roll it around on our fingers. But hold one of these British coins

which might be characterized as a "Reuleaux heptagon", and it will feel like it has a constant width as well.

QUOTE |

But I may have just shot myself in the foot here; |

I think so... :)

QUOTE |

why not define a "radian," then, as the portion of arc rolled through when moving the circle horizontally the distance of its diameter? Arguably, then, just as natural as the radian is. |

That's not arguable at all. Didn't we have this discussion already? I would call that unit the hypothetical "diameteran" we've talked about before. I was agreeing with you that the radian is the most "natural" angle unit, because it's demanded by the system, where the system is the set of "circle functions" so central to trigonometry. The underlying mathematics for the sine, cosine, and complex Euler exponentiation functions work out better using radians than using any other unit, because this eliminates scaling factors. This reflects the intrinsic relationship between the circle and its radius. On the other hand, the "diameteran", an angle subtended by an arc of diameter length, would introduce an extraneous factor of 2 into the underlying math, which would do nothing but complicate it. Furthermore, any angle that is a rational division of the full turn, such as the degree or the right angle or the straight angle, or even the full turn itself, would introduce a transcendental number as a factor.
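The "scaling factors" point can be made concrete: the derivative of sine is cosine only when angles are measured in radians; any other unit drags in a constant factor. A quick numerical check in Python:

```python
import math

def numderiv(f, x, h=1e-6):
    # central-difference numerical derivative
    return (f(x + h) - f(x - h)) / (2 * h)

# In radians: d/dx sin(x) = cos(x), with no scaling factor.
x = 1.0
assert math.isclose(numderiv(math.sin, x), math.cos(x), rel_tol=1e-6)

# In degrees: a factor of pi/180 (the radians-per-degree ratio) appears.
def sin_deg(d):
    return math.sin(math.radians(d))

d = 30.0
expected = (math.pi / 180) * math.cos(math.radians(d))
assert math.isclose(numderiv(sin_deg, d), expected, rel_tol=1e-6)
```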

QUOTE |

Either way, some unit corresponding to the radius or the diameter is the "natural" unit of angle, the unit demanded by the nature of the beast itself. |

The radian is so demanded, yes. The diameteran is not at all.

QUOTE |

Since right angles are defined by two radii, |

I find your wording a bit confusing here. Any angle is definable using two rays emanating from a common point. Those rays might also be radii of a circle, but that would be incidental. But this is true of any angle, it's not just specific to right angles. Even the straight angle is definable by two rays. They just have the special property of being co-linear (though opposite) rays. Even a complete angle (full turn) is definable by two rays, if you allow the two rays to be coincident, i.e. superimposed.

QUOTE |

I'm inclined to say the radius, and the radian, are the more natural units to deal with. |

More natural for the sine and cosine functions to deal with. Not necessarily more natural for humans to deal with.

QUOTE |

(Circles and semicircles can be equivalently defined by both, of course; but the right angle can only be defined by the radius.) |

I'm not sure I follow you. How is a circle or a semicircle "defined" by a radius and/or a radian, but a right angle is not?

QUOTE | ||

Well, that's possible, of course. But let's not get ahead of ourselves. Remember that the radius is equally well defined as the angle unit that divides a straight line by the transcendental ratio \(\pi\). |

You mean the radian, not the radius. And you mean a straight angle, not a straight line, don't you? You claim the radian is "equally well _defined_" in terms of a straight angle? A _definition_ for the radian based on a straight angle would make a poor definition, because it misses the whole point and purpose of the radian. The radian exists not for the sake of dividing up straight angles, but for the sake of dividing up circles. Or rather, the whole motivation for identifying the radian as a unit is because we are dealing with the circle, and its defining dimension, its radius, which is intimately involved in defining the circle functions of trigonometry. They're "circle functions", not "straight line" functions, after all.

The proper definition for the radian is "an angle subtended by a circular arc of length equal to the radius of the circle." Given the radius of a circle, the circumference of that circle constitutes \(\tau\) radii. Thus the full angle subtended by that circle constitutes \(\tau\) radians. This is a simple and clear relationship.
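In other words, an angle in radians is just arc length divided by radius, so a full turn is \(\tau\) regardless of the circle's size. A trivial Python check:

```python
import math

for r in (0.5, 1.0, 7.25):           # arbitrary radii
    circumference = math.tau * r     # the circumference constitutes tau radii
    # angle in radians = arc length / radius
    assert math.isclose(circumference / r, math.tau)            # full turn
    assert math.isclose((circumference / 4) / r, math.tau / 4)  # quarter turn
```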

Dividing both the circumference and \(\tau\) in half, and saying that "half the circumference of a circle constitutes \(\frac{\tau}{2}\) radii, thus a half-circular angle constitutes \(\frac{\tau}{2}\) radians", may express that same relationship, but with an extraneous complicating factor that adds nothing to the definition. Giving \(\frac{\tau}{2}\) the alias \(\pi\), and giving a half-circular angle the alias "straight angle", sweeps some of the complexity under the rug, and at the same time obscures the purpose of the radian. The factor of 2 is extraneous, as any rational factor would be. For instance, saying "one third the circumference of a circle constitutes \(\frac{\tau}{3}\) radii, thus a one-third-circular angle constitutes \(\frac{\tau}{3}\) radians." Or "one fourth of the circumference..." -- but you get the idea. But there's nothing particularly special about the number 2 that would make it any less extraneous than any other unnecessary factor.

QUOTE |

Or we could define it as the angle unit which divides a right angle by the transcendental ratio \(\eta\). Or...well, infinite definitions, if we want them. That part seems certainly arbitrary. |

Exactly. Once you admit more than one of something, you might as well admit an infinite number, because any more than one is arbitrary. Unfortunately, the Greek alphabet doesn't have an infinite number of letters in it, so this is not a very scalable course of action.

But when setting up a definition for an entity, the simplest definition is the best.

QUOTE |

The radian may be the most natural unit of angle; but it's not the right size for easy computation, because it doesn't fit in even numbers into any of our normal angular measures. That's why we need something else. |

Precisely my point with the Rotationels/Angulels thread.

QUOTE | ||

Sure. But define "human-friendly." |

That's easy: using names that humans already commonly use, like "turn", "quadrant", "octant", etc. Or basing names on shapes humans can directly perceive, like "diagonal", "right angle", "straight angle", "triangular", "pentagonal", etc.

QUOTE |

Why isn't "Pi" or "Pirad" or "Etarad" or "Taurad" or something along these lines "human-friendly?" Humans don't like long vowels? |

Because these combine a strange, unwieldy, non-rational unit, the radian, with strange, unwieldy, non-rational transcendental ratios, which have never been given proper names but instead have just been given abstract labels, in the form of letters in an alphabet that is obscure to most people. You claim the one irrational cancels out the other irrational. Linguistically, I don't see anything getting cancelled out, I just see abstraction piled on abstraction. You even make it more obscure by attempting to abbreviate "radian" as "rad", which then gets people thinking these are units of radiation, with the Greek letters representing obscure sub-atomic particles. Contrast all of that with the "obviousness" of "Straightangle", "Rightangle", and "Turn".

QUOTE |

I want to give people human-friendly angular units, but I want them to be easily integrated into the radian system. So taking our whatever-to-radius ratio, lopping off the transcendental scaling factor, and just using simple numbers, making the conversion to radians a one-step process, seems like a great compromise to me. Your Rotationels work that way, too, though; take a Quadrantpart, multiply by a biqua to get a Quadrant; then multiply by \(\eta\) for radians. Mutatis mutandis for the others. |

There's nothing about the names of these things that would change the procedure for conversion, or the factors used in that procedure. But there is a big difference in visceral comprehension invested in one kind of name versus another.

Of course, for this example, I would simply multiply Quadrantlets by \(\tau_{400'}\) ("biciaquadrantau"), which I would have precalculated as \(\tau/400'\).

QUOTE (Kodegadulo @ Jul 6 2012, 03:23 AM) | ||||

I originally moved this discussion over to the Rotationels/Angulels thread, since it changed focus to talk about angle units. But since it shifted back to definitions of pure mathematics, I need to move it back here:
It's "simpler" in terms of brevity of this particular formula, but it's actually more complex in terms of the number of concepts that need to be maintained in order to understand it. If you understand what the radius of a circle is, then you are done, that's all you need in order to understand where the \(2r\) came from. But to understand where the \(d\) comes from, you have to write down the equality \(d = 2r\). In fact, this is a complicating factor, not a simplifying one, as I'll explain below. |

I should clarify my point here for the specific example of the Law of Sines. If you look at the Euclidean derivation/proof of the Law of Sines, which you can find here, you will see that the resulting formula:\[\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2 R\] is derived without ever drawing a diameter line and without ever referencing the length of the diameter of the circle. It is done entirely in reference to the radius of a circle, which circumscribes the triangle so that its sides become chords on the circle. The 2 that appears in the equation arises due to the fact that we're extending radii out from the center of the circle to the vertices of the triangle, to form isosceles triangles with legs of radius length and bases equal to the sides of the original triangle. Then we bisect the isosceles triangles to form pairs of similar right triangles, each of whose bases are half the associated side of the original triangle. The fact that the \(2 R\) that is produced happens to equal the diameter of the circle is an extraneous coincidence. If you replace \(2 R\) with \(d\), you obscure the derivation and create a mysterious unexplained association between the triangle's angles, its sides, and a circle diameter that was never drawn during the proof.

In fact, if you realize that a sine is really half a chord, i.e. \[\operatorname{crd} 2\theta = 2\sin\theta\] and rearrange it thus: \[\frac{1}{\sin\theta} = \frac{2}{\operatorname{crd} 2\theta}\] then you can recast the Law of Sines as a Law of Chords: \[\frac{2 a}{\operatorname{crd} 2 A} = \frac{2 b}{\operatorname{crd} 2 B} = \frac{2 c}{\operatorname{crd} 2 C} = 2 R\]

and cancel out a factor of 2:

\[\frac{a}{\operatorname{crd} 2 A} = \frac{b}{\operatorname{crd} 2 B} = \frac{c}{\operatorname{crd} 2 C} = R\] This makes sense because the central angle of each of those isosceles triangles is always twice the angle in the original triangle opposite to the chord, so you can assert:

\[a = R \operatorname{crd} 2 A \qquad b = R \operatorname{crd} 2 B \qquad c = R \operatorname{crd} 2 C\]

And now I've managed to spell out a proof of the Law of Sines ... backwards! :)
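A numerical sanity check of both forms, for an arbitrary example triangle inscribed in a unit-radius circle (Python sketch):

```python
import math

A, B = math.radians(50), math.radians(60)   # two arbitrary angles
C = math.pi - A - B                         # angles of a triangle sum to a straight angle
R = 1.0                                     # circumradius (arbitrary choice)

# Sides from the Law of Sines: a = 2R sin A, etc.
a, b, c = (2 * R * math.sin(x) for x in (A, B, C))

def crd(theta):
    # chord subtended by central angle theta, on a circle of radius 1:
    # crd(theta) = 2 sin(theta/2)
    return 2 * math.sin(theta / 2)

for side, angle in ((a, A), (b, B), (c, C)):
    assert math.isclose(side / math.sin(angle), 2 * R)  # Law of Sines
    assert math.isclose(side / crd(2 * angle), R)       # Law of Chords
```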

Yikes. I never said it was a *better* definition, just that it accurately defined the circle. In fact, I specifically said that the radius is typically a better quantity to use.

QUOTE |

(Context: dgiii defined "natural" as "arising from nature", which I take to mean, "demanded by inherent properties of the system", contrasting with "obvious" which I take to mean, "immediately available to human perception." And I was agreeing with him that by that definition, the radian was the most "natural" angle unit for trigonometry and higher mathematics, even though it was not "obvious" to early mathematicians.) |

Then most of this discussion is purely academic, since we agree on that primary point, and probably should go back to the rotationels discussion, since the rest of the discussion is about how this fact affects angular units.

QUOTE |

It's "simpler" in terms of brevity of this particular formula, but it's actually more complex in terms of the number of concepts that need to be maintained in order to understand it. If you understand what the radius of a circle is, then you are done, that's all you need in order to understand where the \(2r\) came from. But to understand where the \(d\) comes from, you have to write down the equality \(d = 2r\). In fact, this is a complicating factor, not a simplifying one, as I'll explain below. |

I really think you're blowing this out of proportion. We all know what a diameter is and when it's useful. It's like the difference between \(\frac{x}{2}\) and \(x\cdot\frac{1}{2}\); technically the first is simpler, but it's not a problem. We just use the one that's simpler for a given application.

Sometimes, you seem to think that the geometrical representation of something is vitally important; like here, you argue that because we don't use a diameter in the proof, using \(d\) rather than \(2r\) would be "mysterious," and you've argued that using \(\pi\) is "mysterious" because a semicircle doesn't actually exist in some of these cases. Yet when I've pointed out similar situations where something isn't actually drawn in, you've dismissed this as irrelevant, and said that I'm confusing the reality with the geometrical representation.

Perhaps the situations aren't analogous and you're perfectly right in both cases; but it's very hard for me to see at this stage when this argument is appropriate and when it's not. Either it's important that we only use a symbol when there's a geometrical representation behind it, or it's not. Which is it?

QUOTE | ||

This is just wrong. To define a circle, the choice of radius vs. diameter is not at all arbitrary. As I state in the top post of this thread, a circle is defined as a curve of constant radius, where the radius is a distance from a center point. Euclid recognized this. On the other hand, it is not definitive to call a circle a curve of constant diameter. |

That's right; instead, you'd have to call it a curve drawn out by a line spinning around its central point. This would form a circle. We decide to define circles by their radii because it's easier, not because it's inherently more valid.

QUOTE |

Why would anyone choose such an ungainly procedure for constructing a circle, when one radius and one pencil would suffice? |

That's precisely my point. We use the radius because it's easier, not because only a radius will work.

QUOTE |

But if we go that route, why stop at rotating 2 points? We could also produce a circle by taking the vertices of an equilateral triangle, and rotate them about the center of the triangle (found by the intersection of lines bisecting each of its interior angles). Each point would only need to traverse a third of the circle -- a tertiant arc. Or how about 4 points? Rotate a square about its center, and each of its vertices trace out the arc of a quadrant or right angle. Or we could use the vertices of a pentagon, or a hexagon, or indeed any regular polygon rotated around its center. |

Yep; and we don't do this, because it's more complicated. The radius is the simplest way; but that doesn't mean it's the only way. Our choice of radius is due to its simplicity, not to its being the only way of doing it. That's all I'm trying to say.

QUOTE |

Thus the idea of a radius is more "natural" for a circle, because it's demanded by inherent properties of the circle. The diameter is less "natural", because considering it opens up a can of worms, in terms of complicating factors. It would be better to consider the diameter as a derivative property, rather than a definitive one. But it is more "obvious" property because it's more directly accessible to human perception. |

You're proving my point for me; you're just using the words differently. "Simplest" does not mean "most natural." It's not the nature of the circle which requires the radius being used in its definition; it's just the easiest way to define it (and to draw it).

With radians, on the other hand, that's the only way our functions work. That's a different situation.

QUOTE |

I was agreeing with you that the radian is the most "natural" angle unit, because it's demanded by the system, where the system is the set of "circle functions" so central to trigonometry. |

That's not what I'm arguing; the fact it works best with trigonometric functions (and don't leave out the tangent; that's often the most important one for some of the most common uses of trigonometry, like triangulation and vector addition (neither of which involve circles!)) is evidence of its naturalness, not the proof.

The proof is that the radian is simply the ratio of an arc to its enclosing rays. That's it; that's all it is. We have an angle of one radian when the arc it subtends is equal in length to the rays defining it.

You can say that the radius is a better unit for defining circles because it corresponds to radians; indeed, I think it's correct to say so. But it's not the same as the radian itself.

QUOTE | ||

I find your wording a bit confusing here. Any angle is definable using two rays emanating from a common point. Those rays might also be radii of a circle, but that would be incidental. But this is true of any angle, it's not just specific to right angles. Even the straight angle is definable by two rays. They just have the special property of being co-linear (though opposite) rays. Even a complete angle (full turn) is definable by two rays, if you allow the two rays to be coincident, i.e. superimposed. |

Yes; and that last is probably why angles aren't introduced to children as parts of a circle, but rather in precisely this way, as two rays emanating from a common point. Superimposed rays defining an angle is confusing.

But yes, I wasn't giving a complete definition of right angles there; I was just saying that while you can draw out a straight angle or a circle based on the diameter with only one step, it takes more than that to draw out a right angle. Since the radius is the common element in all three, we've got one more reason to prefer the radius.

QUOTE | ||

More natural for the sine and cosine functions to deal with. Not necessarily more natural for humans to deal with. |

Probably correct, though if we just defined our unit of angle as "one radian" we'd really be okay. It is nice to have it correspond to some easily pictured and drawn reality, though, like a circle or a semicircle or a right angle. That

QUOTE |

The radian exists not for the sake of dividing up straight angles, but for the sake of dividing up circles. Or rather, the whole motivation for identifying the radian as a unit is because we are dealing with the circle, and its defining dimension, its radius, which is intimately involved in defining the circle functions of trigonometry. They're "circle functions", not "straight line" functions, after all. The proper definition for the radian is "an angle subtended by a circular arc of length equal to the radius of the circle." |

You're throwing in the word "circular" without need. The proper definition for a radian is "the ratio between the length of an arc and its radius." This definition works whether we're talking about circles or not. It works for circles because the lengths of the enclosing lines have to be the same, which means it's easy to spin them all the way around to a circle; but "1 radian" is "the angle subtended by an arc of length equal to its two equal enclosing lines."

QUOTE |

Given the radius of a circle, the circumference of that circle constitutes \(\tau\) radii. Thus the full angle subtended by that circle constitutes \(\tau\) radians. This is a simple and clear relationship. |

Indeed. But no clearer than the straight angle is.

If the circle were the basis for the radian, surely it would come to an integral number of them. But it doesn't. The radian is natural no matter what our unit of angle is.

QUOTE |

Or "one fourth of the circumference..." -- but you get the idea. But there's nothing particularly special about the number 2 that would make it any less extraneous than any other unnecessary factor. |

But we're talking about

QUOTE | ||

That's easy: Using names that humans already commonly use. Like "turn", "quadrant", "octant", etc. Or basing names on shapes humans can directly perceive. Like "diagonal", "right angle", "straight angle", "triangular", "pentagonal", etc |

Ugh. Surely this isn't meant as an exclusive definition? As in, you wouldn't deny "human-friendly" status to

QUOTE |

Because these combine a strange, unwieldy, non-rational unit, the radian, with strange, unwieldy, non-rational transcendental ratios, which have never been given proper names but instead have just been given abstract labels, in the form of letters in an alphabet that is obscure to most people. |

The radian isn't non-rational, and it's unwieldy only in the sense that it's hard to visually estimate it. (Unlike a straight line, which is easy to draw freehand or with a rule; or a circle, which is easy to draw with a compass.) And these names don't actually combine radians with transcendental ratios; they just include the name there to remind us of the conversion factor to radians.

Really, I think a unit name should

I want the same thing with the name for my angular unit. Give me a mono- or bisyllabic name that's not the same as an actual physical thing, and I might be able to get behind it. But I'm not in favor of copying natural terms, or using unwieldily long names.

QUOTE |

You claim the one irrational cancels out the other irrational. |

No, I don't. First of all, the radian isn't irrational; there's just an irrational number of them in any easy shape. Second, the whole point of the unit is to get

QUOTE | ||

There's nothing about the names of these things that would change the procedure for conversion, or the factors used in that procedure. But there is a big difference in visceral comprehension invested in one kind of name versus another. Of course, for this example, I would simply multiply Quadrantlets by \(\tau_{400'}\) ("biciaquadrantau"), which I would have precalculated as \(\tau/400'\). |

Which is a significantly more complex procedure than "multiply by this one number, the name of which is already right in front of you."

If we're worried about "visceral comprehension" based on name recognition, why don't we just call the unit "Angle"?

By the way, I like "lets" over "parts." I'd recommend sticking to that.

QUOTE (dgoodmaniii @ Jul 6 2012, 03:14 PM) |

The proof is that the radian is simply the ratio of an arc to its enclosing rays. That's it; that's all it is. We have an angle of one radian when the arc subtended by the angle is equal to the length of the lines defining it. The definition is independent of that of a circle. It's easiest to understand when related to a circle, but it's the natural unit of angle even outside of circles. |

Except that "arc" implies a circle anyway, because it's not just any arbitrary curve at all that happens to join the end points of two truncated rays. It's specifically a curve that maintains a constant radius, i.e., it's some portion of a circle, with a given radius, centered at the point where the rays unite. What makes those rays not just rays but radii is the circle. The idea of a "radian" as a unit of measure is intimately tied up with the idea of a circle and its radius, no matter how you slice it. Without the circle and its radius, there would be no point to inventing the radian as a unit of measure at all! The fact that you can measure any angle as some multiple of a radian is beside the point. When you do so, you're relating that angle to a circle and its radius.

The heart of my debate with you is that you keep insisting that there is an ... almost "moral equivalence" :) between \(\pi\) and \(\tau\), simply because you can find certain situations where a factor of 2 crops up and then assert \(\tau = 2\pi\). Likewise, when dealing with circles, you keep insisting that there is such an equivalence between diameters and radii. I insist that dealing with \(\pi\) and diameters introduce unnecessary complications that distract from the main ideas behind trigonometry. I just can't give \(\pi\) and diameters that much credit.

As for the tangent function -- even that function implies a circle. Why is that function called that? A "tangent" is a "touching line". What is that line "touching"? What is it "tangent to"? A circle. The same circle we're dealing with when we calculate sine and cosine. Whether we draw that circle or not. When we're computing a "tangent" function, it's not just a ratio of the opposite side to the adjacent side of a triangle. It's the height of a triangle similar to the one containing sine, cosine and radius, but with a radius as the base instead of cosine, and a secant line as hypotenuse instead of a radius, so that the tangent line is tangent to the circle at that radius.
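The similar-triangle relationship described above can be written out in symbols (a restatement, nothing new): scaling the sine-cosine-radius triangle by \(1/\cos\theta\) gives the triangle with a unit radius for its base, the tangent line for its height, and a secant for its hypotenuse, so

\[ \frac{\tan\theta}{1} = \frac{\sin\theta}{\cos\theta}, \qquad \sec\theta = \frac{1}{\cos\theta}. \]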

All of the trigonometric functions owe their existence to the circle. The whole motivation for defining them and studying them is based on how angles and triangles interact with a circle. The first trigonometric table ever compiled, attributed to Hipparchus in the second century BC, wasn't a table of triangular ratios; it was a table of chords.

To say that you can do a lot of trigonometry with just the triangles, without comprehending how the circle is involved, is just saying that you can teach kids to apply techniques before they really get why they work.

QUOTE (dgoodmaniii @ Jul 6 2012, 03:14 PM) |

Sometimes, you seem to think that the geometrical representation of something is vitally important; like here, you argue that because we don't use a diameter in the proof, using \(d\) rather than \(2r\) would be "mysterious," and you've argued that using \(\pi\) is "mysterious" because a semicircle doesn't actually exist in some of these cases. Yet when I've pointed out similar situations where something isn't actually drawn in, you've dismissed this as irrelevant, and said that I'm confusing the reality with the geometrical representation. |

No. In those cases you were dismissing my arguments simply because I didn't draw the construction I was using for my proof or derivation, not that I couldn't draw it. The specific example I recall was the external angles of a regular polygon. If you literally want to see the construction, I could, and did provide it. Or you dismissed my arguments because there might be more than one way to draw the construction, equivalently, so you denied the "reality" of it. I find that whole line of reasoning specious. Half the proofs in Euclid would be voided if we held it to that stringent a standard.

In those cases where I've dismissed your arguments, it's been because you've attempted to make associations purely by means of algebra without any backup of geometric interpretation. In other words, you never showed how such associations are relevant to a derivation, proof, or solution. What does the substitution tell us about the nature of the problem?

So the challenge for you is this: Can you show me a construction where, for instance, we can derive a proof of the Law of Sines using the diameter, and reasoning about the diameter, and not the radius of the circle, and reasoning about the radius? Reasoning about the diameter directly, and not just applying \(d=2R\) in the end?

Or derive the area of a circle directly from a semicircle, and not just by substituting \(\frac{1}{2}\tau = \pi\) in the end.

And then explain how such proofs or derivations are better, or even just as good as, the ones involving \(\tau\) and the radius of a circle.

QUOTE (dgoodmaniii @ Jul 6 2012, 03:14 PM) |

You're proving my point for me; you're just using the words differently. "Simplest" does not mean "most natural." It's not the nature of the circle which requires the radius being used in its definition; it's just the easiest way to define it (and to draw it). With radians, on the other hand, that's the only way our functions work. That's a different situation. |

LOL. No, it's not a different situation at all! You

We agree in part, because we both value when a solution is "simpler", i.e., free of unnecessary complicating factors. But we disagree on what constitutes an unnecessary complicating factor. I perceive your view as being a little too superficial, because what you view as "simpler" seems to amount to how many operations someone has to do to compute something. What I view as "simpler" is what is simpler to explain, teach, understand, reason about; simpler to derive or prove or construct; simpler in terms of the number of concepts you need to keep track of. There's all sorts of ways to streamline a mindless computation of a formula; but there are only so many ways to derive the formula in the first place. I think it's a lot more fruitful to teach kids how to derive all the formulas they need for a math test on the spot, based on a visceral understanding of what the math means, than to have them memorize a bunch of formulas, and supposedly "help" them by slightly reducing the number of symbols in the formulas. Give them the tools to build the tools they need. Give them a unifying model of what is going on.

QUOTE | ||

Except that "arc" implies a circle anyway, because it's not just any arbitrary curve at all that happens to join the end points of two truncated rays. It's specifically a curve that maintains a constant radius, i.e., it's some portion of a circle, with a given radius, centered at the point where the rays unite. What makes those rays not just rays but radii is the circle. |

Only if you insist on drawing a circle there. An arc of constant radius makes just as much sense for some arbitrary angle as it does for a circle. It's easy to construct a circle out of it, and it's a great way to explain the radian to people---rolling a unit circle across a distance equal to its radius and so forth---but it's not married to the circle by necessity, only by convenience.

And either way, the fact is that the radian, which we both agree is the only natural unit of angle,

QUOTE |

The idea of a "radian" as a unit of measure is intimately tied up with the idea of a circle and its radius, no matter how you slice it. |

Yep. But the radian is the radian whether or not there's a circle. Not so \(\pi\), which wouldn't be \(\pi\) without the straight angle; or the full circle constant, which wouldn't be itself without the circle.

QUOTE |

Without the circle and its radius, there would be no point to inventing the radian as a unit of measure at all! The fact that you can measure any angle as some multiple of a radian is besides the point. When you do so, you're relating that angle to a circle and its radius. |

No; I'm relating it to a curve of constant radius the arc of which is equal to its component arms. That's the radian. It happens that there are about 6;3494 of these in a full circle (necessarily an approximation, of course).
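For what it's worth, that dozenal figure can be machine-checked; this converter is my own throwaway sketch (using X for ten and E for eleven, per the usual dozenal convention):

```python
import math

def to_dozenal(x, places):
    """Render a non-negative number in base twelve; X = ten, E = eleven."""
    digits = "0123456789XE"
    ip = int(x)
    frac = x - ip
    whole = ""
    # integer part, most significant digit first
    while ip:
        whole = digits[ip % 12] + whole
        ip //= 12
    out = (whole or "0") + ";"  # ";" is the dozenal fraction point
    # fractional digits, truncated (not rounded)
    for _ in range(places):
        frac *= 12
        d = int(frac)
        out += digits[d]
        frac -= d
    return out

print(to_dozenal(2 * math.pi, 4))  # prints 6;3494 -- the radians in a full circle
```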

We can inscribe a circle quite easily, and it's often convenient to do so; but that's exactly what it is: a convenience.

QUOTE |

The heart of my debate with you is that you keep insisting that there is an ... almost "moral equivalence" :) between \(\pi\) and \(\tau\), simply because you can find certain situations where a factor of 2 crops up and then assert \(\tau = 2\pi\). Likewise, when dealing with circles, you keep insisting that there is such an equivalence between diameters and radii. I insist that dealing with \(\pi\) and diameters introduce unnecessary complications that distract from the main ideas behind trigonometry. I just can't give \(\pi\) and diameters that much credit. |

I'm saying that the only natural unit of angle is the radian, and that the choice of a non-radian angle unit, if we select one at all, is arbitrary, based on convenience and ease of use rather than the nature of angle.

Yes, I

QUOTE |

As for the tangent function -- even that function implies a circle. Why is that function called that? A "tangent" is a "touching line". What is that line "touching"? What is it "tangent to"? A circle. The same circle we're dealing with when we calculate sine and cosine. Whether we draw that circle or not. When we're computing a "tangent" function, it's not just a ratio between the opposite side to the adjacent side of a triangle. It's the height of a triangle similar to the the one containing sine, cosine and radius, but with a radius as the base instead of cosine, and a secant line as hypotenuse instead of a radius, so that the tangent line is tangent to the circle at that radius. |

But you're assuming that its utility in triangles arises out of its utility in circles; one could just as easily claim it the other way around, and be just as right. And again, given how it actually arose historically, I argue that it's easier to understand it the other way around; and when trigonometry is actually applied, it's more often than not applied in triangles rather than circles.

QUOTE |

To say that you can do a lot of trigonometry with just the triangles, without comprehending how the circle is involved, is just saying that you can teach kids to apply techniques before they really get why they work. |

You're not starting that "magic formula memorization" thing again, are you? I've said it once already, and I'll say it again:

Plus, this is just wrong; trigonometric functions do work because they represent the ratios of the sides of a triangle. There's certainly more to them, but that's a subset of what they are.

QUOTE (Kodegadulo @ Jul 6 2012, 05:03 PM) |

No. In those cases you were dismissing my arguments simply because I didn't draw the construction I was using for my proof or derivation, not that I couldn't draw it. The specific example I recall was the external angles of a regular polygon. If you literally want to see the construction, I could, and did provide it. Or you dismissed my arguments because there might be more than one way to draw the construction, equivalently, so you denied the "reality" of it. I find that whole line of reasoning specious. Half the proofs in Euclid would be voided if we held it to that stringent a standard. |

No, that's not the case at all. (The exterior-angle pictures you drew were pretty, but unnecessary; I knew what you were talking about. But there are still no circles there by that definition, only supplementary angles based on the straight line. That's the point I was making. And interestingly, what we really probably want to know about the exterior angles of a polygon is the explement of the interior angle; and we can calculate that easily, without knowing the interior angles, using \(\pi\).) You've repeatedly stated that using \(\pi\) is "mysterious" because there isn't a physical half-circle involved. When I've made equivalent statements about your preferred constant and the lack of a full circle, you've dismissed them as irrelevant.

I don't care that the actual geometrical shape isn't drawn there; but you insisted that \(\pi\) made no sense because there was no semicircle, so I was challenging you to show me why a full circle constant made sense when there wasn't one.

So why is it important that there be a half circle to justify \(\pi\), but not important that there be a full circle to justify your constant?

QUOTE |

In those cases where I've dismissed your arguments, it's been because you've attempted to make associations purely by means of algebra without any backup of geometric interpretation. In other words, you never showed how such associations are relevant to a derivation, proof, or solution. What does the substitution tell us about the nature of the problem? |

That's because my concern isn't how to derive the formulas, as I've said more than once before; my concern is

In other words, I'm not disputing or conceding your arguments about your circle constant being better than \(\pi\) as a derivational tool, or as a more "pure" circle constant, or any of that jazz. I'm saying that

QUOTE |

So the challenge for you is this: Can you show me a construction where, for instance, we can derive a proof of the Law of Sines using the diameter, and reasoning about the diameter, and not the radius of the circle, and reasoning about the radius? Reasoning about the diameter directly, and not just applying d=2R in the end? |

No, that's you trying to force the argument onto your terms. Just because

The challenge to you is to show how

Oh, and once again:

Well, it's about 3;6, and I'm working on n-spheres! *weeps bitterly about own geekiness* If I've made any mistakes here, I'm chalking it up to the late (early) hour.

I've generalized the tables on n-spheres into equations written in terms of \(\pi\), because I believe that the tables written in terms of \(\pi\) given here and in the newer versions of the Tau Manifesto were unnecessarily complicated by two factors: (1) insistence on using the diameter instead of the radius, when I think they work better using the radius; and (2) failure to resolve out a pretty significant algebraic equality that makes the patterns of the system when expressed in terms of \(\pi\) much clearer, and the equations much cleaner.

That algebraic equality is \((xy)^n = x^n y^n\). If instead of just substituting \(2\pi\) for every \(\tau\) in the table, this had been resolved out, some patterns would have emerged that make the table just as pretty for \(\pi\) as for \(2\pi\) (I call it \(\varsigma\), pronounced "varsigma," because it looks sort of like a "c" for "circle," and because it's not horridly overloaded the way that \(\tau\) is).

So let's generalize the formulas for the surface and volume of an \(n\)-sphere where \(n \% 2 = 0\); I'm using the term "\(n_e!\)" for "the factorial of all evens up to and including \(n\)," and "\((n-1)_e!\)" for "the factorial of all evens less than but not including \(n\)," but both including 1, just for brevity's sake. I'm sure there's a better notation, but I don't know it.

\[ S_n = \left(\frac{1}{(n-1)_e!}\right) \cdot \left(2^{n/2}\right)\pi^{n/2}r^{n-1} \]

\[ V_n = \left(\frac{1}{n_e!}\right) \cdot \left(2^{n/2}\right)\pi^{n/2}r^n \]

Now, since \((xy)^n = x^n y^n\), it follows that \(x^n y^n = (xy)^n\), so you may be wondering what exactly we gain by saying \((2^{n/2})\pi^{n/2}\) rather than \((2\pi)^{n/2}\). Well, what we gain is *cancellation*; it reminds us that we can further generalize these formulas to yield their simplest possible forms by saying thus:

\[ S_n = \frac{\left(\pi^{n/2}r^{n-1}\right)}{\left(\frac{(n - 1)_e!}{2^{n/2}}\right)} \]

\[ V_n = \frac{\left(\pi^{n/2}r^n\right)}{\left(\frac{n_e!}{2^{n/2}}\right)} \]
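As a quick machine check of these reduced even-\(n\) forms (a sketch of my own; the function names are mine, and the comparison is against the standard gamma-function closed forms for the \(n\)-sphere):

```python
import math

def even_fact(n):
    # "n_e!": product of the even numbers 2, 4, ..., n (empty product = 1)
    out = 1
    for k in range(2, n + 1, 2):
        out *= k
    return out

def S_even(n, r=1.0):
    # reduced even-n surface formula from the text
    return math.pi ** (n // 2) * r ** (n - 1) / (even_fact(n - 1) / 2 ** (n // 2))

def V_even(n, r=1.0):
    # reduced even-n volume formula from the text
    return math.pi ** (n // 2) * r ** n / (even_fact(n) / 2 ** (n // 2))

# standard closed forms via the gamma function, for comparison
def S_exact(n, r=1.0):
    return 2 * math.pi ** (n / 2) * r ** (n - 1) / math.gamma(n / 2)

def V_exact(n, r=1.0):
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

for n in (2, 4, 6, 10):
    assert math.isclose(S_even(n), S_exact(n))
    assert math.isclose(V_even(n), V_exact(n))
print("even-n formulas agree")
```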

So let's do a couple of examples here, to make sure this really works. Okay, so let's try X, since we've already got the formulas in the Tau Manifesto to compare it to.

\[ S_X = \frac{\left(\pi^{X/2}r^{X-1}\right)}{\left(\frac{(X - 1)_e!}{2^{X/2}}\right)} = \frac{\pi^5r^9}{\left(\frac{280}{2^5}\right)} = \frac{\pi^5r^9}{\left(\frac{280}{28}\right)} = \frac{\pi^5r^9}{10} \]

The Tau Manifesto gives the equation as \(S_{10} = \frac{1}{8\cdot6\cdot4\cdot2}(2\pi)^5\left(\frac{D}{2}\right)^9\), so let's dozenize and reduce: \(\frac{28\pi^5r^9}{280} = \frac{\pi^5r^9}{10}\), seems right. Let's try the volume:

\[ V_X = \frac{\left(\pi^{X/2}r^X\right)}{\left(\frac{X_e!}{2^{X/2}}\right)} = \frac{\left(\pi^5r^X\right)}{\left(\frac{2280}{28}\right)} = \frac{\pi^5r^X}{X0} \]

And again, the Tau Manifesto gives the equation as \( V_{10} = \frac{1}{2\cdot4\cdot6\cdot8\cdot10}\left(2\pi\right)^5\left(\frac{D}{2}\right)^{10}\), so again we'll dozenize and reduce: \(\frac{28\pi^5r^X}{2280} = \frac{\pi^5r^X}{X0}\). Looks good, appears that that one works, too.
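The dozenal arithmetic in these reductions can be verified mechanically too; this integer converter is a throwaway helper of mine (X for ten, E for eleven):

```python
def dozenal(n):
    # non-negative integer to a dozenal numeral; X = ten, E = eleven
    digits = "0123456789XE"
    if n == 0:
        return "0"
    s = ""
    while n:
        s = digits[n % 12] + s
        n //= 12
    return s

assert dozenal(8 * 6 * 4 * 2) == "280"        # (X - 1)_e!
assert dozenal(10 * 8 * 6 * 4 * 2) == "2280"  # X_e!
assert dozenal(2 ** 5) == "28"
assert dozenal(384 // 32) == "10"             # one dozen
assert dozenal(3840 // 32) == "X0"            # ten dozen
```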

And we can follow the same procedures for the equations that we all know so well, and that we're accustomed to applying to some more usual \(n\)-spheres than a ten-dimensional shape. Let's see if this works to produce \(A = \pi r^2\). What we're talking about there is, of course, the volume of a two-dimensional shape (which we call the "area"), so:

\[ V_2 = \frac{\left(\pi^{2/2}r^2\right)}{\left(\frac{2_e!}{2^{2/2}}\right)} = \frac{\pi r^2}{\left(\frac{2}{2}\right)} = \pi r^2 \]

Yep, that works. What about circumference? That is, the surface of a 2-sphere?

\[ S_2 = \frac{\left(\pi^{2/2}r^{2-1}\right)}{\left(\frac{(2 - 1)_e!}{2^{2/2}}\right)} = \frac{\pi r}{\left(\frac{1}{2}\right)} = 2\pi r \]

So that works, too.

This is because that "extraneous factor of two" *does* cancel out in all cases *but* the surface of a 2-sphere. (The volume of a 2-sphere is the only place that it does *not* cancel out when all this is rewritten in terms of \(\varsigma\).) But you have to generalize the formula and do the algebra. This yields a very simple equation in reduced form; indeed, the simplest possible for the volume of the 2-sphere (and no more complicated for the surface thereof than \(\varsigma\) gives us for the volume).

This is already an obscenely long post, so I'll spare the algebra for the odd-numbered \(n\)-spheres and just give the generalized equations in terms of \(\pi\). \(n_o!\) and friends have the expected meaning given the use of \(n_e\) above.

\[ S_n = \frac{2\cdot2^{(n-1)/2}\pi^{(n-1)/2}r^{n-1}}{(n-1)_o!} \]

\[ V_n = \frac{2\cdot2^{(n-1)/2}\pi^{(n-1)/2}r^n}{n_o!} \]

It's important to note that that second 2 doesn't come from \(\pi\) not being \(\varsigma\); it comes from the fraction \(\frac{2}{n_o!}\) which is factored into *all* of these equations. When you've got your actual equation, you won't see the two in either case. This does yield some bulky fractional equations (of course, these can be further rearranged to get rid of the factor of \(2^{(n - 1)/2}\), but since the denominators won't cancel out, there's little point), but the \(\varsigma\) equations yield some pretty nasty ones, too, so ditching this extra 2 seems a small gain, if any.
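The odd-\(n\) case can be sanity-checked the same way (again my own sketch, writing the factor of \(2^{(n-1)/2}\) mentioned above explicitly, and comparing against the standard gamma-function closed forms):

```python
import math

def odd_fact(n):
    # "n_o!": product of the odd numbers 1, 3, ..., n (empty product = 1)
    out = 1
    for k in range(1, n + 1, 2):
        out *= k
    return out

def S_odd(n, r=1.0):
    # odd-n surface: 2 * 2^((n-1)/2) * pi^((n-1)/2) * r^(n-1) / (n-1)_o!
    half = (n - 1) // 2
    return 2 * 2 ** half * math.pi ** half * r ** (n - 1) / odd_fact(n - 1)

def V_odd(n, r=1.0):
    # odd-n volume: 2 * 2^((n-1)/2) * pi^((n-1)/2) * r^n / n_o!
    half = (n - 1) // 2
    return 2 * 2 ** half * math.pi ** half * r ** n / odd_fact(n)

# standard closed forms via the gamma function, for comparison
def S_exact(n, r=1.0):
    return 2 * math.pi ** (n / 2) * r ** (n - 1) / math.gamma(n / 2)

def V_exact(n, r=1.0):
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

for n in (1, 3, 5, 7):
    assert math.isclose(S_odd(n), S_exact(n))
    assert math.isclose(V_odd(n), V_exact(n))
print("odd-n formulas agree")
```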

The important thing is that by using \(\pi\) we can (1) derive easy-to-use equations, including those that we're already well familiar with; and (2) demonstrate the relationships between multidimensional spheres, if we're feeling like it. At least, those relationships seem pretty clear to me with these equations, just as they do in the Tau Manifesto tables.

If I've screwed up on the algebra here, I'm sure one of you fine mathematicians will notice it; but I think this should be all correct, and I honestly think that this offers a perfectly clear perspective on \(n\)-spheres, one that is just as clear as the table we've all come to know.

QUOTE (dgoodmaniii @ Jul 8 2012, 07:23 AM) |

(1) insistence on using the diameter instead of the radius, when I think they work better using the radius; |

What are you

Why did you even bother with this? I thought "mathematical purity" didn't matter to you.

QUOTE (dgoodmaniii @ Jul 8 2012, 07:23 AM) |

(2) failure to resolve out a pretty significant algebraic equality that makes the patterns of the system when expressed in terms of \(\pi\) much clearer, and the equations much cleaner. |

Nope. It's pretty clear from both the Tau Manifesto, and from my posts about this, and even from the standard \(\pi\)-ist treatments of this that you can find out there, and

The fact of the matter is, the

\[\tau = 2\pi\]

and then have tried to exploit those 2's to cancel out factors that arise from somewhere else. Namely, from calculus steps where the radius is integrated from 0 to r to turn a "surface" into a "volume", which at that point has nothing to do with \(\tau\) or \(\pi\), but which factors the current dimension number into the denominator.

The \(\tau\)'s themselves arise from separate calculus steps that take you from an n-2 "volume" (that's n-2 by the way, not n-1) to an n "surface", by integrating a rotation from 0 to \(\tau\). Not 0 to \(\pi\).

All you've done here -- or rather, attempted to do -- are some algebraic manipulations, without any geometric interpretation to back it up. In other words, the typical \(\pi\)-ist gimmick. On the other hand

QUOTE |

That algebraic equality is \((xy)^n = x^n y^n\). If instead of just substituting \(2\pi\) for every \(\tau\) in the table, this had been resolved out, some patterns would have emerged that make the table just as pretty for \(\pi\) as for \(2\pi\) (I call it \(\varsigma\), pronounced "varsigma," because it looks sort of like a "c" for "circle," and because it's not horridly overloaded the way that \(\tau\) is). |

Non-issue. \(e\) is horridly overloaded too, but people manage not to get confused. You can find cases where \(e\) appears as both Euler's number and the charge on the electron, all in the same equation. You can even find cases where \(\pi\) appears as both the prime-counting function and a circle constant, all in the same equation. If torque is a conflict, just define \(T\) = torque. It fits better with \(F\) = force anyway.

QUOTE |

So let's generalize the formulas for the surface and volume of an \(n\)-sphere where \(n \% 2 = 0\); I'm using the term "\(n_e!\)" for "the factorial of all evens less than including \(n\)," and "\((n-1)_e!\)" for "the factorial of all evens less than but not including \(n\)," but both including 1, just for brevity's sake. I'm sure there's a better notation, but I don't know it. |

It was right there in front of you, in the Tau Manifesto, and even in one of my posts about this. It's called the double-factorial, and it's pretty simple; it's just like the factorial, except you decrement by 2 instead of 1:

\[n!! = n \left(n - 2\right) \left(n - 4\right) \ldots\]

That same definition works for both the even and the odd cases. But what you missed in doing this little 2-cancellation game, is that in the end it leaves you with an
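A minimal implementation of the double factorial, for reference (my own sketch, not from the Manifesto):

```python
def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 2 (even n) or 1 (odd n)
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

assert double_factorial(8) == 8 * 6 * 4 * 2   # 384, the even case
assert double_factorial(10) == 3840
assert double_factorial(7) == 7 * 5 * 3 * 1   # 105, the odd case
assert double_factorial(0) == double_factorial(1) == 1
```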

QUOTE |

\[ S_n = \left(\frac{1}{(n-1)_e!}\right) \cdot \left(2^{n/2}\right)\pi^{n/2}r^{n-1} \] \[ V_n = \left(\frac{1}{n_e!}\right) \cdot \left(2^{n/2}\right)\pi^{n/2}r^n \] Now, since \((xy)^n = x^n y^n\), it follows that \(x^n y^n = (xy)^n\), so you may be wondering what exactly we gain by saying \((2^{n/2})\pi^{n/2}\) rather than \((2\pi)^{n/2}\). Well, what we gain is cancellation; it reminds us that we can further generalize these formulas to yield their simplest possible forms by saying thus:\[ S_n = \frac{\left(\pi^{n/2}r^{n-1}\right)}{\left(\frac{(n - 1)_e!}{2^{n/2}}\right)} \] \[ V_n = \frac{\left(\pi^{n/2}r^n\right)}{\left(\frac{n_e!}{2^{n/2}}\right)} \] So let's do a couple of examples here, to make sure this really works. Okay, so let's try X, since we've already got the formulas in the Tau Manifesto to compare it to. \[ S_X = \frac{\left(\pi^{X/2}r^{X-1}\right)}{\left(\frac{(X - 1)_e!}{2^{X/2}}\right)} = \frac{\pi^5r^9}{\left(\frac{280}{2^5}\right)} = \frac{\pi^5r^9}{\left(\frac{280}{28}\right)} = \frac{\pi^5r^9}{10} \] The Tau Manifesto gives the equation as \(S_{10} = \frac{1}{8\cdot6\cdot4\cdot2}(2\pi)^5\left(\frac{D}{2}\right)^9\), |

No, the Tau Manifesto gives the equation as \(S_{10} = \frac{\tau^5}{8!!} r^9 = \frac{\tau^5}{8\cdot6\cdot4\cdot2} r^9\), which is much simpler to get to than doing your 2-cancellation gimmick. It later goes on to show you that \(\pi\)-and-diameter based monstrosity as an example of what

QUOTE |

so let's dozenize and reduce: \(\frac{28\pi^5r^9}{280} = \frac{\pi^5r^9}{10}\), seems right. Let's try the volume: \[ V_X = \frac{\left(\pi^{X/2}r^X\right)}{\left(\frac{X_e!}{2^{X/2}}\right)} = \frac{\left(\pi^5r^X\right)}{\left(\frac{2280}{28}\right)} = \frac{\pi^5r^X}{X0} \] And again, the Tau Manifesto gives the equation as \( V_{10} = \frac{1}{2\cdot4\cdot6\cdot8\cdot10}\left(2\pi\right)^5\left(\frac{D}{2}\right)^{10}\), |

No, again, it gives it as \( V_{10} = \frac{\tau^5 }{10!!} r^{10} = \frac{\tau^5}{2\cdot4\cdot6\cdot8\cdot10} r^{10}\), and cites the \(\pi\)-and-diameter version as the wrong idea.

QUOTE |

so again we'll dozenize and reduce: \(\frac{28\pi^5r^X}{2280} = \frac{\pi^5r^X}{X0}\). Looks good, appears that that one works, too. And we can follow the same procedures for the equations that we all know so well, and that we're accustomed to applying to some more usual \(n\)-spheres than a ten-dimensional shape. Let's see if this works to produce \(A = \pi r^2\). What we're talking about there is, of course, the volume of a two-dimensional shape (which we call the "area"), so: \[ V_2 = \frac{\left(\pi^{2/2}r^2\right)}{\left(\frac{2_e!}{2^{2/2}}\right)} = \frac{\pi r^2}{\left(\frac{2}{2}\right)} = \pi r^2 \] Yep, that works. What about circumference? That is, the surface of a 2-sphere? \[ S_2 = \frac{\left(\pi^{2/2}r^{2-1}\right)}{\left(\frac{(2 - 1)_e!}{2^{2/2}}\right)} = \frac{\pi r}{\left(\frac{1}{2}\right)} = 2\pi r \] |

All you've done here is find a more convoluted and roundabout way to apply the same old \(\pi\)-ist substitution gimmick. Why bother to go through all that work? Why not just wait for the \(\tau\)-ists to do all your work for you, and just pounce in at the end and claim the \(\pi\) high ground:

\[V_2 = \frac{1}{2} \tau r^2 = \frac{1}{2} \left(2 \pi\right) r^2 = \pi r^2\]

\[S_2 = \tau r = \left(2 \pi\right) r = 2 \pi r \]

I mean, why should you be burdened with little trifles like coming up with a geometric rationale for what you're doing? Let those \(\tau\)-ist patsies do all the grunt-work for you. All you need to get what you want is algebra.

QUOTE |

So that works, too. This is because that "extraneous factor of two" does cancel out in all cases but the surface of a 2-sphere. (The volume of a 2-sphere is the only place that it does cancel out when all this is rewritten in terms of \(\varsigma\).) But you have to generalize the formula and do the algebra. This yields a very simple equation in reduced form; indeed, the simplest possible for the volume of the 2-sphere (and no more complicated for the surface thereof than \(\varsigma\) gives us for the volume). This is already an obscenely long post, so I'll spare the algebra for the odd-numbered \(n\)-spheres and just give the generalized equations in terms of \(\pi\). \(n_o!\) and friends have the expected meaning given the use of \(n_e\) above. \[ S_n = \frac{2\cdot2\pi^{(n-1)/2}r^{n-1}}{(n-1)_o!} \] \[ V_n = \frac{2\cdot2\pi^{(n-1)/2}r^n}{n_o!} \] It's important to note that that second 2 doesn't come from \(\pi\) not being \(\varsigma\); |

Right. That represents the two vertices of the line segment in Lineland. The line segment was our 1-ball, and its two vertices constitute its "surface". And all the higher odd dimensions are built on that. But that's hardly an "extra" 2. It's essential.

However, you wrote that equation wrong. It's not \(2 \cdot 2\pi^{(n-1)/2}\). It's \(2 \cdot \left(2\pi\right)^{(n-1)/2} = 2 \cdot 2^{(n-1)/2} \cdot \pi^{(n-1)/2} \). Which ends up piling an increasing number of 2's into the numerator with each dimensional step. All those other 2's come from you arbitrarily deciding to substitute \(2\pi\) for \(\tau\).

QUOTE |

it comes from the fraction \(\frac{2}{n_o!}\) which is factored into all of these equations. When you've got your actual equation, you won't see the two in either case. This does yield some bulky fractional equations (of course, these can be further rearranged to get rid of the factor of \(2^{(n - 1)/2}\), but since the denominators won't cancel out, there's little point), but the \(\varsigma\) equations yield some pretty nasty ones, too, so ditching this extra 2 seems a small gain, if any. |

"Ditch" the "extra" 2 that came from Lineland? Good luck. How about ditching all those extra 2's that you pile on, one every two dimensions, that really are "extra"?

The denominators get hairy in all dimensions, even and odd, because integrating \(r\) appends another dimension number onto a rolling double factorial. But that's just the way it is. That's no justification to pile unnecessary powers of 2 onto the

[+IRONY]

You should have substituted right angles instead of just straight angles:

\[\tau = 4\eta\]

I mean, it seems like a "small gain" to deny another set of "extra" 2's when you can create a clear win for \(\eta\) in the process. Who cares if the odd-dimension formulas get more complicated that way? This will let us cancel out a lot of even halves of even numbers in the higher even dimensions. Okay, so now we'll get some leftover powers of 2 in the numerator on the even side, and lots of 4's on the odd side, but the whole thing's too hairy to understand anyway.

Or better yet, you should have substituted diagonals instead:

\[\tau = 8\delta\]

Again, what does it buy us to deny another two sets of "extra" 2's in these equations? Think of how many more even halves of even halves of even numbers we can catch in the higher even dimensions. So now we'll have so many uncancelled 8's on both the even and the odd side that we won't be able to distinguish the even vs. the odd numerators. We'll have achieved ... unity.

[-IRONY]

QUOTE |

The important thing is that by using \(\pi\) we can (1) derive easy-to-use equations, including those that we're already well familiar with; |

You know, I find all of the \(\tau\) equations easy to use. Foregoing some extraneous cancellations that would only partially eliminate those hairy denominators in the even powers, in exchange for not having to multiply the numerators in the odd powers by a raft of powers of 2, seems like a win for \(\tau\) to me.

Besides, I find the \(\tau\) equations a lot easier to use because they're really just

\[ V_n = \left(\frac{\tau^{\lfloor\frac{n}{2}\rfloor}}{n!!}\left(n \operatorname{mod} 2 + 1\right) \right) r^{n}\]

and just

\[ S_n = \left(\frac{\tau^{\lfloor\frac{n}{2}\rfloor}}{\left(n - 2\right)!!}\left(n \operatorname{mod} 2 + 1\right) \right) r^{n-1}\]

which I can program up in a spreadsheet
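In fact, here's roughly what such a spreadsheet boils down to, sketched in Python instead (the function names here are my own invention):

```python
import math

tau = 2 * math.pi

def double_factorial(n):
    # n!! with the usual convention 0!! = (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def ball_volume(n, r=1.0):
    # V_n = tau^floor(n/2) / n!! * (n mod 2 + 1) * r^n
    return tau ** (n // 2) / double_factorial(n) * (n % 2 + 1) * r ** n

def ball_surface(n, r=1.0):
    # S_n = tau^floor(n/2) / (n-2)!! * (n mod 2 + 1) * r^(n-1)
    return tau ** (n // 2) / double_factorial(n - 2) * (n % 2 + 1) * r ** (n - 1)

# The familiar low-dimensional cases fall right out:
print(ball_volume(2))   # pi r^2          -> 3.14159...
print(ball_surface(2))  # tau r = 2 pi r  -> 6.28318...
print(ball_volume(3))   # (4/3) pi r^3    -> 4.18879...
print(ball_volume(10))  # the ten-dimensional case discussed above
```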

And moreover, I find these equations easier to use, because I

QUOTE |

and (2) demonstrate the relationships between multidimensional spheres, if we're feeling like it. |

Nope. Not really. The relationships between those n-dimensional spheres are represented by the setups for those integrations, which lead most cleanly and directly to the equations based on \(\tau\) and \(r\). It's actually less direct, and more convoluted, to get to equations based on \(\pi\) and/or \(D\). And it's actually much easier to trace back from the \(\tau\) and \(r\) equations, to the original integration setups, than trying to do the same starting from equations using \(\pi\), canceled out or not, and/or \(D\).

QUOTE |

At least, those relationships seem pretty clear to me with these equations, just as they do in the Tau Manifesto tables. |

What exactly is "clearer" to you? All I can see that you made "clearer" is that you found another pathway to formulas that have fewer symbols in them. In a tiny number of cases. But actually at the cost of a lot more symbols in many more cases. I don't see that you made the actual concepts used to derive these equations any clearer at all.

QUOTE |

If I've screwed up on the algebra here, I'm sure one of you fine mathematicians will notice it; but I think this should be all correct, and I honestly think that this offers a perfectly clear perspective on \(n\)-spheres, one that is just as clear as the table we've all come to know. |

This is entirely misguided.

[This is a bit out of sequence because I was working on this before my last post:]

I think we have been in agreement about this part:

QUOTE (dgoodmaniii @ Jul 6 2012, 08:53 PM) |

(The exterior angles pictures you drew were pretty, but unnecessary; I knew what you were talking about, but there are still no circles there by that definition,... |

Until one draws them, of course, as part of the accepted procedure for geometric proofs. Which you concede is "unnecessary", and I agree. Even if these things aren't explicitly drawn, they can still be reasoned about.

My contention is that one can take any point in the Euclidean plane whatsoever, and assume that there is a full turn's worth of rotation surrounding it, and use that fact to proceed with a derivation or proof. One can reinforce that by picking some suitable radius and drawing a circle around the point, especially if lines already drawn in the figure intersect it to divide it into arcs, which can then be reasoned about. None of this is at all controversial.

QUOTE |

...but rather supplementary angles based on the |

Which by this argument aren't there either, until one extends the sides of the polygon. But again, this is an accepted practice, rather run-of-the-mill for proofs.

My contention, as a corollary to the one above, is that one can take any point whatsoever on a straight line, and interpret the halves of the line on either side of the point as rays, dividing the full rotation around the point into two half-turns, or equivalently, two straight angles. The point might, for instance, be a vertex of a polygon, and the line an extension of an adjacent side, revealing the exterior angle as the supplement of the adjacent interior angle. But again, all this can be reasoned about even if it is not explicitly drawn. Again, nothing really controversial about that.

QUOTE |

That's the point I was making. |

Yes. But it was a non sequitur. (We seemed to have been talking past each other at the time.) I wasn't objecting to those cases where a half-turn

Just because

I've challenged you, for instance, to demonstrate how a half-turn is relevant to deriving the unit circle area. It's not, of course, because the derivation involves taking a vector of fixed radius and integrating a rotation differential from 0 to \(\tau\), to yield the circle circumference \(\tau r\), and then integrating the radius differential from 0 to r to yield \(\frac{1}{2}\tau r^2\); then evaluating for \(r=1\). But you've made the claim that \(\pi\) is the best way to express the final result. The only support for that is simply the algebraic substitution \(\tau/2 = \pi\). But without any geometric interpretation, such a substitution is arbitrary, and no more relevant than, say, substituting \(\tau/4 = \eta\) to get \(2\eta\) or \(\tau/8 = \delta\) (for "diagonal") to get \(4\delta\). Moreover, I would contend that substituting \(\pi\), interpreted as a "half-turn", or any of these symbols, at best has meaning and relevance only when actually talking about angle measures, and has no meaning whatsoever when talking about areas. I would even lump my proposed "semitau" symbol (\(\tau_2\)), and all the rest of the \(\tau_n\) family, into this assessment.

When challenged about this, your answer seemed to be, "My use of \(\pi\) here is analogous to the case above, where you used the idea of a full turn, but didn't explicitly demonstrate it." Okay, I went and demonstrated my reasoning with explicit constructions; then I challenged you to do the same, as I'm doing now: Justify your use of \(\pi\) with an explicit construction. This is impossible of course, but you don't seem to want to concede that.

Instead your answer seems to be "I don't care about these things, I'm just looking to simplify the formula for practical applications." Okay, fine. But there's more than one way to skin that cat. I've argued (as has Hartl) that the unit n-ball surface areas (or unit (n-1)-sphere areas) and unit n-ball volumes ought to be given coefficient names, and should be pre-computed for practical purposes. I've proposed a family of names \(\alpha_n\) and \(\beta_n\), respectively, for these (Hartl has proposed a different set of names, but the idea is the same). The fact that \(\beta_2\) happens to be numerically equal to the ratio of the circumference of a circle to its diameter, is an interesting, but largely irrelevant coincidence.

Apart from reducing workload, these names are useful in and of themselves. For instance, they can be used to generalize the idea of radian angle measure to n dimensions: If we have some portion of an n-ball surface, and know its "surface area" \(s_n\), and therefore what fraction \(f_n\) of the total n-ball surface that it constitutes (\(f_n = s_n / A_n\)), then \(\theta_n = f_n \cdot \alpha_n\) would be the n-dimensional analog of its radian-angle measure (or rather, the measure of the n-dimensional "angle" the surface subtends). For \(n=2\) we're talking about actual angles and radian measures themselves, so this is \(\theta_2 = f_2\cdot \alpha_2 = f_2 \cdot \tau\). (Incidentally, my Rotationels essentially provide a nomenclature for expressing \(f_2\) for 2-dimensional angles). For \(n=3\), we have "solid angles" measured in steradians, so \(\theta_3 = f_3\cdot \alpha_3 = f_3 \cdot 2\tau\). For \(n=4\), we'd have "hyper-solid angles" measured in "hyper-steradians", so \(\theta_4 = f_4\cdot \alpha_4 = f_4\cdot \frac{\tau^2}{2}\). And so forth.
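To make that progression concrete, here's a small Python sketch (the names `alpha` and `generalized_angle` are my own; the \(\alpha_n\) values are just the unit \(n\)-ball surface areas from the \(\tau\)-based formula):

```python
import math

tau = 2 * math.pi

def double_factorial(n):
    # n!! with the convention 0!! = (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def alpha(n):
    # Unit n-ball surface area: alpha_n = tau^floor(n/2) / (n-2)!! * (n mod 2 + 1)
    return tau ** (n // 2) / double_factorial(n - 2) * (n % 2 + 1)

def generalized_angle(n, f):
    # theta_n = f_n * alpha_n: the n-dimensional "radian" measure of a
    # surface patch covering fraction f of the unit n-ball surface
    return f * alpha(n)

print(alpha(2))                    # tau: a full turn, in radians
print(alpha(3))                    # 2*tau = 4*pi: a full sphere, in steradians
print(alpha(4))                    # tau^2/2: a full 3-sphere, in "hyper-steradians"
print(generalized_angle(2, 0.25))  # a quarter turn: tau/4 = pi/2
```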

In physics, the use of the unit 3-ball (2-sphere) area \(\alpha_3 = 2\tau = 4\pi\) would be helpful for making sense of, for instance, Maxwell's equations. It turns out these deal with electric and magnetic flux integrated over a closed surface, which can be equated to a sphere. I think it was you who asked whether this was "another circle constant"? The answer is, no, it wouldn't be a

QUOTE |

And interestingly, what we really probably want to know about exterior angles of polygons is what the explement of the interior is; and we can calculate that easily, without knowing the interior angles, using \(\pi\).) You've repeatedly stated that using \(\pi\) is "mysterious" because there isn't a physical half-circle involved. |

What I was objecting to was not that you were using half-turns that you hadn't yet drawn. Clearly, the explement of each interior angle of a polygon is the exterior angle at that vertex, plus a half-turn "outside" the line drawn to demonstrate the exterior angle. But we know that the exterior angles of any polygon add up to one full turn, because a polygon is a closed curve. So from that we get, for the sum of the explements, the formula \[n\frac{\tau}{2} + \tau = \tau\left(\frac{n}{2} + 1\right)\] In either of these forms, we can still clearly see that one turn and n half-turns are involved. But then we get into the whole brouhaha over how this one little division by two makes this light-years "harder" than the \(\pi\) alternative.

What I really objected to was the "simplification" of introducing \(\pi\), yielding \[2\pi \left(\frac{n}{2} + \frac{2}{2}\right) = \pi\left(n + 2\right)\]which obscures the half-turns as "pis", and the whole turn as two half-turns and then as nothing but a 2. Compared to that, I'd prefer even saying it like this: \[n\pi + \tau\] although a bit better would be \[n\tau_2 + \tau\]

Even if you want to use this to derive the formula for the interior as the sum of interiors of triangles, I think that ought to be expressed:

\[\frac{\tau}{2}\left(n - 2\right)\]

or if necessary

\[\tau_2\left(n - 2\right)\]

but better yet something like this (where \(I_n\) means "interior of n-gon"):

\[I_n = I_3 \left(n - 2\right)\]

where

\[I_3 = \frac{\tau}{2}\]
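These relations are easy to sanity-check numerically; a throwaway Python sketch (function names mine):

```python
import math

tau = 2 * math.pi

def interior_sum(n):
    # I_n = I_3 * (n - 2), with I_3 = tau/2: a triangle's interiors sum to a half-turn
    return (tau / 2) * (n - 2)

def explement_sum(n):
    # n half-turns plus one full turn: n*tau/2 + tau = tau*(n/2 + 1)
    return tau * (n / 2 + 1)

for n in range(3, 13):
    # Each interior angle plus its explement is one full turn, so for an
    # n-gon the two sums together must come to exactly n full turns:
    assert math.isclose(interior_sum(n) + explement_sum(n), n * tau)

print(interior_sum(4) / tau)  # a quadrilateral's interiors: one full turn -> 1.0
```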

I truly believe that \(\frac{\tau}{2}\), contracted as \(\tau_2\) if necessary, is the

QUOTE |

That's because my concern isn't how to derive the formulas, as I've said more than once before; my concern is ease of application in actual problems. We can teach people to derive formulas using any angular constant; the problem of the factor of two, where it exists, is so minimal as to be negligible in these circumstances. But if we design our mathematics around ease of derivation and sacrifice ease of application, we're making a big mistake. |

I refuse to concede the notion that

As for your complex number "analogy": We don't have to go all the way to the n-ball area and volume formulas to understand the circle area formula, any more than we have to go all the way to complex numbers to understand real numbers. You don't have to widen a student's world with these things until the time is right for them. But even when we're just dealing with the one integration you need to get a circle area from its circumference, it is a benefit to see it as

\[A_{circle} = \frac{1}{2} \tau r^2\]

because, for instance, we can immediately make sense of it in connection with the area of a sector subtending an angle:

\[A_{sector} = \frac{1}{2} \theta r^2\]

which is arrived at by integration starting from similar reasoning. This makes both of these equations easier to understand, remember, and therefore apply. It also reinforces elementary concepts from calculus which are repeatedly used throughout mathematics and physics, making those concepts easier to understand, remember, and apply.
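One can even mimic that integration numerically, summing arc lengths \(\theta r'\) over thin radial shells; a rough Python sketch (names are mine):

```python
import math

def sector_area_numeric(theta, r, steps=100_000):
    # Integrate the arc length theta * r' over r' from 0 to r,
    # using the midpoint rule (exact here, since the integrand is linear in r').
    dr = r / steps
    return sum(theta * (i + 0.5) * dr * dr for i in range(steps))

theta, r = 1.2, 3.0
print(sector_area_numeric(theta, r))  # ~ (1/2) * theta * r^2
print(0.5 * theta * r ** 2)           # the closed form it converges to
```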

But I don't care what you call your argument about complex numbers, be it straw man, reductio ad absurdum, or "analogy". I still found it highly insulting. Was that your intent?

So much for a "genteel discussion" of mathematical concepts.

QUOTE (Kodegadulo @ Jul 8 2012, 12:18 PM) |

* nasty, sarcastic screed * |

Wow. Just...wow. If this is what you call "genteel" (your subtitle for this topic), then I'm out.

Out-of-sequence post:

QUOTE (Kodegadulo @ Jul 8 2012, 01:46 PM) |

But I don't care what you call your argument about complex numbers, be it straw man, reductio ad absurdum, or "analogy". I still found it highly insulting. Was that your intent? So much for a "genteel discussion" of mathematical concepts. |

QUOTE (dgoodmaniii @ Jul 8 2012, 04:05 PM) | ||

Wow. Just...wow. If this is what you call "genteel" (your subtitle for this topic), then I'm out. |

Well, you started it. At this point, I don't know why I'm bothering.

I'm deleting the whole substantive part of this post because it's clear that this won't remain civil. I'll just leave this.

QUOTE |

But I don't care what you call your argument about complex numbers, be it straw man, reductio ad absurdum, or "analogy". I still found it highly insulting. Was that your intent? |

It boggles my mind how you could find that insulting at all, much less highly so. I said nothing about your character, about your intelligence, about your work ethic, or anything else derogatory about you. I didn't even insult your argument; indeed, I explicitly stated that your argument did not produce such an idiotic conclusion.

You, on the other hand, have consistently insulted me

QUOTE |

So much for a "genteel discussion" of mathematical concepts. |

Indeed.

Honestly, I've never intended to insult you. Besides this whole "you're an idiot because you don't like \(\tau\)" thing, I really like you, and enjoy every one of the discussions we have on this board. But it seems clear to me that this is a topic we're not going to be able to remain civil about. So let's call it quits and stick to something that won't clog the board with invective.

What's really troubling is that I've made so many of my points numerous times, in numerous ways, every way I could think of to make the case. I invested a lot of time and effort to come up with the best, most illuminating treatment I could manage, for instance, for the n-sphere areas and volumes. I thought I had shown conclusively that the cleanest representation and derivation of those is simply in terms of \(\tau\).

I built upon what Hartl had done in last year's version of his Manifesto. Particularly, what he did was to algebraically excavate the \(\tau\) version from the typical \(\pi\) version presented in references. What you usually see are two very different looking formulas for the even and odd dimension cases. He saw how loaded with powers of 2 the odd formula was, and how strange it was that the even formula used a single-factorial whereas the odd case used a double factorial. He worked backward. He removed the extraneous powers of two from the odd dimension case, and restored the missing factors of 2 in the denominators of the even case, allowing its factorial to be a double factorial just like the odd case's. In both instances, this was done by replacing powers of \(2 \pi\) with \(\tau\). The result was two nearly identical equations, the only difference being a simple factor of 1 in the even case, and 2 in the odd case. And even that can be incorporated to make the equations identical, by using a function that maps all evens to 1 and odds to 2 (e.g., \(n \operatorname{mod} 2 + 1\)). The result uses fewer symbols, fewer operations, fewer equations, is more elegant, and unifies the even and odd cases. By any objective criterion, it is a better representation of the system. But it doesn't use \(\pi\).

What I added to this was showing how all of this could be derived recursively, meaning step by step how each dimension relates to the next, in terms of successive layers of integral calculus, both integrating on rotation and integrating on radius. I showed how the individual formulas for surface area and volume build on each other recursively to yield the final consolidated formulas Hartl worked out. I even came up with explanations for the one difference (besides even/oddness) between the two cases, the recursive base cases that they start with: a single-point volume in Pointland vs. a two-point surface in Lineland. All of this yields the correct results. None of it is actually new stuff; you can find the essentials buried in the references. All that I added was the narrative to tie it all together. Hartl has since incorporated my treatment into the latest version of his Manifesto, and in fact moved his previous excavation work into an appendix.

But now, dgiii, you just spent all night attempting to undo all that. You proceeded to re-introduce the substitution \(\tau = 2\pi\). If done correctly, what this would produce would be the original reference versions from the literature: two very different looking equations, with apparently reduced terms and a single factorial in the even dimensions, and a double factorial and unexplained powers of two in the odd dimensions. You then made the claim that all of this "relates" the dimensions to each other better, or that we can see that if we so choose. But with the even and odd cases no longer resembling each other, and with the even cases no longer cast in terms of a double-factorial attributable to integrations of rotation every 2 powers, it's hard to credit that claim. Yet you're making the claim quite pointedly and with complete confidence (unfounded in my opinion), as if you expect any reasonable audience to agree with you. This seems to me a direct challenge to my assertions of the exact opposite.

You claim that you are doing all of this to make things easier and more practical. I see it as making things harder to understand and more complicated than necessary. You imply that I'm a "purist" and doing all of this "even at the expense of practicality", just out of personal preference because I simply "like tau." I think the math naturally demands these forms, but you keep making it about me "forcing" the discussion onto my terms. Add to this the fact that with the very first mention of \(\tau\), you immediately took a derisive and antagonistic stance, and at every opportunity since you've simply contradicted every one of my points, making it personally about me and my somehow misguided opinions of what makes math easier. So how could I not be insulted?

I keep repeating the same points, and you keep rejecting them. I pour an inordinate amount of effort to explain the math and provide the reasoning behind my position, but you simply dismiss it all as irrelevant. It seems nothing will possibly persuade you off your position. So what's the point of this?

QUOTE (Kodegadulo @ Jul 8 2012, 07:38 PM) |

I keep repeating the same points, and you keep rejecting them. I pour an inordinate amount of effort to explain the math and provide the reasoning behind my position, but you simply dismiss it all as irrelevant. It seems nothing will possibly persuade you off your position. So what's the point of this? |

Is it really that mystifying to you that somebody might not agree with you? Surely you recognize that rational people might come to different conclusions on these matters?

You think one thing, I think another, and we're apparently incapable of civilly discussing it. Let's just call it a day on this one and discuss something that we don't end up yelling about.