30 July 2008
The five bridges of Kaliningrad
In 1736 Euler solved the following problem: the city of Königsberg is set on both sides of the Pregel river and on two islands between them. There are seven bridges connecting the various landmasses; is it possible to walk around the city in such a way that you cross each bridge exactly once? The answer is no; (the network of landmasses and bridges in) Königsberg didn't have an Eulerian path. In order for a connected network like this to have an Eulerian path, the corresponding graph must have zero or two nodes of odd degree; that is, if we consider the number of bridges on each landmass, exactly zero or two of these numbers can be odd. In Königsberg all four degrees were odd.
But the bridges were bombed in World War II, the city was renamed Kaliningrad, and only five of them were rebuilt: a bridge from each of the islands to each of the shores, and a bridge connecting the two islands. As you can see, there are three bridges on each island and two on each shore of the river; two of these numbers are odd, so there exists an Eulerian path. It's still somewhat useless, because you have to start on one island and end on the other.
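To make the degree-counting argument concrete, here is a minimal Python sketch (my own illustration; the landmass labels N, S, A, B are hypothetical names for the two shores and the two islands, not from the post) that applies the odd-degree test to both bridge networks:

```python
from collections import Counter

def odd_degree_nodes(edges):
    """Return the landmasses that are an endpoint of an odd number of bridges."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [node for node, d in degree.items() if d % 2 == 1]

def has_eulerian_path(edges):
    """A connected multigraph has an Eulerian path iff it has zero or two
    odd-degree nodes (connectivity is assumed, not checked, here)."""
    return len(odd_degree_nodes(edges)) in (0, 2)

# N, S = the two shores; A, B = the two islands.
konigsberg_1736 = [("N", "A"), ("N", "A"), ("S", "A"), ("S", "A"),
                   ("N", "B"), ("S", "B"), ("A", "B")]  # the seven old bridges
kaliningrad_now = [("N", "A"), ("S", "A"), ("N", "B"), ("S", "B"),
                   ("A", "B")]                          # the five rebuilt bridges

print(odd_degree_nodes(konigsberg_1736), has_eulerian_path(konigsberg_1736))
# all four landmasses have odd degree -> False
print(odd_degree_nodes(kaliningrad_now), has_eulerian_path(kaliningrad_now))
# only the two islands have odd degree -> True
```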
Here's a map of the route, from Microsiervos (in Spanish), and here are some pictures that some folks took when they were visiting Kaliningrad and actually doing this.
I also remember once seeing the analogous network for New York City (the relevant landmasses being Manhattan, Long Island, Staten Island, the Bronx, and New Jersey), which has a lot of bridges, with the question of whether that network had an Eulerian path. I don't remember the answer. I also think it wouldn't be as much fun; New York has lots of traffic and is much larger than Kaliningrad.
10 April 2008
e day
I have been informed that tomorrow is "e day", February 71st. (In American date notation, that's 2/71; compare "π day" which is March 14th.)
In non-leap years, "e day" is still February 71st, but that's equivalent to April 12th.
It's too bad Euler wasn't born a few days earlier; he was born on April 15, 1707.
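As a sanity check on the date arithmetic, here is a tiny Python sketch (my own, purely for illustration) that reads "February 71st" literally as the 71st day counted from February 1:

```python
from datetime import date, timedelta

def e_day(year):
    """'February 71st': the 71st day of the year's February, counted from Feb 1."""
    return date(year, 2, 1) + timedelta(days=70)

print(e_day(2008))  # 2008年04月11日 (leap year)
print(e_day(2007))  # 2007年04月12日 (non-leap year)
```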
21 January 2008
Nineteen proofs of Euler's formula
Nineteen proofs of Euler's Formula, from The Geometry Junkyard. (This is the one that says that if V, E, and F are the number of vertices, edges, and faces of a polyhedron, then V-E+F=2.)
This is something of an embarrassment of riches; I want one for my class tomorrow, because we're talking about regular polyhedra, and to talk about that without mentioning Euler's formula would be remiss.
I'll probably give "Proof #8: Sum of angles" from the page, which sums up the various angles involved in a drawing of the graph corresponding to a polyhedron, or the proof that Wikipedia attributes to Cauchy, which triangulates that graph (not changing V-E+F) and then removes edges one or two at a time (still not changing V-E+F). These seem the most accessible. (They're also the ones I managed to struggle in the general direction of while walking to the coffee house a couple hours ago.)
If I wanted to scare the hell out of my students (this class has no prerequisites), I'd give the "binary homology" proof.
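For what it's worth, the formula is easy to check numerically on the five regular polyhedra; here is a quick Python sketch (my own, using the standard vertex, edge, and face counts rather than anything from the linked page):

```python
# (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (V, E, F) in platonic.items():
    print(name, V - E + F)  # V - E + F = 2 for each solid
```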
06 October 2007
Some partition identities due to Euler
I wrote a couple weeks ago about my love of the identity
$(1+x)(1+x^2)(1+x^4)(1+x^8)\cdots = \frac{1}{1-x},$
which is an analytic version of the fact that every integer is uniquely a sum of distinct powers of 2. What I didn't realize at the time, but learned from George Andrews' article Euler's "De Partitio Numerorum", in the current issue (Vol. 44, No. 4) of the Bulletin of the American Mathematical Society, is that this is due to Euler. This is hardly surprising, though, as Euler was as far as I know the first person to apply generating functions to partitions. (The result was known before that; it's the generating-function statement that's due to Euler.)
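As a quick numerical sanity check of the product identity (my own illustration, not from the post or the Andrews article), truncating the product after a handful of factors already agrees closely with 1/(1 - x) for |x| < 1:

```python
def truncated_product(x, factors=20):
    """Partial product (1 + x)(1 + x^2)(1 + x^4)... with the given number of factors."""
    prod = 1.0
    for k in range(factors):
        prod *= 1 + x ** (2 ** k)
    return prod

x = 0.3
print(truncated_product(x))  # ~1.4285714...
print(1 / (1 - x))           # 1.4285714285714286
```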
But here's one I didn't know:
$(1+x^{-1}+x)(1+x^{-3}+x^3)(1+x^{-9}+x^9)\cdots = \sum_{n=-\infty}^{\infty} x^n.$
This is an analytic version of the fact that every integer is uniquely a sum of distinct powers of 3 and their negatives, and is Theorem 4 of the Andrews paper. For example, 1 = 1, 2 = 3-1, 3 = 3, 4 = 3+1, 5 = 9-3-1, 6 = 9-3, 7 = 9-3+1, and so on.
To see this fact combinatorially, note that we can make 3^k sums out of the numbers 3^0, 3^1, ..., 3^(k-1); for each of these we include the number, its negative, or neither. If we can show that these are the distinct integers from -l to l, where l = (3^k - 1)/2, then that's enough. (For example, when k = 2, we have l = 4, and the nine possible sums of 1, 3, and their negatives are
-3-1, -3, -3+1, -1, 0, 1, 3-1, 3, 3+1
where "0" denotes the empty sum. These are just the integers from -4 to 4.)
But this can be shown by induction on k. Clearly it's true for k = 0. Then, for the induction, consider the 3^k sums of the powers of three up to 3^(k-1) and their negatives; by the inductive hypothesis these are just the integers in the interval $\left[\frac{1-3^k}{2}, \frac{3^k-1}{2}\right]$. Now consider the 3^(k+1) sums of the powers of three up to 3^k and their negatives. There are 3^k of these which don't include 3^k at all. Those that include +3^k fill out the interval $\left[\frac{1-3^k}{2}+3^k, \frac{3^k-1}{2}+3^k\right]$, and those which include -3^k fill out $\left[\frac{1-3^k}{2}-3^k, \frac{3^k-1}{2}-3^k\right]$. These three intervals put together are $\left[\frac{1-3^{k+1}}{2}, \frac{3^{k+1}-1}{2}\right]$, as desired. (For example, when k = 2, these three intervals are [-4, 4], [5, 13], and [-13, -5], which together make up [-13, 13].) By induction, we can make each integer in $\left[\frac{1-3^k}{2}, \frac{3^k-1}{2}\right]$ uniquely as a sum of distinct powers of 3 less than 3^k and their negatives, which is essentially the desired result.
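Here is a short Python sketch of the enumeration behind that argument (my own illustration, not taken from the Andrews article): it generates all 3^k signed sums of 1, 3, ..., 3^(k-1) and checks that they hit each integer from -(3^k-1)/2 to (3^k-1)/2 exactly once.

```python
from itertools import product

def signed_sums(k):
    """All sums in which each of 3^0, ..., 3^(k-1) appears with
    coefficient -1, 0, or +1."""
    powers = [3 ** i for i in range(k)]
    return [sum(c * p for c, p in zip(coeffs, powers))
            for coeffs in product((-1, 0, 1), repeat=k)]

for k in range(0, 8):
    l = (3 ** k - 1) // 2
    # each integer in [-l, l] is obtained exactly once
    assert sorted(signed_sums(k)) == list(range(-l, l + 1))
print("all checks passed")
```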
The last section of the article talks about partitions with some negative parts, or "signed partitions". For example, 9+8-6-5 is a signed partition of 6. These are a bit more awkward to work with in terms of generating functions -- the generating function above about the powers of 3 doesn't converge, ever! So it's basically a curiosity, but one can actually prove things about these signed partitions using generating functions; see the article for examples.
12 July 2007
Euler: The Master of Us All
In honor of Leonhard Euler's 300th birthday: the Sudoku watch, in a limited 1707-piece edition. (Euler was born in 1707, hence the number.) For some reason it costs 1678,ドル not 1707ドル. (And 2268ドル is not 1707 euros; I checked.) Euler didn't invent Sudoku, but he did work on Latin squares, of which Sudoku are a special case. (Thanks to Amy for pointing me to this.)
In other Eulerian news (how often do you get to use that phrase?), I just finished reading Euler: The Master of Us All by William Dunham, who is interested in the history of mathematics. I'd read his previous book Journey through Genius: The Great Theorems of Mathematics when I was in high school; that book treats a dozen or so of the major theorems of mathematics, one to a chapter, in a way that's probably accessible to the bright high school student. His book on Euler is a bit trickier -- it assumes one is familiar with the calculus and comfortable with infinite series. Euler was very comfortable with infinite series; this is what enabled, for example, his solution of the Basel problem: finding the sum 1 + 1/4 + 1/9 + 1/16 + ..., which turns out to be π^2/6. The Wikipedia article gives two proofs, Euler's proof and a more rigorous one. Personally, I prefer Euler's proof; I often find that the rigor of modern proofs obscures why the result in question is actually true.
Dunham's book treats some of Euler's major results:
- the Euclid-Euler theorem on perfect numbers: all even perfect numbers have the form 2^(k-1)(2^k - 1), where 2^k - 1 is prime. Odd perfect numbers are harder to find; Carl Pomerance has a heuristic argument that they don't exist.
- infinite series involving logarithms; apparently Euler was the first to look at logarithms as the inverse of the exponential function, as opposed to just a useful computational aid.
- the solution to the Basel problem and some related series; in fact Dunham talked about this in Journey through Genius as well, which is where I first learned about it (edit, 7/13/07: you can read more about the Basel problem, which is the summing of the series 1 + 1/4 + 1/9 + 1/16 + ... = π^2/6, at the Everything Seminar)
- the solution of third- and fourth-degree algebraic equations. (Did you know that any quartic polynomial with real coefficients can be factored as the product of two quadratics with real coefficients? I didn't.)
- some old-fashioned synthetic geometry: Heron's formula for the area of a triangle, for example
- solutions of the derangement problem and the fact that partitions into odd parts and partitions into distinct parts are equinumerous (a quick numerical check of this last fact is sketched below)
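Since that last fact is easy to verify for small cases, here is a short Python sketch (my own, just to illustrate Euler's odd-parts-equals-distinct-parts theorem) that counts both kinds of partitions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, max_part, distinct, odd_only):
    """Count partitions of n with parts at most max_part, optionally
    requiring all parts distinct or all parts odd."""
    if n == 0:
        return 1
    total = 0
    for part in range(1, min(n, max_part) + 1):
        if odd_only and part % 2 == 0:
            continue
        next_max = part - 1 if distinct else part
        total += count(n - part, next_max, distinct, odd_only)
    return total

for n in range(1, 11):
    odd_parts = count(n, n, False, True)       # partitions of n into odd parts
    distinct_parts = count(n, n, True, False)  # partitions of n into distinct parts
    print(n, odd_parts, distinct_parts)        # the two counts agree for every n
```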
The result I want to dwell on, though, is Euler's product formula for the harmonic series:
1 + 1/2 + 1/3 + 1/4 + 1/5 + ... = (2 x 3 x 5 x 7 x 11 x 13 x ...)/(1 x 2 x 4 x 6 x 10 x 12 x ...)
which seems silly, because both sides are infinite. But it makes sense. We have, for example,
1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 2/1
and then
1 + 1/2 + 1/3 + 1/4 + 1/6 + 1/8 + 1/9 + 1/12 + ...
= (1 + 1/2 + 1/4 + 1/8 + ...) (1 + 1/3 + 1/9 + 1/27 + ...)
= (2/1) (3/2)
where the original sum is the sum of reciprocals of numbers whose only prime factors are 2 and 3. Why not extend this to the sum of the reciprocals of all integers using unique factorization? And it turns out that this sum has a perfectly reasonable interpretation: by the same sort of argument as above,
1 + 1/2^s + 1/3^s + 1/4^s + ... = 1/[(1 - 2^(-s))(1 - 3^(-s))(1 - 5^(-s)) ...]
where if s>1 both sides converge; if you let s approach 1 from above you recover Euler's result. (This is due to Kronecker, and can be found on pp. 66-70 of Dunham.) But this sort of heuristic appeals to me. It seems to me that modern mathematicians don't take enough risks like this. Or, if they do, they are not telling me; I fear that the papers currently being published omit anything which isn't totally rigorous, even if (especially if?) it might make things clearer. I suppose this makes sense if you want to keep your papers short, which was of course important when papers were printed on, well, paper -- but these sorts of less rigorous things could certainly be made available in some sort of supplement to one's paper, published on a web page, for example.
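To make the s > 1 case concrete, here is a small Python check (my own illustration) comparing a truncated sum with a truncated product over primes at s = 2, where both should approach the Basel value π^2/6:

```python
from math import pi

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

s = 2
partial_sum = sum(1 / n ** s for n in range(1, 100001))
partial_product = 1.0
for p in (n for n in range(2, 100001) if is_prime(n)):
    partial_product *= 1 / (1 - p ** -s)

print(partial_sum)      # ~1.64492 (sum over n up to 100000)
print(partial_product)  # ~1.64493 (product over primes up to 100000)
print(pi ** 2 / 6)      # 1.6449340668...
```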
Finally, given how many areas Euler has touched, sometimes I've joked that we should just rename mathematics "Euler". Somewhat more seriously, though -- I wonder to what extent one can tell whether someone is a mathematician by writing the word "Euler" on a piece of paper and asking them to pronounce it. (Isaac Asimov, who was trained as a chemist, suggested that "unionized" played the same role for chemists. Chemists pronounce it in four syllables and think it refers to molecules which have the same number of protons as electrons and thus a neutral charge; non-chemists pronounce it in three syllables and think it refers to organized labor.)
(My apologies for the awkwardness of the notation here. There seem to be ways to write out mathematics in blog posts -- either using LaTeX or MathML -- but they're not necessary for the notation I want to use here and seem like they'd take a little fooling around to get to work. And I don't even know MathML, although I wouldn't mind learning it.)