Showing posts with label fractals. Show all posts

30 March 2011

How long is the coast of Maryland?

Last night's Final Jeopardy clue: "With 301 miles, it has the most coastline of current states that were part of the original 13 colonies." (Thanks to the Jeopardy! forum for the wording.)

This agrees with the Wikipedia list which is sourced from official US government data.

But as Mandelbrot told us, coastlines are self-similar. (Link goes to the paper How Long is the Coast of Britain as reproduced on Mandelbrot's web page, which unfortunately doesn't have pictures. I'm not sure if the original version of this paper in Science did. Wikipedia's article on the paper does.) That is, the length of a coastline depends on the size of your ruler. Furthermore, I would suspect that the fractal dimension of some states' coastlines is larger than others. Wikipedia states that "Measurements were made using large-scale nautical charts" which seems to imply that all the measurements were done at the same length scale, but if you did the measurements at a smaller scale, the states whose coastlines have higher fractal dimension would move up the list.
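To see the ruler-dependence concretely, here is a quick Python sketch using the Koch curve as a stand-in for a coastline (a standard textbook example, not any actual state's coast): at ruler size (1/3)^n the curve is covered by 4^n straight pieces, so the measured length is (4/3)^n, and it grows without bound as the ruler shrinks.

```python
# Measured length of the Koch curve at successively smaller ruler sizes.
# At ruler (1/3)^n the curve is made of 4^n straight pieces, so the
# measured length is (4/3)^n -- it diverges as the ruler shrinks.
for n in range(6):
    ruler = (1 / 3) ** n
    length = (4 / 3) ** n
    print(f"ruler {ruler:.5f}: measured length {length:.3f}")
```

A real coastline isn't exactly self-similar, of course, but the same power-law growth of measured length shows up when you shrink the chart scale.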

So last night I spent twenty seconds yelling this at the TV, and ten seconds getting out the answer Maryland. Which is wrong. Also wrong: Maine, New York. (Maine used to be part of Massachusetts; the wording is a bit ambiguous.) It appears that only the south shore of Long Island counts as "coast"; the north shore, which borders Long Island Sound, doesn't.

And of course Chesapeake Bay doesn't count either.

19 January 2009

Newton's method fractals

Simon Tatham has produced fractals derived from Newton-Raphson iteration which are quite interesting to look at.

The main idea here is that if you want to find a root of some function f, then you start from a guess a0; then compute a1 = a0 - f(a0)/f'(a0). Geometrically this corresponds to replacing the graph of the function with its tangent line at (a0, f(a0)) and finding the root of the tangent line. Then starting from a1 we find a2, and so on. If you're already close to a root you'll get closer. But if you're far away from a root unexpected things can happen; the basins of attraction -- the sets of all starting points a0 for which the sequence (a0, a1, a2, a3, ...) converges to a given root of f -- have fractal boundaries.
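As a sketch, here's the iteration in a few lines of Python; the particular function (x^2 - 2), starting guess, and tolerance are just illustrative choices of mine, not anything from Tatham's page.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration: repeatedly replace the current guess
    with the root of the tangent line at that guess."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Find a root of f(x) = x^2 - 2, i.e. the square root of 2.
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(r)
```

To draw pictures like Tatham's, you would run this for a complex polynomial from every starting point in a grid and color each point by which root its sequence converges to.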

(I've mentioned this before, and so has John Armstrong, but Tatham's pictures are better.)

13 January 2009

Mathematicians on roofs

I am a mathematician, and I would like to stand on your roof (video, 17 minutes). Ron Eglash talks about African fractals. It seems that fractals of one sort or another show up naturally in designs used both decoratively and functionally in various modern African cultures. (As far as I can tell from the video, Eglash is not claiming this is something unique about Africa. I'm curious if he addresses this question in his book, African Fractals: Modern Computing and Indigenous Design, which I have not yet read because I didn't know it existed until this morning.) The title of this post comes about because the best way to see how a village is designed is to stand on the roof of the tallest building.

10 April 2008

Fractal cookies

Fractal cookies, from Evil Mad Scientist Laboratories

Take nine "square cylinders" (i. e. rectangular solids which are much longer in one direction than the other two) of dough, one of which has chocolate in it.

Arrange the nine sticks in a three-by-three grid with the chocolate one in the center; squish them together so that they are one big piece of dough.

Stretch the whole thing to eight times its current length; cut into eight pieces of equal length (the length of the original piece), each of which will have a chocolate center. (This can be done by stretching to twice the length, cutting in half, and repeating twice more.)

Add a piece of chocolate dough of the same size; again arrange in a three-by-three grid with the chocolate one in the center, stretch, and cut. Then do it again. Then cut the whole thing into slices and cook.

Of course, you get the Sierpinski carpet in cookie form.

However, at the level of iteration given here, (8/9)^3, or about seventy percent, of the cookie will consist of non-chocolate dough! This is sad. I recommend interchanging the chocolate and non-chocolate doughs.
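That seventy-percent figure is just (8/9) raised to the third power, once for each round of the three-by-three construction; a quick check:

```python
# Fraction of the final cookie that is plain (non-chocolate) dough after
# three rounds of the construction: each round keeps 8 of the 9
# sub-squares plain at the finest scale, so the plain fraction is (8/9)^3.
plain_fraction = (8 / 9) ** 3
print(round(plain_fraction, 3))  # about 0.702, i.e. roughly seventy percent
```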

See also the Sierpinski gaskets made from polymer clay, which are made by a similar process. These are inferior, because they cannot be eaten.

10 November 2007

The Menger sponge

Zooming in and out on the Menger sponge, from Microsiervos.

[Embedded video: http://www.youtube.com/v/J-fcRzvRBqk]

You always hear that fractal objects are "self-similar" (that's what makes them fractal) but it's hard to get your head around that if you're looking at an ordinary drawing, since intuitively you think that the "small" parts are somehow qualitatively different than the "big" parts. But that's not really true. And this video illustrates that quite vividly -- as you zoom into the Menger sponge it very clearly looks the same on smaller and smaller scales.

31 October 2007

links for 31 October 2007


  • Hello, India? I Need Help With My Math, by Steve Lohr, today's New York Times. The article's really about how consumer services, like business services before them, are being offshored; tutoring is just an example.

  • Pollock or Not? Can Fractals Spot a Fake Masterpiece?, from Scientific American. The verdict seems to be mixed. Pollock's paintings often contain certain fractal patterns, and certain simple images look "the same" as a Pollock painting in a certain sense. The researchers argue that their work is still valid, though:
    "There's an image out there of fractal analysis where you send the image through a computer and if a red light comes on it means it isn't a Pollock and if a green light comes on it is. We have never supported or encouraged such a mindless view."

    I'd agree with them, so long as it's more likely for the metaphorical "green light" to turn on when it sees a Pollock than when it sees a non-Pollock; there's no single way to test whether a creative work is by a particular person, other than going back in time and watching them create it.

  • On the cruelty of really teaching computing science, by the late Edsger Dijkstra. (I've had this one in the queue of "things I want to talk about" for a while, but I don't remember what I wanted to say, so here it is. There are a bunch of similar things which should dribble out in the near future.) But I can't resist commenting on this:
    My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. This linguistical improvement is much harder to implement than you might think, and your department might consider the introduction of fines for violations, say a quarter for undergraduates, two quarters for graduate students, and five dollars for faculty members: by the end of the first semester of the new regime, you will have collected enough money for two scholarships.

    I've long felt the same way about mathematical objects. There are exceptions, but for me these are mostly exceptions in which the mathematics describes some algorithm that has input which is actually coming from somewhere. Here it's not so much the program that is getting anthropomorphized as the user.

    And why are they always "guys"? How is it that scribbles of chalk on a blackboard, or pixels on a screen, can have gender? Note that I am not suggesting that mathematical objects should be female, or that some of them should be male and some of them should be female, with the choice being made, say, by the flipping of a coin. (Incidentally, the description of mathematical objects as "guys" seems to be much more common at my current institution than at my previous one.)

    By the way, Dijkstra is saying here that he thinks computer science should be taught in a formal manner -- proving the correctness of programs alongside actually writing them -- and that to de-emphasize the pragmatic aspect, students shouldn't execute their programs on a computer, since doing so enables them to not think about what the program is doing. I'm not sure if I agree with this.

29 September 2007

Potlucks and fractals

Last night, I was at a party thrown by a friend of mine.

This friend of mine lives at a house that has a potluck dinner most Wednesday evenings; I live three blocks away, so I go fairly often. I wasn't there last Wednesday, though, because it had been a long day and I was tired.

(Often there inadvertently seems to be a "theme" to the potluck despite nobody actually trying to do this, because pretty much any interesting-seeming coincidence counts. Last week it was the "night of the seven grains", as the seven people there brought dishes with seven different grains -- normal rice, arborio rice, buckwheat, wheat, two that I don't remember, and corn. Yeah, yeah, I know, you're thinking that arborio rice is rice. The point here is that if one interprets things widely enough there's always some sort of coincidence -- lots of today's foods are rectangular, or are yellow, or whatever. I think we once even said that the theme was "things with spices in them".)

While I wasn't there on Wednesday, people apparently came to discussing the existence of "some sort of crazy triangle fractal thing", and they had lamented that I wasn't there. I was eventually able to figure out that they were talking about the Sierpinski gasket, which is obtained by taking an equilateral triangle, removing a smaller triangle (half the size) from the middle to obtain three triangles with half the side length of the original triangle, and repeating ad infinitum. Click here for an illustration.

While this is nice, my favorite way to think of the Sierpinski gasket is via the so-called chaos game (more formally, an "iterated function system"), which is animated here. Start by picking three points (call them the red, yellow, and blue points) in the plane, and a random point x0 in the triangle which they bound. Plot that random point. Now, pick one of the three corner points (red, yellow, or blue) at random, each with probability 1/3; then pick the point halfway between x0 and that corner point, and call it x1. Pick a corner point again at random; the point halfway between x1 and that corner point is x2. Do this a thousand times or so and you get a nice picture. An applet showing more examples of such iterated function systems is available at cut-the-knot.org.
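Here is that procedure as a short Python sketch; the corner coordinates, starting point, and seed are arbitrary choices of mine for illustration.

```python
import random

def chaos_game(n_points=1000, seed=0):
    """Generate points of the chaos game for the Sierpinski gasket:
    repeatedly jump halfway toward a randomly chosen corner."""
    rng = random.Random(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]  # "red", "yellow", "blue"
    x, y = 0.3, 0.3  # an arbitrary starting point inside the triangle
    pts = []
    for _ in range(n_points):
        cx, cy = rng.choice(corners)  # each corner with probability 1/3
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return pts

pts = chaos_game()
```

Plot pts with any graphics library and the gasket emerges after a few hundred points; biasing the corner probabilities, as mentioned below, just takes changing `rng.choice` to a weighted choice.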

Why does this work? Basically, if you take the whole Sierpinski gasket and contract it by a factor of two towards one of its corners, you get one of its three self-similar parts; call these the "red", "yellow", and "blue" parts, as in the picture at left. Now, if x0 is some distance d from the gasket, then the distance from x1 to the gasket is at most d/2, the distance from x2 to the gasket is at most d/4, and so on -- thus as the sequence is generated we get points closer and closer to the gasket. The gasket has measure zero, so with probability one we never actually end up with a point in it -- but we end up as close as we like after a very short time.

Although I'm not an expert in this, from some crude experimentation it looks like if you replace "each with probability 1/3" above with some other distribution, you end up with a gasket that is much more concentrated near the corners that you go to with high probabilities.

Edited, 5:06 pm: other surprising places where fractals appear include in the study of fast algorithms for multiplication (which also includes an explanation of the multiplication bug in Excel 2007), found via Good Math, Bad Math.

07 September 2007

How to build mountains

Mark Chu-Carroll of Good Math, Bad Math writes about how images of simulated mountains are constructed using a fractal process.

I'm kind of surprised to see how simple it is; basically you take a triangle and "pull up" a random point in the interior to get an irregular pyramid, and repeat this procedure on each of the faces.
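To give the flavor in code, here's the one-dimensional analogue of that procedure -- midpoint displacement on a line segment, where each midpoint gets pushed up or down by a random amount that shrinks at each level. (This is a simplification of the triangle-face version Chu-Carroll describes; the roughness parameter and its halving rate are my own choices for illustration.)

```python
import random

def midpoint_displace(left, right, depth, roughness=0.5, rng=None):
    """Recursively build a 1D 'mountain profile': displace the midpoint
    of each segment by a random amount that shrinks at each level."""
    if rng is None:
        rng = random.Random(0)
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + rng.uniform(-1, 1) * roughness
    left_half = midpoint_displace(left, mid, depth - 1, roughness / 2, rng)
    right_half = midpoint_displace(mid, right, depth - 1, roughness / 2, rng)
    return left_half + right_half[1:]  # drop the duplicated midpoint

# A profile with 2^6 + 1 = 65 heights, flat at both ends.
profile = midpoint_displace(0.0, 0.0, 6)
```

The two-dimensional version works the same way, just with the random "pull up" applied to a point of each triangular face instead of the midpoint of each segment.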

One of the commenters there, mj, writes: "In a way it's not surprising that complex structure comes out of very simple fractal rules. The structures in reality, the real mountains, are also formed by relatively simple processes. A bit of wind and rain and erosion..." That's a good point. What's interesting is that there's no obvious connection between the rules used to generate the fractal and the real rules, but they generate the same sort of structure on a global scale. On a large enough scale, the low-level structure is simply hidden. Something similar happens with, say, random walks; a cloud of random walkers allowed to dissipate will eventually approach a Gaussian distribution regardless of the underlying lattice. An observer who could only observe on a large scale couldn't tell what the underlying lattice is. There are much deeper ideas of this sort; for example, we don't know whether the universe is actually continuous or discrete.

25 July 2007

fractals, space-filling curves, and scientific revolutions

Mark Chu-Carroll at "Good Math, Bad Math" writes about space-filling curves. These are really counterintuitive things -- curves that eventually fill up, say, an entire square. There's a nice article about them at Wikipedia.

It won't surprise you to learn that these aren't "curves" in the sense that you might think of them; if I ask you to draw a "curve" you'll probably draw something that's what mathematicians would call "piecewise smooth". What this means, roughly, is that you can draw a piece of it without having any "kinks", then turn, then draw another such piece, and so on, doing this only a finite number of times. Space-filling curves don't have this property; they are made up of infinitely many such "pieces". Not surprisingly, they also have infinite length. These curves are made by an iterative process; in the case of the Hilbert curve:

  • on the first iteration the curve has length 3/2 and each point is within √2/4 of the curve;

  • on the second iteration the curve has length 15/4 and each point is within √2/8 of the curve;

  • on the third iteration the curve has length 63/8 and each point is within √2/16 of the curve;

  • on iteration n the curve has length 2^n - 1/2^n (it is made of 4^n - 1 segments of length 2^(-n)) and each point is within √2/2^(n+1) of the curve.


The maximum distance halves and the length roughly doubles with each step; as we iterate, every point of the square becomes arbitrarily close to the curve, so the limiting curve passes through every point of the square, and it is infinitely long.
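These figures are easy to check numerically; the following sketch verifies that the stage-n length agrees with counting the segments directly.

```python
from math import sqrt

# Check the stage-n Hilbert curve figures: at stage n the curve consists
# of 4^n - 1 segments of length 2^(-n), for a total length of 2^n - 1/2^n,
# and every point of the square is within sqrt(2)/2^(n+1) of the curve.
for n in range(1, 5):
    segments = 4 ** n - 1
    length = 2 ** n - 1 / 2 ** n  # 3/2, 15/4, 63/8, ...
    assert abs(length - segments * 2.0 ** (-n)) < 1e-12
    distance = sqrt(2) / 2 ** (n + 1)  # how close every point is to the curve
    print(n, length, distance)
```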

Andrew Gelman at "Statistical Modeling, Causal Inference, and Social Science" writes about the fractal nature of scientific revolutions, pointing to this earlier post of his. The idea is that science moves forward in what the evolutionary biologists call "punctuated equilibrium" -- at most points "not much" is getting done but occasionally big moves are made and in the end science gets done. (This is a bit unfair, though, because the scientists who are doing the "not much" are often collecting the sort of data that is exactly what the revolutionaries doing the paradigm shift will turn out to need.) If this is true, then we might say that all the science that will get done between year 0 ("now") and year 81 (which turns out to be 2088) gets done either in the first third of that period (between 0 and 27) or the last third (between 54 and 81). But then something similar happens on each of those periods -- all the science gets done between 0 and 9, 18 and 27, 54 and 63, or 72 and 81. If we repeat this, ad infinitum, we get that the set of times at which science is being done is the Cantor set, which has measure zero; furthermore the rate of scientific progress, when scientific progress is happening, must be infinite in order for any science to happen at all!

Of course, this is ridiculous. But it makes sense that science happens in bursts, and that each burst is made of smaller bursts, and so on; that there are periods of stasis between these bursts, but that some of these periods of stasis are more static than others; and so on. It's only the mathematician's insistence on taking the limit that makes this model not work. Furthermore, there's more than one kind of science, and it could happen that one discipline's burst is another discipline's period of stasis. And maybe a model like this is more likely to hold for the individual scientist (who has periods when they Get Things Done and periods when they don't) than for science as a whole.

But the periods when it looks like the scientist isn't doing anything might be essential. The subconscious is often doing work then. Perhaps there is something about the way our subconscious works -- in which bigger breakthroughs need longer fallow periods to precede them -- that leads to this fractal nature, with bursts upon bursts.
