
Tuesday, December 25, 2007

Taxicab Numbers

Futurama is one of my favorite TV shows. It's the quintessential show about nerds, produced by nerds, for nerds, and it's chock full of inside jokes in math, science, sci-fi and pop culture.

I'm happy to note that although Futurama, the series, was cancelled after the fourth season, the show is back with a set of direct-to-DVD features. The first one, Bender's Big Score, is available now, and includes a lecture by Sarah Greenwald as a special feature on the various math jokes that have appeared in the show. Some are silly, like π-in-one oil, or Klein's Beer (in a Klein bottle, of course). Others are subtle, like the repeated appearance of taxicab numbers.

Taxicab numbers are an interesting story in and of themselves. The story comes to us from G. H. Hardy, who was visiting Srinivasa Ramanujan:

I remember once going to see him when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. "No", he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two [positive] cubes in two different ways."

Many of the staff involved in creating Futurama received advanced degrees in math, computer science and physics, so they enjoy making these obscure references, because they know their fans will find them. It shouldn't be surprising that the number 1729 makes many appearances in Futurama, or that the value 87539319 appears as an actual taxicab number (87539319 is the sum of 2 cubes in 3 different ways).

Of course, the idea of taxicab numbers is interesting, the details less so. After watching the new feature, I wondered what Hardy's taxicab number was, and what two pairs of cubes add up to that value. If I were more mathematically inclined, I'd probably sit down with pencil and paper and just do the math until I found something. If I were truly lazy, I'd just Google for the answer. Instead, I sat down, fired up vim and typed this code up, pretty much as you see it here:

-- the cube of a number
cube x = x * x * x

-- all sums of two cubes in [1..n] that are expressible in two
-- different ways; a < b and a < c < d, so each pair of
-- representations is reported exactly once
taxicab n = [ (cube a + cube b, (a, b), (c, d))
            | a <- [1..n],
              b <- [(a+1)..n],
              c <- [(a+1)..n],
              d <- [(c+1)..n],
              (cube a + cube b) == (cube c + cube d) ]

I want to find the smallest taxicab number, but a naive search over an unbounded range would never terminate, so it's necessary to search a bounded one. If there are any taxicab numbers among the integers [1..n], this function will find them. Moreover, if there are several solutions in that range, it will find all of them.

I could go into details, but my explanation would not be as clear and precise as the code above.

Here are some results:

*Main> taxicab 10
[]
*Main> taxicab 12
[(1729,(1,12),(9,10))]
*Main> taxicab 20
[(1729,(1,12),(9,10)),(4104,(2,16),(9,15))]
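The bound on n is a little unsatisfying, and laziness suggests a variant. Here is a sketch (mine, not part of the original session) that searches an infinite range by iterating over the largest element b; because a < c < d < b for any pair of representations, every solution is found exactly once, ordered by its largest cube rather than strictly by sum:

taxicabs :: [(Integer, (Integer, Integer), (Integer, Integer))]
taxicabs = [ (cube a + cube b, (a, b), (c, d))
           | b <- [1..],
             a <- [1..b-1],
             c <- [(a+1)..(b-1)],
             d <- [(c+1)..(b-1)],
             (cube a + cube b) == (cube c + cube d) ]

With that, take 2 taxicabs yields the 1729 and 4104 entries above, and head taxicabs recovers Hardy's number without guessing a bound first.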

Ordinarily, I wouldn't blog about something so trivial, but as I stared at the code and the output from ghci, it reminded me of an incident in college. The Chudnovsky brothers came up with a new algorithm to compute π that was both very precise and very quick. A professor of mine wrote a quick, throwaway implementation of this algorithm in Maple to demonstrate it. Most of my work in those days was in C. Maple wasn't a tool I'd normally use, but it was easily the best tool for the job.

The lesson my professor was trying to teach me wasn't to use computer algebra systems to indulge a fixation with transcendental numbers. Instead, he was trying to show me that a true computer scientist is presented with a problem, writes a program to solve it, and moves on to the next problem.

I could have written my taxicab number search in C or Java, but I would have lost interest somewhere between typing #include <stdio.h> or public static void main(String args[]) and firing up the compiler. I could have written the code in Perl, but I would have likely lost interest around the time I saw a nested for loop, and contemplated the four levels of nesting.

Perl programmers know Larry Wall's vision for Perl - to make the easy things easy and the hard things possible. As I stared at this code, I realized that Larry's dictum misses an important cornerstone: to make the trivial things trivial.

Haskell may get a bad rap for making hard things easy and easy things hard, but it does make trivial things trivial. Whether it's list comprehensions that solve a problem in a single statement, or parser combinators that make parsing code read like the grammar it implements, there's a lot to be said for a language that gets out of your way and lets you say precisely what you mean.
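To make the parser-combinator half of that claim concrete, here is a tiny sketch using Parsec (my example, assuming the stock Text.ParserCombinators.Parsec module, not code from this post); the definition reads almost exactly like the grammar it implements:

import Text.ParserCombinators.Parsec

-- phone ::= digit{3} '-' digit{3} '-' digit{4}
phone :: Parser (String, String, String)
phone = do
  area     <- count 3 digit
  _        <- char '-'
  exchange <- count 3 digit
  _        <- char '-'
  line     <- count 4 digit
  return (area, exchange, line)

main :: IO ()
main = print (parse phone "" "555-867-5309")
-- Right ("555","867","5309")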

Wednesday, December 19, 2007

More thoughts on rewriting software

I started writing about when it is acceptable to consider rewriting a software project a few months back (parts one and two). I remain convinced that it is occasionally economically sound to toss a problematic codebase and start anew. There are dozens of pesky little details to consider, like:


  • Is the code horrifically overcomplicated, or just aesthetically unpleasing?

  • How soon before the new system can replace the old?

  • How soon before there's a net benefit to the rewrite?

  • Are you rewriting to address fundamental flaws, or adopt this week's fashionable language/library/architecture/buzzword?

  • Do you understand the problem the code attempts to solve? Really? Really?

  • Since the project began, is there a new piece of off-the-shelf software that makes a lot of this pain simply go away?

Steve Yegge looks at the problem from a different angle in his essay Code's Worst Enemy. (See Reginald Braithwaite's synopsis if you want to read just the choice quotes.) Steve argues that code size is easily the riskiest element in a project:

My minority opinion is that a mountain of code is the worst thing that can befall a person, a team, a company. I believe that code weight wrecks projects and companies, that it forces rewrites after a certain size, and that smart teams will do everything in their power to keep their code base from becoming a mountain. Tools or no tools. That's what I believe.

Steve's essay focuses on behaviors endemic among many Java programmers that lead to large codebases that only get larger. As codebases grow, they become difficult to manage, both in terms of the tools we use to build software from the code (IDEs and the like), as well as the cognitive load needed to keep track of what 20,000 pages of code do on a 1 million line project.

The issues aren't specific to Java, or even Java programmers. Rather, they stem from an idea that the size of a code base has no impact on the overall health of a project.

The solution? Be concise. Do what it takes to keep a project small and manageable.

Of course, there are many ways to achieve this goal. One way is to tear down large, overgrown projects and rebuild them from scratch. This could be done in the same language (building upon the skills of the developers currently on the project), or by porting the project to a new language. Another strategy involves splitting a large project into two or more isolated projects with clearly defined interfaces and responsibilities. Many more alternatives exist that don't involve ignoring the problem and letting a 1 million line project fester into a 5 million line project.


Interestingly, Jeff Atwood touched upon the issue as well, in his essay Nobody Cares What Your Code Looks Like. Jeff re-examines the great Bugzilla rewrite, when the team at Netscape decided in 1998 to convert the code from Tcl to Perl. The goal was ostensibly to encourage more contributions, since few people would be interested in learning Tcl just to contribute to Bugzilla. Nine years later, the Bugzilla team considers Perl to be its biggest liability, because Perl is stagnating, and Perl culture values the ability of every Perl programmer to speak a slightly different dialect of Perl. Newer projects written in PHP, Python, Java and Ruby outcompete Bugzilla because (in a couple of cases) they are comparable to Bugzilla today (without taking nine years of development to reach that plateau), and can deliver new features faster than the Bugzilla team can.

Nevertheless, Jeff stresses that although it may be ugly, harder to customize and extend, Bugzilla works. And it has worked for nearly a decade. And numerous large projects use it, and have no intentions to switch to the new bug tracker of the week anytime soon. So what if the code is ugly?

There's a lesson here, and I don't think it's Jeff's idea that nobody cares what your code looks like. Jeff is right that delivering value to customers is always most important, whether they are paying customers that keep your company in business, or users who rely on your open source project to supply a critical piece of infrastructure. He's also right that customers couldn't care less if your code uses Factories, Decorators and Iterators, or Closures, Catamorphisms and Currying. It is a disservice to your customers if you force them to wait while you rewrite your software from a static language (like Java) to a dynamic language (like Ruby), because dynamic languages are all the rage this year. Are you going to go through the same song and dance when type inferencing becomes popular? Massive concurrency?

While customers don't care about how a product is written, they do care about the effects of that choice. If your software crashes a lot, needs frequent security patches and occasionally corrupts data, well, maybe you shouldn't have written your timesheet application in C. If your rendering application is slow and uses ludicrous amounts of memory, then maybe you shouldn't have written it in Ruby. If your application provides a key piece of infrastructure, and there's literally no one who could replace you if you get hit by a bus, then maybe you shouldn't have written it in APL.

And if your application is written in a manner that leads to a huge codebase that makes it hard to find and fix bugs, costly to extend and requires a floor full of programmers to maintain, maybe you owe it to your customer to find a way to reduce the size of your codebase and deliver more value. Because sooner or later, that customer just might find a similar application, built with a smaller codebase and less internal complexity, that's cheaper to own and faster to customize and fix. And when it becomes too costly or too painful to maintain your application, they will switch.

Wednesday, October 3, 2007

Defanging the Multi-Core Werewolf

Many developers have a nagging sense of fear about the upcoming “multi-core apocalypse”. Most of the software we write and use is written in imperative languages, which are fundamentally serial in nature. Parallelizing a serialized program is painful: it usually involves semaphores and locks, and it comes with problems like deadlock, livelock, starvation and irreproducible bugs.

If we’re doomed to live in a future where nearly every machine uses a multi-core architecture, then either (a) we have a lot of work ahead of us to convert our serialized programs into parallelized programs, or (b) we’re going to be wasting a lot of CPU power when our programs only use a single processor element on a 4-, 8-, or even 64-core CPU.

At least that’s the received wisdom. I don’t buy it.

Yes, software that knows how to exploit parallelism will be different. I don’t know that it will be much harder to write, given decent tools. And I certainly don’t expect it to be the norm.

Here is some evidence that the multi-core monster is more of a dust bunny than a werewolf.

First, there’s an offhand remark that Rob Pike made during his Google Tech Talk on Newsqueak, a programming language he created 20 years ago at Bell Labs to explore concurrent programming. It’s based on Tony Hoare’s CSP, the same model used in Erlang. During the talk, Rob mentioned that the fundamental concurrency primitives are semaphores and locks, which are necessary when adding concurrency to an operating system, but horrible to deal with in application code. A concurrent application really needs a better set of primitives that hide these low level details. Newsqueak and Erlang improve upon these troublesome primitives by offering channels and mailboxes, which make most of the pain of concurrent programming go away.
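To make the channel idea concrete, here is a minimal sketch in Haskell (my illustration, not from the talk), using Control.Concurrent.Chan: a worker thread reads requests from one channel and writes results to another, and no semaphore or lock appears anywhere in the application code.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan
import Control.Monad (forever, forM_, replicateM_)

main :: IO ()
main = do
  requests <- newChan
  results  <- newChan
  -- the worker: a loop that services requests as they arrive
  _ <- forkIO $ forever $ do
         n <- readChan requests
         writeChan results (n * n :: Int)
  forM_ [1..5] (writeChan requests)            -- send five requests
  replicateM_ 5 (readChan results >>= print)   -- collect five replies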

Then, there’s Timothy Mattson of Intel, who says that there are just too many languages, libraries and environments available today for writing parallelized software. Timothy is a researcher in the field of parallel computing, and when someone with such a deep background in the field says the tools are too complicated, I’ll take his word for it. The good news is that very few programmers work on the kinds of embarrassingly parallel problems that require these tools. Working on parallel machines isn’t going to change that for us, either. In the future, shell scripts will continue to execute one statement at a time, on a single CPU, regardless of how many CPUs are available, with or without libraries like Pthreads, PVM or MPI. Parallel programmers are still in a world of hurt, but at least most of us will continue to be spared that pain.

Then there’s Kevin Farnham, who posted an idea of wrapping existing computationally intensive libraries with Intel’s Threading Building Blocks, and loading those wrapped libraries into Parrot. If all goes well and the stars are properly aligned, this would allow computationally intensive libraries to be used from languages like Perl/Python/Ruby/etc. without the need to port M libraries to N languages. (Tim O’Reilly thought it was an important enough meme that he drew attention to it on the O’Reilly Radar.)

This sounds like a hard problem, but adding Parrot to the equation feels like replacing one bad problem with five worse problems. If we’re going to live in a world where CPUs are cheap and parallelism is the norm, then we need to think in those terms. If we need Perl/Python/Ruby/etc. programs to interact with parallelized libraries written in C/C++/Fortran, where’s the harm in spawning another process? Let the two halves of the program communicate over some IPC mechanism (sockets, or perhaps HTTP + REST). That model is well known, well tested, well-understood, widely deployed and has been shipping for decades. Plus, it is at least as language-agnostic as Parrot hopes to become. (+2 points if the solution uses JSON instead of XML.)
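For a taste of that model, here is a toy sketch in Haskell (mine, and deliberately low-tech): it spawns bc as a stand-in for the separate “computational” process and talks to it over pipes. A real deployment would more likely use sockets or HTTP with JSON payloads, but the division of labor is the same.

import System.IO
import System.Process

main :: IO ()
main = do
  -- spawn the worker with its stdin and stdout exposed as pipes
  (Just hin, Just hout, _, _) <-
      createProcess (proc "bc" ["-l"]) { std_in  = CreatePipe
                                       , std_out = CreatePipe }
  hSetBuffering hin LineBuffering
  hPutStrLn hin "9^3 + 10^3"     -- ship a request to the worker
  hGetLine hout >>= putStrLn     -- read back "1729"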

Fourth, there’s Patrick Logan, who rightly points out that the issue isn’t simply about a multi-core future, but also a multi-node future. Some applications will run in parallel on a single machine, others will run across multiple nodes on a network, and still others will be a hybrid of both approaches. Running programs across a network of nodes is done today, with tools like MapReduce, Hadoop and their kin.

If you have a grid of dual-core machines today, and need to plan out how to best use the network of 64-core machines you will have a decade from now, here’s a very simple migration plan for you: run 32x as many processes on each node!

With that said, here is my recipe for taming the multi-core dust bunny:

  • Determine what kind of parallelism makes sense for you: none, flyweight, fine-grained or coarse-grained.
  • Avoid troublesome low-level concurrency primitives wherever possible.
  • Use tools like GHC’s Nested Data Parallelism for flyweight concurrency (one program, lots of data, spread out over multiple CPUs on a single machine).
  • Use tools like GHC’s Software Transactional Memory for lightweight concurrency (many interoperating processes managing shared data on a single machine); see the sketch just after this list.
  • Use tools like MapReduce and friends for heavyweight concurrency (work spread out across multiple cooperating processes, running on one or many machines).
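As a taste of the STM bullet above, here is a small sketch (mine, not from any of the posts cited): four threads bump a shared counter inside atomically blocks, and the read-modify-write composes into a single transaction with no explicit locks.

import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (replicateM_)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  done    <- newTVarIO (0 :: Int)
  replicateM_ 4 $ forkIO $ do
    replicateM_ 1000 $ atomically $ do
      n <- readTVar counter        -- the read and the write compose
      writeTVar counter (n + 1)    -- into one atomic transaction
    atomically $ do d <- readTVar done
                    writeTVar done (d + 1)
  atomically $ do d <- readTVar done
                  check (d == 4)   -- retry until all four workers finish
  readTVarIO counter >>= print     -- always prints 4000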

As Timothy Mattson points out, parallel programming is fundamentally hard, and no one language, tool or environment is going to slay the dragon. I cite NDP here not as a perfect solution, but as a placeholder for a whole class of tools that exhibit SIMD parallelism. Similarly, STM is a placeholder for a whole class of tools that exhibit MIMD parallelism. Sometimes you need one, sometimes you need the other, sometimes you need both, and sometimes you need neither.

And then there is the issue of virtualization. Perhaps the best use of a multi-core system isn't to use it as a single multiprocessing computer, but as a host for a series of virtual machines. Such a usage sidesteps all of the thorny issues around parallelism entirely, focusing instead on cost savings that accrue from server consolidation and simplified management. This is a very old idea that becomes more and more important as power efficiency in our data centers becomes a hot button issue.

Finally, there’s a looming question about what to do about the desktop. If your laptop has 32 cores, what do you do with them? The simple answer is nothing. As CPUs get faster, they spend more and more of their time in an idle state. The only thing that changes in a multi-core world is that more CPUs are idle. Desktop programmers can spend a lot of time evenly distributing that idleness across all CPUs, or make very few changes, and use only as many CPUs as necessary. Operating systems and basic tools (emulators, compilers, VMs, database engines, web servers, etc.) will need to be multi-core aware and distribute their work across as many CPUs as are available. Some of that work is already done -- make -j has been around for years. Processing intensive applications, like audio/video codecs, image manipulation and the like, will also need to be multi-core aware. The vast majority of the programs we write will continue to be mostly serial, and rarely care about parallelism.

After all, authenticating a user doesn’t get 32x faster on a 32-core machine.

Sunday, September 30, 2007

Rewriting Software, Part 2

When I wrote my previous post about rewriting software, I had a couple of simple goals in mind. First was answering Ovid’s question on when rewriting software is wise, and when it is foolish. Second was to re-examine Joel Spolsky’s dictum, never rewrite software from scratch, and show how software is so complex that no one answer fits all circumstances. Finally, I wanted to highlight that there are many contexts to examine, ranging from the “small here and short now” to the “big here and long now” (to use Brian Eno’s terminology).

When I wrote that post, I thought there would be enough material for two or three followup posts that would meander around the theme of “yes, it’s OK to rewrite software”, and move on. The more I wrote, the more I found to write about, and the longer it took to condense that material into something worth reading.

Rather than post long rambling screeds on the benefits of rewriting software, I decided to take some time to plan out a small series of articles, each limited to a few points. Unfortunately, I got distracted and I haven’t posted any material to this blog in over a month. My apologies to you, dear reader.


Of course, there’s something poetic about writing a blog post about rewriting software, and finishing about a month late because I couldn’t stop rewriting my post. There’s a lesson to learn here, also from Joel Spolsky. His essay Fire and Motion is certainly worth reading in times like these. I try to re-read it, or at least recall his lessons, whenever I get stuck in a quagmire and can’t see my way out.

In that spirit, here’s a small nugget to ponder.

If you are a writer, or have ever taken a writing class, you’ve probably come across John R. Trimble’s assertion that “all writing is rewriting.” Isn’t it funny that software is something developers write yet fear rewriting?

There’s a deep seated prejudice in this industry against taking a working piece of software and tinkering with it, except when it involves fixing a bug or adding a feature. It doesn’t matter if we’re talking about some small-scale refactoring, rewriting an entire project from scratch, or something in between. The prejudice probably comes from engineering — there’s no good reason to take a working watch or an engine apart because it looks “ugly” and you want to make it more “elegant.”

Software sits at the intersection of writing and engineering. Unlike pure writing, there are times when rewriting software is simply a bad idea. Unlike pure engineering, there are times when it is necessary and worthwhile to rewrite working code.

As Abelson and Sussman point out, “programs must be written for people to read, and only incidentally for machines to execute.” Rewriting software is necessary to keep code concise and easy to understand. Rewriting software to follow the herd or track the latest trend is pointless and a wasted effort.

Tuesday, August 21, 2007

Rewriting Software

Ovid is wondering about rewrite projects. It’s a frequent topic in software, and there’s no one answer that fits all situations.

One of the clearest opinions is from Joel Spolsky, who says rewrites are “the single worst strategic mistake that any software company can make”. His essay is seven years old, and in it, he takes Netscape to task for open sourcing Mozilla, and immediately throwing all the code away and rewriting it from scratch. Joel was right, and for a few years Mozilla was a festering wasteland of nothingness, wrapped up in abstractions, with an unhealthy dose of gratuitous complexity sprinkled on top. But this is open source, and open source projects have a habit of over-estimating the short term and under-estimating the long term. Adam Wiggins revisited the big Mozilla rewrite issue earlier this year when he said:
[W]hat Joel called a huge mistake turned into Firefox, which is the best thing that ever happened to the web, maybe even the best thing that’s ever happened to software in general. Some “mistake.”
What’s missing from the discussion is an idea from Brian Eno about the differences between the “small here” vs. the “big here”, and the “short now” vs. the “long now”. Capsule summary: we can either live in a “small here” (a great apartment in a crappy part of town) or a “big here” (a beautiful city in a great location with perfect weather and amazing vistas), and we can live in a “short now” (my deadline is my life) or a “long now” (how does this project change the company, the industry or the planet?).

On the one hand, Joel’s logic is irrefutable. If you’re dealing with a small here and a short now, then there is no time to rewrite software. There are revenue goals to meet, and time spent redoing work is retrograde, and in nearly every case poses a risk to the bottom line because it doesn’t deliver end user value in a timely fashion.

On the other hand, Joel’s logic has got more holes in it than a fishing net. If you’re dealing with a big here and a long now, whatever work you do right now is completely inconsequential compared to where the project will be five years from today or five million users from now. Requirements change, platforms go away, and yesterday’s baggage has negative value — it leads to hard-to-diagnose bugs in obscure edge cases everyone has forgotten about. The best way to deal with this code is to rewrite, refactor or remove it.

Joel Spolsky is arguing that the Great Mozilla rewrite was a horrible decision in the short term, while Adam Wiggins is arguing that the same project was a wild success in the long term. Note that these positions do not contradict each other. Clearly, there is no one rule that fits all situations.

The key to estimating whether a rewrite project is likely to succeed is to first understand when it needs to succeed. If it will be evaluated in the short term (because the team lives in a small here and a short now), then a rewrite project is quite likely to fail horribly. On the other hand, if the rewrite will be evaluated in the long term (because the team lives in a big here and a long now), then a large rewrite project just might succeed and be a healthy move for the project.

Finally, there’s the “right here” and “right now” kind of project. Ovid talks about them briefly:
If something is a tiny project, refactoring is often trivial and if you want to do a rewrite, so be it.
In my experience, there are plenty of small projects discussed in meetings where the number of man hours discussing a change or rewrite far exceeds the amount of time to perform the work, often by a factor of ten or more. Here, the answer is clear — just do the work, keep a backup for when you screw up, and forget the dogma about rewriting code. If it was a mistake, rolling back the change will also take less time than the post-mortem discussion.


Ovid raises another interesting point: large projects start out from smaller ones, so if it’s OK to rewrite small projects, and small projects slowly turn into large projects, when does it become unwise to rewrite a project?

The answer here isn’t to extrapolate based on project size, but rather on the horizon. A quick little hack that slowly morphs into something like MS Word will eventually become rewrite-averse due to short term pressures. A quick little hack that slowly morphs into something like Firefox will remain somewhat malleable, so long as it can take a long time to succeed.

Monday, August 20, 2007

Universal Floating Point Errors

Steve Holden writes about Euler’s Identity, and how Python can’t quite calculate it correctly. Specifically,
e^(iπ) + 1 = 0
However, in Python, this isn’t quite true:
>>> import math
>>> math.e**(math.pi*1j) + 1
1.2246063538223773e-16j
If you look closely, the imaginary component is quite small: about 1.2 × 10^-16.

Python is Steve’s tool of choice, so it’s possible to misread his post and believe that Python got the answer wrong. However, the error is fundamental. Witness:
$ ghci
Prelude> :m + Data.Complex
Prelude Data.Complex> let e = exp 1 :+ 0
Prelude Data.Complex> let ipi = 0 :+ pi
Prelude Data.Complex> e
2.718281828459045 :+ 0.0
Prelude Data.Complex> ipi
0.0 :+ 3.141592653589793
Prelude Data.Complex> e ** ipi + 1
0.0 :+ 1.2246063538223773e-16
As I said, it would be possible to misread Steve’s post as a complaint against Python. It is not. As he says:
I believe the results would be just as disappointing in any other language
And indeed they are, thanks to irrational numbers like π and the limitations of IEEE doubles.
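One way to see where the stray imaginary part comes from (my gloss, not Steve’s): Euler’s formula gives e^(iπ) = cos π + i sin π, but pi in a Double is only the nearest representable approximation to π, so its sine is tiny rather than zero:

Prelude> sin pi
1.2246467991473532e-16

(The digits differ slightly from the transcripts above because ** routes through exp and log, but the magnitude tells the same story.)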

Updated: corrected uses of -iπ with the proper exponent, iπ.

Wednesday, August 15, 2007

Does Syntax Matter?

An anonymous commenter on yesterday’s post posits that Haskell won’t become mainstream because of the familiar pair of leg irons:
I think one of the biggest problems in Haskell, aside from it not being very easy (whats a monad?), is syntax.
There are many reasons why Haskell may not become mainstream, but syntax and monads aren’t two of them. I’m a populist, so I get offended when a language designer builds something that’s explicitly designed to drag masses of dumb, lumbering programmers about halfway to Lisp, Smalltalk, or some other great language. I want to use a language built by great language designers that they themselves not only want to use, but want to invite others to use.

I could be wrong here. Maybe being a ‘mainstream programming language’ means designing something down to the level of the great unwashed. I hope not. I really hope not. But it could be so. And if it is, that’s probably the one and only reason why Haskell won’t be the next big boost in programming language productivity. That would also disqualify O’Caml, Erlang and perhaps Scala as well. Time will tell.

But syntax? Sorry, not a huge issue.

Sure, C and its descendants have a stranglehold on what a programming language should look like to most programmers, but that’s the least important feature a language provides. Functional programmers, especially Lisp hackers, have been saying this for decades. Decades.

A language’s syntax is a halfway point between simplifying the job of the compiler writer and simplifying the job of the programmer. No one is going back and borrowing syntax from COBOL, because it’s just too damn verbose and painful to type. C is a crisp, minimal, elegant set of constructs for ordering statements and expressions, compared to its predecessors.

Twenty years ago, the clean syntax that C provided made programming in all caps in Pascal, Fortran, Basic or COBOL seem quaint. Twenty years from now, programming with curly braces and semicolons could be just as quaint. Curly braces and semicolons aren't necessary; they're just a crutch for the compiler writer.

To prove that syntax doesn’t matter, I offer three similar-looking languages: C, Java (or C#, if you prefer) and JavaScript. They all use a syntax derived from C, but they are completely separate languages. C is a straightforward procedural language, Java is a full blown object oriented language (with some annoying edge cases), and JavaScript is a dynamic, prototype-based object oriented language. The fact that a for loop looks the same in these three languages means absolutely nothing.

Knowing C doesn’t help you navigate the public static final nonsense in Java, nor does it help you understand annotations, inner classes, interfaces, polymorphism, or design patterns. Going backward from Java to C doesn’t help you write const-correct code, or understand memory allocation patterns.

Knowing C or Java doesn’t help much when trying to use JavaScript to its full potential. Neither language has anything resembling JavaScript’s dynamic, monkeypatch everything at runtime behavior. And even if you have a deep background in class-based object oriented languages, JavaScript’s use of prototypes will strike you as something between downright lovely and outright weird.

If that doesn’t convince you, consider the fact that any programmer worthy of the title already uses multiple languages with multiple syntaxes. These typically include their language of choice, some SQL, various XML vocabularies, a few config file syntaxes, a couple of template syntaxes, some level of Perl-compatible regular expressions, a shell or two, and perhaps a version or two of make or a similar utility (like Ant or Maven).

Add that up, and a programmer can easily come across two dozen different syntaxes in a single project. If they can’t count that high, it’s not because they do all their work in a single syntax[1], but because it takes too much effort to stop and count all of the inconsequential little syntaxes. (Do Apache pseudo-XML config files count as a separate syntax? Yeah, I guess they do. It took that Elbonian consultant a day to track down a misconfigured directive last year…)

So, no, Mr. Anonymous. Haskell’s syntax isn’t a stumbling block. You can learn the basics in an afternoon, get comfortable within a week, and learn the corner cases in a month or two.


Now, as for monads - the problem with monads is that they seem harder to understand than they really are. That is, it is more difficult to explain what a monad is than it is to gain a visceral understanding of what they do. (I had this same problem when I was learning C — it was hard to believe that it was really that simple.)

If you caught my introduction to Haskell on ONLamp (parts 1, 2 and 3), you may have seen this tidbit right before the end of part 3:
[M]onads enforce an order of execution on their statements. With pure functions, sub-expressions may be evaluated in any order without changing their meaning. With monadic functions, the order of execution is very important.
That is, monads allow easy function composition that also ensures linear execution, much like you would expect from writing a series of statements within a function in C, a method in Java, or a block of Javascript. There are other interesting properties of monads, but this is the most fundamental.
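A quick pair of definitions shows the contrast (my example, not from the articles). In the pure function, GHC is free to evaluate the two sub-expressions in either order; in the IO action, the monad guarantees the getLine runs before the putStrLn:

double :: Int -> Int
double x = 2 * x

pureSum :: Int
pureSum = double 3 + double 4    -- evaluation order is unobservable

greet :: IO ()
greet = do
  name <- getLine                -- must run first
  putStrLn ("Hello, " ++ name)   -- then this, using the result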



[1]: Lisp and Smalltalk programmers might honestly count one single syntax for all their work. :-)