Comparing the Lisp conference with the Scheme workshop a month ago, I got a strong impression that Lisp and Scheme are different languages with different communities. I think Python is closer to Haskell than Lisp is to Scheme. CL seems to me like a Smalltalk with round parentheses. The Scheme community is more diverse, and it seems to value a functional approach. In contrast, the CL community seems more pragmatic -- and more ad hoc. The CL code I saw is very object-oriented, very stateful, very Smalltalk-ish.
It is interesting to read a Schemer's comments on a Lisp conference. The languages are so near, and yet so far...
Previous LtU coverage of the ILC.
Peter Norvig was also cavalier about programming languages. He said people often need more control over memory allocation -- that's why they choose C++. Google uses a lot of C++. Doesn't Peter Norvig know how hard it is to manage memory? That's why it is better to leave this task to professionals.
I think the difficulty of manual memory management is frequently overstated. Perhaps "memory management" is usually interpreted to mean the kind of temporary data structure slinging that's common in a purely functional language? That would be ugly to do by hand, I admit. But I think the cases where more control is needed over memory allocation are less general than that.
A good example is when you have very large quantities of data that you do not want exposed to the garbage collector. In a 3D game, you might have 30 megabytes of static geometry. Or what if you have data that takes 2 gigabytes when stored as Lisp data structures, but only 500MB when stored in a very specialized format? ITA has similar problems.
A good example is when you have very large quantities of data that you do not want exposed to the garbage collector. In a 3D game, you might have 30 megabytes of static geometry.
Many Scheme and other functional programming systems I know have little trouble with this case. For example, the Gambit-C Scheme system allocates large strings and uniform vectors in a special heap, distinct from the heap for regular data. The big-object heap has its own policies and strategies (say, it tries to avoid copying large objects). In general, the BIBOP approach is rather popular. Gambit also supports static object allocation.
I think the difficulty of manual memory management is frequently overstated.
Not at all! Witness, for example, the recent discussion between William D. Clinger and Hans-J. Boehm:
http://groups.google.com/groups?threadm=1178a29f.0210181205.38266e3e%40posting.google.com
It's very difficult to manage memory efficiently! If we are not talking about efficiency, what else is the reason for manual memory management?
The best, IMHO, text-only web browser, w3m, is written in C with the use of a garbage collector (Boehm's collector, to be precise). The author of w3m has commented that he didn't know how he would have written the HTML form rasterizer without GC.
Do you know that even the Linux kernel has a garbage collector? Olin Shivers has pointed that out:
/usr/src/linux-2.4/net/unix/garbage.c
Finally, managing memory is hard even for professionals. It turns out that not all mallocs are created equal. Some of them -- for example, malloc() on Solaris -- are very poor: it fragments memory, which causes a memory-intensive application to run out of memory:
Keeping with the Linux kernel as an example: at work we've hacked on its network stack in C for about half a year without ever worrying about memory allocation (and without using the garbage collector -- fun to see that!). A TCP stack is a pretty complex bit of software, but the only memory it has to worry about is socket structures and packet buffers, which have pretty straightforward life cycles: you allocate a socket struct when a connection is established and free it when it is closed; you allocate a packet buffer when you want to send something and free it when it is acknowledged; and so on. Similarly, they're only really referenced in one place -- socket structs in a connection table, and packet buffers on the send and receive queues. The rest of the work is easy to do on the stack. That is a bit of a simplification, but still -- we haven't had to worry about memory allocation at all.
So, I don't think it's so unreasonable for Norvig to say that explicit memory management is the right thing for some applications (if I understand him correctly). Actually, I recently read a book on writing operating systems in Concurrent Pascal, where they built useful ones that statically allocate everything. I think a really simple and elegant explicit memory management scheme is more beautiful than using garbage collection, even if you can't do it for most programs.
Just my 2c.
Do you know that even Linux kernel has a garbage collector? Olin Shivers has pointed that out: /usr/src/linux-2.4/net/unix/garbage.c

That's a "Collector For AF_UNIX sockets", which in fact might demonstrate that generic, systemwide garbage collection is not what's really wanted by system-level programs.
But, I'm just being pedantic. I really enjoyed Oleg's account of the conference.
The printed material I brought to the conference was written by Richard Brooksby and Nicholas Barnes, two of the original architects of the MPS. I merely arranged to have it photocopied and left on a table.
http://www.ravenbrook.com/project/mps/
http://www.ravenbrook.com/project/mps/doc/2002-01-30/ismm2002-paper/
- nick levine