"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.


Saturday, October 08, 2005

Innovation or Standard: Choose One

On BPEL standardization...

the market is fairly immature as to what is the right way to take something like that. What it also shows then is you're getting a lot of innovation in the standards body. Innovations in standards bodies typically aren't good things.

Comments on Processes, Etc.

Le Roux Bodenstein writes in email...

Some comments on some of your recent posts: You were talking a lot about scaling to more processors, app servers becoming the new operating systems, etc.

Aren't app servers just big processes that manage large amounts of threads? Don't multiple processes scale more easily because of "share nothing"? What's wrong with operating systems? They are basically (among a few other things) very efficient process schedulers. (modern unixes, at least)

"Shared nothing" is important. If the multiplexor does not enforce shared nothing then convention should. The new multiplexor (e.g. the web/app server) is lighter weight than the older OS multiplexor. Shared-nothing lightweight processes are the best of both worlds.
I think if anything we should work on making inter-process communication easier. Processes should be tied closer into languages and inter-process communication should be abstracted. Even more than some newer languages did with threads.
I agree. But we need to lose the distinction between local and remote processes. e.g. Erlang processes use the same simple communication abstraction whether they are co-located or remote. So do HTTP processes of course, and XMPP, and... Perhaps the OS is the last bastion of the local/remote distinction? (The now defunct Apollo Domain OS being one of the rare exceptions.)
I think "lightweight processes and threads" will lose popularity. Multiple cores and processors effectively make the "heavy-weight overhead of processes" a moot point. There's a lot less context switching when things run on their own processors and once a process is started (which is as far as I know quite expensive compared to starting a thread, but I am not an expert) it can just keep running until it is needed.
I agree heavyweight processes will perform better, but lightweight processes will perform that much better. The OS process approach can and I am sure will be improved but making improvements there is probably a longer road than taking the Erlang approach in a specific web/app server or language runtime implementation.
Anyway... my argument is I don't think app servers will help much on operating systems with decent kernels. In fact - they will just be in the way. Or an extra layer. Or what little use they have can be added to the os.
I think this is a fine way to approach the problem but as I said I think it is a longer road. Hopefully the OS designers will move in this direction, having taken some best practices from the web/app server and language runtime designers. (For that matter hopefully the web/app server and language runtime designers will take some best practices from their peers.)
I don't think we need new programming languages either. I think Python, Ruby, etc. can probably just be "fixed" to make multiple-process development a lot easier...
We continue to be in agreement.
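The shared-nothing, message-passing style this exchange converges on can be sketched in Python. This is a minimal sketch, not a real app server: the worker and function names are invented, and the "fork" start method assumes a unix-like system.

```python
# A minimal sketch of "shared nothing": each worker runs in its own OS
# process with its own private state, and all coordination goes through
# explicit message queues -- no shared memory, no locks, no monitors.
# The "fork" start method keeps the sketch self-contained on unix.
import multiprocessing as mp

def summing_worker(inbox, outbox):
    total = 0                      # private state; nothing else can see it
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(total)      # the only way state leaves the process
            return
        total += msg

def run_shared_nothing(values):
    ctx = mp.get_context("fork")
    inbox, outbox = ctx.Queue(), ctx.Queue()
    worker = ctx.Process(target=summing_worker, args=(inbox, outbox))
    worker.start()
    for v in values:
        inbox.put(v)
    inbox.put("stop")
    result = outbox.get()
    worker.join()
    return result
```

Because the worker shares nothing, scaling out is just a matter of starting more of them; the same loop would work across machines if the queues were replaced by sockets.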

Wednesday, October 05, 2005

The New Ribbon

I have not tried the new MSFT Office Ribbon, so I will withhold judgement for now. Others are noting the amount of real estate the ribbon requires.

The only thing I want to point out here is we have gone full circle from ribbons to menus and back. The first GUI applications I developed were CAD tools. Back in the day, those tools were expensive and ran on expensive hardware. One had to go to a special (usually dimly lit) room to use the CAD systems.

Those systems had very expensive, and large, screens. The GUI was essentially an area to present as much of the schematic as possible surrounded by, yes, ribbons. We developers had ongoing discussions to determine how to minimize the ribbon space and maximize the capabilities offered in the ribbons. CAD designers did not want to waste time clicking through all kinds of icons to get to the right function.

Soon after, the Mac caught on, then X and Motif. Menus and dialog boxes became the thing. Ask me sometime how this specific transition almost brought down one of the top 3 CAD vendors.

Tuesday, October 04, 2005

YURLs

Tyler Close has been contributing some useful exposition in a series of messages on capability-based URLs to implement authority in the REST Yahoo group.

More on the Future of Multiprocessing

ACM Queue has good explanations of where hardware is going and how to scale software into the multiprocessing era.

The introduction of CMP systems represents a significant opportunity to scale systems in a new dimension. The most significant impact of CMP systems is that the degree of scaling is being increased by an order of magnitude: what was a low-end one- to two-processor entry-level system should now be viewed as a 16- to 32-way system, and soon even midrange systems will be scaling to several hundred ways.

Monday, October 03, 2005

Software Education

Phil Windley writes about software education...

I've determined that I'm no longer convinced that software engineering, at least as it's commonly discussed and taught, is what I want to prepare students to do. I try to focus them on being innovative, entrepreneurial, and working with dynamic languages on networked applications.

The New Application Architecture

Phil Windley looks at new hardware architectures and their forces on future software systems...

Application servers (like jBoss or Weblogic) support a development model in which programmers develop threadless code and the app server manages the threads. That simplifies the point too much, perhaps, but I think having that much parallel processing power on a single chip might make app servers much more important for developing applications. There are continuing debates about whether app servers add more complexity than they're worth, but that might be because we haven't met many problems large enough to require them–yet. In the early 60's programmers scoffed at the idea of operating systems as being "needlessly complex;" that idea is ludicrous today.
I think application servers are the new operating system. (Ten years ago they were the new transaction monitor, but now we know we all need one. We all need more than one.) We need to be able to write smaller "applications" and have them interact more easily. Look at the advice for programming web applications, web services, EJBs, as well as applications in Erlang. They are all very similar. Erlang applications tend to be simpler than the rest because the available framework takes this point to the extreme.

Those other systems are built by and large on heavy-weight process technology. Maybe Erlang is the new Lisp. We need a Lisp for lighter-weight process interaction, as Jon Udell pointed out.

I should point out that I equate "web server" and "app server". The distinction is an artifact rather than anything inherent. An application server is inherently a "process monitor" with various drivers: HTTP drivers, SMTP drivers, XMPP drivers, etc. I don't think the new "superplatform" belongs to Microsoft, IBM, or BEA. The superplatform is one based on the standard application protocols. The best of these platforms will combine support for these application protocols (and more... IMAP? WebDAV?) and abstract the complexity. Most of us should be writing sequential code, rule-based code, constraint-based declarative code, etc. But we'll have to plug into these protocols.
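One way to picture the "process monitor with drivers" idea: each protocol driver normalizes its wire format into a single internal message shape, and the application code stays sequential and protocol-agnostic. The driver classes and message shape below are invented for illustration.

```python
# Hypothetical sketch: two protocol "drivers" normalize different wire
# formats into one internal Message, which sequential application code
# handles without knowing which protocol delivered it.

class Message:
    def __init__(self, verb, body):
        self.verb, self.body = verb, body

class HttpDriver:
    def to_message(self, request_line, body):
        verb = request_line.split()[0]   # e.g. "POST /orders HTTP/1.1"
        return Message(verb, body)

class XmppDriver:
    def to_message(self, stanza_type, payload):
        return Message(stanza_type.upper(), payload)

def handle(message):
    # Sequential application code, unaware of the transport.
    return f"{message.verb}: {message.body}"
```

Adding an SMTP or IMAP driver would mean writing one more `to_message`, not touching `handle` at all.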

Phil continues...

It's tough to see how you'll use that much parallelism on the desktop. I just counted the number of processes running on my Powerbook: 76... there are things we've hardly been able to imagine.
This is the result of the languages we have been using to think with. As Phil points out, these new hardware architectures will give us more reasons to think differently.

CORBA Redux

Roger Sessions is a bit of a character. Witness his back-and-forth with Terry Coatta in ACM Queue around CORBA and Web Services. Sessions continues to point out that CORBA worked when CORBA was on both ends of the wire. Well, what else would you expect? Does HTTP work when HTTP is not on both ends? For some reason Sessions seems to believe that WS-* does not require WS-* on both ends.

For years now Sessions has promoted DCOM and now he promotes WS-*. A continuing theme of these promotions is that CORBA has failed. Certainly DCOM failed worse than CORBA. Moreover, as Coatta points out...

It looks to me like CORBA is more of a success than Web services.
The CORBA community learned many lessons and would have been even more of a success had that organization incorporated the web sooner and more naturally. WS-* usurped CORBA on the premise that the result would be simpler and better, yet years later both claims remain suspect.

Later in the exchange Sessions says something incredible. Upon admitting the WS-* specs are *more* complex than CORBA's, Sessions suggests that is OK...

I agree that the Web services standards are harder to understand than most of the CORBA specifications, but there’s one fundamental difference between these specifications and the CORBA ones. The CORBA specifications had to be understood by developers. The Web services standards don’t. Nobody needs to understand the Web services standards except for Microsoft and IBM because these standards are about how Microsoft and IBM are going to talk together, not about how the developer is going to do anything... These standards have no relevance to Joe or Jane Developer, none whatsoever.
This is ironic since a few minutes previously Sessions complained that these vendors are in danger of making WS-* too transparent...
In some sense, the transparent ability to make something a Web service is not really a good thing, because making an effective Web service requires a much more in-depth understanding of what it means to be a Web service.
Coatta catches this apparent contradiction...
It sounds like what you’re saying is that the tools that automatically supply Web services interfaces are, in fact, absolutely necessary because they’re that insulation between the developers and the underlying protocols. At the same time, they’re the downfall that’s making it possible to generate poorly architected systems. Two-edged sword?
Sessions has little to defend himself with other than to suggest that although these tools don't prevent bad architectures, just think how bad the architectures would be without those tools.

I'd give Coatta the victory in this debate. Too much confusion from Sessions who has not made up his mind whether tools, protocols, or APIs are good or bad...

The big difference between Web services and CORBA is that the Web services people said right from the beginning: there is no API.
So there is no API, because those are bad. But the protocols are damn near impossible to understand. But that is good, because we only need two vendors and they can provide tools. Well, as long as they don't provide APIs, 'cause that would be bad.

The only thing more confusing than the mess of WS-* specs is the explanation Sessions is trying to proffer.

Spiral Staircase: Going Up or Down?

Jon Udell asks a great question...

In the realm of service-oriented design and business-process modeling, what are the modern counterparts to Lisp and Smalltalk?
First, who says Lisp and Smalltalk are not "modern"?

Second, I don't know the answer. Neither Lisp nor Smalltalk offer more than any other tools for these purposes. I don't think we have much in these spaces yet.

An argument could be made that for service-oriented design the counterpart seems to be HTTP. Whether or not HTTP is the best we can do is moot. Lisp and Smalltalk were allowed to evolve in elite laboratories for a decade or more before widespread adoption. HTTP is simple with demonstrated success, but I'm not sure a successor will emerge from an MIT or a PARC anytime soon.

As for process modeling... we are in worse shape. I developed electronic design and manufacturing software (e-CAD, CAM) for many years, 1983-1993. This work included developing schematic editors as well as software that consumed schematic data. In those days there were few standards for this kind of software, interchange formats, or protocols.

I see several parallels to today's BPM software. Standards like BPEL are a start, but there is a long way to go on the front and back ends for these systems.

Back to services... I think we have a long way to go here as well. Most services I am aware of (whether they are REST or WS-*) are built on languages that emphasize the "inward view" of the system (e.g. the arrangement of code and data within the process) rather than the "outward view" (e.g. the service interface and contract). Sure there are "declarative metadata" schemes in various languages for denoting some function should be a web service, etc. But these schemes are bolted on to previous generation languages. Accessing semi-formal data from multiple sources and "mashing" these together appears to be another capability in demand.

What other capabilities should such a language provide? What about features for security? Availability? Distribution?

Sunday, October 02, 2005

Comments are gone

I have republished the blog without comments. There will be no more comments until I can prevent the spam attacks. I have been all but unaffected until now. The losers lose for us all.

Feel free to send email directly to me in lieu of comments on any topic.

Thursday, September 22, 2005

Threads, Processes (and Shared Memory)

James Robertson quotes from several interesting pieces on threads, processes, and programming models. I am squarely in the multi-process camp too. I remember my absolute shock about a decade ago when I learned Java (at that time aimed at being a simple web programming language) had a shared-memory with monitors concurrency model.

The key phrase there is "shared-memory". Threads in and of themselves are not the problem. The shared-memory model is the real problem.

With a shared-nothing model, threads take on much more of a "process" feel. Languages like Erlang and Termite implement lightweight shared-nothing processes above the level of the OS. The benefits of sharing lightweight processes within a single address space provide a "best of both worlds" model. Sharing can take place under the abstraction of the language... the runtime benefits of sharing but the developer-time benefits of isolation. Whether an associated process is local, or in another OS process, or on another node, is irrelevant in many respects. Performance, reliability, etc. are still concerns so the distribution problem is not completely abstracted away.
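A small Python sketch of the point, with threads standing in for lightweight processes: the receive loop below never touches shared state, so it is indifferent to where its mailbox is fed from. Swapping `queue.Queue` for `multiprocessing.Queue` (and `Thread` for `Process`) would leave the loop untouched. The names are invented.

```python
# A shared-nothing receive loop: all input arrives through a mailbox,
# all output leaves through a reply queue. Nothing is shared, so the
# same loop works whether the other end is a thread or another process.
import queue
import threading

def echo_loop(mailbox, replies):
    while True:
        msg = mailbox.get()
        if msg is None:          # conventional shutdown message
            return
        replies.put(msg.upper())

def run_local(words):
    mailbox, replies = queue.Queue(), queue.Queue()
    t = threading.Thread(target=echo_loop, args=(mailbox, replies))
    t.start()
    for w in words:
        mailbox.put(w)
    mailbox.put(None)
    t.join()
    return [replies.get() for _ in words]
```

This is the discipline, not the enforcement: Python will not stop you from sharing memory the way Erlang does, which is why convention has to carry the weight here.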

Wednesday, September 21, 2005

Smalltalk Leading the Way

David Buck writes about an enticing upcoming meeting of the Ottawa Smalltalk User Group...

Smalltalk and Smalltalk developers have led the way to the modern techniques and practices that [David] teaches in Simberon's courses... even now, the future of software development is being led by Smalltalk whether or not the rest of the world cares to notice.

Happy Goldfish in a Bowl

Having dabbled on the edge of AI for a number of years, I enjoy the technology as well as the philosophy that surrounds the subject. Jon Udell writes of the so-called Singularity...

That's us: just goldfish to the post-human super-intelligences... True machine intelligence was what the advocates of strong AI wanted to hear about, not the amplification of human intelligence by networked computing. The problem, of course, is that we've always lacked the theoretical foundation on which to build machine intelligence. Ray Kurzweil thinks that doesn't matter, because in a decade or two we'll be able to scan brain activity with sufficient fidelity to port it by sheer brute force, without explicitly modeling the algorithms.
Now is the time for this goldfish (me) to refresh himself with a healthy dose of Terry Winograd and Fernando Flores. I don't consider myself religious in the least. However I think that neither "intelligence" nor "emotions" are in any way mechanical (i.e. transferable to a machine). I do think they are *biological abstractions*. They are the names we give to our interpretation of biological (and so, chemical, and so, physical) processes.

Back to Jon on the current understanding of human vision...

We are, however, starting to sort out the higher-level architecture of these cortical columns. And it's fascinating. At each layer, signals propagate up the stack, but there's also a return path for feedback. Focusing on the structure that's connected directly to the 14x14 retinal patch, Olshausen pointed out that the amount of data fed to that structure by the retina, and passed up the column to the next layer, is dwarfed by the amount of feedback coming down from that next layer. In other words, your primary visual processor is receiving the vast majority of its input from the brain, not from the world.
We can manipulate biological and psychological processes. We can mimic them mechanically to increasing degrees. But we cannot "make them" out of parts and we cannot "transfer" them to such a machine. What would that mean? The simulation of the hurricane is not and never will be the hurricane even if the simulation actually destroys the city. Something destroys the city, but it is not a hurricane.

A machine that has some representation of my "brain" may continue to develop a further simulation of what my brain would be like had it undergone similar stimuli. But in no way is that machine now "my brain" and in no way does that machine "feel" things in the way my "real brain" feels things. Those feelings are interpretations of abstractions of biological processes. The machine is merely a simulation of such things. If you care about me and then observe further simulations of such a machine you may further interpret your observations to be "happy" or "sad" and you may even interpret that machine to be "happy" or "sad". But that machine is in fact not happy or sad in any biological or psychological sense.

Let's face it, even my dog "loves" me because I feed it, and when it comes down to it my kids do too. We are abstract interpreters of biological processes. One interpretation of that may be "loneliness", but so what? Here is where I note that my dog gets as excited to go out and pee as it does to see me come home from a week on the road. Fortunately my kids have a finer scale of excitement.

Back to Jon for more about augmentation rather than "strong" AI...

I had the rare privilege of meeting Doug Engelbart. I'd listened to his talk at the 2004 version of this conference, by way of ITConversations.com, and was deeply inspired by it. I knew he'd invented the mouse, and had helped bring GUIs and hypertext into the world, but I didn't fully appreciate the vision behind all that: networked collaboration as our first, last, and perhaps only line of defense against the perils that threaten our survival. While we're waiting around for the singularity, learning how to collaborate at planetary scale -- as Doug Engelbart saw long ago, and as I believe we are now starting to get the hang of -- seems like a really good idea.
Seems like a really good idea. "Strong" AI is fascinating, but "augmentation" is useful.

Streamlined Concurrency Modeling

I like this book Blaine refers to. It's been sitting on my shelf for a couple of years where I've picked it up and dusted off a few pages here and there. Recently I have been following it a bit in practicing *concurrent* programming as opposed to object programming per se.

Concurrent processes can be seen as "active objects". Not every object in your system should be concurrent (not even in a concurrent programming language like Oz, Erlang, or Gambit Scheme). But the significant ones almost certainly should.

An active object (a concurrent process) in these systems is like a smallish database with an application-specific protocol for accessing it. Each process has state and essentially implements a loop of selecting the next most important message, interpreting it, and updating its state (its little possibly-persistent "database"). These message-passing protocols are not unlike the protocols of messages sent to (sequential) objects.

More later. If you have any favorite references for concurrent programming leave a comment.

Friday, September 16, 2005

Nintendo Revolution

The long-rumored controller for the Nintendo Revolution has been revealed. What a winner.

My understanding is Nintendo was the early innovator of the common features of current controllers. This is another revolutionary advance from the looks of the video from the Tokyo Game Show.

I have been telling my kids for years that once a controller along these lines gets us away from the unnatural button combinations I'll be able to kick their butts. They have a physical memory for modern controllers I have never invested enough time in to become competent. That is why I am competitive with them on games like Mario Kart Double Dash (simple controls), but not on most games.

Now we shall soon see if my words will hold up. This controller looks to make all sorts of actions more directly translatable into the game.

I don't care what else is in the Revolution. I am not a big gamer at all but I cannot wait to try out this controller. Cell wha? 360 wha? Ho hum. I could become more of a gamer with this controller. My 13 year old showed me the video last night. At first I thought, what, the controller is a TV remote?

But watch the video of it in action. I hope they have the virtual drum set ready to go in March!

Wednesday, September 14, 2005

Finally Integrated Query

Looks like a popular production language design will finally provide a query capability that disregards the kind of data (persistent or not, objects or not, XML or not) being queried. Visual Basic just became somewhat appealing for this and some other new features. I still have reasons not to use it, but this is an appealing step that will benefit many developers for some time to come.
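A rough Python analogue of what makes integrated query appealing: the same query expression runs over in-memory objects, parsed XML, and database rows, because each source yields the same iterable shape. The data and helper names are invented for illustration.

```python
# One query shape over three kinds of data: in-memory tuples, XML
# attributes, and SQL rows. Each source is normalized to an iterable
# of (name, price) pairs, so the query itself never changes.
import sqlite3
import xml.etree.ElementTree as ET

def expensive_names(items, threshold):
    # The "query": same expression regardless of where items came from.
    return sorted(name for name, price in items if price > threshold)

def from_objects():
    return [("pen", 2.0), ("laptop", 900.0)]

def from_xml(text):
    root = ET.fromstring(text)
    return [(i.get("name"), float(i.get("price"))) for i in root]

def from_sql():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE items (name TEXT, price REAL)")
    db.executemany("INSERT INTO items VALUES (?, ?)",
                   [("desk", 120.0), ("clip", 0.1)])
    return db.execute("SELECT name, price FROM items").fetchall()
```

What the VB work adds over this sketch is compiler support: the query syntax is checked and translated per-source, rather than relying on each source being hand-normalized first.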

Kudos to Visual Basic. Does C# have *all* these same features? I've not read all the PDC news yet.

Monday, September 12, 2005

Gambit Scheme 4.0 beta 15

Gambit Scheme 4.0 beta 15 is available. Changes include the incorporation of some Termite capabilities, in particular each thread has a mailbox for receiving asynchronous messages.

Sunday, September 11, 2005

Beware Dichotomies

Paul Graham's latest essay is making the rounds. My initial reaction reflects my general approach to socio-political issues: beware dichotomies. Really, beware those who would present any complex issue as a dichotomy. Paul's opening paragraph has to be the worst in all his essays unless he was *intending* to present to us a straw man.

By the end of the essay though he makes at least one excellent point: transparency matters. Yes, wealth is power and I think Paul fails to recognize how closely wealth and power are related, let alone to begin recognizing the good and bad of that.

If global capitalism is going to work (I currently give it a mixed review but that's another story) then we need to remember that the theory itself is founded on "perfect knowledge". The closest things we have to perfect knowledge are democracy and transparency.

Friday, September 09, 2005

Why (not) Smalltalk?

A coincidence that on Sam Ruby's blog and on the Extreme Programming Yahoo Group on the same day there are conversations about using Smalltalk. Both conversations are along the lines of...

OK, Smalltalk is very good. So why use a language influenced by Smalltalk? Why not use Smalltalk itself?
Sam's quote is so good...
Reinventing Smalltalk One Decade at a Time


About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.
