Showing posts with label metadev.

Tuesday, March 30, 2010

Rehearsals in beta!

I have a new application, Rehearsals, an online practice diary for musicians. If that sounds like the kind of thing you're interested in, and you have Mac OS X 10.6 or newer, then please download the beta release and test it out. There's absolutely no charge, and if you submit feedback to support <at> rehearsalsapp <dot> com you'll be eligible for a free licence for version 1.0 once that's released. There are no limitations on the beta version, so please do download and start using it!

You can follow @rehearsals_app for updates to the beta programme (new releases are automatically downloaded using Sparkle, if you enable it in the app).

Tuesday, December 15, 2009

Consulting versus micro-ISV development

Reflexions on the software business really is an interesting read. Let me borrow Adrian's summary of his own post:

Now, here’s an insider tip: if your objective is living a nightmare, tearing yourself apart and swear never touching a keyboard again, choose [consulting]. If your objective is enjoying a healthy life, making money and living long and prosper, choose [your own products].


As the author himself allows, the arguments presented either way are grossly oversimplified. In fact I think there is a very simple axiom underlying what he says which, if untrue, moves the balance away from writing your own products and towards consulting, contracting or even salaried work. Let me start by introducing some features left out of the original article. They may, depending on your point of view, be pros or cons, and they may apply to more than one of the roles.

A consultant:


  • builds up relationships with many people and organisations

  • is constantly learning

  • works on numerous different products

  • is often the saviour of projects and businesses

  • gets to choose what the next project is

  • has had the risks identified and managed by his client

  • can focus on two things: writing software, and convincing people to pay him to write software

  • renegotiates when the client's requirements change


A μISV developer:


  • is in sales, marketing, support, product management, engineering, testing, graphics, legal, finance, IT and HR until she can afford to outsource or employ

  • has no income until version 1.0 is out

  • cannot choose when to put down the current version and start work on the next product

  • can work on nothing else

  • works largely alone

  • must constantly find new ways to sell the same few products

  • must pay for her own training and development


A salaried developer:


  • may only work on what the managers want

  • has a legal minimum level of security

  • can rely on a number of other people to help out

  • can look to other staff to do tasks unrelated to his mission

  • gets paid holiday, sick and parental leave

  • can agree a personal development plan with the higher-ups

  • owns none of the work he creates


I think the axiom underpinning Adrian Kosmaczewski's article is: happiness ∝ creative freedom. Does that apply to you? Take the list of things I've defined above, and the list of things in the original article, and put them not into "μISV vs. consultant" but "excited vs. anxious vs. apathetic". Now, this is more likely to say something about your personality than about whether one job is better than another. Do you enjoy risks? Would you accept a bigger risk in order to get more freedom? More money? Would you trade the other way? Do you see each non-software-developing activity as necessary, fun, an imposition, or something else?

So thank you, Adrian, for making me think, and for setting out some of the stalls of two potential careers in software. Unfortunately I don't think your conclusion is as true as you do.

Thursday, August 27, 2009

Indie app milestones part one

In the precious and scarce spare time I have around my regular contracting endeavours, I've been working on my first indie app. It reached an important stage in development today: the first time I could show the UI to somebody who doesn't know what I'm up to, and they instinctively knew what the app was for. That's not to say that the app is all shiny and delicious; it's entirely fabricated from standard controls. Standard controls I (personally) don't mind so much. However, the GUI will need quite a bit more work before the app is at its most intuitive and before I post any teaser screenshots. Still, let's see how I got here.

The app is very much a "scratching my own itch" endeavour. I tooled around with a few ideas for apps while sat in a coffee shop, but one of them jumped out as something I'd use frequently. If I'll use it, then hopefully somebody else will!

So I know what this app is, but what does it do? Something I'd bumped into before in software engineering was the concept of a User Story: a testable, brief description of something which will add value to the app. I broke out the index cards and wrote a single sentence on each, describing something the user will be able to do once that user story is added to the app. I've got no idea whether I have been complete, exhaustive or accurate in defining these user stories. If I need to change, add or remove any user stories I can easily do that when I decide it's necessary. I don't need a complete roadmap of the application for the next five years right now.

As an aside, people working on larger teams than my one-man affair may need to estimate how much effort their projects will need and track progress against those estimates. User stories are great for this: each is small enough to make real progress on in a short time, and each represents a discrete and (preferably) independent useful addition to the app, so the app is ready to ship any time an integer number of these user stories is complete on a branch. All of this means that it shouldn't be too hard to get the estimate for a user story roughly correct (unlike big up-front planning, which I don't think I've ever seen succeed), that previously completed user stories can help improve estimates on future stories, and that even an error of +/- a few stories means you've still got something of value to give to the customer.

So, back with me, and I've written down an important number of user stories; the number I thought of before I gave up :-). If there are any more they obviously don't jump out at me as a potential user, so I should find them when other people start looking at the app or as I continue using/testing the thing. I eventually came up with 17 user stories, of which 3 are not directly related to the goal of the app ("the user can purchase the app" being one of them). That's a lot of user stories!

If anything it's too many stories. If I developed all of those before I shipped, I'd spend lots of time on niche features before even finding out how useful the real world finds the basic things. So I split the stories into two piles: the ones which are absolutely necessary for a preview release, and the ones which can come later. I don't yet care how late "later" is; they could be in 1.0, a point release or a paid upgrade. As I haven't even got to the first beta yet that's immaterial; I just know that they don't need to be done now. There are four stories that do need to be done now.

So, I've started implementing these stories. For the first one I went to a small whiteboard and sketched UI mock-ups. In fact, I came up with four. I then set about finding out whether other apps have similar UI and how they've presented it, to choose one of these mock-ups. Following advice from the world according to Gemmell I took photos of the whiteboard at each important stage to act as a design log - I'm also keeping screenshots of the app as I go. Then it's over to Xcode!

So a few iterations of whiteboard/Interface Builder/Xcode later and I have two of my four "must-have" stories completed, and already somebody who has seen the app knows what it's about. With any luck (and the next time I snatch any spare time) it won't take long to have the four stories complete, at which point I can start the private beta to find out where to go next. Oh, and what is the app? I'll tell you soon...

Friday, April 17, 2009

NSConference: the aftermath

So, that's that then, the first ever NSConference is over. But what a conference! Every session was informative, edumacational and above all enjoyable, including the final session where (and I hate to crow about this) the "American" team, who had a working and well-constructed Core Data based app, were soundly thrashed by the "European" team who had a nob joke and a flashlight app. Seriously, we finally found a reason for doing an iPhone flashlight! Top banana. I met loads of cool people, got to present with some top Cocoa developers (why Scotty got me in from the second division I'll never know, but I'm very grateful) and really did have a good time talking with everyone and learning new Cocoa skills.

It seems that my presentation and my Xcode top tip[*] went down really well, so thanks to all the attendees for being a great audience, asking thoughtful and challenging questions and being really supportive. It's been a couple of years since I've spoken to a sizable conference crowd, and I felt like everyone was on my side and wanted the talk - and indeed the whole conference - to be a success.

So yes, thanks to Scotty and Tim, Dave and Ben, and to all the speakers and attendees for such a fantastic conference. I'm already looking forward to next year's conference, and slightly saddened by having to come back to the real world over the weekend. I'll annotate my Keynote presentation and upload it when I can.

[*] Xcode "Run Shell Script" build phases get stored on one line in the project.pbxproj file, with all the line breaks replaced by \n. That sucks for version control because any changes by two devs result in a conflict over the whole script. So, have your build phase call an external .sh file where you really keep the shell script. Environment variables will still be available, and now you can work with SCM too :-).

Friday, April 03, 2009

Controlling opportunity

In Code Complete, McConnell outlines the idea of having a change control procedure to stop the customers from changing the requirements whenever they see fit. In fact, one feature of the process is that it should be heavy enough to dissuade customers from registering changes at all.

The Rational Unified Process goes for the slightly more neutral term Change Request Management, but the documentation seems to imply the same opinion, that it is the ability to make change requests which must be limited. The issue is that many requests for change in software projects are beneficial, and accepting the change request is not symptomatic of project failure. The most straightforward example is a bug report - this is a change request (please fix this defect) which converts broken software into working software. Similarly, larger changes such as new requirements could convert a broken business case into a working business case; ultimately turning a failed project into a revenue-generator.

In my opinion the various agile methodologies don't address this issue, either assuming that with the customer involved throughout, no large change would ever be necessary, or that the iterations are short enough for changes to be automatically catered for. I'm not convinced; perhaps after the sixth sprint of your content publishing app the customer decides to open a pet store instead.

I humbly suggest that project managers replace the word "control" in their change documentation with "opportunity" - let's accept that we're looking for ways to make better products, not that we need excuses never to edit existing Word files. OMG baseline be damned!

Monday, February 23, 2009

Cocoa: Model, View, Chuvmey

Chuvmey is a Klingon word meaning "leftovers" - it was the only way I could think of to keep the MVC abbreviation while impressing upon you, my gentle reader, the idea that what is often considered the Controller layer actually becomes a "Stuff" layer. Before explaining this idea, I'll point out that my thought processes were set in motion by listening to the latest Mac Developer Roundtable (iTunes link) podcast on code re-use.


My thesis is that the View layer contains Controller-ey stuff, and so does the Model layer, so the bit in between becomes full of multiple things: the traditional OpenStep-style "glue" or "shuttle" code which is what the NeXT documentation meant by Controller, dynamic aspects of the model which could be part of the Model layer, view customisation which could really be part of the View layer, and anything which either doesn't fit elsewhere or which we don't notice could fit elsewhere. Let me explain.


The traditional source for the MVC paradigm is Smalltalk, and indeed How to use Model-View-Controller is a somewhat legendary paper in the use of MVC in the Smalltalk environment. What we notice here is that the Controller is defined as:


The controller interprets the mouse and keyboard inputs from the user, commanding the model and/or the view to change as appropriate.

We can throw this view out straight away when talking about Cocoa, as keyboard and mouse events are handled by NSResponder, which is the superclass of NSView. That's right: the Smalltalk Controller and View are really wrapped together in the AppKit, both being part of the View. Many NSView subclasses handle events in some reasonable manner, allowing delegates to decorate this at key points in the interaction; some of the handlers, such as NSText's, are fairly complex. Often those decorators are written as Controller code (though not always; the Core Animation -animator proxies are really controller decorators, but all of the custom animations are implemented in NSView subclasses). Then there's the target-action mechanism for triggering events; the action methods those events invoke typically live in the Controller. But should they?
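
To illustrate where those responsibilities land, here's a minimal sketch (the class and its recipe-flavoured behaviour are hypothetical, not anything from AppKit): the NSControl subclass interprets the raw mouse event itself, which is the job the Smalltalk paper gives to the Controller, and then uses target-action to forward the user's intent to whatever object is wired up in Interface Builder - the object we habitually file under "Controller".

    #import <Cocoa/Cocoa.h>

    @interface GLRecipeListView : NSControl
    @end

    @implementation GLRecipeListView
    - (void)mouseDown:(NSEvent *)theEvent
    {
        // Interpreting the input happens right here in the View layer,
        // because NSView inherits the responder machinery from NSResponder.
        NSPoint where = [self convertPoint:[theEvent locationInWindow] fromView:nil];
        // ...purely view-level work: update the selection under 'where'...
        (void)where;
        // Target-action then hands the user's *intent* to whichever object
        // was wired up in Interface Builder, typically a "Controller".
        [self sendAction:[self action] to:[self target]];
    }
    @end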


Going back to that Smalltalk paper, let's look at the Model:


The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller).

If the behaviour - i.e. the use cases - is implemented in the Model, well, where does that leave the Controller? Incidentally, I agree with and try to use this behavior-and-data definition of the Model, unlike paradigms such as Presentation-Abstraction-Control where the Abstraction layer really only deals with entities, with the dynamic behaviour being in services encapsulated in the Control layer. All of the user interaction is in the View, and all of the user workflow is in the Model. So what's left?


There are basically two things left for our application to do, but they're both implementations of the same pattern - Adaptor. On the one hand, there's preparing the Model objects to be suitable for presentation by the View. In Cocoa Bindings, Apple even use the class names - NSObjectController and so on - as a hint as to which layer this belongs in. I include in this "presentation adaptor" part of the Controller all those traditional data preparation schemes such as UITableView data sources. The other is adapting the actions etc. of the View onto the Model - i.e. isolating the Model from the AppKit, UIKit, WebObjects or whatever environment it happens to be running in. Even if you're only writing Mac applications, that can be a useful isolation; let's say I'm writing a Recipe application (for whatever reason - I'm not, BTW, for any managers who read this drivel). Views such as NSButton or NSTextField are suitable for any old Cocoa application, and Models such as GLRecipe are suitable for any old Recipe application. But as soon as they need to know about each other, the classes are restricted to the intersection of that set - Cocoa Recipe applications. The question of whether I write a WebObjects Recipes app in the future depends on business drivers, so I could presumably come up with some likelihood that I'm going to need to cross that bridge (actually, the bridge has been deprecated, chortle). But other environments for the Model to exist in don't need to be new products - the unit test framework counts. And isn't AppleScript really a View which drives the Model through some form of Adaptor? What about Automator…?
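
Here's a rough sketch of both adaptor duties in one place, using the hypothetical GLRecipe model from above (its methods are invented for illustration). One direction is the presentation adaptor - a plain table data source preparing Model objects for the View - and the other adapts an action from the View onto the Model, so that GLRecipe never has to know AppKit exists.

    #import <Cocoa/Cocoa.h>

    // Hypothetical model class from the example above; only the interface is
    // shown here so the sketch is self-contained.
    @interface GLRecipe : NSObject
    - (NSString *)name;
    - (NSUInteger)servings;
    - (void)scaleToServings:(NSUInteger)newServings;
    @end

    @interface GLRecipeController : NSObject
    {
        NSArray *recipes;                // of GLRecipe
        IBOutlet NSTableView *tableView;
    }
    - (IBAction)doubleServings:(id)sender;
    @end

    @implementation GLRecipeController
    // Presentation adaptor: prepare Model objects for display by the View.
    - (NSInteger)numberOfRowsInTableView:(NSTableView *)aTableView
    {
        return [recipes count];
    }
    - (id)tableView:(NSTableView *)aTableView
        objectValueForTableColumn:(NSTableColumn *)aColumn
        row:(NSInteger)rowIndex
    {
        return [[recipes objectAtIndex:rowIndex] name];
    }
    // Action adaptor: translate a View event into Model behaviour, keeping
    // GLRecipe ignorant of buttons, tables and the rest of AppKit.
    - (IBAction)doubleServings:(id)sender
    {
        GLRecipe *recipe = [recipes objectAtIndex:[tableView selectedRow]];
        [recipe scaleToServings:[recipe servings] * 2];
        [tableView reloadData];
    }
    @end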


So let me finish by recapping what I think the Controller layer is. It's definitely an adaptor between Views and Models. But depending on who you ask and what software you're looking at, it could also be a decorator for some custom view behaviour, and maybe a service for managing the dynamic state of some model entities. To what extent that matters depends on whether it gets in the way of effectively writing the software you need to write.

Saturday, January 03, 2009

Quote of the year (so far)

From David Thornley via StackOverflow:

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.

I couldn't agree more. Oh, wait, I could. *thud* There it goes.

Monday, December 01, 2008

You keep using that word. I do not think it means what you think it means.

In doing a little audience research for my spot at MacDev 2009, I've discovered that the word "security" to many developers has a particular meaning. It seems to be consistent with "hacker-proof", and as it could take most of my hour to set the record straight in a presentation context, here instead is my diatribe in written form. Also in condensed form; another benefit of the blog is that I tend to want to wrap things up quickly as the hour approaches midnight.

Security has a much wider scope than keeping bad people out. A system (any system, assume I'm talking software but I could equally be discussing a business process or a building or something) also needs to ensure that the "good" people can use it, and it might need to respond predictably, or to demonstrate or prove that the data are unchanged aside from the known actions of the users. These are all aspects of security that don't fit the usual forbiddance definition.

You may have noticed that these aspects can come into conflict, too. Imagine that with a new version of OS X, your iMac no longer merely takes a username and password to log a user in, but instead requires that an Apple-approved security guard - who, BTW, you're paying for - verifies your identity in an hour-long process before permitting you use of the computer. In the first, "hacker-proof" sense of security, this is a better system, right? We've now set a much higher bar for the bad guys to leap before they can use the computer, so it's More Secure™. Although, actually, it's likely that for most users this behaviour would just get on their wick really quickly as they discover that checking Twitter becomes a slow, boring and expensive process. So in fact by over-investing in one aspect of security (the access control, also sometimes known as identification and authorisation) my solution reduces the availability of the computer, and therefore the security is actually counter-productive. Whether it's worse than nothing at all is debatable, but it's certainly a suboptimal solution.

And I haven't even begun to consider the extra vulnerabilities that are inherent in this new, ludicrous access control mechanism. It certainly looks to be more rigorous on the face of things, but exactly how does that guard identify the users? Can I impersonate the guard? Can I bribe her? If she's asleep or I attack her, can I use the system anyway? Come to that, if she's asleep then can the user gain access? Can I subvert the approval process at Apple to get my own agent employed as one of the guards? What looked to be a fairly simple case of a straw-man overzealous security solution actually turns out to be a nightmare of potential vulnerabilities and reduced effectiveness.

Now I've clearly shown that having a heavyweight identification and authorisation process with a manned guard post is useless overkill as far as security goes. This would seem like a convincing argument for removing the passport control booths at airports and replacing them with a simple and cheap username-and-password entry system, wouldn't it? Wouldn't it?

What I hope that short discussion shows is that there is no such thing as a "most secure" application; there are applications which are "secure enough" for the context in which they are used, and there are those which are not. But the same solution presented in different environments or for different uses will push the various trade-offs in desirable or undesirable directions, so that a system or process which is considered "secure" in one context could be entirely ineffective or unusable in another.

Tuesday, November 04, 2008

More on MacDev

Today is the day I start preparing my talk for MacDev 2009. Over the coming weeks I'll likely write some full posts on the things I decide not to cover in the talk (it's only an hour, after all), and perhaps some teasers on things I will be covering (though the latter are more likely to be tweeted).

I'm already getting excited about the conference, not only because it'll be great to talk to so many fellow Mac developers but due to the wealth of other sessions which are going to be given. All of them look really interesting though I'm particularly looking forward to Bill Dudney's Core Animation talk and Drew McCormack's session on performance techniques. I'm also going to see if I can get the time to come early to the user interface pre-conference workshop run by Mike Lee; talking to everyone else at that workshop and learning from Mike should both be great ways to catch up on the latest thoughts on UI design.

By the way, if you're planning on going to the conference (and you may have guessed that I recommend doing so), register early because the tickets are currently a ton cheaper. Can't argue with that :-).

Sunday, September 28, 2008

MacDev 2009!

It's a long way off, but now is a good time to start thinking about the MacDev '09 conference, organised by the inimitable Scotty of the Mac Developer Network. This looks like being Europe's closest answer to WWDC, but without all those annoying "we call this Interface Builder, and we call this Xcode" sessions. Oh, and a certain Sophist Mac software engineer will be talking about building a secure Cocoa application.

Friday, July 18, 2008

Designing a secure Cocoa application

That's the title of next month's CocoaHeads Swindon, and I'll be leading the presentation/discussion. So if you want to learn a little about how to ensure your Cocoa app doesn't give away the keys to the kingdom, or have some experiences to share with the rest of the group, come along! We'll be at the Glue Pot, which is nice and near the train station as well as reasonably close to a car park. We'll be congregating at 7:00 but will wait for everyone to be settled with a beer in their hand before starting ;-).

Monday, May 05, 2008

Social and political requirements gathering

I was originally going to talk about API: Design Matters and Cocoa, but, and I believe the title of this post may give this away, I'm not going to now. That's made its way into OmniFocus though, so I'll do it sooner or later. No, today I'm more likely to talk about The Cathedral and the Bazaar, even though that doesn't seem to fit the context of requirements gathering.


So I've been reading a few papers on Requirements Engineering today, most notably Goguen's The Dry and the Wet. One of the more interesting and subtle conclusions to draw from such a source (or at least, it's subtle if you're a physics graduate who drifted into Software Engineering without remembering to stop being a physicist) is the amount of political influence in requirements engineering. Given that it costs a couple of orders of magnitude more to mend a broken requirement in maintenance than in requirements-gathering (Boehm knew this back in 1976), you'd think that analysts would certainly leave their own convictions at the door, and would try to avoid the "write software that management would like to buy" trap too.


There are, roughly speaking, three approaches to requirements elicitation. Firstly, the dry, unitarian approach where you assume that like a sculpture in a block of marble, there is a single "ideal" system waiting to be discovered and documented. Then there's the postmodern approach, in which any kind of interaction between actors and other actors, or actors and the system, is determined entirely by the instantaneous feelings of the actors and is neither static nor repeatable. The key benefit brought by this postmodern approach is that you get to throw out any idea that the requirements can be baselined, frozen, or in any other way rendered static to please the management.


[That's where my oblique CatB reference comes in - the Unitary analysis model is similar to ESR's cathedral, and is just as much of a straw man, in that 'purely' Unitary requirements are seldom seen in the real world; and the postmodern model is similar to ESR's bazaar, and is similarly infrequent in its pure form. The only examples I can think of where postmodern requirements engineering would be at all applicable are in social collaboration tools such as Facebook or Git.]


Most real requirements engineering work takes place in the third, intermediate realm; that which acknowledges that there is a plurality among the stakeholders identified in the project (i.e. that the end-user has different goals from his manager, and she has different goals than the CEO), and models the interactions between them in defining the requirements. Now, in this realm software engineering goes all quantum; there aren't any requirements until you look for them, and the value of the requirements is modified by the act of observation. A requirement is generated by the interaction between the stakeholders and the analyst, it isn't an intrinsic property of the system under interaction.


And this is where the political stuff comes in. Depending on your interaction model, you'll get different requirements for the same system. For instance, if you're of the opinion that the manager-charge interaction takes on a Marxist or divisive role, you'll get different requirements than if you use an anarchic model. That's probably why Facebook and Lotus Notes are completely different applications, even though they really solve the same problem.


Well, in fact, Notes and Facebook solve different problems, which brings us back to a point I raised in the second paragraph. Facebook solves the "I want to keep in contact with a bunch of people" problem, while Notes solves the "we want to sell a CSCW solution to IT managers" problem. Which is itself a manifestation of the political problem described over the last few paragraphs, in that it represents a distortion of the interaction between actors in the target environment. Of course, even when that interaction is modelled correctly (or at least with sufficient accuracy and precision), it's only valid as long as the social structure of the target environment doesn't change - or some other customer with a similar social structure comes along ;-)


This is where I think that the Indie approach common in Mac application development has a big advantage. Many of the Indie Mac shops are writing software for themselves and perhaps a small band of friends, so the only distortion of the customer model which could occur would be if the developer had a false opinion of their own capabilities. There's also the possibility to put too many "developer-user" features in, but as long as there's competition pushing down the complexity of everybody's apps, that will probably be mitigated.

Thursday, May 01, 2008

The Dock should be destroyed, or at least changed a lot

I found an article about features Windows should have but doesn't, which I originally got to from OSNews' commentary on the feature list. To quote the original article:


The centerpiece of every Mac desktop is a little utility called the Dock. It's like a launchpad for your most commonly used applications, and you can customize it to hold as many--or as few--programs as you like. Unlike Windows' Start Menu and Taskbar, the Dock is a sleek, uncluttered space where you can quickly access your applications with a single click.

Which OSNews picked up on:


PCWorld thinks Windows should have a dock, just like Mac OS X. While they have a point in saying that Windows' start menu and task bar are cumbersome, I wouldn't call the dock a much better idea, as it has its own set of problems. These two paradigms are both not ideal, and I would love someone to come up with a better, more elegant solution.

The problem I have with the Dock (and had with the LaunchPad in OS/2, the switcher in classic Mac OS, and actually less so with the task bar, though that and the Start Menu do suffer this problem) is that their job basically involves allowing the internal structure of the computer to leak into the user's experience. Do I really want to switch between NeoOffice Writer, KeyNote and OmniOutliner, or do I want to switch between the document I'm writing, the presentation I'm giving about the paper and the outline of that paper? Actually the answer is the latter, the fact that these are all in different applications is just an implementation detail.


So why does the task bar get that right? Well, up until XP when MS realised how cluttered that interface (which does seem to have been lifted from the NeXT dock) was getting, each window had its own entry in the task bar. Apart from the (IMO, hideously broken) MDI paradigm, this is very close to the "switch between documents" that I actually want to perform. The Dock and the XP task bar have similar behaviour, where you can quickly switch between apps, or with a little work can choose a particular document window in each app. But as I said, I don't work in applications, I work in documents. This post is a blog post, not a little bit of MarsEdit (in fact it will never be saved in MarsEdit because I intend to finish and publish it in one go), the web pages I referenced were web pages, not OmniWeb documents, and I found them from an RSS feed, not a little bit of NetNewsWire. These are all applications I've chosen to view or manipulate the documents, but they are a means, not an end.


The annoying thing is that the Dock so flagrantly breaks something which other parts of Mac OS X get correct. The Finder uses Launch Services to open documents in whatever app I chose, so that I can (for instance) double-click an Objective-C source file and have it open in Xcode instead of TextEdit. Even though both apps can open text files, Finder doesn't try to launch either of them specifically, it respects the fact that what I intend to do is edit the document, and how I get there is my business. Similarly the Services menu lets me take text from anywhere and do something with it, such as creating an email, opening it as a URL and so on. Granted some app authors break this contract by putting their app name in the Service name, but by and large this is a do something with stuff paradigm, not a use this program to do something one.


Quick Look and Spotlight are perhaps better examples. If I search for something with Spotlight, I get to see that I have a document about frobulating doowhackities, not that I have a Word file called "frobulating_doowhackities.doc". In fact, I don't even necessarily have to discover where that document is stored; merely that it exists. Then I hit space and get to read about frobulating doowhackities; I don't have to know or care that the document is "owned" by Pages, just that it exists and I can read it. Which really is all I do care about.

Thursday, April 24, 2008

Yeah, we've got one of those

Title linkey (which I discovered via slashdot) goes to an interview in DDJ with Paul Jansen, the creator of the TIOBE Programmer Community Index, which ranks programming languages according to their web presence (i.e. the size of the community interested in those languages). From the interview:


C and C++ are definitely losing ground. There is a simple explanation for this. Languages without automated garbage collection are getting out of fashion. The chance of running into all kinds of memory problems is gradually outweighing the performance penalty you have to pay for garbage collection.

So, to those people who balked at Objective-C 2.0's garbage collection, on the basis that it "isn't a 4GL", I say who cares? Seemingly, programmers don't - or at least a useful subset of Objective-C programmers don't. I frequently meet fellow developers who believe that if you don't know which sorting algorithm to use for a particular operation, and how to implement it in C with the fewest temporary variables, you're not a programmer. Bullshit. If you don't know that, you're not a programmer who should work on a foundation framework, but given the existence of a foundation framework the majority of programmers in the world can call list.sort() and have done with it.


Memory management code is in the same bucket as sorting algorithms - you don't need for everybody to be good at it, you need for enough people to be good at it that everyone else can use their memory management code. Objective-C 2.0's introduction of a garbage collector is acknowledgement of this fact - look at the number of retain/release-related problems on the cocoa-dev list today, to realise that adding a garbage collector is a much bigger enhancement to many developers' lives than would be running in a VM, which would basically go unnoticed by many people and get in the way of the others trying to use Instruments.
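
For anyone who hasn't spent time in those cocoa-dev threads, here is a small, hypothetical example of the sort of slip being talked about - the class and its ivar are invented, but the bug pattern is the classic one that a collector removes entirely:

    #import <Foundation/Foundation.h>

    @class GLRecipe;   // hypothetical model class; any object-typed ivar will do

    @interface GLRecipeHolder : NSObject
    {
        GLRecipe *recipe;
    }
    - (void)setRecipe:(GLRecipe *)newRecipe;
    @end

    @implementation GLRecipeHolder
    - (void)setRecipe:(GLRecipe *)newRecipe
    {
        // Classic manual-memory-management bug: if newRecipe is the same object
        // as recipe, the release below can deallocate it before the retain runs,
        // leaving a dangling pointer.
        [recipe release];
        recipe = [newRecipe retain];
        // The safe manual ordering is to retain the new value before releasing
        // the old one; with garbage collection enabled the whole method is just
        // "recipe = newRecipe;" and this entire class of defect goes away.
    }
    @end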


Of course, Objective-C and Apple's developer tools have a long history of moving from instrumental programming (this is what the computer must do) to declarative programming (this is what I am trying to achieve; the computer must do it). Consider Interface Builder. While Delphi programmers could add buttons to their views, they then had to override that button's onClick() method to add some behaviour. IB and the target-action approach allow the programmer to say "when this button is clicked, that happens" without having to express this in code. This is all very well, but many controls on a view are used to both display and modify the value of some model-level property, so instead of writing lots of controller code, let's just declare that this view binds to that model, and accesses it through this controller (which we won't write either). In fact, rather than a bunch of boilerplate storage/accessors/memory management model-level code, why don't we just say that this model has that property and let someone who's good at writing property-managing code do the work for us? Actually, coding the model seems a bit silly; let's just say that we're modelling this domain entity and let someone who's good at entity modelling do that work, too.
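
As a small, hypothetical illustration of that shift - declaring the property rather than coding it - compare the Objective-C 2.0 declaration below with the storage, accessor and retain/release boilerplate it replaces (GLRecipe and its properties are invented for the example):

    #import <Foundation/Foundation.h>

    @interface GLRecipe : NSObject
    {
        NSString *name;          // explicit ivars, as required on 32-bit Mac OS X
        NSUInteger servings;
    }
    // Declarative: state that the model *has* these properties...
    @property (copy) NSString *name;
    @property NSUInteger servings;
    @end

    @implementation GLRecipe
    // ...and let the compiler-generated accessors, written once by someone
    // who is good at property-managing code, do the work.
    @synthesize name;
    @synthesize servings;
    @end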


In fact, with only a little more analysis of the mutation of Objective-C and the developer tools, we could probably build a description of the hypothetical Cen Kase, the developer most likely to benefit from developing in Cocoa. I would expect a couple of facts to hold; firstly that Cen is not one of the developers who believes that stuff about sorting algorithms, and secondly that the differences between my description of Cen and the description used by Apple in their domain modelling work would fit in one screen of FileMerge on my iBook.

Thursday, January 17, 2008

Project: Autonomous Revolutionary Goldfish

I was going to write, am still going to write, about how silly project names get bandied about in the software industry. But in researching this post (sorry blogosphere, I've let you down) I found that the software-generated Gantt chart was patented by Fujitsu in the US in 1998, which to me just explains everything that is wrong with the way the US patent system is applied to software. For reference, Microsoft Project was written in 1987 (although it is not strictly prior art for the patent; Project does everything in its power to prevent the user from creating a Gantt chart, in my experience).


Anyway, why is it that people care more about the fact that they're going to be using Leopard, Longhorn, Cairo, Barcelona or Niagara than about what any one of those is? As discussed in [1], naming software projects (though really I'm talking about projects in the general sense of collections of tasks in order to complete a particular goal) in the same way you might name your pet leads to an unhealthy psychological attachment to the project, causing it to develop its own (perceived) personality and vitality which can cause the project to continue long after it ought to have been killed. For every Cheetah, there's a Star Trek that didn't quite make it. And why should open source projects like Firefox or Ubuntu GNU/Linux need "code names" if their innards are supposed to be on public display?


I've decided that I know best, of course. My opinion is that, despite what people may say about project names being convenient shorthand to assist discussion, naming your project in an obtuse way splits us into the two groups which humanity adores: those of us who know, and those of you who don't. The circumstance I use to justify this is simple: if project names are mnemonics, why aren't the projects named in a mnemonic fashion? In what way does Rhapsody describe "port of OPENSTEP/Mach to PowerPC with the Platinum look and feel"? Such cultish behaviour of course leads directly to the point made in the citation; because we don't want to be the people in the know of something not worth knowing, we tend to keep our dubiously-named workflow in existence for far longer than could be dispassionately justified.


Of course, if I told you the name of the project I'm working on, you wouldn't have any idea what I'm working on ;-).


[1] Pulling the Plug: Software Project Management and the Problem of Project Escalation, Mark Keil. MIS Quarterly, Vol. 19, No. 4 (Dec., 1995), pp. 421-447

Sunday, August 26, 2007

Holding a problem in your head

The linked article (via Daring Fireball) describes the way that many programmers work - by loading the problem into their head - and techniques designed to manage and support working in such a way. Paul Graham makes the comparison with solving mathematics problems, which is something I can (and obviously have decided to) talk about due to the physics I studied and taught. Then I also have things to say about his specific recommendations.


In my (limited) experience, both the physicist in me and the programmer in me like to have a scratch model of the problem available to refer to. Constructing such a model in both cases acts as the aide memoire to bootstrap the problem domain into my head, and as such should be as quick, and as nasty, as possible. Agile Design has a concept known as "just barely good enough", and it definitely applies at this stage. With this model in place I now have a structured layout in my head, which will aid the implementation, but I also have it scrawled out somewhere that I can refer to if in working on one component (writing one class, or solving one integral) I forget a detail about another.


Eventually it might be necessary to have a 'posh' layout of the domain model, but this is not yet the time. In maths as in computing, the solution to the problem actually contains the structure you came up with, so if someone wants to see just the structure (of which more in a later paragraph) it can be extracted easily. The above statement codifies why in both cases I (and most of the people I've worked with, in each field) prefer to use a whiteboard rather than a software tool for this bootstrap phase. It's impossible - repeat, impossible - to braindump as quickly onto a keyboard, mouse or even one of those funky tablet stylus things as it is with an instantly-erasable pen on a whiteboard. Actually, in the maths or physics realm, there's nothing really suitable anyway. Tools like Maple or Mathematica are designed to let you get the solution to a problem, and really only fit into the workflow once you've already defined the problem - there's no adequate way to have large chunks of "magic happens here, to be decided at a later date". In the software world, CASE tools cause you to spend so long thinking about use-cases, CRC definitions or whatever that you actually have to delve into nitty-gritty details while doing the design; great for software architects, bad for the problem bootstrap process. Even something like OmniGraffle can be overkill; it's very quick, but I generally only switch to it if I think my boardwriting has become illegible. To give an example, I once 'designed' a tool I needed at Oxford Uni with boxes-and-clouds-and-lines on my whiteboard, then took a photo of the whiteboard which I set as the desktop image on my computer. If I got lost, then I was only one keystroke away from hiding Xcode and being able to see my scrawls. The tool in question was a WebObjects app, but I didn't even open EOModeler until after I'd taken the photo.




Incidentally, I expect that the widespread use of this technique contributes to "mythical man-month" problems in larger software projects. A single person can get to work really quickly with a mental bootstrap, but then any bad decisions made in planning the approach to the solution are unlikely to be adequately questioned during implementation. A small team is good because with even one other person present, I discuss things; even if the other person isn't giving feedback (because I'm too busy mouthing off, often) I find myself thinking "just why am I trying to justify this heap of crap?" and choosing another approach. Add a few more people, and actually the domain model does need to be well-designed (although hopefully they're then all given different tasks to work on, and those sub-problems can be mentally bootstrapped). This is where I disagree with Paul - in recommendation 6, he says that the smaller the team the better, and that a team of one is best. I think a team of one is less likely to have internal conflicts of the constructive kind, or even think past the first solution they get consensus on (which is of course the first solution any member thinks of). I believe that two is the workable minimum team size, and that larger teams should really be working as permutations (not combinations) of teams of two.


Paul's suggestion number 4, to rewrite often, is almost directly from the gospel according to XP, except that in the XP world the recommendation is to identify things which can be refactored as early as possible, and then refactor them. Rewriting for the hell of it is not good from the pointy-haired perspective because it means time spent with no observable value - unless the rewrite is because the original was buggy, and the rewritten version is somehow better. It's bad for the coder because it takes focus away from solving the problem and onto trying to mentally map the entire project (or at least, all of it which depends on the rewritten code); once there's already implementation in place it's much harder to mentally bootstrap the problem, because the subconscious automatically pulls in all those things about APIs and design patterns that I was thinking about while writing the initial solution. It's also harder to separate the solution from the problem once the solution already exists.


The final sentence of the above paragraph leads nicely into discussion of suggestion 5, writing re-readable code. I'm a big fan of comment documentation like headerdoc or doxygen because not only does it impose readability on the code (albeit out-of-band readability), but also because if the problem-in-head approach works as well as I think, then it's going to be necessary to work backwards from the solution to the pointy-haired bits in the middle required by external observers, like the class relationship diagrams and the interface specifications. That's actually true in the maths/physics sphere too - very often in an exam I would go from the problem to the solution, then go back and show my working.
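
For anyone unfamiliar with the style being described, here's a short, hypothetical example in HeaderDoc markup (Doxygen accepts very similar comments); the class and method are invented for illustration:

    #import <Foundation/Foundation.h>

    /*!
     @class GLRecipe
     @abstract A single recipe in the user's collection.
     @discussion Records the intent behind the class in the header itself, so
     interface specifications and class-relationship diagrams can be produced
     after the fact, working backwards from the solution.
     */
    @interface GLRecipe : NSObject

    /*!
     @method scaleToServings:
     @abstract Scales every ingredient quantity to yield a new number of servings.
     @param newServings The number of servings the recipe should produce.
     */
    - (void)scaleToServings:(NSUInteger)newServings;

    @end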
