Showing posts with label process.

Monday, June 05, 2017

Is Waterfall the Opposite of Agile?


Agilists sometimes behave unreasonably by pummeling the Waterfall approach. Personally, I think such evangelists could better spend their effort producing solid, relevant code, but evidently they are on a crusade and need an enemy. If they need an enemy of Agile, however, they could have made a better choice.

There's nothing wrong with the Waterfall approach—if applied appropriately. For Waterfall to work well, you need to solve a problem that's not destined to change much or, ideally, not to change at all. Some Agilists claim there is no such unchanging problem, but they're merely showing their lack of experience. I've seen a number of such invariant problems, and they yield very readily to a properly applied Waterfall development.

For example, Erik, a former student of mine, made a lucrative business of converting COBOL programs to new COBOL programs adapted to new versions of the COBOL compiler. Erik's customers wanted assurance that nothing would be changed in the conversion. This was a perfect situation for a simple Waterfall approach, one that Erik could perform profitably at a fixed price and schedule.

That said, the number of such invariant problems is not large, and it's usually difficult to know at the beginning if a problem will turn out to be so stable. In Erik's case, some of his customers would decide midway that they wanted a few "tiny" improvements in the converted application. Erik controlled that situation by charging outrageous fees for even the simplest modification. Most of us, however, try to control this situation by employing some Agile approach.

Let's be honest with ourselves: one consequence of an Agile approach is the loss of our ability to work to a fixed cost on a fixed schedule. Erik could do that, but only in a few carefully controlled situations. Many managers frustrate themselves over this lack of control and blame it on Agile. What's really to blame, however, is their inability to control the world in which they live. Things do change, and much of the time it's these very managers who instigate the change.

What do frustrated managers do? Quite often, they elevate their attempts to control the change by making rules. They may start using a pure Waterfall approach, but as their frustration with changes grows, they may add a Change Control Board, or change reviews, or a Change Tsar, or any of a number of other tactics. And, when those tactics fail to produce absolute predictability, they add more of the same kinds of rules and their supporting tactics.

After a while, these rules upon rules produce an approach that, though called "Waterfall," is actually something quite different—something for which we so far have no accurate name. This "something" is what Agile is responding to, so I suggest we name it.

What are these cobbled-together approaches like? First of all, they create a sad and dismal mood among those poor developers condemned to use them. When I visit a new client, I can generally detect the use of such an approach while I take a stroll through the premises. I can even detect such approaches over the phone. How? Simple: the mood of my clients is mournful, gloomy, sad, unhappy, doleful, glum, melancholy, woeful, miserable, woebegone, forlorn, somber, solemn, serious, sorrowful, morose, dour, cheerless, joyless, and dismal.
That's quite a sobering list of adjectives, but that's what I can sense in many so-called "Waterfall" environments. Perhaps you recognize the list, but in any case, you can find that list in your dictionary as synonyms for the rare word, "lugubrious."

Perhaps the word "lugubrious" is unfamiliar, but that's good, because we won't often find it used in other contexts. Besides, it's a rather onomatopoetic word—a word that phonetically imitates, resembles or suggests the source of the sound or situation it describes. That's why Agile was invented—to replace those mournful, gloomy, rule-dominated approaches with brain-driven judgments of the actual builders.

So let's be truly Agile and stop bashing the true Waterfall approach. Instead, let's turn our contempt on every Lugubrious approach—or, to make a noun from the adjective, Lugubriosity. Maybe this more accurate name will help us defend our Agile projects from frustrated managers' attempts to smother us with yet more rules.

Or, to paraphrase De Morgan, who in turn paraphrased Swift:

Great rules have little rules upon their backs to spite 'em,
And little rules have lesser rules, and so ad infinitum.


Never forget that's why we do Agile, not to dry out Waterfalls but to defeat Lugubriosity.

For more thoughts on the Agile approach, see the posts that follow.

Monday, October 31, 2016

What's the most complex thing about software development?

Interesting question.

So far, on Quora.com, there have been four excellent answers to this question, discussing:
- the confusing role of people,
- the requirements problems,
- the interactions with the physical world.

Each of these factors certainly makes software development more complex, and processes such as Agile are designed to cope with this complexity. But the ultimate complexity factor is software testing.

Why testing? In the software development literature, testing is not usually treated as a glamorous part of development, but when we're testing, we're up against the Second Law of Thermodynamics, which warns us that perfection is ultimately unobtainable.

So, even if we absolutely knew all the requirements (which we can't, of course), kept all the human factors under control (also impossible), and knew exactly all the physical properties of the real world (once more, impossible), we would still never be able to perform the infinite number of tests to cover all possible situations.
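Not literally infinite, perhaps, but effectively so. Here's a back-of-the-envelope sketch in Python (my illustrative numbers, not anything from the post) for nothing more than a pure function of two 32-bit integers:

```python
# Exhaustively testing a pure function of two 32-bit integer inputs.
cases = (2 ** 32) ** 2            # every possible pair of inputs: 2**64
tests_per_second = 10 ** 9        # a wildly optimistic test rig
years = cases / tests_per_second / (60 * 60 * 24 * 365)
print(f"{cases:.2e} cases, roughly {years:,.0f} years to run them all")
```

That's roughly 585 years for one tiny function, before state, timing, and all the human factors above multiply the space further.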

In other words, the software could still surprise us at any time. That's what I call complexity.

Of course, we can still work hard to solve these other problems. On requirements, for instance, see our Exploring Requirements books.

But no matter how hard you try, you'll still be faced with the testing problem. To understand this problem and what you can do to reduce (but not eliminate) it, take a look at Perfect Software and Other Illusions about Testing.

Sunday, October 11, 2015

One Little Change

Note: From time to time, I will be adding material to my new book, Errors: Bugs, Boo-boos, Blunders. All purchasers of the book from Leanpub.com will receive all of the new material free of additional charge. The following chapter will be added soon, along with some feedback from my earliest readers.

Dani, my wife, is an anthropologist by profession, but now has become a world-class dog trainer. <http://home.earthlink.net/~hardpretzel/DaniDogPage.html> The combination of the two produces some interesting ideas. For instance, she told me about the way attack dogs are trained to keep them from being dangerous. As usual, the big problem with attack dogs is not the dogs, but the people.

When an untrained person hears that a dog is attack-trained, chances are about one in three that they'll turn to the dog and command, "Kill!" As a joke. Or just to see what the dog will do. To protect against this idiotic human behavior, trainers never use command words like "kill." Instead, they use innocent words, like "breathe," that would never be given in jest in a command voice.

This kind of protection is needed because a trained dog is an information processing machine, in some ways very much like a computer. A single arbitrary command could mean anything to a dog, depending on how it was trained—or programmed. The arbitrariness does not matter much if it's not an attack dog. The owner may be embarrassed when Rover heels on the Stay command, but nothing much is lost. If, however, Rover is trained to go for the throat, it's an entirely different matter.

It's the same with computers. Because they are programmed, and because the many meanings in a program are arbitrary, a single mistake can turn a helpful computer into one that can attack and kill an entire enterprise. That's why I've never understood managers who take a casual approach to software maintenance. Time and again I hear managers explain that the maintenance can be done by less intelligent people operating without all the formal controls of development—because it's not very critical. And no amount of argument seems able to convince them differently—until they have a costly maintenance blunder.

Fortunately, costly maintenance blunders are rather common, so some managers are learning—but the tuition is enormous. I keep a list of the world's most expensive programming errors, and all of the top ten are maintenance blunders. Some have cost over a billion dollars each, and some have led to deaths. Often, the blunder involved changing a single digit in a previously functioning program.

In those horrendous losses, the change was deemed "so trivial" it was instituted casually by a supervisor telling a low-level maintenance programmer to "change that digit"—with no written instructions, no test plan, nobody to review the change, and, indeed, no controls whatsoever between the one programmer and the organization's day-to-day operations. It was exactly like having an attack dog trained to respond to KILL—or even HELLO.

I've done some studies, confirmed by others, about the chances of maintenance changes being done incorrectly. Contrary to simple-minded intuition, it turns out that "tiny" changes are more likely than larger ones to be flawed. Roughly, a one-line change has about a 50/50 chance of producing an error, while a 20-line change is similarly wrong only about one-third of the time.
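To see what those rates imply in aggregate, here's a bit of illustrative arithmetic. The batch sizes are my invention, and I'm treating each change as independent:

```python
# Error rates quoted above: roughly 50% for a one-line change,
# about 33% for a 20-line change.
p_one_line = 0.50
p_twenty_lines = 0.33

# 100 lines' worth of maintenance done as 100 casual one-line changes:
print(100 * p_one_line)        # expect about 50 flawed changes

# The same 100 lines done as five deliberate 20-line changes:
print(5 * p_twenty_lines)      # expect fewer than 2 flawed changes
```

The point is not the exact numbers but the direction: the "trivial" changes, done casually, dominate the error count.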

Developers are often shocked to see this high one-line rate, for two reasons. In the first place, development changes are simpler because they are being made to cleaner, smaller, better-structured code—code that has not been changed many times before, and so does not have unexpected linkages. Such linkages were involved in many of my top-ten disasters.

Secondly, the consequences of an erroneous change during development are smaller because the error can be corrected without affecting real operations. Thus, developers don't take that much notice of their errors, so they tend to underestimate their frequency. In development, you simply fix errors and go on your merry way. Not so in maintenance, where you must mop up the damage the error causes. Then you spend countless hours in meetings explaining why such an error will never happen again—until the next time.

For these two reasons, developers interpret such high rates of maintenance errors as indications of the ignorance or inexperience of maintenance programmers. They're wrong. Maintenance programmers are perfectly capable of doing better work than their record with tiny changes seems to indicate. Their competence is proved by the decrease in error rates as the size of the change increases.

If tiny changes are not taken seriously, they are done carelessly and without proper controls. A higher rate of error is an inevitable consequence.

How many times have you heard a developer say, "No problem! It's just a small change. All I have to do is change one line!"?

That statement would be sensible if "small" changes were truly small—if software maintenance were actually like maintenance of an apartment building. The janitor can change one washer in a dripping sink without much risk of causing the building to collapse. It's not safe to make the same assumption for a program once it's in production.

Whoever coined the term "maintenance" for computer programs was as careless and unthinking as the person who trains an attack dog to kill on the command, KILL. With the wisdom of hindsight, I would suggest that a maintenance programmer is more like a brain surgeon than a janitor. Would maintenance be easier to manage well if it were called "software brain surgery"?


Think about it this way. Suppose you had a bad habit—like saying KILL to attack dogs. Would you go to a brain surgeon and say, "Just open up my skull, Doc, and remove that one little habit. It's just a small change! Just a little maintenance job!"

Sunday, April 26, 2015

Ending the Requirements Process

This essay is the entire chapter 5 of the second volume of Exploring Requirements.
The book can be purchased as a single volume, or volumes 1 and 2 as a bundle, or as part of the People Skills bundle. It's also available from various vendors with both volumes as one bound book.



The Book of Life begins with a man and woman in a garden. It ends with Revelations.—Oscar Wilde, A Woman of No Importance
25.1 The Fear of Ending
The requirements process begins with ambiguity. It ends not with revelations, but with agreement. More important, it ends.
But how does it end? At times, the requirements process seems like Oscar Wilde when he remarked, "I was working on the proof of one of my poems all the morning and took out a comma. In the afternoon, I put it back again."
A certain percentage, far too large, of development efforts never emerge from the requirements phase. After two, or five, or even ten years, you can dip into the ongoing requirements process and watch them take out a comma in the morning and put it back again in the afternoon. Far better the comma should be killed during requirements than allowed to live such a lingering death.
Paradoxically, it is the attempt to finish the requirements work that creates this endless living death. The requirements phase ends with agreement, but the requirements work never ends until the product is finished. There simply comes a moment when you decide you have enough agreement to risk moving on into the full design phase.
25.2 The Courage to End It All
Nobody can tell you just when to step off the cliff. It's simply a matter of courage. Whenever an act requires courage, you can be sure that people will invent ways of reducing the courage needed. The requirements process is no different, and several inventions are available to diminish the courage needed to end it all.
25.2.1 Automatic design and development
One of the persistent "inventions" to substitute for courage is some form of automatic design and/or development.
In automatic development, the finished requirements are the input to an automatic process, the output of which is the finished product. There are, today, a few simple products that can be produced in approximately this way. For instance, certain optical lenses can be manufactured automatically starting from a statement of requirements.
In terms of the decision tree, automatic development is like a tree with a trunk, one limb, no branches or twigs, and a single leaf (Figure 25-1). With such a system, finishing requirements is not a problem. When you think you might be finished, you press the button and take a look at the product that emerges. If it's not right, then you weren't finished with the requirements process.
Figure 25-1. Automatic development is like a tree with a trunk, one limb, no branches or twigs, and a single leaf.
No wonder this appealing dream keeps recurring. It's exactly like the age-old story of the genie in the bottle, with no limit on the number of wishes. For those few products where such a process is in place, the only advantage of a careful requirements process is to save the time and money of wasted trial products. If trial products are cheap, then we can be quite casual about finishing requirements work.
25.2.2 Hacking
Automatic development is, in effect, nothing but requirements work—all trunk and limb. At the other end of the spectrum is hacking, development work with no explicit requirements work. When we hack, we build something, try it out, modify it, and try it again. When we like what we have, we stop. In terms of the decision tree model, hacking is not a tree at all, but a bush: all branches, twigs, and leaves, with no trunk or major limb (Figure 25-2).
Figure 25-2. Hacking is not a tree at all, but a bush—all branches, twigs, and leaves, with no trunk or major limb.
Pure hacking eliminates the problem of ending requirements work, because there is no requirements work. On the other hand, we could conceive of hacking as pure requirements work—each hack is a way of finding out what we really want.
Almost any real project, no matter how well planned and managed, contains a certain amount of hacking, because the real world always plays tricks on our assumptions. People who abhor the idea of hacking may try to create a perfect requirements process. They are the very people who create "living death" requirements processes.
25.2.3 Freezing requirements
Paradoxically, hacking and automatic development are exactly the same process from the point of view of requirements. They are the same because they do not distinguish requirements work from development. Most real-world product development falls somewhere between pure hacking and automatic development, which is why we have to wrestle with the problem of ending.
Even when we have every requirement written down in the form of an agreement, we cannot consider the requirements process to be ended. We know those agreements will have to change because in the real world, assumptions will change. Some people have tried to combat their fear of changing assumptions by imposing a freeze on requirements. They move into the design phase with the brave declaration,
No changes to requirements will be allowed.
Those of us who understand the nature of the real world will readily understand why a freeze simply cannot work. We know of only one product development in which it was even possible to enforce the freeze. In that instance, a software services company took a contract to develop an inventory application for a manufacturer. The requirements were frozen by being made part of the contract, and eighteen months later, the application was delivered. Although it met all the contracted requirements, the application was rejected.
This frozen product was totally unusable because in eighteen months it had become totally removed from what was really required in the here-and-now. The customer refused to pay, and the software services company threatened legal action. The customer pointed out how embarrassing it would be to a professional software company to have its "freeze fantasy" exposed in a public courtroom.
Eventually, the two parties sat down to negotiate, and the customer paid about one-fourth of the software firm's expenses. At the end of this bellicose negotiation, someone pointed out that, with far less time than they had spent negotiating, they could have renegotiated the requirements as they went along, and both parties would have been happy.
25.2.4 The renegotiation process
The freeze idea is just a fantasy, designed to help us cope with our fear of closure. But we cannot fearlessly close the requirements phase unless we know there is some renegotiation process available. That's why agreeing to a renegotiation process is the last step in the requirements process.
Working out the renegotiation process is also a final test of the requirements process itself. If the requirements process has been well done, the foundation has been laid for the renegotiation process. The people have all been identified, and they know how to work well together. They have mastered all the techniques they will need in renegotiation, because they are the same techniques the participants needed to reach agreement in the first place.
25.2.5 The fear of making assumptions explicit
The agreement about renegotiation must, of course, be written down, and this act itself may strike fear in some hearts. Another way to avoid ending requirements is to avoid written agreements at the end of the requirements phase. Some designers are afraid of ending requirements because the explicit agreements would make certain assumptions explicit. An example may serve to make this surprising observation more understandable.
While working with the highway department of a certain state, we encountered the problem of what to do about a particularly dangerous curve on one of the state highways (see Figure 25-3). In an average year, about six motorists missed the curve and went to their death over a cliff. Because it was a scenic highway, it was neither practical nor desirable to eliminate the curve, but highway design principles indicated that a much heavier barrier would prevent wayward cars from going over the cliff.
Figure 25-3. What should be done about this dangerous curve?
Building the barrier seems an obvious decision, but there was another factor to consider. Perhaps once every three years, with a heavy barrier in place, one of these wayward cars would bounce off the barrier into a head-on collision with an oncoming vehicle. The collision would likely be fatal for all involved.
Now, on average, the number of people killed with the barrier in place would be perhaps one-fifth of those killed without the barrier, but the highway designers had to think of another factor. When a solo driver goes over the cliff, the newspapers will probably blame the driver. But what if a drunk driver bounces off the barrier and kills a family of seven who just happened to be driving in the wrong place at the wrong time? The headlines would shout about how the barrier caused the deaths of innocent people, and the editorials would scream for heads to roll at the highway department.
So, it's no wonder the highway engineers didn't want anything documented about their decision not to build the barrier. They were making life-and-death decisions in a way that covered their butts, and they could protect themselves by taking the position, "If I never thought about it, I'm not responsible for overlooking it in the design." And, if it was never written down, who could say they had thought of it?
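For the record, the "one-fifth" estimate in this story is simple expected-value arithmetic. Here's a sketch; only the six-per-year and once-every-three-years figures come from the story, and the per-collision death count is my assumption, chosen to match the one-fifth figure:

```python
# Expected annual fatalities, with and without the heavy barrier.
deaths_per_year_no_barrier = 6.0    # drivers over the cliff (from the story)
collisions_per_year = 1 / 3         # one bounce-back collision every 3 years
deaths_per_collision = 3.6          # assumed average across all vehicles involved

deaths_per_year_with_barrier = collisions_per_year * deaths_per_collision
print(deaths_per_year_with_barrier)                               # about 1.2
print(deaths_per_year_with_barrier / deaths_per_year_no_barrier)  # about 0.2
```

The arithmetic favored the barrier; the headlines did not. That gap is the whole point of the story.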
25.3 The Courage to Be Inadequate
Most engineers and designers react to this story by citing similar stories of their own to prove, yes, indeed, there are decisions it's better not to write down. We believe, however, this kind of pretense is an abuse of professional power, an abuse not necessary if we remember the proper role of the requirements process.
It's not for the designers to decide what is wanted, but only to assist the customers in discovering what they want. The highway designers should have documented the two sides of the issue, then gone to the elected authorities for resolution of this open requirements question. With guidance from those charged with such responsibilities, the engineers could have designed an appropriate solution.
But suppose the politicians came back with an impossible requirement, such as,
The highway curve must be redesigned so there will be no fatalities in five years.
Then the engineers would simply go back to their customers and state they knew of no solution that fit this requirement, except perhaps for a barricade preventing cars from using the highway. Yes, they might lose their jobs, but that's what it means to be a professional—never to promise what you know in advance you can't deliver.

The purpose of requirements work is to avoid making mistakes, and to do a complete job. In the end, however, you can't avoid all mistakes, and you can't be omniscient. If you can't risk being wrong—if you can't risk being inadequate to the task you've taken on—you will never succeed in requirements work. If you want the reward, you will have to take the risk.

Thursday, January 05, 2012

Change Artist Challenge #11: Putting Theory Into Practice

There's nothing more practical than a good theory. - Kenneth Boulding

Reading a book is one thing. Applying what you learn is quite another. If you don't apply it soon, it simply fades away. The same is true of any educational experience. If you come back from a class and don't start using some of the material, you may as well not have gone in the first place.

The Challenge
Your challenge is to review the chapters in any of the four Quality Software Management volumes concerning specifics of the Anticipating organization and consider each idea in terms of the artistry that you can use to introduce it to your organization. Try to create at least one specific action item that will advance the transformation to that way of doing things.

Experiences
1. I started a brown bag special-interest group on our new CASE tool as a place for people who were using it to share learnings, and as a low-risk place for those who weren't using it to find out about it. The hardest part for me—and the real challenge—was to be the first speaker. I haven't been a person who enjoys speaking in front of groups, but I got some support and made myself do it. The group now runs on its own—with little nudges from me once in a while—and there's no trouble getting speakers. It has tripled in size as our use of the tool has grown, and people think that without the group the tool would have died in the original group, or at least not spread.

2. I set out to measure something that would be useful to upper management and to the people whose work was being measured. After a few false starts, I hit upon measuring resolution time for failures found in test. I set up a system to capture this data from our bug database and to plot it automatically week by week. (A sketch of such a measurement appears after this list.) One of the surprising things it showed was the way the new configuration management system actually slowed down resolution time. Since I was advocating the new system, I was rather disappointed, but I resisted the temptation to fudge the figures. Management wanted to throw the system out, but I invoked the Satir Change Model to get a few weeks' grace period. With the help of some investigation into the causes of Chaos, the graph improved. In about three weeks, the resolution time was back to what it was before the tool, and after six weeks, the time was cut by 32%. This was the first time anyone had ever demonstrated the value of a new tool in our organization.

3. My challenge to myself was to open up information in my organization. To do this, I decided to be the model by using Public Project Progress Posters for the three projects I'm managing. I was surprised by the emotional reactions—mine and others'. I was apprehensive and defensive, yet proud of my courage. One of the other managers came into my office, shut the door, and started screaming obscenities at me for embarrassing him (because he wasn't going to post his progress). The people in the projects were generally accepting, though I spent a lot of time in the next two weeks explaining how to read the posters, what certain slippages meant, and what I was going to do about them. It was a lot more trouble than I anticipated, but now that things have settled down, it seems to be worth it.
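A note on the measurement in experience 2: here's a minimal sketch of that kind of week-by-week resolution-time plot, assuming a CSV export from the bug database. The file name and column names are my assumptions, not details from the report:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export from a bug tracker; column names are assumed.
bugs = pd.read_csv("bugs.csv", parse_dates=["found_in_test", "resolved"])
bugs["days_to_resolve"] = (bugs["resolved"] - bugs["found_in_test"]).dt.days

# Mean resolution time for each week, keyed by resolution date.
weekly = bugs.set_index("resolved")["days_to_resolve"].resample("W").mean()

weekly.plot(marker="o")
plt.ylabel("mean days to resolve")
plt.title("Resolution time for failures found in test")
plt.savefig("resolution_time.png")
```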

Reference

This post is part of the series adapted from the book Becoming a Change Artist.

Wednesday, December 14, 2011

Change Artist Challenge #10: Learning from History

The liberation of a tree is not the freedom from its roots. - Rabindranath Tagore
The Grand Tour shows you what's going on now, but perhaps more interesting to a change artist is how things got the way they are.

The Challenge

Your challenge is to discover the history of some practice that you consider non-productive.

Experience#1

Darn you! This assignment almost got me fired. I started questioning why we chose our LAN software, and then it came out that my boss was the one who made the study that led to the decision. We got into a BIG argument over what I considered a dumb choice that was really hurting communication around here. He gave me a copy of his original study (actually, he practically shoved it down my throat) and I grudgingly read it. I was halfway into it when I realized that they really had chosen the best that was available at that time. The system I was favoring didn't even exist then. I don't think the company that makes it even existed then. I didn't know that; I didn't even think of that. Well, I learned a couple of things:

• Don't argue with the boss until you have all your facts straight. (I suppose I knew this, but needed reinforcing.)

• Everybody really is doing the best they can, with what they have, at the time they do it.

• I'm likely to make the same mistake (if it really is a mistake) of not seeing far enough into the future.

• An apology actually works with my boss, and doesn't kill me (though it embarrasses me).

Experience#2

While studying how we used consultants in the past, I learned that we have a pattern of paying them a lot, putting in a lot of work with them, and then putting their reports on the shelf. I don't know what I'm going to do about this, but obviously something has to change. Perhaps we won't hire consultants any more, or we'll hire different ones, or we'll work with them differently. Maybe we're expecting too much from a report.

Experience#3

I found out why we put quarters in the bowl at meetings when somebody interrupts someone else. That started before I came to this group. Now we give that money to charity, but originally it was used for beer after the meeting. I've re-instituted the beer-sharing—we really needed some kind of team-building, or team-repairing like that. Don't worry, though. We still give the quarters to charity, and just take turns buying the beer.

Experience#4

I wanted to find out what really happened to the previous two process groups. I did. I'm going to make a few changes, right away.

Experience#5

Well, I couldn't do this assignment. I wanted to study the history of our weekly status meetings, but I couldn't find anyone who remembered how they got started. I couldn't find anyone who remembered why they got started. I couldn't even find anybody who knew why we were still doing them. So we're not doing them any more. But I didn't do the assignment.

Reference

This post is part of the series adapted from the book Becoming a Change Artist.

Thursday, November 10, 2011

Iterative Development: Some History

As an old guy who's been around computing since 1950, I'm often asked about the early history of computing. I appreciate efforts to capture some of our history, and try to contribute when my aging memory doesn't play tricks on me.

Back in 2003, Craig Larman and Victor R. Basili compiled an interesting article, Iterative and Incremental Development: A Brief History. I made several contributions to their history, but they did much, much more. Here's a small sample of what I told them:

We were doing incremental development as early as 1957, in Los Angeles, under the direction of Bernie Dimsdale [at IBM's Service Bureau Corporation]. He was a colleague of John von Neumann, so perhaps he learned it there, or assumed it as totally natural. I do remember Herb Jacobs (primarily, though we all participated) developing a large simulation for Motorola, where the technique used was, as far as I can tell, indistinguishable from XP.

When much of the same team was reassembled in Washington, DC in 1958 to develop Project Mercury, we had our own machine and the new Share Operating System, whose symbolic modification and assembly allowed us to build the system incrementally, which we did, with great success. Project Mercury was the seed bed out of which grew the IBM Federal Systems Division. Thus, that division started with a history and tradition of incremental development.

All of us, as far as I can remember, thought waterfalling of a huge project was rather stupid, or at least ignorant of the realities… I think what the waterfall description did for us was make us realize that we were doing something else, something unnamed except for "software development."


Larman and Basili's article has a whole lot more to say, and as far as I know, is an accurate history. I strongly recommend that all in our profession give it a good read. We should all know these things about ourselves.

Saturday, September 17, 2011

Downgraded to Testing

Here's a letter I got from a friend overseas. I've altered any identifying information, for obvious reasons. I'm going to intersperse some comments as if I were conversing with Nicolai face-to-face.

Nicolai's Letter
One thing I would still like to know from you about "Perfect Software and Other Illusions About Testing": are there metrics or measurement criteria with which you can measure the success of Testing in the software development life cycle (SDLC)?

I am in a struggle with my employer to change my position from a system developer (I work in the auto parts industry, in manufacturing planning and execution) into a software tester. I made that decision after I recognized that this job fits my personality type much better than doing implementation.

Jerry
That recognition is an excellent starting point for solving your problems. Many people don't really know consciously what they really would like to do.

Nicolai

But let me come back to my main concern nowadays. My problem now is that in my country, and especially in my company, this issue of Testing is new. My boss's understanding of it is low, and hence he wants to reduce my salary by around 10%.

Jerry
Sorry, that's not why he wants to reduce your salary. That's his excuse for taking advantage of your wish to change jobs. He simply sees an opportunity to save some money at your expense. Yes, if he understood anything about testing, he should raise your salary by 10%—or more.

Nicolai
That's in contrast with the huge losses we have because of NOT using Testing at all in our processes.

Jerry
Oh, my. Your company is at Level 0 when it comes to Testing. Oblivious! That is definitely not the place to be if you wish to make a career, long or short-term, in Testing.

Nicolai
Anyway, I will not stay there much longer.

Jerry
That's the wisest thing you've said so far.

Nicolai
I will try to set up my own company (industrial import-export trading consultancy).

Jerry
That's an ambitious goal, and probably long-term. You cannot afford to stay any longer in this unbelievably bad company with this even more unbelievable manager.

Nicolai
But in the meantime I have to feed my family, and I need advice on how I can measure the success of Testing. Based on such criteria, I can improve my salary negotiation and make Testing much more tangible to everyone, even myself. Once I have the criteria or indicators, I can make some percentage of my salary depend on the success of implementing Testing in the SDLC. I think that is fair enough.

Jerry
Your boss has demonstrated he has no interest in "fair enough." And no interest in your career or your family. Your approach is all wrong. You should first find yourself a job in an organization that already values Testing, at least slightly.

In my experience, an organization like yours is never (at least in your working lifetime) going to value testing enough to value you, or pay you what you're worth to them. Nor will it be a good place to learn the profession. All you will learn is what I'm telling you now: that is, you shouldn't stay in this job a moment longer than you must in order to see that your family is fed. (For example, if your wife works, see if you can simplify your finances so you can live on her income, at least for a short time.)

But, in any case, you should immediately begin searching for a new job, in a much more compatible place. (And do it without letting anybody know. The kind of boss you have will not react well to news that you're seeking a new position. Let him know when you're saying goodbye, when you've already been hired by the new place.)

Nicolai
What would be such indicators of a successful integration of Testing into the software development life cycle (SDLC)? I know you use the FFR, but this says something about the quality of the software development team. What would be indicators that say something about the quality of the software test team?

Any hints from you are very welcome.

Jerry
What you need now is one or more ways to put a cost on the software errors that are already reaching your users. (For example, you can use methods described in the Quality Software series of eBooks—particularly Volumes 3 and 4: How to Observe Software Systems and Responding to Significant Software Events.)

Use these methods to arrive at average and extreme costs of each software bug that leaves your development group. And a count of how many bugs you ship with how much software. Then you can produce a report that says, "If our Testing is X% efficient at finding bugs, and if our developers are Y% efficient at fixing them before release, then we can expect to save Z-dollars by having a Testing group."
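In code form, that report's arithmetic is one line. Here's a sketch; every number below is a made-up placeholder, to be replaced with your own measured counts and costs:

```python
# Z = (bugs shipped) * X (found by Testing) * Y (fixed pre-release) * cost per bug
bugs_shipped_per_release = 120        # counted from your defect records
avg_cost_per_shipped_bug = 4_000.0    # from the costing methods described above

x_find_rate = 0.70   # X: fraction of shipped bugs a Testing group would catch
y_fix_rate = 0.90    # Y: fraction of caught bugs developers fix before release

z_savings = (bugs_shipped_per_release * x_find_rate * y_fix_rate
             * avg_cost_per_shipped_bug)
print(f"Expected savings per release: ${z_savings:,.0f}")   # $302,400
```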

If you really do no Testing now in your SDLC, you can expect Z to be a very large amount. (And if it's not a very large amount, then Testing is really not important there, and you shouldn't be working there.)

Even if you're already in a Testing group, you should make such measurements, so you can get the support you need. And be appreciated.

I hope this helps.

References
Responding to Significant Software Events
How to Observe Software Systems

Friday, July 29, 2011

A Universal Starting Point for Problem-Solving

By popular request, I'm going to hold my next Change Artist Challenge until next week to give some of my readers a little more time to catch up.

This essay should also be helpful to change artists, who often have to start their work by defining or redefining a problem that's presented to them. It's adapted from Chapter 5 of Exploring Requirements 1: Quality Before Design.

How can we reduce the great variety of potential starting points to a single solid platform for exploring requirements? A possible solution is to regard every design project as an attempt to solve some problem, then reduce each starting point to a common form of problem statement.


A problem can be defined as
a difference between things as perceived
and things as desired.


[For a full discussion of problem definition, see Donald C. Gause and Gerald M. Weinberg, Are Your Lights On? How to Know What the Problem Really Is.]

Figure 5-1. A problem is best defined as a difference between things as perceived and things as desired.


This definition can serve as a template for measuring each idea for starting a development project. If the idea doesn't fit this definition, we can work with the originator to universalize the idea until it does.
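For readers who think in code, the template reduces to a small record. This is a sketch; the class name, field names, and example values are mine, anticipating the first example in the next section:

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    perceiver: str   # whose perception counts
    perceived: str   # things as perceived
    desired: str     # things as desired

    def is_problem(self) -> bool:
        # There's a problem only when perception and desire differ.
        return self.perceived != self.desired

report = ProblemStatement(
    perceiver="sales force",
    perceived="carbon copies too blurry to photocopy and read",
    desired="one clear, timely copy of the report per salesperson",
)
assert report.is_problem()
```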

Universalizing a Variety of Starting Points
Let's see how this universalization process can be used to reduce six different starting points to a common form of problem definition.

Solution idea
Perhaps the most common starting point is thinking of a solution without stating the problem the solution is supposed to solve. In other words, the idea doesn't say what is perceived (and by whom) and what is desired, so it doesn't fit our definition of a problem. Here are a few examples we've experienced.

1. A marketing manager told a systems analyst, "We need sharper carbon copies of our sales productivity report." Rather than immediately begin a search for a way to produce sharper carbons, the analyst asked, "What problem will sharper carbons solve for you?" The manager explained that the carbons didn't make very good photocopies, so the salespeople had trouble reading them. "So," the analyst confirmed, "you need one clear copy for each salesperson, and you're now making multiple copies of the report we give you?" Eventually, through such give and take, the problem was redefined as a need to provide timely and clear comparative information to a sales force of four hundred—something readily accomplished by slightly modifying an existing on-line query system. The final design didn't have sharper carbons. It didn't have carbons at all. Not even paper.

2. In another case, a university dean said, "We need a way to attract more students." The dean never said why they needed more students, and each faculty member hearing the statement formed a different idea. Some thought "more students" meant getting more outstanding students. Some thought "more students" meant being able to support more teaching assistants in certain departments. Still others thought "more students" meant the dean wanting to fill the vacant dormitory space.

After arguing for months about the best way to get more students, the faculty finally learned what the dean really wanted: to create the impression in the state legislature that the school was doing a higher quality job by increasing the rejection rate of applicants, so the university appropriation would increase. Once this goal was understood, the faculty approached a solution in several ways, none of which involved an increase in student enrollment.

Technology idea
Sometimes we don't have a problem in mind, at all, but literally have a solution in hand: a solution looking for a problem. When tearing off those perforated strips on computer paper, have you ever felt there ought to be something useful to do with them? The perforated strips are the solution, and the problem is "What can we use them for?" After thirty years of searching, Jerry bought Honey, a German Shepherd puppy, and suddenly he discovered the problem his solution was looking for. Computer paper edges, crumpled up, make perfect litter for puppy nests!

When a new technology comes along, it's often a solution looking for a problem. The Post-It™ note developed by 3M is a conspicuous example. The semi-stickiness was originally just a failed attempt to produce an entirely different kind of adhesive. Instead of simply discarding it as another failed project, the 3M people thought of problems for which such semi-adhesive properties would provide a solution. They created Post-It™ notes, but the solution-to-problem process didn't stop there. As soon as Post-It™ notes appeared in offices, thousands of people began seeing problems the notes could solve.

Some of the high-technology companies we work with are dominated by this kind of solution-to-problem starting point. In effect, their problem takes the form of the following perception and desire:


Perception: We own a unique bit of technology, but others don't want to give us money for it. For example, a chalk company buys rights to a new vein of chalk that has exceptional purity and strength. To most people, however, chalk is just chalk.

Desire: Others will pay us a great deal of money for the use of this technology in some form. For instance, if the company can create the idea of Superchalk in the public mind, the unique purity and strength become an asset of increased value.

Such a problem statement allows the technology to become a kernel around which many designs can be built. Without it, technology firms often make the mistake of believing that "technology sells itself." Although this slogan may be true in certain cases, usually it's an after-the-fact conclusion. Want to turn a solution into a problem requiring it? Ordinarily, you'll need an enormous amount of requirements and design work. For example, how will you make teachers believe they can't really teach well without Superchalk?

Simile
Many product development cycles start with a variety of metaphorical thinking—a simile, or comparison, as when someone says, "Build something like this." Although the customer may emphasize "this," the job of the requirements process is to define "like."

For instance, Maureen, the leader of a software project, told her team she wanted a new user interface "like a puppy." "First of all," she elaborated, "people see a puppy and are immediately attracted to it. They want to pet it, to play with it. And they aren't afraid of the puppy, because even though it might nip them, or even pee on them, puppy bites aren't serious injuries, and puppy pee never killed anyone. Also, you can't really hurt a puppy by playing with it."

Although the team couldn't yet build a system with this requirement, playing with the simile did inspire them to ask probing questions. "How about housebreaking and obedience training for the puppy?" a teammate asked.

Maureen thought a bit, and said, "Yes, the interface should be trainable, to obey your commands, so it becomes your own personal dog."

"Okay," asked someone else, "will it grow up to be a dog, or remain a puppy?"

"That's easy," said Maureen. "It will stay a puppy if you want it to be a puppy, but if you prefer, it will grow up to be a real working dog doing exactly what you say."

"What kind of working dog?"

"A watchdog, for one thing. It should warn you of dangerous things that might happen when you're not paying attention."

Someone else got into the spirit by asking, "What about a sheepdog? It could round up the 'sheep' for you, and put them safely in the pen. And guard them from anyone stealing any."

By this time everyone was involved, and the requirements process was running like a greyhound, though not necessarily in a straight line, as when someone asked, "How about fur? Should it be a longhair or a shorthair?" Nobody could figure out what fur meant for an interface, though the question did lead to an extensive discussion of touch screens and other interface hardware they had never previously used. Eventually this tangent was clipped by someone observing, "Our tail is starting to wag the dog."

The simile is excellent as an idea-generation tool, but eventually the requirements group has to groom their ideas into prize-winning form, which requires some idea-reduction tools. You know when the simile has become a bit dog-eared when you can no longer make fruitful connections between it and your product. It's important, though, to keep it going a bit past the point where it becomes ridiculous, just to be sure you've generated enough ideas.

Norm
Many people do not consider themselves metaphorical thinkers, believing they think more concretely. In the requirements process, they would more likely say, "Here is a chair. Design a better chair." or "Here is ordinary chalk. Design a superchalk." In fact, the norm is also a metaphor, seeming literally "close" to the thing desired. The great danger of using a norm is the constriction on our thinking once we identify what would almost satisfy the customer.

Another great danger is making one big leap in logic to the end result. Instead, starting with a norm and working by increments tends to protect us from the colossal blunder. The Wright brothers, for instance, were bicycle builders, and they used many of the norms from bicycle construction to create their success at Kitty Hawk.

A third danger is starting with the wrong norm, which could prevent us from making a great leap forward when one is possible. Orville and Wilbur Wright did use a rail to launch their plane, but they didn't become the first heavier-than-air fliers by putting wings on a locomotive (Figure 5-2).

Figure 5-2. Don't let the norm dictate the form. If the Wright brothers had been train builders, they might have specified a plane that looked like this hybrid, which might have been on the right track, but would have had a hard time getting off the ground.

Mockup
Suppose we agree to use a chair for a norm. Unfortunately, your mental picture of a chair may be very different from mine. A mockup is a way to protect against this ambiguity by providing an actual scale model of a product. Moreover, we can benefit by using the mockup to demonstrate, study, or test the product long before the product is actually built.
A mockup serves as a norm, when no norm exists, or when none is available. As such, it has all the advantages and disadvantages of a norm. It also has the advantage and disadvantage of being a fantasy product. When we use a mockup, we aren't restricted to what exists, but on the other hand, we can easily mock up a product that could never actually be built.

In printing, and in computing, for example, the mockup is often in the form of a layout of printed matter, or material on a screen. The customer and users can point to the layout and say, "Yes, that's what I want," or "No, what's this doing here?" What we are actually testing with the mockup is the customers' emotional responses—their desires. In effect, a mockup says, "This is what we think the product's face will look like. Let's see how you react to this!"

Name
Many ideas for design projects simply begin with a name: Create Superchalk. Build me a table, chair, pencil, clock, elevator, steering wheel, speedometer, or bicycle. Although the name provides a quick and common connection for all participants to grasp, names also come with a large baggage of connotations. As we've seen, each word is worth a thousand pictures, and each connotation of a name may introduce implicit assumptions.

For instance, Jerry spent thirty years searching unsuccessfully for a use for computer paper edges largely because the name itself narrowed his thinking unnecessarily. Our colleague Jim Wessel observed that his four-year-old daughter isn't so limited. She cuts these strips into smaller pieces and calls them "tickets." We would do well to emulate the four-year-olds who have little trouble making up names for objects lacking conventional ones.

Further Reading

The Exploring Requirements books can be obtained from a variety of retailers. Go to my website and choose your favorite source of books or eBooks.

Friday, January 28, 2011

The Myth of Writer's Block (and what to do when you're blocked)

Writing is one of the most important activities for successful consultants. Writing helps you capture and clarify your ideas. Writing helps you polish your presentations to clients. And published writing is probably the second most effective marketing tool in your kit. (First, of course, is recommendations from satisfied clients.)

Yet most consultants never publish an article. Of those who do publish an article, most write only one. Many consultants never publish a report. Of those who do publish a report, most write only one. And certainly, most consultants never publish a book. Of those who do publish a book, most publish only one. If you ask them why they don't write more, they will commonly say they are stuck, or "blocked." But these words are merely labels. They explain nothing. Most often consultants stop writing because they do not understand the essential randomness involved in the creative process.

The Structure of Creation versus the Structure of Presentation

Please don't get the impression that I read in the random way I write (my "Fieldstone Method"). Reading, by its nature, is more or less linear, like a string of beads, and I tend to read most works through from beginning to end. But written works can be created by superimposing any of a variety of organizations on that linear string of words. For instance, novels, being stories, are more or less linear; but novelists may use flashbacks, stories-within-stories, or parallel stories to break the linearity.

Dictionaries, encyclopedias, and reference manuals—though consisting of a bound sequence of pages—are generally organized for random access by the addition of tables of contents and indices. The Internet and intranets allow us to hyperlink written works into much more complex structures, though in order to use them, we frequently need aids such as index pages and search engines.

But none of these reading organizations have much of anything to do with the organization of the creative process by which the works came into existence. These reading structures are presentation methods, not creation methods. Creation doesn't work in any such regular way. It's more accurately modeled by the Fieldstone Method. Every day is different; every idea is different; every mood is different; so why should every project be the same?

Writer's Block and the Goldilocks Questions

"Of course every day is different," you may say. "Some days I'm entirely paralyzed by writer's block, and I don't accomplish anything at all."

If this is your problem, I can help, as I've helped many other consultants and professional and amateur writers. I didn't always understand how I was helping, until one student wrote the following:

As evidenced in some conversations with other students of yours and in my own writings, I think there are a number of intangibles that you do offer—in much the same way that a coach or therapist does. These include motivation, raising self-esteem, building confidence in writing, considering self-other-context, discipline, thinking more clearly, or awareness, to name only a few.

Writer's block is not a disorder of you, the person attempting to write. It's a deficiency of your writing methods—the mythology you've swallowed about how works get written—what my sometime co-author, Tom Gilb, calls your "mythodology." Fieldstone writers, freed of this mythodology, simply do not experience "writer's block." Have you ever heard anyone speak of “mason's block”? (But, yes, I have heard people talking about "consultant's block"—and what I'm saying here actually applies to much of the work consultants do, or try to do when they get "stuck.")

Many writing methods and books assume that writer's block results from a shortage of ideas. Others assume the opposite—that writers become blocked when they have a surplus of ideas and can't figure out what to do with all of them. But it's not the number of ideas that blocks you, it's your reaction to the number of ideas.

Here's how it goes. You have the wrong number of ideas, and that bothers you, causes you discomfort, or even pain. To lessen the pain, you turn to some other activity—coffee, beer, sex, movies, books, sleep, or name your poison. This diversion relieves the pain in the short run, but eventually your mind turns back to that unfinished piece of writing (or other work). Now you feel worse because you've avoided the task. You might try writing again, but your mind keeps returning to what a bad, blocked writer you are. So, eventually, you turn to your relief—coffee, beer, sex, or whatever.

Do you recognize the addiction cycle? (This dynamic is described more fully in my soon-to-be released Volume 5, Managing Yourself and Others, of my e-Series, Quality Software.) The Fieldstone method allows you to break this cycle in exactly the same way you break any addiction, by using your intelligence and creativity. I sometimes begin to feel "blocked," but when I do, I simply ask myself what I call the Goldilocks Questions:

"What state am I in now? Do I have too many ideas? Do I have too few? Or, like Baby Bear's porridge, is it just right?"

If I have too many ideas, I begin some organizing activities, like sorting ideas into different piles. If I have too few ideas, I concentrate on gathering more. Usually, the first place I look is in my own mind, staying in the flow of the moment, one idea building on the next.

For instance, when I’m writing dialogue, I don’t stop to search externally for just the right conversational “stone.” That approach leads to overly clever dialogue, rather than the more natural-sounding stones that just pop out of my head from millions of past conversations I’ve heard or overheard. Only if my natural mental flow fails me do I start searching for an external "fieldstone" to trigger a new flow.

Then, when the number of ideas is "just right," I organize them, trimming and polishing a bit in the process, until I have a finished product—or until I have to ask the Goldilocks Questions again. Sure, I may be stuck for a few moments, but I'm never "blocked."

In my book, Weinberg on Writing, I sketch all three parts of the Fieldstone Method—first the gathering of ideas (stones), then the organizing, then the trimming and polishing. The book describes them in that order, not because I perform them in that order, but because it's a book, and books are linear organizations of ideas.

Unlike what your schools taught you about writing, the Fieldstone Method is not dependent on any particular order of doing things. Instead, Fieldstoning is about always doing something that's advancing your projects. As a Fieldstone writer, you will have a variety of keep-moving activities, a handy list of tasks of all sizes, plus the knowledge to match each task to your mood, your start/stop time, your resources, and your total available time. As a Fieldstone consultant, you will have a second handy list of keep-moving activities—a list with your writing list as one of its sublists.

Each Fieldstone writer also has to find her own "magic" tasks, not all of which may seem "logical" to other writers. Meditation works for me, but others find it disturbing. Aikido boosts me, but it tires others. Some writers say you have to have a cat, a cigarette, and a cup of coffee laced with brandy. The cigarette and brandied coffee would kill me, which would be merciful because then I wouldn't have to watch Lovey and Caro tear apart the cat.

Observing Your Activities

In order to be a non-blockable writer (or consultant), you need to do a bit of observation of yourself. Here's what I suggest you try:

1. Choose a day or several hours that you plan to devote to writing.

2. In your journal (all professional consultants keep a journal) record the start-stop time of different activities.

3. Record your feelings at the beginning and end of each activity. Don't interrupt your flow, but just capture a word or two.

4. At the end of the day, look at what you wrote in your journal. Do you see an addiction cycle?

5. How did you respond any time you were temporarily stuck?

6. What other activities could you have done that would have served you better?

7. How will you remind yourself of those activities when you repeat this observation exercise in a month or so?

(This article is adapted from Weinberg on Writing: The Fieldstone Method.)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find my eBooks, sampled free and offered for sale, at these stores:

My Barnes and Noble page

My Amazon Page

Apple Store

My Smashwords Page
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~