Sunday, December 31, 2017
What is Software?
It's a new year, so let's start out with something fundamental, cleaning up something that's bothered me for many years.
The other day I was lunching with a computer-naive friend who asked, "What is software?"
Seems like it would be an easy question for those of us who make and break software for a living, but I had to think carefully to come up with an explanation that she could understand:
Software is that part of a computer system that adapts the machinery to various different uses. For instance, with the same computer, but different software, you could play a game, compute your taxes, write a letter or a book, or obtain answers to your questions about dating.
I then explained to her that it’s unfortunate that early in the history of computers this function was given the name “software,” in contrast to “hardware.” What it should have been called was “flexibleware.”
Unfortunately the term “soft” has been interpreted by many to mean “easy,” which is exactly wrong. Don't be fooled.
What we call “hardware” should have been called “easyware,” and what we call “software” could then have been appropriately called “difficultware.”
Labels:
communication,
computers,
hardware,
history of computers,
language,
programming,
software,
thinking
Sunday, October 29, 2017
My most challenging experience as a software developer
Here is my detailed answer to the question, "What is the most challenging experience you encountered as a software developer?"
We were developing the tracking system for Project Mercury, to put a person in space and bring them back alive. The “back alive” was the challenging part, but not the only one. Some other challenges were as follows:
-The system was based on a world-wide network of fairly unreliable teletype connections.
-We had to determine the touchdown in the Pacific to within a small radius, which meant we needed accurate and perfectly synchronized clocks on the computer and space capsule.
-We also needed to know exactly where our tracking stations were, but it turned out nobody knew where Australia's two stations were with sufficient precision. We had to create an entire sub-project to locate Australia.
-We needed information on the launch rocket, but because it was also a military rocket, that information was classified. We eventually found a way to work around that.
-Our computers were a pair of IBM 7090s, plus a 709 at a critical station in Bermuda. In those days, the computers were not built for on-line real-time work. For instance, there was no standard interrupt clock. We actually built our own for the Bermuda machine.
-Also, there were no disk drives yet, so everything had to be based on a tape drive system, but the tape drives were not sufficiently reliable for our specs. We beat this problem by building software error-correcting codes into the tape drive system.
We worked our way through all these problems and many more smaller ones, but the most challenging problem was the “back alive” requirement. Once we had the hardware and network reliability up to snuff, we still had the problem of software errors. To counter this problem, we created a special test group, something that had never been done before. Then we set a standard that any error detected by the test group and not explicitly corrected would stop any launch.
Our tests revealed that the system could crash for unknown reasons at random times, so it would be unable to bring down the astronaut safely at a known location. When the crash occurred in testing, the two on-line printers simultaneously printed a 120-character line of random garbage. The line was identical on the two printers, indicating that this was not some kind of machine error on one of the 7090s. It could have been a hardware design error or a coding error. We had to investigate both possibilities, but the second possibility was far more likely.
We struggled to track down the source of the crash, but after a fruitless month, the project manager wanted to drop it as a “random event.” We all knew it wasn’t random, but he didn’t want to be accused of delaying the first launch.
To us, however, it was endangering the life of the astronaut, so we pleaded for time to continue trying to pinpoint the fault. “We should think more about this,” we said, to which he replied (standing under an IBM THINK sign), “Thinking is a luxury we can no longer afford.”
We believed (and still believe) that thinking is not a luxury for software developers, so we went underground. After much hard work, Marilyn pinpointed the fault and we corrected it just before the first launch. We may have saved an astronaut’s life, but we’ll never get any credit for it.
Moral: We may think that hardware and software errors are challenging, but nothing matches the difficulty of confronting human errors—especially when those humans are managers willing to hide errors in order to make schedules.
Labels:
challenges,
computers,
crisis,
debugging,
error,
failures,
faults,
managers,
problem solving,
programming,
project management,
risk,
software,
software development,
testing
Sunday, June 25, 2017
How do I get better at writing code?
Nobody writes perfect code. Anyone, no matter how experienced, can improve. So, you ask, how do I get better at writing code?
Of course, to get better at writing code, you must practice writing code. That much is obvious. Still, if you just write the same poor code over and over, you're not likely to improve much.
Writing is a different skill from reading, but reading code is necessary if you want to improve your writing. As with writing natural language, you build up your skill and confidence by reading—and not just reading your own output. So, find yourself some examples of good, clear code and read, read, read until you understand each piece.
Be careful, though. There’s lots of terrible code around. You can read terrible code, of course, and learn to analyze why it’s terrible, but your first attention should be on good code. Excellent code, if possible.
Where can you find good code? Textbooks are an easy choice, but be wary of textbooks. Kernighan and Plauger, in their book The Elements of Programming Style, showed us how awful textbook code can be. Their little book can teach you a lot about recognizing bad code.
But bad code isn't enough. Knowing what's bad doesn't necessarily teach you what's good. Some open source code is rather good, and it’s easy to obtain, though it may be too complex for a beginner. Complex code can easily be bad code.
Hopefully, you will participate in code reviews, where you can see lots of code and hear various opinions on what’s good and what’s less than good.
Definitely ask your fellow programmers to share code with you, though beware: not all of it will be good examples. Be sure the partners you choose are able to listen objectively to feedback about any smelly code they show you.
If you work alone, use the internet to find some programming pen pals.
As you learn to discern the difference between good and poor code, you can use this discernment in your reading. After a while, you’ll be ready to start writing simple code, then work your way up to more complex tasks—all good.
And date and save all your code-writing examples, so you can review your progress from time to time.
Good luck, and happy learning!
Labels:
Agile,
books,
coding,
computers,
development,
language,
programming,
quality,
reviews,
software,
software development,
training,
writing
Friday, May 26, 2017
Advice to New Graduates
It's now graduation season, time to give advice to new graduates entering the world of work. Every few years, I notice the season and republish some old, but still valid, advice I offered my youngest son, many seasons ago.
Letter to My Son, John
(On the occasion of his graduation with a degree in computer science)
Dear John,
I know some of the other fathers are giving their sons BMWs for graduation, but as a fresh computer science graduate, you're probably going to be earning more than I do as a writer.
It's not totally dishonorable being a writer. Last month, when Dani and I were in Hannibal, Missouri, we visited Mark Twain's birthplace. They've made it into a museum, and it seems to attract a lot of tourists.
Before computer scientists came along, writers used to be pretty famous, and some of them even got rich. So save this letter. You may not appreciate the advice, but someday you might be able to donate it to a museum.
On the wall of the museum, under an old photograph of his schoolhouse, was this quotation from one of Twain's letters:
"I was always careful never to let my schooling interfere with my education."
Being an old-timer in computer science myself, I didn't have to be as careful as Mark Twain. When I went to school, there was no such thing as computer science. There wasn't even a computer—unless you count me (that was my job title).
I made 90 cents an hour inverting matrices for the physics department, using a clunky old Friden, paper, pencil, and lots of erasers. No, I'm not going to advise you to complete your education by inverting some 10 by 10 matrices with a Friden. If I wanted to build your character, I'd buy you a hair shirt for graduation.
Subject Matter
I did want to tell you how lucky you were to have such fine schooling as part of your education. I hope that the next stage of your education will be half as good, which it ought to be, if only you can stay out of the trouble I got into when I was a fresh graduate.
The problems I caused myself weren't due to things I didn't learn in my schooling. As another great American humorist, Will Rogers, said, "It's not what you don't know that gets you in trouble, it's what you know that ain't so."
If my experience is any guide, there are a few parts of your schooling you may want to forget before you report for your first job.
The first and easiest thing you'll have to forget is all the specific facts you learned in your technical courses. If you had majored in Greek grammar, or in the history of rural Belgium in the Middle Ages, you wouldn't have to forget all the facts you so patiently memorized. In some subjects, the facts haven't changed in the past few generations, but you certainly can't say that for Computer Science.
Did you ever stop to think why Computer Science graduates are paid so much more than Greek or History majors? Did you think it was because of the scraggly black beard or your long fingernails? Well, it's not.
It's because Computer Science is high technology—and that means the entire subject matter is changing even as you read these words. The history student is being paid a pittance to remember a body of relatively fixed material, but you're being paid that fabulous salary largely because of your ability to keep pace with innumerable rapid changes.
Being a specialist in information processing, you should be able to understand the process that led to the information you found in your textbooks. A fact that you found in your freshman textbook would be four years old by now even if it was brand new when you were wearing a beanie. But the book was probably two years old or more by that time, which makes it at least six years out of date.
And you also have to consider that a book takes about a year to publish after it is written, and perhaps two years to write in the first place. You can conservatively add another year for the author's inability to keep up with the latest in new technology.
That makes a nice round ten years for the age of the facts you learned as a freshman. It's not much better for what you learned as a senior—and could be worse, because advanced texts take longer to write, longer to publish, and are too expensive to keep updating every couple of years. Just how long is 10 years of computer technology? Well, I read the other day about the resale value of some System 370 models IBM introduced (ten years ago). Back then, they sold for about 3ドル million. Today, their book value is exactly zero. The big argument is whether to pay people for hauling them away or to call it even for the scrap value.
IBM used quite a bit of gold and silver in the connectors in these models, which accounts for any value they still have. So, unless you're in the business of collecting precious metals, don't start your career in an organization whose computer is ten years old, either. Keep your gold fillings, but forget whatever facts you learned in school.
But don't worry. It's not so hard to forget facts. Psychological studies show that 24 hours after a lecture, you've forgotten half the facts presented. In the succeeding days, this halving process continues unabated. The same thing happens after an exam—as you certainly must have learned by now.
Once in a while, though, in spite of your best efforts to forget everything, something you learned in school will pop into your head. In that case, just keep it to yourself. The last thing you want to do on your new job is to go broadcasting your ignorance to your coworkers. Instead, close your mouth and open your ears. You might learn something that's new to replace the obsolete fact—which is even better than forgetting.
Grades
Telling your coworkers about all the facts you learned in school is almost as bad as telling them about your grade point average. I know it's hard to accept—especially after all the emphasis on grades the past 16 years of your life—but now that you've landed a job, nobody is interested in your college grades.
But forgetting your grade point average isn't nearly enough. You must also forget everything you know about grades and grading. Business is not school. You may be "graded" on your job, but it won't be on the same basis you were graded on at school.
For instance, when you wrote that little assembler for ComSci 321, you didn't have time to finish the macro facility. So you turned in a partially finished job for a B-plus.
Or that time the micro you designed for ComSci 442 gave the wrong remainder on floating point division. You took an A-minus and were glad to be rid of the thing.
Those strategies don't work on the job. There are no B-plusses for partially completed projects. And no A-minusses for hardware or software with bugs. On the job, you finish your tasks and you finish them right, or you flunk.
Or maybe you think you can take an Incomplete like you did when you didn't want to stop working on that really interesting flow tracer. Or the time you spaced off the last program in advanced programming. Well, forget Incompletes, too.
Specific Plans
If you have a good boss and you're sufficiently interested in something to want to continue working on it, you'll probably be allowed to do it—on your own time—and only after you've turned in the project as assigned. As far as other reasons for Incompletes, forget them.
If you flunk, you won't have to worry about going in to argue with the boss about getting a higher grade. Your boss will call you in before you get a chance.
But don't waste your time preparing arguments for raising your grade. Instead, prepare specific plans for finishing the project or getting rid of the bugs. And also prepare plans for improving your performance on the next assignment. If you don't, there may be no next assignment.
Oh, you're not likely to get fired—not in today's business world. More likely, you'll get trivial assignments—and keep getting them until you can prove you're capable of finishing something correctly and on time.
There are no A and B grades. Only A and B assignments. Your college grade average might earn you an A assignment for your first project, but it will never earn you a second.
I know it's a bit frightening to learn that you must get an A on every assignment, but once you've mastered the art of "cheating," it's not nearly as bad as it sounds. Actually, the sound of the word "cheat" is the biggest barrier you'll have to overcome. I don't mean it in the sense of "cheating on your wife" or "cheating at cards," but more in the sense of "cheating death."
By "cheating," I mean going outside the rules that previously bound your thinking. Some instructors say it's cheating to solve an exam problem using a method that wasn't taught in class, or even by looking up the answer. In fact, some instructors still consider it cheating if you use a computer to help you solve your homework! But on the job, you're being paid to use any method that works.
You've spent the past 16 years of your life learning to play by the teacher's rules. Now you're expected to invent your own rules, and you're going to find it difficult to use "any method that works." But once you manage to forget about teachers and their rules, you'll find that this kind of cheating is as natural as drinking beer on a warm summer day.
More natural, actually. Drinking beer is an acquired taste, but breaking the rules is the natural heritage of every human being. Indeed, our superior cheating ability is what differentiates us human beings from all the other animals.
A fox is supposed to be a cunning hunter, but human beings can outhunt any fox that ever lived. How? By cheating, that's how. They "cheat" by getting other people to help them and by using tools invented by other people. You may not call those things cheating, but they sure look like cheating to the fox.
The reason you don't consider cooperation and use of tools as cheating is that you're a human being, not a fox. Your teachers had to forbid "cheating" in school in order to create an environment for teaching facts (which you'll have to forget) and giving grades (which are now worthless). They had to make you forget, temporarily, the basic ability that makes you human—the ability to cooperate.
On the job, you'd better remember your humanity. You don't sign an "honor code" saying you have neither given nor received help. On the contrary, if your boss sees you never help anyone else, or haven't the brains to ask for help when you don't know something, you'll be severely downgraded. If you do everything from scratch and fail to use the simplest shortcuts you can find, you'll be considered an ox, not a fox.
It's actually not that hard to work effectively with other people, even after all those years of isolation. Outside the classroom, where most learning takes place, you have plenty of practice getting and giving help—study groups, team projects, or just those endless conversations over cold coffee in the Union. You may have thought you were just wasting time, but you were actually practicing for the world of work.
Well, I know that's a lot of stuff to forget in so short a time, but I know you can do it if you set your mind to it. Underneath that schoolboy exterior, there beats the heart and throbs the brain of a real human being.
Before you know it, you'll have forgotten all that schooling and gotten on with the business of your education. Good luck!
Love, Dad.
Labels:
advice,
Agile,
business,
career,
experience,
jobs,
learning,
performance appraisal,
programming,
software,
teaching,
teams,
training
Monday, February 13, 2017
Should I learn C++ or Python?
When I first saw this question on Quora, there were already 47 answers, pretty much all of them wrong. But the number of different answers tells you something: choice of programming language is more of a religious question than a technical one. The fact is that if you want to be a professional programmer, you should learn both—and at the same time.
When we teach programming, we always teach at least two languages at the same time, in parallel. Assignments must be done in both (or more) languages, submitted along with a short essay on why the solutions are different and why they are the same. That’s the way to develop some wisdom and maturity in the coding part of your professional work.
Some of the respondents asserted that programming languages are tools. If that’s an appropriate metaphor, then how would you answer this question of a wannabe carpenter:
"Should I learn saws or screwdrivers?"
Do you think someone could be a top-flight carpenter knowing only one?
So, stay out of this quasi-religious controversy, which can never be settled. Instead, spend your valuable time learning as many different programming languages as possible, at least 5 or 6. You won’t necessarily use all of them, but knowing their different approaches will put you far above those dullards who say:
“I only know Language X, but I still think it’s the best language in the world.”
Wednesday, January 11, 2017
Foreword and Introduction to ERRORS book
Foreword
Ever since this book came out, people have been asking me how I came to write on such an unusual topic. I've pondered their question and decided to add this foreword as an answer.
As far as I can remember, I've always been interested in errors. I was a smart kid, but didn't understand why I made mistakes. And why other people made more.
I yearned to understand how the brain, my brain, worked, so I studied everything I could find about brains. And then I heard about computers.
Way back then, computers were called "Giant Brains." Edmund Berkeley wrote a book by that title, which I read voraciously.
Those giant brains were "machines that think" and "didn't make errors." Neither turned out to be true, but back then, I believed them. I knew right away, deep down—at age eleven—that I would spend my life with computers.
Much later, I learned that computers didn't make many errors, but their programs sure did.
I realized when I worked on this book that it more or less summarizes my life's work, trying to understand all about errors. That's where it all started.
I think I was upset when I finally figured out that I wasn't going to find a way to perfectly eliminate all errors, but I got over it. How? I think it was my training in physics, where I learned that perfection simply violates the laws of thermodynamics.
Then I was upset when I realized that when a computer program had a fault, the machine could turn out errors millions of times faster than any human or group of humans.
I could actually program a machine to make more errors in a day than all human beings had made in the last 10,000 years. Not many people seemed to understand the consequences of this fact, so I decided to write this book as my contribution to a more perfect world.
Introduction
For more than a half-century, I’ve written about errors: what they are, their importance, how we think about them, our attempts to prevent them, and how we deal with them when those attempts fail. People tell me how helpful some of these writings have been, so I felt it would be useful to make them more widely known. Unfortunately, the half-century has left them scattered among several dozen books, so I decided to consolidate some of the more helpful ones in this book.
I’m going to start, though, where it all started, with my first book where Herb Leeds and I made our first public mention of error. Back in those days, Herb and I both worked for IBM. As employees we were not allowed to write about computers making mistakes, but we knew how important the subject was. So, we wrote our book and didn’t ask IBM’s permission.
Computer errors are far more important today than they were back in 1960, but many of the issues haven’t changed. That’s why I’m introducing this book with some historical perspective: reprinting some of that old text about errors along with some notes with the perspective of more than half a century.
1960’s Forbidden Mention of Errors
From: Leeds and Weinberg, Computer Programming Fundamentals, Chapter 10: Program Testing
When we approach the subject of program testing, we might almost conclude the whole subject immediately with the anecdote about the mathematics professor who, when asked to look at a student’s problem, replied, “If you haven’t made any mistakes, you have the right answer.” He was, of course, being only slightly facetious. We have already stressed this philosophy in programming, where the major problem is knowing when a program is “right.”
In order to be sure that a program is right, a simple and systematic approach is undoubtedly best. However, no approach can assure correctness without adequate testing for verification. We smile when we read the professor’s reply because we know that human beings seldom know immediately when they have made errors—although we know they will at some time make them. The programmer must not have the view that, because he cannot think of any error, there must not be one. On the contrary, extreme skepticism is the only proper attitude. Obviously, if we can recognize an error, it ceases to be an error.
If we had to rely on our own judgment as to the correctness of our programs, we would be in a difficult position. Fortunately the computer usually provides the proof of the pudding. It is such a proper combination of programmer and computer that will ultimately determine the means of judging the program. We hope to provide some insight into the proper mixture of these ingredients. An immediate problem that we must cope with is the somewhat disheartening fact that, even after carefully eliminating clerical errors, experienced programmers will still make an average of approximately one error for every thirty instructions written.
We make errors quite regularly
This statement is still true after half a century—unless it’s actually worse nowadays. (I have some data from Capers Jones suggesting one error in fewer than ten instructions may be typical for very large, complex projects.) It will probably be true after ten centuries, unless by then we’ve made substantial modifications to the human brain. It’s a characteristic of humans that would have been true a hundred centuries ago—if we’d had computers then.
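A little arithmetic makes those rates concrete (the program size here is my own illustrative number, not one from the text):

```python
# Expected error counts at the two quoted rates. The 1-in-30 figure is
# from the 1960 text; the 1-in-10 figure is the Capers Jones estimate
# for very large, complex projects mentioned above.

def expected_errors(instructions, instructions_per_error):
 """Rough expected number of errors for a program of a given size."""
 return instructions / instructions_per_error

SIZE = 30_000 # an assumed, modest-sized program

print(expected_errors(SIZE, 30)) # 1000.0 errors at the 1960 rate
print(expected_errors(SIZE, 10)) # 3000.0 at the large-project rate
```

Either way, a program of any serious size arrives with errors by the hundreds, which is why "extreme skepticism is the only proper attitude."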
1960’s Cost of errors
These errors range from minor misunderstandings of instructions to major errors of logic or problem interpretation. Strangely enough, the trivial errors often lead to spectacular results, while the major errors initially are usually the most difficult to detect.
“Trivial” errors can have great consequences
We knew about large errors way back then, but I suspect we didn’t imagine just how much errors could cost. For examples of some billion dollar errors along with explanations, read the chapter “Some Very Expensive Software Errors.”
Back to 1960 again
Of course, it is possible to write a program without errors, but this fact does not obviate the need for testing. Whether or not a program is working is a matter not to be decided by intuition. Quite often it is obvious when a program is not working. However, situations have occurred where a program which has been apparently successful for years has been exposed as erroneous in some part of its operation.
Errors can escape detection for years
With the wisdom of time, we now have quite specific examples of errors lurking in the background for thirty years or more. For example, read the chapter on “predicting the number of errors.”
How was it tested in 1960
Consequently, when we use a program, we want to know how it was tested in order to give us confidence in—or warning about—its applicability. Woe unto the programmer with “beginner’s luck” whose first program happens to have no errors. If he takes success in the wrong way, many rude shocks may be needed to jar his unfounded confidence into the shape of proper skepticism.
Many people are discouraged by what to them seems the inordinate amount of effort spent on program testing. They rightly indicate that a human being can often be trained to do a job much more easily than a computer can be programmed to do it. The rebuttal to this observation may be one or more of the following statements:
-All problems are not suitable for computers. (We must never forget this one.)
-The computer, once properly programmed, will give a higher level of performance, if, indeed, the problem is suited to a computer approach.
-All the human errors are removed from the system in advance, instead of distributing them throughout the work like bits of shell in a nutcake. In such instances, unfortunately, the human errors will not necessarily repeat in identical manner. Thus, anticipating and catching such errors may be exceedingly difficult. Often in these cases the tendency is to overcompensate for such errors, resulting in expense and time loss.
-The computer is often doing a different job than the man is doing, for there is a tendency—usually a good one—to enlarge the scope of a problem at the same time it is first programmed for a computer. People are often tempted to “compare apples with houses” in this case.
-The computer is probably a more steadfast employee, whereas human beings tend to move on to other responsibilities and must be replaced by other human beings who must, in turn, be trained.
Sometimes the error is creating a program at all.
Unfortunately, the cost of developing, supporting, and maintaining a program frequently exceeds the value it produces. In any case, no amount of fixing small program errors can eliminate the big error of writing the program in the first place. For examples and explanations, read the chapter on “it shouldn’t even be done.”
The full process, 1960
If a job is a computer job, it should be handled as such without hesitation. Of course, we are obligated to include the cost of programming and testing in any justification of a new computer application. Furthermore we must not be tempted to cut costs at the end by skimping on the testing effort. An incorrect program is indeed worth less than no program at all because the false conclusions it may inspire can lead to many expensive errors.
We must not confuse cost and value.
Even after all this time, some managers still believe they can get away with skimping on the testing effort. For examples and explanations, read the section on “What Do Errors Cost?”
Coding is not the end, even in 1960
A greater danger than false economy is ennui. Sometimes a programmer, upon finishing the coding phase of a problem, feels that all the interesting work is done. He yearns to move on to the next problem.
Programs can become erroneous without changing a bit.
You may have noticed the consistent use of “he” and “his” in this quoted passage from an ancient book. These days, this would be identified as “sexist writing,” but it wasn’t called “sexist” way back then. This is an example of how something that wasn’t an error in the past becomes an error with changing culture, changing language, changing hardware, or perhaps new laws. We don’t have to do anything to make an error, but we have to do a whole lot not to make an error.
We keep learning, but is it enough?
Thus as soon as the program looks correct—or, rather, does not look incorrect—he convinces himself it is finished and abandons it. Programmers at this time are much more fickle than young lovers.
Such actions are, of course, foolish. In the first place, we cannot so easily abandon our programs and relieve ourselves of further obligation to them. It is very possible under such circumstances that in the middle of a new problem we shall be called upon to finish our previous shoddy work—which will then seem even more dry and dull, as well as being much less familiar. Such unfamiliarity is no small problem. Much grief can occur before the programmer regains the level of thought activity he achieved in originally writing the program. We have emphasized flow diagramming and its most important assistance to understanding a program but no flow diagram guarantees easy reading of a program. The proper flow diagram does guarantee the correct logical guide through the program and a shorter path to correct understanding.
It is amazing how one goes about developing a coding structure. Often the programmer will review his coding with astonishment. He will ask incredulously, “How was it possible for me to construct this coding logic? I never could have developed this logic initially.” This statement is well-founded. It is a rare case where the programmer can immediately develop the final logical construction. Normally programming is a series of attempts, of two steps forward and one step backward. As experience is gained in understanding the problem and applying techniques—as the programmer becomes more immersed in the program’s intricacies—his logic improves. We could almost relate this logical building to a pyramid. In testing out the problem we must climb the same pyramid as in coding. In this case, however, we must take care to root out all misconstructed blocks, being careful not to lose our footing on the slippery sides. Thus, if we are really bored with a problem, the smartest approach is to finish it as correctly as possible so we shall never see it again.
In the second place, the testing of a program, properly approached, is by far the most intriguing part
of programming. Truly the mettle of the programmer is tested along with the program. No puzzle
addict could hope to experience the miraculous intricacies and subtleties of the trail left by a program gone
wrong. In the past, these interesting aspects of program testing have been dampened by the difficulty
in rigorously extracting just the information wanted about the performance of a program. Now,
however, sophisticated systems are available to relieve the programmer of much of this burden.
Testing for errors grows more difficult every year.
The previous sentence was an optimistic statement a half-century ago, but not because it was wrong. Over all these years, hundreds of tools have been built attempting to simplify the testing burden. Some of them have actually succeeded. At the same time, however, we’ve never satisfied our hunger for more sophisticated applications. So, though our testing tools have improved, our testing tasks have outpaced them. For examples and explanations, read about “preventing testing from growing more difficult.”
If you're as interested in errors as I am, you can obtain a copy of Errors here:
ERRORS, bugs, boo-boos, blunders
Labels:
bugs,
computers,
debugging,
error,
faults,
history of computers,
perfection,
quality,
software,
testing
Tuesday, September 06, 2016
Preventing a Software Quality Crisis
Abstract
Many software development organizations today are so overloaded with quality problems that they are no longer coping with their business of developing software. They display all the classic symptoms of overloaded organizations—lack of problem awareness, lack of self-awareness, plus characteristic behavior patterns and feelings. Management may not recognize the relationship between this overload and quality problems stemming from larger, more complex systems. If not, their actions tend to be counterproductive. In order to cure or prevent such a crisis, management needs to understand the system dynamics of quality.
Symptoms of Overload Due to Poor Quality
In our consulting work, we are often called upon to rescue software development operations that have somehow gotten out of control. The organization seems to have slipped into a constant state of crisis, but management cannot seem to pin the symptoms down to one central cause. Quite often, that central cause turns out to be overload due to lack of software quality, and lack of software quality due to overload.
Our first job as consultants is to study symptoms. We classify symptoms of overload into four general categories—lack of problem awareness, lack of self-awareness, plus characteristic patterns of behavior and feelings. Before we describe the dynamics underlying these symptoms, let's look at some of them as they may be manifested in a typical, composite organization, which we shall call the XYZ corporation.
Lack of Problem Awareness
All organizations have problems, but the overloaded organization doesn't have time to define those problems, and thus has little chance of solving them:
1. Nobody knows what's really happening to them.
2. Many people are not even aware that there is a system-wide problem.
3. Some people realize that there is a problem, but think it is confined to their operation.
4. Some people realize that there is a problem, but think it is confined to somebody else's operation.
5. Quality means meeting specifications. An organization that is experiencing serious quality problems may ignore those problems by a strategy of changing specifications to fit what they actually happen to produce. They can then believe that they are "meeting specifications." They may minimize parts of the specification, saying that it's not really important that they be done just that way. Carried to an extreme, this attitude leads to ignoring certain parts of the specification altogether. Where they can't be ignored, they are often simply forgotten.
6. Another way of dealing with the overload is to ignore quality problems that arise, rather than handling them on the spot, or at least recording them so others will handle them. This attitude is symptomatic of an organization that needs a top-to-bottom retraining in quality.
Lack of Self-Awareness
Even when an organization is submerged in problems, it can recover if the people in the organization are able to step back and get a look at themselves. In the chronically overloaded organization, people no longer have the means to do this. They are ignorant of their condition, and they have crippled their means of removing their ignorance:
7. Worse than not knowing what is going on is thinking you know, when you don't, and acting on it. Many managers at XYZ believe they have a grip on what's going on, but are too overloaded to actually check. When the reality is investigated, these managers often turn out to be wrong. For instance, when quizzed about testing methods used by their employees, most managers seriously overestimate the quality of testing, when compared with the programmers' and testers' reports.
8. In XYZ, as in all overloaded organizations, communication within and across levels is unreliable and slow. Requests for one kind of information produce something else, or nothing at all. In attempting to speed up the work, people fail to take time to listen to one another, to write things down, or to follow through on requested actions.
9. Many individuals at XYZ are trying to reduce their overload by isolating themselves from their co-workers, either physically or emotionally. Some managers have encouraged their workers to take this approach, instructing them to solve problems by themselves, so as not to bother other people.
10. Perhaps the most dangerous overload reaction we observed was the tendency of people at XYZ to cut themselves off from any source of information that might make them aware of how bad the overload really is. The instant reaction to any new piece of information is to deny it, saying there are no facts to substantiate it. But no facts can be produced because the management has studiously avoided building or maintaining information systems that could contradict their claims that "we just know what's going on." They don't know, they don't know they don't know, and they don't want to know they don't know. They're simply too busy.
Typical Behavior Patterns
In order to recognize overload, managers don't have to read people's minds. They can simply observe certain characteristic things they do:
11. The first clear fact that demonstrates overload is the poor quality of the products being developed. Although it's possible to deny this poor quality when no measurements are made of the quality of work in progress, products already delivered have shown this poor quality in an undeniable way.
12. All over the organization, people are trying to save time by short-circuiting those procedures that do exist. This tactic may occasionally work in a normal organization faced with a short-term crisis, but in XYZ, it has been going on for so long it has become part of standard operating procedure.
13. Most people are juggling many things at one time, and thus adding coordination time to their overload. In the absence of clear directives on what must be done first, people are free to make their own choices. Since they are generally unaware of the overall goals of the organization, they tend to suboptimize, choosing whatever looks good to them at the moment.
14. In order to get some feeling of accomplishment, when people have a choice of tasks to do, they tend to choose the easiest task first, so as to "do something." This decision process gives a short-term feeling of relief, but in the long term results in an accumulation of harder and harder problems.
15. Another way an individual can relieve overload is by passing problems to other people. As a result, problems don't get solved, they merely circulate. Some have been circulating for many months.
16. Perhaps the easiest way to recognize an overloaded organization is by noticing how frequently you hear people say, in effect, that they recognize that the way they're working is wrong, but they "have no time to do it right." This seems almost to be the motto of the XYZ organization.
Typical Feelings
If you wait for measurable results of overload, it may be too late. But it's possible to recognize an overloaded organization through various expressions of people's feelings:
17. An easy way to recognize an overloaded organization is by the general atmosphere in the workplace. In many areas at XYZ there was no enthusiasm, no commitment, and no intensity. People were going through the motions of working, with no hope of really accomplishing their tasks.
18. Another internal symptom of overload is the number of times people expressed the wish that somehow the problem would just go away. Maybe the big customer will cancel the contract. Maybe the management will just slip the schedules by a year. Maybe the sales force will stop taking more orders. Maybe the company will fail and be purchased by a larger company.
19. One common way of wishing the problem would go away is to choose a scapegoat, who is the personified source of all the difficulties, and then wish that this person would get transferred, get fired, get sick, or quit. At XYZ, there are at least ten different scapegoats—some of whom have long gone, although the problems still remain.
20. Perhaps the ultimate emotional reaction to overload is the intense desire to run away. When there are easy alternatives for employees, overload is followed by people leaving the organization, which only increases the overload. The most perceptive ones usually leave first. When there are few attractive opportunities outside, as at XYZ, then people "run away" on the job. They fantasize about other jobs, other places, other activities, though they don't act on their fantasies. Their bodies remain, but their hearts do not.
The Software Dynamics of Overload
There are a number of reasons for the overload situation at organizations like XYZ, but underlying everything is the quality problem, which in turn arises from the changing size and complexity of the work. This means that simple-minded solutions like adding large numbers of people will merely make the problems worse. In order for management to create a manageable organization, they will have to understand the dynamics of quality. In particular, they will have to understand how quality deteriorates, and how it has deteriorated in their organization over the years. The XYZ company makes an excellent case study.
The quality deterioration at XYZ has been a gradual effect that has crept up unnoticed as the size and complexity of systems has increased. The major management mistake has been lack of awareness of software dynamics, and the need for measurement if such creeping deterioration is to be prevented.
The quality deterioration experienced at XYZ is quite a common phenomenon in the software industry today, because management seems to make the same mistakes everywhere—they assume that the processes that would produce quality small systems will also produce quality large systems. But the difficulty of producing quality systems is exponentially related to system size and complexity, so old solutions quickly become inadequate. These dynamics have been studied by a number of software researchers, but it is not necessary to go fully into them here. A few examples will suffice to illustrate specifically what has been happening at XYZ and the kind of actions that are needed to reverse the situation.
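The shape of this dynamic can be sketched with a toy model (purely illustrative; the growth rate, the per-person capacity, and the specific numbers are my own assumptions, not measurements from XYZ or from the research mentioned above): if the effort needed to assure quality grows exponentially with system size while team capacity grows only linearly with headcount, then staffing that comfortably handled small systems falls hopelessly behind on large ones.

```python
# Toy model of the size-vs-quality dynamic (illustrative only;
# all constants below are assumptions, not data from the article).

def quality_effort(size, base=1.1):
    """Assumed exponential cost of assuring quality at a given system size."""
    return base ** size

def team_capacity(headcount, per_person=5.0):
    """Assumed linear capacity: each person contributes a fixed amount."""
    return per_person * headcount

# Scale the system up 5x while also scaling the team up 5x:
for size, headcount in [(20, 4), (60, 12), (100, 20)]:
    effort = quality_effort(size)
    capacity = team_capacity(headcount)
    status = "coping" if capacity >= effort else "overloaded"
    print(f"size={size:3d} staff={headcount:2d} "
          f"effort={effort:10.1f} capacity={capacity:7.1f} -> {status}")
```

Under these assumptions the small system is easily covered, but at five times the size the exponential term dwarfs the linearly grown team—which is why "add more people" merely makes the overload worse, as the article argues.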
NOTE: The remaining two-thirds of this article describes these dynamics and corrective actions, and can be found as a new chapter in the book,
The book also details a number of the most common and distressing management problems, along with dozens of positive responses available to competent managers.
Labels:
bugs,
crisis,
errors,
failure,
management,
quality,
software,
systems,
testing,
unit-testing