Wednesday, February 04, 2009

Thoughts on Developer Testing

This morning I read Joel's From Podcast 38 and it reminded me how immature our industry is when it comes to developer testing. In the entry Joel says:
a lot of people write to me, after reading The Joel Test, to say, "You should have a 13th thing on here: Unit Testing, 100% unit tests of all your code."
At that point my interest is already piqued. Unit testing 100% of your code is a terrible goal, and I'm wondering where Joel is going to go with the entry. Overall I like the entry (which is really a transcribed discussion), but two things in it left me feeling uneasy.
  • Joel doesn't come out and say it, but I got the impression he's ready to throw the baby out with the bath water. Unit testing 100% of your code is a terrible goal, but that doesn't mean unit testing is a bad idea. Unit testing is very helpful, when done in a way that provides a positive return on investment (ROI).
  • Jeff hits it dead on when he says:
    ...what matters is what you deliver to the customer...
    Unfortunately, I think he's missing one reality: Often, teams don't know what will make them more effective at delivering.
I think the underlying problem is: People don't know why they are doing the things they do.

A Painful Path
Say you read Unit Testing Tips: Write Maintainable Unit Tests That Will Save You Time And Tears and decide that Roy has shown you the light. You're going to write all your tests with Roy's suggestions in mind. You get the entire team to read Roy's article and everyone adopts the patterns.

All's well until you start accidentally breaking tests that someone else wrote and you can't figure out why. It turns out that some object created in the setup method is causing unexpected failures after your 'minor' change created an unexpected side-effect. So, now you've been burned by setup and you remember the blog entry by Jim Newkirk where he discussed Why you should not use SetUp and TearDown in NUnit. Shit.
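To make that failure mode concrete, here's a minimal sketch (JUnit syntax, with a hypothetical Account class inlined so it stands alone) of how a shared setup object turns a 'minor' change into a mystery failure in someone else's test:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountTest {
    // minimal hypothetical Account, inlined so the sketch stands alone
    static class Account {
        private int balance;
        Account(int opening) { balance = opening; }
        void deposit(int amount) { balance += amount; }
        void withdraw(int amount) { balance -= amount; }
        int balance() { return balance; }
    }

    private Account account; // shared fixture: every test starts from this construction

    @Before
    public void setUp() {
        account = new Account(100); // a 'minor' change here ripples into every test below
    }

    @Test
    public void depositIncreasesBalance() {
        account.deposit(50);
        assertEquals(150, account.balance());
    }

    @Test
    public void withdrawDecreasesBalance() {
        // if setUp changes to open the account with 0, this test fails
        // for a reason that's invisible when reading only this method
        account.withdraw(50);
        assertEquals(50, account.balance());
    }
}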

You do more research on setup and stumble upon Inline Setup. You can completely relate, and you go on a mission to switch all the tests to xUnit.net, since it removes the concept of setup entirely.

Everything looks good initially, but then a few constructors start needing more dependencies. Since you moved object creation out of setup and into each individual test, every test creates its own instance of an object; now every test that creates that object needs to be updated. It becomes painful every time you add an argument to a constructor. Shit. Again.
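The new pain looks something like this sketch (hypothetical OrderProcessor and collaborators, inlined so it stands alone): with setup gone, construction is repeated inline, so every added constructor argument touches every test:

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class OrderProcessorTest {
    // trivial hypothetical collaborators and class under test
    static class Inventory { boolean inStock(String sku) { return sku.startsWith("in-stock"); } }
    static class Pricing { }
    static class OrderProcessor {
        private final Inventory inventory;
        OrderProcessor(Inventory inventory, Pricing pricing) { this.inventory = inventory; }
        boolean accept(String sku) { return inventory.inStock(sku); }
    }

    @Test
    public void acceptsInStockOrder() {
        // inline setup: each test constructs its own instance
        OrderProcessor processor = new OrderProcessor(new Inventory(), new Pricing());
        assertTrue(processor.accept("in-stock-sku"));
    }

    @Test
    public void rejectsUnknownSku() {
        // when OrderProcessor grows a third constructor argument,
        // this line, and every line like it, has to change
        OrderProcessor processor = new OrderProcessor(new Inventory(), new Pricing());
        assertFalse(processor.accept("unknown-sku"));
    }
}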

The Source of Your Pain
The problem is, you never asked yourself why. Why are you writing tests in the first place? Each testing practice you've chosen, what value is it providing you?

Your intentions were good. You want to write better software, so you followed some reasonable advice. But, now your life sucks. Your tests aren't providing a positive ROI, and if you keep going down this path you'll inevitably conclude that testing is stupid and it should be abandoned.

Industry Experts
Unfortunately, you can't write better software by blindly following the dogma of 'industry experts'.

First of all, I'm not even sure we have any industry experts on developer testing. Rarely do I find consistently valuable advice about testing. Relevance, which employs some of the best developers in the world, used to put 100% code coverage in their contracts. Today, that's gone, and you can find Stu discussing How To Fail With 100% Code Coverage. ObjectMother, which was once praised as brilliant, has now been widely replaced by Test Data Builders. I've definitely written my fair share of stupid ideas. And, the examples go on and on.

We're still figuring this stuff out. All of us.

Enlightenment
There may not be experts on developer testing, but there are good ideas around specific contexts. Recognizing that there are smart people with contextually valuable ideas about testing is very liberating. Suddenly you don't need to look for the testing silver-bullet; instead, you have various patterns available (some conflicting) that may or may not provide you value based on your working context.

Life would be a lot easier if someone could direct you to the patterns that will work best for you; unfortunately, we're not at that level of maturity. It's true that if you pick patterns that don't work well for your context, you definitely won't see positive ROI from testing in the short term. But, you will have gained experience that you can use in the future to be more effective.

It's helpful to remember that there aren't testing silver-bullets; that way you won't get led down the wrong path when you see someone recommending 100% code coverage or other drastic and often dogmatic approaches to developer testing.

Today's Landscape
Today's testing patterns are like beta software. The patterns have been tested internally, but are rarely proven in the wild. As such, the patterns will sometimes work given the right context, and other times they will shit the bed.

I focus pretty heavily on testing and I've definitely seen my fair share of test pain. I once joined a team that spent 75% of their time writing tests and 25% of their time delivering features. Not one member of the team was happy with the situation, but the business demanded massively unmaintainable Fit tests.

Of course, we didn't start out spending 75% of our time writing Fit tests. As the project grew in size, so did the effort needed to maintain the Fit tests. That kind of problem creeps up on a team. You start by spending 30% of your time writing tests, but before you know it, the tests are an unmaintainable mess. This is where I think Jeff's comments, with regard to writing tests that enable delivery, fall a bit short. Early on, Fit provided positive ROI. However, eventually, Fit's ROI turned negative. Unfortunately, by then the business demanded a Fit test for every feature delivered. We dug ourselves a hole we couldn't get out of.

The problem wasn't the tool. It was how the process relied on the Fit tests. The developers were required to write and maintain their functional tests using Fit, simply because Fit provided a pretty, business readable output. We should have simply created a nice looking output for our NUnit tests instead. Using Fit hurt, because we were doing it wrong.

The current lack of maturity around developer testing makes it hard to make the right choice when picking testing tools and practices. However, the only way to improve is to keep innovating and maturing the current solutions.

If It Hurts, You're Doing It Wrong
Doing it right is hard. The first step is understanding why you use the patterns you've chosen. I've written before about the importance of context. I can explain, in detail, my reasons for every pattern I use while testing. I've found that having motivating factors for each testing pattern choice is critical for ensuring that testing doesn't hurt.

Being pragmatic about testing patterns also helps. Sometimes your favorite testing pattern won't fit your current project. You'll have to let it go and move on. For example, on my current Java project each test method has a descriptive name. I maintain that (like methods and classes) some tests are descriptive enough that a name is superfluous, but since JUnit doesn't allow me to create anonymous test methods I take the path of least resistance. I could write my own Java testing framework and convince the team to use it, but it would probably hurt. The most productive way to test Java applications is with JUnit, and if I did anything else, I'd be doing it wrong.

I can think of countless examples of people doing it wrong and dismissing the value of a contextually effective testing pattern. The biggest example is fragile mocking. If your mocks are constantly, unexpectedly failing, you're doing something wrong. It's likely that your tests suffer from High Implementation Specification. Your tests might be improved by replacing some mocks with stubs. Or, it's possible that your domain model could be written in a superior way that allowed more state based testing. There's no single right answer, because your context determines the best choice.
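As a sketch of the trade-off (Mockito syntax; PriceService and Order are hypothetical, inlined so the example stands alone): the first test pins down how the implementation calls its collaborator and breaks when that detail changes; the second stubs the collaborator and asserts on observable state instead:

import java.math.BigDecimal;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class OrderTest {
    // hypothetical collaborator and class under test
    interface PriceService { BigDecimal quote(String sku); }
    static class Order {
        private final PriceService prices;
        private BigDecimal total = BigDecimal.ZERO;
        Order(PriceService prices) { this.prices = prices; }
        void add(String sku, int quantity) {
            for (int i = 0; i < quantity; i++) total = total.add(prices.quote(sku));
        }
        BigDecimal total() { return total; }
    }

    @Test
    public void fragileTest_specifiesHowTheCollaboratorIsCalled() {
        PriceService prices = mock(PriceService.class);
        when(prices.quote("ABC")).thenReturn(new BigDecimal("10.00"));
        Order order = new Order(prices);
        order.add("ABC", 2);
        // breaks the moment the implementation starts caching quotes,
        // even though the behavior the customer sees is unchanged
        verify(prices, times(2)).quote("ABC");
    }

    @Test
    public void robustTest_stubsTheCollaboratorAndAssertsOnState() {
        PriceService prices = mock(PriceService.class);
        when(prices.quote("ABC")).thenReturn(new BigDecimal("10.00"));
        Order order = new Order(prices);
        order.add("ABC", 2);
        // the stub supplies data; the assertion checks observable results
        assertEquals(new BigDecimal("20.00"), order.total());
    }
}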

Another common pain point in testing is duplicate code. People go to great lengths to remove duplication, often at the expense of readability. Setup methods, contexts, and helper methods are all band-aids for larger problems. The result of these band-aids is tests that are painful to maintain. However, there are other options. In the sensationally named entry Duplicate Code in Your Tests I list 3 techniques that I've found to be vastly superior to setup, contexts and helper methods. If those techniques work for you, that's great. If they don't, don't just shove your trash in setup and call it a day. Look for your own testing innovations that the rest of us may benefit from.

If something hurts, don't look for a solution that hurts slightly less, find something that is a joy to work with. And, share it with the rest of us.

Tests Should Make You More Effective
What characterizes something as 'effective' can vary widely based on your context.

Some software must be correct or people die. This software obviously requires thorough testing. Other software systems are large and need to evolve at a fairly rapid pace. Delivering at a rapid pace while adding features almost always requires a fairly comprehensive test suite, to ensure that regression bugs don't slip in.

Conversely, some software is internal and not mission critical. In that case, unhandled exceptions aren't really a big deal and testing is clearly not as high a priority. Other systems are small and rewritten on a fairly consistent basis, thus spending time on thorough testing is likely a waste. If a system is small, short lived, or less-important, a few high level tests are probably all you'll really need.

These example environments, and every other type of environment, share one common trait: you should always look at your context and see what kind of tests and what level of testing will make you more effective.

Tests Are Tools
The tests are really nothing more than a means to an end. You don't need tests for the sake of having tests, you need malleable software, bullet-proof software, internal software, or some other type of software. Testing is simply another tool that you can use to decrease the amount of time it takes to get your job done.

Testing can help you-
  • Design
  • Protect against regression
  • Achieve sign-off
  • Increase customer interaction
  • Document the system
  • Refactor confidently
  • Ensure the system works correctly
  • ...
Conclusion
When asking how and what you should test, start by thinking about what the goal of your project is. Once you understand your goal, select the tests that will help you achieve your goal. Different goals will definitely warrant using different testing patterns. If you start using a specific testing pattern and it hurts, you're probably using a pattern you don't need, or you've implemented the pattern incorrectly. Remember, we're all still figuring this out, so there aren't really patterns that are right; just patterns that are right in a given context.

[Thanks to Jack Bolles, Nat Pryce, Mike Mason, Dan Bodart, Carlos Villela, Martin Fowler, and Darren Hobbs for feedback on this entry]

Wednesday, January 21, 2009

Questions To Ask an Interviewer

If you've ever read tips on interviewing then you know it's a good idea to have questions ready to ask someone who's just interviewed you. If you're not good at remembering questions under pressure, you should write down a few and take the note with you.

Most of my important questions are answered in the interview: What does your software process look like, what tools do you use, etc. However, I have a few questions that don't usually come up during the normal course of an interview.
  • At what level are technology decisions made? Do the teams decide what tools and languages are used, or is it the architect, directors, the CTO or someone else? Assuming the team doesn't make the decision, what happens if the team disagrees with the decision maker?
  • What kind of hardware are developers given, and who decided that this was the ideal setup? If I want to reformat my Windows box and run Ubuntu, do I have that freedom?
  • How much freedom am I given to customize my work-space? If my teammates and I want to convert our cubes into a common area with pairing stations, what kind of hurdles are we likely to encounter?
  • What does the organization chart look like? If there are 2 levels between me and the CTO, do I need to follow the chain of command, or am I able to go directly to the CTO if I feel it's appropriate? What about the CEO? Am I able to get 10 minutes of the CEO's time?
  • What don't you like about working here?
The last one is really my favorite. People actually tend to be pretty honest about what they'd like to change at their organizations.

Obviously, the answers are going to vary greatly by the type of organization you're looking to join. If you're interviewing at Google, it's probably not easy to get on the CTO's or the CEO's schedule. So, I don't think there are right or wrong answers, but in context the answers can help guide whether the organization is a good fit for you.

Wednesday, January 14, 2009

The Fowler Effect

ThoughtWorks does its best to attract smart developers. It's no easy task. How do you convince the smartest developers in the world to join a company that requires 100% travel (in the US anyway) and become consultants? It's a tough sell. I've always believed that people join because of the other people they'll be working with. And, of the ThoughtWorks employees that people want to work with, Martin Fowler is generally at the top.

I, like so many other people, found ThoughtWorks because of Martin Fowler. I followed a link from his website, read about the company, and decided I wanted to give it a shot. After interviewing I did a bit more research about ThoughtWorks. I was definitely put off by the travel and the thought of being a consultant, but I found blogs by Jim Newkirk, Mike Mason, and several other ThoughtWorks employees. They were saying the things I wanted to hear and they seemed like people I wanted to work with. I definitely joined ThoughtWorks for the people, but Martin was the original reason I applied.

Not everyone joins ThoughtWorks because of Martin, but a large majority of people do. Employing Martin Fowler guarantees ThoughtWorks enjoys a steady stream of talented applicants. I've always jokingly referred to this as the Fowler Effect.

I wonder if other companies could benefit from employing a luminary.

The first obvious question is: what would the luminary do? I expect the answer would vary based on the luminary.

In Martin's case, he doesn't do a lot of project work, but he is very involved in ThoughtWorks and its projects. Martin actively participates in public discussions, visits projects, offers advice, and solicits input all the time. I'm sure employing Martin also helps ThoughtWorks sell projects. The cost-benefit analysis is pretty easy for ThoughtWorks and Martin, but I'm not sure how that translates to non-consulting organizations.

If your recruiting budget is the size of a luminary's salary, it might make more sense to hire a luminary who embodies the type of team you want to build. That luminary will probably be able to bring friends with similar philosophies and attract other candidates who are on the same page. In that case the math is fairly simple: you consider the luminary's salary part of the recruiting budget.

Other luminaries may be interested in splitting their time between project work and luminary activities. Depending on the knowledge that the luminary brings, 50% of their time might provide enough value when combined with their recruiting contributions to justify their salary.

Luminaries can also justify their employment with their network. Luminaries are often connected to people who are looking for feedback on bleeding edge technology. For example, a luminary could gain early access to something such as Microsoft's upcoming DSL solutions, MagLev, or some other type of game-changing software. Getting access to game-changing software before your competition could yield great benefits.

Universities can also use luminaries to attract students the way organizations use luminaries to attract employees.
As an example, many universities "employ" well known luminaries to lecture for them, yet those luminaries don't base themselves out of their campus 100% of their time, coming in to teach, perhaps one class a year. -- Pat Kua
It's clear that ThoughtWorks benefits from the Fowler Effect, but I think other organizations could also benefit more than they expect by employing a luminary. Luminaries not only bring networks, publicity, and experience, but they also attract other talented developers. The net result of employing a luminary and all the people they attract could be the difference between being good and being great.

Tuesday, January 13, 2009

Creating Objects in Java Unit Tests

Most Java unit tests consist of a Class Under Test (CUT), and possibly dependencies or collaborators. The CUT, dependencies and collaborators need to be created somewhere. Some people create each object using 'new' (vanilla construction), others use patterns such as Test Data Builders or Object Mother. Sometimes dependencies and collaborators are instances of concrete classes and sometimes it makes more sense to use Test Doubles. This entry is about the patterns I've found most helpful for creating objects within my tests.

In January of 2006, Martin Fowler wrote a quick blog entry that included definitions for the various types of test doubles. I've found working with those definitions to be very helpful.

Test Doubles: Mocks and Stubs
Mockito gives you the ability to easily create mocks and stubs. More precisely, I use Mockito 'mocks' for both mocking and stubbing. Mockito's mocks are not strict: they do not throw an exception when an 'unexpected' method is called. Instead, Mockito records all method calls and allows you to verify the ones you care about at the end of your test. Any method calls that are not verified are simply ignored. The ability to ignore method calls you don't care about makes Mockito perfect for stubbing as well.
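A minimal sketch of that style (Gateway, AuditLog, and PaymentService are hypothetical, inlined so the example stands alone): the same Mockito mock is stubbed where a return value matters and verified only for the calls the test cares about; anything else is recorded but silently ignored:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class PaymentServiceTest {
    // hypothetical collaborators and class under test
    interface Gateway { boolean charge(int cents); }
    interface AuditLog { void recorded(int cents); void debug(String message); }
    static class PaymentService {
        private final Gateway gateway;
        private final AuditLog log;
        PaymentService(Gateway gateway, AuditLog log) { this.gateway = gateway; this.log = log; }
        void pay(int cents) {
            log.debug("charging " + cents);             // incidental call
            if (gateway.charge(cents)) log.recorded(cents);
        }
    }

    @Test
    public void recordsSuccessfulPayments() {
        Gateway gateway = mock(Gateway.class);          // not strict: no expectations up front
        AuditLog log = mock(AuditLog.class);
        when(gateway.charge(100)).thenReturn(true);     // the same mock, used for stubbing

        new PaymentService(gateway, log).pay(100);

        // verify only the interaction this test cares about; the debug() call,
        // and anything else, is simply ignored rather than failing the test
        verify(log).recorded(100);
    }
}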

Test Doubles: Dummies
I didn't find many cases where I needed a dummy. A dummy is really nothing more than a traditional mock (one that throws exceptions on 'unexpected' method calls) with no expected method calls. In cases where I wanted to ensure that nothing was called on an object, I usually used the verifyZeroInteractions method on Mockito mocks. However, there are exceptional cases where I don't want to verify that no methods were called, but I would like to be alerted if a method is called. This is more of a warning scenario than an actual failure.

On my current project we ended up rolling our own dummies where necessary and overriding (or implementing) each method with code that would throw a new Exception. This was fairly easy to do in IntelliJ; however, I'm looking forward to the next version of Mockito, which should allow for easier dummy creation.
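Our hand-rolled dummies looked roughly like this sketch (MailSender is a hypothetical interface): every method throws, so any use of the object surfaces immediately instead of passing silently:

// a hand-rolled dummy for a hypothetical MailSender interface
public class DummyMailSender implements MailSender {
    public void send(String to, String body) {
        throw new RuntimeException("MailSender.send should never be called in this test");
    }

    public void sendBatch(java.util.List<String> bodies) {
        throw new RuntimeException("MailSender.sendBatch should never be called in this test");
    }
}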

Test Doubles: Fakes
My current project does make use of Fakes. We define a fake as an instance that works for and is targeted towards a single problem. For example, if your class depended on a Jetlang fiber you could pass in a fake Jetlang fiber that synchronously executed a command as soon as it was given one. The fake fiber wouldn't allow you to schedule tasks, but that's okay, it's designed to process synchronous requests and that's it.

We don't have a large number of fakes, but they can be a superior alternative to creating multiple mocks that behave in the same way. If I need a CreditCardProcessingGateway to return a result once, it's good to use a mock. However, if I need a CreditCardProcessingGateway to consistently return true when given a Luhn-valid credit card number (and false otherwise), a fake can be a superior option.
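A sketch of that kind of fake (assuming a one-method CreditCardProcessingGateway interface): instead of stubbing the same behavior into mock after mock, the rule, Luhn validation, lives in one reusable object:

public class FakeCreditCardProcessingGateway implements CreditCardProcessingGateway {
    // consistently returns true for Luhn-valid numbers, false otherwise
    public boolean process(String cardNumber) {
        int sum = 0;
        boolean doubleDigit = false;
        for (int i = cardNumber.length() - 1; i >= 0; i--) {
            int digit = cardNumber.charAt(i) - '0';
            if (doubleDigit) {
                digit *= 2;
                if (digit > 9) digit -= 9;
            }
            sum += digit;
            doubleDigit = !doubleDigit;
        }
        return sum % 10 == 0;
    }
}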

Concrete Classes
I'm a big fan of Nat Pryce's Test Data Builders for creating concrete classes. I use test data builders to create the majority of dependencies and collaborators. Test data builders also allow me to easily drop in a test double where necessary. The following example code shows how I'll use a test data builder to easily create a car object.
aNew().car().
    with(mock(Engine.class)).
    with(fake().radio().fm_only()).
    with(aNew().powerWindows()).
    build();
The (contrived) example demonstrates 4 different concepts:
  • A car can easily be built with a mock engine
  • A car can easily be built with a fake fm radio
  • A car can easily be built with a power windows concrete implementation
  • All other dependencies will be sensible defaults
Nat's original entry on the topic states that each builder should have 'sensible defaults'. This left things a bit open to interpretation, so we tried various defaults. In the end it made the most sense to have all test data builders use other test data builders as defaults, or null. We never use any test doubles as defaults. In practice this is not painful in any way, since you can easily drop in your own mock or fake in a specific test that requires it.

The builders are easy to work with because you know the default dependencies are concrete, instead of having to look at the code to determine if the dependencies are concrete, mocks, or fakes. This convention makes writing and modifying tests much faster.

You may have also noticed the aNew() and fake() methods from the previous example. The aNew() method returns a DomainObjectBuilder class and the fake() method returns a Faker class. These methods are convenience methods that can be statically imported. The implementations of these classes are very simple. Given a domain object Radio, the DomainObjectBuilder would have a method defined similar to the example below.
public RadioBuilder radio() {
    return RadioBuilder.create();
}
This allows you to import the aNew method and then have access to all the test data builders in a code-completable manner. Keeping the defaults in the create method of each builder ensures that any shared builders will come packaged with their defaults. You could also create a no-arg constructor, but I prefer each builder to have only one constructor that contains all the dependencies necessary for creating the actual concrete class.

The following code shows how all these things work together.

// RadioTest.java
public class RadioTest {
    @Test
    public void shouldBeOn() {
        Radio radio = aNew().radio().build();
        radio.turnOn();
        assertTrue(radio.isOn());
    }
    // .. other tests...
}

// DomainObjectBuilder.java
public class DomainObjectBuilder {
    public static DomainObjectBuilder aNew() {
        return new DomainObjectBuilder();
    }

    public RadioBuilder radio() {
        return RadioBuilder.create();
    }
}

// RadioBuilder.java
public class RadioBuilder {
    private int buttons;
    private CDPlayer cdPlayer;
    private MP3Player mp3Player;

    public static RadioBuilder create() {
        return new RadioBuilder(4,
                CDPlayerBuilder.create().build(),
                MP3PlayerBuilder.create().build());
    }

    public RadioBuilder(int buttons, CDPlayer cdPlayer, MP3Player mp3Player) {
        this.buttons = buttons;
        this.cdPlayer = cdPlayer;
        this.mp3Player = mp3Player;
    }

    public RadioBuilder withButtons(int buttons) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public RadioBuilder with(CDPlayer cdPlayer) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public RadioBuilder with(MP3Player mp3Player) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public Radio build() {
        return new Radio(buttons, cdPlayer, mp3Player);
    }
}

As you can see from the example, it's easy to create a radio with sensible defaults, but you can also replace dependencies where necessary using the 'with' methods.

Since Java gives me method overloading based on types, I always name my methods 'with' when I can (as the example shows). This doesn't always work; for example, you may have two different properties of the same type. This usually happens with built-in types, and in those cases I create methods such as withButtons, withUpperLimit, or withLowerLimit.

The one other habit I've gotten into is using builders to create all objects within my tests, even the Class Under Test. This results in more maintainable tests. If you use the Class Under Test's constructor explicitly within your tests and you add a dependency, you'll end up having to change each line that creates a Class Under Test instance. However, if you use a builder you may not need to change anything, and if you do have to change anything, it will probably only be for a subset of the tests.
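As a sketch of the difference, using the Radio example above (illustrative fragments, not full tests): the first style breaks in every test when Radio gains a dependency; the second only requires updating RadioBuilder.create():

// direct construction: adding a fourth Radio dependency means editing every test
Radio radio = new Radio(4, cdPlayer, mp3Player);

// builder construction: only RadioBuilder.create() changes
Radio radio = aNew().radio().build();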

Conclusion
I'm a big fan of Test Data Builders and Test Doubles. Combining the two concepts has resulted in being able to write tests faster and maintain them more easily. These ideas can be incrementally added as well, which is a nice bonus for someone looking to add this style to an existing codebase.

Tuesday, January 06, 2009

The Cost of Net Negative Producing Programmers

It's common to compare ourselves to doctors, lawyers, and other highly-paid professionals. Unfortunately, almost every comparison breaks down dramatically, very quickly.

When doctors screw up (massively) they get sued. When (permanently employed) programmers screw up they get let go, and they go find a new job, usually with more responsibility and a pay raise. There are no ramifications for software malpractice.*

Successful lawyers put in years to learn their craft, then they put in years trying to become partner. Eventually they get to make the firm defining decisions, but only after maturing their abilities and proving themselves. In our industry you need no formal education. If you show the slightest sign of competency you'll quickly be given the keys to the kingdom. Architect promotions are not hard to come by.

I had an 'architect' title with no college degree and only 3 years of experience. At that point I'd never heard of unit testing, refactoring, or design patterns. In those days I was picking up information from O'Reilly In a Nutshell books and MSDN. At the same time I was leading a team building software for the US government (via contracts the company had won, not employed directly by them). I was massively under-skilled, and yet, there I was writing software that troops would need to stay alive.**

I wish my experience were isolated, but while I was consulting for 3.5 years I saw countless examples of exactly the same story.

I know the argument: demand is so high, we don't have another choice. I reject this argument on the basis that most good programmers spend the majority of their time fixing problems created by terrible programmers.

Good programmers fix problems created by terrible programmers in various ways. They can directly address problems by rewriting poorly written custom code. The less obvious way good programmers address poorly written code is when they write custom code because all the tools a company could potentially buy instead of building are terrible. Would so many companies write their own time tracking, project management, bug tracking, expense tracking, and other internal tools if they had reasonable commercial options?

The argument that demand is too high completely ignores the opportunity cost of working with NNPPs.

The next common argument: We don't have that many NNPPs. Really? Write a few blog entries, attend a few conferences, do some consulting. I think you might change your mind. And, remember, the people commenting on blog entries or attending conferences are the ones who actually care about what they are doing. I'm horrified to think what the less interested NNPPs produce.

Here's a gem from a comment on my last blog post.
I honestly think I can do about 3000 lines of good code in a day... -anonymous
The commenter actually thinks writing 3000 lines of code a day is a good thing.

If you read through the comments you'll find another commenter who doesn't understand the difference between open classes and inheritance, but overall the majority of comments are well thought out, reasonable responses. Several people were able to create logical responses without being emotionally attached to Java. That gave me some hope that things are a bit better than I generally picture them. But, then I checked out the dzone score of the entry.

Now, maybe the post just isn't well written or educational in any way; that would be fine. But, when I read the comments, I'm back to being disgusted by our industry. In this day and age, is it really reasonable to think that Java doesn't have limitations? I would say no. Java is a great tool for certain tasks, but there are plenty of things to dislike about it. I wouldn't want to work with people too blind to notice that.

Another common statement is: In every industry you have people who don't care about their jobs. I don't think that's a good comparison either. Bad doctors are sued until they can't afford malpractice insurance. Lawyers, very publicly, lose cases and are fired. In those industries, if you aren't contributing towards moving things forward, you're quickly exited.

Professional sports is an industry that probably has very few professionals that don't care about their job. If you aren't good enough, you don't make it, period. In basketball, there's no position for someone who is only good enough to dribble. If you're good enough, you're paid well. If you aren't, you don't make it. It's that simple.

So what is the cost of NNPPs? I'd say there are several ways to answer that question.

The first obvious cost is the opportunity cost that companies incur when they can't provide quality software to their customers. This can translate to significantly lower revenue due to lost or cancelled subscriptions or licenses. It can be the difference between the success and the failure of a start-up. For businesses, I would say the cost is epic. Is it any surprise that technology companies are some of the most volatile stocks?

There are other costs as well. When software fails, people don't get the aid they need. This can mean money, hospital care, legal direction, and many other life-altering situations. Poorly designed software can cause death, and yet rarely is that kind of thing considered by a programmer.

I heard once that in Great Britain's MOD, if you design software for a plane, you go up in the test plane when the software is beta-tested. If all programmers were held to that level of accountability, how many do you think would still be in our field? How many would you want to collaborate with before you went up in the plane together?

Of course, we don't all write life-threatening software, but does that give us an excuse for lowering our colleague-quality requirements? Picture the things we could do if we didn't spend most of our time making up for terrible programmers and you'll know why I'm passionate about the topic.

Terrible programmers also slow us down as an industry. When I talk about Open Classes people are terrified. They always say: That's fine on your team, but we could never work with a language that has Open Classes. Is that a problem with Open Classes or a problem with the team? I worked on several teams, large and small, that used Open Classes diligently and I can't remember a single problem we ever had with them. Much the opposite, the teams were often, clearly more effective because they were talented and the language let them solve problems in the most effective way.

Java and C# are not silver bullets. The languages are good solutions to a certain class of problems; using them for problems that could be better solved with a different language stagnates the growth of other languages. The longer great programmers use a less-effective tool for the job, the less time they have to work with and mature languages that are more suitable. As a result our entire industry loses out.

There's also a cost to your future colleagues. There's a big difference between an NNPP and someone new to the industry. Someone new to the industry benefits greatly from a mentor, but what if the mentor is an NNPP? Some NNPPs do small-scale damage in isolated components of applications. But, the most insidious NNPPs are the architects whose ideas belong on The Daily WTF. These NNPP architects build entire armies of NNPPs around them and single-handedly waste millions, if not billions, of dollars annually. And, potentially worse, they ruin, or at least drastically hurt, the careers of eager-to-learn teammates.

The Cost of NNPPs is high enough that it's become my soapbox issue. But, truthfully, I'm not saying anything really new, so is there any hope for our industry?

I do think there are things we can do to help move the industry in the right direction. Good programmers can refuse to work with bad programmers. That might mean moving to an organization where that's a goal, or making that a goal of your current organization. Providing negative feedback directly to an NNPP teammate is always hard, but I do believe the ROI justifies the action. It's also helpful to provide that feedback to your manager, so the manager knows your opinion of your teammates' skill levels. You can also suggest that managers look closely at employees who refuse to take advantage of training budgets. Lastly, you could suggest moving developer compensation to be based on the success of the project. A royalties model would be really interesting.

And, if you have a blog, you could write your own entry expressing your opinions on the topic.

* The exception being freelance developers, who are the minority, at least here in the US.
** Thank God, the government never ended up using the software we delivered.
