Vehement Mediocrity
There are several blogs on Artima this week that appear to argue that you sometimes have to reduce quality to meet a deadline.
Balderdash!
In my humble opinion, the insight that separates amateurs from professionals is that velocity is a direct function of quality. The higher the quality, the faster you go. The only way to go fast is to go well.
Novices believe that quality and velocity are inversely related. They think that hacking is fast. They haven't yet recognized what professional developers know all too well: that even a little bit of hacking is slower than no hacking at all.
Every time you yield to the temptation to trade quality for speed, you slow down. Every time.
People just don't understand that Cheap is Dear and Dear is Cheap. The time spent testing may be high in cost, but in the long run you save money because quality comes as a result. Test-driven development is a classic example of how to develop a quality application.
Hear, hear! I'm tired of repeatedly making the argument that testing saves time. Why can't people understand that concept? I have literally had a testing manager tell me that we couldn't afford the time to implement unit tests because of the schedule. That same manager then had the bloody gall to complain about the quality of the application during the testing phase a month later.
Do people just not get it?
In my experience, a more appropriate title might have been Vigilante Mediocrity. When people cut tests, they believe they are doing what's best for the team, taking matters into their own hands and skipping tests or doing only cursory ones. In the end, velocity rarely increases even in the short term: integration becomes a nightmare, bug fixes consume a significant portion of developers' time, and any thought of refactoring is out the window, since it's impossible to tell whether functionality has changed.
Agreed... But Uncle Bob, what do you think about the test coverage issue? Is test coverage a good measure of quality? And if not, how do we really know that our tests are producing quality code? In my development efforts I'm always looking for the "best" way to do things, and I avoid hacks at all costs. That being said, how can I be sure that my solution is of THE highest quality? Perhaps Object Mentor could create some type of tool that can test for the SOLID principles listed on my mousepad. Had any good sushi lately?
I think code coverage is a good metric. I've used Clover, and it seems to work well. There are lots of other tools. You can never be sure that you've got the best solution, nor should that be the goal, since "best" is probably not definable. You want the best solution you can think of in the time allowed. As for metrics tools, have you seen the FitNesse fixtures that I posted on this site a few weeks back? See (.ArticleS.UncleBob.StableDependenciesFixture). Yes, I've had some very good sushi lately!
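To see why I think the metric earns its keep, here is a tiny hypothetical illustration (the Fee class and its numbers are invented for this comment, not taken from any real project). The test below passes, yet a coverage tool such as Clover would flag the branch it never exercises:

import junit.framework.TestCase;

// Hypothetical class with two branches; only one is exercised below.
class Fee {
    static double overdraftFee(double balance) {
        if (balance < 0) {
            return 25.0;  // penalty branch: never run by this suite
        }
        return 0.0;
    }
}

public class FeeTest extends TestCase {
    // Green bar, but the negative-balance branch is never executed.
    // A coverage report exposes exactly that gap.
    public void testNoFeeOnPositiveBalance() {
        assertEquals(0.0, Fee.overdraftFee(50.0), 0.001);
    }
}

A passing bar alone can't tell you the penalty path went untested; the coverage number can.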
Ah, very nice! I hadn't seen that post.
I generally agree, but what about The King's Dinner, which teaches us about the relationship of Resources, Quality, Scope, and Time? http://www.xprogramming.com/xpmag/kings_dinner.htm
A better model for this is what professional chefs call "mise en place": have the things you need available and work clean. You just work faster that way.
Skimping on quality can save you time. If you don't need to release working software, that is. Sometimes I wish I didn't have to deliver my software, but then again, chances are that no one would pay me if I just wrote code for fun. In the real world, poor quality hurts you very soon. It hurts you before the benefit of having the software helps you...
I think there's a deeper reason, and that is that the marginal utility of tests written after the code has a completely different shape than the marginal utility of tests written as part of TDD.
If you code and then test, the first compile is going to show you a lot of defects. Then the first attempts to run it will show you more. After that, each test you write is less likely to show you more defects that need fixing - the law of diminishing returns sets in, or the 80-20 rule if you prefer that term.
TDD is exactly the opposite. As long as you only write code in response to a failing test, you'll continue moving along, and encounter relatively few defects. As soon as you start writing code that isn't required to fix a failing test, you start getting into trouble big time.
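To make that cycle concrete, here is a minimal sketch of one red-green step in Java with JUnit; the Discount class and its pricing rule are invented purely for illustration.

import junit.framework.TestCase;

// Red: write the failing test first. Discount does not exist yet,
// so this cannot even compile, let alone pass.
public class DiscountTest extends TestCase {
    public void testTenPercentOffAtOneHundred() {
        assertEquals(90.0, new Discount().apply(100.0), 0.001);
    }
}

// Green: write only enough code to make the test pass. (Strict TDD
// might even hard-code 90.0 first and generalize under a second
// failing test.)
class Discount {
    double apply(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}

Every line of Discount exists because a test demanded it; nothing speculative gets in.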
If all you know is code-and-fix, then TDD is very counterintuitive. That's why a lot of managers think they have a quality knob they can turn. They did with code-and-fix; they don't with TDD.
The other reason is that novices at anything tend to get rather upset when you disturb what they think they know: they're not all that confident yet, and telling them a different way just confuses them.
Why are we always striving to meet this ultimate level of quality when it has never really been clearly defined? Better yet, once it is defined, it gets changed, and some other approach becomes the better solution. I'm not against wanting to produce a quality product; I'm just questioning the metrics for achieving quality and the extent to which one needs to go to get there.
Can anyone define hacking? Or can anyone explain when a developer is walking that fine line between producing quality code and hacking?
Are we as developers trying to reach a level of “ZEN” within our code?
Is it fair to a customer to have to wait for us to reach this level?