Monday, July 31, 2006

Death of NDoc

I first caught the news on Bill Wagner's blog that NDoc 2.0, the documentation tool for .NET developers, was losing its main developer and motivating force, Kevin Downs. (And incidentally, I never saw anything useful about it on either Digg or Technorati. A simple Google search is still the best place to look.) Here's what Kevin had to say in an email that was quoted in dozens of blogs:

As some of you are aware, there are some in the community who believe that a .Net 2.0 compatible release was theirs by-right and that I should be moving faster – despite the fact that I am but one man working in his spare time...

This came to a head in the last week; I have been subjected to an automated mail-bomb attack on both my public mail addresses and the ndoc2 mailing list address. These mails have been extremely offensive and resulted in my ISP temporarily suspending my account because of the traffic volume.


The standard line of bloggers has been, more or less: What a shame, what a loss to the community, why aren't these mailbombers contributing, that's what happens to open source projects.

It certainly is a big loss. But to be honest, I don't see it as a huge deal. Bill sees it as a problem with the whole open source software model, which I disagree with; I think the Asterisk project is one counterexample. The email, to me, has a bit of a defensive tone, as if the writer has lost all his enthusiasm for the project and is looking for an excuse to get out of it. (I've certainly been in that position, and it had nothing to do with open source!) Is NDoc really that heavily used? Doxygen has the advantage of working with more languages, so it's my preferred tool, but if that many people are interested in using NDoc, surely someone can step up as a new administrator, even if the project languishes for a while.

And a mailbomb attack? Do those really still work? I would have thought any administrator could block a few IPs and stop it. I assume it was the product of someone's bot army, but that raises another point: anyone can launch a mailbomb or DoS attack. Make one person mad online, even over a perceived rather than an actual insult, and the attack can come. If you're a small organization, you just have to weather the storm and move on.

I'm not saying Mr. Downs made the wrong decision; far from it. It's his life and his work and we should be grateful for whatever he is willing to donate to the community. But let's accept it and move on without getting huffy about it.

Oh, and maybe I better see if Doxygen could use any extra coders...




Customer Affinity and UI design

Martin Fowler discusses the importance of being attuned to the business side of software development. I especially liked this quote:

I've often heard it said that enterprise software is boring, just shuffling data around, that people of talent will do "real" software that requires fancy algorithms, hardware hacks, or plenty of math. I feel that this usually happens due to a lack of customer affinity.

I've heard this too, in spirit at least, and one of the reasons is that those people of talent don't believe that UI design is "real" software. Of course, the user interface is where you have the most opportunity to affect how customers work and whether they enjoy your software. In the last few years, UI design has started to gain a little more respect in the community, but it is still one of the areas of software design that remains an art rather than a science. What are your favorite sites for discussing UI design?

Tuesday, July 18, 2006

Finding holes in the process

Ever done a process review? It's one of those things that gets done, formally or informally, when a software company is trying to grow from small to large. In my experience, it usually happens like this: a manager or two or three get together, decide on some tool they like, or have used before, and think would be useful for source control, or bug tracking, or building, and then pass the edict down to the programmers: "Okay guys, from now on we use OnTime for all bug reports." The programmers nod politely, get on with the business at hand, and may even enter a few things into OnTime if they remember.

In a few months, the managers realize that nobody's paying much attention to OnTime, and they go and bug the programmers. "Hey guys, let's use this bug tracker, okay? We paid a lot of money for it." The programmers start entering a few more things into OnTime, if they remember, but they grumble about it: why waste time on this busywork? The programmers aren't happy, the managers aren't happy, and communication is breaking down badly.

How do you avoid this? Don't just nod politely when the tool is introduced; attack it. Of course, if it's a tool you haven't used before, you won't be able to spot its weaknesses right away. But try to understand the workflow. Bug the manager until he makes it clear to you. He'll probably end up saying something like "Each bug goes from Entered to Accepted to Fixed to Tested to Released."

That's a pretty standard workflow. But now you can start to poke holes in it. Has anyone thought through the failure steps?

"Okay, so what if it's a bogus bug? I'm not going to accept it then."

"Hmm, that's true. Maybe we should add a Rejected state."

"Sounds good. What if Testing fails?"

"Umm, the test group should just set it back to Entered, and it can cycle through again."

"Okay, but what if that happens the week before the release? Do we need to put off the release until the bug gets fixed? Or can we hold off on it until the next release?"

"Ummm..."

Processes tend to break down around the failure points. If every bug took the path Find/Fix/Test/Release, software development would be very simple and the workflow would be completely linear. But at every step along the line, it needs to be clear what happens on a failure. Does it go back to the previous step? Farther back? Can we ignore it? A clear workflow with known failure paths will go a long way towards making any software project run more smoothly.
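To make those failure paths concrete, here's a rough sketch of the workflow above as a transition table, in Ruby since that's what our build scripts are written in. The Rejected and Deferred states and the event names are just labels I made up for illustration; the point is that every state has an explicit answer to "what happens when this step fails?"

# The happy path is Entered -> Accepted -> Fixed -> Tested -> Released;
# Rejected, Deferred, and the cycle back to Entered are the failure
# paths the conversation above argues for.
WORKFLOW = {
  :entered  => { :accept => :accepted, :reject => :rejected },    # bogus bug? reject it
  :accepted => { :fix => :fixed },
  :fixed    => { :pass_test => :tested, :fail_test => :entered }, # failed test cycles back
  :tested   => { :release => :released, :defer => :deferred },    # or hold for the next release
  :deferred => { :schedule => :entered },
  :rejected => {},   # terminal states
  :released => {}
}

class Bug
  attr_reader :state

  def initialize
    @state = :entered
  end

  # Move the bug along the workflow, or fail loudly if the transition
  # isn't defined -- which is exactly the kind of hole you want to find early.
  def advance(event)
    transitions = WORKFLOW[@state]
    unless transitions.has_key?(event)
      raise ArgumentError, "no '#{event}' transition from #{@state}"
    end
    @state = transitions[event]
  end
end

bug = Bug.new
bug.advance(:accept)     # => :accepted
bug.advance(:fix)        # => :fixed
bug.advance(:fail_test)  # back to :entered, the cycle the testers described

Writing it down this way makes the missing transitions jump out; the "Ummm..." cases above are just entries nobody has filled in yet.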

Thursday, July 13, 2006

Build part 3

Looks like the build is finally up and running, and we've completed a few builds that testing seems to approve of. I finally moved the virtual machine over to a box with a decent amount of power behind it, and that got things to pick up a little; but we were certainly far from the James Shore ideal of being able to download and build immediately... at least not after I implemented my idea of moving the source code to a different drive so the C drive could be more easily restored if needed. It took quite a bit of time to dig out all the references to c:\Prosolv\build and replace them with environment variables!
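Most of the change boiled down to funneling every path through one lookup instead of scattering the literal string around the Ruby scripts. Roughly like this; PROSOLV_BUILD_ROOT and the subdirectory names are made-up examples for illustration, not necessarily what we ended up with:

require 'pathname'

# Read the build root from the environment, falling back to the old
# hard-coded location so unconverted machines keep working.
# (PROSOLV_BUILD_ROOT and the directory layout below are illustrative names.)
BUILD_ROOT = Pathname.new(ENV['PROSOLV_BUILD_ROOT'] || 'c:/Prosolv/build')

SOURCE_DIR = BUILD_ROOT + 'src'
OUTPUT_DIR = BUILD_ROOT + 'output'

puts "Building out of #{BUILD_ROOT}"

The lookup itself is trivial; the tedious part was finding every script that still hard-coded the old path.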

But our mishmash of Ruby scripts is going again. We have our summer intern working on a new build process: he's evaluated various tools and chosen one called Visual Build, which we'll move to at some point, when he's declared it ready.

My friends Andy and Sushil have both just had babies. Congratulations, you guys!