Thursday, December 06, 2007

A Facebook feed for the open web

I really don't care that much for Facebook. I don't watch enough movies to take the little quizzes, and I'm not really all that interested in throwing a vampire at my friends, or whatever those weird little applications are supposed to be. I hear that some people use it for professional networking, but most of the pros I know are on LinkedIn, and so am I, and that seems to be sufficient. Not like I have a zillion contacts, but all I really want to know for most of these people is where they live, where they work, and how I can get hold of them if I need to. LinkedIn works brilliantly for that.

I do like the Facebook minifeeds, though. A minifeed, if I understand correctly, is an aggregation of all the things that a Facebook user is doing on Facebook - updating status, adding friends, using applications. For each friend, getting updates on what they're doing moment-by-moment on Facebook is interesting, and the Facebook homepage aggregates all my friends' feeds into a single one and sorts it by time. So when I do log on to Facebook, I can see at a glance what all these people are doing, at least in the last few hours.



Picture by Somewhat Frank

But there's plenty of stuff on the open web that could go into a minifeed just as easily. A lot of sites are making sure they have Facebook applications now, but not every one - and who wants to rely on a Facebook app for something that isn't really anything more than an RSS feed?

So, I decided to put my own life-feed together. Unfortunately, finding an application that simply turns a bunch of RSS and Atom feeds into one publicly available feed turned out to be harder than I expected. If you know of an easy solution, tell me about it - but keep in mind that of the first four feeds I tried to combine, three were different enough to break every solution I tried: Google Reader shared items, this blog, and my LibraryThing book reviews.

I ended up creating a web page directly rather than creating a feed - I didn't feel like learning all the ins and outs of RSS or Atom. So, if you want to follow my life, almost minute by minute, check out this page - or just check out my home page, which displays that page in a small iframe, which is how I intended to use the feed anyway. You can't subscribe to my life just yet, but maybe that will be coming soon!

Along with the feeds mentioned above, the page aggregates my Twitter posts, and soon I'll add my Flickr pictures and maybe Delicious, Coastr, or Zelky if they have feeds in the format I need. I'm looking forward to having my own life feed!
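If I ever do turn the page into a real feed, the brand-new SyndicationFeed classes in .NET 3.5 look like one plausible route. Here's a minimal sketch of the merging step, with placeholder URLs - though given how nonstandard some of my feeds turned out to be, it might choke on them too:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication; // new in .NET 3.5
using System.Xml;

class LifeFeed
{
    static void Main()
    {
        // Placeholder URLs - substitute the real feeds here.
        string[] urls = {
            "http://example.com/blog.rss",
            "http://example.com/reviews.rss"
        };

        List<SyndicationItem> items = new List<SyndicationItem>();
        foreach (string url in urls)
            using (XmlReader reader = XmlReader.Create(url))
                items.AddRange(SyndicationFeed.Load(reader).Items);

        // One combined life feed, sorted newest-first.
        SyndicationFeed merged = new SyndicationFeed(
            "My life feed", "Everything I'm doing, in one place", null,
            items.OrderByDescending(i => i.PublishDate));

        using (XmlWriter writer = XmlWriter.Create("lifefeed.xml"))
            merged.SaveAsAtom10(writer);
    }
}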

Monday, November 19, 2007

Pair Programming vs. Code Reviews

Jeff Atwood over at Coding Horror is asking for comments on the relative efficiency of these processes. We do both at my company, and while I don't think I have anything truly original to add, I come down on the agile side of the discussion (which should not be surprising to you if you read this blog often!)

The comments are already coming in complaining about pairing. I noticed these two particularly:

the obvious conclusion to this is double the hours per project, at minimum (and I'd expect that you would work slower if you had to discuss or explain stuff to someone else the whole day).

I would freak out if someone would watch me every the time I code (and also has a keyboard to interupt me lol)

Sort of the standard responses to pair programming. I'm not so experienced at the art that I can really say the hours don't double, maybe they do - but what I can say is even if the hours are doubling, the code quality is squared. Maybe it's just a commentary on what lousy code I produce by myself, but there is a big difference when someone else is there looking at the code, even if it's only the "navigator" effect, where the person who isn't actually at the keyboard can allocate the memory space to go back and remember any refactorings or other cleanup that needs to be done. As far as working slower, there are only two possibilities: first, that the other person doesn't know about the code as well as you do, in which case the knowledge transfer makes the whole thing worthwhile, or second, that there are a few ways of doing things and you need to decide which way is best. The selection you make when coding by yourself might easily not be that one.


Freak out if someone watched you code? Dude, is your code really that bad?

As far as code reviews go, I find them almost unnecessary when pairing. Some teams do peer-review-before-checkin, which I don't really care for - I just can't grok the concept the code is trying to get across from staring at it for a few seconds while someone explains it to me, though I suppose some people can. But we do code reviews for two things: first, to go over legacy code - we have plenty of that in our application - and second, to go over code that's just been checked in. This isn't 100% useful either, but on the other hand we have very few development meetings, and sometimes it's worth it just so someone can point out, "Oh, this should have been done using this brand new language feature," or, "We have a custom library that already handles exactly this case - can we use it here?"

So code reviews can be worthwhile, and they are absolutely necessary in a non-pairing environment. The big thing to watch out for is spending a lot of time debating your internal coding standards, as I've written about before. But my feeling is that code review is not as useful as pair programming.

Sunday, November 11, 2007

iFrame scroll to anchor problem

So on my basketball fan page, HoosierBall, I was working on the schedule display. I had originally set up a simple widget from Zvents, but after using that for two years Zvents apparently changed their entire business model, from events created by users to events created by businesses. Every Hoosier game had already been entered via StubHub, the widget that I had been using no longer worked at all, and as far as I could tell there was no replacement.

But it's not like the schedule display needs to be real complicated. I tossed it into an iFrame, stuck the schedule on a separate page, and wrote some simple Javascript to scroll to a specific game's anchor based on the current date.

But what's this? When the iFrame scrolls, the entire page jumps down to the iFrame to display it. That's not what I wanted, but I couldn't for the life of me figure out a way to stop it from happening, until I finally ran across Jim Epler's blog entry explaining how he simply scrolled the main page back to the top after setting the anchor. So you set the location in the iFrame, the main page jumps down, then you set it back to the top. It's not pretty, but it works. Here's the code in the iFrame:


// Jump to the right game's anchor inside the iframe (anchorname presumably
// includes the leading "#"); replace() avoids adding a browser history entry.
location.replace(location.href + anchorname);
// The outer page has now jumped down to the iframe, so put it back on top.
parent.window.scrollTo(0, 0);


Thanks, Jim!

Thursday, November 08, 2007

Some test code smells

More of my writeup on the xUnit session I went to nearly two months ago. I hope someday to make all of my note-taking blog entries coherent to a larger audience :)

There are a couple of competing dynamics you get when writing tests: the first is that, in general, less code is better than more code. You want the code to express the concepts you need to express without any extra cruft, without huge globs of copy/pasted code here, there, and everywhere that is a nightmare to maintain. But you need tests as well, and tests are either more code, or more people, and people are one heck of a lot more expensive than code. So it's not at all unlikely that you would have as much test code as production code.

But those tests have to be maintained, so it behooves us to figure out the best way to do that, bearing in mind that the goals of test code are not the same as those of production code. So, here are some problems, or code smells, specific to test code that you might run into:

1. Conditional test logic. A lot of people like to say that one assertion per test is plenty. That seems like unreasonable test gold-plating to me, but if you go so far as to put an if statement in the middle of a test, you need multiple tests covering each branch of the statement, every time. Or the condition might simply be an assertion in disguise, where you're saying if (x) keep testing; else assert false. No point in that - just assert x at the beginning and let the code blow up if it needs to. This really helps the readability of the test report, too, especially if all the report says at the end is "The assertion FALSE occurred." Not helpful, whereas knowing directly from the report that X failed is much more useful.
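Here's a minimal NUnit-style sketch of the smell and the fix - Widget and Repository are made up for illustration:

using NUnit.Framework;

public class Widget { public int Size; }

public static class Repository
{
    // Hypothetical lookup - stands in for whatever the test exercises.
    public static Widget Find(string name)
    {
        return name == "gadget" ? new Widget { Size = 42 } : null;
    }
}

[TestFixture]
public class ConditionalLogicSmell
{
    [Test]
    public void Smelly()
    {
        Widget w = Repository.Find("gadget");
        if (w != null)
            Assert.AreEqual(42, w.Size); // silently skipped when w is null...
        else
            Assert.IsTrue(false);        // ...and the report just says FALSE
    }

    [Test]
    public void Better()
    {
        Widget w = Repository.Find("gadget");
        Assert.IsNotNull(w);             // fails up front, with a precise message
        Assert.AreEqual(42, w.Size);
    }
}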


2. Hardcoded test data

This problem is related to a principle that computer science profs have been tossing around since the beginning of time: never put numbers or strings directly in code; always define them somewhere as constants instead, so they can be easily changed. Not a bad principle; certainly it's ideal that anything displayed to the user can be easily modified to use another language, so you don't want a bunch of MessageBox.Show( "Are you sure you want to do this?" ) statements scattered through the code. For numbers, though, my general rule is that a value doesn't need to be a constant unless it shows up more than once.

But in a sense, just about every number shows up more than once if written properly: at least once in the code, and at least once in the test for that code. Say for example you're testing a Price object with this line of code:

Assert.Equal( 14.00m, CreatePrice().Retail ); // expecting $14

CreatePrice() is part of your fixture, and it sets the list price to $20. Your Price object knocks off 30% to come up with the Retail number.

But now you've got the same number in there twice! See it? $14 is in there, and so is 70% of $20. The same number.

One fix is to move everything to constants. Presumably the Price object has a GetDiscount() method, so you could make the $20 into a constant ListPriceInDollars and change the expected amount to ListPriceInDollars * GetDiscount(). Still pretty verbose, but not really bad for this small example. A better solution can be to create an expected object to compare against what comes back from CreatePrice(). In the ideal case, your test then simplifies to

Assert.Equal( ExpectedPrice, CreatePrice() );

which would cover a boatload of other comparisons as well as your Retail value.
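For that one-liner to work, Price just needs value-based equality, so the framework can compare the whole expected object against the real one. A sketch of the idea, with invented fields:

public class Price
{
    public decimal List;
    public decimal Retail;

    // Value-based equality is what lets a single Assert.Equal compare the
    // entire expected object against the one the fixture creates.
    public override bool Equals(object obj)
    {
        Price other = obj as Price;
        return other != null && List == other.List && Retail == other.Retail;
    }

    public override int GetHashCode()
    {
        return List.GetHashCode() ^ Retail.GetHashCode();
    }
}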

Wednesday, November 07, 2007

Request could not be submitted for background processing

Boy, don't you hate it when the request could not be submitted for background processing? I know I do.

This is a rather inexplicable error message that Crystal Reports pops up. If you search for the message, you see a few ideas for solutions, but what I eventually did didn't seem to be on anyone's list, so I'm adding this post to the search in case it's helpful.

In my case, the big clue was that we had just modified the report viewer so it could pick up data from alternate data sources based on information in the web.config. It's a pull report, so the data source is specified directly in the report and we're modifying it programmatically. My assumption was that somewhere in the report, something was specified to use a different data source than the rest of the report, and that our code that changed the source was missing it. But I couldn't find it, and Crystal report files have a binary format, so they're next to impossible to search through.


I dug through a book on Crystal last night - luckily it had a chapter on dynamic data sources; thank you Brian Bischof - and one of the things it suggested was to check the connections on the tables using a member function, TestConnectivity. (You can outfit each individual table that a Crystal Report draws upon with its own login information.) To create a dynamic data source, you have to make sure the source is changed all over the place in the report: in the report object, in any subreports it uses, in any tables. So I changed the code to set each table's login information the way it was supposed to be, then call TestConnectivity and throw an exception if that failed. My idea was that a single table somewhere was causing all the problems, and that if I could figure out which one it was, I could fix it.

Only I didn't need to.

As soon as I added the TestConnectivity check, the report started coming up.

My working assumption is that the changes to the table login information are cached somewhere, and don't actually take hold until the report thinks it's necessary. Presumably there's a bug in the caching system, but calling TestConnectivity causes the table login information to be set properly.
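In code, the workaround boils down to something like this - standard CrystalDecisions engine types; connInfo holds the server and login details from web.config, and subreports need the same loop:

using System;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

static class ReportConnections
{
    public static void Apply(ReportDocument report, ConnectionInfo connInfo)
    {
        foreach (Table table in report.Database.Tables)
        {
            TableLogOnInfo logOn = table.LogOnInfo;
            logOn.ConnectionInfo = connInfo;
            table.ApplyLogOnInfo(logOn);

            // The magic line: besides verifying the login, this apparently
            // flushes whatever cache was holding the old connection.
            if (!table.TestConnectivity())
                throw new Exception("Can't connect table: " + table.Name);
        }
    }
}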

Nice weird one there. Spent too many hours on it though.

Wednesday, October 17, 2007

Michael Gartenberg - Still Not Twittering

Michael Gartenberg is still not twittering, poor unenlightened soul. He prefers to use Facebook for his minute-to-minute status, and doesn't want to add another bunch of contacts to another social network after having done it twice with Facebook and LinkedIn. Sure can't blame him for that. To mix a metaphor, the walled gardens need to start playing a little more nicely with each other. I think then we'll all be a little more inclined to add a new network to our lists. I got a Jaiku account after the Google news, but I haven't put forth the effort to put a bunch of my contacts on it, or to find a nice client for it, and because of that I haven't even looked at the web page in days.

Michael also points out that it's another feed to check. He's already got a bunch of RSS feeds, work email, and personal email, and doesn't want another feed to look at.


But he is willing to admit that he might be missing things. The issue of how to prioritize feeds is coming up for me, too. The feeds in my information stream generally break down to:



  • General status updates and tweets

  • Quick-skim blog entries, newspaper articles, longer forum discussion posts

  • Technical papers, long blog entries - things you can't really get the gist of by skimming

  • General or group emails

  • Emails specifically to me, or that I need to deal with

  • Tweets specifically to me

(Picture by Pete Reed)


How these are prioritized, and how they should be prioritized, are two different beasts. Certainly my top priority items are emails and tweets for me - they're the things I want to read first. After that, depending on how much time I have, I may want to skim the short items or buckle down to a technical paper. But how I actually prioritize them is via the application they're sitting on. I have a Twitter reader, Outlook, Gmail, and Google Reader to grab all these different feeds: stuff that comes in via Outlook gets highest priority since it has the nicest toast mechanism. Teletwitter, my twitter reader, doesn't always pop toast properly, so I'll often miss messages on it, although I don't care so much since they're low priority. Except, of course, for the ones targeted to me, which are high priority, but which I still miss since there's no way to grab them out of the twitter stream. And if a good technical paper comes across Google Reader, I'll probably share or star it to come back to as I J-J-J through the list, then forget about it entirely.


So there's clearly work to be done in this space.


I am absolutely sure that all my streams can be prioritized together properly - it seems like something that could be mechanized without too much trouble - but I'm not sure how yet. I'm sure someone out there is already working on the issue.

Tuesday, October 16, 2007

Bloomington Geek Dinner

I copied the link up there from the Facebook event page...I have no idea what it will tell you if you're not a member, or even if the link is good for anyone besides me! Anyway, we're hosting a Bloomington Geek Dinner next Tuesday. Geek dinners got to be popular in the last few years, probably due to Scoble's encouragement, and for a while geekdinner.com was up for people to schedule their events and create guest lists. No more, but Facebook is good for scheduling parties, perhaps too good judging from some IU undergrads, and there are a lot of Facebook groups as well as open Web pages for local geeks to find each other, meet, eat, and talk tech; for the big cities there are even specialized girl geek dinners.

But none for Bloomington, until now. Or even Indianapolis, as far as I know; if you're up in Indy feel free to come down - we'd love to have you.

It'll be at Max's Place downtown, starting around 7PM. See you there!



Saturday, October 13, 2007

Indy Tech Fest

Big room has no wifi; smaller room seems to work pretty well. I feel very blind without a net connection. Have I said that before? :)

Session #1:
Stephen Fulcher, VS 2008
3.0 install only has WCF and WPF, everything else is the same as 2.0
Integration is tight to SQL Server. Other db's?
implicitly typed variables - new keyword var (not untyped, but takes on the type of the right-hand side)
lambda funcs,
extension methods: public bool IsBool( this string s )
Generate code metrics?
Q: Is var a breaking change? A: No, you can still use it as the name of a variable
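A quick sketch tying those three C# 3.0 features together - IsBool is the session's example; the rest is my guess at usage:

using System;
using System.Linq;

static class StringExtensions
{
    // Extension method: callable as s.IsBool() on any string.
    public static bool IsBool(this string s)
    {
        bool ignored;
        return bool.TryParse(s, out ignored);
    }
}

class VarLambdaDemo
{
    static void Main()
    {
        var words = new[] { "true", "maybe", "false" }; // var: inferred as string[]
        var bools = words.Where(w => w.IsBool());       // lambda + extension method
        Console.WriteLine(string.Join(", ", bools.ToArray())); // true, false
    }
}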


XAML
Services (not web services, just services)
ServiceContract, OperationContract, DataContract attributes - all specified on an interface
svcutil.exe generates client code and can resolve from existing DLL's
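For reference, here's roughly what that attribute trio looks like on a contract (the names are invented):

using System.ServiceModel;
using System.Runtime.Serialization;

[ServiceContract]
public interface IGreeter
{
    // One operation per method you want exposed.
    [OperationContract]
    Greeting Greet(string name);
}

[DataContract]
public class Greeting
{
    [DataMember]
    public string Text;
}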
----

Session 2:
Mark Strawmeyer
C# tips & tricks

ctrl-alt-downarrow window list
ctrl-/ takes you to search menu
In the search window, type a ">" to go to a list of immediate-style commands
F12 is "go to definition"
Shift-F12 "Find all References", then use F8 to cycle through them
Code snippets - intellisense to generate code - type the shortcut, then tab (to autocomplete), tab (to generate)
very powerful! prop, for, tryf
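If you haven't tried snippets: type prop and hit tab twice, and (in VS2005, at least - I believe VS2008 changes prop to generate an auto-property) you get something like this, with the names ready to be tabbed through and renamed:

// Roughly what the "prop" snippet stamps out.
private int myVar;

public int MyProperty
{
    get { return myVar; }
    set { myVar = value; }
}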
"Organize using" tools in VS2008
Unit testing support:
Test input from data source
unit test assemblies without source
Now part of VS2008 Pro
"Create Unit Test" option
Code Profiling is not part of Pro

---- Break for lunch. There are some lunchtime sessions but we just hung out in the lobby and watched the XBoxers.

Session 3:
Mark Strawmeyer
Ajax Tips

eh. Mostly just an overview of Ajax.net

Session 4:

Silverlight

Chad Campbell

Silverlight is the new name for WPF/E. Create a Silverlight object using XAML. Add "Silverlight.createObjectEx()" to the HTML to use the plugin.
You can use .Net with silverlight 1.1. Silverlight is a browser-based plugin. It's cross-browser, cross-platform. What does that mean? They've written a plugin for Firefox as well as IE? Supports Mac and Linux and Safari as well. .Net framework used by Silverlight is specific to Silverlight.

Access the HTML DOM from managed code. Silverlight plugin will not be able to access the local file system.

Session 5

Dave Bost

WPF applications

Starts by demoing a Twitter feed on his page, created with Silverlight

Is it going to be the same presentation he did in Indy about a year ago? I don't think I blogged about that.

WF, WPF, WCF, Cardspace. WPF is .Net layer over DirectX.

Nice demo of an airport simulation

Expression is a set of tools for designers: Expression Web, Blend, Design, Media. Costs a bundle, though. Once you design the XAML, you can edit it in Visual Studio.

http://devcares.com in Indy every month.

Friday, September 21, 2007

Continuous Integration

The time between a defect being introduced and being found is related to the expense of fixing it

Graphing the number of tests over time would seem to be a good idea

Sligo dashboard: classes, LOC, duplication, max complexity, tests run, line coverage, branch coverage, FindBugs violations, PMD violations, max afferent coupling, max efferent coupling

Code reviews are good for high level details; use machines to do the low level detail finding

Code complexity tools: CCMEtrics, Vil

Code duplication detector: Simian

Dependency: NDepend

Coding Standards: FXCop

Thursday, September 20, 2007

How to work with an open source team

"Free as in Freedom"
"Free as in Beer"

http://opensource.org/

Q: Don't people resent when companies take open source projects and make money off of them? A: More power to them! Many companies are using their employees to do open source. It's good PR for a company to have people who work on open source - give back to the community, attract high-power talent

Don't contribute unless you:
Know the project license
Get permission from your employer
Get legal review if needed
Can communicate clearly in the project language (usually English)

Oracle tried to strongarm Linux, got squashed, came back with offers to help. Good PR! (I'm not familiar with this story!)

Project currency is trust and respect. You don't start with any. Remember, if you're good, you don't have to point it out.

Q: How do you start gaining respect? A: Post to the mailing list, point out bugs AND fixes. Maybe someone will request a patch, provide it

Q: How does the code stay consistent and looking good? A: There are tools, or people who do work to make things consistent. Or, it doesn't :)

Q: How do you get non-coding contributions going (docs, images)? How do non-coders get cred? A: Projects should support people like tech writers, if they're good.

How to Gather Customer Feedback

Don't aggravate customers with annoying surveys!
Make sure to ask "Is there anything else?"

Several stories about bad feedback forms

Interviewing 5-10 customers is probably as good as interviewing hundreds

Don't assume that no complaints = customer satisfaction. They may just be putting up with it, especially if they feel no one is listening.

Don't just do surveys. Use different feedback-gathering methods. Invite open-ended feedback, in surveys or otherwise.

Don't ignore the feedback!

Focus on the service attributes most important to your clients. Don't know what's important to them? Better find out!

"What aspect of our service is most important to you? Regarding it, how are we doing?"

Lots and lots of examples of how not to

FBWA (Feedback By Walking Around)

I love how Microsoft gets all Ajaxy with feedback on every page: http://msdn2.microsoft.com/en-us/library/ms229931.aspx

Again, act on the feedback! Summary of responses, detailed responses, action

Don't forget the power of the naked eye. Often problems are obvious and don't need surveys

Wednesday, September 19, 2007

Security code reviews

Foundstone Security Frame
Hacme Casino http://www.foundstone.com/us/resources/whitepapers/hacmecasino_userguide.pdf
Foundstone CodeScout

Paros (web app security assessment) http://www.parosproxy.org/index.shtml

Don't overanalyze. (Spending two hours determining whether a strcpy is vulnerable, when it takes two minutes to change it)

Identify code review objectives (Insider backdoors, compliance with specific regulations)

Lots of discussion of tools. I think the point is, use available analysis tools before bothering with a code review - it's easier and cheaper

http://www.securecoding.org/list

http://codesecurely.org

Usability by Inspection

Doing code reviews? That's good! Code reviews are a big help. They ensure uniformity of code, teach people new design patterns, and often even help to avoid bugs.

Doing usability reviews?

Me either. But if you've got a product that has a UI, an easy thing to do to improve the product is to just sit down with a few people - maybe some who will actually use the product, maybe some managers, or maybe just some people you can pull in - and see what they think. That's more or less the gist of what I got from Larry Constantine's session on Usability Reviews.


Now, the first thing to realize is that a "usability review" is different than a "grouse session". I was once doing a demo for an internal tool my team was working on, and after I'd showed how the tool worked, during the Q&A period one guy spoke up to say, "Boy, does that interface ever look like it was designed by a programmer."

"Interesting," I said. "How would you improve it?"

"Oh, you know. It just doesn't look as sharp as it could."

Well, yes. Nothing ever does; but it wasn't too helpful to tell me that. So, when you do a review, you have to be specific.

But how can you be specific about a UI? A UI is just a UI, right? It either looks good or it doesn't.

Not at all! There are lots of basic principles of design that the people who make web sites for a living know about. Even if your organization really is full of web pages designed by programmers, there's no harm in teaching the programmers some basic principles of design. I have a couple of books on that subject, one by Mr. Constantine himself, which I didn't even realize until I'd gone into the session.

But the organization or team should probably lay down the fundamental precepts of design that they want to follow; usability defects are easier to identify objectively with that list in mind. Some examples of good design principles are: Availability, Feedback, Structure, Reuse, Tolerance, Simplicity. Check one of the books for the specifics, but a usability defect is a violation of one of these principles - or, you could also say, a probable cause of user delay and confusion. It's not a usability defect if you just don't think it looks good!

So here's how you prepare for a usability review: First, organize a few use cases. You may already have them as part of your project, or you may just have to make some up. What you'll be doing is telling the users what they're trying to accomplish.

Then, get the folks together. At a minimum, you should probably have:

  • A leader, to make sure everything moves along smoothly;
  • A notetaker;
  • A Continuity Reviewer. This is someone who is reviewing the UI specifically to make sure it is consistent with overall project guidelines, and with the other pages in the project.
  • Users - people who will attempt to use the page. They can be actual customers; agile-style customers; or just people who were walking down the hall at the wrong time.
  • A Designated Driver. This is someone who will perform actual mouse clicks or typing at the request of the users. This will depend on the exact situation - do you have a real application, or just some mockups? Do you have a big meeting room and a lot of users, or not? If not, the Designated Driver might as well just be the user.
  • Developers/Designers. Developers and designers who worked on the page must never explain or defend design, argue with users, or promise anything. They may only find problems. Users do not count as problems.
It's an important point when reviewing anything that if a reviewer doesn't find problems, he's not doing his job. I always have to remind myself of that. But the people who worked on the application - the programmers, the designers, the developers - will always be able to give a reason for why it works the way it does. Don't listen! Mr. Constantine suggested a "virtual air horn": you get to pretend to be a big truck and blow the horn to get people out of the way. You must blow the virtual air horn whenever excuses, explanations, or rationalizations are made.
Next, have the users go through the use case or scenario you've designed. Introduce the scenario with an overview of context and user motivation. Read one step of the scenario at a time, and ask the users what they would do next. Users take the lead in proposing actions. Never guide or prompt users! Help is limited to simple description or clarification. If a user has to ask for help, you've automatically got a usability defect.
For each defect that you find, the notetaker should note:

  1. The feature or function the defect is in;
  2. The location - which web page it is, or a screenshot of the GUI;
  3. Which design principle is being violated;
  4. A short description of the problem;
  5. The estimated severity of the problem (nominal, minor, major, or critical).
Ideally, these would be on a form the notetaker would be able to fill out.
You should probably allow one to three hours for the review. So that's it! Get out there and say goodbye to applications that look like they were designed by programmers!

Web Application Risk Modeling

"Reverse" model - take the business case of the system and work down to threats.

A threat is not a vulnerability. A threat is what someone might try to do to your system; a vulnerability is how they would do it successfully

What risk drivers are there?

Application overview: Documentation drill; models; dataflow
Decompose application: break it down into well-defined "chunks".

Identify threats against the security objectives

Identify vulnerabilities "Vulnerability Assessments"

A threat model helps you to define, categorize, and prioritize vulnerabilities

Make sure to fix vulnerabilities, not exploits - understand all nuances, attack potential, exploit paths

STRIDE / DREAD

Other factors:
Ease of use, mitigants, timing, visibility,
monitorability (can you watch people doing stuff?),
forensics,
access required (even for internal apps, what are the chances of a bad guy infiltrating?)

XSS: Take user-inputted data and display it back without filtering. Nuances to XSS (Reflective Script Attack, Persistent Private Vectors)
POST based attack would not show up in server logs
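To make the XSS point concrete, here's the classic mistake and its fix in ASP.NET terms (the page and control names are invented):

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

// Fragment of a hypothetical code-behind page.
public partial class GreetPage : Page
{
    protected Label greetingLabel;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Vulnerable: user input goes straight back into the page, script tags and all.
        // greetingLabel.Text = "Hello, " + Request.QueryString["name"];

        // Safer: HTML-encode anything user-supplied before displaying it.
        greetingLabel.Text = "Hello, " +
            HttpUtility.HtmlEncode(Request.QueryString["name"]);
    }
}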

Tuesday, September 18, 2007

xUnit Test Patterns and Smells

This comes from a really good session by Gerard Meszaros on Test Patterns at SD Best Practices 2007.

Here's my history with test-driven development: Back in the nineties, I first read Martin Fowler's Refactoring. I thought it was a good idea, and attempted several refactorings on the code base I was working on, with good success. I think it was one of the better-coded applications to come out of that company. But I was always annoyed, because the instructions for a refactoring would always say something like, make your changes, and test. Testing is hard, man! Especially when you're testing a bit of the application that takes two minutes to get to from application launch and relying on a Direct3D driver to do the right thing.

So I added refactoring to my arsenal but didn't think too much more about it, until about five years ago, when I ran across an article on TDD in, I think, Dr. Dobbs, but it may not have been. The article mentioned some ideas about testing and mock objects, which turned out to be exactly what I needed for the project I was working on then, which was a business-level client API with a wrapper lib for calls to the server - the ideal thing for a mock. I played with it for a while, and it worked beautifully! Pretty soon I presented a proposal for moving to TDD to the team I was working with.

There were a couple of quotes that I put in my presentation (probably from the magazine article) that I really liked:

  • Tests must be easy to run. If they aren't, people won't run them.
  • Tests must be easy to write. If they aren't, people won't write them.
This session was all about the second quote.

The problem is, tests are easy to skip. Comment out. Ignore. If you do that, your code isn't being tested. But the client doesn't care about that...at least in the beginning. Later on, if your code isn't being tested, bugs will start to crop up. You'll make a change in one area that you never in a million years thought would affect this bit of code over there. But it does, and you've introduced a bug. The client will sure care about that! So you really have to put the effort in to write tests.

But at the same time, you're selling the production code, not the tests. If your team is spending more time on the tests than on the code itself, your velocity is sure to suffer.

So what's the solution? Go back and look at the second quote again. Tests must be easy to write. How do we make them that way?

The first thing to notice is that your objectives for test code are probably going to be a little different than for the production code. For example, execution speed is crucial for production code. You can't have your users twiddling their thumbs while they wait for your web page to load. But for test code, not so much. Go ahead and add ten seconds worth of tests to your build; think anyone will notice? Or, add four hours worth of tests. Sounds good! Just make sure to run them overnight when no one needs to watch them.

On the other hand, is simplicity important for production code? Well...it can't hurt, of course. The smaller and cleaner you can get the code, the better. But sometimes there's nothing you can do about it; you have to add that cache for speed; or denormalize the database so you don't have to make calls across a dozen tables. But for test code? Let's say it again: Tests must be easy to write.

What else? Is correctness important for production code? Of course...but users will put up with small bugs. But correct test code is an absolute requirement. If you don't have the tests right, you'll be writing incorrect production code to satisfy the bad tests. What about flexibility? Code should be flexible, right? Not really, not test code. In fact, there will probably be enough hard-coded test values to make it hardly flexible at all.

This is getting long. I'll add more later.

Software in the large

Here are my initial notes on the Jutta Eckstein presentation on scaling agile development across large teams. Cleanup may follow :)

Scrum of Scrums
Crystal

Iteration Duration: larger the team, shorter the development cycle
per week, count on a half day of retrospective (two week cycle = 1 full day retrospective)

Expectation: plan/develop/deliver.
Difficult - activity-oriented planning or component-oriented planning?
Therefore: Result-oriented planning. Focus on the features! Comes back to the Agile Manifesto: Our highest priority is to satisfy the customer.

Plan for accomplishing a valuable feature: integration, test, documentation.
A feature is a brief statement of functionality, from the user's perspective
How does one deal with architecture issues?
A feature produces a measurable result.

Iterations are steered by features, but defined by tasks

Tracking tools: PPTS, TRAC
Someone also mentions they use Sharepoint
Or just three checkboxes: working on it, untested, done done
Tools support communication, not replace it

Release Planning

Iteration review (Demo)
Present software, recognize & extract best practices, learn from failure

Measurement: Acceptance tests, planned functionality, is the product owner satisfied?

Retrospective after every iteration. Likely problem that people try to make large-scale changes

- Cross-functional or feature teams
- A large project might have tech teams; the customer of a tech team is a feature team

An ideal team is self-organized; this ensures whole features and good knowledge sharing. Managers must provide environment allowing teams to gel. This is like my ACG posts from a few months ago.

Trust

Agile development is a trouble detector. Bad news is also good news. Integration of departments (projects are customers). Close customer relationship ensures rapid feedback.

Discussion of implementing practices a few at a time. Ping-pong implementation!

Synchronization: Face-to-face is preferred. Sync across subteams daily (Scrum of scrums). If your team is self-organizing, how does that work?

Communication via wiki

Just one "Chief architect" - pulls the strings, makes technical decisions, "guiding light". Relationship of chief architect and customer?

Starting: take baby steps. Start small. Use skilled people. Develop a few features and make sure to do iteration retrospectives. Grow slowly.

Don't finalize architecture before growing team; use retrospectives. Domain teams must formulate new requirements. (But you might have to finalize to eliminate fear...or at least say it's finalized!).

Avoid hot technology. A large project has enough problems on its own without trying to train developers on something new at the same time.

Refactoring: technical excellence is doubly important. If a developer sees a needed refactoring on another team, they have to point it out to them.

Large projects may have exponentially greater test time. 10% of dev effort for integration/build. (If something is difficult, do it over and over until it's not difficult any more.)
Q: Special iterations for integration? A: no
Nor a special integration team; rather people from each team who specialize in integrating

Reviews:
Special review team. People should jump around between teams, and be on a team strictly for the purpose of reviewing the code. Everyone should do this.

Knowledge transfer (via Daily Scrum and pair programming). Scrum master ensures the process; product owner ensures business value).

Q: Agility in a distributed environment. A:

Monday, September 10, 2007

Could not load type 'Global'

The comments on Harish's blog entry from two years ago give a lot of different solutions to the 'Could not load type "Global" ' problem that you sometimes get in ASP.Net. My solution to the issue was an interesting twist on one of those answers.

I had recently upgraded an application from ASP.NET 1.1 to ASP.NET 2.0, but to keep supporting old versions of the application, I branched it off in Subversion. To make sure the old version still worked, I checked out the old code into a new folder, then went into IIS and simply moved the location of some virtual directories to point to the old code. It all worked, and I forgot about it.

Until later, when I came back to make some changes to the new application, started it up and got the message:

Parser Error Message: Could not load type 'WebApplication1.Global'.
Source Error: Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="WebApplication1.Global" %>

I couldn't make heads or tails of it, but a web search led me to Harish's post and lots of different answers, several of which I tried, but none of which worked. One suggestion was to make sure the application was set in IIS to use ASP.Net 2.0 rather than 1.1, and even to set it to 1.1, click Apply, set it to 2.0 again and click Apply, just to make sure it took. Another was to make sure the application was compiled. If there's no assembly built for the application, it won't load.
I checked the ASP version, and it was indeed set to 2.0; I said, "Duh!" to compiling, but made sure there was an assembly in the bin directory, and there was; so I was at a dead end.

But I'm sure by now you see where this is going. As I opened up IIS to check the ASP.NET version configuration again, I happened to glance down at the local path for the virtual directory on that tab. And what did I see? The directory was still pointing at the path to the branched directory - a perfectly legal application, but one that was built using ASP.NET 1.1, and one that had been cleaned sometime in the not-too-distant past. So the version I had configured in IIS was neither compiled nor a 2.0 application! No wonder the error came up.

So I have an additional solution for this problem. Check your virtual directory location and make sure it's pointing to the application you're expecting it to be.

Thursday, September 06, 2007

Pair Programming with VNC

Dietrich Kappe writes on the Agile Ajax blog about surmounting the difficulties of pair programming when part of your team is offshore. Interesting stuff, but Dietrich, you also made the offhand comment that Test Driven Development is one of your commandments as well. What's your process for writing Ajax unit tests, and if you're not always doing continuous integration, how do you know your tests are always passing? I'd be curious to know!

Monday, August 27, 2007

School Daze

Wow, Amy Makice is stressing me out with her story of a second-grader stressed out in the first two weeks of class. I'm looking forward to hearing the resolution as I don't doubt I have similar experiences ahead - and of course, there are also the ramifications of putting the story online for everyone to see. My incessant questioning of a kindergarten teacher got my wife called to the principal's office at registration time, I think to reassure her that it was all going to be OK. I don't know if that was the direct result of the blog entry or not, though. But, my unsolicited advice, Amy, is to do all you can to resolve the issue before putting it online, just in case the subject of your entry ever has to read about herself on the net. People who aren't in the habit of writing for public consumption can be unreasonably angry when that happens!

Tuesday, August 21, 2007

Deconstructing the Wiki Decision

Educator Christian Long writes on using Wikis in the classroom. I'm not an educator, and my kid isn't one of Mr. Long's students, but I sure would like to see similar tools used in my kid's school. I'm not sure how particularly useful they would be in kindergarten, but editing that linked article as a class project would be fun.

That said, how likely is it that students are interested, under their own power, in editing a wiki? Based on my experience that only about 5% of readers tend to be contributors, I think it might be difficult - but of course, the percentage in an English class might be higher. Students around here are asked sometimes to edit Wikipedia or Bloomingedia as a class assignment; for example this article:

http://www.bloomingpedia.org/wiki/Amused_Clothing

was obviously written by a local teen. But if you look at the contributor's history, he copied in the bulk of the article on March 30th, came back and fiddled with it a few days later, and then never came back again.

Now, Mr. Long isn't having his kids put their essays on the wiki - at least not yet! But if, or when, it occurs, I wonder whether an English class discussion wiki would really work on its own terms without constant prompting by teachers. I suspect it could, if it is linked to the real world somehow. I'll be following the experiment with interest.

Wednesday, August 15, 2007

Bloomington not yet a'Twitter

Lots of interesting stuff going on in the Bloomington scene this week. James Boyd, who I've written about before, sat down to watch and interpret the 36 hours in three days of Monroe County budget hearings, and posted them on a dedicated comment thread on the newspaper's web site...I mentioned that on Twitter. Some of my coworkers wandered off to the Agile2007 conference and sent reports back on speakers they liked; I added a couple of people to my Twitter list and blogroll due to that. My kid started kindergarten, so I've been ramping up my list of educators as well (too bad there are no local ones as of yet!).

While James was posting his updates, I tried to follow along with his numbers on a Google spreadsheet, with only a fair amount of success. (Of course, my job was easy since all I did was read the comment threads. James had to try to interpret everything and post and try to keep up with details on the numbers - all in real time.) My goal was fairly self-centered: I wanted to understand exactly what they were voting on and why. But certainly if what I was doing was useful at all I wanted to share it - why keep it private? (A former boss asked me that once. Why did I blog about my trip instead of putting it in an email and sending it to the six or seven people in my group? All I could do was stare at him blankly.) All in all, I'd say that my, and probably a lot of other people's, information stream had gotten a lot wider this week.

Thinking along many of the same lines, only way more articulately than I could ever be, Kevin Makice wrote a piece on the future of local social networking. Kevin wants everyone to center around Twitter, which I doubt will happen. The Herald-Times has taken a real leadership role in this process, and they of course have a vested interest in bringing people to their site instead. Councilmember Sophia Travis pointed out that it was way too tough for her to actively participate in the discussion as well as listen to the issues, although she did manage a couple of notes.

So where do we go from here? Here are a few things I notice:

  • It took a professional, not a blogger, to (a) generate interest and (b) pull off the budget updates with the right amount of elan to keep everyone interested. Is this a requirement? I'd say no, but the fact is that I wasn't about to take several days off work to go down there and watch. It's a lot easier to do it if someone will pay you.
  • With the exceptions of Councilmembers Travis and Marty Hawk (who posts to the HT occasionally), there are few enough politicians in the general conversation that I don't expect many in the live conversation (by which I mean Twitter, or the running comment thread). It would be nice if this changed.
  • I had to ask early on in the process for copies of the spreadsheets the council was using. Apparently the auditor was running around with them on a thumb drive, handing out copies to whoever needed them. It would have been nice to just stick them on a web page at the beginning.
  • I want a budget expert available to answer questions from the public. I probably had a dozen questions over the three days - granted, I always have questions, it's because I don't know anything - but many of them James couldn't answer, and probably many he could have but didn't because he didn't have time. Wouldn't it have been cool if the auditor's office could have somebody sit and monitor the thread and explain stuff?
  • Let's not wait for next year's budget to do this again. Send the junior copy editor to update us on the Redevelopment Commission meeting. Let's get a volunteer blogger to liveblog the Planning Commission. Let's keep the government exposed!
  • Budget hearings are a really moronic way of doing things. A bunch of exhausted people sitting in a room voting yea or nay at random on a couple of grand so they can get it over with and get some lunch? Tell you what, next time let's get all the line items out on a nice wiki page and hash it out that way. I realize I'm text-centered and maybe others prefer the face-to-face, but then how about over NetMeeting or something?
  • Now, I'm not trying to grouse and say that things should have been done differently. Or to be more precise, of course they should be done differently, but we never know precisely how until afterwards. This has been a great learning week for me, and I hope, for everyone else as well.

Sorry, Kevin, I didn't get that Bloomingpedia article on the budget written; the hazards of citizen journalism :) But maybe now we all see a little bit more of the possibilities that are opening up before our eyes. Hey, follow me on Twitter!

Monday, August 06, 2007

Dare Obasanjo on Open Social Networks

Dare writes on Open Social Networks. One thing he doesn't bring up, though, is the existence of specialized social networks and how they fit into the whole. He uses Flickr and YouTube as examples of sites that have good API's for getting and setting data, but part of the point of those is that they exist solely to allow users to push around specific types of content: images on Flickr, movies on YouTube. Facebook and MySpace have lots bigger fish in mind, wanting to take over your whole mindshare. It's an interesting evolution, isn't it? For a long time we talked about Microsoft and how they wanted to control everything on your desktop; then Google came along and we talked about how having everything in your browser was better than having everything in your desktop. Now it's not enough to have everything in the browser; we have to have it all on our social networking site. The one thing this really points out to me, though, is the fragility of these sites - for a while MySpace was the hot toy, but now it's Facebook. Is there any reason to think Facebook will be the place to be in six months or a year? I don't see one.

I learned via TechMeme, though, that Jeff Pulver is leaving LinkedIn for Facebook. I think it's a mistake, Jeff. LinkedIn is specialized; it exists for business contacts. It will probably be around in a couple of years, linking up business contacts. Facebook will probably be gone as people move on to the Next Big Thing.

To sum it up, it appears to me that the real evolution of social networking is going to be LinkedIn for business contacts; Flickr for pictures, LibraryThing for books, and then maybe a few small sites like Facebook and MySpace that aggregate all this data into a coherent whole for people who aren't interested in creating their own websites that aggregate all this data, or are nervous about being outside of the walled garden. But Facebook ain't the future. Don't expect it to be.

Saturday, August 04, 2007

Your code is suboptimal!


Check out Eric Sink's blog for a nice, and almost free, T-shirt. Eric runs SourceGear, a version control company, which I'm sure is very nice software, but I've never used it. But the T-Shirt is good quality, and the package comes with a copy of the SourceGear comic book, which is hilarious. And like I said, it comes almost free. In payment, take a picture of yourself wearing the shirt in an appropriate pose, post it on your blog, and give them permission to use it, which I hereby do. This picture is on the Indiana University campus alongside a statue of chancellor Herman B Wells, who, as you can see, is doing the comic book pose too. Thanks, Eric!

Friday, August 03, 2007

Out of the Theater, Into the Courtroom

Boy, doesn't this stink? (Thanks, Vorlath). As a rule, I don't like commenting on outrageous stuff; yes, it's outrageous, yes, those darned company/government/media droids, there oughta be a law. Or a law repealed, or something. What makes this one a bit different is that there oughta be protests. Can't someone get a group together outside the theater and picket, or something? This is a clear case - assuming the facts in the Post are correct - of an overreaction, and the movie theater in question ought to be the target of a big negative publicity blitz. That's what I'd be doing if I were the girl's lawyer. I hope Indiana movie theaters have more sense, though.

Wednesday, July 25, 2007

Javadoc Clutter

Ed Gibbs, one of my favorite bloggers, writes on the usefulness of Javadoc comments. (I meant to write on Alfred Thompson's thoughts on the issue last month as well, but didn't get to it.) Here's my take: if you're coding properly, you have lots of little methods, as Ed says, and they should be just about self-documenting and not really in need of comments. But when you have code organized like this, it becomes even more important that the big picture be kept in mind somewhere. This partly means working on good class-level documentation - how the class is intended to be used, for example - but it also means having good diagrams of the entire application. With those, you may realize not only how the class is intended to be used, but how it's being used in an unintended way, or how it's duplicating the functionality of another class over here so the two need to be merged.

So where do the diagrams come from? As Alfred mentions, you can use class designers like the one in Visual Studio, but my feeling is that that is only a starting point. There are so many different diagrams you can make: dataflow, inheritance, etc., but you have to keep in mind that the point of any diagram is to help the reader grok the system. What I like to do is keep a documentation wiki around, and generate some diagrams that can be added as pictures, and as a starting point for some user-defined text to help explain them.

But when you do that, eventually you're going to want hyperlinks in the text that lead back to the class, and its description, and its methods. And this is where Javadoc comes in. In the build, throw in a step that generates HTML pages from the Javadocs, and make them available to the users of the project wiki. I think this gives you the nicest combination of high-level overviews and class-level references, both of which are essential to a well-managed project.

Monday, July 23, 2007

The 20 Dumbest Words in Software Development

Brandon McMillon writes on doing it right. (Thanks to Alfred Thompson for the link.) He doesn't touch on the agile side of software development - although I can guess his opinion from his planned article "Pair Programming is for Morons" - so the article has a lot of stuff about Objectives and Requirements and Spending Design Time Up Front. The tricky bit about commenting on this sort of article is that I don't really disagree; his straw man comparison is the group that just goes off and starts coding so they can get it done faster. That is bad. He does mention how getting sign-off and buy-in from users and stakeholders is valuable, and here's where we might differ: getting this sort of data is important throughout the life of the project, not just somewhere near the front. Because once a user gets some working software in her hands, she's immediately going to have ideas to improve it, and they'll probably be good ones. So, while it's nice to do some designing up front, it's more important to have your code in a state where you can make changes easily and quickly, to respond to the inevitably changing user requirements.

What I have written here is short, and therefore oversimplifies the many issues. But the full range of agile practices can answer most objections, in my experience.

Saturday, July 21, 2007

Should Newspapers Become Local Blog Networks?

Scott Karp at Publishing 2.0 writes about newspapers jacking up their blog count. I think the thing that most people are missing when it comes to whether newspapers should be more like blogs, or should bloggers be more like reporters, is that we, as blog readers, are really, really interested in who's writing the story we're reading. It's why there are columnists. After a while, people would read anything Dave Barry wrote because, as soon as they saw his name on the column, they knew they were in for a funny article.

But it's the same thing with real news. Our local paper just had a bunch of articles on the competence of the county auditor, many written by a reporter named James Boyd. They're good, if controversial, articles, and ended with the online version having dozens of comments along the lines of, "the real story is...", "what the paper needs to do is...", "why on earth didn't they report on...", and finally Mr. Boyd, possibly tired of all this, chimed in with his side of the story and explained just why he reported on what he did, and what kind of feedback he got from the auditor. The comments immediately became much nicer.

Why? Because people then realized they weren't just trashing a corporation, they were trashing a real person, and one willing and able to defend his actions. It created a conversation rather than a soapbox. So, even though Mr. Boyd is a reporter, I think what I'd really like to see on the site is his pseudo-blog: maybe nothing more than a list (with, of course, RSS feed) of all the stories he writes. When we know who's on the other side of the pen, the story becomes a lot more interesting.

Friday, July 20, 2007

Learning from Joel Spolsky (and Dave Winer)

Here are my comments on Joel's comments on Dave Winer's comments concerning comments. I think Dave is dead-on, but the whole issue is really more of an A-List issue than a general concern. I bet there's some sort of law stating that the amount of garbage increases exponentially with the number of participants; if there isn't, there should be. If you have a small blog where just a few people comment - or none, like this one - the quality of the discourse tends to be pretty high, but when you start having thousands of readers, the number of people with their own agenda to push starts to outweigh the number with interesting feedback. I have comments enabled, and I expect to have them for the foreseeable future :)

But I still want some way to do trackbacks. I don't think the existing trackback system can stop spam well enough to be useful, but the fact is, no one who reads Joel's post will ever find out about this one, as far as I can see - especially the casual reader who only stops by for a few seconds.

Thursday, July 19, 2007

Jobs of the future, #1: Online Community Organizer

Seth Godin suggests that an up-and-coming job description will be Online Community Organizer; that would be someone who can gather together everyone in an industry and make them feel like they have to be part of the conversation in this forum. I could see it coming someday, but my feeling right now is that all of the successful online communities are happening more or less by accident. Facebook, MySpace, Twitter. What makes Twitter a more dynamic online community than Pownce? Not much, I hear; maybe some sort of first- or second-mover advantage? But is there really somebody out there capable of moving from one job of this type to another and being successful at both? I have my doubts.

Tuesday, July 10, 2007

Marc Andreessen's Eleven lessons

Marc has disabled comments on his (excellent) blog. Marc, I see at least two issues with that: first, a lot of the time I have only a sentence or two to add to a post, and it hardly seems worthwhile to create a brand-new article on my own blog. Second, if anything, it's even easier to spam trackbacks than comments! Although I don't have enough readers to bother with any spam-blocking besides the Blogger default captcha, surely you could come up with some mechanism to ensure a human is the one entering the comment. And isn't it a shame to restrict comments to only those who have their own websites?

Thursday, July 05, 2007

Evaluating Javascript in an NUnit test

Adam Esterline posted his solution to JavaScript testing. He uses WatiN to run tests, which I wasn't excited about; I was hoping for a way to test without having to install any more software anywhere. Here's the solution I came up with:


// Needs references to Microsoft.JScript.dll and nunit.framework.dll.
using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.JScript;
using NUnit.Framework;

static object Evaluator(string code)
{
    // Compile the JScript source in memory. ICodeCompiler is the old
    // 1.x interface, but it works and requires no extra installs.
    ICodeCompiler compiler = new JScriptCodeProvider().CreateCompiler();

    CompilerParameters parameters = new CompilerParameters();
    parameters.GenerateInMemory = true;
    parameters.GenerateExecutable = true;

    // Reference the test assembly itself, so the script can call back
    // into our own code (the Context class below, for example).
    parameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().Location);

    CompilerResults results = compiler.CompileAssemblyFromSource(parameters, code);
    if (results.Errors.Count > 0)
        throw new Exception(results.Errors[0].ErrorText);

    // Global JScript code gets wrapped in a generated entry point;
    // invoking it runs the script.
    Assembly assembly = results.CompiledAssembly;
    MethodInfo entryPoint = assembly.EntryPoint;
    return entryPoint.Invoke(null, new object[] { null });
}

[Test]
public void EvaluatorTest()
{
    Evaluator("Context.Current.Data = \"Craig\"");
    Assert.AreEqual("Craig", Context.Current.Data);
}


This approach seems elegant, and it handles things like side effects. Unfortunately, I've been back on the server side for the most part and haven't really put this code through its paces.

Thursday, June 07, 2007

Lawn sign in the Forbidden City

Mark Hurst's great This Is Broken website gives examples of bad user interfaces, both on the net and in the meat world. You can't argue with most of the entries, but I love this sign. The grass is quietly asking us, "Please Do Not Disturb Me". The extra "Me" on the sign gives a really dreamy, tranquil, and above all Chinese feel to the landscape; much better than the harsh "DO NOT DISTURB" or "KEEP OFF THE GRASS" signs we Americans are used to. (I'm posting this here since you have to be a registered user to add comments to the site).

Sunday, May 20, 2007

Frames Per Second in birds

I had occasion to visit Hardin Ridge yesterday for the International Migratory Bird Day celebrations. They had several stations, and it was fun for the kids, but the thing that caught my interest came during a presentation on birds of prey. They displayed a peregrine falcon and talked about its remarkable vision; here's what Steiner has to say about it:

They are equipped with full-color vision and with eyes specially adapted to permit rapid adjustment of focus while moving at speed, and from four to eight times the resolving power of the human eye. Hovering may be compared to looking into a field from a car moving at twenty miles an hour or from one which comes to a standstill every few yards. It would be possible for a human being to see an individual rabbit or large game bird at a range of 600-700 yards; a bird of prey, with about four times the resolving power of the human eye, should therefore be able to see it at a range of nearly two miles.

What particularly caught my attention, though, was when they said that the peregrine can perceive significantly more events per second than humans can. I don't know if it's exactly the same concept, but I assume we're talking about frame rate here; they commented that even when cameras have been attached to these birds as they make one of their amazing 200-mile-per-hour dives onto some unsuspecting pigeon, all humans can see is blur.

Now, the frame rates they mentioned seemed surprisingly low to me: they suggested 18-20 events per second for humans and maybe two or three times that for the birds. (For scale, at 200 miles per hour the bird covers about 290 feet per second, or nearly 15 feet between frames at 20 frames per second.) But I suppose it's an example of flicker and syncing, where perceiving the wrong couple of frames in a videogame makes the action seem all wrong; or maybe the afterimage in the eye causes the slowdown.

So I'm not sure where all this is leading, except maybe that a really cool videogame would be Peregrine: The Stoop for a Pigeon. But I suspect there's a lot of basic science to be done before any game can simulate the visual experience of this amazing animal.

Friday, May 11, 2007

Choosing a Kindergarten

It's a funny thing, but I'm pretty used to being marketed to. When I needed a gym membership, the gyms I visited had someone there to give me a tour, show off the machines and the hot tub, and in general make me excited about going there. It was the same when my wife was checking out nursing homes: she met people who wanted to make her comfortable and get her interested in coming back, offering her cups of coffee and things.

So when we got the letter about coming to a kindergarten open house, that's sort of the thing I expected. I thought the people would be interested in showing us around and getting us excited - like the gym or the nursing home, trying to sell their product to us, sell us their school. I guess the first warning I should have had came straight from the informational packet, though: there was really nothing there except lots of information about "your child" - "your child" should be able to tie his shoes. "Your child" should have lots of time to read with you. "Your child" needs to be independent enough to go potty all alone. Not a word about this big, new, mysterious place he'll be going to.

So I was expecting some more information on the school at the open house. Unfortunately, what they had for us was another informational packet explaining what "your child" had to do in order to be ready for kindergarten. And who was running the open house? One kindergarten teacher. The school has two, but the other was busy - and I understand she just lost a family member, so that was okay - and the principal apparently had decided that some interviews he had to do were more important than meeting the new parents. I disagree.

The contrast is very clear when you leave the private sector, even coming from a heavily regulated industry like nursing homes. We obviously had some amount of choice over our kindergarten, but once we made the important step of purchasing a home, we were pretty much stuck with this one, and I think the information we got reflected that. I'm not even saying it's intentional - simply that no one's ever thought twice about having to sell their school, because no one has to.

That in a nutshell is the biggest problem with the school system, IMO. It will be interesting to see if this pattern continues or whether some more wholehearted attempts will be made at engaging us.

Monday, April 30, 2007

Precociousness

Hey, dad?
Uh-huh?
How many hours are there in a day?
There are 24 hours in a day, son.
Oh.
So how many hours are there in a night?
I meant, there are 24 hours in a day and night together.
Even in wintertime?
Huh?
In wintertime the nights are longer.
Yes, but the days are shorter too.
Oh.
So there aren't 24 hours in a day in winter?

What?
Well, you just said the days are shorter.
Well, that's true, but...say, why don't you go play with your toy cars for a while?

Friday, April 13, 2007

FastTrack seminar, part 2

The second session at the seminar was on the new features of SQL Server 2005; since I was at the launch party in Indianapolis, I didn't feel there was a whole lot here that was really new, but since I'm currently employed at an Oracle shop, it was interesting to compare the two. This session was given by George Huey, a Microsoft "Architect Evangelist", whatever that is; but he knew his stuff. The most interesting new feature, from my admittedly database-illiterate perspective, is the ability to write code in .Net and turn it into stored procedures and code that runs inside the server process. You can also set the code permissions: Safe, Unsafe, or External Access, which I believe means "hits the file system"; regardless, I don't know if anyone actually uses that feature. Another interesting feature is the ability to run "recursive queries", which sounds pretty handy: he gave a demo of calculating the number of levels of management by recursing up the tree until you find the guy with no manager.
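I haven't played with recursive queries myself yet, but here's roughly how I'd expect to call one from C#, based on the demo. To be clear, this is my own sketch, not George's code: the Employees table, its ReportsTo column, and the connection string are made-up stand-ins.

using System;
using System.Data.SqlClient;

class ManagementLevels
{
    static void Main()
    {
        // Placeholder connection string - point it at your own server.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=HR;Integrated Security=SSPI"))
        {
            conn.Open();

            // A recursive common table expression: the anchor member picks
            // out the guy with no manager, and the recursive member adds
            // one level of reports per pass until no more rows join.
            string sql =
                @"WITH OrgChart (EmployeeID, Level) AS
                  (
                      SELECT EmployeeID, 0
                      FROM Employees
                      WHERE ReportsTo IS NULL
                      UNION ALL
                      SELECT e.EmployeeID, oc.Level + 1
                      FROM Employees e
                      JOIN OrgChart oc ON e.ReportsTo = oc.EmployeeID
                  )
                  SELECT MAX(Level) + 1 FROM OrgChart;";

            using (SqlCommand cmd = new SqlCommand(sql, conn))
                Console.WriteLine("Levels of management: {0}", cmd.ExecuteScalar());
        }
    }
}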

A few other features he went over were native XML storage, pivots, TOP, and RANK; they all seem very nice, but I'm not really in a position to judge how useful they would be in my work. I guess we'll see.

The final session was on "Business Intelligence". I didn't have any idea what that might involve, but it turned out to involve reporting. I wasn't aware that you can configure SQL Server to give you a project type of "Report" in Visual Studio fairly easily, and there's also a "Report Viewer" control that you can add in to your own ASP pages. I have to admit that I lost some of this lecture, as the presenter was having some trouble with his computer and the wireless was working nicely - for a change - so I took the time to mess around with some more of the AJAX demos that I was really interested in.

The seminar was held in the Glick Center, where the Indy NDA holds its meetings. It was a good venue for the couple of hundred people who showed up. Microsoft sprang for donuts and pizza, and I liked the idea of having a couple of arcade games for people to check out between sessions. I would have traded them for better wireless, though. There were also power outlets on only one end of the room. But there's only one really important highlight: through the door prizes, I am now the proud owner of a Zune :) Welcome to the social! (Hello? Is there anyone else in here?)

FastTrack seminar

I’m blogging today from a Microsoft FastTrack seminar. Perpetual Technologies in Indianapolis is putting this on in some sort of collaboration with Microsoft. It’s a free seminar – and I’m always up for a free seminar – with a keynote and six sessions in three time slots.

The keynote was OK. Steve Thompson from Microsoft gave a roadmap presentation of where they expect enterprise technology to go over the next several years. The majority of the audience were DBAs rather than developers, so they may have had more interest in it than I did. Among the important pieces, Steve brought up Office, Microsoft Server, and mobile applications, and also Microsoft Business Solutions, about which I don't know much. The goal, I guess, is to get enterprises on the Service-Oriented Architecture bandwagon, and also to move towards virtualization as an important technique for scalability. He also discussed voice and VOIP near the end of the presentation, and how our standard voice data paradigm – blinking message lights and busy signals – is really out of date. This is something I've known since Interactive Intelligence was trying to get everyone away from that as well; I don't know how that effort is going, but we still had copper wires at my last couple of jobs.

The first session was on the ASP.NET AJAX control library, which looks pretty cool. It was given by a younger guy from - I think - Crowe Chizek, and he did a creditable job, although I would happily have spent a couple of additional hours learning the subject, given the opportunity. It's interesting that most of the effects it lets you create are already implemented in Javascript in the application I'm currently working on - a tribute to the skills of the original writers of this app, I think. But you could certainly write a lot less code to get the same effects using this library. It looks pretty easy to use, although .Net 2.0 is required: one msi to install on your machine, and one zip file with controls and demos. It'll definitely be useful in my own web applications, anyway!

Monday, April 02, 2007

Javascript testing with NUnit

I've been looking around for a good way to test Javascript functions.

There are at least two versions of JSUnit: one here and the other here. But there is one fundamental requirement I have for any unit testing framework: it has to integrate into an automated build script. For example, suppose you're using CruiseControl. It's got an NUnit step in it; once you write up your tests, it's a matter of a few minutes' configuration to get them running as part of the build, and it's very satisfying to watch the test counts grow as more builds are done. 117 tests run, no failures. 123 tests run, no failures. 135 tests run, no failures.

So if the framework doesn't work with automated builds, it's no good to me. Do these? I'm not sure. Edward Hieatt's version seems primarily to require a browser, although he does provide a JSUnit Server, which appears to be designed to work from Ant or Java, but doesn't have any particular support for NAnt or ASP.Net that I could find. Jörg Schaible's version is even less Windows-friendly, starting with the download, which is only provided in tar.gz format. The documentation states that it can be run from the command line; if so, that's easily adaptable to an automated build, but I didn't even take the trouble to download it, suspecting that it wouldn't even run on Windows.

So I was looking around for other alternatives, and I ran across this post. I'm sure that not everything you can write in Javascript can be evaluated by the .Net Javascript evaluator, but when you write a lot of tests you get used to keeping functionality nicely isolated.

I'm not sure what the best way to use this is. My first couple of tests have the Javascript in the ASP.Net codebehind file, where it can be unit tested at test time and Response.Write-n at runtime; but there are a few other possibilities: keeping all the Javascript in a separate file to be read in at test time and used as an include at runtime, perhaps - something like the sketch below.
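For that separate-file route, the test could read the file in and evaluate it along with a call to the function under test. Here's a rough sketch using the Microsoft.JScript Eval class, one of the ways to evaluate Javascript from .Net; the file name and the add() function are inventions for the example:

using System;
using System.IO;
using Microsoft.JScript;
using Microsoft.JScript.Vsa;
using NUnit.Framework;

[TestFixture]
public class JavascriptFileTests
{
    // Evaluates an expression in the context of our shared script file,
    // so the tests exercise the same functions the pages include.
    static object EvalWithLibrary(string expression)
    {
        string library = File.ReadAllText(@"scripts\common.js");
        VsaEngine engine = VsaEngine.CreateEngine();
        return Eval.JScriptEvaluate(library + "\n" + expression, engine);
    }

    [Test]
    public void AddIsDefinedInCommonJs()
    {
        // Assumes common.js contains: function add(a, b) { return a + b; }
        // JScript numbers come back as doubles, so normalize before comparing.
        Assert.AreEqual(3, Convert.ToInt32(EvalWithLibrary("add(1, 2)")));
    }
}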

So I have a lot of work to do on this technique. But it seems promising!

Wednesday, March 14, 2007

Sports and power ranking systems

I've had a long-standing interest in the science of sports team rankings, for various reasons, and it was forcibly brought to mind when I was filling out my tournament bracket. I'm always extremely mediocre in such game-picking contests, and when one blogger whom I respect said something similar, I started to think about how rankings could be done for college basketball. I looked a little closer at the Pomeroy Rankings, which seem pretty nice, although if Ken reveals his exact formula for creating them, I couldn't find it. IMO a ranking system can't be taken seriously if the method used isn't known. Take Jeff Sagarin: everyone always prints his rankings up very seriously, but we don't know what he's doing, so he might as well be making them up and just pretending it's math.

But Ken has something much more valuable on his site than a ranking system: a game database. For the most part, this information is not available in any easy-to-get-at form, so if you want to create the rankings, you have to get down and do the data entry every year, which is why I've never created any system that lasted more than a year. But now, with Ken's files, maybe something useful could be done.

So I did a little research, thinking that the most effective system probably was going to be some kind of balance between a single-game Pythagorean expectation and strength of the opponent, repeating until the numbers converged. I'm sure I read a paper about that some years ago, but I can't find it now. Instead, I found this, a technique which doesn't take into account the scores at all!

But it's interesting, because it's based on the age-old notion of game transitivity; to wit: my team beat team X, and team X beat your team, so my team is better than yours. Yah. It's a principle that's been widely derided for years, and people make hobbies out of finding weird cycles of games proving that Prairie View A&M is really better than Michigan after all. But there's obviously a kernel of truth in it. The paper goes into a lot of detail about setting up the graphs and putting weights on things and, you know, math, but really the principle is pretty simple. It works like this:

For each game that my team wins, it gets partial credit for each win the team it beat has.
For each game that my team loses, it gets partial debit for each loss the team it lost to has.

That's it. The questions are, do you want to go deeper and credit my team for a third or fourth level, and just how much credit do you give for each "indirect win"? The second question is easier for our purposes, because the authors of the paper do a lot more of that math stuff and come up with a simple equation for us:

Let k equal the average number of games played by each team.
The credit is (2k) / (k^2 - k), which simplifies to 2 / (k - 1).

For a third level, you'd square the credit, etc. But do you want to do the third level? Say the credit is .1, or 10% of a win. For the third level the credit would be .01, which doesn't seem like much, but you're talking quite a few games, too. So I'm going to have to use Ken's game database and do some research on this. Any code I create will be open-source, of course. I won't be able to do anything useful before this year's games start, but next year, watch out!
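In the meantime, here's a toy sketch of the two-level scheme, just to make it concrete. The schedule is made up, and I'm reading the scheme as "a direct win counts one point" plus the partial credits and debits described above; the paper's exact weighting may differ:

using System;
using System.Collections.Generic;

class IndirectWins
{
    // A made-up schedule; each row is winner, then loser.
    static string[,] games = {
        { "A", "B" }, { "A", "C" }, { "B", "C" },
        { "C", "D" }, { "D", "B" }, { "A", "D" }
    };

    static void Main()
    {
        Dictionary<string, List<string>> beaten = new Dictionary<string, List<string>>();
        Dictionary<string, List<string>> lostTo = new Dictionary<string, List<string>>();
        Dictionary<string, bool> teams = new Dictionary<string, bool>();

        for (int i = 0; i < games.GetLength(0); i++)
        {
            string winner = games[i, 0], loser = games[i, 1];
            Append(beaten, winner, loser);
            Append(lostTo, loser, winner);
            teams[winner] = teams[loser] = true;
        }

        // k = average games per team (each game involves two teams);
        // credit = (2k)/(k^2 - k) = 2/(k - 1). A tiny toy schedule makes
        // the credit unrealistically large; with k around 21 it's the 0.1
        // from the example above.
        double k = 2.0 * games.GetLength(0) / teams.Count;
        double credit = 2.0 / (k - 1.0);

        foreach (string team in teams.Keys)
        {
            double score = 0.0;
            if (beaten.ContainsKey(team))
                foreach (string victim in beaten[team])
                    // a point for the win, plus partial credit for each
                    // win the beaten team has
                    score += 1.0 + credit * CountOf(beaten, victim);
            if (lostTo.ContainsKey(team))
                foreach (string victor in lostTo[team])
                    // minus a point for the loss, plus partial debit for
                    // each loss the winning team has
                    score -= 1.0 + credit * CountOf(lostTo, victor);
            Console.WriteLine("{0}: {1:F2}", team, score);
        }
    }

    static void Append(Dictionary<string, List<string>> map, string key, string value)
    {
        if (!map.ContainsKey(key)) map[key] = new List<string>();
        map[key].Add(value);
    }

    static int CountOf(Dictionary<string, List<string>> map, string key)
    {
        return map.ContainsKey(key) ? map[key].Count : 0;
    }
}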

Monday, March 12, 2007

Generating classes from XML in .Net

The Oracle version of SQL has some nice keywords for returning your data in an XML format. (I suppose the other servers do too, but I've not used that feature.) When I get the XML back, I want to turn it into a set of business objects for easy serialization. XSD is the tool for that: write the XML to a file, run XSD on it to generate a schema, then run XSD /c to generate the C# class file, and you've got a nice class. You can muck around with the XmlElement and XmlAttribute attributes to create nice field names, and it takes 30 seconds to put together a static Get() method that returns a class instance from the XML.
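For instance, assuming the generated class came out named OrderSet (a hypothetical name - yours will match your schema), the Get() method is just a thin wrapper over XmlSerializer:

using System.IO;
using System.Xml.Serialization;

// xsd.exe generates the class as partial, so this hand-written half can
// live in its own file and survive regeneration.
public partial class OrderSet
{
    public static OrderSet Get(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(OrderSet));
        using (StringReader reader = new StringReader(xml))
        {
            return (OrderSet)serializer.Deserialize(reader);
        }
    }
}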

Except it didn't work. The serializer threw a File Not Found error. When the XmlSerializer class has a new type it needs to serialize, it generates the serialization code on the fly, throws it into a new assembly with a name like olkdzxc.dll, and returns the class from it; but when I called the serializer, it told me that olkdzxc.dll wasn't found. Very mysterious.

Luckily, I remembered Chris Sells' old tool that was made for debugging exactly this problem, XmlSerializerPreCompiler, which lets you see the compiler errors that occur while the serialization code is being compiled. One of those led me to the problem: when generating the class code for an array of objects, XSD was adding an extra set of brackets. So instead of having a class member myFoo[], I had a member myFoo[][]. Why did XSD do this? I have a hard time believing it's just a silly bug. I'd love to hear if anyone knows.

Thursday, March 08, 2007

Online and offline communities

I wrote earlier about creating online communities around business-to-business applications, with a vague promise to add more, but didn't. What rejuvenated my interest in the topic was going to a lecture and buying a book by one Joseph Myers, who lectures on creating small groups for churches. Churches generally don't like to be impersonal - it kind of misses the point - but there's no real alternative once the congregation grows beyond 100 or so people, so they like to subdivide into smaller groups so everyone has a group they can be comfortable with. What Joe Myers pointed out, as I understood it, is that it's very hard to create this sort of group from the outside - intimacy generally arises among people who like to be around each other. So, if you are a corporate executive or a church leader who is tasked with creating a community, how do you go about it? On the one hand, it's our job to create these communities; on the other, we know that they're not created, they just appear.

So the recommendation, for church leaders at least, is to define more exactly what groups already exist in the church. Church leaders want every small group to provide intimacy, but that's really only one way to relate to a group: the group can be more of a public group, or more of a social group, or just a personal group, and churches can take advantage of knowing how these groups relate to each other to encourage more fellowship in the church.

Is this applicable to online groups? I'm not sure. Here's the issue: if you're working with a church congregation, you can call a meeting, bring everyone in, discuss the issues, and maybe figure out what the existing groups are and what they're doing. You can't do that online. Maybe the best thing someone tasked with creating an online group can do is simply to monitor the group, or groups, and make sure the company is willing to go wherever the group takes them. Seems obvious, but is it? Check out Yahoo's handling of Flickr accounts, or Facebook's decision to allow non-college students to join. Or check out all the online forums that died because people thought they were cool at first, but they never changed and everyone left for more responsive pastures.

I don't know the answers. But it's an interesting bunch of questions.

Tuesday, March 06, 2007

Quality of Local Political Blogs: Compare and Contrast

I've been writing various articles for Bloomingpedia in the last month or so, in an effort not so much to improve that site as to understand better the town in which I live. One of the keys to understanding a city, I think, is to gather a lot of different perspectives from a lot of different individuals with a stake in the matter. Take Indianapolis, for example: there are a lot of places you can go to get an impression of how the city is doing. Ruth Holladay; Matt Tully. Taking Down Words; Indy Undercover. You have to gather them all together before you can make a critical analysis of what's really going on; but they're there, that's the important thing.

Or you could read the Indianapolis Star. But I don't have much trust in the Main Stream Media. Their goal never seems to be so much the truth as it is finding someone who disagrees, no matter how foolish or inane that person may be, and unless you already know the subject matter pretty well, you can't tell from the way the article is written which is the inane perspective and which is sensible. So that leads you back to blogs.

Here are four local politicians who have been on my mind lately: Marty Hawk, Dave Rollo, Scott Tibbs, Sophia Travis. How easy is it to get their perspectives on local issues?

Far and away the best online writer in this group is Sophia Travis. If you just looked at the MSM, you wouldn't think much of her except that she's a little flaky (an accordion player with political aspirations? Weird!) But when you read her blog, not only is she talking about the tough political issues, but she's following up on comments people leave; leaving comments on other local blogs; sending in questions to local online chats; really being a part of the conversation about what Monroe County is, and what it should be. It would be great if every politician had an online presence like Sophia's.

Second best is Scott Tibbs. I actually started this post thinking about what I don't like about Scott's blog: there's no real comment area on it, just a link to a bulletin board, which I assume is also run by him, and which you have to register on before you can comment. He says that's to avoid spammers, but obviously a lot of bloggers manage to allow real comments without going to that extreme. But the point is, he writes, and discusses, and allows discussion of his views in some form. So I can't take too much umbrage, especially compared to:

Dave Rollo. He's got a web page; it's a start. The page is very static; the main page has a "last updated" date on it, but there's no way to find what was there before. There are only a few paragraphs discussing his views, there's no way to leave public comments, and if he's ever left a comment online, I haven't seen it. Start a blog, Dave. He did participate in an online chat recently, though, and having a web page puts him ahead of:

Marty Hawk. Not much to say here, because I really couldn't find out anything. She gets quoted in the local paper from time to time, and you can go read the minutes of the Monroe County Council meetings and find some things she said. But right now, the number 2 hit on Google when you search for her name is the article I wrote on her last week. So we really don't know much about her at all. It leaves me defining her, rather than letting her define herself. If that's what she wants, then that's fine.

So that's where we are in online local politics in Bloomington. It's a start. But I wish there were a lot more politicians in the conversation.

Thursday, February 22, 2007

Bookplates

(Hey, this is my 200th blog post! And it only took me three years!)



One principle of agile development that doesn't get a lot of attention is Sitting Together. The point of the principle is simple: agility requires communication, and there's no faster communication than shouting over your shoulder to the guy behind you! I think it's a bit overblown; communication is hugely important, but with the advent of instant messaging, not only do you know that Dan down the hall is sitting at his desk, but you even know that Mike down in Dallas is, and they're just as likely to respond to your ten-second query as Jennifer two desks away is. The participants have to be in pretty close time zones, though; Suresh in India just isn't gonna respond to your IM no matter how many times you check his status during the working day!



In my new company we sit together, which is something I've never done anywhere else. I've found that one disadvantage is that my desk doesn't have half the space I need for my programming library, which I like to keep at the office for easier reference. (Okay, so I haven't referred to the Differential Equations textbook since I left the videogame industry. Nevertheless.) So I'm taking over a couple of shelves nearby, but instead of just writing my name in all my books, I thought it would be more fun to make bookplates for them. Here's the design I made:

I'm no graphic designer, but I thought it was OK. If you want to modify it for your own use, feel free; I've made a Word template available for use with the Avery labels that come six to a page; you can get it here, or download the Avery bookplate for the four-a-page labels. Hey, my favorite book site LibraryThing, why don't you provide some of these? I'm sure there are dozens of people who can do better!

Friday, February 16, 2007

Government RSS Feeds

Sophia Travis is a Monroe County Council member - and, remarkably, one who is sophisticated enough to have her own blog. On one post, she asks what information we'd like to see on the Monroe County web pages. I would like to see an RSS feed.

Here's why: government business does not lend itself well to the regular web page format. The people's business does - the most important things for the web site will always be ways to contact government officials, how to apply for permits, pay parking tickets, vote, and so on. But government business consists mostly of a neverending string of public meetings, each one with an agenda beforehand and minutes afterwards. The best way to present a stream of information like that is with a feed.

For example, I've already created a feed for the Council meetings using the very nice, if complex, Feed43 service. My feed will do the job, giving me an update through my feed reader whenever a new meeting agenda or minutes are posted, but it's pretty content-free, as the feed can't do much except monitor each row of the table of meetings on the page. But suppose the county tech services people set up an easy way to post updates using TypePad or Blogger - suddenly it's easy for them to update the site, and there's a good description of each update in the feed. It could then be expanded, using the same feed, to give information on other public meetings, notice of events the council members are participating in, and any other kind of information that has a time element. I'm thinking this is actually a time saver, at least for that one poor soul whose responsibility it is to go in and edit the HTML table on the page whenever new meeting minutes are available!
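And even without a blogging tool, the feed itself is hardly any code. Here's a sketch of a minimal RSS 2.0 feed for meeting documents; the meeting entries and URLs are hardcoded placeholders, but in real life they'd come from whatever list already drives the HTML table:

using System;
using System.Xml;

class CouncilFeed
{
    static void Main()
    {
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = true;

        using (XmlWriter w = XmlWriter.Create("council.xml", settings))
        {
            w.WriteStartElement("rss");
            w.WriteAttributeString("version", "2.0");
            w.WriteStartElement("channel");
            w.WriteElementString("title", "Monroe County Council Meetings");
            w.WriteElementString("link", "http://www.example.gov/council/");
            w.WriteElementString("description", "Agendas and minutes as they are posted");

            // One item per posted document; a real version would loop over
            // the same data that builds the meetings table.
            WriteItem(w, "Council meeting agenda, March 13",
                "http://www.example.gov/council/agenda-2007-03-13.html",
                new DateTime(2007, 3, 6));
            WriteItem(w, "Council meeting minutes, February 13",
                "http://www.example.gov/council/minutes-2007-02-13.html",
                new DateTime(2007, 2, 20));

            w.WriteEndElement(); // channel
            w.WriteEndElement(); // rss
        }
    }

    static void WriteItem(XmlWriter w, string title, string link, DateTime posted)
    {
        w.WriteStartElement("item");
        w.WriteElementString("title", title);
        w.WriteElementString("link", link);
        // RSS wants RFC 822 dates; .Net's "R" format gives the GMT equivalent.
        w.WriteElementString("pubDate", posted.ToUniversalTime().ToString("R"));
        w.WriteEndElement();
    }
}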

What would you like to see on your local government web site?

Thursday, February 08, 2007

Change is good

I've accepted a new job with Envisage Technologies, a small software company in Bloomington. I'm excited about it as it's a company with a firm interest in agile principles:

Do ideas by the Gang of Four, Steve McConnell, Martin Fowler, Tom DeMarco and Kent Beck resonate with you? Join an experienced team of developers in an Agile environment...

So I'm no longer working in Indianapolis for the first time in more than ten years - I'm not sure what I'm going to do with all the extra time!

(I've also set up a LinkedIn account as per Guy Kawasaki's suggestion. Drop me a line if you want to connect to me.)

Tuesday, February 06, 2007

Prius anti-skid props

The problem with having six inches of snow dumped on us is that the hill leading to our house has a slope that isn't quite a vertical wall, but it's pretty close. As I was heading home, my wife assured me she'd seen the plow go by, so I decided to take the hill - five minutes in good weather, as opposed to going the long way round and taking half an hour.

Up I started, accelerating to about 25 MPH and getting at least 30 or 40 yards before realizing that the plow hadn't been by recently enough to make a difference. It was easily the worst snow I'd ever tackled on that hill, and it's not fun having to back down that slope, let me tell you. Especially with the literal vertical drop on the side that would send you ten feet straight down before being conveniently stopped by a tree.

But here's what the Prius does, straight from the brochure:

Motor Traction Control (TRC) – TRC uses sensors which automatically apply the brake to any slipping wheel while delivering more power to the wheels with greater traction.
Vehicle Stability Control (VSC)* – VSC senses oversteer (tail slide) and understeer (nose pushing forward), and manages the power delivered to each wheel.


It was a beautiful thing. I kept the accelerator right around 25 and the car took over from there. It never slipped sideways and never fishtailed; it applied acceleration to the wheels in bursts of a couple of hundred milliseconds at a time, coasted to grab what little traction it could, then accelerated again, and I was at the top of the hill as nice as pie. I only felt guilty for not stopping the cars I passed and telling them, "Your car got TRC? Got VSC? Then DON'T try the hill tonight! Just because my car can do it doesn't mean yours can!" What a beautifully engineered vehicle.