Ramblings of a software developer with a degree in bioinformatics. Agile development mixed with DNA sequencing - what could go wrong?
Wednesday, April 16, 2008
Dreaming in Code, Scott Rosenberg
I look at this book and remember the many times I was in the same situation as the Chandler folks were in: A product with lots of amorphous design that you can't get coding on because every time a little code gets written, the designers pull back and say, "Whoa, whoa! That's not what we meant at all!" Then you get pulled back into another design meeting and the process starts all over again.
I'm over that now. The company that I work for understands that the important thing is to get something out there that people can get their teeth into, to figure out whether it's any good or not. I think to myself that the bad old days are over...but it scares me to think that a lot of other coders still have to cope with the same sort of situation: the eternal debate between the designers, architects, and coders over whose fault it is that the code is buggy and the users hate the application.
Whether it was his original intention or not, Rosenberg brought back the intense frustration of those times with his description of the flailing of the Chandler product. I suspect a non-coder, and maybe even a lot of coders, would look at it differently, thinking that hey, they were trying to do it right for a change, get the design down before they do the coding so the rest of it is just simple plugging and chugging, stuff any code monkey could do. It never works that way, though.
I get the feeling that what Rosenberg was really looking for was a happy ending. You spend lots of money, do the project right, maybe have some interesting pitfalls along the way, then you release the application, everyone loves it, the world changes, and the book ends. It didn't work that way, unfortunately, and a lot of the second half of the book leaves the realm of Chandler to discuss the philosophy of coding, bringing up agile development and The Mythical Man-Month.
But in the end, it's very difficult to separate the software application from the book, and ultimately, since the ending of the one was vague and ambiguous, not with a bang but with a whimper, the ending of the other is too. Still, the book is one-of-a-kind; a detailed, unflinching look at a single software development effort. Every development team should be so lucky as to have a retrospective like this to look back on.
Wednesday, February 27, 2008
Blog anniversary!
Tuesday, February 12, 2008
Startup secrecy
I got a note today warning us of the terrific need for secrecy around the company that was created at the Bloomington Startup Weekend this past weekend. I ended up not being able to participate in any meaningful sense, except maybe for a few hours on Friday night, so I don't know what any of the big secrets are that they need to keep, but one thing I do know is that
there is no business model that is so unique and different that no one has ever thought about it before.
Creating a successful business is about execution and sweat equity, not about a new and exciting business model. All this insistence on secrecy does is shut down any potential buzz. I mean, you've got 75 or so people who are probably, or hopefully, really excited about the application they've put together. They should be blogging, twittering, discussing how excited they are about the company. That is a lot of people for a small town like Bloomington - the buzz would probably have a multiplicative effect, and people might even start up a buzz about the buzz, so to speak. But they're blowing it by telling everyone that they can't post, can't talk, can't even email.
The PR people and/or the lawyers are probably telling them that they need to present a consistent message, need to prevent any chance of being sued for patent infringement, need to be safe, need to be careful. Sorry, folks, being careful isn't how you create a successful startup. That comes from being bold and taking chances.
I got a separate note telling me I needed to fill out more forms in order to claim the share of the company that I qualified for on Friday night. Meh. I don't think I'll bother.
Wednesday, January 16, 2008
Bloomington Startup Weekend
The week before that we'll have a geek dinner, so I'm guessing the Startup Weekend will be a topic of conversation there too. Hope to see you at El Norteno!
Thursday, December 06, 2007
A Facebook feed for the open web
I do like the Facebook minifeeds, though. A minifeed, if I understand correctly, is an aggregation of all the things that a Facebook user is doing on Facebook - updating status, adding friends, using applications. For each friend, getting updates on what they're doing moment-by-moment on Facebook is interesting, and the Facebook homepage aggregates all my friends' feeds into a single one and sorts it by time. So when I do log on to Facebook, I can see at a glance what all these people are doing, at least in the last few hours.
But there's plenty of stuff on the open web that could go into a minifeed just as easily. A lot of sites are making sure they have Facebook applications now, but not every one, and
who wants to rely on a Facebook app for something that isn't really anything more than an RSS feed?
I ended up creating a web page directly rather than creating a feed - I didn't feel like learning all the ins and outs of RSS or Atom. So, if you want to follow my life, almost minute by minute, check out this page - or just check out my home page, which embeds that page in a small iframe, which is how I intended to use the feed anyway. You can't subscribe to my life just yet, but maybe that will be coming soon!
Along with my feeds mentioned above, the page aggregates Twitter posts, and soon I'll add my Flickr pictures and maybe Delicious, Coastr, or Zelky if they have the feeds in the format I need. I'm looking forward to having my own life feed!
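If you do want to roll your own, the core of a life feed is just merging feeds and sorting by date. Here's a minimal sketch in C# against .NET 3.5's SyndicationFeed class - the URLs are placeholders, and this isn't my actual code:

using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

class LifeFeed
{
    static void Main()
    {
        // Placeholder URLs - swap in your own Twitter/Flickr/Delicious feeds.
        string[] urls =
        {
            "http://example.com/twitter.rss",
            "http://example.com/flickr.atom"
        };

        // Pull every item from every feed, then sort newest-first.
        var items = urls
            .SelectMany(url =>
            {
                using (XmlReader reader = XmlReader.Create(url))
                    return SyndicationFeed.Load(reader).Items.ToList();
            })
            .OrderByDescending(item => item.PublishDate);

        foreach (SyndicationItem item in items)
            Console.WriteLine("{0:g}  {1}", item.PublishDate, item.Title.Text);
    }
}

Swap the Console.WriteLine for HTML generation and you've more or less got the page sitting in my iframe.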
Monday, November 19, 2007
Pair Programming vs. Code Reviews
The comments are already coming in complaining about pairing. I noticed these two particularly:
the obvious conclusion to this is double the hours per project, at minimum (and I'd expect that you would work slower if you had to discuss or explain stuff to someone else the whole day).
I would freak out if someone would watch me every the time I code (and also has a keyboard to interupt me lol)
Sort of the standard responses to pair programming. I'm not so experienced at the art that I can really say the hours don't double - maybe they do - but what I can say is that even if the hours double, the code quality is squared. Maybe it's just a commentary on what lousy code I produce by myself, but there is a big difference when someone else is there looking at the code, even if it's only the "navigator" effect, where the person who isn't actually at the keyboard has the mental space to remember any refactorings or other cleanup that needs to be done. As far as working slower goes, there are only two possibilities: first, that the other person doesn't know the code as well as you do, in which case the knowledge transfer makes the whole thing worthwhile; or second, that there are a few ways of doing things and you need to decide which way is best. The selection you make when coding by yourself might easily not be the best one.
As far as code reviews go, I find them almost unnecessary when pairing. Some teams do peer-review-before-checkin, which I don't really care for - I just can't grok the concept the code is trying to get across from staring at it for a few seconds while someone explains it to me, though I suppose some people can do that. But we do code reviews for two things: first, to go over legacy code - we have plenty of that in our application - and second, to go over code that's just been checked in. This isn't 100% useful either, but on the other hand we have very few development meetings, and sometimes it's worth it just so someone can point out, "Oh, this should have been done using this brand new language feature," or, "We have a custom library that already handles exactly this case - can we use it here?"
So code reviews can be worthwhile, and they are absolutely necessary in a non-pairing environment. The big thing to watch out for is that you don't spend a lot of time debating your internal coding standards, as I've written about before. But my feeling is that they're not as useful as pair programming.
Sunday, November 11, 2007
iFrame scroll to anchor problem
But it's not like the schedule display needs to be real complicated. I tossed it into an iFrame, stuck the schedule on a separate page, and wrote some simple Javascript to scroll to a specific game's anchor based on the current date.
But what's this? When the iFrame scrolls, the entire page jumps down to the iFrame to display it. That's not what I wanted, but I couldn't for the life of me figure out a way to stop it from happening, until I finally ran across Jim Epler's blog entry explaining how he simply scrolled the main page back to the top after setting the anchor. So you set the location in the iFrame, the main page jumps down, then you set it back to the top. It's not pretty, but it works. Here's the code in the iFrame:
// In the iframe: jump to the current game's anchor (anchorname includes the leading "#").
location.replace(location.href + anchorname);
// The main page has now jumped down to the iframe; scroll it back to the top.
parent.window.scrollTo(0, 0);
Thanks, Jim!
Thursday, November 08, 2007
Some test code smells
There are a couple of competing dynamics you get when writing tests: the first is that, in general, less code is better than more code. You want the code to express the concepts you need to express without any extra cruft, without huge globs of copy/pasted code here, there, and everywhere that is a nightmare to maintain. But you need tests as well, and tests are either more code, or more people, and people are one heck of a lot more expensive than code. So it's not at all unlikely that you would have as much test code as production code.
But those tests have to be maintained, so it behooves us to figure out the best way to do that, bearing in mind that the goals of test code are not the same as those of production code. So, here are some problems, or code smells, specific to test code that you might run into:
1. Conditional test logic. A lot of people like to say that one assertion per test is plenty. It seems like unreasonable test gold-plating to me, but if you're going so far as to put an if statement in the middle of the test, you need multiple tests, one for each branch of the statement, every time you do it. Or, the condition might simply be a disguised assertion, where you're saying if (x) keep testing; else assert false. There's no point in that: just assert x at the beginning and let the code blow up if it needs to. This really helps with the readability of the test report, too, especially if all the report says at the end is "The assertion FALSE occurred." Not helpful, whereas knowing directly from the report that X failed is much more useful. Here's the smell and the fix, side by side:
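This is my own sketch, in xUnit.net style; Parse here is just a stand-in for whatever you're actually testing:

using Xunit;

public class ConditionalLogicSmell
{
    // Smell: the if means the real assertion is skipped when result is null,
    // and the else branch can only ever report "FALSE occurred".
    [Fact]
    public void WithConditionalLogic()
    {
        int? result = Parse("42");
        if (result != null)
            Assert.Equal(42, result);
        else
            Assert.True(false);
    }

    // Fix: assert the precondition up front and let the test blow up
    // with a message that names the real problem.
    [Fact]
    public void WithoutConditionalLogic()
    {
        int? result = Parse("42");
        Assert.NotNull(result);
        Assert.Equal(42, result);
    }

    // Stand-in for the code under test.
    static int? Parse(string s)
    {
        int n;
        return int.TryParse(s, out n) ? (int?)n : null;
    }
}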
2. Hardcoded test data
This problem is related to the principle that computer science profs have been tossing around since the beginning of time: that you never want to put numbers or strings in code; always define them somewhere as constants instead, so they can be easily changed. Not a bad principle; certainly it's ideal that anything that needs to be displayed to the user can be easily modified to use another language, so you don't want a bunch of MessageBox.Show( "Are you sure you want to do this?" ) statements scattered through the code. For numbers, though, my general rule is that it doesn't need to be a constant value unless it shows up more than once.
But in a sense, just about every number shows up more than once if written properly: at least once in the code, and at least once in the test for that code. Say for example you're testing a Price object with this line of code:
Assert.Equal( 14.00m, CreatePrice().Retail );
CreatePrice() is part of your fixture, and it sets the list price to $20. Your Price object knocks off 30% to come up with the Retail number.
But now you've got the same number in there twice! See it? $14 is in there, and so is 70% of $20. The same number.
One fix is to move everything to constants. Presumably the Price object has a GetDiscount() method, so you could make the $20 into a constant ListPriceInDollars and change the expected amount to ListPriceInDollars * GetDiscount(). Still pretty verbose, but not really bad for this small example. A better solution is to create an ExpectedObject to compare against what comes back from CreatePrice. In the ideal case, your test then simplifies to
Assert.Equal( ExpectedPrice, CreatePrice() );
which would cover a boatload of other comparisons as well as your Retail value. Here's a rough sketch of the whole thing:
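The Price class and its values are made up to match the example above, and note that Assert.Equal compares via Equals, so the expected-object pattern needs an override:

using Xunit;

// Made-up Price object matching the example: Retail knocks 30% off List.
public class Price
{
    public decimal List;
    public decimal Retail { get { return List * 0.70m; } }

    // Check every field you care about; Assert.Equal calls this.
    public override bool Equals(object obj)
    {
        Price other = obj as Price;
        return other != null && other.List == List && other.Retail == Retail;
    }

    public override int GetHashCode() { return List.GetHashCode(); }
}

public class PriceTests
{
    const decimal ListPriceInDollars = 20.00m;

    Price CreatePrice() { return new Price { List = ListPriceInDollars }; }
    Price ExpectedPrice { get { return new Price { List = ListPriceInDollars }; } }

    [Fact]
    public void PriceComesBackAsExpected()
    {
        // One assertion, no hardcoded $14 anywhere.
        Assert.Equal(ExpectedPrice, CreatePrice());
    }
}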
Wednesday, November 07, 2007
Request could not be submitted for background processing
This is a rather inexplicable error message that Crystal Reports pops up. If you search for the message, you see a few ideas for solutions, but what I eventually did didn't seem to be on anyone's list, so I'm adding this post to the search in case it's helpful.
In my case, the big clue was that we had just modified the report viewer so it could pick up data from alternate data sources based on information in the web.config. It's a pull report, so the data source is specified directly in the report and we're modifying it programmatically. My assumption was that somewhere in the report, something was specified to use a different data source than the rest of the report, and that our code that changed the source was missing it. But I couldn't find it, and Crystal report files have a binary format, so they're next to impossible to search through.
I dug through a book on Crystal last night - luckily it had a chapter on dynamic data sources; thank you, Brian Bischof - and one of the things it suggested was to check the connections on the tables using a member function, TestConnectivity. (You can outfit each individual table that a Crystal Report draws upon with its own login information.) To create a dynamic data source, you have to make sure that the source is changed all over the place in the report: in the report object, in any subreports that it uses, in any tables. So I set the table login information the way it was supposed to be, called TestConnectivity, and threw an exception if that failed. My idea was that there was a single table somewhere causing all the problems, and that if I could figure out which one it was, I could fix it.
Only I didn't need to.
As soon as I added the TestConnectivity check, the report started coming up.
My working assumption is that the changes to the table login information are cached somewhere, and don't actually take hold until the report thinks it's necessary. Presumably there's a bug in the caching system, but calling TestConnectivity causes the table login information to be set properly.
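The workaround boils down to something like this - reconstructed from memory, so treat it as a sketch, and the web.config plumbing is omitted:

using System;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

public static class ReportDataSource
{
    // Point every table in the report (and its subreports) at the new
    // data source, then poke each one with TestConnectivity.
    public static void Apply(ReportDocument report, ConnectionInfo connection)
    {
        foreach (Table table in report.Database.Tables)
        {
            TableLogOnInfo logOn = table.LogOnInfo;
            logOn.ConnectionInfo = connection;
            table.ApplyLogOnInfo(logOn);

            // This call is the real fix: it seems to force the cached login
            // info to take hold. Without it, the report dies with "Request
            // could not be submitted for background processing."
            if (!table.TestConnectivity())
                throw new ApplicationException("Can't connect table: " + table.Name);
        }

        foreach (ReportDocument subreport in report.Subreports)
            Apply(subreport, connection);
    }
}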
Nice weird one there. Spent too many hours on it though.
Wednesday, October 17, 2007
Michael Gartenberg - Still Not Twittering
- General status updates and tweets
- Quick-skim blog entries, newspaper articles, longer forum discussion posts
- Technical papers, long blog entries, that you can't really get the gist of by skimming
- General or group emails
- Emails specifically to me, or that I need to deal with
- Tweets specifically to me
(Picture by Pete Reed)
How these are prioritized, and how they should be prioritized, are two different beasts. Certainly my top-priority items are the emails and tweets directed at me - they're the things I want to read first. After that, depending on how much time I have, I may want to skim the short items or buckle down to a technical paper. But how I actually prioritize them is by the application they're sitting in. I have a Twitter reader, Outlook, Gmail, and Google Reader to grab all these different feeds: stuff that comes in via Outlook gets highest priority since it has the nicest toast mechanism. Teletwitter, my Twitter reader, doesn't always pop toast properly, so I'll often miss messages on it - although I don't care so much, since they're low priority. Except, of course, for the ones targeted at me, which are high priority, but which I still miss since there's no way to grab them out of the Twitter stream. And if a good technical paper comes across Google Reader, I'll probably share or star it to come back to as I j-j-j through the list, then forget about it entirely.
So there's clearly work to be done in this space.
I'm convinced that all my streams could be prioritized together properly - it seems like something that could be mechanized without too much trouble - though I haven't worked out how yet. But I'm sure someone out there is already working on the issue.
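Mechanically, what I want isn't much more than a sort over one merged stream. A toy sketch of the idea - all the names here are made up:

using System;
using System.Collections.Generic;
using System.Linq;

enum SourceKind { Email, Tweet, FeedItem }

class StreamItem
{
    public SourceKind Kind;
    public bool AddressedToMe;
    public DateTime Received;
    public string Title;

    // Lower number = read it sooner.
    public int Priority
    {
        get
        {
            if (AddressedToMe) return 0;             // mail or tweets aimed at me
            if (Kind == SourceKind.Email) return 1;  // general or group email
            if (Kind == SourceKind.Tweet) return 2;  // status updates
            return 3;                                // skim-or-later items
        }
    }
}

class UnifiedStream
{
    static void Main()
    {
        List<StreamItem> items = new List<StreamItem>
        {
            new StreamItem { Kind = SourceKind.Tweet, AddressedToMe = true,
                             Received = DateTime.Now, Title = "@me lunch?" },
            new StreamItem { Kind = SourceKind.FeedItem,
                             Received = DateTime.Now, Title = "Long technical paper" }
        };

        // One stream: highest priority first, newest first within a priority.
        foreach (StreamItem item in items.OrderBy(i => i.Priority)
                                         .ThenByDescending(i => i.Received))
            Console.WriteLine("[{0}] {1}", item.Priority, item.Title);
    }
}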
Tuesday, October 16, 2007
Bloomington Geek Dinner
But none for Bloomington, until now. Or even Indianapolis, as far as I know; if you're up in Indy feel free to come down - we'd love to have you.
It'll be at Max's Place downtown, starting around 7PM. See you there!
Saturday, October 13, 2007
Indy Tech Fest
Big room has no wifi; smaller room seems to work pretty well. I feel very blind without a net connection. Have I said that before? :)
Session #1:
Stephen Fulcher, VS 2008
3.0 install only has WCF and WPF, everything else is the same as 2.0
Integration is tight with SQL Server. Other DBs?
implicitly typed local variables - new keyword var (not untyped, but takes on the type of the right-hand side)
lambda funcs,
extension methods: public bool IsBool( this string s ) - little example at the end of these notes
Generate code metrics?
Q: Is var a breaking change? A: No, you can still use it as the name of a variable
XAML
Services (not web services, just services)
ServiceContract, OperationContract, DataContract attributes - all specified on an interface
svcutil.exe generates client code and can resolve from existing DLL's
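To make the var/lambda/extension-method notes above concrete, here's a tiny sketch of my own (not the presenter's code):

using System;
using System.Linq;

static class StringExtensions
{
    // Extension method: the "this" modifier lets you call s.IsBool().
    public static bool IsBool(this string s)
    {
        bool ignored;
        return bool.TryParse(s, out ignored);
    }
}

class Demo
{
    static void Main()
    {
        // var is not untyped - words is a string[] at compile time.
        var words = new[] { "true", "false", "maybe" };

        // Lambda passed to a LINQ extension method.
        var bools = words.Where(w => w.IsBool());

        foreach (var w in bools)
            Console.WriteLine(w); // prints true, false
    }
}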
----
Session 2:
Mark Strawmeyer
C# tips & tricks
ctrl-alt-downarrow window list
ctrl-/ takes you to search menu
In the search window, type a ">" to go to a list of immediate-style commands
F12 is "go to definition"
Shift-F12 "Find all References", then use F8 to cycle through them
Code snippets - IntelliSense to generate code - type the snippet shortcut, then Tab (to autocomplete), Tab (to expand)
very powerful! prop, for, tryf
"Organize using" tools in VS2008
Unit testing support:
Test input from data source
unit test assemblies without source
Now part of VS2008 Pro
"Create Unit Test" option
Code Profiling is not part of Pro
---- Break for lunch. There are some lunchtime sessions but we just hung out in the lobby and watched the XBoxers.
Session 3:
Mark Strawmeyer
Ajax Tips
eh. Mostly just an overview of Ajax.net
Session 4:
Silverlight
Chad Campbell
Silverlight is the new name for WPF/E. Create a Silverlight object using XAML. Add a "Silverlight.createObjectEx()" call to the HTML to load the plugin.
You can use .NET with Silverlight 1.1. Silverlight is a browser-based plugin. It's cross-browser, cross-platform. What does that mean? They've written a plugin for Firefox as well as IE? Supports Mac and Linux and Safari as well. The .NET framework used by Silverlight is specific to Silverlight.
Access the HTML DOM from managed code. Silverlight plugin will not be able to access the local file system.
Session 5
Dave Bost
WPF applications
Starts by demoing a Twitter feed on his page, created with Silverlight
Is it going to be the same presentation he did in Indy about a year ago? I don't think I blogged about that.
WF, WPF, WCF, CardSpace. WPF is a .NET layer over DirectX.
Nice demo of an airport simulation
Expression is a set of tools for designers: Expression Web, Blend, Design, Media. Costs a bundle, though. Once you design the XAML, you can edit it in Visual Studio.
http://devcares.com in Indy every month.
Friday, September 21, 2007
Continuous Integration
Graphing the number of tests over time would seem to be a good idea
Sligo dashboard: classes, LOC, duplication, max complexity, tests run, line coverage, branch coverage, FindBugs violations; PMD Violations; Max Afferent; Max Efferent
Code reviews are good for high level details; use machines to do the low level detail finding
Code complexity tools: CCMetrics, Vil
Code duplication detector: Simian
Dependency analysis: NDepend
Coding standards: FxCop
Thursday, September 20, 2007
How to work with an open source team
"Free as in Beer"
http://opensource.org/
Q: Don't people resent when companies take open source projects and make money off of them? A: More power to them! Many companies are using their employees to do open source. It's good PR for a company to have people who work on open source - give back to the community, attract high-power talent
Don't contribute unless you:
Know the project license
Get permission from your employer
Get legal review if needed
Can communicate clearly in the project language (usually English)
Oracle tried to strongarm Linux, got squashed, came back with offers to help. Good PR! (I'm not familiar with this story!)
Project currency is trust and respect. You don't start with any. Remember, if you're good, you don't have to point it out.
Q: How do you start gaining respect? A: Post to the mailing list, point out bugs AND fixes. Maybe someone will request a patch, provide it
Q: How does the code stay consistent and looking good? A: There are tools, or people who do work to make things consistent. Or, it doesn't :)
Q: How do you get non-coding contributions going (docs, images)? How do non-coders get cred? A: Projects should support people like tech writers, if they're good.
How to Gather Customer Feedback
Make sure to ask "Is there anything else?"
Several stories about bad feedback forms
Interviewing 5-10 customers is probably as good as interviewing hundreds
Don't assume that no complaints = customer satisfaction. They may just be putting up with it, especially if they feel no one is listening.
Don't just do surveys. Use different feedback-gathering methods. Invite open-ended feedback, in surveys or otherwise.
Don't ignore the feedback!
Focus on the service attributes most important to your clients. Don't know what's important to them? Better find out!
"What aspect of our service is most important to you? Regarding it, how are we doing?"
Lots and lots of examples of how not to
FBWA (Feedback By Walking Around)
I love how Microsoft gets all Ajaxy with feedback on every page: http://msdn2.microsoft.com/en-us/library/ms229931.aspx
Again, act on the feedback! Summary of responses, detailed responses, action
Don't forget power of the naked eye. Often problems are obvious and don't need surveys
Wednesday, September 19, 2007
Security code reviews
Hacme Casino http://www.foundstone.com/us/resources/whitepapers/hacmecasino_userguide.pdf
Foundstone CodeScout
Paros (web app security assessment) http://www.parosproxy.org/index.shtml
Don't overanalyze. (Don't spend two hours determining whether a strcpy is vulnerable when it takes two minutes to change.)
Identify code review objectives (Insider backdoors, compliance with specific regulations)
Lots of discussion of tools. I think the point is, use available analysis tools before bothering with a code review - it's easier and cheaper
http://www.securecoding.org/list
http://codesecurely.org
Usability by Inspection
Doing usability reviews?
Me neither. But if you've got a product that has a UI, an easy way to improve it is to just sit down with a few people - maybe some who will actually use the product, maybe some managers, or maybe just some people you can pull in - and see what they think. That's more or less the gist of what I got from Larry Constantine's session on usability reviews.
Now, the first thing to realize is that a "usability review" is different from a "grouse session". I was once doing a demo of an internal tool my team was working on, and after I'd shown how the tool worked, during the Q&A period one guy spoke up to say, "Boy, does that interface ever look like it was designed by a programmer."
"Interesting," I said. "How would you improve it?"
"Oh, you know. It just doesn't look as sharp as it could."
Well, yes. Nothing ever does; but it wasn't too helpful to tell me that. So, when you do a review, you have to be specific.
But how can you be specific about a UI? A UI is just a UI, right? It either looks good or it doesn't.
Not at all! There are lots of basic principles of design that the people who make web sites for a living know about. Even if your organization really is full of web pages designed by programmers, there's no harm in teaching the programmers some basic principles of design. I have a couple of books on the subject, one by Mr. Constantine himself, which I didn't even realize until I'd gone into the session. The organization or team should lay down the fundamental precepts of design that they want to follow; usability defects are easier to identify objectively with that list in mind. Some examples of good design principles: availability, feedback, structure, reuse, tolerance, simplicity. Check one of the books for the specifics, but the point is that a usability defect violates one of these principles - or, you could also say, is a probable cause of user delay and confusion. It's not a usability defect just because you don't think it looks good!
So here's how you prepare for a usability review: First, organize a few use cases. You may already have them as part of your project, or you may just have to make some up. What you'll be doing is telling the users what they're trying to accomplish.
Then, get the folks together. At a minimum, you should probably have:
- A leader, to make sure everything moves along smoothly;
- A notetaker;
- A Continuity Reviewer. This is someone who is reviewing the UI specifically to make sure it is consistent with overall project guidelines, and with the other pages in the project.
- Users - people who will attempt to use the page. They can be actual customers; agile-style customers; or just people who were walking down the hall at the wrong time.
- A Designated Driver. This is someone who will perform actual mouse clicks or typing at the request of the users. This will depend on the exact situation - do you have a real application, or just some mockups? Do you have a big meeting room and a lot of users, or not? If not, the Designated Driver might as well just be the user.
- Developers/Designers. Developers and designers who worked on the page must never explain or defend design, argue with users, or promise anything. They may only find problems. Users do not count as problems.
Next, have the users go through the use case or scenario you've designed. Introduce the scenario with an overview of context and user motivation. Read one step of the scenario at a time, and ask the users what they would do next. The users take the lead in proposing actions. Never guide or prompt users! Help is limited to simple description or clarification. If a user has to ask for help, you've automatically got a usability defect.
For each defect that you find, the notetaker should note:
- The feature or function the defect is in;
- The location: which web page it's on, or a screenshot of the GUI;
- Which design principle is being violated;
- A short description of the problem;
- The estimated severity of the problem (nominal, minor, major, critical).
You should probably allow one to three hours for the review. So that's it! Get out there and say goodbye to applications that look like they were designed by programmers!
Web Application Risk Modeling
A threat is not a vulnerability. A threat is what someone might try to do to your system; a vulnerability is how they would do it successfully
What risk drivers are there?
Application overview: Documentation drill; models; dataflow
Decompose application: break it down into well-defined "chunks".
Identify threats against the security objectives
Identify vulnerabilities "Vulnerability Assessments"
A threat model helps you to define, categorize, and prioritize vulnerabilities
Make sure to fix vulnerabilities, not exploits - understand all nuances, attack potential, exploit paths
STRIDE / DREAD
Other factors:
Ease of use, mitigants, timing, visibility,
monitorability (can you watch people doing stuff?),
forensics,
access required (even for internal apps, what are the chances of a bad guy infiltrating?)
XSS: Take user-inputted data and display it back without filtering. Nuances to XSS (Reflective Script Attack, Persistent Private Vectors)
A POST-based attack would not show up in the server logs
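In ASP.NET terms, the core of the XSS fix is encoding on output. A trivial sketch of my own, not from the session:

using System.Web;

public class SearchEcho
{
    // Reflective XSS in miniature: user input goes straight back into the page.
    public static string Vulnerable(string userInput)
    {
        return "You searched for: " + userInput;
    }

    // Filtered: <script> comes out as &lt;script&gt; and renders as text.
    public static string Encoded(string userInput)
    {
        return "You searched for: " + HttpUtility.HtmlEncode(userInput);
    }
}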
Tuesday, September 18, 2007
xUnit Test Patterns and Smells
Here's my history with test-driven development: Back in the nineties, I first read Martin Fowler's Refactoring. I thought it was a good idea, and attempted several refactorings on the code base I was working on, with good success. I think it was one of the better-coded applications to come out of that company. But I was always annoyed, because the instructions for a refactoring would always say something like, "make your changes, and test." Testing is hard, man! Especially when you're testing a bit of the application that takes two minutes to reach from application launch and relies on a Direct3D driver to do the right thing.
So I added refactoring to my arsenal but didn't think much more about it until about five years ago, when I ran across an article on TDD - in Dr. Dobb's, I think, but it may not have been. The article mentioned some ideas about testing and mock objects, which turned out to be exactly what I needed for the project I was working on then: a business-level client API with a wrapper lib for calls to the server - the ideal thing for a mock. I played with it for a while, and it worked beautifully! Pretty soon I presented a proposal for moving to TDD to the team I was working with.
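The shape of it, with made-up names standing in for the real API: the client code talks to the server only through an interface, and the test swaps in a mock.

using Xunit;

// Made-up stand-in for the wrapper lib around server calls.
public interface IServerGateway
{
    string FetchRecord(int id);
}

// The business-level client API under test.
public class ClientApi
{
    private readonly IServerGateway server;
    public ClientApi(IServerGateway server) { this.server = server; }
    public string LoadRecord(int id) { return server.FetchRecord(id); }
}

// Hand-rolled mock: canned answer, and it records what was asked of it.
class MockServerGateway : IServerGateway
{
    public int LastRequestedId;
    public string FetchRecord(int id)
    {
        LastRequestedId = id;
        return "record-" + id;
    }
}

public class ClientApiTests
{
    [Fact]
    public void ClientAsksTheServerForTheRightRecord()
    {
        MockServerGateway mock = new MockServerGateway();
        ClientApi client = new ClientApi(mock);

        client.LoadRecord(42);

        // No server, no network - the mock verifies the interaction.
        Assert.Equal(42, mock.LastRequestedId);
    }
}

The mock plays the server's role without any network in sight, which is what made testing the client API suddenly practical.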
There were a couple of quotes that I put in my presentation (probably from the magazine article) that I really liked:
- Tests must be easy to run. If they aren't, people won't run them.
- Tests must be easy to write. If they aren't, people won't write them.
The problem is, tests are easy to skip. Comment out. Ignore. If you do that, your code isn't being tested. But the client doesn't care about that...at least in the beginning. Later on, if your code isn't being tested, bugs will start to crop up. You'll make a change in one area that you never in a million years thought would affect this bit of code over there. But it does, and you've introduced a bug. The client will sure care about that! So you really have to put the effort in to write tests.
But at the same time, you're selling the production code, not the tests. If your team is spending more time on the tests than on the code itself, your velocity is sure to suffer.
So what's the solution? Go back and look at the second quote again. Tests must be easy to write. How do we make them that way?
The first thing to notice is that your objectives for test code are probably going to be a little different than for the production code. For example, execution speed is crucial for production code. You can't have your users twiddling their thumbs while they wait for your web page to load. But for test code, not so much. Go ahead and add ten seconds worth of tests to your build; think anyone will notice? Or, add four hours worth of tests. Sounds good! Just make sure to run them overnight when no one needs to watch them.
On the other hand, is simplicity important for production code? Well...it can't hurt, of course. The smaller and cleaner you can get the code, the better. But sometimes there's nothing you can do about it; you have to add that cache for speed; or denormalize the database so you don't have to make calls across a dozen tables. But for test code? Let's say it again: Tests must be easy to write.
What else? Is correctness important for production code? Of course...but users will put up with small bugs. But correct test code is an absolute requirement. If you don't have the tests right, you'll be writing incorrect production code to satisfy the bad tests. What about flexibility? Code should be flexible, right? Not really, not test code. In fact, there will probably be enough hard-coded test values to make it hardly flexible at all.
This is getting long. I'll add more later.
Software in the large
Scrum of Scrums
Crystal
Iteration Duration: larger the team, shorter the development cycle
Per week of iteration, count on half a day of retrospective (two-week cycle = one full day of retrospective)
Expectation: plan/develop/deliver.
Difficult - activity-oriented planning or component-oriented planning?
Therefore: Result-oriented planning. Focus on the features! Comes back to the Agile Manifesto: Our highest priority is to satisfy the customer.
Plan for accomplishing a valuable feature: integration, test, documentation.
A feature is a brief statement of functionality, from the user's perspective
How does one deal with architecture issues?
A feature produces a measurable result.
Iterations are steered by features, but defined by tasks
Tracking tools: PPTS, TRAC
Someone also mentions they use Sharepoint
Or just three checkboxes: working on it, untested, done done
Tools support communication, not replace it
Release Planning
Iteration review (Demo)
Present software, recognize & extract best practices, learn from failure
Measurement: Acceptance tests, planned functionality, is the product owner satisfied?
Retrospective after every iteration. A likely problem is that people try to make large-scale changes
- Cross-functional or feature teams
- A large project might have tech teams; the customer of a tech team is a feature team
An ideal team is self-organized; this ensures whole features and good knowledge sharing. Managers must provide environment allowing teams to gel. This is like my ACG posts from a few months ago.
Trust
Agile development is a trouble detector. Bad news is also good news. Integration of departments (projects are customers). Close customer relationships ensure rapid feedback.
Discussion of implementing practices a few at a time. Ping-pong implementation!
Synchronization: Face-to-face is preferred. Sync across subteams daily (Scrum of Scrums). If your team is self-organizing, how does that work?
Communication via wiki
Just one "Chief architect" - pulls the strings, makes technical decisions, "guiding light". Relationship of chief architect and customer?
Starting: take baby steps. Start small. Use skilled people. Develop a few features and make sure to do iteration retrospectives. Grow slowly.
Don't finalize architecture before growing team; use retrospectives. Domain teams must formulate new requirements. (But you might have to finalize to eliminate fear...or at least say it's finalized!).
Avoid hot technology. A large project has enough problems on its own without trying to train developers on something new at the same time.
Refactoring: technical excellence is doubly important. If a developer sees a needed refactoring on another team, they have to point it out to them.
Large projects may have exponentially greater test time. 10% of dev effort for integration/build. (If something is difficult, do it over and over until it's not difficult any more.)
Q: Special iterations for integration? A: no
Nor a special integration team; rather people from each team who specialize in integrating
Reviews:
Special review team. People should jump around between teams, and be on a team strictly for the purpose of reviewing the code. Everyone should do this.
Knowledge transfer (via Daily Scrum and pair programming). Scrum master ensures the process; product owner ensures business value.
Q: Agility in a distributed environment. A: