Thursday, September 13, 2012

Denisovan Gene Sequencing

What the fossil record shows about Denisovans: They had at least one finger, one toe, and a tooth.

This evidence comes from the Denisova Cave in Siberia. Scientists found the finger, toe, and tooth, which came from three different individuals, in different levels of the cave, and after doing a DNA analysis determined that their last common ancestor with humans lived about a million years ago. (The fossils themselves date from about 50,000 years ago.) Of course, with such a minimal amount of material available, it's a bit tricky to get a complete genome sequence. The fact that the cave is in Siberia helps some (the average temperature in the cave is right around freezing), but some nice work on sequencing from a group led by Matthias Meyer helped as well.

Here's what they did to the source material:

DNA is dephosphorylated, heat denatured, and ligated to a biotinylated adaptor oligonucleotide, which allows its immobilization on streptavidin-coated beads.

I'm sure you're kicking yourself for not thinking of it first. At any rate, the immobilization of the DNA on the beads seems to be the important part, as it allows the sequence to be copied, creating extra source material to work with. We now know more about the Denisovan genome than we do about the Neanderthal genome: the quality of this Denisovan sequence is better, and less contaminated, than anything we have from the Neanderthals. Pretty cool stuff! Here's an article from Ars Technica if you don't feel like wading through the original paper.


Monday, September 03, 2012

Coding errors in DNA analysis software

The problem:

A method for analysing similarity between protein sequences is to use a substitution scoring matrix. The matrix assigns a specific score to each possible pair of amino acids, so you can compare two sequences by looking at each aligned pair of amino acids, looking up the pair's compatibility score in the matrix, and adding up (or otherwise aggregating) the total score. The higher the score, the more likely it is, presumably, that the two sequences are actually related.

(Picture: Emergent Homochirality - the art and science of comparing proteins)

As a completely made-up example, if you have one sequence that goes Alanine - Glutamine - Isoleucine - Serine, and another sequence that goes Alanine - Glutamine - Tryptophan - Serine, you would look in the matrix for the scores for Alanine paired with Alanine, Glutamine with Glutamine, Isoleucine with Tryptophan, and Serine with Serine. If the identically matching pairs all scored 10, and the Isoleucine-Tryptophan pair scored 2, you would have a score of 32. On the other hand, if your matrix says that Isoleucine paired with Tryptophan gives you a -50, your score would total -20.
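
To make that concrete, here's a minimal sketch in C# of the lookup-and-sum loop, using toy scores covering only the pairs in the made-up example above (a real matrix like BLOSUM62 covers all 400 amino acid pairs):

using System;
using System.Collections.Generic;

class ScoringSketch
{
    // Toy substitution matrix for just the pairs in the example;
    // single-letter codes: A=Alanine, Q=Glutamine, I=Isoleucine, W=Tryptophan, S=Serine.
    static readonly Dictionary<string, int> Matrix = new Dictionary<string, int>
    {
        { "AA", 10 }, { "QQ", 10 }, { "SS", 10 }, { "IW", 2 }   // try -50 for "IW" to get -20
    };

    static int Score(string seq1, string seq2)
    {
        int total = 0;
        for (int i = 0; i < seq1.Length; i++)
            total += Matrix[seq1[i].ToString() + seq2[i]];   // look up each aligned pair
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Score("AQIS", "AQWS"));   // 10 + 10 + 2 + 10 = 32
    }
}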

The program:

It's obviously pretty important to your analysis what matrix you use. Lots of people have come up with lots of matrices. One in particular, known as BLOSUM, was introduced in 1992 by Steven and Jorja Henikoff. The Henikoffs wrote a program (source code here) that analyzed a slew of already-matched sequences and assigned them scores, based on the probability that you would see them in a matching pair and the probability that you would see them at all. (This factor is relevant since some proteins overall appear with more frequency than others. You can find more details in this paper). They wrote it all up in a nice paper and the BLOSUM matrices have been in heavy use ever since.
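
The heart of that probability comparison is a log-odds score. Here's a sketch of the idea only - not the Henikoffs' code, and it skips the clustering and weighting step where the bug turned out to live. Each entry asks how much more often a pair is observed than chance would predict, expressed (as in BLOSUM) in half-bit units:

using System;

class LogOddsSketch
{
    // One BLOSUM-style entry: round(2 * log2(observed / expected)).
    static int Entry(double observedPairFreq, double expectedPairFreq)
    {
        return (int)Math.Round(2.0 * Math.Log(observedPairFreq / expectedPairFreq, 2.0));
    }

    static void Main()
    {
        // Made-up numbers: for two different residues with background
        // frequencies qa and qb, the chance expectation is 2 * qa * qb.
        double qa = 0.074, qb = 0.052, observed = 0.012;
        Console.WriteLine(Entry(observed, 2 * qa * qb));   // positive: seen more often than chance
    }
}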

But the funny thing is, there was a bug.

Yes, yes, I know what you're thinking: a bug? In a piece of software? Impossible! But it's true, and the bug went unnoticed from the time the program was written in 1992 until 2008. It seems that, according to a paper by Mark Styczynski et al., there was an "incorrect normalization during a weighting procedure", which caused the program to generate different matrices depending on the ordering of the sequences used as source.

So why did it take so long to find?


I'm thinking that the primary reason is because the source code is pretty incomprehensible. I'm not blaming the authors - I was writing code in 1992 and I'm sure that if I could even find any of it, it would be no easier to understand. My code, like theirs, would have been written in C, and in 1992 I think we were lucky to get code to work at all. Today, we have niceties such as unit testing, powerful IDEs, language features like garbage collection, and automatically generated documentation. In 1992, we had lint, and sometimes we even used it. Today, we have the confidence to make changes to the code strictly for reasons of readability and clarity. In 1992, we ran the program, looked at the output, and hoped it looked right. If it did, we were done. Plus, we had lint.

Here's the second reason the bug took so long to find: the matrix generated by the buggy code...worked, as far as anyone could see. People used the matrix for years and were perfectly satisfied with the results. As a matter of fact, Styczynski ran a comparison with the corrected matrices and determined that results were poorer using them than they had been using the buggy ones. I'm not sure why or how that could be. Maybe the Henikoffs simply got lucky and created the better matrix; maybe they tried a bunch of different ways to build a matrix and chose the most successful one to write about, not realizing that the paper described a slightly different algorithm than the one the code was actually running. Or maybe the method Styczynski used to compare the matrices was itself flawed.

But there's no question...


Whatever the reason, it's hard to argue with Styczynski's conclusion: "there is significant room for improvement in our understanding of protein evolution."

Tuesday, August 21, 2012

On the teaching of genetics

Rosemary J. Redfield wrote an article on the teaching of genetics that resonated with me. Apparently the standard approach to teaching genetics is a sort of reprise of the history of the field - you start with Mendel and dominant and recessive genes, move on to genes being on chromosomes, and slowly get on, if you're lucky, by the end of the course, to the molecular analysis of genes. The theory behind it is that the students will stand in Mendel's shoes and ask, "Well, why are some genes dominant?", which will be answered by the next phase of the course, which will prompt more questions, to be answered by the following unit, and so on.

The problem is, that strategy doesn't work.

(Picture: Erratic Black Hole Regulates Inside Quasar - NASA, Chandra, 03/25/09)

It reminds me of astronomy classes both in high school and college. Now, astronomy is an awesome and fascinating subject. Go pick up any popular science magazine with an article on astronomy and just check out the language that they use: "Quasar". "Black Hole". "Dark Energy". "Strange Planet". It's like the whole subject was created just to appeal to teenagers. There is a podcast dedicated to astronomy called AstronomyCast that goes over a lot of this stuff, and my eleven-year-old son cannot go to sleep at night without listening to at least a few episodes.

But I hope that the interest isn't torn out of him in high school. If his courses are anything like mine, they will discuss: Stonehenge. Galileo. How, if you stay up night after night, you can see the position of the planets change slightly in relation to the stars. How an optical telescope works.

“Students know what the cool stuff is.”
This isn't the cool stuff. Students know what the cool stuff is. If they don't listen to AstronomyCast, they've probably seen episodes of Nova, or at the very least, played a videogame in which black holes or wormholes play an integral part.

Genetics is just the same. Engage students' interest by hitting them with the cool stuff first. Don't try to emulate the thinking of Mendel, because the instruments and techniques we use today are so much more powerful than anything Mendel ever dreamed of, and students know that, and often know what the techniques are. Redfield suggests starting with personal genomics, which seems like a good plan. Students, who know that they have a unique genetic makeup, should be interested in knowing what that makeup is, or at least how to find out. This leads directly to the ethical questions surrounding that knowledge, and the course is off and running. Redfield is on to something.

Saturday, August 18, 2012

DNA as storage mechanism

It seems some East Coast researchers are pushing the envelope in storing information in DNA. They encoded a book, roughly 5 megabits of data, into oligonucleotides, which were "synthesized by ink-jet printed, high-fidelity DNA microchips" - which I don't fully understand, but I presume means "made into a glob of DNA". The authors then sequenced the DNA and recovered all the data, with 10 incorrect bits.
The innate four bases (A, G, C, T) of DNA seem to lend themselves to some interesting storage techniques. The authors used simple redundancy for their storage - A and C both represented 0, G and T were 1 - which was apparently a departure from earlier attempts that encoded each pair of bits into a single base. This made it easier to construct more robust sequences. I wonder if additional error handling could have been done by placing checksum bases at intervals along the strand. Two bases would provide a range of 16 possible checksum values, which seems like enough to cover a nice run of bits.
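
Here's a hypothetical sketch in C# of that one-bit-per-base idea. Picking whichever of the two candidate bases differs from the previous base means the strand never repeats a base back to back, which is one way the redundancy can make sequences more robust - the paper's actual encoder surely does more than this:

using System;
using System.Text;

class DnaEncoder
{
    // One bit per base: 0 -> A or C, 1 -> G or T. Choose whichever candidate
    // differs from the previous base, so the strand never repeats a base.
    static string Encode(bool[] bits)
    {
        var strand = new StringBuilder();
        char prev = ' ';
        foreach (bool bit in bits)
        {
            char first = bit ? 'G' : 'A';
            char second = bit ? 'T' : 'C';
            char next = (first == prev) ? second : first;
            strand.Append(next);
            prev = next;
        }
        return strand.ToString();
    }

    static void Main()
    {
        // 0,0,0,0 encodes as "ACAC" rather than the homopolymer run "AAAA".
        Console.WriteLine(Encode(new[] { false, false, false, false }));
    }
}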

The book that was encoded had 50,000 words and eleven pictures. With an average code space of 40 bits per word, the text comes to about 2 megabits - well under half the total space - with the images providing the majority. Suppose that all ten bit errors were in one picture? It would be interesting to know how tightly compressed the images were. With a high compression factor, some of the bit errors might be substantial, but small changes to the compression might make a large difference in the visibility of any bit errors.

The authors say that DNA storage is dense, stable, and energy-efficient, but prohibitively expensive and slow to read and write compared to more standard storage. It will be fun to see how this technology evolves!

Tuesday, July 03, 2012

Back to school!

In the fall I'll be enrolling for classes once more. Cathy's now finished her Doctor of Nursing Practice degree so now it's my turn. I've been accepted to the bioinformatics program at Indiana University. It appears that my first classes will be: Intro to Biology, Intro to Informatics, and Intro to Bioinformatics. It should be a lot of fun - I'm really looking forward to it.

Wednesday, May 14, 2008

DevExpress Appointment Template Exception: The file 'MyControl.ascx.cs' does not exist.

The DevExpress ASPxScheduler control is a nice calendar control. It takes a fair amount of code and study to get all the pieces together that you need to use it, but once it's assembled it presents a really nice interface to the user.

The control allows full customization of the appointment display. On the page that holds the calendar, you define, for example, a "VerticalAppointmentTemplate" item for the daily view, and give it the name of a user control you've defined in order to display an appointment in that particular view. Then the user has the ability to drag the appointment around and do other clever things with it for rescheduling, etc, and the calendar control handles the placement of your user control at the correct time on the calendar in the web page. Pretty nice!

So I set up my controls the way I wanted them, tested to make sure it worked, checked the code in, and sent it to QA to look at. Response: "The page errors out as soon as we navigate to it."

Huh?

Further investigation revealed that an exception was being thrown, with the message "The file 'MyVerticalAppointment.ascx.cs' does not exist". For some reason, it wanted the source code for my user control, and I had no idea why. Like all of our other codebehinds, the code is compiled into an assembly that is published on the web site. No source code is put out there.

If you're an ASP.Net veteran from way back, this is probably throwing up all kinds of red flags for you, but I'm not. Googling for various terms in the exception didn't really turn up much, except that most of the similar solutions seemed to involve converting the file or the application to a web application, something I vaguely remember from around the time we upgraded to VS2005, but never really had to deal with. Besides, I knew that our application was already set up the way we needed it. There was no conversion to be done as far as I could tell.

So after I'd futzed around with it for a while, a coworker pointed out an oddity in the user control. Instead of using a CodeBehind declaration to point to the code, it was using a CodeFile declaration.

That was the problem, of course. It wasn't that I had converted from a Web Site project to a Web Application project or back again; it was simply that I had borrowed a piece of sample code from a DevExpress project that used a CodeFile declaration, which was inappropriate for my project. I switched it to CodeBehind, didn't even have to recompile, and everything worked properly.

If it's useful, here's the stack trace of the exception that was thrown:

at System.Web.UI.TemplateParser.ProcessException(Exception ex)
at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)
at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
at System.Web.UI.TemplateParser.ParseFile(String physicalPath, VirtualPath virtualPath)
at System.Web.UI.TemplateParser.ParseInternal()
at System.Web.UI.TemplateParser.Parse()
at System.Web.Compilation.BaseTemplateBuildProvider.get_CodeCompilerType()
at System.Web.Compilation.BuildProvider.GetCompilerTypeFromBuildProvider(BuildProvider buildProvider)
at System.Web.Compilation.BuildProvidersCompiler.ProcessBuildProviders()
at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
at System.Web.UI.PageHandlerFactory.System.Web.IHttpHandlerFactory2.GetHandler(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig)
at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Wednesday, April 16, 2008

Dreaming in Code, Scott Rosenberg

For a while, Scott Rosenberg at Wordyard would send a free copy of his book to anyone who was willing to review it. Thanks, Scott! Here's mine:

I look at this book and remember the many times I was in the same situation as the Chandler folks were in: A product with lots of amorphous design that you can't get coding on because every time a little code gets written, the designers pull back and say, "Whoa, whoa! That's not what we meant at all!" Then you get pulled back into another design meeting and the process starts all over again.

I'm over that now. The company that I work for understands that the important thing is to get something out there that people can get their teeth into, to figure out whether it's any good or not. I think to myself that the bad old days are over...but it scares me to think that a lot of other coders still have to cope with the same sort of situation: the eternal debate between the designers, architects, and coders over whose fault it is that the code is buggy and the users hate the application.

Whether it was his original intention or not, Rosenberg brought back the intense frustration of those times with his description of the flailing of the Chandler product. I suspect a non-coder, and maybe even a lot of coders, would look at it differently, thinking that, hey, they were trying to do it right for a change: get the design down before the coding starts so the rest of it is just simple plugging and chugging, stuff any code monkey could do. It never works that way, though.

I get the feeling that what Rosenberg was really looking for was a happy ending. You spend lots of money, do the project right, maybe have some interesting pitfalls along the way, then you release the application, everyone loves it, the world changes, and the book ends. It didn't work that way, unfortunately, and a lot of the second half of the book leaves the realm of Chandler to discuss the philosophy of coding, bringing up agile development and the mythical man-month.

But in the end, it's very difficult to separate the software application from the book, and ultimately, since the ending of the one was vague and ambiguous, not with a bang but with a whimper, the ending of the other is too. Still, the book is one-of-a-kind; a detailed, unflinching look at a single software development effort. Every development team should be so lucky as to have a retrospective like this to look back on.

Wednesday, February 27, 2008

Blog anniversary!

Four years ago I took the plunge and started blogging. And today...well, I probably have just about the same number of readers as I did then :) But I've done some posts I've liked along the way. These days I'm a lot more likely to Twitter any thoughts I have rather than try to compose a few hundred words about them, with pretty much minimal posting other than what I think might come in handy for someone searching for a specific term on Google. But I'll keep it going. It's often useful for me even when it's not for anyone else!

Tuesday, February 12, 2008

Startup secrecy

SHHHHHHHHHHHHH. Please NO talking about the company to anyone. No blogging, talking, posting, emailing about the company’s particulars, etc…


A note I got today warning us of the terrific need for secrecy around the company that was created at the Bloomington Startup Weekend this past weekend. I ended up not being able to participate in any meaningful sense, except maybe for a few hours on Friday night, so I don't know what any of the big secrets are that they need to keep, but one thing I do know is that

there is no business model that is so unique and different that no one has ever thought about it before.

Creating a successful business is about execution, and sweat equity, not about the new and exciting business model. All this sort of insistence on secrecy does is shut down any potential buzz that would be created. I mean, you've got 75 or so people who are probably, or hopefully, really excited about the application they've put together. They should be blogging, twittering, discussing how excited they are about the company. That is a lot of people for a small town like Bloomington - the buzz would probably have a multiplicative effect and people might even start up a buzz about a buzz, so to speak. But they're blowing it by telling everyone that they can't post, can't talk, can't even email.

The PR people and/or the lawyers are probably telling them that they need to present a consistent message, need to prevent any chance of being sued for patent infringement, need to be safe, need to be careful. Sorry, folks, being careful isn't how you create a successful startup. That comes from being bold and taking chances.

I got a separate note telling me I needed to fill out more forms in order to claim the share of the company that I qualified for on Friday night. Meh. I don't think I'll bother.

Wednesday, January 16, 2008

Bloomington Startup Weekend

Wow, looks like the Bloomington Startup Weekend is proceeding apace. It was announced today as an official part of the Startup Weekends that have been going on around the country - there was one in West Lafayette; wonder if their company made boilers - after a voting process where more than 150 people voted for Bloomington over various other cities. They're looking for around 70 people to sign up, with all sorts of different skills: developers, designers, PR people, lawyers, managers. Once the group gets together on Feb. 8th, they'll brainstorm some ideas for a product, and hopefully go ahead and get it built! Good luck to them. I'm sure I'll sign up, and maybe even make some useful contributions, although I need to clear it with my boss first. I have a feeling that the weekends in places like New York tended to draw dissatisfied people working at Merrill Lynch or IBM or places like that, but I work for a company that may have fewer employees than this weekend will have attendees, and it's a really cool company too. (Want a job at Envisage?) So I'm not looking to make my fortune from this weekend, but it should be fun.

The week before that we'll have a geek dinner, so I'm guessing the Startup Weekend will be a topic of conversation there too. Hope to see you at El Norteno!

Thursday, December 06, 2007

A Facebook feed for the open web

I really don't care that much for Facebook. I don't watch enough movies to take the little quizzes, and I'm not really all that interested in throwing a vampire at my friends, or whatever those weird little applications are supposed to be. I hear that some people use it for professional networking, but most of the pros I know are on LinkedIn, and so am I, and that seems to be sufficient. Not like I have a zillion contacts, but all I really want to know for most of these people is where they live, where they work, and how I can get hold of them if I need to. LinkedIn works brilliantly for that.

I do like the Facebook minifeeds, though. A minifeed, if I understand correctly, is an aggregation of all the things that a Facebook user is doing on Facebook - updating status, adding friends, using applications. For each friend, getting updates on what they're doing moment-by-moment on Facebook is interesting, and the Facebook homepage aggregates all my friends' feeds into a single one and sorts it by time. So when I do log on to Facebook, I can see at a glance what all these people are doing, at least in the last few hours.



(Picture by Somewhat Frank)

But there's plenty of stuff on the open web that could go into a minifeed just as easily. A lot of sites are making sure they have Facebook applications now, but not every one, and

who wants to rely on a Facebook app for something that isn't really anything more than an RSS feed?

So, I decided to put my own life-feed together. Unfortunately, finding an application that simply turns a bunch of RSS and Atom feeds into one publicly available feed turned out to be harder than I expected. If you know of an easy solution, tell me about it - but keep in mind that of the first four feeds I tried to combine, three were sufficiently different to bring down every solution I tried: Google Reader shared items, this blog, and my LibraryThing book reviews.
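
For the record, here's the sort of thing I was looking for, sketched in C# with .NET's SyndicationFeed class. It's a toy, of course: the URLs are placeholders, and real-world feeds are exactly as messy as described above.

using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

class LifeFeed
{
    // Merge several RSS/Atom feeds into a single feed, newest items first.
    static SyndicationFeed Merge(params string[] feedUrls)
    {
        var items = feedUrls
            .Select(url => SyndicationFeed.Load(XmlReader.Create(url)))
            .SelectMany(feed => feed.Items)
            .OrderByDescending(item => item.PublishDate)
            .ToList();
        return new SyndicationFeed("My life feed", "Everything in one place", null, items);
    }

    static void Main()
    {
        var merged = Merge("http://example.com/blog.rss", "http://example.com/tweets.atom");
        // Assumes every item has a title; real feeds need null checks.
        foreach (var item in merged.Items.Take(10))
            Console.WriteLine(item.PublishDate.ToString("d") + "  " + item.Title.Text);
    }
}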

I ended up creating a web page directly rather than creating a feed - I didn't feel like learning all the ins and outs of RSS or Atom. So, if you want to follow my life, almost minute by minute, check out this page - or just check out my home page, which has a small iframe displaying that page, which is how I intended to use the feed anyway. You can't subscribe to my life just yet, but maybe that will be coming soon!

Along with my feeds mentioned above, the page aggregates Twitter posts, and soon I'll add my Flickr pictures and maybe Delicious, Coastr, or Zelky if they have the feeds in the format I need. I'm looking forward to having my own life feed!

Monday, November 19, 2007

Pair Programming vs. Code Reviews

Jeff Atwood over at Coding Horror is asking for comments on the relative efficiency of these processes. We do both at my company, and while I don't think I have anything truly original to add, I come down on the agile side of the discussion (which should not be surprising to you if you read this blog often!)

The comments are already coming in complaining about pairing. I noticed these two particularly:

the obvious conclusion to this is double the hours per project, at minimum (and I'd expect that you would work slower if you had to discuss or explain stuff to someone else the whole day).

I would freak out if someone would watch me every the time I code (and also has a keyboard to interupt me lol)

Sort of the standard responses to pair programming. I'm not so experienced at the art that I can really say the hours don't double - maybe they do - but what I can say is that even if the hours are doubling, the code quality is squared. Maybe it's just a commentary on what lousy code I produce by myself, but there is a big difference when someone else is there looking at the code, even if it's only the "navigator" effect, where the person who isn't at the keyboard has the mental space to remember any refactorings or other cleanup that needs to be done. As far as working slower goes, there are only two possibilities: first, the other person doesn't know the code as well as you do, in which case the knowledge transfer makes the whole thing worthwhile; or second, there are a few ways of doing things and you need to decide which way is best. The selection you make when coding by yourself might easily not be the best one.


even if the hours are doubling, the code quality is squared.
Freak out if someone watched you code? Dude, is your code really that bad?

As far as code reviews go, I find them almost unnecessary when pairing. Some teams do peer-review-before-checkin, which I don't really care for - I can't grok the concept the code is trying to get across just from staring at it for a few seconds while someone explains it to me, but I suppose some people can do that. But we do code reviews for two things: first, to go over legacy code - we have plenty of that in our application - and second, to go over code that's just been checked in. This isn't 100% useful either, but on the other hand we have very few development meetings, and sometimes it's worth it just so someone can point out, "Oh, this should have been done using this brand new language feature" or, "we have a custom library that already handles exactly this case, can we use it here?"

So code reviews can be worthwhile, and they are absolutely necessary in a non-pairing environment. The big thing to watch out for is that you don't spend a lot of time discussing what your internal coding standards are, as I've written about before. But my feeling is that it is not as useful as pair programming.

Sunday, November 11, 2007

iFrame scroll to anchor problem

So on my basketball fan page, HoosierBall, I was working on the schedule display. I had originally set up a simple widget from Zvents, but after using that for two years, Zvents apparently changed their entire business model, from events created by users to events created by businesses. Every Hoosier game had already been entered via StubHub; the widget that I had been using no longer worked at all; and as far as I could tell there was no replacement.

But it's not like the schedule display needs to be real complicated. I tossed it into an iFrame, stuck the schedule on a separate page, and wrote some simple Javascript to scroll to a specific game's anchor based on the current date.

But what's this? When the iFrame scrolls, the entire page jumps down to the iFrame to display it. That's not what I wanted, but I couldn't for the life of me figure out a way to stop it from happening, until I finally ran across Jim Epler's blog entry explaining how he simply scrolled the main page back to the top after setting the anchor. So you set the location in the iFrame, the main page jumps down, then you set it back to the top. It's not pretty, but it works. Here's the code in the iFrame:


location.replace(location.href + anchorname);  // jump to the game's anchor (anchorname presumably starts with '#')
parent.window.scrollTo(0, 0);                  // then scroll the parent page back to the top


Thanks, Jim!

Thursday, November 08, 2007

Some test code smells

More of my writeup on the XUnit session I went to nearly two months ago. I hope someday to make all of my note-taking blog entries coherent to a larger audience :)

There are a couple of competing dynamics you get when writing tests: the first is that, in general, less code is better than more code. You want the code to express the concepts you need to express without any extra cruft, without huge globs of copy/pasted code here, there, and everywhere that is a nightmare to maintain. But you need tests as well, and tests are either more code, or more people, and people are one heck of a lot more expensive than code. So it's not at all unlikely that you would have as much test code as production code.

But those tests have to be maintained, so it behooves us to figure out the best way to do that, bearing in mind that the goals of test code are not the same as those of production code. So, here are some problems, or code smells, specific to test code that you might run into:

1. Conditional test logic. A lot of people like to say that one assertion per test is plenty. It seems like unreasonable test gold-plating to me, but if you're going so far as to put an if statement in the middle of the test, you need multiple tests, one for each branch of the statement. Or the condition might simply be an assertion in disguise, where you're saying if (x) keep testing; else assert false. There's no point in that; just assert x at the beginning and let the code blow up if it needs to. This really helps with the readability of the test report, too, especially if all the report says at the end is "The assertion FALSE occurred". Not helpful, whereas knowing directly from the report that X failed is much more useful.
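
Here's a hypothetical xUnit-style illustration - the Lookup method just stands in for whatever you're actually testing:

using Xunit;

public class ConditionalLogicExample
{
    // Stand-in for the real code under test.
    static string Lookup(string key) => key == "answer" ? "42" : null;

    [Fact]
    public void Smelly_version()
    {
        var value = Lookup("answer");
        if (value != null)
            Assert.Equal("42", value);
        else
            Assert.True(false);   // the report just says an assertion of FALSE occurred
    }

    [Fact]
    public void Better_version()
    {
        var value = Lookup("answer");
        Assert.NotNull(value);    // a failure here names the actual problem
        Assert.Equal("42", value);
    }
}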


2. Hardcoded test data

This problem is related to the principle that computer science profs have been tossing around since the beginning of time: that you never want to put numbers or strings in code; always define them somewhere as constants instead, so they can be easily changed. Not a bad principle; certainly it's ideal that anything that needs to be displayed to the user can be easily modified to use another language, so you don't want a bunch of MessageBox.Show( "Are you sure you want to do this?" ) statements scattered through the code. For numbers, though, my general rule is that it doesn't need to be a constant value unless it shows up more than once.

But in a sense, just about every number shows up more than once if written properly: at least once in the code, and at least once in the test for that code. Say for example you're testing a Price object with this line of code:

Assert.Equal( 14m, CreatePrice().Retail );   // expecting $14

CreatePrice() is part of your fixture, and it sets the list price to $20. Your Price object knocks off 30% to come up with the Retail number.

But now you've got the same number in there twice! See it? $14 is in there, and so is 70% of $20. The same number.

One fix is to move everything to constants. Presumably the Price object has a GetDiscount() method, so you could make the $20 into a constant ListPriceInDollars, and change the expected amount to ListPriceInDollars * GetDiscount(). Still pretty verbose, but not really bad for this small example. A better solution can be to create an ExpectedObject and compare it against what comes back from CreatePrice(). In the ideal case, your test would then simplify to

Assert.Equal( ExpectedPrice, CreatePrice() );

which would cover a boatload of other comparisons as well as your Retail value.
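
For the constants version, here's roughly how the pieces might fit together. The class and values are invented for illustration, with GetDiscount() returning the 70% retention factor so that ListPriceInDollars * GetDiscount() comes out to $14:

using Xunit;

public class Price
{
    public const decimal ListPriceInDollars = 20m;

    public decimal GetDiscount() => 0.7m;   // retail is 70% of list (30% knocked off)

    public decimal Retail => ListPriceInDollars * GetDiscount();
}

public class PriceTests
{
    static Price CreatePrice() => new Price();

    [Fact]
    public void Retail_is_discounted_list_price()
    {
        var price = CreatePrice();
        // The expectation is derived from the constants, not hardcoded as a second $14.
        Assert.Equal(Price.ListPriceInDollars * price.GetDiscount(), price.Retail);
    }
}

The ExpectedObject version on top of this just needs Price to define equality, so that Assert.Equal can compare whole objects in one shot.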

Wednesday, November 07, 2007

Request could not be submitted for background processing

Boy, don't you hate it when the request could not be submitted for background processing? I know I do.

This is a rather inexplicable error message that Crystal Reports pops up. If you search for the message, you see a few ideas for solutions, but what I eventually did didn't seem to be on anyone's list, so I'm adding this post to the search in case it's helpful.

In my case, the big clue was that we had just modified the report viewer so it could pick up data from alternate data sources based on information in the web.config. It's a pull report, so the data source is specified directly in the report and we're modifying it programmatically. My assumption was that somewhere in the report, there was something being specified to use a different data source than all the rest of the report, and that our code that changed the source was missing it. But I couldn't find it, and Crystal report files have a binary format, so they're next to impossible to search through.

The big clue was that we had just modified the report viewer so it could pick up data from alternate data sources

I dug through a book on Crystal last night - luckily it had a chapter on dynamic data sources; thank you, Brian Bischof - and one of the things it suggested was to check the connections on the tables using a member function, TestConnectivity. (You can outfit each individual table that a Crystal Report draws upon with its own login information.) To create a dynamic data source, you have to make sure that the source is changed all over the place in the report: in the report object, in any subreports that it uses, in any tables. So I set the table login information the way it was supposed to be, then called TestConnectivity, and threw an exception if it failed. My idea was that there was a single table somewhere causing all the problems, and that if I could figure out which one it was, I could fix it.
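
From memory, the table loop looked roughly like this - treat it as a sketch rather than the exact code:

using System;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

class ReportLogons
{
    // Apply connection info to every table in the report, verifying each one.
    // (Subreports need the same treatment with their own Database.Tables.)
    static void ApplyLogons(ReportDocument report, ConnectionInfo connection)
    {
        foreach (Table table in report.Database.Tables)
        {
            TableLogOnInfo logOn = table.LogOnInfo;
            logOn.ConnectionInfo = connection;
            table.ApplyLogOnInfo(logOn);

            if (!table.TestConnectivity())
                throw new ApplicationException("Could not connect table: " + table.Name);
        }
    }
}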

Only I didn't need to.

As soon as I added the TestConnectivity check, the report started coming up.

My working assumption is that the changes to the table login information are cached somewhere, and don't actually take hold until the report thinks it's necessary. Presumably there's a bug in the caching system, but calling TestConnectivity causes the table login information to be set properly.

Nice weird one there. Spent too many hours on it though.

Wednesday, October 17, 2007

Michael Gartenberg - Still Not Twittering

Michael Gartenberg is still not twittering, poor unenlightened soul. He prefers to use Facebook for his minute-to-minute status, and doesn't want to add another bunch of contacts to another social network after having done it twice with Facebook and LinkedIn. Sure can't blame him for that. To mix a metaphor, the walled gardens need to start playing a little more nicely with each other. I think then we'll all be a little more inclined to add a new network to our lists. I got a Jaiku account after the Google news, but I haven't put forth the effort to put a bunch of my contacts on it, or to find a nice client for it, and because of that I haven't even looked at the web page in days.

Michael also points out that it's another feed to check. He's already got a bunch of RSS feeds, work email, and personal email, and doesn't want another feed to look at.


But he is willing to admit that he might be missing things. The issue of how to prioritize feeds is coming up for me, too. The feeds in my information stream generally break down to:



  • General status updates and tweets

  • Quick-skim blog entries, newspaper articles, longer forum discussion posts

  • Technical papers and long blog entries that you can't really get the gist of by skimming

  • General or group emails

  • Emails specifically to me, or that I need to deal with

  • Tweets specifically to me

(Picture by Pete Reed)


How these are prioritized, and how they should be prioritized, are two different beasts. Certainly my top priority items are emails and tweets for me - they're the things I want to read first. After that, depending on how much time I have, I may want to skim the short items or buckle down to a technical paper. But how I actually prioritize them is via the application they're sitting on. I have a Twitter reader, Outlook, Gmail, and Google Reader to grab all these different feeds: stuff that comes in via Outlook gets highest priority since it has the nicest toast mechanism. Teletwitter, my twitter reader, doesn't always pop toast properly, so I'll often miss messages on it, although I don't care so much since they're low priority. Except, of course, for the ones targeted to me, which are high priority, but which I still miss since there's no way to grab them out of the twitter stream. And if a good technical paper comes across Google Reader, I'll probably share or star it to come back to as I J-J-J through the list, then forget about it entirely.


So there's clearly work to be done in this space.


I am absolutely sure that all my streams can be prioritized together properly, because it seems to me that it can be mechanized without too much trouble, but I'm not sure how yet. But I'm sure someone out there is already working on the issue.

Tuesday, October 16, 2007

Bloomington Geek Dinner

I copied the link up there from the Facebook event page...I have no idea what it will tell you if you're not a member, or even if the link is good for anyone besides me! Anyway, we're hosting a Bloomington Geek Dinner next Tuesday. Geek dinners got to be popular in the last few years, probably due to Scoble's encouragement, and for a while geekdinner.com was up for people to schedule their events and create guest lists. No more, but Facebook is good for scheduling parties, perhaps too good judging from some IU undergrads, and there are a lot of Facebook groups as well as open Web pages for local geeks to find each other, meet, eat, and talk tech; for the big cities there are even specialized girl geek dinners.

But none for Bloomington, until now. Or even Indianapolis, as far as I know; if you're up in Indy feel free to come down - we'd love to have you.

It'll be at Max's Place downtown, starting around 7PM. See you there!



Saturday, October 13, 2007

Indy Tech Fest

Big room has no wifi; smaller room seems to work pretty well. I feel very blind without a net connection. Have I said that before? :)

Session #1:
Stephen Fulcher, VS 2008
3.0 install only has WCF and WPF, everything else is the same as 2.0
Integration is tight to SQL Server. Other db's?
anonymous variables - new keyword var (not untyped, but takes on the type of the right-hand side)
lambda funcs,
extension methods: public bool IsBool( this string s )
Generate code metrics?
Q: Is var a breaking change? A: no, you can still use it as the name of a variable (quick sketch of these features below)
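
Here's a tiny example I put together of the features above - my own sketch, not code from the session:

using System;
using System.Linq;

static class StringExtensions
{
    // Extension method: callable as s.IsBool() on any string.
    public static bool IsBool(this string s)
    {
        return s == "true" || s == "false";
    }
}

class Vs2008Features
{
    static void Main()
    {
        var words = new[] { "true", "maybe", "false" };    // var: inferred as string[]
        var bools = words.Where(w => w.IsBool());          // lambda plus extension method
        Console.WriteLine(string.Join(", ", bools.ToArray()));   // prints: true, false
    }
}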


XAML
Services (not web services, just services)
ServiceContract, OperationContract, DataContract attributes. - all specified as an interface
svcutil.exe generates client code and can resolve from existing DLLs
----

Session 2:
Mark Strawmeyer
C# tips & tricks

ctrl-alt-downarrow window list
ctrl-/ takes you to search menu
In the search window, type a ">" to go to a list of immediate-style commands
F12 is "go to definition"
Shift-F12 "Find all References", then use F8 to cycle through them
Code snippets - intellisense to generate code - type in the code and tab (to autocomplete), tab (to generate)
very powerful! prop, for, tryf
"Organize using" tools in VS2008
Unit testing support:
Test input from data source
unit test assemblies without source
Now part of VS2008 Pro
"Create Unit Test" option
Code Profiling is not part of Pro

---- Break for lunch. There are some lunchtime sessions but we just hung out in the lobby and watched the XBoxers.

Session 3:
Mark Strawmeyer
Ajax Tips

eh. Mostly just an overview of Ajax.net

Session 4:

Silverlight

Chad Campbell

Silverlight is the new name for WPF/E. Create a silverlight object using xaml. Add "Silverlight.CreateObjectEx()" to the HTML to use the plugin.
You can use .Net with silverlight 1.1. Silverlight is a browser-based plugin. It's cross-browser, cross-platform. What does that mean? They've written a plugin for Firefox as well as IE? Supports Mac and Linux and Safari as well. .Net framework used by Silverlight is specific to Silverlight.

Access the HTML DOM from managed code. Silverlight plugin will not be able to access the local file system.

Session 5

Dave Bost

WPF applications

Starts by demoing a Twitter feed on his page, created with Silverlight

Is it going to be the same presentation he did in Indy about a year ago? I don't think I blogged about that.

WF, WPF, WCF, Cardspace. WPF is .Net layer over DirectX.

Nice demo of an airport simulation

Expression is a set of tools for designers. Expression Web, Blend, Design, Media. Costs a bundle, though. Once you design the xaml, you can edit it in Visual Studio.

http://devcares.com in Indy every month.

Friday, September 21, 2007

Continuous Integration

The time between a defect being introduced and being found is related to the expense of fixing it

Graphing the number of tests over time would seem to be a good idea

Sligo dashboard: classes, LOC, duplication, max complexity, tests run, line coverage, branch coverage, FindBugs violations; PMD Violations; Max Afferent; Max Efferent

Code reviews are good for high level details; use machines to do the low level detail finding

Code complexity tools: CCMEtrics, Vil

Code duplication detector: Simian

Dependency: NDepend

Coding Standards: FXCop

Thursday, September 20, 2007

How to work with an open source team

"Free as in Freedom"
"Free as in Beer"

http://opensource.org/

Q: Don't people resent when companies take open source projects and make money off of them? A: More power to them! Many companies are using their employees to do open source. It's good PR for a company to have people who work on open source - give back to the community, attract high-power talent

Don't contribute unless you:
Know the project license
Get permission from your employer
Get legal review if needed
Can communicate clearly in the project language (usually English)

Oracle tried to strongarm Linux, got squashed, came back with offers to help. Good PR! (I'm not familiar with this story!)

Project currency is trust and respect. You don't start with any. Remember, if you're good, you don't have to point it out.

Q: How do you start gaining respect? A: Post to the mailing list, point out bugs AND fixes. Maybe someone will request a patch, provide it

Q: How does the code stay consistent and looking good? A: There are tools, or people who do work to make things consistent. Or, it doesn't :)

Q: How do you get non-coding contributions going (docs, images)? How do non-coders get cred? A: Projects should support people like tech writers, if they're good.