Formal Code Reviews vs. Automatic Tools

One of the questions I always get when talking about code reviews, formal or otherwise, is “What do you think about automatic code review software like CodeCop or varScoper?”

First off, I’ll say I love and use varScoper. Second, I like the idea of CodeCop, but I’ve never been good enough at writing regular expressions to craft my own rules well enough to use it. That said, as great as these tools are, they do not come close to eliminating the need for formal code reviews.

Automatic testers are great at finding small, discrete, known issues:

  • Did you var-scope every variable in every function?
  • Did you make sure not to use relative URLs in cfschedule?
  • Did you use cfqueryparam on all of your user input driven queries?

But they can’t figure out bigger picture issues like:

  • Is this function named “computeCosine” actually returning the sine?
  • Is there sufficient validation occurring on form posts before they are being processed?
  • Is this function doing something that a built-in function could be doing better?
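
A contrived CFML example of the difference: the function below would satisfy a var-scoping checker and every other automated rule, yet only a human reviewer will notice that it doesn’t do what its name claims (the function is mine, invented for illustration):

```cfml
<cffunction name="computeCosine" returntype="numeric" output="false">
   <cfargument name="angle" type="numeric" required="true" />
   <!--- Properly var-scoped, so a tool like varScoper is satisfied... --->
   <cfset var result = Sin(arguments.angle) />
   <!--- ...but it returns the sine, not the cosine. Only human eyes catch that. --->
   <cfreturn result />
</cffunction>
```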

In addition to the hard-to-compute issues above, there are the unknowns. Until someone discovered there was an issue with using relative URLs in cfschedule, there was no rule for it in any code review tool. It takes human eyes to discover these issues, either during a review or during bug fixing. Only after they are discovered can they be codified.

To put it more succinctly, and in a metaphorical manner:

Automatic code review tools are spell checkers. Formal code reviewers are editors. Spell checkers, and even grammar checkers, don’t prevent you from writing perfectly spelled, perfectly constructed crap; you need an editor for that. In the same way, code review tools don’t prevent you from writing crap code that adheres to all of their rules.

So use automatic tools before a formal code review, just like you spell-check an email before you send it out. But don’t rely on an automatic tool as your last line of oversight.


Another Unfuddled Experience

We started using Unfuddle as our Subversion, bug tracking, and project management solution a few months back. As time has gone on, I’ve come to rely on it heavily. Nothing illustrates that quite like the feeling you get when it isn’t accessible anymore.

You see, my business office changed their credit card info. They gave me access to the new card, and I typed its numbers into Unfuddle’s automatic biller. When I did, I must have transposed a few digits. Yesterday was the day that automatic billing takes place. We were charged, the charge failed, and, due to a bug in their system, our quota was lowered to the free level (200 MB) instead of our actual quota (10 GB). Once you go over quota, they lock down your account to prevent subsequent commits and updates. I’m sure they do this to prevent runaway processes from going over quota and causing problems. It also has the effect of bringing everything to a grinding halt.

I fixed the credit card, and got confirmation from the system that I had entered it correctly this time. But still no commits.

I sent in a support email, as that is their only means of support contact. But my boss started circling, asking when he would be able to commit again. He started talking faster and faster, which made it clear he was getting agitated by the possibility of problems publishing his application for an important demo this morning.

So I started searching through previous correspondence with support to see if anyone had a phone number in their signature. Lo and behold, one support technician had his mobile number in his. At this point I was torn: I didn’t want to bother this guy on his mobile, but I didn’t want my boss to become John Moschitta either. Finally, after looking up the area code and finding out the guy was in Hawaii, I decided that if he got to be a cool Web 2.0 programmer living in Hawaii, then I wasn’t going to feel too bad about calling him on his mobile.

He was extremely cool about being bothered (he was, in fact, not in the office that day). I explained the situation and told him I would accept being shunted to another support person, but he would have none of it, and within literally three minutes of talking to him we were happily unfuddling again. He also said he would look into the bug that caused us not to get the customary two-week compliance period.

I was already a pretty pleased customer, but this turned me into a passionate one. That’s the sort of competent, fast response you wish your vendors would give every time.

In short, Unfuddle rocks.

Knowledge@Wharton Upgrades

Today the Knowledge@Wharton tech team put into the wild something I’ve been working on for some time: a new platform for Knowledge@Wharton and India Knowledge@Wharton. The new platform consists of the following:

  • Windows 2008 Load Balanced Cluster
  • Core Services Code Base and ColdFusion 8
  • Layout
  • Development and Publishing System

Windows 2008 Load Balanced Cluster

We built a two-node cluster using Windows 2008 64-bit Enterprise Edition. One node is a VMware instance, and one node is a blade server. I like this configuration, as I only have to worry about a machine warranty on one node, but I have the backup of a hardware-based node if something goes wrong with our VMware installation. Not that such an event is likely; I would just prefer not to tempt fate.

I’ve mentioned before that we don’t use load balancing so much for load as for availability. By having dual-node clusters for our production environments, we buy ourselves zero-downtime patch cycles. We did have a little trouble getting NLB on Windows 2008 working, but we got it fixed after talking to Microsoft support.

The upgrade went really smoothly. I’m used to using CNAMEs to handle this sort of move, but due to SSL considerations knowledge.wharton.upenn.edu has an A record. The easiest way to make the change was to add the new nodes to the existing Windows 2003 cluster, then remove the Windows 2003 nodes. It worked like a charm, and I think it will be my new procedure, as it was shockingly easy.

Core Services Code Base and ColdFusion 8

In looking to upgrade Knowledge and India Knowledge to ColdFusion 8, I had to touch a lot of the code. Not so much because there was a problem with it, but because we wanted to take advantage of new features. In the course of doing that, I discovered that the main Knowledge site and India Knowledge contained a lot of code duplicated between them. I was able to centralize it and then add new features to both sites. There are two main features that I added to the central code base: cached queries and search-driven folksonomy.

Caching the queries was pretty trivial. I rolled my own instead of using an existing caching framework or native ColdFusion caching, because I wanted an easy-to-flush cache system that didn’t need to be too complex. Because of the highly normalized nature of the database, I couldn’t get a tremendous performance boost through indexing; caching, however, has proven to be the correct solution by a long shot. It makes sense: we have a lot of frequently read, rarely written data here. I’m just surprised at the overall boost to the site we accomplished with one fix.
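
The cache code itself isn’t in this post, but a minimal sketch of the approach might look like the following (function, table, and variable names here are mine, not the production code base):

```cfml
<!--- Illustrative sketch only: an application-scoped, easy-to-flush query cache. --->
<cffunction name="getRecentArticles" returntype="query" output="false">
   <cfset var articles = "" />
   <!--- Serve from the cache when we have it --->
   <cfif StructKeyExists(application.queryCache, "recentArticles")>
      <cfreturn application.queryCache.recentArticles />
   </cfif>
   <cfquery name="articles" datasource="#application.dsn#">
      SELECT articleID, title, publishDate
      FROM articles
      ORDER BY publishDate DESC
   </cfquery>
   <cfset application.queryCache.recentArticles = articles />
   <cfreturn articles />
</cffunction>

<!--- Flushing the whole cache after a publish is then one line: --->
<cfset application.queryCache = StructNew() />
```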

“Search-driven folksonomy” is a cool idea my boss Dave had back in 2006. It ran for a while, then got deactivated for some reason, and I just re-implemented it. Basically, instead of having people manually tag articles, we use our search referral keywords to tag articles automatically: when a keyword sends some critical number of search hits to an article, that keyword becomes a tag on that article. We’ve enabled the collection piece for now and will enable tag display once we tweak the model a bit after getting some real data.
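
A hypothetical sketch of the collection piece (the table, threshold, and tagging call are all mine, invented for illustration):

```cfml
<!--- Called on each article view that arrived via a search referral. --->
<cffunction name="recordSearchKeyword" returntype="void" output="false">
   <cfargument name="articleID" type="numeric" required="true" />
   <cfargument name="keyword" type="string" required="true" />
   <cfset var hits = "" />
   <cfquery datasource="#application.dsn#">
      INSERT INTO keywordHits (articleID, keyword, hitDate)
      VALUES (<cfqueryparam value="#arguments.articleID#" cfsqltype="cf_sql_integer" />,
              <cfqueryparam value="#arguments.keyword#" cfsqltype="cf_sql_varchar" />,
              <cfqueryparam value="#Now()#" cfsqltype="cf_sql_timestamp" />)
   </cfquery>
   <cfquery name="hits" datasource="#application.dsn#">
      SELECT COUNT(*) AS hitCount
      FROM keywordHits
      WHERE articleID = <cfqueryparam value="#arguments.articleID#" cfsqltype="cf_sql_integer" />
        AND keyword = <cfqueryparam value="#arguments.keyword#" cfsqltype="cf_sql_varchar" />
   </cfquery>
   <!--- Promote the keyword to a tag once it crosses the critical threshold --->
   <cfif hits.hitCount GTE application.tagThreshold>
      <cfset application.tagService.addTag(arguments.articleID, arguments.keyword) />
   </cfif>
</cffunction>
```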


Layout

I can’t take credit for the look and feel. This was done by Dave and a co-worker, Sanjay. They worked on pushing Knowledge to a more current, centered layout, along with a few other tweaks to accommodate advertising without compromising the editorial content.

The one thing I contributed here was a custom tag that converted an article to an array of <p> tags. Then the article custom tag was able to wrap around other custom tags and display them in the flow of the article: at set positions in the array, at the next pre-determined location in the array, or at the end. It made for a very flexible way to showcase link suggestions or article tools within the flow of the article, thereby freeing up space for the aforementioned ads.
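
That custom tag isn’t shown here; a rough sketch of the core idea (the regex split and attribute names are mine) might look like:

```cfml
<!--- Break the article body into an array of <p> blocks (REMatch is new in CF8). --->
<cfset paragraphs = REMatch("(?s)<p>.*?</p>", attributes.articleBody) />

<!--- Drop a tools widget in after the second paragraph,
      or at the end of an article that is too short. --->
<cfif ArrayLen(paragraphs) GTE 2>
   <cfset ArrayInsertAt(paragraphs, 3, attributes.toolsWidgetHTML) />
<cfelse>
   <cfset ArrayAppend(paragraphs, attributes.toolsWidgetHTML) />
</cfif>

<cfoutput>#ArrayToList(paragraphs, "")#</cfoutput>
```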

Development and Publishing System

This was the hardest part of the whole thing to tackle, because I was asking people to change the way they worked. But Dave and Sanjay were open to it, especially since I promised that it would make their lives much easier after a little bit of pain.

The old model consisted of doing development on a shared development server with no source control. Changes were manually pushed to production. Occasional copies were made of the code. Communication about changes was ad hoc and not necessarily as frequent as the changes.

The new model pushes development to local installs of ColdFusion. Source control is handled through Subversion hosted on Unfuddle. Communication about changes occurs on every update, thanks to Unfuddle’s notification system. The shared development server gets automatically updated from the trunk on every commit via svn commit hooks. Then, to move the code around, I have one-click ANT tasks that handle updating development from Subversion, staging from development, and production from staging, plus a unified task that does all of the updating in sequence (Subversion to dev to stage to production in one click, with a warning that you should only do this if you are sure about it). All of this accommodates the various publishing needs we have. I then wrote ColdFusion that calls the ANT tasks, and an AIR application that calls the ColdFusion. This gives us a one-click publishing tool that we can run from a browser or a desktop application.
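
The ColdFusion-calls-ANT piece can be sketched roughly like this (the paths, build file, and target name are made-up examples, not our actual build):

```cfml
<!--- Hypothetical one-click publish endpoint: shell out to an ANT target
      and show the build log to the person who clicked. --->
<cfexecute name="C:\ant\bin\ant.bat"
           arguments="-f C:\builds\knowledge\build.xml push-staging-to-production"
           variable="buildOutput"
           timeout="300" />

<cfoutput><pre>#buildOutput#</pre></cfoutput>
```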

We replaced one node of the cluster yesterday, fixed a few bugs, then replaced the other node today – all in all, a very smooth upgrade. I’m extremely happy; it’s a lot to accomplish in three months. Mostly, after years of working on very backend systems that never get touched by users, it’s extremely gratifying to work on something I can show off.

Windows 2008 NLB with 2 NICs

I ran into a problem with our standard configuration for web servers, and couldn’t find the real solution documented anywhere, so here it is.

We run our ColdFusion servers on dual-node Windows Network Load Balancing (NLB) clusters running in IGMP multicast mode, on machines with two network cards: the cluster address is on one NIC, and the nodes answer on the other. It’s the configuration we’ve come to like after years of working with NLB and dealing with port flooding and other anomalies.

I’m installing a new production NLB cluster for Knowledge@Wharton. To future-proof it and avoid upgrades down the road, I’m going with ColdFusion 8 64-bit on Windows 2008 64-bit. I ran through the configuration steps I always take setting up an NLB cluster, and it worked… sort of. The cluster address answered if you called it from another host on the subnet the cluster was installed on; if you were off subnet, it didn’t answer. This is suboptimal for a web server.

I worked with our networking team, and they figured out (from a post) that if you added a gateway to the cluster NIC, it would work. This is counter to the way NLB has worked before, and generally not best practice, so we opened a support case with Microsoft. After a few tries, I finally got an engineer who was an expert on NLB in 2008, and he had the exact cause and solution for this problem: by default, IP forwarding is not enabled in Windows 2008. This is the feature of Windows networking that, in the context of NLB, allows responses to requests sent to one NIC to be routed out the other. It’s fixed with one command.

(Make sure you are using a command prompt with administrative privileges.)

netsh interface ipv4 set interface "[name of the NIC]" forwarding=enabled

That’s it.


A few weeks ago, I went searching for a provider of hosted SVN. I tried a few different services, and in the end went with Unfuddle. Over the course of the past few weeks, my team has grown dependent on Unfuddle as part of our workflow, and I have grown to absolutely love the service.

One of the only caveats is that we aren’t used to relying on outside services for this sort of mission-critical part of our environment. The thought of our information living on a server we didn’t control started disaster recovery talk, and one of the important things to make sure we had was a backup of our content from Unfuddle. They were one step ahead of us: they include the ability to request backups, which include svn dumps, as part of the service. They go one step further and actually expose this through their API.

Being the automator that I am, I automated requesting a backup, checking to see if it has finished, and downloading it, so that we now have fresh copies of our data every week. In the course of doing that, I created an Unfuddle.cfc that provides hooks into their API. I decided to share it, and the backup application, on RIAForge for a few reasons:

  • I know a few CF’ers are using Unfuddle.
  • I figured some non-CF’ers might want the functionality enough to give CF a try.

It’s not a complete implementation yet. I’ve done as much as I need for the backup application, and I’m now starting to fill out the rest of the features. If there is a feature you would like from the Unfuddle API, please let me know. In any case, check out Unfuddlecfc.

Another thing I have to say is that the Unfuddle guys did an awesome job writing this API (at least the parts I’ve coded against). There are all of these little details that make it very easy to use. It’s really easy to pass queries and options to the various REST commands, because it accepts those options in a few different ways. They also made sure that responses are very uniform, so it was easy to write rules for processing the results. Between the API and the app itself, the crew at Unfuddle is a real credit to Ruby on Rails.
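
For the curious, the general shape of a cfhttp call against a REST API like theirs looks something like this (the subdomain, resource path, and credential variables below are hypothetical, not copied from their docs):

```cfml
<!--- Illustrative only: request a resource over REST with HTTP basic auth. --->
<cfhttp url="https://mycompany.unfuddle.com/api/v1/projects/#projectID#/backups"
        method="post"
        username="#apiUser#"
        password="#apiPassword#"
        result="apiResult">
   <cfhttpparam type="header" name="Accept" value="application/xml" />
</cfhttp>

<!--- Uniform XML responses make the results easy to process with one set of rules. --->
<cfset response = XmlParse(apiResult.fileContent) />
```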

Squidhead Still Swimming

I’m pleased to announce that Squidhead is getting some more chefs: Nathan Mische and Dave Konopka are joining the effort. I’m excited about both additions. Dave was working with me when I first wrote Squidhead and has already contributed some code. Nathan is, of course, the lead developer on ColdFire.

We’re still working out what’s coming down the pike, but improvements will include:

  • A more organized and useful API to the database introspection piece.
  • A better template system
  • A configuration creator and editor

It’s all towards making Squidhead more useful and extensible. Expect new releases soon.

Knowledge@Wharton High School Coming Soon

I’m pleased to report that the Knowledge@Wharton team has launched a pre-release site for the upcoming Knowledge@Wharton High School. If you are, or know, any high school students, send them along. We’re launching in February 2009 and, as part of that, are holding an essay contest with prizes. I don’t think we’ve spelled out what those prizes are yet, but from the discussions I’ve heard, they’re pretty awesome.

It’s powered by ColdFusion 8 and Flash video. I worked on the backend details: hosting, network configuration, and database CRUD. My co-worker Sanjay did the design and UI, and my boss Dave did the Flash video setup. The time it took us to go from mockup to finished product was very short – much quicker than anything I had done before on a non-personal project. I know it isn’t a huge site, but after years of working on nothing but backend systems that never get seen by the public, it was extremely satisfying to work on something that wasn’t behind a corporate login.

A couple of other things make me happy about this. Squidhead powered the backend development, which made dealing with last-minute schema changes a snap. The site also uses the core application framework that I’ve been developing for Knowledge. And it was the first project we did with the new one-click build process, which uses SVN commit hooks, ANT, and ColdFusion calling ANT to allow for:

  • Automatic publishing of SVN checked-in content to a shared development server
  • 1 Click publishing of checked in content to a shared development server (In case automatic is too slow)
  • 1 Click publishing of shared development space to staging
  • 1 Click publishing of staging to production
  • 1 Click publishing of SVN checked-in content to development to staging to production

This new model allowed for a thoughtful develop-and-review process during development, and was flexible enough to allow for rapid content updates when we were rolling out to production.

All in all, it was awesome to use all of the stuff I’ve learned over the past two years at cf.Objective to do my job.

Override URLSessionFormat

I received an interesting challenge today from my boss.

We recently switched from requiring people to be logged in to read our articles to being open, only requiring logins to comment. Because of the previous restriction, and because a large amount of our audience is corporate users with cookies disabled, we have URLSessionFormat() wrapping all of our internal links, spread out over a few template pages across a few of our sites. Now that we don’t have to be as careful about checking logins, it would be better if our URLs were cleaner (i.e., without the session information appended). My boss wanted to avoid having to edit all of those files, so the challenge to me was to figure out how to override URLSessionFormat() and make sure our URLs were clean.

After fiddling with getPageContext() to no avail, I settled on this solution:

  • Grab the page content in Application.cfc:onRequest()
  • Replace any reference to the jsessionid in the page.
  • Output the page as intended


<cffunction name="onRequest" returntype="void" output="true">
   <cfargument name="targetPage" type="string" required="true" />
   <cfset var pageContent = "" />
   <cfset var token = ";jsessionid=#session.sessionid#" />
   <cfsavecontent variable="pageContent">
      <cfinclude template="#arguments.targetPage#" />
   </cfsavecontent>
   <cfset pageContent = Replace(pageContent, token, "", "ALL") />
   <cfoutput>#pageContent#</cfoutput>
</cffunction>

It’s not bad: it doesn’t add a tremendous amount of overhead to every page, and it would have worked if this had been imperative to do. We decided, however, that string processing on every single request was probably not preferable to just finding and replacing all of the URLSessionFormat() references.

Confused About the Economy?

The current economic crisis is catching many people flat-footed and confused:

  • Why did financial entities that survived the Great Depression go belly up this week?
  • What do the two campaigns really have to say about the economy?
  • What’s going to happen next?

If only there were some sort of site that took that sort of economic information from academic and industry experts and distilled it into the answers you need… Wait a minute, I just happen to work for such a group…

This is my roundabout way of telling you all that Knowledge@Wharton (my current employer) has a special section this week on the financial crisis, and it’s powered by ColdFusion and Flash video.