More Issues with Windows 2008 Clustering

I’ve been having a recurring problem with one node of our Windows 2008 NLB cluster. When joined to the cluster it runs for a while and then blue-screens. When not in the cluster it runs normally. It was extremely frustrating, as the only way to troubleshoot it was to add it to the production cluster and give it some load. I tried an OS re-install and a hardware swap. Still no good, but today I think I got it fixed.

The node in question is a Dell PowerEdge M600. It’s a blade that comes with 4 Broadcom NICs. I had been relying on Windows Update to keep the drivers current, but when I actually looked at them in Device Manager, I saw that they were two years old. A quick look at Dell’s support page showed new drivers posted in October.

I installed them, and now the machine is back in the cluster and hasn’t had a blue screen yet.

I feel really dumb about missing this. I’ve been out of the hardware game for a while, but this is pretty low-level and dumb. Oh well, hopefully I won’t forget again.

Max 2008 Day 2 Keynote

So, today was the performance-art keynote. Tim Buntel and Ben Forta opened the keynote with a whole James Bond kind of theme.

Ben showed off what I have to assume was AIR manipulating an X10 network of home devices.

They showed off some cool Flash enhancements. Basically, they added easy animation tools to objects: things like skeletons and easy pathing. You could do this before by dropping down to ActionScript, but now the tool does it for you.

With Photoshop they showed off the new content-aware scaling, where rescaling an image scrunches the background without touching the foreground. That, and painting on 3D images.

Next up was Flash Catalyst, a.k.a. Thermo. It was nice to see a working copy of it. They took a Photoshop image and added behaviors to it in Flash Catalyst. They tweaked the image in Illustrator. They added 3D effects to it. It was really impressive. Basically, it seems to me like Flash Catalyst is the first robust application behavior designer.

They then showed off Flash running C code. It’s a feature called Alchemy. They started with a Hello World example, then showed off various absurdly cool examples: the OpenSSL library, Raw image transformation, rendering rasterized PDF within the Flash player. They showed Quake as an Air application, and a Nintendo emulator in Flash 10.

Finally, they moved on to Flex Builder with a ColdFusion IDE inside. That’s right: there will be a ColdFusion IDE. It looks like it’s going to address multiple things that we need: automatic creation of object services based on Hibernate, CFC introspection, and variable awareness. I can’t wait to see more of this publicly.

I zoned out a bit, but came back to see a network monitor get added to Flex Builder.

Then Ben dropped the bomb that they are working on Flex for Visual Studio developers.

They moved on to Dreamweaver CS4. It now works with most of the large Ajax frameworks, automatically.

They moved on to show how Flash has teamed up with Google to make searchable Flash content.

The next product demo was the new version of Flash Media Server. FMS can now dynamically change the bit rate without the distortions that happen when network conditions change. Additionally, they’ve made it really easy to push multi-bit-rate video into apps you’re building with Dreamweaver. They’ve added the ability to add DVR capabilities to live streaming Flash video. Finally, in the Flash player they’ve added peer-to-peer video capabilities.

They then handed the show over to Ted Patrick. He did the official rollout of Adobe Groups. I think a major plus here is making the Adobe Groups ecosystem accessible to people outside the community. It will also make intra-group communication easier.

Max 2008 Day 1 Keynote

It’s Day 1, and I actually made it to the keynote on time. Thanks, multiple alarm clocks!

Some of the major announcements have already been made:

  • Thermo is Flash Catalyst
  • Cocomo is going to beta
  • “Flash Platform” is the new marketing term hammered home.
  • Air 1.5 has been released including Flash 10
  • 64 bit Flash Player for Linux

Shantanu Narayen, the CEO of Adobe, started it off and showed off Adobe’s participation in Project Red.

Next, Kevin Lynch took the stage, and started off talking about three areas where the software industry is changing:

  • Cloud Computing
  • Social Computing
  • Devices and Desktops.

He showed off new features in Flash 10, including audio, text, and 3D features.

The CTO of Disney Interactive Media Group talked about what they are doing with Adobe products.

Major League Baseball announced today that their video will now be delivered by Flash, which means the NFL, NBA, and MLB are all on Flash.

Michael Zimbalist from the New York Times talked about what the NYT is doing with AIR. They’re working on a new version of their News Reader. Coolest part of this presentation: a giant picture of Wallace Shawn looking down at the crowd at Max. But more than that, they’ve really done a great job of creating an AIR version of the NYT that marries the advantages of an actual newspaper with the advantages of an AIR app.

Then in the middle of talking about a California Museum application, Maria Shriver showed up.

Kevin then showed off an Adobe application called “Tour de Flex” that shows off the underlying code to interact with various other cloud services.

Once again our friends from Salesforce.com were talking about their cloud services. I think through sheer cognitive dissonance I’ve become a fan. “I’m listening to salesforce.com speak again, I must like them.” In fairness, they really are on the cutting edge of being an enterprise company that gets SOA.

Nigel Pegg came up to talk about Cocomo. He showed off an application that allows doctors to conduct medical peer reviews using Flex and Cocomo. Cool stuff, but the coolest part is that the free public beta just rolled out.

Kevin announced Adobe Wave, a unified social networking tool. It basically allows you to get email like popups for all of your social networking sites.

Kevin moved on to Devices and Desktops, which meant “mobile.” Evidently Flash will have penetrated 1 billion phones by 2009. As always, none of them are here in the US. But a cool announcement was “Flash 10 for smartphones.” He showed Flash on Symbian, Flash on Windows Mobile, and Flash on Android. He also made it clear that they were working on Flash for the iPhone. I wish we could get BlackBerry added to this list.

They ended with a demo of how they see the phone of the future working. Pretty interesting: lots of interactivity, able to pass data between phones and other devices. Very compelling.

MAX 2008 Day 0

I came into town early to participate in an Education event before the main festivities. I, of course, woke up late and missed the bus to Adobe HQ. I blame Ryan Stewart. However, I got there eventually and got to participate.

It started with the VP of Education, Peter Isaacson, talking about Adobe’s outreach to the Education sector.

Some cool facts that came out early:

  • Job Demand for Flash Knowledge is up 35,000% over the past 3 years
  • Job Demand for Flex Knowledge up 4500%
  • Job Demand for Ajax is up 20,000%

Those aren’t typos. For Education, these numbers mean that Adobe has to get in and engage the academic sector to get people trained in these tools. This event was designed to impress upon us that Adobe knows it, is working on it, and is looking to those in Education to help out.

Next we heard from Steven Kurtz talking about his program at the Rochester Institute of Technology: the New Media Interactive Development Program at RIT. They take design students and teach them ActionScript, among other programming languages. One of the success stories from that program, Colin Doody, talked about his experiences in it. It sounds like an amazing academic program, as it combines both programming and design into one track, which is pretty rare in the academic community. It’s also interesting because the problems these students deal with are exactly the ones Thermo is supposed to address.

Next we heard from Ozge Samanci from Georgia Tech. She talked about the School of Literature, Communication, and Culture, yet another program turning out experts in Adobe tools and technology. The program sounds like a cross between a liberal arts education and a design-and-programming education. There appears to be a pattern to these programs: none of them existed when I was in college, but I would have loved to do them.

Also from Georgia Tech, we heard from Manvesh Vyas, who is doing work with surface computing. It’s beyond me to describe; he actually talked about how surface computing works at a low level. He also went into the “multi-touch” problem: basically, is every finger a mouse, or is your hand the mouse? It seems very difficult. The part I did get was that they built a bridge between his surface computing work and ActionScript, and will be publicly sharing the code soon, which suggests to me that Flash might get into the surface computing space sooner rather than later.

Next up was Mike McKean, a professor of Journalism, who talked about the Reynolds Journalism Institute at the University of Missouri. He challenged students from several disciplines to create AIR apps with the broad focus of having something to do with journalism. The thing that impressed me was that it was a great proof of how AIR can make desktop development accessible to people who would never have created applications before.

After Lunch we received a presentation from SoDA, or the Society of Digital Agencies. They’re trying to advance the design industry “through best practices, education and advocacy.” Basically they are design employers who want Higher Education to pump out graduates that they can hire.

Next was Salesforce.com, which I was totally expecting to be a little dull. I was shocked; the speaker was very engaging. I still don’t get Salesforce.com, but he had my attention. His talk was mostly about cloud computing and his company’s offering in that space.

I’ve heard many people talk about Salesforce offerings before, and like I said above, I still don’t quite get it. But my understanding is that it’s a hosted enterprise CRM with a complete web service API that basically allows you to do anything on the backend system. Like I said, it was interesting, but I’m not sure of the tie-in to Higher Ed.

The final section of the day was a breakout session where educators who are teaching Adobe products talked about their experiences. One compelling thing I heard was Bill Bain’s point that Flash, having both a design and a programming component, can be a Rosetta Stone for getting developers into design and designers into development.

All in all, it was a very cool session, and I’m excited to see what Adobe continues to do to engage the Higher Education industry.

Reminder about ColdFusion Unconference

Hey! There’s a ColdFusion Unconference going on at Max this year.

Oh yeah and by the way, I’m giving two talks at it.

 

Selling Professional Development at a Resistant Shop

Choosing to use tools like Subversion, ANT, and frameworks is the easy part. Getting your co-workers to join in the fun, that’s the hard part. This session will leave you with a selection of tools and techniques to bring your co-workers on board.

Tuesday November 18, 2008 3:00 – 4:00pm at TBA

Formal Code Reviews

Everybody talks about Formal Code Reviews, but there are few resources for figuring out how to actually do one. This session will talk about the issues surrounding code reviews. And, if I’m daring enough, I may just do a live code review, with the audience reviewing a small block of code.

November 19, 2008 9:30 – 10:30am at TBA

 

I hope to see you there.

Knowledge@Wharton Upgrades

Today the Knowledge@Wharton tech team put into the wild something I’ve been working on for some time: a new platform for Knowledge@Wharton and India Knowledge@Wharton. The new platform consists of the following:

  • Windows 2008 Load Balanced Cluster
  • Core Services Code Base and ColdFusion 8
  • Layout
  • Development and Publishing System

Windows 2008 Load Balanced Cluster

We built a two-node cluster using Windows 2008 64-bit Enterprise Edition. One node is a VMware instance, and one node is a blade server. I like this configuration: I only have to worry about a hardware warranty on one node, but I have the backup of a hardware-based node if something goes wrong with our VMware installation. Not that such an event is likely, but I would prefer not to tempt fate.

I’ve mentioned before that we don’t use load balancing so much for load as for availability. By having dual-node clusters for our production environments we buy ourselves zero-downtime patch cycles. We did have a little trouble getting NLB on Windows 2008 working, but we got it fixed after talking to Microsoft support.

The upgrade went really smoothly. I’m used to using CNAMEs to handle this sort of move, but due to SSL considerations knowledge.wharton.upenn.edu has an A record. The easiest way to make the change was to add the new nodes to the existing Windows 2003 cluster, then remove the Windows 2003 nodes. It worked like a charm, and I think it will be my new procedure, as it was shockingly easy.

Core Services Code Base and ColdFusion 8

In looking to upgrade Knowledge and India Knowledge to ColdFusion 8 I had to touch a lot of the code, not so much because there was a problem with it, but because we wanted to take advantage of new features. In the course of doing that I discovered that the main Knowledge site and the India site contained a lot of duplicated code. I was able to centralize it and then add new features to both sites. There are two main features I added to the central code base: cached queries and search-driven folksonomy.

Caching the queries was pretty trivial. I rolled my own instead of using an existing caching framework or native ColdFusion caching, because I wanted an easy-to-flush cache system that didn’t need to be too complex. Because of the highly normalized nature of the database, I couldn’t get a tremendous performance boost through indexing; caching, however, has proven to be the correct solution by a long shot. It makes sense: we have a lot of frequently read, rarely written data here. I’m just surprised at the overall boost to the site we accomplished with one fix.
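For the curious, the shape of that roll-your-own cache is roughly this. It’s a Python sketch rather than the actual CFML, and the class and method names are mine:

```python
import time

class QueryCache:
    """A deliberately simple, easy-to-flush cache for frequently read,
    rarely written query results."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # cache key -> (timestamp, result)

    def get(self, key, fetch):
        """Return the cached result for key; on a miss (or a stale
        entry) call fetch() to run the real query and cache it."""
        entry = self.store.get(key)
        if entry is not None:
            stored_at, result = entry
            if time.time() - stored_at < self.ttl:
                return result
        result = fetch()
        self.store[key] = (time.time(), result)
        return result

    def flush(self, key=None):
        """Flush one key, or the whole cache when no key is given."""
        if key is None:
            self.store.clear()
        else:
            self.store.pop(key, None)
```

The flush() hook is the whole point: when an editor publishes, you flush, and the next read repopulates the cache.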

“Search-driven folksonomy” is a cool idea my boss Dave had back in 2006. It ran for a while, then got deactivated for some reason, and I’ve just re-implemented it. Basically, instead of having people manually tag articles, we use our search referral keywords to tag them automatically: when a keyword drives some critical number of hits to an article, that keyword becomes a tag on that article. We’ve enabled the collection piece for now and will enable tag display once we tweak the model a bit after getting some real data.
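The collection piece boils down to counting referral keywords per article and promoting a keyword once it crosses a threshold. Here is a sketch of the idea in Python; the threshold value and the names are invented, not the real implementation:

```python
from collections import defaultdict

# (article_id, keyword) -> number of search referrals seen so far
hit_counts = defaultdict(int)

TAG_THRESHOLD = 25  # the "critical number" of hits; the real value gets tuned

def record_referral(article_id, keyword):
    """Record one search referral to an article; return the keyword if
    this referral just promoted it to a tag, else None."""
    keyword = keyword.strip().lower()
    hit_counts[(article_id, keyword)] += 1
    if hit_counts[(article_id, keyword)] == TAG_THRESHOLD:
        return keyword  # promote: persist this as a tag on the article
    return None
```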

Layout

I can’t take credit for the look and feel. That was done by Dave and a co-worker, Sanjay. They worked on pushing Knowledge to a more current, centered layout, along with a few other tweaks to accommodate advertising without compromising the editorial content.

The one thing I contributed here was a custom tag that converted an article into an array of paragraph (<p>) tags. The article custom tag was then able to wrap around other custom tags and display them in the flow of the article: at set positions in the array, at the next pre-determined location in the array, or at the end. It made for a very flexible way to showcase link suggestions or article tools within the flow of the article, thereby freeing up space for the aforementioned ads.
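The positioning logic looks something like this. This is a Python sketch of the idea, not the actual custom tag, and the argument shapes are my invention:

```python
def layout_article(paragraphs, widgets):
    """Interleave widgets (ads, link suggestions, article tools) into
    an article's paragraph array.

    paragraphs: the article body split into paragraph strings
    widgets: (position, html) pairs; position is the paragraph index
             the widget appears after, or None for "append at the end"
    """
    trailing = [html for pos, html in widgets if pos is None]
    by_pos = {}
    for pos, html in widgets:
        if pos is not None:
            # clamp positions past the end of a short article
            by_pos.setdefault(min(pos, len(paragraphs) - 1), []).append(html)
    out = []
    for i, paragraph in enumerate(paragraphs):
        out.append(paragraph)
        out.extend(by_pos.get(i, []))
    out.extend(trailing)
    return out
```

The clamping matters: a short article still gets its widgets, just pushed toward the end instead of dropped.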

Development and Publishing System

This was the hardest part of the whole thing to tackle, because I was asking people to change the way they worked. But Dave and Sanjay were open to it, especially since I promised it would make their lives much easier after a little bit of pain.

The old model consisted of doing development on a shared development server with no source control. Changes were manually pushed to production. Occasional copies were made of the code. Communication about changes was ad hoc and not necessarily as frequent as the changes.

The new model pushes development to local installs of ColdFusion. Source control is handled through Subversion hosted on Unfuddle.com. Communication about changes occurs on every update, thanks to Unfuddle’s notification system. The shared development server gets automatically updated from the trunk on every commit via svn commit hooks. Then, to move the code around, I have one-click ANT tasks that handle updating development from Subversion, updating staging from development, and updating production from staging, plus a unified task that does all of the updating in sequence (Subversion to dev to stage to production in one click, with a warning that you should only do this if you are sure about it). All of this accommodates the various publishing needs we have. I then wrote ColdFusion that calls the ANT tasks, and an AIR application that calls the ColdFusion. This gives us a one-click publishing tool that we can run from a browser or a desktop application.
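The chain itself is just sequential tasks with a guard on the all-in-one push. The real work lives in the ANT tasks; this Python sketch only shows the shape, and all the names are mine:

```python
# task chains per target environment; in the real system each task
# name would map to an ANT target
STEPS = {
    "dev": ["update_dev_from_svn"],
    "stage": ["update_stage_from_dev"],
    "production": ["update_prod_from_stage"],
    "all": ["update_dev_from_svn", "update_stage_from_dev",
            "update_prod_from_stage"],
}

def deploy(target, run_task, confirm=None):
    """Run the task chain for a target environment.

    run_task: callable executing one named task (e.g. shelling out to ant)
    confirm:  for the all-in-one push, a callable returning True/False;
              the full chain is refused without an explicit confirmation
    """
    if target == "all" and not (confirm and confirm()):
        return []  # the "are you sure?" warning, enforced in code
    executed = []
    for task in STEPS[target]:
        run_task(task)
        executed.append(task)
    return executed
```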

We replaced one node of the cluster yesterday, fixed a few bugs, then replaced the other node today – all in all, a very smooth upgrade. I’m extremely happy. It’s a lot to accomplish in 3 months. Mostly, after years of working on very backend systems which never get touched by users, it’s extremely gratifying to work on something that I can show off.

Working for the Obama Campaign

I got an opportunity to volunteer for the Obama campaign and donated some custom web application building to the effort. In case you’re turned off by politics, this post isn’t going to be about politics; it’s more about the environment and technical challenges I experienced. Finally, I want to make it clear that I am claiming only a teeny, tiny part in the effort. The campaign was won by a lot of people working a lot harder for a lot longer; they deserve a lot of respect, even if you don’t agree with them.

About three weeks before Election Day a call for volunteers came my way from a co-worker. The Voter Protection division of the Obama campaign in Pennsylvania needed someone who had experience working with databases.

After talking with them for a while, we distilled their problems down. They had about 6000 volunteer lawyers willing to work 9000 polling places to protect voters from various threats (some of it malfeasance, but more often a failure of someone to grasp the full set of election law as it pertains to a particular voter). They were making these assignments by combining information from a central web application with information collected in the field. For in-the-field collection they were using copies of an Excel spreadsheet they made every night. The guy I was working for had to make that spreadsheet every night, and it was obvious that he needed a better way, but he couldn’t just drop what he was doing and knuckle down to build it.

I, having no other responsibilities to the campaign, could.

In the end, from an application standpoint it was pretty basic: two tables, plus one linking table for the many-to-many relationship (multiple volunteers could be at multiple polling locations). I pointed Squidhead at it in fkcrazy (foreign key crazy) mode and it did all the work for me. I had to do some custom tweaking of the app, but I tried to do as little of that as possible because I just had a sense that there would be schema changes. Then over the next few days I responded to various and sundry schema changes (I told me so).
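For the curious, the schema boils down to something like this. The table and column names are my guesses, and I’m using SQLite here just so the example runs; the ward column reflects the search that mattered most in Philadelphia:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volunteer (
    volunteer_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL
);
CREATE TABLE polling_place (
    polling_place_id INTEGER PRIMARY KEY,
    ward             TEXT NOT NULL,
    address          TEXT NOT NULL
);
-- the linking table carrying the many-to-many assignments
CREATE TABLE assignment (
    volunteer_id     INTEGER NOT NULL REFERENCES volunteer(volunteer_id),
    polling_place_id INTEGER NOT NULL REFERENCES polling_place(polling_place_id),
    PRIMARY KEY (volunteer_id, polling_place_id)
);
""")

# a volunteer can cover several polling places, and vice versa;
# searching assignments by ward is a simple three-way join
conn.execute("INSERT INTO volunteer VALUES (1, 'A. Lawyer')")
conn.execute("INSERT INTO polling_place VALUES (1, 'Ward 27', '123 Main St')")
conn.execute("INSERT INTO assignment VALUES (1, 1)")
rows = conn.execute("""
    SELECT v.name
    FROM volunteer v
    JOIN assignment a ON a.volunteer_id = v.volunteer_id
    JOIN polling_place p ON p.polling_place_id = a.polling_place_id
    WHERE p.ward = 'Ward 27'
""").fetchall()
```

With the foreign keys in place, Squidhead’s introspection can generate the lookup and assignment screens from the schema alone.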

It was a great experience, mostly because as a small database driven application with many schema changes and not a lot of custom interface work, it was the perfect use case for Squidhead. It reminded me just how useful code generation can be.

I finished up work on it in time for the last two weeks of the campaign. The local volunteer staff would use my application to make assignments; then, before Election Day, they would upload their work to the central campaign to integrate with its incident tracking system.

Over the next few days, I didn’t think about it much. I checked the usage stats, and got a note from my host, which noticed the spike in traffic (but coolly forgave the overage – YoHost Rocks!). All in all I just watched the final days of election coverage and hoped all was well.

Around 11:30 on Election Night I got a text message inviting me to the local victory party. I figured I should go, because with all of the hype, emotion, and passion of this past election season it seemed like an awesome opportunity. I got there, met up with the guy I worked for, and got introduced around. It went something like this:

Hey, I’d like you to meet Terry Ryan.

Hi, Terry. (Polite, but unexcited)

He’s the guy who ran the Numtopia application

Followed by me getting hugged by a total stranger.

I, uncomfortable, dismissed some of the praise because the app was pretty ugly, only to get some very informed feedback:

  • We had a pretty app from Chicago, yours did what we needed it to do
  • We couldn’t even search the other one by “ward”, which made it useless in Philadelphia
  • Yours was blindingly fast compared to the one from Chicago
  • And didn’t you write it in 3 days?

I have never been so thoroughly thanked and appreciated for my work. It was an awesome feeling. I am so gratified that I could use my skills to do something for the cause.

Now, I saw the “application for voter protection from Chicago,” and I have to say it was very beautiful, with a whole lot of cool features. I don’t want to knock those guys: dealing with thousands of records at the state level and hundreds of thousands at the national level are two different things, and dealing with the demands of one office as opposed to fifty also changes the game. But I will say that in 3 days, one volunteer developer using ColdFusion replaced an application built over weeks by a team of paid developers using PHP. (I don’t know the specifics.)

I took a few lessons from my experience:

The constraint of a drop-dead deadline can be incredibly freeing. I didn’t have time to overthink things trying to come up with elegant solutions. Some of my solutions used query-of-queries in a way I would never recommend. But I needed to get it done, because November 4th was unavoidable.

Compelling user interfaces aren’t always the answer. Squidhead creates usable and accessible but ugly UI components from the get-go; they’re meant to be styled with custom CSS. That they were usable was the users’ only concern. They didn’t care that it wouldn’t win a design award. Their major concerns were speed and predictability.

Squidhead is better than I think it is. I’ve been down on Squidhead for a while, as it doesn’t have the user base that Transfer, Reactor, or Illudium has. Nathan Mische has been telling me that the stuff he wants to add will help get the word out, and I think he’s right. However, it did stuff that I didn’t originally think it could do, or thought would be hard to make it do. It worked flawlessly. I just need to get the word out about how powerful the foreign key introspection is.

Working for the campaign was an awesome opportunity. It was pretty fulfilling compared to other volunteer political work I’ve done. But what’s more, it has pushed me to do a bit more, and to have a little more confidence in the work I can produce. Expect to hear about some new Squidhead features in the next few weeks.

Windows 2008 NLB with 2 NICs

I ran into a problem with our standard configuration for web servers, and couldn’t find the real solution documented anywhere, so here it is.

We run our ColdFusion servers on dual-node Windows Network Load Balancing (NLB) clusters running in IGMP multicast mode, on machines with two network cards. The cluster address is on one NIC and the nodes answer on another. It’s the configuration we’ve come to like after years of working with NLB, port flooding, and other anomalies.

I’m installing a new production NLB cluster for Knowledge@Wharton. To future-proof it and avoid upgrades down the road, I’m going with ColdFusion 8 64-bit on Windows 2008 64-bit. I ran through the configuration steps I always take setting up an NLB cluster, and it worked… sort of. The cluster address answered if you called it from another host on the subnet the cluster was installed on. However, if you were off-subnet, it didn’t answer. This is suboptimal for a web server.

I worked with our networking team, and they figured out (from this post: http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/0afdb0fc-2adf-4864-b164-87e24451f875/ ) that if you added a gateway to the cluster NIC, it would work. This is counter to the way NLB has worked before, and generally not best practice, so we opened a support case with Microsoft. After a few tries, I finally got an engineer who was an expert on NLB in 2008, and he had the exact cause and solution for this problem: by default, IP forwarding is not enabled in Windows 2008. This is the feature of Windows networking that, in the context of NLB, allows responses to requests that arrive on one NIC to be routed out the other. It’s fixed with one specific command line option.

(Make sure you are using a command prompt with administrative privileges.)

netsh interface ipv4 set int "[name of the NIC]" forwarding=enabled

That’s it.

Unfuddlecfc

A few weeks ago, I went searching for a provider for hosted SVN. I tried a few different services, and in the end went with Unfuddle.com. Over the course of the past few weeks, my team has grown dependent on Unfuddle as part of our workflow, and I have grown to absolutely love the service.

One of the only caveats is that we aren’t used to relying on outside services for this sort of mission-critical part of our environment. The thought of our information living on a server we didn’t control started disaster recovery talk, and one of the important things to ensure was a backup of our content from Unfuddle. They were one step ahead of us: the service includes the ability to request backups, which include svn dumps. They go one step further and actually expose this through their API.

Being the automator that I am, I automated requesting a backup, checking to see if it was finished, and downloading it, so that we now have fresh copies of our data every week. In the course of doing that I created an Unfuddle.cfc that provides hooks into their API. I decided to share it, along with the backup application, on RIAForge for a few reasons:

  • I know a few CF’ers are using Unfuddle.
  • I figured some non-CF’ers might want the functionality enough to give CF a try

It’s not a complete implementation yet. I’ve done as much as I needed for the backup application, and am now starting to fill out the rest of the features. If there is a feature you would like from the Unfuddle API, please let me know. In any case, check out Unfuddlecfc.
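The backup application itself is a simple request, poll, download loop. Since I’m not going to reproduce the exact Unfuddle endpoints here, this Python sketch (the real code is CFML) injects the three REST calls as callables:

```python
import time

def run_backup(request_backup, check_status, download,
               poll_seconds=30, max_polls=60):
    """Request a backup, poll until the dump is ready, then download it.

    The three callables stand in for the real Unfuddle REST calls:
      request_backup() -> a backup id
      check_status(backup_id) -> True once the dump is ready
      download(backup_id) -> the backup payload
    """
    backup_id = request_backup()
    for _ in range(max_polls):
        if check_status(backup_id):
            return download(backup_id)
        time.sleep(poll_seconds)
    raise TimeoutError("backup %s never became ready" % backup_id)
```

Wire something like this to a weekly scheduled task and you have fresh off-site copies without thinking about it.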

Another thing I have to say is that the Unfuddle guys did an awesome job writing this API (at least as far as I’ve coded against it). There are all these little details that make it very easy to use. It’s really easy to pass queries and options to the various REST commands, because it accepts those options in a few different ways. They also made sure that responses are very uniform, so it was really easy to write rules for processing the results. Between the API and the app itself, the crew at Unfuddle is a real credit to Ruby on Rails.