Regardless of one’s political leanings or opinion of the Affordable Care Act (ACA), better known as Obamacare, there is widespread consensus that the roll-out of HealthCare.gov has been a mess. HealthCare.gov is the primary means for Americans to enroll in healthcare programs under the auspices of the ACA. Critics as well as supporters of the ACA, including President Obama, have repeatedly described the website as a “disaster,” a “debacle” and “unacceptable.”

My objective here is not to throw additional barbs at HealthCare.gov or to provide political commentary. Rather, my focus is on the lessons we, as web designers and developers, can learn from this experience.

First, a recap of the key issues plaguing HealthCare.gov at launch:

  • Functionality – users initially experienced issues with basic website features such as creating accounts.
  • Performance and scalability – the site buckled under the load of user traffic received upon launch, leading to slow load times, error messages and a range of failures.
  • Availability – components of HealthCare.gov have crashed multiple times since launch, bringing portions of the site, or the entire site, down for hours at a time. The site has also been taken offline deliberately on several occasions to fix issues.
  • Usability – concerns have been raised that completing common tasks is too difficult or complicated. For example, users need to jump through more hoops than necessary to complete a fundamental task – comparing the costs of various health plan options.

Lessons Learned

What lessons can we take away from the issues encountered here? Sadly, nothing new or earth-shattering; these are largely the same common issues found in countless other failed or challenged IT projects. Some specific lessons:

Comprehensive testing is really important – HealthCare.gov was a large and complex systems development and integration effort involving multiple vendors building various components. Questions have been raised about the thoroughness and effectiveness of the testing process. Too often in technology projects, testing is an afterthought or insufficiently rigorous. In reality, we should be spending nearly as much time on testing and refinement as on core development. For complex integration projects with multiple teams or vendors involved, that’s even more critical.
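
To make that concrete, here’s a minimal sketch of what automated testing of one such component might look like. Note that `AccountService` is a hypothetical stand-in for a vendor-built account-creation component, not anything from the actual site; the point is that even basic flows like account creation deserve explicit happy-path and failure-path tests.

```python
import unittest


class AccountService:
    """Hypothetical stand-in for one vendor's account-creation component."""

    def __init__(self):
        self._accounts = {}

    def create_account(self, email, password):
        # Reject malformed emails and duplicate registrations.
        if not email or "@" not in email:
            raise ValueError("invalid email")
        if email in self._accounts:
            raise ValueError("account already exists")
        self._accounts[email] = password
        return {"email": email, "status": "created"}


class AccountCreationTests(unittest.TestCase):
    def setUp(self):
        self.svc = AccountService()

    def test_happy_path(self):
        result = self.svc.create_account("user@example.com", "s3cret")
        self.assertEqual(result["status"], "created")

    def test_duplicate_rejected(self):
        self.svc.create_account("user@example.com", "s3cret")
        with self.assertRaises(ValueError):
            self.svc.create_account("user@example.com", "other")

    def test_malformed_email_rejected(self):
        with self.assertRaises(ValueError):
            self.svc.create_account("not-an-email", "pw")
```

On a multi-vendor effort, each team would maintain suites like this for its own component, with a separate layer of integration tests exercising the seams between components.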

Better late than broken – most of my professional career has involved managing IT projects, and for a project manager, the last thing you want to do is complete a project late. But even with the best planning and risk management processes in place, things happen, particularly on large and complex projects. You can then find yourself unable to launch on time with an acceptable level of quality or capability. If launching a minimum viable product and enhancing it later is not an option, it’s much better to delay launch than to deploy a product or solution that’s not ready. Yes, your stakeholders will be unhappy. But their reaction will be much worse if a flawed product launches and end users see it as a failure; launching a failed product can cause irreparable damage to your brand.

Unlike HealthCare.gov, your project’s failure will not likely lead to 24/7 media coverage, Congressional hearings or jabs from late-show comedians. But invariably, the reaction to a broken product is much worse than the reaction to a delayed but complete product. Launching late is the lesser of two evils.

Scalability – just because your application works great for 100 simultaneous users doesn’t mean it will work great for 10,000 simultaneous users. One of the glaring issues with HealthCare.gov was that it couldn’t handle the scale of traffic it received. Given that the site would be serving users nationwide with a firm deadline for enrollment, heavy traffic in a relatively short period of time should have been foreseen and tested for appropriately. In addition to testing for functionality, it’s imperative to understand how your application will scale with usage and at what degree of load it will fail.
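
As a rough illustration of that last point, here’s a small load-test harness sketch in Python. The `request_fn` parameter is a placeholder for whatever call exercises your application (for a real site it would make an HTTP request); the harness just simulates many concurrent users and reports success rate and average latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def load_test(request_fn, num_users, requests_per_user=1):
    """Run request_fn concurrently for num_users simulated users.

    request_fn should raise an exception on failure, the way a real
    HTTP call would on a timeout or 5xx response.
    Returns (success_rate, avg_latency_seconds_of_successes).
    """
    results = []  # list.append is thread-safe in CPython

    def one_user(_):
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                request_fn()
                results.append((True, time.perf_counter() - start))
            except Exception:
                results.append((False, time.perf_counter() - start))

    with ThreadPoolExecutor(max_workers=num_users) as pool:
        list(pool.map(one_user, range(num_users)))

    successes = [latency for ok, latency in results if ok]
    success_rate = len(successes) / len(results)
    avg_latency = sum(successes) / max(len(successes), 1)
    return success_rate, avg_latency
```

The useful exercise is to ramp `num_users` up in steps – 100, 1,000, 10,000 – and watch where success rate or latency starts to degrade; that knee in the curve is your real capacity, regardless of what the functional tests say. (Purpose-built tools exist for this; the sketch just shows the idea.)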

Pilots, phased deployments and soft launches – HealthCare.gov launched to a nationwide audience, and when the floodgates opened, it promptly slowed to a crawl and eventually crashed. I’m a firm believer in a phased launch for a major system or site. Even with rigorous testing, you will not be able to identify every possible issue in a complex system until real users start to use and abuse it. Rolling it out to a smaller audience first and setting the expectation that it’s a pilot or beta allows you to get more eyes on it and constructive feedback from real users. Then you can use those learnings to make adjustments before a full deployment.

We see this practice in our industry quite often. When Google launches a new product, it’s typically by invitation only at first. Beyond the marketing motives of this approach, it allows Google to have a smaller number of users beta test the product. If the product has issues, it’s better to annoy a few thousand users in a pilot than a few million users in a full roll-out. Similarly, Facebook deploys changes to a small subset of web servers first to validate that those changes don’t cause issues before pushing them out system-wide.
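
One common way to implement this kind of gradual roll-out is deterministic percentage bucketing: hash each user into a stable bucket and expose the new feature only below a threshold you ramp up over time. The sketch below is a generic illustration of the technique, not how Google or Facebook actually implement it.

```python
import hashlib


def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user is in the pilot cohort.

    Hashing (feature, user_id) gives a stable bucket in [0, 100), so a
    given user stays in or out of the pilot across sessions, and raising
    `percent` only ever adds users rather than reshuffling everyone.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent
```

You start with `percent` at 1 or 5, watch error rates and feedback from that cohort, and ramp toward 100 only as confidence grows – the software equivalent of opening enrollment to one region before the whole country.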

Maybe I’m being politically naïve, but it seems that opening up HealthCare.gov to a smaller audience first (perhaps the population of a single state as a pilot) before a nationwide roll-out would have been a smarter strategy.

Usability – last but not least, the focus on usability sometimes gets lost in complex systems development efforts, but it’s no less important. If your customers or users can’t effectively and easily accomplish what they want or need to accomplish using your product, it’s a failure. The key is to identify and document your audiences/users and the key tasks they need to accomplish during the requirements and design phases. Then, after development, include usability testing as part of the overall testing effort to validate that the product or solution actually meets those objectives. If it doesn’t, those usability issues need to be addressed before launch.

Final Thoughts

As the saying goes, those who don’t study history are destined to repeat it. The issues described above seem to be repeated quite regularly in IT projects. To stop that vicious cycle, we need to understand these risks, plan appropriately and devote sufficient resources to adequately testing what we build, both to ensure that it works and to validate that it meets stakeholder and customer objectives.

Learn more about SAI’s website and web application development approach.