On March 11th, 2011, a magnitude 9.0 earthquake rocked Japan, shifting the Earth on its axis by up to 10 inches. Immediately afterwards, a devastating tsunami destroyed large portions of Japan’s coastline and caused damage that triggered a meltdown at the Fukushima Daiichi nuclear power plant.

At the time, I was an Information Professional Officer in the US Navy, responsible for network communications for an entire Carrier Air Wing and stationed roughly 160 miles from Fukushima. Suddenly, the binder with our disaster recovery (DR) and business continuity (BC) plans became very important. While our infrastructure was largely intact, we had to deal with limited connectivity, rolling blackouts, and radioactive fallout, among other challenges.

The Great Wave

I’d already been through a couple of hurricanes, numerous smaller earthquakes, and multiple deployments onboard an aircraft carrier. We got regular practice with our DR plan as we dismantled, moved, and restored our network assets while deploying to and from the ship. Server crashes, faulty backups, and broken fiber were considered minor, routine issues. We trained to solve our problems in-house, since we were often in the middle of the Pacific. If any team was ready to work through the DR/BC plan, it was mine.

Unfortunately, our written plan wasn’t nearly as good as it should have been. Three years later, I understand just how much better it could have been.

Here’s a quick rundown of some of the lessons I learned:

  • Know Your Plan’s Weaknesses: We assumed we would immediately deploy to the ship in an emergency. Unfortunately, its network infrastructure was in the middle of an upgrade, which left us in our offices with no emergency generator. It was time to beg, borrow, and steal. When the rolling blackouts hit, we scrounged up a small generator with just enough juice to power one secure phone and two laptops for an organization of 1,600 people. Overlooking that assumption caused a lot of pain that buying a generator years earlier would have avoided.
  • Priorities Change: Remember, your plan is a guide, not a suicide pact. Consider how priorities might shift depending on the type of disaster, and identify a key decision maker and a backup in advance so decisions can be made on the fly. We shifted our focus away from our strike fighter squadrons to our helicopters, which were critical search and rescue assets. Our administrative staff also found themselves at the front of the line as they facilitated the evacuation of family members. The real-world requirements did not match the service priorities laid out in the DR or BC plan, which assumed we were going to war.
  • Training Matters: My technicians were competent well beyond their primary responsibilities. Their internal cross-training paid off: they were able to assist each other and better understand how to support our users. Invest in your staff’s capabilities now, and avoid single points of failure in knowledge and skills. Your plan won’t work if no one can execute it, and depending on the scale of the event, you should be prepared to operate with a skeleton staff.

You may never see a tsunami coming your way, but planning for the worst prepares you to deal with smaller issues. I’ve seen broken pipes flood data centers, watched landscapers cut through fiber lines, and witnessed a car accident knock out power for an entire city block for more than a day. How prepared are you to respond to those kinds of incidents? Maybe it’s time to dust off your DR/BC binder.

Not sure where to get started? Want to benefit from our experience? Drop me an email or give us a call if you’d like to discuss.

Photo credit: "Great Wave off Kanagawa2" by Katsushika Hokusai, Licensed under Public domain via Wikimedia Commons