
Thought leadership from SAI to accelerate your performance

Systems Alliance Blog

Opinion, advice and commentary on IT and business issues from SAI

High-profile media coverage of recent ransomware attacks has brought substantial attention to cyber security issues.  The potential for a serious incident to undermine the viability of an organization feels higher than ever to many business leaders following the news.  If high-profile organizations with huge IT budgets, including Sony Pictures and the UK’s National Health Service, can’t deal with ransomware effectively, how can smaller teams cope?  C-level executives and board members are now faced with an unsettling question – “Could we be next?”

Limit Malware Risks

When discussing the potential for a cyber security incident, leaders without an IT background may feel ill equipped to assess their overall risk.  Taking the word of technical staff isn’t necessarily going to assuage their fears.  IT professionals’ skillsets do not necessarily include the ability to communicate effectively with senior leadership.  Complex technical architecture, arcane industry jargon, defensiveness over turf, and confusion created by an ever-changing security environment can all contribute to miscommunications.  This does not absolve leaders of the responsibility to understand and mitigate risks in IT.  So, what indicators should leadership teams use to assess the health of their IT department and their readiness to deal with an incident?  Here are three suggestions on where to focus additional attention:


Patching

Patching of software should be a routine item on the IT Operations calendar. It is one of the most critical steps you can take to avoid an incident.  The impact of the WannaCry malware would have been negligible had users been working on fully patched and fully supported systems.  Clearly, patching isn’t being done effectively in many organizations. So why doesn’t patching always occur?

First, the patch may break some other critical component.  If your organization is running software that is incompatible with the patch, it may be impossible to install it without losing a critical application.  This is also why most enterprise IT shops do not use “automatic updates” that deploy patches as soon as they are released.  Patches need to be tested and understood before they’re deployed or the consequences could be just as bad as malware.

Second, there may be contractual obligations for hardware and software provided by a third-party vendor that prevent your team from patching the systems.  These systems and their interaction with the rest of your network need to be carefully studied and well understood.  High-profile organizations can expect that they, not the third-party vendor, will be the ones to take the reputation hit.

Third, you may not have any maintenance windows available.  Patching usually requires IT to take systems offline for an extended period.  In some industries with a 24x7 workplace, this is difficult to get approved, especially if IT cannot effectively communicate just how big the risks of not patching are.  In other industries, there may be seasonal rules on when systems can be modified that prevent patching.  Retailers are very averse to making any IT changes during Q4. Any restriction that prevents patching should be carefully reviewed and understood by the leadership team.
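The testing concern above can be made concrete with a staged rollout: a patch is promoted through deployment “rings” – a lab group, then a pilot group, then production – and the rollout halts if an earlier ring reports failures. The Python sketch below is illustrative only; the ring names and patch ID are invented and don’t refer to any real tooling.

```python
from dataclasses import dataclass, field

# Hypothetical deployment rings, promoted in order.
RINGS = ["lab", "pilot", "production"]

@dataclass
class Rollout:
    patch_id: str
    failures: dict = field(default_factory=dict)   # ring -> reported failure count
    deployed_to: list = field(default_factory=list)

    def promote(self) -> str:
        """Deploy to the next ring, but only if the previous ring was clean."""
        for ring in RINGS:
            if ring in self.deployed_to:
                continue
            prior = self.deployed_to[-1] if self.deployed_to else None
            if prior and self.failures.get(prior, 0) > 0:
                return f"halted: failures reported in {prior}"
            self.deployed_to.append(ring)
            return f"deployed to {ring}"
        return "rollout complete"

r = Rollout("PATCH-2017-001")      # invented patch identifier
print(r.promote())                 # deployed to lab
r.failures["lab"] = 2              # lab testing surfaces an incompatibility
print(r.promote())                 # halted: failures reported in lab
```

The point of the structure is that production is unreachable until earlier rings have soaked cleanly – the same discipline the paragraph above describes for enterprise patch testing.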

Policies, Procedures, and Documentation

Having policies and procedures in place may strike some as mundane, but it’s a good indicator of the overall health of an IT department.  Many IT organizations have some challenges when it comes to keeping their documentation fully updated.  If, however, there’s almost no documentation, inconsistent or informal policies, and no internal procedures, that should be a major red flag to leadership.

Documentation of your networks, systems, and integration points is a critical tool for maintaining your IT investments.  It is also a critical resource during an incident, when you need to understand and isolate the damage.  Without effective documentation, the knowledge trapped in the IT team’s heads will be difficult to share and could potentially be lost if a key team member is unavailable.  You would not want to purchase a building without any documentation of its systems, and you should feel equally anxious if your organization relies on IT systems with no documentation.

Policies and procedures play a different role but are equally critical.  End user policies and procedures govern how systems can be utilized, set user expectations for service, and help inform users of their shared responsibilities for reducing risk.  In some cases, policies may exist, but a deeper look would reveal that they aren’t being followed or enforced.  Security policies are the most obvious place to look, but the processes for provisioning and de-provisioning accounts are often more telling.  Lack of consistency in this area not only creates extra work and confusion but can also create unintended risks. Without robust controls around how accounts are built and delivered, you may have users getting inappropriate levels of access.  If there aren’t regular checks to make sure accounts for users no longer at the organization are decommissioned, you may have zombie accounts that become an easy vector for malicious activity.  Imagine the potential damage if an employee, terminated for cause, retained access to your systems after departing the organization.
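The de-provisioning check can be as simple as a periodic reconciliation: compare the accounts that can still log in against the current HR roster and flag whatever is left over. The account names and sets below are invented for illustration.

```python
# Hypothetical reconciliation: accounts that can authenticate, minus current
# employees, minus known service accounts, equals "zombie" accounts.
active_accounts = {"asmith", "bjones", "cdoe", "svc_backup"}
hr_roster = {"asmith", "bjones"}       # people currently employed
service_accounts = {"svc_backup"}      # known non-human accounts, reviewed separately

zombies = active_accounts - hr_roster - service_accounts
print(sorted(zombies))                 # ['cdoe'] -- should have been decommissioned
```

Running a check like this on a schedule, and investigating every name it surfaces, closes the gap between HR offboarding and account decommissioning.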


Backups

Backups aren’t always considered when thinking about cyber security, but when dealing with ransomware, they may be the best tool available.  After all, if your files are locked out, the easiest approach may be to simply wipe the affected drives and restore from the last good backup.  This raises the question – how good are our backups?

When it comes to backups, the most important thing to understand is what is being backed up and how often the backups occur.  Often there will be different backup schemes for different users, departments, systems, or applications.  Understanding the nuances of these backups and where their limitations lie is important.  Hard choices must be made here: backing up “everything” does not align with budgetary reality for most organizations, and a system that could do so would be highly complex.

The second piece to understand is restoration of data.  Restoration comes down to two components: Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO), often found as part of the organization’s disaster recovery plan.  RPO specifies the point in time a restore should bring you back to – e.g., with a daily backup at midnight, you know you can always recover to the previous midnight, and anything created since may be lost.  RTO is focused on how long the restore takes once a decision is made to restore from backup.  In most cases this is not an instantaneous process, so understanding the amount of additional downtime is important.
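A quick worked example using the midnight-backup scenario, with illustrative numbers (a 24-hour RPO and an assumed 4-hour RTO):

```python
from datetime import datetime, timedelta

# Illustrative objectives, not recommendations.
rpo = timedelta(hours=24)    # maximum acceptable data loss
rto = timedelta(hours=4)     # maximum acceptable time to restore service

incident = datetime(2017, 5, 12, 15, 30)     # ransomware hits mid-afternoon
last_backup = datetime(2017, 5, 12, 0, 0)    # most recent midnight backup

data_loss = incident - last_backup           # work created since the last backup
back_online = incident + rto                 # earliest full-service estimate

print(f"Data at risk: {data_loss}")                 # 15:30:00 of changes
print(f"Within RPO: {data_loss <= rpo}")            # True
print(f"Service restored by: {back_online:%H:%M}")  # 19:30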

One other item that usually gets overlooked with backups is a testing plan.  Backups should be routinely tested to ensure that the contents line up with what is expected and that they can be fully restored within the RTO.  You want to have confidence in your backup technology and the only real way to deliver that confidence is through testing.
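A minimal restore test can be automated along these lines: copy a file to a “backup,” “restore” it, and verify the restored copy is byte-identical using a cryptographic digest. This Python sketch stands in temporary files for real backup infrastructure.

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path: str) -> str:
    """Digest a file's contents so backup and restore can be compared exactly."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "ledger.csv")
with open(source, "w") as f:
    f.write("date,amount\n2017-05-01,1200\n")

# Stand-ins for the backup and restore steps of a real system.
backup = shutil.copy(source, os.path.join(workdir, "ledger.csv.bak"))
restored = shutil.copy(backup, os.path.join(workdir, "ledger_restored.csv"))

assert sha256(source) == sha256(restored)   # contents line up with expectations
print("restore test passed")
shutil.rmtree(workdir)
```

A production version of this test would also time the restore against the RTO and run on a schedule, so confidence in the backups is earned continuously rather than assumed.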


Proactive questions from leaders can highlight gaps that may otherwise have been overlooked.  While these discussions may initially be uncomfortable, they may also reveal governance issues with how IT decisions are being made. Decisions made at the IT level about what risk to accept may be very different from what the rest of the business can tolerate.  Inappropriate decisions in either direction can be damaging.  If risk tolerance is too high, the potential for an incident may increase.  If risk tolerance is too low, the expense to operate IT may be unsustainable.  Looking at patching, documentation, and backups is an easy way to start conversations and assess whether there are major gaps in your IT department.

Looking for a more in depth discussion or an outside assessment? Our IT Strategy and Operations Practice focuses on the intersection of people, processes, and technology.  We can provide an impartial outside look at IT and the ways in which it can better support your business.  Our impactful work at organizations large and small often starts with a simple conversation.  Reach out and let us know what you’re concerned about.

Nonprofits around the region are scrambling to address budgetary gaps caused by changes in labor laws. As leaders search for solutions, overlooked opportunities may exist to cut operating costs and grow revenues through smarter application of information technology.

On July 1st, the minimum wage in Maryland increased by nearly 6%, while in DC it jumped by almost 10%.  On June 27th, the District took things a step further as the Mayor signed a bill to raise the city’s minimum wage to $15 an hour by 2020.  Similar legislation is expected to pass soon in the City of Baltimore.  These changes are being rapidly followed by another expensive policy change: new FLSA overtime rules will go into effect this December, adding to the already heavy pressure on regional nonprofits’ budgets.

Unless nonprofit leaders find innovative ways to cover these substantial payroll cost increases, many will be forced to make tough decisions in the next few months. A recent article in the Baltimore Business Journal highlighted the potentially devastating effects that the wage increases could bring, with some fearing that organizations may be “forced to cut services, lay off workers or even shift locations”. 

This is obviously a nightmare scenario for many organizations and unless we are prepared to see diminished roles for important nonprofits around the region, action must be taken now to ensure these institutions can continue to serve the community.

One of the ways proactive leaders can get ahead of this coming fiscal crunch is by ensuring their organizations are running at peak efficiency.  For many that means a much closer look at their Information Technology portfolios including capabilities, budgets, and governance.

IT Capabilities

Information technology has grown tremendously powerful in the last few decades.  Staff are walking around today with more computing power in their pockets than the systems that guided astronauts to the moon.  Just because IT is powerful though, doesn’t mean it is being optimally deployed in an organization.

Nonprofits have traditionally lagged behind other industries in adopting new technologies.  Many hold on to systems long past what most corporate organizations would consider the useful service life.  This may have saved money by deferring replacement costs, but as these systems age, they bring other problems to light.  Support costs often increase over time as it becomes more and more difficult to find qualified staff to maintain systems and applications.  In addition, a lack of automation, an inability to integrate systems, and the emergence of inefficient processes that have grown up around out-of-date technology are all a drag on the efficiency of today’s nonprofits.

Beyond the hardware, software, and services being deployed, many organizations aren’t able to maximize their existing IT investments due to gaps in their users’ knowledge.  Targeted training focused on process improvement, better and more approachable documentation, and an ongoing effort to grow knowledge should be part of any IT planning initiative.

IT Budgeting

When it comes to budgeting, IT is often seen as a cost center, meaning its budget should be reduced during lean times. This thinking is shortsighted and outdated, and “across the board” cuts often misfire.  After all, there are numerous examples where an increase in IT budgets drove substantial cost reductions elsewhere in the business.  If deploying new IT capabilities can deliver efficiencies elsewhere, how does cutting the IT budget make sense?

Similarly, attempting to align IT budgets with benchmarks from across the industry often delivers less-than-stellar results.  These benchmarks reflect nothing of the organization’s structure, scale, capabilities, or mission.  Budgeting based solely on them is therefore meaningless.

IT Governance

One of the least understood components of information technology is IT Governance.  Put simply, this is the method by which decisions about IT are made and executed.  Decision rights for IT go beyond the IT department or the CIO, and involve a broad base of stakeholders. There are two very common governance structures that both have substantial drawbacks for organizations undergoing change. 

The first is a decentralized, ad-hoc approach to IT. This is a weak form of governance where decisions are often made by individual users, managers, or departments.  A lack of standardization and planning has predictable results: systems and applications are often incompatible with one another, and the costs to maintain the IT infrastructure are very high.

The second is more of a “dictatorship” model where a strong IT department dictates standards, deploys systems, and defines the future plans for the organization.  The gap here is that while the trains may run on time, they don’t necessarily go where users need them to.  The end users are often left wanting (sometimes they revolt) and IT can end up misaligned with the rest of the organization.

A better path is to strike a balance between these two approaches.  Certain aspects of IT should be managed by the IT department, but with input from users, leadership, and even outside advisors.  Other decisions should be left to others, with IT in a supporting role to enable their vision. Creating an effective governance structure allows organizations to maximize the utility of their IT investments and have control over its direction.


As progressive reforms around compensation continue to sweep through the region, it is time for nonprofit leaders to prepare their organizations to meet the upcoming fiscal challenges.  IT planning and strategy should be a key part of that conversation.  If you’re ready to get started before things get rough, here are a few options to consider:

  1. Move Infrastructure to the Cloud – This shifts your fixed costs to variable costs which can adjust to reflect the size and scope of your organization.  Reduced complexity of in house infrastructure also means you can potentially reduce your IT staffing needs. Significant discounts given to nonprofits by the major cloud players (Microsoft, Google, & Amazon) make this a very affordable proposition if you have the right team on your side to plan and manage the transition.
  2. Consider New Applications to Reduce Costs – IT is ubiquitous throughout your organization, but are you using it to cut back on costly administrative tasks?  Reducing overhead around policies and procedures can free up senior staff to focus on the mission instead of shuffling papers around.  Streamlining training means you can bring staff up to speed quickly and reduce your onboarding costs.  This is going to be particularly important for high-turnover roles that will be increasingly expensive to fill.  Having a solution that validates employee compliance and acknowledgement of policies can lower the risk of legal action.  All of these can be addressed with one solution: Acadia Performance Platform
  3. Revamp Your Web Presence – Does your website drive donors to you or is it a source of frustration?  Are visitors able to understand your mission and help support your goals?  Can your staff update and maintain it without having to jump through hoops?  If your website doesn’t match the professionalism of your organization, redesigning it can help bring you more success.

Mark Stirling is the Director of SAI’s IT Strategy and Operations practice and has worked closely with nonprofit clients including the Maryland Zoo in Baltimore. You can find more of his posts and other insights from SAI on the Systems Alliance Blog.

Last week the Department of Health and Human Services announced a $218,400 settlement with St. Elizabeth’s Medical Center in Brighton, MA relating to a HIPAA compliance violation. 

This enormous fine wasn’t the result of employees deliberately leaking information.  It didn’t come as a result of a major data breach caused by criminal hackers.  It came about because hospital administrators didn’t have adequate controls in place around their IT.

From the Boston Globe:

“The settlement… comes after federal regulators investigated a 2012 complaint that employees at St. Elizabeth’s used an Internet-based document sharing program to store health information of at least 498 patients.”

Employees who likely meant well started putting sensitive data into the cloud.  That’s a major shadow IT headache for any organization.  For those businesses that are subject to HIPAA or other compliance requirements, it’s also a very expensive headache.

Back to the Globe:

“Organizations must pay particular attention to HIPAA’s requirements when using Internet-based document sharing applications,” Jocelyn Samuels, director of the HHS’s Office for Civil Rights, said in a statement. “In order to reduce potential risks and vulnerabilities, all workforce members must follow all policies and procedures, and entities must ensure that incidents are reported and mitigated in a timely manner.”

Think this can’t happen to your organization? Wrong.  According to the AMA, even if you’re in the dark about the rules, you can be fined up to $50,000.  That’s a lot of money for an honest mistake.

If you’re handling any kind of sensitive patient data on your network, now is the time to take notice. Here’s where you should be focusing your efforts:

Training, Training, and More Training: Compliance issues are a people problem, not a technology problem. Having organization-wide understanding of compliance obligations is non-negotiable.  Eradicating shadow IT and making sure that all of your employees understand why they can’t use the latest fad cloud application without permission is vital.  Stop letting users make mistakes out of ignorance.

Policies, procedures, and the tools to share them all matter.  Doctors may take an oath to do no harm, but if they or other staffers don’t know the rules, how could they know if they’re hurting patients through noncompliance?




User Proofing Wherever Possible: Having active control around where sensitive data is stored and how it is transmitted is crucial.  That means you need a technical solution in place to enforce control obligations.  Systems that don’t enforce the standards by default will burn you.  This could be anything from automated filters to watch for particular content in emails, to encryption software that secures data at rest. 
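As an illustration of “systems that enforce the standards by default,” the sketch below shows a toy outbound-content filter that blocks messages matching sensitive-data patterns. The SSN-style and record-number formats are assumptions made for the example, not HIPAA-prescribed patterns, and a real deployment would sit inside the mail gateway rather than application code.

```python
import re

# Hypothetical patterns for data that should never leave the network unencrypted.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like number
    re.compile(r"\bMRN-\d{6}\b"),           # invented medical-record-number format
]

def allowed_to_send(body: str) -> bool:
    """Return False if the message body matches any sensitive-data pattern."""
    return not any(p.search(body) for p in SENSITIVE)

print(allowed_to_send("Lunch at noon?"))                    # True
print(allowed_to_send("Patient MRN-004982 was admitted"))   # False
print(allowed_to_send("SSN on file: 219-09-9999"))          # False
```

The design choice matters more than the patterns: a filter that blocks by default and requires an explicit exception keeps an honest mistake from becoming a reportable breach.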

Robust IT Governance Processes: Is your IT department disconnected from the strategic direction of the business?  How well aligned are IT’s priorities when compared with the end users?  Fixing gaps like these discourages users from trying to implement shadow IT.  If stakeholders are engaged through an IT Steering Committee or other governance structure they have the power to keep IT aligned with their needs.  There’s no reason to go it alone if you’ve got organizational partners who are focused on enabling the business.

Not sure where to get started?  SAI can help.


On Wednesday, the New York Stock Exchange was down for nearly four hours.  As soon as trading was halted, speculation began to fly that the outage was the result of the exchange being hacked. 

Reality turned out to be a little less interesting. NYSE realized that a botched software update was causing major glitches across its trading systems.  Although this was a very high-profile outage, it is commendable that NYSE’s IT staff was able to recognize the problem and roll the change back.  This is a great example of how IT Change Management should be applied.

Not Every Outage Involves Hackers

With all the attention on cyber security, it’s easy to forget that human error and a lack of good IT governance are far more likely to cause an outage than malicious actors are.

Shooting yourself in the foot is a lot more embarrassing than getting hacked – especially since it can be avoided.

According to the Visible Ops Handbook from the IT Process Institute, "80% of unplanned outages are due to ill-planned changes made by administrators ("operations staff") or developers."  ITPI dives further into these self-inflicted & unplanned outages noting that the majority of the time to restore services is spent figuring out exactly what changed because of a lack of effective Change Management. 

Change Management Isn’t a Bad Thing

Many IT professionals have a very negative view of Change Management and ITSM frameworks like ITIL.  They see them as administrative and bureaucratic burdens that prevent “real work” from being done. 

The true believers who feel you have to implement every piece of the gospel according to ITIL aren’t helping the cause either.  It is unrealistic to go from an undisciplined environment to having every ITIL process fully realized overnight.

Always remember that the Change Management process is there to reduce risk and ensure changes are well thought out. It can be as simple as making everyone agree to write down and discuss their changes and preventing unauthorized changes.
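A change process really can start that small. The Python sketch below models a minimal change record: a change is ready to execute only once it has been written down, given a rollback plan, and approved. The field names and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    summary: str          # what is changing, in plain language
    systems: list         # which systems are affected
    rollback_plan: str    # how to undo the change if it goes wrong
    approved_by: str = "" # empty until someone signs off

    def ready(self) -> bool:
        """A change may proceed only if it is documented and approved."""
        return bool(self.summary and self.rollback_plan and self.approved_by)

cr = ChangeRequest(
    summary="Deploy trading-gateway software update",
    systems=["gateway-01", "gateway-02"],
    rollback_plan="Reinstall previous package version from the local repository",
)
print(cr.ready())          # False -- no approval recorded yet
cr.approved_by = "Change Advisory Board, 2015-07-07"
print(cr.ready())          # True
```

Even this trivial record answers the question ITPI found consumes most of the recovery time in an outage: what changed, on which systems, and how do we put it back?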

IT “Cowboys” Are Symptoms of a Bigger Problem

Small IT shops without mature IT processes often have one key staffer that keeps all the lights on. They eschew documentation and fix things based on their gut feelings. They’ve always got a magic bullet ready to restore services when the worst case scenario happens.

“Cowboys” in IT have had a good run but it is past time to send them packing.  Not only do they often cause the very outages they’re fixing through human error, they tend to keep knowledge to themselves which prevents new staff from learning your systems and grinds troubleshooting to a halt when they’re unavailable.

It is an unacceptable risk to let critical production systems be run by cowboys who make changes outside of the Change Management process.  The presence of cowboys is a symptom of poor IT governance where the organization is operating without a plan.

Write it Down!

Documentation is one area where many IT shops struggle.  They don’t write down policies and procedures.  They don’t keep their configuration information readily available and up to date.  They find themselves flailing about when an outage happens because they don’t have any reference materials handy.

Documentation

It’s considered a profanity in every IT department. Yet every technician in every IT shop will agree that documenting IT processes and procedures is essential to managing an effective IT organization.

Process documentation allows IT staff to be more effective by:

  • Increasing the consistency with which you execute repeatable processes

  • Allowing subject matter experts to share operational knowledge with general IT staff

  • Enabling more staff members to complete service changes, thereby increasing operational efficiency

  • Lowering the barrier of entry for folks entering new roles within the organization

  • Mitigating the risks associated with IT service changes
