What Y2K And 9/11 Could Have Taught Us About Managing The WannaCry Cyber Attack

Over the past week, countless organizations around the world fell victim to a cyberattack involving the WannaCry ransomware. While the EU’s law enforcement agency called the attack “unprecedented,” it was perhaps unique only in its scale. In fact, the attack was neither sophisticated nor innovative; it had many precedents and was definitely preventable.

For starters, according to numerous security analysts, WannaCry exploited a vulnerability in Windows’ SMB file-sharing protocol, packaged with commonly available “Ransomware-as-a-Service” tooling that allowed the attack to support multiple languages simultaneously. To make matters worse, Microsoft had actually released a patch for the vulnerability in all supported versions of the OS back in March. But not everyone installed it, and that complacency led to disaster.
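Whether that March patch is present is something administrators can verify on a host. Below is a minimal Python sketch of such a check; it is illustrative only, and the KB identifiers listed are examples that vary by Windows version, so verify them against Microsoft’s MS17-010 bulletin for your own systems.

```python
import subprocess

# Illustrative check for the March SMB patch (MS17-010). The KB identifiers below
# are examples only -- the exact KB depends on the Windows version, so verify them
# against Microsoft's bulletin before relying on this.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012598"}

def installed_hotfixes():
    """List installed Windows hotfix IDs using the built-in wmic tool."""
    out = subprocess.run(["wmic", "qfe", "get", "HotFixID"],
                         capture_output=True, text=True, check=True).stdout
    return {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    if installed_hotfixes() & MS17_010_KBS:
        print("An MS17-010-related patch appears to be installed.")
    else:
        print("No MS17-010 patch found -- this host may still be vulnerable.")
```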

As of this writing, we have yet to assess the full effect of WannaCry. It is the latest in a series of lessons about risk management and the importance of resilience, and it will not be the last. In my previous column, I described how Japan was unprepared for the 2011 Fukushima meltdown because its leadership lacked a risk-management mindset. That disaster came without warning and was shockingly devastating. But what if disasters came with advance notice? Would that push us to be more vigilant and better prepared?

In 1998, I was asked by the United States Treasury Department to serve on a Year 2000 task force. Its main aim was, of course, to address the looming “Y2K” problem, a software bug arising from the decades-old practice of using two digits to represent years, which meant the year 2000 could be mistaken for 1900. There were fears that the bug would cause widespread chaos, affecting everything from ATMs and airline reservation systems to power plants. We learned some important lessons from this period.
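To make the bug concrete, here is a minimal Python sketch of the kind of date logic that caused the trouble. It is purely illustrative (no actual Treasury or bank system is implied), but it shows how a two-digit “00” turns into 1900 and throws every aging or interest calculation off by a century.

```python
from datetime import date

def parse_record_date_naive(two_digit_year, month, day):
    """Pre-Y2K habit: assume every two-digit year belongs to the 1900s."""
    return date(1900 + two_digit_year, month, day)

# A payment dated January 1, 2000 was stored with "00" in the year column...
due = parse_record_date_naive(0, 1, 1)
print(due)  # 1900-01-01 -- a century in the past
# ...so on December 31, 1999 the system thinks it is ~36,500 days overdue.
print((date(1999, 12, 31) - due).days)
```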

The crisis that was averted

The task force had multiple functions, including updating computer code so that “00” in the year column was read as 2000 rather than 1900. What was much more meaningful, and a great learning experience for me, was preparing for potential secondary and tertiary events. Specifically, we had to think outside the box (in this case, servers) and assume that even if our target institutions were Y2K-ready, their partners, suppliers and other parts of the network (both physical and digital) might not be. For example, even if our systems kept running, the power company’s might not, leaving us in a blackout. How could we keep operating in that situation? As an industry, we spent hundreds of millions of dollars addressing the issue, and when January 1, 2000, came, the U.S. didn’t experience any significant problems.
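The code fixes themselves were often conceptually simple. One widely used remediation was “windowing”: pick a pivot year and interpret two-digit values on either side of it. Here is a sketch of that technique; the pivot value of 50 is an arbitrary illustration, since each system chose its own.

```python
from datetime import date

PIVOT = 50  # assumption for illustration: values below 50 mean 20xx, 50 and above mean 19xx

def parse_record_date_windowed(two_digit_year, month, day):
    """Windowing fix: map a two-digit year into a 100-year window around the pivot."""
    century = 2000 if two_digit_year < PIVOT else 1900
    return date(century + two_digit_year, month, day)

print(parse_record_date_windowed(0, 1, 1))    # 2000-01-01, not 1900-01-01
print(parse_record_date_windowed(99, 12, 31)) # 1999-12-31
```

Fixes like this were the easy part; the hard part was the secondary and tertiary planning described above.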

I was very relieved, and life went on. But a few months later, people began asking me whether it had really been necessary to spend all that money, whether we really had to do what we did and whether the whole problem hadn’t been exaggerated. Then, as the criticism, debate and even congressional hearings continued into 2001, 9/11 happened.

What struck me, as a security expert, about 9/11 was that without any order to do so, people in the financial sector instinctively and immediately pulled out their Y2K manuals and implemented the contingency plans they had expected to need just the year before. This resilience was eye-opening. It showed me not only that risk and resilience planning was “portable” from one crisis to another, but that thinking about worst-case scenarios paid off – that the investment was worth it. Obviously, we did not have the exact 9/11 scenario in our Y2K book. But thinking through those secondary and tertiary possibilities had substantially increased our resilience.

When the 9/11 Commission issued its recommendations, it made one that was classified, known as B.5.b. Later embodied in a regulation issued by the Nuclear Regulatory Commission, B.5.b required nuclear power plants to become more resilient by implementing a reinforced power supply with layers of redundancy, based on the potential scenario of an aircraft impact. The U.S. government shared this with other countries operating nuclear power plants, and most soon implemented the recommendation. Unfortunately, Japan did not, partly because the government believed the country would not be the target of such an attack. We’ll never know whether such safety measures could have prevented or mitigated the Fukushima disaster, but the extra pumps, tools and multiple backup power sources stipulated in B.5.b would certainly have changed the outcome.

Risk is risk

What these lessons have shown me is that resilience is not only important but also portable. Preparations for a cyber crisis are useful in a non-cyber crisis, and vice versa. As I mentioned above, complacency is also a factor: with the WannaCry ransomware, I believe many victims simply put off updating their OS and security software, assuming it wouldn’t happen to them or that they weren’t an “ideal target.”

But hackers never sleep. They use automation better than most businesses and can sit at their desks scanning the entire planet for victims with a mouse click. While personal computers usually install security updates automatically, many businesses set updates to “manual” so they won’t disrupt operations. But just like checking smoke alarms or changing the oil in a car, we have to put up with these occasional nuisances to stay safe – and find security technologies that actually enable, rather than slow down, core business operations. It goes without saying that we have to back up our data and keep our systems up to date. It’s also worth remembering that 90% of ransomware attacks exploit known vulnerabilities, and there’s usually a patch for them.
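As a small defensive illustration of what that scanning looks like, here is a Python sketch that checks whether the port WannaCry spread through (TCP 445, SMB) is reachable on hosts you own. The addresses are placeholders; this is meant as a way to audit your own network’s exposure, not as a scanning tool.

```python
import socket

def smb_port_open(host, timeout=1.0):
    """Return True if TCP port 445 (SMB, the service WannaCry spread through) accepts a connection."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

# Audit a few of your own machines (placeholder addresses) for exposed SMB.
for host in ["192.0.2.10", "192.0.2.11"]:
    print(host, "port 445:", "EXPOSED" if smb_port_open(host) else "closed/filtered")
```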

We are going to become ever more reliant on ICT as an enabler of business and society; we can’t go back to a pre-digital era, so we have to focus on protecting our way of life in this digital age. Given the speed of innovation, computer technology will sometimes fail spectacularly. We need to understand and account for these risks by becoming more resilient. Doing so forces an organization to take stock of everything from business processes to people to the technology it has implemented. That means looking at the entire risk spectrum, including secondary and tertiary risks. Not only will this make us stronger, more efficient and more competitive, but when the next cyber or non-cyber disaster happens (and unfortunately it will), we will be better prepared for it, prevent what we can and recover more quickly.

Posted by whsaito
