As the infosec world was in turmoil following a total of seven zero-day vulnerabilities in MS Exchange and the so-called Hafnium attack, one thought kept nagging at me: for the past 20 years, patches have been a constantly recurring topic of discussion. And no sooner have we all patched our MS Exchange servers than we need to do it again.
This week, Microsoft is again warning organizations about two new critical vulnerabilities in Exchange Server 2013, 2016 and 2019 that would allow an attacker to remotely take over vulnerable servers. As far as is known, the vulnerabilities have not yet been exploited, but Microsoft expects that this will happen. The vulnerabilities, identified as CVE-2021-28480 and CVE-2021-28481, were both rated 9.8 on the 10-point CVSS severity scale. The US intelligence agency NSA discovered the flaws and reported them to Microsoft; researchers at the company, however, had also found the vulnerabilities independently of the NSA.
The one constant that keeps reappearing throughout any current discussion of a critical security flaw is the speed with which vulnerable systems are patched. In the case of Hafnium, tens of thousands of systems all over the world remained unpatched even a week after the news first broke – and now, a month later, we are only beginning to see the effects of some of the attacks in which ransomware was deployed. Similar reports exist for many of the high-profile security flaws of recent years. The reasons for delays in deploying critical updates have been roughly the same for decades: the deployment process is non-trivial, patches require testing before going into production, or an update is not perceived as being that critical for the organization for various reasons.
In recent years, this situation has not been helped by the ever-increasing frequency and volume of updates and patches. Much to the chagrin of many administrators, companies like Microsoft have ramped up their activities to two major updates per year. This has led to situations where administrators have just finished ironing out the kinks that surfaced during one deployment by the time the next major update is already around the corner. This would be a challenging situation as it is – but in most organizations, there are loads of other applications that require updates as well. And this is where the trouble starts: not every update is compatible with every other update. In most cases, IT management simply assumes that administrators 'just do updates on the side' and perceives this as part and parcel of the IT team's daily work. This is often not too far from the truth. But in practice, the job of keeping all software up to date often gets buried under a load of other obligations. And let's be honest: installing updates is not the most exciting and fun activity in the world.
IT departments that are consistently understaffed then become the straw that breaks the camel’s back: They simply cannot keep up with the pace.
At this point we need to take a step back. One might gain the impression that security is all about updates and patches. And while those are an undoubtedly critical component in the security chain, they are not everything. Remember, security must have no single point of failure. It is important to realise that patch management is no guarantee of a secure network. In addition to patch management, measures such as multi-factor authentication, firewalls, IPS, encryption, a well-designed antivirus endpoint security system with the latest AI technologies and recurring security awareness training are indispensable. No one component can guarantee an organization's security. Security is always the result of a combination of several factors. Equally, a breach in security should always require multiple components to fail. A complete breakdown of security due to one single action or factor is a clear indicator that things have gone seriously awry.
One of the most important reasons given for not implementing a patch or update immediately is the lack of a test environment in some organizations. An often-heard problem is that companies have experienced issues in the past when patching a critical application. The result is that they are reluctant to roll out patches quickly and want to test them extensively in advance, in order to minimize any negative impact on everyday business. This seemingly mundane fact underlines one stark reality: organizations are less afraid of a security breach than they are of things breaking due to issues with a patch or update.
This is not a good situation. Organizations need to establish clear guidelines about what to patch and how – and also when. The CVSS system is a good starting point. Based on this, any updates that are critical for security can (and should) be fast-tracked in every way possible. While I do not suggest foregoing testing altogether for critical patches, the time an organization takes to test a patch should be kept as short as possible. Ultimately, the goal should be to deploy critical patches within a couple of days at most. In the meantime, any workarounds or remediation strategies that can be utilized should be evaluated and put in place if at all possible.
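The CVSS-based prioritization described above can be sketched in a few lines. This is a minimal illustration, not any product's actual logic: the patch list, score thresholds and deployment windows are assumptions chosen to mirror the common CVSS v3 severity bands ("Critical" at 9.0 and above, "High" at 7.0 and above).

```python
# Hypothetical sketch: triaging pending patches by CVSS base score.
# The patch list and deployment windows below are illustrative
# assumptions, not an official mapping for any specific product.
from dataclasses import dataclass


@dataclass
class Patch:
    name: str
    cvss: float  # CVSS v3 base score, 0.0-10.0


def deployment_window(score: float) -> str:
    """Map a CVSS score to a rough deployment window."""
    if score >= 9.0:          # "Critical" band in CVSS v3
        return "fast-track: deploy within days"
    if score >= 7.0:          # "High" band
        return "next maintenance window"
    return "regular patch cycle"


def triage(patches: list[Patch]) -> list[tuple[str, float, str]]:
    """Sort patches so the most severe come first, each labeled
    with its deployment window."""
    ranked = sorted(patches, key=lambda p: p.cvss, reverse=True)
    return [(p.name, p.cvss, deployment_window(p.cvss)) for p in ranked]


pending = [
    Patch("Exchange CVE-2021-28480", 9.8),
    Patch("Office cumulative update", 7.8),
    Patch("Minor UI fix", 3.1),
]
for name, score, action in triage(pending):
    print(f"{score:>4}  {name}: {action}")
```

The point of the sketch is only that the prioritization rule should be explicit and automatic, so that a 9.8-rated Exchange flaw never waits in the same queue as a cosmetic fix.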
Unfortunately, there is also the fact that small companies in particular often just don't have the necessary tools at hand to see which patches are available. For example, they can see that a patch is available for Windows, but they might not be aware that patches have been released for other applications. This requires a lot of manual adjustment and more legwork than should be necessary. Some of this can be remedied by acquiring the right tools, in the shape of a patch management solution.
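At its core, the visibility problem above is a comparison between what is installed and what vendors have published. The sketch below hard-codes both sides for illustration; in practice a patch management product would pull the "latest" data from vendor feeds automatically, and real version comparison is more involved than a string check. All names and versions here are made up.

```python
# Hypothetical sketch: a minimal patch-visibility check comparing
# locally installed versions against an (assumed) feed of latest
# published versions. Data is hard-coded for illustration only.

installed = {"AppA": "2.1.0", "AppB": "1.4.3", "AppC": "5.0.2"}
latest = {"AppA": "2.1.0", "AppB": "1.5.0", "AppC": "5.0.3"}


def outdated(installed: dict, latest: dict) -> list[str]:
    """Return apps whose installed version differs from the latest
    published one (naive string comparison, for illustration)."""
    return sorted(
        app for app, ver in installed.items()
        if latest.get(app, ver) != ver
    )


for app in outdated(installed, latest):
    print(f"{app}: {installed[app]} installed, {latest[app]} available")
```

Even this trivial check surfaces patches for applications beyond the operating system – which is precisely what many small organizations currently have no automated view of.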
Yet, even if tools and priorities are taken care of, there are still opportunities for wrenches to be thrown into the works: shadow IT. The term covers any "nonstandard" software, i.e. programs that are not on the roster of programs otherwise used by the organization. These might be programs on employee devices, installed by the employees themselves. And even if they are worthwhile and useful programs to have: if the IT department is not aware of them, patches for that software are simply missed – including patches that might turn out to be critical. So if a "nonstandard" program is useful enough and has a valid reason to be used, it might be a good idea to add it to the organization's software catalog.
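Detecting shadow IT reduces to the same kind of set comparison: installed software versus the approved catalog. The sketch below uses made-up program names and a hard-coded catalog; a real inventory would come from an endpoint-management agent.

```python
# Hypothetical sketch: flagging "shadow IT" by comparing the software
# found on a device against the organization's approved catalog.
# Catalog and inventory are illustrative assumptions.

approved_catalog = {"Microsoft Office", "Exchange Admin Tools", "7-Zip"}


def find_shadow_it(installed: list[str]) -> list[str]:
    """Return installed programs missing from the approved catalog --
    software that receives no patch tracking until IT learns of it."""
    return sorted(set(installed) - approved_catalog)


installed_on_device = [
    "Microsoft Office", "7-Zip", "TeamViewer", "Notepad++",
]
for program in find_shadow_it(installed_on_device):
    print(f"untracked: {program} (no patches monitored)")
```

Everything the check flags is software whose critical patches would otherwise go unnoticed – the argument for either removing it or formally adding it to the catalog.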
For the foreseeable future, humans will remain involved in programming and will therefore keep making mistakes across billions of lines of software code. Patching will remain a necessary evil. However, I do see an increase in automatic patching, even for critical patches, as a likely development for every OS within the coming years. That will not resolve the problem of zero-day vulnerabilities, which may well become the bigger problem in the years ahead. Now that possibly millions of lines of source code were stolen in the Hafnium and SolarWinds attacks, we cannot really deny that zero-day weaknesses could become a growing trend – one that warrants much quicker action than many companies today are set up to deliver.