“Follow the trend lines, not the headlines”.
Indicators to look at in vulnerability management decision-making.
A few days ago, I gave a workshop on Vulnerability Management at a large industrial cybersecurity event somewhere in the middle of Europe (I can't disclose details because of the event's policy). The talk covered Threat Modeling, Software Bills of Materials, and Vulnerability Scoring mechanisms, with a focus on how they can help develop an effective vulnerability management program.
After the workshop, as usual, I asked for feedback from the participants who approached me for informal discussions. One person told me that, at least for an AppSec expert, there was nothing completely unknown in it, but that I had sorted out the concepts, standards, approaches, and frameworks I talked about during my presentation. This is nice feedback to me, because we already have plenty of tools and solutions, and before inventing something new it is important to get the most out of what we already have.
Indeed, vulnerability management is not a cutting-edge research topic, but it is also true that when it comes to operationalizing even well-known concepts, standards, and methodologies, the majority of companies and IT departments struggle.
I grew professionally in a research laboratory whose motto was "There is nothing more practical than a good theory" (a statement made by the famous psychologist Kurt Lewin in the 1950s). So, if we fail to implement an effective vulnerability management strategy, it is indeed useful to get back to the basic notions. This is even more important nowadays, when there is increasing pressure not to leave open any door that could be exploited by an attacker.
“Follow the trend lines, not the headlines,” President Bill Clinton once said, referring to the need to use trend indicators as the pillars on which to base our decision-making processes. I of course agree, and I think it applies strongly to vulnerability management as well.
But which trend indicators should we look at while implementing vulnerability management?
The first one might sound kind of obvious: the number of published CVEs, which gives us an idea of the (increasing) rate at which vulnerabilities are discovered nowadays.
Image credit: First.org - https://www.first.org/epss/data_stats
That number, as of September 11, 2023 (the day on which I'm closing this article), is 19,765, an increase of 14.6% over the same date in 2022. If the trend is confirmed, the overall number of vulnerabilities published in 2022 will be reached this year around mid-November. Even though no single organization is impacted by all these vulnerabilities at once, we can take this trend as an indicator of how often vulnerabilities appear. And this frequency is undoubtedly increasing.
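As a rough sketch, the projection above can be reproduced from the figures quoted in this article, assuming a constant daily publication rate. The 2022 full-year total is not stated here, so the value below is a hypothetical round figure used only to illustrate the calculation.

```python
from datetime import date

# Figure quoted in the article (FIRST.org EPSS data stats)
cves_ytd_2023 = 19_765                                # CVEs published in 2023 as of Sep 11
day_of_year = date(2023, 9, 11).timetuple().tm_yday   # 254

# Simplifying assumption: CVEs are published at a constant daily rate
daily_rate = cves_ytd_2023 / day_of_year

# Hypothetical 2022 full-year total -- NOT from the article,
# chosen only to illustrate the projection
total_2022 = 25_000

days_to_match_2022 = total_2022 / daily_rate          # day of 2023 when 2022's total is passed
projected_2023_total = daily_rate * 365               # naive year-end projection

print(f"daily rate: {daily_rate:.1f} CVEs/day")
print(f"2022 total reached on day {days_to_match_2022:.0f} of 2023")
print(f"projected 2023 total: {projected_2023_total:.0f}")
```

With these assumptions the 2022 total is passed around day 321 of the year, i.e. mid-November, which is consistent with the trend described above.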
Combined with this, we should consider how well and how fast an organization patches the vulnerabilities that affect it. And here comes the second indicator. An interesting study by Kenna Security (now Cisco) and the Cyentia Institute says that the share of vulnerabilities an organization can patch is in the range of 10-15% of those discovered. Seen from another perspective, this means that 85-90% of vulnerabilities will pile up in a backlog and probably remain there for a long time.
Image credit: Prioritization To Prediction (Vol. 8), CISCO/CYENTIA INSTITUTE
Regarding the vulnerabilities left in the backlog, there is bad news, good news, and also a challenge for organizations. The bad news, which is indeed not news, is that when we leave a vulnerability unpatched, we open a window of exposure that attackers can use to exploit the weakness. Not immediately, since attackers need some time to weaponize the vulnerability (third indicator). On average, Qualys says, they need about 20 days, while defenders need on average 30 days to patch (fourth indicator).
But what about the vulnerabilities that are not patched? The good news here is that not all publicly known vulnerabilities are exploited. Another study, again by Kenna Security (now Cisco) and the Cyentia Institute, says that only 5% of known vulnerabilities are exploited (fifth indicator).
Image credit: Prioritization To Prediction (Vol. 5), CISCO/CYENTIA INSTITUTE
The remaining 95% is made of vulnerabilities that are either not detectable (meaning that a scanner launched against the application or the infrastructure is not able to identify them) or not exploited (meaning that no public exploit is available). These two conditions (detectability and availability of exploits) are both necessary for large-scale weaponization and exploitation of a vulnerability. Thus, if they are not both true, the vulnerability can't be exploited automatically or at a low cost for the attacker.
This is indeed really good news, since the number of vulnerabilities to concentrate on is reduced by roughly a factor of 20.
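Putting the numbers above together gives a useful back-of-the-envelope sizing. The yearly finding count below is hypothetical; the percentages are the ones quoted from the Kenna/Cyentia studies.

```python
# Rough sizing combining the figures quoted above
# (Kenna Security / Cyentia Institute, Prioritization to Prediction)
discovered = 10_000        # hypothetical yearly count of discovered vulnerabilities
patch_capacity = 0.10      # organizations patch ~10-15% of findings; low end used here
exploited_share = 0.05     # ~5% of known vulnerabilities are actually exploited

backlog = discovered * (1 - patch_capacity)       # what piles up unpatched
priority_set = discovered * exploited_share       # the subset that actually matters

# If the same patch capacity were aimed only at the exploited 5%,
# it would cover that priority set twice over:
coverage = (discovered * patch_capacity) / priority_set

print(f"backlog: {backlog:.0f}, priority set: {priority_set:.0f}, "
      f"coverage if prioritized: {coverage:.1f}x")
```

The takeaway of this sketch is that even a 10% patch capacity exceeds the ~5% of vulnerabilities that get exploited, so the real problem is prioritization, not raw throughput.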
And here comes the challenge: how to identify that 5%. As you can expect, there is no product, risk indicator, or threat intelligence feed that alone can tell us which 5% of vulnerabilities to concentrate on. The proverbial quote from Sun Tzu applies: "If you know the enemy and know yourself, you need not fear the result of a hundred battles." Knowing what assets are in our hands, how they are protected, whether they are exposed, and the impact of possible exploitations are insightful elements to narrow down the scope and concentrate on the subset of issues that matter to us.
Before closing, let’s then recap five indicators to take into account in a decision-making process related to vulnerability management:
Number of vulnerabilities;
Percentage of vulnerabilities remediated;
(average) Time to weaponization;
(average) Time to remediation;
Percentage of actually exploited vulnerabilities.
If calculated for our own organization, some of them can also be included as performance measures for our vulnerability management program (e.g., the number of vulnerabilities or the time to remediation).
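As a minimal sketch of how the five indicators could be computed from an organization's own vulnerability records: the record structure and field names below are hypothetical, chosen only for illustration.

```python
from datetime import date

# Hypothetical per-vulnerability records; field names are illustrative
vulns = [
    {"published": date(2023, 1, 10), "remediated": date(2023, 2, 1),
     "weaponized": date(2023, 1, 25), "exploited": False},
    {"published": date(2023, 3, 5), "remediated": None,
     "weaponized": date(2023, 3, 20), "exploited": True},
    {"published": date(2023, 5, 2), "remediated": date(2023, 5, 30),
     "weaponized": None, "exploited": False},
]

total = len(vulns)                                           # 1. number of vulnerabilities
remediated = [v for v in vulns if v["remediated"]]
pct_remediated = len(remediated) / total                     # 2. % remediated
weaponized = [v for v in vulns if v["weaponized"]]
avg_weaponize = sum((v["weaponized"] - v["published"]).days
                    for v in weaponized) / len(weaponized)   # 3. avg time to weaponization
avg_remediate = sum((v["remediated"] - v["published"]).days
                    for v in remediated) / len(remediated)   # 4. avg time to remediation
pct_exploited = sum(v["exploited"] for v in vulns) / total   # 5. % actually exploited

print(total, pct_remediated, avg_weaponize, avg_remediate, pct_exploited)
```

In practice these figures would come from a vulnerability scanner's export or a ticketing system rather than hand-written records, but the arithmetic stays the same.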
We will investigate the KPIs for a vulnerability management program next week, in an article explicitly dedicated to that subject.
References
2023 QUALYS TRURISK RESEARCH REPORT
Prioritization To Prediction (Vol. 5), CISCO/CYENTIA INSTITUTE
Prioritization To Prediction (Vol. 8), CISCO/CYENTIA INSTITUTE