Obligatory FAQ note: Sometimes I get asked questions, e.g. on IRC, via e-mail or during my livestreams. And sometimes I get asked the same question repeatedly. To save myself some time (*cough* and be able to give the same answer instead of conflicting ones *cough*) I decided to write up selected question and answer pairs in separate blog posts. Please remember that these answers are by no means authoritative - they are limited by my experience, my knowledge and my opinions on things. Do look in the comment section as well - a lot of smart people read my blog and might have a different, and likely better, answer to the same question. If you disagree or just have something to add - by all means, please do comment.

Q: How to find exploits in software?
Q: How did you find this CVE?
(in the context of a vulnerability with no CVE assigned)
Q: How to create an exploit for ThisOrThatApplication?
and also
Q: I want to work in security, do I need CVEs?

A: Actually this isn't really an answer to the above questions (which I do get from time to time in this or that form), but rather an attempt to straighten out the misunderstandings behind them (well OK, I actually do answer the last question).

TL;DR:

  • A vulnerability is a bug in software or hardware that has security-related consequences.
  • An exploit is a small program or script (e.g. in Python or C) that automates exploitation (abuse) of a given vulnerability.
  • A CVE (or more formally a CVE-ID) is an identifier assigned to a vulnerability which has been added to MITRE's Common Vulnerabilities and Exposures database.
  • You don't need CVEs in your resume (unless you are looking to work in a really really really specific niche).

Longer version follows.

Defining a vulnerability is beyond the scope of this post. Just remember that not every bug is a vulnerability – for something to be called a vulnerability, it must allow an attacker to do something that would require a higher privilege level than the one they are assumed to have.

To make the explanation easier, let's take a look at the lifespan of a typical vulnerability:

  1. A programmer has a bad day / misreads the documentation / gets distracted by something, and they introduce a bug into the code (or hardware design).
  2. ...some time passes...
  3. A potential vulnerability is discovered. At this point it's not yet confirmed whether this is an actual vulnerability or a false positive – it's just a vulnerability candidate.
  4. The vulnerability is triggered, i.e. the researcher confirms that they can interact with the faulty piece of code from the outside and that it behaves as expected (i.e. in a faulty way). Do note that at this point it's still not clear whether the vulnerability is exploitable.
  5. A Proof-of-Concept exploit is created to confirm exploitability (and impact – usually either code execution, or denial of service, or information disclosure, or privilege escalation); see the toy example right after this list.
  6. ...this is where it might branch off, but I'll focus on the ethical path...
  7. A report about the vulnerability is written and sent to the software/hardware vendor (creator/maintainer/manufacturer).
  8. The vulnerability is fixed in a timely manner and a patch / update is published for users to apply.
  9. Optionally around the same time the vulnerability is added (usually, but not always, by the vendor) to MITRE's CVE database.
  10. ...let's stop here...
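
To make steps 3–5 a bit more concrete, here's a tiny, completely made-up example – a hypothetical Python function with a path traversal bug (the directory and function names are invented purely for illustration):

    import os

    PUBLIC_DIR = "/var/www/public"  # hypothetical directory with files meant to be public

    def read_public_file(filename):
        # 'filename' is assumed to arrive straight from an HTTP request parameter.
        # BUG: it's glued onto the path without any sanitization, so a value like
        # "../../etc/passwd" escapes PUBLIC_DIR and reads an arbitrary file.
        path = os.path.join(PUBLIC_DIR, filename)
        with open(path, "rb") as f:
            return f.read()

In this toy scenario, step 3 is someone merely noticing the missing sanitization (a candidate), step 4 is confirming that 'filename' really is attacker-controlled and that the "../" sequences survive all the way to open(), and step 5 is the PoC shown a bit further down.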

As you can see above, a CVE, a vulnerability and an exploit are not the same thing.

In the TL;DR section I've mentioned that an exploit is a script automating the steps necessary to abuse the vulnerability. Do note however that if a vulnerability is trivial to exploit, creating an exploit might be skipped altogether and it still might be possible to abuse the vulnerability. So the existence of an exploit is fully optional.
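
Coming back to the made-up path traversal bug from the example above, a PoC exploit for it could be as small as this sketch (the URL and parameter name are, again, invented):

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical vulnerable endpoint exposing read_public_file().
    TARGET = "http://victim.example/download"

    # Ask the handler for a file outside of its public directory.
    resp = requests.get(TARGET, params={"filename": "../../etc/passwd"})

    # If this prints the contents of /etc/passwd, exploitability (and the
    # information disclosure impact) is confirmed.
    print(resp.text[:200])

Which also illustrates the "trivial" case – instead of writing a script you could just paste that "../../etc/passwd" value into a browser's address bar and get the same result.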

Important: Even if an exploit was never created and the vulnerability is not trivial – i.e. its exploitability hasn't been confirmed – from the perspective of the defensive side it MUST be treated as if it were exploitable unless proven otherwise (and in some tricky cases it's quite hard to prove either way). Otherwise one is risking a nasty surprise when someone manages to craft a working exploit. And yes, in real life there might be situations where a certain risk has to be accepted due to other reasons.

To state what should be obvious at this point, you can't really "create an exploit" without finding a vulnerability first (or learning about one from some other source), and an exploit is created to abuse a specific vulnerability. Even if you hear someone say "I've got an exploit for Linux", what that really means is "I've got an exploit for a specific vulnerability in Linux".

Moving onward to CVEs, it's important to note that not all vulnerabilities have a CVE number assigned for various reasons. For example, the vulnerable software might just be too niche for it to make sense, or simply neither the security researcher nor the vendor filled out MITRE's CVE request form. Also, historically it was common to only request CVEs for higher impact vulnerabilities – so there are even larger gaps when it comes to older bugs.

Example of CVE-ID: CVE-2021-33514 (Note the format: CVE-Year-UniqueNumber)
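
If you ever need to recognize these identifiers in tooling, a simplified sanity check (my rough sketch of the format, not the official grammar) could look like this:

    import re

    # "CVE-", a 4-digit year, a dash, and 4 or more digits for the unique number.
    CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

    print(bool(CVE_RE.match("CVE-2021-33514")))  # True
    print(bool(CVE_RE.match("CVE-21-1")))        # False - not a valid CVE-ID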

Why is there a vulnerability database anyway? There are multiple reasons, but the two main ones I can think of are:

  1. It makes vulnerability management easier to automate in larger companies (since the CVE database can be automatically compared against the company's software inventory – see the toy sketch below this list).
  2. It makes it easier to exchange information about specific vulnerabilities and to differentiate between them (since each one has a unique ID).
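
As a toy illustration of the first point, the core of vulnerability management tooling boils down to something like the sketch below (all product names, versions and CVE-IDs here are made up):

    # Published CVE entries, as they might be pulled from a vulnerability feed.
    cve_feed = [
        {"id": "CVE-2024-0001", "product": "examplelib", "fixed_in": (1, 2, 3)},
        {"id": "CVE-2024-0002", "product": "otherlib",   "fixed_in": (4, 0, 0)},
    ]

    # The company's inventory: which software (and version) is installed.
    inventory = {"examplelib": (1, 2, 1), "unrelatedtool": (9, 9, 9)}

    for entry in cve_feed:
        installed = inventory.get(entry["product"])
        if installed is not None and installed < entry["fixed_in"]:
            print(f"{entry['id']}: {entry['product']} {installed} is affected, update needed")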

We should also call out the elephant in the room - are CVEs badges of honor? Should everyone working in IT security "have CVEs" in their resume?

In general, no – CVEs are just unique identifiers assigned to vulnerabilities to make managing them easier.

That being said, security researchers focusing on vulnerability discovery do take pride in finding vulnerabilities and in having CVE-IDs assigned to them – and that's absolutely fine! At times you might also see the "number of CVEs one has" being used in pissing contests between members of the infosec community – and that's fine as well as long as it doesn't grow out of proportion ;>

The important thing to remember is that "getting CVEs" is specific to vulnerability hunting / vulnerability research (and the like), and that's a very very tiny piece of the whole security industry picture – and by no means the most important one either. So if you see a company hiring e.g. a malware analyst, a network security engineer, a blue-teamer, a bug bounty manager, or even a pentester / red-team member putting "must have CVEs" as a job requirement, you probably want to take that with a large grain of salt (and either ask for clarification or just steer clear of them). On the flip side, if a company is looking strictly for a senior vulnerability researcher, then asking to "see CVEs in the resume" isn't that unheard of – but it's not fully reasonable either, as e.g. internally found vulnerabilities don't get CVE numbers.

So there you have it – a quick guide to what is what in the vulnerability world.
