“What reward ranges should I set for my program?”, “How much should I pay for a given finding?”, and “What should my organization’s reward budget be for a successful program?”
At Bugcrowd, we hear these questions time and time again – and want you to know that if you’re asking them while setting up your crowdsourced security program, you’re certainly not alone. All of these points are natural and important considerations when getting started with crowdsourced security: what exactly is the going rate for vulnerabilities against certain target types? How much should a given vulnerability class be rewarded? And how much can one expect to spend on bounty rewards over the course of a year? In this blog, we’ll cover these very questions and provide some best-practice guidance on rewards and ranges as they relate to running a crowdsourced security program.
Bugcrowd’s VRT
The good news is that it’s pretty easy to answer the question of how much a specific vulnerability class should be rewarded within the context of an existing reward range. Bugcrowd has made the segmentation of classes and priorities remarkably simple and straightforward via our Vulnerability Rating Taxonomy (https://bugcrowd.com/vulnerability-rating-taxonomy), which has been developed (and continues to be expanded upon) over the course of running hundreds of programs. Through managing these programs, we’ve classified all the most common vulnerability types and rated them P1 through P5 – where P1s are critical issues (RCE, SQLi, etc.) and P5s are informational (and commonly non-rewarded) findings. With a reward range set per P-level, any given finding can then be easily bucketed into the applicable reward range for its vuln type.
Adhering to the VRT standard makes life easy for both you and the hackers – both groups share a standardized set of expectations for how a given finding will be rated and subsequently rewarded. For instance, a researcher who finds a SQLi vulnerability can expect it to be paid out in accordance with whatever the P1 schedule is – and the program owner knows exactly what they’re expected to pay. Now the only question is how much those reward ranges should be…
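To make the bucketing concrete, here’s a minimal Python sketch of what a VRT-style lookup might look like. The handful of class-to-priority placements below are an illustrative subset only (the published VRT linked above is the authoritative source), and the helper function is purely hypothetical:

```python
# Illustrative (and greatly simplified) subset of a VRT-style class-to-priority
# mapping -- the published VRT is the authoritative source for real programs.
VRT_PRIORITY = {
    "SQL Injection": "P1",
    "Remote Code Execution": "P1",
    "Stored Cross-Site Scripting": "P2",     # placement shown for illustration
    "Reflected Cross-Site Scripting": "P3",
    "Open Redirect (GET-based)": "P4",
    "Missing Security Headers": "P5",        # informational, commonly non-rewarded
}

def priority_for(vuln_class: str) -> str:
    """Return the P-level for a finding, or flag it for manual triage."""
    return VRT_PRIORITY.get(vuln_class, "needs manual triage")

print(priority_for("SQL Injection"))  # -> P1
```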
Market Rate for Vulnerabilities
At the low end, the market-rate recommended range is $125-$2,000 (we’ll cover other ranges later). In the example above, the guidance is that this P1 SQLi should be rewarded between $1,750 and $2,000. It’s important to note that the reason for using a range, and not a static value, is that the value of a given finding (even at the same priority level) can often vary substantially depending on its impact – e.g. what data is compromised? It’s also worth keeping in mind that this is simply a base level – program owners are encouraged to both enumerate and reward bonuses for particularly valuable outcomes (e.g. the ability to view credit card data or PII).
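For illustration, a reward lookup under that base range might look something like the sketch below – note that only the P1 band ($1,750-$2,000) comes from the guidance above; the P2-P4 bands, the impact-scaling approach, and the bonus parameter are all hypothetical:

```python
# A hypothetical split of the $125-$2,000 base range across P-levels.
# Only the P1 band ($1,750-$2,000) comes from the guidance above; the
# P2-P4 bands are placeholders, and P5s are informational (typically unrewarded).
BASE_REWARD_BANDS = {
    "P1": (1750, 2000),
    "P2": (900, 1500),   # placeholder
    "P3": (400, 800),    # placeholder
    "P4": (125, 300),    # placeholder
    "P5": (0, 0),
}

def reward(priority: str, impact: float, bonus: int = 0) -> int:
    """Pick a payout within the band based on impact (0.0 to 1.0), plus any bonus."""
    low, high = BASE_REWARD_BANDS[priority]
    return round(low + impact * (high - low)) + bonus

# A high-impact P1 SQLi that exposed PII, with a discretionary bonus on top:
print(reward("P1", impact=0.9, bonus=500))  # -> 2475
```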
This blog would be incomplete if we didn’t note that at this point, many first-time program owners tend to pause and balk at paying $2,000 for a vulnerability. To be clear, $2,000 is a lot of money, but with a little perspective it very quickly goes from feeling expensive to being one of the best deals on the planet. To get there, imagine you’re publicly breached tomorrow via a P1 vulnerability (SQLi, RCE, etc.), leaking a substantial amount of client and/or internal information onto the web. In a world where this sort of debacle is instantly all over the news, does $2,000 still seem like too much to pay for the ability to know about (and patch) the critical issue before it happens? And it doesn’t even have to be a vulnerability on a critical piece of infrastructure – even moderate-priority (e.g. P3) issues can cause brand or reputational damage when sensationalized in the media, making the relative cost of a bounty seem like a remarkable deal – because it is.
This same exercise extends to more advanced/expensive reward ranges as well – in almost every case, the price of a bounty pales in comparison to the cycles and effort that public disclosures and incidents tend to incur. Furthermore, one incredible, yet often forgotten, component of crowdsourced security is that you only pay for valid findings – meaning that regardless of what your rewards are, you’re only ever paying bounties for results. The only time you’ll be paying P1 prices is when you’re actually receiving valid P1s, and if you’re getting P1s, then (as counter-intuitive as it sounds) that’s great – because you’re now more secure today than you were yesterday!
What About Reward Ranges?
Now that we’ve established what rewards should be given for which types/classes of vulnerabilities, the final question to cover is what reward ranges you should use for your specific program. The short answer is that it varies depending on your budget, the types and number of targets, how hardened those targets are, and so on. While non-exhaustive, the following list details our recommended starting reward ranges and corresponding example target types (also summarized in a short sketch after the list):
$150-2,500 – Best for: untested web apps with simple credentialed access and no researcher restrictions — for any target with restrictions in place (e.g. you must own X product, or live in Y location), rewards should default to one range higher.
$200-4,500 – Best for: moderately tested web apps, untested APIs, untested mobile apps.
$250-6,500 – Best for: well-tested web apps, moderately tested APIs or mobile apps, presumed-to-be-vulnerable thick clients/binaries and/or embedded devices.
$300-10,000 – Best for: hardened web apps (e.g. banking), well-tested APIs or mobile apps, moderately secure thick clients/binaries and/or embedded devices.
$500-20,000+ – Best for: extremely well hardened and sensitive web apps, APIs, and mobile apps – as well as moderate-to-highly secured thick clients/binaries and/or hardened embedded devices.
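To make the tiers easy to reference programmatically, here’s one way they might be captured in Python – the dollar figures mirror the list above, while the structure itself and the “bump one tier for restricted targets” helper are just an illustrative sketch:

```python
# The recommended starting ranges above, captured as a simple structure a
# program owner might keep alongside their scope (dollar figures mirror the
# list; the structure and helper are illustrative only).
STARTING_RANGES = [
    {"range": (150, 2500),  "best_for": "untested web apps, simple credentialed access, no restrictions"},
    {"range": (200, 4500),  "best_for": "moderately tested web apps; untested APIs or mobile apps"},
    {"range": (250, 6500),  "best_for": "well-tested web apps; moderately tested APIs/mobile apps; "
                                        "presumed-vulnerable thick clients or embedded devices"},
    {"range": (300, 10000), "best_for": "hardened web apps; well-tested APIs/mobile apps; "
                                        "moderately secure thick clients or embedded devices"},
    {"range": (500, 20000), "best_for": "extremely hardened/sensitive web apps, APIs, and mobile apps; "
                                        "moderate-to-highly secured thick clients or hardened embedded devices"},
]

def tier_for(target_index: int, has_restrictions: bool = False) -> dict:
    """Return a target's range, defaulting one tier higher when researcher restrictions apply."""
    if has_restrictions:
        target_index = min(target_index + 1, len(STARTING_RANGES) - 1)
    return STARTING_RANGES[target_index]

# An untested web app that requires owning a specific product bumps up one tier:
print(tier_for(0, has_restrictions=True)["range"])  # -> (200, 4500)
```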
Keep in mind that while you may fall into the ‘untested’ bucket when you first start your program, after a few months (or in some cases longer) your program and targets will likely graduate to being a more heavily tested scope – meaning that, along with your relative maturity, your rewards should advance as well (to match the increasing difficulty of finding a valid, unique issue).
It’s also worth noting that many programs with large scopes tend to have multiple reward ranges that vary by target – e.g. targets XYZ are rewarded at one level (due to their hardening and/or importance), while targets ABC are rewarded at a lower or higher level, and so on.
Finally, to put all of the above together, we come back to the question of how much one can expect to spend (and should therefore budget) for the first year of running a crowdsourced security program. The answer is that it varies. However, in an effort to provide some constructive advice: assuming a moderate attack surface and some degree of past security testing, $20k is a solid starting point for your program reward pool – keeping in mind that you may need to top it up before the year is over (and if the scope is massive, you’ll likely need substantially more). If/when you do need to top up your rewards, remember that having to do so is a good thing – it means you’re getting findings, and valuable ones at that (at the base reward range, it’d take 160 P4s to exhaust your reward pool, so rest assured that if you’re spending that much, it’s highly unlikely it’s all on low-priority findings).
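As a quick sanity check on that arithmetic, here’s a small sketch – the 160-P4 figure follows directly from the base range’s $125 floor, while the finding mix and per-priority payouts in the second half are purely hypothetical:

```python
# Back-of-the-envelope check on the $20k starting pool (illustrative only).
reward_pool = 20_000
p4_floor = 125                      # low end of the $125-$2,000 base range
print(reward_pool // p4_floor)      # -> 160 P4-only payouts to exhaust the pool

# A hypothetical first-year mix (counts and per-priority payouts are made up)
# shows how quickly higher-priority findings consume the same budget:
example_counts = {"P1": 2, "P2": 4, "P3": 10, "P4": 20}
example_payouts = {"P1": 2000, "P2": 1200, "P3": 600, "P4": 150}
spend = sum(n * example_payouts[p] for p, n in example_counts.items())
print(spend)                        # -> 17800
```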
All said, the rewards for a given program depend entirely on your specific business needs and security maturity. But it’s critical to keep in mind that the key to a successful program remains the same: we want and need hackers to test our targets and find vulnerabilities – and for that to happen, we have to attract the right hackers with the right incentives, making our program something they want (and choose) to test on.