Future evaluations will also aim to cover the performance of specific policies or rulesets within products,
in addition to the performance of each product under its ideal configuration.
The following platforms are currently used in the various evaluations:
RvR Attack Vector Feature Comparison
The purpose of RvR (Relative Vulnerability Rating) is to map and prioritize attack-vector-related features that information security products claim to support,
as well as to verify the support for these claimed features using various models.
The RvR feature comparison aims to cover all relevant products,
both open source and commercial. Initial participation will include manual coverage and feature verification of key contenders,
and will eventually expand to additional models.
Initial product suites will include DAST engines (web application scanners) and WAF engines (web application firewalls),
and will eventually be followed by SAST tools (source code analysis tools),
network vulnerability scanners, IAST engines (interactive application security testing) and other information security product lines,
including cloud variations.
WAVSEP DAST Benchmark Participation
WAVSEP covers web application vulnerability scan engines of various designs.
Benchmark participation is open to all relevant open source projects (49 so far),
and also aims to cover top commercial standalone scan engines, currently including:
WebInspect, AppScan, Burp Suite, Acunetix, Netsparker, NTOSpider (Rapid7), Syhunt and N-Stalker.
Niche players covered include WebCruiser, and past comparisons also covered ParosPro, Ammonite and JSky
(these products are still available, but their assessment stopped due to inactive development).
Upcoming evaluations may include Qualys (already evaluated) and Nessus, in addition to other engines.
Since assessing each commercial contender takes considerable time and effort, the number of commercial participants is limited,
especially since the areas assessed keep expanding; however, additional engines do join the benchmark from time to time,
either out of curiosity on our part or in the context of an engagement, in which case the assessment follows the same rules as for any other benchmark participant.
Due to the effort required to assess cloud scan engines (about three weeks, compared to a single week on average for a standalone engine),
the participation of cloud engines is extremely limited, and is usually performed in the context of an engagement
or when an opening appears among the top 10 commercial vendors assessed.
The same holds true, although to a lesser extent, for all-inclusive vulnerability scanners (products that combine mass tests for both network and application vulnerabilities),
due to the high level of effort required to identify the relevant plugins when defining scan policies.
WAFEP Web Application Firewall Benchmark Participation
The upcoming WAFEP benchmark currently focuses on open source web application firewalls.
Commercial contenders and rulesets will eventually be added.