FuzzBench: Fuzzer Benchmarking As a Service

FuzzBench is a free service that evaluates fuzzers on a wide variety of real-world benchmarks, at Google scale. Its goal is to make rigorous evaluation of fuzzing research painless and to make that research easier for the community to adopt. We invite members of the research community to contribute their fuzzers and give us feedback on improving our evaluation techniques.

FuzzBench provides:

  • An easy API for integrating fuzzers.
  • Benchmarks from real-world projects. FuzzBench can use any OSS-Fuzz project as a benchmark.
  • A reporting library that produces reports with graphs and statistical tests to help you understand the significance of results.
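To give a feel for the integration API, the sketch below models a fuzzer integration module in the style FuzzBench uses (a build step plus a fuzz entry point). The `afl-fuzz` binary name and its flags are illustrative assumptions, and the sketch prints the command instead of executing it; see the FuzzBench documentation for the exact interface.

```python
def fuzz_command(input_corpus, output_corpus, target_binary):
    """Construct an AFL-style fuzzing command line (flags are
    illustrative assumptions, not FuzzBench's exact interface)."""
    return [
        './afl-fuzz',
        '-i', input_corpus,   # seed corpus supplied by the platform
        '-o', output_corpus,  # directory where generated inputs land
        '--', target_binary,  # instrumented benchmark binary
    ]


def fuzz(input_corpus, output_corpus, target_binary):
    """Entry point the platform would call to start fuzzing.

    Sketch only: prints the command rather than running it.
    """
    print(' '.join(fuzz_command(input_corpus, output_corpus, target_binary)))
```

A real integration would also define a build step that compiles the benchmark with the fuzzer's instrumentation before `fuzz()` runs.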

To participate, submit your fuzzer to run on the FuzzBench platform by following our simple guide. After your integration is accepted, we will run a large-scale experiment using your fuzzer and generate a report comparing your fuzzer to others. See a sample report.

Overview

FuzzBench Service diagram

Sample Report

You can view our sample report here and our periodically generated reports here. The sample report was generated by running 10 fuzzers against 24 real-world benchmarks, with 20 trials each over a duration of 24 hours. The raw data in compressed CSV format can be found at the end of the report.

When analyzing reports, we recommend:

  • Checking the strengths and weaknesses of a fuzzer against various benchmarks.
  • Looking at aggregate results to understand the overall significance of the results.
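The per-benchmark significance comes from pairwise statistical comparisons of trial outcomes. As a rough illustration, here is a minimal sketch of a Mann-Whitney-style rank comparison with hypothetical per-trial coverage numbers; this is not FuzzBench's reporting code, which may use a different procedure.

```python
import math


def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: counts pairs where a sample from `a`
    beats one from `b`, with ties counting half. Minimal sketch, no
    tie correction."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u


def z_score(a, b):
    """Normal approximation of how far U deviates from its expectation
    under the null hypothesis that the two fuzzers perform equally."""
    n1, n2 = len(a), len(b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (mann_whitney_u(a, b) - mu) / sigma


# Hypothetical per-trial edge-coverage counts for two fuzzers:
fuzzer_a = [1510, 1498, 1523, 1541, 1507]
fuzzer_b = [1412, 1399, 1450, 1421, 1433]
print(round(z_score(fuzzer_a, fuzzer_b), 2))  # → 2.61
```

A large positive z-score (here, every trial of fuzzer A beat every trial of fuzzer B) suggests the coverage difference is unlikely to be noise; a report would convert this into a p-value and aggregate across benchmarks.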

Please provide feedback on any inaccuracies and potential improvements (such as integration changes, new benchmarks, etc.) by opening a GitHub issue here.

Documentation

Read our detailed documentation to learn how to use FuzzBench.

Contacts

Join our mailing list for discussions and announcements, or send us a private email at fuzzbench@google.com.
