benchmark
Here are 2,223 public repositories matching this topic...
For each Job, it adds plots for density, cumulative mean, and so on. But two files are named BenchmarkDotNet.Artifacts/results/MyBench.Sleeps-Time50--density.png and BenchmarkDotNet.Artifacts/results/MyBench.Sleeps-Time50--facetDensity.png, with a double dash (--) instead of a single one, as if some iteration variable were empty (other file names do contain -Default- in that position).
Is your feature request related to a problem? Please describe.
BENCHMARK_ENABLE_LTO=true is recommended by the README.md. It should likely be the default if we think it is that important.
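A minimal sketch, assuming a local checkout of google/benchmark and an out-of-source `build` directory, of how one might pass the flag explicitly at configure time today, since it is not yet the default:

```python
# Sketch only: configure and build google/benchmark with LTO enabled.
# Paths ("." and "build") are assumptions; adjust to your checkout layout.
import subprocess

subprocess.run(
    [
        "cmake",
        "-S", ".",                        # assumed: run from the benchmark source root
        "-B", "build",
        "-DCMAKE_BUILD_TYPE=Release",
        "-DBENCHMARK_ENABLE_LTO=true",    # the setting the README recommends
    ],
    check=True,
)
subprocess.run(["cmake", "--build", "build"], check=True)
```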
Is there currently any way to force hyperfine to write intermediate results, so that I can abort a benchmark without losing all the progress made so far? I'm asking because I've been benchmarking several algorithms with exponential complexity recently, and sometimes letting the benchmark finish isn't an option.
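This is not an existing hyperfine feature; the following is only a sketch of a possible workaround, assuming each algorithm can be benchmarked in its own hyperfine invocation so that aborting loses at most the run currently in flight. The command names and file paths are hypothetical; `--export-json` is a real hyperfine flag.

```python
# Sketch of a workaround: one hyperfine invocation per command, each exporting
# its own JSON file as soon as it finishes, so earlier results survive an abort.
import subprocess

# Hypothetical commands to benchmark.
commands = {
    "algo_a": "./algo_a input.txt",
    "algo_b": "./algo_b input.txt",
}

for name, cmd in commands.items():
    subprocess.run(
        ["hyperfine", "--export-json", f"{name}.json", cmd],
        check=True,
    )
```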