benchmark
Here are 1,957 public repositories matching this topic...
It can't test shell aliases and functions, can it? :)
I was trying to build Google Benchmark with IBM XL 16.1.1 and it crashed the compiler. Not sure if anyone wants to investigate a workaround.
cmake -DCMAKE_CXX_COMPILER=xlc++ -DBENCHMARK_ENABLE_GTEST_TESTS=OFF ../
....
....
....
[ 40%] Building CXX object test/CMakeFiles/donotoptimize_test.dir/donotoptimize_test.cc.o
cd /ascldap/users/crtrott/Software/benchmark/build-xl/test && /
I stumbled across <interaction> and ts_interaction_server, but I'm not sure what the use case is or how it is supposed to be used.
Has anyone used this recently who can add some documentation? Or explain to me how it works and I'll write something up.
/cc @nniclausse
User lost
I changed a benchmark to do twice as much work per iteration, and added a multiplier of 2 in the .throughput() to account for this. Time taken per iteration went up, but so did throughput. However criterion reported this as a regression, which is wrong/misleading.
When throughput is provided, the "regression" and "improvement" labels should probably be based on throughput rather than time per iteration.
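A small worked example (with made-up numbers, not taken from the report above) shows how the two labels can disagree:

    # Hypothetical numbers: the benchmark now does 2x the work per iteration.
    old_ns_per_iter, old_work = 10.0, 1   # 0.100 units of work per ns
    new_ns_per_iter, new_work = 18.0, 2   # 0.111 units of work per ns

    time_change = new_ns_per_iter / old_ns_per_iter  # 1.8x slower per iteration
    throughput_change = (new_work / new_ns_per_iter) / (old_work / old_ns_per_iter)  # ~1.11x higher

    # A time-based label reports a regression (+80% per iteration), while the
    # throughput the user actually cares about improved by about 11%.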
Remove Unsafe Blocks
To make it easy to test as many jsPerf features as possible, the database dump should contain a few test cases using sync/async tests + comments.
https://github.com/moble/quaternion is a native NumPy quaternion implementation that could probably replace the slow and unsafe Python implementations in the transformations.py module. It supports arrays of quaternions and operations on them, which is what we need (see the sketch after this checklist).
- replace quaternion code with numpy-quaternion
- require numba dependency?
- check out features of the package like S
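As a rough illustration of the kind of vectorized replacement this would enable, here is a minimal sketch using numpy-quaternion; the array shapes and the identity rotation are illustrative assumptions, not code from transformations.py:

    import numpy as np
    import quaternion  # pip install numpy-quaternion; registers the np.quaternion dtype

    # Build an array of unit quaternions from random rotation vectors and
    # compose them with a fixed rotation, all vectorized (no Python loops).
    qs = quaternion.from_rotation_vector(np.random.randn(100, 3))  # 100 unit quaternions
    rot = np.quaternion(1.0, 0.0, 0.0, 0.0)                        # identity rotation
    composed = rot * qs                                            # elementwise Hamilton product
    mats = quaternion.as_rotation_matrix(composed)                 # shape (100, 3, 3)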
First off, thank you for this library!
I wanted to ask for your help in understanding the analysis and logging of the training.
During training, a lot of information is dumped:
Trial 0 session 3 reinforce_cartpole_t0_s3 [eval_df metrics] final_return_ma: 167.6 strength: 145.74 max_strength: 178.14 final_strength: 178.14 sample_efficiency: 2.22874e-05 training_efficiency: 0.00051129
Update the homepage documentation
Is your feature request related to a problem? Please describe.
N/A
Describe the solution you'd like
Update the homepage information:
1. Add a version note
2. Simplify the descriptive copy
Describe alternatives you've considered
N/A
Additional context
N/A
For each Job, it adds plots for density, cumulative mean, and so on. But two files are named BenchmarkDotNet.Artifacts/results/MyBench.Sleeps-Time50--density.png and BenchmarkDotNet.Artifacts/results/MyBench.Sleeps-Time50--facetDensity.png, with a double -- instead of a single -, as if some iteration variable is empty (since later there are names with -Default- in them).
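A quick sketch of the suspected cause (my guess at the mechanism, not BenchmarkDotNet's actual naming code): joining the name parts with - while one part is empty reproduces the doubled separator:

    # Hypothetical reconstruction of the file-name join; the part names are assumptions.
    parts = ["MyBench.Sleeps", "Time50", "", "density"]  # one part unexpectedly empty
    print("-".join(parts) + ".png")                      # MyBench.Sleeps-Time50--density.png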