gpu
Here are 1,971 public repositories matching this topic...
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
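The behavior being requested can be sketched in plain Python; note that the `threshold` keyword below is illustrative of what the legacy op supports, not the actual relu_layer signature:

```python
def relu(x, threshold=0.0):
    """ReLU that zeroes values at or below `threshold` (hypothetical signature)."""
    return x if x > threshold else 0.0

# Default behavior is the usual ReLU.
assert relu(2.0) == 2.0
assert relu(-1.0) == 0.0

# With a configurable threshold, values at or below it are clamped to zero.
assert relu(0.5, threshold=1.0) == 0.0
assert relu(1.5, threshold=1.0) == 1.5
```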
Hi!
There is no way to get the fitted models after cross-validation, because catboost.cv returns only the evaluation metric scores. At the same time, other popular ML libraries offer such an option in some form.
LightGBM, for example, has an optional return_cvbooster argument:
cv = lgb.cv(params, X_train, show_stdv=False, stratified=True, return_cvbooster=True)
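What the issue asks for can be sketched framework-agnostically: a cross-validation helper that returns the fitted models alongside the per-fold scores. The names `cv_with_models`, `fit`, and `score` below are hypothetical, not catboost or LightGBM API:

```python
def cv_with_models(fit, score, folds):
    """Run CV and return both the per-fold scores and the fitted models."""
    scores, models = [], []
    for train, valid in folds:
        model = fit(train)               # fit a model on the training split
        scores.append(score(model, valid))
        models.append(model)             # keep the fitted model instead of discarding it
    return {"scores": scores, "models": models}

# Toy "model": the mean of the training data; score: mean absolute error.
fit = lambda data: sum(data) / len(data)
score = lambda m, valid: sum(abs(v - m) for v in valid) / len(valid)
folds = [([1, 2, 3], [2]), ([2, 3, 4], [3])]

result = cv_with_models(fit, score, folds)
# result["models"] holds the fitted model from each fold, not just the scores
```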
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained each time I run the code, for some reason.
Please look into this if you could.
Spark is really inconsistent in how it handles some values, such as -0.0 vs. 0.0 and the various NaN values that are possible. I don't expect cuDF to be aware of any of this, but I would like the ability to work around it in some cases by treating a floating-point value as if it were just a bunch of bits. To me, logical_cast feels like the right place to do this, but floating point values are
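The "bunch of bits" view the issue asks for can be sketched in plain Python with the standard struct module; this is a host-side illustration of the idea, not cuDF's logical_cast:

```python
import struct

def float_bits(x):
    """Reinterpret the 8-byte IEEE-754 representation of x as an unsigned integer."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

# -0.0 and 0.0 compare equal as floats, but differ bit-for-bit.
assert 0.0 == -0.0
assert float_bits(0.0) != float_bits(-0.0)

# A NaN never compares equal to itself as a float, but its bits can be compared.
nan = struct.unpack('<d', struct.pack('<Q', 0x7ff8000000000000))[0]
assert nan != nan
assert float_bits(nan) == float_bits(nan)
```

Comparing the bit patterns instead of the float values gives a total, deterministic ordering over -0.0/0.0 and the different NaN payloads, which is what makes it useful as a workaround.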
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions; this lets us identify which set of features a particular prediction belongs to. Here is an example of the predictions output using tensorflow.contrib.estimator.multi_class_head:
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"scores": [0.068196
The names map and input are mistakenly exchanged. Going by the Preconditions paragraph, I suppose they should be swapped, because there is no problem when map and result coincide (in the current context).
cc @ngimel @mruberry @rgommers @heitorschueroff