cuda
Here are 2,695 public repositories matching this topic...
Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: ubuntu 18.04
CPU: i9
GPU: RTX2080
It would be good to be able to specify how many trees to use for Shapley values. model.predict and the prediction_type variants already allow this, and LightGBM/XGBoost allow it as well.
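For illustration, a minimal sketch of the current behavior and the requested parameter; the ntree_end argument to get_feature_importance is hypothetical and is exactly what this issue asks for:

import numpy as np
from catboost import CatBoostRegressor, Pool

X = np.random.rand(100, 5)
y = np.random.rand(100)
pool = Pool(X, y)
model = CatBoostRegressor(iterations=500, verbose=False)
model.fit(pool)

# predict already supports truncating the ensemble:
preds = model.predict(X, ntree_start=0, ntree_end=100)

# today, ShapValues always use the full ensemble:
shap_all = model.get_feature_importance(pool, type='ShapValues')

# requested (hypothetical) analogue -- not a real parameter yet:
# shap_100 = model.get_feature_importance(pool, type='ShapValues', ntree_end=100)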
Currently, aggregation APIs (groupby, reductions, rolling, etc.) are scattered around in multiple files and there are inconsistencies between the directory structures in cpp/include/, cpp/src/, cpp/tests/, and cpp/benchmarks/. For example:
cpp/include/:
- include/cudf/aggregation.hpp
- include/cudf/groupby.hpp
- include/cudf/rolling.hpp
- ....
cpp/src/:
- src/aggregati
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
PR NVIDIA/cub#218 fixes this in CUB's radix sort. We should:
- Check whether Thrust's other backends handle this case correctly.
- Provide a guarantee of this in the stable_sort documentation.
- Add regression tests to enforce this on all backends.
confusion_matrix should automatically convert input dtypes as appropriate instead of failing, as other metric functions do.
from sklearn.metrics import confusion_matrix
import numpy as np
import cuml

y = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.0, 1.0, 1.0])

# sklearn converts the float inputs and succeeds:
print(confusion_matrix(y, y_pred))
# [[1 1]
#  [0 1]]

# cuml fails on the same float64 inputs:
cuml.metrics.confusion_matrix(y, y_pred)
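Until then, a workaround sketch (assuming cuml's confusion_matrix accepts integer arrays):

import numpy as np
import cuml

y = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.0, 1.0, 1.0])

# casting to an integer dtype avoids the dtype failure:
print(cuml.metrics.confusion_matrix(y.astype(np.int32), y_pred.astype(np.int32)))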
I often use -v just to see that something is going on, but a progress bar (enabled by default) would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
Thank you for this fantastic work!
Could the fit_transform() method also return the KL divergence of the run?
Thx!
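For reference, scikit-learn's TSNE exposes this as the kl_divergence_ attribute after fitting; the sketch below uses sklearn's API, not this project's, to show the pattern being requested:

import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(100, 10)
tsne = TSNE(n_components=2)
emb = tsne.fit_transform(X)

# final KL divergence of the optimization run:
print(tsne.kl_divergence_)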
(Noticed whilst reviewing #6695)
From the docs for numba.cuda.atomic.compare_and_swap: It seem…
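For context, a minimal sketch of compare_and_swap as documented (compare_and_swap(ary, old, val) operates on ary[0] and returns the value observed before the swap; assumes a CUDA-capable GPU):

import numpy as np
from numba import cuda

@cuda.jit
def try_claim(flag, winner):
    # if flag[0] == 0, atomically set it to 1; 'old' is the value
    # seen before the swap, so exactly one thread observes 0
    old = cuda.atomic.compare_and_swap(flag, 0, 1)
    if old == 0:
        winner[0] = cuda.grid(1)

flag = np.zeros(1, dtype=np.int32)
winner = np.full(1, -1, dtype=np.int32)
try_claim[1, 32](flag, winner)
print(flag[0], winner[0])  # flag is now 1; winner holds the claiming thread's id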