cuda
Here are 2,504 public repositories matching this topic...
Problem:
catboost version: 0.23.2
Operating System: all
Tutorial: https://github.com/catboost/tutorials/blob/master/custom_loss/custom_metric_tutorial.md
It is impossible to use a custom metric (C++).
Code example:
from catboost import CatBoost
train_data = [[1, 4, 5, 6],
Improve the readability of thread-ID-based branches by giving them more descriptive names.
e.g.
`if (!t)` is actually a `t == 0`, and
https://github.com/rapidsai/cudf/blob/57ef76927373d7260b6a0eda781e59a4c563d36e/cpp/src/io/statistics/column_stats.cu#L285
is actually a `lane_id == 0`.
As demonstrated in rapidsai/cudf#6241 (comment), pr
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly simple kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
PR NVIDIA/cub#218 fixes this in CUB's radix sort. We should:
- Check whether Thrust's other backends handle this case correctly.
- Provide a guarantee of this in the stable_sort documentation.
- Add regression tests to enforce this on all backends.
The following functions under metrics in prims (along with their extensions in cuml) will need to have their names refactored from the camelCase format to the under_score format, like the rest of the functions in cuml and prims:
- `adjustedRandIndex`
- `completenessScore`
- `contingencyMatrix`
- `homogeneityScore`
- `k
I often use -v just to see that something is going on, but a progress bar (enabled by default) would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thanks!