cuda
Here are 2,601 public repositories matching this topic...
Hi!
There is currently no way to get the fitted models after cv, because catboost.cv returns only the evaluation metric scores. Meanwhile, other popular ML libraries offer this option in some form.
For LightGBM there is the optional argument return_cvbooster:
cv = lgb.cv(params, X_train, show_stdv=False, stratified=True, return_cvbooster=True)
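The idea can be sketched in plain Python. The MeanModel class and cv() helper below are hypothetical stand-ins (not the catboost or LightGBM API); they only illustrate a cv routine handing the fitted per-fold models back to the caller alongside the scores, in the spirit of return_cvbooster:

```python
class MeanModel:
    """Toy 'model' that predicts the mean of its training targets."""
    def fit(self, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self):
        return self.mean_

def cv(y, n_folds=3, return_models=False):
    """K-fold CV returning per-fold scores, and optionally the fitted models."""
    scores, models = [], []
    fold_size = len(y) // n_folds
    for k in range(n_folds):
        valid = y[k * fold_size:(k + 1) * fold_size]
        train = y[:k * fold_size] + y[(k + 1) * fold_size:]
        model = MeanModel().fit(train)
        # mean squared error on the held-out fold
        scores.append(sum((v - model.predict()) ** 2 for v in valid) / len(valid))
        models.append(model)
    if return_models:
        return scores, models  # caller gets the fitted fold models back
    return scores

scores, models = cv(list(range(12)), n_folds=3, return_models=True)
print(len(models))  # one fitted model per fold
```

Returning the models on request keeps the default return value unchanged, so existing callers are unaffected.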
Spark is really inconsistent in how it handles some values like -0.0 vs 0.0 and the various NaN values that are possible. I don't expect cuDF to be aware of any of this, but I would like the ability to work around it in some cases by treating the floating point value as if it were just a bunch of bits. To me logical_cast feels like the right place to do this, but floating point values are
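The bit-level view of a float that such a cast would expose can be sketched in plain Python with struct; float_to_bits here is an illustrative helper, not cuDF API. It shows that -0.0 vs 0.0 and distinct NaN payloads, which compare as equal or as "just NaN" at the value level, are distinguishable as raw bits:

```python
import math
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a 64-bit float as its raw bit pattern (no value conversion)."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# -0.0 and 0.0 compare equal as floats but differ as bits:
assert 0.0 == -0.0
assert float_to_bits(0.0) != float_to_bits(-0.0)
print(hex(float_to_bits(-0.0)))  # 0x8000000000000000 (only the sign bit set)

# Different NaN payloads all look like "NaN" to float comparisons,
# but the bit patterns distinguish them:
nan1 = struct.unpack("<d", struct.pack("<Q", 0x7FF8000000000000))[0]
nan2 = struct.unpack("<d", struct.pack("<Q", 0x7FF8000000000001))[0]
assert math.isnan(nan1) and math.isnan(nan2)
assert float_to_bits(nan1) != float_to_bits(nan2)
```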
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
The names map and input are mistakenly swapped. Judging by the Preconditions paragraph, I suppose they should be exchanged, because there is no problem when map and result coincide (in the current context).
Is your feature request related to a problem? Please describe.
While porting some code from SKL to cuML, I have noticed the following:
SKL:
from sklearn.model_selection import train_test_split
cuML:
from cuml.preprocessing.model_selection import train_test_split
If I try to do from cuml.model_selection import train_test_split, the following error is displayed:
`ModuleNotFoundE
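One way a library can make the same module importable under a second path is to register an alias in sys.modules. A minimal stdlib-only sketch, using a hypothetical package named pkg (not cuML's actual internals) that mirrors the situation above:

```python
import sys
import types

# Hypothetical layout mirroring the cuML situation: the real code lives at
# pkg.preprocessing.model_selection, and we also want
# "from pkg.model_selection import train_test_split" to resolve.
pkg = types.ModuleType("pkg")
preprocessing = types.ModuleType("pkg.preprocessing")
real = types.ModuleType("pkg.preprocessing.model_selection")

def train_test_split(data, test_size=0.25):
    """Stand-in splitter: slices a list into train/test parts."""
    cut = int(len(data) * (1 - test_size))
    return data[:cut], data[cut:]

real.train_test_split = train_test_split

# Register the real modules...
sys.modules["pkg"] = pkg
sys.modules["pkg.preprocessing"] = preprocessing
sys.modules["pkg.preprocessing.model_selection"] = real
# ...and alias the shorter path to the very same module object:
sys.modules["pkg.model_selection"] = real

from pkg.model_selection import train_test_split as tts  # now resolves
train, test = tts(list(range(8)))
print(len(train), len(test))  # 6 2
```

In a real package the alias would more commonly be a thin wrapper module that re-exports the names, but the effect for callers is the same.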
I often use -v just to see that something is going on, but a progress bar (enabled by default) would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
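For illustration, a terminal progress bar of the kind described is small to implement; this is a generic Python sketch, not the Futhark/futhark bench code itself:

```python
import sys

def render_bar(i, total, width=30):
    """Build the text of a progress bar like [#####-----] 50/100."""
    filled = width * i // total
    return "[" + "#" * filled + "-" * (width - filled) + f"] {i}/{total}"

def progress_bar(i, total, width=30):
    """Redraw the bar in place by returning the cursor with '\r'."""
    end = "\n" if i == total else ""
    sys.stdout.write("\r" + render_bar(i, total, width) + end)
    sys.stdout.flush()

for step in range(1, 101):
    progress_bar(step, 100)
```

A single self-overwriting line conveys the same "something is going on" signal as -v output, without the log noise.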
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thx!
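One common pattern (scikit-learn's TSNE does this with its kl_divergence_ attribute) is to record the run's divergence on the estimator rather than change what fit_transform returns. A toy sketch of that pattern, with an Embedder class that is purely illustrative:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

class Embedder:
    """Toy fit_transform that records the divergence of its 'run' as an
    attribute, so callers can read it afterwards without an API change."""
    def fit_transform(self, p, q):
        self.kl_divergence_ = kl_divergence(p, q)
        return q  # stand-in for the embedding

emb = Embedder()
emb.fit_transform([0.5, 0.5], [0.5, 0.5])
print(emb.kl_divergence_)  # 0.0 for identical distributions
```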
PR #6447 adds a public API to get the maximum number of registers per thread (numba.cuda.Dispatcher.get_regs_per_thread()). There are other attributes that might be nice to provide: shared memory per block, local memory per thread, const memory usage, maximum block size. These are all available in the FuncAttr named tuple: https://github.com/numba/numba/blob/master/numba/cuda/cudadrv/drive