CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
Here are 3,430 public repositories matching this topic...
I am working on creating a WandbCallback for Weights & Biases. I am glad that CatBoost has a callback system in place, but it would be great if we could extend the interface.
The current callback only supports `after_iteration`, which takes an `info` object. Taking inspiration from the XGBoost callback system, it would be great to also have `before_iteration` (taking `info`), `before_training`, and `after_training`.
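To make the request concrete, here is a minimal sketch of what the extended interface could look like. This is hypothetical: only `after_iteration(info)` exists in CatBoost today, and the hook names, the `info` stand-in, and the toy driver below are assumptions modeled on XGBoost's callback system, not CatBoost's actual API.

```python
class LoggingCallback:
    """Hypothetical callback implementing the proposed four hooks."""

    def __init__(self):
        self.events = []

    def before_training(self):
        self.events.append("before_training")

    def before_iteration(self, info):
        self.events.append(("before_iteration", info.iteration))
        return True  # by analogy with after_iteration, False would stop training

    def after_iteration(self, info):
        self.events.append(("after_iteration", info.iteration))
        return True

    def after_training(self):
        self.events.append("after_training")


class IterationInfo:
    """Minimal stand-in for the info object CatBoost passes to callbacks."""

    def __init__(self, iteration):
        self.iteration = iteration


def run_training_loop(callback, n_iterations):
    """Toy driver showing where each hook would fire during boosting."""
    callback.before_training()
    for i in range(n_iterations):
        callback.before_iteration(IterationInfo(i))
        # ... one boosting iteration would happen here ...
        callback.after_iteration(IterationInfo(i))
    callback.after_training()
```

With hooks at these four points, a WandbCallback could open a run in `before_training`, log per-iteration metrics, and finalize the run in `after_training`.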
Description
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html
https://docs.cupy.dev/en/stable/reference/generated/cupy.corrcoef.html
The argument lists appear to differ between the two functions.
Additional Information
The `dtype` argument was added to `numpy.corrcoef` in NumPy 1.20.
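For reference, a short NumPy snippet (assuming NumPy >= 1.20) showing the `dtype` keyword that `cupy.corrcoef` does not currently accept:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

# NumPy >= 1.20 accepts a keyword-only dtype argument controlling the
# result's data type; cupy.corrcoef has no such parameter.
r = np.corrcoef(x, y, dtype=np.float64)
# r is the 2x2 correlation matrix; x and y here are perfectly correlated.
```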
Is your feature request related to a problem? Please describe.
While reviewing PR #9817 to introduce DataFrame.diff, I noticed that it is restricted to acting on numeric types.
A time-series diff is probably a very common user need: given a series of timestamps, compute the durations between observations.
Pandas supports diffs on non-numeric types like timestamps:
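The pandas behavior being referenced can be demonstrated with a short snippet (illustrative data only):

```python
import pandas as pd

# A series of observation timestamps with uneven gaps.
ts = pd.Series(pd.to_datetime([
    "2022-03-01 00:00:00",
    "2022-03-01 00:05:00",
    "2022-03-01 00:12:00",
]))

# diff() on datetime64 data yields timedelta64 durations
# (the first element is NaT, since it has no predecessor).
gaps = ts.diff()
```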
Can a tmfile be produced directly from training? The tengine-convert-tool conversion gives an error:
tengine-lite library version: 1.4-dev
Get input tensor failed

Or is there an example of training that produces the tmfile below?
Hello, how can an ONNX model be loaded? So far I have only found an Oneflow->ONNX tool, not an ONNX->Oneflow tool.
Hey everyone!
mapd-core-cpu is already available on conda-forge (https://anaconda.org/conda-forge/omniscidb-cpu), so now we should add some instructions to the documentation.
At the moment it is available for Linux and macOS.
Some additional information about the configuration:
- For now, always install `omniscidb-cpu` inside a conda environment (which is also good practice), e.g.:
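A plausible set of commands, assuming the conda-forge package name linked above (the environment name is arbitrary):

```shell
# create a dedicated environment and install the CPU build from conda-forge
conda create -n omnisci-env -c conda-forge omniscidb-cpu
conda activate omnisci-env
```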
Describe the bug
If `min_samples_split` is a small float, it may resolve to splitting on fewer than 2 samples. This causes cuml to blow up:
RuntimeError: exception occured! file=../src/decisiontree/decisiontree.cu line=41: Invalid value for min_samples_split: 1. Should be >= 2.
Obtained 64 stack frames
#0 in /home/mboling/miniconda3/lib/python3.8/site-packages/cuml/common/../../..
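The likely mechanism can be sketched in a few lines. Note the conversion below follows the scikit-learn convention (a float `min_samples_split` is treated as a fraction of the dataset, rounded up); it is an assumption about cuml's internals for illustration, and `resolve_min_samples_split` is a hypothetical helper, not a cuml function:

```python
import math

def resolve_min_samples_split(min_samples_split, n_samples):
    """Convert a fractional min_samples_split to an absolute sample count
    (scikit-learn-style interpretation, assumed here for illustration)."""
    if isinstance(min_samples_split, float):
        return math.ceil(min_samples_split * n_samples)
    return min_samples_split

# A small fraction on a modest dataset resolves to 1, which is below the
# minimum of 2 that the decision-tree code requires, hence the RuntimeError.
resolved = resolve_min_samples_split(0.001, 100)  # -> 1
```

Validating the resolved value (or clamping it to 2) before launching the tree builder would turn the crash into a clear error message.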
In order to test manually altered IR, it would be nice to have a `--skip-compilation` flag for `futhark test`, just like we do for `futhark bench`.
Created by Nvidia
Released June 23, 2007
- Website: developer.nvidia.com/cuda-zone
I see comments suggesting adding this to understand how loops are being handled by numba, and in their own FAQ (https://numba.pydata.org/numba-doc/latest/user/faq.html).
You would then create your njit function and run it, and I believe the idea is that it prints debug information about whether