CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
Here are 3,395 public repositories matching this topic...
I am working on creating a WandbCallback for Weights & Biases. I am glad that CatBoost has a callback system in place, but it would be great if we could extend the interface.
The current callback only supports `after_iteration`, which takes `info`. Taking inspiration from the XGBoost callback system, it would be great to have a `before_iteration` that takes `info`, plus `before_training` and `after_training`.
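A minimal sketch of what such an extended callback could look like. Only `after_iteration(info)` (passed via `fit(callbacks=[...])`) exists in CatBoost today; the other hook names below illustrate the *proposed* extension and are assumptions, not current API:

```python
class WandbCallback:
    """Sketch of a logging callback. Only after_iteration(info) exists in
    CatBoost today; the other hooks illustrate the proposed extension."""

    def __init__(self):
        self.logged_iterations = []

    def before_training(self):
        # Proposed hook: e.g. start a wandb run here.
        pass

    def before_iteration(self, info):
        # Proposed hook: inspect state before the iteration runs.
        pass

    def after_iteration(self, info):
        # Existing hook: `info` carries the iteration number and metric values.
        self.logged_iterations.append(info.iteration)
        return True  # returning False stops training early

    def after_training(self):
        # Proposed hook: e.g. finish the wandb run here.
        pass
```

The early-stopping contract (return `True` to continue) is the one the existing `after_iteration` hook already uses.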
Description
Calling `cupy.vectorize` with a non-None value for the `signature` parameter outputs this error message about the `excluded` parameter:
NotImplementedError: cupy.vectorize does not support `excluded` option currently.
Inspecting the code, this is an obvious copy-paste error: the second error message should say `signature` rather than `excluded`.
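For context, the `signature` option comes from the `numpy.vectorize` interface that `cupy.vectorize` mirrors: it makes each call operate on whole core dimensions rather than scalars. A NumPy sketch of what the unsupported option does:

```python
import numpy as np

# With signature="(n)->()", each call receives a whole length-n vector
# and returns a scalar, instead of being applied element-by-element.
row_range = np.vectorize(lambda v: v.max() - v.min(), signature="(n)->()")

# One scalar per row of the input.
result = row_range(np.array([[1, 5, 3], [2, 2, 9]]))
```

`excluded`, by contrast, names arguments to pass through unvectorized, which is why mixing up the two in an error message is confusing.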
Is your feature request related to a problem? Please describe.
While reviewing PR #9817, which introduces DataFrame.diff, I noticed that it is restricted to acting on numeric types.
A time-series diff is probably a very common user need: given a series of timestamps, users often want the durations between observations.
Pandas supports diffs on non-numeric types like timestamps:
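A short illustration of the pandas behavior being referenced: `Series.diff` on a datetime series yields timedeltas (the dates here are made up for the example):

```python
import pandas as pd

ts = pd.Series(pd.to_datetime(["2022-01-01", "2022-01-02", "2022-01-04"]))

# First element is NaT; the rest are Timedeltas (1 day, then 2 days).
deltas = ts.diff()
```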
Can training produce the tmfile directly? Because running tengine-convert-tool's convert gives an error:
tengine-lite library version: 1.4-dev
Get input tensor failed

Or is there an example that can train and produce the tmfile below?
and `model = NeuralNetwork().to(DEVICE)`,
then try to use `model.to_global` to allocate the model to GPU clusters, but it
Cross-entropy loss API design
Report needed documentation
While the estimator guide offers a great breakdown of how to use many of the tools in api_context_managers.py, it would be helpful to have that information right in the docstrings, so that during development it is easier to understand what each of the provided functions/classes/methods actually does. This is particularly important for
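As a sketch of the kind of in-docstring documentation being requested — the context-manager name and behavior here are hypothetical illustrations, not taken from api_context_managers.py:

```python
import contextlib


@contextlib.contextmanager
def using_output_type(output_type):
    """Temporarily set the output container type inside the block.

    (Hypothetical example of the requested docstring style.)

    Parameters
    ----------
    output_type : str
        Name of the desired output container, e.g. "numpy".

    Yields
    ------
    str
        The output type in effect inside the ``with`` block.
    """
    yield output_type
```

A docstring like this makes the function's contract visible from the editor, without needing to cross-reference the estimator guide.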
In order to test manually altered IR, it would be nice to have a `--skip-compilation` flag for `futhark test`, just like we do for `futhark bench`.
Created by Nvidia
Released June 23, 2007
- Website
- developer.nvidia.com/cuda-zone
- Wikipedia
I see comments suggesting adding this to understand how loops are being handled by Numba, and it also appears in their own FAQ (https://numba.pydata.org/numba-doc/latest/user/faq.html).
You would then create your njit function and run it, and I believe the idea is that it prints debug information about whether