gpu
Here are 2,663 public repositories matching this topic...
As shown in taichi-dev/taichi#3910, replacing @property with simple attributes can speed up the Python part of taichi a lot.
The lesson learned is that we should avoid @property where possible, since it is expensive. So let's review the usages of @property in our Python codebase and replace them wherever we can.
Here's a list from a simple grep of our codebase showing
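A minimal, self-contained sketch (not taichi's actual code; the class and attribute names are hypothetical) of the kind of replacement being proposed, with a micro-benchmark comparing the two access paths:

```python
import timeit

class WithProperty:
    def __init__(self):
        self._shape = (128, 128)

    @property
    def shape(self):             # every access goes through the descriptor protocol
        return self._shape

class WithAttribute:
    def __init__(self):
        self.shape = (128, 128)  # plain attribute: a single instance-dict lookup

p, a = WithProperty(), WithAttribute()
print("property :", timeit.timeit(lambda: p.shape, number=1_000_000))
print("attribute:", timeit.timeit(lambda: a.shape, number=1_000_000))
```

On CPython the plain attribute read is typically noticeably faster, which is the effect the linked PR exploits.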
At the moment, the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a configuration option to relu_layer.
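A minimal sketch of the desired configuration, assuming the threshold behaves as in a standard thresholded ReLU (values above the threshold pass through, everything else becomes zero); the function name and signature here are hypothetical, not the actual op interface:

```python
import numpy as np

def relu_layer(x, threshold=0.0):
    # Hypothetical configurable threshold: keep values strictly above it, zero the rest.
    return np.where(x > threshold, x, 0.0)

print(relu_layer(np.array([-1.0, 0.5, 2.0])))                 # [0.  0.5 2. ]
print(relu_layer(np.array([-1.0, 0.5, 2.0]), threshold=1.0))  # [0.  0.  2. ]
```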
I am working on creating a WandbCallback for Weights and Biases. I am glad that CatBoost has a callback system in place, but it would be great if we could extend the interface.
The current callback only supports after_iteration, which takes info. Taking inspiration from the XGBoost callback system, it would be great if we could have before_iteration (which takes info), before_training, and `after
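A rough sketch of what such a callback object could look like; after_iteration is the hook CatBoost already calls per boosting iteration, while before_training and before_iteration are the hooks this request proposes and do not exist yet:

```python
class WandbCallback:
    def after_iteration(self, info):
        # Existing hook: called after each boosting iteration with an info object
        # (iteration number and current metric values). Returning True continues training.
        print(f"iteration {info.iteration}: {info.metrics}")
        return True

    def before_training(self, info):
        # Proposed hook (not yet available): a natural place for wandb.init(), for example.
        pass

    def before_iteration(self, info):
        # Proposed hook (not yet available).
        return True
```

A callback instance would then be passed to fit via its callbacks argument, e.g. model.fit(X, y, callbacks=[WandbCallback()]).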
Hi,
I have tried out both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences that I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained every time I run the code, for some reason.
Please look into this if you could.
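For context, a minimal sketch of the two call patterns being compared; the model, inputs, and DeepSpeed config here are placeholders, and the point is only that plain PyTorch exposes retain_graph on backward while the engine's backward is what the report says does not honour it:

```python
import torch
import deepspeed

model = torch.nn.Linear(8, 1)
inputs = torch.randn(4, 8)

# Plain PyTorch: the autograd graph can be kept alive for a second backward pass.
loss = model(inputs).sum()
loss.backward(retain_graph=True)
loss.backward()  # works only because the graph was retained above

# DeepSpeed: the engine wraps the backward pass (loss scaling, ZeRO, ...).
ds_config = {"train_batch_size": 4, "optimizer": {"type": "Adam", "params": {"lr": 1e-3}}}
model_engine, _, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
out = model_engine(inputs.to(model_engine.device))
model_engine.backward(out.sum())  # no equivalent retain_graph behaviour, per the report
model_engine.step()
```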
I want to preemptively start this thread to survey for suggestions. A cursory search led me to this promising repository: https://github.com/enigo-rs/enigo
Since closing the window is a common point of failure, that will be the focus of the first pass of testing as I learn how to use the library.
Components for testing:
- bridge
- editor
- renderer
- settings
- wind
Our users are often confused by the output from programs such as zip2john sometimes being very large (multi-gigabyte). Maybe we should identify and enhance these programs to print a message to stderr explaining that it's normal for the output to be very large - either always, or only when the output size is above a threshold (e.g., 1 million bytes?).
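A minimal sketch of the proposed behaviour, written in Python rather than john's actual C sources; the threshold value comes from the suggestion above, everything else (names, wording) is hypothetical:

```python
import sys

LARGE_OUTPUT_THRESHOLD = 1_000_000  # bytes, per the suggested cut-off above
_warned = False

def maybe_warn_large_output(bytes_written):
    # Print a one-time note on stderr once the emitted "hash" grows past the threshold,
    # so users know that multi-gigabyte output is expected for some inputs.
    global _warned
    if not _warned and bytes_written > LARGE_OUTPUT_THRESHOLD:
        print("Note: the output for this input is large; this is normal and expected.",
              file=sys.stderr)
        _warned = True
```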
Description
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html
https://docs.cupy.dev/en/stable/reference/generated/cupy.corrcoef.html
It seems the arguments are different.
Additional Information
The dtype argument was added in NumPy version 1.20.
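A short illustration of the difference, assuming the signatures at the time of this report: numpy.corrcoef accepts a dtype keyword (since NumPy 1.20) that cupy.corrcoef does not:

```python
import numpy as np
import cupy as cp

x = np.random.rand(3, 10)

np.corrcoef(x, dtype=np.float32)              # OK: dtype keyword exists since NumPy 1.20
cp.corrcoef(cp.asarray(x), dtype=cp.float32)  # TypeError here: cupy.corrcoef has no dtype argument
```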
Is your feature request related to a problem? Please describe.
The value of alpha is currently hardcoded to 0.05 in many places.
Describe the solution you'd like
Accept it as a setup argument and use it everywhere for consistency. The default value can be 0.05.
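A minimal sketch of the proposed change, with hypothetical names; the significance level becomes a single setup argument read everywhere instead of a scattered literal:

```python
class ExperimentConfig:
    def __init__(self, alpha: float = 0.05):
        # Single source of truth for the significance level; defaults to 0.05.
        self.alpha = alpha

def is_significant(p_value: float, config: ExperimentConfig) -> bool:
    # Every check reads config.alpha instead of hardcoding 0.05.
    return p_value < config.alpha

print(is_significant(0.03, ExperimentConfig()))             # True with the default alpha
print(is_significant(0.03, ExperimentConfig(alpha=0.01)))   # False with a stricter alpha
```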
Is your feature request related to a problem? Please describe.
Hi,
While porting some code from pandas to cuDF, I have noticed that cuDF Series do not support the unstack method.
As an additional request, it would be great if fill_value could be supported in both the cudf.DataFrame.unstack and cudf.Series.unstack methods. Thanks!
Describe the solution you'd like
To have that meth
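For reference, the pandas behaviour being asked for on the cuDF side; a small example using pandas.Series.unstack with fill_value:

```python
import pandas as pd

s = pd.Series(
    [1, 2, 3],
    index=pd.MultiIndex.from_tuples([("a", "x"), ("a", "y"), ("b", "x")]),
)

# Pivot the inner index level into columns, filling the missing ("b", "y") cell with 0.
print(s.unstack(fill_value=0))
#    x  y
# a  1  2
# b  3  0
```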
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: Python 3.8.10
The program got stuck at net.load when I was trying to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
Test device: Google Nexus 5X (Android 8.1.0)
Backend: VULKAN
2022-01-25 10:55:31.018 27110-27110/name.jinleili.wgpu W/wgpu_hal::vulkan::ada..: sample_rate_shading feature is not supported, hiding adapter: Adreno (TM) 418
2022-01-25 10:55:31.019 27110-27110/name.jinleili.wgpu A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 27110 (e.jinleili.wgpu), pid 27110 (e.jinleili.wgpu)<deta

Usage of RRefContext::handleException in torch/csrc/distributed/rpc/rref_context.cpp is wrong when the future has an error. RRefContext::handleException uses TORCH_CHECK, which throws. Callers of RRefContext::handleException don't expect that and run code after it without any guarding.
Versions
master
cc @pietern @mrshenli @pritamdamania87