autograd
Here are 115 public repositories matching this topic...
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: Python 3.8.10
The program got stuck at net.load when I was trying to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
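One way to diagnose a hang like this is to run the blocking load in a worker thread with a timeout, so the script reports the stall instead of waiting indefinitely. The sketch below is illustrative only: `load_with_timeout` and the dummy loader are hypothetical helpers, not part of MegEngine or MegFlow.

```python
import threading

def load_with_timeout(load_fn, timeout_s=600.0):
    """Run a blocking load function in a worker thread and raise
    TimeoutError if it does not finish within `timeout_s` seconds."""
    result = {}

    def worker():
        result["model"] = load_fn()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if t.is_alive():
        raise TimeoutError(f"load did not finish within {timeout_s}s")
    return result["model"]

# A fast dummy loader stands in for the real net.load call:
model = load_with_timeout(lambda: "loaded", timeout_s=5.0)
print(model)  # -> loaded
```

With the real call substituted for the lambda, a raised TimeoutError confirms the load itself is stuck rather than merely slow.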
YOLO Model
Description
Implement a YOLO model and add it to the DJL model zoo
References
Issue to track tutorial requests:
- Deep Learning with PyTorch: A 60 Minute Blitz - #69
- Sentence Classification - #79
Feature details
Due to the similarity, it is easy to confuse qml.X and qml.PauliX, especially since other methods of specifying circuits, e.g., QASM, use x for PauliX. But if a user uses qml.X in their circuit on a qubit device, nothing happens to inform them that the incorrect operation is being used:
@qml.qnode(dev)
def circ():
    qml.PauliX(wires=0)
    qml.Hadamard(wires=0)
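The request amounts to warning the user when an unrecognized operation reaches the device. As a plain-Python sketch of that idea (not PennyLane's actual validation mechanism; `SUPPORTED_OPS` and `validate_ops` are hypothetical names), a device could compare incoming operation names against its supported set and warn on anything unknown:

```python
import warnings

# Hypothetical set of operation names a qubit device recognizes.
SUPPORTED_OPS = {"PauliX", "PauliY", "PauliZ", "Hadamard", "CNOT"}

def validate_ops(op_names, supported=SUPPORTED_OPS):
    """Warn about any operation name the device does not recognize, so a
    silently ignored gate (e.g. `X` instead of `PauliX`) is surfaced."""
    unknown = [name for name in op_names if name not in supported]
    for name in unknown:
        warnings.warn(f"Operation '{name}' is not supported by this device "
                      f"and may be silently ignored.", UserWarning)
    return unknown

print(validate_ops(["X", "Hadamard"]))  # -> ['X']
```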
Spike-time decoding
Add a function and module that permit spike-time decoding, as suggested by @schmitts: https://twitter.com/sbstnschmtthd/status/1432343373072019461
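As a sketch of what spike-time (time-to-first-spike) decoding could look like, independent of any particular SNN library (the function name and the list-based spike-train encoding below are illustrative assumptions):

```python
def first_spike_times(spike_train):
    """Decode each neuron's output as the time step of its first spike.

    `spike_train` is a list of time steps, each a list of 0/1 values
    (one per neuron). Returns the first-spike index per neuron, or
    None for a neuron that never fires.
    """
    num_neurons = len(spike_train[0])
    times = [None] * num_neurons
    for t, step in enumerate(spike_train):
        for n, fired in enumerate(step):
            if fired and times[n] is None:
                times[n] = t
    return times

# Three time steps, two neurons: neuron 0 first fires at t=1,
# neuron 1 never fires.
print(first_spike_times([[0, 0], [1, 0], [1, 0]]))  # -> [1, None]
```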
I think it would be very useful to have learning rate schedulers lr_cyclic() (https://arxiv.org/abs/1506.01186, Python source at https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CyclicLR) and lr_cosine_annealing_warm_restarts() (https://arxiv.org/abs/1608.03983, Python source at https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CosineAnnealin
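For reference, the two schedules can be sketched as pure functions of the step count. The parameter names and default values below are illustrative, not the proposed API:

```python
import math

def lr_cyclic(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclic learning rate (Smith, 2015): the rate ramps
    linearly from base_lr up to max_lr and back over 2*step_size steps."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

def lr_cosine_annealing_warm_restarts(step, lr_max=1e-2, lr_min=0.0, t_0=1000):
    """Cosine annealing with warm restarts (Loshchilov & Hutter, 2016),
    simplified to a fixed restart period t_0 (i.e. T_mult = 1)."""
    t_cur = step % t_0
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t_cur / t_0))

print(lr_cyclic(2000))                          # peak of the first cycle
print(lr_cosine_annealing_warm_restarts(1000))  # rate resets at each restart
```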
The init module has been deprecated, and the recommended approach for generating initial weights is to use the Template.shape method:
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2)  # deprecated
>>> np.random.random(StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach
We should upd
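A self-contained sketch of the new pattern, assuming the template's shape method returns a plain tuple such as (n_layers, n_wires, 3) for StronglyEntanglingLayers (the stand-in function below is hypothetical, used only so the example runs without PennyLane):

```python
import numpy as np

# Hypothetical stand-in for StronglyEntanglingLayers.shape: assuming each
# layer applies 3 rotation angles per wire, the parameter tensor has
# shape (n_layers, n_wires, 3).
def strongly_entangling_layers_shape(n_layers, n_wires):
    return (n_layers, n_wires, 3)

shape = strongly_entangling_layers_shape(n_layers=3, n_wires=2)
weights = np.random.random(size=shape)
print(weights.shape)  # -> (3, 2, 3)
```

The advantage of the shape-based approach is that any sampling routine (normal, uniform, a fixed seed, etc.) can be used to fill the parameter tensor, instead of one dedicated init function per template and distribution.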
Okay, so this might not exactly be a "good first issue" - it is a little more advanced, but it is still very much accessible to newcomers.
Similar to the mygrad.nnet.max_pool function, I would like there to be a mean-pooling layer. That is, a convolution-style window is strided over the input, an
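A minimal NumPy sketch of the requested behavior (the function name and signature are illustrative, not mygrad's API): a pool-sized window strides over the input, and each output value is the mean of the window it covers.

```python
import numpy as np

def mean_pool_2d(x, pool=2, stride=2):
    """Mean-pool a 2D array: slide a pool x pool window over `x` with
    the given stride and average the values inside each window."""
    h, w = x.shape
    out_h = (h - pool) // stride + 1
    out_w = (w - pool) // stride + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            r, c = i * stride, j * stride
            out[i, j] = x[r:r + pool, c:c + pool].mean()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(mean_pool_2d(x))  # -> [[ 2.5  4.5] [10.5 12.5]]
```

A differentiable version would also need the backward pass, which for mean pooling simply spreads each output gradient uniformly over its window.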
Usage of these two variables in tests is outdated. One should just write in the dtypes in question or use one of the ATen macro dispatch list functions (like get_all_fp_dtypes()).
Alternatives
No response
Additional context
No response
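A framework-free sketch of the recommendation (both helpers below are hypothetical stand-ins, not PyTorch's actual test utilities): either enumerate the dtypes explicitly in the test, or pull them from a dispatch-list helper in the spirit of get_all_fp_dtypes().

```python
# Hypothetical dispatch-list helper: return the floating-point dtype
# names a test should cover, with optional reduced-precision types.
def get_all_fp_dtypes(include_half=True, include_bfloat16=True):
    dtypes = ["float32", "float64"]
    if include_half:
        dtypes.append("float16")
    if include_bfloat16:
        dtypes.append("bfloat16")
    return dtypes

# Writing the dtypes in question explicitly, per the recommendation:
def run_for_dtypes(test_fn, dtypes=("float32", "float64")):
    """Run `test_fn` once per dtype name and collect the results."""
    return {dt: test_fn(dt) for dt in dtypes}

results = run_for_dtypes(lambda dt: f"ran with {dt}")
print(sorted(results))  # -> ['float32', 'float64']
```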