automatic-differentiation
Here are 229 public repositories matching this topic...
I'm using TF 2.0, and I get an error when I import tangent, because its list of non-differentiable functions includes tf.to_float (line 60), which is deprecated:
https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/to_float
I found that the function mod2pi is not implemented yet, but mod works. Is there a list of implemented functions? A minimal working example:

```julia
using Zygote

# This works
gradient(x -> mod(x, 2pi), 1.)

# This does not
gradient(x -> mod2pi(x), 1.)
```
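To see why differentiating through mod is well-defined, here is a small pure-Python sketch (in Python rather than Julia, for illustration only) that checks numerically that the derivative of x mod 2π is 1 away from multiples of 2π, which matches what `gradient(x -> mod(x, 2pi), 1.)` returns:

```python
import math

def mod_2pi(x):
    # Reduce x into [0, 2*pi), mirroring mod(x, 2pi) / mod2pi.
    return x % (2 * math.pi)

def finite_difference(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Away from multiples of 2*pi, the derivative of x mod 2*pi is 1.
print(round(finite_difference(mod_2pi, 1.0), 6))
```

So an adjoint for mod2pi could plausibly reuse the same rule as mod; the missing piece is just that no rule has been registered for it.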
Summary:
The functions for the categorical distribution only accept a column vector; it would be great if they also accepted row vectors.
Description:
I use the categorical distribution over a matrix of N_obs x N_probabilities, so row vectors are more natural for me than column vectors.
Current functions:

```stan
real categorical_lpmf(ints y | vector theta)
real
```
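For illustration, here is a minimal Python sketch (hypothetical, not the actual Stan implementation or signature) of a shape-agnostic categorical log-pmf: once theta is treated as a flat sequence of probabilities, the row-vector vs. column-vector distinction disappears.

```python
import math

def categorical_lpmf(y, theta):
    """Log-pmf of a categorical outcome y (1-based, as in Stan).

    theta may be any flat sequence of probabilities; flattening
    makes row vs. column orientation irrelevant.
    (Hypothetical sketch, not the real Stan function.)
    """
    probs = list(theta)
    if not math.isclose(sum(probs), 1.0, rel_tol=1e-9):
        raise ValueError("theta must sum to 1")
    return math.log(probs[y - 1])

# P(y = 2) = 0.5, so the log-pmf is log(0.5)
print(round(categorical_lpmf(2, [0.3, 0.5, 0.2]), 4))
```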
Debugging Kotlin∇ code within IntelliJ IDEA can be somewhat cumbersome due to the functional API structure (deeply nested stack traces and frequent context switching). To make debugging more user-friendly, we should add support for visual debugging by exposing Kaliningraph's built-in graph visualization capabilities. For example, the use
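As a rough illustration of what such graph export could look like (a Python sketch rather than Kotlin, with hypothetical names and no connection to Kaliningraph's actual API), an expression graph can be rendered for visual debugging by emitting Graphviz DOT:

```python
def to_dot(edges):
    # edges: list of (child, parent) pairs in an expression graph.
    lines = ["digraph G {"]
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

# A tiny graph for the expression (x + y) * x.
graph = [("x", "add"), ("y", "add"), ("add", "mul"), ("x", "mul")]
print(to_dot(graph))
```

The resulting DOT text can be pasted into any Graphviz viewer, which is often easier than stepping through nested stack frames.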
Description of your problem
This is a bit hard to reproduce, because it first requires invalidating the compile cache. (This sometimes happens to me if an IPython kernel stays active but unused for more than half a day.) Nevertheless, I think inspecting the code is sufficient to see the problem. In [module_from_key](https://github.com/aesara-devs/aesara/blob/5213962b97c5c8be6353427c212a3d9254f0845f/a
The init module has been deprecated, and the recommended approach for generating initial weights is to use the Template.shape method:

```python
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2)  # deprecated
>>> np.random.random(StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach
```

We should upd
profiles.h updates
At the moment, profiles.h (in pkg/profiles) lacks many (any?) comments. Also, many variables are declared somewhat separately from where they are associated with heap storage.
Both of these make it a bit hard to read.
It would also be nicer if it were called PROFILES.h.
Some of them can be ported over from Zygote.
cf. FluxML/Zygote.jl#906
https://github.com/FluxML/Zygote.jl/blob/956cbcf3c572c0eb09c146189bb38b1b434634ff/src/lib/array.jl#L130
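For context on what "porting an adjoint" involves, here is a minimal pure-Python sketch (illustrative only, not Zygote's actual code) of the value-plus-pullback style used for reverse-mode rules like those in src/lib/array.jl:

```python
def rrule_sum(xs):
    """Reverse-mode rule for sum: returns the value and a pullback.

    The pullback maps the output cotangent dy to input cotangents,
    broadcasting dy to every element (a sketch of the adjoint style
    used in Zygote-like libraries, not their real API).
    """
    y = sum(xs)
    def pullback(dy):
        return [dy for _ in xs]
    return y, pullback

y, back = rrule_sum([1.0, 2.0, 3.0])
print(y, back(1.0))
```

Porting a rule then amounts to translating the forward computation and its pullback closure into the target framework's rule-registration mechanism.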
In operations_broadcast_test.go there are some tests that are not yet filled in. The point is to test that broadcasting works for different shapes. The semantics of broadcasting probably aren't clear, so please do send me a message with any questions.
This is a good first issue for anyone looking to get involved.
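As a reference for what the tests should check, here is a small Python sketch (Python rather than Go, for illustration; the function name is hypothetical) of NumPy-style broadcasting semantics, where shapes are aligned from the right and each dimension pair must be equal or contain a 1:

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """Compute the broadcast result shape of two shapes (NumPy-style).

    Dimensions are aligned from the right; each pair must be equal,
    or one of them must be 1 (missing dimensions count as 1).
    """
    result = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x == y or x == 1 or y == 1:
            result.append(max(x, y))
        else:
            raise ValueError(f"cannot broadcast {a} with {b}")
    return tuple(reversed(result))

print(broadcast_shape((3, 1), (1, 4)))   # a (3,1) and a (1,4) combine to (3,4)
```

Each empty test case in the Go file can then assert the expected output shape (or an error) for a pair of input shapes, mirroring this rule.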