nmt
Here are 170 public repositories matching this topic...
https://github.com/JayParks/tf-seq2seq/blob/master/seq2seq_model.py#L368
It raises an error saying that dimension 0 of the inputs and the attention do not match (since we tile_batch them to batch_size * beam_width). Didn't you get any error while running with beam_search?
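For anyone hitting the same thing: the usual fix in TF 1.x is to tile every tensor the attention mechanism reads by beam_width before wrapping the cell, and to build the initial state for batch_size * beam_width. A minimal sketch, assuming a GRU decoder and placeholder encoder tensors; all names and sizes below are illustrative, not taken from the linked repo:

```python
import tensorflow as tf
from tensorflow.contrib import seq2seq

# Illustrative placeholders; in a real model these come from your encoder.
batch_size, max_time, num_units, vocab_size, beam_width = 32, 20, 128, 10000, 5
encoder_outputs = tf.placeholder(tf.float32, [batch_size, max_time, num_units])
encoder_state = tf.placeholder(tf.float32, [batch_size, num_units])  # GRU state
source_lengths = tf.placeholder(tf.int32, [batch_size])
embedding = tf.get_variable("embedding", [vocab_size, num_units])
decoder_cell = tf.nn.rnn_cell.GRUCell(num_units)

# Tile everything the attention mechanism reads, so that dimension 0
# becomes batch_size * beam_width and matches the decoder's inputs.
tiled_outputs = seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
tiled_lengths = seq2seq.tile_batch(source_lengths, multiplier=beam_width)
tiled_state = seq2seq.tile_batch(encoder_state, multiplier=beam_width)

attention = seq2seq.LuongAttention(
    num_units=num_units,
    memory=tiled_outputs,
    memory_sequence_length=tiled_lengths)
attn_cell = seq2seq.AttentionWrapper(
    decoder_cell, attention, attention_layer_size=num_units)

# The initial state must likewise be built for batch_size * beam_width.
initial_state = attn_cell.zero_state(
    batch_size=batch_size * beam_width, dtype=tf.float32)
initial_state = initial_state.clone(cell_state=tiled_state)

decoder = seq2seq.BeamSearchDecoder(
    cell=attn_cell,
    embedding=embedding,
    start_tokens=tf.fill([batch_size], 1),  # hypothetical <s> id
    end_token=2,                            # hypothetical </s> id
    initial_state=initial_state,
    beam_width=beam_width)
outputs, _, _ = seq2seq.dynamic_decode(decoder, maximum_iterations=max_time)
```

If any of the encoder-side tensors (outputs, lengths, or state) is left untiled, the attention mechanism sees batch_size rows while the decoder produces batch_size * beam_width, which is exactly the dimension-0 mismatch described above.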
Thanks for this wonderful library, but it would be much more intuitive for users getting started if you provided a simple but clear walkthrough of the training process on a self-defined dataset.
This could be visualized in another table where, for example, the system's confidence across different documents could be compared and contrasted.
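A quick sketch of what such a table could look like, using pandas and entirely made-up documents and confidence values:

```python
import pandas as pd

# Hypothetical per-prediction records: document id, whether the
# prediction was correct, and the system's confidence in it.
records = pd.DataFrame({
    "document":   ["doc1", "doc1", "doc2", "doc2", "doc3"],
    "correct":    [True, False, True, True, False],
    "confidence": [0.91, 0.48, 0.87, 0.79, 0.55],
})

# One row per document, so confidence and accuracy can be
# compared and contrasted across documents at a glance.
table = records.groupby("document").agg(
    mean_confidence=("confidence", "mean"),
    accuracy=("correct", "mean"),
    n_predictions=("confidence", "size"),
)
print(table)
```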
Based on this line of code:
https://github.com/ufal/neuralmonkey/blob/master/neuralmonkey/decoders/output_projection.py#L125
The current implementation isn't flexible enough: if we train a "submodel" (e.g. a decoder without attention, i.e. one not containing any ctx_tensors), we cannot use the trained variables to initialize a model that has attention defined, because the size of the dense layer's input (and hence of its weight matrix) becomes different.
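To make the mismatch concrete, here is a simplified sketch (not the actual Neural Monkey code): the projection kernel's shape depends on how many context tensors are concatenated into its input, so the two graphs hold incompatible variables.

```python
import tensorflow as tf

# Illustrative sizes only.
rnn_size, emb_size, ctx_size, out_size = 256, 128, 256, 256
state = tf.placeholder(tf.float32, [None, rnn_size])
prev_emb = tf.placeholder(tf.float32, [None, emb_size])
ctx = tf.placeholder(tf.float32, [None, ctx_size])  # attention context

# Without attention: kernel shape [rnn_size + emb_size, out_size].
with tf.variable_scope("no_attn"):
    proj_a = tf.layers.dense(tf.concat([state, prev_emb], 1), out_size)

# With attention: kernel shape [rnn_size + emb_size + ctx_size, out_size].
with tf.variable_scope("with_attn"):
    proj_b = tf.layers.dense(tf.concat([state, prev_emb, ctx], 1), out_size)

for v in tf.trainable_variables():
    print(v.name, v.shape)
# no_attn/dense/kernel   (384, 256)
# with_attn/dense/kernel (640, 256)
```

Because the kernel shapes differ, a plain Saver restore from the submodel checkpoint fails for the attention model; a workaround would be to restore only the slice of the kernel covering the shared inputs and initialize the context columns fresh, but that requires explicit slicing rather than a direct restore.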