language-model
Here are 567 public repositories matching this topic...
chooses 15% of tokens
The paper says:
Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my
dog is hairy it chooses hairy.
This reads as if exactly 15% of the tokens are always chosen.
However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68,
each token independently has a 15% chance of going through the follow-up masking procedure, so the actual fraction chosen varies from sentence to sentence.
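The difference between the two readings can be sketched in a few lines (function names here are illustrative, not from the repo):

```python
import random

# Reading 1 (the paper's wording): choose exactly 15% of the
# token positions at random, so the count is fixed per sentence.
def mask_exact_fraction(tokens, fraction=0.15):
    n = max(1, round(len(tokens) * fraction))
    return set(random.sample(range(len(tokens)), n))

# Reading 2 (what dataset.py#L68 does): each token independently
# has a 15% chance of entering the masking procedure, so the number
# of chosen tokens varies per sentence and can even be zero.
def mask_per_token(tokens, prob=0.15):
    return {i for i in range(len(tokens)) if random.random() < prob}
```

For a 20-token sentence, the first version always selects 3 positions, while the second selects 3 only on average.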
PositionalEmbedding
Question
Hi, I have been experimenting with the QA capabilities of Haystack. I was wondering whether it is possible for the model to output paragraph-like contexts.
Additional context
So far, when a question is asked, the model outputs an answer and the context in which the answer can be found. The context output by the model is oftentimes fragments of a sentence or fragments of a
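One way to get fuller contexts, independent of any Haystack API, is to post-process the answer span: expand its character offsets outward to the nearest sentence boundaries in the source document. A minimal sketch (the function name and boundary heuristic are my own, not part of Haystack):

```python
def expand_context(document: str, answer_start: int, answer_end: int) -> str:
    """Expand a character span [answer_start, answer_end) to the
    enclosing sentence(s), using simple punctuation as boundaries."""
    # Walk left to just after the previous sentence terminator
    # (or to the start of the document).
    start = answer_start
    while start > 0 and document[start - 1] not in ".!?":
        start -= 1
    # Walk right to just after the next sentence terminator
    # (or to the end of the document).
    end = answer_end
    while end < len(document) and document[end - 1] not in ".!?":
        end += 1
    return document[start:end].strip()
```

A real implementation would want a proper sentence splitter (e.g. one from nltk or spaCy) instead of this punctuation heuristic, which breaks on abbreviations and decimals.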
modeling_longformer.py has the classes LongformerForSequenceClassification, LongformerForMultipleChoice and LongformerForTokenClassification, which are not present in modeling_tf_longformer.py at the moment. Those classes should also be added to modeling_tf_longformer.py.
Motivation
The pretrained weights for TFLongformer are available so that these
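For anyone picking this up, the shape contract such a head has to satisfy is small: pool the hidden state of the first token and project it to label logits. A NumPy sketch of that contract only (names hypothetical; the actual PyTorch LongformerClassificationHead additionally applies a dense layer, tanh, and dropout before the output projection):

```python
import numpy as np

def sequence_classification_head(hidden_states, W, b):
    """hidden_states: (batch, seq_len, hidden). Pool the first token's
    hidden state and apply a linear projection to num_labels logits."""
    pooled = hidden_states[:, 0, :]   # (batch, hidden)
    return pooled @ W + b             # (batch, num_labels)

rng = np.random.default_rng(0)
hidden = rng.standard_normal((2, 8, 16))   # batch=2, seq_len=8, hidden=16
W = rng.standard_normal((16, 3))           # 3 labels
b = np.zeros(3)
logits = sequence_classification_head(hidden, W, b)
```

The TF port's job is essentially to reproduce this computation (plus the extra dense/dropout layers) with tf.keras layers so the existing pretrained weights load into it.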