bert
Here are 941 public repositories matching this topic...
chooses 15% of tokens
The paper says:
"Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my dog is hairy it chooses hairy."
This reads as though exactly 15% of the tokens are chosen for sure. However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, each token independently has a 15% chance of going through the follow-up masking procedure, so the number of chosen tokens varies around 15%.
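The difference between the two readings can be sketched with a small example (hypothetical helper functions, not the repository's actual code): sampling each token independently with probability 0.15 versus selecting exactly 15% of positions.

```python
import random

def mask_independent(tokens, p=0.15, seed=0):
    # Each token is masked independently with probability p,
    # so the number of masked tokens only averages ~15%.
    rng = random.Random(seed)
    return [i for i, _ in enumerate(tokens) if rng.random() < p]

def mask_exact(tokens, p=0.15, seed=0):
    # Exactly round(p * len(tokens)) positions are chosen,
    # matching the "chooses 15% of tokens for sure" reading.
    rng = random.Random(seed)
    k = max(1, round(p * len(tokens)))
    return sorted(rng.sample(range(len(tokens)), k))

tokens = "my dog is hairy".split() * 25  # 100 tokens
print(len(mask_exact(tokens)))       # always 15 for 100 tokens
print(len(mask_independent(tokens))) # varies around 15 across seeds
```

For long training corpora the two schemes converge on the same expected masking rate, but per-sentence they differ: the independent scheme can mask zero tokens in a short sentence.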
Describe the feature
I think enforcing typing of method parameters can improve the robustness, readability and stability of the code. Using the mypy static type checker, we can see potential improvements for jina.
Usage:
pip install mypy
mypy --ignore-missing-imports jina
Do not get overwhelmed by the errors. Let's slowly keep improving until we can eve
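As a small illustration of what the checker buys you (a hypothetical function, not taken from the jina codebase): once parameters are annotated, mypy flags mismatched call sites before they fail at runtime.

```python
# Hypothetical example, not from jina: with annotations,
# mypy rejects the commented-out call below at check time.

def repeat_greeting(name: str, times: int) -> str:
    """Return the greeting repeated `times` times."""
    return ("hello, " + name + "\n") * times

ok = repeat_greeting("world", 3)       # passes type checking
# bad = repeat_greeting("world", "3")  # mypy error: Argument 2 has
#                                      # incompatible type "str"; expected "int"
print(ok)
```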
If I want to use both of them, how to modify code in aen.py? Thanks a lot.
Ideally, we'd support something like mnli += {pretrain_data_fraction = 0.5}, pretrain_tasks = {mnli,boolq}. Currently, pretrain_data_fraction is a single global argument applied to all pretraining tasks.
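The requested behaviour can be sketched as follows (a hypothetical helper, not jiant's actual API): each task's training set is subsampled by its own fraction, with tasks that specify nothing falling back to the global value, mirroring the current single-argument behaviour.

```python
import random

def subsample_per_task(task_data, fractions, default=1.0, seed=0):
    # task_data:  {task_name: list of training examples}
    # fractions:  {task_name: fraction in (0, 1]} for per-task overrides;
    #             tasks absent from `fractions` use the global `default`,
    #             matching today's single pretrain_data_fraction.
    rng = random.Random(seed)
    out = {}
    for name, examples in task_data.items():
        frac = fractions.get(name, default)
        k = max(1, int(len(examples) * frac))
        out[name] = rng.sample(examples, k)
    return out

data = {"mnli": list(range(100)), "boolq": list(range(100))}
sampled = subsample_per_task(data, {"mnli": 0.5})
print(len(sampled["mnli"]), len(sampled["boolq"]))  # 50 100
```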
modeling_longformer.py has the classes LongformerForSequenceClassification, LongformerForMultipleChoice and LongformerForTokenClassification, which are not present in modeling_tf_longformer.py at the moment. Those classes should be equally added to modeling_tf_longformer.py.
Motivation
The pretrained weights for TFLongformer are available so that these