Natural language processing
Natural language processing (NLP) is a field of computer science that studies how computers process and interact with human language. In the 1950s, Alan Turing published an article proposing a measure of machine intelligence, now called the Turing test. More recent techniques, such as deep learning, have achieved strong results in language modeling, parsing, and many other natural-language tasks.
Not a high priority at all, but it would be more sensible for such a tutorial/testing utility corpus to be implemented elsewhere – maybe under /test/ or some other data- or doc-related module – rather than in gensim.models.word2vec.
Originally posted by @gojomo in RaRe-Technologies/gensim#2939 (comment)
chooses 15% of tokens
The paper states:
Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my
dog is hairy it chooses hairy.
This reads as though exactly 15% of the tokens are chosen.
In https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, however,
each token independently has a 15% chance of going through the follow-up procedure, so the masked fraction only averages 15% and varies from sentence to sentence.
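The difference between the two readings can be sketched as follows (a minimal illustration with hypothetical helper names, not code from the repo):

```python
import random

def mask_exact(tokens, frac=0.15, seed=0):
    # Paper reading: choose exactly 15% of token positions at random.
    rng = random.Random(seed)
    k = max(1, round(len(tokens) * frac))
    return set(rng.sample(range(len(tokens)), k))

def mask_per_token(tokens, frac=0.15, seed=0):
    # Repo behavior: each token independently has a 15% chance of
    # being selected, so the total count is random (binomial).
    rng = random.Random(seed)
    return {i for i in range(len(tokens)) if rng.random() < frac}

tokens = "my dog is hairy".split() * 25  # 100 tokens
print(len(mask_exact(tokens)))      # → 15, every time
print(len(mask_per_token(tokens)))  # varies around 15 across seeds
```

With per-token sampling, short sentences can end up with no masked tokens at all, which is one practical consequence of the discrepancy.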
A very good first issue IMO!
See huggingface/transformers#4829 (comment)
Optionally, use the huggingface/nlp library to get the eval dataset and hook it into the Trainer. Also referenced in huggingface/transformers#6997 (comment)