Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interaction between computers and human language. In the 1950s, Alan Turing published an article that proposed a measure of intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and many other natural-language tasks.
Here are 18,278 public repositories matching this topic...
In gensim/models/fasttext.py:

    model = FastText(
        vector_size=m.dim,
        window=m.ws,
        epochs=m.epoch,
        negative=m.neg,
        # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
        # or model=3 supervised), possibly creating an inconsistent gensim model likely to fail later.
        hs=int(m.loss == 1),
        sg=int(m.model == 2),
    )
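For context, a minimal usage sketch of the code path that reaches this constructor, assuming gensim's `load_facebook_model` helper and a locally available Facebook-format `.bin` file (the file name below is only an example):

```python
from gensim.models.fasttext import load_facebook_model

# Illustrative usage (the .bin path is an example): loading a Facebook-trained
# model goes through the FastText(...) construction shown above.
model = load_facebook_model("cc.en.300.bin")
print(model.wv.most_similar("language", topn=5))
```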
Describe the bug
When downloading this subset as of 3-28-2022, you will encounter a split-size error after the dataset is extracted. The extracted dataset has roughly 6 million rows, while the split expects fewer than 1 million.
Upon digging a little deeper, I downloaded the raw files from https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz and extracted them. A line count via `wc -l` …
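For reproduction, a minimal sketch of loading this subset with the `datasets` library (the dataset name `amazon_us_reviews` and config `PC_v1_00` are assumed here); the split-size check fails during this call:

```python
from datasets import load_dataset

# Loading the PC subset; when the recorded split metadata is out of date, the
# post-extraction verification step raises a split-size mismatch error here.
ds = load_dataset("amazon_us_reviews", "PC_v1_00", split="train")
print(ds.num_rows)
```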
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g., gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor directly, which fails when it tries to load data from my compressed files.
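A hypothetical helper sketching the requested behavior (the function name and gzip-only handling are illustrative, not part of AllenNLP's API); prediction would open its inputs through something like this instead of a plain `open`:

```python
import gzip
from typing import IO

def open_maybe_compressed(path: str, mode: str = "rt") -> IO:
    # Illustrative helper, not AllenNLP API: transparently open gzipped or
    # plain-text input so prediction can read line-delimited data from .gz files.
    if path.endswith(".gz"):
        return gzip.open(path, mode)
    return open(path, mode)

# Example: iterate over JSON-lines input, compressed or not.
with open_maybe_compressed("inputs.jsonl.gz") as f:
    for line in f:
        print(line.rstrip())
```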
Rather than simply caching nltk_data until the cache expires and the entire nltk_data has to be re-downloaded, we should perform a check on index.xml that refreshes the cache whenever it differs from the previously cached version.
I would advise doing this the same way it is done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
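A sketch of the idea in Python (the index URL and hashing scheme are assumptions; in CI this value would feed the cache key rather than run as library code):

```python
import hashlib
import urllib.request

# Assumed index location; hashing it yields a cache key that changes whenever
# the nltk_data index changes, forcing a refresh of the cached nltk_data.
INDEX_URL = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"

def index_cache_key() -> str:
    with urllib.request.urlopen(INDEX_URL) as response:
        return hashlib.sha256(response.read()).hexdigest()

if __name__ == "__main__":
    print(index_cache_key())
```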
This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord: ChainYo#3610.
We are looking for contributors to HuggingFace's ONNX implementation for all available models on the HF hub. There are already a lot of architectures implemented for converting PyTorch models.
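For contributors who want a starting point, a minimal conversion sketch assuming the `transformers.onnx` export path as documented for transformers 4.x (the model name and the `"default"` feature are just examples):

```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.onnx import export
from transformers.onnx.features import FeaturesManager

# Example model; any architecture with a registered OnnxConfig works the same way.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Look up the ONNX config registered for this architecture and feature.
model_kind, onnx_config_cls = FeaturesManager.check_supported_model_or_raise(
    model, feature="default"
)
onnx_config = onnx_config_cls(model.config)

# Export the PyTorch weights to an ONNX graph on disk.
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx")
)
print(onnx_inputs, onnx_outputs)
```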