Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human (natural) language. In the 1950s, Alan Turing published an article that proposed a measure of intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and many other natural-language tasks.
ENV
Python 3.9
jina 2.5.0
Describe the bug
If I try to dump an image blob to an io.BytesIO object, an error is thrown:
```python
from jina import Document
import io

d = Document(uri='steam_data/image_store/8c/5b/8c5b265b9c533636.png')
output = io.BytesIO()
(
    d
    .load_uri_to_image_blob()
    .dump_ima
```

In gensim/models/fasttext.py:
```python
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised)
)
```
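The FIXME above concerns Facebook fastText files saved with loss or model modes that gensim cannot reproduce. One way to handle them is to reject such headers early instead of silently mis-loading; a minimal sketch of that kind of guard (the constants and function name here are hypothetical, not gensim's actual API):

```python
# Hypothetical guard for unsupported Facebook fastText modes. The numeric
# codes follow fastText's on-disk header: loss 1=hs, 2=ns, 3=softmax,
# 4=onevsall; model 1=cbow, 2=skipgram, 3=supervised.
SUPPORTED_LOSSES = {1, 2}
SUPPORTED_MODELS = {1, 2}

def check_fb_header(loss: int, model: int) -> None:
    """Raise early for modes the loader cannot faithfully reconstruct."""
    if loss not in SUPPORTED_LOSSES:
        raise NotImplementedError(f"unsupported fastText loss mode: {loss}")
    if model not in SUPPORTED_MODELS:
        raise NotImplementedError(f"unsupported fastText model mode: {model}")
```

Calling this before constructing the FastText object would turn a silent FIXME into an explicit error for supervised or softmax/onevsall models.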
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, which fails when it tries to load data from my compressed files.
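One way a line-oriented reader could tolerate such inputs is to sniff the gzip magic bytes and fall back to a plain open; a minimal sketch under that assumption (open_maybe_compressed is a hypothetical helper, not an AllenNLP API):

```python
import gzip

def open_maybe_compressed(path, encoding="utf-8"):
    """Open a text file, transparently decompressing it if it is gzipped.

    Detection uses the two gzip magic bytes (0x1f 0x8b) rather than the
    file extension, so renamed files still work.
    """
    with open(path, "rb") as f:
        magic = f.read(2)
    if magic == b"\x1f\x8b":
        return gzip.open(path, "rt", encoding=encoding)
    return open(path, "r", encoding=encoding)
```

With a helper like this, the same line-reading loop works for both foo.jsonl and foo.jsonl.gz.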
Rather than simply caching nltk_data until the cache expires and the entire nltk_data has to be re-downloaded, we should check index.xml and refresh the cache only when it differs from the previously cached version.
I would advise doing this in the same way that it's done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
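The freshness check itself could be as simple as keying the cache on a hash of the downloaded index.xml; a rough sketch (the function names here are hypothetical, not part of nltk):

```python
import hashlib

def index_fingerprint(index_xml: bytes) -> str:
    """Stable fingerprint of the index.xml contents."""
    return hashlib.sha256(index_xml).hexdigest()

def cache_is_stale(index_xml: bytes, cached_fingerprint: str) -> bool:
    """Refresh the nltk_data cache only when index.xml actually changed."""
    return index_fingerprint(index_xml) != cached_fingerprint
```

The fingerprint of the previous index.xml would be stored alongside the cache, so an unchanged index costs one small download instead of a full nltk_data refresh.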
Fast Tokenizer for DeBERTA-V3 and mDeBERTa-V3
Motivation
DeBERTa V3 is an improved version of DeBERTa. With the V3 release, the authors also published a multilingual model, "mDeBERTa-base", that outperforms XLM-R-base. However, DeBERTa V3 currently lacks a FastTokenizer implementation, which makes it impossible to use with some of the example scripts (they require a FastTokenizer).