transfer-learning
Here are 1,188 public repositories matching this topic...
I tried building the docs, but was met with a graphviz error. Typically this means I can spend a few hours pecking away at the dependencies until I get a stable build... or someone who has it working can export their environment and publish an environment.yml that we can use with the build instructions.
I was going off of the d2l book since that's a dependency here, but their [environment.yml](https://g
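A minimal sketch of the kind of environment.yml being asked for, assuming the graphviz error comes from the missing system binary and Python bindings; package names and versions here are illustrative guesses, not a tested build recipe:

```yaml
# Hypothetical environment.yml -- untested sketch; pin versions to match the repo
name: docs-build
channels:
  - conda-forge
dependencies:
  - python=3.7
  - graphviz          # system graphviz binary the doc build shells out to
  - python-graphviz   # Python bindings for graphviz
  - pip
  - pip:
      - d2l           # the d2l book package mentioned above
```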
I'm playing around with this wonderful code, but I'm running into a curious issue when I try to train the model with my own data.
I replicated the personachat_self_original.json file structure and added my own data. I deleted the dataset_cache_OpenAIGPTTokenizer file, but when I try to train, I get this error:

```
INFO:train.py:Pad inputs and convert to Tensor
Traceback (most recent call last):
```
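For reference, a sketch of the layout personachat_self_original.json uses, which the custom data above is meant to replicate; the field names follow the published PersonaChat file, but treat the values as placeholders:

```json
{
  "train": [
    {
      "personality": ["i like cats .", "i work from home ."],
      "utterances": [
        {
          "history": ["hello , how are you ?"],
          "candidates": ["a distractor reply", "the true next reply goes last"]
        }
      ]
    }
  ],
  "valid": []
}
```

A mismatch anywhere in this nesting (e.g. a string where a list of strings is expected) is one plausible way to break the "Pad inputs and convert to Tensor" step.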
Hey! I think it would be useful to have a more detailed explanation of:
- what the dataset should look like for performing NER, similar to the fine-tuning example. The [NER sample](https://github.com/deepset-ai/FARM/blob/97b0211a37ea7c7d64b4602f0e21b65428b2bd76/t
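As a point of reference (not necessarily FARM's exact expected columns), NER fine-tuning data is conventionally CoNLL-style: one token and one tag per line, with blank lines separating sentences:

```
Mark B-PER
works O
at O
Deepset B-ORG
in O
Berlin B-LOC
. O
```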
Hi,
When we try to tokenize the following sentence with spaCy:

```python
import spacy

nlp = spacy.load('en_core_web_lg')
doc = nlp("I like the link http://www.idph.iowa.gov/ohds/oral-health-center/coordinator")
list(doc)
```

we get:

```
[I, like, the, link, http://www.idph.iowa.gov, /, ohds, /, oral, -, health, -, center, /, coordinator]
```

But if we use the spaCy transformer tokenizer:
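The excerpt cuts off before the transformer-side output, so here is only a hedged sketch of how a spaCy 2.x transformer pipeline of that era could be inspected; the model name and the `trf_word_pieces_` extension attribute are assumptions about the spacy-transformers API, not quoted from the issue:

```python
import spacy

# Hypothetical: a spacy-transformers model from the spaCy 2.x era
nlp_trf = spacy.load('en_trf_bertbaseuncased_lg')
doc = nlp_trf("I like the link http://www.idph.iowa.gov/ohds/oral-health-center/coordinator")

print(list(doc))               # linguistic tokens from spaCy's rule-based tokenizer
print(doc._.trf_word_pieces_)  # the transformer's wordpiece strings (assumed attribute)
```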
Per this comment in #12
When using a Finder with a TfidfRetriever (InMemoryDocumentStore) and the default TransformersReader, all indices and scores are printed (see line 75 in tfidf.py), and no metadata is inserted into the documents which are returned (line 96). I commented out the print call and added the following line to the Document constructor:

```python
meta={'name':self.document_store.get_document_by_id(
```
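The line above is cut off; purely as a hedged illustration of the kind of change being described (looking the document up in the store and attaching a name to the returned Document's meta), not the author's actual patch:

```python
# Hypothetical completion -- doc_id and the meta field layout are assumptions
# about this haystack version, introduced only for illustration
meta={'name': self.document_store.get_document_by_id(doc_id).meta.get('name')}
```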
Dear TF Hub Team,
Section 5 of the USE paper has an interesting paragraph on evaluation where the authors use arc cosine (inverse cosine), whose range is 0 to pi radians, instead of plain cosine distance, whose range is 0 to 2:
"For the pairwise semantic similarity task, we directly assess
the similarity of the sentence embeddings produced by our two encoders. As show
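For concreteness, the arccos-based scoring described there converts cosine similarity into an angle and rescales it; a minimal numpy sketch, where the 1 - angle/pi rescaling is my reading of the paper rather than a quoted formula:

```python
import numpy as np

def cosine_similarity(u, v):
    # Plain cosine similarity in [-1, 1]; cosine *distance* 1 - cos lies in [0, 2]
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def angular_similarity(u, v):
    # arccos maps the cosine into an angle in [0, pi] radians;
    # 1 - angle/pi rescales that angle to a similarity in [0, 1]
    cos = np.clip(cosine_similarity(u, v), -1.0, 1.0)  # guard against fp drift
    return 1.0 - np.arccos(cos) / np.pi

u = np.array([0.1, 0.9, 0.2])
v = np.array([0.3, 0.8, 0.1])
print(cosine_similarity(u, v), angular_similarity(u, v))
```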