
glove

Here are 182 public repositories matching this topic...

txAnnie commented Mar 28, 2019

Evaluating on validation corpus...
217it [12:27, 5.54s/it]
Traceback (most recent call last):
  File "./src/coref.py", line 690, in <module>
    trainer.train(150)
  File "./src/coref.py", line 467, in train
    results = self.evaluate(self.val_corpus)
  File "./src/coref.py", line 566, in evaluate
    predicted_docs = [self.predict(doc) for doc in tqdm(val_corpus)]
  File "./src/coref.p

Takes a pretrained GloVe model and uses it as a TensorFlow embedding weight layer **inside the GPU**. Only the word indices need to be sent over the CPU-to-GPU data transfer bus, reducing data transfer overhead.
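The repo itself uses TensorFlow; the sketch below uses plain NumPy just to illustrate the lookup semantics it relies on. The vocabulary and matrix values are made up for the example. The point is that the large embedding matrix stays resident where it lives (on the GPU, in the repo's setup), and only a small array of integer indices has to be transferred per batch.

```python
import numpy as np

# Hypothetical pretrained GloVe matrix (vocab_size x embedding_dim).
# In the repo this would be loaded from a GloVe text file and stored
# as a TensorFlow embedding variable in GPU memory.
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_matrix = np.array([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
])

def embed(token_ids):
    # Only this small integer array would cross the CPU->GPU bus;
    # the row lookup happens where the matrix is resident.
    return embedding_matrix[np.asarray(token_ids)]

ids = [vocab[w] for w in ["cat", "sat"]]
vectors = embed(ids)  # shape (2, 3)
```

In TensorFlow the same lookup would be `tf.nn.embedding_lookup` against a non-trainable variable initialized from the GloVe file, so the feed dict carries indices rather than dense vectors.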

  • Updated Oct 13, 2018
  • Jupyter Notebook

Dice.com repo accompanying the 'Vectors in Search' talk by Simon Hughes at the Activate 2018 search conference and the 'Searching with Vectors' talk from Haystack 2019 (US). Builds upon my conceptual search and semantic search work from 2015.

  • Updated Feb 12, 2020
  • Python

TextClf: a text classification framework based on PyTorch/Scikit-learn, including logistic regression, SVM, TextCNN, TextRNN, TextRCNN, DRNN, DPCNN, BERT, and other models. Data processing, model training, and testing can all be done through simple configuration.

  • Updated Mar 8, 2020
  • Python
danieldk commented Jul 31, 2019

Add support for pruning embeddings, where N embeddings are retained. Words for which embeddings are removed are mapped to their nearest neighbor.

This should provide more or less the same functionality as pruning in spaCy:

https://spacy.io/api/vocab#prune_vectors

I encourage some investigation here. Some ideas:

  1. The most basic version could simply retain the embeddings of the N most
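A minimal sketch of that basic version, mirroring what spaCy's `prune_vectors` does: keep the first N rows (assuming rows are already ordered by frequency, as in typical GloVe files) and remap every pruned word to its nearest retained neighbor by cosine similarity. The function name and return shape are hypothetical, not the project's actual API.

```python
import numpy as np

def prune_embeddings(matrix, n_keep):
    """Keep the first n_keep rows (assumed sorted by frequency) and map
    each pruned row to its nearest retained neighbor by cosine
    similarity. Returns (pruned_matrix, remap), where remap[i] is the
    retained row index that word i should now look up."""
    kept = matrix[:n_keep]
    # Normalize retained rows once so a dot product gives cosine similarity.
    kept_norm = kept / np.linalg.norm(kept, axis=1, keepdims=True)
    remap = np.arange(len(matrix))
    for i in range(n_keep, len(matrix)):
        v = matrix[i] / np.linalg.norm(matrix[i])
        remap[i] = int(np.argmax(kept_norm @ v))
    return kept, remap
```

After pruning, the embedding for word `i` becomes `pruned[remap[i]]`, so the stored matrix shrinks from the full vocabulary to N rows at the cost of pruned words sharing a neighbor's vector.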
