Here are 8 public repositories matching this topic.
🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
Updated Apr 25, 2022 · Python
Simple XLNet implementation with Pytorch Wrapper
Updated Jul 3, 2019 · Jupyter Notebook
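For readers new to the topic, here is a minimal sketch of loading a pretrained XLNet and encoding text with PyTorch. It is not taken from the repository above; it assumes the Hugging Face transformers library, and the model name and example sentence are arbitrary choices.

```python
# A minimal sketch, assuming the Hugging Face transformers library (not the repo's own wrapper):
# load a pretrained XLNet and encode a sentence with PyTorch.
import torch
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")
model.eval()

inputs = tokenizer("XLNet uses permutation language modeling.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level representations: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```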
This shows how to fine-tune the BERT language model and use PyTorch-Transformers for text classification
Updated May 7, 2020 · Jupyter Notebook
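A minimal fine-tuning sketch in the spirit of that repository, assuming the transformers library (the successor to pytorch-transformers); the model name is a common default and the texts and labels are placeholder data, not the repository's dataset.

```python
# A minimal fine-tuning sketch, assuming the transformers library (successor to
# pytorch-transformers); the texts and labels below are placeholder data.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["great movie", "terrible plot"]   # placeholder training examples
labels = torch.tensor([1, 0])              # placeholder labels
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy loss computed internally
loss.backward()
optimizer.step()
```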
Code for netizen sentiment recognition during the pandemic, including LSTM, BERT, XLNet, and RoBERTa; the best F1 score is 0.725. Deployed on Google Colab.
Updated Jul 2, 2020 · Jupyter Notebook
Determine the polarity of Amazon Fine Food Reviews using ULMFiT, BERT, XLNet, and RoBERTa
Updated Sep 8, 2019 · Jupyter Notebook
PyTorch implementation of Deep-Learning Architectures
Updated Apr 2, 2022 · Jupyter Notebook
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method of learning language representations. It is a bidirectional transformer pre-trained on a large corpus with two objectives: masked language modeling and next-sentence prediction.
Updated Aug 10, 2020 · Python
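A minimal sketch of the masked language modeling objective described in that entry, assuming a pretrained BERT from the transformers library; the input sentence is an arbitrary example, not from the repository.

```python
# A minimal sketch of the masked language modeling objective described above,
# assuming a pretrained BERT from the transformers library; the sentence is an arbitrary example.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Recover the most likely token at the [MASK] position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```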
Updated Aug 7, 2021 · Jupyter Notebook