- San Francisco
- lucidrains.github.io
Pinned
- vit-pytorch: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch (minimal usage sketch below)
- alphafold2: To eventually become an unofficial PyTorch implementation / replication of AlphaFold2, as details of the architecture get released
- DALLE2-pytorch: Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
- imagen-pytorch: Implementation of Imagen, Google's text-to-image neural network, in PyTorch
- x-transformers: A simple but complete full-attention transformer with a set of promising experimental features from various papers
- RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention network, in PyTorch
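For reference, a minimal usage sketch of vit-pytorch, following the pattern shown in its README; the hyperparameter values here are illustrative, not prescriptive.

```python
import torch
from vit_pytorch import ViT

# instantiate a Vision Transformer classifier (values are illustrative)
v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)  # one random 256x256 RGB image
preds = v(img)                     # class logits of shape (1, 1000)
```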
4,187 contributions in the last year
Contribution activity
December 2022
Created 339 commits in 30 repositories
Created 9 repositories
- lucidrains/recurrent-interface-network-pytorch Python
- lucidrains/Nim Nim
- lucidrains/robotic-transformer-pytorch Python
- lucidrains/medical-chatgpt Python
- lucidrains/PaLM-rlhf-pytorch Python
- lucidrains/memory-editable-transformer
- lucidrains/magic3d-pytorch
- lucidrains/classifier-free-guidance-pytorch Python
- lucidrains/chroma-pytorch Python
Created a pull request in mlfoundations/open_clip that received 27 comments
add patch dropout, as it has been proven out in new Kaiming He paper in the CLIP setting to save a ton of compute and improve end results https://arxiv.org/abs/2212.00794
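The patch dropout referenced here (from the FLIP paper linked above) randomly drops a large fraction of image patch tokens before the vision transformer during CLIP training, so each step processes far fewer tokens. Below is a minimal sketch of the idea, assuming patch embeddings of shape (batch, num_patches, dim); the function name and keep ratio are illustrative and are not the actual open_clip implementation.

```python
import torch

def patch_dropout(tokens: torch.Tensor, keep_prob: float = 0.5) -> torch.Tensor:
    # tokens: (batch, num_patches, dim) patch embeddings, before the transformer;
    # any class token is assumed to be handled separately.
    batch, num_patches, _ = tokens.shape
    num_kept = max(1, int(num_patches * keep_prob))

    # assign each patch a random score and keep the top-k patches per sample
    scores = torch.rand(batch, num_patches, device = tokens.device)
    keep_indices = scores.topk(num_kept, dim = -1).indices

    batch_indices = torch.arange(batch, device = tokens.device).unsqueeze(-1)
    return tokens[batch_indices, keep_indices]  # (batch, num_kept, dim)
```

The dropping is applied only during training; at inference all patches are kept, which is how the paper reports saving compute without hurting end results.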



