Democratizing Big Models | CTO of @hpcaitech | Ex @Tencent Wechat AI.
- Beijing, China (UTC +08:00)
- https://fangjiarui.github.io/
Pinned
- hpcaitech/ColossalAI: Making big AI models cheaper, easier, and more scalable.
- Tencent/PatrickStar: PatrickStar enables larger, faster, greener pretrained models for NLP and democratizes AI for everyone.
- Tencent/TurboTransformers: A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, decoders, etc.) on CPU and GPU.
2,308 contributions in the last year
Activity overview
Contributed to hpcaitech/ColossalAI, hpcaitech/CachedEmbedding, hpcaitech/EnergonAI, and 44 other repositories.
Contribution activity
February 2023

Reviewed 10 pull requests in hpcaitech/ColossalAI:
- Don't use torch._six
- [chatgpt] optimize generation kwargs
- [workflow] fixed gpu memory check condition
- Feature/add ci for diffusion
- [tutorial] polish README
- bug/fix diffusion ckpt problem
- [polish] polish ColoTensor and its submodules
- [kernel] fixed repeated loading of kernels
- [autochunk] add benchmark for transformer and alphafold
- [autochunk] support multi outputs chunk search
Answered 2 discussions in hpcaitech/ColossalAI:
- Can I use real dataset for GPT2-gemini training? (Feb 15)
- Can I use real dataset for GPT2-gemini training? (Feb 13)

23 contributions in private repositories (Feb 1 – Feb 13)