li-plus
- Beijing, China (UTC +08:00)
- https://liplus.me
Pinned
- chatglm.cpp: C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and more LLMs
- llvm/torch-mlir: The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem.
- seam-carving: A super-fast Python implementation of the seam carving algorithm for intelligent image resizing.
- redbase-cpp: A simple relational database based on Stanford CS346 RedBase, implemented in elegant modern C++14.
420 contributions in the last year
Activity overview
Contribution activity
December 2023
Created 12 commits in 2 repositories
Created 2 repositories
- li-plus/DeepSpeed (Python), created on Dec 24
- li-plus/OpenRLHF (Python), created on Dec 11
Created a pull request in OpenLLMAI/OpenRLHF that received 5 comments
Fix flash attention option
The main branch did not use flash attention in RLHF training at all, even when --flash_attn was specified. AutoConfig.from_pretrained does not respect …
Opened 6 other pull requests in 4 repositories
OpenLLMAI/OpenRLHF (3 merged):
- Optimize padding removal (Dec 19)
- Optimize reward score gather/scatter (Dec 16)
- Optimize generation post processing (Dec 11)

microsoft/DeepSpeed (1 open):
- Fix f-string messages (Dec 24)

li-plus/chatglm.cpp (1 open):
- Add perplexity evaluation script (Dec 3)

CarperAI/trlx (1 open):
- Faster & memory-efficient logprobs calculation (Dec 2)
Reviewed 1 pull request in 1 repository
huggingface/transformers (1 pull request):
- Adding flash attention to GPT2 (Dec 15)
Created an issue in huggingface/transformers that received 3 comments
[Flash Attention 2] Performance improvement
Feature request: The current flash attention 2 integration is suboptimal in performance because it requires unpadding and padding the activations on …