Pinned
- ColossalAI (Public, forked from hpcaitech/ColossalAI): Colossal-AI, a Unified Deep Learning System for Large-Scale Parallel Training (Python)
674 contributions in the last year
Activity overview
Contributed to hpcaitech/ColossalAI, hpcaitech/ColossalAI-Examples, hpcaitech/Titans, and 7 other repositories.
Contribution activity
February 2023
Created 11 commits in 2 repositories
Created a pull request in hpcaitech/ColossalAI that received 5 comments:
- [autoparallel] refactor runtime pass ([doc/gemini/tenso…): +351 −213, 5 comments
Opened 11 other pull requests in hpcaitech/ColossalAI (1 open, 9 merged, 1 closed):
- [hotfix] fix autoparallel compatibility test issues
- [autoparallel] fix parameters sharding bug
- [autoparallel] test compatibility for gemini and auto parallel
- [autoparallel] distinguish different parallel strategies
- [autoparallel] add shard option
- [Docs] layout converting management
- [autoparallel] remove deprecated codes
- [autoparallel] adapt autoparallel tests with latest api
- [autoparallel] adapt test code with latest initialize api
- [autoparallel] refactor handlers which reshape input tensors
- [autoparallel] add overlap option
Reviewed 6 pull requests in hpcaitech/ColossalAI:
- [autoparallel] Patch meta information of torch.nn.Embedding
- [doc] update auto parallel paper link
- [autoparallel] Patch meta information of torch.nn.functional.softmax and torch.nn.Softmax
- [autoparallel] Patch meta information of torch.nn.LayerNorm
- [autoparallel] Patch meta information of torch.matmul
- [fx] support unet metainfo prop
Opened 9 issues in hpcaitech/ColossalAI (1 open, 8 closed):
- [BUG]: the autoparallel compatibility test uses the same input data for different data parallel ranks
- [FEATURE]: distinguish different parallel strategies
- [FEATURE]: Add shard option for autoparallel
- [FEATURE]: remove deprecated code for autoparallel
- [Autoparallel] Refactor runtime pass
- [FEATURE]: refactor autoparallel runtime preparation pass
- [FEATURE]: adapt autoparallel tests with latest api
- [FEATURE]: refactor reshape handlers
- [FEATURE]: add an overlap option for auto parallel

