Highlights
- 35 discussions answered
2,358 contributions in the last year
Activity overview
Contributed to PyTorchLightning/pytorch-lightning, jpuigcerver/PyLaia, PyTorchLightning/lightning-transformers, and 5 other repositories
Contribution activity
July 2021
Created 78 commits in 2 repositories
Created a pull request in PyTorchLightning/pytorch-lightning that received 8 comments
every_n_val_epochs -> every_n_epochs
What does this PR do?
Deprecate `every_n_val_epochs` in favor of `every_n_epochs`
The flag is used in the on_validation_end hook, but we will also want…
+91 −72 • 8 comments
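The deprecation described in that PR follows a common pattern: accept the old keyword, emit a `DeprecationWarning`, and map its value onto the new keyword. Here is a minimal, generic sketch of that pattern — the function name `resolve_every_n_epochs` is hypothetical and this is not the actual `ModelCheckpoint` implementation:

```python
import warnings

def resolve_every_n_epochs(every_n_epochs=None, every_n_val_epochs=None):
    """Map the deprecated ``every_n_val_epochs`` argument onto
    ``every_n_epochs``, warning the caller. Illustrative sketch only."""
    if every_n_val_epochs is not None:
        warnings.warn(
            "`every_n_val_epochs` is deprecated; use `every_n_epochs` instead.",
            DeprecationWarning,
        )
        # The new argument wins if both are given; otherwise fall back
        # to the deprecated value so old call sites keep working.
        if every_n_epochs is None:
            every_n_epochs = every_n_val_epochs
    return every_n_epochs
```

This keeps existing call sites working for a deprecation window while steering users toward the new name.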
Opened 42 other pull requests in 2 repositories
PyTorchLightning/pytorch-lightning: 37 merged, 4 open
- Add `pyupgrade` to pre-commit
- Fix profiler test on Windows minimal
- Connect the model to the training type plugin at the start of run
- Avoid partial for apply to collection
- Update issue and PR templates
- Fix DeepSpeed lr scheduler logic
- Remove `torch >= 1.6` checks
- Support DataLoaders with missing arguments in `replace_sampler`
- Always use `trainer.call_hook`
- Replace `iteration_count` and other index attributes in the loops with progress dataclasses
- Do not reset Loops total counters
- Unblock GPU CI
- Delete legacy DataLoader processing utility
- Move plateau schedulers epoch update to the training epoch loop
- Mark evaluation epoch loops attributes as protected
- Refactor `log_dir` usage in the CLI
- Remove pep8speaks
- Add pydocstyle to pre-commit
- Add `ModelCheckpoint(save_on_train_epoch_end)`
- Update to Mypy >0.9
- Clean code formatting CI job
- Fix broadcast for Windows minimal
- Refactor plugins backward
- Fix `self.optimizers()` not returning a single `LightningOptimizer`
- Unpin Pillow after the 8.3.1 release
- Some pull requests not shown.
gridai/grid-docs: 1 merged
Reviewed 120 pull requests in 2 repositories
PyTorchLightning/pytorch-lightning 119 pull requests
- Connect the model to the training type plugin at the start of run
- black: magic trailing comma
- docs: explain how Lightning uses closures for automatic optimization
- v1.4.0rc2
- Replace `iteration_count` and other index attributes in the loops with progress dataclasses
- Legacy: simple classif training
- docs: clarify closure usage in gan example
- Raise exception for ddp_cpu not supported for TPUs
- Support DataLoaders with missing arguments in `replace_sampler`
- [Typo] update some out-dated links from pytorch `clip_grad_value_`
- Update issue and PR templates
- checkpoint also QAT for resuming
- Fix DeepSpeed lr scheduler logic
- fix CI for PT 1.10
- fix restoring finetune callbacks after accelerator setup on training resume
- docs: fix return type description of Trainer.validate/test
- Add `ddp_*_find_unused_parameters_false` to Plugins Registry
- Deprecate `LightningModule.summarize()` in favor of `pytorch_lightning.utilities.model_summary.summarize()`
- Add support for functions to be parsed by the Lightning CLI in addition to Types
- Refactor plugins backward
- Quant as optional step
- fix: Enable manual optimization for TPUs
- Always use `trainer.call_hook`
- [bugfix] Reduce memory leaks
- v1.4.0rc1 & chlog
- Some pull request reviews not shown.
PyTorchLightning/metrics 1 pull request
Opened 1 issue in 1 repository
PyTorchLightning/pytorch-lightning: 1 open
Answered 10 discussions in 1 repository
PyTorchLightning/pytorch-lightning
- Getting error after completion of 1st epoch
- DDP with shared file system
- How to put `trainer.fit()` in a for loop?
- RuntimeError: unable to open shared memory object </torch_91130_1372465664> in read-write mode
- Does LearningRateMonitor work with deepspeed?
- How to implement a deep ensemble
- Data augmentation and reload_dataloaders_every_epoch
- `init_process_group` not called when training on multiple-GPUs
- Intel deep learning boost compatibility
8 contributions in private repositories (Jul 21 – Jul 22)