PyTorch implementation of various methods for continual learning (XdG, EWC, online EWC, SI, LwF, GR, GR+distill, RtF, ER, A-GEM, iCaRL).
Topics: deep-learning, artificial-neural-networks, replay, incremental-learning, variational-autoencoder, generative-models, lifelong-learning, distillation, continual-learning, elastic-weight-consolidation, replay-through-feedback, icarl, gradient-episodic-memory

Updated Jul 15, 2021 · Python
I noticed it is currently quite tricky to generate a benchmark with an unbalanced number of examples per step.
It would be nice if ni_scenario, nc_scenario, and similar generators had an option to set the number of examples for each step.
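In the meantime, an unbalanced benchmark can be approximated by partitioning the dataset manually before handing it to a scenario generator. The sketch below is a hypothetical helper (`make_unbalanced_steps` is not part of any library API): it groups labeled samples by class, then draws a caller-chosen number of examples for each step, so different steps can receive different amounts of data.

```python
import random
from collections import defaultdict

def make_unbalanced_steps(samples, labels_per_step, examples_per_step, seed=0):
    """Partition labeled samples into steps with per-step example counts.

    samples: iterable of (x, y) pairs
    labels_per_step: list of label sets, one per step
    examples_per_step: number of examples to draw for each step
    (hypothetical helper, not a library function)
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append((x, y))
    steps = []
    for step_labels, n in zip(labels_per_step, examples_per_step):
        # pool every sample whose label belongs to this step, then subsample
        pool = [s for y in step_labels for s in by_label[y]]
        rng.shuffle(pool)
        steps.append(pool[:n])
    return steps
```

For example, `make_unbalanced_steps(data, [[0, 1], [2, 3]], [30, 10])` yields a first step with 30 examples of classes 0-1 and a second step with only 10 examples of classes 2-3; each resulting list could then be wrapped in a dataset and passed to the scenario generator of choice.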