Here are 19 public repositories matching this topic.
Decentralized deep learning framework in PyTorch. Built to train models on thousands of volunteers across the world. (Python; updated Sep 7, 2020)
A Keras implementation of "Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts" (KDD 2018). (Python; updated Feb 5, 2020)
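For context, here is a minimal sketch of the general multi-gate mixture-of-experts idea described in that KDD 2018 paper, not code taken from the repository above: a shared pool of experts is combined per task by a task-specific softmax gate. The layer sizes, expert count, and task count below are arbitrary illustration values.

```python
# Minimal MMoE forward pass (illustrative sketch only; not the repository's code).
import tensorflow as tf

num_experts, num_tasks, d_in, d_hidden = 4, 2, 8, 16
experts = [tf.keras.layers.Dense(d_hidden, activation="relu") for _ in range(num_experts)]
gates = [tf.keras.layers.Dense(num_experts, activation="softmax") for _ in range(num_tasks)]
heads = [tf.keras.layers.Dense(1) for _ in range(num_tasks)]

x = tf.random.normal([32, d_in])                        # a batch of 32 examples
expert_out = tf.stack([e(x) for e in experts], axis=1)  # (32, experts, hidden)
for gate, head in zip(gates, heads):
    w = gate(x)                                         # (32, experts) softmax gate per task
    mixed = tf.einsum("be,beh->bh", w, expert_out)      # task-specific mixture of experts
    print(head(mixed).shape)                            # (32, 1) per-task prediction
```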
Surrogate Modeling Toolbox. (Jupyter Notebook; updated Sep 7, 2020)
No description provided. (Python; updated Jan 8, 2020)
Hierarchical Mixture of Experts, Mixture Density Neural Network. (Jupyter Notebook; updated Mar 31, 2017)
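As a rough illustration of the mixture-density side of that entry (a hypothetical sketch under standard MDN assumptions, not code from the repository), the network head predicts the weights, means, and scales of a Gaussian mixture over the target and is trained by negative log-likelihood:

```python
# Hypothetical mixture density network head (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    def __init__(self, d_in, n_components=5):
        super().__init__()
        self.pi = nn.Linear(d_in, n_components)         # mixture weight logits
        self.mu = nn.Linear(d_in, n_components)         # component means
        self.log_sigma = nn.Linear(d_in, n_components)  # component log std-devs

    def nll(self, h, y):                                # h: (batch, d_in), y: (batch,)
        log_pi = F.log_softmax(self.pi(h), dim=-1)
        comp = torch.distributions.Normal(self.mu(h), self.log_sigma(h).exp())
        log_prob = comp.log_prob(y.unsqueeze(-1))       # (batch, n_components)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

head = MDNHead(d_in=16)
print(head.nll(torch.randn(8, 16), torch.randn(8)))     # scalar training loss
```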
Code for the paper "Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video Captioning"
Multi-task learning package built with TensorFlow 2 (Multi-Gate Mixture of Experts, Cross-Stitch, Uncertainty Weighting). (Python; updated Jan 20, 2020)
A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models. (Python; updated Jul 18, 2020)
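Below is a rough sketch of the top-k gating idea from the Shazeer et al. sparsely-gated MoE paper that this entry refers to. It is illustrative only, omits the noisy gating and load-balancing losses, and is not the repository's code; the expert architecture and sizes are assumptions.

```python
# Sparse top-k mixture-of-experts layer (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)])

    def forward(self, x):                        # x: (batch, dim)
        top_val, top_idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(top_val, dim=-1)     # renormalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e     # inputs routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = SparseMoE(dim=32)
print(moe(torch.randn(4, 32)).shape)             # torch.Size([4, 32])
```

Only the k selected experts are evaluated per input, which is what lets the total parameter count grow without a proportional increase in compute per example.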
MoEL: Mixture of Empathetic Listeners. (Python; updated Apr 12, 2020)
Machine learning code, derivative calculations, and optimization algorithms developed during the Machine Learning course at Universidade de Sao Paulo. All code is in Python with NumPy and Matplotlib, with an example at the end of each file. (Python; updated Sep 3, 2020)
Using CCR to predict piezoresponse force microscopy datasets
This repository contains scripts for implementing various learning-from-experts architectures, such as mixture of experts and product of experts, and for performing various experiments with these architectures. (Jupyter Notebook; updated Aug 12, 2020)
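Since that repository contrasts mixtures and products of experts, here is a tiny illustrative sketch of the two combination rules (our own example, not the repository's code): each expert outputs a categorical distribution; a mixture averages the probabilities under the gating weights, while a product takes a weighted geometric combination and renormalises.

```python
# Mixture vs. product of experts over categorical predictions (illustrative).
import numpy as np

rng = np.random.default_rng(0)
expert_probs = rng.dirichlet(np.ones(4), size=3)   # 3 experts, 4 classes each
weights = np.array([0.5, 0.3, 0.2])                # gating / combination weights

mixture = weights @ expert_probs                   # weighted average of probabilities
product = np.prod(expert_probs ** weights[:, None], axis=0)
product /= product.sum()                           # renormalise the weighted geometric mean

print("mixture of experts:", mixture.round(3))
print("product of experts:", product.round(3))
```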
No description provided. (MATLAB; updated Sep 6, 2020)
Dataset, example models, and demonstration of our Interspeech 2019 paper. (Python; updated Sep 16, 2019)
Hybrid small variant caller using Convolutional Neural Networks and Mixture-of-Experts. (Python; updated Mar 31, 2020)
Implementation of mixture models for different tasks. (Python; updated Mar 31, 2020)
Implementation of the Mixture of Experts paper. (Python; updated Mar 2, 2020)
Mixtures-of-ExperTs modEling for cOmplex and non-noRmal dIsTributionS
Framework for Contextually Transferring Knowledge from Multiple Source Policies in Deep Reinforcement Learning