# librosa
Here are 76 public repositories matching this topic...
A machine learning approach to Thayer's emotional model, plotting valence on the x-axis and arousal on the y-axis of 2D Cartesian and polar planes.
Topics: data-science, machine-learning, feature-selection, feature-extraction, music-information-retrieval, digital-signal-processing, librosa, feature-scaling
Updated Mar 28, 2018 - Python
Lightweight and interpretable ML model for speech emotion recognition and ambiguity resolution (trained on the IEMOCAP dataset)
Topics: scikit-learn, pandas, python3, pytorch, lstm, librosa, speech-emotion-recognition, multimodal-emotion-recognition, iemocap
Updated May 7, 2020 - Jupyter Notebook
This project presents a simple method for training an MLP neural network on audio signals. The trained model can be deployed to a Raspberry Pi (2 or later suggested) to classify audio recorded with a USB microphone.
Topics: raspberry-pi, machine-learning, tensorflow, audio-analysis, dataset, raspberry, librosa, tensorflow-models, multilayer-perceptron-network, audio-signals, sound-classification
Updated Apr 20, 2019 - Python
Music genre classification model using CRNN
Topics: music, deep-neural-networks, deep-learning, keras, audio-analysis, music-information-retrieval, recurrent-networks, convolutional-networks, deeplearning, fma, librosa, music-analysis, music-classification, mel-spectrograms
Updated Sep 27, 2018 - Python
Music Synthesis with Python talk, originally given at PyGotham 2017.
Topics: python, music, conference, interactive, csound, audio-analysis, supercollider, talk, music-information-retrieval, synthesis, lecture, mir, librosa, pyo, music-synthesis
Updated Nov 20, 2017 - Jupyter Notebook
Python framework for speech and music detection using Keras.
Updated Jan 28, 2020 - Python
Classifying English music (.mp3) files using music information retrieval (MIR), digital/audio signal processing (DSP), and machine learning (ML) strategies
Updated May 7, 2017 - HTML
Artificial intelligence bot for live voice improvisation
Updated May 22, 2019 - Python
Music genre classification from audio spectrograms using deep learning
Topics: music, pytorch, spectrogram, convolutional-neural-networks, music-genre-classification, librosa, multi-class-classification, music-genre-detection, music-genre-recognition
Updated Dec 8, 2019 - Python
Sound Classification using Neural Networks
Topics: machine-learning, deep-learning, sound-processing, neural-networks, convolutional-neural-networks, urban-sound-classification, librosa, cnn-keras, sound-classification
Updated Sep 7, 2019 - Jupyter Notebook
Image processing, speech processing, encoder-decoder, and research paper implementations
Topics: image-processing, face-detection, librosa, research-paper, speech-processing, keras-tensorflow, encoder-decoder, speech-to-face, face-normalization
Updated Apr 19, 2020 - Jupyter Notebook
Scene classification using audio from the nearby environment.
Updated Sep 4, 2019 - Jupyter Notebook
Predicting emotions in human speech by detecting the speech components affected by emotion.
Topics: python, machine-learning, natural-language-processing, ai, deep-learning, neural-network, keras, jupyter-notebook, ml, python3, pytorch, lstm, speech-recognition, supervised-learning, rnn, convolutional-neural-networks, librosa, emotion-recognition, colab-notebook, artifical-intelligence
Updated Aug 11, 2019 - Jupyter Notebook
Digital Signal Processing mini project: Autotune
Updated Oct 24, 2017 - Python
Methods to compute various chroma audio features and audio similarity measures, particularly for the task of cover song identification
Topics: python, music-information-retrieval, chroma, librosa, audio-processing, essentia, feature-extractor, recurrent-plots, cover-song-detection, cover-song-identification, audio-similarity-measures
Updated Feb 7, 2020 - Jupyter Notebook
Using machine learning for the study of music.
Updated Jun 3, 2018 - Python
Python script utilising Librosa to log the timings of audio peaks in an MP3 file
Updated Jan 8, 2019 - Python
Text-to-speech synthesis by generating spectrograms using a generative adversarial network
Topics: nlp, machine-learning, tensorflow, tts, gan, digital-signal-processing, audio-synthesis, librosa, nlp-machine-learning, conditional-gan
Updated Dec 12, 2018 - Python
2nd Runner-Up @MumbaiHackathon 2017
Updated Jan 17, 2018 - Python
Breaks a song into frames, extracts relevant features, and predicts the mood of each frame individually.
Updated Dec 17, 2017 - Python
Gender recognition by human speech analysis
Topics: python, pyaudio, csv, sklearn, jupyter-notebook, live, feature-extraction, instant, gender-recognition, mfcc, librosa, svm-model, spyder, pitch-detection, wavfile
Updated Jul 13, 2019 - Python
Topics: machine-learning, neural-network, tensorflow, multiprocessing, multithreading, python3, sound-processing, tensorflow-experiments, librosa
Updated Nov 8, 2017 - Python
TensorFlow implementation of music genre classification with InceptionResnetV2
Topics: python, tensorflow, classification, audio-classification, librosa, inception-resnet-v2, cnn-tensorflow, genres-classification
Updated Jan 28, 2020 - Python
Description
STFT allocates an output buffer, but sometimes you might want it to compute directly into an existing buffer. For example, Griffin-Lim alternates stft/istft at each iteration, and each intermediate result is then discarded. It would be better if we could pass an `out=` argument, which it would use instead of allocating a new buffer; this way, we could cut down on redundant memory allocations.