
speech-processing

Here are 204 public repositories matching this topic...

hbredin commented Oct 1, 2019

I think we should add or improve the following points:

  1. A high-level overview of the pipeline: how everything works, how each module connects to the others, etc.
  2. More details about some mechanisms: I am thinking about user-defined callbacks, since I've been working on that. But I'm pretty sure many of you will have other ideas 🙂
  3. More explanations about some tricks that have
Pust0T commented Jun 29, 2019

The example does not compile (LEDTest).
Here is the error message:

In file included from C:\Users\Pust0T\Documents\Arduino\libraries\uSpeech-master\viterbidecoder.cpp:11:0:

C:\Users\Pust0T\Documents\Arduino\libraries\uSpeech-master\viterbidecoder.h:21:5: error: 'uint8_t' does not name a type

 uint8_t num_symbols;

 ^

C:\Users\Pust0T\Documents\Arduino\libraries\uSpeech-master\v

The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems ranging from speech recognition (both HMM/DNN and end-to-end) to speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others. A generic PyTorch sketch of this kind of pipeline appears after the repository details below.

  • Updated Jul 14, 2020
  • CSS
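
SpeechBrain's own recipes are not reproduced here; as an illustration of the kind of PyTorch-native pipeline the toolkit targets, the following is a minimal sketch: waveform in, log-mel features, a small recurrent encoder, per-frame token logits for a CTC-style recognizer. It uses plain torchaudio and torch.nn rather than SpeechBrain's actual API, and the file name, model sizes, and token count are placeholder assumptions.

```python
# Illustrative PyTorch/torchaudio pipeline: waveform -> log-mel features ->
# recurrent encoder -> per-frame character logits (CTC-style).
# "example.wav" and all model sizes are placeholder assumptions.
import torch
import torchaudio


class TinySpeechEncoder(torch.nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 256, n_tokens: int = 29):
        super().__init__()
        self.rnn = torch.nn.GRU(n_mels, hidden, num_layers=2,
                                batch_first=True, bidirectional=True)
        self.head = torch.nn.Linear(2 * hidden, n_tokens)  # per-frame token logits

    def forward(self, feats):           # feats: (batch, time, n_mels)
        out, _ = self.rnn(feats)
        return self.head(out)           # (batch, time, n_tokens)


waveform, sample_rate = torchaudio.load("example.wav")     # placeholder file
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=80)
feats = melspec(waveform)                                   # (channels, n_mels, time)
feats = feats.clamp(min=1e-10).log().transpose(1, 2)        # (channels, time, n_mels)

model = TinySpeechEncoder()
logits = model(feats)   # could feed a CTC loss during training
print(logits.shape)
```

The encoder here is deliberately tiny; the point is only to show the waveform-to-logits data flow that a PyTorch-based speech toolkit wraps with recipes, data loaders, and training loops.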

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: that is, developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject.

The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These, so to speak, traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4].

The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8]. A hedged sketch of such a feature-extraction and classification pipeline appears after the repository details below.

  • Updated Jul 15, 2020
  • Python
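
As an illustration of the pipeline described above (short-term MFCC features followed by one Gaussian mixture model per class), here is a minimal sketch using librosa and scikit-learn. The file lists, sampling rate, number of MFCCs, and mixture sizes are placeholder assumptions rather than the repository's actual configuration, and the MEEI corpus itself is not included.

```python
# Illustrative two-class voice-disorder pipeline: frame-wise MFCCs per utterance,
# one GMM per class, decision by average per-frame log-likelihood.
# File names and all hyperparameters below are placeholder assumptions.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture


def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load an utterance and return its frame-wise MFCC matrix (n_frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T


# Placeholder training lists; in the project these would come from the MEEI corpus.
healthy_files = ["healthy_01.wav", "healthy_02.wav"]
pathological_files = ["paralysis_01.wav", "paralysis_02.wav"]

gmm_healthy = GaussianMixture(n_components=8, covariance_type="diag").fit(
    np.vstack([mfcc_frames(f) for f in healthy_files]))
gmm_path = GaussianMixture(n_components=8, covariance_type="diag").fit(
    np.vstack([mfcc_frames(f) for f in pathological_files]))


def classify(path):
    """Assign the class whose GMM gives the higher average frame log-likelihood."""
    frames = mfcc_frames(path)
    return ("healthy" if gmm_healthy.score(frames) > gmm_path.score(frames)
            else "pathological")


print(classify("test_utterance.wav"))  # placeholder test file
```

Fitting one GMM per class on pooled frames and comparing average log-likelihoods mirrors the classical GMM-based classification setup mentioned above; the K-nearest neighbor, Bayes, and DNN classifiers would plug into the same MFCC front end.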
