# sign-language

Here are 120 public repositories matching this topic...

We help deaf and mute people communicate with hearing people using hand-gesture-to-speech conversion. In this code we use depth maps from the Kinect camera and techniques like convex hull + contour mapping to recognize 5 hand signs (see the sketch after this listing).

  • Updated Jul 27, 2017
  • Python
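The following is a minimal Python/OpenCV sketch of the convex hull + contour mapping step that the description mentions, assuming the Kinect depth frame has already been thresholded into a binary hand mask. The function names and thresholds are illustrative, not taken from the repository.

```python
import cv2
import numpy as np

def count_finger_defects(hand_mask: np.ndarray) -> int:
    """Estimate the number of raised fingers from a binary hand mask."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    # Assume the largest contour is the hand.
    hand = max(contours, key=cv2.contourArea)
    hull_idx = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull_idx)
    if defects is None:
        return 0
    fingers = 0
    for start, end, far, depth in defects[:, 0]:
        s, e, f = hand[start][0], hand[end][0], hand[far][0]
        # Angle at the defect point: small angles correspond to the
        # gaps between extended fingers.
        a = np.linalg.norm(e - s)
        b = np.linalg.norm(f - s)
        c = np.linalg.norm(e - f)
        cos_angle = np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-6), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        if angle < 90 and depth > 10000:  # defect depth is in 1/256-pixel units
            fingers += 1
    # N defects between fingers correspond to N + 1 raised fingers.
    return fingers + 1 if fingers else 0

# Example: threshold a Kinect-style depth frame (in millimetres) into a
# hand mask by keeping only pixels closer than ~80 cm, then count fingers.
# depth_frame = ...
# mask = np.uint8((depth_frame > 0) & (depth_frame < 800)) * 255
# print(count_finger_defects(mask))
```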
dev-td7 commented Jan 7, 2019

Currently, the pre-trained model in this repository can recognize hand poses from a second-person perspective; that is, an image you take of another person making a hand pose is the ideal input for recognition.

We need to create a new model and a dataset of images of the same hand poses taken from a first-person perspective, i.e. images taken by your camera held by your left …
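For the first-person dataset this comment calls for, a simple collection script might look like the sketch below. It assumes OpenCV and a standard webcam; the label names, key bindings, and directory layout are illustrative rather than part of the repository.

```python
import os
import cv2

def capture_first_person_samples(label: str, out_dir: str = "dataset",
                                 camera_index: int = 0) -> None:
    """Save webcam frames of your own hand pose; press 's' to save, 'q' to quit."""
    os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    cap = cv2.VideoCapture(camera_index)
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("first-person capture", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s"):
            # Store each saved frame under dataset/<label>/<label>_NNNN.png
            path = os.path.join(out_dir, label, f"{label}_{count:04d}.png")
            cv2.imwrite(path, frame)
            count += 1
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

# Example: collect samples for one hand pose class.
# capture_first_person_samples("thumbs_up")
```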
