This application is designed to help people with speech or hearing impairments interact with others with ease. It detects voice input and converts the spoken audio into a sign-language video.
We help deaf and mute people communicate with hearing people through hand-gesture-to-speech conversion. This code uses depth maps from the Kinect camera together with techniques such as convex hulls and contour mapping to recognize five hand signs.
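The convex hull + contour mapping idea can be illustrated without a Kinect. The sketch below is a minimal, self-contained stand-in, not the repo's actual pipeline: a toy 2-D point list plays the role of a contour extracted from a depth map, the hull is computed with Andrew's monotone chain, and hull vertices sitting well above the palm are counted as fingertip candidates. All names and the `tip_frac` threshold are illustrative assumptions.

```python
def _cross(o, a, b):
    """2-D cross product of vectors OA and OB; >0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def count_fingertips(contour, tip_frac=0.7):
    """Count hull vertices in the top part of the shape (fingertip candidates).

    Valleys between fingers lie inside the hull, so they are excluded
    automatically; tip_frac is an illustrative threshold, not a tuned value.
    """
    hull = convex_hull(contour)
    max_y = max(y for _, y in contour)
    return sum(1 for _, y in hull if y >= tip_frac * max_y)

# Toy "hand" contour: palm corners, two finger valleys, three fingertips.
HAND = [(0, 0), (10, 0), (0, 5), (10, 5),   # palm
        (3, 6), (7, 6),                     # valleys between fingers
        (1, 12), (5, 13), (9, 12)]          # fingertips
```

In the repo's actual pipeline, the contour would come from thresholding the Kinect depth map (e.g. with OpenCV's `findContours`), and the convexity defects between the contour and its hull help distinguish the five signs; `count_fingertips(HAND)` on the toy contour above finds the three spikes.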
An online sign dictionary and sign database management system for research purposes. Originally developed by Steve Cassidy; this repo is a fork for the Dutch version, previously called 'NGT-Signbank'.
Dynamic Movement Primitive based motion retargeting, together with a sign language robot composed of ABB's YuMi dual-arm collaborative robot and Inspire Robotics' multi-fingered hands.
Currently, the pre-trained model in this repository is capable of recognizing hand poses from a second-person perspective; that is, the ideal input is an image you take of another person making a hand pose.
We need to create a new model and dataset with images of the same hand poses taken from a first-person perspective, i.e. images taken by a camera held in your left hand.