New Hand-Tracking Algorithm Is a Breakthrough for Sign Language Recognition
There has been a lot of talk about bringing sign language into the technological era. However, every project that has attempted to translate hand gestures into speech has had extremely limited success.
A new advance in real-time hand tracking from Google's AI labs, however, could be the breakthrough people have been waiting for. The method relies on computational shortcuts that markedly improve real-time performance.
It creates a highly precise map of the hand and all its fingers, using nothing but a smartphone and its camera.
Google researchers Valentin Bazarevsky and Fan Zhang stated in a blog post that existing state-of-the-art methods rely mainly on powerful desktop environments for inference. Their method, on the other hand, delivers real-time performance on a smartphone and can even scale up to multiple hands.
According to the Google researchers, the main difficulty is that hands often occlude themselves or each other while gesturing. Moreover, because hands lack high-contrast patterns, tracking them robustly in real time is an undoubtedly challenging computer-vision task.
On top of that, hand movements are often both quick and subtle, which is not the kind of thing computers are good at capturing in real time. So, essentially, deciphering hand movements is hard for machines to get right, and even when they do get it right, it is hard to do fast.
Even multi-camera and depth-sensing setups, like those used by SignAll, have trouble tracking every movement.
But the Google researchers didn’t give up there. According to the researchers, their goal was to reduce, at least in part, the amount of data the algorithms had to sift through.
The less data, the quicker the turnaround.
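The scale of that data reduction can be illustrated with some simple arithmetic. In a two-stage approach, a lightweight detector first finds a small bounding box around the palm, and the heavier landmark model then runs only on that crop rather than the full camera frame. The frame and crop sizes below are hypothetical, chosen only to show the order of magnitude of the savings, not figures from Google's system:

```python
# Illustrative only: hypothetical frame and crop sizes showing why
# restricting the expensive model to a detected palm region cuts
# the data it must process per frame.

FRAME_W, FRAME_H = 1920, 1080   # full camera frame (assumed resolution)
CROP_W, CROP_H = 256, 256       # palm crop fed to the landmark model (assumed)

full_frame_pixels = FRAME_W * FRAME_H   # pixels the model would otherwise see
crop_pixels = CROP_W * CROP_H           # pixels it actually sees after cropping

reduction = full_frame_pixels / crop_pixels
print(f"Pixels per frame: {full_frame_pixels:,} -> {crop_pixels:,}")
print(f"The landmark model sees roughly {reduction:.0f}x less data per frame.")
```

Under these assumed sizes, the heavy model processes around thirty times fewer pixels per frame, which is the kind of saving that makes smartphone-speed inference plausible.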
The result of this new method is a hand-tracking algorithm that is both quick and precise, and that works on a normal smartphone instead of a tricked-out desktop.
It all works within the MediaPipe framework, which many developers will already know about. With luck, other researchers will be able to take this method and improve on it.
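For readers curious about the structure, the approach described above can be sketched as a chain of stages. The function names and bodies below are hypothetical placeholders, not Google's implementation or the MediaPipe API; the one grounded detail is the 21-keypoint hand map, which is what MediaPipe's hand landmark model predicts per hand:

```python
# Conceptual sketch of a two-stage hand-tracking pipeline.
# Bodies are dummy placeholders; the real stages are neural
# networks shipped with the MediaPipe framework.

from typing import List, Tuple

NUM_LANDMARKS = 21  # MediaPipe's hand model predicts 21 keypoints per hand


def detect_palm(frame: List[List[int]]) -> Tuple[int, int, int, int]:
    """Stage 1 (placeholder): find a palm bounding box (x, y, w, h).
    Palms are rigid and easier to detect than hands with moving fingers."""
    return (0, 0, len(frame[0]), len(frame))  # dummy: whole frame


def predict_landmarks(crop: List[List[int]]) -> List[Tuple[float, float]]:
    """Stage 2 (placeholder): map the cropped palm region to 21 (x, y)
    keypoints covering the palm and every finger joint."""
    return [(0.0, 0.0)] * NUM_LANDMARKS  # dummy coordinates


def track_hand(frame: List[List[int]]) -> List[Tuple[float, float]]:
    x, y, w, h = detect_palm(frame)
    crop = [row[x:x + w] for row in frame[y:y + h]]
    return predict_landmarks(crop)


# A tiny fake 4x4 "frame" stands in for a camera image.
landmarks = track_hand([[0] * 4 for _ in range(4)])
print(len(landmarks))  # one (x, y) pair per keypoint
```

The design point is that the landmark model never scans the whole image; it trusts the cheap detector to hand it a small, well-framed crop.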
One way of doing so is by refining existing systems that currently require heavier hardware to perform this kind of gesture recognition.
Technology that can read hands, facial expressions, and other signs together could enable a rich kind of communication unlike any other. That said, while this is a great breakthrough, there is still a long way to go before machines truly understand sign language.
For more updates and the latest tech news, keep reading iTMunch!