Google develops AI to detect sign language in video calls and help deaf and hard of hearing people
Google is aware that video calls have become one of the most useful and necessary tools of what, somewhat euphemistically, we call the New Normal. For this reason, and to support people who communicate in sign language, it has developed an AI that detects signing in real time to assist them during calls.
Video calling systems naturally prioritize the audio channel: they typically include a feature that highlights whoever is speaking aloud, which puts deaf and hard of hearing people who communicate in sign language at a disadvantage.
In response, Google presented its new computer vision system at ECCV 2020. Although it requires somewhat more CPU usage, it is a necessary feature to make this technological age accessible to everyone.
Google’s new tool is based on PoseNet, a pose estimation model that tracks body movement using AI-derived motion vectors to interpret what the person on screen is doing.
As soon as the AI detects that these movements correspond to sign language, it highlights the user who is signing. Otherwise, the software classifies the movements as being of some other nature and does not prioritize that participant.
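The article does not go into implementation details, but the core idea of turning pose keypoints into a "is this person signing?" signal can be sketched roughly as follows. This is a minimal, illustrative Python sketch, assuming keypoints already come from a PoseNet-style 17-point model; the wrist and shoulder indices, the motion threshold, and the smoothing window are assumptions made for illustration, not Google's published parameters.

```python
# Illustrative sketch: given per-frame pose keypoints (e.g. from a PoseNet-style
# model), estimate how much the hands are moving and flag frames that look like
# signing. Indices, threshold, and window size are assumptions, not Google's values.
import numpy as np

# Assumed joint indices in a 17-point PoseNet-style layout.
LEFT_SHOULDER, RIGHT_SHOULDER = 5, 6
LEFT_WRIST, RIGHT_WRIST = 9, 10

def signing_activity(keypoints: np.ndarray,
                     threshold: float = 0.15,
                     window: int = 10) -> np.ndarray:
    """keypoints: array of shape (frames, 17, 2) holding (x, y) per joint.
    Returns one boolean per frame, True where recent hand motion
    (normalized by shoulder width) exceeds the threshold."""
    # Frame-to-frame displacement of both wrists, summed per frame.
    wrists = keypoints[:, [LEFT_WRIST, RIGHT_WRIST], :]
    velocity = np.linalg.norm(np.diff(wrists, axis=0), axis=-1).sum(axis=-1)

    # Normalize by shoulder width so the signal does not depend on how
    # close the person sits to the camera.
    shoulder_width = np.linalg.norm(
        keypoints[1:, LEFT_SHOULDER, :] - keypoints[1:, RIGHT_SHOULDER, :], axis=-1
    )
    motion = velocity / np.maximum(shoulder_width, 1e-6)

    # Smooth over a short buffer of frames before thresholding, so a brief
    # wave or scratch does not immediately trigger the detector.
    kernel = np.ones(window) / window
    smoothed = np.convolve(motion, kernel, mode="same")
    return np.concatenate([[False], smoothed > threshold])
```

In the real system a trained classifier, rather than a fixed threshold, would presumably make the final decision, but the sketch shows why normalized hand motion is a usable cue regardless of how far the signer sits from the camera.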
According to Google, this new function achieves up to 80% accuracy in detecting sign language in real time, with a latency of just 0.000003 seconds, rising to 83.4% with a buffer of 50 video frames. The system also improves as it learns from its user: after some time in use, it reaches up to 91.5% accuracy in just 3.5 milliseconds.
The system is open source and available on GitHub, so other developers can experiment with it and help improve it.