Google unveiled SignGemma, an AI model designed to translate sign language into spoken-language text, at Google I/O 2025. The tool is currently in testing and available to developers and a limited group of users, with a wider rollout expected by the end of the year.
Sign language is an essential means of communication for the millions of Deaf and hard-of-hearing people worldwide, yet it often creates barriers in everyday interactions with people who do not know it. By providing real-time sign-language-to-text translation, Google’s new AI project SignGemma aims to improve accessibility and inclusion globally.
During the keynote, Gemma Product Manager Gus Martins introduced SignGemma, describing it as Google’s “most capable sign language understanding model ever.” According to Martins, the project’s open-model structure and its ability to deliver accurate translations in real time set it apart from other efforts.
“We are excited to present SignGemma, our innovative open model for understanding sign language, which will be available later this year,” Martins said. “We can’t wait for developers and the Deaf and hard-of-hearing communities to take this foundation and build with it because it’s the most capable sign language understanding model ever.”
SignGemma currently performs best at translating American Sign Language (ASL) into English. Google has said, however, that the model is trained on a variety of sign languages and plans to add support for more in the future.
The introduction of SignGemma is part of Google’s broader effort to prioritize accessibility in its AI technologies. The company unveiled a number of inclusivity-focused updates at this year’s I/O conference, including deeper AI integration in Android’s TalkBack feature. To improve the Android experience for blind and low-vision users, users will now be able to ask follow-up questions about what’s on their screen and receive AI-generated descriptions of images.
Google also announced improvements to Chrome, including automatic optical character recognition (OCR) for scanned PDFs, which lets screen-reader users read and search previously inaccessible documents. Chromebooks are gaining a new Face Control feature that allows users to operate their computer with head movements and facial expressions, another step in Google’s goal of empowering every user.
Google is taking a collaborative development approach to ensure SignGemma is both respectful and useful. The company is inviting developers, researchers, and members of the global Deaf and hard-of-hearing communities to test the tool and share feedback.
An official DeepMind post on X said, “We are excited to present SignGemma, our groundbreaking open model for sign language understanding. As we prepare for launch and beyond, your distinct experiences, insights, and needs are essential to making SignGemma as beneficial and impactful as possible.”
With SignGemma, Google is not only advancing its AI capabilities but also bridging the gap between Deaf and hearing communities. As it approaches public release, the technology has the potential to transform accessibility and communication in the digital era.