A set of new features for Android could alleviate some of the difficulties of living with hearing impairment and other conditions. Live transcription, captioning and relay use speech recognition and synthesis to make content on your phone more accessible — in real time.
Announced today at Google’s I/O event in a surprisingly long segment on accessibility, the features all rely on improved speech-to-text and text-to-speech algorithms, some of which now run on-device rather than sending audio to a data center to be decoded.
The first feature highlighted, live transcription, had already been mentioned by Google. It’s a simple but very useful tool: open the app and the device listens to its surroundings, displaying any speech it recognizes as text on the screen.