I’m working on research to improve the experience of screen reader users. Here is its description and a working demo:
The core idea is to use two distinctive voices for different content types: one for metadata (such as the element name or meaningful attribute names) and another for the content itself. The goal is to improve content comprehension by adding a new channel of information carried by the voice itself. It works like code syntax highlighting, but with voice.
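A minimal sketch of the idea, using the browser SpeechSynthesis API. The element shape (`tag`, `attrs`, `text`) and the segmentation rules here are my own illustrative assumptions, not a finalized design; a real implementation would pick the two voices from `speechSynthesis.getVoices()`.

```javascript
// Split an element description into tagged segments:
// "metadata" (element name, attribute names) vs. "content" (attribute
// values and text). The element shape here is a simplified assumption.
function segment(el) {
  const segments = [{ kind: 'metadata', text: el.tag }];
  for (const [name, value] of Object.entries(el.attrs || {})) {
    segments.push({ kind: 'metadata', text: name });
    segments.push({ kind: 'content', text: value });
  }
  if (el.text) segments.push({ kind: 'content', text: el.text });
  return segments;
}

// Queue each segment with the voice assigned to its kind, so metadata
// and content are spoken in two distinctive voices.
function speakSegments(segments, voices) {
  for (const s of segments) {
    const u = new SpeechSynthesisUtterance(s.text);
    u.voice = voices[s.kind]; // distinct voice per content type
    speechSynthesis.speak(u);
  }
}

// Example: a link with a title attribute produces four segments,
// alternating between the metadata voice and the content voice.
const segs = segment({
  tag: 'link',
  attrs: { title: 'Home page' },
  text: 'Go home',
});
```

In a browser this would be driven by something like `speakSegments(segs, { metadata: voiceA, content: voiceB })`, where `voiceA` and `voiceB` are two voices chosen from `speechSynthesis.getVoices()`.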
In the first stage, I need help testing this hypothesis with experienced screen reader users to measure the effectiveness of the solution. After that, I plan to finalize the requirements and propose changes to web standards and the SpeechSynthesis API specification. I would appreciate any help with this.