Hey Chaals,
thank you for the work you put into this!
> Interactivity done in Javascript currently provides no clear way for two libraries, two components, an extension (which are important parts of most non-MS browser ecosystems), or even the browser itself, to find out what interactions have been “claimed” - nor what they actually do.
Sounds like we’re (still) waiting for IndieUI to become a thing. It’s the only (proposed) infrastructure I know of that could cater to all of the above (libraries, components, browser extensions). It would also (finally) allow us to map keyboard/touch/mouse input freely to actual application interactions, instead of hotwiring an interaction to a specific input.
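To make that contrast concrete, here is a minimal sketch. The element names are made up, and the `dismissrequest` event is only what the IndieUI: Events draft proposed, not something shipping in browsers today:

```js
const dialog = document.getElementById('settings'); // hypothetical element

// Today: the “dismiss” interaction is hard-wired to one specific input.
document.addEventListener('keydown', (event) => {
  if (event.key === 'Escape') {
    dialog.hidden = true;
  }
});

// With an IndieUI-style intent event, the user agent decides which key,
// gesture, or voice command expresses the intent; the page only handles it.
dialog.addEventListener('dismissrequest', (event) => {
  dialog.hidden = true;
  event.preventDefault(); // signal to the UA that the request was handled
});
```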
Regarding the proposal:
> The current HTML specification assumes all shortcuts assigned will be key combinations. This does not meet the reality of deployed platforms, many of which either do not have a keyboard or also permit other activations more appropriate for a shortcut.
The `accesskey` attribute contains its context in the name: *key*. While your observation is correct, I don’t think `accesskey` should be the entity to address all of that. Maybe IndieUI is a way to go?
> A browser might apply this by adding a voice command “Написать” to a grammar of expected commands for which an event can be fired, by listening for keypresses of the Cyrillic letters “Н” (equivalent to the latin “n”) or “П” (equivalent to latin “P”) or one of the letters proposed by the author, perhaps with a standard modifier.
I’m not a big fan of complex attribute values, and I don’t think it’s very intuitive to couple voice input with keyboard input. Wouldn’t voice input have the same requirement regarding a list of words, to allow conflict resolution? And why not simply use the element’s label (text content)?
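To illustrate that last point, a rough and purely hypothetical sketch of resolving a recognized voice command against visible labels instead of against extra tokens in the attribute value:

```js
// Hypothetical helper: match a spoken command against elements' visible
// labels rather than a separate list of words in the accesskey attribute.
function findCommandTarget(spokenText) {
  const wanted = spokenText.trim().toLowerCase();
  for (const el of document.querySelectorAll('[accesskey]')) {
    const label = el.textContent.trim().toLowerCase();
    if (label && label === wanted) {
      return el;
    }
  }
  return null;
}

// e.g. the spoken command “Написать” would match
// <button accesskey="n">Написать</button> without any extra attribute value.
findCommandTarget('Написать')?.click();
```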
> For gestures, it is useful to present an animation of the gesture. Can we enable this?
I have trouble thinking of a scenario where I’d want a global gesture to do something like press a button. And even then, I’d likely go with a custom implementation for a nicer visual appearance, except maybe for “swipe from the right edge of the screen” to open an off-screen menu, or something like that. I haven’t put much thought into this yet, but it feels weird.
> To tell the user what the shortcut key is, this script explicitly adds the browser-described shortcut activation to the button’s label:
Wouldn’t it be simpler if we could do that from CSS? I know `accessKeyLabel` is a property, not an attribute, but this looks more appealing than mutating the DOM:
```css
button[accesskeylabel]:after {
  content: "(" attr(accesskeylabel) ")";
}
```
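For comparison, this is roughly what the script-based alternative amounts to (a sketch only; `accessKeyLabel` is an IDL property defined in HTML, but browser support for it varies):

```js
// Append the browser-chosen shortcut to each button's visible label.
// accessKeyLabel is an IDL property, not a content attribute, so it is
// only reachable from script, not from CSS attr().
for (const button of document.querySelectorAll('button[accesskey]')) {
  if (button.accessKeyLabel) {
    button.append(` (${button.accessKeyLabel})`);
  }
}
```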
> The accesskey attribute’s value is used by the user agent as a guide for creating a shortcut that activates or focuses the element.
Why activate or focus? Who decides which action to take?
> If the user agent has a stored user preference for activation of the element, then skip to the fallback step below.
How would that work?
> (This is a fingerprinting vector.)

This links to http://chaals.github.io/accesskey/introduction.html#fingerprinting-vector, which does not exist.