I guess we have to separate the immediate scenario (firesTouchEvents) from the longer-term direction. Would you agree that there's utility to the general notion of exposing additional device information in this way? This proposal has been designed in discussion with the Pointer Events Working Group, where we've long said we need a place to hang additional device information. In particular, we agreed to explore generalizations of navigator.maxTouchPoints and pointerType in a future version of the spec. By hanging a new API off of UIEvent, we get the same benefits for MouseEvent, TouchEvent and PointerEvent.
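As a rough sketch of what this could look like from a developer's point of view (assuming, as a hedge, that the capability bits hang off a sourceCapabilities object on UIEvent, so the same check applies uniformly to mouse, touch and pointer events):

```javascript
// Hypothetical sketch: because the capabilities object hangs off UIEvent,
// one function can describe the source of a MouseEvent, TouchEvent or
// PointerEvent alike. The sourceCapabilities/firesTouchEvents names are
// assumptions based on the proposal under discussion.
function describeEventSource(event) {
  const caps = event.sourceCapabilities;
  if (!caps) return 'unknown source'; // browsers without the API
  return caps.firesTouchEvents
    ? 'device that also fires touch events'      // e.g. a touchscreen
    : 'device that does not fire touch events';  // e.g. a physical mouse
}
```

The point of the design is that this one check works regardless of which event model the page listens with.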
Regarding the foreseeable future for iOS: Apple has publicly indicated some support for adding this to WebKit, and I've had private discussions with Safari engineers that have influenced the design. Indeed, one of my goals here is to provide a path for exposing more device information in an event-model-agnostic fashion, so that the entire future of input on the web isn't tightly coupled to Pointer Events, which Apple is opposed to implementing.
Regarding the immediate-term issue of identifying mouse events derived from touch: yes, Pointer Events solves that, and we're definitely actively implementing Pointer Events in Blink (and I shipped touch-action over a year ago now). But the web evolves incrementally, and I believe (given the lack of universal support for Pointer Events) that it's best to give web developers incremental solutions to small problems that don't force them to rewrite all of their event-handling code. There's not yet evidence that developers are willing to depend on the Pointer Events polyfill in large-scale production, so I don't think it's practical to tell developers that the only way to solve this problem properly is to switch to the pointer event API.
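To illustrate the incremental path (again a hedged sketch, assuming a sourceCapabilities.firesTouchEvents shape): a developer could keep an existing mouse handler as-is and add a one-line guard against the compatibility mouse events that browsers synthesize after touch, rather than porting everything to pointer events:

```javascript
// Hypothetical incremental fix: wrap an existing mousedown handler with a
// guard that skips mouse events derived from touch (which the page already
// handled via its touch handlers). No other event code needs to change.
function makeMouseDownHandler(doMouseAction) {
  return function onMouseDown(event) {
    const caps = event.sourceCapabilities;
    if (caps && caps.firesTouchEvents) return; // synthetic mouse event: skip
    doMouseAction(event);
  };
}
```

In a page this would be attached with element.addEventListener('mousedown', makeMouseDownHandler(existingHandler)), leaving the rest of the event-handling code untouched.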