PointerEvent offsetX/offsetY keep changing sporadically when currentTarget changes

I know the behavior is as the spec defines, but it is awkward and strange.

What I want to do is detect the position of a pointer within a parent object, but when the pointer enters child objects, the event.offsetX and event.offsetY values are relative to the new currentTarget instead of the event’s original target.
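To make this concrete, here is a rough simulation of how the browser derives offsetX/offsetY (the rects and numbers are made up for illustration; real offsets are measured against the target's padding box, but the subtraction is the essential part):

```javascript
// Hypothetical simulation: offsetX/offsetY are computed against
// whichever element is event.target, so they jump when the pointer
// moves from the parent's background onto a child.
function offsetFor(targetRect, clientX, clientY) {
  // Roughly: client coordinates minus the target's box origin.
  return { x: clientX - targetRect.left, y: clientY - targetRect.top };
}

const parentRect = { left: 100, top: 100 };
const childRect = { left: 150, top: 150 }; // child sits inside the parent

// Same physical pointer position (client 160, 160), two different targets:
const overParentBackground = offsetFor(parentRect, 160, 160); // { x: 60, y: 60 }
const overChild = offsetFor(childRect, 160, 160);             // { x: 10, y: 10 }
```

The pointer has not moved, but the reported offsets change because the reference box changed.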

This is honestly strange behavior when you think about it from an intuition/simplicity point of view.

Now someone who wanted to simply track the position of the pointer within an object has to write some strange code and do math just to achieve that.

Could there be some other way to specify that offsets should always be calculated relative to event.target?

For comparison, in the case of an element taking up the full width and height of the window, we can just rely on event.clientX and event.clientY to achieve the desired effect. But when the element is not the full size of the window, that’s where it gets complicated.
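A sketch of why that works (`pointerPositionIn` is a hypothetical helper, not a DOM API): element-relative coordinates are the client coordinates minus the element's bounding-rect origin, and for a full-window element that origin is (0, 0), so the subtraction is a no-op:

```javascript
// Convert viewport (client) coordinates to element-relative coordinates.
function pointerPositionIn(rect, clientX, clientY) {
  return { x: clientX - rect.left, y: clientY - rect.top };
}

// Full-window element: rect origin is (0, 0), so clientX/Y pass through.
const fullWindow = pointerPositionIn({ left: 0, top: 0 }, 120, 80); // { x: 120, y: 80 }

// Element inset into the page: now the subtraction actually matters.
const inset = pointerPositionIn({ left: 40, top: 30 }, 120, 80);    // { x: 80, y: 50 }
```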

the event.target IS the child object - nothing to do with currentTarget or “original target”. the events are fired on the child object, and bubble up to the parent object. this behavior is the same for mouse events etc as well.

you have to either:

- do the math yourself (i.e. check for each event you’re receiving whether or not the event.target is the actual parent element you’re interested in, and if not, work out what the coordinates relative to that element are; ideally by checking clientX/clientY and subtracting the offset of the parent element from those), or
- provided the child elements aren’t meant to be interactive, make them “transparent” to events (mouse events, pointer events) by using the CSS pointer-events:none rule (which, despite the name, has nothing to do with the JS pointer events spec)
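The do-the-math-yourself option can be sketched roughly like this (`makeParentTracker` and the `#parent` selector are hypothetical; `getBoundingClientRect`, `clientX`, and `clientY` are real DOM APIs):

```javascript
// Returns a handler that ignores event.target entirely and derives
// parent-relative coordinates from the viewport-relative clientX/clientY.
function makeParentTracker(parent) {
  return function onPointerMove(event) {
    const rect = parent.getBoundingClientRect();
    return { x: event.clientX - rect.left, y: event.clientY - rect.top };
  };
}

// In a browser you would wire it up something like:
//   const parent = document.querySelector('#parent');
//   const track = makeParentTracker(parent);
//   parent.addEventListener('pointermove', (e) => console.log(track(e)));
//
// The second option, when the children need no interaction of their own,
// is to make them transparent to hit-testing so event.target stays the
// parent, e.g.:  child.style.pointerEvents = 'none';
```

This recomputes the rect on every event, which is simple but not free; caching the rect and refreshing it on scroll/resize is a common refinement.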

Hello Patrick! Exactly! I’m just mentioning that this is undesirable complexity.

Sure, all information we need is there, and we can do calculations.

Now, imagine someone new to the web. They write code that works on a full-screen canvas in their very first application. Then the moment they decide to put the canvas inside a smaller area of the screen instead, it all falls apart in a very confusing way.

Sure, as an experienced user of web technology, I know what needs to be done as far as those calculations go.

I’m just voicing that it is unnecessarily complex, when I’m willing to bet that most of the time the user just wants to observe the mouse position on the element where the event listener is defined, and not on all the possible child elements inside the element that has the listener.

I wish it was more intuitive, so that the web could be easier for anyone learning web technology, instead of making them have a more difficult time.

Is it possible to make new APIs that are more intuitive?

In Qt for example, one can set an “invisible plane” inside a QML element specifically for interaction events, and the mouse position (or touch position) events are relative to this one plane, not to every current or future child of the ancestor. This makes it intuitive.

I wonder, if someone who is just a regular web developer like me (not affiliated with any big companies or having any authoritative say in web standards) created a more intuitive API on top of the current confusing ones, do you think browser vendors might be willing to consider adopting something similar as a builtin API?

Is there any precedent of libraries/tools made by people who never worked at web-influencing companies like Mozilla, Apple, Microsoft, or Google, but at little-known companies instead, whose libraries had an impact on web APIs?

I’m wondering what it takes to have an impact. For example, John Resig’s jQuery had an impact, but he also worked at Mozilla.