Currently, the only way to do low-level drawing is to spawn a canvas tag. While this is an efficient way to draw something on the screen, and grants us access to things like WebGL, it feels too removed from the rest of the tools we get to use.
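For concreteness, here is the status quo as a minimal sketch using nothing but standard DOM APIs: the only handle we get on low-level drawing is a canvas element and a context pulled off of it.

```ts
// Status quo: low-level drawing means spawning a <canvas> and asking it for a context.
const canvas = document.createElement("canvas");
canvas.width = 640;
canvas.height = 480;
document.body.appendChild(canvas);

// Immediate-mode 2D drawing. (A canvas holds exactly one context type;
// you would call getContext("webgl") on a separate canvas to get GL.)
const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.fillStyle = "rebeccapurple";
  ctx.fillRect(10, 10, 100, 100);
}
```

Everything drawn this way lives in its own silo of pixels, disconnected from CSS and the rest of the DOM.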
More specifically, we have a high-level styling system called CSS, and a low-level drawing system called Canvas. There is no “mid-level” where we can, say, keep using most of the existing CSS properties but change how one particular thing is drawn to the screen. You either make do with what CSS provides, or reinvent the world inside a canvas tag. The latter is suicidal if you care about accessibility, unless you create a cavalcade of invisible DOM that duplicates what you are drawing on the canvas closely enough that screen readers still work.
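A rough illustration of that “cavalcade of invisible DOM” workaround (a sketch, not a recommendation; the data and labels are made up). It leans on the fact that a canvas element’s fallback children are exposed to assistive technology even though they never appear on screen.

```ts
// Sketch: every interactive thing painted onto the canvas gets a shadow copy
// in the DOM so that screen readers still have something to announce.
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

// Hypothetical data describing what we're about to draw.
const buttons = [
  { label: "Play", x: 20, y: 20, w: 120, h: 40 },
  { label: "Quit", x: 20, y: 80, w: 120, h: 40 },
];

for (const b of buttons) {
  // The "real" rendering...
  ctx.fillStyle = "#ddd";
  ctx.fillRect(b.x, b.y, b.w, b.h);
  ctx.fillStyle = "#000";
  ctx.fillText(b.label, b.x + 10, b.y + 25);

  // ...and its invisible duplicate: fallback content inside <canvas>
  // is part of the accessibility tree but is never painted.
  const proxy = document.createElement("button");
  proxy.textContent = b.label;
  canvas.appendChild(proxy);
}
```

Keeping the two in sync by hand is exactly the kind of busywork a mid-level API could make unnecessary.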
The heart of the issue is that canvas is semi-broken - it should be possible to grab any element off the DOM, register some event handlers, and draw things to the screen at the appropriate time. Every lowly div should be a canvas. More importantly, it should be possible to use that canvas to draw other elements in the DOM. For example, you could have a game that draws 3D graphics to WebGL, and then uses some new API to render ordinary DOM onto, say, a computer screen inside the game. Or, more mundanely, you could have a Masonry plugin that uses canvas for all its layout instead of having to sample and mutate CSS properties in a tight loop. (We could also package up bundles of immediate-mode drawing as our own custom CSS properties.)
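To make the shape of the wish concrete, here is a purely hypothetical sketch of what per-element drawing hooks might look like. Every API name below is invented (onpaint, drawElement); nothing like it ships in any browser today.

```ts
// Hypothetical API: any element can hand its painting over to script,
// while CSS keeps handling layout, fonts, and the accessibility tree.
declare global {
  interface Element {
    // Hypothetical hook: called whenever the browser would normally paint this element.
    onpaint?: (ctx: CanvasRenderingContext2D, box: DOMRect) => void;
  }
}

const card = document.querySelector<HTMLDivElement>(".card")!;

card.onpaint = (ctx, box) => {
  // Custom border drawn in place of the element's normal background/border painting.
  ctx.strokeStyle = "hotpink";
  ctx.strokeRect(0, 0, box.width, box.height);

  // Hypothetical: ask the browser to paint another DOM subtree into this context,
  // e.g. ordinary HTML rendered onto a screen inside a WebGL game.
  // ctx.drawElement(document.querySelector("#hud")!, 0, 0);
};

export {};
```

The point of the sketch is the division of labor: the element stays a normal, styleable, accessible node, and script only intervenes at paint time.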
I do not believe this would be easy to implement - the web already has a defined rendering model, with browser-specific quirks here and there. Browsers already spend a lot of time on rendering, and we would risk designing an API that makes it impossible for them to implement future optimizations wherever custom drawing is enabled. Still, I feel like web developers would benefit from a more flexible rendering model. Does anyone else agree with me, or am I just talking out of my butt?