Yup. (that would have been it, but the post has to be 20 characters due to discourse being an
How do we go about fixing this one? Is there a spec bug tracker that anyone can sign up for?
As far as which elements ought to be “black preparser boxes,” here are a few use cases:
<object> fallback content
```html
<object type="image/svg+xml">
  <img src="raster-fallback.png">
</object>
```
Video/audio unsupported playback format fallbacks
```html
<video src="video-file.mp4">
  <a href="video-file.mp4">
    <img src="video-file-preview.jpg">
  </a>
</video>

<audio src="audio-file.mp3" controls>
  <a href="audio-file.mp3">
    <img src="download-this-track.png">
  </a>
</audio>
```
Moving away from GIF for short video loops
```html
<video src="short-loop.webm" autoplay loop muted playsinline>
  <img src="fallback-loop.gif">
</video>
```
Inline SVG fallback
```html
<svg>
  <path />
  <rect />
  <!-- etc. -->
  <foreignObject height="0" width="0" display="none">
    <img src="raster-version.png">
  </foreignObject>
</svg>
```
Inline MathML fallback
```html
<math>
  <semantics>
    <mrow> … </mrow>
    <annotation-xml encoding="text/html">
      <img src="math-not-enabled.svg">
    </annotation-xml>
  </semantics>
</math>
```
```html
<canvas>
  <img src="no-js-fallback.gif">
</canvas>
```
`<canvas>` actually behaves this way already, since it works like a reverse `<noscript>`.
There’s a WONTFIXed spec bug, which we may want to revisit.
In any case, I suggest opening a Blink bug to avoid preloading these images so that they would be loaded last, limiting the damage they do.
I guess the main problem here is that usage of these patterns is not very common, so the benefit from changing the spec and implementations (and adding some complexity) is not very high. (But one could also say that this is a chicken-and-egg situation.)
Yeah, this is why I think some sort of “don’t preparse me” flag might do better. Instead of having a more complicated preparser logic flow which would forever lag behind any new additions to HTML, it would provide a safety valve for whatever unusual applications we can’t predict.
Can’t you remove the `src` and terminate the requests you don’t want to be sent with a Service Worker?
Yes, that’s a good way to do it as well.
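For instance, a minimal Service Worker sketch along those lines; the `defer=1` query-parameter convention and `isDeferredImage` helper are made up for illustration:

```javascript
// Helper split out from the fetch handler so the matching logic can run
// anywhere; the "defer=1" query-parameter convention is purely illustrative.
function isDeferredImage(url) {
  const u = new URL(url, 'https://example.com/');
  return /\.(png|jpe?g|gif|webp)$/i.test(u.pathname) &&
         u.searchParams.get('defer') === '1';
}

// In the Service Worker file itself:
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', (event) => {
    if (isDeferredImage(event.request.url)) {
      // Answer with an empty 204 so the network request never goes out;
      // the page can fetch the real bytes later when it actually wants them.
      event.respondWith(new Response('', { status: 204 }));
    }
  });
}
```

The nice part is that the markup keeps its `src`/`srcset` attributes, so crawlers and non-SW browsers still see a normal image.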
True, but would it hurt to have more solutions? Especially if some of them could mean native support that lets us avoid removing the “src” and “srcset” attributes for it to work.
Ilya, while I agree with you that we should expose low-level primitives on top of which people can solve certain kinds of lazy-loading problems without blocking on us, I don’t think that means browsers shouldn’t go into this space themselves one day, possibly sooner than you think.
My present worldview is that fetch and render-blocking behavior both have a coordination problem at their underpinning, and as a result our solutions to them have always been a bit underwhelming in their performance when used beyond the realm of web devs whose page is all single-domain code that they completely control (e.g., lol, say, goog).
Nat, great points. I’ll just make one – probably obvious – addition to the above: developers need to be able to influence this coordination problem.
Today, we treat everything the same: as a developer, when I include a script I forfeit all control over the data it fetches, its priority, the CPU cycles it uses, etc. Instead, it should be possible for me to isolate and sandbox these pieces, e.g. enforce byte limits, CPU use, etc. cgroups for web developers, please!
With that in place, developers can specify the policies that the UA can enforce on their behalf.
That’s a great idea, but the Service Worker would not do anything on first visit, right?
You can claim() the first visit as well. It’s racy, but it’s likely to work, at least for requests that are further down the page.
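Roughly like this; the stub `self` object only stands in for the real ServiceWorkerGlobalScope so the sketch is self-contained, since in the browser `self`, `skipWaiting()`, and `clients.claim()` are supplied by the platform:

```javascript
// Stub of the worker global scope, for illustration only; the browser
// provides the real one, and these two handlers are the actual pattern.
const self = {
  handlers: {},
  addEventListener(type, fn) { this.handlers[type] = fn; },
  skipWaiting: async () => 'skipped-waiting',
  clients: { claim: async () => 'claimed-clients' },
};

self.addEventListener('install', (event) => {
  // Activate immediately instead of waiting for older workers to retire.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  // Take control of pages loaded before this worker activated, including
  // the first visit. Racy for requests issued before activation, but
  // requests further down the page will hit the fetch handler.
  event.waitUntil(self.clients.claim());
});
```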
That’s really great, I didn’t know this!
I still have to find a way to migrate to HTTPS and work on Service Worker…
I assume the bigger issue here is actually being able to load the resource at the time we want, not necessarily triggering lazy loading at some predetermined, universal point in time.
That being said, I don’t necessarily think a declarative/non-JS solution is ideal, especially since it would have to be a little too opinionated (for my taste, anyway) to implement effectively and satisfy all the different possible use cases.
My take on this is that it may be more effective left up to JS. Perhaps add something like a `load()` method to the `HTMLImageElement` API that loads the resource based on (1) the HTML attributes on the `<img>` tag (like `srcset`) and (2) browser state (network availability, latency, resource size). The `load()` call would return a promise that is resolved or rejected on completion. Of course, `load()` could accept arguments to add even more custom functionality.
This way, the functionality remains flexible enough for all the different use cases. Being able to determine, as an engineer, exactly when the load happens sounds pretty darn sweet.
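A userland sketch of roughly what such a method could do; `loadImage`, the `data-src` convention, and `timeoutMs` are illustrative names, not anything spec’d:

```javascript
// Userland approximation of the proposed load() method: swap a deferred
// data-src into src and resolve once the bytes have actually arrived.
// loadImage, data-src, and timeoutMs are illustrative, not spec'd.
function loadImage(img, { timeoutMs = 10000 } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error('image load timed out')), timeoutMs);
    img.addEventListener('load', () => {
      clearTimeout(timer);
      resolve(img);
    }, { once: true });
    img.addEventListener('error', () => {
      clearTimeout(timer);
      reject(new Error('image failed to load'));
    }, { once: true });
    // Assigning src is what actually kicks off the request; the browser
    // still applies its normal srcset/source selection at this point.
    img.src = img.dataset.src;
  });
}
```

The call site can then gate `loadImage(img)` on whatever condition the page cares about: visibility, a user gesture, connection quality, and so on.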
[Proposal] Manually loading image sources based on preset conditions
On that front, the browser should prioritize in-viewport images over out-of-viewport ones, and download everything needed for visual completeness first. I’m not sure lazy loading would help here.
Why not combine media queries with lazyloadness?
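A userland sketch of combining the two; the 600px media query, the 200px margin, and the `data-src`/`nearViewport` names are all illustrative policy choices, not anything standardized:

```javascript
// Decide whether an element's box is within `margin` px of the viewport;
// split out so the geometry is testable without a DOM. In the browser,
// IntersectionObserver's rootMargin does this same check for us.
function nearViewport(rect, viewportHeight, margin = 200) {
  return rect.bottom >= -margin && rect.top <= viewportHeight + margin;
}

// Browser wiring: defer offscreen images only when the media query matches
// (an illustrative "small screens only" policy); everyone else loads eagerly.
if (typeof document !== 'undefined') {
  const defer = typeof IntersectionObserver !== 'undefined' &&
                matchMedia('(max-width: 600px)').matches;
  const imgs = document.querySelectorAll('img[data-src]');
  if (!defer) {
    imgs.forEach((img) => { img.src = img.dataset.src; });
  } else {
    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.dataset.src;
          io.unobserve(entry.target);
        }
      }
    }, { rootMargin: '200px' });
    imgs.forEach((img) => io.observe(img));
  }
}
```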
I feel like I’m way too late to the debate, but I’m disappointed that this feature seems to have stalled. Virtually all performance tools, and a good many talks and articles, promote lazy-loaded images as one of the biggest performance wins, and it has made a big, measurable difference on sites I’ve worked on. The lack of an implementation that falls back to loading the image normally feels like a big omission in browsers’ new performance APIs.
A simple boolean attribute is probably inadequate to the task, but if there’s still appetite for discussion, I’d like to explore in more detail whether a reasonably simple declarative API (sophisticated enough to deal with at least some of the nuances) is feasible.
WHATWG has a discussion on this topic: