A Defense of Declarative Content

I’m concerned about a trend I’ve been seeing over the last year or two, both on Specifiction/WICG and in the edge-browser community: leaping to add more JS APIs to the platform while giving no real consideration to actual HTML attributes, and especially not daring to entertain the idea of another declarative format. (It’s worth noting that, as a platform, we’ve been around this block before.)

It’s not that exposing functionality to JavaScript is bad. Service Worker made sense as a plain Extensible Web component: a base on which any future declarative standard can be polyfilled (and on which any logic too complex for a sensible declarative solution can be scripted by the end developer). The trouble is the way Service Workers are being looked at as a role model for a scripting-only culture, where every feature is surfaced to the DOM and nowhere else. Under this mindset, basic, universal tasks that could have been expressed perfectly cleanly as a declarative attribute in HTML now require the page to import or define scripts to act out its intentions.
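To make the contrast concrete, take a basic, universal task like lazy-loading an image. Declaratively, it’s one attribute and the UA owns the behavior; script-only, every page has to wire up the machinery itself. A minimal sketch (the attribute name and the IntersectionObserver approach here are illustrative, not a specific proposal from this thread):

```html
<!-- Declarative: one attribute, and the UA handles it. -->
<img src="photo.jpg" loading="lazy" alt="A photo">

<!-- Script-only: every page re-implements the same universal task. -->
<img data-src="photo.jpg" alt="A photo">
<script>
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        // Start the real load only once the image nears the viewport.
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    }
  });
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
</script>
```

The script version isn’t long, but it has to be shipped, maintained, and gotten right by every page that needs the behavior, rather than once by the UA.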

We can’t just foist all the burdens of implementation on the page, without regard for how widespread the need is. If there were no value in letting the UA handle tasks specified by declarative content, the W3C could just redefine the content of every page to be <canvas></canvas><input type="file"><style>*{position:absolute;width:100%;height:100%}</style> followed by a list of scripts, throw out every spec that isn’t Fetch, Service Worker, UI Events, Pointer Events, or WebGL, and hand everything else to the Technical Architecture Group in a big brown envelope labeled “your problem now”.

We can’t build the web out of polyfills. The value in specifications isn’t just making impossible things possible; it’s letting simple tasks have simple implementations. The next time you find yourself disregarding a proposal because it could be implemented with a large amount of code calling out to Fetch, Canvas, and Web Components, drive to a bookstore, take a look at that massive shelf sagging under the weight of all the books, IDEs, and libraries you need to read just to implement a simple CRUD app on Windows, X11, and/or OS X, and ask yourself if you really want to go back to that.

There’s a two-fold part to this, isn’t there?

  • If we want decent new native behaviours, exposing all the low-level behaviour needed to implement them is a requirement.
  • The browsers can then adopt this as native behaviour if there is enough user interest in using it.

Components really help with this, in that they define a framework-agnostic way of attaching semantic meaning that can mean something very specific within a company, framework, or country. If there is enough interest in a component, adopting it natively would essentially just be adding it into the browser base.

This is something the TAG and the W3C as a whole are pushing for: expose primitives for library writers, let the dust settle, and then declarative content can come out of that.

This is probably relevant: https://youtu.be/7BpsUYn6Z2o?t=1h7m55s


Yes, and this is the principle behind the Extensible Web Manifesto. I get it. I’m pushing for that same stuff, too.

My point is, we’re starting to let the fact that we’ve done that first part (or even that we could do it) stop us from doing the second part, where “the browsers can then adopt this as native behaviour if there is enough user interest in using it.” (And if we’re not having that problem yet, then without vigilance on this point we’re going to, because this is the natural apathy that all DIY ecosystems trend toward as their thought leaders move higher and higher up a stack of pre-built specifics they’re comfortable with.)

As an analogy:

Say some young idealists open up a free public workshop in the local community. They don’t have many tools - just a hacksaw, a hammer, a screwdriver, and a blowtorch. Every now and then, in the early days, they realize they don’t have something important, like a pair of tongs, so they go to W3sley’s Hardware down the block and buy one.

But, as the shop goes on, it becomes increasingly self-reliant. Rather than go out to the hardware store to buy a new clamp, the staffers build one by shaping a couple pieces of scrap iron with the blowtorch and chiseling grooves by hitting the screwdriver with the hammer. Sure, it’s heavy, and sometimes you cut yourself on a rough groove, but it has this neat ratcheting behavior that you couldn’t get with one of those clamps they would have sold you at the store. Eventually, all the tools the workshop uses are being built like this.

But then a funny thing happens: new people stop coming to the shop, because when they walk in the door, they get hit in the face with flying cinders as the regulars snark “looks like somebody doesn’t know not to stand under the Gruntacetymalizer duct!” They ask where they can get a dremel, and the person behind the counter rolls their eyes and says “People keep trying to give us dremels. You don’t need a dremel. We built a handheld circular saw ages ago, it’s basically the same thing and it works fine.” The new prospect takes a look at the circular saw behind the counter, sees the burnt holes in the plastic and the blood still wet on the blade from the last new customer to tear their arm off trying to use it, and says “uh, OK, thanks, but I think I’m going to go. Now.”

After seeing how bad things are at the community workshop, they head out and buy a dremel from the hardware store. Rather than go to W3sley’s (which went out of business years ago, after the workshop staffers stopped shopping there), they get the dremel from their local corporate iDepot, which charges them $300 for the tool, requires a $50 subscription every year to sell the things they build with it, and states that it’s a breach of contract to buy plastic from any other hardware store - but it’s still better than going back to the spiky, decaying madhouse that is their local workshop.

Years later, the workshop blows up in a freak accident as their most experienced staffer was busy cutting a second nozzle into the Gruntacetymalizer’s fuel tank. At the wake, a few young idealists, who never really went to the community workshop all that much, gather around the circle and talk about some of the cool things they heard this place used to do. As they talk, they grow increasingly enamored with these romantic notions, and they complain about how creepy the iDepot is and how much they over-charge for tools, and then one of them gets a clever idea:

“Hey, we should open up a free public workshop!”

Another thing - when you offload behaviors to page implementations, it’s entirely possible (often likely) that this will make it harder, down the line, to extend the natural behaviors of the platform - just look at any time Microsoft tried to extend Windows with a new behavior, only to discard it because it interfered with an existing third-party extension. When you say “this problem space is to be handled by the end developer”, you implicitly shut the door on having the platform neatly solve the problem for everybody.

To borrow a very good formulation by @eeeps regarding the design of <picture>:

We mostly start with JS APIs rather than declarative ones because they force us to think about the problem better. It’s extremely easy, when designing a declarative API, to make things “just work” for the use-cases you envision, without addressing slightly different (or very different, unforeseen) use-cases. Forcing everything into a JS API first means that you have to think things through properly; if you need a new tweak to a feature, it’s harder to justify “just add another argument to that function”.

Declarative APIs by their very nature end up less powerful than JS ones. And that’s fine; slightly less powerful but much more usable solutions are super-valuable. But trying to jump right into the declarative form warps solutions, as we see time and time again, and it often ends up being very difficult to generalize the features into a JS API later.

So we often start with JS APIs now, and wait for the community to (a) show they’re sufficiently interested in the API in the first place, and (b) explore and find the cases that are the intersection of “common” and “difficult/fiddly” that are high-value for being translated into declarative APIs.


I think I see what you’re saying, but I don’t think it’s the writing of a JS API that forces spec authors “to think things through properly”: I think it’s frequent for authors to “just add another argument to that function”. (See: exactly what Web Components did to createElement for Custom Elements, because it’s not as if there were some obvious broader approach to the second argument they hadn’t considered.) There are lots of other specs I’ve seen, at frighteningly-close-to-widespread-implementation stages (including legacy ones we’ve suffered through, like XHR), that are just as narrow-sighted as what you’re describing as a declarative-API-specific phenomenon. (Imagine if XMLHttpRequest had actually required the content to be XML.)
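(For reference, the second argument in question: Custom Elements routed customized built-in elements through createElement’s options bag. Roughly, with hypothetical element names:)

```js
// Autonomous custom element: the tag name itself carries the meaning.
customElements.define('fancy-button', class extends HTMLElement {});
document.createElement('fancy-button');

// Customized built-in element: Custom Elements reused createElement's
// second argument as an options bag - "just add another argument".
customElements.define('super-button',
  class extends HTMLButtonElement {}, { extends: 'button' });
document.createElement('button', { is: 'super-button' });
```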

I think the right way to “think about the problem better” isn’t to start from the perspective of a JS API, but to aggressively solicit and consider Use Cases and Requirements (as I believe somebody in the RICG wrote an essay on; I might be thinking of this one by Mat Marquis) - and, if the Use Cases keep coming in and the Requirements stay in roughly the same place, then yes, specify an imperative API. However, if there’s a declarative solution that neatly solves a large swath of the Use Cases and Requirements, I think it’s very important that we consider it before we go codifying a JS implementation we’ll never be able to refactor.


Yes, all that’s true. But, speaking from experience (6+ years as a standards dev): when you start with the declarative stuff, you almost always end up baking bad patterns in and making your job much more difficult when you eventually want to define a “lower-level” JS API. Going the other way tends to produce better results much more often.

There’s nothing theoretically privileged about one direction versus the other. But in practice, one direction works way better.