With the rise of “prollyfill” shims implementing speculative standards, there’s an increasing risk of “stepping on each other’s toes” in both namespace and functionality.
The problem with conventional “feature sniffing” is that it can’t reliably determine, in a safe and non-destructive fashion, whether the implementation of a feature is precisely what the sniffer expects: feature detection written into a prollyfill today may miss nuances of changes that are only specified later.
I believe it is within our grasp to add mechanisms that let prollyfill and end-content authors detect what the platform implements, so that these prollyfills can “back off”. These mechanisms would also supersede many of the tests that have previously been handled by sniffers like Modernizr.
Here are some of the approaches I’ve considered (names and return types still very loose):
Canonical addresses
All specs compatible with this approach should explicitly state a canonical URL at which they can be accessed, for both the “latest” (i.e. future) and “current” (i.e. documented) versions of the spec. Prollyfill authors would check against the “current” version, then the “latest”, with different behavior if the “current” version is not implemented (e.g. displaying a warning).
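As a rough sketch (with invented URLs, and an abstract check function standing in for whichever detection call ends up existing, such as the userAgentSpecs proposals below), a prollyfill could back off like this:

```js
// Canonical addresses for the spec this prollyfill implements (illustrative only).
const CURRENT_SPEC = 'https://example.org/specs/widget/2014-06-01/'; // "current" (documented)
const LATEST_SPEC  = 'https://example.org/specs/widget/latest/';     // "latest" (future)

// `implementsSpec` is a stand-in for whatever detection call ends up existing;
// `installPolyfill` is the prollyfill's own setup routine.
function maybeInstall(implementsSpec, installPolyfill) {
  if (implementsSpec(CURRENT_SPEC)) {
    return; // the version we wrote against is natively implemented: back off
  }
  if (implementsSpec(LATEST_SPEC)) {
    // native support tracks a newer revision than this prollyfill targets
    console.warn('Native implementation follows ' + LATEST_SPEC +
      '; this prollyfill (written against ' + CURRENT_SPEC + ') may be stale.');
    return;
  }
  installPolyfill(); // no native support claimed: install the prollyfill
}
```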
Hash revisions or ETags
This would be a nice way to ensure precise content matches (e.g. presenting a warning if a minor but crucial adjustment was made to a spec), but since a spec, by definition, cannot specify its own hash in its content, I’m not sure how reliable this could be. These hashes could maybe be accepted as inputs by some of the functions below, for scenarios like specs hosted on GitHub, where a document’s commit hash is presented above its rendering.
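Purely for illustration, a hash might be passed as an extra argument to something like the implementsSpec call proposed below; the option name and value here are invented:

```js
// Hypothetical: ask whether the UA implements this exact revision of the spec,
// identified by the commit hash shown above the document's rendering on GitHub.
const matchesRevision = userAgentSpecs.implementsSpec(
  'https://example.org/specs/widget/2014-06-01/',
  { revision: 'a3f9c2e' } // invented option; the hash value is illustrative
);
if (!matchesRevision) {
  console.warn('The spec revision this prollyfill was written against may have changed.');
}
```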
userAgentSpecs.addressingUseCasesFrom(url)
Returns a list of URLs of implemented specs that address the use cases described at the given location. (User agents would keep a list of all use-case documents specifically linked by the specs.)
This would be used for printing a console warning to developers that a spec they’re prollyfilling has been superseded by another standard they should look into, or (maybe) to handle shimming in the event that one known spec has been implemented.
This could maybe be tweaked to give different responses (e.g. via fragment identifier) for specific use cases within a spec (e.g. where two mechanisms handle different use cases and one handles certain use cases in a much more performant fashion).
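A loose sketch of how this might be consulted (spec URLs are invented, and the return value is assumed here to be an array of URL strings):

```js
const TARGET_SPEC = 'https://example.org/specs/widget/2014-06-01/';

// Ask which implemented specs address the use cases described by TARGET_SPEC.
const addressedBy = userAgentSpecs.addressingUseCasesFrom(TARGET_SPEC);

if (addressedBy.length > 0 && addressedBy.indexOf(TARGET_SPEC) === -1) {
  // The UA covers these use cases, but via other specs: point the developer at them.
  console.warn('The use cases targeted by ' + TARGET_SPEC +
    ' are addressed natively by: ' + addressedBy.join(', '));
}
```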
userAgentSpecs.implementsSpec(url)
This is something like the above, but it would return a boolean (or possibly a more detailed object, e.g. one listing caveats described in a mailing-list post or the like) describing whether the details described in the spec at that URL have been implemented.
Since it’s possible for a user agent to implement only parts of a spec, this function MUST NOT return true if ANY PART of the spec at the given URL is not followed. This could be addressed with further granularity in the spec address, e.g. by fragment identifier.
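A sketch of what that might look like in use, including the fragment-level granularity just mentioned (the URL and fragment name are invented):

```js
const SPEC = 'https://example.org/specs/widget/2014-06-01/';

if (userAgentSpecs.implementsSpec(SPEC)) {
  // Every part of the spec at SPEC is claimed to be implemented: back off fully.
} else if (userAgentSpecs.implementsSpec(SPEC + '#painting')) {
  // Only the section this prollyfill depends on is implemented:
  // back off for that part and shim the rest.
} else {
  installPolyfill(); // the prollyfill's own setup routine, defined elsewhere
}
```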
userAgentSpecs.specifiedBy()
Returns a list of specs dictating the behavior of the given object or function (including both the “current” and the “latest” versions).
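One way this might be called (the calling convention, e.g. whether it’s a method on userAgentSpecs that takes the object in question, is still entirely open):

```js
// Ask which specs dictate the behavior of a given built-in function.
const specs = userAgentSpecs.specifiedBy(navigator.geolocation.getCurrentPosition);

// Hypothetically this would include both the "current" and the "latest"
// canonical addresses of the relevant spec, whatever those turn out to be.
console.log('Behavior dictated by:', specs);
```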
userSpaceSpecs.register()
This would be some kind of space prollyfills could use to declare and “negotiate” their implementations. The userSpaceSpecs object would further provide a similar interface to userAgentSpecs, for detecting specs that have been implemented by scripts.
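A sketch of how a prollyfill might register itself and how another script might then detect it (the argument shape passed to register() is invented here):

```js
// A prollyfill declares which spec it implements (URLs are illustrative).
userSpaceSpecs.register({
  current: 'https://example.org/specs/widget/2014-06-01/',
  latest: 'https://example.org/specs/widget/latest/'
});

// Another script on the page could then ask the same kind of question of
// script-provided implementations that userAgentSpecs answers for native ones.
if (userSpaceSpecs.implementsSpec('https://example.org/specs/widget/2014-06-01/')) {
  // A prollyfill already covers this spec: don't install a second one.
}
```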
features
This would be some subset / superset of the above, for end content authors, for determining whether various specs have been implemented, without regard for whether they’ve been implemented by the UA or by a script.
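For example (the name features and the implementsSpec shape below simply mirror the loose proposals above):

```js
// One check for end content authors, regardless of where support comes from
// (native implementation or a registered prollyfill).
if (features.implementsSpec('https://example.org/specs/widget/2014-06-01/')) {
  enableWidgetUI();  // hypothetical app code that relies on the feature
} else {
  showFallbackUI();  // hypothetical fallback path
}
```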
I’m guessing I’m not the first to suggest a solution like this, and I’m aware that asking a platform what it implements is a hairball of “unknown unknowns”. But I want to know what challenges previous swings in this space have encountered, because I believe there is a reasonable sweet spot we can agree upon, as a platform, to solve this real problem.