A partial archive of discourse.wicg.io as of Saturday February 24, 2024.

Spec Implementation API

stuartpb
2015-08-13

With the rise of “prollyfill” shims implementing speculative standards, there’s an increasing risk of “stepping on each other’s toes” in both namespace and functionality.

The problem with conventional “feature sniffing” is that it isn’t guaranteed to determine, in a safe and non-destructive fashion, whether the implementation of a feature is precisely what the script expects: feature detection written into a prollyfill today may miss nuances of changes that are only specified later.
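For example, a bare existence check (a minimal sketch; the shim installer below is a hypothetical placeholder) proves only that a property is present, not that its behavior matches the draft the shim was written against:

    // Naive feature sniffing: this only proves the property exists, not
    // that it matches the spec revision the prollyfill targets.
    if (!('geolocation' in navigator)) {
      installGeolocationProllyfill(); // hypothetical shim installer
    }
    // If the native implementation follows an older or diverged draft,
    // this check silently accepts it and the shim never backs off.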

I believe it is within our grasp to add mechanisms, for both prollyfill authors and end content authors, that let these prollyfills “back off” when a real implementation exists. Such mechanisms would also supersede many of the tests that have previously been handled by sniffers like Modernizr.

Here are some of the approaches I’ve considered (names and return types still very loose):

Canonical addresses

All specs compatible with this approach should explicitly state a canonical URL at which they can be accessed, both for the “latest” (i.e. future) and the “current” (i.e. as-documented) version of the spec. Prollyfill authors should check against the “current” version first, then the “latest”, with different behavior if the “current” is not implemented (e.g. displaying a warning).
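As a rough sketch, using the implementsSpec() function proposed further down (every URL and function name here is hypothetical):

    // Hypothetical: check the dated "current" spec first, then the "latest".
    const CURRENT = 'https://specs.example.org/widgets/2015-08-01/';
    const LATEST = 'https://specs.example.org/widgets/';

    if (userAgentSpecs.implementsSpec(CURRENT)) {
      // Exactly the version this prollyfill targets: back off entirely.
    } else if (userAgentSpecs.implementsSpec(LATEST)) {
      console.warn('Native implementation follows a newer draft; shim may conflict.');
    } else {
      installProllyfill(); // hypothetical shim installer
    }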

Hash revisions or Etags

This would be a nice way to ensure precise content matches (e.g. presenting a warning if a minor-but-crucial adjustment was made to a spec), but since a spec, by definition, cannot specify its own hash in its content, I’m not sure how reliable this could be. Hashes could perhaps be accepted as inputs by some of the functions below, for scenarios like specs on GitHub where a document’s hash is presented above its rendering.
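For instance (a hypothetical API shape, with a made-up content hash):

    // Hypothetical: pin the check to one exact revision of the spec text.
    const matchesRevision = userAgentSpecs.implementsSpec(
      'https://specs.example.org/widgets/',
      { hash: 'c0ffee00c0ffee00c0ffee00c0ffee00c0ffee00' } // made-up hash
    );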

userAgentSpecs.addressingUseCasesFrom(url)

Returns a list of URLs of implemented specs that address the use cases described at the given location. (User agents would keep the list of all use-case documents specifically linked by the specs they implement.)

This would be used for printing a console warning to developers that a spec they’re prollyfilling has been superseded by another standard they should look into, or (maybe) to handle shimming in the event that one known spec has been implemented.

It could maybe be tweaked to give different responses for specific use cases within a spec (e.g. via fragment identifier), for instance if two mechanisms handle different use cases and one handles certain use cases in a much more performant fashion.
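A hedged sketch of what a call might look like (every name and URL here is hypothetical):

    // Hypothetical: ask which implemented specs cover these use cases.
    const PROLLYFILLED_SPEC = 'https://specs.example.org/offline-storage/';
    const useCases = 'https://specs.example.org/use-cases/offline-storage/';

    const specs = userAgentSpecs.addressingUseCasesFrom(useCases);
    if (specs.length > 0 && !specs.includes(PROLLYFILLED_SPEC)) {
      console.warn('These use cases are now addressed by:', specs.join(', '));
    }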

userAgentSpecs.implementsSpec(url)

This is something like the above, but it would provide a boolean (or possibly a more detailed object, e.g. one listing caveats described in a mailing list post or the like) describing whether the details described in that spec have been implemented.

Since it’s possible for a user agent to implement only parts of a spec, this function MUST NOT return true if any part of the spec at the given URL is not followed. Partial implementations could be handled with further granularity, e.g. by fragment identifier in the spec address.
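A sketch of that all-or-nothing rule, plus fragment-level granularity (hypothetical API and URLs):

    // Hypothetical: the whole-spec check is strict...
    userAgentSpecs.implementsSpec('https://specs.example.org/widgets/');
    // -> false if ANY normative part of the spec is unimplemented

    // ...while a fragment narrows the question to one section.
    userAgentSpecs.implementsSpec('https://specs.example.org/widgets/#rendering');
    // -> true if the "rendering" section alone is fully implemented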

userAgentSpecs.specifiedBy()

Returns a list of the specs dictating the behavior of a given object or function (including both the “current” and the “latest” versions).
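For example (the function itself is hypothetical; it’s handed a real platform method purely for illustration):

    // Hypothetical: which specs govern this function's behavior?
    const specs = userAgentSpecs.specifiedBy(navigator.geolocation.getCurrentPosition);
    // e.g. a list containing both the dated REC and the editor's draft URL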

userSpaceSpecs.register()

This would be some kind of space prollyfills could use to declare and “negotiate” their implementations. The userSpaceSpecs object would also provide an interface similar to userAgentSpecs, for detecting specs that have been implemented by scripts.
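A sketch of how a prollyfill might participate (entirely hypothetical):

    // Hypothetical: a prollyfill declares what it implements on load...
    userSpaceSpecs.register('https://specs.example.org/widgets/2015-08-01/');

    // ...so other scripts can query script-provided implementations the
    // same way they would query userAgentSpecs.
    userSpaceSpecs.implementsSpec('https://specs.example.org/widgets/2015-08-01/');
    // -> true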

features

This would be some subset/superset of the above, for end content authors, for determining whether various specs have been implemented, without regard for whether they’ve been implemented by the UA or by a script.
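Something like (hypothetical):

    // Hypothetical: "is this spec usable here?", regardless of whether
    // the implementation is native or script-provided.
    if (!features.implementsSpec('https://specs.example.org/widgets/')) {
      // fall back to an older code path
    }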


I’m guessing I’m probably not the first to suggest a solution like this, and I’m aware that asking a platform what it implements is a hairball of “unknown unknowns”. But I’d like to know what challenges previous swings in this space have encountered, because I believe there’s a reasonable sweet spot we can agree upon, as a platform, to solve this real problem.

ahopebailie
2015-08-18

This feels like something that could be solved by API versioning. Is there value in a direct reference to the spec itself?

Would it make sense for spec authors to include a standard getVersion() (or similar) method on all the APIs they specify?

I.e., each version of the spec would indicate the version value this call is expected to return.

Perhaps it should return a semver version number? (http://semver.org/)

Example: The Geolocation spec could include something like this:

[NoInterfaceObject]
interface Geolocation {
  void getCurrentPosition(PositionCallback successCallback,
                          optional PositionErrorCallback errorCallback,
                          optional PositionOptions options);

  long watchPosition(PositionCallback successCallback,
                     optional PositionErrorCallback errorCallback,
                     optional PositionOptions options);

  void clearWatch(long watchId);

  APIVersion getVersion();
};

The APIVersion could be a globally defined type:

[NoInterfaceObject]
interface APIVersion {
  readonly attribute DOMString version;          // e.g. "1.1.20131024"
  readonly attribute DOMString specificationUrl; // e.g. "http://www.w3.org/TR/2013/REC-geolocation-API-20131024/"
};

There should probably also be a section of the spec where the version number is more clearly stated (not just a comment in the API spec).
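In script, a page or test harness could then gate on the reported version (a sketch of the proposal above; real semver comparison is elided to a simple prefix check):

    // Sketch: gate behavior on the version the API reports about itself.
    const v = navigator.geolocation.getVersion();
    if (v.version.startsWith('1.')) {
      // Compatible major version: proceed with the native implementation.
    } else {
      console.warn('Unexpected Geolocation API version', v.version,
                   'specified at', v.specificationUrl);
    }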

I imagine some pretty powerful automated API test tools could be developed if this kind of versioning were accessible programmatically, since most specs these days also have test cases.

stuartpb
2016-08-22

Today I learned about document.implementation.hasFeature(), which was apparently a legacy attempt at providing exactly what I’m describing here. Further documentation states:

The different implementations fairly diverged in what kind of features were reported. The latest version of the spec settled to force this method to always return true, except for SVG features, where the functionality was accurate and in use.

So, apparently, the problem with a function like this is that engineers have enough trouble just implementing the features; they don’t care enough to go back and report that they’ve implemented the feature, especially if there’s some further fuzziness, like a violation of something impractical in the spec.
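You can see the degeneration in any current engine:

    // hasFeature() survives only as a stub: per the current DOM spec it
    // returns true no matter what you ask about.
    document.implementation.hasFeature('Core', '3.0');     // true
    document.implementation.hasFeature('NotARealFeature'); // true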

So yeah: looks like it’s down to polyfills with a fleet of embedded unit tests, and nothing can ever really change that.

tabatkins
2016-08-22

Yeah, exactly. The one case where we do have reasonably reliable feature-testing support is the @supports rule, which is actually just a generic parsing test (does this property/value pair get recognized as valid by your CSS parser?), which means browser implementors don’t have to remember to do anything special to accurately report results.
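For example, the same parsing test is exposed to script via CSS.supports(), which passes or fails purely on whether the declaration parses:

    // Both forms are real, widely shipped APIs.
    CSS.supports('display', 'flex');  // true: the parser accepts it
    CSS.supports('display: banana');  // false: unknown value, parse fails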