A partial archive of discourse.wicg.io as of Saturday February 24, 2024.

[Proposal: NetInfo] Provide effective network speed, expose Save-Data

igrigorik
2017-01-27

Hey all! Want to highlight two proposals we’re incubating on the NetInfo repo…

Providing (effective) network speed to web servers

This is a proposal for a new HTTP request header and an extension to the Network Information API to convey the HTTP client’s network connection speed.

The goal of the header and API is to provide network performance information, as perceived by the client, in a format that is easy to consume and act upon. The header conveys a level of performance in an intuitive format, at a granularity coarse enough that cache entries can be keyed on its value, which allows proxies and web servers to make performance-based decisions even on the first request. The API extension makes it easy to make the same kinds of decisions from within JavaScript.
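For a concrete picture, here is a minimal client-side sketch; the property names and value ranges follow the shape being discussed in the NetInfo repo (not a finalized spec), and the image URLs are placeholders.

```ts
// Minimal shape of the proposed navigator.connection extension (assumed,
// not the final spec). On the server side, the same signal would arrive as
// a request header that proxies could also use to key cache variants.
interface NetworkInformationLike {
  effectiveType?: "slow-2g" | "2g" | "3g" | "4g";
  downlink?: number;  // estimated bandwidth, Mbps
  rtt?: number;       // estimated round-trip time, ms
  saveData?: boolean;
}

const connection =
  (navigator as unknown as { connection?: NetworkInformationLike }).connection;

// Example decision: serve a lighter asset when the estimate looks slow.
const isSlow =
  connection?.effectiveType === "slow-2g" || connection?.effectiveType === "2g";
const heroImage = isSlow ? "/img/hero-low.jpg" : "/img/hero-high.jpg";
console.log(`loading ${heroImage}`);
```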

Expose “Save-Data” client hint in navigator.connection

A service worker might use the client hint to determine, as part of its install handler, whether to precache a heavyweight or lightweight bundle of resources. Exposing this information on the NetworkInformation interface means that a service worker and other scripts could access it via navigator.connection.
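A rough sketch of that pattern, assuming navigator.connection.saveData is exposed in worker scope as proposed; the bundle contents are placeholders and the file is assumed to be compiled against TypeScript's webworker lib.

```ts
// sw.ts — sketch of a service worker install handler that picks a precache
// bundle based on Save-Data. Assumes navigator.connection.saveData is exposed
// in worker scope, per the proposal. Bundle URLs are placeholders.
const LIGHT_BUNDLE = ["/styles.css", "/app.js"];
const HEAVY_BUNDLE = [...LIGHT_BUNDLE, "/hero.mp4", "/font.woff2"];

addEventListener("install", (event) => {
  const saveData =
    (navigator as unknown as { connection?: { saveData?: boolean } })
      .connection?.saveData === true;
  const urls = saveData ? LIGHT_BUNDLE : HEAVY_BUNDLE;
  (event as ExtendableEvent).waitUntil(
    caches.open("precache-v1").then((cache) => cache.addAll(urls))
  );
});
```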


Would love to hear any thoughts and feedback.

martinthomson
2017-01-29

https://groups.google.com/forum/#!msg/mozilla.dev.platform/lCZmhCDGHPY/4WviJod3EQAJ;context-place=forum/mozilla.dev.platform includes an extensive discussion on the API. The bulk of the discussion focuses on how Firefox might remove the feature given that it was implemented in error.

igrigorik
2017-02-02

Notes from today’s discussion at BlinkOn + updated proposal: https://github.com/WICG/netinfo/issues/46#issuecomment-276804272

jokeyrhyme
2017-02-04

So the OS / browser keeps some sort of sliding window of RTT observations, and uses this to provide a single averaged/estimated RTT to JavaScript and/or to an HTTP server via headers?

What happens in the train-tunnel scenario, where a request is initially started under ideal network conditions, but those conditions immediately worsen for the rest of the request? In this scenario, what would the user experience if the app / proxy / server uses information that is based on past performance to deliver content that is ill-suited for current network conditions?

jokeyrhyme
2017-02-04

It occurs to me that a device could use travel speed/direction and a map of poor coverage (or something to that effect) to augment the accuracy of estimates based on historical RTTs. A device could then sense that it was travelling towards an area with poor coverage, and the APIs / headers would indicate this.
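Purely as a thought experiment along those lines, a sketch: the speed and heading readings come from the real Geolocation API, but the coverageMap object and its lookupAhead() helper are hypothetical.

```ts
// Sketch: flag when the device is heading toward a known poor-coverage area.
// coverageMap / lookupAhead() are hypothetical; speed and heading come from
// the standard Geolocation API.
declare const coverageMap: {
  lookupAhead(lat: number, lon: number, headingDeg: number, metersAhead: number): "good" | "poor";
};

navigator.geolocation.watchPosition((position) => {
  const { latitude, longitude, speed, heading } = position.coords;
  if (speed == null || heading == null) return; // stationary or no fix

  // Project roughly 30 seconds ahead at the current speed.
  const ahead = coverageMap.lookupAhead(latitude, longitude, heading, speed * 30);
  if (ahead === "poor") {
    // A UA could fold this into its RTT/bandwidth estimate; an app could
    // simply start prefetching or downgrading quality preemptively.
    console.log("Poor coverage expected ahead; prefer lighter resources.");
  }
});
```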

It seems to me that accurate portrayal of network conditions is a very complex subject, which makes it such a fun and exciting topic to discuss. 🙂

tbansal
2017-02-06

@jokeyrhyme: Right, the browser can keep historical observations and compute a smoothed value. The browser can also take into account other factors, such as wireless signal strength, when computing the estimate.

In the train tunnel scenario, if the request has not yet started, then incorporating the signal strength may help improve the estimate. Chromium is currently experimenting with using signal strength to improve its estimates.

If the request has already started, then the scope for improvement is limited unless the browser knows that the request is idempotent (and could therefore be safely retried).
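As a rough illustration of the "smoothed value" idea, a sketch of an exponentially weighted moving average over RTT samples with an optional signal-strength penalty; the constants are arbitrary, not what Chromium actually uses.

```ts
// Sketch of a smoothed RTT estimate (exponentially weighted moving average).
// Constants are illustrative, not what any browser ships.
class RttEstimator {
  private smoothedRtt: number | null = null;

  constructor(private readonly alpha = 0.125) {} // weight of the newest sample

  addSample(rttMs: number): void {
    this.smoothedRtt =
      this.smoothedRtt === null
        ? rttMs
        : (1 - this.alpha) * this.smoothedRtt + this.alpha * rttMs;
  }

  // Optionally inflate the estimate when signal strength (0..1) is weak,
  // approximating the "take signal strength into account" idea above.
  estimate(signalStrength = 1): number | null {
    if (this.smoothedRtt === null) return null;
    const penalty = 1 + (1 - signalStrength); // up to 2x on a very weak signal
    return this.smoothedRtt * penalty;
  }
}

// Usage: feed per-request RTT observations, read back a smoothed estimate.
const estimator = new RttEstimator();
[120, 95, 300, 110].forEach((rtt) => estimator.addSample(rtt));
console.log(estimator.estimate(0.4)); // pessimistic estimate on a weak signal
```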

jokeyrhyme
2017-02-06

Exposing this information is an important first step. I’m sure we’ll discover all sorts of ways this information can be used to improve user experience.

I’ve been thinking about bandwidth in relation to video quality, particularly the Netflix and YouTube examples, where payload size / quality can be adjusted dynamically. Perhaps, armed with the proposed APIs / headers, we’ll begin to apply similar dynamic adjustments to other kinds of data?

For example, when retrieving database records, perhaps it will be common practice to download them in small batches over slow connections, and in larger batches in ideal conditions?
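A sketch of that batching idea, keyed off the proposed effectiveType value; the /api/records endpoint and the batch sizes are made up for illustration.

```ts
// Sketch: choose a fetch batch size from the proposed effectiveType signal.
// Endpoint and batch sizes are illustrative.
type EffectiveType = "slow-2g" | "2g" | "3g" | "4g";

function batchSizeFor(effectiveType: EffectiveType | undefined): number {
  switch (effectiveType) {
    case "slow-2g":
    case "2g":
      return 10;   // small pages on slow connections
    case "3g":
      return 50;
    default:
      return 200;  // large pages when conditions look good (or are unknown)
  }
}

async function fetchRecords(offset: number): Promise<unknown[]> {
  const connection =
    (navigator as unknown as { connection?: { effectiveType?: EffectiveType } }).connection;
  const limit = batchSizeFor(connection?.effectiveType);
  const response = await fetch(`/api/records?offset=${offset}&limit=${limit}`);
  return response.json();
}
```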

tbansal
2017-02-06

@jokeyrhyme: These are all great suggestions. This was discussed at BlinkOn, and one of the suggestions there was that content providers could adjust video quality based on network quality.
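For illustration, a sketch of that adjustment using the proposed downlink (Mbps) estimate to pick a starting rung on a made-up bitrate ladder; a real player would keep adapting from its own throughput measurements afterwards.

```ts
// Sketch: pick an initial video rendition from the proposed downlink estimate.
// The bitrate ladder is illustrative.
const LADDER_KBPS = [400, 1200, 2500, 5000]; // e.g. 240p .. 1080p renditions

function initialRenditionKbps(downlinkMbps: number | undefined): number {
  if (downlinkMbps === undefined) return LADDER_KBPS[1]; // conservative default
  const budgetKbps = downlinkMbps * 1000 * 0.8;           // keep ~20% headroom
  const fitting = LADDER_KBPS.filter((kbps) => kbps <= budgetKbps);
  return fitting.length ? fitting[fitting.length - 1] : LADDER_KBPS[0];
}

const connection =
  (navigator as unknown as { connection?: { downlink?: number } }).connection;
console.log(`start at ${initialRenditionKbps(connection?.downlink)} kbps`);
```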

The notes from the BlinkOn discussion are available here:

tbansal
2017-02-06

Update: Intent to Implement has been posted on blink-dev. Link to the post: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/TS9zT_u2M4k

tbansal
2017-02-15

The proposal has been updated: