This is a proposal for a new HTTP request header and an extension to the Network Information API to convey the HTTP client’s network connection speed.
A service worker might use this client hint to determine, as part of its install handler, whether to precache a heavyweight or lightweight bundle of resources. Exposing this information through the NetworkInformation interface means that a service worker and other scripts could access it via navigator.connection.
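A minimal sketch of that install-handler idea, assuming the `effectiveType` values from the Network Information API draft ('slow-2g', '2g', '3g', '4g'); the bundle contents and cache name are made up for illustration:

```javascript
// Illustrative precache manifests -- the actual assets are hypothetical.
const LIGHT_BUNDLE = ['/index.html', '/app.css'];
const HEAVY_BUNDLE = ['/index.html', '/app.css', '/hero.mp4', '/fonts.woff2'];

// Pick a manifest based on the estimated connection type.
function choosePrecacheBundle(effectiveType) {
  const slowTypes = ['slow-2g', '2g'];
  return slowTypes.includes(effectiveType) ? LIGHT_BUNDLE : HEAVY_BUNDLE;
}

// In a service worker context, the install handler could consult
// navigator.connection (where supported) and precache accordingly.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    const type =
      (navigator.connection && navigator.connection.effectiveType) || '4g';
    event.waitUntil(
      caches.open('v1').then((cache) => cache.addAll(choosePrecacheBundle(type)))
    );
  });
}
```

The guard around `self` just keeps the snippet harmless outside a worker; in a real service worker file it would be unconditional.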
What happens in the train-tunnel scenario, where a request is initially started under ideal network conditions, but those conditions immediately worsen for the rest of the request? In this scenario, what would the user experience if the app / proxy / server uses information that is based on past performance to deliver content that is ill-suited for current network conditions?
It occurs to me that a device could use travel speed/direction and a map of poor coverage (or something to that effect) to enhance/augment the accuracy of historical RTTs. So a device could sense that it was travelling towards an area with poor coverage, and the APIs / headers would indicate this.
It seems to me that accurate portrayal of network conditions is a very complex subject, which makes it such a fun and exciting topic to discuss.
@jokeyrhyme: Right, the browser can keep historical observations, and compute a smoothed value. The browser can also take into account other factors such as wireless signal strength when computing the estimate.
In the train-tunnel scenario, if the request has not yet started, incorporating the signal strength may help improve the estimate. Chromium is currently experimenting with using signal strength to improve its estimates.
If the request has already started, then the scope of improvement is limited unless the browser knows that the request is idempotent.
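To make the smoothing idea above concrete, here is one simple way historical RTT observations could be folded into a single estimate, using an exponentially weighted moving average; this is an illustrative sketch, not the algorithm Chromium or the spec actually uses:

```javascript
// Smooth a sequence of RTT samples (milliseconds) into one estimate.
// alpha controls how strongly recent samples dominate older ones.
function smoothedRtt(samplesMs, alpha = 0.3) {
  let estimate = null;
  for (const sample of samplesMs) {
    // First sample seeds the estimate; later samples are blended in.
    estimate = estimate === null ? sample : alpha * sample + (1 - alpha) * estimate;
  }
  return estimate;
}
```

Other inputs such as signal strength could be blended in the same way, e.g. by biasing the estimate upward when the device is moving toward known poor coverage.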
Exposing this information is an important first step. I’m sure we’ll discover all sorts of ways this information can be used to improve user experience.
I’ve been thinking about bandwidth regarding video quality, particularly in the Netflix and YouTube examples, where payload size / quality can be dynamically adjusted. Perhaps, armed with the proposed APIs / headers, we’ll begin to apply similar dynamic adjustments to other kinds of data?
For example, when retrieving database records, perhaps it will be common practice to download them in small batches over slow connections, and in larger batches in ideal conditions?
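The batching idea could look something like the following; the mapping from `effectiveType` to batch size is entirely hypothetical, chosen only to show the shape of the adjustment:

```javascript
// Hypothetical records-per-request batch sizes keyed by the
// Network Information API's effectiveType values. The numbers are
// arbitrary illustrations, not recommendations.
const BATCH_SIZES = { 'slow-2g': 5, '2g': 10, '3g': 25, '4g': 100 };

function batchSizeFor(connection) {
  const type = connection && connection.effectiveType;
  return BATCH_SIZES[type] || 25; // fall back to a middling batch size
}

// In a page or worker this might be called as:
//   const size = batchSizeFor(navigator.connection);
//   fetch(`/api/records?limit=${size}`);
```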
@jokeyrhyme: These are all great suggestions. This was discussed during BlinkOn, and one suggestion there was that content providers could adjust the quality of the video based on the network quality.
The notes from the BlinkOn discussion are available here: