Continuing the discussion from Navigator.timeZone:
I think this is a fantastic approach to fingerprintable info in general. What’s the UA consensus / standardization around this?
Should there maybe be a standard (OWASP?) assigning “fingerprinting potentiation scores” to the distinctive properties (saying which function calls the UA should treat as accumulating suspicion)?
Could the community (e.g. the FSF) turn this into a browser extension (maybe using Object.observe() on navigator), akin to HTTPS Everywhere?
Also, perhaps after a certain number of suspicion triggers are raised, the browser could go into "lockdown" (e.g. functions start returning the same values as in private browsing) until the site's integrity can be assessed?
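To make the idea concrete, here's a minimal sketch of what such interception plus lockdown might look like. A caveat: Object.observe() was withdrawn and never shipped, so an extension today would more likely redefine getters (or wrap navigator in a Proxy). The property list, threshold, and generic fallback values below are illustrative assumptions, not from any standard, and a mock object stands in for window.navigator so the sketch runs anywhere:

```javascript
// Sketch: redefine getters on fingerprint-prone properties, count
// distinctive reads, and past a threshold switch to generic "lockdown"
// values, much as private browsing might. Names and values are made up.
function instrument(obj, props, threshold) {
  let hits = 0;
  for (const [prop, generic] of Object.entries(props)) {
    const real = obj[prop]; // capture the distinctive value once
    Object.defineProperty(obj, prop, {
      get() {
        hits += 1;
        // Past the threshold, stop leaking the distinctive value.
        return hits > threshold ? generic : real;
      },
    });
  }
}

// Mock object standing in for window.navigator.
const nav = { platform: "Linux x86_64", language: "en-US" };
instrument(nav, { platform: "generic", language: "en" }, 2);

console.log(nav.platform); // "Linux x86_64" (1st read, under threshold)
console.log(nav.language); // "en-US"        (2nd read)
console.log(nav.platform); // "generic"      (3rd read: lockdown engaged)
```

A real extension would also have to decide which reads count as "distinctive" (a single userAgent read is innocuous; reading dozens of properties in quick succession is not), which is exactly where some shared scoring scheme would help.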
There is actually existing work on the topic, see for instance https://www.cosic.esat.kuleuven.be/fpdetective/.
I’m not sure that we need a standard. Most of this can be done as an extension (though how much of a performance hit it would be is unclear, and it would need to reach into Flash as well). It would certainly be an interesting project to carry out (starting from a good look at https://github.com/fpdetective/fpdetective/ I would assume).
I don’t think that you need to go into lockdown (especially since getting that wrong would have adverse effects that would be hard to explain clearly to the user). Fingerprinting is an attack that mostly makes sense over a relatively long period of time and a large number of users, so it ought to be enough for the browser to flag the site for review in a shared database of fingerprinters. Once it is confirmed, that origin basically gets flagged in the same way that phishing sites are, possibly leading to it being blocked.
Overall, that deterrent should be sufficient to make this work. My primary concern would be performance.
+1 that one mitigation of browser fingerprinting will be making that fingerprinting detectable (and, importantly, distinguishable from innocuous activity). Researchers, regulators, and potentially even individual browsers themselves could then try to detect it, limit access to certain features, or simply identify that it’s happening and address it through out-of-band means.
We describe this as a level of mitigation in the fingerprinting guidance document:
I don’t think the existence of possible alternative mitigations means we shouldn’t try to minimize fingerprinting surface, especially when the fingerprinting would be done in a way that is relatively hard to detect.
Finally, I think typically we wouldn’t need or indeed want to standardize the mechanisms for counting potential fingerprinting. Fixing or publishing those metrics would make it easier for attackers to remain within the limits.