Right now, websites and web apps have to use the user's IP address to get an approximate geolocation and localize content. Getting the exact geolocation requires a permission prompt, and in many cases high accuracy is not even required.
We propose that websites be able to get the approximate geolocation of the user, within a radius of roughly 2,000–5,000 m, without a permission prompt.
This means that websites would no longer have to rely on IP addresses to provide local and relevant content.
What are the benefits of such a thing, which at first glance seems to blatantly violate user privacy?
Today's websites have to rely on third-party ad networks to provide localized ads and generate revenue. First-party ads that are not localized do not pay well. As a result, websites embed all sorts of third-party scripts from Facebook, Google, and Amazon, because these companies already have your location data with pinpoint accuracy. Giving websites approximate geolocation would level the artificially tilted playing field for serving first-party ads. Third-party ads, in my subjective opinion, are a worse violation of user privacy than first-party ads.
GeoIP databases cost money for what is a basic service. This proposal would reduce that artificial cost.
Newspapers that would like to serve locally relevant content have to ask for permission. An average user reads 15+ different newspapers in a month, so the prompts get annoying. At the very moment when they need to make a great first impression, newspapers cannot do so.
Overall, there is no loss of privacy compared to the situation now. This might even be an improvement: if first-party ads gain traction and reduce middleman fees, websites are no longer held hostage by Facebook and Google, forced to give up their own customer data in order to serve ads, because no alternative exists right now.
Printed newspapers have always been able to target ads based on geolocation. This is not a violation of privacy. The violation of privacy is when you start serving ads based on browsing history.
In Incognito or private browsing mode, this could be turned off by default.
There is a major loss of privacy. If you deliberately use a proxy to hide your IP address, and therefore obscure your country of origin, this proposal gives you away again with no permission prompt. There must be a permission prompt before giving away this information, even inaccurately.
While in a city a 5 km limit (the high end of your range) would be more than sufficient to anonymize someone, in rural areas that is nowhere near the case. I'd estimate fewer than 1,000 people live within 5 km of me; combined with other information, that would make it trivial to identify me personally.
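The back-of-the-envelope arithmetic behind this point can be sketched as follows (the population densities are illustrative assumptions, not measurements):

```typescript
// Rough estimate of the anonymity-set size inside a disclosure radius:
// people ≈ π · r² · density. Densities below are illustrative only.
function peopleWithinRadius(radiusKm: number, densityPerKm2: number): number {
  return Math.PI * radiusKm * radiusKm * densityPerKm2;
}

// A 5 km radius covers ~78.5 km². At a sparse rural density of
// ~10 people/km² that is only ~785 people; at a dense urban
// ~5,000 people/km² it is hundreds of thousands.
const rural = peopleWithinRadius(5, 10); // ≈ 785
const urban = peopleWithinRadius(5, 5000); // ≈ 392,699
```

The same radius thus yields anonymity sets that differ by three orders of magnitude, which is why a fixed coarseness cannot guarantee anonymity everywhere.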
I think the more typical name for this functionality would be “coarse geolocation” (rather than “sparse”).
I think it's an interesting idea, and there could be a user benefit. There are also privacy implications, especially if implementations were considering turning this on by default. A user agent doesn't always know whether a user is on a VPN or another IP-anonymizing service, in which case the IP address is not revealing coarse geolocation, and so users could end up revealing new information in surprising and unfortunate ways.
On the other hand, having it entirely opt-in might not provide enough usage that sites or user agents were interested.
One way forward might be to use the existing permission model for geolocation requests and allow it to be parameterized on the precision of the information. Users who didn't mind could then configure their browsers to automatically say yes to coarse geolocation requests, or rely on other heuristics to decide when to approve.
The Geolocation Sensor spec is just getting started, and this would be a good time to provide that kind of feedback and input. One challenge would be providing a reasonable way for a user agent to serve less precise geolocation data such that sites don't misinterpret it as precise data that just happens to fall at an integer lat/lon location.
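One way a user agent might coarsen a fix, sketched below, is to snap coordinates to a grid and report the cell centre; because cell centres fall at half-cell offsets, the output is visibly quantized rather than looking like a precise fix at a round lat/lon. The cell size and the sample coordinates are illustrative assumptions, not anything from the spec:

```typescript
// Hypothetical sketch: snap a position to a ~5 km grid and return the
// centre of the containing cell. All constants are assumptions.
const CELL_DEG = 0.045; // ~5 km in latitude degrees (1° lat ≈ 111 km)

function coarsen(lat: number, lon: number): { lat: number; lon: number } {
  // Cell centre = (cell index + 0.5) * cell size, so results never land
  // on integer or otherwise "round-looking" coordinates by construction.
  const snap = (v: number) => (Math.floor(v / CELL_DEG) + 0.5) * CELL_DEG;
  return { lat: snap(lat), lon: snap(lon) };
}

// Two nearby points in central London collapse to the same cell centre,
// so a site cannot distinguish them.
const a = coarsen(51.5007, -0.1246);
const b = coarsen(51.5033, -0.1195);
```

A real implementation would also need to pin the result per origin per session (re-querying must not let a site average out the noise) and report the cell size as the position's accuracy, but grid snapping illustrates the basic shape of the problem.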
You make an important point, and this needs to be figured out using a combination of heuristics and user input.
The power of defaults means that most of the time it will be off. If it is off for most traffic, it is next to useless.
"Parameterized on precision of information" — this should be set by the website, not the user. If the website requires the exact location, the user should not be able to change the parameter, but should simply be asked yes/no. If it requires only "coarse geolocation," as you say, then a permission prompt should not be necessary, to facilitate its usage and promote the feature.
In the end, if you want to win on privacy, you have to create non-artificial economic incentives that support it. The more you require permission for coarse geolocation, which has a negligible privacy impact if implemented properly, the lower the adoption rate, and hence third-party ads will stay.
Yes, parameterization is typically done by the requesting site. For example, Android apps can request either the coarse-location or the fine-location permission, and the difference is reflected in how permission prompts are shown to end users. The ability for websites to indicate whether they want precise location or coarse location would be great for data minimization: sites can ask for only what they actually need.
I recognize that you personally believe that browsers should implement coarse location in such a way that by default users are not notified and do not need to give permission for access to coarse location. I think you may find that many people would disagree with that prioritization, especially when revealing location (even coarse location) may be unexpected and harmful to some end users. But often the default settings regarding permissions (when they are requested, how long they persist, how they are managed by the end user, etc.) don’t need to be resolved in the standardization process. If the capability is specced out, then user agents can provide their own systems, including heuristics and user preferences, to determine when a user might want to explicitly give permission and when they might not.
Whatever happens, there needs to be a guarantee that UAs allow this by default. We do not want it to end up like the poorly implemented DNT standard: when Microsoft enabled the DNT header by default in Internet Explorer, it was doomed to fail, because that was not the average user's choice and was not financially viable for the web. Any non-tracking, privacy-supporting standard the W3C makes should first consider the economic and financial impact. Walled-garden apps like Facebook and companies like Google will face little to no impact from the W3C's and WHATWG's incentive to put users and privacy first. What will happen in the end is that smaller and niche websites that hope one day to grow big, and writers who hope to make their blogs their primary source of income on the internet, won't see that day. It will end up hurting the smaller players and news publishers instead of achieving what we aim for.
If a tiny web browser blocks it by default, it won't matter. But if mainstream browsers like Google Chrome, Firefox (maybe not), or Safari block it by default, that will completely destroy it.
DuckDuckGo's non-tracking model will not work for the rest of the web, excluding social networks and search engines, because those companies have first-party knowledge of your data; the other websites they link to do not and cannot have that first-party data.
There should be a trade-off between privacy and financial incentives for the web to survive.