Discussion of some of the issues raised by Jonathan Moules’s talk on GeoSeer (a search engine for geospatial data services) and the limitations to the use of OGC services, at the Maps for the Web workshop.
The difficulty of discovering OGC Services
Comments from the live chat during the presentation:
- Bryan Haberberger @thehabes: Sounds like the Web needs an awesome-geo-discovery repo to list these things!
  Great question. “How many portals are built for sharing?” The portal-metadata problem exists in all kinds of expertise, not just maps. It is interesting to hear you call it out, and for me to realize just how many times developers go through that struggle.
- SebastienDurand @SebastienDurand: One of the biggest issues we find in the Canadian Geospatial Platform is not just the discovery issue, but finding the “right” information, whether you call it authoritative, most current, or best available data.
- Satoru Takagi @satakagi: @SebastienDurand In the end, my experience is that I look on the web for the data I find most reliable, content that ordinary people can see, and I can hardly rely on machine-readable metadata.
- SebastienDurand @SebastienDurand: @satakagi Exactly, and this also resonates with Jonathan’s comments: is the publishing effort for data worth it, or are we missing the real objective, which is to make information on the web usable and operational? Too much data is like not enough, and this goes for maps and portals as well.
- Satoru Takagi @satakagi: @SebastienDurand I’m asking the question of whether that disclosure is to machines, or to humans.
  Openness to machines isn’t widely useful, because most of the time only smart people and organizations that can control those machines can use it.
Another question that came in later in the chat:
- Bryan Haberberger @thehabes: This question may be for the GeoSeer people. Is there a published discovery API?
My take on Jon Moules’ presentation is not that geospatial information doesn’t have value; why else would Google, Apple, Microsoft, etc. create their own proprietary spatial data infrastructures just to power their own web sites?
More like, geospatial information has value when it is used to augment other, human-consumable information, primarily as maps. Hence, having Web pages which are linked to, and which link geospatial information (spatial data infrastructures) together, in the style of the Web, will increase the value of geospatial information for everybody, without having to “put all our eggs in one basket”, as these proprietary spatial data platforms would have us do.
> Bryan Haberberger: This question may be for the GeoSeer people. Is there a published discovery API?
@thehabes - Yes, GeoSeer does have an API: https://www.geoseer.net/api.php. It’s paid, though, to offset the running costs of the service.
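For anyone wanting to script against a discovery API like this, the general shape of a call might look like the sketch below. This is only an illustration: the parameter names (`q`, `key`) and the JSON response layout are assumptions made for the example, not GeoSeer’s documented interface; see https://www.geoseer.net/api.php for the real details.

```python
import requests

# Hypothetical sketch of querying a paid discovery API such as GeoSeer's.
# The "q"/"key" parameter names and the JSON layout are illustrative
# assumptions; the real interface is documented at
# https://www.geoseer.net/api.php.
API_URL = "https://www.geoseer.net/api.php"

def search_services(query: str, api_key: str) -> list:
    """Return a list of service records matching `query` (assumed JSON)."""
    response = requests.get(API_URL, params={"q": query, "key": api_key}, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for service in search_services("land cover", api_key="YOUR-API-KEY"):
        print(service)
```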
@satakagi - True, but at the same time, that openness to machines allows those smart people to build systems that ingest the (meta)data. Those systems in turn can make the data more human-accessible.
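As a concrete example of a machine ingesting such (meta)data, the sketch below fetches a WMS GetCapabilities document (a standard OGC operation) and lists the human-readable layer titles it advertises. The endpoint URL is a placeholder, and error handling is omitted; any public WMS would do.

```python
import requests
import xml.etree.ElementTree as ET

# Fetch a WMS GetCapabilities document and print the advertised layer
# titles. GetCapabilities is a standard OGC operation; the endpoint URL
# below is a placeholder, not a real service.
WMS_ENDPOINT = "https://example.com/wms"

params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": "1.3.0"}
capabilities_xml = requests.get(WMS_ENDPOINT, params=params, timeout=30).text

# WMS 1.3.0 capabilities documents use the http://www.opengis.net/wms namespace.
ns = {"wms": "http://www.opengis.net/wms"}
root = ET.fromstring(capabilities_xml)
for title in root.findall(".//wms:Layer/wms:Title", ns):
    print(title.text)
```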
> My take on Jon Moules’ presentation is not that geospatial information doesn’t have value…
@PeterR - Yup, I’m not commenting on the value of the data at all. The aim is simply to ask: what’s the point of sharing all this data if no one can find (and thus use) it?
Sure, if that data is being shared as part of a web page and that somehow makes it discoverable, great; but if the data is simply put out there as an INSPIRE-esque obligation, maybe dumped onto a portal (maybe!), and then forgotten about, is/was that a good use of resources?
Alas, the discovery problem is one that is often given at best a cursory glance before everyone moves on to the “important” stuff, like yet another interoperability/data-format/federation standard: https://xkcd.com/927/
@Jonathan-GeoSeer The Web used to have a discovery problem too; that was solved through the evolution of web search technology. IMHO the only way to solve this problem is organically: let people use and link to (and within!) spatial data infrastructures when they have something important to say about locations. Metadata approaches will fail, for a similar reason that XHTML failed: people couldn’t, and can’t, be bothered to ensure their content is “valid”, so HTML continued, and continues, to be the format of the Web. Similarly, even the tiniest bit of friction causes people not to do something, like putting metadata, or linked data, in a web page. Geospatial needs to come to grips with that, because it is possible to integrate maps and location organically into HTML; we just need to assert that it’s a big need that will have big consequences if actioned (and if not actioned too, actually, but we won’t know, because we’ll be extinct).
@PeterR Since we see siloing as a problem, we should remember that the Web’s discovery problem was solved, and that at the same time a new siloing problem arose.
@satakagi are you referring to Google’s dominant position in search engines as a silo problem, specifically?