Offline packaged applications

Hello everybody. I am not a native English speaker.

Some time ago, I wrote this document. I then tried to contact the W3C, and after a bunch of dead ends, I gave up.

I just found this forum. Hopefully it will work this time.

The document I wrote is a PDF. Instead of copy-pasting it here, I'll give a link.

Now, it would be rude to end the message like that, so I'll try to paraphrase the document and summarize the idea.

The idea is to bring back packaged applications. We used to have "file://", but for security reasons it became almost useless. We used to have Java applets, but their proprietary nature led to unfixable security issues. We used to have Flash, but it followed the same path.

Now, I think those things did not disappear because of a lack of popularity. They disappeared because we had to protect the user. And each time, the problem did not come from the offline dimension but from other aspects (for "file://", the blurred boundaries of the application).

There is still a market (for lack of a better term to say "there are both producers and consumers who want this option") for such offline applications. We must avoid security issues, that's a given. But I think offline applications can actually increase security: after all, an image editor does not need the ability to mess with all my files.

The sandboxed and highly controlled model we have for web applications could be extended to offline applications. With the rise of Wasm, the possibilities of the canvas tag, and long-term (yet controlled) storage with IndexedDB, we have everything we need to create amazing offline applications.

All we need? Not exactly: some APIs (like the Gamepad API) are hidden behind http/https. You can make an offline game, but you won't have access to gamepads. You are also currently forced to pack all your data in an HTML-compatible encoding. And you cannot sign anything.
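To illustrate how that gap shows up in practice, here is a minimal feature check. The `env` parameter simulates the browser environment so the sketch stands alone; in a real page you would test `window.isSecureContext` and `navigator.getGamepads` directly.

```javascript
// Secure-context gating: a packaged app loaded from file:// (or plain http)
// is not a secure context, so gated APIs such as the Gamepad API are missing.
// `env` stands in for the browser globals (a simulation, not a real API).
function gamepadsAvailable(env) {
  return env.isSecureContext === true
      && typeof env.navigator.getGamepads === "function";
}
```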

Well, I think that's enough. The rest is in the document. Of course, it's nowhere near finalized.

This post is a quick sketch of the extensions we could add to this. The reason they are just "extensions" and not part of the core proposal is that they create security issues. Nothing serious, but enough for me to set them aside.

The first is about service workers and WebRTC. Service workers first: the general idea, pushed for years, is that a website can retain some functionality and interactivity while offline. One way to do that is to embed a "fake server" (a service worker) in your application that intercepts the requests that would have failed (because they try to contact a server unreachable while offline) and responds to them with previously cached results. It's a kind of advanced cache system, under the control of the page itself (as opposed to under the control of the browser's heuristics).
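This intercept-and-answer-from-cache behavior is essentially what the Service Worker API provides today. Here is a minimal sketch of the fallback logic, with the network and cache simulated so the snippet stands alone; in a real service worker, a `fetch` event listener, `event.respondWith()`, and the Cache API would replace these stand-ins.

```javascript
// Cache-falling-back-to-network, the strategy described above.
// `cache` is a Map standing in for the Cache API; `network` simulates fetch().
async function handleRequest(url, cache, network) {
  try {
    const response = await network(url); // try the real server first
    cache.set(url, response);            // keep a copy for offline use
    return response;
  } catch (err) {
    // Server unreachable (offline): answer with the previously cached result.
    if (cache.has(url)) return cache.get(url);
    return { status: 503, body: "offline and not cached" };
  }
}
```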

Now WebRTC. Here the idea is to allow a direct (P2P) connection between two browsers. I will not say more about it.

How are these two things related to my proposal? Well, if such a packaged application could have access to the internet [security danger flag here], it could embed a server. Accepting incoming connections and responding to them is a well-known mechanism; it's something we are used to. The two APIs I mentioned (service workers and WebRTC) could be replaced by (or, actually, used as the backend of) this embedded server. A request coming from the page (service worker) or a request coming from the web (WebRTC) is a request, and both could be treated the same. A flag could be added to distinguish them if needed (maybe we want to treat requests from the page itself differently).
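As a sketch of that unified treatment (all names here are illustrative, not an existing API): requests from the page and from the web share one handler, with an `origin` flag available for routes that want to distinguish them.

```javascript
// Embedded-server dispatch: requests arriving via the service worker ("page")
// and via a WebRTC data channel ("peer") go through the same routing table.
function createEmbeddedServer(routes) {
  return function handle(request, origin /* "page" | "peer" */) {
    const route = routes[request.path];
    if (!route) return { status: 404 };
    // Example policy: some routes only answer the page itself.
    if (route.pageOnly && origin !== "page") return { status: 403 };
    return route.handler(request, origin);
  };
}
```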

Nothing special would have to be done by the browser (except granting the right to access the internet): it could be implemented as an embedded library.

Such an application would have a very clean separation between the view (HTML), the controller (JavaScript), and the model (the embedded server). It would let many websites go offline, as neither the server nor the client would have to be significantly modified. The session system you have in PHP could have an IndexedDB backend, for example.
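As a toy illustration of that last point, here is a PHP-style session layer with a pluggable backend (a Map here; in the browser the same Map-like interface could be implemented on top of IndexedDB — all names are hypothetical):

```javascript
// A PHP-like session store with a swappable backend. On the server the
// backend would be a database; in a packaged offline app it could be an
// IndexedDB object store exposing the same Map-like interface.
class SessionStore {
  constructor(backend) { this.backend = backend; }
  start(sessionId) {
    // Like PHP's session_start(): create the session on first access,
    // then hand back the mutable session data (PHP's $_SESSION).
    if (!this.backend.has(sessionId)) this.backend.set(sessionId, {});
    return this.backend.get(sessionId);
  }
}
```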

It's a system that reuses the good old client-server architecture. It could also have an impact on P2P by bringing mature technologies to run servers client-side, while increasing overall security (now your server is sandboxed). Here again there is a lot of low-hanging fruit to grab. (This message is long enough; I'll go on with a second extension tomorrow.)

You may want to take a look at Isolated Web Apps.

EDIT: it seems this idea has already been visited several times. By the way, the link you mention doesn't cover Flash, Java applets, and "file://". That's not a criticism; I just mean that if we were to include every attempt to make browser-based offline applications, there would be even more entries than what they listed.

Now, I am not sure that what they propose is exactly what I propose. It's definitely close, and from what I saw, we have similar concerns and conclusions.

The question is: is there a chance that a version of this gets standardized someday? What is the W3C's position on it? Do they have a group working on the question? Is there a proposal officially under study? If this forum's goal is to simplify those procedures… I am not completely off topic, am I? What is the next logical step?

I'll present the second extension of what I mentioned. Once again, I excluded it from the original proposal because it has security issues, although, used correctly, it can also increase security.

If the original proposal is ever accepted, it will become obvious that a huge part of many of these offline applications consists of embedded libraries. It's like distributing statically linked programs.

The solution to that is already known: dynamic linking, i.e. linking against previously installed libraries instead of including the kitchen sink in every single application.

How would that be achieved? There are several ways to do it, but because we are on the web, one of them seems more natural than the others: we have been including external scripts for a long time, so we might as well keep doing it.
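For example, a packaged application could declare its shared libraries in a manifest and have them resolved against locally installed copies instead of bundling them. This is a toy sketch; the manifest shape and resolver are purely hypothetical.

```javascript
// "Dynamic linking" for packaged web apps: the manifest references shared
// libraries by name and version instead of embedding them.
const manifest = {
  name: "image-editor",
  libraries: [{ name: "wasm-codec", version: "2" }], // wanted major version
};

// Toy resolver: match name and major version against the installed set.
// A real system would use full semver ranges and integrity hashes.
function resolveLibrary(installed, dep) {
  return installed.find(
    (lib) => lib.name === dep.name && lib.version.split(".")[0] === dep.version
  ) || null;
}
```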

The idea is to organize applications in a graph. What I have in mind is precisely the kind of graph describing a company's local networks, where the public servers are isolated from the internal servers on two different networks. The gateways between the different networks can enforce rules on what can and cannot go through.

Each application would have an address (I guess we can skip the IP part and go directly to DNS-like names), and an application could establish a connection to another application through that address (if it has the rights to do so).
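A minimal sketch of that addressing scheme (everything here is hypothetical; in a real implementation the returned function would be something like a MessagePort between two sandboxes):

```javascript
// Applications register under DNS-like names; a connection is only handed
// over if the target's policy allows the caller.
class AppNetwork {
  constructor() { this.apps = new Map(); }
  register(name, allowedPeers) {
    this.apps.set(name, { allowedPeers, inbox: [] });
  }
  connect(from, to) {
    const target = this.apps.get(to);
    if (!target) throw new Error("unknown application: " + to);
    if (!target.allowedPeers.includes(from)) throw new Error("connection refused");
    // Stand-in for a real message channel between the two applications.
    return (message) => target.inbox.push({ from, message });
  }
}
```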

This is not incompatible with the previous extension: some applications could have internet access and "share" it with other applications if they wish. You would be able to distribute a module in charge of the communications with you, and other applications that want to communicate with you would do so through your module. The policy could be that a module is allowed to connect to a website only if it has the certificates for that website and a certificate, signed by that website, certifying this very module. When offline, the module could provide a default service, even if it had never been online (in the same way you can "commit" offline and later "push" online).
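That certificate rule could look like this. Verification is simulated by a callback, and none of these names correspond to an existing API; it only shows the two conditions the policy combines.

```javascript
// Policy sketch: a module may open a connection to a site only if it holds
// (1) the site's certificate and (2) an endorsement of the module signed by
// that site. `verify(cert, signer)` simulates signature verification.
function mayConnect(module, site, verify) {
  const siteCert = module.siteCerts[site];       // proves who the site is
  const endorsement = module.endorsements[site]; // proves the site trusts this module
  return Boolean(siteCert && verify(siteCert, site)
              && endorsement && verify(endorsement, site));
}
```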