A partial archive of discourse.wicg.io as of Saturday February 24, 2024.

[Proposal] Pen Customizations API



We (Intel) would like to propose a new API to leverage pen capabilities that have been available on the market for quite some time. The main feature we’re trying to expose is the ability to store inking customizations in the pen itself, which can then be retrieved.

I will not go into too much detail in this post since we have put up an explainer over here. However, we’re very open to feedback, especially on the API shape, permissions, etc. Nothing is set in stone, and I’ll be happy to answer questions here.

It’s a rather niche feature, but it is useful for those who need it. A video is worth a thousand words, so take a look at this video to see how this API could be used.

Provided that there is interest, we would like to move this forward on the standardization track.


This is a great feature: not only can I get colors from the app, but I can also bring colors from real objects back into the app. Looking forward to it!


Colors should be easy to access and manage across all applications. This is the core value of what darktears proposed. This color API is what I’d like to see in the market!


I am a newcomer and certainly not a professional, so don’t take anything I say too seriously. Also, English is not my native language.

This seems interesting, but I have a ton of questions.

Is the pen painting colors or materials? By that I mean two things. First, do colors simply overwrite what is beneath them, or do they mix (and how they mix will be influenced by the material)? Second, there is the export of the painting to any support other than a screen. A painting can represent a river and a rock, and the river will reflect light differently.

This is especially true for materials in 3D: metal will not reflect light the way wood or plastic does.

From the point of view of the pen/surface, it would only be another piece of data to store, alongside the color.

The natural evolution of that is to separate the presentation of the information from the information itself. You would have the layer(s) containing the data, the layer displaying it, and a formula to mix the incoming data from the pen with what is already there in the layer.

In its basic form it would look like “data=rgba, display=show data directly, mix=erase”. A slightly different one would be “data=hsl, display=hsl_to_rgb” with a smart mix (the data sent by the pen can indicate the desired action: darken, brighten, desaturate, etc.). A more complex one would just pull in a fragment shader, so you can compose your information and see the result you’ll have at the end.
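To make the idea above concrete, here is a minimal sketch of the “mix formula” notion. None of these names come from the actual proposal; `erase` and `smartMix` are hypothetical illustrations of how incoming pen data could be combined with what a layer already stores.

```javascript
// mix=erase: the incoming value simply overwrites what is beneath it.
const erase = (incoming, existing) => incoming;

// A "smart mix" where the pen indicates the desired action on a stored
// HSL value [hue, saturation, lightness] instead of sending a raw color.
const smartMix = (incoming, existing) => {
  const [h, s, l] = existing;
  switch (incoming.action) {
    case 'darken':     return [h, s, Math.max(0, l - incoming.amount)];
    case 'brighten':   return [h, s, Math.min(1, l + incoming.amount)];
    case 'desaturate': return [h, Math.max(0, s - incoming.amount), l];
    default:           return existing;
  }
};

// Example: the pen asks to darken the stored color by 0.25.
const result = smartMix({ action: 'darken', amount: 0.25 }, [210, 0.5, 0.75]);
console.log(result); // [210, 0.5, 0.5]
```

A fragment shader would generalize this: the mix function becomes arbitrary GPU code instead of a fixed set of actions.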

But this started as a question and turned into all that…

Anyway, the question was: what kind of data is the pen painting? (Sorry for the long post)

What kinds of color spaces are supported? (RGB of course, but also: HSL, HSV, CMYK, CIELAB, paletted colors…) Is the notion of layers built into this?

I have difficulty seeing where the API starts and ends and where the software starts and ends. What in the video is built on top of the API but is not part of it, and what is actually part of the API?

What are the typical use cases of this API? Isn’t it a bit redundant with existing software that already offers this kind of functionality?


Hi @Bubuche87 thanks for the long message.

We’re talking about regular digital pens, nothing that special about them. They just have a tiny memory inside where we can store some information.

So I touched a bit on that in the explainer, but currently USI 1.0 pens support only 140 colors or so, while USI 2.0 pens support 24-bit colors, and I have personally asked the USI forum to extend this to more color spaces like the ones you mentioned. Ideally we should cover what the web covers and what’s available on the market. The discussion has not yet started at USI, but when it does I can update this thread so you can provide input. Back to the kind of data the pen is storing: it is just bits, nothing more than that.
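To illustrate what “24-bit colors” means in terms of stored bits: a color can be packed into a single 24-bit value. The exact wire format is defined by the USI specification, not by this sketch; the functions below only demonstrate the packing concept.

```javascript
// Pack an RGB triple (each channel 0-255) into a 24-bit integer,
// the size of color value a USI 2.0 pen can store.
function packRgb24(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// Unpack a 24-bit value back into its [r, g, b] channels.
function unpackRgb24(value) {
  return [(value >> 16) & 0xff, (value >> 8) & 0xff, value & 0xff];
}

const packed = packRgb24(0x33, 0x66, 0x99);
console.log(packed.toString(16)); // "336699"
console.log(unpackRgb24(packed)); // [51, 102, 153]
```

A USI 1.0 pen, by contrast, stores an index into a fixed palette of ~140 colors rather than a full 24-bit value.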

Did you take a look at the explainer about the proposed APIs? There are not that many APIs we’re adding; it builds on top of existing Pointer Events capabilities. The video shows a tiny drawing application that we put together ourselves as a showcase to demonstrate the API. We don’t intend to make it a full-fledged drawing application; it was simply the easiest way to test the API. Ideally the API would be adopted by https://beta.tldraw.com/, Sketchpad - Draw, Create, Share!, etc.
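Since the proposal builds on Pointer Events, usage could look roughly like the sketch below. This is an assumption for illustration only: the property name `penCustomizationsDetails` and the field `preferredInkingColor` are hypothetical; the real API shape is in the explainer.

```javascript
// Hypothetical: pick the stroke color for a pointer event, preferring a
// color stored in the pen (if the event exposes one) over the app default.
// "penCustomizationsDetails" is an assumed name, not the actual spec.
function strokeColorFor(event, fallback = '#000000') {
  const stored = event.penCustomizationsDetails?.preferredInkingColor;
  return stored ?? fallback;
}

// In a drawing app this would run inside a pointer event handler, e.g.:
// canvas.addEventListener('pointerdown', (e) => {
//   ctx.strokeStyle = strokeColorFor(e);
// });
```

The point is that an application keeps its existing pointer-event code path and only adds a read of the pen-stored preference when it is present.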


Explainer: GitHub - darktears/pen-customizations

Demo code: GitHub - darktears/tiny-canvas


One interpretation of the proposal is that it’s more about the “device” than the “events” coming from it. From that perspective, this API could alternatively be exposed on a generic “device-centric” interface like Input Device Capabilities.