A partial archive of discourse.wicg.io as of Saturday February 24, 2024.

[Proposal] makeUserMedia -> virtual Presentation API devices

rektide
2020-03-26

Abstract

Hello hello! This is a very loosely defined proposal, but:

Just as getUserMedia fetches media from an existing device on the user’s system, I would like a makeUserMedia that lets the web page create & output to a media device on the user’s system. That is, I would like a way for the web page to “create” a virtual webcam or microphone device on a system & stream to it.
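To make the shape of the idea concrete, here is a minimal sketch of what calling such an API might look like, assuming a constraints-style surface modeled on getUserMedia. makeUserMedia & the returned virtual-device object are entirely hypothetical; only captureStream() is a real, shipping API:

    // Hypothetical API sketch: makeUserMedia does not exist. The name, the
    // constraints shape, & the returned virtual-device object are assumptions
    // modeled on getUserMedia.
    const virtualCam = await (navigator.mediaDevices as any).makeUserMedia({
      video: { width: 1280, height: 720, frameRate: 30 },
      label: "In-world camera", // hypothetical: how other apps would list the device
    });

    // The page produces frames however it likes: here, a canvas it renders into.
    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const stream = canvas.captureStream(30); // captureStream() is real today

    // Hypothetical sink: pipe the page's stream out to the system-level device,
    // where any native application could open it as an ordinary webcam.
    await virtualCam.output(stream);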

Example: an in-world (WebVR) video camera

One example that we could get no other way than by the web directly supporting a makeUserMedia-alike would be: being in WebVR, using WebXR to hold a virtual webcam, & streaming that in-world camera out to an audience on an everyday teleconference system, or on something like a game streaming system. Whereas teleconferencing systems usually can only stream what is “on screen” for the user, this example uses makeUserMedia to let the web page output to a dedicated virtual device that has its own output.
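The capture half of this is already expressible with real APIs: render the in-world camera’s view to a canvas & turn it into a MediaStream. A sketch, where renderFromVirtualCamera() is a hypothetical stand-in for the app’s own scene rendering:

    // Real APIs for the capture half; renderFromVirtualCamera() is hypothetical,
    // standing in for the app drawing the in-world camera's view each frame.
    declare function renderFromVirtualCamera(gl: WebGL2RenderingContext, time: number): void;

    const camCanvas = document.createElement("canvas");
    camCanvas.width = 1280;
    camCanvas.height = 720;
    const gl = camCanvas.getContext("webgl2")!;

    function frame(time: number) {
      renderFromVirtualCamera(gl, time);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

    const inWorldFeed: MediaStream = camCanvas.captureStream(30);
    // Today this stream can only reach a <video>, an RTCPeerConnection, or a
    // MediaRecorder, never a system device that other applications could open.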

Without a makeUserMedia this use case becomes much harder to achieve. Yet it is eminently doable on a native platform, because a native application can create virtual audio & virtual video devices that show whatever they want.

The web page has impressive getUserMedia tools, a great & interesting audio subsystem (and perhaps another low-level audio API on the way), steadily improving video processing, & new WebGPU standards incoming. These are wonderful capabilities. But the ability to use them for video-jockeying, performance art, & other intensive multi-media systems is limited. Adding the ability to makeUserMedia, & to custom-build content specifically for that output, would be overwhelmingly liberating for art & media & social tribes.
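Those capabilities already compose into a page-built a/v stream today, using only real, shipping APIs; the missing piece is any way to hand the result to the system as a device. A sketch:

    // Real APIs only: compose page-generated audio & video into one stream.
    const audioCtx = new AudioContext();
    const osc = audioCtx.createOscillator(); // stand-in for any page-built audio graph
    const audioSink = audioCtx.createMediaStreamDestination();
    osc.connect(audioSink);
    osc.start();

    const drawCanvas = document.createElement("canvas");
    const videoFeed = drawCanvas.captureStream(30);

    const composed = new MediaStream([
      ...videoFeed.getVideoTracks(),
      ...audioSink.stream.getAudioTracks(),
    ]);
    // `composed` is a complete a/v stream built by the page; there is no way
    // today to expose it to the rest of the system as a capture device.

On to the: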

Lightweight proposal

There is one similar-ish capability the web already has: the Presentation API. With the Presentation API, the web page can discover & control a remote screen that is capable of playing media files & hosting its own web page, while maintaining a connection to that presenting device. This sounds like the ideal host for the above “video camera inside a WebVR world” use case, in that I the user would still have my experience, but I would be presenting the in-world camera from the Presentation Receiving device. So what is missing, to me, is not really a dedicated makeUserMedia capability (which comes with a host of questions about how & what it would look like). Instead, the web ought to have conventions & standards for using the Presentation API to present to a virtual local device, on the same system, run by the browser itself.
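For reference, here is what driving the Presentation API looks like today, using only real, shipping API surface; the receiver URL is a hypothetical placeholder. The proposal’s only addition is that the browser itself would list a virtual local device among the available displays:

    // Real Presentation API as shipped today; the receiver URL is hypothetical.
    const request = new PresentationRequest(
      "https://example.com/virtual-camera-receiver.html",
    );

    // start() shows the browser's display picker; under this proposal, a
    // browser-hosted "virtual local device" would appear in that list.
    const connection = await request.start();

    connection.addEventListener("connect", () => {
      // The controlling page & the receiver page keep a message channel open.
      connection.send(JSON.stringify({ type: "hello", role: "virtual-camera" }));
    });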

The Presentation API is a great start here, & integrates well with how we output a/v from the page today. The only introduction is the idea of this virtual-device output target. What I see as the remaining work that would benefit from standardization is negotiating the output parameters. Since the output device is arbitrary & virtual, it ought to be able to adopt whatever parameters the user wants. A page ought to be able to specify (a sketch of such a negotiation follows this list):

  • output buffers (frame size/resolution, & content type: RGB, HDR, depth, thermal, IR, normals, &c)
  • desired frame rate
  • sound frequency/bit-depth
  • number of sound channels & location of each output
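Here is the sketch referenced above: a hypothetical negotiation message covering those parameters, sent over a real PresentationConnection. Nothing about the message shape is standardized; it only illustrates what would need negotiating:

    // `connection` is the real PresentationConnection from the earlier sketch.
    declare const connection: PresentationConnection;

    // Hypothetical message shape: none of this is standardized.
    const outputParams = {
      video: {
        width: 1920,
        height: 1080,              // frame size/resolution
        frameRate: 60,             // desired frame rate
        buffers: ["rgb", "depth"], // content type: rgb, hdr, depth, ir, normals, &c
      },
      audio: {
        sampleRate: 48000, // sound frequency
        bitDepth: 24,      // bit-depth
        channels: [        // number of sound channels & location of each output
          { id: 0, location: "front-left" },
          { id: 1, location: "front-right" },
        ],
      },
    };
    connection.send(JSON.stringify({ type: "configure-output", outputParams }));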

Assorted links

A 10k bounty for an implementation in OBS. Gee, wouldn’t it be great if a page could do this?

Discussion of an implementation on one native platform.

A ticket I opened requesting virtual cameras for the pluggable WebRTC system Hydra.

rektide
2020-05-07

Just ran into someone experimenting with virtual user media devices.

Please can we do this for real?