A partial archive of discourse.wicg.io as of Saturday February 24, 2024.

WebCodecs Proposal

pthatcher
2019-06-24

WebCodecs

API that allows web applications to encode and decode audio and video

Many Web APIs use media codecs internally to support APIs for particular uses:

  • HTMLMediaElement and Media Source Extensions
  • WebAudio (decodeAudioData)
  • MediaRecorder
  • WebRTC

But there’s no general way to flexibly configure and use these media codecs. Because of this, many web applications have resorted to implementing media codecs in JavaScript or WebAssembly, despite the disadvantages:

  • Increased bandwidth to download codecs already in the browser.
  • Reduced performance
  • Reduced power efficiency

WebCodecs would be great for:

  • Live streaming
  • Cloud gaming
  • Media file editing and transcoding

See the explainer for more info.

AshleyScirra
2019-06-26

This sounds great! Currently we ship a WebAssembly encoder for WebM Opus in our web app, even though Chrome has one built-in, just so we can get faster-than-realtime encoding.

It’s not quite clear to me from the explainer how the container formats work, though. It looks like we can get a stream from an Opus encoder, but how would the encoded packets be arranged into a WebM container? Would that still be left to the web app to solve? It’s a common case for transcoding, and the browser already has readers and writers for the container formats, so it would be nice if that could be covered as well.

guest271314
2019-06-26

Does this proposal include the ability to decode any video that HTMLVideoElement can decode?

We can already use decodeAudioData() of AudioContext and startRendering() of OfflineAudioContext to get audio data. What I have been trying to achieve is getting video data, e.g. a decodeVideoData() that returns an array of ImageData or ImageBitmap objects faster than real time.

pthatcher
2019-06-26

The scope of this explainer excludes container formats. That would be left to JS/WASM. The idea here is to do what JS/WASM cannot do (as efficiently), but let it control everything from there. However, if there is enough interest, I suppose one could also propose/design some kind of WebMediaContainers API that goes well with this one.
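
Roughly, the division of labor might look like the sketch below. The encoder shape here is illustrative only, and muxWebM stands in for a JS/WASM muxer that the application would bring itself:

const chunks = [];
const encoder = new VideoEncoder({
  output: (chunk) => chunks.push(chunk), // raw, uncontainerized VP8
  error: (e) => console.error(e),
});
encoder.configure({ codec: "vp8", width: 640, height: 480 });
for (const frame of frames) encoder.encode(frame); // `frames` assumed, e.g. from a camera or canvas
await encoder.flush();
const webmBlob = muxWebM(chunks); // containerization stays in JS/WASM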

pthatcher
2019-06-26

The idea here is to expose all of the codecs that the browser currently has underneath in the implementation of HTMLMediaElement, so yes, it should be able to decode any video. However (as mentioned in a previous comment), the media going into the decoder is not containerized. So, if you want to decode vp8 inside of mp4, you need to parse the mp4 and pass in the raw vp8 rather than passing in the mp4.
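
For example, assuming a hypothetical JS/WASM demuxer (demuxMp4 below is not part of this proposal) and a decoder shaped roughly like the one in the explainer:

const decoder = new VideoDecoder({
  output: (frame) => { /* render or process the decoded frame */ },
  error: (e) => console.error(e),
});
decoder.configure({ codec: "vp8" });
for (const chunk of demuxMp4(mp4Bytes)) {
  decoder.decode(chunk); // raw vp8 bitstream, not mp4
}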

It’s true that decodeAudioData is already there, but it has flaws pointed out in the explainer.

The idea of this proposal is to basically give you “decodeVideoData”, but also “encodeVideoData” and “encodeAudioData”, and all with a better (but lower-level) API.

guest271314
2019-06-26

I filed an issue relevant to the explainer’s current language about MediaRecorder being able to record multiple tracks, which is not presently possible.

I also filed a PR to include an additional use case: merging multiple input media files into a single output stream or file.

Can “Decoded and encoding images” be clarified in the explainer?

Does the proposal intend to provide a means to get the encoded images from any container that HTMLVideoElement is currently capable of decoding?

guest271314
2019-06-26

The idea of this proposal is to basically give you “decodeVideoData”, but also “encodeVideoData” and “encodeAudioData”, and all with a better (but lower-level) API.

Ok. That should be close enough, if not exactly, to what I have been trying to achieve at https://github.com/guest271314/MediaFragmentRecorder by piping input through MediaRecorder to get various, potentially dissimilar, input containers and codecs into WebM, e.g.:

const merged = await decodeVideoData(
  ["video1.webm#t=5,10", "video2.mp4#t=10,15", "video3.ogv#t=0,5"],
  { codec: "openh264" }
);

similar to the output of using mkvmerge (though mkvmerge also sets cues and duration, which MediaRecorder, at least at Chromium, does not).

brion
2019-07-13

I really like this proposal – it would be very useful for several of my plans for Wikipedia’s video support, including in-browser transcoding on upload and realtime composition and transitions in a video editor… as long as it’s possible to manipulate and synthesize the data in a DecodedVideoFrame.

Currently it looks like there’s no way specified to manipulate one other than to pass it into stuff for playback or recording, and no way to create one except through decoding a compressed frame.

Ideally, I’d be able to get at the pixels in a decoded frame so I can do something custom with them (recode them manually, or combine with another decoded frame or a generated image to create a transition or visual effect) and then send that on.

Would you consider adding pixel-data getters and a constructor for DecodedVideoFrame, or would another way of doing these be preferable? Thanks!
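
For example, something like the following; getPixelData, this DecodedVideoFrame constructor, and applyCrossfade are all hypothetical names, not anything in the current explainer:

// Hypothetical API, for illustration of the request only.
const pixels = decodedFrame.getPixelData(); // e.g. planar I420 bytes
applyCrossfade(pixels, otherFramePixels, 0.5); // custom JS/WASM processing
const synthesized = new DecodedVideoFrame({
  format: "I420", width: 1280, height: 720,
  data: pixels, timestamp: decodedFrame.timestamp,
});
// ...then hand `synthesized` to an encoder or renderer as usual.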

[edit: It occurs to me that some kind of composition of this proposal with something like [Proposal] Allow Media Source Extensions to support demuxed and raw frames may be a happy union on that front. :slight_smile: ]

Dale_Curtis
2019-07-15

Re: composition with MSE. Yes, I had that thought as well. If we end up with standardized definitions for EncodedPacket and DecodedFrame, we could add append methods for arrays of those to MediaSource SourceBuffer objects.

It’s possible that has a longer standardization process, since it requires new APIs versus an extension to the byte stream registry. It’s also unclear exactly how a wasm decoder would be able to write directly into a JS object. Possibly the object could use ArrayBufferViews that point into wasm memory.
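
The wasm-memory idea is just the standard typed-array view over a module’s linear memory, something like this (framePtr and frameLen would come from the wasm decoder itself):

const memory = wasmInstance.exports.memory; // a WebAssembly.Memory (assumed export)
const view = new Uint8Array(memory.buffer, framePtr, frameLen);
// A DecodedFrame-like object could hold `view` rather than owning a copy.
// Caveat: `memory.buffer` is detached whenever the wasm memory grows, so
// such views would have to be treated as short-lived.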

(Let’s keep subsequent discussion of that on the other proposal. Thanks!)

pthatcher
2019-07-23

It should be possible to get to raw audio through WebAudio. It should also be possible to go through a canvas to get to the raw pixel data, but that’s rather hacky. We have been discussing better ways to get access to raw pixel data but it’s complicated to do it in a way that’s easy and fast.
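
For reference, the canvas route looks roughly like this today (video being an HTMLVideoElement):

const canvas = document.createElement("canvas");
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
const ctx = canvas.getContext("2d");
ctx.drawImage(video, 0, 0); // paint the element’s current frame
const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
// `data` is a Uint8ClampedArray of RGBA pixels; the GPU readback is
// what makes this slow for per-frame processing.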

npurushe
2019-08-27

We at Twitch are supportive of this proposal. It could remove much of the overhead that containers (both chunked CMAF and MPEG-TS) add for low-latency streaming.

It would also be useful to have a way to enumerate the available codecs and their capabilities, e.g. the maximum profile and level supported for video.
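
The closest existing surface is Media Capabilities, which answers yes/no per configuration rather than enumerating limits, e.g.:

const info = await navigator.mediaCapabilities.decodingInfo({
  type: "file",
  video: {
    contentType: 'video/mp4; codecs="avc1.640028"',
    width: 1920, height: 1080, bitrate: 8000000, framerate: 30,
  },
});
console.log(info.supported, info.smooth, info.powerEfficient);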

sagoston
2019-08-28

We at Sony Interactive Entertainment are supportive of this proposal. The efficiency and flexibility provided would be a great addition!

santosh_sampath
2019-08-29

I think this would be a great replacement for the PPAPI video codecs that were available on Chromebooks.

aboba
2019-08-30

This is a very interesting proposal that could apply to both streaming and real-time media.

pthatcher
2019-09-13

Given the amount of interest, I’ll transfer ownership of the git repository to the WICG.

yoavweiss
2019-09-13

The repo now lives at https://github.com/wicg/web-codecs

Thank you for flying WICG!! :slight_smile:

guest271314
2019-09-21

What is necessary to begin that process (proposing a WebMediaContainers API)?

pthatcher
2019-09-23

Probably write an explainer and post it on Discourse.

guest271314
2019-09-24

I posted the proposal on Discourse. What needs to be in the explainer?

let videoWriter = new WebMediaContainer({ container: "webm", videoCodec: "vp8", audioCodec: "opus", ...videoWriterSettings });
videoWriter.addVideoFrame(
  videoFrame /* WebP, PNG, etc. as Blob, ArrayBuffer, data URI, ImageBitmap, ImageData, or canvas */,
  frameDuration, width, height /* , WebVTT, ...frameSettings */
);
videoWriter.addAudioFrame(
  audioFrame /* AudioBuffer, Float32Array, Blob, ArrayBuffer, data URI, other... */,
  sampleRate, numberOfChannels, frameDuration
);
await videoWriter.compile(); // resolves with a Blob

Expose the media encoders, muxers, and container writers that the browsers already have in their source code directly to the developer?

Again, the concept is very simple. Browsers that have implemented MediaRecorder already write data to a container, currently WebM or Matroska. The developer should be able to use that internal code from JavaScript, whether the input is static files or live media.
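
For context, the only route to that internal muxer today is MediaRecorder itself, e.g. (stream being a live MediaStream):

const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8,opus" });
const parts = [];
recorder.ondataavailable = (e) => parts.push(e.data);
recorder.onstop = () => {
  const webm = new Blob(parts, { type: "video/webm" }); // a finished WebM file
};
recorder.start();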