The MediaRecorder API allows developers to create Blobs of audio and/or video files from a MediaStream. This can be used, for example, to apply a visual filter to a video using a canvas and save the result as a new video.
However, MediaRecorder does not provide any way to specify when a frame is captured, so a 3-minute video always takes 3 minutes to record. For cheap-to-produce content this means creating the output is much slower than it needs to be, while for expensive-to-produce content any dropped frames are recorded verbatim into the output.
I would like to see an API that allows a developer to control exactly when a frame is captured.
```js
const stream = canvas.captureStream();
const recorder = new MediaRecorder(stream);
recorder.framesPerSecond = 30;
// …then, whenever a frame is ready (method name illustrative only):
recorder.captureFrame();
```
This would append a frame to the output data that captured the current state of the canvas, along with 1/30th of a second of audio from the audio track. The output would have a final frame rate of 30fps.
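A rough sketch of how such an API might be driven, with `captureFrame()` as a purely hypothetical method name (nothing here is a real MediaRecorder API); the `frameTimestamps` helper just computes where each captured frame would land in the output:

```javascript
// Hypothetical driving loop for the proposed API (captureFrame() does
// not exist today -- the name is only illustrative):
//
//   recorder.start();
//   for (let i = 0; i < totalFrames; i++) {
//     drawFrame(ctx, i);        // render as fast as the machine allows
//     recorder.captureFrame();  // append the current canvas state
//   }
//   recorder.stop();
//
// Each captured frame would occupy 1/framesPerSecond seconds of output,
// so frame i lands at timestamp i / fps:
function frameTimestamps(totalFrames, fps) {
  return Array.from({ length: totalFrames }, (_, i) => i / fps);
}

// A 90-frame capture at 30 fps always yields a 3-second clip,
// no matter how long the rendering actually took.
console.log(frameTimestamps(90, 30)[89]); // last frame at 89/30 s
```

The point of the decoupling is that the wall-clock time spent rendering never appears anywhere in this math.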
For a further example use-case, see https://bugs.chromium.org/p/chromium/issues/detail?id=569090
I’ve always wanted something like this - fast-as-possible encoding is necessary for audio transcoding. Right now we have to ship asm.js/WebAssembly versions of the same audio encoder the browser has built-in, just to be able to go faster than real-time.
There was a discussion on the spec to introduce a “scaling factor” that would affect all encoded video chunks’ timestamps, allowing for both the slo-mo and timelapse cases as well as the use case here, but IIRC @jan-ivar and @pehrsons were not convinced. Perhaps we could restart the conversation now?
Note that that proposal did not address what to do if there was audio recording involved.
Also FTR the AudioWorklet interface was proposed as a potential example pattern to follow.
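For concreteness, the “scaling factor” idea boils down to multiplying every encoded chunk’s timestamp by a constant. A minimal sketch (the exact semantics of the factor here are an assumption, not what the spec discussion settled on):

```javascript
// Rescale recorded chunk timestamps: factor > 1 stretches playback
// (slow motion), factor < 1 compresses it (timelapse, or the
// faster-than-real-time encoding asked for in this issue).
function scaleTimestamps(timestampsMs, factor) {
  return timestampsMs.map(t => t * factor);
}

// Frames captured one second apart, played back at 2x speed:
console.log(scaleTimestamps([0, 1000, 2000, 3000], 0.5));
// -> [0, 500, 1000, 1500]
```

As noted above, this says nothing about what happens to an accompanying audio track, which real-time-captured audio makes awkward.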
The scaling factor does not sound directly related to this. You can already change the playbackRate in the Web Audio API, for example, and the video scaling factor sounds analogous to that.
Instead what I would want is basically a way to pipe an OfflineAudioContext to MediaRecorder, so it runs as fast as possible but encodes a file that plays back at normal speed.
I’m not sure what the video equivalent is though, since there isn’t such a thing as an OfflineVideoContext.
This would be extremely useful not just for speeding up encoding beyond real-time, but also for slowing it down when you’re trying to record a canvas with large dimensions (for example) and the user’s computer can’t keep up with the frame rate, so frames end up being skipped. You could process each frame one by one and add them “in your own time”, without worrying about quality loss.
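The slow-producer case can be sketched as reassigning uniform timestamps to frames that were produced at an uneven, slower-than-real-time pace (plain JS, no MediaRecorder involved; `retimeFrames` is an illustrative helper, not a proposed API):

```javascript
// Frames rendered "in your own time" arrive with irregular wall-clock
// timestamps; the output should pretend they were evenly spaced.
function retimeFrames(wallClockMs, fps) {
  const step = 1000 / fps;
  return wallClockMs.map((_, i) => i * step);
}

// A heavy canvas took 0.4s, 1.1s, then 0.5s between frames, but the
// recording still plays back smoothly at 30 fps:
console.log(retimeFrames([0, 400, 1500, 2000], 30));
// -> [0, ~33.3, ~66.7, ~100] (milliseconds, approximately)
```

This is exactly what real-time capture cannot do today: the encoder stamps frames with when they arrived, not when they should play.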