The MediaRecorder API allows developers to record a MediaStream into Blobs of encoded audio and/or video data. This can be used, for example, to apply a visual filter to a video using a canvas and save the result as a new video.
However, MediaRecorder does not have any way to specify when a frame is captured, so recording is tied to the wall clock: a 3 minute video always takes 3 minutes to record. For cheap-to-produce content this means that creating the output is much slower than it needs to be, while for expensive-to-produce content any dropped frames are recorded verbatim into the output.
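For context, this is roughly what canvas recording looks like with the existing API. This is a browser-only sketch; codec/mimeType support varies, and the save step is left as a comment:

const canvas = document.querySelector('canvas');
const stream = canvas.captureStream(30); // 30 fps, sampled in real time
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  // ...save or upload the blob
};
recorder.start();
// Even if every frame renders instantly, a 3 minute video still
// takes 3 minutes of wall-clock time to record.
setTimeout(() => recorder.stop(), 3 * 60 * 1000);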
I would like to see an API that allows a developer to control exactly when a frame is captured.
Something like:
const stream = canvas.captureStream();
stream.addTrack(someAudioTrack); // mux an audio track into the stream
const recorder = new MediaRecorder(stream);
recorder.framesPerSecond = 30; // proposed: declare the output frame rate
recorder.addFrame(); // proposed: capture one frame on demand
Each addFrame() call would append one frame capturing the current state of the canvas to the output, along with the next 1/30th of a second of audio from the audio track. The output would have a final frame rate of 30fps.
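Building on the setup above, a hypothetical render loop could then produce a 3 minute video as fast as the frames can be drawn, decoupled from the wall clock (framesPerSecond and addFrame() are the proposed additions, not an existing API, and drawFrame is an app-defined rendering function):

const totalFrames = 3 * 60 * 30; // 3 minutes at 30 fps
recorder.start();
for (let i = 0; i < totalFrames; i++) {
  drawFrame(i);        // render frame i to the canvas, as fast as possible
  recorder.addFrame(); // capture it, plus 1/30 s of audio
}
recorder.stop();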
For a further example use case, see https://bugs.chromium.org/p/chromium/issues/detail?id=569090