As a developer working on a video editor that runs in the browser, I got very excited when I saw the OffscreenCanvas API. I believe I have identified a way to improve on it; my proposal is below.
Problem:
A very common pattern when manipulating video in an application is to run a requestAnimationFrame loop that extracts the currently presented frame of an HTMLVideoElement, manipulates the pixels, and then draws the result on a canvas.
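For concreteness, here is a minimal sketch of that pattern, assuming a video element and a 2D canvas already on the page:

// Typical main-thread loop: grab the current video frame,
// tweak the pixels, and paint the result.
const video = document.querySelector("video");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

function render() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // ... manipulate frame.data (RGBA bytes) here ...
  ctx.putImageData(frame, 0, 0);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);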
The release of the OffscreenCanvas API allows for the rendering of WebGL graphics off the main thread. However, there is currently no reliable way to render a video on an OffscreenCanvas.
We currently have to run a requestAnimationFrame loop on the main thread that creates an ImageBitmap and sends it to a worker, where it is then rendered on the OffscreenCanvas. This defeats the purpose: if any UI work is performed on the main thread, it can delay the sending of the frame.
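A sketch of that workaround (file names and message shapes are mine, but every API used here exists today):

// main.js — this loop must stay on the main thread
const video = document.querySelector("video");
const canvas = document.querySelector("canvas");
const worker = new Worker("render-worker.js");
const offscreen = canvas.transferControlToOffscreen();
worker.postMessage({ canvas: offscreen }, [offscreen]);

async function pump() {
  // createImageBitmap accepts an HTMLVideoElement directly
  const bitmap = await createImageBitmap(video);
  worker.postMessage({ bitmap }, [bitmap]); // transfer, don't copy
  requestAnimationFrame(pump);
}
requestAnimationFrame(pump);

// render-worker.js
let ctx;
onmessage = ({ data }) => {
  if (data.canvas) {
    ctx = data.canvas.getContext("2d");
  } else if (data.bitmap) {
    ctx.drawImage(data.bitmap, 0, 0);
    data.bitmap.close(); // release the bitmap's memory promptly
  }
};

Note that if the main thread is busy with UI work, the pump() callback is delayed and the worker simply has no new frame to draw.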
Proposed solution:
An OffscreenVideo would have the same capabilities as an HTMLVideoElement but would be controllable from either the main thread or a worker.
My initial thought would be to follow the pattern that OffscreenCanvas introduced and apply it to video:
const video = document.createElement("video");
const offscreenVideo = video.transferControlToOffscreen();
The OffscreenVideo interface would implement all the playback-related functions.
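To illustrate, here is a hypothetical worker-side usage. None of the OffscreenVideo surface exists today; transferControlToOffscreen() on a video element, the transferred object's play(), and drawing it as an image source are all part of the proposal:

// main.js — transfer playback control to a worker (proposed API)
const video = document.createElement("video");
video.src = "movie.mp4";
const offscreenVideo = video.transferControlToOffscreen(); // hypothetical
const worker = new Worker("video-worker.js");
worker.postMessage({ video: offscreenVideo }, [offscreenVideo]);

// video-worker.js — render frames entirely off the main thread
onmessage = async ({ data }) => {
  const video = data.video; // the transferred OffscreenVideo
  const canvas = new OffscreenCanvas(1920, 1080);
  const ctx = canvas.getContext("2d");
  await video.play(); // hypothetical: same surface as HTMLVideoElement
  function render() {
    ctx.drawImage(video, 0, 0); // hypothetical: OffscreenVideo as an image source
    // ... pixel manipulation here ...
    requestAnimationFrame(render); // rAF is exposed in dedicated workers alongside OffscreenCanvas
  }
  render();
};

With this shape, the frame extraction, manipulation, and rendering all happen in the worker, so main-thread UI work can no longer stall the video pipeline.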
An alternative approach would be to piggyback on the work being done to allow the Media Source Extensions API to run in a worker: https://github.com/w3c/media-source/issues/175.
A MediaSource could implement a mechanism to get an ImageBitmap for a given time. This looks a bit more unrealistic to me, as all of this logic currently lives only as part of the HTMLVideoElement.
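For illustration only, a sketch of what that could look like; getImageBitmap is a name I invented, and it assumes MSE is available in workers:

// Inside a worker, with a MediaSource fed by SourceBuffers as usual.
const mediaSource = new MediaSource();
// ... attach SourceBuffers and append media segments ...

const canvas = new OffscreenCanvas(1280, 720);
const ctx = canvas.getContext("2d");

// Hypothetical: resolve the frame presented at `time` as an ImageBitmap.
async function drawFrameAt(time) {
  const bitmap = await mediaSource.getImageBitmap(time); // invented method
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close();
}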
–
I am looking forward to hearing your thoughts about this.