I’ve written more base64 encoders and decoders than I care to remember over the years.
`atob()` and `btoa()` are awful APIs. Their binary representation is packed into a string that has to be extracted using `.charCodeAt()`. While `btoa()` might be fixed by having it accept a `BufferSource`, and `atob()` might be redeemed by adding an extra parameter, I think that the encoding API is a more natural fit.
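For context, this is the round trip the current APIs force; a minimal sketch (the helper names `base64ToBytes` and `bytesToBase64` are mine):

```js
// The status quo: atob() yields a "binary string", so every byte has to be
// pulled back out with .charCodeAt() to get a real Uint8Array.
function base64ToBytes(b64) {
  const s = atob(b64);
  const bytes = new Uint8Array(s.length);
  for (let i = 0; i < s.length; i++) {
    bytes[i] = s.charCodeAt(i);
  }
  return bytes;
}

// Going the other way is no better: pack the bytes into a string for btoa().
function bytesToBase64(bytes) {
  let s = '';
  for (const b of bytes) {
    s += String.fromCharCode(b);
  }
  return btoa(s);
}

console.log(base64ToBytes('AQID')); // -> Uint8Array [1, 2, 3]
console.log(bytesToBase64(new Uint8Array([1, 2, 3]))); // -> 'AQID'
```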
Imagine this as a 6-bit per-symbol encoding of a limited subset of the full character space (i.e., only 64 different code points are recognized and encoded correctly).
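As a concrete refresher on the 6-bit regrouping (this is plain base64, nothing proposal-specific): three 8-bit bytes become four 6-bit symbols, and each symbol indexes the 64-character alphabet.

```js
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Three 8-bit bytes regroup into four 6-bit symbols.
const bytes = [0x4d, 0x61, 0x6e]; // "Man"
const bits = (bytes[0] << 16) | (bytes[1] << 8) | bytes[2];
const symbols = [bits >> 18, (bits >> 12) & 63, (bits >> 6) & 63, bits & 63];
console.log(symbols.map((s) => ALPHABET[s]).join('')); // -> 'TWFu'
```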
What is perhaps strange about this is that you would use a decoder to encode and an encoder to decode in the classic sense:
```js
// 'A' is the encoded form, but we use a decoder to produce it.
console.log(new TextDecoder('base64').decode(new Uint8Array([0]))); // -> 'A'
console.log(new TextEncoder('base64').encode('A')); // -> Uint8Array [0]
```
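To pin the inverted direction down, here is a rough emulation of that surface on top of today's primitives (the class names are hypothetical; note that real base64 pads a lone byte out to 'AA==', which the simplified single-symbol example above elides):

```js
// decode(): bytes in, base64 text out; encode(): base64 text in, bytes out.
class Base64TextDecoder {
  decode(bytes) {
    return btoa(String.fromCharCode(...bytes));
  }
}
class Base64TextEncoder {
  encode(text) {
    return Uint8Array.from(atob(text), (c) => c.charCodeAt(0));
  }
}

console.log(new Base64TextDecoder().decode(new Uint8Array([0]))); // -> 'AA=='
console.log(new Base64TextEncoder().encode('AA==')); // -> Uint8Array [0]
```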
Maybe that's enough of a reason that this isn't quite the right idea. I'm also happy to patch `atob()` and `btoa()`, but this seems like a better fit.
Edit: Forgot some of the rationale: while it is relatively easy to do base64 yourself, few implementations do this well. In particular, with WebCrypto there is a greater need for constant-time encodings of binary data, and I'm not aware of any of those. The Firefox `btoa()` implementation almost certainly isn't constant time.
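To make the constant-time concern concrete: the usual encoder indexes an alphabet table with a secret-derived value, so its memory-access pattern can leak key material. Constant-time codecs (libsodium ships one, for example) compute each code point branchlessly instead. A sketch of that symbol-mapping step, purely illustrative since JavaScript engines make no timing guarantees; the real fix belongs in a native implementation:

```js
// Constant-time predicates over small non-negative integers:
// each returns 0xff when the condition holds and 0 otherwise.
const lt = (x, y) => ((x - y) >>> 8) & 0xff;
const ge = (x, y) => lt(x, y) ^ 0xff;
const eq = (x, y) => (((x ^ y) - 1) >>> 8) & 0xff;

// Map a 6-bit value to its base64 code point with no table lookup and
// no branch, so timing and cache behavior do not depend on the value.
function b64CharCode(v) {
  return (lt(v, 26) & (v + 65)) |             // 0..25  -> 'A'..'Z'
         (ge(v, 26) & lt(v, 52) & (v + 71)) | // 26..51 -> 'a'..'z'
         (ge(v, 52) & lt(v, 62) & (v - 4)) |  // 52..61 -> '0'..'9'
         (eq(v, 62) & 43) |                   // 62     -> '+'
         (eq(v, 63) & 47);                    // 63     -> '/'
}

console.log(String.fromCharCode(b64CharCode(0), b64CharCode(26), b64CharCode(52))); // -> 'Aa0'
```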