I was looking at Math.clz32, which is unusual in that its return type is fixnum. That made me wonder why there is a fixnum type at all. As far as I can tell, the only other place fixnum shows up is in numeric literals. So what's its purpose? Why not use signed and have one less type to worry about?
The main point of fixnum is that it is a subtype of both signed and unsigned, so it may be used in sign-observing operations like comparison without explicit coercion. It was introduced primarily for numeric literals, so that you can write

    (x|0) < 5

instead of

    (x|0) < (5|0)

where the latter both looks a bit silly and costs extra code size overall. clz32 just happened to be one of the few other cases where we know we have an integer in the [0, 2^31) range (its result is always in [0, 32]), which lies inside both the signed and unsigned ranges.
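A minimal sketch of how this plays out, written in asm.js style (the module shape and coercion idioms here are illustrative; this runs as plain JavaScript whether or not an engine validates it as asm.js):

```javascript
// Sketch: a clz32 result can feed a sign-observing comparison
// directly, because its value is always in [0, 32] — fixnum
// territory — so no extra |0 coercion is needed on the call.
function Mod(stdlib) {
  "use asm";
  var clz32 = stdlib.Math.clz32;
  function f(x) {
    x = x | 0;             // parameter coerced to int via |0
    if (clz32(x) < 5) {    // fixnum result vs. fixnum literal 5
      return 1;
    }
    return 0;
  }
  return { f: f };
}
var f = Mod({ Math: Math }).f;
// clz32(1) is 31, so f(1) returns 0; clz32(-1) is 0, so f(-1) returns 1.
```

Without the fixnum type, the comparison would have to be written `(clz32(x)|0) < (5|0)` to pin down signedness on both sides.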