Yes, you’re right, it would make things more complex, and since most asm.js code is autogenerated anyway, it isn’t too much of an annoyance to simply remove function calls from these expressions.
[Updated] I just learned that an ArrayBuffer’s byteLength can be zero. Thus, we need both the mask check and the <= byteLength check.
A few more tweaks made to the proposal found while implementing that generally simplify things:
- I think it is simpler to treat byteLength as an ordinary global import (with a magic type like fround) which means it can be imported anywhere, possibly multiple times. (The change heap function must still be before all other functions to provide AOT bounds-check-elimination.)
- Heap views have to be replaced in the order they are declared in the change-heap function.
- For the same reason that HEAP32[(f()|0)>>2] is disallowed (a call in the index or rhs could change the heap mid-access), so is “HEAP32[i>>2] = (f()|0)”.
Let me know if there are any problems with any of these changes (or the previous minimum-heap-length change).
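Putting the tweaks above together, a minimal sketch of a module with a change-heap function might look like the following. The mask/min constants and names here are illustrative, not the exact ones the proposal mandates:

```javascript
// Sketch of an asm.js module with a change-heap function (illustrative).
function MyModule(stdlib, foreign, buffer) {
  "use asm";
  // byteLength imported like an ordinary global import (magic type, like fround).
  var byteLength = stdlib.Function.prototype.call.bind(
    stdlib.Object.getOwnPropertyDescriptor(
      stdlib.ArrayBuffer.prototype, "byteLength").get);
  var HEAP8 = new stdlib.Uint8Array(buffer);
  var HEAP32 = new stdlib.Int32Array(buffer);

  // Must come before all other functions so the validator can still do
  // AOT bounds-check elimination.
  function changeHeap(newBuffer) {
    if (byteLength(newBuffer) & 0xffff ||    // length a multiple of 64KB (illustrative)...
        byteLength(newBuffer) <= 0xffff ||   // ...above the minimum (byteLength can be 0!)
        byteLength(newBuffer) > 0x80000000)  // ...and at most 2GB (the later tweak)
      return false;
    // Views must be replaced in the order they were declared.
    HEAP8 = new stdlib.Uint8Array(newBuffer);
    HEAP32 = new stdlib.Int32Array(newBuffer);
    buffer = newBuffer;
    return true;
  }

  // Note: no function calls in index or rhs expressions, i.e. neither
  // HEAP32[(f()|0)>>2] nor HEAP32[i>>2] = (f()|0) would validate.
  function store(i, v) { i = i | 0; v = v | 0; HEAP32[i >> 2] = v; }
  function load(i) { i = i | 0; return HEAP32[i >> 2] | 0; }

  return { changeHeap: changeHeap, store: store, load: load };
}
```

Note that changeHeap only swaps the views; copying the old contents into the new buffer (or transferring them) is the caller’s responsibility before handing the buffer over.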
I don’t understand what the last tweak means; could you elaborate?
Sorry, I rushed that a bit. Updated previous comment.
An initial implementation of exactly what is in the proposal above landed in Firefox Nightly and should ship with Firefox 35. (Wow. Such AST matching.) Feel free to play with it and post feedback here.
IIUC (correct me if I’m wrong Alon), the tentative plan is for Emscripten to allow resizable heaps by default (so as to avoid requiring users to specify the heap size up-front) once this optimization is in Firefox release. Comments welcome on this timing, though.
Alon, do you think you could post a link to a public demo that uses change-heap so we could try it out?
Well, we would encourage it more when browsers are ready, I suppose, but I’m not sure we would want it on by default? It does inhibit some amount of optimization. (My worry is that if it’s on by default, almost no one who doesn’t need a resizable heap will disable it and get those opts; whereas if it’s off by default, people hit the memory error, so they will know they need to enable it.) On the other hand, the cost is about 0.5% in code size. Perhaps not a big deal.
In any case, whether this is on or off by default in emscripten would end up being whatever the emscripten community prefers, we’ll ask on the mailing list.
What kind of demo do you want? Just a random demo that happens to be built this way, or something that actually grows the heap over time?
It doesn’t disable anything on the browser side. A 0.5% code size increase seems reasonable (is this compressed?). Also, while flagrant heap-too-small bugs are easy to find/fix, I expect (1) that this leaves rare memory-spike cases that end up shipping, and (2) that it forces devs to claim more address space than they normally need (leading to a higher incidence of virtual-address-space OOMs on 32-bit). Asking the list seems like a good idea, though.
Ideally something that does grow, if it’s not too much trouble. It’d be nice to see it in action.
I realized that, if engines start allowing >2gb ArrayBuffers in the future (none appear to, atm, but the spec indicates ArrayBuffer lengths can be up to 2^53-1), it would be really nice to have a maximum-heap-length requirement as well. This avoids the unpleasant situation where (due to the signed right-shift in heap access) asm.js cannot access the full range of an ArrayBuffer which in turn poses problems for the use of memory protection tricks to avoid bounds checks.
Thus, the ‘if’ in the change-heap function would have a third disjunct:
|| byteLength(buffer) > 0x80000000
In the future, if we want to allow asm.js to access a full [0, 4gb) range, we could allow a bigger max-length constant which, if used, would require all heap accesses to use unsigned right shift.
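A quick illustration of why the signed shift caps the addressable range (plain JS; I verified the values shown):

```javascript
// With a 2GB byte offset, the signed right shift used in asm.js heap
// accesses coerces the index to a negative int32, so HEAP32[i>>2] can't
// reach beyond 2GB; the unsigned shift keeps the index in [0, 4GB).
var i = 0x80000000;    // 2GB byte offset
console.log(i >> 2);   // -536870912 (negative: unusable as a view index)
console.log(i >>> 2);  // 536870912
```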
I uploaded a demo of bullet that uses memory growth at
Pick a large number of cubes and look in the web console; for 1500 cubes, for example, it should log multiple memory-growth events.
This doesn’t contain the last change. It sounds fine to me though.
Thanks Alon! I’ve heard no other objections, so I’m planning to land the max-length change soon.
This feature (along with the max-length tweak) is now in both FF Nightly (36) and Aurora (35). ArrayBuffer.transfer landed in FF Nightly (and will stay #ifdef’d to Nightly until it makes progress in TC39) which allows for comparison of heap resize with and without AB.transfer (the demo feature-tests and prints timing/feature-test results to console.log). Anecdotally: on my machine, with AB.transfer, change-heap usually takes 4-5ms (regardless of size), whereas copying takes anywhere between 10 and 100ms.
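For reference, the feature-test plus copy fallback behind such a comparison might look roughly like this (growHeap is a hypothetical helper, not code from the demo; the static ArrayBuffer.transfer was the Nightly-only shape at the time):

```javascript
// Hypothetical helper: grow a heap via ArrayBuffer.transfer when available
// (it moves the underlying pages and detaches oldBuffer), else fall back
// to allocating a new buffer and copying, the slower path measured above.
function growHeap(oldBuffer, newByteLength) {
  if (typeof ArrayBuffer.transfer === "function")
    return ArrayBuffer.transfer(oldBuffer, newByteLength);
  var newBuffer = new ArrayBuffer(newByteLength);
  new Uint8Array(newBuffer).set(new Uint8Array(oldBuffer)); // O(old size) copy
  return newBuffer;
}
```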
Minor update: builtin calls cannot reenter, so they are fine to use in index/rhs expressions. (The initial landing in FF has a bug that put the “can call” check in the wrong place, which ruled out coerced builtin calls.) If this exception causes any trouble, though, we can just be conservative and rule out all calls, builtin or not.
I think it makes sense to allow builtin calls.
Update: while this feature works without issue or slowdown in FF/Edge, the change-heap function interferes with the generic closure analysis that allows V8/JSC to detect and optimize the immutability of the heap-view variables. In V8, this causes a drastic performance penalty (see issue 3907). If that were the end of the story, I think it’d make sense to hold out until the optimizations improve (e.g., V8 is planning to do full asm.js optimizations). However, WebAssembly will have a simpler and more direct form of resizing along with cross-browser support. Due to the above V8 issue, I don’t think many people are using asm.js heap resizing (ALLOW_MEMORY_GROWTH is off by default in Emscripten and recommended against), so to simplify engines, I’d like to remove support for the change-heap function from SpiderMonkey and have suggested to the Chakra folks that they do the same. Of course, Emscripten can still support ALLOW_MEMORY_GROWTH by simply emitting almost-but-not-asm.js.
I will open an issue on Chakra once we go open source this month.
In the early phases of WASM support, as I understand it, there will be an asm.js-based polyfill. Since asm.js will not support resizing heaps but WASM will, does this mean the polyfill will either:
- work with a fixed-size heap, and crash if it goes over (creating a portability problem), or
- omit “use asm”, running considerably slower than a normal asm.js build, possibly removing some incentive to use WASM instead?
Or is there another plan to work around this?
I think the options you outlined are basically all we can do, yeah. The second (omit “use asm”) seems better to me, although it might be an option in the polyfill or different polyfills might do different things. Overall I don’t think this removes an incentive to use wasm (shipping as wasm allows running as wasm natively, which is strictly better, except for the time to polyfill if it isn’t), but maybe I’m not seeing something?
But in general, polyfilling has not been a strong constraint on the design of wasm. This means that wasm is less limited by legacy, but it also means that polyfilling will not be perfect. Aside from resizing memory, there are issues with numeric corner cases (trapping vs not), statements=expressions code sometimes being inefficient to translate to JS, and others. Polyfilling will still work, but it will require some caution I think.
I guess that’s a reasonable view of polyfilling, yeah. Hopefully use of the polyfill will be transient anyway and vendors will support WASM before long!