You can't deduce that; you'd need to show that the object is never passed to for..in or Object.keys or similar to be able to avoid storing the insertion order.
They could change the representation to not be insertion order dependent, and store insertion order in a separate data structure, but that has its own trade-offs.
Does for..in actually require that keys be returned in a specific order? Most languages specifically call out that enumeration order is not guaranteed (not that that stops developers from relying on one).
The ES spec doesn't require it, but every implementation does insertion order (and insertion order was a deliberate decision by Brendan in the original implementation, IIRC), and the web very much relies on it.
The "new" generation of JS VMs (V8, Chakra, Carakan) all dropped insertion order for array-index properties (that is, properties whose name parses as a uint32 array index), but kept it for everything else; that broke about as much as browsers are willing to break, and breaking the general case would be far, far worse.
> Does for..in actually require that keys be returned in a specific order?
It's complicated: the spec says the order of for..in is not defined, browsers all implement the same order, and there keep being attempts to get the spec changed to match reality.
> you'd need to show that the object is never passed to for..in or Object.keys or similar, and unless you solve the halting problem, you can't do that.
This is pretty blatantly incorrect. Just because a problem is undecidable in general doesn't mean that there aren't specific cases where it can be solved. Optimizing compilers deal with undecidable problems all over the place, generally either by being conservative and treating "can't prove it's safe" the same as "provably unsafe", or by optimizing speculatively with an unoptimized fallback. Just proving the type of a variable in JavaScript is undecidable in the general case, yet VMs speculate on types constantly and deoptimize when a guess turns out wrong.
My hypothesis (as someone who used to work on a JS VM) would be that locations of iterations over an object's keys and the creation of the object are almost always in different functions, with the object passed between them. And note that JS VMs rarely do cross-function optimization. As such, it would be exceptionally rare for the optimization to be applied.
My point was really that you can't always drop the ordering of properties, and the insertion order isn't something you can recreate after the fact if you don't store it initially, hence you do really need to store it somehow.
(If anyone's confused, I edited the grandparent comment of this to better reflect the above.)
The original code didn't have a call to .keys(), and that's the case where this should be optimized. If you do the unordered optimization and then encounter an operation that needs an ordered iteration, you'd have to fall back to an ordered representation (which could be slower than not doing the optimization at all).
That requires extra bookkeeping for ordering, but it isn't really that common for people to have object literals with the same properties initialized in a different order.
A much more common case is that properties added after object creation arrive in an inconsistent order, for example:
o = {}; o.x = 1; o.y = 2; o.z = 3;
p = {}; p.y = 4; p.x = 5; p.z = 6;
When executing p.y = 4, you don't yet know that the object will eventually evolve to have the same set of properties as o.
That would surely be the second option? Treating all permutations the same means changing the object representation to keep inline caches (ICs) monomorphic across orderings.
Unless you forbid any impure operations, you can't just deoptimise and re-execute code to then store the ordering. (I'm sure this isn't what you meant, and I'm probably being silly!)