Bingo! I'd say an `n` key with an integer value greater than or equal to 0, but let's not nitpick. I also think this would have been a viable alternative to the sequence concept we now have, because it has some useful properties (sketched in code after this list):
* It can hold `nil` values.
* There's no undefined or inconsistent relationship between the length and the contents of the table.
* "Calculating" the array length is O(1).
* Resizing is O(1) (although you probably would want to clear integer keys that fell out of the new array range).
* *Checking* whether a table is an array is O(1) (which is much better than what we have now).
* The table library could be adapted to this protocol without problems.
* You can still have mixed tables (with hash and array elements).
* It fits nicely with the world view that tables are mappings from arbitrary keys to the default value `nil` -- with a few explicit exceptions.
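Here is a minimal sketch of that `n`-key protocol in today's Lua (assuming 5.3+ for `math.type`); `isarray` is just an illustrative helper, not an existing function:

```lua
-- A plain table following the n-key protocol: the `n` field is the
-- authoritative length, everything else is ordinary table contents.
local a = { n = 3 }
a[1], a[3] = "x", "z"        -- a[2] stays nil; the length is still 3

print(a.n)                   --> 3: "calculating" the length is O(1)
a.n = 5                      -- resizing is O(1) too
-- (when shrinking, you would probably clear integer keys that fell
-- out of the new range)

-- checking whether a table is an array is O(1):
local function isarray(t)
  return math.type(t.n) == "integer" and t.n >= 0
end
print(isarray(a))            --> true

a.colour = "red"             -- mixed tables still work: hash elements
                             -- live alongside the array part
```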
But that was my whole point (as I noted in my earlier post): an *explicit* array size, meaning it's set explicitly by the user. The default for a table is 0. If you set the size to (say) 5, then you have 5 elements, (1..5). It doesn't matter if some are nil, and it doesn't matter if you also have keys 6, 7 and 8 .. the size is still 5. This is, as I said, similar to `.n`, but it's *first class*: `#t` returns 5 and obeys `__len` metamethods (so there is no ugly `.n`, just the built-in `#` operator and a yet-to-be-determined way to set the size). And `ipairs()` uses `#t`, so there is no surprise or ambiguity. And `#t` runs in O(1) time instead of O(log n) time. And `table.pack()` / `table.unpack()` are mirror pairs.
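To make the mechanics concrete, here is a rough emulation of those semantics in present-day Lua (5.2+ honours `__len` on tables). The names `array`, `resize` and `iparray` are all invented for the sketch, and note that stock `ipairs` stops at the first nil rather than following `#t` as the proposal would, hence the substitute iterator:

```lua
-- A proxy table whose length is an explicit, user-set size.
local function array(n)
  local store, size = {}, n
  local proxy = setmetatable({}, {
    __len      = function() return size end,           -- #t is O(1)
    __index    = store,                                -- reads fall through
    __newindex = function(_, k, v) store[k] = v end,   -- writes too
  })
  return proxy, function(newn) size = newn end         -- the resizer
end

-- ipairs with the proposed behaviour: it follows #t, nils and all.
local function iparray(t)
  local i, n = 0, #t
  return function()
    i = i + 1
    if i <= n then return i, t[i] end
  end
end

local t, resize = array(5)
t[2], t[8] = "x", "y"                      -- key 8 exists; size is still 5
print(#t)                                  --> 5
for i, v in iparray(t) do print(i, v) end  --> pairs 1..5, nils included
```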
Modifying the size of an array is explicit; you have to set the new size. Of course, some library functions might be modified to do this (or gain new versions) .. insert and delete are obvious candidates, as I previously noted. This is where you handle grow/shrink (insert grows, delete shrinks), whereas `t[#t+1] = v` doesn't change the size. A sketch of such functions follows.
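Continuing the emulation above (names and signatures invented; in the real scheme the resize mechanism would be built in rather than passed around):

```lua
-- Hypothetical grow/shrink variants of table.insert/table.remove,
-- built on the array/resize sketch above.
local function ainsert(t, resize, pos, v)
  local n = #t
  for i = n, pos, -1 do t[i + 1] = t[i] end   -- shift the tail up
  t[pos] = v
  resize(n + 1)                               -- insert grows the array
end

local function adelete(t, resize, pos)
  local n, v = #t, t[pos]
  for i = pos, n - 1 do t[i] = t[i + 1] end   -- shift the tail down
  t[n] = nil
  resize(n - 1)                               -- delete shrinks the array
  return v
end
```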
I'm undecided whether a table initializer should set the size: should `{1,2}` have a size of 0 or 2? What about more complex initializers? And of course there are issues with non-integral keys and so on.
One “surprise” that I would expect a newcomer to fall into is thinking that “shrinking” the array (that is, setting the size to a smaller value) will discard elements from the table, whereas (to my mind) it simply leaves them alone, just beyond the end of the array.
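Using the same made-up `array`/`resize` helpers from the earlier sketch, the behaviour I have in mind is:

```lua
local t, resize = array(5)
for i = 1, 5 do t[i] = i * 10 end
resize(3)          -- "shrink" the array...
print(#t)          --> 3
print(t[4])        --> 40: not discarded, just beyond the array's end
```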
As I said, the scheme is far from perfect, but it's more consistent than the current incoherent approach of `.n`, `#` and sequences, imho.
—Tim