The cool part is that they didn't have type systems for physical simulations; they had type systems for physical equations (which are applied in expressing physical laws[0]).
A brief detour: Iverson's Turing Award lecture starts slowly[1], in order to introduce his notation. On p. 447, we find De Morgan's law expressed as:
∧/B ↔ ~∨/~B
which is equivalent to
~∧/B ↔ ∨/~B
which, in a more common pseudocode notation, becomes
not(all(bs)) == any(map(not)(bs))
Note the correspondence (on a purely formal, uninterpreted level) with
len(cat(ss)) == sum(map(len)(ss))
These equivalences all have something in common: list parametricity[2] (for instance, in the latter, we might use the typing len : ∀τ. List(τ) → Nat).
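To make this concrete, here's a minimal Haskell sketch of both laws (the names deMorgan/lenCat and the test inputs are my own; everything used is in the Prelude):

    -- Both equations share the same reduce-vs-map-then-reduce
    -- shape over lists; that shared list structure is the point.
    deMorgan :: [Bool] -> Bool
    deMorgan bs = not (and bs) == or (map not bs)

    lenCat :: [[a]] -> Bool
    lenCat ss = length (concat ss) == sum (map length ss)

    main :: IO ()
    main = print (all deMorgan inputsB && all lenCat inputsS)
      where
        inputsB = [[], [True], [False], [True, False, True]]
        inputsS = [[], [[1,2],[3]], [[],[4,5,6],[]]] :: [[[Int]]]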
What Atkey is saying is that one can use the same kind of typing to formally express what physicists do informally when their work remains invariant under arbitrary smooth transformations, or what mathematicians do informally when they reason "without loss of generality". For instance, it's fundamental to 'area' that it remain the same under translations and rotations, and so in their system a function which computed areas would also have a ∀τ showing up in its type (where τ in this case would range over geometric congruences, à la Haskell's type class constraints?). So we'd get:
area(union(ps)) == sum(map(area)(ps))
where area may be taken under an arbitrary (fuzzed) congruence, to express that[3] we can move an object around and measure it, or we can break it into pieces, move those around (independently!), then sum the measurements; either way[4] we arrive at the same answer.
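(A toy check of that claim, in Haskell: the shoelace area of a polygon is unchanged by any isometry, so the summed measurement is insensitive to moving each piece independently. Poly, shoelace, and isometry are my names, not Atkey's:)

    type Pt   = (Double, Double)
    type Poly = [Pt]

    -- Shoelace formula: area of a simple polygon from its vertices.
    shoelace :: Poly -> Double
    shoelace ps = abs (sum (zipWith cross ps (tail ps ++ [head ps]))) / 2
      where cross (x1,y1) (x2,y2) = x1*y2 - x2*y1

    -- A congruence: rotate by th, then translate by (dx,dy).
    isometry :: Double -> (Double, Double) -> Pt -> Pt
    isometry th (dx,dy) (x,y) =
      (x*cos th - y*sin th + dx, x*sin th + y*cos th + dy)

    main :: IO ()
    main = print (abs (before - after) < 1e-9)   -- True
      where
        pieces = [ [(0,0),(1,0),(1,1),(0,1)]     -- unit square
                 , [(0,0),(2,0),(0,1)] ]         -- triangle
        moves  = [ isometry 0.7 (3,-1), isometry 1.9 (-2,5) ]
        before = sum (map shoelace pieces)
        after  = sum (zipWith (\g p -> shoelace (map g p)) moves pieces)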
Now, I haven't even skimmed the paper yet, but my guess is that their examples are just rederivations of physical results: adding the types hasn't told us anything we didn't already know.
Where the formalization becomes interesting is the degree to which it can reduce an activity which previously required thought[5] to one of pushing symbols around. Noether demonstrated that symmetries and conservation laws[6] are related, and the promise of this work is that it should allow one to read symmetries (and hence the invariants) directly off the types.
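(A standard example of reading an invariant straight off a type, not taken from the paper: the free theorem for ∀a. [a] → [a] says any such f commutes with every map g, regardless of how f is defined:)

    -- f's behavior can depend only on list structure, never on
    -- the elements, so it commutes with any element-wise map.
    f :: [a] -> [a]      -- any inhabitant of this type will do
    f = reverse . take 3

    main :: IO ()
    main = print (f (map g xs) == map g (f xs))  -- True
      where
        g  = (* 10) :: Int -> Int
        xs = [1..7]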
[0] Einstein's revolution, under parallel transport to informatics concepts, becomes something like: "Obviously the UNCOL thing only works within small islands of adoption, so in general we should have some way to couple physical measurements like garden hose: screw in another segment when it becomes necessary to massage data in another way".
[1] by §4.3 (p. 458) his definitions D.3 and D.4 capture the essence of Google's MapReduce; this correspondence should not be unexpected, as people have had accounting problems with a similar structure since well before the punched-card era.
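(The shape in question, sketched in Haskell; mapReduce is my name for it:)

    -- Transform each record, then combine with an associative
    -- operation, so the work can be split and regrouped at will.
    mapReduce :: (a -> b) -> (b -> b -> b) -> b -> [a] -> b
    mapReduce f op z = foldr op z . map f

    main :: IO ()
    main = print (mapReduce length (+) 0 ["double","entry","ledger"])  -- 17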
[2] compare (Osborne) Reynolds's operators (aiding symbolic manipulation at the top of a symbol DAG, not down at the leaves of a particular representation) with (John) Reynolds's parametric types (aiding symbolic manipulation at the top of a symbol DAG, not down at the leaves of a particular representation)
[3] at least for regular objects, pace Banach-Tarski
[4] applications to the military-industrial complex have also been long-standing: Galileo showed that in determining how much higher one should point a cannon barrel, it doesn't really matter in what direction it is facing.
[5] how much thought? do physicists also find this work interesting? or would they view it as abstract nonsense? cf. Feynman's french curve story...
[6] compare the adjoints used for the parlor trick in dual photography (which shows that we can arbitrarily shuffle the transfer function of a scene in between illumination and measurement) with deep and shallow embeddings (which show that we can arbitrarily shuffle the interpretation of a data structure in between writing and reading)
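(A minimal Haskell illustration of that last pair; the tiny expression language is my own:)

    -- Deep embedding: writing builds a data structure; all
    -- interpretation is deferred to reading time.
    data Expr = Lit Int | Add Expr Expr

    evalDeep :: Expr -> Int
    evalDeep (Lit n)   = n
    evalDeep (Add a b) = evalDeep a + evalDeep b

    -- Shallow embedding: interpretation happens at writing
    -- time; reading is just the identity.
    litS :: Int -> Int
    litS = id

    addS :: Int -> Int -> Int
    addS = (+)

    main :: IO ()
    main = print ( evalDeep (Add (Lit 1) (Add (Lit 2) (Lit 3)))
                 , addS (litS 1) (addS (litS 2) (litS 3)) )   -- (6,6)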
[Edit: s/et. al. are/is/]