The STGM (Spineless Tagless G-machine) is a graph reduction virtual machine used to implement lazy functional languages (Haskell, at least). I believe the TIM (Three Instruction Machine) is very similar, but I'm not sure where the difference lies [it may be that it's tag- and spine-ful]. Instead of passing around an explicit environment, these machines are based on combinators: functions without any free variables (much like the S/K/I combinators, but with a larger, richer set of combinators, or even supercombinators, to improve efficiency). By using combinators, they avoid the need to model the lexical environment. Instead, free variables are lambda-lifted and received as arguments. On further reduction, (references to) the arguments are embedded directly, copied into the subgraph representing the function's body. The environment is implicit, modelled by the reduction itself: new subgraphs are instantiated from a template with the references to the arguments filled in.
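To make the lambda lifting point concrete, here's a tiny Haskell example I made up (not any particular compiler's output, just the idea of turning a free variable into an extra argument of a supercombinator):

```haskell
-- Before lifting: the inner lambda has a free variable, n.
addAll :: Int -> [Int] -> [Int]
addAll n xs = map (\x -> x + n) xs

-- After lifting: n becomes an extra argument, so addN has no free
-- variables at all (a supercombinator) and can be compiled to a fixed
-- graph template; the reduction supplies n at each application.
addN :: Int -> Int -> Int
addN n x = x + n

addAll' :: Int -> [Int] -> [Int]
addAll' n xs = map (addN n) xs
```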
Meijer's paper presents similar machines, but with an explicit environment object. In the first section the environment is a stack, as in the SECD machine; he then describes a similar system with named registers instead of a stack. Finally, he argues that the intermediate code under these environment-ful schemes is clearer without being any less efficient.
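For what it's worth, here's my own toy illustration of that contrast (definitely not Meijer's actual instruction sets): variables addressed by position on an environment stack versus by register name, for something like \x y -> x + y.

```haskell
-- Toy instruction sets of my own invention, just to show the flavour.

-- SECD-style: the environment lives on a stack, so a variable is
-- "slot number n"; compact, but harder to read back.
data StackCode
  = PushArg Int     -- push the n-th argument from the environment
  | Add             -- pop two values, push their sum
  deriving Show

-- Register-style: the environment is a set of named cells, so the
-- intermediate code keeps the source-level names and reads more clearly.
data RegCode
  = AddReg String String String   -- dest := src1 + src2
  deriving Show

-- (\x y -> x + y), compiled by hand both ways:
stackVersion :: [StackCode]
stackVersion = [PushArg 0, PushArg 1, Add]

regVersion :: [RegCode]
regVersion = [AddReg "result" "x" "y"]
```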
As for Chicken, I'm not sure there is anything relating it to the current paper. However, given C's limit on the number of function arguments, I doubt it does full lambda lifting. Also, (IIUC) closures are pretty much needed for the CPS transform in Cheney on the MTA to make any sense.
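A rough illustration of why (in Haskell rather than Scheme/C, so only an analogy to what Chicken actually does): after a CPS transform, every intermediate result is handed to a continuation, and those continuations are closures over the values computed so far.

```haskell
-- Direct style.
sumSquares :: Int -> Int -> Int
sumSquares a b = a * a + b * b

-- CPS style: each step passes its result to a continuation instead of
-- returning. The inner lambdas close over a2 (and k), so without
-- closures there is nowhere to keep those intermediate values between
-- calls; as I understand it, that is what Cheney on the MTA relies on.
square :: Int -> (Int -> r) -> r
square x k = k (x * x)

sumSquaresCPS :: Int -> Int -> (Int -> r) -> r
sumSquaresCPS a b k =
  square a (\a2 ->
    square b (\b2 ->
      k (a2 + b2)))
```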
I'm really not well-read on the current topic, but it's a first stab. Please correct :)