For the purposes of "safe" pretty-printing of lambda expressions I'd like to verify that the lambda source actually compiles to the same bytecode object as the live lambda object. (There are challenges with setting up identical compilation contexts, but those are out of scope for this question.)
I noticed that since Python 3.11, there are cases where I can't make dynamic compilation (`eval`, `exec`, etc.) produce the same bytecode as code compiled at import time.
For example, for the simple case of calling a module attribute:
```python
import dis, time

dis.dis(f_at_import := lambda: time.ctime())
dis.dis(f_evaled := eval("lambda: time.ctime()"))
```
These are the results since Python 3.10. Differences between major versions are expected and fine, but the difference between import-time compilation and dynamic compilation is surprising. I understand that the compiler does not guarantee stability with respect to optimizations, NOP insertions, and so on, but this difference seems to be consistent and somewhat arbitrary.
Is there a way to dynamically compile code using the same code path as the static compilation?
| CPython version | Compiled at import | Eval'ed code |
|---|---|---|
| 3.10 | | (same as imported) |
| 3.11 | | |
| 3.12 | | |
| 3.13, 3.14rc1 | | |
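To make the comparison concrete without eyeballing the disassembly, the code-object fields can be compared directly (a sketch, not from the original post; whether the fields match depends on the Python version and on the surrounding source):

```python
import time

f_at_import = lambda: time.ctime()
f_evaled = eval("lambda: time.ctime()")

# Compare the fields most relevant to a "same bytecode" check.
for field in ("co_code", "co_names", "co_consts"):
    same = getattr(f_at_import.__code__, field) == getattr(f_evaled.__code__, field)
    print(field, "match" if same else "differ")
```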
I have tried replacing `eval` with `exec`, and also various optimization levels via `eval(compile(...))`, but there is no difference, on 3.12 at least. Writing the source to a file and importing that file recovers the original bytecode, but setting up the appropriate compilation context in a file would be very difficult in more complex cases.
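For reference, the `eval(compile(...))` variant looks like this (the loop over optimization levels is just for illustration); since this lambda has no asserts, docstrings, or `__debug__` references, the optimization level makes no difference to its bytecode:

```python
import time

src = "lambda: time.ctime()"
codes = []
for optimize in (-1, 0, 1, 2):
    # compile() accepts an explicit optimize level; -1 inherits the interpreter's.
    code_obj = compile(src, "<string>", "eval", optimize=optimize)
    f = eval(code_obj)
    codes.append(f.__code__.co_code)
    print(optimize, f.__code__.co_code.hex())
```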
1 Answer
Actually, I think I just found the answer:
The compiler treats an attribute access differently if the target is known to be a module at compile time. That is, the difference is whether the compiled source unit contains an import statement for the global.
This answers my question: I can reproduce the bytecode by explicitly importing the relevant modules, i.e.
```python
f_glob = {}
exec("import time\nf = lambda: time.ctime()", f_glob)
f = f_glob["f"]
```
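As a sanity check, the exec'ed version should now match the statically compiled one byte for byte (a sketch; it assumes the enclosing script itself contains the `import time` statement, so its lambda is compiled with the import visible, just like a module compiled at import time):

```python
import time

f_static = lambda: time.ctime()  # compiled together with the import above

f_glob = {}
exec("import time\nf = lambda: time.ctime()", f_glob)
f_dynamic = f_glob["f"]

# co_code excludes filename/line-number info, so this compares pure bytecode.
print(f_static.__code__.co_code == f_dynamic.__code__.co_code)
```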