DM-48936: Implement mock-based unit tests for mid-level measurement drivers #1145
enourbakhsh wants to merge 2 commits into main
Conversation
Force-pushed fc29f51 to beddd77
It's a mystery to me why we get two different outcomes for aperture correction in this specific test; it always alternates between just those two values. A quick test showed there's some randomness beyond the seeds we set in _makeTestData, even though the image arrays themselves appear to be identical. If I save a test exposure and reuse it, the problem goes away, i.e., I get the same base_PsfFlux_apCorr every time I run the unit test. Any idea why we don't get a single, reproducible value here?
My best guess would be that there's a seeded random generator being called N times in the first case and N+1 in the second, though I'm not sure what would cause that. This should be a follow-up ticket.
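To illustrate that hypothesis, here's a toy numpy sketch (not the actual test code): one extra draw from a seeded generator shifts every subsequent value.

```python
import numpy as np

# Two generators with the same seed; rng_b gets one extra draw, standing in
# for a code path that consumes one more random number than the other run.
rng_a = np.random.default_rng(seed=565)
rng_b = np.random.default_rng(seed=565)

rng_b.normal()  # the extra (N+1-th) draw

# From here on the two streams disagree, so anything downstream (noise image,
# PSF realization, aperture correction fit) can flip between two outcomes.
print(rng_a.normal(), rng_b.normal())
```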
Force-pushed beddd77 to 16e61ef
Force-pushed 16e61ef to 9a2052a
@taranu left a comment:
Looks good. See comments/suggestions. If you don't figure out what's causing the one alternating result, please file another ticket for it.
Aren't a lot of these settings defaults? Do you want these specific values tested rather than whatever the defaults are (which might change in the future)? I can see the value in having one test that makes minimal changes to the defaults and another that tries to change as many as possible for maximal coverage; a sketch of that split follows.
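A rough sketch of the two-test idea (the config class and field names here are stand-ins, not the actual driver config):

```python
import unittest


class FakeDriverConfig:
    """Stand-in for the real driver config; field names here are made up."""
    def __init__(self, doApCorr=True, psfFwhm=1.5):
        self.doApCorr = doApCorr
        self.psfFwhm = psfFwhm


class NearDefaultsTestCase(unittest.TestCase):
    def test_near_defaults(self):
        # Change as little as possible so the test tracks future defaults.
        config = FakeDriverConfig()
        self.assertTrue(config.doApCorr)


class MaxCoverageTestCase(unittest.TestCase):
    def test_many_overrides(self):
        # Override as much as possible to exercise non-default code paths.
        config = FakeDriverConfig(doApCorr=False, psfFwhm=2.0)
        self.assertFalse(config.doApCorr)
```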
Is this in nJy or the pre-calibrated flux which is ~0.02 nJy? If it's nJy then 3.0 seems quite large for atol.
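Not from the PR, just a toy illustration of why the tolerance has to match the units being compared (all numbers made up):

```python
import numpy as np

# Placeholder values standing in for the driver output and the truth value.
measured_flux_njy, expected_flux_njy = 1502.7, 1500.0     # calibrated, nJy
measured_flux_instr, expected_flux_instr = 0.0201, 0.02   # pre-calibrated

# For fluxes in nJy, a relative tolerance says more than a fixed atol of 3.0.
np.testing.assert_allclose(measured_flux_njy, expected_flux_njy, rtol=2e-3)

# For a pre-calibrated flux of order 0.02, atol=3.0 would pass almost anything;
# the absolute tolerance needs to be scaled to the units actually stored.
np.testing.assert_allclose(measured_flux_instr, expected_flux_instr, atol=1e-3)
```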
Some of these should probably just be separate tests so you can run them in parallel, if desired. I suppose it's useful to run at least a pair serially to ensure there isn't some unintended state saved between runs, but not every time.
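A sketch of what that could look like (the _runDriver helper and the configurations are placeholders):

```python
import unittest


class MeasurementDriverTestCase(unittest.TestCase):
    def _runDriver(self, doApCorr):
        # Stand-in for invoking the real measurement driver with one config.
        return {"doApCorr": doApCorr}

    def test_with_apcorr(self):
        # Separate methods let a parallel runner (e.g. pytest-xdist) spread them out.
        self.assertTrue(self._runDriver(doApCorr=True)["doApCorr"])

    def test_without_apcorr(self):
        self.assertFalse(self._runDriver(doApCorr=False)["doApCorr"])

    def test_back_to_back_runs(self):
        # Keep one test that runs two configurations serially, to catch state
        # unintentionally carried over from one run to the next.
        first = self._runDriver(doApCorr=True)
        second = self._runDriver(doApCorr=False)
        self.assertNotEqual(first, second)
```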
Does it persist them or just discard them after generating? If the former, should they be checked?
It would be nice to just loop over a list of expected output names, but I suppose they're only specified in the docs.
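If those names ever do become available programmatically, the loop might look like this (the output names and the results stand-in are hypothetical):

```python
import unittest
from types import SimpleNamespace


class ExpectedOutputsTestCase(unittest.TestCase):
    # Hypothetical output names; ideally these would come from the task's
    # connections or docs rather than being hard-coded here.
    expected_outputs = ("sourceCat", "matchedCat", "backgrounds")

    def test_outputs_present(self):
        results = SimpleNamespace(sourceCat=[], matchedCat=[], backgrounds=[])
        for name in self.expected_outputs:
            with self.subTest(output=name):
                self.assertTrue(hasattr(results, name))
```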
It might be helpful to define some of these expected values in setUp for consistency, and to make it easier to look up what they are.
If you don't care about the ordering you could make some_expected_columns a set and write missing = some_expected_columns - set(table.colnames), though it doesn't really matter.
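A quick self-contained illustration of that set difference (the column names are placeholders):

```python
some_expected_columns = {"coord_ra", "base_PsfFlux_instFlux", "base_PsfFlux_apCorr"}
colnames = ["coord_ra", "base_PsfFlux_instFlux"]  # stand-in for table.colnames

# Expected columns that the table is missing; order is irrelevant with sets.
missing = some_expected_columns - set(colnames)
assert missing == {"base_PsfFlux_apCorr"}
```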
This is a legacy function and shouldn't be used. Use e.g. `numpy.random.default_rng(seed=565)` instead.
I wonder if this is the cause of your semi-random result above? It shouldn't be, but getting two different results suggests a test order dependency.
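A minimal before/after sketch, assuming the test currently goes through the legacy global-state API:

```python
import numpy as np

# Legacy API: seeds hidden global state, so any other code (or another test)
# drawing from numpy.random in between changes what comes out next.
np.random.seed(565)
legacy_noise = np.random.normal(size=(10, 10))

# Preferred API: an explicit Generator owned by the test, unaffected by draws
# made elsewhere and easy to pass into helpers like _makeTestData.
rng = np.random.default_rng(seed=565)
noise = rng.normal(size=(10, 10))
```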