RF: Optimize ICC_rep_anova with a memoized helper function #3454


Merged · effigies merged 2 commits into nipy:master from effigies:opt/icc_global_cache on Apr 26, 2022

Conversation

@effigies (Member) commented on Apr 22, 2022:

@bbfrederick Post #3453, I got motivated to look at your PR again. I grokked the code a little better and realized we could do this more Pythonically.

WDYT? I set maxsize=1 to match your effective cache size. I don't really know how big these matrices can get, so I'm not sure whether that was a memory optimization or just an assumption that the number of switches back and forth between shapes is likely to be small.

@kristoferm94 Would be interested in your thoughts. This does break up the improved matrix ordering you provided, but I suspect the caching benefit is going to dominate here.
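
For reference, the pattern looks roughly like this. This is a sketch, not the verbatim diff: the helper name `ICC_projection_matrix` is real (it appears in cell 5 below), but the design-matrix construction here is reconstructed from the existing repeated-measures ANOVA code and may differ in detail.

```python
from functools import lru_cache

import numpy as np


@lru_cache(maxsize=1)
def ICC_projection_matrix(shape):
    # Keyed on Y.shape (a hashable tuple) rather than the array itself,
    # so repeated same-shaped inputs reuse one cached result.
    nb_subjects, nb_conditions = shape
    x = np.kron(np.eye(nb_conditions), np.ones((nb_subjects, 1)))  # session effects
    x0 = np.tile(np.eye(nb_subjects), (nb_conditions, 1))          # subject effects
    X = np.hstack((x, x0))
    # The expensive step: one pseudoinverse per distinct shape, not per call.
    return X @ np.linalg.pinv(X.T @ X) @ X.T
```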

As some basic tests:

```
In [1]: import numpy as np
In [2]: rng = np.random.default_rng()
In [3]: from nipype.algorithms.icc import *
In [4]: %time ICC_rep_anova(rng.normal(size=(50, 10)))
CPU times: user 26.4 ms, sys: 67 ms, total: 93.4 ms
Wall time: 15.4 ms
Out[4]: 
(-0.0010955283641284907,
 -0.0012372157263673023,
 1.130569661036011,
 0.01640374953027528,
 9,
 441)
In [5]: ICC_projection_matrix.cache_clear()
In [6]: %time ICC_rep_anova(rng.normal(size=(50, 10)))
CPU times: user 431 μs, sys: 18.8 ms, total: 19.3 ms
Wall time: 3.14 ms
Out[6]: 
(0.052737606398966394,
 0.049204080066708754,
 0.8837938966422438,
 0.028379415383302194,
 9,
 441)
In [7]: %timeit ICC_rep_anova(rng.normal(size=(50, 10)))
160 μs ± 7.95 μs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```

Cells 4-6 are just to distinguish compilation effects. Cell 7 shows the effect of caching.
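
Continuing the session above, one way to confirm the warm path is really a cache hit is the standard `functools.lru_cache` introspection API. This is a hypothetical check, not part of the PR:

```python
ICC_projection_matrix.cache_clear()
ICC_rep_anova(rng.normal(size=(50, 10)))   # cold call: recomputes the projection matrix
ICC_rep_anova(rng.normal(size=(50, 10)))   # warm call: same shape, cache hit
print(ICC_projection_matrix.cache_info())  # expect hits=1, misses=1, maxsize=1, currsize=1
```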

codecov bot commented on Apr 22, 2022 (edited):

Codecov Report

Merging #3454 (ad2249a) into master (3dc858e) will increase coverage by 0.00%.
The diff coverage is 94.11%.

```
@@            Coverage Diff            @@
##           master    #3454    +/-   ##
========================================
  Coverage    65.24%   65.24%
========================================
  Files          309      309
  Lines        40833    40838     +5
  Branches      5376     5377     +1
========================================
+ Hits         26641    26645     +4
  Misses       13118    13118
- Partials      1074     1075     +1
```

Flag        Coverage Δ
unittests   65.02% <94.11%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files              Coverage Δ
nipype/algorithms/icc.py    58.97% <94.11%> (+1.44%) ⬆️


Contributor commented:
Sorry it took so long to get back to you - I kept finding myself on mobile with no screen big enough to look at code.

Yes, that looks like a good switch - I wasn't aware of lru_cache (I need to get out more)! maxsize=1 is fine - my use case is calculating it on ~260k voxels in a set of 3D images from 1000 subjects with 4 conditions - exactly the same matrix size each time, so the repeated pseudoinverse was killing me. I don't imagine you'd need to switch the matrix up very frequently, although I'm sure you could come up with some pathological application that would.
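
As a hypothetical illustration of that use case (toy sizes and a made-up voxel loop, not code from the PR): every call shares one (subjects × conditions) shape, so a maxsize=1 cache hits on every call after the first.

```python
import numpy as np
from nipype.algorithms.icc import ICC_rep_anova

# Toy sizes; the real case above is ~260k voxels, 1000 subjects, 4 conditions.
n_voxels, n_subjects, n_conditions = 100, 50, 4
rng = np.random.default_rng()
data = rng.normal(size=(n_voxels, n_subjects, n_conditions))

# Every data[v] has the same (n_subjects, n_conditions) shape, so the
# projection matrix is computed once and reused for all remaining voxels.
icc_map = np.array([ICC_rep_anova(data[v])[0] for v in range(n_voxels)])
```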

@effigies (Member, Author) commented:
Sounds good. I agree that there's probably not a huge advantage in storing more than one precomputed pseudoinverse. I'll go ahead and merge. If someone can come up with a relevant use case for increasing the cache, we can revisit.

effigies merged commit 570d464 into nipy:master on Apr 26, 2022
effigies deleted the opt/icc_global_cache branch on Apr 26, 2022 at 21:10
effigies added this to the 1.8.0 milestone on Apr 28, 2022