matplotlib-devel — matplotlib developers

From: Benjamin R. <ben...@ou...> - 2012年02月17日 17:27:47
On Fri, Feb 17, 2012 at 10:54 AM, Phil Elson <phi...@ho...>wrote:
> I think this feature was originally intended to work (since
> TransformedPath exists)
> but it wasn't working [in the way that I was expecting it to].
> I made a change which now only invalidates non-affine transformations
> if it is really
> necessary. This change required a modification to the way
> invalidation was passed through the transform stack, since certain
> transform
> subclasses need to override the mechanism. I will try to explain the
> reason why
> this is the case:
>
>
> Suppose a TransformNode is told by its child node that it can no longer
> store the affine-transformed path. It must then pass this message up to
> its parent nodes until, eventually, a TransformedPath instance is
> invalidated (triggering a re-computation).
> With Transforms this recursion can simply pass the same invalidation
> message up,
> but for the more complex case of a CompositeTransform, which
> represents the combination
> of two Transforms, things get harder. I will devise a notation to help me
> explain:
>
> Let a composite transform, A, represent an affine transformation (a1)
> followed by a non-affine transformation (vc2) [vc stands for "very
> complicated"]; we can write this in the form (a1, vc2). Since non-affine
> Transform instances are composed of a non-affine transformation followed
> by an affine one, we can write (vc2) as (c2, a2), and the composite can
> now be written as (a1, c2, a2).
>
> As a bit of background knowledge, computing the non-affine transformation
> of A
> involves computing (a1, c2) and leaves the term (a2) as the affine
> component. Additionally, a CompositeTransform which looks like (c1, a1,
> a2) can
> be optimised such that its affine part is (a1, a2).
>
> There are four permutations of CompositeTransforms:
>
> A = (a1, c2, a2)
> B = (c1, a1, a2)
> C = (c1, a1, c2, a2)
> D = (a1, a2)
>
> When a child of a CompositeTransform tells us that its affine part is
> invalid,
> we need to know which child it is that has told us.
>
> This statement is best demonstrated in transform A:
>
> If the invalid part is a1 then it follows that the non-affine part
> (a1, c2) is also
> invalid, hence A must inform its parent that its entire transform is
> invalid.
>
> Conversely, if the invalid part is a2 then the non-affine part (a1,
> c2) is unchanged and
> therefore can pass on the message that only its affine part is invalid.
>
>
> The changes can be found in
> https://github.com/PhilipElson/matplotlib/compare/path_transform_cache
> and I would really appreciate your feedback.
>
> I can make a pull request of this if that makes in-line discussion easier.
>
> Many Thanks,
>
>
Chances are, you have just now become the resident expert on Transforms.
A few very important questions: Does this change any existing API? If so,
the changes will have to be handled very carefully. Do all current tests
pass? Can you think of any additional tests to add (both for your changes
and for the current behavior)? How does this impact the performance of
existing code?
Maybe some demo code would help us evaluate your use case?
Ben Root
From: Benjamin R. <ben...@ou...> - 2012年02月17日 17:17:56
On Fri, Feb 17, 2012 at 11:06 AM, Ryan May <rm...@gm...> wrote:
> On Fri, Feb 17, 2012 at 10:14 AM, Benjamin Root <ben...@ou...> wrote:
> > Hello all,
> >
> > I tracked down an annoying problem in one of my applications with the Lasso
> > widget I was using. The widget constructor lets you specify a function
> to
> > call when the lasso operation is complete. So, when I create a Lasso, I
> set
> > the canvas's widget lock to the new lasso, and the release function will
> > unlock it when it is done. What would occasionally happen is that the
> > canvas wouldn't get unlocked and I wouldn't be able to use any other
> widget
> > tools.
> >
> > It turns out that the release function is not called if the number of
> > vertices collected is not more than 2. So, accidental clicks that
> activate
> > the lasso never get cleaned up. Because of this design, it would be
> > impossible to guarantee a proper cleanup. One could add another
> > button_release callback to clean up if the canvas is still locked, but
> there
> > is no guarantee that that callback is not called before the lasso's
> > callback, thereby creating a race condition.
> >
> > The only solution I see is to guarantee that the release callback will be
> > called regardless of the length of the vertices array. Does anybody see
> a
> > problem with that?
>
> Not having looked at the Lasso code, wouldn't it be possible to use
> one internal callback for the button_release event, and have this
> callback call the users' callbacks if points > 2 and always handle the
> unlocking of the canvas?
>
> Ryan
>
>
The problem is that the constructor does not establish the lock. It is the
user's responsibility to acquire and release the lock for these widgets.
Plus, if the user's callback has cleanup code (as mine did), not
guaranteeing that the callback runs can leave behind a mess.
Now, if we were to change the paradigm so that the Widget class acquires
and releases the lock, and the user never handles it, that might be a
partial solution, but it still leaves the user's cleanup needs unsolved.
Ben Root
From: Ryan M. <rm...@gm...> - 2012年02月17日 17:07:14
On Fri, Feb 17, 2012 at 10:14 AM, Benjamin Root <ben...@ou...> wrote:
> Hello all,
>
> I tracked down an annoying problem in one of my applications with the Lasso
> widget I was using. The widget constructor lets you specify a function to
> call when the lasso operation is complete. So, when I create a Lasso, I set
> the canvas's widget lock to the new lasso, and the release function will
> unlock it when it is done. What would occasionally happen is that the
> canvas wouldn't get unlocked and I wouldn't be able to use any other widget
> tools.
>
> It turns out that the release function is not called if the number of
> vertices collected is not more than 2. So, accidental clicks that activate
> the lasso never get cleaned up. Because of this design, it would be
> impossible to guarantee a proper cleanup. One could add another
> button_release callback to clean up if the canvas is still locked, but there
> is no guarantee that that callback is not called before the lasso's
> callback, thereby creating a race condition.
>
> The only solution I see is to guarantee that the release callback will be
> called regardless of the length of the vertices array. Does anybody see a
> problem with that?
Not having looked at the Lasso code, wouldn't it be possible to use
one internal callback for the button_release event, and have this
callback call the user's callback if points > 2 and always handle the
unlocking of the canvas?
Ryan
-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
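A minimal sketch of the pattern being discussed: combining Ryan's single
internal button_release callback with the idea that the widget itself owns
the lock. This is not the actual Lasso code; the class and its attribute
names are illustrative only, and the cleanup is deliberately placed in a
finally block so it runs even when the user's callback is skipped or raises.

from matplotlib.widgets import Widget


class SafeLasso(Widget):
    """Hypothetical Lasso variant: the widget owns the lock and the cleanup."""

    def __init__(self, ax, xy, callback):
        self.axes = ax
        self.canvas = ax.figure.canvas
        self.verts = [xy]
        self.callback = callback
        self.cids = [
            self.canvas.mpl_connect('motion_notify_event', self.onmove),
            self.canvas.mpl_connect('button_release_event', self.onrelease),
        ]
        self.canvas.widgetlock(self)              # widget acquires its own lock

    def onmove(self, event):
        if event.inaxes is self.axes:
            self.verts.append((event.xdata, event.ydata))

    def onrelease(self, event):
        try:
            if len(self.verts) > 2:               # user callback only when useful
                self.callback(self.verts)
        finally:
            for cid in self.cids:                 # ...but cleanup always happens
                self.canvas.mpl_disconnect(cid)
            self.canvas.widgetlock.release(self)  # unlock regardless of vertex count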
From: Phil E. <phi...@ho...> - 2012年02月17日
I think this feature was originally intended to work (since TransformedPath
exists), but it wasn't working in the way that I was expecting it to. I made
a change which now only invalidates non-affine transformations when it is
really necessary. This change required a modification to the way invalidation
is passed through the transform stack, since certain transform subclasses
need to override the mechanism. I will try to explain why this is the case:
Suppose a TransformNode is told by its child node that it can no longer store
the affine-transformed path. It must then pass this message up to its parent
nodes until, eventually, a TransformedPath instance is invalidated (triggering
a re-computation).
With plain Transforms this recursion can simply pass the same invalidation
message up, but things get harder for the more complex case of a
CompositeTransform, which represents the combination of two Transforms. I
will devise a notation to help explain:
Let a composite transform, A, represent an affine transformation (a1)
followed by a non-affine transformation (vc2) [vc stands for "very
complicated"]; we can write this in the form (a1, vc2). Since non-affine
Transform instances are composed of a non-affine transformation followed by
an affine one, we can write (vc2) as (c2, a2), and the composite can now be
written as (a1, c2, a2).
As a bit of background knowledge, computing the non-affine transformation of
A involves computing (a1, c2) and leaves the term (a2) as the affine
component. Additionally, a CompositeTransform which looks like (c1, a1, a2)
can be optimised such that its affine part is (a1, a2).
There are four permutations of CompositeTransforms:
A = (a1, c2, a2)
B = (c1, a1, a2)
C = (c1, a1, c2, a2)
D = (a1, a2)
When a child of a CompositeTransform tells us that its affine part is
invalid, we need to know which child it is that has told us.
This statement is best demonstrated in transform A:
If the invalid part is a1, then it follows that the non-affine part (a1, c2)
is also invalid; hence A must inform its parent that its entire transform is
invalid.
Conversely, if the invalid part is a2, then the non-affine part (a1, c2) is
unchanged, and A can therefore pass on the message that only its affine part
is invalid.
The changes can be found in
https://github.com/PhilipElson/matplotlib/compare/path_transform_cache
and I would really appreciate your feedback.
I can make a pull request of this if that makes in-line discussion easier.
Many Thanks,
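To make the (a1, vc2) notation above concrete, here is a small,
self-contained sketch. The SquashTransform below is a toy non-affine
transform invented for illustration (it is not part of the branch); the
point is that a composite's full path transform equals its non-affine part
followed by its remaining affine part, which is exactly the split the
invalidation and caching machinery relies on.

import numpy as np
import matplotlib.transforms as mtransforms
from matplotlib.path import Path


class SquashTransform(mtransforms.Transform):
    """Toy non-affine transform standing in for 'vc2' (illustrative only)."""
    input_dims = output_dims = 2
    is_separable = False
    has_inverse = False

    def transform_non_affine(self, points):
        pts = np.asarray(points, dtype=float)
        # deliberately non-affine: (x, y) -> (x, y + x**2)
        return np.column_stack([pts[:, 0], pts[:, 1] + pts[:, 0] ** 2])


a1 = mtransforms.Affine2D().scale(2.0)   # the affine 'a1'
vc2 = SquashTransform()                  # the non-affine 'vc2'
A = a1 + vc2                             # composite A = (a1, vc2)

path = Path([(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)])

# Split the work: expensive non-affine part (a1, c2), then the affine residue (a2).
non_affine_part = A.transform_path_non_affine(path)
full = A.get_affine().transform_path(non_affine_part)

print(np.allclose(full.vertices, A.transform_path(path).vertices))  # True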
From: Benjamin R. <ben...@ou...> - 2012年02月17日 16:14:46
Hello all,
I tracked down an annoying problem in one of my applications with the Lasso
widget I was using. The widget constructor lets you specify a function to
call when the lasso operation is complete. So, when I create a Lasso, I
set the canvas's widget lock to the new lasso, and the release function
will unlock it when it is done. What would occasionally happen is that
the canvas wouldn't get unlocked and I wouldn't be able to use any other
widget tools.
It turns out that the release function is not called if the number of
vertices collected is not more than 2. So, accidental clicks that activate
the lasso never get cleaned up. Because of this design, it would be
impossible to guarantee a proper cleanup. One could add another
button_release callback to clean up if the canvas is still locked, but
there is no guarantee that that callback is not called before the lasso's
callback, thereby creating a race condition.
The only solution I see is to guarantee that the release callback will be
called regardless of the length of the vertices array. Does anybody see a
problem with that?
Cheers!
Ben Root
From: Pavel M. <pma...@bl...> - 2012年02月16日 22:10:26
Is anyone interested in consulting work porting Matplotlib to IronPython?
Is this possible? How difficult is it?
I'd appreciate any help or suggestions.
Thanks
From: Benjamin R. <ben...@ou...> - 2012年02月14日 04:47:35
On Monday, February 13, 2012, Tony Yu <ts...@gm...> wrote:
> The title is a bit misleading: The problem is that the last font-related
rc-setting seems to override all previous settings. To clarify, if I save a
figure with certain font settings and *after that* change the rc-setting,
the older figure appears to have the newer setting. Note that this only
appears to happen with fonts---the linewidth setting, for example, shows up
> as expected. (See the script below.)
>
> -Tony
>
> import matplotlib.pyplot as plt
> def test_simple_plot():
>     fig, ax = plt.subplots()
>     ax.plot([0, 1])
>     ax.set_xlabel('x-label')
>     ax.set_ylabel('y-label')
>     ax.set_title('title')
>     return fig
> plt.rcParams['lines.linewidth'] = 10
> plt.rcParams['font.family'] = 'serif'
> plt.rcParams['font.size'] = 20
> fig1 = test_simple_plot()
> plt.rcParams['lines.linewidth'] = 1
> plt.rcParams['font.family'] = 'sans-serif'
> plt.rcParams['font.size'] = 10
> fig2 = test_simple_plot()
> plt.show()
Looks like we have an inconsistency here in how we process our None values.
For most artists, properties defined as None in the constructor are then
given defaults from the rcParams. I would guess that text objects are
doing it on draw() instead. At first glance, I would call that a bug, but I
would welcome other comments on this.
Ben Root
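A minimal illustration of the difference being described, using hypothetical
toy classes rather than matplotlib internals: a default resolved from
rcParams in the constructor keeps whatever value was current at creation
time, while one resolved lazily (e.g. at draw time) picks up later changes,
which would explain the font behaviour Tony sees.

import matplotlib.pyplot as plt


class EagerArtist:
    """Resolves its rcParams default once, in the constructor."""
    def __init__(self, linewidth=None):
        if linewidth is None:
            linewidth = plt.rcParams['lines.linewidth']
        self.linewidth = linewidth


class LazyArtist:
    """Resolves its rcParams default every time it is read (e.g. at draw time)."""
    def __init__(self, size=None):
        self._size = size

    @property
    def size(self):
        return plt.rcParams['font.size'] if self._size is None else self._size


plt.rcParams['lines.linewidth'] = 10
plt.rcParams['font.size'] = 20
eager, lazy = EagerArtist(), LazyArtist()

plt.rcParams['lines.linewidth'] = 1
plt.rcParams['font.size'] = 5

print(eager.linewidth, lazy.size)   # 10 5 -- the lazy value follows the later change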
From: Tony Yu <ts...@gm...> - 2012年02月14日 04:31:15
The title is a bit misleading: The problem is that the last font-related
rc-setting seems to override all previous settings. To clarify, if I save a
figure with certain font settings and *after that* change the rc-setting,
the older figure appears to have the newer setting. Note that this only
appears to happen with fonts---the linewidth setting, for example, shows up
as expected. (See the script below.)
-Tony
import matplotlib.pyplot as plt
def test_simple_plot():
    fig, ax = plt.subplots()
    ax.plot([0, 1])
    ax.set_xlabel('x-label')
    ax.set_ylabel('y-label')
    ax.set_title('title')
    return fig
plt.rcParams['lines.linewidth'] = 10
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.size'] = 20
fig1 = test_simple_plot()
plt.rcParams['lines.linewidth'] = 1
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.size'] = 10
fig2 = test_simple_plot()
plt.show()
Hi folks,
[ I'm broadcasting this widely for maximum reach, but I'd appreciate
it if replies can be kept to the *numpy* list, which is sort of the
'base' list for scientific/numerical work. It will make it much
easier to organize a coherent set of notes later on. Apologies if
you're subscribed to all and get it 10 times. ]
As part of the PyData workshop (http://pydataworkshop.eventbrite.com)
to be held March 2 and 3 at the Mountain View Google offices, we have
scheduled a session for an open discussion with Guido van Rossum and
hopefully as many core python-dev members who can make it. We wanted
to seize the combined opportunity of the PyData workshop bringing a
number of 'scipy people' to Google with the timeline for Python 3.3,
the first release after the Python language moratorium, being within
sight: http://www.python.org/dev/peps/pep-0398.
While a number of scientific Python packages are already available for
Python 3 (either in released form or in their master git branches),
it's fair to say that there hasn't been a major transition of the
scientific community to Python3. Since there is no more development
being done on the Python2 series, eventually we will all want to find
ways to make this transition, and we think that this is an excellent
time to engage the core python development team and consider ideas
that would make Python3 generally a more appealing language for
scientific work. Guido has made it clear that he doesn't speak for
the day-to-day development of Python anymore, so we all should be
aware that any ideas that come out of this panel will still need to be
discussed with python-dev itself via standard mechanisms before
anything is implemented. Nonetheless, the opportunity for a solid
face-to-face dialog for brainstorming was too good to pass up.
The purpose of this email is then to solicit, from all of our
community, ideas for this discussion. In a week or so we'll need to
summarize the main points brought up here and make a more concrete
agenda out of it; I will also post a summary of the meeting afterwards
here.
Anything is a valid topic; here are some points just to get the conversation started:
- Extra operators/PEP 225. Here's a summary from the last time we
went over this, years ago at Scipy 2008:
http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html,
and the current status of the document we wrote about it is here:
file:///home/fperez/www/site/_build/html/py4science/numpy-pep225/numpy-pep225.html.
- Improved syntax/support for rationals or decimal literals? While
Python now has both decimals
(http://docs.python.org/library/decimal.html) and rationals
(http://docs.python.org/library/fractions.html), they're quite clunky
to use because they require full constructor calls (see the short
snippet after this list). Guido has mentioned in previous discussions
toying with ideas about support for different kinds of numeric literals...
- Using the numpy docstring standard python-wide, and thus having
python improve the pathetic state of the stdlib's docstrings? This is
an area where our community is light years ahead of the standard
library, but we'd all benefit from Python itself improving on this
front. I'm toying with the idea of giving a lightning talk at PyCon
about this, comparing the great, robust culture and tools of good
docstrings across the Scipy ecosystem with the sad, sad state of
docstrings in the stdlib. It might spur some movement on that front
from the stdlib authors, esp. if the core python-dev team realizes the
value and benefit it can bring (at relatively low cost, given how most
of the information does exist, it's just in the wrong places). But
more importantly for us, if there was truly a universal standard for
high-quality docstrings across Python projects, building good
documentation/help machinery would be a lot easier, as we'd know what
to expect and search for (such as rendering them nicely in the ipython
notebook, providing high-quality cross-project help search, etc).
- Literal syntax for arrays? Sage has been floating a discussion
about a literal matrix syntax
(https://groups.google.com/forum/#!topic/sage-devel/mzwepqZBHnA). For
something like this to go into python in any meaningful way there
would have to be core multidimensional arrays in the language, but
perhaps it's time to think about moving a piece of the numpy array itself
into Python? This is one of the more 'out there' ideas, but after
all, that's the point of a discussion like this, especially
considering we'll have both Travis and Guido in one room.
- Other syntactic sugar? Sage has "a..b" <=> range(a, b+1), which I
actually think is both nice and useful... There's also the question
of allowing "a:b:c" notation outside of [], which has come up a few
times in conversation over the last few years. Others?
- The packaging quagmire? This continues to be a problem, though
python3 does have new improvements to distutils. I'm not really up to
speed on the situation, to be frank. If we want to bring this up,
someone will have to provide a solid reference or volunteer to do it
in person.
- etc...
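To illustrate the constructor-call point from the rationals/decimals item
above (this snippet is added for illustration and was not part of the
original email):

from fractions import Fraction
from decimal import Decimal

# Exact arithmetic exists in the stdlib today, but only behind explicit constructors:
print(Fraction(1, 3) + Fraction(1, 6))    # Fraction(1, 2)
print(Decimal('0.1') + Decimal('0.2'))    # Decimal('0.3') -- exact, unlike binary floats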
I'm putting the above just to *start* the discussion, but the real
point is for the rest of the community to contribute ideas, so don't
be shy.
Final note: while I am here committing to organizing and presenting
this at the discussion with Guido (as well as contacting python-dev),
I would greatly appreciate help with the task of summarizing this
prior to the meeting as I'm pretty badly swamped in the run-in to
pydata/pycon. So if anyone is willing to help draft the summary as
the date draws closer (we can put it up on a github wiki, gist,
whatever), I will be very grateful. I'm sure it will be better than
what I'll otherwise do the last night at 2am :)
Cheers,
f
ps - to the obvious question about webcasting the discussion live for
remote participation: yes, we looked into it already; no,
unfortunately it appears it won't be possible. We'll try to at least
have the audio recorded (and possibly video) for posting later on.
pps- if you are close to Mountain View and are interested in attending
this panel in person, drop me a line at fer...@be....
We have a few spots available *for this discussion only* on top of the
pydata regular attendance (which is long closed, I'm afraid). But
we'll need to provide Google with a list of those attendees in
advance. Please indicate if you are a core python committer in your
email, as we'll give priority for this overflow pool to core python
developers (but will otherwise accommodate as many people as Google
lets us).
From: Benjamin R. <ben...@ou...> - 2012年02月10日 18:33:05
On Fri, Feb 10, 2012 at 11:53 AM, Phil Elson <phi...@ho...>wrote:
> In much the same way Basemap can take an image in a Plate Carree map
> projection (e.g. blue marble) and transform it onto another projection
> in a non-affine way, I would like to be able to apply a non-affine
> transformation to an image, only using the proper matplotlib Transform
> framework.
> To me, this means that I should not have to pre-compute the projected
> image before adding it to the axes, instead I should be able to pass
> the source image and the Transformation stack should take care of
> transforming (warping) it for me (just like I can with a Path).
>
> As far as I can tell, there is no current matplotlib functionality to
> do this (as I understand it, some backends can cope with affine image
> transformations, but this has not been plumbed-in in the same style as
> the Transform of paths and is done in the Image classes themselves).
> (note: I am aware that there is some code to do affine transforms in
> certain backends -
> http://matplotlib.sourceforge.net/examples/api/demo_affine_image.html
> - which is currently broken [I have a fix for this], but it doesn't
> fit into the Transform framework at present.)
>
> I have code which will do the actual warping for my particular case,
> and all I need to do is hook it in nicely...
>
> I was thinking of adding a method to the Transform class which
> implements this functionality; pseudocode stubs are included:
>
>
> class Transform:
> ...
> def transform_image(self, image):
> return
> self.transform_image_affine(self.transform_image_non_affine(image))
>
> def transform_image_non_affine(self, image):
> if not self.is_affine:
> raise NotImplementedError('This is the hard part.')
> return image
> ...
> def transform_image_affine(self, image):
> # could easily handle scale & translations (by changing the
> extent), but not rotations...
> raise NotImplementedError("Need to do this. But rule out
> rotations completely.")
>
>
> This could then be used by the Image artist to do something like:
>
> class Image(Artist, ...):
> ...
> def draw(self, renderer, *args, **kwargs):
> transform = self.get_transform()
> timg = transform.transform_image_non_affine(self)
> affine = transform.get_affine()
> ...
> renderer.draw_image(timg, ..., affine)
>
>
> And the backends could implement:
>
> class Renderer*...
> def draw_image(..., img, ..., transform=None):
> # transform must be an affine transform
> if transform.is_affine and i_can_handle_affines:
> ... # convert the Transform into the backend's transform form
> else:
> timage = transform.transform_image(img)
>
>
>
> The warping mechanism itself would be fairly simple, in that it
> assigns coordinate values to each pixel in the source cs (coordinate
> system), transforms those points into the target cs, from which a
> bounding box can be identified. The bbox is then treated as the bbox
> of the target (warped) image, which is given an arbitrary resolution.
> Finally the target image pixel coordinates are computed and their
> associated pixel values are calculated by interpolating from the
> source image (using target cs pixel values).
>
>
> As mentioned, I have written the image warping code (for my higher
> dimensional coordinate system case using
> scipy.interpolate.NearestNDInterpolator) successfully already, so the
> main motivations for this mail then, are:
> * To get a feel for whether anyone else would find this functionality
> useful? Where else can it be used and in what ways?
> * To get feedback on the proposed change to the Transform class,
> whether such a change would be acceptable and what pitfalls lie ahead.
> * To hear alternative approaches to solving the same problem.
> * To make sure I haven't missed a concept that already exists in the
> Image module (there are 6 different "image" classes in there, 4 of
> which undocumented)
> * To find out if anyone else wants to collaborate in making the
> required change.
>
> Thanks in advance for your time,
>
>
Could this mean that we could support imshow() for polar axes? That would
be nice!
Ben Root
From: Phil E. <phi...@ho...> - 2012年02月10日 17:53:32
In much the same way Basemap can take an image in a Plate Carree map
projection (e.g. blue marble) and transform it onto another projection
in a non-affine way, I would like to be able to apply a non-affine
transformation to an image, only using the proper matplotlib Transform
framework.
To me, this means that I should not have to pre-compute the projected
image before adding it to the axes, instead I should be able to pass
the source image and the Transformation stack should take care of
transforming (warping) it for me (just like I can with a Path).
As far as I can tell, there is no current matplotlib functionality to
do this (as I understand it, some backends can cope with affine image
transformations, but this has not been plumbed-in in the same style as
the Transform of paths and is done in the Image classes themselves).
(note: I am aware that there is some code to do affine transforms in
certain backends -
http://matplotlib.sourceforge.net/examples/api/demo_affine_image.html
- which is currently broken [I have a fix for this], but it doesn't
fit into the Transform framework at present.)
I have code which will do the actual warping for my particular case,
and all I need to do is hook it in nicely...
I was thinking of adding a method to the Transform class which
implements this functionality; pseudocode stubs are included:
class Transform:
    ...
    def transform_image(self, image):
        return self.transform_image_affine(
            self.transform_image_non_affine(image))

    def transform_image_non_affine(self, image):
        if not self.is_affine:
            raise NotImplementedError('This is the hard part.')
        return image

    ...

    def transform_image_affine(self, image):
        # could easily handle scale & translations (by changing the
        # extent), but not rotations...
        raise NotImplementedError("Need to do this. But rule out "
                                  "rotations completely.")
This could then be used by the Image artist to do something like:
class Image(Artist, ...):
    ...
    def draw(self, renderer, *args, **kwargs):
        transform = self.get_transform()
        timg = transform.transform_image_non_affine(self)
        affine = transform.get_affine()
        ...
        renderer.draw_image(timg, ..., affine)
And the backends could implement:
class Renderer*...
    def draw_image(..., img, ..., transform=None):
        # transform must be an affine transform
        if transform.is_affine and i_can_handle_affines:
            ...  # convert the Transform into the backend's transform form
        else:
            timage = transform.transform_image(img)
The warping mechanism itself would be fairly simple, in that it
assigns coordinate values to each pixel in the source cs (coordinate
system), transforms those points into the target cs, from which a
bounding box can be identified. The bbox is then treated as the bbox
of the target (warped) image, which is given an arbitrary resolution.
Finally the target image pixel coordinates are computed and their
associated pixel values are calculated by interpolating from the
source image (using target cs pixel values).
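A rough, self-contained sketch of the warping recipe just described. The
function name, signature and grayscale-only handling are mine for
illustration; nearest-neighbour interpolation is used here simply because
scipy.interpolate.NearestNDInterpolator is what the original code reportedly
uses for the higher-dimensional case.

import numpy as np
from scipy.interpolate import NearestNDInterpolator


def warp_image(img, extent, fwd, out_shape=(400, 400)):
    """Warp a 2-D image with a point transform fwd (N x 2 -> N x 2)."""
    ny, nx = img.shape
    x0, x1, y0, y1 = extent
    # coordinate of every source pixel in the source coordinate system
    xs, ys = np.meshgrid(np.linspace(x0, x1, nx), np.linspace(y0, y1, ny))
    src = np.column_stack([xs.ravel(), ys.ravel()])
    # transform the source pixel positions into the target coordinate system
    tgt = fwd(src)
    # the bounding box of the transformed points defines the target extent
    tx0, ty0 = tgt.min(axis=0)
    tx1, ty1 = tgt.max(axis=0)
    # sample a regular target grid (arbitrary resolution) by interpolating
    # from the scattered, transformed source pixels
    interp = NearestNDInterpolator(tgt, img.ravel())
    ox, oy = np.meshgrid(np.linspace(tx0, tx1, out_shape[1]),
                         np.linspace(ty0, ty1, out_shape[0]))
    warped = interp(np.column_stack([ox.ravel(), oy.ravel()]))
    return warped.reshape(out_shape), (tx0, tx1, ty0, ty1)

The returned array and extent could then be handed to imshow; a real
implementation would also need to mask target pixels that fall outside the
source footprint.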
As mentioned, I have written the image warping code (for my higher
dimensional coordinate system case using
scipy.interpolate.NearestNDInterpolator) successfully already, so the
main motivations for this mail then, are:
 * To get a feel for whether anyone else would find this functionality
useful? Where else can it be used and in what ways?
 * To get feedback on the proposed change to the Transform class,
whether such a change would be acceptable and what pitfalls lie ahead.
 * To hear alternative approaches to solving the same problem.
 * To make sure I haven't missed a concept that already exists in the
Image module (there are 6 different "image" classes in there, 4 of
which are undocumented)
 * To find out if anyone else wants to collaborate in making the
required change.
Thanks in advance for your time,
From: Phil E. <phi...@ho...> - 2012年02月04日 17:13:58
Thanks Mike.
> I'm not quite sure what the above lines are meant to do.
> matplotlib.transforms doesn't have a Polar member --
> matplotlib.projections.polar.Polar does not have a PolarTransform member
> (on master or your polar_fun branch). Even given that, I think the user
> should be specifying a projection, not a transformation, to create a new
> axes. There is potential for confusion that some transformations will
> allow getting a projection out and some won't (for some it doesn't even
> really make sense).
That was meant to be
matplotlib.projections.polar.PolarAxes.PolarTransform, but you're right:
defining the "projection" in the transform could lead to confusion, yet
initialising an Axes as a projection seems like unnecessary complexity.
This suggests that defining a "projection" class which is neither Transform
nor Axes might make the most sense (note, what follows is pseudocode and
does not exist in the branch):
>>> polar_proj = Polar(theta0=np.pi/2)
>>> ax = plt.axes(projection=polar_proj)
>>> print ax.projection
Polar(theta0=1.57)
The PolarAxes would be initialised with the Projection instance, and
the PolarAxes can initialise the PolarTransform with a reference to
that projection. Thus changing the theta0 of the projection in the
Axes would also change the projection which is used in the Transform
instance, i.e.:
ax.projection.theta0 = 3*np.pi/2
would change the way the overall axes looks.
Interestingly, the work I have been doing which requires the
aforementioned pull request does precisely this: I have projection
classes, one for each type of projection, each of which is parameterised.
When passed through plt.axes(projection=<my projection class>), they
instantiate a generic "GenericProjectionAxes", which itself instantiates a
generic "GenericProjectionTransform" (names for illustration purposes
only), and all the while the original projection remains mutable via the
GenericProjectionAxes.projection attribute.
Did you have any feelings on the pull request?
Thanks again for your time,
Phil
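For reference, a small sketch of what such a stand-alone, parameterised
projection object could look like. It uses the _as_mpl_axes hook, which is
how projection instances plug into plt.axes(projection=...) in matplotlib;
whether that matches the exact mechanism proposed in pull request #470 is an
assumption, and the Polar class and its attributes here are illustrative only.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.projections.polar import PolarAxes


class Polar:
    """Illustrative stand-in for a parameterised polar projection object."""

    def __init__(self, theta0_deg=0.0):
        self.theta0_deg = theta0_deg

    def _as_mpl_axes(self):
        # plt.axes(projection=<obj>) asks the object which Axes class to build
        # and with which keyword arguments.  Forwarding theta0 through these
        # kwargs would need PolarAxes to accept it at construction, so this
        # sketch keeps them empty and applies the offset after creation.
        return PolarAxes, {}


proj = Polar(theta0_deg=90)
ax = plt.axes(projection=proj)
ax.set_theta_offset(np.deg2rad(proj.theta0_deg))
ax.plot(np.arange(100) * 0.15, np.arange(100))
plt.show()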
From: Michael D. <md...@st...> - 2012年02月03日 18:09:15
Thanks for doing this work.
On 02/03/2012 11:40 AM, Phil Elson wrote:
> Currently, one can set the theta_0 of a polar plot with:
>
> ax = plt.axes(projection='polar')
> ax.set_theta_offset(np.pi/2)
> ax.plot(np.arange(100)*0.15, np.arange(100))
>
> But internally there are some nasties going on (theta_0 is an attribute on the
> axes, the transform is instantiated from within the axes and is given the axes
> that is instantiating it, which is all a bit circular). I have made a branch
> (https://github.com/PhilipElson/matplotlib/compare/master...polar_fun) which
> alleviates the axes attribute issue and would allow something like:
>
> polar_trans = mpl.transforms.Polar.PolarTransform(theta_offset=np.pi/2)
> ax = plt.axes(projection=polar_trans)
> ax.plot(np.arange(100)*0.15, np.arange(100))
I agree that the canonical copy of theta_offset should probably live in 
the transform and not the PolarAxes. However, an important feature of 
the current system that seems to be lost in your branch is that the user 
deals with Projections (Axes subclasses) which bring together not only 
the transformation of points from one space to another, but the axes 
shape and tick placement etc., and they also allow for changing 
everything after the fact. The Transformation classes, as they stand 
now, are intended to be an implementation detail hidden from the user.
I'm not quite sure what the above lines are meant to do. 
matplotlib.transforms doesn't have a Polar member -- 
matplotlib.projections.polar.Polar does not have a PolarTransform member 
(on master or your polar_fun branch). Even given that, I think the user 
should be specifying a projection, not a transformation, to create a new 
axes. There is potential for confusion that some transformations will 
allow getting a projection out and some won't (for some it doesn't even 
really make sense).
>
> Or, I have added a helper class which also demonstrates the proposed:
>
> non-string change:
> ax = plt.axes(projection=Polar(theta0=90))
> ax.plot(np.arange(100)*0.15, np.arange(100))
>
> As I said, I am not proposing these changes to the way Polar works at this
> stage, but thought it was worth sharing to show what can be done once
> something similar to the proposed change gets on to mpl master.
>
This makes more sense to me. It doesn't appear to allow for setting the 
theta0 after the fact since Polar doesn't propagate changes along to the 
PolarAxes object that it created and set_theta_offset has been removed 
from PolarAxes.
Cheers,
Mike
From: Phil E. <phi...@ho...> - 2012年02月03日 16:40:23
Some time back I asked about initialising a projection in MPL using generic
objects rather than by class name. I created a pull request associated with
this, which leejjoon responded to fantastically and which (after several
months) I have finally got around to implementing. My changes have been added
to the original pull request, which will eventually be obsoleted, but that
doesn't seem to have notified the devel mailing list; therefore I would like
to draw the list's attention to
https://github.com/matplotlib/matplotlib/pull/470#issuecomment-3743543, on
which I would greatly appreciate feedback so it can ultimately get onto mpl
master.
The pull request in question would pave the way for non-string projections, so
I thought I would play with how one might go about specifying the location of
theta_0 in a polar plot (i.e. should it be due east or due north etc.). I have
branched my changeset mentioned in the pull request above and implemented
a couple of ideas, although I am not proposing that these changes go any
further at this stage (I would be happy if someone wants to run with
them though):
Currently, one can set the theta_0 of a polar plot with:
ax = plt.axes(projection='polar')
ax.set_theta_offset(np.pi/2)
ax.plot(np.arange(100)*0.15, np.arange(100))
But internally there are some nasties going on (theta_0 is an attribute on the
axes, the transform is instantiated from within the axes and is given the axes
that is instantiating it, which is all a bit circular). I have made a branch
(https://github.com/PhilipElson/matplotlib/compare/master...polar_fun) which
alleviates the axes attribute issue and would allow something like:
polar_trans = mpl.transforms.Polar.PolarTransform(theta_offset=np.pi/2)
ax = plt.axes(projection=polar_trans)
ax.plot(np.arange(100)*0.15, np.arange(100))
Or, I have added a helper class which also demonstrates the proposed
non-string change:
ax = plt.axes(projection=Polar(theta0=90))
ax.plot(np.arange(100)*0.15, np.arange(100))
As I said, I am not proposing these changes to the way Polar works at this
stage, but thought it was worth sharing to show what can be done once
something similar to the proposed change gets on to mpl master.
Hope that makes sense.
Many Thanks,
I'm trying to understand how the TransformedPath mechanism works, with
only limited success, and was hoping someone could help.
I have a non-affine transformation defined (a subclass of
matplotlib.transforms.Transform) which takes a path and applies an
intensive transformation (path curving & cutting) that can take a little
while. I am able to guarantee that this transformation is a one-off and
will never change for this transform instance, so there are obvious
caching opportunities.
I am aware that TransformedPath is doing some caching, and I would really
like to hook into this rather than rolling my own caching mechanism, but I
can't quite figure out the (probably obvious!) way to do it.
To see this problem for yourself I have attached a dummy example of
what I am working on:
import matplotlib.transforms


class SlowNonAffineTransform(matplotlib.transforms.Transform):
    input_dims = 2
    output_dims = 2
    is_separable = False
    has_inverse = True

    def transform(self, points):
        return matplotlib.transforms.IdentityTransform().transform(points)

    def transform_path(self, path):
        # pretends that it is doing something clever & time consuming,
        # but really is just sleeping
        import time
        # take a long time to do something
        time.sleep(3)
        # return the original path
        return matplotlib.transforms.IdentityTransform().transform_path(path)


if __name__ == '__main__':
    import matplotlib.pyplot as plt

    ax = plt.axes()
    ax.plot([0, 10, 20], [1, 3, 2],
            transform=SlowNonAffineTransform() + ax.transData)
    plt.show()
When this code is run the initial "show" is slow, which is fine, but a
simple resize/zoom rect/pan/zoom will also take a long time.
How can I tell mpl that I can guarantee that my level of the transform
stack is never invalidated?
Many Thanks,
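One possible workaround, offered here as a suggestion rather than an answer
from the thread: since this particular transform is guaranteed never to
change, the expensive result can be memoised inside the transform itself,
keyed on the input Path, so repeated draws only pay the cost once. The
id()-based cache key is a simplification that assumes the Path object
outlives the cache entry.

import time
import matplotlib.transforms as mtransforms


class CachedSlowTransform(mtransforms.Transform):
    """Variant of the dummy example above that memoises its expensive warp."""
    input_dims = output_dims = 2
    is_separable = False
    has_inverse = True

    def __init__(self):
        super().__init__()
        self._path_cache = {}

    def transform_non_affine(self, points):
        # the cheap point-wise part (identity here, as in the dummy example)
        return points

    def transform_path_non_affine(self, path):
        key = id(path)                       # simplification, see note above
        if key not in self._path_cache:
            time.sleep(3)                    # stands in for the expensive warp
            self._path_cache[key] = (
                mtransforms.IdentityTransform().transform_path(path))
        return self._path_cache[key]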
