matplotlib-users — Discussion related to using matplotlib


Showing 2 results of 2

From: Hartmut K. <har...@gm...> - 2014-08-12 12:55:12
Thanks for your insights, Ian!

A somewhat slower trifinder that requires less memory might even be faster in
the end, since creating the trifinder itself takes a long time (almost a
minute in our case).

Regards
Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu
> -----Original Message-----
> From: Ian Thomas [mailto:ian...@gm...]
> Sent: Tuesday, August 12, 2014 4:35 AM
> To: Hartmut Kaiser
> Cc: Andrew Dawson; Carola Kaiser; matplotlib-users
> Subject: Re: [Matplotlib-users] Crash when using
> matplotlib.tri.LinearTriInterpolator
> 
> [quoted text of Ian's message snipped; full message below]
From: Ian T. <ian...@gm...> - 2014-08-12 09:35:04
Here are the results of my investigation. There is probably more
information here than anyone else wants, but it is useful information for
future improvements.
Most of the RAM is taken up by a trifinder object, which sits at the heart of
a triinterpolator and is used to find the triangles of a Triangulation in
which (x, y) points lie. The code

    interp = tri.LinearTriInterpolator(triang, data)

is equivalent to

    trifinder = tri.TrapezoidMapTriFinder(triang)
    interp = tri.LinearTriInterpolator(triang, data, trifinder=trifinder)
Using the latter with memory_profiler (
https://pypi.python.org/pypi/memory_profiler) indicates that this is where
most of the RAM is being used. Here are some figures for trifinder RAM
usage as a function of ntri, the number of triangles in the triangulation:
   ntri  trifinder MB
-------  ------------
   1000            26
  10000            33
 100000           116
1000000           912
2140255          1936
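The equivalence above lends itself to a quick check. The following sketch (an
illustration on a small random Delaunay triangulation, an assumption for this
example, not the 2.1-million-triangle mesh from the original problem) confirms
that both constructions produce identical interpolated values:

```python
import numpy as np
import matplotlib.tri as tri

# Small synthetic Delaunay triangulation of random points.
rng = np.random.default_rng(42)
x, y = rng.random(200), rng.random(200)
data = np.sin(x) * np.cos(y)
triang = tri.Triangulation(x, y)

# Construction 1: let the interpolator build its own trifinder.
interp_default = tri.LinearTriInterpolator(triang, data)

# Construction 2: build the trifinder explicitly and pass it in.
trifinder = tri.TrapezoidMapTriFinder(triang)
interp_explicit = tri.LinearTriInterpolator(triang, data, trifinder=trifinder)

# Both constructions yield identical interpolated values.
xi, yi = np.meshgrid(np.linspace(0.1, 0.9, 5), np.linspace(0.1, 0.9, 5))
assert np.ma.allclose(interp_default(xi, yi), interp_explicit(xi, yi))
```

Constructing the TrapezoidMapTriFinder explicitly is what makes it possible to
profile it, or reuse it across several interpolators, independently of any one
interpolator.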
The RAM usage is less than linear in ntri, but clearly too much for large
triangulations unless you have a lot of RAM.
The trifinder precomputes a tree of nodes to make looking up triangles
quick. Searching through 2 million triangles in an ad-hoc manner would be
very slow; the trifinder is very fast in comparison. Here are some stats
for the tree that trifinder uses (the columns are number of nodes in the
tree, maximum node depth, and mean node depth):
   ntri      nodes  max depth  mean depth
-------  ---------  ---------  ----------
   1000     179097         37       23.24
  10000    3271933         53       30.74
 100000   36971309         69       37.15
1000000  853117229         87       48.66
The mean depth is the mean number of nodes that have to be traversed to
find a triangle, and the max depth is the worst case. The search time is
therefore O(log ntri).
The triangle interpolator code is structured so that it is easy to plug in a
different trifinder if the default one isn't appropriate. At the moment,
however, only one is available (TrapezoidMapTriFinder). For the problem at
hand, a trifinder that is slower but consumes less RAM would be preferable.
There are various possibilities; they just have to be implemented! I will
take a look at it sometime, but it probably will not be soon.
Ian Thomas
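As an illustration of the pluggable design mentioned in the message, here is a
sketch of what a low-memory alternative could look like. The
BruteForceTriFinder below is hypothetical (it is not part of matplotlib): it
keeps no precomputed tree, so it needs almost no extra RAM, but each query
costs an O(ntri) scan instead of O(log ntri):

```python
import numpy as np
import matplotlib.tri as tri

class BruteForceTriFinder(tri.TriFinder):
    """Hypothetical low-memory trifinder (not part of matplotlib):
    no precomputed tree, at the cost of an O(ntri) scan per query."""

    def __call__(self, x, y):
        x = np.asarray(x, dtype=np.float64)
        y = np.asarray(y, dtype=np.float64)
        t = self._triangulation.triangles        # (ntri, 3) vertex indices
        tx = self._triangulation.x[t]            # (ntri, 3) x coordinates
        ty = self._triangulation.y[t]            # (ntri, 3) y coordinates
        out = np.full(x.shape, -1, dtype=np.int32)
        for idx in np.ndindex(x.shape):
            # Cross product of each edge with the query point: if all three
            # have the same sign (or are zero), the point is inside.
            d = ((tx[:, [1, 2, 0]] - tx) * (y[idx] - ty)
                 - (ty[:, [1, 2, 0]] - ty) * (x[idx] - tx))
            hit = np.nonzero((d >= 0).all(axis=1) | (d <= 0).all(axis=1))[0]
            if hit.size:
                out[idx] = hit[0]
        return out

# Usage: plug the custom trifinder into the interpolator via the
# trifinder keyword argument described in the message above.
rng = np.random.default_rng(0)
px, py = rng.random(50), rng.random(50)
triang = tri.Triangulation(px, py)
data = px + py                                   # linear field, easy to verify
interp = tri.LinearTriInterpolator(triang, data,
                                   trifinder=BruteForceTriFinder(triang))
zi = interp(np.array([0.4]), np.array([0.6]))
assert np.ma.allclose(zi, [1.0])                 # x + y at (0.4, 0.6)
```

At 2 million triangles every query would touch every triangle, so this is only
viable for small meshes or very few query points; some middle ground between
this and the full trapezoid-map tree would likely be the practical compromise.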
