Hi Fernando,

It would be really great if galry could be integrated in the notebook indeed. Is the code of this demo available somewhere, so that I can get an idea about how this integration works?

In theory, galry should be compatible with WebGL, because one of the main components of galry is a shader code generator that can produce OpenGL ES-compatible GLSL code. Apart from that, I suppose you have some way of making Javascript and Python communicate? The interaction system of galry, which is based on QT but with an abstraction layer, could then be plugged into Javascript somehow... Anyway, if I could take a look at the code of this demo, I should be able to evaluate how complicated this integration would be.

Cyrille

On 16 November 2012, Fernando Perez <fpe...@gm...> wrote:
> Hi Cyrille,
>
> On Thu, Nov 15, 2012 at 11:24 AM, Cyrille Rossant <cyr...@gm...> wrote:
> > I am developing a high-performance interactive visualization package in
> > Python based on PyOpenGL (http://rossant.github.com/galry/). It is primarily
> > meant to be used as a framework for developing complex interactive GUIs (in
> > QT) that deal with very large amounts of data (tens of millions of points).
> > But it may also be used, like matplotlib, as a high-level interactive
> > library to plot and visualize data.
>
> quick question: how easy/feasible is WebGL integration? I ask b/c
> we're starting to get the necessary machinery for easy WebGL
> visualization in the ipython notebook, see e.g.:
>
> http://www.flickr.com/photos/47156828@N06/8183294725
>
> so bringing galry to the notebook with minimal code duplication would
> be great. I just mention it now in case it helps you make design
> decisions as you go along.
>
> Cheers,
>
> f
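For readers outside the thread: the "OpenGL ES-compatible GLSL" point above is what makes WebGL plausible, since WebGL shaders are GLSL ES. The following is a minimal, illustrative sketch -- not galry's actual generator; the shader body, attribute and uniform names are invented -- of how one shared vertex-shader template could be emitted for a desktop GL target or an ES/WebGL target:

    # Illustrative only: emit desktop GLSL or GLSL ES from one shared template.
    DESKTOP_HEADER = "#version 120\n"
    ES_HEADER = "precision mediump float;\n"  # GLSL ES (WebGL) needs a default float precision

    VERTEX_BODY = """
    attribute vec2 position;
    uniform vec2 scale;
    uniform vec2 translation;

    void main() {
        // pan/zoom applied on the GPU so the (large) data buffer never changes
        gl_Position = vec4(scale * (position + translation), 0.0, 1.0);
    }
    """

    def generate_vertex_shader(target="desktop"):
        """Return GLSL source for the requested target ('desktop' or 'es')."""
        header = ES_HEADER if target == "es" else DESKTOP_HEADER
        return header + VERTEX_BODY

    if __name__ == "__main__":
        print(generate_vertex_shader("es"))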
Hi Cyrille,

On Thu, Nov 15, 2012 at 11:24 AM, Cyrille Rossant <cyr...@gm...> wrote:
> I am developing a high-performance interactive visualization package in
> Python based on PyOpenGL (http://rossant.github.com/galry/). It is primarily
> meant to be used as a framework for developing complex interactive GUIs (in
> QT) that deal with very large amounts of data (tens of millions of points).
> But it may also be used, like matplotlib, as a high-level interactive
> library to plot and visualize data.

quick question: how easy/feasible is WebGL integration? I ask b/c we're
starting to get the necessary machinery for easy WebGL visualization in the
ipython notebook, see e.g.:

http://www.flickr.com/photos/47156828@N06/8183294725

so bringing galry to the notebook with minimal code duplication would be
great. I just mention it now in case it helps you make design decisions as
you go along.

Cheers,

f
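As context for the "machinery for easy WebGL visualization" mentioned above, here is a hedged sketch of the Python side of such a Python-to-browser bridge, using only generic IPython.display utilities; the window._galry_data name and the JavaScript consumer are invented for illustration and are not part of IPython or galry:

    import json
    import numpy as np
    from IPython.display import Javascript, display

    def push_vertices_to_browser(positions):
        """Serialize an (N, 2) float array and hand it to the notebook front end.

        A JavaScript consumer (not shown, and hypothetical) would read
        window._galry_data and upload it into a WebGL vertex buffer.
        """
        payload = json.dumps(np.asarray(positions, dtype=float).tolist())
        display(Javascript("window._galry_data = %s;" % payload))

    # Example: push_vertices_to_browser(np.random.randn(1000, 2))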
On Fri, Nov 16, 2012 at 7:58 AM, Skipper Seabold <jss...@gm...> wrote:
> On Fri, Nov 16, 2012 at 10:19 AM, Yaroslav Halchenko <sf...@on...> wrote:
> > I just found some code (http://www.onerussian.com/tmp/plots.py and
> > pasted below for review/feedback) lying around which I wrote around
> > matplotlib for plotting primarily pair-wise stats results. Here is a
> > demonstration:
> > http://nbviewer.ipython.org/url/www.onerussian.com/tmp/run_plots.ipynb
> >
> > I wonder if there is a need/place for it in matplotlib and what changes
> > would you advise. Sorry for the lack of documentation -- I guess I had not
> > finished it at that point (the scipy dependency can easily be dropped; it is
> > used only for the standard error function, iirc):
>
> Looks nice. We'd certainly be interested in including it in
> statsmodels/graphics if there isn't sufficient interest here and/or
> you'd like to keep the scipy dependency. ;)
>
> Skipper

I was going to suggest either the same thing or adding it to pandas. I think
statsmodels is the better fit, though. I also noticed scipy is only used for
scipy.stats.sem -- so it might be easy enough to lose the scipy dependency.

Just a thought.
-paul
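Since the only scipy use is scipy.stats.sem, dropping that dependency amounts to a few lines of NumPy. A minimal sketch (not part of the posted plots.py) that matches scipy.stats.sem's default ddof=1 behaviour:

    import numpy as np

    def sem(a, axis=0, ddof=1):
        """Standard error of the mean along `axis`, matching scipy.stats.sem defaults."""
        a = np.asarray(a, dtype=float)
        n = a.shape[axis]
        # sample standard deviation (ddof=1) divided by sqrt(n)
        return a.std(axis=axis, ddof=ddof) / np.sqrt(n)

    # e.g. sem(v10, axis=0) gives the same result as scipy.stats.sem(v10, axis=0)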
On Fri, Nov 16, 2012 at 10:19 AM, Yaroslav Halchenko <sf...@on...> wrote:
> I just found some code (http://www.onerussian.com/tmp/plots.py and
> pasted below for review/feedback) lying around which I wrote around
> matplotlib for plotting primarily pair-wise stats results. Here is a
> demonstration:
> http://nbviewer.ipython.org/url/www.onerussian.com/tmp/run_plots.ipynb
>
> I wonder if there is a need/place for it in matplotlib and what changes
> would you advise. Sorry for the lack of documentation -- I guess I had not
> finished it at that point (the scipy dependency can easily be dropped; it is
> used only for the standard error function, iirc):

Looks nice. We'd certainly be interested in including it in
statsmodels/graphics if there isn't sufficient interest here and/or
you'd like to keep the scipy dependency. ;)

Skipper

> [plots.py listing snipped -- it appears in full in the original message below]
I just found some code (http://www.onerussian.com/tmp/plots.py and pasted
below for review/feedback) lying around which I wrote around matplotlib for
plotting primarily pair-wise stats results. Here is a demonstration:

http://nbviewer.ipython.org/url/www.onerussian.com/tmp/run_plots.ipynb

I wonder if there is a need/place for it in matplotlib and what changes would
you advise. Sorry for the lack of documentation -- I guess I had not finished
it at that point (the scipy dependency can easily be dropped; it is used only
for the standard error function, iirc):

#!/usr/bin/python
#emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
#ex: set sts=4 ts=4 sw=4 noet:
#------------------------- =+- Python script -+= -------------------------
"""
 @file      paired-plots.py
 @date      Fri Jan 13 11:48:00 2012
 @brief

 Yaroslav Halchenko                                           Dartmouth
 web:    http://www.onerussian.com                            College
 e-mail: yo...@on...                                          ICQ#: 60653192

 DESCRIPTION (NOTES):

 COPYRIGHT: Yaroslav Halchenko 2012

 LICENSE: MIT

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
 in the Software without restriction, including without limitation the rights
 to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:

 The above copyright notice and this permission notice shall be included in
 all copies or substantial portions of the Software.

 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 THE SOFTWARE.
"""
#-----------------\____________________________________/------------------

__author__ = 'Yaroslav Halchenko'
__revision__ = '$Revision: $'
__date__ = '$Date: $'
__copyright__ = 'Copyright (c) 2012 Yaroslav Halchenko'
__license__ = 'MIT'


import numpy as np
import pylab as pl
import scipy.stats as ss

def plot_boxplot_enhanced(
        v,
        contrast_labels=None,
        condition_labels=None,
        ccolors=['y', 'b'],
        rand_offsets=None,
        grid=True,
        xticks_rotation=0,
        **bp_kwargs):

    width = bp_kwargs.get('width', 0.5)
    pl.boxplot(v, **bp_kwargs)

    if v.ndim < 2: v = v[:, None]
    ncol = v.shape[1]

    eff = np.mean(v, axis=0)            # effect sizes
    sem = ss.sem(v, axis=0)

    if rand_offsets is None:
        rand_offsets = np.random.randn(len(v)) * 0.02

    pl.plot((np.arange(ncol) + 1)[:, None] + rand_offsets,
            v.T, '.', color='k', markerfacecolor='k')
    for i in range(ncol):
        lw = 2
        pl.plot([1 - width/2. + i, 1+i],
                [0, 0],
                '--', color=ccolors[0], linewidth=lw)  # first condition
        pl.plot([1+i, 1 + width/2. + i],
                [eff[i]]*2,
                '--', color=ccolors[1], linewidth=lw)

        # place ste
        pl.errorbar(i+1 + 1.1*width/2.,
                    eff[i],
                    sem[i],
                    elinewidth=2, linewidth=0,
                    color='r', ecolor='r')

        if contrast_labels and not i:   # only for the first one
            pl.text(1 - 1.1*width/2 + i, 0.1, contrast_labels[0],
                    verticalalignment='bottom',
                    horizontalalignment='right')
            pl.text(1 + 1.2*width/2 + i, eff[i], contrast_labels[1],
                    verticalalignment='bottom', horizontalalignment='left')

    ax = pl.gca()
    if condition_labels:
        ax.set_xticklabels(condition_labels, rotation=xticks_rotation)
    else:
        # hide it
        ax.axes.xaxis.set_visible(False)

    if grid:
        ax.grid()
    return ax


def plot_paired_stats(
        v0, v1, contrast_labels,
        condition_labels=None,
        style=['barplot_effect',
               'boxplot_raw',
               'boxplot_effect'],
        ccolors=['y', 'g'],
        xticks_rotation=0,
        grid=False,
        fig=None,
        bottom_adjust=None,
        bp_kwargs={}):

    if isinstance(style, str):
        style = [style]

    nplots = len(style)                 # how many subplots will be needed

    # assure having 2nd dimension
    if v0.ndim < 2: v0 = v0[:, None]
    if v1.ndim < 2: v1 = v1[:, None]
    assert(v0.shape == v1.shape)

    ncol = v0.shape[1]
    v10 = (v1 - v0)                     # differences
    mv0 = np.mean(v0, axis=0)           # means
    mv1 = np.mean(v1, axis=0)

    eff = np.mean(v10, axis=0)          # effect sizes
    sem = ss.sem(v10, axis=0)

    # so that data points are distinguishable
    rand_offsets = np.random.randn(len(v10)) * 0.02

    # interleaved combination for some plots
    v_ = np.hstack((v0, v1))
    v = np.zeros(v_.shape, dtype=v_.dtype)
    v[:, np.hstack((np.arange(0, ncol*2, 2),
                    np.arange(1, ncol*2, 2)))] = v_

    #print v.shape
    #print np.mean(v0, axis=0), np.mean(v1, axis=0)
    #print np.min(v10, axis=0), np.max(v10, axis=0), \
    #      np.mean(v10, axis=0), ss.sem(v10, axis=0)
    #pl.boxplot(v10 + np.mean(v1), notch=1, widths=0.05)

    #print v0.shape, v1.shape, np.hstack([v0, v1]).shape

    if fig is None:
        fig = pl.figure()

    bwidth = 0.5
    plot = 1

    if condition_labels:
        xlabels = [ '%s:%s' % (cond, contr)
                    for cond in condition_labels
                    for contr in contrast_labels ]
    else:
        xlabels = contrast_labels

    bp_kwargs_ = {
        #'bootstrap': 0,
        'notch' : 1
        }
    bp_kwargs_.update(bp_kwargs)

    def plot_grid(ax):
        if grid:
            ax.grid()

    if 'barplot_effect' in style:
        if len(style) > 1:
            pl.subplot(1, nplots, plot)
            plot += 1
        # The simplest one
        pl.bar(np.arange(1, ncol*2+1) - bwidth/2,
               np.mean(v, axis=0),
               color=ccolors*ncol,
               edgecolor=ccolors*ncol,
               alpha=0.8,
               width=bwidth)
        #pl.minorticks_off()
        pl.tick_params('x', direction='out', length=6, width=1,
                       top=False)
        ax = pl.gca()
        pl.xlim(0.5, ncol*2+0.5)
        ax.set_xticks(np.arange(1, ncol*2+1))
        ax.set_xticklabels(xlabels, rotation=xticks_rotation)
        # place ste for effect size into the 2nd column
        pl.errorbar(np.arange(ncol)*2+2,
                    mv1,
                    sem, elinewidth=2, linewidth=0,
                    color='g', ecolor='r')

        plot_grid(ax)

    if 'boxplot_raw' in style:
        if len(style) > 1:
            pl.subplot(1, nplots, plot)
            plot += 1

        # Figure 1 -- "raw" data
        # plot "connections" between boxplots
        for i in range(ncol):
            pargs = (np.array([i*2+1, i*2+2])[:, None] + rand_offsets,
                     np.array([v0[:,i], v1[:,i]]))
            pl.plot(*(pargs+('-',)), color='k', alpha=0.5, linewidth=0.25)
            pl.plot(*(pargs+('.',)), color='k', alpha=0.9)
        # boxplot of "raw" data
        bp1 = pl.boxplot(v, widths=bwidth, **bp_kwargs_)
        for i in range(ncol):
            for c in xrange(2):
                b = bp1['boxes'][2*i+c]
                b.set_color(ccolors[c])
                b.set_linewidth(2)

        ax = pl.gca()
        ax.set_xticklabels(xlabels, rotation=xticks_rotation)
        plot_grid(ax)

    if 'boxplot_effect' in style:
        if len(style) > 1:
            pl.subplot(1, nplots, plot)
            plot += 1
        plot_boxplot_enhanced(v10,
                              contrast_labels=contrast_labels,
                              condition_labels=condition_labels,
                              widths=bwidth,
                              rand_offsets=rand_offsets,  # reuse them
                              grid=grid,
                              **bp_kwargs_)

    if bottom_adjust:
        fig.subplots_adjust(bottom=bottom_adjust)
    pl.draw_if_interactive()
    return fig


if __name__ == '__main__':

    if True:
        v = np.random.normal(size=(50,8)) * 20 + 120
        if False:
            v[:, 1] += 40
            v[:, 3] -= 30
            v[:, 5] += 60
            v[:, 6] -= 60
        else:
            v -= np.arange(v.shape[1])*10
        v /= 10

    v0 = v[:, ::2]
    v1 = v[:, 1::2]
    d = v1 - v0
    print np.mean(d, axis=0)

    styles = ['barplot_effect',
              'boxplot_raw',
              'boxplot_effect'
              ]
    styles = styles + [styles]
    pl.close('all')

    if False:
        f = plot_boxplot_enhanced((v1-v0)[:,0],
                                  grid=True, xticks_rotation=30, notch=1)

    for s in styles:
        fig = pl.figure(figsize=(12,6))
        f = plot_paired_stats(v0, v1, ['cont1', 'cont2'],
                              style=s, fig=fig,
                              condition_labels=['exp1', 'exp2', 'exp3', 'exp4'],
                              grid=True, xticks_rotation=30)
    pl.show()

--
Yaroslav O. Halchenko
Postdoctoral Fellow, Department of Psychological and Brain Sciences
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834    Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
On Fri, Nov 16, 2012 at 7:44 AM, Mike Kaufman <mc...@gm...> wrote:
> On 11/15/12 2:54 PM, Paul Ivanov wrote:
> > On Wed, Nov 14, 2012 at 7:37 PM, Mike Kaufman <mc...@gm...> wrote:
> > > I just found that *direction* accepts 'inout' as well, which
> > > does indeed place the tick on both sides of the spine. So the
> > > documentation should be updated to reflect this.
> >
> > Thanks for the report, Mike, here's a PR for the patch:
> > https://github.com/matplotlib/matplotlib/pull/1503
> >
> > > If it were me, I'd allow 'both' to work as well.
> >
> > I'm amenable to that, just not sure if that counts as a new feature and
> > should go into master, or a bugfix and go into v1.2.x.
> >
> > I can add this functionality to #1503 if that makes sense to go to v1.2.x
>
> Thanks Paul,
>
> One is a doc fix, the other requires changing code (inside axis.py?). So
> best practice probably argues that 'both', if acceptable, should really
> only go into master.
>
> That said, I think the 'both' change only requires a change to a pair of
> conditionals...
>
> M

I agree that implementing "both" to mean the same as "inout" should go into
master only.

Ben Root
On 11/15/12 2:54 PM, Paul Ivanov wrote:
> On Wed, Nov 14, 2012 at 7:37 PM, Mike Kaufman <mc...@gm...> wrote:
> > I just found that *direction* accepts 'inout' as well, which
> > does indeed place the tick on both sides of the spine. So the
> > documentation should be updated to reflect this.
>
> Thanks for the report, Mike, here's a PR for the patch:
> https://github.com/matplotlib/matplotlib/pull/1503
>
> > If it were me, I'd allow 'both' to work as well.
>
> I'm amenable to that, just not sure if that counts as a new feature and
> should go into master, or a bugfix and go into v1.2.x.
>
> I can add this functionality to #1503 if that makes sense to go to v1.2.x

Thanks Paul,

One is a doc fix, the other requires changing code (inside axis.py?). So
best practice probably argues that 'both', if acceptable, should really
only go into master.

That said, I think the 'both' change only requires a change to a pair of
conditionals...

M
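For reference, the existing behaviour under discussion can be exercised directly with tick_params; a minimal snippet showing what 'inout' already does (the proposed 'both' alias does not exist -- 'inout' is the current spelling):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 100)
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x))

    # 'inout' draws each tick across the spine, i.e. on both sides of it;
    # the 'both' alias discussed above would map to the same behaviour.
    ax.tick_params(axis='both', direction='inout', length=8)

    plt.show()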
On 16 November 2012 05:14, Damon McDougall <dam...@gm...> wrote:
> > > I have a C++ TriFinder class
> > > that I could modify to work within matplotlib, and it is O(log N) so should
> > > be faster than your version for typical use cases.
> >
> > What algorithm does this use? Is the code open source and/or available
> > for other projects?
>
> I'm pretty sure there is an O(log n) algorithm in the Numerical
> Recipes book. It requires you to construct the triangulation in a
> specific way (this allows one to set up a tree data structure of
> triangles nicely). There may be others that I am not aware of though.

I think this is the standard method used for point-in-triangulation tests for
Delaunay triangulations, as the search structure is created as the
triangulation is built. We need to support non-Delaunay triangulations that
are specified by the user, which requires a different approach.

Ian
On 15 November 2012 21:25, Chris Barker <chr...@no...> wrote:
> On Wed, Nov 14, 2012 at 1:50 AM, Ian Thomas <ian...@gm...> wrote:
> > I think the code used to determine which triangle contains a certain point
> > should be factored out into its own TriFinder class,
>
> +1 -- this is a generally useful feature. In fact, it would be nice if
> a lot of this were in a package that deals with triangular meshes,
> apart from MPL altogether (a scikit maybe?)

I hadn't considered any wider interest in it. From a selfish matplotlib point
of view, even if it was in, say, a scikit, we wouldn't want to add an external
dependency to the scikit, so we would still need a copy of the source code
within matplotlib anyway, just to do our interpolation for plotting purposes.
The situation would be just like the delaunay code, which exists in a scikit
but is included in the mpl source code to avoid the external dependency.

Once it is in mpl it is available for other projects to use if they wish,
subject to the BSD-style license. There would be a question of who is
responsible for maintaining it, as I don't particularly want to spread myself
thinly over other open source projects when there are a lot of additions to
mpl that I would like to do. For the other project there would be the danger
of the reverse of our delaunay situation: code that needs improvement but is
essentially unmaintained, causing problems for users and embarrassment for
developers.

> > I have a C++ TriFinder class
> > that I could modify to work within matplotlib, and it is O(log N) so should
> > be faster than your version for typical use cases.
>
> What algorithm does this use? Is the code open source and/or available
> for other projects?
>
> I'm working on a package for working with unstructured grids in
> general, and also have a use for "what triangle is this point in" code
> for other purposes -- and I haven't found a fast, robust code for this
> yet.

It is code I have written recently based on the trapezoidal map algorithm from
the book 'Computational Geometry: Algorithms and Applications' by M. de Berg
et al. It currently exists only on my main linux box, but it will be open
source within mpl for others to use as mentioned above (subject to acceptance
by the mpl developers, of course). It is an excellent book that I
wholeheartedly recommend to anyone with a passing interest in computational
geometry. The algorithm itself looks painful initially (as do many of the
algorithms in the book), but it reduces to a surprisingly small amount of
code.

For well-formed triangulations (i.e. no duplicate points, no zero-area
triangles and no overlapping triangles) the algorithm is 'sufficiently'
robust. It is not completely robust in the sense of Shewchuk's robust
predicates, but it is designed quite cleverly (or luckily) to avoid many of
the pitfalls that occur in naive point-in-triangle tests. For example, a naive
implementation of a point-in-triangle test for a point on or very near the
common edge of two adjacent triangles might return true for both triangles
(which is usually fine), or false for both (which is catastrophic). The
trapezoidal map avoids these problems by reducing the test to a single
point-line test, which is therefore guaranteed to return just one of the two
triangles.

It is possible to think of a situation that causes the algorithm to fail at a
triangle which has such a small area that the slopes of two of its sides are
within machine precision of each other, but it is also possible to use the
triangle connectivity to check for and resolve this problem. I haven't done so
yet and need to consider the tradeoff of effort required vs potential gain.
Someone who required guaranteed robustness could use Shewchuk's point-line
test with the algorithm. I would not do this in mpl, however, as the
correctness of the point-line test depends a lot on computer architecture and
compiler flags. This would get out of date quickly and would need keen
monitoring by someone with knowledge of and interest in the area, plus access
to a lot of different computer architectures and OSes. I fail on all of those
counts!

> > particularly as only a few days ago I committed to writing a triangular grid
> > interpolator for quad grids
>
> what is a triangular interpolator for quad grids? sounds useful, too.

That was poor English from me. I meant interpolating from a triangular grid
*to* a quad grid, typically to make use of the wide range of quad grid
plotting functions like contour, pcolor, etc.

Ian
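To make the robustness point above concrete, here is a hedged sketch of the kind of naive sign-based point-in-triangle test Ian is contrasting with the trapezoidal map; it is not the proposed matplotlib code, and the triangles and point are invented for illustration:

    def point_in_triangle(p, a, b, c):
        """Naive half-plane test: True if p is inside (or on an edge of) triangle abc.

        The sign of each 2D cross product says which side of an edge the point
        lies on; the point is inside if the three signs do not disagree.
        """
        def cross(o, q, r):
            return (q[0] - o[0]) * (r[1] - o[1]) - (q[1] - o[1]) * (r[0] - o[0])

        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)

    # Two triangles sharing the diagonal edge (0,0)-(1,1).
    tri1 = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    tri2 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]

    p = (0.3, 0.3)  # exactly on the shared edge
    print(point_in_triangle(p, *tri1), point_in_triangle(p, *tri2))  # True True
    # For a point merely *near* the edge, rounding in the two cross products
    # decides the answer independently for each triangle, and an unlucky case
    # can return False for both -- the "catastrophic" outcome that a single
    # point-line test, as in the trapezoidal map, avoids by construction.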
On Thu, Nov 15, 2012 at 3:25 PM, Chris Barker <chr...@no...> wrote:
> On Wed, Nov 14, 2012 at 1:50 AM, Ian Thomas <ian...@gm...> wrote:
> > I think the code used to determine which triangle contains a certain point
> > should be factored out into its own TriFinder class,
>
> +1 -- this is a generally useful feature. In fact, it would be nice if
> a lot of this were in a package that deals with triangular meshes,
> apart from MPL altogether (a scikit maybe?)
>
> > I have a C++ TriFinder class
> > that I could modify to work within matplotlib, and it is O(log N) so should
> > be faster than your version for typical use cases.
>
> What algorithm does this use? Is the code open source and/or available
> for other projects?

I'm pretty sure there is an O(log n) algorithm in the Numerical Recipes book.
It requires you to construct the triangulation in a specific way (this allows
one to set up a tree data structure of triangles nicely). There may be others
that I am not aware of though.

> I'm working on a package for working with unstructured grids in
> general, and also have a use for "what triangle is this point in" code
> for other purposes -- and I haven't found a fast, robust code for this
> yet.
>
> > particularly as only a few days ago I committed to writing a triangular grid
> > interpolator for quad grids
>
> what is a triangular interpolator for quad grids? sounds useful, too.
>
> -Chris
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA 98115        (206) 526-6317   main reception
>
> Chr...@no...

--
Damon McDougall
http://www.damon-is-a-geek.com
Institute for Computational Engineering Sciences
201 E. 24th St.
Stop C0200
The University of Texas at Austin
Austin, TX 78712-1229