[openstack-dev] [oslo][neutron][rootwrap] Performance considerations, sudo?
Joe Gordon
joe.gordon0 at gmail.com
Fri Mar 7 20:33:40 UTC 2014
On Fri, Mar 7, 2014 at 12:27 AM, Miguel Angel Ajo <majopela at redhat.com> wrote:
> I'm really happy to see that I'm not the only one concerned about
> performance.
>
> I'm reviewing the thread, and summarizing / replying to multiple people
> on the thread:
>
> Ben Nemec,
>
> * Thanks for pointing us to the previous thread about this topic:
>   http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html
> Rick Jones,
>
> * iproute commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8 upstream;
>   I have to check if it's on my system.
>
> * Very interesting investigation about sudo:
>   http://www.sudo.ws/repos/sudo/rev/e9dc28c7db60
>   This is as important as the bottleneck in rootwrap when you start
>   having lots of interfaces. Good catch!
> * To your question: my times are only from neutron-dhcp-agent &
>   neutron-l3-agent start to completion; system boot time is excluded
>   from the measurement (that's <1 min).
>
> * About the Linux networking folks not exposing API interfaces to avoid
>   lock-in: in the end they're already locked in with the command-line
>   interface, so if they made an API at the same level it shouldn't be
>   that bad... but of course, it's not free...
> Joe Gordon,
>
> * Yes, pypy start time is too slow, and I must definitely investigate
>   the RPython toolchain.
>
> * Ideally, I agree that an automated py->C solution would be the best
>   from the OpenStack project point of view. Have you had any experience
>   using such a toolchain? Could you point me to some example?
>
Sorry, I'm afraid I don't have experience with this or any examples.
> * shedskin seems to do this kind of translation, for a limited Python
>   subset, which would mean rewriting rootwrap's Python to accommodate
>   such limitations.
RPython is a subset of Python, so rewriting will be needed for pypy as well.
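To give a concrete idea of the shape such a rewrite takes, here is a minimal,
hypothetical sketch of an RPython translation target (the filter logic that
real rootwrap needs is elided; none of this is actual rootwrap code):

    # target.py -- hypothetical sketch of an RPython translation target
    # (compiled with the RPython toolchain shipped in the PyPy source tree)
    import os

    def entry_point(argv):
        # A real rootwrap would load its filter definitions and validate
        # argv against them here; this sketch just re-execs the command.
        if len(argv) < 2:
            return 1
        os.execv(argv[1], argv[1:])
        return 0

    def target(driver, args):
        # Hook the RPython toolchain looks for: return the program's entry point.
        return entry_point, None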
> If no tool offers the translation we need, or if the result is slow:
>
> I'm not against a rewrite of rootwrap in C/C++, if we have developers
> on the project with C/C++ experience, especially related to security.
> I have such experience, and I'm sure there are more around (even if
> not all OpenStack developers talk C). But that doesn't exclude
> maintaining a rootwrap in Python to foster innovation around the tool.
> (Here I agree with Vishvananda Ishaya.)
> Solly Ross,
>
> I haven't tried cython, but I will check it in a few minutes.
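For what it's worth, a quick way to try the Cython route is to compile the
existing module unchanged and time it; a minimal sketch (the module name
"rootwrap_main.py" is a placeholder, not the real entry point):

    # setup.py -- minimal Cython build sketch for a single module
    from setuptools import setup
    from Cython.Build import cythonize

    setup(
        name="rootwrap-cython-experiment",
        ext_modules=cythonize(["rootwrap_main.py"]),
    )

    # Build with: python setup.py build_ext --inplace

Note that a Cython-compiled module still runs inside a regular CPython
interpreter, so this would mainly show how much of the cost is rootwrap's own
code versus interpreter startup and imports.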
> Iwamoto Toshihiro,
>
> * Thanks for pointing us to "ip netns exec" too; I wonder if that's
>   related to the iproute upstream change Rick Jones was talking about.
> Cheers,
> Miguel Ángel.
> On 03/06/2014 09:31 AM, Miguel Angel Ajo wrote:
>> On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:
>>> At Mar 5, 2014 15:42:54 +0100,
>>> Miguel Angel Ajo wrote:
>>>> 3) I also find 10 minutes a long time to set up 192 networks/basic
>>>> tenant structures. I wonder if that time could be reduced by
>>>> converting system process calls into system library calls (I know we
>>>> don't have libraries for iproute, iptables?, and many other things...
>>>> but it's a problem that's probably worth looking at.)
>>>
>>> Try benchmarking
>>> $ sudo ip netns exec qfoobar /bin/echo
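A rough way to time that suggestion from Python (the qfoobar namespace is
assumed to already exist; this is only an illustrative sketch):

    # bench_netns.py -- rough timing sketch for the "ip netns exec" overhead
    import os
    import subprocess
    import time

    NETNS = "qfoobar"   # assumed to be pre-created with "ip netns add qfoobar"
    RUNS = 50

    devnull = open(os.devnull, "w")
    start = time.time()
    for _ in range(RUNS):
        subprocess.call(["sudo", "ip", "netns", "exec", NETNS, "/bin/echo"],
                        stdout=devnull)
    print("avg per call: %.3f s" % ((time.time() - start) / RUNS))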
>> You're totally right, that takes the same time as rootwrap itself. It's
>> another point to think about from the performance point of view.
>>
>> An interesting read:
>> http://man7.org/linux/man-pages/man8/ip-netns.8.html
>>
>> ip netns does a lot of mounts around to simulate a normal environment,
>> where a netns-aware application could avoid all this.
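For illustration, a netns-aware process could enter the namespace directly
with setns(2) and skip the mount setup that ip netns exec performs; a minimal
ctypes sketch, assuming the namespace was created by "ip netns add" (error
handling trimmed, names hypothetical):

    # setns_sketch.py -- entering a network namespace without "ip netns exec"
    import ctypes
    import ctypes.util
    import os

    CLONE_NEWNET = 0x40000000  # from <sched.h>

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    def enter_netns(name):
        # "ip netns add <name>" bind-mounts the namespace here.
        fd = os.open("/var/run/netns/%s" % name, os.O_RDONLY)
        try:
            if libc.setns(fd, CLONE_NEWNET) != 0:
                raise OSError(ctypes.get_errno(), "setns() failed")
        finally:
            os.close(fd)

    # Hypothetical usage: switch once, then issue as many operations as
    # needed without re-exec'ing through ip netns exec each time.
    # enter_netns("qdhcp-1234")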
>>> Network namespace switching costs almost as much as a rootwrap
>>> execution, IIRC.
>>>
>>> Execution coalescing is not enough in this case and we would need to
>>> change how Neutron issues commands, IMO.
>> Yes, one option could be to coalesce all calls that go into
>> a namespace into a shell script and run this in the
>> rootwrap > ip netns exec
>>
>> But we might find a mechanism to determine if some of the steps failed,
>> and what was the result / output, something like failing line + result
>> code. I'm not sure if we rely on stdout/stderr results at any time.
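One possible shape for that, purely as a sketch (all names are hypothetical,
not existing Neutron code): feed the coalesced commands to a single shell
under ip netns exec and have the script echo a sentinel with each step's
index and exit code, which the caller then parses.

    # batch_sketch.py -- reporting "failing line + result code" from a batch
    import subprocess

    def run_batch(netns, commands):
        lines = []
        for i, cmd in enumerate(commands):
            lines.append(cmd)
            lines.append('echo "__STEP__ %d $?"' % i)   # sentinel after each step
        script = "\n".join(lines)

        proc = subprocess.Popen(
            ["sudo", "ip", "netns", "exec", netns, "/bin/sh"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            universal_newlines=True)
        out, _ = proc.communicate(script)

        results = {}
        for line in out.splitlines():
            if line.startswith("__STEP__"):
                _, idx, rc = line.split()
                results[int(idx)] = int(rc)
        return results   # command index -> exit code

    # Hypothetical usage:
    # run_batch("qrouter-1234", ["ip link set tap0 up",
    #                            "ip addr add 10.0.0.1/24 dev tap0"])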
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev