Re: [Xen-devel] VT-d scalability issue

To: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] VT-d scalability issue
From: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Date: Thu, 11 Sep 2008 21:08:58 +0800
Cc: "Han, Weidong" <weidong.han@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Espen Skoglund <espen.skoglund@xxxxxxxxxxxxx>, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Delivery-date: Thu, 11 Sep 2008 06:09:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <DD74FBB8EE28D441903D56487861CD9D35AADC8D@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080911025823.GB13105@xxxxxxxxxxxxxxxxxxxxxx> <DD74FBB8EE28D441903D56487861CD9D35AADC8D@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.16 (2007-06-09)
On Thu, Sep 11, 2008 at 09:44:58AM +0100, Ian Pratt wrote:
> > > But how much of the degradation is due to IOTLB pressure and how
> > > much is due to vcpu pinning? If vcpu pinning doesn't give you much
> > > then why add the automatic pinning just to get a little improvement
> > > on older CPUs hooked up to a VT-d chipset?
> > 
> > Say the throughput of one pass-through domain is 100%: without vcpu
> > pinning, the average throughput of 8 pass-through domains is 59%;
> > with vcpu pinning, the average is 95%.
> > 
> > So you can see how much vcpu pinning contributes to the performance.
>
> For comparison, what are the results if you use a Penryn with wbinvd
> exit support?
The Penryn system does not have enough processors for the scalability
test, so I have no data :(
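
FWIW, whether a box supports wbinvd exiting can be checked from dom0
Linux without a scalability run. Below is a minimal sketch under these
assumptions: the Linux "msr" driver is loaded (modprobe msr), and, per
the Intel SDM, "WBINVD exiting" is bit 6 of the secondary
processor-based VM-execution controls, whose allowed-1 settings are
reported in the high 32 bits of MSR IA32_VMX_PROCBASED_CTLS2 (0x48b).
This is what Xen tests as cpu_has_wbinvd_exiting.

    /* Sketch: report whether the CPU can take a VM exit on WBINVD.
     * Reading MSR 0x48b faults (#GP -> EIO from the msr driver) on
     * CPUs without secondary execution controls; that path is treated
     * as "not supported". */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MSR_IA32_VMX_PROCBASED_CTLS2  0x48b
    #define SECONDARY_EXEC_WBINVD_EXITING (1u << 6)

    int main(void)
    {
        uint64_t val;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if ( fd < 0 ||
             pread(fd, &val, sizeof(val),
                   MSR_IA32_VMX_PROCBASED_CTLS2) != sizeof(val) )
        {
            perror("rdmsr");
            printf("wbinvd exiting: no\n");
            return 1;
        }

        /* Allowed-1 settings live in the high 32 bits. */
        printf("wbinvd exiting: %s\n",
               ((val >> 32) & SECONDARY_EXEC_WBINVD_EXITING)
               ? "yes" : "no");
        return 0;
    }
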
>
> Thanks,
> Ian
>
> > >
> > > eSk
> > >
> > >
> > > > Randy (Weidong)
> > >
> > > >>
> > > >> [And talking of IOTLB pressure, why can't Intel document the
> > > >> IOTLB sizes in the chipset docs? Or even better, why can't these
> > > >> values be queried from the chipset?]
> > > >>
> > > >> eSk
> > > >>
> > > >>
> > > >> [Edwin Zhai]
> > > >>> Keir,
> > > >>> I have found a VT-d scalability issue and want some feedback.
> > > >>
> > > >>> When I assign a pass-through NIC to a Linux VM and increase the
> > > >>> number of VMs, the iperf throughput of each VM drops greatly.
> > > >>> Say we start 8 VMs on a machine with 8 physical CPUs and start 8
> > > >>> iperf clients, one connecting to each VM; the final result is
> > > >>> only 60% of the single-VM throughput.
> > > >>
> > > >>> Further investigation shows that vcpu migration causes a "cold"
> > > >>> cache for the pass-through domain. The following code in
> > > >>> vmx_do_resume tries to invalidate the original processor's cache
> > > >>> on migration if the domain has a pass-through device and the CPU
> > > >>> has no support for wbinvd vmexit.
> > > >>
> > > >>>     /* The domain has pass-through devices and the CPU cannot
> > > >>>      * trap wbinvd: flush the cache of the CPU this vcpu last
> > > >>>      * ran on. */
> > > >>>     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
> > > >>>     {
> > > >>>         int cpu = v->arch.hvm_vmx.active_cpu;
> > > >>>         if ( cpu != -1 )
> > > >>>             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi,
> > > >>>                              NULL, 1, 1);
> > > >>>     }
> > > >>
> > > >>> So we want to pin vcpus to free processors for domains with
> > > >>> pass-through devices during the creation process, just as we do
> > > >>> for NUMA systems; a rough sketch follows below.
> > > >>
> > > >>> What do you think of it? Or do you have other ideas?
> > > >>
> > > >>> Thanks,
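
For concreteness, a rough sketch of the creation-time pinning proposed
above. This is an untested illustration, not the actual patch:
find_unpinned_cpu() is a made-up placeholder, while has_arch_pdevs(),
cpu_has_wbinvd_exiting, for_each_vcpu() and vcpu_set_affinity() are
existing Xen primitives.

    /* Untested sketch of the proposal: at creation time, give every
     * vcpu of a pass-through domain a single-CPU affinity mask so it
     * never migrates and never needs the wbinvd IPI in vmx_do_resume.
     * find_unpinned_cpu() is hypothetical; it would pick a physical
     * CPU not yet dedicated to another pass-through domain. */
    static void pin_passthrough_vcpus(struct domain *d)
    {
        struct vcpu *v;

        /* Nothing to do without pass-through devices, or if the CPU
         * can trap wbinvd so the IPI flush is never taken. */
        if ( !has_arch_pdevs(d) || cpu_has_wbinvd_exiting )
            return;

        for_each_vcpu ( d, v )
        {
            cpumask_t mask = cpumask_of_cpu(find_unpinned_cpu());
            vcpu_set_affinity(v, &mask);
        }
    }

Until something like this is merged, the same effect can be had by
hand with "xm vcpu-pin <domain> <vcpu> <pcpu>", or with a cpus = "N"
line in the guest config file.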
-- 
best rgds,
edwin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel