WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow

To: "DOGUET Emmanuel" <Emmanuel.DOGUET@xxxxxxxx>, "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
From: "DOGUET Emmanuel" <Emmanuel.DOGUET@xxxxxxxx>
Date: Wed, 25 Feb 2009 18:03:18 +0100
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>, Joris Dobbelsteen <joris@xxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 25 Feb 2009 09:04:26 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <7309E5BCEDC4DC4BA820EF9497269EAD0461B295@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <7309E5BCEDC4DC4BA820EF9497269EAD0461B244@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902120602t1be864acm684fbe6b8f0f18aa@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B246@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902122011x541c63eewe33fe0ef922cd0c9@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24D@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24F@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902132122n409d71ceg2c19e3ec70f52f45@xxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B292@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B295@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmOZD92bBQc2PkGRXO88dTpflQeLgIEGQ+wAAhDz9AAMFhFIA==
Thread-topic: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I have finished my tests on 3 servers. On each one we lose some bandwidth with
Xen. Across our 10 platforms we always lose some bandwidth; I think that is
normal, and it is probably only the benchmarking method that differs.
I have run write-only benchmarks comparing hardware and software RAID under Xen
(see attachment).
Linux software RAID is always faster than the HP RAID controller. I still need
to try the "512MB + write cache" option on the HP controller.
So my problem seems to be here.
-------------------------
HP DL 380
Quad core
-------------------------
Test: dd if=/dev/zero of=TEST bs=4k count=1250000
                        Hardware RAID 5   Hardware RAID 5   Software RAID 5   Software RAID 5
                        4 x 146G          8 x 146G          4 x 146G          8 x 146G

dom0 (1024MB, 1 CPU)    32 MB/s           22 MB/s           88 MB/s (*)       144 MB/s (*)
domU (512MB, 1 CPU)     8 MB/s            5 MB/s            34 MB/s           31 MB/s
domU (4096MB, 2 CPU)    --                7 MB/s            51 MB/s           35 MB/s

(*) I don't understand this difference.
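[Editorial note: dd from /dev/zero measures sequential write throughput, but without forcing a flush the Linux page cache can inflate the reported rate, the "memory cache" caveat mentioned later in this thread. A minimal sketch of the same benchmark with the cache accounted for; the file name TEST follows the command above:]

```shell
# Same sequential-write benchmark as above, but conv=fdatasync makes dd
# call fdatasync() before reporting, so the MB/s figure includes the
# time needed to flush dirty pages to disk.
dd if=/dev/zero of=TEST bs=4k count=1250000 conv=fdatasync

# Alternative: bypass the page cache entirely with O_DIRECT
# (direct I/O usually wants a larger, aligned block size).
dd if=/dev/zero of=TEST bs=1M count=4880 oflag=direct

# Clean up the benchmark file afterwards.
rm -f TEST
```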
Does this performance look reasonable to you?
 Best regards.
>-----Original message-----
>From: DOGUET Emmanuel
>Sent: Tuesday, 24 February 2009 17:50
>To: DOGUET Emmanuel; Fajar A. Nugraha
>Cc: xen-users; Joris Dobbelsteen
>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
>native performance: Xen I/O is definitely super super super slow
>
>To summarize:
>
>on RAID 0:
>
> dom0: 80 MB/s   domU: 56 MB/s   loss: 30%
>
>on RAID 1:
>
> dom0: 80 MB/s   domU: 55 MB/s   loss: 32%
>
>on RAID 5:
>
> dom0: 30 MB/s   domU: 9 MB/s    loss: 70%
>
>
>So the loss seems to grow "exponentially"?
>
>
>
>>-----Original message-----
>>From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
>>[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On behalf of
>>DOGUET Emmanuel
>>Sent: Tuesday, 24 February 2009 14:22
>>To: Fajar A. Nugraha
>>Cc: xen-users; Joris Dobbelsteen
>>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
>>native performance: Xen I/O is definitely super super super slow
>>
>>
>>I have made another test on another server (DL 380)
>>
>>And same thing!
>>
>>I always use this test:
>>
>>dd if=/dev/zero of=TEST bs=4k count=1250000
>>
>>(be careful with the memory cache)
>>
>>
>>TEST WITH 2 RAID 5 ARRAYS (system included on RAID 5, 3 x 146G + 3 x 146G)
>>---------------------------------------------------------------
>>
>> dom0: 1 GB RAM, 1 CPU, 2 RAID 5 arrays
>>
>> rootvg (c0d0p1): 4596207616 bytes (4.6 GB) copied, 158.284 seconds, 29.0 MB/s
>> datavg (c0d1p1): 5120000000 bytes (5.1 GB) copied, 155.414 seconds, 32.9 MB/s
>>
>>domU: 512M, 1CPU on System LVM/RAID5 (rootvg)
>>
>> 5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s
>>
>>domU: 512M, 1CPU on DATA LVM/RAID5 (datavg)
>>
>> 5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s
>>
>>domU: 512M, 1 CPU on same RAID without LVM
>>
>> 5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s
>>
>>
>>TEST WITH RAID 0 (dom0 system on RAID 1)
>>---------------------------------------
>>
>>dom0: 1 GB RAM, 1 CPU
>>
>> on system (RAID1):
>> 3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s
>>
>> on direct HD (RAID 0 on cciss), no LVM
>> 5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s
>>
>>dom0: 4 GB RAM, 4 CPU
>>
>>
>>
>>domU: 4 GB, 4 CPU
>>
>> on direct HD (RAID 0), no LVM.
>> 5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s
>>
>>
>>domU: 4 GB, 4 CPU, same HD but with one LVM volume on it
>>
>> 5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s
>>
>>
>>TEST with only ONE RAID 5 (6 x 146G)
>>------------------------------------
>>
>>dom0: 1024 MB - 1 CPU (RHEL 5.3)
>>
>> 5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s
>>
>>
>>512MB - 1 CPU
>> 5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s
>>
>>
>>512MB - 1 CPU - ONLY 1 VDB [LVM] (root, no swap)
>>
>> (too slow... stopped it :P)
>> 4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s
>>
>>512MB - 1 CPU - On a file (root, no swap)
>>
>> 1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s
>>
>>4GB - 2 CPU
>> 5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s
>>
>>
>>
>>
>>>-----Original message-----
>>>From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx]
>>>Sent: Saturday, 14 February 2009 06:23
>>>To: DOGUET Emmanuel
>>>Cc: xen-users
>>>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native
>>>performance: Xen I/O is definitely super super super slow
>>>
>>>On 13 February 2009, DOGUET Emmanuel <Emmanuel.DOGUET@xxxxxxxx> wrote:
>>>>
>>>>
>>>> I have mounted the domU partition on dom0 for testing and it's OK.
>>>> But the same partition on the domU side is slow.
>>>>
>>>> Strange.
>>>
>>>Strange indeed. At least that ruled out hardware problems :)
>>>Could you try with a "simple" domU?
>>>- 1 vcpu
>>>- 512 M memory
>>>- only one vbd
>>>
>>>This should isolate whether or not the problem is in your particular
>>>domU (e.g. some config parameter actually making the domU slower).
>>>
>>>Your config file should have only a few lines, like this:
>>>
>>>memory = "512"
>>>vcpus=1
>>>disk = ['phy:/dev/rootvg/bdd-root,xvda1,w' ]
>>>vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
>>>vfb =['type=vnc']
>>>bootloader="/usr/bin/pygrub"
>>>
>>>Regards,
>>>
>>>Fajar
>>>
>>
>>_______________________________________________
>>Xen-users mailing list
>>Xen-users@xxxxxxxxxxxxxxxxxxx
>>http://lists.xensource.com/xen-users 
>>

Copyright ©, Citrix Systems Inc. All rights reserved. Legal and Privacy
This site is hosted by Citrix.
