WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] question about disk performance in domU

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] question about disk performance in domU
From: xuehai zhang <hai@xxxxxxxxxxxxxxx>
Date: 21 Nov 2005 09:41:08 -0600
Cc: Tim Freeman <tfreeman@xxxxxxxxxxx>, Kate Keahey <keahey@xxxxxxxxxxx>, xen-usersl@xxxxxxxxxxxxxxxxxxx
Hi all,
When I ran experiments comparing an application's execution time in a domU (named cctest1) against a native Linux machine (named ccn10), I noticed the application executes faster in the domU. The host of the domU (named ccn9) and ccn10 are two nodes of the same cluster with identical hardware configurations. The domU (cctest1) is created by exporting loopback files from dom0 on ccn9 as VBD backends. The application execution logs suggested there might be some disk I/O difference between cctest1 and ccn10, so I did some disk performance profiling with "hdparm" on cctest1 (domU), ccn10 (native Linux), ccn9 (dom0), and ccn9 (native Linux). I also checked the "DMA" configuration information in the output of dmesg. I tried to run "hdparm -i" and "hdparm -I", but they didn't work; it seems they don't work with SCSI disks. The results follow below.
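To compare the four machines more directly than by eyeballing the ten individual runs, one option is to pipe the hdparm output through a small awk filter that averages the buffered-read throughput. This is just a sketch of how I reduced the numbers, assuming the standard "Timing buffered disk reads: ... = N MB/sec" output format of hdparm -tT:

```shell
#!/bin/sh
# Average the "Timing buffered disk reads" throughput from hdparm -tT output.
# Reads hdparm output on stdin and prints the mean MB/sec over all runs.
awk '/buffered disk reads/ { sum += $(NF-1); n++ }
     END { if (n) printf "%.2f MB/sec over %d runs\n", sum / n, n }'
```

Usage would be, e.g., `for i in `seq 1 10`; do hdparm -tT /dev/sda1; done | ./avg.sh` (run as root, since hdparm needs raw device access).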
Thanks in advance for your help.
Best,
Xuehai
1. cctest1 (domU)
**************************************************************************************************
cctest1$ df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.5G 1.1G 306M 78% /
tmpfs 62M 4.0K 62M 1% /dev/shm
/dev/sda6 4.2G 3.6G 453M 89% /tmp
/dev/sda5 938M 205M 685M 23% /var
cctest1$ dmesg | grep DMA
 DMA zone: 101376 pages, LIFO batch:16
cctest1$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
 Timing cached reads: 512 MB in 2.00 seconds = 256.00 MB/sec
 Timing buffered disk reads: 44 MB in 3.00 seconds = 14.67 MB/sec
 Timing cached reads: 528 MB in 2.01 seconds = 262.69 MB/sec
 Timing buffered disk reads: 84 MB in 3.08 seconds = 27.27 MB/sec
 Timing cached reads: 520 MB in 2.00 seconds = 260.00 MB/sec
 Timing buffered disk reads: 120 MB in 3.06 seconds = 39.22 MB/sec
 Timing cached reads: 520 MB in 2.00 seconds = 260.00 MB/sec
 Timing buffered disk reads: 150 MB in 3.06 seconds = 49.02 MB/sec
 Timing cached reads: 536 MB in 2.00 seconds = 268.00 MB/sec
 Timing buffered disk reads: 178 MB in 3.17 seconds = 56.15 MB/sec
 Timing cached reads: 536 MB in 2.00 seconds = 268.00 MB/sec
 Timing buffered disk reads: 204 MB in 3.08 seconds = 66.23 MB/sec
 Timing cached reads: 532 MB in 2.00 seconds = 266.00 MB/sec
 Timing buffered disk reads: 228 MB in 3.13 seconds = 72.84 MB/sec
 Timing cached reads: 540 MB in 2.01 seconds = 268.66 MB/sec
 Timing buffered disk reads: 248 MB in 3.04 seconds = 81.58 MB/sec
 Timing cached reads: 540 MB in 2.00 seconds = 270.00 MB/sec
 Timing buffered disk reads: 266 MB in 3.06 seconds = 86.93 MB/sec
 Timing cached reads: 532 MB in 2.00 seconds = 266.00 MB/sec
 Timing buffered disk reads: 282 MB in 3.06 seconds = 92.16 MB/sec
**************************************************************************************************
2. ccn10 (native Linux)
**************************************************************************************************
ccn10$ df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.5G 1.3G 149M 90% /
tmpfs 252M 0 252M 0% /dev/shm
/dev/sda6 4.2G 3.6G 358M 92% /tmp
/dev/sda5 938M 706M 184M 80% /var
ccn10$ dmesg | grep DMA
 DMA zone: 4096 pages, LIFO batch:1
ccn10$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
 Timing cached reads: 516 MB in 2.00 seconds = 257.78 MB/sec
 Timing buffered disk reads: 62 MB in 3.03 seconds = 20.47 MB/sec
 Timing cached reads: 524 MB in 2.01 seconds = 261.00 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.61 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 257.65 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.61 MB/sec
 Timing cached reads: 524 MB in 2.00 seconds = 262.04 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.61 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 257.78 MB/sec
 Timing buffered disk reads: 62 MB in 3.02 seconds = 20.51 MB/sec
 Timing cached reads: 524 MB in 2.00 seconds = 261.78 MB/sec
 Timing buffered disk reads: 62 MB in 3.02 seconds = 20.52 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 257.78 MB/sec
 Timing buffered disk reads: 62 MB in 3.02 seconds = 20.51 MB/sec
 Timing cached reads: 524 MB in 2.00 seconds = 261.78 MB/sec
 Timing buffered disk reads: 62 MB in 3.02 seconds = 20.50 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 257.40 MB/sec
 Timing buffered disk reads: 64 MB in 3.09 seconds = 20.73 MB/sec
 Timing cached reads: 524 MB in 2.01 seconds = 260.87 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.61 MB/sec
**************************************************************************************************
3. ccn9 (dom0)
**************************************************************************************************
ccn9$ df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.5G 1.1G 306M 78% /
tmpfs 62M 4.0K 62M 1% /dev/shm
/dev/sda6 4.2G 3.6G 453M 89% /tmp
/dev/sda5 938M 205M 685M 23% /var
ccn9$ dmesg | grep DMA
 DMA zone: 32768 pages, LIFO batch:8
ccn9$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
 Timing cached reads: 504 MB in 2.00 seconds = 252.00 MB/sec
 Timing buffered disk reads: 60 MB in 3.14 seconds = 19.11 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 258.00 MB/sec
 Timing buffered disk reads: 62 MB in 3.15 seconds = 19.68 MB/sec
 Timing cached reads: 512 MB in 2.00 seconds = 256.00 MB/sec
 Timing buffered disk reads: 60 MB in 3.08 seconds = 19.48 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 258.00 MB/sec
 Timing buffered disk reads: 58 MB in 3.02 seconds = 19.21 MB/sec
 Timing cached reads: 516 MB in 2.01 seconds = 256.72 MB/sec
 Timing buffered disk reads: 60 MB in 3.12 seconds = 19.23 MB/sec
 Timing cached reads: 520 MB in 2.00 seconds = 260.00 MB/sec
 Timing buffered disk reads: 60 MB in 3.13 seconds = 19.17 MB/sec
 Timing cached reads: 520 MB in 2.01 seconds = 258.71 MB/sec
 Timing buffered disk reads: 60 MB in 3.13 seconds = 19.17 MB/sec
 Timing cached reads: 520 MB in 2.01 seconds = 258.71 MB/sec
 Timing buffered disk reads: 60 MB in 3.06 seconds = 19.61 MB/sec
 Timing cached reads: 516 MB in 2.01 seconds = 256.72 MB/sec
 Timing buffered disk reads: 60 MB in 3.14 seconds = 19.11 MB/sec
 Timing cached reads: 516 MB in 2.00 seconds = 258.00 MB/sec
 Timing buffered disk reads: 60 MB in 3.15 seconds = 19.05 MB/sec
**************************************************************************************************
4. ccn9 (native Linux)
**************************************************************************************************
ccn9$ df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.5G 1.1G 306M 78% /
tmpfs 62M 4.0K 62M 1% /dev/shm
/dev/sda6 4.2G 3.6G 453M 89% /tmp
/dev/sda5 938M 205M 685M 23% /var
ccn9$ dmesg | grep DMA
 DMA zone: 4096 pages, LIFO batch:1
ccn9$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
/dev/sda1:
 Timing cached reads: 492 MB in 2.01 seconds = 244.57 MB/sec
 Timing buffered disk reads: 62 MB in 3.10 seconds = 20.01 MB/sec
 Timing cached reads: 484 MB in 2.01 seconds = 241.07 MB/sec
 Timing buffered disk reads: 48 MB in 3.01 seconds = 15.95 MB/sec
 Timing cached reads: 484 MB in 2.00 seconds = 241.67 MB/sec
 Timing buffered disk reads: 62 MB in 3.03 seconds = 20.45 MB/sec
 Timing cached reads: 484 MB in 2.01 seconds = 241.31 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.57 MB/sec
 Timing cached reads: 480 MB in 2.01 seconds = 239.08 MB/sec
 Timing buffered disk reads: 62 MB in 3.03 seconds = 20.49 MB/sec
 Timing cached reads: 488 MB in 2.01 seconds = 243.31 MB/sec
 Timing buffered disk reads: 62 MB in 3.05 seconds = 20.31 MB/sec
 Timing cached reads: 484 MB in 2.01 seconds = 241.31 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.61 MB/sec
 Timing cached reads: 484 MB in 2.00 seconds = 241.67 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.59 MB/sec
 Timing cached reads: 488 MB in 2.01 seconds = 242.34 MB/sec
 Timing buffered disk reads: 62 MB in 3.01 seconds = 20.59 MB/sec
 Timing cached reads: 484 MB in 2.01 seconds = 240.35 MB/sec
 Timing buffered disk reads: 62 MB in 3.09 seconds = 20.09 MB/sec
**************************************************************************************************
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
