Re: [Xen-users] xen storage options - plase advise

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] xen storage options - plase advise
From: James Pifer <jep@xxxxxxxxxxxxxxxx>
Date: Thu, 04 Mar 2010 10:38:00 -0500
Delivery-date: Thu, 04 Mar 2010 07:38:52 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4B8FCF3A.4000103@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1267644499.14308.16.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <1267648139.14030.4.camel@sl300 > <1267650467.14308.23.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <1267651543.14308.26.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <20100303214623.GB10956@xxxxxxxxxxxxxxxxxxx> <1267655677.14308.41.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <1267656732.14030.12.camel@sl300 > <1267659860.14308.51.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <1267661802.14030.32.camel@sl300 > <1267663113.14308.61.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <2aa6fadf73370c5d9507a8f6161f9225.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <1267706710.14308.72.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <4B8FC7E5.6060504@xxxxxxxxxxx> <1267715162.14308.77.camel@xxxxxxxxxxxxxxxxxxxxxxxx> <4B8FCF3A.4000103@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Thu, 2010-03-04 at 10:18 -0500, John Madden wrote:
> > Nobody has answered the question of disk space usage. Let's say I have
> > five Windows 2008 servers. Right now they are using file-based, growable
> > storage. 
> > 
> > They each have a disk0 that is growable and partitioned like:
> > c: = 25 GB
> > e: = 175 GB (using round numbers)
> > 
> > Using cLVM, would I be using 1 TB of storage for these five domUs?
>
> Of course -- you aren't gaining anything magic by using one disk 
> technology over another unless you start talking about de-duplication. 
> If you want the benefits of the "auto-growing sparse files" that you're 
> currently using, consider allocating the amount of storage you actually 
> need and growing it (via lvm) as you go.
>
> John
>
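(To sanity-check the numbers: 5 x (25 GB + 175 GB) = 5 x 200 GB = 1 TB,
so fully allocated that really is the whole terabyte. If I follow John's
suggestion, I assume I would initially create each volume only as big as
the guest actually needs, e.g. something like this on the dom0 -- the
volume and VG names below are just made up:

lvcreate -L 60G -n win2008-01-disk0 vg_san

and then grow it later.)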
OK, so if I want to keep using "auto-growing sparse files", I would do
the following:
1) Assign the same vdisk from the SAN to multiple servers. 
2) Create an LVM volume group (and logical volumes) on this disk. 
3) Each server would run these commands so it can see the volume group
and activate the logical volumes:
pvscan 
vgscan 
vgchange -ay
4) If I need to expand the LVM disk due to growth, I would extend the
vdisk on the SAN, then use LVM to grow the space (rough example
commands below). 
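(For step 4, I assume the dom0 side would look roughly like this once
the SAN has grown the vdisk -- the device and volume names are only
placeholders:

echo 1 > /sys/block/sdX/device/rescan     # make the kernel see the larger vdisk
pvresize /dev/sdX                         # tell LVM the physical volume grew
lvextend -L +100G /dev/vg_san/win2008-01-disk0   # grow the logical volume

and the partition/filesystem inside the Windows guest would still need
to be extended separately.)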
Essentially this gives me shared storage without any clustering, which
would allow me to live migrate domUs between servers (rough example
config below). 
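(In each domU config I would then expect to point the disk at the shared
logical volume, e.g. something like this -- again, the names are just
examples:

disk = [ 'phy:/dev/vg_san/win2008-01-disk0,hda,w' ]

so every host sees the same device path when a guest migrates.)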
Is that correct? Am I missing anything?
Thanks,
James
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users