WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] Re: revisit the super page support in HVM restore

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] Re: revisit the super page support in HVM restore
From: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Date: Wed, 19 Aug 2009 15:55:55 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 19 Aug 2009 00:56:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C6B1685E.1252C%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C6B1685E.1252C%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.17 (X11/20080914)
Keir Fraser wrote:
> On 19/08/2009 08:08, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
>> So how about this new method:
>> * Do not track each pfn inside a 2M page; instead, try our best to
>>   allocate a 2M page whenever the 2M page covering a pfn has not been
>>   allocated yet.
>> * There may be holes inside newly allocated 2M pages that are not
>>   synced in this batch, but we don't care, and assume these missing
>>   pfns will come in a future batch.
>> This new method is simple, as super page support for PV guests is
>> already in place.
> You will fail to restore a guest which has ballooned down its memory,
> as there will be 4k holes in its memory map.
I see. But the current PV guest code has the same issue: if superpages are enabled for a PV guest, allocate_mfn() in xc_domain_restore.c tries to allocate a 2M page for each pfn regardless of the holes. In my understanding this is an even more serious issue for PV guests, as they use the balloon driver more frequently. If we have to use this algorithm, we are back to my complicated code -- do you have any suggestion to simplify the logic?
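For clarity, the lazy allocation described above can be sketched in standalone C. This is only an illustration with made-up names (ensure_backing, populate_extent, the bitmap layout), not the actual libxc code: the first pfn that lands in an unallocated 2M extent triggers a best-effort 2M allocation, holes are ignored, and a single 4k page is the fallback when the 2M allocation fails.

```c
#include <stdint.h>

#define SUPERPAGE_SHIFT 9                   /* 2MB / 4kB = 512 pfns */

/* Count of populate calls, so the sketch can show each 2M extent
 * is allocated at most once. */
static int g_populate_calls;

/* Stand-in for the hypervisor populate call (order 9 = one 2M extent,
 * order 0 = one 4k page).  Returns 0 on success. */
static int populate_extent(unsigned long base_pfn, unsigned int order)
{
    (void)base_pfn; (void)order;
    g_populate_calls++;
    return 0;   /* pretend allocation always succeeds in this sketch */
}

/* Hypothetical restore-side state: one bit per 2M extent, set once the
 * extent has been populated.  No per-pfn tracking inside an extent. */
struct restore_ctx {
    uint8_t *sp_allocated;   /* bitmap indexed by pfn >> SUPERPAGE_SHIFT */
};

/*
 * Called once per pfn seen in a restore batch.  Individual pfns inside
 * a 2M extent are NOT tracked: the first pfn that falls into an
 * unallocated extent triggers a 2M allocation, and any holes are simply
 * assumed to be filled by later batches.  Returns 0 on success.
 */
static int ensure_backing(struct restore_ctx *ctx, unsigned long pfn)
{
    unsigned long sp = pfn >> SUPERPAGE_SHIFT;

    if (ctx->sp_allocated[sp / 8] & (1u << (sp % 8)))
        return 0;            /* covering 2M extent already populated */

    if (populate_extent(sp << SUPERPAGE_SHIFT, SUPERPAGE_SHIFT) == 0) {
        ctx->sp_allocated[sp / 8] |= 1u << (sp % 8);
        return 0;
    }

    /* 2M allocation failed: fall back to a single 4k page for this pfn
     * only, and leave the extent unmarked so later pfns retry. */
    return populate_extent(pfn, 0);
}
```

As Keir points out below, the weakness is exactly the lack of per-pfn tracking: a ballooned-down guest leaves 4k holes, yet this sketch still backs the whole 2M extent.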
Thanks,
> You will allocate 2MB superpages despite these holes, which do not get
> fixed up until the end of the restore process, and you will run out of
> memory on the host, or run up against the guest's maxmem limit.
>
>  -- Keir
--
best rgds,
edwin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
