[Xen-devel] RFC: large system support - 128 CPUs

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RFC: large system support - 128 CPUs
From: Bill Burns <bburns@xxxxxxxxxx>
Date: Tue, 12 Aug 2008 14:41:10 -0400
Delivery-date: Tue, 12 Aug 2008 11:41:32 -0700
User-agent: Thunderbird 2.0.0.14 (X11/20080515)
There are a couple of issues with building the hypervisor
with max_phys_cpus=128 for x86_64. (Note that this was
on a 3.1 base, but unstable appears to have the same
issue, at least for the first part.)

The first is a build assertion: the page_info and
shadow_page_info structures get out of sync in size
because of the cpumask_t in the page_info structure.
A possible fix is to tack the following onto the end
of the shadow_page_info structure:
--- xen/arch/x86/mm/shadow/private.h.orig	2007-12-06 12:48:38.000000000 -0500
+++ xen/arch/x86/mm/shadow/private.h	2008-08-12 12:52:49.000000000 -0400
@@ -243,6 +243,12 @@ struct shadow_page_info
         /* For non-pinnable shadows, a higher entry that points at us */
         paddr_t up;
     };
+#if NR_CPUS > 64
+    /* Need to add some padding to match struct page_info size,
+     * if cpumask_t is larger than a long
+     */
+    u8 padding[sizeof(cpumask_t)-sizeof(long)];
+#endif
 };
 /* The structure above *must* be the same size as a struct page_info
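
As a sanity check on the arithmetic, here is a minimal stand-alone
sketch (not Xen code; the field layout is invented purely for
illustration) of why the size assertion trips once NR_CPUS goes past
64, and why padding by sizeof(cpumask_t)-sizeof(long) restores the
match:

/* Minimal sketch, not Xen code: the field layout below is invented just
 * to show the size arithmetic. With NR_CPUS > 64, cpumask_t needs more
 * than one unsigned long, so a structure embedding it grows, while a
 * sibling structure that overlays it stays the same size unless it is
 * padded by sizeof(cpumask_t) - sizeof(long). */
#include <assert.h>

#define NR_CPUS        128
#define BITS_PER_LONG  64

typedef struct {
    /* one bit per possible CPU, rounded up to whole longs */
    unsigned long bits[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];
} cpumask_t;

struct page_info_like {          /* stands in for struct page_info */
    unsigned long fields[3];
    cpumask_t tlbflush_mask;     /* this is what grows past 64 CPUs */
};

struct shadow_page_info_like {   /* stands in for struct shadow_page_info */
    unsigned long fields[3];
    unsigned long up;
#if NR_CPUS > 64
    /* same trick as the patch above: pad by the excess of cpumask_t */
    unsigned char padding[sizeof(cpumask_t) - sizeof(unsigned long)];
#endif
};

int main(void)
{
    /* The Xen build enforces this equality at compile time; here it is
     * a plain runtime assert for illustration. */
    assert(sizeof(struct page_info_like) ==
           sizeof(struct shadow_page_info_like));
    return 0;
}

With NR_CPUS=128 both structures come out at 40 bytes here; drop the
padding and the first one is 8 bytes larger, which is the kind of
mismatch the build assertion catches.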
The other issue is a runtime fault when trying to
bring up CPU 126. It seems the GDT space reserved is
not quite enough to hold the per-CPU entries. Crude
fix below (awaiting test results, so not sure that
this is sufficient):
--- xen/include/asm-x86/desc.h.orig	2007-12-06 12:48:39.000000000 -0500
+++ xen/include/asm-x86/desc.h	2008-07-31 13:19:52.000000000 -0400
@@ -5,7 +5,11 @@
  * Xen reserves a memory page of GDT entries.
  * No guest GDT entries exist beyond the Xen reserved area.
  */
+#if MAX_PHYS_CPUS > 64
+#define NR_RESERVED_GDT_PAGES 2
+#else
 #define NR_RESERVED_GDT_PAGES 1
+#endif
 #define NR_RESERVED_GDT_BYTES (NR_RESERVED_GDT_PAGES * PAGE_SIZE)
 #define NR_RESERVED_GDT_ENTRIES (NR_RESERVED_GDT_BYTES / 8)
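
To make the sizing concrete, a back-of-the-envelope sketch follows;
the per-CPU slot count and the number of static entries are
assumptions for illustration, not Xen's actual GDT layout:

/* Rough GDT capacity estimate. Assumptions (not taken from the Xen
 * source): each descriptor slot is 8 bytes, each physical CPU consumes
 * 4 slots (a 16-byte TSS descriptor plus a 16-byte LDT descriptor on
 * x86-64), and some slots are taken by static Xen/guest selectors. */
#include <stdio.h>

#define PAGE_SIZE       4096
#define DESC_BYTES      8
#define SLOTS_PER_CPU   4    /* assumed */
#define STATIC_ENTRIES  16   /* assumed */

int main(void)
{
    for (int pages = 1; pages <= 2; pages++) {
        int slots = pages * PAGE_SIZE / DESC_BYTES;
        int cpus  = (slots - STATIC_ENTRIES) / SLOTS_PER_CPU;
        printf("NR_RESERVED_GDT_PAGES=%d -> %d slots -> room for ~%d CPUs\n",
               pages, slots, cpus);
    }
    return 0;
}

Under those assumptions a single reserved page runs out somewhere in
the mid-120s, which lines up with the fault seen at CPU 126, while a
second page leaves plenty of headroom for 128.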
Bill
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
