WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] [PATCH] xend: Fix non-contiguous NUMA node assignment

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] xend: Fix non-contiguous NUMA node assignment
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Fri, 15 Jan 2010 14:28:31 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, yunhong.jiang@xxxxxxxxx
Delivery-date: Fri, 15 Jan 2010 05:29:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.21 (X11/20090329)
Hi,
it seems that I missed a point in this whole addition of max_node_id. I see the difference in the Xen HV part: nr_nodes was replaced with max_node_id in physinfo_t (and xc_physinfo_t, respectively). But where does this value help in xend? There is not a single Python reference to the physinfo() max_node_id field; instead, all functions use the old (but now bogus) nr_nodes variable.

So in the attached patch I kept the xc.physinfo() returned dictionary with only an nr_nodes field, calculated by simply adding 1 to the max_node_id from libxc. Empty nodes can (and will) be detected by iterating through the node_to_cpus and node_to_memory lists. Nodes without memory should not be considered during guest memory allocation, but can still be used for further CPU affinity setting if the number of VCPUs exceeds the number of cores per node.

Please correct me if I am totally wrong on this, but this approach works much better in my case.
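To illustrate the detection scheme described above, here is a minimal sketch (not part of the patch) of how a Python consumer of xc.physinfo() could find the non-empty nodes. It assumes the dictionary keys 'nr_nodes', 'node_to_cpu', and 'node_to_memory' as built in xc.c, with node_to_cpu holding one CPU list per node and node_to_memory the free heap per node in KiB; the helper name and the sample dictionary are hypothetical.

```python
def nonempty_nodes(physinfo):
    """Return the IDs of nodes that have at least one CPU or some free memory.

    Assumes the xc.physinfo() dict shape: 'nr_nodes' == max_node_id + 1,
    'node_to_cpu' is a list of CPU lists, 'node_to_memory' a list of KiB values.
    """
    node_to_cpu = physinfo['node_to_cpu']
    node_to_memory = physinfo['node_to_memory']
    nodes = []
    for node in range(physinfo['nr_nodes']):
        # A node is empty only if it has no CPUs and no free memory.
        if node_to_cpu[node] or node_to_memory[node] != 0:
            nodes.append(node)
    return nodes

# Hypothetical sparse layout: node 1 is empty (no CPUs, no memory).
info = {'nr_nodes': 3,
        'node_to_cpu': [[0, 1], [], [2, 3]],
        'node_to_memory': [2048, 0, 2048]}
```

With nr_nodes reported as max_node_id + 1, such a scan keeps the node IDs stable on non-contiguous layouts, instead of collapsing them by counting only populated nodes.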
Regards,
Andre.
Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448 3567 12
----to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Karl-Hammerschmidt-Str. 34, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Andrew Bowd; Thomas M. McCoy; Giuliano Meroni
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632
diff -r db8a882f5515 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Thu Jan 14 14:11:25 2010 +0000
+++ b/tools/python/xen/lowlevel/xc/xc.c Fri Jan 15 14:20:05 2010 +0100
@@ -1078,7 +1078,7 @@
 #define MAX_CPU_ID 255
 xc_physinfo_t info;
 char cpu_cap[128], virt_caps[128], *p;
- int i, j, max_cpu_id, nr_nodes = 0;
+ int i, j, max_cpu_id;
 uint64_t free_heap;
 PyObject *ret_obj, *node_to_cpu_obj, *node_to_memory_obj;
 PyObject *node_to_dma32_mem_obj;
@@ -1115,7 +1115,6 @@
 node_to_dma32_mem_obj = PyList_New(0);
 for ( i = 0; i <= info.max_node_id; i++ )
 {
- int node_exists = 0;
 PyObject *pyint;
 
 /* CPUs. */
@@ -1127,14 +1126,12 @@
 pyint = PyInt_FromLong(j);
 PyList_Append(cpus, pyint);
 Py_DECREF(pyint);
- node_exists = 1;
 }
 PyList_Append(node_to_cpu_obj, cpus); 
 Py_DECREF(cpus);
 
 /* Memory. */
 xc_availheap(self->xc_handle, 0, 0, i, &free_heap);
- node_exists = node_exists || (free_heap != 0);
 pyint = PyInt_FromLong(free_heap / 1024);
 PyList_Append(node_to_memory_obj, pyint);
 Py_DECREF(pyint);
@@ -1145,13 +1142,10 @@
 PyList_Append(node_to_dma32_mem_obj, pyint);
 Py_DECREF(pyint);
 
- if ( node_exists )
- nr_nodes++;
 }
 
- ret_obj = Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s:s:s}",
- "nr_nodes", nr_nodes,
- "max_node_id", info.max_node_id,
+ ret_obj = Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+ "nr_nodes", info.max_node_id + 1,
 "max_cpu_id", info.max_cpu_id,
 "threads_per_core", info.threads_per_core,
 "cores_per_socket", info.cores_per_socket,
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
