
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Problem with migrating/saving 2.6.30 vanilla kernel with multiple VCPUs
From: Denis Chapligin <chollya@xxxxxxxxxxx>
Date: 12 Aug 2009 09:14:19 +0300
Delivery-date: 11 Aug 2009 23:15:06 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090810111620.6cba43e1@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: SatGate LLC
References: <20090810110424.11d96ac8@xxxxxxxxxxxxxxxxx> <20090810111620.6cba43e1@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi
On 10 Aug 2009 11:16:20 +0300
Denis Chapligin <chollya@xxxxxxxxxxx> wrote:
> Even more - when I set vcpus to 1 on this domain, it is possible to
> save/restore it, but the domain can migrate only once. The second
> migration fails with:
> dom0:~# xm migrate -l test otherdom0
> Error: /usr/lib/xen-3.4/bin/xc_save 47 3 0 0 1 failed
>
> and the following message in xm dmesg:
>
> (XEN) mm.c:806:d0 Error getting mfn 10d679 (pfn 133b6) from L1 entry
> 800000010d679225 for l1e_owner=0, pg_owner=3 
> (XEN) mm.c:806:d0 Error getting mfn 10d679 (pfn 133b6) from L1 entry
> 800000010d679225 for l1e_owner=0, pg_owner=3
>
With domU kernel 2.6.30.2, taken from http://x17.eu/xen/, it is
possible to migrate a domain with 1 vcpu. With more than one, it
migrates well, but hangs on disk operations within a couple of seconds
after migration. Right after migration (before the hang) the following
message is printed to the domU's console:
------------[ cut here ]------------
WARNING: at kernel/irq/manage.c:259 __enable_irq+0x56/0x79()
Unbalanced enable for IRQ 776
Modules linked in: ipv6
Pid: 2022, comm: suspend Tainted: G W 2.6.30.2 #1
Call Trace:
 [<c01564a3>] ? __enable_irq+0x56/0x79
 [<c01564a3>] ? __enable_irq+0x56/0x79
 [<c012561b>] warn_slowpath_common+0x59/0x94
 [<c01564a3>] ? __enable_irq+0x56/0x79
 [<c0125693>] warn_slowpath_fmt+0x26/0x28
 [<c01564a3>] __enable_irq+0x56/0x79
 [<c01584d4>] resume_device_irqs+0x59/0x6d
 [<c03d5790>] ? xen_resume_notifier+0x0/0x21
 [<c03cd9f9>] device_power_up+0x8d/0x94
 [<c03d6264>] __xen_suspend+0xaf/0x141
 [<c01275dc>] ? daemonize+0x1cd/0x223
 [<c03d5790>] ? xen_resume_notifier+0x0/0x21
 [<c03d5c72>] xen_suspend+0x48/0xc6
 [<c03d5c2a>] ? xen_suspend+0x0/0xc6
 [<c010461b>] kernel_thread_helper+0x7/0x10
---[ end trace 5ed28e66d100536c ]---
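
For context, as far as I understand this warning comes from the
disable_irq()/enable_irq() nesting check: each disable_irq() increments
a per-IRQ depth counter and each enable_irq() decrements it, so
re-enabling an IRQ whose depth is already 0 (which is apparently what
the resume path does here) is reported as unbalanced. A minimal sketch
of that idea follows (not the actual kernel/irq/manage.c code; the
names below are made up for illustration):

#include <stdio.h>

/* Sketch of the nesting check behind "Unbalanced enable for IRQ n".
 * disable_irq() bumps a per-IRQ depth counter, enable_irq() drops it,
 * and enabling an IRQ that was never disabled triggers the warning. */
struct irq_desc_sketch {
        unsigned int depth;             /* nesting level of disables */
};

static void disable_irq_sketch(struct irq_desc_sketch *desc)
{
        desc->depth++;                  /* each disable nests one level */
}

static void enable_irq_sketch(struct irq_desc_sketch *desc, unsigned int irq)
{
        if (desc->depth == 0) {
                /* enable without a matching disable */
                printf("Unbalanced enable for IRQ %u\n", irq);
                return;
        }
        desc->depth--;                  /* matching enable unmasks again */
}

int main(void)
{
        struct irq_desc_sketch desc = { .depth = 0 };

        enable_irq_sketch(&desc, 776);  /* never disabled -> warning */
        return 0;
}

So it looks like resume_device_irqs(), called after __xen_suspend(),
re-enables an IRQ (776 here, presumably a dynamically bound event
channel) that the suspend side did not leave disabled; whether that is
related to the disk hang I cannot say.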
Is there any solution that will help fix this issue?
-- 
 Denis Chapligin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel