[Xen-devel] xen-unstable dom0/1 smp schedule while atomic

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] xen-unstable dom0/1 smp schedule while atomic
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: 14 Jan 2005 11:01:37 -0600
User-agent: Mutt/1.5.6+20040907i
While running xen-unstable (2005-01-13 nightly) with dom0 and dom1 both built
with CONFIG_SMP=y, I got 271k of "scheduling while atomic" output. sshd was
running in dom0, and I remotely initiated an scp from dom0 to another remote
host (not dom1). I'll only post the first and last few "scheduling while
atomic" entries; let me know if someone wants the whole trace.
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [sys_read+126/128] sys_read+0x7e/0x80
 [work_resched+5/47] work_resched+0x5/0x2f
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [sys_read+126/128] sys_read+0x7e/0x80
 [work_resched+5/47] work_resched+0x5/0x2f
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [do_softirq+155/160] do_softirq+0x9b/0xa0
 [irq_exit+57/64] irq_exit+0x39/0x40
 [do_IRQ+34/48] do_IRQ+0x22/0x30
 [evtchn_do_upcall+190/320] evtchn_do_upcall+0xbe/0x140
 [work_resched+5/47] work_resched+0x5/0x2f
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [do_softirq+155/160] do_softirq+0x9b/0xa0
 [irq_exit+57/64] irq_exit+0x39/0x40
 [do_IRQ+34/48] do_IRQ+0x22/0x30
 [evtchn_do_upcall+190/320] evtchn_do_upcall+0xbe/0x140
 [work_resched+5/47] work_resched+0x5/0x2f
<snip>
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [__change_page_attr+203/624] __change_page_attr+0xcb/0x270
 [sys_sched_yield+89/112] sys_sched_yield+0x59/0x70
 [coredump_wait+63/176] coredump_wait+0x3f/0xb0
 [do_coredump+222/471] do_coredump+0xde/0x1d7
 [__dequeue_signal+296/432] __dequeue_signal+0x128/0x1b0
 [__dequeue_signal+245/432] __dequeue_signal+0xf5/0x1b0
 [dequeue_signal+53/160] dequeue_signal+0x35/0xa0
 [get_signal_to_deliver+599/848] get_signal_to_deliver+0x257/0x350
 [do_signal+103/288] do_signal+0x67/0x120
 [dnotify_parent+58/176] dnotify_parent+0x3a/0xb0
 [vfs_read+210/304] vfs_read+0xd2/0x130
 [fget_light+130/144] fget_light+0x82/0x90
 [sys_read+126/128] sys_read+0x7e/0x80
 [do_notify_resume+55/60] do_notify_resume+0x37/0x3c
 [work_notifysig+19/24] work_notifysig+0x13/0x18
 scheduling while atomic
 [schedule+1682/1696] schedule+0x692/0x6a0
 [wake_up_state+24/32] wake_up_state+0x18/0x20
 [wait_for_completion+148/224] wait_for_completion+0x94/0xe0
 [default_wake_function+0/32] default_wake_function+0x0/0x20
 [force_sig_specific+99/144] force_sig_specific+0x63/0x90
 [default_wake_function+0/32] default_wake_function+0x0/0x20
 [zap_threads+92/160] zap_threads+0x5c/0xa0
 [coredump_wait+169/176] coredump_wait+0xa9/0xb0
 [do_coredump+222/471] do_coredump+0xde/0x1d7
 [__dequeue_signal+296/432] __dequeue_signal+0x128/0x1b0
 [__dequeue_signal+245/432] __dequeue_signal+0xf5/0x1b0
 [dequeue_signal+53/160] dequeue_signal+0x35/0xa0
 [get_signal_to_deliver+599/848] get_signal_to_deliver+0x257/0x350
 [do_signal+103/288] do_signal+0x67/0x120
 [dnotify_parent+58/176] dnotify_parent+0x3a/0xb0
 [vfs_read+210/304] vfs_read+0xd2/0x130
 [fget_light+130/144] fget_light+0x82/0x90
 [sys_read+126/128] sys_read+0x7e/0x80
 [do_notify_resume+55/60] do_notify_resume+0x37/0x3c
 [work_notifysig+19/24] work_notifysig+0x13/0x18
note: sshd[20450] exited with preempt_count 1
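For reference, the warning comes from schedule(): it fires whenever the
scheduler is entered while preempt_count is nonzero, i.e. from inside an
atomic section, and the final "exited with preempt_count 1" line means sshd
died with the count still elevated. A minimal illustrative sketch of that
pattern (the function name is hypothetical, not code from the Xen or Linux
trees):

 /* Illustrative sketch only: entering the scheduler from inside an
  * atomic section is what produces the warnings above. */
 #include <linux/preempt.h>
 #include <linux/sched.h>

 static void example_atomic_sleep(void)
 {
         preempt_disable();   /* preempt_count goes to 1: atomic section */
         schedule();          /* sleeping here prints "scheduling while
                                 atomic" and dumps a call trace */
         preempt_enable();    /* never reached if the task dies first, so
                                 it exits "with preempt_count 1" */
 }
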
Ryan Harper