WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Pvops kernel: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
From: "Mark Hurenkamp" <mark.hurenkamp@xxxxxxxxx>
Date: 31 Jan 2010 01:15:16 +0100
Hi,

I've been running an old machine with Xen 3.2 reliably for quite a while,
but I'm now moving to a new setup and decided to give Xen 4 a spin with
the pvops kernel. However, when I boot (a recent linux-pvops kernel with
recent xen-unstable), I often see the following lockdep warning.

I hope this info is of some use, and I'd appreciate any tips on how to
get this fixed!

Regards,
Mark.
======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
2.6.31.6mh10 #22
------------------------------------------------------
khubd/42 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
 (&retval->lock){......}, at: [<ffffffff810f5d4b>] dma_pool_alloc+0x36/0x2fb
and this task is already holding:
 (&ehci->lock){-.....}, at: [<ffffffff81342791>] ehci_urb_enqueue+0xa5/0xd50
which would create a new lock dependency:
 (&ehci->lock){-.....} -> (&retval->lock){......}
but this new dependency connects a HARDIRQ-irq-safe lock:
 (&ehci->lock){-.....}
... which became HARDIRQ-irq-safe at:
 [<ffffffff8107e042>] __lock_acquire+0x231/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff8134144f>] ehci_irq+0x41/0x441
 [<ffffffff81329c09>] usb_hcd_irq+0x4a/0xa7
 [<ffffffff810a5964>] handle_IRQ_event+0x53/0x119
 [<ffffffff810a7628>] handle_level_irq+0x7d/0xce
 [<ffffffff8101574a>] handle_irq+0x8b/0x95
 [<ffffffff812893b3>] xen_evtchn_do_upcall+0xfa/0x194
 [<ffffffff81013fbe>] xen_do_hypervisor_callback+0x1e/0x30
 [<ffffffffffffffff>] 0xffffffffffffffff
to a HARDIRQ-irq-unsafe lock:
 (purge_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
... [<ffffffff8107e0b6>] __lock_acquire+0x2a5/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810efe9a>] __purge_vmap_area_lazy+0x54/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff8103587a>] T.829+0x2f/0x5a
 [<ffffffff810358d6>] fill_pte+0x31/0xa5
 [<ffffffff81035a8e>] set_pte_vaddr_pud+0x31/0x4d
 [<ffffffff8100c651>] xen_set_fixmap+0xfd/0x104
 [<ffffffff81949e75>] pvclock_init_vsyscall+0x7e/0x8e
 [<ffffffff8193ae63>] xen_time_init+0xe6/0x113
 [<ffffffff81937059>] start_kernel+0x38d/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
other info that might help us debug this:
2 locks held by khubd/42:
 #0: (usb_address0_mutex){+.+...}, at: [<ffffffff81325163>] hub_port_init+0x7c/0x7c3
 #1: (&ehci->lock){-.....}, at: [<ffffffff81342791>] ehci_urb_enqueue+0xa5/0xd50
the HARDIRQ-irq-safe lock's dependencies:
-> (&ehci->lock){-.....} ops: 0 {
 IN-HARDIRQ-W at:
 [<ffffffff8107e042>] __lock_acquire+0x231/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff8134144f>] ehci_irq+0x41/0x441
 [<ffffffff81329c09>] usb_hcd_irq+0x4a/0xa7
 [<ffffffff810a5964>] handle_IRQ_event+0x53/0x119
 [<ffffffff810a7628>] handle_level_irq+0x7d/0xce
 [<ffffffff8101574a>] handle_irq+0x8b/0x95
 [<ffffffff812893b3>] xen_evtchn_do_upcall+0xfa/0x194
 [<ffffffff81013fbe>] xen_do_hypervisor_callback+0x1e/0x30
 [<ffffffffffffffff>] 0xffffffffffffffff
 INITIAL USE at:
 [<ffffffff8107e12d>] __lock_acquire+0x31c/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff8134144f>] ehci_irq+0x41/0x441
 [<ffffffff81329c09>] usb_hcd_irq+0x4a/0xa7
 [<ffffffff810a5964>] handle_IRQ_event+0x53/0x119
 [<ffffffff810a7628>] handle_level_irq+0x7d/0xce
 [<ffffffff8101574a>] handle_irq+0x8b/0x95
 [<ffffffff812893b3>] xen_evtchn_do_upcall+0xfa/0x194
 [<ffffffff81013fbe>] xen_do_hypervisor_callback+0x1e/0x30
 [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff8255da38>] __key.35628+0x0/0x8
 -> (hcd_urb_list_lock){......} ops: 0 {
 INITIAL USE at:
 [<ffffffff8107e12d>] __lock_acquire+0x31c/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff81329ee5>] usb_hcd_link_urb_to_ep+0x28/0xa3
 [<ffffffff8132b3fa>] usb_hcd_submit_urb+0x30f/0xa09
 [<ffffffff8132c092>] usb_submit_urb+0x24b/0x2c8
 [<ffffffff8132d54c>] usb_start_wait_urb+0x62/0x1a8
 [<ffffffff8132d8c9>] usb_control_msg+0xf2/0x116
 [<ffffffff8132e9cc>] usb_get_descriptor+0x76/0xa8
 [<ffffffff8132ea74>] usb_get_device_descriptor+0x76/0xa6
 [<ffffffff8132ac05>] usb_add_hcd+0x463/0x67c
 [<ffffffff81338460>] usb_hcd_pci_probe+0x254/0x398
 [<ffffffff812263dd>] local_pci_probe+0x17/0x1b
 [<ffffffff81068986>] do_work_for_cpu+0x18/0x2a
 [<ffffffff8106c8eb>] kthread+0x91/0x99
 [<ffffffff81013e6a>] child_rip+0xa/0x20
 [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff8171d6d8>] hcd_urb_list_lock+0x18/0x40
 ... acquired at:
 [<ffffffff8107e858>] __lock_acquire+0xa47/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff81329ee5>] usb_hcd_link_urb_to_ep+0x28/0xa3
 [<ffffffff813427aa>] ehci_urb_enqueue+0xbe/0xd50
 [<ffffffff8132b95f>] usb_hcd_submit_urb+0x874/0xa09
 [<ffffffff8132c092>] usb_submit_urb+0x24b/0x2c8
 [<ffffffff8132d54c>] usb_start_wait_urb+0x62/0x1a8
 [<ffffffff8132d8c9>] usb_control_msg+0xf2/0x116
 [<ffffffff813253f4>] hub_port_init+0x30d/0x7c3
 [<ffffffff81328814>] hub_events+0x91a/0x1199
 [<ffffffff813290ca>] hub_thread+0x37/0x1ad
 [<ffffffff8106c8eb>] kthread+0x91/0x99
 [<ffffffff81013e6a>] child_rip+0xa/0x20
 [<ffffffffffffffff>] 0xffffffffffffffff
the HARDIRQ-irq-unsafe lock's dependencies:
-> (purge_lock){+.+...} ops: 0 {
 HARDIRQ-ON-W at:
 [<ffffffff8107e0b6>] __lock_acquire+0x2a5/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810efe9a>] __purge_vmap_area_lazy+0x54/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff8103587a>] T.829+0x2f/0x5a
 [<ffffffff810358d6>] fill_pte+0x31/0xa5
 [<ffffffff81035a8e>] set_pte_vaddr_pud+0x31/0x4d
 [<ffffffff8100c651>] xen_set_fixmap+0xfd/0x104
 [<ffffffff81949e75>] pvclock_init_vsyscall+0x7e/0x8e
 [<ffffffff8193ae63>] xen_time_init+0xe6/0x113
 [<ffffffff81937059>] start_kernel+0x38d/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 SOFTIRQ-ON-W at:
 [<ffffffff8107e0d7>] __lock_acquire+0x2c6/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810efe9a>] __purge_vmap_area_lazy+0x54/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff8103587a>] T.829+0x2f/0x5a
 [<ffffffff810358d6>] fill_pte+0x31/0xa5
 [<ffffffff81035a8e>] set_pte_vaddr_pud+0x31/0x4d
 [<ffffffff8100c651>] xen_set_fixmap+0xfd/0x104
 [<ffffffff81949e75>] pvclock_init_vsyscall+0x7e/0x8e
 [<ffffffff8193ae63>] xen_time_init+0xe6/0x113
 [<ffffffff81937059>] start_kernel+0x38d/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 INITIAL USE at:
 [<ffffffff8107e12d>] __lock_acquire+0x31c/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810efe9a>] __purge_vmap_area_lazy+0x54/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff8103587a>] T.829+0x2f/0x5a
 [<ffffffff810358d6>] fill_pte+0x31/0xa5
 [<ffffffff81035a8e>] set_pte_vaddr_pud+0x31/0x4d
 [<ffffffff8100c651>] xen_set_fixmap+0xfd/0x104
 [<ffffffff81949e75>] pvclock_init_vsyscall+0x7e/0x8e
 [<ffffffff8193ae63>] xen_time_init+0xe6/0x113
 [<ffffffff81937059>] start_kernel+0x38d/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff816e4888>] purge_lock.26563+0x18/0x40
 -> (vmap_area_lock){+.+...} ops: 0 {
 HARDIRQ-ON-W at:
 [<ffffffff8107e0b6>] __lock_acquire+0x2a5/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810f0180>] alloc_vmap_area+0x101/0x251
 [<ffffffff810f0405>] __get_vm_area_node+0x135/0x1e1
 [<ffffffff810f0d43>] __vmalloc_node+0x68/0x90
 [<ffffffff810f0ec2>] __vmalloc+0x15/0x17
 [<ffffffff81956863>] alloc_large_system_hash+0x112/0x1ca
 [<ffffffff81958782>] vfs_caches_init+0xa9/0x11b
 [<ffffffff819370ac>] start_kernel+0x3e0/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 SOFTIRQ-ON-W at:
 [<ffffffff8107e0d7>] __lock_acquire+0x2c6/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810f0180>] alloc_vmap_area+0x101/0x251
 [<ffffffff810f0405>] __get_vm_area_node+0x135/0x1e1
 [<ffffffff810f0d43>] __vmalloc_node+0x68/0x90
 [<ffffffff810f0ec2>] __vmalloc+0x15/0x17
 [<ffffffff81956863>] alloc_large_system_hash+0x112/0x1ca
 [<ffffffff81958782>] vfs_caches_init+0xa9/0x11b
 [<ffffffff819370ac>] start_kernel+0x3e0/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 INITIAL USE at:
 [<ffffffff8107e12d>] __lock_acquire+0x31c/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810f0180>] alloc_vmap_area+0x101/0x251
 [<ffffffff810f0405>] __get_vm_area_node+0x135/0x1e1
 [<ffffffff810f0d43>] __vmalloc_node+0x68/0x90
 [<ffffffff810f0ec2>] __vmalloc+0x15/0x17
 [<ffffffff81956863>] alloc_large_system_hash+0x112/0x1ca
 [<ffffffff81958782>] vfs_caches_init+0xa9/0x11b
 [<ffffffff819370ac>] start_kernel+0x3e0/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff816e4838>] vmap_area_lock+0x18/0x40
 -> (&rnp->lock){..-...} ops: 0 {
 IN-SOFTIRQ-W at:
 [<ffffffff8107e063>] __lock_acquire+0x252/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d477>] _spin_lock_irqsave+0x4e/0x88
 [<ffffffff810a8c88>] cpu_quiet+0x25/0x86
 [<ffffffff810a92f7>] __rcu_process_callbacks+0x74/0x236
 [<ffffffff810a9503>] rcu_process_callbacks+0x4a/0x4f
 [<ffffffff8105be72>] __do_softirq+0xe7/0x1cb
 [<ffffffff81013f6c>] call_softirq+0x1c/0x30
 [<ffffffff810154e0>] do_softirq+0x50/0xb1
 [<ffffffff8105b903>] irq_exit+0x53/0x95
 [<ffffffff81289431>] xen_evtchn_do_upcall+0x178/0x194
 [<ffffffff81013fbe>] xen_do_hypervisor_callback+0x1e/0x30
 [<ffffffffffffffff>] 0xffffffffffffffff
 INITIAL USE at:
 [<ffffffff8107e12d>] __lock_acquire+0x31c/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d477>] _spin_lock_irqsave+0x4e/0x88
 [<ffffffff81448f81>] rcu_init_percpu_data+0x2e/0x166
 [<ffffffff814490f7>] rcu_cpu_notify+0x3e/0x84
 [<ffffffff81953b25>] __rcu_init+0x147/0x178
 [<ffffffff8195141e>] rcu_init+0x9/0x17
 [<ffffffff81936f25>] start_kernel+0x259/0x427
 [<ffffffff81936621>] x86_64_start_reservations+0xac/0xb0
 [<ffffffff8193a28a>] xen_start_kernel+0x5dc/0x5e3
 [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff8237f600>] __key.20504+0x0/0x8
 ... acquired at:
 [<ffffffff8107e858>] __lock_acquire+0xa47/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d477>] _spin_lock_irqsave+0x4e/0x88
 [<ffffffff810a9131>] __call_rcu+0x8a/0x111
 [<ffffffff810a91e4>] call_rcu+0x15/0x17
 [<ffffffff810efe41>] __free_vmap_area+0x68/0x6d
 [<ffffffff810eff80>] __purge_vmap_area_lazy+0x13a/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff810e665f>] __pte_alloc_kernel+0x5a/0xb4
 [<ffffffff8120e0dd>] ioremap_page_range+0x19d/0x298
 [<ffffffff81036e08>] __ioremap_caller+0x2cf/0x358
 [<ffffffff81036f73>] ioremap_nocache+0x17/0x19
 [<ffffffff8196ce27>] pci_mmcfg_arch_init+0xab/0x143
 [<ffffffff8196db3a>] __pci_mmcfg_init+0x2c9/0x2fa
 [<ffffffff8196db76>] pci_mmcfg_late_init+0xb/0xd
 [<ffffffff8196311e>] acpi_init+0x1aa/0x265
 [<ffffffff8100a07d>] do_one_initcall+0x72/0x193
 [<ffffffff81936a42>] kernel_init+0x18b/0x1e5
 [<ffffffff81013e6a>] child_rip+0xa/0x20
 [<ffffffffffffffff>] 0xffffffffffffffff
 ... acquired at:
 [<ffffffff8107e858>] __lock_acquire+0xa47/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff8144d311>] _spin_lock+0x36/0x69
 [<ffffffff810eff69>] __purge_vmap_area_lazy+0x123/0x175
 [<ffffffff810f12af>] vm_unmap_aliases+0x180/0x18f
 [<ffffffff8100e308>] xen_alloc_ptpage+0x47/0x75
 [<ffffffff8100e373>] xen_alloc_pte+0x13/0x15
 [<ffffffff810e665f>] __pte_alloc_kernel+0x5a/0xb4
 [<ffffffff8120e0dd>] ioremap_page_range+0x19d/0x298
 [<ffffffff81036e08>] __ioremap_caller+0x2cf/0x358
 [<ffffffff81036f73>] ioremap_nocache+0x17/0x19
 [<ffffffff8196ce27>] pci_mmcfg_arch_init+0xab/0x143
 [<ffffffff8196db3a>] __pci_mmcfg_init+0x2c9/0x2fa
 [<ffffffff8196db76>] pci_mmcfg_late_init+0xb/0xd
 [<ffffffff8196311e>] acpi_init+0x1aa/0x265
 [<ffffffff8100a07d>] do_one_initcall+0x72/0x193
 [<ffffffff81936a42>] kernel_init+0x18b/0x1e5
 [<ffffffff81013e6a>] child_rip+0xa/0x20
 [<ffffffffffffffff>] 0xffffffffffffffff
stack backtrace:
Pid: 42, comm: khubd Not tainted 2.6.31.6mh10 #22
Call Trace:
 [<ffffffff8107dd42>] check_usage+0x28b/0x29c
 [<ffffffff8107da0b>] ? check_noncircular+0x92/0xc2
 [<ffffffff8107ddb0>] check_irq_usage+0x5d/0xbe
 [<ffffffff8107e752>] __lock_acquire+0x941/0xbc3
 [<ffffffff8107eaab>] lock_acquire+0xd7/0x103
 [<ffffffff810f5d4b>] ? dma_pool_alloc+0x36/0x2fb
 [<ffffffff8144d477>] _spin_lock_irqsave+0x4e/0x88
 [<ffffffff810f5d4b>] ? dma_pool_alloc+0x36/0x2fb
 [<ffffffff8101d0cb>] ? save_stack_trace+0x2f/0x4c
 [<ffffffff810f5d4b>] dma_pool_alloc+0x36/0x2fb
 [<ffffffff8107c671>] ? add_lock_to_list+0x7b/0xc1
 [<ffffffff8100ecf7>] ? xen_clocksource_read+0x21/0x23
 [<ffffffff8100eddc>] ? xen_sched_clock+0x14/0x8c
 [<ffffffff8134085e>] ehci_qh_alloc+0x28/0xd9
 [<ffffffff813422d7>] qh_append_tds+0x3d/0x452
 [<ffffffff8144d178>] ? _spin_unlock+0x2b/0x30
 [<ffffffff813427cf>] ehci_urb_enqueue+0xe3/0xd50
 [<ffffffff8121ff5e>] ? debug_dma_map_page+0xe4/0xf3
 [<ffffffff8132b0d8>] ? T.619+0xbc/0xcf
 [<ffffffff8132b95f>] usb_hcd_submit_urb+0x874/0xa09
 [<ffffffff8107cf1a>] ? mark_lock+0x2d/0x226
 [<ffffffff8100e7c1>] ? xen_force_evtchn_callback+0xd/0xf
 [<ffffffff8100eee2>] ? check_events+0x12/0x20
 [<ffffffff8132c092>] usb_submit_urb+0x24b/0x2c8
 [<ffffffff8106cf7c>] ? __init_waitqueue_head+0x3a/0x4e
 [<ffffffff8132d54c>] usb_start_wait_urb+0x62/0x1a8
 [<ffffffff8132c4e2>] ? usb_alloc_urb+0x1e/0x48
 [<ffffffff8100eecf>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff8132c4b4>] ? usb_init_urb+0x27/0x37
 [<ffffffff8132d8c9>] usb_control_msg+0xf2/0x116
 [<ffffffff8132536e>] ? hub_port_init+0x287/0x7c3
 [<ffffffff813253f4>] hub_port_init+0x30d/0x7c3
 [<ffffffff81328814>] hub_events+0x91a/0x1199
 [<ffffffff8100e7c1>] ? xen_force_evtchn_callback+0xd/0xf
 [<ffffffff8100eee2>] ? check_events+0x12/0x20
 [<ffffffff813290ca>] hub_thread+0x37/0x1ad
 [<ffffffff8106cc4a>] ? autoremove_wake_function+0x0/0x39
 [<ffffffff81329093>] ? hub_thread+0x0/0x1ad
 [<ffffffff8106c8eb>] kthread+0x91/0x99
 [<ffffffff81013e6a>] child_rip+0xa/0x20
 [<ffffffff810137d0>] ? restore_args+0x0/0x30
 [<ffffffff81013e60>] ? child_rip+0x0/0x20
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel