WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] memguard_guard_stack()

To: <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] memguard_guard_stack()
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: 01 May 2006 13:36:16 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Hmm, but I found this precisely because we saw a double fault due to a stack 
overflow. Admittedly this was in the context of one of these IPI storms during 
shutdown that were fixed previously, but even that shouldn't result in a stack 
overflow, should it? Jan
>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 04/28/06 6:30 PM >>>
On 28 Apr 2006, at 17:03, Jan Beulich wrote:
> Is it really intended that the stack size, specifically bumped to 4 pages 
> in include/asm-x86/config.h for x86-64 when debugging, gets shrunk to a 
> single page in memguard_guard_stack()? To me it would seem much more 
> reasonable if indeed only the first page (or at most, the first two pages) 
> was used as a guard page here.
The only reason that the stack is 4 pages on a 64-bit debug build is 
because we put syscall-entry trampolines at the start of the per-cpu 
stack area. Therefore simply allocating 2 pages for a debug stack and 
then removing the mapping of the first page, as we do on i386, does not 
work -- that would unmap the trampolines! Instead we allocate 4 pages 
(next power of two) and zap the middle two page mappings. This has the 
desirable effect of placing the guard between the trampolines and the 
actual stack (otherwise the trampolines would get overwritten before 
the guard page gets trodden on!).
It should never be possible for Xen to overflow 4kB of stack. Very 
little is done in interrupt contexts so we don't have the overflow 
problems that Linux has suffered.
 -- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

