[Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking

To: "Anthony Liguori" <aliguori@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Date: 13 Mar 2007 21:02:23 -0000
References: <45F6FC68.3040207@xxxxxxxxxx>
> When thinking about multithreading the device model, it occurred to me
> that it's a little odd that we're doing a memcmp to determine which
> portions of the VRAM have changed. Couldn't we just use dirty page
> tracking in the shadow paging code? That should significantly lower
> the overhead of this, plus I believe the infrastructure is already
> mostly there in the shadow2 code.
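
(For context, the memcmp scan in question looks roughly like this; an
illustrative sketch in plain C, not the actual qemu-dm code, with
made-up names:)

/* Illustrative sketch (not the actual qemu-dm code): compare live
 * VRAM against a snapshot, chunk by chunk, and refresh the snapshot
 * for any chunk that changed. */
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE 4096

/* Returns 1 if chunk i differs from the snapshot (and updates the
 * snapshot so the next pass only catches new writes), else 0. */
static int vram_chunk_dirty(uint8_t *vram, uint8_t *snapshot, size_t i)
{
    uint8_t *live = vram + i * CHUNK_SIZE;
    uint8_t *old  = snapshot + i * CHUNK_SIZE;

    if (memcmp(live, old, CHUNK_SIZE) == 0)
        return 0;
    memcpy(old, live, CHUNK_SIZE);
    return 1;
}
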
Yep, it's been in the roadmap doc for quite a while. However, the log
dirty code isn't ideal for this. We'd need to extend it so that it can
be turned on for just a subset of the GFN range (we could use a Xen
rangeset for this).
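
Roughly the shape of bookkeeping I have in mind, as a minimal sketch;
the types and helper names here are made up for illustration, not the
shadow2 or rangeset API:

/* Minimal sketch (made-up names): log-dirty bookkeeping gated to a
 * GFN sub-range, so only the framebuffer pages are write-protected
 * and tracked. */
#include <stdbool.h>
#include <stdint.h>

struct gfn_range {
    uint64_t start;  /* first tracked GFN (inclusive) */
    uint64_t end;    /* last tracked GFN (inclusive)  */
};

/* Hypothetical tracked VRAM range: 1024 pages (4MB). */
static const struct gfn_range vram = { 0xf0000, 0xf0000 + 1023 };

static uint8_t dirty_bitmap[1024 / 8];  /* one bit per tracked page */

/* The shadow fault path would call this: GFNs outside the range skip
 * the log-dirty slow path entirely. */
static bool log_dirty_tracked(uint64_t gfn)
{
    return gfn >= vram.start && gfn <= vram.end;
}

static void mark_dirty(uint64_t gfn)
{
    if (!log_dirty_tracked(gfn))
        return;
    uint64_t bit = gfn - vram.start;
    dirty_bitmap[bit / 8] |= 1u << (bit % 8);
}
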
Even so, I'm not super keen on the idea of tearing down and rebuilding
1024 PTEs up to 50 times a second.
A lower-overhead solution would be to scan and reset the dirty bits on
the PTEs (followed by a global TLB flush). In the general case this is
tricky, as the framebuffer could be mapped by multiple PTEs; in
practice, I believe this doesn't happen for either Linux or Windows.
There's always the fallback of just returning 'all dirty' if the
heuristic is violated. It would be good to knock this up.
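
A minimal sketch of that scan, assuming a 4MB framebuffer mapped by a
single page table of 1024 32-bit PTEs; the names are illustrative, not
Xen code:

/* Illustrative sketch (not Xen code): walk the PTEs mapping the
 * framebuffer, record which pages were written since the last scan,
 * and clear the hardware dirty bit. */
#include <stdint.h>
#include <string.h>

#define PTE_DIRTY  (1u << 6)  /* x86 'D' bit, set by hardware on write */
#define FB_PTES    1024

static uint32_t fb_ptes[FB_PTES];  /* hypothetical: the mapping PTEs */

static void scan_and_reset_dirty(uint8_t bitmap[FB_PTES / 8],
                                 int mapping_heuristic_ok)
{
    if (!mapping_heuristic_ok) {
        /* Fallback: the framebuffer has (or may have) extra mappings
         * we aren't scanning, so report everything dirty. */
        memset(bitmap, 0xff, FB_PTES / 8);
        return;
    }

    memset(bitmap, 0, FB_PTES / 8);
    for (unsigned int i = 0; i < FB_PTES; i++) {
        if (fb_ptes[i] & PTE_DIRTY) {
            bitmap[i / 8] |= 1u << (i % 8);
            fb_ptes[i] &= ~PTE_DIRTY;  /* re-arm for the next scan */
        }
    }
    /* A real implementation must now do a global TLB flush: a cached
     * translation with D already set won't update the cleared PTE on
     * later writes, so stale TLB entries would hide dirtying. */
}
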
Best,
Ian