lua-l archive

More on thread memory contention [was: Re: Can't interrupt tight loops in Lua 5.4 anymore?]



G'day,
[Hint: Here Be Dragons!]
Joseph Clarke points out a possible memory race condition in a code
example posted to lua-l. While this is a valid concern, there's
more than one way that concurrent (thread shared-memory) access can
go awry.
I regularly read LWN (https://lwn.net/), and the following article stood
out for me as extremely relevant to some of my areas of interest. (I
also have to put in a Very Good Word for lwn.net in general: the
signal-to-noise ratio is extremely high.)
lwn.net is supported by subscriptions. I recommend supporting it!
--
 LWN.net needs you!
 Without subscribers, LWN would simply not exist. Please consider
 signing up for a subscription and helping to keep LWN publishing
--
Okay, on to the article from last year:
--
Title: Who's afraid of a big bad optimizing compiler?
URL: https://lwn.net/Articles/793253/
Date: July 15, 2019
Authors (many contributors):
 This article was contributed by Jade Alglave, Will Deacon,
 Boqun Feng, David Howells, Daniel Lustig, Luc Maranget,
 Paul E. McKenney, Andrea Parri, Nicholas Piggin, Alan Stern,
 Akira Yokosawa, and Peter Zijlstra.
Summary:
 When compiling Linux-kernel code that does a plain C-language
 load or store, as in "a=b", the C standard grants the compiler
 the right to assume that the affected variables are neither
 accessed nor modified by any other thread at the time of that
 load or store. The compiler is therefore permitted to carry
 out a large number of transformations, a couple of which were
 discussed in an earlier LWN article on ACCESS_ONCE(), and another
 of which is described on Dmitry Vyukov's KTSAN wiki page.
 However, our increasingly aggressive modern compilers produce
 increasingly surprising code optimizations. Some of these
 optimizations might be especially surprising to developers who
 assume that each plain C-language load or store will always
 result in an assembly-language load or store. Although this
 article is written for Linux kernel developers, many of these
 scenarios also apply to other concurrent code bases, keeping in
 mind that "concurrent code bases" also includes single-threaded
 code bases that use interrupts or signals.
--
Start of Article:
 The ongoing trend in compilers makes us wonder: "Just how
 afraid should we be?". The following sections should help
 answer this question:
 * Load tearing
 * Store tearing
 * Load fusing
 * Store fusing
 * Code reordering
 * Invented loads
 * Invented stores
 * Store-to-load transformations
 * Dead-code elimination
 * How real is all this?
The article includes eight quick-quiz questions, with the answers at the
end of the main section. For example, Quick Quiz #1 is:
 But we shouldn't be afraid at all for things like
 on-stack or per-CPU variables, right?
The article is very readable, and is more comprehensive than any other
treatment of shared-memory concurrency issues that I've seen. (I must
confess that, while I'm interested in working with code in these
scenarios, I haven't done any coding near this area, apart from
interrupt handling on microcontroller and DSP embedded systems, so
much of the article was an eye-opener for me.)
Highly, highly recommended -- and remember, Here Be Dragons.
cheers,
sur-behoffski (Brenton Hoff)
programmer, Grouse Software
