This code was posted as an answer to Async file writer in .Net 3.5.
How would it be done better using Tasks or other new features in .Net 4.5?
public sealed class Logger : IDisposable
{
    private delegate void WriteMessage(string message);

    private static readonly Logger Instance = new Logger();
    private static object Locker = new object();
    private static readonly StreamWriter Writer =
        new StreamWriter("c:\\temp\\logtester.log", true);
    private static bool Disposed;

    private Logger()
    {
    }

    ~Logger()
    {
        Dispose(false);
    }

    public static void Log(string message)
    {
        WriteMessage action = Instance.MessageWriter;
        action.BeginInvoke(message, MessageWriteComplete, action);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private static void Dispose(bool disposing)
    {
        lock (Locker)
        {
            if (Disposed)
            {
                return;
            }
            if (disposing)
            {
                if (Writer != null)
                {
                    Writer.Dispose();
                }
            }
            Disposed = true;
        }
    }

    private static void MessageWriteComplete(IAsyncResult iar)
    {
        ((WriteMessage)iar.AsyncState).EndInvoke(iar);
    }

    private void MessageWriter(string message)
    {
        lock (Locker)
        {
            Writer.WriteLine(message);
        }
    }
}
2 Answers
Before I explain how I would improve that code, I'll first mention two problems I will not try to fix (because I'm not sure how to do it properly):
Disposable singleton is a really weird combination. If something should be available anytime and anywhere, it should be a singleton. On the other hand, if something has a limited lifetime and holds an expensive resource (like an open file), it should be disposable. Those two don't go together very well.
Finalizer should execute as fast as possible. It certainly shouldn't take a lock that may not be released for quite some time (because there may be many threads also holding the lock).
The problems my solution will solve:
- Writing to the log may not be in the correct order; lock doesn't guarantee any ordering.
- Dispose() may not write all outstanding messages. This is again because lock doesn't guarantee ordering.
- If a write attempt happens after Dispose() (either because Log() was called after Dispose(), or because of the ordering problems mentioned above), an exception will be thrown on the ThreadPool. This exception can't be caught from outside of Logger and will bring down the whole application. (More reason not to have disposable singletons.)
- If there are many calls to Log() at the same time, or if the writing takes too long, there may be many threads, all waiting on the same lock.
- While the message is being written, there is a thread blocking until it finishes.
My solution uses TPL Dataflow (which uses .Net 4.5 extensively; it's available from NuGet) and solves these issues, so that:
- Writing to the log will always be in the correct order.
- Dispose() will first write all outstanding messages before returning.
- If Log() is called after Dispose(), it will directly throw an exception.
- There will be at most one thread doing the writing at any time.
- While the message is being written, no thread is waiting for it to finish.
public sealed class Logger : IDisposable
{
    private static readonly Logger Instance = new Logger();
    private readonly ActionBlock<string> m_writerBlock;
    private static bool Disposed;

    private Logger()
    {
        var writer = new StreamWriter("c:\\temp\\logtester.log", true);
        m_writerBlock = new ActionBlock<string>(s => writer.WriteLineAsync(s));
        m_writerBlock.Completion.ContinueWith(_ => writer.Dispose());
    }

    public static void Log(string message)
    {
        if (!Instance.m_writerBlock.Post(message))
            throw new ObjectDisposedException("Logger");
    }

    public void Dispose()
    {
        if (Disposed)
            return;

        m_writerBlock.Complete();
        m_writerBlock.Completion.Wait();
        Disposed = true;
    }
}
This works correctly, because the ActionBlock will take care of almost everything needed here, like ordering, synchronization and scheduling Tasks when necessary. WriteLineAsync() returns a Task and the block will use this Task to asynchronously wait for it to finish.
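To make the ordering guarantee concrete, here is a minimal, self-contained sketch (it assumes the System.Threading.Tasks.Dataflow package from NuGet) showing that an ActionBlock processes posted items one at a time, in FIFO order, when MaxDegreeOfParallelism is left at its default of 1:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks.Dataflow;

public static class Program
{
    public static void Main()
    {
        var received = new List<string>();

        // Default MaxDegreeOfParallelism is 1, so the block's internal
        // queue is drained one message at a time, in posting order.
        var block = new ActionBlock<string>(s => received.Add(s));

        for (int i = 0; i < 5; i++)
            block.Post("message " + i);

        block.Complete();        // no more input accepted
        block.Completion.Wait(); // block until the queue is drained

        Console.WriteLine(string.Join(",", received));
        // prints: message 0,message 1,message 2,message 3,message 4
    }
}
```

The same Complete()/Completion.Wait() pair is what makes Dispose() in the logger above drain all outstanding messages before returning.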
- Very nice, I would upvote, but I don't yet have the reputation (almost there). I was not aware of the ActionBlock; I briefly looked at the MSDN entry for it and I'm curious how this will preserve ordering. Is it due to the "degree of parallelism"? – jeremywho, Jul 3, 2012
- ActionBlock maintains a queue that takes care of the ordering. Because of that, it preserves the order unless you specify a MaxDegreeOfParallelism other than 1. But you certainly don't want to do that here. – svick, Jul 4, 2012
- @svick, I like this solution, and if message parsing prior to writing is needed, one could set up a TransformBlock before the ActionBlock, link the two, and perform all parsing in the TransformBlock. With this, even a MaxDegreeOfParallelism other than 1 will still preserve order within the TransformBlock. But I agree that MaxDegreeOfParallelism in the ActionBlock should always be set to 1 or left unset (the default of 1 then applies) to preserve write order. – Matt, Oct 5, 2013
- Has anyone tested this? Some SpecFlow tests I set up with 100 tasks each logging a string fail at about 75 very consistently. I suspect it's to do with disposal. – John Nicholas, May 11, 2014
- It also doesn't store the data until disposal... not great if you want to log to debug. – John Nicholas, May 11, 2014
The above code doesn't flush the buffer. So if you use it in a high load / parallel situation and run into an exception the program can terminate and lose the file handle for the writer before all the logs have been written. That's a bit of speculation - it could be that the writer can only write about 3k at once.
Some clarification of the issues, since a discussion started in the comments. A for loop writing 100 messages will reveal the following issues:
- The output log file maxes out at about 3 KB.
- Putting in a sleep for an arbitrary length of time doesn't help; it still stalls at around 75 messages for me (presumably message length or some kind of buffer setting influences this). The point being, a running program with logging in a loop would find this solution broken.
- It only writes on termination of the program, which is probably why it runs into a maximum buffer and truncates the rest of the messages.
- Who cares about ordering? That's what a timestamp is for. Ordering is very un-asynchronous.
I didn't sit down and figure out precisely what the problem was; however, I did use a dirty hack to somewhat compensate, namely disposing of the logger regularly. I do not recommend this as a solution in production code, but seeing as I am merely using it for dirty debugging, it suffices for my purposes. Flushing the buffer is a good enough solution for me.
The flush adjustment:
this.mWriterBlock = new ActionBlock<string>(
    s =>
    {
        writer.WriteLineAsync(s);
        writer.Flush();
    });
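Note that the snippet above fires WriteLineAsync() without awaiting it, which the comments below pick up on. A minimal, self-contained sketch of the awaited variant (using a StringWriter here purely so the example is runnable without touching the file system) looks like this:

```csharp
using System;
using System.IO;
using System.Threading.Tasks.Dataflow;

public static class Program
{
    public static void Main()
    {
        var writer = new StringWriter();

        // An async lambda returns a Task; the block awaits it, so each
        // write-and-flush fully completes before the next message starts.
        var block = new ActionBlock<string>(async s =>
        {
            await writer.WriteLineAsync(s);
            await writer.FlushAsync();
        });

        for (int i = 0; i < 3; i++)
            block.Post("line " + i);

        block.Complete();
        block.Completion.Wait();

        Console.Write(writer.ToString());
    }
}
```

With a StreamWriter over a file, the same shape flushes every line to disk as it is written, so a crash loses at most the message currently in flight.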
There are other patterns for singletons in multithreaded environments. Given that it is an async logger, it should probably create itself in a thread-safe manner. This one, for example (it needs modernising), is the singleton from the Microsoft site:
using System;

public sealed class Singleton
{
    private static volatile Singleton instance;
    private static object syncRoot = new Object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }
}
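One way to modernise that double-checked-locking pattern on .Net 4+ is Lazy<T>, whose default LazyThreadSafetyMode.ExecutionAndPublication gives thread-safe, lazy initialization without any explicit locking or volatile fields. A minimal sketch:

```csharp
using System;

public sealed class Singleton
{
    // The factory delegate runs at most once, even under concurrent
    // access; every caller then sees the same published instance.
    private static readonly Lazy<Singleton> LazyInstance =
        new Lazy<Singleton>(() => new Singleton());

    public static Singleton Instance
    {
        get { return LazyInstance.Value; }
    }

    private Singleton() { }
}

public static class Program
{
    public static void Main()
    {
        // Both accesses yield the identical object.
        Console.WriteLine(ReferenceEquals(Singleton.Instance, Singleton.Instance));
        // prints: True
    }
}
```

This removes the hand-written check-lock-check dance entirely, which is easy to get subtly wrong.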
- You're using WriteLineAsync() incorrectly; you have to await the Task it returns (my answer does that implicitly by returning the Task to the block). And if you're using WriteLineAsync(), there is no reason not to use FlushAsync() too. – svick, May 12, 2014
- Also, I don't understand what the singleton code you included here has to do with anything. – svick, May 12, 2014
- You are probably right about the singleton, given that I am flushing the buffer. The singleton idea came before that, as reinstantiating the writer also had the side effect of flushing and keeping it able to successfully write. – John Nicholas, May 13, 2014
- Can you guarantee order of calls with FlushAsync though? Calling it synchronously in the async block doesn't seem so bad, and safer. – John Nicholas, May 13, 2014
- Yeah, you can guarantee order of calls if you await FlushAsync() too. – svick, May 13, 2014
too. \$\endgroup\$svick– svick2014年05月13日 13:56:51 +00:00Commented May 13, 2014 at 13:56