I have a simple producer-consumer multithreaded application, and I am trying to capture a few time measurement points in both threads for benchmarking. What I came up with is .NET's TPL Dataflow library, and I have found a few posts supporting this idea. However, I am a bit suspicious about whether it is the most accurate/reliable way to capture these timings, because it is critical to get these values at the finest resolution we can (micro/nanosecond level).
Logger Class
public interface ILogMessage
{
    long OrderId { get; set; }
}

/// <summary>
/// Queue Thread Time measurement Points
/// </summary>
public class QueueLoggerPoint : ILogMessage
{
    public DateTime BeforeProcTime { get; set; }
    public DateTime AfterProcTime { get; set; }
    public long OrderId { get; set; }
}

/// <summary>
/// Order Thread Time measurement Points
/// </summary>
public class OrderLoggerPoint : ILogMessage
{
    public DateTime BeforeDequeue { get; set; }
    public DateTime AfterSend { get; set; }
    public long OrderId { get; set; }
}

public static class Loggit
{
    public static async Task Write(ILogMessage message)
    {
        string log = "";
        string path = "";
        if (message is QueueLoggerPoint)
        {
            QueueLoggerPoint type = (QueueLoggerPoint)message;
            // "HH" = 24-hour clock; "ffffff" keeps microsecond digits, which
            // the original "hh:mm:ss.fff" truncated to milliseconds.
            log = string.Format("OrderId: {0} // Before Proc Time: {1}, After Proc Time: {2}",
                type.OrderId, type.BeforeProcTime.ToString("HH:mm:ss.ffffff"), type.AfterProcTime.ToString("HH:mm:ss.ffffff"));
            path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Log/read.txt");
        }
        else
        {
            OrderLoggerPoint type = (OrderLoggerPoint)message;
            // Bug fix: the original format string reused {0} and was one
            // placeholder short, so AfterSend was never printed.
            log = string.Format("OrderId: {0} // Before Dequeue: {1}, After Send: {2}",
                type.OrderId, type.BeforeDequeue.ToString("HH:mm:ss.ffffff"), type.AfterSend.ToString("HH:mm:ss.ffffff"));
            path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Log/send.txt");
        }
        using (StreamWriter sw = File.AppendText(path))
        {
            await sw.WriteLineAsync(log);
        }
    }
}
Application
public class Application
{
    public ActionBlock<ILogMessage> LogQueue { get; set; }

    /// <summary>
    /// Constructor
    /// </summary>
    public Application()
    {
        LogQueue = new ActionBlock<ILogMessage>(async (item) => { await Loggit.Write(item); },
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 1 });
    }

    /// <summary>
    /// Queue Thread's action method.
    /// Task.Factory.StartNew(() => { QueueWorker(); });
    /// </summary>
    public void QueueWorker()
    {
        // Simulating the process inside these methods
        while (runFlag)
        {
            foreach (var item in source)
            {
                QueueLoggerPoint point = new QueueLoggerPoint();
                point.OrderId = item.OrderId;
                point.BeforeProcTime = DateTime.UtcNow;
                // Some process on item
                point.AfterProcTime = DateTime.UtcNow;
                // Post instead of SendAsync: the returned Task was never
                // awaited in this synchronous method anyway.
                LogQueue.Post(point);
            }
        }
    }

    /// <summary>
    /// Order Thread's action method.
    /// Task.Factory.StartNew(() => { OrderWorker(); });
    /// </summary>
    public void OrderWorker()
    {
        // Simulating the process inside these methods
        while (runFlag)
        {
            OrderLoggerPoint point = new OrderLoggerPoint();
            point.BeforeDequeue = DateTime.UtcNow;
            // Dequeue before reading OrderId: the original read item.OrderId
            // before item was declared.
            var item = MyQueue.Dequeue();
            point.OrderId = item.OrderId;
            // Some process on item
            SendItemToSomeWhere(item);
            point.AfterSend = DateTime.UtcNow;
            LogQueue.Post(point);
        }
    }
}
Note: As the comments below deliberate about Stopwatch, I realized that I need the exact wall-clock times at which the runtime reached each point, rather than the time intervals of the code blocks, because I need to compare when an item goes into the first thread, comes out, goes into the second thread, and comes out again.
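Given that requirement, one option (not from the original post; the class name is my own) is to anchor a single shared Stopwatch to a wall-clock reading taken once at startup. Every thread then gets timestamps that still read as absolute DateTime values but advance at Stopwatch resolution, so values captured on different threads stay comparable on one timeline:

```csharp
using System;
using System.Diagnostics;

// Sketch only: a hybrid clock pairing one wall-clock anchor with a shared
// Stopwatch. All threads read the same Stopwatch instance, so timestamps
// are mutually comparable at Stopwatch resolution while still looking
// like absolute DateTime values in the log.
public static class HiResClock
{
    private static readonly DateTime Anchor = DateTime.UtcNow;
    private static readonly Stopwatch Ticker = Stopwatch.StartNew();

    public static DateTime UtcNow => Anchor + Ticker.Elapsed;
}
```

`point.BeforeProcTime = HiResClock.UtcNow;` would then slot into the existing workers unchanged. The absolute values can drift slightly from true UTC over a long run, but the difference between any two of them is Stopwatch-accurate.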
1 Answer
Without some kind of batching, the biggest bottleneck you discover in your app will probably be the logger ;)
Re-opening the log file on every log message will completely distort your app's timeline. Ideally, you should write once per 100 or so log messages from a background thread, using e.g. log4net or NLog.
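The batching idea can be sketched with TPL Dataflow itself, which the question already uses; a BatchBlock groups messages so the file is opened once per batch instead of once per message (class name and batch size here are illustrative, not from the original post):

```csharp
using System.IO;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

// Sketch only: group log lines into batches of 100 and open the file once
// per batch. A single-consumer ActionBlock preserves write order.
public class BatchedLogger
{
    private readonly BatchBlock<string> _batcher = new BatchBlock<string>(100);
    private readonly ActionBlock<string[]> _writer;

    public BatchedLogger(string path)
    {
        _writer = new ActionBlock<string[]>(async batch =>
        {
            using (var sw = File.AppendText(path))
                foreach (var line in batch)
                    await sw.WriteLineAsync(line);
        }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 1 });

        _batcher.LinkTo(_writer, new DataflowLinkOptions { PropagateCompletion = true });
    }

    public void Log(string line) => _batcher.Post(line);

    public async Task ShutdownAsync()
    {
        _batcher.Complete();      // Complete() also flushes a partial final batch
        await _writer.Completion;
    }
}
```

Formatting the log strings can still happen on the producer side; the point is only that the expensive open/close of the file leaves the hot path.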
Comments:
DateTime.Now and friends are not going to be precise enough for performance profiling; generally Stopwatch is your friend (though it won't give you much by way of sub-microsecond accuracy), but I've not had a hard look at your code. See Eric Lippert's commentary on this matter.
No runFlag or source definition, thus -1 and vote-to-close as broken/not-written code.
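The Stopwatch suggestion is compatible with needing absolute ordering across threads, not just intervals: Stopwatch.GetTimestamp() reads one shared monotonic counter, so raw tick values captured on any thread can be compared and differenced directly, then converted to microseconds via Stopwatch.Frequency. A minimal sketch (helper names are mine, not from the post):

```csharp
using System.Diagnostics;

// Sketch only: log the raw tick value per event. Because every thread
// reads the same counter, ticks give a global event order, and the
// difference between any two logged ticks converts to microseconds.
public static class TickLog
{
    public static long Now() => Stopwatch.GetTimestamp();

    public static double TicksToMicroseconds(long ticks) =>
        ticks * 1000000.0 / Stopwatch.Frequency;
}
```

You would log `TickLog.Now()` at BeforeProcTime, BeforeDequeue, and so on, and compute cross-thread latencies offline from the differences. (On some older multiprocessor hardware the underlying counter was not synchronized across cores; on current Windows versions it generally is.)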