The above code doesn't flush the buffer, so in a high-load / parallel situation an exception can terminate the program and release the writer's file handle before all the log messages have been written. That is partly speculation - it could also be that the writer can only write about 3k at once.
Some clarification of the issues, since the comments have started a discussion. A for loop writing 100 messages will reveal the following:
- the output log file maxes out at around 3k.
- putting in a sleep for an arbitrary length of time doesn't help; it still stalls at around 75 messages for me (presumably message length or some buffer setting influences this). The point is that a long-running program logging in a loop would find this solution broken.
- it only writes on termination of the program, which is probably why it hits a maximum buffer size and truncates the rest of the messages.
- Who cares about ordering? That's what a timestamp is for. Strict ordering is very un-asynchronous anyway.
I didn't sit down and figure out precisely what the problem was; instead I used a dirty hack to compensate - namely, disposing of the logger regularly. I don't recommend that as a solution in production code, but since I am merely using it for quick-and-dirty debugging it suffices for my purposes. Flushing the buffer is the better fix for me.
The flush adjustment:
this.mWriterBlock = new ActionBlock<string>(
    async s =>
    {
        // Await the write so it completes (and any exception surfaces)
        // before flushing; the original fired WriteLineAsync and forgot it.
        await writer.WriteLineAsync(s);
        await writer.FlushAsync();
    });
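The shutdown truncation described above can also be avoided by draining the block before the process exits. This is a sketch, not the original logger's code: it assumes a `StringWriter` stand-in for the log file, and uses `Complete()` / `Completion` from the TPL Dataflow API (the `System.Threading.Tasks.Dataflow` package) to wait until every queued message has been written.

```csharp
using System;
using System.IO;
using System.Threading.Tasks.Dataflow;

static class DrainDemo
{
    // Posts `count` messages through an ActionBlock, drains it,
    // and returns everything that actually reached the writer.
    public static string LogAndDrain(int count)
    {
        var writer = new StringWriter();
        var block = new ActionBlock<string>(async s =>
        {
            await writer.WriteLineAsync(s);
            await writer.FlushAsync();
        });

        for (int i = 0; i < count; i++)
            block.Post($"message {i}");

        // Signal no more input, then block until every queued
        // message has been processed - nothing is truncated.
        block.Complete();
        block.Completion.Wait();

        return writer.ToString();
    }

    static void Main()
    {
        string log = LogAndDrain(100);
        Console.WriteLine(log.Contains("message 99")); // True
    }
}
```

With the original code, skipping the `Complete()` / `Completion.Wait()` pair is exactly what loses the tail of the log when the process dies early.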
There are other patterns for singletons in multithreaded environments, and given this is an async logger it should probably create itself in a thread-safe manner. The double-checked locking example from the Microsoft site, for instance, works but needs modernising:
using System;

public sealed class Singleton
{
    private static volatile Singleton instance;
    private static object syncRoot = new Object();

    private Singleton() {}

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }
}
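For reference, the modernising hinted at above: since .NET 4, `Lazy<T>` gives the same lazy, thread-safe initialisation without hand-rolled double-checked locking or `volatile`. A minimal sketch:

```csharp
using System;

public sealed class Singleton
{
    // Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication,
    // so exactly one thread ever runs the constructor.
    private static readonly Lazy<Singleton> lazy =
        new Lazy<Singleton>(() => new Singleton());

    private Singleton() { }

    public static Singleton Instance => lazy.Value;
}
```

Every call to `Singleton.Instance` returns the same object, and the instance is not created until the first access, which is what the lock-based version was buying at much greater cost in ceremony.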