Code Review
To follow up on my comment: I would still recommend using the MemoryCache provided by the .NET framework. You linked to two issues but it is not obvious to me how your approach will avoid them or how you are affected by them.

  1. The locking. The question you linked to raises the problem that two threads might try to access the same key at the same time, both get a null result (entry not present in the cache), and hence both create the object (an expensive operation) and both try to insert it. You will run into the exact same problem with the ConcurrentDictionary approach. Just because your underlying data structure is thread-safe doesn't mean that any combination of operations on it is thread-safe. To give a simple (hypothetical) example:

     var obj = _MyCache.GetOrDefault(key);
     if (obj == null)
     {
         // Two threads can both observe null here and both run the expensive creation.
         obj = CreateObjectInVeryExpensiveWay();
         _MyCache.Set(key, obj);
     }

If two threads call the above at the same time you will run into trouble, independently of the actual underlying cache implementation. To avoid this you need a separate synchronization mechanism in any case.
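For completeness: the usual way to close this race with a ConcurrentDictionary is to cache a `Lazy<T>` rather than the value itself, so that `GetOrAdd` can race on the cheap wrapper while the expensive factory is guaranteed to run only once. A minimal sketch (the name `CreateObjectInVeryExpensiveWay` follows the hypothetical example above, `LazyCacheExample` is made up for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

static class LazyCacheExample
{
    // GetOrAdd may briefly construct duplicate Lazy wrappers under contention,
    // but ExecutionAndPublication guarantees only one factory invocation per key.
    private static readonly ConcurrentDictionary<string, Lazy<object>> _cache =
        new ConcurrentDictionary<string, Lazy<object>>();

    public static object GetOrCreate(string key, Func<object> expensiveFactory)
    {
        return _cache.GetOrAdd(
            key,
            _ => new Lazy<object>(expensiveFactory,
                LazyThreadSafetyMode.ExecutionAndPublication)).Value;
    }
}
```

Note this is still a synchronization mechanism, just one hidden inside `Lazy<T>`; it does not make the naive check-then-insert pattern safe by itself.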

  2. The memory issue with MemoryCache. If you follow the question it will lead to this Connect issue, which explains the enforcement limitations. In any case, your current implementation has no way of limiting the cache at all, and for similar reasons you will not be able to properly implement a limit based on actual memory consumption either.
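As a side note, should you end up using the framework's MemoryCache after all, its (imperfectly enforced) limits are set through configuration rather than code. A typical app.config fragment looks roughly like this (the name "Default" targets MemoryCache.Default; the numbers are placeholders):

```xml
<configuration>
  <system.runtime.caching>
    <memoryCache>
      <namedCaches>
        <!-- These limits are hints: enforcement is periodic and approximate,
             which is exactly the limitation discussed above. -->
        <add name="Default"
             cacheMemoryLimitMegabytes="100"
             physicalMemoryLimitPercentage="10"
             pollingInterval="00:02:00" />
      </namedCaches>
    </memoryCache>
  </system.runtime.caching>
</configuration>
```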

I have the impression that you are trying to optimize something which you don't even know is a problem yet, in a way where you do not fully understand all the implications. Thread safety is not just a matter of ensuring that all data structures used are thread-safe, but also that the access patterns composed out of multiple independent operations on those structures will not lead you into inconsistent states. Writing correctly behaving multi-threaded code is hard; writing correctly behaving lock-free code is several orders of magnitude harder.

I have rolled my own memory cache implementation before (as .NET 3.5 and earlier did not provide one) and it took a few iterations to get right. As you will notice, it uses a plain Dictionary and locks. We have never had a performance problem with it: read access is really fast, and read contention even with many threads is minimal. You could get fancy and replace the plain lock with a ReaderWriterLockSlim, but only if you can prove that the plain lock is actually a problem.
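The shape of such a lock-based cache is simple enough. This is not the implementation linked above, just a minimal sketch of the pattern (the class and method names are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

public class SimpleCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _entries = new Dictionary<TKey, TValue>();
    private readonly object _lock = new object();

    // Holding the lock across the factory call serializes creation,
    // so the expensive operation runs at most once per key.
    public TValue GetOrCreate(TKey key, Func<TValue> factory)
    {
        lock (_lock)
        {
            TValue value;
            if (!_entries.TryGetValue(key, out value))
            {
                value = factory();
                _entries.Add(key, value);
            }
            return value;
        }
    }
}
```

Holding the lock while the factory runs also blocks lookups for unrelated keys; that is the trade-off: simple and demonstrably correct first, fancy only once measurements demand it.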

Premature optimization is the root of all evil. Only optimize something if you can prove it's the bottleneck.



ChrisWue
