.Net Core InMemory Caching with Locking

December 13, 2022
3 min read

In small apps that don't get much traffic, caching generally doesn't matter.  However, if you get a fair amount of traffic or you have time-intensive processes that run often, you'll want to apply some sort of caching strategy.  It can not only speed up your site, it can also save you money.

The two main types of caching are "in process" and "distributed".  The latter is great if you have multiple servers that need to feed off of the same cache.  An in-process cache works on an individual server.  If you have multiple servers and are using in-process caching, each server will have its own cache.  Depending on your scenario and the type of data you're caching, this could be all well and good.  If it's a problem, then you should look at a distributed cache, such as Redis.  The downside with Redis and other distributed caching systems is that they generally cost extra and have their own setup and configuration above and beyond a typical web server.
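For reference, wiring up a Redis-backed distributed cache is a small registration change.  This is a sketch, assuming the Microsoft.Extensions.Caching.StackExchangeRedis NuGet package and a Redis instance on localhost (the connection string and instance name are placeholders):

```csharp
var builder = WebApplication.CreateBuilder(args);

// registers IDistributedCache backed by Redis instead of an in-process store
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "SampleApp";       // optional key prefix
});
```

Services then inject IDistributedCache rather than IMemoryCache, but the check-then-set pattern shown below works the same way.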

 

Caching in .NET Core

In .NET Core, we use the IMemoryCache interface for in-process caching.  In Program.cs, you register the service like this:

public static void Main(string[] args) {
	var builder = WebApplication.CreateBuilder(args);
	builder.Services.AddMemoryCache();
	//.... other services
}

Then implementing it in a service or controller is easy using Dependency Injection:

public class MyService {
    private readonly IMemoryCache _cache;

    public MyService(IMemoryCache c)
    {
        _cache = c;
    }

    public string GetSomething() {
        var cachekey = "specialdata";

        if (!_cache.TryGetValue(cachekey, out string data)) {
            // if you're in here, there is no cache entry, so we need to construct it
            data = "Get Data via DB or some other method";

            // save it to the cache for 10 minutes
            _cache.Set(cachekey, data, TimeSpan.FromMinutes(10));
        }

        return data;
    }
}
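The Set overload used above takes an absolute expiration relative to now.  If you need finer control, IMemoryCache also accepts a MemoryCacheEntryOptions.  A minimal sketch, reusing the _cache field and cache key from the example above:

```csharp
// sliding + absolute expiration combined via MemoryCacheEntryOptions
var options = new MemoryCacheEntryOptions()
    // reset a 2-minute countdown each time the entry is read...
    .SetSlidingExpiration(TimeSpan.FromMinutes(2))
    // ...but never keep the entry longer than 30 minutes total
    .SetAbsoluteExpiration(TimeSpan.FromMinutes(30));

_cache.Set("specialdata", data, options);
```

The sliding expiration keeps hot entries alive, while the absolute cap guarantees stale data is eventually refreshed.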


Now that works all well and good.  However, the potential problem with the above code is that while the function is gathering data, there could be multiple concurrent hits to the function.  If the first hit isn't complete by the time the next ones arrive, each of them will run the data-gathering code until the cache is set.  If this is a quick process, it might not be a problem, but if it takes a while, it could have adverse effects.

 

Locking the cache


If the above scenario is problematic, you can add a bit more code to lock the caching process so that only the first hit to the function runs the data gathering part.  Every other hit will simply wait until the cache becomes available.  We're going to use what is called a locking semaphore.
 

.Net has a simple class called SemaphoreSlim, which helps with the logic here.   

Below is the above code rewritten using the semaphore implementation:

public class MyService {
    private readonly IMemoryCache _cache;
    // new SemaphoreSlim(1, 1): initial count 1, maximum count 1,
    // so only one thread can enter at a time
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    public MyService(IMemoryCache c)
    {
        _cache = c;
    }

    // note the signature: await requires the method to be async,
    // so it now returns Task<string>
    public async Task<string> GetSomething() {
        var cachekey = "specialdata";

        // first check for cached data
        if (!_cache.TryGetValue(cachekey, out string data)) {
            try {
                // every hit after the first waits here until the semaphore is released
                await semaphore.WaitAsync();

                // check again - the first hit may have filled the cache while we waited
                if (!_cache.TryGetValue(cachekey, out data)) {

                    // if you're in here, there is no cache entry, so we need to construct it
                    data = "Get Data via DB or some other method";

                    // save the cache to whatever timeframe you want
                    _cache.Set(cachekey, data, TimeSpan.FromMinutes(10));
                }
            }
            finally {
                // release the semaphore because we have finished the caching process
                semaphore.Release();
            }
        }

        return data;
    }
}
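For completeness, here's how the service might be consumed.  This is a hypothetical sketch: the controller name and route are assumptions, and it presumes MyService has been registered with the container (e.g. builder.Services.AddSingleton&lt;MyService&gt;() in Program.cs):

```csharp
[ApiController]
[Route("[controller]")]
public class DataController : ControllerBase {
    private readonly MyService _service;

    public DataController(MyService service)
    {
        _service = service;
    }

    [HttpGet]
    public async Task<string> Get()
    {
        // because GetSomething is now async, the caller awaits it
        return await _service.GetSomething();
    }
}
```

If several requests hit this endpoint at once while the cache is cold, only the first runs the data gathering; the rest await the semaphore and then read the freshly cached value.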


In summary

Using a locking semaphore with IMemoryCache, you can allow only the first hit to an uncached function to handle the data gathering.  It's a great way to speed up your site efficiently and effectively.