We recently switched some services from Azure Managed Cache to Azure Redis Cache due to stability and connectivity issues. The transition went almost without a hitch. In this post I will give a basic introduction to Redis on Azure and point out a few things to watch for. It is also worth noting Microsoft's recommendation on Azure caching: "We recommend all new developments use Azure Redis Cache."

A bit about Redis

Redis is an advanced key-value cache and store. However, the Azure Redis SaaS offering does not currently provide persistent storage, i.e. it's a cache. Even if you're not going to use Redis, I can highly recommend reading their data types intro and, as a mental exercise, trying to map the data types to your own solution. I can also recommend Redis creator Salvatore Sanfilippo's blog post on the same topic.

One of the great things compared to Azure Managed Cache is that we can connect directly to the Redis server using a command-line client and execute commands. To obtain a Redis client (and server) for Windows, use Chocolatey or NuGet to install the Redis-64 package from MSOpenTech.

A Redis instance comes with multiple "databases", each referenced by an index. This is very useful for separating environments or keeping different types of data apart. We use this to separate output caching from data caching. It is possible to flush a single database, flushdb, making it very easy to test different scenarios.
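A minimal sketch of this separation, using the StackExchange.Redis client covered below. It assumes a Redis server on localhost (e.g. the Redis-64 package mentioned above); the database indexes and key names here are purely illustrative:

```csharp
using StackExchange.Redis;

class DatabaseSeparation
{
    static void Main()
    {
        // Assumes a local Redis server; indexes and keys are illustrative.
        var connection = ConnectionMultiplexer.Connect("localhost");

        IDatabase outputCache = connection.GetDatabase(0); // output caching
        IDatabase dataCache = connection.GetDatabase(1);   // data caching

        outputCache.StringSet("page:/home", "<html>...</html>");
        dataCache.StringSet("customer:42", "Contoso");

        // Flushing database 1 (FLUSHDB) would clear the data cache only.
        // Note that admin commands require allowAdmin=true in the
        // connection string.
    }
}
```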

Using Azure Redis SaaS

The recommended client for connecting to Azure Redis is the StackExchange.Redis client from the great folks at StackExchange, available as a NuGet package. The client has a fairly uncomplicated interface and you'll find just the right amount of documentation in the GitHub repository. You're on the safe side as long as you share and reuse the ConnectionMultiplexer instance: stuff it as a singleton in your IoC container or keep it around as a static property. The following example is from Mike Harder (MSFT) and can be found on StackOverflow.

private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => {
    return ConnectionMultiplexer.Connect("<cachename>.redis.cache.windows.net,ssl=true,abortConnect=false,password=<accesskey>");
});

public static ConnectionMultiplexer Connection {
    get {
        return lazyConnection.Value;
    }
}
Once you're connected you can set and get data by calling GetDatabase on the ConnectionMultiplexer and then using StringSet and StringGet on the returned database object, both also available as async variants. Do note that these take string values and byte[]. This is where serialization comes into play; more details further down.
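A minimal sketch of that round trip, again assuming a local Redis server; the key and value are illustrative:

```csharp
using StackExchange.Redis;

class GetSetExample
{
    static void Main()
    {
        // In real code, reuse a shared ConnectionMultiplexer as above.
        var connection = ConnectionMultiplexer.Connect("localhost");
        IDatabase db = connection.GetDatabase();

        // RedisValue converts implicitly to and from string and byte[].
        db.StringSet("greeting", "hello");
        string greeting = db.StringGet("greeting");

        // Async variants: StringSetAsync / StringGetAsync, e.g.
        // await db.StringSetAsync("greeting", "hello");
    }
}
```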

The Azure Redis offering comes in Basic and Standard tiers, each with different size levels. Basic is a single instance without an SLA, while Standard is a two-node configuration with replication and a high-availability SLA. Size- and price-wise, the range is from a 250MB to a 53GB cache. However, size is not the only metric affected: as you scale up in size, you also get more network bandwidth.

As you scale up in size, you also get more network bandwidth.

This bit us during our first run in production: we started seeing exceptions with timeout issues. After working with the Azure Redis product team, we learned that we had reached our network limits. Increasing the cache size from 2GB to 6GB doubled our available network bandwidth. It would be great to have more information about this on the pricing pages. Also, we find the network usage a bit hard to keep track of, as alerts cannot be set for combined read/write network usage, but must be set for reads and writes individually.


The Azure Managed Cache comes with built-in client-side caching using a memory cache; this is not included in the StackExchange.Redis client. A side effect of the memory cache is that you get back the same object that you submitted to the cache. This is of course fast, and there's no need for serialization/deserialization. However, once we start sending objects over the network, serialization comes into play. There are several options; if you look at Microsoft's Redis implementation for OutputCache, it uses the BinaryFormatter, a safe but slow choice.

BinaryFormatter chews CPU cycles just as fast as our dogs can eat a hot dog.

There are several benchmark posts on serialization out there (1 2), and the general outcome is that you're better off using something other than BinaryFormatter if speed is an issue. And as we're talking about cache, I assume speed is relevant. Protobuf-net (surprise, surprise: from Marc Gravell/StackExchange) is one of the fastest alternatives out there. Well, at least until we get Cap'n Proto on the .NET stack.

So, what does it take to use protobuf-net? In short: add protobuf-net from NuGet, mark your classes with [ProtoContract] and properties with [ProtoMember]. Long story short: another blog post. But it does mean that you will need to know which types you put in your cache and make sure they are supported by the serialization/deserialization format of your choice.
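A minimal sketch of what that looks like; the Customer type and member numbers are our own invention, purely to illustrate the attributes:

```csharp
using ProtoBuf;
using System.IO;

// Illustrative type: member numbers identify fields on the wire
// and should stay stable across versions of the class.
[ProtoContract]
class Customer
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

class ProtobufExample
{
    static void Main()
    {
        var customer = new Customer { Id = 42, Name = "Contoso" };

        // Serialize to byte[], which StringSet accepts directly.
        byte[] bytes;
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, customer);
            bytes = ms.ToArray();
        }

        // Deserialize what StringGet hands back.
        using (var ms = new MemoryStream(bytes))
        {
            Customer roundTripped = Serializer.Deserialize<Customer>(ms);
        }
    }
}
```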

Things to keep in mind (the useful stuff)

  • Do not create a new ConnectionMultiplexer per get/set; create one and keep it around.
  • The network bandwidth for Azure Redis is "limited"; the limit is high, but you can potentially hit it. Try moving up a pricing level if you see timeout issues.
  • As for transient errors, it's worth noting that StackExchange.Redis will automatically reconnect; however, it will not retry commands for you. We use Polly when we need to retry operations: var policy = Policy.Handle<ExceptionType>().Retry() or WaitAndRetry(), and then var result = policy.Execute(() => DoSomething());
  • Choose a serialization format with care; deserialization with BinaryFormatter in particular is slow.
  • Redis is a lot more than a plain key-value store; learn and exploit the data types provided.
  • You can use a two-layer model combining a local cache and Redis, you just have to roll your own (for now...). Apparently this is how Stack Exchange does it, see http://meta.stackexchange.com/questions/239480/stack-exchange-data-access-and-caching-coding-practices and http://meta.stackexchange.com/questions/69164/does-stack-overflow-use-caching-and-if-so-how. Do note the indication of using Redis pub/sub to update local caches.
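The Polly pattern from the list above, sketched a little more fully; the exception type, retry count and delays are illustrative choices, not a recommendation:

```csharp
using System;
using Polly;
using StackExchange.Redis;

class RetryExample
{
    static void Main()
    {
        var connection = ConnectionMultiplexer.Connect("localhost");
        IDatabase db = connection.GetDatabase();

        // Retry up to three times with a growing delay between attempts.
        var policy = Policy
            .Handle<RedisConnectionException>()
            .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * attempt));

        // The client reconnects by itself but does not replay the failed
        // command; the policy re-issues it.
        string value = policy.Execute(() => (string)db.StringGet("greeting"));
    }
}
```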

Happy Caching!