Thursday, 21 March 2013

Performance series - Memory leak investigation Part 1

[Level T2] If you have worked a few years in the industry, it is very unlikely that you have never encountered a memory leak. Whether in your own code or someone else's, it is an annoying problem to diagnose and fix. The scale of the problem and the effort to fix it grow rapidly with the size and complexity of the application and its deployment.

Many companies use application benchmarking and performance testing to find these problems early on. But as I have experienced myself, problems found early as part of performance testing are not necessarily easier to solve. The lengthy cycle of gathering metrics, analysing them, coming up with one or more hypotheses and then fixing, trying and re-testing can be very time consuming and can jeopardise delivery and deadlines.

In these cases, focusing on what is important and what is not saves the day. A common problem in troubleshooting performance bugs is information overload and the red herrings that confuse the picture. There is a myriad of performance counters whose anomalies you could spend hours and days explaining even though they have nothing to do with the problem you are trying to solve. Having worked as a medical doctor before, I have seen this in the field of medicine many, many times. The human body - a very big application, infinitely more complex than any piece of software - demonstrates similar behaviour, and I have learnt to focus on identifying and fixing the problem at hand rather than explaining each and every oddity.

As such, I am starting this series with an overview of the performance counters and tools to be used. There are many posts and even books on this very topic. I am not claiming that this series is a comprehensive and all-you-need guide. I am actually not claiming anything. I am simply sharing my experience and hope it will make your troubleshooting journey less painful and help you find the culprit sooner.

A few notes on the platform

This is exclusively a Windows and mainly a .NET series - although parts of it could help with non-.NET applications. The focus is mainly on ASP.NET web applications, where the impact of even a small memory leak can be very big, but the same guidelines apply equally to desktop applications.

What is a memory leak?

A memory leak is usually defined as memory that is allocated by the application and then becomes inaccessible - and as such cannot be deallocated. But I do not like this definition. For example, this code does not qualify as a memory leak by that definition, yet it will lead to out of memory:

var list = new List<byte[]>();
for (int i = 0; i < 1000000; i++)
{
    list.Add(new byte[500 * 1024]); // 500 KB per iteration, never released
}

As can be seen, the data is still accessible to the application, but the application never releases the allocated memory and keeps piling up allocations on the heap.

For me, a memory leak is a constant or stepwise allocation of memory without deallocation, which usually leads to an OutOfMemoryException. The reason I say usually is that small leaks can be tolerated: many desktop applications are closed at the end of the day and many websites recycle their app pools daily. However, they are still leaks and you have to fix them when you get the time, since a change in conditions can turn a leak you could tolerate into one that brings down your site or application.

Process memory/time in 4 different scenarios

In the top-right diagram, we see constant allocation, but it coincides with deallocation of memory - so it is not a leak. This scenario is common with ASP.NET caching (HttpRuntime.Cache), where 50% of the cache is purged when the memory used reaches a certain threshold.

In the bottom-right diagram, deallocation does not happen (or happens at the same rate as allocation) once the memory size of the application's process reaches a threshold. This pattern can be seen with SQL Server, which uses as much memory as it can until it reaches a threshold.

How do I establish a memory leak?

All you have to do is ascertain that your application meets the criteria described above: just observe the memory usage over time under load. It is possible that the leak happens only under certain conditions or when a particular piece of functionality is used, so you have to be careful about that.

What tool do you need to do that? Even using Task Manager can be good enough to start with.
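If you prefer numbers over eye-balling, a few lines of code will do. Below is a minimal sketch (my own illustration, not part of the post's snippets; the process name is an example) that polls the private memory of a process so a growing trend becomes obvious:

// requires System, System.Diagnostics and System.Threading
var processName = "w3wp"; // e.g. the IIS worker process hosting your app
while (true)
{
    foreach (var process in Process.GetProcessesByName(processName))
    {
        process.Refresh(); // make sure we read fresh values
        Console.WriteLine("{0:T}  PID {1}  Private bytes: {2:N0} KB",
            DateTime.Now, process.Id, process.PrivateMemorySize64 / 1024);
    }
    Thread.Sleep(5000); // sample every 5 seconds
}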

Tools to use 

Normally you would start with Task Manager. A quick look at it - eye-balling the memory size of the process while the app is running under load - is a simple but effective start.

The mainstay of memory leak analysis is Windows Performance Monitor, aka Perfmon. It is very important for establishing a benchmark, monitoring changes across new version releases and identifying problems. In this post I will look into some essential performance counters.
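The same counters can also be read programmatically, which is handy for automated or overnight runs. Here is a minimal sketch using the standard PerformanceCounter class; the category and counter names are the real Perfmon ones, while the instance name ("w3wp") is just an example:

// requires System, System.Diagnostics and System.Threading
var instance = "w3wp"; // process instance name as it appears in Perfmon
var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
var bytesInAllHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);
while (true)
{
    Console.WriteLine("Private Bytes: {0:N0}   # Bytes in all Heaps: {1:N0}",
        privateBytes.NextValue(), bytesInAllHeaps.NextValue());
    Thread.Sleep(5000);
}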

If you are investigating a serious or live memory leak, you have to have a .NET Memory Profiler. Currently 3 different profilers exist:

  1. Red Gate's ANTS Memory Profiler
  2. JetBrains' dotTrace
  3. SciTech's .NET Memory Profiler

Each tool has its own pros and cons. I have experience with SciTech's and I must say I am really impressed by it.

The most advanced tool is WinDbg, a really useful tool but one with a steep learning curve. I will look at using it for memory leak analysis in future posts.

Initial analysis of the memory leak

Let's look at a simple memory leak (you can find the snippets here on GitHub):

var list = new List<byte[]>();
for (int i = 0; i < 1000; i++)
{
    list.Add(new byte[1 * 1000 * 1000]); // 1 MB, referenced forever by the list
    Thread.Sleep(200);
}

The Process\Private Bytes performance counter is a popular measure of total memory consumption. Let's look at this counter for this app:


So how do we find out if it is a managed or unmanaged leak? We use the .NET CLR Memory\# Bytes in all Heaps counter:


As can be seen above, while Private Bytes increases constantly, heap allocation happens in steps. This stepwise increase of the heap is consistent with a managed memory leak. In contrast, let's look at this unmanaged leak code:

var list = new List<IntPtr>();
for (int i = 0; i < 400; i++)
{
    list.Add(Marshal.AllocHGlobal(1 * 1000 * 1000)); // 1 MB from the unmanaged heap
    Thread.Sleep(200);
}
list.ForEach(Marshal.FreeHGlobal); // freed only at the very end

And here is the flat # Bytes in all Heaps:


So this case is an unmanaged memory leak.

In the next post I will look at a few more performance counters and at GC issues.

Tuesday, 19 March 2013

CacheCow Series - Part 0: Getting started and caching basics

[Level T2] CacheCow is an implementation of HTTP caching for ASP.NET Web API. It implements the HTTP caching spec (defined in RFC 2616 chapter 13) for both server and client, so you don't have to. You just plug the CacheCow component into the server and the client and the rest is taken care of. You do not have to know the details of HTTP caching, although it is useful to know some basics.

It is also a flexible and extensible library that, instead of pushing a resource organisation approach down your throat, allows you to define your caching strategies and resource organisation using a functional approach. It stays opinionated only about caching, where the opinion is the HTTP spec. We shall delve into these customisation points in future posts. But first, let's get to caching 101.

Caching

Caching is a very overloaded and confusing term in ASP.NET. Let's review the kinds of caching available in ASP.NET (I promise not to make it too boring):

HttpRuntime.Cache

This class is used to cache managed objects in SERVER memory using a simple dictionary-like key-value approach. You might have used HttpContext.Current.Cache, but that is basically nothing but a reference to HttpRuntime.Cache. So how is this different from using a plain dictionary?
  1. You can define expiry so that you do not run out of memory and your cache can be refreshed. This is the most important feature.
  2. It is thread-safe.
  3. You can never run out of memory with this, even with no expiry: when it hits a configurable limit, it purges half of the cache.
  4. And also: it stores the data in the unmanaged memory of the worker process and not in the heap, so it does not lengthen GC sweeps.
This cache is useful for keeping objects around without having to acquire them all the time (whether the cost is a complex calculation or IO such as a database call). No, this is not HTTP caching.
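As a quick illustration (the key, value and helper below are hypothetical, not from the post), typical usage looks like this:

// requires System, System.Web and System.Web.Caching
public decimal GetExchangeRate(string currencyPair)
{
    var cached = HttpRuntime.Cache[currencyPair];
    if (cached != null)
        return (decimal)cached; // served from server memory

    var rate = LoadRateFromDatabase(currencyPair); // the expensive IO we want to avoid
    HttpRuntime.Cache.Insert(
        currencyPair,                   // key
        rate,                           // value
        null,                           // no cache dependency
        DateTime.UtcNow.AddMinutes(10), // absolute expiry
        Cache.NoSlidingExpiration);
    return rate;
}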

Output cache

In this approach the output of a page or a route can be cached on the SERVER so that the server code does not get executed again. You can define expiry and have the output cached based on parameters. Output cache behind the scenes uses HttpRuntime.Cache.

You guessed it right: this is also not HTTP caching.
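For example, in ASP.NET MVC output caching is typically switched on with an attribute - a minimal sketch (controller and action are made up for illustration):

// requires System.Web.Mvc
public class ProductController : Controller
{
    // the rendered output is cached on the server for 60 seconds, varied by the id parameter
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        return View(id); // model lookup omitted for brevity
    }
}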

HTTP Caching

In HTTP caching, a cacheable resource (or HTTP response, in simple terms) gets cached on the CLIENT (and not the server). The server is responsible for defining the cache policy and expiry and for maintaining the resource metadata (e.g. ETag, Last-Modified). The client, on the other hand, is responsible for actually caching the response, respecting cache expiry directives and validating expired resources.

The beauty of HTTP caching is that your server will not even be hit when the client has cached a resource - unlike the other two types of caching discussed. This improves scalability, reduces network traffic and allows for offline client implementations. I believe HTTP caching is the ultimate caching design.

One of the fundamental differences between HTTP caching and the other types of caching is that a STALE (or expired) resource is not necessarily unusable and does not get removed automatically. Instead, the client calls the server and checks whether the stale resource is still good to use. If it is, the server responds with status 304 (Not Modified); otherwise it sends back the new response.

The HTTP caching mechanism can also be used to solve concurrency problems. When a client wants to update a resource (usually using PUT), it can make a conditional call and ask for the update to run if and only if the version on the server is the same as the version the client holds (based on its ETag or Last-Modified). This is especially useful in modern distributed systems.
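To make the header exchange concrete, here is a rough HttpClient sketch of a conditional GET and a conditional PUT (the URL and ETag values are made up for illustration):

// requires System, System.Net, System.Net.Http and System.Net.Http.Headers
var client = new HttpClient();

// conditional GET: "only send the body if my cached copy is out of date"
var get = new HttpRequestMessage(HttpMethod.Get, "http://example.org/api/cars/1");
get.Headers.IfNoneMatch.Add(new EntityTagHeaderValue("\"abc123\""));
var getResponse = client.SendAsync(get).Result;
if (getResponse.StatusCode == HttpStatusCode.NotModified)
{
    // the cached copy is still good - reuse it
}

// conditional PUT: "only update if nobody else has changed the resource"
var put = new HttpRequestMessage(HttpMethod.Put, "http://example.org/api/cars/1")
{
    Content = new StringContent("{ \"name\": \"new name\" }")
};
put.Headers.IfMatch.Add(new EntityTagHeaderValue("\"abc123\""));
var putResponse = client.SendAsync(put).Result;
if (putResponse.StatusCode == HttpStatusCode.PreconditionFailed)
{
    // someone else updated the resource first - resolve the conflict
}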

So as you can see, client and server engage in a dance using various headers and things can get really complex. The good news is you don't have to worry about it: CacheCow implements all of this for you out of the box.

CacheCow consists of CacheCow.Server and CacheCow.Client. These two components can be used independently, i.e. each will work regardless of whether the other is used - this is the beauty of HTTP caching as a mixed concern, which I explained here.

CacheCow on the server

How easy is it to use CacheCow on the server? You just need to get the CacheCow.Server Nuget package:

PM> Install-Package CacheCow.Server

And then 2 lines of code to be exact (actually 3 but we will see why):

var cachecow = new CachingHandler();
GlobalConfiguration.Configuration.MessageHandlers.Add(cachecow);

So with these lines of code all your resources become cacheable. By default expiry is immediate, so the client has to validate and double-check whether it can re-use the cached resource, but all of this can be configured (which we will talk about in future posts). The server is now also able to respond to conditional GET or PUT requests.

CacheCow on the server requires a database (technically a cache store) for storing cache metadata - but we did not define a store. CacheCow comes with an in-memory store that is used if you do not specify one. This is good enough for development, testing and small single-server scenarios but, as you can imagine, not scalable. What does it take to define a store? Well, a single line - hence the 3rd line - and you can choose from SQL Server, Memcached, RavenDb (Redis to come soon) or build your own by implementing an interface against another database (MySql for example).

So in this way we will have 3 lines (although you may combine first two lines):

var db = new SqlServerEntityTagStore();
var cachecow = new CachingHandler(db);
GlobalConfiguration.Configuration.MessageHandlers.Add(cachecow);

With these three lines, you have added full-featured HTTP caching (ETag generation, responding to conditional GET and PUT, etc.) to your API.

CacheCow on the client

Browsers that we use every day (and usually even forget to think of as applications) are really amazing HTTP consumption machines. They implement cache storage (file-based) and are capable of handling all the client scenarios of HTTP caching.

The client component of CacheCow does just the same but is useful when you want to use HttpClient for consuming HTTP services. Plugging it into your HttpClient is as simple as on the server side. First get it:

PM> Install-Package CacheCow.Client

Then:

var client = new HttpClient(new CachingHandler()
{
    InnerHandler = new HttpClientHandler()
});


In a similar fashion, the client component of CacheCow requires storage - this time for the actual responses (i.e. resources). By default, an in-memory store is used, which is OK for testing or a small single-server API, but for persistence you would use one of the stores already implemented (File, SQL Server, Redis, MongoDB, Memcached) or just implement the ICacheStore interface on top of an alternative storage.

The constructor of the client CachingHandler accepts an implementation of ICacheStore, which is your storage of choice.

So now cacheable resources will be cached and CacheCow will do conditional GET and PUT for you.
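As a minimal usage sketch (the URL is illustrative), the second call below would typically be served from the cache store or revalidated with a cheap conditional GET rather than fetched from scratch:

var client = new HttpClient(new CachingHandler()
{
    InnerHandler = new HttpClientHandler()
});

// first call hits the server and the response is stored
var first = client.GetAsync("http://example.org/api/cars").Result;

// second call: CacheCow serves it from the store or revalidates it with a conditional GET
var second = client.GetAsync("http://example.org/api/cars").Result;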

In the next post, we will look into the server component of CacheCow.

Monday, 18 March 2013

CacheCow 0.4 released: new features and a breaking change

Version 0.4 is out and with it come a couple of features, such as attribute-based cache control and cache refresh (see below). For the first time I felt I had to write a few notes about a release, not least because of one breaking change - although it is not likely to break your code, as I explain further down. The changes have been in the server components.

I and other contributors have been working on CacheCow for the last 8 months. I thought a couple of posts would be enough to explain the usage of CacheCow, but I now feel that, with the number of concepts increasing, I need to start a series on it. Before doing that, I am going to explain the new concepts and the breaking change.

Breaking change

The breaking change was a change in the signature of CacheControlHeaderProvider from Func<HttpRequestMessage, CacheControlHeaderValue> to Func<HttpRequestMessage, HttpConfiguration, CacheControlHeaderValue> to accept HttpConfiguration.

If you have provided your own CacheControlHeaderProvider, you need to accept HttpConfiguration as well - which should be very easy to fix, whether web-host or self-host.
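For illustration, a custom provider against the new signature might look like this (the route check and values are hypothetical, not part of CacheCow):

cachingHandler.CacheControlHeaderProvider = (request, configuration) =>
    request.RequestUri.AbsolutePath.StartsWith("/api/cars", StringComparison.OrdinalIgnoreCase)
        ? new CacheControlHeaderValue() // cars are publicly cacheable for 5 minutes
        {
            Public = true,
            MaxAge = TimeSpan.FromMinutes(5)
        }
        : new CacheControlHeaderValue() // everything else: no caching
        {
            NoStore = true
        };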

Cache Control Policy

Defining cache policy against resources has so far been done by setting the value of CacheControlHeaderProvider, with which you define whether a resource is cacheable and, if it is, what its expiry is (and other related settings):

public Func<HttpRequestMessage, HttpConfiguration, CacheControlHeaderValue> CacheControlHeaderProvider { get; set; }

By default, CacheCow sets this Func to return a header value for private caching with immediate expiry for all resources:

CacheControlHeaderProvider = (request, cfg) => new CacheControlHeaderValue()
{
    Private = true,
    MustRevalidate = true,
    NoTransform = true,
    MaxAge = TimeSpan.Zero
};

Immediate expiry actually means that the client can use the expired resource as long as it validates it using a conditional GET - as explained before here.

But what if you want to individualise the cache policy for each resource? We could use per-route handlers, but that is not ideal and generally depends on the resource organisation approach. I have explained in my previous post that resource organisation is one of the areas that needs to be looked at, but it is not within the scope of CacheCow. We are looking into solving it as part of another project while the ASP.NET team are also looking into this, so I have decoupled the resource organisation project from CacheCow.

Having said that, in the meantime I am going to provide some help to make cache policy setup less painful. This means that CacheCow will come with a few pre-defined functions that help you define your cache control policy.

Good news! Cache policy definition using attributes

So now you can define your cache policy against your actions or controllers or both - although an action attribute always takes precedence over a controller attribute. Using the popular ASP.NET Web API sample:

    [HttpCacheControlPolicy(true, 100)]
    public class ValuesController : ApiController
    {
        public IEnumerable<string> Get()
        {
            return new[] { "cache", "cow" };
        }

        [HttpCacheControlPolicy(true, 120)]
        public string Get(int id)
        {
            return "cache cow... mooowwwww";
        }
    }

So a GET call to the first action (/api/Values) will return a max-age of 100, while a GET to the second action (e.g. /api/Values/1) will return a max-age of 120.

In order to set this up, all you have to do is set the CacheControlHeaderProvider property of your CachingHandler to the GetCacheControl method of an instance of AttributeBasedCacheControlPolicy:

cachingHandler.CacheControlHeaderProvider = new AttributeBasedCacheControlPolicy(
    new CacheControlHeaderValue()
    {
        NoStore = true
    }).GetCacheControl;

So in the above we have passed a default caching policy of no caching. This table defines which attribute value (or the default provided in the constructor) is used:



Cache Refresh Policy

CacheCow works best when the HTTP API is actually a REST API; in other words, when it uses the uniform interface (i.e. HTTP verbs) to modify resources. This means the caching handler gets the opportunity to invalidate and remove cached items when POST, PUT, DELETE or PATCH is used.

The problem is that an HTTP API commonly sits on top of a legacy system, where it has no control over modifications of resources and acts merely as a data provider. In such a case the API will not be notified of resource changes, and the application will be responsible for removing cache metadata directly through the EntityTagStore used. And this is not always possible.

I am providing a solution for defining a time-based cache refresh policy using attributes, in a very similar fashion to the Cache Control Policy - even the above table applies. Removal of items from the cache store on the server happens upon the first request after the refresh interval has passed, not immediately after the interval. So we add the refresh policy provider:

cachingHandler.CacheRefreshPolicyProvider = new AttributeBasedCacheRefreshPolicy(TimeSpan.FromSeconds(5 * 60 * 60))
    .GetCacheRefreshPolicy;

Here we have defined a 5-hour refresh policy as the default, and we can override it using controller or action attributes.

Future posts

As promised, the next few posts will be a CacheCow walk-through.

Sunday, 10 March 2013

Pro tip: don't try abstracting the code world from the HTTP world

[Level T2] I have explained in Part 5 of my Web API series how the world of HTTP (URI, headers, method, payload) meets the world of code (controllers, actions, parameters, etc). In ASP.NET Web API this has partly been done in the media type formatter, and I have talked about MediaTypeFormatter as the Tower Bridge between the two. The problem is that these two worlds will eventually collide hard - if you ever try to abstract them away from each other.

So what am I on about?

The problem is routing - as I have raised a few times before. Here I will try to explain why - and propose a solution.

Routes basically live in the application setup. You define them outside your controllers and actions - usually inside global.asax or in code called on application startup. The idea is that no matter what your routing is, if it is set up correctly, the values will be populated in your actions - all the parameters and your data. And then you return some data which gets interpreted and serialised by your MediaTypeFormatters.

So what happens if I want to set the Location header in the response to a POST method (when an item has been created)? Now I need to provide the parameters to the route to create a virtual link. Here, although I have been abstracted away from the route, I still have to know the parameters to pass - and this is where things start to get a bit hairy. I have defined a route somewhere else and yet I need to know about it here.
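A rough sketch of what this looks like in a typical Web API POST action (the route name, repository and model are illustrative, not from the original post):

public HttpResponseMessage Post(Car car)
{
    var created = _repository.Add(car); // hypothetical repository

    var response = Request.CreateResponse(HttpStatusCode.Created, created);

    // to build the Location header I must know the route name ("DefaultApi") and its
    // parameters - knowledge that lives far away in the routing setup
    response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = created.Id }));

    return response;
}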

And that by itself is not a big problem. A bigger issue is that ASP.NET does the route matching linearly, so if I happen to define 1000 routes and my 1000th route gets called often, I can be in trouble: performance can suffer because ASP.NET has to try to match against 999 routes before it hits the 1000th.

But why would I have 1000 routes? What is wrong with /api/{controller}/{id}? Couldn't I end up with just a handful of routes? OK, here is the catch: routing was designed for a flat routing structure (and originally for ASP.NET MVC), while resource organisation in a Web API has to be hierarchical.
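For reference, this is roughly the familiar flat default route (a minimal sketch; config here is the HttpConfiguration):

// the classic flat route - fine for simple APIs, awkward for deep resource hierarchies
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional });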

Another issue is hypermedia. Have you tried doing clean and robust hypermedia in Web API using the existing routing? Good luck with that one!

Another problem is that in Web API's really elegant HTTP pipeline (which I love), the resource is just a URL until it hits the controller dispatcher, which then tries to find a matching controller and then an action. As such, the code side of the resource (controller, action, parameters) cannot programmatically (using attributes, etc.) declare a strategy to the delegating handlers, since at that point we do not yet know which controller or action will respond.

Resource organisation is the responsibility of the application

A few issues were raised with CacheCow regarding cache invalidation of related resources. This is actually one of the fundamental scenarios that I designed CacheCow for. The problem is that resource organisation is the responsibility of the application, so CacheCow cannot make any assumptions about how resources are organised. The solution is for the application to describe its resource organisation and the relationships between resources. This is definitely not easy - since, again, the world of routing is a disjoint one.

So here is what we need:

  • A hierarchical routing with a natural setup rather than arbitrary route definition 
  • Resources aware of related resources. As such a resource can define its hypermedia for the most part
  • Ability for resources to define their strategy when it comes to caching, etc


When Youssef Moussaoui - a developer on the ASP.NET team - asked me what to improve in ASP.NET Web API, my immediate answer was routing. And not just routing, but resource organisation.

So I know that ASP.NET will look into this, but I will probably have a stab at it too. We have defined the project Resourx, which will look at this and try to achieve the three goals above. Watch this space!