
Monday, 17 September 2012

Server-side Async: Careful with that Axe, Eugene

[Level T3]

In a previous post, I talked about the dangers lurking in doing server-side async operations in .NET 4.0. As you know, .NET 4.5 provides a much better syntax, with the async/await keywords turning your TPL task-soups into much more readable and organised code. But even so, async will make debugging your application more difficult and bugs could take much longer to be reproduced, isolated and fixed.

Task-Soup

In .NET 4.0, when we add continuations to create a chained task, we can end up with a few problems:

  1. We could end up with an unobserved exception problem. This is nicely described by Ayende here
  2. Nested lambda expressions could create unexpected problems with closure of variables (see the sketch right after this list)
  3. The code becomes hard to read.
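
On the second point, here is a minimal sketch of the classic closure pitfall (the variable names are mine, not from CacheCow): in C# 4.0 the loop variable is captured by reference, so by the time the tasks run it may already have moved on.

var tasks = new List<Task>();
for (var i = 0; i < 3; i++)
{
 // i is captured by reference, so this may print 3, 3, 3 rather than 0, 1, 2
 tasks.Add(Task.Factory.StartNew(() => Console.WriteLine(i)));
}
Task.WaitAll(tasks.ToArray());

// the usual fix: copy the loop variable into a local before capturing it
for (var i = 0; i < 3; i++)
{
 var current = i;
 tasks.Add(Task.Factory.StartNew(() => Console.WriteLine(current)));
}
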
On the third note, let me bring an example from my own code in CacheCow. What is it that we are actually returning here?

return response.Then(r =>
{
 if (r.Content != null)
 {
  TraceWriter.WriteLine("SerializeAsync - before load",
   TraceLevel.Verbose);

  return r.Content.LoadIntoBufferAsync()
   .Then(() =>
   {
    TraceWriter.WriteLine("SerializeAsync - after load", TraceLevel.Verbose);
    var httpMessageContent = new HttpMessageContent(r);
    // All in-memory and CPU-bound so no need to async
    return httpMessageContent.ReadAsByteArrayAsync();
   })
   .Then( buffer =>
      {
       TraceWriter.WriteLine("SerializeAsync - after ReadAsByteArrayAsync", TraceLevel.Verbose);
       return Task.Factory.FromAsync(stream.BeginWrite, stream.EndWrite,
        buffer, 0, buffer.Length, null, TaskCreationOptions.AttachedToParent);                                                        
      }
     );

 }
});

Even just looking at the brackets gives me a headache.

Is Async worth it at all?

Now, we talk a lot about async operations and their role in improving scalability. But really, is it worth it? How much scalability would it bring? Would it help or hinder?

The answer to these questions is yes, it does help. The more IO you do in your server-side actions, the more you benefit in terms of scalability. So it is highly advisable to implement your ApiController actions as async by returning Task or Task&lt;T&gt;.

The truth is, it will help even with your non-IO-bound operations, although it is not advisable to use async in such scenarios. You can test it for yourself: create a sync and an async controller that do exactly the same operation and use a benchmarking tool to compare their performance.

I have a CarManager sample on GitHub which I use for testing CacheCow.Server and it contains two simple controllers: CarController and CarAsyncController. All these do is use an in-memory repository; their GET actions simply look up the dictionary by key:

// sync version
public Car Get(int id)
{
 return _carRepository.Get(id);
}


// async version (on another controller)
public Task<Car> GetAsync(int id)
{
 return Task.Factory.StartNew(() => _carRepository.Get(id));
}

So if you use a benchmarking tool such as Apache Benchmark (ab.exe), you can see a slight increase in throughput using the async controller. In my case, there was a 10% increase in throughput using async.

My ordeal with a bug

Development of CacheCow has been marred by the existence of a problem which, as we will see, turns out not to be in my code. I have been battling with it for a few weeks (on and off) and could not progress CacheCow development because of it.

OK, here is how my story begins; I think the Sherlock Holmes nature of this troubleshooting could be amusing for others too. After realising that a simple ContinueWith will not flow the context (see previous post), I set about replacing all such cases with Then from TaskHelpers, which checks for the existence of a SynchronizationContext and flows the context if it exists.

On the other hand, lostdev, one of CacheCow's most loyal users, informed me of an occasional null reference exception in CacheCow.Server. Now, I had already fixed a bug related to a null reference exception when a resource was being retrieved for the first time. I attributed the problem to the bug I had already fixed and reported that the problem was fixed in the current version.

So I started developing file-based cache storage for CacheCow.Client (which will have its own post very soon) and replaced all ContinueWith cases with Then.

And then I started to experience deadlocks in CacheCow.Client when I was using file-based caching and sending concurrent GET requests to the server. As soon as I removed FileStore and replaced it with InMemoryCacheStore, it would work. So I started searching through the client code, debugging, looking at the threads, debugging again, changing code, debugging... to no avail. The problem only appeared when I was using file-based caching, so it had to be on the client.

Then I noticed a strange thing: I could only run 4 concurrent calls and the rest would be blocked. Why? Then I started playing with the maxconnection property of the system.net configuration:

  <system.net>
 <connectionManagement>
   <add address = "*" maxconnection = "N" />
 </connectionManagement>
  </system.net>

and interestingly, by setting N to a high number, I would get more concurrent connections - but only up to the number defined. Hmmm... so the requests do not quite finish. OK, I fired up Sysinternals' TcpView but unfortunately these connections did not show up (and I do not know why).

I was getting nowhere until I accidentally loaded an earlier version of the server code. To my surprise, I did not get the deadlock but this error instead, which @Tugberk had separately reported earlier but attributed to the order of handlers:

[NullReferenceException: Object reference not set to an instance of an object.]
System.Web.Http.WebHost.HttpControllerHandler.EndProcessRequest(IAsyncResult result) +112
System.Web.Http.WebHost.HttpControllerHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +10
System.Web.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar) +129

OK, so it is probably happening on the server but the continuation code gets deadlocked on an unhandled exception. I was close! So it was time to go to bed and I was positive that I would nail it the day after.

It was funny that I woke up the day after and, with my in-bed reading of tweets, stumbled on @Tugberk's tweet about an issue he had just created. That sounded exceedingly similar, so we double-checked our scenarios and it turned out that an HttpResponseMessage with an empty RequestMessage property is not handled in Web API, and a null reference exception is thrown at the end of the response clean-up code. And the reason I was seeing it only with the file-based cache store was that the part of the server-side code returning such responses was being triggered only with the file-based store (since it was capable of persisting caches and was trying to validate the cache).

So as you can see, a seemingly unrelated problem can really confuse the nature of the bugs in async scenarios.

Conclusion

First of all, always use request.CreateResponse() instead of new HttpResponseMessage. I googled for cases of new HttpResponseMessage and found more than 3000 entries. This is really dangerous; I think this is a bug in Web API and needs to be fixed. If you do use new, make sure you set the RequestMessage property.
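
As a minimal illustration of the pattern (the repository and action here are hypothetical):

// preferred: CreateResponse sets the RequestMessage property for you
public HttpResponseMessage Get(int id)
{
 var car = _carRepository.Get(id);
 return Request.CreateResponse(HttpStatusCode.OK, car);
}

// if you really must new up the response, set RequestMessage explicitly
var response = new HttpResponseMessage(HttpStatusCode.OK)
{
 RequestMessage = Request
};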

And in general, be careful with doing server-side async operations. It is a really powerful axe, but with it you are never quite sure what a slightly off swing could bring. Careful with that axe, Eugene.

Friday, 24 August 2012

Server-side TPL Async: Don't risk learning these lessons the hard way

[Level T2]

There have been more than a few times that I have felt I know all about TPL, only to realise sometime later that I was wrong, very wrong. Now you might read this and say to yourself "Come on, this is basic stuff. I know it well, thank you very much". Well, it is possible that you are right, but I advise you to carry on reading; what follows may surprise you.

None of what I am gonna talk about is new or being blogged about for the first time. Brad Wilson has an excellent series on the topic here, but this is meant as a digest of his posts targeted at a broader audience, in addition to a few other points.

While this post is not directly related to ASP.NET Web API, most examples (and cases) are related to day-to-day scenarios we encounter in ASP.NET Web API.

Remember, this covers the world before the async/await keywords of .NET 4.5, i.e. what you need to do if you are using .NET 4.0 and not async/await. Using async/await will cover you for some of the problems described below, but not all.

Don't fire and forget

Tasks are ideal for decoupling pieces of functionality. For example, I can perform a database operation and at the same time audit the operation by outputting a log entry, writing to a file, etc. Using tasks I can decouple these operations so that my database task returns without having to wait for the audit to finish. This makes sense since the database operation is high priority but the audit is low priority:

private void DoDbStuff()
{
   CallDatabase();
   // doing audit entry asynchronously not to bog down database operation
   Task.Factory.StartNew(()=> AuditEntry("Database stuff was done"));
}

In fact, let's say we do not even care whether the audit is successful or not, so we just fire and forget; at worst the audit fails, which is low priority. OK, it all seems innocent enough?

No! This innocent operation can bring down your application. The reason is that all async exceptions must be observed even if you do not care about them. If you don't, they will haunt you when you least expect them: at the time the task's finalizer is run by the GC. Such an unhandled exception will kill your app.

The link above talks about various ways of observing an exception. The most practical is to use a continuation and access the .Exception property of the task (just accessing the property is enough; you do not need to do anything with the exception itself).

private void DoDbStuff()
{
   CallDatabase();
   // doing audit entry asynchronously not to bog down database operation
   Task.Factory.StartNew(()=> AuditEntry("Database stuff was done"))
      .ContinueWith(t => t.Exception); // fire and forget!
}

Another option, which is more of a safeguard against accidental unobserved exceptions, is to subscribe to UnobservedTaskException on TaskScheduler:

 TaskScheduler.UnobservedTaskException +=
  (sender, e) =>
  {
   LogException(e.Exception);
   e.SetObserved(); // mark it as observed so it does not escalate and kill the process
  };

So we register a handler for unobserved exceptions, log them and mark them as observed. If you need to read more on this, have a look at Jon Skeet's post here.

This problem has made Ayende Rahien run for the hills.

Respect SynchronizationContext

Uncle Jeffrey Richter tells us that

By default, the CLR automatically causes the first thread's execution context to flow to any helper threads.

And then we also learn that we can use ExecutionContext.SuppressFlow() to suppress flow of the thread context.

Now, what happens when we use ContinueWith()? It turns out that, unlike standard thread switches, the context does not flow (I do not have a reference; if you do, please let me know). This helps improve the performance of asynchronous tasks, as we know context switching is expensive (and a big part of it is context flow).

So why is it important? It is important because so many developers are used to HttpContext.Current. This context is stored in the thread storage area and passed along at the time of context switching. So if the context does not flow, HttpContext.Current will be null.

SynchronizationContext is a similar (but not the same) concept. It is about a state that can be shared and used by different threads at the time of switching. I cannot explain this better than Stephen does here. So using Post on the SynchronizationContext ensures that the execution of the continuation happens in the same context, though not necessarily on the same thread.

So basically the idea is that if you are in a Task pipeline (best example being MessageHandlers in ASP.NET Web API), you need to take responsibility for passing the context along the pipeline.

This is a snippet from the ASP.NET Web API source code that displays the steps. First of all you check whether the current context is null; if it is not, you have to use Post() to flow the context:

SynchronizationContext syncContext = SynchronizationContext.Current;

    TaskCompletionSource<Task<TOuterResult>> tcs = new TaskCompletionSource<Task<TOuterResult>>();

    task.ContinueWith(innerTask =>
    {
        if (innerTask.IsFaulted)
        {
            tcs.TrySetException(innerTask.Exception.InnerExceptions);
        }
        else if (innerTask.IsCanceled || cancellationToken.IsCancellationRequested)
        {
            tcs.TrySetCanceled();
        }
        else
        {
            if (syncContext != null)
            {
                syncContext.Post(state =>
                {
                    try
                    {
                        tcs.TrySetResult(continuation(task));
                    }
                    catch (Exception ex)
                    {
                        tcs.TrySetException(ex);
                    }
                }, state: null);
            }
            else
            {
                tcs.TrySetResult(continuation(task));
            }
        }
    }, runSynchronously ? TaskContinuationOptions.ExecuteSynchronously : TaskContinuationOptions.None);

    return tcs.Task.FastUnwrap();

There is a horrifying fact here. Most of the DelegatingHandler code out there (including some of mine) in various samples around the internet does not respect this. Of course, looking at the ASP.NET Web API source code reveals that they do indeed take care of this in their TaskHelpers implementation, and Brad tried to make us aware of it in his blog series. But I think we have not paid enough attention to the implications of ignoring SynchronizationContext.

Now my suggestion is to use TaskHelpers and its extensions from ASP.NET Web API (it is open source) or use the one provided in Brad's post. In any case, take responsibility for flowing the context along your pipeline.
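
As a hedged sketch (the handler itself is hypothetical, and it assumes the Then extension from the TaskHelpers mentioned above is referenced), this is the shape of a delegating handler whose continuation does not lose the context:

public class TracingHandler : DelegatingHandler
{
 protected override Task<HttpResponseMessage> SendAsync(
  HttpRequestMessage request, CancellationToken cancellationToken)
 {
  return base.SendAsync(request, cancellationToken)
   .Then(response =>
   {
    // Then posts back to the captured SynchronizationContext (if any),
    // so things like HttpContext.Current are still available here
    Trace.WriteLine("Response status: " + response.StatusCode);
    return response;
   });
 }
}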

Don't use Task for CPU-bound operations

The overhead of asynchronous operations is not negligible. You should only use async if you are doing an IO-bound operation (calling another web service/API, reading a file, reading a lot of data from a database or running a slow query). I personally think that even for normal IO operations, sync is more performant and scalable.

As we have talked about here, the point of asynchronous programming on the server side is releasing the thread so it can serve another request. Tasks are normally served by the CLR thread pool. If the server already needs managed threads for its operations, it will be using the CLR thread pool too. This means that by doing async operations you could be stealing threads needed for the server's normal operations. A classic example is ASP.NET, so you should be careful to use async only when needed.

ContinueWith is Evil!

I think by now you should know why standard ContinueWith can be evil. First of all, it does not flow the context. It also makes it easy for unobserved exceptions to creep into your code. My suggestion is to use .Then() from ASP.NET Web API's TaskHelpers.

Performance comparison

I think it is still early days - but I must say I would love to do a benchmark to quantify the overhead of server-side asynchronous programming. Well, if I do, this will be the place where the results first appear :)

So. Do I think I know all about TPL now? Hardly!

Monday, 6 August 2012

CacheCow.Client, using the benefits of HTTP Caching on the client

[Level T2]

Browsers are very sophisticated HTTP machines. We often fail to remember how much of the HTTP spec is implemented by the browsers.

As I have said before, ASP.NET Web API is a very powerful server-side framework but there is a client-side burden in using it or generally implementing a RESTful system - although Web API does not restrict you to a RESTful style.

Because of the client burden, we need more and more client-side libraries to implement missing features that browsers have had for such a long time - one of which is HTTP caching. If you use HttpClient out of the box, it will not do any caching even if the resources are cacheable. Also, all of the work for conditional GET or PUT calls (using If-None-Match, etc.), cache validation (if there is must-revalidate) or checking whether your cache is stale has to be done in your own code.

CacheCow is an HTTP caching library for client and server in ASP.NET Web API that does all of the above - see my earlier post on that. Storage of the cache is abstracted behind ICacheStore and for now we can use the in-memory implementation (see below). So the features of the client library include:

  • Caching GET responses according to their caching headers
  • Verifying cached items for their staleness
  • Validating cached items if the must-revalidate parameter of the Cache-Control header is set. It will use ETag or Expires, whichever exists
  • Making conditional PUTs for resources that are cached, based on their ETag or Expires header, whichever exists

Today I released v0.1.3 of CacheCow.Client on NuGet. This library implements advanced HTTP caching with little or no configuration or hassle. All you have to do is add the CachingHandler as a delegating handler to your HttpClient:

var client = new HttpClient(new CachingHandler()
       { 
           InnerHandler = new HttpClientHandler()
       });

This code will create an HttpClient that implements caching and stores the cache in memory. By implementing ICacheStore, you can store the cache in your custom repository. CacheCow is going to have persistent cache stores such as FileCacheStore, SqlCeCacheStore and SqliteCacheStore as a minimum. FileCacheStore will be similar to a browser's implementation of cache storage. Each of these cache stores will be implemented and released under its own NuGet package. To use an alternative cache store, you pass the store as a constructor parameter.
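
For example, a hedged sketch (MyCacheStore is a hypothetical ICacheStore implementation; the store-accepting constructor is implied by the paragraph above):

var client = new HttpClient(new CachingHandler(new MyCacheStore())
       {
           InnerHandler = new HttpClientHandler()
       });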

Usage

So in order to use CacheCow.Client, use the package manager in Visual Studio to download it and add a reference to it:

PM> Install-Package CacheCow.Client

This will also download and add a reference to the ASP.NET Web API client package, if you have not already added one. Make sure you get v0.1.3 or above (by the time of reading this).

After this you just need to create an HttpClient as above and add the CachingHandler as a delegating handler. That's it, you are ready to call services and cache the responses!

Sample

I am working on a sample project but for now, it is easiest to use the code below to call my CarManager Azure website which implements HTTP Caching. The code can be pasted from this GitHub gist.

CacheCow.Client adds a special header to the response which helps with debugging its various features. The header's name is x-cachecow and it carries various flags about the operations performed on the request/response. So in the code below, we will use this header to demonstrate the features of this library.

var client = new HttpClient(new CachingHandler()
                    {
                        InnerHandler = new HttpClientHandler()
                    }
 );
var initialResponse = client.GetAsync(
      "http://carmanager.azurewebsites.net/api/Car/5").Result;
var initialResponseHeader = initialResponse.Headers.Single(
       x => x.Key == CacheCowHeader.Name).Value.First();
Console.WriteLine(initialResponse.Headers.ETag.Tag);
Console.WriteLine(initialResponseHeader);

And we will see this printed:
"02e677a7799e484fb49447f8a600247d"
0.1.3.0;did-not-exist=true
As you can probably figure out, we have the ETag and the CacheCowHeader: the first value is the version, and did-not-exist means the item did not exist in the cache - which is understandable as this is the first call.

Now let's try this again:

var secondResponse = client.GetAsync("http://carmanager.azurewebsites.net/api/Car/5").Result;
var secondResponseHeader = secondResponse.Headers.Single(
      x => x.Key == CacheCowHeader.Name).Value.First();
Console.WriteLine(secondResponseHeader);

And what will print is:
0.1.3.0;did-not-exist=false;cache-validation-applied=true;retrieved-from-cache=true
So in fact it existed in the cache, was retrieved from the cache, and cache validation was applied. Cache validation is the process by which the client makes a conditional call to retrieve/update a resource only if the condition is met (see the Background section in this post). For example, for GET calls it sends the ETag in an If-None-Match header so the server returns the full response only if the resource has changed.

If you call a PUT on a resource that is cached, CacheCow.Client will use its ETag or Expires value to make a conditional PUT, unless you set UseConditionalPut property to false.

By-passing caching

There are some cases where you might not want the result to be cached or retrieved from the cache, regardless of the caching logic. All you have to do is set the request's CacheControl header to no-cache or no-store:

var nocacheRequest = new HttpRequestMessage(HttpMethod.Get, 
  "http://carmanager.azurewebsites.net/api/Car/5");
nocacheRequest.Headers.CacheControl = new CacheControlHeaderValue()
 {
  NoCache = true
 };
var nocacheResponse = client.SendAsync(nocacheRequest).Result;
var nocacheResponseHeader = nocacheResponse.Headers.FirstOrDefault(
 x => x.Key == CacheCowHeader.Name);
Console.WriteLine(nocacheResponseHeader);

This will print an empty header since we have bypassed the caching.

Last but not least

Thanks for trying out and using CacheCow. Please send me your feedback and bug reports. Just ping me on Twitter or use GitHub's issue tracker.

Tuesday, 31 July 2012

Using range header for retrieving range of IEnumerable<T> in ASP.NET Web API


Introduction

[Level T3] In this post, we talk about using HTTP's Range header to achieve requesting data ranges for entities.

Background

The HTTP spec defines a series of headers that a client can use to request partial content. These operations are optional (in most cases the spec uses the word SHOULD) but most servers implement them and browsers have increasingly been using them. If you have ever resumed downloading a big file from the internet, then you have used this feature (in fact all browsers use ranges if supported by the server). In this case, the client keeps requesting chunks and builds up the file until it is fully downloaded.

So here is how it works in a nutshell:

  1. The server can optionally inform clients, while serving a resource, that it supports partial content. It does that by sending an Accept-Ranges header with the value of the unit it supports, normally bytes. In our case, our server sends back a custom unit that we call x-entity.
  2. The client, either informed by the server of the partial content feature via the Accept-Ranges header or simply trying its luck, sends a request with a Range header with the value [unit]=[from]-[to], for example bytes=1024-2047. In this example, the client asks for the second KB of the file. The Range header can specify multiple ranges, for example bytes=500-600,601-999.
  3. The server returns the range requested and includes a Content-Range header with the value [unit] [from]-[to]/[TotalCount], for example bytes 1024-2047/12345678. It also returns status code 206 (partial content) to inform the client that the content is partial. If the server does not support the range specified, it sends back status code 416.
The spec does allow custom units, so a server can implement its own unit and inform the client of it using the Accept-Ranges header. Now the idea is that in ASP.NET Web API, we normally build many actions that return IEnumerable&lt;T&gt;. What if we could use the range to specify the range of the enumerable to be returned? Hmmm....

This feature can be useful in case of pagination on the client so that instead of API implementing a range parameter all the time, we just use HTTP's built-in features and encapsulate the implementation in a reusable component, in this case a filter.

So in the code to follow, we define a custom range unit and call it x-entity. "x-" prefix is a common naming convention on the web to specify custom tokens that are not part of the canonical tokens defined in RFC specs.

Implementing range in ASP.NET Web API

So where is the best place to implement this? We have these requirements:

  • Access to request headers to read Range header
  • Access to response header to set Accept-Range
  • Access to content headers to set Content-Range header
  • Access to content so that it can filter IEnumerable<T>
DelegatingHandler might look promising, but by the time it accesses the content, it has already been turned into a stream by the MediaTypeFormatters.

MediaTypeFormatter is an interesting option. I actually created a RangeMediaTypeFormatterWrapper that would wrap the MTFs, intercept the content and, if it was of type IEnumerable&lt;T&gt;, apply the filtering. Initially it seems the MTF does not have access to request headers, but here we had an interesting discussion and it turns out it can access the request using GetPerRequestFormatterInstance. But it also needs access to response headers.


So Glenn Block suggested filters and, after some thought, it seems to be the right approach considering the current limitations of MTF. The only drawback is that it has to be explicitly defined on the action - which can in some cases be a blessing, in fact. In any case, the filter approach, as you will see, is clean and does everything in one place.

Filters in ASP.NET Web API are not much different from MVC's. You get two methods, before (OnActionExecuting) and after (OnActionExecuted) the action, where you can change values in the request, response and action arguments, or simply examine them.
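
To make that concrete, here is a hedged skeleton of what such a filter could look like (this is a sketch of the approach, not the actual EnableRange implementation in the linked source):

public class EnableRangeAttribute : ActionFilterAttribute
{
 public override void OnActionExecuting(HttpActionContext actionContext)
 {
  // read actionContext.Request.Headers.Range and stash the requested
  // from/to values (e.g. in actionContext.Request.Properties) for later
 }

 public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
 {
  // set Accept-Ranges on the response, apply Skip/Take to the ObjectContent
  // value, set Content-Range and change the status code to 206 (PartialContent)
 }
}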

Using the code

You can get the source code from GitHub. As you can see, we have a single controller called CarController. The project is running on port 50714 on my machine so all examples will include this port - yours could be different. So download the project, build and run it. I created a client project to implement all the steps below using HttpClient, but there is an issue with the ASP.NET Web API implementation of the Range header: regardless of the unit set in the header, bytes is always sent to the server.

As you can see, we have a simple action with EnableRange filter defined on it:

[EnableRange]
public IEnumerable<Car> Get()
{
 return CarRepository.Instance.Get();
}

So now we use fiddler (or similar tool capable of sending HTTP requests, such as Google's Postman) to send this request:

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714

We will get all the cars in our repository in JSON format. But note the Accept-Ranges header with the value x-entity:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Accept-Ranges: x-entity
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 18:59:56 GMT
Content-Length: 1125

[{"Id":1,"Make":"Vauxhall","Model":"Astra","BuildYear":1997,...

So this should tell the client that it can use Range header. Now let's send a range header requesting 3rd item to 6th item (total of 4 items):

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714
Range: x-entity=2-5

And here is the response:

HTTP/1.1 206 Partial Content
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Range: x-entity 2-5/10
Expires: -1
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 19:00:19 GMT
Content-Length: 447

[{"Id":3,"Make":"Toyota","Model":"Yaris","BuildYear":2003,"Price":3750.0,...

Note the Content-Range header above and also the fact that we got the entities we requested in JSON (not shown fully above). So it tells us that it has sent back items from index 2 to index 5 and that the total number of items is 10. Also note the 206 response.

The server can send back * if the number of items is not known at the time of serving the request. I have used this option since I do not want to run a Count() on an IEnumerable&lt;T&gt;. It is very likely that the data is being retrieved from a database and we do not want to load the whole table into memory. So my approach is to try to cast the value to ICollection using the as keyword. If the cast succeeds I get the count, otherwise I set the count to *.
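
In other words, something along these lines (a minimal sketch; the variable names are mine):

// value is the IEnumerable being returned by the action
var collection = value as ICollection;
var totalCount = collection != null
 ? collection.Count.ToString()
 : "*";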

Another option in the spec is that the to part of the range is optional, so the client can send a Range header such as 2- with no upper bound. In this case, we must skip the first 2 items and return the rest:

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714
Range: x-entity=2-

In this case, our server returns this response:

HTTP/1.1 206 Partial Content
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Range: x-entity 2-9/10
Expires: -1
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 21:18:05 GMT
Content-Length: 897

[{"Id":3,"Make":"Toyota","Model":"Yaris","BuildYear":2003,"Price":3750.0, ....

Notes on implementation

The crux of the implementation is to call Skip() and Take() on the IEnumerable&lt;T&gt;. Our code has to work with all types, hence it cannot be generic; on the other hand, filters (and attributes as a whole) cannot use generics anyway. As such we just have to use reflection to do this:

// t is typeof(Enumerable); _elementType holds the element type of the IEnumerable<T> being filtered
var skipMethod = t.GetMethods().Where(m => m.Name == "Skip" && m.GetParameters().Count() == 2)
 .First().MakeGenericMethod(_elementType);
var takeMethod = t.GetMethods().Where(m => m.Name == "Take" && m.GetParameters().Count() == 2)
 .First().MakeGenericMethod(_elementType);
...
value = skipMethod.Invoke(null, new object[] { value,  from});
if(to.HasValue)
 value = takeMethod.Invoke(null, new object[] { value, to - from + 1 });

It is also useful to note that the return value is not accessible in the filter, so we have to resort to using Content, casting it to ObjectContent and using its Value property.

Conclusion

Range header defined in HTTP spec is useful in retrieving partial content. We can use custom units and we defined x-entity unit to enable selecting a range of entities (commonly used in pagination scenarios) and implemented it using a filter.

Monday, 30 July 2012

Serialising request and response in ASP.NET Web API


Introduction

[Level T3] This is a short post on serialising/deserialising HTTP request and response messages in ASP.NET Web API. Serialising messages manually can be done, but it is hard work and you can run into various problems. ASP.NET Web API provides a means of achieving this through HttpMessageContent. This post is a follow-up to this discussion.

Background

There are many cases where you could be interested in serialising HttpRequestMessage or HttpResponseMessage. For me, I needed this to implement caching features on the HttpClient in CacheCow framework.

Technically speaking, HTTP messages arrive in a serialised format, and all we need is access to the raw stream coming from the server - as such no processing would be required. Unfortunately this is not possible since Web API does not read the message as a raw stream and then process it; instead it starts by reading various chunks, parsing them as it goes.

However, the ASP.NET team implemented a feature that can be used for serialisation/deserialisation of request and response messages. If you have read Brad Wilson's batching post, you have probably noticed that HttpMessageContent can be used for implementing client-server batching. Now we will use this for serialisation.

HttpMessageContent

RFC 2616 in its appendices defines content types "message/http" and "application/http". application/http is a content-type that can contain more than one request or response.

Did we not have this in multi-part content-type? As we know, we can include different request or response parts in the same message and each part gets its own share of the headers, so what is the difference?

Well, the difference is that with multipart, each part can only have headers related to the content. And above all, they share the same status code. In application/http, each "part", as it were, is a complete request or response. For example, the requests each have their own URI and the responses their own status code.

HttpMessageContent can encapsulate multiple HttpRequestMessage or HttpResponseMessage but in our case we just need a single request or response.

Serialiser interface

Let's define an interface for our serialiser:

public interface IHttpMessageSerializer
{
 void Serialize(HttpResponseMessage response, Stream stream);
 void Serialize(HttpRequestMessage request, Stream stream);
 HttpResponseMessage DeserializeToResponse(Stream stream);
 HttpRequestMessage DeserializeToRequest(Stream stream);
}

UPDATE: Latest implementation is fully async and can be found as part of CacheCow library here.

Serialisation

In order to serialise, we need to create a new HttpMessageContent passing request or response and then use ReadAsByteArrayAsync to read the whole message as a byte array:

var httpMessageContent = new HttpMessageContent(request);
var buffer = httpMessageContent.ReadAsByteArrayAsync().Result;
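
Putting it together, a minimal synchronous sketch of the Serialize method from the interface above could look like this (the real CacheCow implementation linked below is fully async):

public void Serialize(HttpRequestMessage request, Stream stream)
{
 var httpMessageContent = new HttpMessageContent(request);
 var buffer = httpMessageContent.ReadAsByteArrayAsync().Result;
 stream.Write(buffer, 0, buffer.Length);
}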

As you can see, it is very easy to serialise. Now the only caveat is that if you are serialising in a delegating handler, this will consume the message content stream so that it cannot be read further down the pipeline. If you do, you will see this error message:

The stream was already consumed. It cannot be read again.

The trick (for now) is to call ReadAsByteArrayAsync to force the content to be loaded into the buffer. Although we do not need the buffer we read (since the actual reading will happen inside HttpMessageContent), the next time the content is read it will come from the buffer and not from the network. In my implementation I have made it optional whether to pre-read the content into the buffer.

Deserialisation

The trick with deserialisation is to create a normal HttpRequestMessage or HttpResponseMessage and set the Content-Type header to "application/http;msgtype=request" or "application/http;msgtype=response" accordingly. Then we use the special extension method to read the content back into an HttpRequestMessage or HttpResponseMessage:

var request = new HttpRequestMessage();
request.Content = new ByteArrayContent(memoryStream.ToArray());
request.Content.Headers.Add("Content-Type", 
    "application/http;msgtype=request");
return request.Content.ReadAsHttpRequestMessageAsync().Result;

As you can see, all the heavy lifting happens inside the HttpMessageContent and there is really little code that we need to write.

Conclusion

We can use HttpMessageContent to serialise/deserialise requests and responses in ASP.NET Web API. The full implementation can be found as a GitHub gist here and as part of the CacheCow library here. This implementation is fully async and takes advantage of the IO completion ports exposed in the Begin/End methods.

One word of caution is for cases where the message needs to be used after serialisation - which covers many scenarios, including serialisation in a DelegatingHandler. In these cases we need to invoke ReadAsByteArrayAsync (or similar) to ensure the content is read into the buffer.

Sunday, 22 July 2012

Introducing CacheCow: An HTTP caching framework for server and client


CacheCow

[Level T2] This is a short post to introduce CacheCow, an Open Source framework for HTTP caching on the client and server in ASP.NET Web API.

As some of you probably know, I have been working on caching for a while. If you go back to my post on CachingHandler, you will see that I contributed server-side HTTP caching implementation to the WebApiContrib project and included samples and tests.

However, I realised that an even more important part of the caching needs to be implemented on the client. Also, implementations of IEntityTagStore on various databases (in-memory and persisted) and the client-side cache storage all need their own projects, so this is bigger than just a feature on WebApiContrib. As such, I have decided to start a new project and port the server side from WebApiContrib. [Please bear in mind, the code in WebApiContrib will be maintained and supported, so if you are using it and have problems or experience bugs, please ping me on Twitter or GitHub.]

So the CacheCow framework has been born. The name itself is a word game with Cash Cow: it does the heavy lifting for caching with minimal setup, hence promises a good return on investment :). The project is open source and hosted on GitHub. And yes, I do accept pull requests; Tugberk has done a great job (also with some help from Sayed Ibrahim Hashimi, which I am so grateful for) and automated the whole build and NuGet package generation, and his PR was merged pretty much immediately. But please contact me beforehand about the work you would like to do - there is a ton of interesting work to do.

How to use CacheCow.Server

At the moment, only server-side CacheCow is ready for use. All you need to do is to use NuGet to get the package:

PM> Install-Package CacheCow.Server

This will add the CacheCow.Server and CacheCow.Common DLLs and the rest is all the same as the CachingHandler post and samples. Just add CachingHandler to the config:

GlobalConfiguration.Configuration.MessageHandlers.Add(new CachingHandler());

This will add the handler with all the default settings and will store the cache state in memory. There are many dials that you can turn to configure the handler according to your resource organisation - just see the sample in WebApiContrib. This sample connects to the CarManager sample and tests various scenarios for a fairly complex resource organisation.

How to use CacheCow.Server.EntityTagStore.SqlServer

As I pointed out above, by default the cache state is stored in memory. This is OK for single-server or test scenarios, but in the case of a server farm you would like the cache state to be maintained for the whole farm, so that when a resource's cache is invalidated, it is invalidated for all servers. In this case you need a central EntityTagStore (cache state store).

Building a cache state store is pretty easy: all you have to do is implement the IEntityTagStore interface, which has 5 methods. Since the cache might be invalidated not just for a CacheKey (previously called EntityTagKey) but also for a RoutePattern (see the CachingHandler post), key-value stores are not rich enough to provide this, but conventional databases can be used.

So I have implemented this for SQL Server. In order to get this EntityTagStore, just use NuGet:

PM> Install-Package CacheCow.Server.EntityTagStore.SqlServer

This will download the DLL and also a script file named script.sql located in <project root>\packages\CacheCow.Server.EntityTagStore.SqlServer.0.1.0\scripts

So create a database and run this script against it, and you will get one table and several stored procedures. Then, in order to use SqlServerEntityTagStore, create an instance and pass it to the CachingHandler constructor. The default constructor relies on a connection string named "EntityTagStore" being present in your web.config and pointing to your database.

GlobalConfiguration.Configuration.MessageHandlers.Add(
 new CachingHandler(new SqlServerEntityTagStore()));


Alternatively, pass the connection string to the constructor. That is all you have to do to use the SQL Server EntityTagStore.
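
For example, something along these lines (a hedged sketch; the constructor overload taking a connection string is implied by the sentence above, and the connection string value here is just an illustration):

GlobalConfiguration.Configuration.MessageHandlers.Add(
 new CachingHandler(
  new SqlServerEntityTagStore(
   @"Data Source=.;Initial Catalog=EntityTagStore;Integrated Security=True")));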

Roadmap

I am working on the client CachingHandler to be used with HttpClient. This will initially come with an InMemoryCacheStore, but then various persistent cache store implementations can be added (for example file-based, SQL CE, etc.).

Also CarManager sample needs to be ported for CacheCow which I am hoping to do very soon - although the old sample does work well.

Any question or comment please ping me on twitter (@aliostad) or GitHub.


Sunday, 1 July 2012

The place of Extension Methods in Software Design


Introduction

[Level T3] Extension methods - introduced back in C# 3.0 (.NET 3.5) - are useful tools in a .NET developer's toolset. Apart from their usefulness, extension methods are not an inherently object-oriented concept, yet we use them more and more in our API designs.

Extension methods were initially used for classes whose source code we did not own. But nowadays we are using them increasingly for types where we do own the source.

This post aims to have an in-depth look at the place of extension methods in the API design.

Background

Definition of Extension Methods according to MSDN is:
Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.
So as we all know, in order to create an extension method, we need to:
  1. Create a static non-generic class
  2. Create a static method
  3. Make the first parameter the type we are trying to add the method to, marked with the keyword this
For example (and one of my favourites), we can add this extension method to object to replicate T-SQL's IN operator:

public static bool IsIn(this object item, params object[] list)
{
 if (list == null || list.Length == 0)
  return false;
 return list.Any(x => x == item);
}

Now I can use this like an instance method:

string ali = "ali";
var isIn = ali.IsIn("john", "jack", "shabbi", "ali"); // isIn -> true

If I may digress a little bit here, this is not such a great implementation since:

var isInForInt = 1.IsIn(2, 3, 1); // isInForInt -> false!

As you have probably guessed, defining the extension method for the object type causes the boxed integer objects to be compared instead of the integers themselves, and they surely won't be equal. So a generic implementation solves the problem:

public static bool IsIn<T>(this T item, params T[] list)
{
 if (list == null || list.Length == 0)
  return false;
 return list.Any(x => EqualityComparer<T>.Default.Equals(x, item));
}

The reality is, an extension method only gives the illusion of the method being on the type; what is compiled is nothing but a plain old static method call. Having a look at the generated IL confirms this:

IL_0056:  call       bool ConsoleApplication1.ExtensionMethods::IsIn<int32>(!!0, !!0[])

ExtensionMethods above is the name of the static class I created for this method.

So extension methods are basically the same utility or helper static methods we have been writing, only glamorised to look like instance methods. Yet they have additional benefits:

  1. They lead to much more readable and natural code.
  2. I do not have to know the name of the helper class whose static methods I am using - in fact the behaviour has nothing to do with the static class. That class is not really a class in a true sense since it does not exhibit state or behaviour. And that is why it has to be declared static: to make its design intentions clear.
  3. Fluent APIs can easily be designed for older types without touching them.
  4. Since it is not really an instance method call, it can be called on null instances. This is a desirable side effect since we can check for nulls in the extension method and cater for them (none of the "object reference not set to an instance..." nonsense!) - see the sketch right after this list.
  5. Since it can be called on null instances, some type information for the null instance can be determined in the extension method (although it can be a base type or an interface), while this is not possible for a null object.
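
To illustrate points 4 and 5, here is a minimal sketch (IsNullOrEmptyExt is a name I made up for the example):

public static class StringExtensions
{
 public static bool IsNullOrEmptyExt(this string s)
 {
  // s can legitimately be null here; no NullReferenceException is thrown
  return string.IsNullOrEmpty(s);
 }
}

string myString = null;
var empty = myString.IsNullOrEmptyExt(); // true - a plain static call under the hood
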

Extension methods when we do not own the type

This has been the typical scenario. We have always wished that, for example, string had such and such a method, and there was no way to achieve this. Now, using extension methods, we can. This scenario also applies to cases where a historic API has been released (and you own the API) but cannot be changed. In this case, your API can be enhanced using extension methods.

With such usage, there is no decision to be made, since the design has already been done. Extension methods serve merely as a nice utility and syntactic sugar.

One of the most useful use cases I have found is the function composition in functional programming in C# (see some examples in my other posts here and here). This is especially important since you can achieve readability by method chaining. For example:

  usingReflection
   .Repeat(TotalCount)
   .OutputPerformance(stopwatch, performanceOutput)();

In addition to the examples above, let's have a look at a simple example to swallow the exception and optionally log the error (note how the implementation reuses itself to swallow errors that could arise from logging):

public static class WrapSwallowExtension
{
 public static Action<T> WrapSwallow<T>(this Action<T> action, Action<Exception> logger = null)
 {
  return (T t) =>
       {
           try
           {
            action(t);
           }
           catch (Exception e)
           {
      if (logger != null)
       logger.WrapSwallow()(e);             
           }

       };
 }
}

So I can use:

string myString = null;
Action<string> action = (s) => { s.ToLower(); }; // null reference exception! 
action.WrapSwallow()(myString); // swallowed

Now, here I created a new action just for the example, but when I am working in a functional scenario, I already have my actions and functions.

Extension methods when we own the type

I have heard some saying "Why would you wanna use an extension method when you own the code? Just add the method to the type."

There are cases where you own the type yet you would still use an extension method. Here we have a look at a few scenarios below.

Extension methods for interfaces

This is the most obvious use case. Most of the LINQ library is implemented using extension methods (while Microsoft owns the types). An interface cannot contain implementation, but you can use extension methods to add implemented enhancements to your interfaces.

Without getting into the debate of whether implementing ForEach against IEnumerable&lt;T&gt; is semantically correct or not (don't! I am not going there), you might have noticed that the method only exists on List&lt;T&gt;, so you have to call ToList() to use it. Well, this can easily be done for IEnumerable&lt;T&gt; too:

public static class IEnumerableExtensions
{
 public static IEnumerable<T> ForEachOne<T>(this IEnumerable<T> enumerable, Action<T> action)
 {
  foreach (var t in enumerable)
  {
   action(t);
   yield return t;
  }
 }
}

In this particular example, I do not own the source for IEnumerable<T> but even if I had, I would only be able to associate implementation with the interface using extension methods.

Overloading

This is the next common case. If you are familiar with ASP.NET MVC, you have probably noticed that most of the functionality of the HtmlHelper class is implemented using extension methods.

Html.TextBoxFor(x => x.Name)

In fact all different overloads of HtmlHelper for Textbox, RadioButton, Checkbox, TextArea, etc are implemented using extension methods. So the HtmlHelper class itself implements a core set of functionality which will be called by these extension methods.

Now let's look at this fictional interface:

public interface IDependencyResolver
{
   object Resolve(Type t);
   T Resolve<T>();
}

The interface has two methods for resolving a type: one using a generic type parameter, the other taking a Type instance. Whoever implements this will most likely implement the non-generic method and make the generic method call the non-generic one. Instead, we can trim the interface down and provide the generic overload as an extension method:

public interface IDependencyResolver
{
 object Resolve(Type t);
}

public static class IDependencyResolverExtension
{
 public static T Resolve<T>(this IDependencyResolver resolver)
 {
  return (T) resolver.Resolve(typeof (T));
 }
}


This will help to:
  • Trim down the interface and make it terser so it can express its design intentions more clearly
  • Save all implementers of the interface from having to repeat the same bit of code
When I look at the interface IQueryProvider, I wonder if it was designed before extension methods were available:

public interface IQueryProvider
{
    IQueryable<TElement> CreateQuery<TElement>(Expression expression);
    IQueryable CreateQuery(Expression expression);
    TResult Execute<TResult>(Expression expression);
    object Execute(Expression expression);
}

So the 4 methods could have been reduced to 2. Considering the fact that LINQ and extension methods arrived at the same time (C# 3.0, .NET 3.5), my suspicion seems very likely!

Dependency layering

Another case where you might decide to use an extension method, rather than exposing a direct method on the type, is when a type's sole dependency on another type is confined to a single method. This is very common in cases where the dependency is on a layer above the dependent type - while naturally it must be the other way around.

For example, let's look at this case:

// THIS WILL NOT WORK!

// sitting at entity layer
public class Foo
{
 // ...

 public Bar ToBar()
 {
  // ...
 }
}

// sitting at business layer
public class Bar
{
 // ...  
}

Now in this example, I have laid out these two classes in different logical layers to better illustrate the case - but it does not have to be that way; this is all about managing dependencies, in the same layer or across layers. We have baked Foo's dependency on Bar into the type for the sake of ToBar(). The solution is to create an extension method for ToBar().

So we can write (and completely decouple the two classes):

public class Foo
{
 
}

public class Bar
{

}

public static class FooExtensions
{
 public static Bar ToBar(this Foo foo)
 {
  return new Bar();
 }
}


Providing implementation for enumerations

This is one that probably many of us have done. Enumerations - unfortunately - cannot contain implementation, so extension methods are a good place to put the implementation code for enums. This usually has to do with conversion, parsing and formatting - see the sketch below.
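
As a small illustration (the enum and the mapping here are hypothetical):

public enum CarStatus
{
 InStock,
 Sold,
 Reserved
}

public static class CarStatusExtensions
{
 public static string ToDisplayText(this CarStatus status)
 {
  switch (status)
  {
   case CarStatus.Sold: return "Sold";
   case CarStatus.Reserved: return "Reserved";
   default: return "In stock";
  }
 }
}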

Delay decision on API signatures

With regards to an API, anything that goes into the public interface of a type is difficult to change. As such, attempting to provide all possible overloads and use cases of the type on its public interface is likely to fail.

Delaying such decisions by providing base functionality on the type and then adding more and more extension methods with each release is a useful approach. The ASP.NET team have used this technique for ASP.NET MVC and recently with ASP.NET Web API.

Drawbacks

Extension methods are static methods. As such they cannot be mocked using standard mocking frameworks. An extension method should not have any dependency other than the ones passed to it.

Let's look at this case:

public class Foo
{
 public string FileName { get; set; }

 public void Save()
 {
  // ...
 }
}

public static class FooExtensions
{
 public static void SafeSave(this Foo foo)
 {
  var directoryName = Path.GetDirectoryName(foo.FileName);
  if (!Directory.Exists(directoryName))
   Directory.CreateDirectory(directoryName);
  foo.Save();
 }
}

In this case, unit testing any class that uses SafeSave becomes a nightmare. What we need here is to create an interface, IFileSystem, and pass it to the extension method to abstract it from the real file system - see the sketch below.
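
A hedged sketch of that fix (the interface and its members are my own naming, not from an existing library):

public interface IFileSystem
{
 bool DirectoryExists(string path);
 void CreateDirectory(string path);
}

public static class FooExtensions
{
 public static void SafeSave(this Foo foo, IFileSystem fileSystem)
 {
  var directoryName = Path.GetDirectoryName(foo.FileName);
  if (!fileSystem.DirectoryExists(directoryName))
   fileSystem.CreateDirectory(directoryName);
  foo.Save();
 }
}

Now the file system dependency can be mocked in unit tests.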

Conclusion

Availability of extension methods has changed the way we design software APIs in the .NET world. We have started to build the basic functionality in the actual types and use extension methods to provide overloading.

There are 5 reasons to use extension methods when you own the type:

  • To associate implementation with interfaces
  • Overloading of an API
  • Removing dependency especially in logical layering
  • Providing implementation for enumerations
  • Delaying decision on API signatures
An extension method should not have any dependencies other than the ones passed to it.

Thursday, 28 June 2012

Using PocoHttp to consume Classic OData services


Introduction

[Level T2] In the last post, I introduced PocoHttp, which I have been working on. As I promised there, we can use it to access classic OData services in addition to the OData support we have in ASP.NET Web API. The OData functionality currently provided in PocoHttp is limited at the moment, but I will be building upon it.

[UPDATE: The PocoHttp package v0.1 is now available on NuGet. Just type PM> Install-Package PocoHttp in the package manager console or search for PocoHttp]

Why OData?

During the last few years, Microsoft has invested a lot in OData. Some other technologies have also been built upon it, such as RIA Services, which is heavily used in the Silverlight world. So Microsoft has been pushing OData and marketing it as an open protocol, but the reality is that outside Microsoft, adoption has been nil. I believe this has to do with issues regarding query syntax and payload.

Firstly, as we talked about before, the community believes that OData services are not RESTful, as the query syntax is unique to OData (Darrel explains here). It also exposes the bowels of the server to the client, which does not respect REST's client-server model. Personally I believe there is a place for some of the simpler query constructs of OData, such as top and skip, that could be implemented by non-OData services as well.

And the payload is limited to specific data structures in XML and JSON. As for the XML payload, I think AtomPub as a data-wrapping format is heavy and inefficient - not that this has gone unnoticed recently. Sending metadata to the client that consumes the data every second is not a very good idea - the metadata was used once to build the client, but the client is eternally doomed to receive that information until the day the sun stops shining (there might be a way to turn it off but that is not the default design). So it is like a web service that sends its WSDL all the time.

The JSON payload is definitely lighter - it contains some metadata but nothing to be concerned about. The JSON format has changed between versions 1 and 2 of OData, and the server could be sending V1 or V2 (etc.) depending on the content. There is a DataServiceVersion custom header that can be used to get the version number from the response, but I think, as a canonical HTTP implementation, it should have been part of the content type:
Content-Type: application/json;DataServiceVersion=2.0
Or
Content-Type: application/odata2.0+json
Anyhow, that is not a big problem and I will be using JSON format in this sample. You can ask for JSON using an Accept header or appending $format=json.

OK, with all this so why OData then? For two reasons.

First of all, ASP.NET team are working to support alternative formats that do not have all the extra metadata: plain JSON or XML. So OData is going to live much longer in an improved implementation.

The second reason is that there is already considerable adoption inside the Microsoft community. Many teams, while moving to new shiny Web APIs, might still have existing OData services that they need to use along with the new ones.

Why use PocoHttp to access OData?

While OData has the problems we talked about, it is not short of one thing: tools and SDKs. You can easily use DataService&lt;T&gt; to consume OData services and the API even allows for automatic generation of data models for Entity Framework. You are fully abstracted from OData itself while using all its benefits.

So why would you ever want to use anything other than the existing tools to consume OData services? Well, a few reasons.

With adoption of Web API increasing (which can be seen from the enthusiasm in the community), you will most likely start using Web APIs alongside OData ones. You will be building custom handlers to take care of authentication, tracing, caching, auditing, exception handling, etc. and start to rely on them. But you would not be able to use them with the existing client tools, since you need a client that is built upon the new HTTP pipeline in System.Net.Http.dll. That is PocoHttp!

Also, you might want to harness the extreme power that the client has in terms of the queries it can run (which we touched on in the last post) and limit it to a handful of essential but safe constructs - protecting your database and services. Then again, that is one of the goals of PocoHttp.

New sample on the block

So I have created a sample in PocoHttp that connects to the sample OData service exposing the Northwind database.

This basically sets up PocoClient to retrieve data in JSON format. Since the version of the JSON could be V1 or V2, the code needs to be able to support both.

Show us some code

OK! Let's look at the request setup. There are two ways to request JSON: by setting the Accept header or by appending $format=json:

// 1
pocoConfig.RequestSetup = (request) => request.Headers.Add("Accept", "application/json");

// 2
pocoConfig.RequestSetup = (request) => request.RequestUri = new 
   Uri(request.RequestUri.ToString() + "&$format=json",UriKind.Relative);

PocoConfiguration is a set of configuration parameters which is passed to the client constructor. One of the parameters is RequestSetup, which is an Action&lt;HttpRequestMessage&gt; that allows the request to be modified before sending it to the server.

On the way back we use the ResponseReader function to read the content into objects. It receives the response, the formatters and the element type, and must return the objects it has read (which will be an IEnumerable<T>):

public Func<HttpResponseMessage, IEnumerable<MediaTypeFormatter>, Type, Task<object>> 
   ResponseReader { get; set; }

So this is a property sitting on the PocoConfiguration. PocoHttp provides a default implementation, but since the data we get back is wrapped inside OData top-level envelopes, we have to provide our own. We need to define Payload<T> types that represent the structure of the data we are receiving. I have defined one for V1 and one for V2.
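
As a rough sketch (the exact member layout here is an assumption based on the OData JSON wire format, where V1 puts the entities directly under "d" and V2 nests them under "d.results"), these envelope types could look like this:

using System.Collections.Generic;

namespace PocoHttp.Samples.ClassicOData.V1
{
 // V1 JSON: { "d": [ {...}, {...} ] }
 public class Payload<T>
 {
  public List<T> d { get; set; }
 }
}

namespace PocoHttp.Samples.ClassicOData.V2
{
 // V2 JSON: { "d": { "results": [ {...}, {...} ] } }
 public class Payload<T>
 {
  public Results<T> d { get; set; }
 }

 public class Results<T>
 {
  public List<T> results { get; set; }
 }
}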


The gist of what we need to do is:
  1. Create a closed generic type of the envelope for the entity using MakeGenericType
  2. Call the non-generic ReadAsAsync passing that type
  3. Use reflection on the result to read off the properties, unwrap and get to the entities we need
For example, the code for V1 looks like this:

private static object ReadFromV1(HttpResponseMessage response,
 IEnumerable<MediaTypeFormatter> formatters, Type type)
{
 // build the closed generic envelope type, i.e. Payload<TEntity>
 var outermostType = typeof(PocoHttp.Samples.ClassicOData.V1.Payload<>)
  .MakeGenericType(type);
 // deserialise the envelope using the non-generic ReadAsAsync overload
 var result = response.Content.ReadAsAsync(outermostType, formatters).Result;
 // unwrap the "d" property to get to the entities
 return outermostType.GetProperty("d").GetValue(result, null);
}

In the example, I have used only Take and Skip but you can use direct Where clauses as well.
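
For illustration, putting the pieces together against the public Northwind sample service might look roughly like this (the Customer entity and the exact configuration wiring are assumptions - consult the sample for the real setup):

// a rough sketch; Customer and the configuration members shown are assumptions
var pocoConfig = new PocoConfiguration();
pocoConfig.RequestSetup = (request) => request.Headers.Add("Accept", "application/json");
// pocoConfig.ResponseReader would be set to the V1/V2-aware reader shown above

var pocoClient = new PocoClient(pocoConfig)
        {
            // note the mandatory trailing slash
            BaseAddress = new Uri("http://services.odata.org/Northwind/Northwind.svc/")
        };

var customers = pocoClient.Context<Customer>() // calling "/Customers"
 .Skip(10)
 .Take(5)
 .ToList();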

Conclusion

While ASP.NET Web API is the future of providing services and data, there is a considerable number of existing OData services that still need to be consumed. 

While the existing OData client tools expose the full features of OData, abstracting the client from HTTP details, we outlined two reasons why you might want to use PocoHttp instead to consume these services: re-using your HTTP pipeline handlers, and limiting the queries the client can run so that your database and services stay protected.

Monday, 25 June 2012

Introducing PocoHttp: Consuming HTTP data services


Introduction

[Level T2] PocoHttp is a non-opinionated Open Source .NET library for seamless consumption of HTTP data services using a familiar IQueryable<T> interface. Version 0.1 of the library is now available on GitHub - a NuGet package is to follow soon.

This post explains the background and motivation for this library as well as its usage and possible future directions.

Motivation and background

ASP.NET Web API exposes the full richness of the HTTP spec. As such, many new avenues have been opened for creating HTTP client-server applications. Having said that, with the HTTP spec aiming to be Turing-Complete, there is a burden involved in implementing clients capable of deep protocol coherency.

ASP.NET Web API has made it easy to achieve protocol coherency as part of the HTTP pipeline - modelled as Russian Dolls (see Part 6 of the series).

With a lot of WCF services (that commonly served a domain's value objects) soon to be moved to Web API, there is a requirement for abstracting the HTTP-level aspects for seamless consumption of such services. PocoHttp is designed to provide an IQueryable<T> interface which is familiar to developers who have been working with ORMs such as Entity Framework, NHibernate, etc.

WCF Data Services are able to provide this seamless communication but they:
  • Can only generate AtomPub payloads
  • As such offer no content negotiation or ability to generate plain XML or JSON (the Accept header is honoured but an OData-specific format is returned)
  • Use the OData query syntax, which is deemed by many in the community not RESTful since it assumes non-HTTP syntax coherence in the form of query string parameters on the client
  • Fully expose the data to the outside world
Hence there is a need for a .NET client library that takes advantage of the new HTTP features exposed in System.Net.Http to consume HTTP data services, whether developed in ASP.NET Web API or on any other platform. PocoHttp rises up to that challenge.

Minimal example

Before going much further into details, it might be useful to present a minimal example of its usage. The example below is from the PocoHttp samples (which host a minimal server too), available on GitHub.

var pocoClient = new PocoClient()
        {
            BaseAddress = new Uri("http://localhost:12889/api/")
        };
var list = pocoClient.Context<Car>() // calling "/api/Cars"
 .Take(1).ToList(); // getting the first item
Console.WriteLine(list[0]);

As can be seen above, pocoClient is initialised with a BaseAddress (note the trailing slash, which is mandatory for correct URI composition) and then a generic context of the Car type is queried. PocoClient assumes (using the naming convention setup) that Car should be exposed at BaseAddress + "Cars", so it makes an HTTP request and turns the result into Car objects.

PocoHttp's Grammar model

Basically, PocoHttp translates IQueryable<T> queries into HTTP semantics (query string parameters in the case of GET, or a query criteria object in the body in the case of POST) using a grammar and a set of naming convention settings - and then turns the result into .NET types using the media type formatting provided by ASP.NET Web API.



Currently, two built-in grammars are provided: OData and Pagination. The OData implementation does not cover the full feature set of OData (see below). New grammars can be implemented to translate the queries into other syntaxes, such as MongoDB's or other custom ones.

Normally, a grammar modifies the request by adding query string parameters. While it is not recommended, a grammar can also turn the request into an HTTP POST and send a payload containing the criteria. The reason this is not recommended is that using the POST method for GETting resources is not REST-friendly, and responses to POST cannot be cached - while caching the results of calls to data services can be very useful.

OData implementation

The current implementation of the OData grammar supports the features below (a sketch of how they compose follows the list):
  • Where: the current implementation supports direct property expressions. Complex property expressions (such as Car.Make.StartsWith("M")) are not yet supported
  • AND, OR, greater than, less than, less than or equal, greater than or equal, not equal in Where expressions
  • Take
  • Skip
  • OrderBy
  • OrderByDescending
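
For example, a query combining several of these constructs would translate roughly as shown below (the generated URL is an approximation based on the standard OData query options):

// a sketch of how the supported constructs compose
var cars = pocoClient.Context<Car>()
 .Where(car => car.BuildYear > 2000 && car.Price <= 10000)
 .OrderBy(car => car.Price)
 .Skip(10)
 .Take(5)
 .ToList();
// roughly: /api/Cars?$filter=(BuildYear gt 2000) and (Price le 10000)
//          &$orderby=Price&$skip=10&$top=5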

Exposing an HTTP service in ASP.NET Web API (so that it can be consumed by PocoHttp) is simple: you just need to return IQueryable<T> from your action and decorate the action with the [Queryable] attribute.
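
A minimal action could look something like this (a sketch; the in-memory data source is hypothetical):

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class CarsController : ApiController
{
 // hypothetical in-memory data source
 private static readonly List<Car> _cars = new List<Car>();

 [Queryable]
 public IQueryable<Car> Get()
 {
  // returning IQueryable<Car> lets [Queryable] apply the OData
  // query options ($filter, $top, $skip, ...) sent by the client
  return _cars.AsQueryable();
 }
}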

Having said that, while this feature is currently available in ASP.NET Web API RC, the future of OData support in Web API is a bit unclear: the Queryable attribute has been removed from the ASP.NET Web API source code and it might not ship with the RTM. As far as we can find out from the ASP.NET team, OData support will be implemented using the OData libraries and might ship out of band after the RTM. So the team is committed to full OData support but the timeline is unclear. For now, you can happily use the Queryable attribute in ASP.NET Web API RC.

As for PocoHttp, I am committed to implementing the full OData query syntax in future releases.

Using PocoHttp against existing AtomPub OData services

Nothing prevents us from consuming classic OData services that return AtomPub, by just adding an AtomPub media type formatter to the formatters. This will allow you to take advantage of all the niceties of HttpClient while using your existing OData services.

I will be having a post on this soon.

Disclaimer

As I initially said, this is a non-opinionated framework. You can expose your full domain model through an IQueryable<T> interface out to the client, but this is not necessarily a good thing. Exposing the bowels of your server and data to the outside world is an anti-pattern and could be a costly mistake. For example, you can easily expose your database to the outside world by allowing this query:

pocoClient.Context<Car>()
 .Where(car => car.Make.EndsWith("L"))

to turn into this SQL statement, which can bring down the server by having to scan every record:
SELECT * FROM CAR WHERE MAKE LIKE '%L'

Also, free-form LINQ queries enable the client to select data using criteria involving columns that have no indices on them.

Pagination grammar

Pagination grammar is a simple non-OData syntax which is REST-friendlier. An example of the syntax is /api/Cars?skip=100&count=20.

Pagination vocabulary only supports 4 constructs:

  1. Skip: normally represented by skip
  2. Take: normally represented by count
  3. OrderBy: normally represented by order
  4. OrderByDescending: normally represented by orderDesc

All of the above representations can be changed by the client. A typical server implementation supporting these parameters would be:

public IEnumerable<Car> Get(int skip, int count)
{
    return _repository.Skip(skip).Take(count);
}
As can be seen, the server implementation only supports skip and count, and ignores OrderBy and OrderByDescending.
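
On the client side, the same parameters would be produced by an ordinary LINQ query against the context (a sketch, assuming the Pagination grammar has been configured on the client):

// translates to something like /api/Cars?skip=100&count=20 under the Pagination grammar
var page = pocoClient.Context<Car>()
 .Skip(100)
 .Take(20)
 .ToList();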

Overriding naming convention and routing

There are times when you might want to override the default route:
pocoClient.Context<Person>()
 .Where(p => p.Name == "Ali"); // calls /api/Persons?$filter=(Name eq 'Ali')
while you might want it to go to /api/Employees.

You have two ways to achieve this:

  1. Decorate your entity with EntityUriAttribute - in this case [EntityUri("Employees")]
  2. Pass the full URI, as below, when creating the context:

pocoClient.Context<Person>("http://server/api/Employees")
 .Where(p => p.Name == "Ali")

Conclusion

In this post, we have introduced PocoHttp which is an emerging open source client library for consuming HTTP data services.

PocoHttp can be used to consume ASP.NET Web API services where IQueryable<T> is returned, existing OData services or any other HTTP data service built in other platforms - provided the grammar is implemented for the query syntax.

Using the CachingHandler in ASP.NET Web API


Introduction


[NOTE: Please also see the updated post and the new framework called CacheCow.
This class has been removed from the WebApiContrib code and is NO LONGER SUPPORTED]


[Level T3] Caching is an important concept in HTTP and comprises a sizeable chunk of the spec. ASP.NET Web API exposes the full goodness of the HTTP spec, and caching can be implemented as a message handler as explained in Parts 6 and 7. I have implemented a CachingHandler and contributed the code to WebApiContrib on GitHub.

This post has two sections: first a primer on HTTP caching and then how to use the handler. The code uses ASP.NET Web API RC with .NET 4 (VS 2010 or 2012).

Background

NOTE: This topic is fairly advanced and complex. You do not necessarily need to know all this in order to use the CachingHandler and you are welcome to skip it, but a more in-depth knowledge of HTTP can go a long way.

Caching is a very important feature in HTTP. RFC 2616 covers this topic extensively in section 13. A review of the whole spec is beyond the scope of this post but we will briefly touch on the subject.

First of all, let's get this straight: this is not about putting an object in memory as we do with HttpRuntime.Cache. In fact the server does not store anything (more on this below); it only tells the client (or mid-stream cache/proxy servers) what can be cached and for how long, and validates the cached copies.

Basically, HTTP provides the semantics and mechanism for the origin server, the client and mid-stream proxy/cache servers to effectively reduce traffic by validating the version of the resource they have against the server and retrieving the resource only if it has changed. This process is usually referred to as cache validation.

In HTTP 1.0, the server would return a Last-Modified header with the resource. A client/user agent could use this value and send it in the If-Modified-Since header of subsequent GET requests. If the resource had not changed, the server would respond with a 304 (Not Modified), otherwise the resource would be sent back with a new Last-Modified header. This is also useful in PUT scenarios where a client sends a PUT request to update a resource only if it has not changed: an If-Unmodified-Since header is sent. If the resource has not changed, the server fulfils the request and sends a 2xx response, otherwise a 412 (Precondition Failed) is sent.
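
For example, a conditional GET using this mechanism would look something like this (a hypothetical exchange):

GET /api/Car/1 HTTP/1.1
Host: localhost:8031
If-Modified-Since: Thu, 21 Jun 2012 23:35:46 GMT

HTTP/1.1 304 Not Modified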

For many reasons (including the fact that HTTP dates do not have milliseconds) it was felt that this mechanism was not adequate. In HTTP 1.1, the ETag (Entity Tag) was introduced, which is an opaque ID in the format of a quoted string returned with the resource. ETags can be strong (refer to RFC 2616 for more info) or weak, in which case they start with W/ such as W/"12345". An ETag works like a version ID for a resource - if two ETags (for the same resource) are equal, those two versions of the resource are the same.

The ETag for the same resource can differ according to various headers. For example, a client can send an Accept-Language header of de-DE or en-GB and the server would send different ETags. The server can declare this variability for each resource by using the Vary header.
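
To illustrate, two requests for the same resource in different languages might produce responses like these (a hypothetical exchange; the ETag values are made up):

GET /api/disclaimer HTTP/1.1
Accept-Language: de-DE

HTTP/1.1 200 OK
ETag: "de-de-1"
Vary: Accept-Language

GET /api/disclaimer HTTP/1.1
Accept-Language: en-GB

HTTP/1.1 200 OK
ETag: "en-gb-1"
Vary: Accept-Language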

Cache validation for GET and PUT requests is similar, but uses If-None-Match (for GET) and If-Match (for PUT) instead.

Well... complex? Yeah, pretty much, and yet I have not covered some other aspects and the edge cases. But don't worry! You will be abstracted from a lot of it if you use the CachingHandler.

Using CachingHandler right out of the box

OK, using the CachingHandler is straightforward, especially if you go with the default settings. You can have a look at the CarManager sample in the WebApiContrib project. The CarManager.Web sample demonstrates the server-side setup of caching, and CarManager.CachingClient makes calls to the server and tests various caching scenarios.

All you have to do is create an ASP.NET Web API project and add the CachingHandler as a delegating handler, as we learnt in Part 6:

// create the handler with default settings (constructor arguments are covered below)
var cachingHandler = new CachingHandler();
GlobalConfiguration.Configuration.MessageHandlers.Add(cachingHandler);

And you are done! Now let's try the server with some scenarios. I would suggest that you use the CarManager sample; otherwise create a simple controller and implement GET, PUT and POST on it.

Now let's make some requests and look at the responses. I would suggest using Fiddler or Chrome's Postman to send requests and view the responses.

So if we send a GET request:

GET http://localhost:8031/api/Cars HTTP/1.1
User-Agent: Fiddler
Host: localhost:8031

We get back this response (or similar; some headers removed and body truncated for clarity):

HTTP/1.1 200 OK
ETag: "54e9a75f2dbb4edca672f7a2c4a73dca"
Vary: Accept
Cache-Control: no-transform, must-revalidate, max-age=604800, private
Last-Modified: Thu, 21 Jun 2012 23:35:46 GMT
Content-Type: application/json; charset=utf-8

[{"Id":1,"Make":"Vauxhall","Model":"Astra","BuildYear":1997,"Price":175.0....

So we see the ETag header here along with the important caching headers. Now if we make another GET call, we get back the same response, with the same ETag and Last-Modified values.

Generating the same ETag is fine (showing our CachingHandler is doing something) but we have not yet seen any caching. That is where the client has some work to do. It has to use If-None-Match with the ETag to conditionally ask for the resource: if it matches on the server, it will get back a 304; if not, the server will return the new resource:
GET http://localhost:8031/api/Cars HTTP/1.1
User-Agent: Fiddler
Host: localhost:8031
If-None-Match: "54e9a75f2dbb4edca672f7a2c4a73dca"
Here we get back a 304 (Not Modified) as expected:
HTTP/1.1 304 Not Modified
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
ETag: "54e9a75f2dbb4edca672f7a2c4a73dca"
Server: Microsoft-IIS/8.0
Date: Sun, 24 Jun 2012 07:34:29 GMT

A typical server that can use CachingHandler

CachingHandler makes a few RESTful assumptions about the server for effective caching. Some of these assumptions can be overridden, but this is generally not recommended. These assumptions are:

  • HTTP verbs (POST/GET/PUT/DELETE for CRUD operations) are used to modify resources - not RPC-style actions (such as POST /api/Cars/Add). This is the most fundamental assumption.
  • All resources are modified through the same HTTP pipeline that implements the caching. If a resource is modified outside the pipeline, the cache state needs to be updated by the same process.
  • Resources are organised in a natural, cache-friendly manner: invalidation of related resources can be done with minimal setup (more details upcoming).

Defining cache state

As we said, this has nothing to do with HttpRuntime.Cache! Unfortunately the ASP.NET implementation really blurs the line between HTTP caching (where the resource gets cached on the client or mid-stream cache servers) and server-side caching (where the rendered output gets cached on the server).

Cache state is a collection of data that keeps track of each resource along with its last modified date and ETag. It might initially seem that for each resource there exists a single such piece of information, but as we touched upon above, a resource can have different representations, each of which needs to be stored separately on the client while they will most likely be invalidated together.

For example, the resource /api/disclaimer can exist in multiple languages; as such the client has to cache each representation separately, but when the disclaimer changes, all such representations need to be invalidated. That requires storage of some sort to keep track of all this data. The current implementation comes with an in-memory store, but in a web farm scenario this needs to be a persistent store.

Cache state storage and cache invalidation

So we do not need to store the cached resources on the server, but we DO need to store the ETags and various bits of state on the server. If we only have a single server, this state can be stored in memory. In a web farm (or even web garden) scenario a persisted store is needed. This store is called the Entity Tag Store and is represented by a simple interface:

public interface IEntityTagStore
{
 bool TryGetValue(EntityTagKey key, out TimedEntityTagHeaderValue eTag);
 void AddOrUpdate(EntityTagKey key, TimedEntityTagHeaderValue eTag);
 bool TryRemove(EntityTagKey key);
 int RemoveAllByRoutePattern(string routePattern);
 void Clear();
}

We need an implementation of this interface so that cache management can be abstracted away from controllers and instead done in the DelegatingHandlers.
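
To make this concrete, a minimal in-memory implementation could be based on a ConcurrentDictionary. This is a rough sketch only - the bundled InMemoryEntityTagStore will differ, and the way EntityTagKey exposes its route pattern is an assumption here:

using System.Collections.Concurrent;
using System.Linq;

// a sketch of an in-memory store; not the actual InMemoryEntityTagStore
public class SimpleEntityTagStore : IEntityTagStore
{
 private readonly ConcurrentDictionary<EntityTagKey, TimedEntityTagHeaderValue> _store =
  new ConcurrentDictionary<EntityTagKey, TimedEntityTagHeaderValue>();

 public bool TryGetValue(EntityTagKey key, out TimedEntityTagHeaderValue eTag)
 {
  return _store.TryGetValue(key, out eTag);
 }

 public void AddOrUpdate(EntityTagKey key, TimedEntityTagHeaderValue eTag)
 {
  _store.AddOrUpdate(key, eTag, (k, existing) => eTag);
 }

 public bool TryRemove(EntityTagKey key)
 {
  TimedEntityTagHeaderValue removed;
  return _store.TryRemove(key, out removed);
 }

 public int RemoveAllByRoutePattern(string routePattern)
 {
  // assumption: the key's string form starts with its route pattern
  var matchingKeys = _store.Keys
   .Where(k => k.ToString().StartsWith(routePattern))
   .ToList();
  var removedCount = 0;
  foreach (var key in matchingKeys)
  {
   if (TryRemove(key)) removedCount++;
  }
  return removedCount;
 }

 public void Clear()
 {
  _store.Clear();
 }
}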

Introducing some concepts (you may skip and come back to it later)

This store keeps track of the ETag (and related state stored in a TimedEntityTagHeaderValue) based on an entity tag key, which is calculated from the resource URI and the content of the important headers (whose list appears in the Vary header). An in-memory implementation for a single server is provided out of the box with the CachingHandler - I will be creating a SQL Server implementation of it very soon; watch this space.

It is important to note that a change in the resource will most likely invalidate all forms of the resource, so all permutations of the important headers will be invalidated. Hence invalidation is usually performed at the resource level. Sometimes several related resources can be represented as a RoutePattern. By default, the URI of a resource is its RoutePattern.

Also, in some cases a change in a resource will invalidate linked resources. For example, a POST to /api/cars to add a car will invalidate /api/cars/fastest and /api/cars/mostExpensive. In this case, "/api/cars/*" can be defined as the linked RoutePattern (since /api/cars itself does not qualify for /api/cars/*).

Some assumptions in CachingHandler

  1. Re-emphasising that resources can only change through the HTTP API (using the PUT, POST and DELETE verbs). If resources are to be changed outside the API, it is the responsibility of the application to use IEntityTagStore to invalidate the cache for those resources.
  2. If no Vary headers are defined by the application, CachingHandler creates weak ETags.
  3. A change in the resource invalidates all forms of the resource (all permutations of the important header values).

CarManager sample

I have considered a pretty complex and interrelated routing and caching requirement for the sample, to display what is possible. The resources available are:

  1. /api/Car/{id}: GET, PUT and DELETE
  2. /api/Cars: GET and POST
  3. /api/Cars/MostExpensive: GET
  4. /api/Cars/Fastest: GET

So creating a car invalidates the cache for 2, 3 and 4. Updating the car with id=1 invalidates 2, 3 and 4 in addition to the car itself at /api/Car/1. Deleting a car invalidates 2, 3 and 4.

The client sample (CarManager.CachingClient) shows how to call the server with the headers needed to validate the cache.

Configuring CachingHandler

The best place to start is the CarManager.Web sample, which gives an idea of how the various setups can be used to configure complex caching requirements. Basically, the points below can be used to configure the CachingHandler.

Constructor

A list of request header names can be passed that will define the Vary headers. If no Vary header is passed, the system only generates weak ETags. Optionally an implementation of IEntityTagStore can also be passed, otherwise the default InMemoryEntityTagStore will be used.
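
Based on the description above, construction might look roughly like this (a sketch only - check the WebApiContrib source for the actual overloads and parameter order):

// a sketch; the exact constructor overloads are an assumption
var cachingHandler = new CachingHandler("Accept", "Accept-Language");
GlobalConfiguration.Configuration.MessageHandlers.Add(cachingHandler);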


EntityTagKeyGenerator

This is an opportunity to provide the linked resources for a resource.

Other customisation points

CachingHandler provides properties in the form of functions with default implementations that can be changed to override the default behaviour, which we will cover in the upcoming post on "Extending CachingHandler".


Conclusion

CachingHandler is a server-side DelegatingHandler which can be used to abstract caching away from individual ApiControllers so that controller logic can be written without having to worry about caching.

The code is hosted on GitHub (currently sitting at my fork, waiting to be merged!) and comes with both a client and a server sample - CarManager.Web and CarManager.CachingClient.