Tuesday, 31 July 2012

Using range header for retrieving range of IEnumerable<T> in ASP.NET Web API


Introduction

[Level T3] In this post, we talk about using HTTP's Range header to request ranges of entities from actions returning IEnumerable<T>.

Background

The HTTP spec defines a series of headers that a client can use to request partial content. These operations are optional (in most cases the spec uses the word SHOULD) but most servers implement them and browsers have increasingly been using them. If you have ever resumed downloading a big file from the internet, you have used this feature (in fact all browsers use ranges if the server supports them). In this case, the client keeps requesting chunks and builds up the file until it is fully downloaded.

So here is how it works in a nutshell:

  1. The server can optionally inform clients, while serving a resource, that it supports partial content. It does this by sending the Accept-Ranges header with a value of the unit it supports, normally bytes. In our case, our server sends back a custom unit that we call x-entity.
  2. The client, either informed of the partial content feature by the Accept-Ranges header or simply trying its luck, sends a request with a Range header of the form [unit]=[from]-[to], for example bytes=1024-2047. In this example, the client asks for the second KB of the file. The Range header can also specify multiple ranges, for example bytes=500-600,601-999.
  3. The server returns the range requested and includes a Content-Range header with the value [unit] [from]-[to]/[TotalCount], for example bytes 1024-2047/12345678. It also returns status code 206 (Partial Content) to inform the client that the content is partial. If the server cannot satisfy the range specified, it sends back status code 416.
The spec does allow for custom units, so a server can implement its own unit and inform the client of it using the Accept-Ranges header. Now, the idea is that in ASP.NET Web API we normally build many actions that return IEnumerable<T>. What if we could use the range to specify the range of the enumerable to be returned? Hmmm....

This feature can be useful for pagination on the client: instead of the API implementing a range parameter on every action, we just use HTTP's built-in features and encapsulate the implementation in a reusable component, in this case a filter.

So in the code to follow, we define a custom range unit and call it x-entity. The "x-" prefix is a common naming convention on the web for custom tokens that are not part of the canonical tokens defined in RFC specs.

Implementing range in ASP.NET Web API

So where is the best place to implement this? We have these requirements:

  • Access to the request headers to read the Range header
  • Access to the response headers to set Accept-Ranges
  • Access to the content headers to set the Content-Range header
  • Access to the content itself so that it can filter the IEnumerable<T>
A DelegatingHandler might look promising, but by the time it accesses the content, the content has already been turned into a stream by the MediaTypeFormatters.

MediaTypeFormatter is an interesting option. I actually created a RangeMediaTypeFormatterWrapper that would wrap the MTFs, intercept the content and, if it was of type IEnumerable<T>, apply the filtering. Initially it seemed an MTF does not have access to the request headers, but in here we had an interesting discussion and it turns out it can access the request using GetPerRequestFormatterInstance. However, it also needs access to the response headers.


So Glenn Block suggested filters and, after some thought, that seems to be the right approach considering the current limitations of MTF. The only drawback is that it has to be explicitly defined on the action - which in some cases can in fact be a blessing. In any case, the filter approach, as you will see, is clean and does everything in the same place.

Filters in ASP.NET Web API are not much different from those in MVC. You get two methods, one before (OnActionExecuting) and one after (OnActionExecuted) the action, where you can change values in the request, response and action arguments, or simply examine them.
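
To make the shape concrete, here is a rough skeleton of such a filter - an illustrative sketch only, not the actual EnableRange source (which is in the GitHub project below):

public class EnableRangeAttribute : ActionFilterAttribute
{
 public override void OnActionExecuting(HttpActionContext actionContext)
 {
  // before the action: read and validate actionContext.Request.Headers.Range,
  // keeping the requested from/to values for the second half
 }

 public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
 {
  // after the action: set Accept-Ranges on the response, apply the range
  // to the IEnumerable<T> content and add the Content-Range header
 }
}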

Using the code

You can get the source code from GitHub. As you can see, we have a single controller called CarController. The project runs on port 50714 on my machine so all examples will include this port - yours could be different. So download the project, build and run it. I created a client project to implement all the steps below using HttpClient, but there is an issue with the ASP.NET Web API implementation of the Range header: regardless of the unit set in the header, bytes is always sent to the server.

As you can see, we have a simple action with the EnableRange filter defined on it:

[EnableRange]
public IEnumerable<Car> Get()
{
 return CarRepository.Instance.Get();
}

So now we use Fiddler (or a similar tool capable of sending HTTP requests, such as Postman) to send this request:

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714

We will get all the cars in our repository in JSON format. But note the Accept-Ranges header with a value of x-entity:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Accept-Ranges: x-entity
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 18:59:56 GMT
Content-Length: 1125

[{"Id":1,"Make":"Vauxhall","Model":"Astra","BuildYear":1997,...

So this should tell the client that it can use the Range header. Now let's send a Range header requesting the 3rd to the 6th item (a total of 4 items):

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714
Range: x-entity=2-5

And here is the response:

HTTP/1.1 206 Partial Content
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Range: x-entity 2-5/10
Expires: -1
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 19:00:19 GMT
Content-Length: 447

[{"Id":3,"Make":"Toyota","Model":"Yaris","BuildYear":2003,"Price":3750.0,...

Note the Content-Range header above and also the fact that we got the entities we requested in JSON (not shown fully above). It tells us that the server has sent back items from index 2 to index 5 and that the total number of items is 10. Also note the 206 response.

The server can send back * if the number of items is not known at the time of serving the request. I have used this option since I do not want to run a Count() on an IEnumerable<T>: it is very likely that the data is being retrieved from a database and we do not want to load the whole table into memory. So my approach is to try casting the value to ICollection using the as keyword. If the cast succeeds, I get the count; otherwise I set the count to *.
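
In code, that check boils down to something like this fragment (value here is the object returned by the action; ICollection is the non-generic interface from System.Collections):

var collection = value as ICollection; // the cast does not enumerate the sequence
var totalCount = collection == null
 ? "*"                                 // total unknown - avoid Count() on IEnumerable<T>
 : collection.Count.ToString();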

Another option in the spec is that the to value of the range is optional, so the client can send a Range header with the value 2-. In this case, we must skip the first 2 items and return the rest:

GET http://localhost:50714/api/Car HTTP/1.1
User-Agent: Fiddler
Host: localhost:50714
Range: x-entity=2-

In this case, our server returns this response:

HTTP/1.1 206 Partial Content
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Range: x-entity 2-9/10
Expires: -1
Server: Microsoft-IIS/8.0
Date: Tue, 31 Jul 2012 21:18:05 GMT
Content-Length: 897

[{"Id":3,"Make":"Toyota","Model":"Yaris","BuildYear":2003,"Price":3750.0, ....

Notes on implementation

The crux of the implementation is to call Skip() and Take() on the IEnumerable<T>. Our code has to work with any element type, and filters (and attributes as a whole) cannot be generic. As such, we just have to use reflection to do this:

// t is the type on which the static Skip and Take methods are defined, i.e. typeof(Enumerable)
var skipMethod = t.GetMethods().Where(m => m.Name == "Skip" && m.GetParameters().Count() == 2)
 .First().MakeGenericMethod(_elementType);
var takeMethod = t.GetMethods().Where(m => m.Name == "Take" && m.GetParameters().Count() == 2)
 .First().MakeGenericMethod(_elementType);
...
value = skipMethod.Invoke(null, new object[] { value, from });
if (to.HasValue)
 value = takeMethod.Invoke(null, new object[] { value, to - from + 1 });

It is also useful to note that the return value of the action is not directly accessible in the filter; we have to resort to the response Content, casting it to ObjectContent and using its Value property.
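
A fragment to illustrate, assuming we are inside OnActionExecuted:

var objectContent = actionExecutedContext.Response.Content as ObjectContent;
if (objectContent != null)
{
 // the IEnumerable<T> the action returned
 var value = objectContent.Value;
 // apply the reflective Skip/Take shown above, then put the filtered
 // result back on the response content
}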

Conclusion

The Range header defined in the HTTP spec is useful for retrieving partial content. Custom units are allowed, so we defined the x-entity unit to enable selecting a range of entities (commonly needed in pagination scenarios) and implemented it using a filter.

Monday, 30 July 2012

Serialising request and response in ASP.NET Web API


Introduction

[Level T3] This is a short post on serialising/deserialising HTTP request and response messages in ASP.NET Web API. Serialising messages manually can be done but it is hard work and you can run into various problems. ASP.NET Web API provides a means of achieving this through HttpMessageContent. This post is a follow-up to this discussion.

Background

There are many cases where you could be interested in serialising HttpRequestMessage or HttpResponseMessage. For me, I needed this to implement caching features on the HttpClient in CacheCow framework.

Technically speaking, HTTP messages arrive in a serialised format and all we need is access to the raw stream coming from the server - as such, no processing would be required. Unfortunately this is not possible, since Web API does not read the message as a raw stream and then process it; instead it reads various chunks, parsing them as it goes.

However, the ASP.NET team implemented a feature that can be used for serialisation/deserialisation of request and response messages. If you have read Brad Wilson's batching post, you have probably noticed that HttpMessageContent can be used for implementing client-server batching. Here we will use it for serialisation.

HttpMessageContent

RFC 2616 in its appendices defines the content types "message/http" and "application/http". application/http is a content type that can contain more than one request or response.

Did we not have this in the multi-part content type? As we know, we can include different request or response parts in the same message and each part gets its own share of the headers, so what is the difference?

Well, the difference is that with multi-part, each part can only have headers related to content, and above all the parts share the same status code. In application/http, each "part", as it were, is a complete request or response: the requests each have their own URI and the responses their own status code.

While application/http can contain multiple requests or responses, HttpMessageContent wraps a single HttpRequestMessage or HttpResponseMessage - and in our case that is all we need: one request or response at a time.

Serialiser interface

Let's define an interface for our serialiser:

public interface IHttpMessageSerializer
{
 void Serialize(HttpResponseMessage response, Stream stream);
 void Serialize(HttpRequestMessage request, Stream stream);
 HttpResponseMessage DeserializeToResponse(Stream stream);
 HttpRequestMessage DeserializeToRequest(Stream stream);
}

UPDATE: Latest implementation is fully async and can be found as part of CacheCow library here.

Serialisation

In order to serialise, we need to create a new HttpMessageContent, passing in the request or response, and then use ReadAsByteArrayAsync to read the whole message as a byte array:

var httpMessageContent = new HttpMessageContent(request);
var buffer = httpMessageContent.ReadAsByteArrayAsync().Result;

As you can see, it is very easy to serialise. The only caveat is that if you are serialising in a delegating handler, this will consume the message content stream so that it cannot be read again further down the pipeline. If you do, you will see this error message:

The stream was already consumed. It cannot be read again.

The trick (for now) is to call ReadAsByteArrayAsync first to force the content to be loaded into the buffer. Although we do not need the bytes we read (the actual reading happens inside HttpMessageContent), the next time the content is read it comes from the buffer and not from the network. In my implementation I have made it optional whether to pre-read the content into the buffer.
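
Putting the two together, here is a minimal synchronous sketch of the request overload of the interface above (the CacheCow implementation linked earlier is fully async and is the one to use):

public void Serialize(HttpRequestMessage request, Stream stream)
{
 if (request.Content != null)
 {
  // force the content into the buffer so it remains readable
  // further down the pipeline
  request.Content.ReadAsByteArrayAsync().Wait();
 }

 var httpMessageContent = new HttpMessageContent(request);
 var buffer = httpMessageContent.ReadAsByteArrayAsync().Result;
 stream.Write(buffer, 0, buffer.Length);
}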

Deserialisation

The trick with deserialisation is to create a normal HttpRequestMessage or HttpResponseMessage and set its content type header to "application/http;msgtype=request" or "application/http;msgtype=response" accordingly. Then we use the special extension methods (ReadAsHttpRequestMessageAsync and ReadAsHttpResponseMessageAsync) to read the message back out of the content:

var request = new HttpRequestMessage();
request.Content = new ByteArrayContent(memoryStream.ToArray());
request.Content.Headers.Add("Content-Type", 
    "application/http;msgtype=request");
return request.Content.ReadAsHttpRequestMessageAsync().Result;
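
The response case is symmetrical - a minimal sketch, again assuming the serialised bytes sit in a memoryStream:

var response = new HttpResponseMessage();
response.Content = new ByteArrayContent(memoryStream.ToArray());
response.Content.Headers.Add("Content-Type",
    "application/http;msgtype=response");
return response.Content.ReadAsHttpResponseMessageAsync().Result;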

As you can see, all the heavy lifting happens inside the HttpMessageContent and there is really little code that we need to write.

Conclusion

We can use HttpMessageContent to serialise/deserialise requests/responses in ASP.NET Web API. The full implementation can be found as a GitHub gist here and as part of the CacheCow library here. This implementation is fully async and takes advantage of the IO completion ports exposed in the Begin/End methods.

One word of caution on cases where the message needs to be used after serialisation - which covers many cases, including serialisation in a DelegatingHandler. In these cases we need to invoke ReadAsByteArrayAsync (or similar) to ensure the content is read into the buffer first.

Sunday, 22 July 2012

Introducing CacheCow: An HTTP caching framework for server and client


CacheCow

[Level T2] This is a short post to introduce CacheCow, an Open Source framework for HTTP caching on the client and server in ASP.NET Web API.

As some of you probably know, I have been working on caching for a while. If you go back to my post on CachingHandler, you will see that I contributed a server-side HTTP caching implementation to the WebApiContrib project, including samples and tests.

However, I realised that an even more important part of caching needs to be implemented on the client. Also, implementations of IEntityTagStore on various databases (in-memory and persisted) and the client-side cache storage all need their own projects, so this is bigger than just a feature in WebApiContrib. As such, I have decided to start a new project and port the server side from WebApiContrib. [Please bear in mind that the code in WebApiContrib will be maintained and supported, so if you are using it and experience problems or bugs, please ping me on Twitter or GitHub.]

So the CacheCow framework has been born. The name itself is a play on Cash Cow: it does the heavy lifting for caching with minimal set up, hence promises a good return on investment :). The project is open source and hosted on GitHub. And yes, I do accept pull requests; Tugberk has done a great job (with some help from Sayed Ibrahim Hashimi, for which I am very grateful) automating the whole build and NuGet package generation, and his PR was merged pretty much immediately. But please contact me beforehand about the work you would like to do - there is a ton of interesting work to do.

How to use CacheCow.Server

At the moment, only server-side CacheCow is ready for use. All you need to do is to use NuGet to get the package:

PM> Install-Package CacheCow.Server

This will add the CacheCow.Server and CacheCow.Common DLLs and the rest is all the same as in the CachingHandler post and samples. Just add the CachingHandler to the config:

GlobalConfiguration.Configuration.MessageHandlers.Add(new CachingHandler());

This adds the handler with all the default settings, storing the cache state in memory. There are many dials that you can turn to configure the handler according to your resource organisation - just see the sample in WebApiContrib. That sample connects to the CarManager sample and tests various scenarios for a fairly complex resource organisation.

How to use CacheCow.Server.EntityTagStore.SqlServer

As I pointed out above, by default the cache state is stored in memory. This is OK for single-server or test scenarios, but in the case of a server farm you would like the cache state to be maintained for the whole farm, and when a resource's cache is invalidated, it must be invalidated for all servers. In this case you need a central EntityTagStore (cache state store).

Building a cache state store is pretty easy: all you have to do is implement the IEntityTagStore interface, which has 5 methods. Since the cache might be invalidated not just for a CacheKey (previously called EntityTagKey) but also for a RoutePattern (see the CachingHandler post), plain key-value stores are not rich enough to provide this, but conventional databases can be used.
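
For orientation, the interface has roughly this shape - quoted from memory, so treat it as illustrative and consult the CacheCow source for the authoritative signatures:

// an illustrative shape only - see the CacheCow source for the real interface
public interface IEntityTagStore
{
 bool TryGetValue(CacheKey key, out TimedEntityTagHeaderValue eTag);
 void AddOrUpdate(CacheKey key, TimedEntityTagHeaderValue eTag);
 bool TryRemove(CacheKey key);
 int RemoveAllByRoutePattern(string routePattern);
 void Clear();
}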

So I have implemented this for SQL Server. In order to get this EntityTagStore, just use NuGet:

PM> Install-Package CacheCow.Server.EntityTagStore.SqlServer

This will download the DLL and also a script file named script.sql located in <project root>\packages\CacheCow.Server.EntityTagStore.SqlServer.0.1.0\scripts

So create a database and run this script against it, and you will get one table and several stored procedures. Then, in order to use SqlServerEntityTagStore, create an instance and pass it to the CachingHandler constructor. The default constructor relies on a connection string named "EntityTagStore" being present in your web.config and pointing to your database.

GlobalConfiguration.Configuration.MessageHandlers.Add(
 new CachingHandler(new SqlServerEntityTagStore()));


Alternatively, pass the connection string to the constructor. That is all you have to do to use the SQL Server EntityTagStore.

Roadmap

I am working on the client-side CachingHandler to be used with HttpClient. This will initially come with an InMemoryCacheStore, but various persistent cache store implementations can follow (for example file-based, SQL CE, etc.).

Also, the CarManager sample needs to be ported to CacheCow, which I am hoping to do very soon - although the old sample does work well.

Any questions or comments, please ping me on Twitter (@aliostad) or GitHub.


Tuesday, 10 July 2012

What does REST's Client-Server mean now?

Introduction

[Level C4] In "What does coupling mean ...", we reviewed three client-server patterns (with their anti-patterns) based on the concern. In this post, we will carefully examine the new meaning of client-server web applications. This will serve as a primer on my new work to define Client Server Domain Separation (CSDS).

Motivation

As I explained in the coupling post, I was challenged by the recent emergence of attempts to define the client-server relationship, one of which is ROCA. There seems to be a recent trend in the REST-aware community, led by disgruntled developers who believe too much control has been shifted to the client side. As such, they are trying to come up with practices/styles to move some of the control back to the server.

Background

REST has a strong emphasis on the separation of client and server. Fielding in his PhD dissertation outlines the constraint:
Separation of concerns is the principle behind the client-server constraints. By separating the user interface concerns from the data storage concerns, we improve the portability of the user interface across multiple platforms and improve scalability by simplifying the server components
As we saw in the coupling post, we need to understand whether a functionality is a client concern, a server concern or a mixed concern. Implementing the functionality on the wrong side of the wire can be tolerated initially but finally takes its toll. Below we try to define the building blocks of our discussion.

When we talk about the domain below, we refer to the concerns implemented/exposed in the client or the server.

Server

The server is responsible for defining a domain (the server domain) and maintaining its state and consistency/integrity. The server usually has very complex components, yet it hides this complexity behind its services.

Figure 1 - Server exposes a public domain hiding its complexity

A service is composed of an API and domain objects. In a RESTful world, the API is HTTP REST. Domain objects are representative of the server's domain model. While the server can have a complex domain, we refer to the server domain as only the publicly available part.

The server should not have knowledge of the client type. Although such information can be shared with the server (for example through the User-Agent header), it should not be used for anything other than auditing or statistics.

Domain objects sent to the client are usually looked down upon and treated as second-class citizens. They are sometimes called DTOs (Data Transfer Objects) or ViewModels. While this is OK in a server development scenario, where the focus is to decide how much of the server's whole domain is to be exposed, these models are not to be mistaken for Value Objects since they are entities (they have identity, according to DDD).

A domain object can be fully rendered HTML (markup) in its semantic form (i.e. no display semantics such as <b> or <i>). Documents are domain objects; for example, a blog domain has a post domain object.

A server can itself be a client of one or several other servers. ifttt is a beautiful example of this.

Client

The client is responsible for using the services of one or more servers to provide value to the user. In the process, it defines a domain which is usually different from the server domain, although there is always an overlap. The client has a life of its own, able to maintain some level of functionality with no server access.


Figure 2 - Client and server domains - now and before
Please note that we did not mention the user in the definition of the server. The server is abstracted away from the user by the client. The client can still provide some of its value to the user even when the server is down. Funnily enough, I lost connectivity for half an hour while I was typing this blog post and I carried on typing. When I was re-connected, Blogger saved my work.

So let's bring an example to highlight a few important points:

I love listening to online radio while working. I use TuneIn on my Android phone to listen to music based on my mood. TuneIn is a great app that allows you to search online radio stations and podcasts. So one of its functionalities is the directory service of radio stations. This functionality is provided by the TuneIn servers (which maintain its state and consistency/integrity). It defines a server model that consists of name, URL, style, icon, etc. The client does not know how the directory is created or how often the information gets updated.
On the other hand, when I click on a station to listen, I connect to the radio server. The domain of this server covers music streaming, the current artist, etc. For the radio server, it is all the same whether you listen to music in your browser or in the native client on your phone. The client does not know how the music is stored, chosen, etc.
Now if I really like a song, I can share it on Twitter. The Twitter server, while it knows the application I am using for security reasons (OAuth), does not really care. Publishing my tweet is all the same to it.

So as you can see, Twitter's server domain is fully concerned with users, their tweets, re-tweeting, etc. while the TuneIn client only cares about publishing a tweet, so their domains have a tiny (yet important) overlap.

Some of the client functionality has nothing to do with the server. Playing music on the device is fully a client concern, so it does not exist in the server domain. For example, the online radio's streaming servers would not know if the data they are sending will even be heard by the user (e.g. the speaker could be on mute or, worse, the client could use the data for illegal dumping of the songs).

Let's have a look at Figure 2. I think our TuneIn example fully describes the "Now" diagram, so let's focus on the "classic" case. This is the classic client (a web application running in the browser - see below) where all the logic is served by the server. In extreme cases, the server even generates client scripts apart from hosting the static logic (Javascript). The client's domain is fully engulfed by the server, meaning the server is aware of all the client logic.

Does the classic client look to you like the REST utopia? Having read Fielding's dissertation snippet above, which one do you think represents REST better?

Web Application

What is a web application? The definition has drastically changed over the last few years with the emergence of diverse client devices capable of running advanced Javascript or native code. Web applications can be found in the forms below (not in any particular order and probably with some missing):
  • Single Page Application (SPA) running on various devices including top-end mobile devices
  • Native rich clients on desktop/laptop
  • Native client apps on mobile devices
  • Bundled HTML/Javascript apps running on mobile devices (PhoneGap) or desktop (Windows 8)
  • Traditional web applications run by browsers
  • Browser as an application to display markup
Figure 3 - Server cannot really see behind the cloud and which client is using it


The web application is the usage abstraction of the client. While the server domain used to pretty much define the web application, the client is becoming more and more important in defining it.

Now, what does web mean in "web application"? Nowadays, almost every application is a web application: the client uses one or more cloud services to enrich the experience. Apart from simple drawing or editing tools, most applications are web applications.

Some of the forms deserve more attention. Bundled HTML/Javascript apps remind us that in some cases it is in fact incidental that the server hosts the files. Some Javascript files are hosted on CDNs, downloaded and cached for a long time.

Also, not all logic is Javascript. Microsoft RIA Services (regardless of whether I like it or not - and I don't!) sends the server's domain rules to the client very much like Javascript. Generating Javascript or binaries is just as bad, as it breaches the client's independence.

Web Application and REST

Basing your web application on REST will help achieve better separation as well as making efficient use of the web.

Having said that, today's world demands more and more from the computing industry. The server push model (gaining popularity in the node.js/websocket world) requires a stateful server, which is a REST no-no, but today's virtualisation and cloud elasticity have made scalability a much smaller problem.

For most web applications, however, following all the REST constraints is best practice, as very few applications require a server push model.

So what do I think of the new trend?

Well, I think the trend cannot resist the wind of change. The computing industry has been pushed to provide more value and has to do it cheaper, faster and richer. Separating client and server will help to achieve this more effectively.

With regard to ROCA, I must say it is a worthwhile effort to understand client and server interactions and it contains useful common-sense practices. However, I cannot subscribe to it since:

  • must-server advocates the "classic" model (engulfed client) and ignores the client-server separation prescribed by REST.
  • It does not respect the client domain (must-no-duplication) and its rules and logic.
  • A Single Page Application is not ROCA-compliant (see discussions).
  • Not all clients are browsers; in fact fewer and fewer clients are browsers. ROCA is heavily targeted at browser applications, with many of the constraints directly related to HTML, CSS or Javascript.
  • It does not fully appreciate the inherent complexity of the client domain (must-jslimits and mustnot-jsengine).
  • A ROCA client will not be able to provide any useful offline features.
  • It includes lower-end non-browser clients yet does not appreciate upper-end clients (must-non-browser).

Conclusion

The client and the server have their own domains - as REST prescribes. The server defines a domain and maintains its state and integrity. It hides its complexity behind its services and, other than for authentication and authorization, does not need to know anything about the client.

The client provides value to the user through the use of the server's services. It defines its own domain, which overlaps with the server domain(s).

The web application now encompasses a plethora of different devices and technologies. As such, defining a style requires considering all such scenarios.

We will talk more about Client-Server Domain Separation (CSDS) in the upcoming posts.

Sunday, 1 July 2012

The place of Extension Methods in Software Design


Introduction

[Level T3] Extension methods - introduced back in C# 3.0, shipping with .NET 3.5 - are useful tools in a .NET developer's toolset. Apart from their usefulness, extension methods are not an inherently object-oriented concept, yet we use them more and more in our API designs.

Extension methods were initially used for classes whose source code we did not own. But nowadays we are increasingly using them for types where we do own the source.

This post aims to have an in-depth look at the place of extension methods in the API design.

Background

Definition of Extension Methods according to MSDN is:
Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.
So, as we all know, in order to create an extension method we need to:
  1. Create a static non-generic class
  2. Create a static method in it
  3. Make the first parameter of that method the type we are extending, prefixed with the keyword this
For example (and one of my favourites), we can add this extension method to object to replicate T-SQL's IN operator:

public static bool IsIn(this object item, params object[] list)
{
 if (list == null || list.Length == 0)
  return false;
 return list.Any(x => x == item);
}

Now I can use this like an instance method:

string ali = "ali";
var isIn = ali.IsIn("john", "jack", "shabbi", "ali"); // isIn -> true

If I may digress a little bit here, this is not such a great implementation since:

var isInForInt = 1.IsIn(2, 3, 1); // isInForInt -> false!

As you have probably guessed, defining the extension method on the object type causes the boxed integer objects to be compared instead of the integers themselves, and they surely won't be equal. A generic implementation solves the problem:

public static bool IsIn<T>(this T item, params T[] list)
{
 if (list == null || list.Length == 0)
  return false;
 return list.Any(x => EqualityComparer<T>.Default.Equals(x, item));
}

In reality, an extension method only gives the illusion of the method being on the type; what gets compiled is nothing but a plain old static method call. A look at the generated IL confirms this:

IL_0056:  call       bool ConsoleApplication1.ExtensionMethods::IsIn<int32>(!!0, !!0[])

ExtensionMethods above is the name of the static class I created for this method.

So extension methods are basically the same utility or helper static methods we have always been writing, only glamorised to look like instance methods. Yet they have additional benefits:

  1. They lead to much more readable and natural code.
  2. We do not have to know the name of the helper class whose static methods we are using - in fact the behaviour has nothing to do with the static class. That class is not really a class in a true sense, since it holds no state or behaviour of its own. And that is why it has to be declared static: to make its design intentions clear.
  3. A fluent API can easily be designed for older types without touching them.
  4. Since it is not really an instance method call, an extension method can be called on null instances. This is a desirable side effect, since we can check for nulls inside the extension method and cater for them (none of the "object reference not set to an instance..." nonsense!) - see the snippet after this list.
  5. Since it can be called on null instances, some type information for a null instance can be determined in the extension method (although it can be a base type or an interface), while this is not possible with a plain null object.
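
Point 4 is easy to demonstrate with the generic IsIn<T> above:

string nobody = null;
// an instance call such as nobody.Equals("ali") would throw a
// NullReferenceException; the extension method call below simply runs
var found = nobody.IsIn("john", "jack", "ali"); // found -> false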

Extension methods when we do not own the type

This has been the typical scenario. We have always wished that, for example, string had such and such a method, and there was no way to achieve this. Now, using extension methods, we can. This scenario also applies to cases where a historic API has been released (and you own the API) but cannot be changed. In this case, your API can be enhanced using extension methods.

With such usage, there is no decision to be made, since the design has already been done. Extension methods serve merely as a nice utility and syntactic sugar.

One of the most useful use cases I have found is function composition in functional programming in C# (see some examples in my other posts here and here). This is especially important since you can achieve readability by method chaining. For example:

  usingReflection
   .Repeat(TotalCount)
   .OutputPerformance(stopwatch, performanceOutput)();

In addition to the examples above, let's have a look at a simple extension to swallow exceptions and optionally log the error (note how the implementation reuses itself to swallow errors that could arise from the logging itself):

public static class WrapSwallowExtension
{
 public static Action<T> WrapSwallow<T>(this Action<T> action, Action<Exception> logger = null)
 {
  return (T t) =>
  {
   try
   {
    action(t);
   }
   catch (Exception e)
   {
    if (logger != null)
     logger.WrapSwallow()(e);
   }
  };
 }
}

So I can use:

string myString = null;
Action<string> action = (s) => { s.ToLower(); }; // null reference exception! 
action.WrapSwallow()(myString); // swallowed

Now, here I created a new action just for the example, but when I am working in a functional scenario, I already have my actions and functions.

Extension methods when we own the type

I have heard some say: "Why would you want to use an extension method when you own the code? Just add the method to the type."

There are cases where you own the type yet you would still use an extension method. We have a look at a few such scenarios below.

Extension methods for interfaces

This is the most obvious use case. Most of the Linq library is implemented using extension methods (while Microsoft owns the types). An interface cannot contain implementation, but you can use extension methods to add implemented enhancements to your interfaces.

Without getting into the debate of whether implementing ForEach against IEnumerable<T> is semantically correct or not (don't! I am not going there), you might have noticed that the method only exists on List<T>, so you have to call ToList() to use the feature. Well, this can easily be done for IEnumerable<T> too:

public static class IEnumerableExtensions
{
 public static IEnumerable<T> ForEachOne<T>(this IEnumerable<T> enumerable, Action<T> action)
 {
  foreach (var t in enumerable)
  {
   action(t);
   yield return t;
  }
 }
}

In this particular example, I do not own the source for IEnumerable<T>, but even if I did, I would only be able to associate implementation with the interface using extension methods.

Overloading

This is the next common case. If you are familiar with ASP.NET MVC, you have probably noticed that most of the functionality of the HtmlHelper class is implemented using extension methods.

Html.TextBoxFor(x => x.Name)

In fact, all the different overloads of HtmlHelper for TextBox, RadioButton, CheckBox, TextArea, etc. are implemented using extension methods. The HtmlHelper class itself implements a core set of functionality which is called by these extension methods.

Now let's look at this fictional interface:

public interface IDependencyResolver
{
   object Resolve(Type t);
   T Resolve<T>();
}

The interface has two methods for resolving a type: one using a generic type parameter, the other a Type instance. Whoever implements this will most likely implement the non-generic method and then make the generic method call the non-generic one. With an extension method, we can trim the interface down to the non-generic method only:

public interface IDependencyResolver
{
 object Resolve(Type t);
}

public static class IDependencyResolverExtension
{
 public static T Resolve<T>(this IDependencyResolver resolver)
 {
  return (T)resolver.Resolve(typeof(T));
 }
}


This will help to:
  • Trim down the interface and make it terser, so it can express its design intentions more clearly
  • Save all implementers of the interface from having to repeat the same bit of code
When I look at the IQueryProvider interface, I wonder if it was designed before extension methods were available:

public interface IQueryProvider
{
    IQueryable<TElement> CreateQuery<TElement>(Expression expression);
    IQueryable CreateQuery(Expression expression);
    TResult Execute<TResult>(Expression expression);
    object Execute(Expression expression);
}

So the 4 methods could have been reduced to 2. Considering that Linq and extension methods both came in .NET 3.5, my suspicion seems very likely!
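
Had it been designed with extension methods in mind, it could have looked something like this - a hypothetical reworking for illustration, not an actual BCL type:

public interface IQueryProvider
{
 IQueryable CreateQuery(Expression expression);
 object Execute(Expression expression);
}

public static class QueryProviderExtensions
{
 public static IQueryable<TElement> CreateQuery<TElement>(this IQueryProvider provider, Expression expression)
 {
  // assumes the provider's non-generic method builds the appropriately typed queryable
  return (IQueryable<TElement>)provider.CreateQuery(expression);
 }

 public static TResult Execute<TResult>(this IQueryProvider provider, Expression expression)
 {
  return (TResult)provider.Execute(expression);
 }
}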

Dependency layering

Another case where you might decide to use an extension method rather than exposing a direct method on the type is when a type's sole dependency on another type is confined to a single method. This is very common in cases where the dependency is on a layer above the dependent type - while naturally it must be the other way around.

For example, let's look at this case:

// THIS WILL NOT WORK!

// sitting at entity layer
public class Foo
{
 // ...

 public Bar ToBar()
 {
  // ...
 }
}

// sitting at business layer
public class Bar
{
 // ...  
}

Now, in this example I have laid out these two classes in different logical layers to better illustrate the case - but it does not have to be; this is all about managing dependencies, whether in the same layer or across layers. We have baked Foo's dependency on Bar into the entity layer for the sake of ToBar(). The solution is to turn ToBar() into an extension method.

So we can write (and completely decouple the two classes):

public class Foo
{
 
}

public class Bar
{

}

public static class FooExtensions
{
 public static Bar ToBar(this Foo foo)
 {
  return new Bar();
 }
}


Providing implementation for enumerations

This is one that probably many of us have done. Enumerations - unfortunately - cannot contain implementation, so extension methods are a good place to put implementation code for enums. This usually has to do with conversion, parsing and formatting.
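
For example, here is a minimal sketch (the Colour enum and ToHex are made up for illustration):

public enum Colour { Red, Green, Blue }

public static class ColourExtensions
{
 // formatting behaviour the enum itself cannot carry
 public static string ToHex(this Colour colour)
 {
  switch (colour)
  {
   case Colour.Red: return "#FF0000";
   case Colour.Green: return "#00FF00";
   default: return "#0000FF";
  }
 }
}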

Delay decision on API signatures

With regard to an API, anything that goes into the public interface of a type is difficult to change. As such, attempting to provide all possible overloads and use cases of a type on its public interface is likely to fail.

Delaying such decisions by providing base functionality on the type and then adding more and more extension methods with each release is a useful approach. The ASP.NET team have used this technique for ASP.NET MVC and recently with ASP.NET Web API.

Drawbacks

Extension methods are static methods. As such, they cannot be mocked using standard mocking frameworks. An extension method should not have any dependencies other than the ones passed to it.

Let's look at this case:

public class Foo
{
 public string FileName { get; set; }

 public void Save()
 {
  // ...
 }
}

public static class FooExtensions
{
 public static void SafeSave(this Foo foo)
 {
  var directoryName = Path.GetDirectoryName(foo.FileName);
  if (!Directory.Exists(directoryName))
   Directory.CreateDirectory(directoryName);
  foo.Save();
 }
}

In this case, unit testing any class that uses SafeSave becomes a nightmare. What we need here is to create an interface, IFileSystem, and pass it to the extension method to abstract it away from the real file system.
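
Here is a sketch of that fix; IFileSystem is our own abstraction (not a BCL type) and a fake implementation can be supplied in unit tests:

public interface IFileSystem
{
 bool DirectoryExists(string path);
 void CreateDirectory(string path);
}

public static class FooExtensions
{
 public static void SafeSave(this Foo foo, IFileSystem fileSystem)
 {
  var directoryName = Path.GetDirectoryName(foo.FileName);
  if (!fileSystem.DirectoryExists(directoryName))
   fileSystem.CreateDirectory(directoryName);
  foo.Save();
 }
}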

Conclusion

The availability of extension methods has changed the way we design software APIs in the .NET world. We have started to build the basic functionality into the actual types and use extension methods to provide overloading.

There are 5 reasons to use extension methods when you own the type:

  • To associate implementation with interfaces
  • Overloading of an API
  • Removing dependency especially in logical layering
  • Providing implementation for enumerations
  • Delaying decision on API signatures
An extension method should not have any dependencies other than the ones passed to it.