
Thursday, 27 August 2015

No-nonsense Azure Monitoring in 20 Minutes (maybe 21) using ECK stack

The Azure platform has been around for six years now and is going from strength to strength. With the release of many different services and options (and sometimes too many services), it is now difficult to think of a technology, tool or paradigm which is not “there” - albeit perhaps not exactly in the shape you had wished for. Having said that, monitoring - even by the admission of some of the product teams - has not been the strongest of Azure's features. Sadly, when building cloud systems, monitoring/telemetry is not a feature: it is a must.

I do not want to rant for hours about why and how a product built mainly for external customers differs from one that proves itself internally and then gets packaged up and released (as is the case with AWS), but a consistent, working telemetry option in Azure is pretty much missing - there are bits and pieces here and there, but no consolidated story. I am told that even internal teams within Microsoft had to build their own monitoring solutions (something similar to what I am about to describe further down). And as the last piece of rant, let me tell you: whoever designed this chart, with this puny level of data resolution, must be punished with the most severe penalty ever known to man - actually using it, to investigate a production issue.

A 7-day chart, with 14 data points. Whoever designed this UI should be punished with the most severe penalty known to man ... actually using it - to investigate a production issue.

What are you on about?

Well, if you have used Azure to deliver any serious solution and then tried to do any sort of support, investigation or root cause analysis without one of the paid telemetry solutions (and even with them), painfully browsing through gigabytes of data in Table Storage, you will know the pain. Yeah, that is what I am talking about! I know you have been there; me too.

And here, I am presenting a solution to the telemetry problem that can give you these kinds of sexy charts, very quickly, on top of your existing Azure WAD tables (and other data sources) - tried, tested and working, requiring some setup and very little maintenance.


If you are already familiar with the ELK (Elasticsearch, LogStash and Kibana) stack, you might be saying you have already got that. True. But while LogStash is great and has many groks, it has very much been designed with the Linux mindset: a daemon running locally on your box/VM, reading your syslog and delivering it over to Elasticsearch. The way Azure works is totally different: the local monitoring agent running on the VM keeps shovelling your data to durable and highly available storage (Table or Blob) - which I quite like. With VMs being essentially ephemeral, it makes a lot of sense to master your logging outside the boxes and read the data from those storages. Now, that is all well and good, but when you have many instances of the same role (say you have scaled to 10 nodes) writing to the same storage, the data is usually much bigger than what a single process can handle, and the shovelling needs to be scaled out, which requires centralised scheduling.

The gist of it: I am offering ECK (Elasticsearch, ConveyorBelt and Kibana), an alternative to LogStash that is Azure-friendly (it typically runs in a Worker Role), can tap into your existing WAD logs (as well as custom ones) out of the box, and with the push of a button can be horizontally scaled to N to handle the load for all your projects - and for your enterprise, if you work for one. And it is open source, and can be extended to shovel data from any other sources.

At its core, ConveyorBelt employs a clustering mechanism that breaks the work down into chunks (scheduling), keeps a pointer to the last scheduled point, pushes data to Elasticsearch in parallel and in batches, and gracefully retries the work if it fails. It is headless, so any node can fail, be shut down, restarted, added or removed - without affecting the integrity of the cluster. All of this without waking you up at night, and basically, after a few days, making you forget it ever existed. In the enterprise I work for, we use just 3 medium instances to power analytics from 70 different production Storage Tables (and blobs).

Basic Concepts

Before you set up your own ConveyorBelt (CB), it is better to know a few concepts and facts.

First of all, there is a one-to-one mapping between an Elasticsearch cluster and a ConveyorBelt cluster. ConveyorBelt has a list of DiagnosticSources, typically stored in Azure Table Storage, each of which contains all the data (and state) pertaining to a source. A source is typically a Table Storage table or a blob folder containing diagnostic (or other) data - but CB is extensible and can accept other data stores such as SQL, files or even Elasticsearch itself (yes, if you ever wanted to copy data from one ES cluster to another). A DiagnosticSource contains the connection information CB needs to connect to it. CB continuously breaks down the work (schedules) for its DiagnosticSources and keeps updating the LastOffset.

Once the work is broken down into bite-size chunks, the chunks are picked up by actors (it internally uses BeeHive) and the data within each chunk is pushed up to your Elasticsearch cluster. There is usually a delay between data being captured and it landing in storage (something you typically set in the Azure diagnostics configuration: how often to copy the data), so you set a Grace Period after which, if the data is not there, it is assumed it never will be. Your Elasticsearch data will usually be behind realtime by the Grace Period. If you have left everything as defaults, Azure copies data every minute, so a Grace Period of 3-5 minutes is safe. For IIS logs this is usually longer (I use 15-20 minutes).
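In other words, the scheduler only schedules work up to "now minus the Grace Period", so rows that arrive late are not missed. A tiny sketch of the idea (illustrative only, not ConveyorBelt's actual code):

// illustrative only: work is scheduled up to "now minus the grace period"
var gracePeriodMinutes = 3;   // ~3-5 for WAD tables copied every minute, 15-20 for IIS logs
var scheduleUpTo = DateTimeOffset.UtcNow.AddMinutes(-gracePeriodMinutes);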

The data that is pushed to Elasticsearch requires the following (a short sketch of the default naming follows this list):
  • An index name: by default the date in the yyyyMMdd format is used as the index name (but you can provide your own index)
  • A type name: the default is PartitionKey + _ + RowKey (or the one you provide)
  • An Elasticsearch mapping: the Elasticsearch equivalent of a schema, which defines how to store and index the data for a source. These mappings are stored at a URL (a web folder or a public read-only Azure Blob folder) - mappings for typical Azure data (WAD logs, WAD performance counters and IIS logs) are already available by default and you just need to copy them to your site or public Blob folder.
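To make the defaults concrete, here is a small illustrative C# sketch of how the default index and type names described above are derived - it mirrors the convention only, it is not ConveyorBelt's actual code, and the key values are made up:

// illustrative only - mirrors the default naming convention described above
var entryTime    = DateTimeOffset.UtcNow;
var partitionKey = "SamplePartitionKey";            // hypothetical values
var rowKey       = "SampleRowKey";

var indexName = entryTime.ToString("yyyyMMdd");     // e.g. "20150827"
var typeName  = partitionKey + "_" + rowKey;        // default type name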

Set up your own monitoring suite

OK, now it is time to create our own ConveyorBelt cluster! Basically, the CB cluster will shovel the data to an Elasticsearch cluster, and you will need Kibana to visualise your data. Further down I explain how to set up Elasticsearch and Kibana on a Linux VM. But ...

if you are just testing the waters and want to try CB, you can create a Windows VM, download Elasticsearch and Kibana, run their batch files and then move on to setting up CB. But after you have seen it working, come back to the instructions and set it up on a Linux box, its natural habitat.

Setting this up in Windows is just a matter of downloading the files from the links below, unzipping them and then running the batch files elasticsearch.bat and kibana.bat. Make sure you expose ports 5601 and 9200 from your VM by creating endpoints.

https://download.elastic.co/kibana/kibana/kibana-4.1.1-windows.zip
https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.zip

Set up ConveyorBelt

As discussed above, ConveyorBelt is typically deployed as an Azure Cloud Service. In order to do that, you need to clone the GitHub repo, build it and then deploy it with your own credentials and settings - all of which should be pretty easy. Once deployed, you need to define various diagnostic sources and point them to your Elasticsearch, and then just relax and let CB do its work. So let us look at the steps now.

Clone and build ConveyorBelt repo

You can use command line:
git clone https://github.com/aliostad/ConveyorBelt.git
Or use your tool of choice to clone the repo. Then open an administrative PowerShell window, move to the build folder and execute .\build.ps1

Deploy mappings

Elasticsearch is able to guess the data types of your data and index them in a format that is usually suitable. However, this is not always the case, so we need to tell Elasticsearch how to store each field - and that is why CB needs to know this in advance.

To deploy the mappings, create a Blob Storage container with the option "Public Container" - this makes the content publicly available in a read-only fashion.

You would need the URL for the next step. It is in the format:
https://<storage account name>.blob.core.windows.net/<container name>/

Then use the tool of your choice to copy the mapping files from the mappings folder under the ConveyorBelt directory into that container.
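If you would rather script this step than use a storage explorer, here is a hedged C# sketch using the classic Azure Storage SDK (the Microsoft.WindowsAzure.Storage NuGet package). The connection string, container name and local path are placeholders - adjust them to your own setup:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// placeholders - replace with your own storage account, container and repo path
var account   = CloudStorageAccount.Parse("<your storage connection string>");
var container = account.CreateCloudBlobClient().GetContainerReference("mappings");

container.CreateIfNotExists();
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Container   // public, read-only access
});

foreach (var file in Directory.GetFiles(@"<path to ConveyorBelt repo>\mappings", "*.json"))
{
    var blob = container.GetBlockBlobReference(Path.GetFileName(file));
    using (var stream = File.OpenRead(file))
    {
        blob.UploadFromStream(stream);   // upload each mapping file to the public container
    }
}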

Configure and deploy

Once you have built the solution, rename the tokens.json.template file to tokens.json and edit it (if you need more information, find the instructions here). Then, in the same PowerShell window, run the command below, replacing the placeholders with your own values:
.\PublishCloudService.ps1 `
  -serviceName <name your ConveyorBelt Azure service> `
  -storageAccountName <name of the storage account needed for the deployment of the service> `
  -subscriptionDataFile <your .publishsettings file> `
  -selectedsubscription <name of subscription to use> `
  -affinityGroupName <affinity group or Azure region to deploy to>
After running the command, you should see PowerShell deploying CB to the cloud with a single Medium instance. In the storage account you defined, you should now find a new table whose name you set in the tokens.json file.

Configure your diagnostic sources

Configuring the diagnostic sources can differ wildly depending on the type of the source. But for standard tables such as WADLogsTable, WADPerformanceCountersTable and WADWindowsEventLogsTable (whose mapping files you just copied), it is straightforward.

Now choose an Azure diagnostics Storage Account with some data in it, and in the diagnostic source table create a new row with the entries below (a code sketch for doing the same programmatically follows the list):

  • PartitionKey: whatever you like - commonly <top level business domain>_<mid level business domain>
  • RowKey: whatever you like - commonly <env: live/test/integration>_<service name>_<log type: logs/wlogs/perf/iis/custom>
  • ConnectionString (string): connection string to the Storage Account containing WADLogsTable (or others)
  • GracePeriodMinutes (int): depends on how often your logs get copied to the Azure table. If it is every 10 minutes then 15 should be OK; if it is every minute then 3 is fine.
  • IsActive (bool): True
  • MappingName (string): WADLogsTable. ConveyorBelt will look for the mapping at the URL "X/Y.json", where X is the value you defined in your tokens.json for the mappings path and Y is the TableName (see below).
  • LastOffsetPoint (string): set to an ISO date (second and millisecond MUST BE ZERO!) from which you want the data to be copied, e.g. 2015-02-15T19:34:00.0000000+00:00
  • LastScheduled (datetime): set it to a date in the past, the same as LastOffsetPoint. Why do we have both? Each does something different, so we need both.
  • MaxItemsInAScheduleRun (int): 100000 is fine
  • SchedulerType (string): ConveyorBelt.Tooling.Scheduling.MinuteTableShardScheduler
  • SchedulingFrequencyMinutes (int): 1
  • TableName (string): WADLogsTable, WADPerformanceCountersTable or WADWindowsEventLogsTable
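If you prefer to create this row in code rather than in a storage explorer, here is a hedged sketch using the classic Table Storage SDK (Microsoft.WindowsAzure.Storage). The table name, keys and values are placeholders built from the list above - use the table name you set in tokens.json and your own values:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// placeholders throughout - adjust to your own accounts, keys and values
var account = CloudStorageAccount.Parse("<CB storage connection string>");
var table   = account.CreateCloudTableClient().GetTableReference("<diagnostic source table name>");

var source = new DynamicTableEntity("retail_checkout", "live_orderapi_logs");   // PartitionKey, RowKey
source.Properties["ConnectionString"]           = EntityProperty.GeneratePropertyForString("<WAD storage connection string>");
source.Properties["GracePeriodMinutes"]         = EntityProperty.GeneratePropertyForInt(3);
source.Properties["IsActive"]                   = EntityProperty.GeneratePropertyForBool(true);
source.Properties["MappingName"]                = EntityProperty.GeneratePropertyForString("WADLogsTable");
source.Properties["LastOffsetPoint"]            = EntityProperty.GeneratePropertyForString("2015-02-15T19:34:00.0000000+00:00");
source.Properties["LastScheduled"]              = EntityProperty.GeneratePropertyForDateTimeOffset(new DateTimeOffset(2015, 2, 15, 19, 34, 0, TimeSpan.Zero));
source.Properties["MaxItemsInAScheduleRun"]     = EntityProperty.GeneratePropertyForInt(100000);
source.Properties["SchedulerType"]              = EntityProperty.GeneratePropertyForString("ConveyorBelt.Tooling.Scheduling.MinuteTableShardScheduler");
source.Properties["SchedulingFrequencyMinutes"] = EntityProperty.GeneratePropertyForInt(1);
source.Properties["TableName"]                  = EntityProperty.GeneratePropertyForString("WADLogsTable");

table.Execute(TableOperation.InsertOrReplace(source));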
And save. OK, now CB will start shovelling your data to your Elasticsearch cluster and you should start seeing some data. If you do not, look at the entries you created in the Table Storage: you will find an Error column which tells you what went wrong. To investigate further, just RDP to one of your ConveyorBelt VMs and run DebugView with "Capture Global Win32" enabled - you should see activity similar to the picture below. Any exceptions will also show up there.


OK, that is it... you are done! ... well barely 20 minutes, wasn't it? :)


Now in case you are interested in setting up ES+Kibana in Linux, here is your little guide.

Set up your Elasticsearch in Linux

You can run Elasticsearch on Windows or Linux - I prefer the latter. To set up an Ubuntu box on Azure, you can follow the instructions here. Ideally you need to add a data disk, as the VM disks are ephemeral - all you need to know is outlined here. Make sure you follow the instructions to re-mount the drive after reboots. Another alternative, especially for your dev and test environments, is to go with D-series machines (SSD disks) and use the ephemeral disks - they are fast, and if you lose the data you can always set ConveyorBelt to re-add it, which it does quickly. As I said before, never use Elasticsearch to master your logging data, so that losing it is something you can recover from.

Almost all of the commands and settings below need to be run in an SSH session. If you are a geek with a lot of Linux experience, you might find some of the details below obvious and unnecessary - in which case, just move on.

SSH is your best friend

Anyway, back to setting up ES - after you have got your VM provisioned, SSH to the box and install the Oracle JDK:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
And then install Elasticsearch:
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb
sudo dpkg -i elasticsearch-1.7.1.deb
Now you have installed ES v1.7.1. To set Elasticsearch to start at reboot (the equivalent of a Windows service), run these commands in SSH:
sudo update-rc.d elasticsearch defaults 95 10
sudo /etc/init.d/elasticsearch start
Now, ideally you want to move the data and logs to the durable drive you have mounted. Just edit the Elasticsearch config in vim:
sudo vim /etc/elasticsearch/elasticsearch.yml
and then (note uncommented lines):
path.data: /mounted/elasticsearch/data
# Path to temporary files:
#
#path.work: /path/to/work

# Path to log files:
#
path.logs: /mounted/elasticsearch/logs
Now you are ready to restart Elasticsearch:
sudo service elasticsearch restart
Note: Elasticsearch is memory, CPU and IO hungry. SSD drives really help, but if you do not have them (i.e. you are not on D-series VMs), make sure you provide plenty of RAM and enough CPU. Searches are CPU-heavy, so it will also depend on the number of concurrent users.
If your machine has a lot of RAM, make sure you set the ES memory settings, as the defaults are small. Update the file below and set the heap to 50-60% of the total memory of the box:
sudo vim /etc/default/elasticsearch
And uncomment this line, setting the heap size to around half of your box's memory (14GB here is just an example!):
ES_HEAP_SIZE=14g
There are potentially other changes you might want to make. For example, based on the number of nodes you have, you will want to set index.number_of_replicas in your elasticsearch.yml - if you have a single node, set it to 0. You will also want to turn off multicast/Zen discovery, since it will not work in Azure. But these are things you can start learning about once you are completely hooked on the power of the information this solution provides. Believe me, it is more addictive than narcotics!

Set up the Kibana in Linux

Up until version 4, Kibana was simply a set of static HTML + CSS + JS files that ran locally in your browser just by opening the root HTML file. This model could not really be sustained, and with version 4, Kibana runs as a service on a box, most likely different from your ES nodes. But for PoCs and small use cases it is absolutely fine to run it on the same box.
Installing Kibana is straightforward. You just need to download and unpack it:
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xvf kibana-4.1.1-linux-x64.tar.gz
So now Kibana has been downloaded to your home directory and unpacked to the kibana-4.1.1-linux-x64 folder. If you want to see where that folder is, you can run pwd to get the path.
Now, to start Kibana, just run the commands below:
cd kibana-4.1.1-linux-x64/bin
./kibana
That will do for testing whether it works, but you need to configure it to start at boot. We can use upstart for this. Just create a file in the /etc/init folder:
sudo vim /etc/init/kibana.conf
and copy the below (path could be different) and save:
description "Kibana startup"
author "Ali"
start on runlevel [2345]
stop on runlevel [!2345]
exec /home/azureuser/kibana-4.1.1-linux-x64/bin/kibana
Now run this command to make sure there is no syntax error:
init-checkconf /etc/init/kibana.conf
If all is good, start the service:
sudo start kibana
If you have installed Kibana on the same box as Elasticsearch and left all ports as default, you should now be able to browse to the server on port 5601 (make sure you expose this port on your VM by configuring an endpoint) and you should see the Kibana screen (obviously with no data yet).



Wednesday, 27 May 2015

PerfIt! decoupled from Web API: measure down to a closure in your .NET application

Level [T2]

Performance monitoring is an essential part of doing any serious-scale software. Unfortunately, in the .NET ecosystem - which historically looked to Microsoft first for direction and tooling - there has been a real lack of good tooling: for one reason or another, effective monitoring has not been a priority for Microsoft, although this could be changing now. The healthy growth of the .NET open source community in the last few years has brought a few innovations in this space (Glimpse being one), but they have focused on solving development problems rather than application telemetry.

Two years ago, while trying to build and deploy large-scale APIs, I was unable to find anything suitable to save me from writing a lot of boilerplate code to add performance counters to my applications, so I coded a working prototype of performance counters for ASP.NET Web API and open sourced it on GitHub, calling it PerfIt! for lack of a better name. Over the last few years PerfIt! has been deployed to production in a good number of companies running .NET. I later added client support too, to measure calls made by HttpClient, and it was a handy addition.
From Flickr

This is all not bad, but in reality REST API calls do not cover all of your outgoing or incoming server communications (which you would naturally like to measure): you need to communicate with databases (relational or NoSQL), caches (e.g. Redis), blob storages, and many others. On top of that, there could be other parts of your code that you would like to measure, such as CPU-intensive algorithms, reading or writing large local files, running machine learning classifiers, etc. Of course, PerfIt! in its current incarnation cannot help with any of those cases.

It turned out that with a little change - separating performance monitoring from the Web API semantics (which are changing with vNext again) - this could be done. And, not to take too much credit for it, these were mainly the ideas of two of my best colleagues, whose contribution I am grateful for: Andres Del Rio and JaiGanesh Sundaravel.

New PerfIt! features (and limitations)

So, currently at version alpha2, you can get the new PerfIt! using NuGet (when it works):
PM> install-package PerfIt -pre
Here are the extra features that you get from the new PerfIt!.

Measure metrics for a closure


So at the lowest level of an aspect abstraction, you might be interested in measuring metrics for a closure, for example:
Action action = () => Thread.Sleep(1000);
action(); // measure
Or in case of an async operation:
object result = null;
Func<Task> asyncCall = async () => result = await _command.ExecuteScalarAsync();

// and then
await asyncCall();
This closure could of course be wrapped in a method, but there again, having a unified closure interface is essential for building a common tool: each method can have different inputs and outputs, while all of them can be presented as closures with the same interface.
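For example, two operations with completely different signatures can both be handed to the instrumentor behind the same closure shape (purely illustrative - the file path and the work here are made up):

using System.IO;
using System.Linq;

// illustrative only: very different operations, same closure shape
Action measureFileRead = () => File.ReadAllText(@"c:\logs\big.log");            // hypothetical path
Action measureCpuWork  = () => Enumerable.Range(0, 1000000).Sum(i => (long)i);  // hypothetical CPU-bound work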

Thames Barriers Closure - Flickr. Sorry couldn't find a more related picture, but enjoy all the same
So in order to measure metrics for the action closure, all we need to do is:
var ins = new SimpleInstrumentor(new InstrumentationInfo() 
{ 
   Counters = CounterTypes.StandardCounters, 
   Description = "test", 
   InstanceName = "Test instance" 
}, 
   TestCategory); 

ins.Instrument(() => Thread.Sleep(100));

A few things here:
  • SimpleInstrumentor is responsible for providing a hook to instrument your closures. 
  • InstrumentationInfo contains the metadata for publishing the performance counters. You provide it with the names of the counters to raise (and if they are not standard ones, you must have already defined them).
  • You will most likely create a single instrumentor instance for each aspect of your code that you would like to instrument.
  • This example assumes the counters and their category are installed. The PerfitRuntime class provides the mechanism to register your counters on the box - which is covered in the previous posts.
  • The Instrument method has an option to pass a context as a string parameter. This context can be used to correlate metrics with application context in ETW events (see below, and the short sketch after this list).
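Putting the last two bullets together, a hedged sketch might look like the following. The InstallStandardCounters call is borrowed from the earlier posts and the context overload is assumed from the bullet above, so check the GitHub readme for the exact signatures in your version; the category and context values are made up:

using System.Threading;
using PerfIt;

// assumed API - consult the PerfIt readme for the exact signatures in your version
const string TestCategory = "PerfItTests";                // hypothetical category name
PerfItRuntime.InstallStandardCounters(TestCategory);      // one-off registration on the box

var ins = new SimpleInstrumentor(new InstrumentationInfo()
{
    Counters = CounterTypes.StandardCounters,
    Description = "test",
    InstanceName = "Test instance"
}, TestCategory);

// the second argument is the application context (here a hypothetical customer id),
// which surfaces in the ETW event so the measurement can be correlated with what was going on
ins.Instrument(() => Thread.Sleep(100), "customer-1234");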

Doing an async operation is not that different:
ins.InstrumentAsync(async () => await Task.Delay(100));

//or even simpler:
ins.InstrumentAsync(() => Task.Delay(100));

SimpleInstrumentor is the building block for higher-level abstractions of instrumentation. For example, PerfitClientDelegatingHandler now uses SimpleInstrumentor behind the scenes.

Raise ETW events, effortlessly


Event Tracing for Windows (ETW) is a low-overhead framework for logging, instrumentation, tracing and monitoring that has been in Windows since version 2000. Version 4.5 of the .NET Framework exposes this feature via the EventSource class. It probably suffices to say that if you are not using ETW, you are doing it wrong.

One problem with performance counters is that they use sampling rather than events. This is all well and good, but it lacks the resolution you sometimes need to find problems. For example, if 1% of calls take more than 2 seconds, you need on average 100 samples - and if you are unlucky, a lot more - to see the spike.

Another problem is the lack of context around the measurements. When you see such a high response time, there is really no way to find out the context (e.g. the customerId) for which it went wrong. This makes finding performance bottlenecks more difficult.

So SimpleInstrumentor, in addition to publishing counters for you, raises InstrumentationEventSource ETW events. Of course, you can turn this off, or just leave it on as it has almost no impact. But much better: use a sink (Table Storage, Elasticsearch, etc.) to persist these events to a store and then analyse them using something like Elasticsearch and Kibana - as we do at ASOS. Here is a console log sink, subscribed to these events:
var listener = ConsoleLog.CreateListener();
listener.EnableEvents(InstrumentationEventSource.Instance, EventLevel.LogAlways,
    Keywords.All);
And you would see:


Obviously this might not look very impressive, but when you take into account that you have the timeTakenMilli (here 102ms) and the option to pass an instrumentationContext string (here "test..."), you can correlate performance with the context in your application.

PerfIt for Web API is all there, just in a different NuGet package


If you have been using previous versions of PerfIt, do not panic! We are not going to move your cheese: the client and server delegating handlers are all still there, only in a different package, so you just need to install the PerfIt.WebApi package:
PM> install-package PerfIt.WebApi -pre
The rest is just the same.

Only .NET 4.5 or higher


After spending a lot of time writing async code in CacheCow, which was .NET 4.0, I do not think anyone should be subjected to such torture, so I had to move PerfIt! to .NET 4.5. My apologies to the .NET 4.0 users.

PerfIt for MVC, Windsor Castle interceptors and more

Yeah, there is more coming. PerfIt for MVC has long been asked for by the community, and Castle interceptors can simply move all cross-cutting-concern code out of your core business code. Stay tuned, and please provide feedback before we go fully to v1!

Monday, 29 September 2014

Performance Counters for your HttpClient

Level [T2]

Pure HTTP APIs (aka REST APIs) are very popular at the moment. If you are building or maintaining one, you have probably learnt (perhaps the hard way) that monitoring your API is one of your top cross-cutting concerns. This monitoring involves different aspects of the API, one of which is performance.

There have been many approaches to solving cross-cutting concerns on APIs. Proxying has been a popular one, and there has been a proliferation of proxy-type services such as Mashery or Apigee which basically sit in front of your API and provide an abstraction that can solve your security and access control, monetisation or performance monitoring.

This is a popular approach but it comes with its headaches. One of these problems is that if you already have a security mechanism in place, it can clash with the one provided by the service. Also, the geographic distribution of these services, although getting better, is not as good as that provided by many cloud vendors. This can mean that your traffic could be bouncing across the Atlantic a couple of times before getting to your end users - and this is bad, really really bad. On the other hand, these services will not tell you what is happening inside your application, which you have to solve differently using classic monitoring approaches. So in a sense, I would say you might as well just byte the bullet and implement it yourself.

PerfIt was a library I built a couple of years ago to provide performance counters for ASP.NET Web API. Creating and managing performance counters for Windows is not rocket science, but it is clumsy, and once you do it over and over, for every service, it becomes just a bit too much overhead. So PerfIt was designed to make it really simple for you... well, actually for myself :) I have been using PerfIt over the last two years - in production - and it serves the purpose.

Now, it is all well and good to know the performance characteristics of your API. But this gets much more complicated when you have taken a dependency on other APIs and the degradation of your API is the result of performance issues in those dependencies.




This can quickly turn into a blame game: considering that each microservice is managed by a single team, and that in an ideal DevOps world the developers must support their own services, you would love to blame another team rather than yourself - especially if the problem truly is the direct result of performance degradation in a dependent service.

One solution is to have access to the performance metrics of the dependent APIs, but this might not be possible and it kinda goes against the DevOps model of operations. On the other hand, what if the degradation is due to an issue in an intermediary - such as a proxy?

The real solution is to benchmark and monitor the calls you make out of your API. And I have implemented a new DelegatingHandler to do that measurement for you!


PerfIt! for HttpClient

So HttpClient is the de facto class for accessing HTTP APIs. If you are using something else, then either you have a really, really good reason to do so or you are just doing it wrong.

PerfIt for client provides 4 standard counters out of the box:

  • Total # of operations
  • Average time taken (in seconds)
  • Time taken for the last operation (in ms)
  • # of operations per second

These are the 4 counters that you would normally need. If you need another, just get in touch with me, but remember these counters must be business-independent.
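Incidentally, once the counters are published you can also read them from your own code (for example to push them to a dashboard) using the standard System.Diagnostics API. The category, counter and instance names below are placeholders - check perfmon to see the exact names PerfIt registers on your box:

using System;
using System.Diagnostics;

// read-only handle to one of the published counters; all names here are illustrative
var counter = new PerformanceCounter("ClientTest", "Total # of operations", "google.com", true /* readOnly */);

// note: rate/average counters need two samples, so read twice with a small gap in between
Console.WriteLine(counter.NextValue());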

The first step is to install PerfIt using NuGet:
PM> Install-Package PerfIt
And then you just need to install the counters for your application. This can be done by running this simple code (categoryName is the performance counter grouping):
PerfItRuntime.InstallStandardCounters("<categoryName>");
Or by using an installer class as explained on the GitHub page and then running InstallUtil.exe.

Now, just add PerfitClientDelegatingHandler to your HttpClient and make some requests against a couple of websites:
using System;
using System.Net.Http;
using PerfIt;
using RandomGen;

namespace PerfitClientTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var httpClient = new HttpClient(new PerfitClientDelegatingHandler("ClientTest")
            {
                InnerHandler = new HttpClientHandler()
            });

            var randomSites = Gen.Random.Items(new[]
            {
                "http://google.com",
                "http://yahoo.com",
                "http://github.com"
            });
            for (int i = 0; i < 100; i++)
            {
                var httpResponseMessage = httpClient.GetAsync(randomSites()).Result;  
                Console.Write("\r" + i);  
            }
           
        }
    }
}
And now you should be seeing this (we have chosen "ClientTest" for the category name):

So as you can see, the instance names are the host names of the APIs, and this should provide you with enough information to monitor your dependencies. Any deeper than this and you really need tracing rather than monitoring - which is a completely different thing...



So as you can see, it is extremely easy to set this up and run it. I might expose the part of the code that defines the instance name - that will probably come in the next versions.

Please use the GitHub page to ask questions or provide feedback.


Monday, 1 April 2013

Monitor your ASP.NET Web API application using your own custom counters

[Level T2] OK, so you have created your Web API project and deployed it into production, and now the boss says: dude, we have performance problems. Or maybe the head of testing wants to benchmark the application and monitor it over the course of the next releases so that problems get caught early. Or you are just a responsible geek interested in the performance of your code.

NOTE: THIS BLOG POST REFERS TO AN EARLIER VERSION OF PERFIT. PLEASE VISIT https://github.com/aliostad/PerfIt FOR UP-TO-DATE DOCUMENTATION

In any case, no serious web code can be written without having performance in mind. The problem is that the existing system, .NET and ASP.NET performance counters only give you an overall picture of performance, with the metrics usually coalesced into a single value, while you need to drill down to individual APIs and see what is happening. Now, this can be another burden on your already squeezed time. So how easy is it to publish counters for your individual APIs? Just these few steps! TL;DR:

  1. Use NuGet to install PerfIt! into your ASP.NET Web API project (make sure you get version 0.1.2 or higher - a bug was fixed in that version)
  2. Add PerfitDelegatingHandler to the list of your MessageHandlers
  3. Decorate those actions you want to monitor
  4. Use "Add New Item" to add an Installer class into your Web API project. Write just a single line of code for Install and Uninstall.
  5. Use InstallUtil.exe in an administrative command window to install (or uninstall) your counters. Done! 

Seeing performance counters of your own project is kinda cool!

1-Adding PerfIt! to your project

So to add PerfIt! to the project, simply use NuGet console:
PM> Install-Package PerfIt


2-Adding PerfItDelegatingHandler 

Now we need to add the delegating handler:

config.MessageHandlers.Add(new 
      PerfItDelegatingHandler(config, "My test app"));

The string passed in here is the application name. This will be used as the instance name in the performance counter. You can see that in the screenshot above.

3-Decorate actions

For any action for which you want counters to be published, use the PerfIt action filter and define the counters you want to see published:

// GET api/values
[PerfItFilter(Description = "Gets all items",
   Counters = new []{CounterTypes.TotalNoOfOperations,
   CounterTypes.AverageTimeTaken})]
public IEnumerable<string> GetAll()
{
    Thread.Sleep(_random.Next(50,300));
    return new string[] { "value1", "value2" };
}

// GET api/values/5
[PerfItFilter(Description = "Gets item by id", 
   Counters = new[] { CounterTypes.TotalNoOfOperations, 
   CounterTypes.AverageTimeTaken })]
public string Get(int id)
{
    Thread.Sleep(_random.Next(50, 300));
    return "value";
}

Here we have decorated GetAll and Get to publish two types of counters (currently these are the counter types available, but more will be added - see below).

The format of the counter name will be [controller].[action].[counterType] (see the screenshot above). As such, please note that we had to rename the first Get to GetAll so that the counter names do not get mixed up. If you cannot do that (although ASP.NET Web API allows you to change the name as long as the action starts with the verb), you can alternatively use the Name property of the filter to define your own custom name (which we did not specify here, to let the default naming take place).

The Description property will appear in the perfmon.exe window and is known as CounterHelp. Counters is an array of strings, each defined against the PerfIt! runtime (see the roadmap for more info) as a counter type. Another option is the ability to define the Category for the counters, which we also did not specify, so by default it will be the assembly name. You can see the category in the screenshot above as PerfCounterWeb (see the typo!).
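For example, if you could not rename the action, the same counters could be published under an explicit name using the Name property mentioned above. A hedged sketch (the name value is just an example - see the GitHub docs for how the final counter name is composed):

// GET api/values - same action as before, but with an explicit counter name
[PerfItFilter(Description = "Gets all items",
   Name = "ValuesGetAll",
   Counters = new[] { CounterTypes.TotalNoOfOperations,
   CounterTypes.AverageTimeTaken })]
public IEnumerable<string> GetAll()
{
    Thread.Sleep(_random.Next(50, 300));
    return new string[] { "value1", "value2" };
}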

4-Adding Installer to your project 

Now use "Add New Item" to add an installer class to your project. You might have not seen this in the new item templates but it is definitely there (screenshot from VS2012 but project is .NET 4.0 so can be done in VS2010):


After adding the class, override the Install and Uninstall methods and add the code below (hit F7 to see the code):

public override void Install(IDictionary stateSaver)
{
    base.Install(stateSaver);
    PerfItRuntime.Install();
}

public override void Uninstall(IDictionary savedState)
{
    base.Uninstall(savedState);
    PerfItRuntime.Uninstall();
}

5-Use installutil.exe to install counters

Just make sure you open an administrative command window. Use the Visual Studio command prompt, since it has the right path for InstallUtil.exe. cd to your bin folder and register your assembly:
c:\projects\myproject\bin>InstallUtil.exe -i MyWebApplication.dll
This should do the trick. Use the -u switch to uninstall the counters.

That is all that is needed. Just hit your app and start benchmarking it in perfmon.exe.

Turning off the publishing of counters

In normal production circumstances, you might want to turn off publishing of the performance counters. In this case, you can put the line below in the appSettings of your web.config:

<appSettings>
    <add key="perfit:publishCounters" value="false"/>
</appSettings>

I have kept the default behaviour as publishing counters - to eliminate one configuration step in getting up and running. This may change in the future - but it is unlikely.

PerfIt! Roadmap

I needed to add performance counters to my Web API application. With the wife away and a few Easter bank holiday days free, I managed to start and finish version 0.1 of the PerfIt! library.

Future work includes adding more counter types and looking at improving the pipeline. PerfIt! has been built on top of its own extensibility framework, so it is very easy to add your own counters: all you need to do is implement CounterHandlerBase and then register the handler in PerfItRuntime.HandlerFactories.

If you have any problems, issues, comments or feedback, please use the GitHub issues to get in touch. Enjoy!