The Rain and The Shade

January 28, 2012

Mother of all useless User Interfaces

Filed under: Windows Azure — ovaisakhter @ 8:05 am

I was recently sent a phone from Nokia Connect for testing. When they shipped the phone, they sent an email with a DHL tracking number. You click this link and you get all sorts of information. I was very excited and was looking at this link quite often. I realized that this tracking page is a brilliant example of how clueless an engineer can be when capturing user requirements.

The tracking entries started with “Shipment picked up”. Hurray, one piece of useful information. Then “Processed at Lambeth – UK”. What? And after that there were 10 similar entries. I got a glint of hope when I saw the entry “Departed Facility in LONDON-HEATHROW – UK”. I thought, super, now it will be flying to Denmark, but the next entry (after about 10 hours) was “Shipment on hold BRUSSELS – BELGIUM”. Ah, OK, so it goes to Belgium from the UK, but does it come to Denmark from there, or does it go to Germany first and then to Denmark? I have no clue.


I am thinking about what the stated requirement would have been when this page was made: “The user should be able to see the status of their shipment”, and that is what this page is doing. But why exactly does a user want to track the status of his shipment? Certainly to know when he will be able to get it, and that is the one piece of information missing from this page. Installation wizards and data entry forms usually tell you that you are at step 1 of 5. Imagine a wizard telling you that you are at step 2 without telling you the total number of steps; I am sure you would be as lost as I am.

The information on this tracking page can only be deciphered if you have in-depth knowledge of DHL’s routes and the time it takes to travel between each hop; otherwise you will be clueless. It is like showing an “object reference is null” exception in a message box to the user; it is like putting your log file on the user interface.

I am sure DHL spent a lot of money on this amazingly connected system, which shows the user step-by-step information that has no practical use for him.

I can give a few suggestions on how these guys could improve this page:

  1. Put an expected arrival date
  2. Reduce the movements to only city level (in most cases)
  3. Add some type of progress bar on the page so that the movements can make sense

Meanwhile, this page will remain my reference point on how not to do a user interface.


October 2, 2011

Azure Access Control Service Usage Scenario

Filed under: Windows Azure — ovaisakhter @ 11:19 pm

In my last post I discussed setting up and configuring the Access Control Service. I also gave you a pointer to read on how you can establish trust between your application and ACS. So we are pretty much on our way with authentication.

In most web applications you will not need much authorization. What we may need to do is get some additional information from the user and remember that information when the user comes again. For this you will need to know the user’s identity, so that you can map this identity to the user-related data stored in your application.

In some applications you may need to further refine user access based on roles. You can use the same pattern here, i.e. map the user roles stored in your application to the id provided by the identity provider. If your identity provider supplies role information, you can get it from ACS by effectively using Rules in ACS. You can also keep the role information in ACS and have it transmitted to the application as part of the information sent to it (the previous post explains how you can accomplish this).

No matter what your requirement is, or whatever pattern you choose to use, the one thing you will need is to be able to get hold of all the information sent by ACS before any functionality of your application is called. In ASP.NET MVC this can be accomplished as follows.

Create a class in your application:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class AuthorizeAttribute :
                       FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        // This runs before any action of a decorated controller is called.
    }
}


This class defines an attribute which can be used on your controller classes or methods. The OnAuthorization method is called before any method of a controller decorated with this attribute.

    [Authorize]
    public class HomeController : Controller
    {
        // GET: /Home/
        public ActionResult HomeView()
        {
            return View();
        }
    }


You can enhance the attribute to take the role(s) which are authorized to use the controller (or the method, if used on a method). I personally prefer getting a controller id and mapping that to roles in some configuration, but that is a discussion for another time.

For now let us get hold of the information sent by ACS and try to display it on a page. Once you can do that, you can map it, compare it or store it as you like. The key to getting this information is this statement:

          var user = filterContext.RequestContext.HttpContext.User as IClaimsPrincipal;

IClaimsPrincipal is the interface which opens the doors for analysing the information sent by ACS. It took me some effort to find out where I could get hold of this interface. It can be found in Microsoft.IdentityModel.dll, present at the following location:

“c:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll”

All the information provided by the identity providers is in the form of claims, and this instance of the interface contains all the claims provided by ACS. The following statements iterate through all the claims and add them to a string list; later we will display this list on our page.

var claims = new List<string>();
claims.AddRange(from identity in user.Identities
                from claim in identity.Claims
                select string.Format("Claim:Name:{0}, Value:{1}",
                                     claim.ClaimType.Substring(claim.ClaimType.LastIndexOf("/") + 1),
                                     claim.Value));
filterContext.Controller.ViewBag.ClaimsInfo = claims;

I have added the list to the ViewBag so that it is accessible from the view. This simple Razor markup will display the list on our page:

<ul>
@foreach (var item in ViewBag.ClaimsInfo)
{
    <li>@item</li>
}
</ul>

Here is the output when I logged in using my Gmail account.


You can also see the Admin role I injected in the previous post.

Mostly we will be interested in the emailaddress claim; we can use it to map the application’s user data to the logged-in user.
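To make that mapping concrete, here is a minimal, self-contained sketch. It models claims as plain type/value pairs instead of the WIF IClaimsPrincipal types (so the lookup logic can be shown and run in isolation); the claim-type URI shown is the usual xmlsoap.org identity claim type, and the class and method names are mine.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ClaimsDemo
{
    // Find the value of the emailaddress claim, if present.
    // Claims are reduced to (type, value) pairs for illustration only.
    public static string FindEmail(IEnumerable<(string ClaimType, string Value)> claims)
    {
        return claims
            .Where(c => c.ClaimType.EndsWith("/emailaddress"))
            .Select(c => c.Value)
            .FirstOrDefault();
    }
}
```

Given a claim list containing ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress", "user@example.com"), FindEmail returns "user@example.com"; the returned value is what you would use as the lookup key into your own user table.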

September 30, 2011

Azure AppFabric Access Control Service

Filed under: AppFabric,Windows Azure — ovaisakhter @ 11:31 pm

My neighbor bought a new TV, and I wanted to go to his house and have a look at it. But the problem was that I didn’t know him all that well, so I asked a common friend to introduce me to him so that I could go over and have a look at his new TV. My friend introduced us and told him that my name is Ovais and that I am a reasonably decent person. As my neighbor trusted my friend, he let me into his house.

(There is nothing true in this story except the fact that my name is Ovais and I am a reasonably decent man.)

If we map this story to cyberspace, it goes something like this: I go to a website; it has no way to verify who I am, so it asks me to prove my identity to Facebook; when I do that, Facebook tells the site that I am Ovais, and the website lets me into the members’ area.

If we describe the same scenario in terms of the Access Control Service, the website is the “relying party (RP) application”, I am the client, and Facebook is the identity provider. The difference in cyberspace is that there can be multiple identity providers, and they do not speak the same language. So you need someone to translate their different languages into one standard language, so that the website’s access control functionality stays simple and robust, and the AppFabric Access Control Service provides this translation.

ACS currently supports the following identity providers:

  • Windows Live credentials
  • Facebook
  • Google
  • Yahoo
  • WS-Federation identity providers (e.g. Microsoft AD FS 2.0)

To get started with ACS you have to log into your Azure Management portal, go to the AppFabric/Access Control section and create a new namespace. Once the namespace is created, you are ready to configure the service.

At this point you may get an error if you are not the primary administrator of the subscription. If this is the case, have a look at this link with known issues and workarounds. Either you ask your primary administrator to do the steps mentioned on that link, or you will have to do it yourself (of course, only if you know his/her password).

(I will not describe every step in detail; have a look at this link for full details.)

Once you are on the ACS Management portal, here are the things you need to do:

  • Add Identity Providers
  • Add a Relying Party Application
  • Create Rules

I hope that with my award-winning, suspense-filled story you were able to understand the first two concepts. You can find the details of each step on the above-mentioned link. Here I would like to write a bit about Rules.

To understand rules we first need to understand another very important concept: claims. If you consider my story again, my friend claimed that I am a decent enough man, and my neighbor trusted this claim, so he let me in. When you are authenticated by an identity provider, it also claims some things about you: that you have this name, this email, and that your designation is Manager. Now, different IPs may use different names for these claims. Rules map these different types of claims into a standard language, so that the relying party deals with only one set of terminology. You can also define conditional mappings. For example, the rule in the snapshot says that if the user’s email address is a particular value, then add a claim that he has the Admin role.
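To make the idea of rules concrete, here is a toy, self-contained sketch (plain C#, not the ACS API or its rule engine; the names and the rule table are mine): a pass-through rule is essentially a lookup from (identity provider, input claim type) to a standard output claim type, with unmapped claims passed through unchanged.

```csharp
using System;
using System.Collections.Generic;

static class ClaimRulesDemo
{
    // (issuer, inputType) -> standard output type; a crude stand-in for ACS rules.
    static readonly Dictionary<(string, string), string> Rules =
        new Dictionary<(string, string), string>
        {
            { ("Google", "emailaddress"), "email" },
            { ("Facebook", "facebook_email"), "email" }
        };

    // Translate one input claim type; pass it through if no rule matches.
    public static string Translate(string issuer, string inputType)
    {
        return Rules.TryGetValue((issuer, inputType), out var standard) ? standard : inputType;
    }
}
```

With this table, both Google’s "emailaddress" and Facebook’s "facebook_email" come out as "email", which is the whole point: the relying party only ever sees one vocabulary.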


All this configuration can also be done using the management API, which enables you to automate the process or even create a more intuitive user interface for your administrators.

Once these steps are done, you are all set to create your application and use ACS in it.

You can enable an ASP.NET MVC application to use ACS by following step 8 in the link, i.e. “Step 8 – Configure Trust Between ACS and Your ASP.NET Relying Party Application”. Once done with it, you will be all set to test and run your application. Here is how the log-in screen will look when you try to access your website. The login options depend on the identity providers selected during the configuration of ACS.


So now your site has an authentication system without you having written a single line of code. You can add or remove identity providers at any time without needing to change anything in the application.

In the next post I will try to discuss how authorization can be done with ACS in an MVC application.

September 27, 2011

Topics, Subscriptions and Receivers in Windows Azure App Fabric Service Bus

Filed under: AppFabric,Windows Azure — ovaisakhter @ 12:25 am


I believe the most complex thing I found when looking into the Azure AppFabric Service Bus is the pricing model. After spending quite a few precious minutes of my life on it, I now think I understand it, but I am still not sure if I am correct. So this matter has to wait until my next statement arrives and I get yelled at by my support department, at which point I may come back with a blog post on it. Meanwhile you can go ahead and have a look at the pricing FAQ.

Good luck with that.

So now back to simpler things. Udi Dahan usually starts his NServiceBus presentations with “Where is the bus? There is no bus”. Well, in the case of the AppFabric Service Bus there is a bus; you can have a look at it in the Azure management portal.

Now I will use this bus to create a simple chat application, and while doing so I will try to explain some of its concepts. Please make sure to install the relevant SDK from the following link.

Our chat application is a WPF application containing only one screen. The application allows any number of users to join in, and everyone sees everyone’s messages. For simplicity the user names are given in the configuration file. So the main screen only has a list of messages, a message textbox and a send button. The application uses MVVM, so most of our functionality resides in the ViewModel. I am using GalaSoft’s MVVM Light framework here, which is a treat to use in itself, but more on that later.

Long story short, the button is bound to a command, and we are mostly concerned with what happens from that point onwards.


I have encapsulated all the logic related to the service bus in a separate class; let us call it the service bus manager. In its constructor the class connects to the service bus and tries to create the initial structure required for the communication. Have a look at the following code:

var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(IssuerName, IssuerKey);
var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", ServiceNamespace, string.Empty);

_namespaceManager = new NamespaceManager(serviceUri, tokenProvider);

The IssuerName, IssuerKey and ServiceNamespace variables signify some steps that should be done before running the code, i.e. you should create a service namespace. The service namespace is the unique identity of your service, through which your service bus will be located. Here is a link on how you can do it; otherwise you can log into the Azure management portal and find your way to it.

Once your namespace is created and selected, you will see the Default Key property in the Properties section; click the button and you will see the IssuerName and IssuerKey.

NamespaceManager is the main class in the API; once you have created an instance of it you can go on and manipulate things. The next thing we would like to do is create a topic. The following code checks whether a topic has already been created for the chat; if not, it creates it.

_myTopic = !_namespaceManager.TopicExists(ChatTopic) ? _namespaceManager.CreateTopic(ChatTopic) : _namespaceManager.GetTopic(ChatTopic);

All good, except I have not explained what a topic is: “A topic is a durable message log with multiple subscription taps separately feeding subscribers”. So a topic is a central entity to which all messages are published, and as many as 2,000 subscribers can subscribe to it. Once a message is posted to a topic, a copy of it is delivered to each subscriber.

Speaking of subscriptions, here is the code which creates the subscription to our newly created topic.

if (_namespaceManager.SubscriptionExists(_myTopic.Path, clientName))
    _namespaceManager.DeleteSubscription(_myTopic.Path, clientName);

_namespaceManager.CreateSubscription(_myTopic.Path, clientName,
                                     new SqlFilter(string.Format("From <> '{0}'", clientName)));

For the chat application we create one subscription for each client; if a subscription already exists, we delete it and create a new one. This code also shows another aspect of subscriptions: they can be selective, i.e. you can specify which of the messages posted on the topic you are interested in. Here I am specifying that I am not interested in the messages sent by myself, which kind of makes sense for a chat application. “From” is a property of the message which contains the user name of the user who sent it. In our application a chat message is represented by a ChatMessage class:

public class ChatMessage
{
    public string Message { get; set; }
    public DateTime ReceivedTime { get; set; }
    public string From { get; set; }
    public string Id { get; set; }
}


but we do not send this object as it is; more on that later. Let us carry on with the constructor code:

var factory = MessagingFactory.Create(serviceUri, tokenProvider);
_myTopicClient = factory.CreateTopicClient(_myTopic.Path);
_mySubscriptionClient = factory.CreateSubscriptionClient(_myTopic.Path, clientName, ReceiveMode.ReceiveAndDelete);

Like all good chat applications, ours will send and receive messages: _myTopicClient will be used to send messages and _mySubscriptionClient will be used to receive them.

Let us talk about sending first,

public void SendMessage(ChatMessage chatMessage)
{
    using (var message = new BrokeredMessage())
    {
        message.CorrelationId = chatMessage.From;
        message.Properties.Add("Message", chatMessage.Message);
        message.Properties.Add("From", chatMessage.From);
        message.Properties.Add("Id", chatMessage.Id);

        _myTopicClient.Send(message);
    }
}



Simple enough, I guess. Just keep in mind the limits:

  • Maximum message size: 256 KB
  • Maximum header size: 64 KB
  • Maximum number of header properties in the property bag: MaxValue
  • Maximum size of a property in the property bag: no explicit limit (limited by the maximum header size)

Now let us talk about receiving messages. Service bus messages are received by polling the service bus. “Kaachhaaan”, I can hear the sound of hearts breaking, but this is true: no events, guys, not for now at least. Here is the code that does “the magic”.

var task = new Task(ReceiveMessageTask);
task.Start();

I started a task to run the polling on a separate thread. It may not be the best way to do it, but it works, so we are good to go here.

private void ReceiveMessageTask()
{
    while (true)
    {
        var message = _mySubscriptionClient.Receive(TimeSpan.FromSeconds(2));

        if (message == null) continue;

        var chatMessage = new ChatMessage
                              {
                                  From = (string)message.Properties["From"],
                                  Id = (string)message.Properties["Id"],
                                  Message = (string)message.Properties["Message"],
                                  ReceivedTime = DateTime.Now
                              };

        // Raise the event (handled by the ViewModel) carrying the new message.
    }
}




So we poll for messages every 2 seconds, and when we receive one we fire an event which is handled by the ViewModel, which takes the message and updates the UI.

Here is where I am chatting with myself.


(ignore the “Not Connected” label on the left)

You can open as many instances of the application as you want (about 2,000 to be precise), give them different names, and they will work.

Do try this at home. I have given the example of a chat application to explain some of the concepts of the Azure AppFabric Service Bus. Of course, I do not believe it to be a rightful use of the technology; a service bus is meant to enable applications to talk to each other. If you would like to dig deeper into how and where a service bus should be used, I recommend looking into the sessions of Udi Dahan. They are not related to the AppFabric Service Bus, but they give you great insight into the scenarios where it can be used.

You can download the full application code from this link

Please do not mind the strange namespace name “DropBoxChatApp”, as that is a story for another time.

Have fun guys

July 8, 2011

Using Reverse Time Stamp in TableStorage

Filed under: Table Storage,Windows Azure — ovaisakhter @ 11:59 am

When you read about Azure Table Storage, one of the first things you come to know is that only two fields (properties) of the stored entities are indexed: PartitionKey and RowKey.

All the records inside a partition are indexed by RowKey and are automatically sorted on the RowKey as well. If you can design your key in such a way that the records are always sorted in the way that suits most of your data access scenarios, you can save a lot of processing and get much better performance.

In a lot of cases the records should be sorted by their date of creation, so that new records are shown first. In Table Storage every entity has a property called Timestamp, which would normally be the first choice for ordering (good old SQL days). But when you write your LINQ query with OrderBy, the first thing you will get is an error at runtime, because Table Storage does not support OrderBy.

In Table Storage you can put a timestamp at the beginning of your RowKey to get the records sorted by time of creation, and if you want the newest records first you can reverse the timestamp. Here is an interesting snippet I have used which does that for you:

string myRowKey = (DateTime.MaxValue - DateTime.UtcNow).Ticks.ToString("d19");

// I think I saw this code in one of the Cloud Cover videos by Steve Marx

So you can create your key like Entity.RowKey = myRowKey + whatOtherwiseWouldHaveBeenMyRowKey + somethingElseIfYouReallyWantTo, and when you get your records they will be nicely sorted by date of creation.
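To see why this works, here is a self-contained sketch of the reversed timestamp (the class and method names are mine): because "d19" zero-pads every value to 19 digits, lexicographic order on the strings matches numeric order on the ticks, and subtracting from DateTime.MaxValue makes newer times produce smaller values, so newer rows sort first.

```csharp
using System;

static class ReverseTicksDemo
{
    // Zero-padded to 19 digits so that lexicographic string order
    // matches numeric order on the tick values.
    public static string ReverseTicks(DateTime utc)
    {
        return (DateTime.MaxValue - utc).Ticks.ToString("d19");
    }
}
```

For example, ReverseTicks for July 2011 compares ordinally smaller than ReverseTicks for January 2011, so the July rows come back first from a RowKey-ordered scan.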

July 6, 2011

The keys are Kool again

Filed under: NoSQL Databases,Windows Azure — ovaisakhter @ 11:35 pm

(This post is highly inspired by my email correspondence with Thomas Jespersen, who works at

In the good old days, when SQL was king (it kind of still is), we were in love with the “identity column”, and the key for a record was insignificant in the design. So you would normally design a database where every table has one primary key whose value is either a Guid or an auto-incrementing number (identity in MSSQL).

With the popularity of NoSQL and high-performance databases, among many other things that were revolutionized, the record key got its due importance back. Here is what the tutorial on the Redis site says about Redis:

“Redis is what is called a key-value store, often referred to as a NoSQL database. The essence of a key-value store is the ability to store some data, called a value, inside a key. This data can later be retrieved only if we know the exact key used to store it”

This seems to be the theme in most of the NoSQL databases: Redis, Azure Table Storage, RavenDB, Cassandra and many more use a key to access huge amounts of data, and these systems index the keys for very fast retrieval of information. I am not saying that querying the data is not possible (for example, RavenDB provides amazing possibilities for creating indexes on the data, but more on that later), but the fastest way to get or set data in these systems is still “if you can somehow know the key”.

Now the question is how you know the keys without getting them from the store. An answer could be that you should be able to generate the keys based on the context and the type of query you want to run. Let us take the example of Twitter: a request comes in and says “who follows me”. From the context we know the current user (ovaisakhter in my case). So the user is ovaisakhter and he wants to know who follows him, so we can keep a list in the database against the key “ovaisakhter-followers”. Now we can get all this information in one request.

Let us take another example: we need to save a user’s tweets. One way of doing that could be to maintain one list of tweets per month (it depends on how the data will be accessed), so the key can be “UserId-mmyyyy-Tweets”. Now if someone comes and asks for the tweets, you know exactly where to find them quickly.

Azure Table Storage is a bit different from the other NoSQL offerings. It provides less opportunity to play with the structure of the document, as the document has to be name-value pairs (maximum 255) with a total size of 1 MB. On the other hand, it provides a further categorization possibility: you have two keys to play with, PartitionKey and RowKey. The PartitionKey has a very important role to play in the scaling of your datastore (more on that later), and it can also be used as a categorization point for fast data retrieval. Let us see how our Twitter example could look if modeled on Table Storage. We can use “UserId-mmyyyy-Tweets” as the PartitionKey and store all the tweets of that month under it, and use reverse ticks (a reversed timestamp) plus some identifier as the RowKey for better sorting. Remember, once you know how to generate the key you can use Parallel.ForEach to get tweets for multiple months in parallel.
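As an illustration only (the key format and helper names are mine, not an official scheme), generating such keys is plain deterministic string building. Note that in .NET date format strings the month is "MM", since "mm" means minutes:

```csharp
using System;
using System.Globalization;

static class TweetKeysDemo
{
    // PartitionKey: one partition per user per month, e.g. "ovaisakhter-072011-Tweets".
    public static string PartitionKeyFor(string userId, DateTime utc)
    {
        return userId + "-" + utc.ToString("MMyyyy", CultureInfo.InvariantCulture) + "-Tweets";
    }

    // RowKey: reversed ticks so newer tweets sort first, plus an id to keep keys unique.
    public static string RowKeyFor(DateTime utc, string tweetId)
    {
        return (DateTime.MaxValue - utc).Ticks.ToString("d19") + "-" + tweetId;
    }
}
```

Because both keys are computed from values the application already has (the user id and a date), any request for “tweets of user X in month Y” can go straight to the right partition without querying for keys first.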

So in the new era of software development the keys are back in fashion, and due time should be spent on designing what your keys should be, based on things like your data structures, data retrieval requirements, scaling requirements and so on.

June 27, 2011

Querying the Entities using the RowKey in Azure Table Storage

Filed under: Table Storage,Windows Azure — ovaisakhter @ 3:44 pm

When you look into Table Storage you are introduced to the mandatory properties of every entity that can be stored in the tables, i.e.

  • PartitionKey
  • RowKey
  • TimeStamp

You are also told that PartitionKey and RowKey are the only properties that are indexed. Reading this, I got the idea that it might be good to put some of the data in the key so that you can use it for searching. For example, if you are making a blog site, it could be a good idea to put the user id inside the row key of every blog entry and then use String.Contains to find them: “give me the blogs of this user” becomes “return all the blogs where the row key contains the email”. At the very least we can draw the conclusion that a String.Contains should run faster (much faster) on the row key than on a non-indexed field inside the entity.

So I tweeted to confirm my hunch with the people at Cloud Cover on Channel 9, who replied in the affirmative.

Now I set off to measure the performance gains I would get using the above-mentioned approach. I created a user entity like the following.

Next I created records of this entity with the following code.


I ran the code twice with a slight change in the email address, so ideally I should have 8,000 records in my table, but there were 7,600. I will investigate that later and report back; carrying on.

Now the fun part: I started querying this entity from my code. Remember that both RowKey and Email contain the email address, which in my case means a lot of entities starting with “o1”.

So I wrote one query to get the records starting with “o1” using User.RowKey and one using User.Email. Ideally the query running on RowKey should be much faster than the one running on User.Email; less ideally, they should be about the same. But in my case the absolute worst case happened: the RowKey query was around 3 times slower than the Email query. I ran the code 100 times, took an average, and the result was:

  • Email query: 286 milliseconds
  • RowKey query: 919 milliseconds

Then I changed my queries and did an equals comparison instead of Contains, and this time the RowKey query was much faster than the Email query.

So I can draw the conclusion that the row keys are not stored as strings in the database; most probably an integer representation of them is saved and indexed, so the equals operation is fast but any string operation on them is extremely slow. I think this way of doing things is highly non-intuitive.
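A common workaround, which is not from my measurements above but is the standard pattern for prefix searches in Table Storage, is to turn the prefix match into a lexicographic range on the RowKey, which the index can serve with two simple comparisons instead of Contains. Computing the exclusive upper bound of the range is plain string arithmetic (the class name is mine; the sketch assumes the prefix’s last character is not the maximum character value):

```csharp
using System;

static class PrefixRangeDemo
{
    // For prefix "o1", every matching key k satisfies "o1" <= k < "o2",
    // so a prefix scan becomes two indexed range comparisons.
    public static string UpperBound(string prefix)
    {
        char last = prefix[prefix.Length - 1];
        return prefix.Substring(0, prefix.Length - 1) + (char)(last + 1);
    }
}
```

In a LINQ query this becomes something like u.RowKey.CompareTo("o1") >= 0 && u.RowKey.CompareTo("o2") < 0, both of which Table Storage can evaluate against the RowKey index.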

Here is the code I used to query (please do not mind the many Console.Write statements; I was just trying to generate MS Excel-compatible output).


So refrain from querying in any way except equals on the RowKey, or you are in for a surprise, and I have a feeling it will not be a good one.


Update: Steve Marx pointed out an error in my code: I was getting fewer records in the Entity.Field case, which caused such a huge difference. The good news for me is that Entity.Field is still a little faster :)

RowKey Query: 1610,05 MilliSeconds
Entity.Field Query: 1590,07 MilliSeconds

June 25, 2011

What is Microsoft Azure VM Role and what it is Not

Filed under: VM Role,Windows Azure — ovaisakhter @ 12:18 pm

A few months back I heard about the Platform as a Service initiative from Microsoft at PDC. It seemed exciting, especially the VM role. I started thinking about the possible scenarios where this feature could be used, such as the possibility of hosting our own servers, like SharePoint 2010 or MS Dynamics CRM, in the cloud.

A friend of mine, who is not as lazy as I am, jumped at the opportunity, uploaded a VM to Azure and started running a number of instances of it. He installed MS CRM and SharePoint on the instances and connected them to his local domain with Windows Azure Connect. Life seemed as it should be, and then suddenly the dream shattered: I received a very distressed Skype message from him.

After some conversation, which is not mentionable here, I got to know the problem: whenever he changed any configuration of the VM role, all the instances were reinitialized, or in other words reverted to the “base image”. Initially I participated in his verbal bashing of Microsoft, but later, when I thought about it, I realized that this cannot be a bug; it has to be by design.

Then I started to look into it a bit further and found an amazing video from the Channel 9 series called Cloud Cover. One sentence in the video cleared up the whole scenario for me: “VM roles are an extension of the worker roles”.

Worker roles are, simply put, the Azure version of Windows services: long-running processes used to perform resource-demanding batch operations on the Windows Azure operating system. They are stateless, but they can use one of the storage mechanisms provided by Windows Azure (blobs, Table Storage, SQL Azure, etc.).

The VM role is an extension of the same concept, with the difference that in this case you can use your own operating system. You make a virtual machine (and do some abracadabra with the csupload utility), upload it to Azure and do some configuration, and voilà, your VM is running in Azure. If you need two of them, change the configuration and now there are two, and so on and so forth. As Azure is responsible for starting and disposing of these instances, it is not possible for them to maintain state, so each time an instance is started it is started as pure as the “base image”. You can contaminate it a bit with startup tasks, but that is about it.

But the question is: where should we use it? Let us look at one example of a potential use of this offering.

You have a video sharing website. You use a utility to encode all the uploaded videos before they are published, and this utility (like most of the available ones) does not support the Azure operating system. You can set up the encoding process on a VM and run it on Azure using the VM role. Just keep in mind that the processed videos should be persisted to either Azure storage or your on-premises storage (you can use Azure Connect for this as well) as soon as you are done. The value addition is that you can start with a single instance and scale it to hundreds if and when required. You can also increase the number of instances for a certain period (I don’t know when people share more videos, after the holiday season maybe) and then scale down when they are not needed. You can scale from servicing 1,000 clients to 1,000,000 and back to 1,000 in one night.
