Let us roll a die. (More predictable random numbers)

If you are making web applications, chances are that you have heard of requirements like:

  • Every 5th user should see this survey
  • The advertisements should be shown randomly, such that 20% of visitors see Advertisement 1, 30% see Advertisement 2, and so on.

When we try to solve this type of problem, the solutions that naturally come to mind are something along the lines of: create a central data store, keep track of visitors and who was shown what, and then create an algorithm to add some randomness to the behavior (well, this is what came to my mind within seconds of hearing this problem).

When we look closely at this solution, the first problem we can identify is that it will slow down the website. Every time a page is shown you have to access a data store (maybe a session, or in the case of multiple web servers a database). If the site has a considerably large number of page views this can be a problem, as this will be a sequential read/write operation, hence there will be locks and requests will be waiting for their turn. Despite all this trouble there is a huge chance that you will still not be able to fulfill the requirements one hundred percent. If at this point we ask the business stakeholders, they will tell us the requirement was never for 100% accuracy anyway (yes, they are more reasonable people, far more than we give them credit for). So we have established that 100% accuracy is not required, and that the traditional solutions can seriously hurt performance.

Now let us talk about something else, i.e. probability. If you take a coin and toss it, there is a 50% probability for each side of the coin to land facing up. If we keep flipping the coin, let's say hundreds of times, it will land roughly 50% of the time on each side. In the same way there is a 1/6 chance for each side of a die to appear when rolling it. Let us try to use this knowledge to solve the above mentioned problem. Imagine that we can create a die with a different number of sides based on the requirements, and that we can increase or decrease the chances of each side appearing. Each time we get a request we roll this imaginary die and take a decision based on which side appears. This way, each time we have to take a decision we do not have to look inside a data store; we just roll the die and respond, and with our knowledge of probability we can safely say that we will be roughly within the acceptable range. If we have more than one server we can put a die on each server and we are good to go.

There can be many ways to incorporate probability into your code; one of them is to use Random. Let us take an example. When a visitor comes to our site we have to show her a string. 10% of the users should see the string “10%”, 30% of the users should see the string “30%” and 60% of the users should see the string “60%”. When the user comes to the site we generate a number between 1 and 100. If this number is from 1 to 10 we show the “10%” string to the user, if the number is between 11 and 40 we return the “30%” string, and from 41 to 100 we return the “60%” string. The idea is that there is a 10% probability for a random number from 1 to 100 to land between 1 and 10, and so on and so forth.

I wrote a class that takes a list of percentages and their identification variables (strings in our case), and when asked for the next random value returns the appropriate identification variable.
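
I will not paste the full implementation here, but below is a minimal sketch of how such a generator could be written; the class names and signatures match the usage that follows, the body is my sketch and the actual downloadable code may differ.

using System;
using System.Collections.Generic;
using System.Linq;

public class WeightedRandomEntry<T>
{
    public int Weight { get; private set; }
    public T Value { get; private set; }

    public WeightedRandomEntry(int weight, T value)
    {
        Weight = weight;
        Value = value;
    }
}

public class WeightedRandomGenerator<T>
{
    private readonly List<WeightedRandomEntry<T>> _entries;
    private readonly int _totalWeight;
    private readonly Random _random = new Random();

    public WeightedRandomGenerator(IEnumerable<WeightedRandomEntry<T>> entries)
    {
        _entries = entries.ToList();
        _totalWeight = _entries.Sum(e => e.Weight);
    }

    public T GetNext()
    {
        // Roll the imaginary die: a number from 1 to the sum of all weights.
        var roll = _random.Next(1, _totalWeight + 1);

        // Walk the cumulative weights until the roll falls inside a slot,
        // exactly like the 1-10 / 11-40 / 41-100 ranges described above.
        var cumulative = 0;
        foreach (var entry in _entries)
        {
            cumulative += entry.Weight;
            if (roll <= cumulative)
                return entry.Value;
        }

        // Unreachable as long as all weights are positive.
        return _entries[_entries.Count - 1].Value;
    }
}

With that in place, the choices for our example are set up like this: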

var choices = new List<WeightedRandomEntry<string>>
{
    new WeightedRandomEntry<string>(10, "10%"),
    new WeightedRandomEntry<string>(30, "30%"),
    new WeightedRandomEntry<string>(60, "60%")
};

var generator = new WeightedRandomGenerator<string>(choices);

Now I ask for the next random value “TotalIterations” times, count how many times each string was returned, and at the end print the percentage of appearances of each string. Then I repeat the whole operation 100 times.

for (var j = 0; j < 100; j++)
{
    var ten = 0;
    var thirty = 0;
    var sixty = 0;
    for (var i = 0; i < TotalIterations; i++)
    {
        switch (generator.GetNext())
        {
            case "10%":
                ten++;
                break;
            case "30%":
                thirty++;
                break;
            case "60%":
                sixty++;
                break;
        }
    }

    // Percent converts a count into a percentage of TotalIterations.
    Console.WriteLine("{0}                        {1}                    {2}", Percent(ten), Percent(thirty), Percent(sixty));
}

The snapshot below shows the result of the above code.

[snapshot: percentage of appearances of each string over 100 runs]

You can see that the results are roughly within the expected range; there is less than 1% deviation from the requirements.

Now if we want to solve the every-5th-user problem, we can have an identifier “show survey” with 20% and “don’t show survey” with 80%, get the next value on each page view, and show the survey if “show survey” is returned. This approach can be used to solve most weighted random problems without using any data store, and it performs considerably better.
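
For instance, reusing the generator sketched above, the survey decision on each page view could look like this:

var surveyChoices = new List<WeightedRandomEntry<string>>
{
    new WeightedRandomEntry<string>(20, "show survey"),
    new WeightedRandomEntry<string>(80, "don't show survey")
};
var surveyDice = new WeightedRandomGenerator<string>(surveyChoices);

// On each page view: no data store lookup, just a roll of the die.
var showSurvey = surveyDice.GetNext() == "show survey";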

You can download the code used in this example from the following link.

Visual Studio Add-in for code review

Background

We use static code analysis on our code, and ReSharper as well, which I am not sure where to place category-wise, but it is surely the best tool next to Visual Studio a developer can have. Although these tools have made the life of architects a lot easier, manual code reviews (if I may call them so) are still very relevant.

A while back I had to do a code review for a decent sized project. For someone as lazy as I am it is a struggle to write down the review reports. It is a pain to write down the context information like project, file, line number and then some selected text for each defect report. I Googled for a tool which could make my life easier, but most of the tools I found were a bit too complex for my need, and on top of that most of them were coupled with some particular source control (not to mention they were not free either).

The Add-In

Eventually I thought of writing a very simple version myself. Here are the requirements for this simplest version:

  • should be hosted inside Visual Studio
  • should let me write a description of the issue
  • should capture the following context information
    • Solution Name
    • Project Name
    • File Name
    • Line Number
    • Selected Text
  • should be able to append these reports to a text file

In this post I will try to introduce you to the add-in and its capabilities (which are not many) and show how to install it. I will explain a bit more about the code in my next post. I will, however, put a link to the source code at the end if you want to look at it.

Once you install the plug-in you get a menu item “Code Review” in your Tools menu. Once this item is clicked, a Visual Studio tool window is loaded with the code review form. You can dock this window in Visual Studio for easy access. Here is how it looks after docking.

[screenshot: the Code Review window docked in Visual Studio]

So the plug-in offers some very basic capabilities. Once it is loaded you can select a text file where all of your review will be saved. You can go to a file, maybe select some text, start writing an issue description in the window, and poof, the above mentioned context information appears in the context box. Once done with the description you can click the “Append to file” button, and the description along with the context information will be saved in the text file.

[screenshot: the code review form with context information captured]

 

And when you click “Append to file”, here is what is saved to the file:

Code Review Defect Report
**************************
Description
***********
this code generated by visual studio really stinks, well not really I am just doing to create a fake code defect report
Context
*******
Solution: C:\Users\Ovais\documents\visual studio 2010\Projects\Wisdom.VisualStudio.Tools\Wisdom.VisualStudio.Tools.sln
Project: Wisdom.VisualStudio.TestApplication
File: Program.cs
Line: 19

Selected Text: Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());

 

Installation

I have not made any fancy installation setup for this, so you will have to do the following steps manually:

  1. Copy the contents of install.zip to a folder on your computer, e.g. c:\Wisdomplugins
  2. In Visual Studio, go to the Tools menu and select Options
  3. In the Options dialog, select Environment/Add-in/Macros Security
  4. Click the Add button and provide the path of your newly created folder
  5. Restart Visual Studio, and with a stroke of luck you will see the menu item in the Tools menu

I have only tested this on the English version, and I am pretty sure it will not work on any other language :).

You can get the code for the Add-in from the following location.

http://ge.tt/4X0Mm7F?c

As I mentioned, I will explain the code of the Add-in in a later post. So if you are interested, stay tuned.

Happy reviewing!

Peeping into very large text (XML) files

Recently I worked with a 40 GB XML file. The main objective was to look at the data in the file, create a compatible data model, and eventually write a routine which could import the data into the database.

To start with, I used a virtual machine hosted somewhere in the cloud for all my work. This approach made my life much easier as:

  • The long running processes were not slowing down my own computer, and I could do something else while the processing was being done
  • I could just turn off my PC and go home without having to kill the process
  • I got a much faster machine in the VM

I tried to open the file in some well known editors, but with no result; most editors froze. My initial objective was to just look into some part of the document to at least get an idea of what type of data I had to work with. My initial thought was to write a small program using XmlReader in C# and dump some part of the file into another file.
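
For the record, here is roughly what that XmlReader program would have looked like. This is a sketch only; the file paths and the element count are illustrative.

using System.IO;
using System.Xml;

class XmlPeek
{
    static void Main()
    {
        const int maxElements = 1000; // how much of the start of the file to dump

        // XmlReader streams the document, so the 40 GB file is never
        // loaded into memory as a whole.
        using (var reader = XmlReader.Create(@"C:\data\huge.xml"))
        using (var writer = new StreamWriter(@"C:\data\sample.txt"))
        {
            var count = 0;
            while (count < maxElements && reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                {
                    // Indent by depth to keep the dumped structure readable.
                    writer.WriteLine("{0}<{1}>", new string(' ', reader.Depth * 2), reader.Name);
                    count++;
                }
            }
        }
    }
}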

Just before writing the code I stumbled upon the Get-Content command in Windows PowerShell. Get-Content lets you open a text based file and do basic operations on it. So if you want to show a file you write something like

Get-Content .\1.txt

on the Windows PowerShell command prompt. This will print all the content of the file on the console. Not very useful, is it?

Now if you want to display the first 10 lines of the file, you can give the following command

Get-Content .\1.txt -totalcount 10

The best thing about this command is that it will not load the whole file into memory; it will only load the specified lines. You can very easily save the output of this command to a file like the following

Get-Content .\1.txt -totalcount 10 > new1.txt

You can get more information about this command from the following link

http://technet.microsoft.com/en-us/library/ee176843.aspx

The link describes some other parameters of Get-Content with which you can count the number of rows or read the last n rows, but you cannot use these if you are trying to work with very huge files, as all of them will load the whole file into memory and then perform the requested operation, which kills the purpose of using the command in the first place.

Mother of all useless User Interfaces

I was recently sent a phone from Nokia Connect for testing. When they sent the phone they also sent an email with a DHL tracking number. You click this link and you get all sorts of information. I was very excited and was looking at this link quite often. I realized that this tracking page is a brilliant example of how clueless an engineer can be in capturing the user requirements.

The tracking entries started with “Shipment picked up”. Hurray, one piece of useful information. Then “Processed at Lambeth – UK”. What? And after that there were 10 similar entries. I got a glint of hope when I saw the entry “Departed Facility in LONDON-HEATHROW – UK”. I thought, super, now it will be flying to Denmark, but the next entry (after about 10 hours) was “Shipment on hold BRUSSELS – BELGIUM”. Ah, OK, so it goes to Belgium from the UK, but does it come to Denmark from there, or does it go to Germany first and then to Denmark? I have no clue.

[screenshot: the DHL tracking page]

I am wondering what the stated requirement was when this page was made. Probably “User should be able to see the status of their shipment”, and that is what this page is doing. But why exactly does the user want to know the status of his shipment? Certainly to know when he will get the shipment, and that is the one piece of information missing from this page. Installation wizards or data entry forms usually tell you that you are at step 1 of 5; imagine a wizard telling you that you are at step 2 without telling you the total number of steps. I am sure you would be as lost as I am.

The information on this tracking page can only be deciphered if you have in-depth knowledge of DHL’s routes and the time it takes to go between each hop; otherwise you will be clueless. It is like showing an “Object null exception” in a message box to the user; it is like putting your log file on the user interface.

I am sure DHL spent a lot of money on this amazingly connected system, which shows step-by-step information that has no practical use for the user.

I can give a few suggestions on how these guys could improve this page:

  1. Put an expected arrival date
  2. Reduce the movements to city level only (in most cases)
  3. Add some type of progress bar on the page so that the movements can make sense

In the meantime this page will remain my reference point on how not to do a user interface.

HTML in-browser database: do we really need it?

Last night I was watching Mel Gibson’s Payback on TV. One interesting thing in the movie is that everyone keeps saying that Mel needs his 130,000 bucks back, and the poor man keeps telling them it’s 70,000, and in the end he gets the 130,000 bucks.

I have been working with web technologies for the last 12 years, and in all this time I have never heard anyone say “if only we could have a database in the browser”. We have discussed many times the limitations posed by the limited client side storage available in browsers, and I have heard a number of things like “if only we could store a little more information on the client side”.

A number of technologies were created to overcome the limitations of the traditional web technology frameworks; Flash and Silverlight are two good examples. But none of these technologies tried to provide a database on the client side.

We have come leaps and bounds in terms of programming languages and tools that interact with databases. There are frameworks which make our lives much easier. In the .NET world we have technologies like Entity Framework, which can simplify the data access code a great deal. Using type safe languages like C, Java, or C# you can write highly maintainable and robust business logic code. Having database code in the browser takes us a step in the wrong direction. JavaScript does not strike me as one of the most maintenance friendly languages. Eventually we will see a lot of data access and business logic code written in a language not known for its maintainability, which may result in a maintenance nightmare. I recently had an experience where a prototype with just two tables proved to be a nightmare: every slight change caused me to change the code in a number of places and introduced bugs more frequently than code written in a more type safe language would have. Of course I would have used something like Entity Framework if I were coding it for the server, which could have been 50 times more efficient in coding and maintainability.

Usually JavaScript developers are not the people writing a lot of business oriented code, so applications written with this newly acquired power could suffer from weak coding standards.

The browser database is temporary in nature and is gone as soon as you wipe your browser history clean. This essentially means that the data has to be saved to a server based data source, which means a lot of code will be written twice.

A new breed of client applications means a new breed of server side applications. Business applications that are built for connected access will not be compatible with the new breed of disconnected scenarios. The new applications made for these scenarios will have to take care of the problems which come with the territory of distributed databases, hence increasing their complexity.

After all this discussion, one question comes to my mind: “Why now?”. Internet connectivity has been improving every day. There will be fewer and fewer business scenarios which require disconnected access, and for these rare scenarios there are already enough solutions. So it feels rather absurd that connectivity is increasing every day while we are coming up with solutions for making disconnected business applications.

So, as poor Mister Porter said, we only need 70,000 bucks, which means we could have been satisfied with offline storage and may not have needed a full blown database in the browser.

Have no logic in ye unit tests

Recently I attended a seminar by Roy Osherove about good practices in unit tests.

Roy stressed hard that there should be no logic in the unit tests, and I totally agree with that. He goes on and explains that logic in the unit tests can be a symptom of two things.

Missing Logic in your domain Model

If you have to write a lot of logic for a unit test, then it means that someone writing a web application (or some other client application) will be writing the same logic.

This essentially means that if this logic changes in one place, it will have to be changed in the other places also, which will seriously hurt the maintainability of the unit tests. I like to consider a unit test as just another client of your domain logic layer: the more logic you have in the business logic layer, the less you will need to replicate in the different clients.
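
To make the point concrete, here is a made up example (NUnit-style attributes; the Order class and its formatting rule are hypothetical):

// Smell: the test rebuilds the formatting rule itself. If the rule
// changes, this logic must change here and in every other client.
[Test]
public void FormattedNumber_WithLogicInTheTest()
{
    var order = new Order(42);
    var expected = "ORD-" + 42.ToString().PadLeft(6, '0');
    Assert.AreEqual(expected, order.FormattedNumber);
}

// Better: the domain model owns the logic, the test just states the fact.
[Test]
public void FormattedNumber_PadsIdToSixDigits()
{
    var order = new Order(42);
    Assert.AreEqual("ORD-000042", order.FormattedNumber);
}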

A test that should have been divided into two or more tests

This is the simple cohesion principle: a test (or a method) should do only one thing and do it well.

Who is testing my tests

An interesting question by Roy caught my attention: “who is testing your tests?”. Your tests should not have logic, as there is no one testing your unit tests.

I would like to believe that in most cases the unit tests and the code are in a state of equilibrium, so if one end is faulty the whole balance will be compromised. Unit tests check your code and the code checks your unit tests. In most situations when there is a bug in my unit test, the test will fail, as it will no longer be doing what is expected of it. So I do not fully agree with this reasoning; nonetheless I totally agree with the concept itself.

The Broken Window (Maintaining coding best practices in organizations)

Recently I attended a seminar by Roy Osherove about good practices in unit tests. The whole seminar was full of information and useful ideas, but I particularly liked the concept called the Broken Window Theory. The idea is that if a deviation from normality or from a pattern is made in one place, it will be replicated in multiple places in the code; other developers looking at the code will feel that it is fine to do something like this.

The question is how to avoid broken windows in the code of a team or a software development company where a lot of people are writing code and there are very few to review it.

I am a big supporter of code analysis tools and have been advocating them in my organization. We have been using FxCop, and lately we have been using ReSharper.

For FxCop I like to have a basic set of rules that should be followed by all projects. What we try to achieve with this is that all code going out of the development shop meets a certain basic standard. Once a new project starts, the architect looks into the standard rules and sees what additional rules should be added for this project. The combination of these rules then becomes the standard rule set for that project.

The standard operating procedure for developers is not to check in before all the errors and warnings have been removed. FxCop helps a lot in achieving this goal with its annoying warnings. We cannot use check-in policies as we use multiple source control tools. “Treat Warnings as Errors” is always on in the project properties. The code analysis is also done on the build servers after every check-in. In the case of ReSharper we use the out of the box refactoring scheme, and the SOP is to follow all the suggestions.

All static code analysis tools provide the facility to exclude a portion of the code from being evaluated for a particular rule, and sometimes it is justifiable to exclude a certain rule. This makes it very convenient for developers to exclude code even when there is not enough justification. The question then becomes how to avoid the broken window effect when we make such exclusions.

In this case we like to follow a practice that I call “Justify and publish”. As a developer you are allowed to deviate from a rule if and only if you:

  1. Write a comment in the code with the reason for the deviation.
  2. Publish this deviation to the project development wiki.

This way the developers do not go around sabotaging the patterns at will. If someone else sees the deviation he can see that it was deliberate and done for a reason, and he will not replicate it everywhere. By publishing the information on the development wiki, architects and other developers get a chance to vet the deviation and may suggest a better approach to solve the issue, yet no time is wasted for the developer.
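
In .NET the “justify” half of this can even live right next to the exclusion itself; here is a sketch using FxCop’s standard suppression attribute (the rule shown and the wiki reference are made up for illustration):

using System.Diagnostics.CodeAnalysis;

public class QueueMonitor
{
    // Deviation: published on the project development wiki under "FxCop exclusions".
    [SuppressMessage("Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
        Justification = "The call is expensive and its value changes between calls; a method communicates this better than a property.")]
    public int GetCurrentQueueLength()
    {
        // ... expensive query against the queue ...
        return 0;
    }
}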

The war game (Effectively starting up an offshore project)

I have been working in offshore software development for the past 11 years. Out of these, I was stationed at an offshore development center in Islamabad for about 8 years. Later on I moved to my company’s onshore office in Denmark, and further down the road I moved to Talented Earth Organization (TEO), my current employer.

TEO has a sales office in Denmark and an offshore development center in Islamabad, Pakistan. At TEO we try to follow a combination of onshore and offshore. To bootstrap the development process for a customer/project we offer architecture and project management services onshore. So usually it is myself and a colleague (a project manager) who start the project from Denmark. We have a running joke that our purpose is to make ourselves useless as soon as possible. So as soon as the channels are established between the client and the development team, we move to more of a “need to use” or steering committee role.

During all the years we have been working, we have always felt the challenge of effectively starting a software project. A number of times it happens that the project is in the analysis phase and the team is already in place. The team can be included in the analysis phase, but I think not all of the team members will be useful for analysis; secondly, in the offshore scenario it is a bit time consuming when the idea for the project is still being developed.

Technically speaking, there are a number of things that should be in place before the development can start:

  • Do all of the team members have the correct development environment?
  • Is the source control in place with all the user roles and rights?
  • Are all the third party components in place?
  • Are the database servers there, with developers having proper access? And so on and so forth.

Team dynamics are another important factor in project success. The project manager needs to gauge the team members so that he knows how to interact effectively with them. Team members need to learn about each other: their weaknesses, their strengths, their likes and dislikes, their coding and development styles. A handshake between all the technical resources is extremely important for the overall success of the project.

I remember a project where the client was still fighting with the evolution of his idea, and we in the onshore office were trying to help him in this fight. (To help a client during the requirement analysis phase is to help him keep the scope limited.) We had some meetings with the offshore team, so they had a high level idea of what needed to be done. My project manager and I were having discussions about how the team in the offshore center didn’t have much to do. From the initial technical discussions we had a very good idea of which technologies would be used in the project, and we wanted the whole team to get some hands-on experience with these technologies. There was a tough deadline on the project. “There will be war once the actual development is started”, I once said to the project manager. Thinking about this sentence, we got the idea of a war game: a short project to prepare the team for the actual project.

A 5-day mini project (the war game) was planned.

Here are the ground rules for this war game:

  • All the processes followed in a normal project will be followed (it is a war game :)).
  • We created a one page feature list based on the current understanding. The team will be provided with this feature list.
  • We will have a one hour meeting for requirement elaboration.
  • We will not discuss the requirements again; the team will develop whatever they can envision based on this one pager, the discussion, and their previous knowledge about the project, hence removing the delaying factors in the project.
  • The feature list was deliberately a bit more than what you would expect the team to be able to deliver.
  • To add a bit of spice, we agreed that the team would present the outcome of this activity to the rest of the company.

So the war game began, and after 5 days we were delighted to see the result: we had properly working use cases. The best part was that the team enjoyed it a lot, as they were able to show their creative side and work without boundaries.

When this demo was shown to the customer he was extremely happy; in fact he asked us to put it online so that he could show it to his stakeholders. Now the customer had a reference point to talk about: this thing I like, this I do not, and so on and so forth.

After this first experience we have used the war game in a couple of other projects, and every time the results have been better than expected.

In the end I must say that the war game gives your team momentum, and it is very important to keep this momentum going. That is why I would suggest that project managers schedule the post war game activities carefully, so that this momentum and team synergy are properly consumed.

Azure Access Control Service Usage Scenario

In my last post I discussed a bit about setting up and configuring the Access Control Service. I also gave you a pointer to read on how you can establish the trust between your application and ACS. So we are pretty much on our way with authentication.

In most web applications you will not need much authorization. What we may need to do is get some additional information from the user and remember that information when the user comes again. For this you will need to know the user’s identity, so that you can map this identity to the user related data stored in your application.

In some applications you may need to further refine user access based on roles. You can use the same pattern here, i.e. map the user roles stored in your application to the id provided by the identity provider. If your identity provider provides role information, you can get this information from ACS by effectively using Rules in ACS. You can also keep the role information in ACS and have it transmitted as part of the information sent to the application (I explained how you can accomplish this in the previous post).

No matter what your requirement is, or whatever pattern you choose to use, the one thing you will require is to be able to get hold of all the information sent by ACS before any functionality of your application is called. In ASP.NET MVC this can be accomplished as follows.

Create a class in your application

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class AuthorizeAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
    }
}

This class defines an attribute which can be used on your controller classes or methods. The OnAuthorization method is called before any method of a controller decorated with this attribute.

[Authorize]
public class HomeController : Controller
{
    //
    // GET: /Home/

    public ActionResult HomeView()
    {
        return View();
    }
}

You can enhance the attribute to take the role(s) which are authorized to use the controller (or method, if used on a method). I personally prefer getting a controller id and mapping that to roles in some configuration, but that is a discussion for another time.

For now, let us get hold of the information sent by ACS and try to display it on a page. Once you can do that, you can map it, compare it or store it as you like. The key to getting this information is in this statement:

var user = filterContext.RequestContext.HttpContext.User as IClaimsPrincipal;

IClaimsPrincipal is the interface which opens the doors for the analysis of the information sent by ACS. It took me some effort to find out where I could get hold of this interface; it can be found in Microsoft.IdentityModel.dll, present at the following location:

“c:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll”

All the information provided by the identity providers is in the form of claims, and this instance of the interface contains all the claims provided by ACS. The following statements iterate through all the claims and add them to a string list; later on we will display this list on our page.

var claims = new List<string>();
filterContext.Controller.ViewBag.ClaimsInfo = claims;

claims.AddRange(from identity in user.Identities
                from claim in identity.Claims
                select string.Format("Claim:Name:{0}, Value:{1}",
                    claim.ClaimType.Substring(claim.ClaimType.LastIndexOf("/") + 1),
                    claim.Value));

I have added the list to the ViewBag so that it is accessible from the view. This simple Razor markup will display the list on our page:

<ul>
    @foreach (var item in ViewBag.ClaimsInfo)
    {
        <li>@item</li>
    }
</ul>

Here is the output when I logged in using my Gmail account.

[screenshot: the list of claims rendered on the page]

You can also see the Admin role I injected in the previous post.

Mostly we will be interested in the emailaddress claim; we can use it to map the application’s user data to the logged in user.
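
As a sketch of how you might pull it out (assuming the WIF ClaimTypes constant; if your rules emit a different claim type, compare against that URI instead):

using System.Linq;
using Microsoft.IdentityModel.Claims;

// ...inside OnAuthorization, after the cast shown earlier:
var email = user.Identities
    .SelectMany(identity => identity.Claims)
    .Where(claim => claim.ClaimType == ClaimTypes.Email)
    .Select(claim => claim.Value)
    .FirstOrDefault();

// 'email' can now be used as the key to look up the
// application-specific data for this user.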

Azure AppFabric Access Control Service

My neighbor bought a new TV and I wanted to go to his house and have a look at it. But the problem was I didn’t know him all that well. I asked a mutual friend to introduce me to him so that I could go to his house and have a look at his new TV. My friend introduced us and told him that my name is Ovais and that I am a reasonably decent person. As my neighbor trusted my friend, he let me into his house.

(There is nothing true in this story except the fact that my name is Ovais and I am a reasonably decent man ;).)

If we map this story to cyberspace, it will be something like this: I go to a website, it has no way to verify who I am, so it asks me to prove my identity to Facebook. When I do that, Facebook tells the site that I am Ovais, and the website lets me into the members area.

If we describe the same scenario in terms of the Access Control Service, then the website is the “Relying party (RP) application”, I am the client, and Facebook is the identity provider. The difference in cyberspace is that there can be multiple identity providers, and they do not speak the same language. So you need someone to translate their different languages into one standard language, so that the website’s access control functionality stays simple and robust; the AppFabric Access Control Service provides this translation functionality.

ACS currently supports the following identity providers:

  • Windows Live credentials
  • Facebook
  • Google
  • Yahoo
  • WS-Federation identity provider (e.g. Microsoft AD FS 2.0)

To get started with ACS you have to log into your Azure Management portal, then go to the AppFabric/Access Control section and create a new namespace. Once the namespace is created, you are ready to configure the service.

At this point you may get an error if you are not the primary administrator of the subscription. If this is the case, have a look at this link with known issues and workarounds: http://msdn.microsoft.com/en-us/library/windowsazure/gg429787.aspx. Either you have to ask your primary administrator to do the steps mentioned on this link, or you will have to do it yourself, of course, if you know his/her password ;).

(I will not describe every step in detail; have a look at this link for full details: http://msdn.microsoft.com/en-us/library/windowsazure/gg429779.aspx)

Once you are on the ACS Management portal, here are the things you need to do:

  • Add Identity Providers
  • Add a Relying Party Application
  • Create Rules

I hope that with my award winning, suspense filled story you were able to understand the concept behind the first two. You can find the details of these steps on the above mentioned link. Here I would like to write a bit about the Rules.

To understand rules, we first need to understand another very important concept, i.e. claims. If you consider my story again, my friend claimed that I am a decent enough man, and my neighbor trusted this claim, so he let me in. When you are authenticated by an identity provider, it also claims some things about you: you have this name, this email, and your designation is Manager. Now, different identity providers may use different names for these claims. Rules map these different types of claims into a standard language, so that the relying party deals with only one set of terminology. You can also define conditional mappings. For example, the rule in the snapshot says that if the user’s email address is ahmed.ovais@gmail.com then add a claim that he has the Admin role.

[screenshot: a rule mapping the emailaddress claim to an Admin role claim]

All this configuration can also be done using the Management API, which enables you to automate this process or even create a more intuitive user interface for your administrators.

Once these steps are done you are all set to create your application and use the ACS there. 

You can enable an ASP.NET MVC application to use ACS by following step 8 in the link above, i.e. “Step 8 – Configure Trust Between ACS and Your ASP.NET Relying Party Application”. Once done with it, you will be all set to test and execute your application. Here is how the log-in screen will look when you try to access your website. The options to log in depend on the identity providers selected during the configuration of ACS.

[screenshot: the ACS log-in page listing the configured identity providers]

So now your site has an authentication system without you having written a single line of code. You can at any time add or remove identity providers without needing to change anything in the application.

In the next post I will try to discuss how authorization can be done with ACS in an MVC application.