GitHub for web Designers

Unless you're a one-person web shop with no team to collaborate with, you've experienced the frustration that goes along with file sharing. No matter how hard you try, when multiple people work on a single project without a version control system in place, things get chaotic.

If you work with developers on the build-out and implementation of websites, the merge between front-end templates and back-end functionality can be a scary black hole.

Issues like overwrites, lost files, and the all-too-common “working off a previous version” phenomenon crop up constantly. And once back-end functionality has been put into your templates, you become terrified to touch them for fear of breaking something a developer spent a great deal of time getting to work.

In addition, even if you have a common repository that everyone is pulling from, odds are at least one member of your team forgot to grab the latest files and is about to blow things up with their latest additions.

Fear not: Git is here to save the day. I'll give you a quick review of Git, an excellent version control system.

Version Control – A Quick and Dirty Explanation

Version Control (also known as Revision Control or Source Control Management) is a great way to solve the file sharing problem.

The basic concept is this: there is one main repository for all of the project files. Team members check files out, make changes, and then check them back in (or commit them). The Version Control System (VCS) automatically notes who changed the files, when they were changed, and what about them was new or different.

It also asks you to write a little note about the change so everyone on the project knows at a glance what you did and why. Each file will then have a revision history so you can easily go back to a previous version of any file if something goes horribly wrong.

A good Version Control System also allows you to merge changes to the same file. If you and another person work locally on the same file at the same time, when you push these files back into the main repository the system will merge both sets of changes to create a new and fully up-to-date file. If any conflicts arise during the merge, it will highlight them for you.

You're probably using a very crude Version Control System right now to keep your files straight: if you're a designer, it probably means saving files as design_v1.psd, design_v2.psd, design_final.psd, design_final_FINAL.psd and so on.

This works well enough for PSDs and other large binary files, which don't really lend themselves to version control. But there's a much better way to do it when you are managing the source code for a website.

Benefits to using a Version Control System include:

  • Files cannot be overwritten
  • There is a common repository that holds all the latest files
  • People can work on the same files simultaneously without conflict
  • You can revert to an older version of a file, or the whole project, if needed
  • Your developers will be very happy

Even if you don’t work with a team, version control can be a lifesaver. Backing up files is one of the easiest things you can do to save yourself from losing work or having to start over.

The idea of a Version Control System seems daunting at first, especially since most of the documentation is written by and for developers. But once you make the move to incorporate it into your workflow, you'll find it's not nearly as hard as it looks.

Meet Git

OK, so now you can see why a Version Control System is a must-have for your web team. If you do a little Googling you'll see that there are quite a few options out there, including SVN, Mercurial, CVS, Bazaar and Git. Any one of them could be a good solution for your needs, and I encourage you to do some research before selecting a Version Control System. In this article I'm going to focus on Git, the one I use daily. It's a "rising star" that has gained popularity thanks to a strong Linux fanbase, GitHub and the Rails community.

Git is a free, open-source Version Control System originally created by Linus Torvalds for Linux kernel development. Linus is a very smart guy; when he sets out to solve a problem, he doesn't mess around. One of Git's big differentiators is that, unlike SVN and CVS, it is a distributed version control system. This means that every user has a complete copy of the repository data stored locally on their machine. What's so great about that? A few things:

  • Everything is local, so you can work offline
  • There is no single point of failure. It doesn't rely on one central server that could crash and burn, taking the only repository for your project with it.
  • Because it doesn't have to communicate with a central server constantly, processes run much faster

Git has a slightly tougher learning curve than SVN, but the trade-off is worth it. Just think how impressed your developer friends will be when you tell them you’re using the new hotness that is Git! In all seriousness, I don’t think the learning curve is all that steep. 

Installing Git isn't all fun and games, but there are plenty of resources online to get you through it. It will run on a PC, Mac or Linux box, although installation on Linux and OS X is considerably easier than on Windows.

You can download the latest version of Git here. Once you have the files, try this quick guide to get you started with the installation process. For Windows users, this step-by-step visual guide should be helpful. Mac users, try this guide found on GitHub.

Original Article

Google authentication: get email from ASP.NET Identity

So how do you obtain the external user's email (the one authenticated by Google), first name and surname in an ASP.NET website using ASP.NET Identity?

var email = externalIdentity.FindFirstValue(ClaimTypes.Email);
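
The first name and surname can normally be read the same way from the standard claim types; this is a minimal sketch, and it assumes Google actually returns these claims for your configuration:

var firstName = externalIdentity.FindFirstValue(ClaimTypes.GivenName);
var surname = externalIdentity.FindFirstValue(ClaimTypes.Surname);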

and here is the FindFirstValue extension method used above:

// Returns the value of the first claim of the given type, or null if the claim is not present.
public static string FindFirstValue(this ClaimsIdentity identity, string claimType)
{
    Claim claim = identity.FindFirst(claimType);
    if (claim != null)
    {
        return claim.Value;
    }
    return null;
}

For more information, take a look at Decoupling OWIN external authentication from ASP.NET Identity.

WCF Attributes and what they all mean

The online MSDN documentation is your first source for things like this.

All the MSDN documentation pages contain detailed explanations of all the settings on those contract attributes.
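
As a quick orientation, here is an illustrative sketch (my own example, not from the original post) of the contract attributes you will run into most often. IOrderService and OrderDto are made-up names; the attributes themselves are the ones the MSDN pages describe:

using System.Runtime.Serialization;
using System.ServiceModel;

// Marks an interface as a WCF service contract.
[ServiceContract]
public interface IOrderService
{
    // Marks a method as callable by clients of the service.
    [OperationContract]
    OrderDto GetOrder(int id);
}

// Marks a type as serializable by the DataContractSerializer.
[DataContract]
public class OrderDto
{
    // Only members marked [DataMember] are included in the serialized message.
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string CustomerName { get; set; }
}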

Active links on Bootstrap Navbar with ASP.NET MVC

If you have used Bootstrap with your ASP.NET MVC application, you might have faced some issues implementing active links in its Navbar component. We have to dynamically add a CSS class called active to the appropriate menu item in order to show it as selected in the Navbar. Here is an HtmlHelper extension which is capable of rendering both plain menu items and drop-down menu items. I've used ASP.NET MVC 5 with Razor and Bootstrap 3.

The first step is to create an HtmlHelper extension class.

using System.Web.Mvc;

public static class MenuLinkExtension
{
    public static MvcHtmlString MenuLink(this HtmlHelper htmlHelper, string itemText, string actionName, string controllerName, MvcHtmlString[] childElements = null)
    {
        var currentAction = htmlHelper.ViewContext.RouteData.GetRequiredString("action");
        var currentController = htmlHelper.ViewContext.RouteData.GetRequiredString("controller");
        string finalHtml;
        var linkBuilder = new TagBuilder("a");
        var liBuilder = new TagBuilder("li");

        if (childElements != null && childElements.Length > 0)
        {
            linkBuilder.MergeAttribute("href", "#");
            linkBuilder.AddCssClass("dropdown-toggle");
            linkBuilder.InnerHtml = itemText + " <b class=\"caret\"></b>";
            linkBuilder.MergeAttribute("data-toggle", "dropdown");
            var ulBuilder = new TagBuilder("ul");
            ulBuilder.AddCssClass("dropdown-menu");
            ulBuilder.MergeAttribute("role", "menu");
            foreach (var item in childElements)
            {
                ulBuilder.InnerHtml += item + "\n";
            }

            liBuilder.InnerHtml = linkBuilder + "\n" + ulBuilder;
            liBuilder.AddCssClass("dropdown");
            if (controllerName == currentController)
            {
                liBuilder.AddCssClass("active");
            }

            // The dropdown <ul> is already part of liBuilder.InnerHtml (set above),
            // so render the <li> on its own to avoid emitting the menu twice.
            finalHtml = liBuilder.ToString();
        }
        else
        {
            var urlHelper = new UrlHelper(htmlHelper.ViewContext.RequestContext, htmlHelper.RouteCollection);
            linkBuilder.MergeAttribute("href", urlHelper.Action(actionName, controllerName));
            linkBuilder.SetInnerText(itemText);
            liBuilder.InnerHtml = linkBuilder.ToString();
            if (controllerName == currentController && actionName == currentAction)
            {
                liBuilder.AddCssClass("active");
            }

            finalHtml = liBuilder.ToString();
        }

        return new MvcHtmlString(finalHtml);
    }
}

Once you have saved it, you can use it by just calling it like this:

<header class="navbar navbar-inverse navbar-fixed-top bs-docs-nav" role="banner">
    <div class="container">
        <div class="navbar-header">
            <a href="#" class="navbar-brand">Mvc Shop</a>
        </div>
        <nav role="navigation">
 
            <ul class="nav navbar-nav">
                @Html.MenuLink("Home", "Index", "Home")
                @Html.MenuLink("Dropdown", "Index", "Home2", new MvcHtmlString[]{
                                      @Html.MenuLink("Link1", "Action1", "Controller1"),
                                      @Html.MenuLink("Link2", "Action2", "Controller1"),
                                      @Html.MenuLink("Link3", "Action3", "Controller1"),
                                    })
                @Html.MenuLink("JavaScript", "Index", "Home1", new MvcHtmlString[]{
                                      @Html.MenuLink("Link1", "Index1", "Home1"),
                                      @Html.MenuLink("Link2", "Index2", "Home1"),
                                      @Html.MenuLink("Link3", "Index3", "Home1")
                                    })
 
            </ul>
        </nav>
    </div>
</header>

How to Rank your search results with multiple search terms using LINQ and EntityFramework

I have always used ranked search criteria; it's the only true way to get good results back from a data set. But I've been doing my ranking in C# code, which pulled all the data from the data source and then ranked the results. That is very poor for performance, as we should only return the results for the page size we require.

So the task at hand is to produce a LINQ statement that retrieves only the data you require.

Here is the solution:

var entity = new myEntities();

var searchTerm = "a b Ba";

var searchArray = searchTerm.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

var usersAll = entity.User.AsExpandable().Where(TC_User.ContainsInLastName(searchArray));

Console.WriteLine("Total Records: {0}", usersAll.Count());

var users = usersAll
    .Select(x => new {
        x.LastName,
        // For each term, (original length - length with the term removed) / term length
        // equals the number of occurrences of that term in LastName; summing over all terms gives the rank.
        Rank = searchArray.Sum(s => ((x.LastName.Length - x.LastName.Replace(s, "").Length) / s.Length)) });

var results = users.OrderByDescending(o => o.Rank)
    .Skip(0)
    .Take(20);

foreach (var user in results)
{
    Console.WriteLine("{0}, {1}", user.LastName, user.Rank);
}

Console.ReadLine();

You'll also need to add a new method to your User class to check that the search term is contained in the LastName:

public static Expression<Func<TC_User, bool>> ContainsInLastName(
                                                params string[] keywords)
{
    // Build an OR chain: LastName.Contains(keyword1) || LastName.Contains(keyword2) || ...
    var predicate = PredicateBuilder.False<TC_User>();
    foreach (string keyword in keywords)
    {
        // Copy to a local so the lambda does not capture the loop variable.
        string temp = keyword;
        predicate = predicate.Or(p => p.LastName.Contains(temp));
    }
    return predicate;
}

One thing that is required is LinqKit, available via NuGet, which provides PredicateBuilder and AsExpandable().

Watch out: the value coming back from Sum is a bigint, so if you project the results into a model, make sure the Rank property is a long.
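
If you do project into a named model instead of an anonymous type, a minimal sketch might look like this (UserSearchResult is a made-up name):

public class UserSearchResult
{
    public string LastName { get; set; }

    // The Sum over the search terms comes back as a SQL bigint, so use long rather than int.
    public long Rank { get; set; }
}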

Backup SQL Azure

There are a number of options for backing up SQL Azure, which can be found here:

Different ways to Backup your Windows Azure SQL Database

I like the Azure way, which is just exporting, importing and setting up a scheduled export.

Before You Begin

The SQL Database Import/Export Service requires you to have a Windows Azure storage account because BACPAC files are stored here. For more information about creating a storage account, see How to Create a Storage Account. You must also create a container inside Blob storage for your BACPAC files by using a tool such as the Windows Azure Management Tool (MMC) or Azure Storage Explorer.

If you want to import an on-premise SQL Server database to Windows Azure SQL Database, first export your on-premise database to a BACPAC file, and then upload the BACPAC file to your Blob storage container.

If you want to export a database from Windows Azure SQL Database to an on-premise SQL Server, first export the database to a BACPAC file, transfer the BACPAC file to your local server (computer), and then import the BACPAC file to your on-premise SQL Server.

Export a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the required storage account, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, your SQL Database server, and then your database that you want to export.

  6. On the ribbon, click Export. This opens the Export Database to Storage Account window.

  7. Verify that the Server Name and Database match the database that you want to export.

  8. In the Login and Password boxes, type the database credentials to be used for the export. Note that the account must be a server-level principal login - created by the provisioning process - or a member of the dbmanager database role.

  9. In New Blob URL box, specify the location where the exported BACPAC file is saved. Specify the location in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/exportedfile.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the export operation.

  10. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  11. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  12. Click Finish to start the export. You should see a message saying Your request was successfully submitted.

  13. After the export is complete, you should attempt to import your BACPAC file into a Windows Azure SQL Database server to verify that your exported package can be imported successfully.

Database export is an asynchronous operation. After starting the export, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Note: An export operation performs an individual bulk copy of the data from each table in the database, so it does not guarantee the transactional consistency of the data. You can use the Windows Azure SQL Database copy database feature to make a consistent copy of a database, and perform the export from the copy. For more information, see Copying Databases in Windows Azure SQL Database.

Configure Automated Exports

Use the Windows Azure SQL Database Automated Export feature to schedule export operations for a SQL database, and to specify the storage account, frequency of export operations, and to set the retention period to store export files.

To configure automated export operations for a SQL database, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click the SQL database name you want to configure, and then click the Configuration tab.

  3. On the Automated Export work space, click Automatic, and then specify settings for the following parameters:

    • Storage Account

    • Frequency

      • Specify the export interval in days.

      • Specify the start date and time. The time value on the configuration work space is UTC time, so note the offset between UTC time and the time zone where your database is located.

    • Credentials for the server that hosts your SQL database. Note that the account must be a server-level principal login - created by the provisioning process - or a member of the dbmanager database role.

  4. When you have finished setting the export settings, click Save.

  5. You can see the time stamp for the last export under Automated Export in the Quick Glance section of the SQL Database Dashboard.

To change the settings for an automated export, select the SQL database, click the Configuration tab, make your changes, and then click Save.

Create a New SQL Database from an Existing Export File

Use the Windows Azure SQL Database Create from Export feature to create a new SQL database from an existing export file.

To create a new SQL database from an existing export file, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click a SQL database name and then click the Configuration tab.

  3. On the Create from Export work space, click New Database, and then specify settings for the following parameters:

    • Bacpac file name - This is the source file for your new SQL database.

    • A name for the new SQL database.

    • Server – This is the host server for your new SQL database.

  4. To start the operation, click the checkmark at the bottom of the page.

Import and Export a Database Using API

You can also programmatically import and export databases by using an API. For more information, see the Import Export example on Codeplex.

Import a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container, and the BACPAC file to be imported is available in the container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the storage account that contains the BACPAC file to be imported, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, and then your SQL Database server where you want to import the database.

  6. On the ribbon, click Import. This opens the Import Database from Storage Account window.

  7. Verify that the Target Server field lists the SQL Database server where the database is to be created.

  8. In the Login and Password boxes, type the database credentials to be used for the import.

  9. In the New Database Name box, type the name for the new database created by the import. This name must be unique on the SQL Database server and must comply with the SQL Server rules for identifiers. For more information, see Identifiers.

  10. From the Edition list, select whether the database is a Web or Business edition database.

  11. From the Maximum Size list, select the required size of the database. The list only specifies the values supported by the Edition you have selected.

  12. In the BACPAC URL box, type the full path of the BACPAC file that you want to import. Specify the path in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/file.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the import operation.

  13. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  14. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  15. Click Finish to start the import.

Database import is an asynchronous operation. After starting the import, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Original Article

Vertical And Horizontal Privilege Escalation

What is vertical and horizontal privilege escalation, and does your application verify this kind of authorisation internally? I expect not. It is usually an oversight in the architecture and application planning that this gets missed out, and, more importantly, by the time it becomes an issue it is more often than not too late to implement any changes.

So what is Vertical And Horizontal Privilege Escalation?  

Privilege escalation is the act of exploiting a bug, design flaw or configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user. The result is that an application with more privileges than intended by the application developer or system administrator can perform unauthorized actions.

You may find that you already have alternative methods in place to protect against privilege escalation, a couple of which are sketched in code after this list:

  • Authorize attribute: used to authorize a user or role to access a resource in the application once they are authenticated. This gives you user- and role-level protection.
  • HTTP Referrer check: prevents a URL request that does not come from the site, for example one from an external link or a URL typed directly into the browser.
  • Anti-forgery token: a powerful option that prevents hidden-field manipulation when posting forms and protects against Cross-Site Request Forgery (CSRF).
  • HTML encoding: encode all user input to prevent Cross-Site Scripting (XSS) attacks.
  • Encryption of query string parameters: a good way to prevent manipulation of query string values.
  • URL tampering protection: provide a key in the URL that can only be used for that URL.
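
As a rough illustration (my own sketch, not part of the original list; the controller, action and ProfileModel names are made up), the Authorize attribute and anti-forgery token map to standard ASP.NET MVC attributes, and a horizontal check is simply an explicit ownership test:

using System.Web.Mvc;

public class AccountController : Controller
{
    // Vertical protection: only authenticated users in the Admin role can reach this action.
    [Authorize(Roles = "Admin")]
    public ActionResult ManageUsers()
    {
        return View();
    }

    // Rejects posts that do not carry the token emitted by @Html.AntiForgeryToken() in the form.
    [HttpPost]
    [Authorize]
    [ValidateAntiForgeryToken]
    public ActionResult UpdateProfile(ProfileModel model)
    {
        // Horizontal protection: make sure the record being updated belongs to the current user.
        if (model.UserName != User.Identity.Name)
        {
            return new HttpStatusCodeResult(403);
        }

        // ... save the changes ...
        return RedirectToAction("Index");
    }
}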

Even though you may not address privilege escalation directly, you can put a front line of defence in place. Privilege escalation means a user receives privileges they are not entitled to. These privileges can be used to change information, view private information, or install unwanted programs such as viruses. It usually occurs when a system has a bug that allows security to be bypassed or, alternatively, has flawed design assumptions about how it will be used. Privilege escalation occurs in two forms:

  • Vertical privilege escalation, also known as privilege elevation, where a lower privilege user or application accesses functions or content reserved for higher privilege users or applications 
  • Horizontal privilege escalation, where a normal user accesses functions or content reserved for other normal users

I hope this helps you understand why planning for and understanding these issues from the outset is so important.

Make Fake MVC Objects for testing

If you need some fake MVC objects for testing your controllers, here they are:

FakeControllerContext

using System.Collections.Specialized;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using System.Web.SessionState;

using MvcContrib.TestHelper.Fakes;

public class FakeControllerContext : ControllerContext
{
    public FakeControllerContext(IController controller)
        : this(controller, null, null, null, null, null, null)
    {
    }

    public FakeControllerContext(IController controller, HttpCookieCollection cookies)
        : this(controller, null, null, null, null, cookies, null)
    {
    }

    public FakeControllerContext(IController controller, SessionStateItemCollection sessionItems)
        : this(controller, null, null, null, null, null, sessionItems)
    {
    }
        
    public FakeControllerContext(IController controller, NameValueCollection formParams)
        : this(controller, null, null, formParams, null, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, NameValueCollection formParams, NameValueCollection queryStringParams)
        : this(controller, null, null, formParams, queryStringParams, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, string userName)
        : this(controller, userName, null, null, null, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, string userName, string[] roles)
        : this(controller, userName, roles, null, null, null, null)
    {
    }
        
    public FakeControllerContext
        (
            IController controller,
            string userName,
            string[] roles,
            NameValueCollection formParams,
            NameValueCollection queryStringParams,
            HttpCookieCollection cookies,
            SessionStateItemCollection sessionItems
        )
        : base(new FakeHttpContext(new FakePrincipal(new FakeIdentity(userName), roles), formParams, queryStringParams, cookies, sessionItems), new RouteData(), (Controller)controller)
    { }
}

FakeHttpContext

using System;
using System.Collections.Specialized;
using System.Security.Principal;
using System.Web;
using System.Web.SessionState;

public class FakeHttpContext : HttpContextBase
{
    private readonly FakePrincipal principal;
    private readonly NameValueCollection formParams;
    private readonly NameValueCollection queryStringParams;
    private readonly HttpCookieCollection cookies;
    private readonly SessionStateItemCollection sessionItems;

    public FakeHttpContext(FakePrincipal principal, NameValueCollection formParams, NameValueCollection queryStringParams, HttpCookieCollection cookies, SessionStateItemCollection sessionItems)
    {
        this.principal = principal;
        this.formParams = formParams;
        this.queryStringParams = queryStringParams;
        this.cookies = cookies;
        this.sessionItems = sessionItems;
    }

    public override HttpRequestBase Request
    {
        get
        {
            return new FakeHttpRequest(this.formParams, this.queryStringParams, this.cookies);
        }
    }

    public override IPrincipal User
    {
        get
        {
            return this.principal;
        }
        set
        {
            throw new NotImplementedException();
        }
    }

    public override HttpSessionStateBase Session
    {
        get
        {
            return this.sessionItems == null ? null : new FakeHttpSessionState(this.sessionItems);
        }
    }

}

FakeHttpRequest

using System.Collections.Specialized;
using System.Web;

public class FakeHttpRequest : HttpRequestBase
{
    private readonly NameValueCollection formParams;
    private readonly NameValueCollection queryStringParams;
    private readonly HttpCookieCollection cookies;

    public FakeHttpRequest(NameValueCollection formParams, NameValueCollection queryStringParams, HttpCookieCollection cookies)
    {
        this.formParams = formParams;
        this.queryStringParams = queryStringParams;
        this.cookies = cookies;
    }

    public override NameValueCollection Form
    {
        get
        {
            return this.formParams;
        }
    }

    public override NameValueCollection QueryString
    {
        get
        {
            return this.queryStringParams;
        }
    }

    public override HttpCookieCollection Cookies
    {
        get
        {
            return this.cookies;
        }
    }
}

FakeHttpSessionState

using System.Collections;
using System.Collections.Specialized;
using System.Web;
using System.Web.SessionState;

public class FakeHttpSessionState : HttpSessionStateBase
{
    private readonly SessionStateItemCollection sessionItems;

    public FakeHttpSessionState(SessionStateItemCollection sessionItems)
    {
        this.sessionItems = sessionItems;
    }

    public override void Add(string name, object value)
    {
        this.sessionItems[name] = value;
    }

    public override int Count
    {
        get
        {
            return this.sessionItems.Count;
        }
    }

    public override IEnumerator GetEnumerator()
    {
        return this.sessionItems.GetEnumerator();
    }

    public override NameObjectCollectionBase.KeysCollection Keys
    {
        get
        {
            return this.sessionItems.Keys;
        }
    }

    public override object this[string name]
    {
        get
        {
            return this.sessionItems[name];
        }
        set
        {
            this.sessionItems[name] = value;
        }
    }

    public override object this[int index]
    {
        get
        {
            return this.sessionItems[index];
        }
        set
        {
            this.sessionItems[index] = value;
        }
    }

    public override void Remove(string name)
    {
        this.sessionItems.Remove(name);
    }
}

FakePrincipal

using System.Linq;
using System.Security.Principal;

public class FakePrincipal : IPrincipal
{
    private readonly IIdentity identity;
    private readonly string[] roles;

    public FakePrincipal(IIdentity identity, string[] roles)
    {
        this.identity = identity;
        this.roles = roles;
    }

    public IIdentity Identity
    {
        get { return this.identity; }
    }

    public bool IsInRole(string role)
    {
        return this.roles != null && this.roles.Contains(role);
    }
}
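
A minimal usage sketch in a unit test might look like this (HomeController, its Index action and the MSTest-style asserts are assumptions about your own project and test framework):

// Arrange: give the controller a fake context with a named user in the Admin role.
var controller = new HomeController();
controller.ControllerContext = new FakeControllerContext(controller, "jane", new[] { "Admin" });

// Act
var result = controller.Index() as ViewResult;

// Assert: the fake principal behaves like a real one inside the controller.
Assert.IsNotNull(result);
Assert.IsTrue(controller.User.IsInRole("Admin"));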

The original article and source is Stephen Walther's Faking the Controller Context.

Disclaimer Page using MVC

If you ever need a disclaimer page in MVC, there are a few things you need to consider. The first is that you should not be able to get to a page without agreeing to the disclaimer, and if you have to log in again you need to be presented with the disclaimer page again.

I've opted to use an Attribute, which can be used on a Controller.

[AgreeToDisclaimer]
public class Page1Controller : Controller
{

This makes things very simple and as easy to use as the Authorize attribute.

The AgreeToDisclaimerAttribute implementation inherits from AuthorizeAttribute and overrides the AuthorizeCore and HandleUnauthorizedRequest methods:

public class AgreeToDisclaimerAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        if (httpContext == null)
            throw new ArgumentNullException("httpContext");

        if (!this.HasAgreedToDisclaimer(httpContext))
            return false;

        return true;
    }

    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        // The base implementation returns HTTP 401; redirect to the disclaimer page instead.
        filterContext.Result = new RedirectToRouteResult(
            new RouteValueDictionary
            {
                { "action", "index" },
                { "controller", "Disclaimer" }
            });
    }

    private bool HasAgreedToDisclaimer(HttpContextBase current)
    {
        return current != null && current.User != null ? (bool)current.User.Disclaimer() : false;
    }
}

You'll have noticed that I have extended the User object to include Disclaimer; this way, when the user's session expires or the user logs out, the current user is null and the disclaimer has to be accepted again.

As for extending the User object, this again is quite simple:

using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Security.Principal;

public static class IPrincipalExtension
{
    /// <summary>
    /// ObjectCache references the objects with WeakReferences for avoiding memory leaks.
    /// </summary>
    public static ConditionalWeakTable<object, Dictionary<string, object>> ObjectCache = new ConditionalWeakTable<object, Dictionary<string, object>>();

    public static void SetValue<T>(this T obj, string name, object value) where T : class
    {
        Dictionary<string, object> properties = ObjectCache.GetOrCreateValue(obj);

        if (properties.ContainsKey(name))
            properties[name] = value;
        else
            properties.Add(name, value);
    }

    public static T GetValue<T>(this object obj, object name)
    {
        Dictionary<string, object> properties;
        if (ObjectCache.TryGetValue(obj, out properties) && properties.ContainsKey(name.ToString()))
            return (T)properties[name.ToString()];
        else
            return default(T);
    }

    public static object GetValue(this object obj, bool name)
    {
        return obj.GetValue<object>(name);
    }

    public static bool Disclaimer(this IPrincipal principal)
    {
        return principal.GetValue<bool>("Disclaimer");
    }

    public static void SetDisclaimer(this IPrincipal principal, bool value)
    {
        principal.SetValue("Disclaimer", value);
    }
}
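
To complete the loop, something has to set the flag once the user agrees. A minimal sketch of the Disclaimer controller that the attribute redirects to might look like this (the view and the Agree action are assumptions; SetDisclaimer is the extension method above):

public class DisclaimerController : Controller
{
    public ActionResult Index()
    {
        // Show the disclaimer text with an "I agree" form posting to Agree().
        return View();
    }

    [HttpPost]
    public ActionResult Agree()
    {
        // Record that the current user has accepted the disclaimer,
        // then send them on to the rest of the site.
        User.SetDisclaimer(true);
        return RedirectToAction("Index", "Home");
    }
}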

What is the difference between running in Debug and Release mode?

Is the only difference between the Debug and Release configurations that Debug has the DEBUG constant defined and Release has the Optimize code option enabled?

  1. Are there performance differences between these two configurations? Is there any specific type of code that will cause big differences in performance here, or is it actually not that important?
  2. Is there any type of code that will run fine under the Debug configuration but might fail under the Release configuration, or can you be certain that code tested and working under Debug will also work fine under Release?

The C# compiler itself doesn't alter the emitted IL a great deal in the Release build. Notable is that it no longer emits the NOP opcodes that allow you to set a breakpoint on a curly brace. The big one is the optimizer that's built into the JIT compiler. It makes the following optimizations (a couple of them are sketched in code after this list):

  • Method inlining. A method call is replaced by the injecting the code of the method. This is a big one, it makes property accessors essentially free.
  • CPU register allocation. Local variables and method arguments can stay stored in a CPU register without ever (or less frequently) being stored back to the stack frame. This is a big one, notable for making debugging optimized code so difficult. And giving the volatile keyword a meaning.
  • Array index checking elimination. An important optimization when working with arrays (all .NET collection classes use an array internally). When the JIT compiler can verify that a loop never indexes an array out of bounds then it will eliminate the index check. Big one.
  • Loop unrolling. Short loops (up to 4) with small bodies are eliminated by repeating the code in the loop body. Avoids the branch misprediction penalty.
  • Dead code elimination. A statement like if (false) { /*...*/ } gets completely eliminated. This can occur due to constant folding and inlining. Other cases are where the JIT compiler can determine that the code has no possible side effect. This optimization is what makes profiling code so tricky.
  • Code hoisting. Code inside a loop that is not affected by the loop can be moved out of the loop.
  • Common sub-expression elimination. x = y + 4; z = y + 4; becomes z = x;
  • Constant folding. x = 1 + 2; becomes x = 3; This simple example is caught early by the compiler, but happens at JIT time when other optimizations make this possible.
  • Copy propagation. x = a; y = x; becomes y = a; This helps the register allocator make better decisions. It is a big deal in the x86 jitter because it has so few registers to work with. Having it select the right ones is critical to perf.
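
To make a couple of these concrete, here is a small illustrative sketch (my own example, not from the original answer) showing constant folding, dead code elimination and an inlining candidate:

static int Area(int width)
{
    const int height = 1 + 2;   // constant folding: the compiler emits 3

    if (false)                  // dead code elimination: this branch is removed entirely
    {
        width = width * 100;
    }

    return width * height;      // small body: a good candidate for method inlining at call sites
}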

These are very important optimizations that can make a great deal of difference when, for example, you profile the Debug build of your app and compare it to the Release build. That only really matters though when the code is on your critical path, the 5 to 10% of the code you write that actually affects the perf of your program. The JIT optimizer isn't smart enough to know up front what is critical, it can only apply the "turn it to eleven" dial for all the code.

The effective result of these optimizations on your program's execution time is often swamped by code that runs elsewhere. Reading a file, executing a database query, etc. can make the work the JIT optimizer does completely invisible. It doesn't mind though :)

The JIT optimizer is pretty reliable code, mostly because it has been put to the test millions of times. It is extremely rare to have problems in the Release build version of your program. It does happen however. Both the x64 and the x86 jitters have had problems with structs. The x86 jitter has trouble with floating point consistency, producing subtly different results when the intermediates of a floating point calculation are kept in a FPU register at 80-bit precision instead of getting truncated when flushed to memory.

Reference

About the author

You have probably figured out by now that my name is Bryan Avery (if not, please refer to your browser's address field).  Technology is more than a career to me - it is both a hobby and a passion.  I'm an ASP.NET/C# Developer at heart...
