How to Rank your search results with multiple search terms using LINQ and EntityFramework

I have always used ranked search criteria; it’s the only reliable way to get good results back from a data set. Until now, though, I had been doing the ranking in C# code, which pulled all the data from the data source before ranking the results. That is very poor for performance, as we should only return the rows for the page size we require.

So the task at hand is to produce a LINQ statement that retrieves only the data you require.

Here is the solution:

var entity = new myEntities();

var searchTerm = "a b Ba";

// Split the search phrase into individual terms, dropping empty entries
var searchArray = searchTerm.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

// AsExpandable (from LinqKit) lets EF translate the PredicateBuilder expression
var usersAll = entity.User.AsExpandable().Where(TC_User.ContainsInLastName(searchArray));

Console.WriteLine("Total Records: {0}", usersAll.Count());

// Rank each row by how many times the search terms occur in LastName:
// removing a term and comparing lengths gives occurrences * term length,
// so dividing by the term length yields the occurrence count
var users = usersAll
    .Select(x => new
    {
        x.LastName,
        Rank = searchArray.Sum(s => (x.LastName.Length - x.LastName.Replace(s, "").Length) / s.Length)
    });

// Order by rank and page in the database, not in memory
var results = users.OrderByDescending(o => o.Rank)
    .Skip(0)
    .Take(20);

foreach (var user in results)
{
    Console.WriteLine("{0}, {1}", user.LastName, user.Rank);
}

Console.ReadLine();

You’ll also need to add a new method to your User class to check that the search term is contained in the LastName:

public static Expression<Func<TC_User, bool>> ContainsInLastName(
                                                params string[] keywords)
{
    var predicate = PredicateBuilder.False<TC_User>();
    foreach (string keyword in keywords)
    {
        // Copy the loop variable so each lambda captures its own value
        string temp = keyword;
        predicate = predicate.Or(p => p.LastName.Contains(temp));
    }
    return predicate;
}

One thing that is required is LinqKit, available via NuGet, which provides PredicateBuilder and AsExpandable.

Watch out: the result coming back from Sum is a SQL bigint, so if you create a model to return the results, make sure the rank property is a long.
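
If you project into a named model rather than an anonymous type, a minimal sketch might look like this (RankedUser is a placeholder name, not part of the original code):

public class RankedUser
{
    public string LastName { get; set; }

    // The SQL Sum comes back as a bigint, so use long rather than int
    public long Rank { get; set; }
}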

Backup SQL Azure

There are a number of options for backing up SQL Azure, which can be found here:

Different ways to Backup your Windows Azure SQL Database

I like the Azure way, which is just exporting, importing and setting up a scheduled export.

Before You Begin

The SQL Database Import/Export Service requires you to have a Windows Azure storage account because BACPAC files are stored there. For more information about creating a storage account, see How to Create a Storage Account. You must also create a container inside Blob storage for your BACPAC files by using a tool such as the Windows Azure Management Tool (MMC) or Azure Storage Explorer.

If you want to import an on-premise SQL Server database to Windows Azure SQL Database, first export your on-premise database to a BACPAC file, and then upload the BACPAC file to your Blob storage container.

If you want to export a database from Windows Azure SQL Database to an on-premise SQL Server, first export the database to a BACPAC file, transfer the BACPAC file to your local server (computer), and then import the BACPAC file to your on-premise SQL Server.

Export a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the required storage account, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, your SQL Database server, and then your database that you want to export.

  6. On the ribbon, click Export. This opens the Export Database to Storage Account window.

  7. Verify that the Server Name and Database match the database that you want to export.

  8. In the Login and Password boxes, type the database credentials to be used for the export. Note that the account must be a server-level principal login – created by the provisioning process – or a member of the dbmanager database role.

  9. In New Blob URL box, specify the location where the exported BACPAC file is saved. Specify the location in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/exportedfile.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the export operation.

  10. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  11. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  12. Click Finish to start the export. You should see a message saying Your request was successfully submitted.

  13. After the export is complete, you should attempt to import your BACPAC file into a Windows Azure SQL Database server to verify that your exported package can be imported successfully.

Database export is an asynchronous operation. After starting the export, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Note
An export operation performs an individual bulk copy of the data from each table in the database so does not guarantee the transactional consistency of the data. You can use the Windows Azure SQL Database copy database feature to make a consistent copy of a database, and perform the export from the copy. For more information, see Copying Databases in Windows Azure SQL Database.

Configure Automated Exports

Use the Windows Azure SQL Database Automated Export feature to schedule export operations for a SQL database: specify the storage account, the frequency of export operations, and the retention period for stored export files.

To configure automated export operations for a SQL database, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click the SQL database name you want to configure, and then click the Configuration tab.

  3. On the Automated Export work space, click Automatic, and then specify settings for the following parameters:

    • Storage Account
    • Frequency
      • Specify the export interval in days.
      • Specify the start date and time. The time value on the configuration work space is UTC time, so note the offset between UTC time and the time zone where your database is located.
    • Credentials for the server that hosts your SQL database. Note that the account must be a server-level principal login – created by the provisioning process – or a member of the dbmanager database role.
  4. When you have finished setting the export settings, click Save.

  5. You can see the time stamp for the last export under Automated Export in the Quick Glance section of the SQL Database Dashboard.

To change the settings for an automated export, select the SQL database, click the Configuration tab, make your changes, and then click Save.

Create a New SQL Database from an Existing Export File

Use the Windows Azure SQL Database Create from Export feature to create a new SQL database from an existing export file.

To create a new SQL database from an existing export file, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click a SQL database name and then click the Configuration tab.

  3. On the Create from Export work space, click New Database, and then specify settings for the following parameters:

    • Bacpac file name – This is the source file for your new SQL database.
    • A name for the new SQL database.
    • Server – This is the host server for your new SQL database.

  4. To start the operation, click the checkmark at the bottom of the page.

Import and Export a Database Using API

You can also programmatically import and export databases by using an API. For more information, see the Import Export example on Codeplex.
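
If you would rather stay inside .NET than call the service endpoints directly, a hedged alternative is the DacFx client library (Microsoft.SqlServer.Dac, available via NuGet). This is my sketch, not the Codeplex sample, and the connection string and names are placeholders:

using Microsoft.SqlServer.Dac;

class BacpacExport
{
    static void Main()
    {
        // Connect to the source server (placeholder connection string)
        var services = new DacServices(
            "Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=user;Password=pass;");

        // Export the database to a local BACPAC file
        services.ExportBacpac(@"C:\backups\MyDb.bacpac", "MyDb");

        // Import the BACPAC as a new database
        services.ImportBacpac(BacPackage.Load(@"C:\backups\MyDb.bacpac"), "MyDbCopy");
    }
}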

Import a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container, and the BACPAC file to be imported is available in the container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the storage account that contains the BACPAC file to be imported, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, and then your SQL Database server where you want to import the database.

  6. On the ribbon, click Import. This opens the Import Database from Storage Account window.

  7. Verify that the Target Server field lists the SQL Database server where the database is to be created.

  8. In the Login and Password boxes, type the database credentials to be used for the import.

  9. In the New Database Name box, type the name for the new database created by the import. This name must be unique on the SQL Database server and must comply with the SQL Server rules for identifiers. For more information, see Identifiers.

  10. From the Edition list, select whether the database is a Web or Business edition database.

  11. From the Maximum Size list, select the required size of the database. The list only specifies the values supported by the Edition you have selected.

  12. In the BACPAC URL box, type the full path of the BACPAC file that you want to import. Specify the path in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/file.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the import operation.

  13. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  14. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  15. Click Finish to start the import.

Database import is an asynchronous operation. After starting the import, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Original Article

Vertical And Horizontal Privilege Escalation

What is vertical and horizontal privilege escalation? Does your application verify this kind of authorisation internally? I expect not; it is easily overlooked in architecture and application planning, and more importantly, by the time it becomes an issue it is more often than not too late to implement any changes.

So what is Vertical And Horizontal Privilege Escalation?  

Privilege escalation is the act of exploiting a bug, design flaw or configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user. The result is that an application with more privileges than intended by the application developer or system administrator can perform unauthorized actions.

You may find that you have alternative methods to protect against privilege escalation:

  • Authorize attribute: used to authorize a user or role to access any resource in the application after authentication. This gives user- and role-level protection (see the sketch after this list).
  • HTTP Referrer check: prevents a URL request that does not come from the site, but from an external link or a URL entered directly in the browser.
  • Anti-forgery token: a powerful option to prevent hidden-field manipulation when a form is posted, and prevents Cross-Site Request Forgery (CSRF).
  • HTML encoding: encode all user input to prevent Cross-Site Scripting (XSS) attacks.
  • Encryption of query string parameters: a good way to prevent manipulation of query string parameters.
  • URL activity tampering: provide a key in the URL that can only be used for that URL.
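
As a minimal sketch of the first and third options in ASP.NET MVC (the controller, action and role names here are hypothetical):

using System.Web.Mvc;

// Vertical protection: only users in the Admin role can reach these actions
[Authorize(Roles = "Admin")]
public class AdminController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    // Anti-forgery: the posted form must carry the token emitted by
    // @Html.AntiForgeryToken() in the view, which blocks CSRF
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Delete(int id)
    {
        // ... delete the record, then redirect
        return RedirectToAction("Index");
    }
}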

Even though you may not address privilege escalation directly, you can put a front line of defence in place. Privilege escalation means a user receives privileges they are not entitled to. These privileges can be used to change information, view private information, or install unwanted programs such as viruses. It usually occurs when a system has a bug that allows security to be bypassed or, alternatively, has flawed design assumptions about how it will be used. Privilege escalation occurs in two forms:

  • Vertical privilege escalation, also known as privilege elevation, where a lower privilege user or application accesses functions or content reserved for higher privilege users or applications 
  • Horizontal privilege escalation, where a normal user accesses functions or content reserved for other normal users

I hope this helps you understand that planning for, and understanding, these issues from the outset is so important.

Make Fake MVC Objects for testing

If you need to create some fake MVC objects for testing your controllers, then here you are:

FakeControllerContext

using System.Collections.Specialized;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using System.Web.SessionState;

using MvcContrib.TestHelper.Fakes;

public class FakeControllerContext : ControllerContext
{
    public FakeControllerContext(IController controller)
        : this(controller, null, null, null, null, null, null)
    {
    }

    public FakeControllerContext(IController controller, HttpCookieCollection cookies)
        : this(controller, null, null, null, null, cookies, null)
    {
    }

    public FakeControllerContext(IController controller, SessionStateItemCollection sessionItems)
        : this(controller, null, null, null, null, null, sessionItems)
    {
    }
        
    public FakeControllerContext(IController controller, NameValueCollection formParams)
        : this(controller, null, null, formParams, null, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, NameValueCollection formParams, NameValueCollection queryStringParams)
        : this(controller, null, null, formParams, queryStringParams, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, string userName)
        : this(controller, userName, null, null, null, null, null)
    {
    }
        
    public FakeControllerContext(IController controller, string userName, string[] roles)
        : this(controller, userName, roles, null, null, null, null)
    {
    }
        
    public FakeControllerContext
        (
            IController controller,
            string userName,
            string[] roles,
            NameValueCollection formParams,
            NameValueCollection queryStringParams,
            HttpCookieCollection cookies,
            SessionStateItemCollection sessionItems
        )
        : base(new FakeHttpContext(new FakePrincipal(new FakeIdentity(userName), roles), formParams, queryStringParams, cookies, sessionItems), new RouteData(), (Controller)controller)
    { }
}

FakeHttpContext

using System;
using System.Collections.Specialized;
using System.Security.Principal;
using System.Web;
using System.Web.SessionState;

public class FakeHttpContext : HttpContextBase
{
    private readonly FakePrincipal principal;
    private readonly NameValueCollection formParams;
    private readonly NameValueCollection queryStringParams;
    private readonly HttpCookieCollection cookies;
    private readonly SessionStateItemCollection sessionItems;

    public FakeHttpContext(FakePrincipal principal, NameValueCollection formParams, NameValueCollection queryStringParams, HttpCookieCollection cookies, SessionStateItemCollection sessionItems)
    {
        this.principal = principal;
        this.formParams = formParams;
        this.queryStringParams = queryStringParams;
        this.cookies = cookies;
        this.sessionItems = sessionItems;
    }

    public override HttpRequestBase Request
    {
        get
        {
            return new FakeHttpRequest(this.formParams, this.queryStringParams, this.cookies);
        }
    }

    public override IPrincipal User
    {
        get
        {
            return this.principal;
        }
        set
        {
            throw new NotImplementedException();
        }
    }

    public override HttpSessionStateBase Session
    {
        get
        {
            return this.sessionItems == null ? null : new FakeHttpSessionState(this.sessionItems);
        }
    }

}

FakeHttpRequest

using System.Collections.Specialized;
using System.Web;

public class FakeHttpRequest : HttpRequestBase
{
    private readonly NameValueCollection formParams;
    private readonly NameValueCollection queryStringParams;
    private readonly HttpCookieCollection cookies;

    public FakeHttpRequest(NameValueCollection formParams, NameValueCollection queryStringParams, HttpCookieCollection cookies)
    {
        this.formParams = formParams;
        this.queryStringParams = queryStringParams;
        this.cookies = cookies;
    }

    public override NameValueCollection Form
    {
        get
        {
            return this.formParams;
        }
    }

    public override NameValueCollection QueryString
    {
        get
        {
            return this.queryStringParams;
        }
    }

    public override HttpCookieCollection Cookies
    {
        get
        {
            return this.cookies;
        }
    }
}

FakeHttpSessionState

using System.Collections;
using System.Collections.Specialized;
using System.Web;
using System.Web.SessionState;

public class FakeHttpSessionState : HttpSessionStateBase
{
    private readonly SessionStateItemCollection sessionItems;

    public FakeHttpSessionState(SessionStateItemCollection sessionItems)
    {
        this.sessionItems = sessionItems;
    }

    public override void Add(string name, object value)
    {
        this.sessionItems[name] = value;
    }

    public override int Count
    {
        get
        {
            return this.sessionItems.Count;
        }
    }

    public override IEnumerator GetEnumerator()
    {
        return this.sessionItems.GetEnumerator();
    }

    public override NameObjectCollectionBase.KeysCollection Keys
    {
        get
        {
            return this.sessionItems.Keys;
        }
    }

    public override object this[string name]
    {
        get
        {
            return this.sessionItems[name];
        }
        set
        {
            this.sessionItems[name] = value;
        }
    }

    public override object this[int index]
    {
        get
        {
            return this.sessionItems[index];
        }
        set
        {
            this.sessionItems[index] = value;
        }
    }

    public override void Remove(string name)
    {
        this.sessionItems.Remove(name);
    }
}

FakePrincipal

using System.Linq;
using System.Security.Principal;

public class FakePrincipal : IPrincipal
{
    private readonly IIdentity identity;
    private readonly string[] roles;

    public FakePrincipal(IIdentity identity, string[] roles)
    {
        this.identity = identity;
        this.roles = roles;
    }

    public IIdentity Identity
    {
        get { return this.identity; }
    }

    public bool IsInRole(string role)
    {
        return this.roles != null && this.roles.Contains(role);
    }
}
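
As a usage sketch, a test can attach the fake context to a controller like this (HomeController and the MSTest attributes are my assumptions, not part of the original source):

using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class HomeControllerTests
{
    [TestMethod]
    public void Index_WithFakeUserInRole_ReturnsView()
    {
        var controller = new HomeController();

        // Attach a fake context carrying a named user and a role
        controller.ControllerContext =
            new FakeControllerContext(controller, "jane", new[] { "Admin" });

        var result = controller.Index() as ViewResult;

        Assert.IsNotNull(result);
        Assert.IsTrue(controller.User.IsInRole("Admin"));
    }
}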

The original article and source are from Stephen Walther: Faking the Controller Context

Disclaimer Page using MVC

If you ever need a disclaimer page in MVC, there are a few things to consider: you should not be able to reach a page without agreeing to the disclaimer, and if you have to log in again you must be presented with the disclaimer page again.

I’ve opted to use an Attribute, which can be used on a Controller.

[AgreeToDisclaimer]
public class Page1Controller : Controller
{

This makes things very simple and as easy to use as the Authorize attribute.

The implementation of AgreeToDisclaimerAttribute inherits from AuthorizeAttribute and overrides the AuthorizeCore and HandleUnauthorizedRequest methods:

public class AgreeToDisclaimerAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        if (httpContext == null)
            throw new ArgumentNullException("httpContext");

        if (!this.HasAgreedToDisclaimer(httpContext))
            return false;

        return true;
    }

    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        // The base implementation returns HTTP 401; redirect to the disclaimer page instead
        filterContext.Result = new RedirectToRouteResult(
            new RouteValueDictionary
            {
                { "action", "index" },
                { "controller", "Disclaimer" }
            });
    }

    private bool HasAgreedToDisclaimer(HttpContextBase current)
    {
        // Disclaimer() already returns a bool, so no cast is needed
        return current != null && current.User != null && current.User.Disclaimer();
    }
}

You’ll have noticed that I have extended the User object to include Disclaimer; this way, when the user session expires or the user logs out, the current user is null.

As for extending the User object, this again is quite simple:

using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Security.Principal;

public static class IPrincipalExtension
{
    /// <summary>
    /// ObjectCache holds the extra values against weak references to avoid memory leaks.
    /// </summary>
    public static ConditionalWeakTable<object, Dictionary<string, object>> ObjectCache = new ConditionalWeakTable<object, Dictionary<string, object>>();

    public static void SetValue<T>(this T obj, string name, object value) where T : class
    {
        Dictionary<string, object> properties = ObjectCache.GetOrCreateValue(obj);

        if (properties.ContainsKey(name))
            properties[name] = value;
        else
            properties.Add(name, value);
    }

    public static T GetValue<T>(this object obj, object name)
    {
        Dictionary<string, object> properties;
        if (ObjectCache.TryGetValue(obj, out properties) && properties.ContainsKey(name.ToString()))
            return (T)properties[name.ToString()];
        else
            return default(T);
    }

    public static object GetValue(this object obj, string name)
    {
        return obj.GetValue<object>(name);
    }

    public static bool Disclaimer(this IPrincipal principal)
    {
        return principal.GetValue<bool>("Disclaimer");
    }

    public static void SetDisclaimer(this IPrincipal principal, bool value)
    {
        principal.SetValue("Disclaimer", value);
    }
}
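
A minimal usage sketch, assuming a hypothetical DisclaimerController whose Agree action records the acceptance:

public class DisclaimerController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    [HttpPost]
    public ActionResult Agree()
    {
        // Record the agreement against the current principal; it is lost
        // when the session expires or the user logs out, as intended
        HttpContext.User.SetDisclaimer(true);
        return RedirectToAction("Index", "Page1");
    }
}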

What is the difference between running in Debug and Release mode?

Is the only difference between the Debug and Release configurations that Debug has the DEBUG constant defined, and that Release produces optimized code?

  1. Are there performance differences between these two configurations? Are there any specific types of code that will cause big differences in performance here, or is it actually not that important?
  2. Are there any types of code that run fine under the Debug configuration but might fail under the Release configuration? Or can you be certain that code tested and working under Debug will also work fine under Release?

The C# compiler itself doesn’t alter the emitted IL a great deal in the Release build. Notable is that it no longer emits the NOP opcodes that allow you to set a breakpoint on a curly brace. The big one is the optimizer that’s built into the JIT compiler. It does make the following optimizations:

  • Method inlining. A method call is replaced by injecting the code of the method. This is a big one; it makes property accessors essentially free.
  • CPU register allocation. Local variables and method arguments can stay stored in a CPU register without ever (or less frequently) being stored back to the stack frame. This is a big one, notable for making debugging optimized code so difficult. And giving the volatile keyword a meaning.
  • Array index checking elimination. An important optimization when working with arrays (all .NET collection classes use an array internally). When the JIT compiler can verify that a loop never indexes an array out of bounds, it will eliminate the index check. Big one.
  • Loop unrolling. Short loops (up to 4 iterations) with small bodies are unrolled by repeating the code in the loop body. This avoids the branch misprediction penalty.
  • Dead code elimination. A statement like if (false) { /*...*/ } gets completely eliminated (see the sketch after this list). This can occur due to constant folding and inlining. Another case is where the JIT compiler can determine that the code has no possible side effect. This optimization is what makes profiling code so tricky.
  • Code hoisting. Code inside a loop that is not affected by the loop can be moved out of the loop.
  • Common sub-expression elimination. x = y + 4; z = y + 4; becomes z = x;
  • Constant folding. x = 1 + 2; becomes x = 3; This simple example is caught early by the compiler, but it also happens at JIT time when other optimizations make it possible.
  • Copy propagation. x = a; y = x; becomes y = a; This helps the register allocator make better decisions. It is a big deal in the x86 jitter because it has so few registers to work with. Having it select the right ones is critical to perf.
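
As a small illustration of the compile-time side (my sketch, not from the original answer): the DEBUG constant and the Conditional attribute are the pieces you control directly, while the eliminations above happen inside the JIT.

using System;
using System.Diagnostics;

class BuildModeDemo
{
    // The C# compiler strips calls to this method from Release builds,
    // because the DEBUG symbol is only defined in the Debug configuration
    [Conditional("DEBUG")]
    static void Trace(string message)
    {
        Console.WriteLine(message);
    }

    static void Main()
    {
        Trace("only runs in Debug builds");

        if (false) { Console.WriteLine("dead code"); } // eliminated entirely

#if DEBUG
        Console.WriteLine("DEBUG constant defined");
#else
        Console.WriteLine("Release build: optimized code");
#endif
    }
}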

These are very important optimizations that can make a great deal of difference when, for example, you profile the Debug build of your app and compare it to the Release build. That only really matters though when the code is on your critical path, the 5 to 10% of the code you write that actually affects the perf of your program. The JIT optimizer isn’t smart enough to know up front what is critical, it can only apply the “turn it to eleven” dial for all the code.

The effective result of these optimizations on your program’s execution time is often masked by code that runs elsewhere: reading a file, executing a database query, and so on, making the work the JIT optimizer does completely invisible. It doesn’t mind though 🙂

The JIT optimizer is pretty reliable code, mostly because it has been put to the test millions of times. It is extremely rare to have problems in the Release build version of your program. It does happen however. Both the x64 and the x86 jitters have had problems with structs. The x86 jitter has trouble with floating point consistency, producing subtly different results when the intermediates of a floating point calculation are kept in a FPU register at 80-bit precision instead of getting truncated when flushed to memory.

Reference

Unit Testing a Private Method

With Visual Studio continually changing the way you create unit tests, I thought I’d document how to test a private method the easy way.

Here is a simple HomeController, and we want to test in a unit test that the GetPageSize method returns 20.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var pageSize = GetPageSize();
        return View(pageSize);
    }

    private int GetPageSize()
    {
        return 20;
    }
}

As GetPageSize is a private method, we’ll need to use the PrivateObject class to get at its result. This is quite easy: from the unit test project, make sure you have a reference to your MVC project, then wrap the HomeController and invoke the method you need the result back from:

[TestMethod]
public void CheckPageSizeIs20()
{
    PrivateObject privateObject = new PrivateObject(typeof(HomeController));
    int pageSize = (int)privateObject.Invoke("GetPageSize");

    Assert.AreEqual(20, pageSize);
}

If you need to pass parameters to the private method, you can pass in an object array:

var actual = (IEnumerable<ActivityData>)privateObject.Invoke("GetAllUnassignedJobs", new object[] { filterData, uniqueJobs });

Reference: How to test a class member that is not public

Thanks to Karthik for googling the answer

Concurrency

Heisenbugs

Concurrency is hard to test; you usually have to test over something like a five-week period, and when you do find an issue it is hard to reproduce.

With lots of threads, the behaviour of the program changes as state and interleaving change, so concurrency issues become hard to reproduce. This is hope-based testing: running on n machines and hoping to find a concurrency bug. It is costly, both in the capacity needed to run all the processes on multiple machines and in the resources involved in managing the logistics of running all the tests; and, more importantly, if you do find a concurrency issue, can you reproduce the bug?

So what we need is to be able to control the interleaving of the program: to run it again and again in a loop, record the interleaving, and then, if we find a bug, have the ability to repeat that test and its interleaving to reproduce the bug.
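
To see why interleavings matter, here is a minimal sketch (my example, not from the talk) of a data race whose outcome depends entirely on how two threads interleave:

using System;
using System.Threading.Tasks;

class RaceDemo
{
    static int counter;

    static void Main()
    {
        // counter++ is a read-modify-write, so when two tasks interleave
        // some increments are lost; the final value varies from run to run
        var t1 = Task.Run(() => { for (int i = 0; i < 100000; i++) counter++; });
        var t2 = Task.Run(() => { for (int i = 0; i < 100000; i++) counter++; });
        Task.WaitAll(t1, t2);

        Console.WriteLine(counter); // rarely the expected 200000
    }
}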


CHESS is a tool for systematic and disciplined concurrency testing. Given a concurrent test, CHESS systematically enumerates the possible thread schedules to find hard-to-find concurrency errors, including assertion violations, deadlocks, data-races, and atomicity violations.


http://msdn.microsoft.com/en-us/library/ms182480%28v=vs.90%29/

Tools And Techniques To Identify Concurrency Issues

http://msdn.microsoft.com/en-us/magazine/cc546569/

hanselminutes_0136.pdf (94.14 kb)