Difference between == and === in JavaScript

The three equal signs mean “equality without type coercion”: with the triple equals, the values must be equal in type as well as in value.

0 == false   // true
0 === false  // false, because they are of a different type
1 == "1"     // true, automatic type conversion for value only
1 === "1"    // false, because they are of a different type
null == undefined // true
null === undefined // false
'0' == false // true
'0' === false // false

Creating API Help Pages

Install ASP.NET and Web Tools 2012.2 Update. This update integrates help pages into the Web API project template.

Next, create a new ASP.NET MVC 4 project and select the Web API project template. The project template creates an example API controller named ValuesController. The template also creates the API help pages. All of the code files for the help page are placed in the Areas folder of the project.

When you run the application, the home page contains a link to the API help page. From the home page, the relative path is /Help.

This link brings you to an API summary page.

The MVC view for this page is defined in Areas/HelpPage/Views/Help/Index.cshtml. You can edit this page to modify the layout, introduction, title, styles, and so forth.

The main part of the page is a table of APIs, grouped by controller. The table entries are generated dynamically, using the IApiExplorer interface. (I’ll talk more about this interface later.) If you add a new API controller, the table is automatically updated at run time.

The “API” column lists the HTTP method and relative URI. The “Description” column contains documentation for each API. Initially, the documentation is just placeholder text. In the next section, I’ll show you how to add documentation from XML comments.

Each API has a link to a page with more detailed information, including example request and response bodies.

Adding Help Pages to an Existing Project

You can add help pages to an existing Web API project by using NuGet Package Manager. This option is useful if you start from a different project template than the “Web API” template.

From the Tools menu, select Library Package Manager, and then select Package Manager Console. In the Package Manager Console window, type one of the following commands:

For a C# application: Install-Package Microsoft.AspNet.WebApi.HelpPage

For a Visual Basic application: Install-Package Microsoft.AspNet.WebApi.HelpPage.VB

There are two packages, one for C# and one for Visual Basic. Make sure to use the one that matches your project.

This command installs the necessary assemblies and adds the MVC views for the help pages (located in the Areas/HelpPage folder). You’ll need to manually add a link to the Help page. The URI is /Help. To create a link in a Razor view, add the following:

@Html.ActionLink("API", "Index", "Help", new { area = "" }, null)

Also, make sure to register areas. In the Global.asax file, add the following code to the Application_Start method, if it is not there already:

protected void Application_Start()
{
    // Add this code, if not present.
    AreaRegistration.RegisterAllAreas();

    // ...
}

Adding API Documentation

By default, the help pages have placeholder strings for documentation. You can use XML documentation comments to create the documentation. To enable this feature, open the file Areas/HelpPage/App_Start/HelpPageConfig.cs and uncomment the following line:

config.SetDocumentationProvider(new XmlDocumentationProvider(
    HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));

Now enable XML documentation. In Solution Explorer, right-click the project and select Properties. Select the Build page.

Under Output, check XML documentation file. In the edit box, type “App_Data/XmlDocument.xml”.

Next, open the code for the ValuesController API controller, which is defined in /Controllers/ValuesController.cs. Add some documentation comments to the controller methods. For example:

/// <summary>
/// Gets some very important data from the server.
/// </summary>
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}

/// <summary>
/// Looks up some data by ID.
/// </summary>
/// <param name="id">The ID of the data.</param>
public string Get(int id)
{
    return "value";
}

Tip: If you position the caret on the line above the method and type three forward slashes, Visual Studio automatically inserts the XML elements. Then you can fill in the blanks.

Now build and run the application again, and navigate to the help pages. The documentation strings should appear in the API table.

The help page reads the strings from the XML file at run time. (When you deploy the application, make sure to deploy the XML file.)

Under the Hood

The help pages are built on top of the ApiExplorer class, which is part of the Web API framework. The ApiExplorer class provides the raw material for creating a help page. For each API, ApiExplorer contains an ApiDescription that describes the API. For this purpose, an “API” is defined as the combination of HTTP method and relative URI. For example, here are some distinct APIs:

  • GET /api/Products
  • GET /api/Products/{id}
  • POST /api/Products

If a controller action supports multiple HTTP methods, the ApiExplorer treats each method as a distinct API.

To hide an API from the ApiExplorer, add the ApiExplorerSettings attribute to the action and set IgnoreApi to true.

[ApiExplorerSettings(IgnoreApi=true)]
public HttpResponseMessage Get(int id) { }

You can also add this attribute to the controller, to exclude the entire controller.
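
For example (InternalController is an illustrative name):

[ApiExplorerSettings(IgnoreApi=true)]
public class InternalController : ApiController
{
    // None of the actions on this controller will appear in the help page.
}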

The ApiExplorer class gets documentation strings from the IDocumentationProvider interface. As you saw earlier, the Help Pages library provides an IDocumentationProvider that gets documentation from XML documentation strings. The code is located in /Areas/HelpPage/XmlDocumentationProvider.cs. You can get documentation from another source by writing your own IDocumentationProvider. To wire it up, call the SetDocumentationProvider extension method, defined in HelpPageConfigurationExtensions.

ApiExplorer automatically calls into the IDocumentationProvider interface to get documentation strings for each API. It stores them in the Documentation property of the ApiDescription and ApiParameterDescription objects.
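
For a rough idea of the shape involved, here is a minimal sketch of a custom provider, written against the Web API 1 version of the interface (later versions add controller- and response-level overloads); the returned strings are placeholders:

using System.Web.Http.Controllers;
using System.Web.Http.Description;

// A hypothetical provider that returns canned text instead of XML comments.
public class StaticDocumentationProvider : IDocumentationProvider
{
    public string GetDocumentation(HttpActionDescriptor actionDescriptor)
    {
        return "Documentation for action " + actionDescriptor.ActionName;
    }

    public string GetDocumentation(HttpParameterDescriptor parameterDescriptor)
    {
        return "Documentation for parameter " + parameterDescriptor.ParameterName;
    }
}

// Wired up in HelpPageConfig.Register:
// config.SetDocumentationProvider(new StaticDocumentationProvider());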

Original article: https://docs.microsoft.com/en-us/aspnet/web-api/overview/getting-started-with-aspnet-web-api/creating-api-help-pages

DateTimeOffset for SOAP

If you are serious about using SOAP, it won’t be long before you find out that getting a full ISO date into the XML does not work out of the box with C# and SOAP, as DateTimeOffset is not supported by Microsoft’s XML serializer.

Here is how you go about ensuring that you deliver an ISO DateTime formatted field in SOAP.

First, you’ll need to create a struct to hold the ISO 8601 date-time offset. We will use the IXmlSerializable interface, which provides custom formatting for XML serialisation and deserialisation.

using System;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public struct Iso8601SerializableDateTimeOffset : IXmlSerializable
{
    public DateTimeOffset value;

    public Iso8601SerializableDateTimeOffset(DateTimeOffset value)
    {
        this.value = value;
    }

    public static implicit operator Iso8601SerializableDateTimeOffset(DateTimeOffset value)
    {
        return new Iso8601SerializableDateTimeOffset(value);
    }

    public static implicit operator DateTimeOffset(Iso8601SerializableDateTimeOffset instance)
    {
        return instance.value;
    }

    public static bool operator ==(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value == b.value;
    }

    public static bool operator !=(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value != b.value;
    }

    public static bool operator <(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value < b.value;
    }

    public static bool operator >(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value > b.value;
    }

    public override bool Equals(object o)
    {
        if (o is Iso8601SerializableDateTimeOffset)
            return value.Equals(((Iso8601SerializableDateTimeOffset)o).value);
        else if (o is DateTimeOffset)
            return value.Equals((DateTimeOffset)o);
        else
            return false;
    }

    public override int GetHashCode()
    {
        return value.GetHashCode();
    }

    public XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(XmlReader reader)
    {
        var text = reader.ReadElementString();
        value = DateTimeOffset.ParseExact(text, format: "o", formatProvider: null);
    }

    public override string ToString()
    {
        return value.ToString(format: "o");
    }

    public string ToString(string format)
    {
        return value.ToString(format);
    }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteString(value.ToString(format: "o"));
    }
}

We also need to cater for JSON date-time offsets, so we will write a converter:

using System;
using Newtonsoft.Json;

public class UtcDateTimeOffsetConverter : Newtonsoft.Json.Converters.IsoDateTimeConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        if (value is Iso8601SerializableDateTimeOffset)
        {
            var date = (Iso8601SerializableDateTimeOffset)value;
            value = date.value;
        }
        base.WriteJson(writer, value, serializer);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        object value = base.ReadJson(reader, objectType, existingValue, serializer);
        if (value is Iso8601SerializableDateTimeOffset)
        {
            var date = (Iso8601SerializableDateTimeOffset)value;
            value = date.value;
        }
        return value;
    }
}

Next, we need to use it in an object model, with a slight twist: you cannot use the Iso8601SerializableDateTimeOffset struct directly, so we wrap the value in a string property:

using System;
using System.Xml.Serialization;
using Newtonsoft.Json;

public class Foo
{
    public Guid Id { get; set; }

    [JsonConverter(typeof(UtcDateTimeOffsetConverter))]
    [XmlElement("AcquireDate")]
    public string acquireDateForXml
    {
        get { return AcquireDate.ToString(); }
        set { AcquireDate = DateTimeOffset.Parse(value); }
    }

    [XmlIgnore]
    public Iso8601SerializableDateTimeOffset? AcquireDate;
}
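
To check the round trip, you can run a Foo instance through XmlSerializer; this is just a usage sketch, and the AcquireDate element name comes from the XmlElement attribute above:

using System;
using System.IO;
using System.Xml.Serialization;

class Program
{
    static void Main()
    {
        var foo = new Foo
        {
            Id = Guid.NewGuid(),
            AcquireDate = new DateTimeOffset(2018, 1, 15, 9, 30, 0, TimeSpan.FromHours(1))
        };

        var serializer = new XmlSerializer(typeof(Foo));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, foo);
            // The AcquireDate element now holds the full ISO 8601 value,
            // e.g. 2018-01-15T09:30:00.0000000+01:00
            Console.WriteLine(writer.ToString());
        }
    }
}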

That is it, job done.

https://github.com/BryanAvery/DateTimeOffsetSOAP


Bing Maps in an MVC Application

Here is a simple example of how to use Bing Maps in an MVC application.

I am using Bing Maps V8, which is the current version from Microsoft.

Also, it is worth noting that document.ready will fire long before the map script loads, as it loads asynchronously. Sometimes document.ready will fire before the page is loaded, which means the map div might not even be available. To overcome this, we use the callback parameter of the map script URL, for example:

http://www.bing.com/api/maps/mapcontrol?callback=LoadMap

Demo project: BingMapsDemo

The Microsoft documentation can be found here:

https://www.bing.com/api/maps/sdkrelease/mapcontrol/isdk/Overview#JS

To set up a key, you can do this quite easily at the Bing Maps Portal website:

https://www.bingmapsportal.com

Caching to improve the user experience

(Added LazyCache into the testing)

One of the essential factors in building high-performance, scalable web applications is the ability to store items, whether data objects, pages, or parts of a page, in memory the first time they are requested. You can store these items on the web server, in other software in the request stream such as a proxy server, or at the browser. This allows you to avoid recreating information that satisfied a previous request. Known as caching, this technique lets you store page output or application data across HTTP requests and reuse it. When the server does not have to recreate information, you save time and resources, and throughput and scalability increase.

It is possible to obtain significant performance improvements in ASP.NET applications by caching frequently requested objects and data in either the Application or Cache classes. While the Cache class certainly offers far more flexibility and control, it only appears to provide a marginal advantage in throughput over the Application class for caching. It would be challenging to develop a testing scheme that could accurately measure the potential benefits of the Cache class’s built-in management of lesser-used objects through the scavenging process, a feature the Application class does not offer. The decision is down to the developer and should be based on the needs and usage patterns of the project.

In this article, I’ll be looking at application caching and which options are the most effective and scalable. If you would like to know more about web caching, take a look at the Microsoft Caching Architecture Guide for .NET Framework Applications.

So we all know that caching increases performance. What I am not getting into here is when it should or shouldn’t be used; I’m more interested in the performance and scalability of the caching itself.

For the performance testing I am going to use a number of different caching methods.

I’ve tried to provide some different options that are available for caching; if you know of any others, please let me know and I’ll add them to this post.

There are two types of tests. The first is a small test which waits for 30 milliseconds and then returns the current date and time:

Thread.Sleep(30);
return System.DateTime.UtcNow.ToString(CultureInfo.InvariantCulture);

The second test generates a much larger object, of around 267 MB:

public IEnumerable<Block> GetData()
{
    // should eat around 267 Mb of RAM.
    var blocks = new Block[512 * 512 * 1];

    for (var i = 0; i < (512 * 512) * 1; i++)
    {
        blocks[i] = new Block();
    }
    return blocks;
}
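
The Block type itself isn’t defined in this post; a hypothetical definition that matches the quoted size would hold about 1 KB of data, since 512 * 512 instances at ~1 KB each works out to roughly 267 MB:

// Hypothetical Block definition: 512 * 512 instances at ~1 KB each.
public class Block
{
    public byte[] Data = new byte[1024];
}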

Both are straightforward tests; by no means are they an accurate representation of real-world methods, but they do show how performance is affected by the different caching options.

The idea is to iterate each test 1,000 times and record how long it takes. What I don’t do here is measure how much memory is being used (if you know of an easy way to include this, please let me know).

I won’t go into great detail here on the results of the tests, as I think you’ll find them self-explanatory in the test application, attached at the bottom of this post.

A few things to note about the testing and results: you should run the tests in parallel, as this is much closer to a real-world multi-threaded environment.

Also, careful consideration should be given to locking in the cache module: you don’t want the same cache-key function running while another caller is processing the same function; the second caller should wait until the first has finished and stored the result in the cache.
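
One way to get that behaviour (and roughly the trick LazyCache uses internally) is to cache a Lazy<T> rather than the value itself, so only the first caller for a given key runs the factory. A minimal sketch using System.Runtime.Caching.MemoryCache:

using System;
using System.Runtime.Caching;

public static class LockedCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Concurrent callers for the same key share one Lazy<T>, so the
    // expensive factory only ever runs once per key per expiry window.
    public static T GetOrAdd<T>(string key, Func<T> factory, DateTimeOffset absoluteExpiration)
    {
        var newValue = new Lazy<T>(factory);
        var existing = (Lazy<T>)Cache.AddOrGetExisting(key, newValue, absoluteExpiration);
        return (existing ?? newValue).Value;
    }
}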

Here is the sample code, with different sample tests showing how useful each option is: Cache

SignalR – the right way

SignalR has been around for quite some time now, since 2014, so why aren’t more people using it? First, what is SignalR?

ASP.NET SignalR is a library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets and falls back to other compatible techniques for older browsers. SignalR includes APIs for connection management (for instance, connect and disconnect events), grouping connections, and authorisation.

For further information check https://www.asp.net/signalr

Could it be because people just don’t get it, or really understand what it can do? Or perhaps they hit issues without realising they don’t understand the structure? Whatever the reason people are not using SignalR, I’m going to provide a useful sample application to get you off on the right footing.

There are a lot of different samples and information on how to use SignalR, but I’m always a firm believer of KISS (Keep It Simple Stupid).

There are two primary sides to SignalR: the client side and the server hubs. Here I have created an MVC application with Individual User Accounts for authentication.

First, add the SignalR NuGet package

Install-Package Microsoft.AspNet.SignalR

Then we need to map the Hubs connection to the application.

To enable SignalR in your application, create a class called Startup with the following:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyWebApplication.Startup))]

namespace MyWebApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
        }
    }
}

What is important here is that app.MapSignalR() is called last, because any changes to the app need to be made before you call the mapping. The incorrect order got me once when we had some custom authentication, and it was not being passed to the SignalR hubs.

I won’t be going into how you go about setting up the step by step process, as this is documented in many places, and also comes in the readme.txt file as part of the NuGet package.

What I will be adding is the Authorization to the project, which is covered by Microsoft in Authentication and Authorization for SignalR Hubs.
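
The hub itself isn’t listed in this post; a minimal sketch consistent with the client code further down (a Send method that broadcasts to a broadcastMessage handler on every client), with the Authorize attribute applied, might look like this:

using Microsoft.AspNet.SignalR;

// The class name maps to the camelCase proxy ($.connection.heartBeatHub)
// used in the client script below.
[Authorize]
public class HeartBeatHub : Hub
{
    public void Send(string message)
    {
        // Invokes the broadcastMessage function on all connected clients.
        Clients.All.broadcastMessage(message);
    }
}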

It is important to note how the connection is handled: we are using a class called SignalRConnectionManager, which controls the connections based on the username and the connection id, both of which come from the context.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public class SignalRConnectionManager<T> : IDisposable
{
    private readonly ConcurrentDictionary<T, HashSet<string>> _connections = new ConcurrentDictionary<T, HashSet<string>>();

    public int Count { get { return _connections.Count; } }

    /// <summary>
    /// Attempts to add the specified userid and connectionid
    /// </summary>
    public void Add(T userid, string connectionid)
    {
        HashSet<string> connections = _connections.GetOrAdd(userid, new HashSet<string>());

        lock (connections)
        {
            connections.Add(connectionid);
        }
    }

    public IEnumerable<string> Connections(T userid)
    {
        HashSet<string> connections;
        if (_connections.TryGetValue(userid, out connections))
        {
            return connections;
        }

        return Enumerable.Empty<string>();
    }

    public IEnumerable<T> UserIds()
    {
        return _connections.Keys;
    }

    /// <summary>
    /// Attempts to remove a connectionid that has the specified userid
    /// </summary>
    public void Remove(T userid, string connectionid)
    {
        HashSet<string> connections;
        if (!_connections.TryGetValue(userid, out connections))
        {
            return;
        }

        lock (connections)
        {
            connections.Remove(connectionid);

            if (connections.Count == 0)
            {
                HashSet<string> emptyConnections;
                _connections.TryRemove(userid, out emptyConnections);
            }
        }
    }

    #region IDisposable Support

    private bool disposedValue = false; // To detect redundant calls

    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                _connections.Clear();
            }

            disposedValue = true;
        }
    }

    // This code added to correctly implement the disposable pattern.
    public void Dispose()
    {
        // Do not change this code. Put cleanup code in Dispose(bool disposing) above.
        Dispose(true);
    }

    #endregion IDisposable Support
}
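
A typical way to drive this class is from the hub’s connection lifetime events; here is a sketch extending the HeartBeatHub from earlier (the Send method is omitted), assuming a manager shared across hub instances and keyed on the username:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class HeartBeatHub : Hub
{
    // Shared across all hub instances; keyed on the username from the context.
    private static readonly SignalRConnectionManager<string> Connections =
        new SignalRConnectionManager<string>();

    public override Task OnConnected()
    {
        Connections.Add(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Connections.Remove(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}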

Client Code

In my case I’m going to be looking at JavaScript within a C# MVC application, which looks like this:

<p>SignalR</p>
<!--The jQuery library is required. -->
<script src="~/Scripts/jquery-1.10.2.js"></script>
<!--Reference the SignalR library. -->
<script src="~/Scripts/jquery.signalR-2.2.3.min.js"></script>
<!--Reference the auto-generated SignalR hub script. -->
<script src="~/signalr/hubs"></script>

<!--Add script to update the page and send messages - SignalR - HeartBeat.-->
<script type="text/javascript">
    $(function () {
        // Declare a proxy to reference the hub.
        var heartBeat = $.connection.heartBeatHub;

        heartBeat.client.broadcastMessage = function (html) {
            $('#message').html(html).fadeIn();
        };

        if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
            $.connection.hub.start()
                .done(function () {
                    console.log('SignalR now connected, connection ID=' + $.connection.hub.id);
                    heartBeat.server.send('Heart beat listening');
                    console.log("Heart beat started");
                })
                .fail(function () { console.log('Could not Connect!'); });
        }
    });
</script>
<div id="message">
</div>

Two important lines in this code are:

Referencing the auto-generated SignalR hub script:

<script src="~/signalr/hubs"></script>

Declaring the proxy to reference the hub. You’ll notice the case of the letter ‘h’ is different from the C# code (the generated proxy uses camelCase); this is important, otherwise you will get a JavaScript error in your browser.

var heartBeat = $.connection.heartBeatHub;

Another important thing to note is that you should only start the hub once, no matter how many SignalR endpoints you have, and you place the listening code within the done section of the hub start. I’ve commented out another listening hub in this sample code:

if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
    $.connection.hub.start()
        .done(function () {
            console.log('SignalR now connected, connection ID=' + $.connection.hub.id);
            heartBeat.server.send('Heart beat listening');
            console.log("Heart beat started");
            //anotherHub.server.send('Another hub listening');
        })
        .fail(function () { console.log('Could not Connect!'); });
}

That is it for now, a good clean SignalR project, and here it is: SignalR

Async Unit Tests

In theory, async unit testing seems easy: run the async method, wait until it finishes, and look at the results. But as you will find out, it is not that easy.

Here is the official approach to async unit testing:

[TestMethod]
public void FourDividedByTwoIsTwo()
{
    GeneralThreadAffineContext.Run(async () =>
    {
        int result = await MyClass.Divide(4, 2);
        Assert.AreEqual(2, result);
    });
}
    
[TestMethod]
[ExpectedException(typeof(DivideByZeroException))]
public void DenominatorIsZeroThrowsDivideByZero()
{
    GeneralThreadAffineContext.Run(async () =>
    {
        await MyClass.Divide(4, 0);
    });
}

Hang on, what is GeneralThreadAffineContext? It is a utility class originally distributed as part of the Async CTP, and the project file can be found here: AsyncTestUtilities
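
Worth noting: with MSTest in Visual Studio 2012 and later, test methods can simply be declared async Task, which avoids the custom context for straightforward cases:

[TestMethod]
public async Task FourDividedByTwoIsTwoAsync()
{
    int result = await MyClass.Divide(4, 2);
    Assert.AreEqual(2, result);
}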

Original article: Async Unit Tests, Part 2: The Right Way

Generate C# Classes from JSON Responses

As web developers, we are always dealing with JSON data coming from different sources: either we are serialising our own entities, or the data is coming from external sources such as third-party services.

If the data is coming from an external source, one standard requirement is to de-serialise it back into a data model in order to process it. Creating a data model for JSON data is not the most exciting work in the world, especially when the model is nested and complex.

Fortunately, there is a very nice feature in Visual Studio which makes life much easier. This feature is called Paste Special.

To take advantage of it, you first need to copy the JSON data. Imagine there is JSON data as follows:

{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

We need to create the data model for this. To do so, copy the data to the clipboard and go to Visual Studio, then create a new class (or open an existing one where we intend to have our data model).

From the Edit menu in Visual Studio, select the Paste Special and from the submenu, Paste JSON As Classes.

Then the data model will be generated as follows:

public class Rootobject
{
    public Glossary glossary { get; set; }
}

public class Glossary
{
    public string title { get; set; }
    public Glossdiv GlossDiv { get; set; }
}

public class Glossdiv
{
    public string title { get; set; }
    public Glosslist GlossList { get; set; }
}

public class Glosslist
{
    public Glossentry GlossEntry { get; set; }
}

public class Glossentry
{
    public string ID { get; set; }
    public string SortAs { get; set; }
    public string GlossTerm { get; set; }
    public string Acronym { get; set; }
    public string Abbrev { get; set; }
    public Glossdef GlossDef { get; set; }
    public string GlossSee { get; set; }
}

public class Glossdef
{
    public string para { get; set; }
    public string[] GlossSeeAlso { get; set; }
}

As if by magic, the initial data model has been created for us.
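
With the generated model in place, deserialising the original document is a one-liner; a usage sketch, assuming Json.NET:

using Newtonsoft.Json;

// json holds the glossary document shown above
var root = JsonConvert.DeserializeObject<Rootobject>(json);
var term = root.glossary.GlossDiv.GlossList.GlossEntry.GlossTerm;
// term == "Standard Generalized Markup Language"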

As you may have noticed, there is one more option under the Paste Special menu item, named Paste XML As Classes. This does the same thing but for XML data: copy your XML data to the clipboard and, from the Paste Special menu item, choose Paste XML As Classes to have your data model generated.

Original post: ‘Paste Special’ – a less well-known feature in Visual Studio

Repository Pattern – for the REST API

The Repository Pattern was once the next big thing, but over time it was largely displaced by frameworks such as Entity Framework and LINQ, which provide much more functionality and flexibility.

I started working on an external customer’s REST API, and then I realised that the Repository Pattern would work perfectly here.

Let’s recap the Repository Pattern.

The Repository Pattern has gained quite a bit of popularity since it was first introduced as a part of Domain-Driven Design in 2004. Primarily, it provides an abstraction of data, so that your application can work with a pure abstraction that has an interface approximating that of a collection. Adding, removing, updating, and selecting items from this collection is done through a series of straightforward methods, without the need to deal with database concerns like connections, commands, cursors, or readers. Using this pattern can help achieve loose coupling and can keep domain objects persistence ignorant. Although the pattern is prevalent (or perhaps because of this), it is also frequently misunderstood and misused. There are many different ways to implement the Repository pattern. Let’s consider a few of them, and their merits and drawbacks.

Repository Per Entity or Business Object

The most straightforward approach, especially with an existing system, is to create a new Repository implementation for each business object you need to store to or retrieve from your persistence layer. Further, you should only implement the specific methods you are calling in your application. Avoid the trap of creating a “standard” repository class, base class, or default interface that you must implement for all repositories. Yes, if you need to have an Update or a Delete method, you should strive to make its interface consistent (does Delete take an ID, or does it take the object itself?). But don’t implement a Delete method on your LookupTableRepository when you’re only ever going to call List() on it. The most significant benefit of this approach is YAGNI – you won’t waste any time implementing methods that never get called.
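
For example, a read-only repository for a lookup table needs nothing more than the method actually being called; the names here are illustrative:

using System.Collections.Generic;

// Hypothetical entity for a simple lookup table.
public class LookupItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ILookupTableRepository
{
    IEnumerable<LookupItem> List();
}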

Generic Repository Interface

Another approach is to go ahead and create a simple, generic interface for your Repository. You can constrain what kind of types it works with to be of a specific type or to implement a particular interface (e.g. ensuring it has an Id property, as is done below using a base class). An example of a generic C# repository interface might be:

public interface IRepository<T> where T : EntityBase
{
    T GetById(int id);
    IEnumerable<T> List();
    IEnumerable<T> List(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Delete(T entity);
    void Edit(T entity);
}
 
public abstract class EntityBase
{
   public int Id { get; protected set; }
}

The advantage of this approach is that it ensures you have a common interface for working with any of your objects. You can also simplify the implementation by using a Generic Repository Implementation (below). Note that taking in a predicate eliminates the need to return an IQueryable since any filter details can be passed into the repository. This can still lead to leaking of data access details into calling code, though. Consider using the Specification pattern (described below) to alleviate this issue if you encounter it.

Generic Repository Implementation

Assuming you create a Generic Repository Interface, you can implement the interface generically as well. Once this is done, you can quickly develop repositories for any given type without having to write any new code, and your classes that declare dependencies can merely specify IRepository<Item> as the type, making it easy for your IoC container to match that up with a Repository<Item> implementation. You can see an example Generic Repository Implementation, using Entity Framework, below.

public class Repository<T> : IRepository<T> where T : EntityBase
{
    private readonly ApplicationDbContext _dbContext;

    public Repository(ApplicationDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public virtual T GetById(int id)
    {
        return _dbContext.Set<T>().Find(id);
    }

    public virtual IEnumerable<T> List()
    {
        return _dbContext.Set<T>().AsEnumerable();
    }

    public virtual IEnumerable<T> List(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
    {
        return _dbContext.Set<T>()
            .Where(predicate)
            .AsEnumerable();
    }

    public void Add(T entity)
    {
        _dbContext.Set<T>().Add(entity);
        _dbContext.SaveChanges();
    }

    public void Edit(T entity)
    {
        _dbContext.Entry(entity).State = EntityState.Modified;
        _dbContext.SaveChanges();
    }

    public void Delete(T entity)
    {
        _dbContext.Set<T>().Remove(entity);
        _dbContext.SaveChanges();
    }
}

Note that in this implementation, all operations are saved as they are performed; there is no Unit of Work being applied. There are a variety of ways in which Unit of Work behaviour can be added to this implementation, the simplest of which is to add an explicit Save() method to the IRepository<T> interface, and to only call the underlying SaveChanges() method from there.
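
The simplest form of that change is sketched below: expose Save() on the interface, have the mutating methods stop calling SaveChanges() themselves, and commit once per unit of work.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface IRepository<T> where T : EntityBase
{
    T GetById(int id);
    IEnumerable<T> List();
    IEnumerable<T> List(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Delete(T entity);
    void Edit(T entity);
    void Save(); // commits all pending changes as one unit of work
}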

IQueryable?

Another common question with Repositories has to do with what they return. Should they return data, or should they return queries that can be further refined before execution (IQueryable)? The former is safer, but the latter offers a great deal of flexibility. In fact, you can simplify your interface to provide only a single method for reading data if you go the IQueryable route, since from there any number of items can be returned.

A problem with this approach is that it tends to result in business logic bleeding into higher application layers and becoming duplicated there. If the rule for returning valid customers is that they’re not disabled and they’ve bought something in the last year, it would be better to have a method ListValidCustomers() that encapsulates this logic rather than specifying these criteria in lambda expressions in multiple different UI layer references to the repository. Another typical example in real applications is the use of “soft deletes” represented by an IsActive or IsDeleted property on an entity. Once an item has been deleted, 99% of the time it should be excluded from display in any UI scenario, so nearly every request will include something like

.Where(foo => foo.IsActive)

in addition to whatever other filters are present. This is better handled within the repository, where it can be the default behaviour of the List() method, or the List() method might be renamed to something like ListActive(). If it’s essential to view deleted/inactive items, a separate List method can be used for just this (probably rare) purpose.

Specification

Repositories that follow the advice of not exposing IQueryable can often become bloated with many custom query methods. The solution to this is to separate queries into their types, using the Specification Design Pattern. The Specification can include the expression used to filter the query, any parameters associated with this expression, as well as how much data the query should return (i.e. “.Include()” in EF/EF Core). Combining the Repository and Specification patterns can be a great way to ensure you follow the Single Responsibility Principle in your data access code. See an example of how to implement a generic repository along with a generic specification in C#.
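
A minimal generic specification along those lines exposes the filter expression plus any eager-loading includes; this is a sketch, and the member names are illustrative:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    // The filter to apply, e.g. customer => customer.IsActive
    Expression<Func<T, bool>> Criteria { get; }

    // Navigation properties for the repository to pass to .Include()
    List<Expression<Func<T, object>>> Includes { get; }
}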

Repository Pattern for the REST API

Now let’s see if this can work for the REST API. First, the HTTP verbs that are used: the primary or most commonly used HTTP verbs (or methods, as they are properly called) are POST, GET, PUT, PATCH, and DELETE. These correspond to create, read, update, and delete (CRUD) operations, respectively. There are a number of other verbs too, but they are used less frequently. Of those less-frequent methods, OPTIONS and HEAD are used more often than others.

HTTP Verb   CRUD
POST        Create
GET         Read
PUT         Update/Replace
PATCH       Update/Modify
DELETE      Delete

One of the main differences is that calls to a REST API are asynchronous, and the interface needs to reflect this:

public interface IRepository
{
    Task AddAsync<T>(T entity, string requestUri);
    Task<HttpStatusCode> DeleteAsync(string requestUri);
    Task EditAsync<T>(T t, string requestUri);
    Task<T> GetAsync<T>(string path);
}

In the concrete implementation I am using the HttpClient to connect to the REST API, which needs a few parameters (a sketch follows the list below):

  • Uri of the end point
  • Authorization – the type of authorization to be used; default NoAuth
  • username – username for basic authorization – default null
  • password – the password for basic authorization – default null
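
Here is a rough sketch of what that concrete implementation could look like, assuming Json.NET for serialisation; the authorization handling described above is omitted for brevity:

using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class RestRepository : IRepository
{
    private readonly HttpClient _client;

    public RestRepository(Uri endpoint)
    {
        _client = new HttpClient { BaseAddress = endpoint };
    }

    public async Task<T> GetAsync<T>(string path)
    {
        var response = await _client.GetAsync(path);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(json);
    }

    public async Task AddAsync<T>(T entity, string requestUri)
    {
        var content = new StringContent(
            JsonConvert.SerializeObject(entity), Encoding.UTF8, "application/json");
        var response = await _client.PostAsync(requestUri, content);
        response.EnsureSuccessStatusCode();
    }

    public async Task EditAsync<T>(T t, string requestUri)
    {
        var content = new StringContent(
            JsonConvert.SerializeObject(t), Encoding.UTF8, "application/json");
        var response = await _client.PutAsync(requestUri, content);
        response.EnsureSuccessStatusCode();
    }

    public async Task<HttpStatusCode> DeleteAsync(string requestUri)
    {
        var response = await _client.DeleteAsync(requestUri);
        return response.StatusCode;
    }
}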

A sample application can be found here: HttpSample

Original reference: Repository Pattern – A data persistence abstraction

Create a subset of Unit Tests with a Playlist

Visual Studio Unit Test Tools comes with another excellent feature for managing unit tests as a group or subset, called a “Playlist”. A playlist is a subset of unit test methods grouped under some category; it could be a logical subset based on modules, features, layers, etc. A playlist is useful when we want to run a particular set of test cases among all the test methods. In that case, we just create a group of the test cases which may be impacted by our current changes to the actual code, and execute only them to ensure nothing breaks.

You can create a playlist either from the Test Explorer or from the main menu by navigating to Test –> Playlist. Let’s have a look at how we can do that.

  • Step 1 : Select Unit Test methods together for which you want to create a playlist.
  • Step 2 : Navigate to Add to Playlist –> New Playlist

The rest I’m sure you can work out for yourself. If you want more information on this, check out Daily .NET Tips.