Generate C# Classes from JSON Responses

As web developers, we are always dealing with JSON data coming from different sources: either we are serialising our own entities, or the data is arriving from external sources such as third-party services.

If the data is coming from an external source, one standard requirement is to deserialise it back into a data model so the data can be processed. Creating a data model for JSON data is not the most exciting work in the world, especially when the model is nested and complex.

Fortunately, there is a very nice feature in Visual Studio that makes life much easier. This feature is called Paste Special.

To take advantage of it, you first need to copy the JSON data. Imagine there is JSON data as follows:

{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}

We need to create the data model for this. To do so, copy the data to the clipboard, go to Visual Studio, and create a new class (or open an existing one where the data model should live).

From the Edit menu in Visual Studio, select Paste Special, and from the submenu choose Paste JSON As Classes.

The data model is then generated as follows:

public class Rootobject
{
    public Glossary glossary { get; set; }
}

public class Glossary
{
    public string title { get; set; }
    public Glossdiv GlossDiv { get; set; }
}

public class Glossdiv
{
    public string title { get; set; }
    public Glosslist GlossList { get; set; }
}

public class Glosslist
{
    public Glossentry GlossEntry { get; set; }
}

public class Glossentry
{
    public string ID { get; set; }
    public string SortAs { get; set; }
    public string GlossTerm { get; set; }
    public string Acronym { get; set; }
    public string Abbrev { get; set; }
    public Glossdef GlossDef { get; set; }
    public string GlossSee { get; set; }
}

public class Glossdef
{
    public string para { get; set; }
    public string[] GlossSeeAlso { get; set; }
}

As if by magic, the initial data model has been created for us.
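With the generated classes in place, deserialising the original JSON back into the model takes a single call. Here is a minimal sketch using Json.NET (this assumes the Newtonsoft.Json package is installed; the glossary.json file name is illustrative):

using System;
using System.IO;
using Newtonsoft.Json;

class Demo
{
    static void Main()
    {
        // Load the JSON shown above (illustrative file name)
        string json = File.ReadAllText("glossary.json");

        // Deserialise into the generated data model
        var root = JsonConvert.DeserializeObject<Rootobject>(json);

        Console.WriteLine(root.glossary.title); // prints "example glossary"
    }
}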

As you may have noticed, there is one more option under the Paste Special menu item, named Paste XML As Classes. This does the same thing but for XML data: copy your XML data to the clipboard and choose Paste XML As Classes from the Paste Special menu to have your data model generated.

Original post ‘Paste Special’: a less well-known feature in Visual Studio

Repository Pattern for the REST API

The Repository Pattern used to be the next big thing, but over time it was largely superseded by frameworks such as Entity Framework and LINQ, which provide much more functionality and flexibility.

I started working on an external customer's REST API and realised that the Repository Pattern would work perfectly here.

Let’s recap the Repository Pattern.

The Repository Pattern has gained quite a bit of popularity since it was first introduced as a part of Domain-Driven Design in 2004. Primarily, it provides an abstraction of data, so that your application can work with a pure abstraction that has an interface approximating that of a collection. Adding, removing, updating, and selecting items from this collection is done through a series of straightforward methods, without the need to deal with database concerns like connections, commands, cursors, or readers. Using this pattern can help achieve loose coupling and can keep domain objects persistence ignorant. Although the pattern is prevalent (or perhaps because of this), it is also frequently misunderstood and misused. There are many different ways to implement the Repository pattern. Let’s consider a few of them, and their merits and drawbacks.

Repository Per Entity or Business Object

The most straightforward approach, especially with an existing system, is to create a new Repository implementation for each business object you need to store to or retrieve from your persistence layer. Further, you should only implement the specific methods your application actually calls. Avoid the trap of creating a "standard" repository class, base class, or default interface that you must implement for all repositories. Yes, if you need an Update or a Delete method, you should strive to make its interface consistent (does Delete take an ID, or does it take the object itself?), but don't implement a Delete method on a LookupTableRepository on which you're only ever going to call List(). The most significant benefit of this approach is YAGNI – you won't waste any time implementing methods that never get called.
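As a sketch, a repository for a simple lookup table that the application only ever reads might expose nothing but List() (the interface and entity names here are illustrative):

using System.Collections.Generic;

// Only the operation the application actually calls: no Add, Edit,
// or Delete methods that would never be used
public interface ILookupTableRepository
{
    IEnumerable<LookupItem> List();
}

// Illustrative lookup entity
public class LookupItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}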

Generic Repository Interface

Another approach is to go ahead and create a simple, generic interface for your Repository. You can constrain what kind of types it works with to be of a specific type or to implement a particular interface (e.g. ensuring it has an Id property, as is done below using a base class). An example of a generic C# repository interface might be:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface IRepository<T> where T : EntityBase
{
    T GetById(int id);
    IEnumerable<T> List();
    IEnumerable<T> List(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Delete(T entity);
    void Edit(T entity);
}

public abstract class EntityBase
{
    public int Id { get; protected set; }
}

The advantage of this approach is that it ensures you have a common interface for working with any of your objects. You can also simplify the implementation by using a Generic Repository Implementation (below). Note that taking in a predicate eliminates the need to return an IQueryable since any filter details can be passed into the repository. This can still lead to leaking of data access details into calling code, though. Consider using the Specification pattern (described below) to alleviate this issue if you encounter it.

Generic Repository Implementation

Assuming you create a Generic Repository Interface, you can implement the interface generically as well. Once this is done, you can quickly develop repositories for any given type without having to write any new code, and classes that declare dependencies can simply specify IRepository<Item> as the type, making it easy for your IoC container to match that up with a Repository<Item> implementation. Here is an example Generic Repository Implementation, using Entity Framework:

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class Repository<T> : IRepository<T> where T : EntityBase
{
    private readonly ApplicationDbContext _dbContext;

    public Repository(ApplicationDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public virtual T GetById(int id)
    {
        return _dbContext.Set<T>().Find(id);
    }

    public virtual IEnumerable<T> List()
    {
        return _dbContext.Set<T>().AsEnumerable();
    }

    public virtual IEnumerable<T> List(Expression<Func<T, bool>> predicate)
    {
        return _dbContext.Set<T>()
            .Where(predicate)
            .AsEnumerable();
    }

    // Note: Add, Edit and Delete all save immediately; see the
    // Unit of Work discussion below
    public void Add(T entity)
    {
        _dbContext.Set<T>().Add(entity);
        _dbContext.SaveChanges();
    }

    public void Edit(T entity)
    {
        _dbContext.Entry(entity).State = EntityState.Modified;
        _dbContext.SaveChanges();
    }

    public void Delete(T entity)
    {
        _dbContext.Set<T>().Remove(entity);
        _dbContext.SaveChanges();
    }
}

Note that in this implementation, all operations are saved as they are performed; there is no Unit of Work being applied. There are a variety of ways in which Unit of Work behaviour can be added, the simplest being to add an explicit Save() method to the IRepository<T> interface and to call the underlying SaveChanges() only from that method.
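A minimal sketch of that deferred-save variant might look like this (the interface and class names are illustrative):

public interface IUnitOfWorkRepository<T> where T : EntityBase
{
    void Add(T entity);
    void Edit(T entity);
    void Delete(T entity);

    // Nothing is persisted until Save() is called
    void Save();
}

public class UnitOfWorkRepository<T> : IUnitOfWorkRepository<T> where T : EntityBase
{
    private readonly ApplicationDbContext _dbContext;

    public UnitOfWorkRepository(ApplicationDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Add(T entity) => _dbContext.Set<T>().Add(entity);

    public void Edit(T entity) => _dbContext.Entry(entity).State = EntityState.Modified;

    public void Delete(T entity) => _dbContext.Set<T>().Remove(entity);

    // The single place where changes are flushed to the database
    public void Save() => _dbContext.SaveChanges();
}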

IQueryable?

Another common question with Repositories has to do with what they return. Should they return data, or should they return queries that can be further refined before execution (IQueryable)? The former is safer, but the latter offers a great deal of flexibility. In fact, if you go the IQueryable route you can simplify your interface to provide only a single method for reading data, since any number of items can be returned from there.

A problem with this approach is that it tends to result in business logic bleeding into higher application layers and becoming duplicated there. If the rule for returning valid customers is that they’re not disabled and they’ve bought something in the last year, it would be better to have a method ListValidCustomers() that encapsulates this logic rather than specifying these criteria in lambda expressions in multiple different UI layer references to the repository. Another typical example in real applications is the use of “soft deletes” represented by an IsActive or IsDeleted property on an entity. Once an item has been deleted, 99% of the time it should be excluded from display in any UI scenario, so nearly every request will include something like

.Where(foo => foo.IsActive)

in addition to whatever other filters are present. This is better handled within the repository, where it can be the default behaviour of the List() method, or the List() method might be renamed to something like ListActive(). If it is essential to view deleted/inactive items, a separate List method can be added for just this (probably rare) purpose.

Specification

Repositories that follow the advice of not exposing IQueryable can often become bloated with many custom query methods. The solution is to separate queries into their own types, using the Specification Design Pattern. The Specification can include the expression used to filter the query, any parameters associated with that expression, and how much data the query should return (i.e. ".Include()" in EF/EF Core). Combining the Repository and Specification patterns can be a great way to ensure you follow the Single Responsibility Principle in your data access code. A sketch of a generic specification in C# follows.
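Here is such a sketch (the Customer entity and its properties are illustrative; the ISpecification shape is one minimal possibility):

using System;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    // The filter this specification applies to a query
    Expression<Func<T, bool>> Criteria { get; }
}

// Illustrative entity
public class Customer : EntityBase
{
    public bool IsActive { get; set; }
    public DateTime LastPurchase { get; set; }
}

// The "valid customers" rule captured in one place
public class ValidCustomersSpecification : ISpecification<Customer>
{
    public Expression<Func<Customer, bool>> Criteria
    {
        get
        {
            var cutoff = DateTime.UtcNow.AddYears(-1);
            return customer => customer.IsActive && customer.LastPurchase > cutoff;
        }
    }
}

The repository's List(Expression<Func<T, bool>>) overload shown earlier can then accept the specification's Criteria directly, keeping the business rule out of the UI layer.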

Applying the Repository Pattern to the REST API

Now let's see whether this can work for a REST API. First, the HTTP verbs: the primary or most commonly used HTTP verbs (or methods, as they are properly called) are POST, GET, PUT, PATCH, and DELETE. These correspond to the create, read, update, and delete (CRUD) operations respectively. There are a number of other verbs too, but they are utilised less frequently; of those less-frequent methods, OPTIONS and HEAD are used more often than the others.

HTTP Verb   CRUD
POST        Create
GET         Read
PUT         Update/Replace
PATCH       Update/Modify
DELETE      Delete

One of the main differences is that calls to a REST API are asynchronous, and the interface needs to reflect this.

public interface IRepository
{
    Task AddAsync<T>(T entity, string requestUri);
    Task<HttpStatusCode> DeleteAsync(string requestUri);
    Task EditAsync<T>(T entity, string requestUri);
    Task<T> GetAsync<T>(string path);
}

In the concrete implementation I am using HttpClient to connect to the REST API, which needs a few parameters (a sketch of the implementation follows the list):

  • Uri of the endpoint
  • Authorization – the type of authorization to be used (default: NoAuth)
  • username – the username for basic authorization (default: null)
  • password – the password for basic authorization (default: null)
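One possible concrete implementation might look like this (a sketch assuming Json.NET for serialisation; the authorization plumbing described above is omitted for brevity):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class HttpRepository : IRepository
{
    private readonly HttpClient _client;

    public HttpRepository(HttpClient client)
    {
        _client = client;
    }

    public async Task AddAsync<T>(T entity, string requestUri)
    {
        var content = new StringContent(JsonConvert.SerializeObject(entity), Encoding.UTF8, "application/json");
        var response = await _client.PostAsync(requestUri, content);
        response.EnsureSuccessStatusCode();
    }

    public async Task<HttpStatusCode> DeleteAsync(string requestUri)
    {
        var response = await _client.DeleteAsync(requestUri);
        return response.StatusCode;
    }

    public async Task EditAsync<T>(T entity, string requestUri)
    {
        var content = new StringContent(JsonConvert.SerializeObject(entity), Encoding.UTF8, "application/json");
        var response = await _client.PutAsync(requestUri, content);
        response.EnsureSuccessStatusCode();
    }

    public async Task<T> GetAsync<T>(string path)
    {
        var json = await _client.GetStringAsync(path);
        return JsonConvert.DeserializeObject<T>(json);
    }
}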

A sample application can be found here:

Original reference: Repository Pattern – A data persistence abstraction

Create a subset of Unit Tests with a Playlist

Visual Studio's unit test tools come with another excellent feature for managing unit tests as a group or subset, called a "Playlist". A Playlist is a subset of unit test methods grouped under some category; it could be a logical subset based on modules, features, layers, and so on. A Playlist is useful when we want to run a particular set of test cases among all the test methods. In that case, we just create a group of the test cases that may be impacted by our current changes to the code, and execute only those to ensure nothing breaks.

You can create a playlist either from the Test Explorer or from the main menu by navigating to Test –> Playlist. Let's have a look at how we can do that.

  • Step 1: Select the unit test methods for which you want to create a playlist.
  • Step 2: Navigate to Add to Playlist –> New Playlist.

The rest I'm sure you can work out for yourself; if you want more information on this, check out Daily .NET Tips.

Timeout Process

When you are dealing with large amounts of data or very time-hungry processes, you sometimes need the ability to time out a task and continue.

Here is a snippet of code with a task that takes around one second to run. You can set the timeout to any value you like; if it is less than one second, the task will be cancelled and its resources freed up.

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    public static void Main()
    {
        int timeOutInMilliseconds = 450;
        var startTime = DateTime.Now;

        var cTokenSource = new CancellationTokenSource();

        // Create a cancellation token from the CancellationTokenSource
        var cToken = cTokenSource.Token;

        // Create a task and pass the cancellation token
        var t1 = Task<int>.Factory.StartNew(() => GenerateNumbers(cToken), cToken);

        // Register a delegate for a callback when a cancellation request is made
        cToken.Register(CancelNotification);

        while (true)
        {
            if (t1.IsCompleted)
            {
                Console.WriteLine("Finished Processing");
                break;
            }

            if (DateTime.Now > startTime.AddMilliseconds(timeOutInMilliseconds))
            {
                Console.WriteLine("Timed out");
                cTokenSource.Cancel();
                break;
            }
        }

        Console.WriteLine("finished");
        Console.ReadLine();
    }

    static int GenerateNumbers(CancellationToken cancellationToken)
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            Console.WriteLine("Method1 - Number: {0}", i);
            Thread.Sleep(100);

            // Poll the IsCancellationRequested property to check
            // whether cancellation has been requested
            if (cancellationToken.IsCancellationRequested)
                break;
        }
        return i;
    }

    // Notify when the task is cancelled
    static void CancelNotification()
    {
        Console.WriteLine("Task cancelled");
    }
}

Source: Parallel

Search every column in a SQL Server Database

Following on from the Search every table and field in a SQL Server Database article I posted last week, I thought it would also be useful to be able to search every column name for some text and return where it was found.

Here is the script I have produced, which does the job and is quick to respond.

CREATE PROC [SearchAllColumns]
(
    @SearchStr nvarchar(100)
)
AS
BEGIN

    SELECT COLUMN_NAME AS 'ColumnName',
           TABLE_NAME AS 'TableName'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME LIKE '%' + @SearchStr + '%'
    ORDER BY TableName,
             ColumnName;

END
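To use it, pass the text you are looking for, for example EXEC SearchAllColumns @SearchStr = 'Date'; this lists every table and column whose column name contains 'Date'.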


Versioning

In this article, we are going to look at the importance of a good versioning strategy and why it is crucial for a Web API.

It is essential that we introduce Web API versioning at an early stage of the project. Versioning ensures that the project is built on a solid base and that a standard is maintained. A good versioning strategy enables us to collaborate and scale in a managed way, and as such provides our customers with a more straightforward progression through our upgrades.

The first point to note is that we should always version our Web API, keeping as much of the URI the same as possible, while allowing newer versions to have a different URI and a clear upgrade path.

What is versioning?

Versioning is a thoughtful application of changes to a system that is already in production.

There are two types of versioning:

  • Versioning data – changing the way we lay out our data structures, name things, and so on
  • Versioning SOAP endpoints – adding or removing functionality, such as the functions that can be called, or changing the parameters that can be passed to those endpoints

Plan Versioning

How do we handle versioning? When producing the plan for the first release, we need to do the following:

  • At the start
    • Pick a version-able identifier
    • Pick a versioning pattern
  • Version-able identifiers
    • XML data: XML namespace
    • Things that listen at URLs: URL path
  • Numbering pattern
    • https://yoururl.co.uk/[service name]/[major].[minor].[build]
    • Each number increases monotonically: 1, 2, 3, 4, …, 9, 10, 11
  • Date pattern
    • https://yoururl.co.uk/[service name]/[yyyy]/[mm]/[dd]
    • https://yoururl.co.uk/[service name]/[yyyy].[mm].[dd]

The only time we ever need to version is when we change the contract with the client. We can change the contract in one of three ways:

  • Data
    • Add a new type
    • Removing a type
    • Change fields
    • Required fields
  • SOAP
    • Add/remove methods
    • Change parameter names
    • In session: change the set of initiating/termination actions
  • REST
    • Change URI structure
    • Add methods

Versioning Data

There are times when you need to version the data:

  • Reasons to version data
    • Name chooser picked terrible names (aka renames)
    • Adding/removing fields
  • Things to handle upon versioning data
    • Co-existing old/new clients
  • Reason not to version data
    • Re-ordering for aesthetic reasons

New Members

  • If a new member is required, define a sensible default (see the sketch after this list)
    • Caveat: if no sensible default exists, you have a severe bug in a previous version
    • If you think you have no reasonable default and customers won’t change, you will suddenly find a sensible default
  • Place new members at the end
    • Some clients may depend on ordinal position rather than name
    • Use the DataMember.Order property
  • Or inherit the Data Transformation Object if the new item breaks old clients
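Here is a minimal sketch of adding a new, optional member at the end of a data contract, assuming DataContractSerializer is in play (the class and property names are hypothetical):

using System.Runtime.Serialization;

[DataContract]
public class CustomerDto
{
    [DataMember(Order = 1)]
    public string Name { get; set; }

    // New in v2: optional, and placed at the end via Order so that
    // ordinal-dependent clients keep working
    [DataMember(Order = 2, IsRequired = false)]
    public string Region { get; set; }

    // DataContractSerializer does not run constructors, so the
    // sensible default for payloads from old clients is set here
    [OnDeserializing]
    private void SetDefaults(StreamingContext context)
    {
        Region = "Unknown";
    }
}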

Versioning SOAP Endpoints

  • Reasons to version endpoints
    • Name chooser picked terrible names (aka renames)
    • Adding methods
    • Removing methods
    • Updating method parameters (new version of old type)
    • Adding terminating functions (might already exist)
  • Things to handle upon versioning endpoints
    • Update old to call out to controller properly
  • Reasons not to version endpoint
    • Re-ordering for aesthetic reasons (don’t do this!)

Handle Versioning

  • Keep services light
  • Keep details in separate class
  • Use an MVC-type pattern (see the sketch after this list)
    • DTO is Model
    • Service is View
    • “Separate class” is Controller
  • Benefits
    • Minimize code churn in services
    • Keep logic mapping DTO to Business in one place
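A minimal sketch of that split, with all mapping between the Data Transformation Object and the business object kept in one separate class (all names here are illustrative):

// Model: the wire-level Data Transformation Object
public class OrderDto
{
    public string Reference { get; set; }
}

// The real business object, hidden from the wire
public class Order
{
    public string Reference { get; set; }
}

// The "separate class" acting as the controller: all mapping and
// upgrade logic between DTO and business object lives here
public class OrderTranslator
{
    public Order ToBusiness(OrderDto dto) => new Order { Reference = dto.Reference };
    public OrderDto ToDto(Order order) => new OrderDto { Reference = order.Reference };
}

// The service stays light: just a machine UI over the object model
public class OrderService
{
    private readonly OrderTranslator _translator = new OrderTranslator();

    public OrderDto GetOrder(int id)
    {
        Order order = LoadOrder(id); // persistence elided
        return _translator.ToDto(order);
    }

    private Order LoadOrder(int id) => new Order { Reference = "ORD-" + id };
}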

Key Points

  • Keep service code clean, simple.
  • Think of service as a machine UI to the real object model.
  • Let internal logic worry about mapping DTO to the business objects.
  • Handle upgrade in one place, away from service.

Versioning REST Endpoints

  • Reasons to version REST endpoints
    • Name chooser picked terrible names
    • Paths don’t make sense
    • Updating data types, parameters (new version of old model)
  • Things to handle upon versioning endpoints
    • Update old to call out to controller properly
  • Reasons not to version endpoint
    • Supporting more of the Uniform Interface (GET/HEAD, PUT, POST, DELETE)

Where to version?

  • New set of services in a new directory (sketched after this list)
    • Pros: clean separation, small files, easy to read, allows for more natural deletion later
    • Cons: not necessary, moves logic to new file
  • Keep everything in one file, add new UriTemplates
    • Pros: All URLs in one place, easy to see what uses code, can see evolution in one place
    • Cons: Likely to clutter code, increase maintenance burden over time
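For example, the "new set of services" option might look like this in ASP.NET Core, with the old controller left untouched and the new version added under its own URL path (a sketch; controller and route names are illustrative):

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// The existing v1 endpoint stays exactly where it is
[ApiController]
[Route("api/v1/orders")]
public class OrdersV1Controller : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> List() => new[] { "order-1" };
}

// The new v2 endpoint lives alongside it with its own URL path,
// which allows for more natural deletion of v1 later
[ApiController]
[Route("api/v2/orders")]
public class OrdersV2Controller : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> List() => new[] { "order-1", "order-2" };
}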

Support Plans for Versioning

  • Need an SLA between service and consumers
  • Need to define
    • How long a consumer can depend on a version
    • What constitutes end of life for a version
  • Need to prepare for
    • Extensions to end of life
    • Presence of older clients
    • Work with client to update to latest

Assets to Create to Assist/Reduce Burden

  • Documentation on wire-level formats
    • XSD, HTML documents, sample code
  • SDKs in client languages
    • Java, Ruby, PHP, Python, .NET
    • Can ease migration
      • The client adopts the new library and fixes errors until it compiles cleanly
      • The versioning issue becomes a deployment issue
  • Support staff
    • Dedicate staff to help support clients who are migrating code.

Summary

  • Versioning: the thoughtful application of changes to a system already in production.
  • Plan ahead
    • URL structure, version names, use MVC pattern
  • Changes hit in three related areas: data, SOAP methods, REST
  • Manage changes
  • Create new endpoints for new data types
  • Use Data Transformation Objects
  • Keep logic to translate between Data Transformation Objects and business in one place

References

Versioning Strategies by Microsoft

Best Practices: Data Contract Versioning

ASP.NET Core RESTful Web API versioning made easy by Scott Hanselman


Search every table and field in a SQL Server Database

One thing I do like about MySQL when using phpMyAdmin is the ability to search the whole database for any string. This should only be available to developers who are searching for data in an extensive database. It does take some time to run, but remember it has a lot to do.

Here is the script that Narayana Vyas Kondreddi has produced, which does the trick and works very well.

CREATE PROC SearchAllTables
(
    @SearchStr nvarchar(100)
)
AS
BEGIN

    -- Copyright © 2002 Narayana Vyas Kondreddi. All rights reserved.
    -- Purpose: To search all columns of all tables for a given search string
    -- Written by: Narayana Vyas Kondreddi
    -- Site: http://vyaskn.tripod.com
    -- Updated and tested by Tim Gaunt
    -- http://www.thesitedoctor.co.uk
    -- http://blogs.thesitedoctor.co.uk/tim/2010/02/19/Search+Every+Table+And+Field+In+A+SQL+Server+Database+Updated.aspx
    -- Tested on: SQL Server 7.0, SQL Server 2000, SQL Server 2005 and SQL Server 2010
    -- Date modified: 03rd March 2011 19:00 GMT

    CREATE TABLE #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))

    SET NOCOUNT ON

    DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
    SET @TableName = ''
    SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%', '''')

    WHILE @TableName IS NOT NULL
    BEGIN
        SET @ColumnName = ''
        SET @TableName =
        (
            SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
            FROM INFORMATION_SCHEMA.TABLES
            WHERE TABLE_TYPE = 'BASE TABLE'
                AND QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @TableName
                AND OBJECTPROPERTY(
                        OBJECT_ID(
                            QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
                        ), 'IsMSShipped'
                    ) = 0
        )

        WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
        BEGIN
            SET @ColumnName =
            (
                SELECT MIN(QUOTENAME(COLUMN_NAME))
                FROM INFORMATION_SCHEMA.COLUMNS
                WHERE TABLE_SCHEMA = PARSENAME(@TableName, 2)
                    AND TABLE_NAME = PARSENAME(@TableName, 1)
                    AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
                    AND QUOTENAME(COLUMN_NAME) > @ColumnName
            )

            IF @ColumnName IS NOT NULL
            BEGIN
                INSERT INTO #Results
                EXEC
                (
                    'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630) FROM ' + @TableName + ' (NOLOCK) ' +
                    ' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
                )
            END
        END
    END

    SELECT ColumnName, ColumnValue FROM #Results

    DROP TABLE #Results
END
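To run it, pass the string you are searching for, for example EXEC SearchAllTables 'Smith'; every table, column, and value containing that string will then be listed from the #Results table.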

Original post

DataSets with Entity framework

Why, why, why I hear you say would you want to use Datasets with the Entity Framework?

I’m not here to debate the whys and wherefores for doing this, just finding a simple solution to the problem.

99.9% of the time I would say don’t, but if you have a legacy system and all the code is driven by DataSets, you have a few choices:

  1. Rewrite all the business logic code to support EF
  2. Use an old-style connection to the database
  3. Use DataSets via the EF connection

Each has its issues: the first could take some time and would need full testing to ensure nothing has changed.

With an old-style database connection, you might as well just keep using the old legacy code.

What I’m going to cover here is calling a Stored Procedure using the Entity Framework and returning a DataSet. This way the legacy code remains the same, but you can utilise the Entity Framework for your connection.

Let’s get to work. First, we will create a Database Manager that receives the Entity Framework DbContext:

public class DatabaseManager
{
    private readonly DbContext _context;

    public DatabaseManager(DbContext context)
    {
        _context = context;
    }
}

Then we need to call a Stored Procedure using the Entity Framework DbContext connection:

new SqlConnection(_context.Database.Connection.ConnectionString)

I won’t go over creating the command with its parameters, as it is assumed you already know how that works. What I will mention is the SqlDataAdapter, which calls the Stored Procedure and fills the DataSet:

using (var adapter = new SqlDataAdapter(command))
{
    adapter.Fill(dataSet);
}

All you have to do now is return the dataSet and you are done. It is that easy; here is the full code base: DataSetEF
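Putting the pieces together, a minimal sketch of the full manager might look like this (this assumes EF6, where Database.Connection exposes the connection string; the ExecuteStoredProcedure method name and shape are hypothetical):

using System.Data;
using System.Data.Entity;
using System.Data.SqlClient;

public class DatabaseManager
{
    private readonly DbContext _context;

    public DatabaseManager(DbContext context)
    {
        _context = context;
    }

    // Calls a stored procedure over the EF connection and returns a
    // DataSet for the legacy code to consume unchanged
    public DataSet ExecuteStoredProcedure(string storedProcedureName, params SqlParameter[] parameters)
    {
        var dataSet = new DataSet();

        using (var connection = new SqlConnection(_context.Database.Connection.ConnectionString))
        using (var command = new SqlCommand(storedProcedureName, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddRange(parameters);

            // Fill opens and closes the connection itself
            using (var adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(dataSet);
            }
        }

        return dataSet;
    }
}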

Choosing the Boring Solution

When solving a complex problem, it’s sometimes very tempting to choose a clever solution. Not only will this solve the problem – it will also impress your teammates and bring you everlasting glory.

An even better solution, however, would probably be a boring one. One that is easy to understand, even for your junior colleagues. One that is well-known for your platform and language. One that is not extraordinary in any way.

Using such a boring solution means that you’re using simple vocabulary, which significantly increases the chances of everybody understanding you. This will make it a lot easier for everybody to follow along – including new team members (and yourself, a couple of weeks after you’ve written that code).

Aiming for Simplicity

Acronyms FTW! Car enthusiasts might now think of BMW, but software developers should think of KISS and YAGNI.

“Keep it simple, stupid” should remind us that the simple solution will always beat the overengineered solution. The reasons why this is true are almost endless. And they might be easier to understand when looking at the opposite: complex code.

  • Complex code is a perfect hiding place for mistakes.
  • Complex code is hard to understand, for your coworkers and yourself.
  • Complex code cannot easily be extended.
  • Complex code cannot be reused.

And, last but certainly not least, writing complex code will cause your teammates to brand you as an outlaw. When a more straightforward solution seems sufficient right now, you should always aim for it.

“Writing complex code will cause your teammates to brand you as an outlaw.”

When you realise that part of your solution isn’t essential at all, you should drop it. This is what YAGNI is about – “You ain’t gonna need it” reminds us to stay modest when planning the volume and scope of our implementations. Will users need this feature? Will they need this option within a feature? These questions, of course, translate to our code: will we need that class/module/routine?