Bottlenecks and Monitoring

I've come across this fab system monitoring tool, Spotlight for Windows

Spotlight on Windows empowers you to quickly identify and eliminate bottlenecks in the Windows environment

For a busy IT professional, it is nearly impossible to diagnose, troubleshoot, and resolve every component affecting Windows Operating System (OS) performance using manual methods. Without a clear view of I/O and system activity, you learn of performance issues only after problems erupt.

With its unique graphical view of the Windows OS internals, Spotlight on Windows empowers the user to rapidly identify and eliminate bottlenecks in the Windows environment. By displaying the real-time flow of data within your Windows OS, Spotlight enables you to quickly identify and resolve performance problems.

Limitations:

Spotlight on Windows is unsupported freeware. The license key is provided in the download package and expires one year after installation. To renew, please revisit this website. An activation key will be made available prior to the expiration date of your current key.

http://www.quest.com/spotlight-on-windows/

Examples of using System.Data.ParameterDirection with SQL Server

When working with the SqlCommand object you will no doubt have to supply parameters to your command, whether it is a Stored Procedure or a Text command.  Although not as efficient as an actual Stored Procedure, there are some powerful things you can achieve using values of System.Data.ParameterDirection other than just Input.

The enumeration holds the following values:

  • Input
  • InputOutput
  • Output
  • ReturnValue

In this article I will give a working example of each.  They are simple, but you will see the idea.  At the end I will build a custom object which I will populate using some of these directions.

For this example I am working with the AdventureWorks database for SQL Server 2005.

  

Input

The most common direction.

            using (System.Data.SqlClient.SqlConnection sc1 =
                new System.Data.SqlClient.SqlConnection(@"Data Source=REA_ANDREW-PC\SQLEXPRESS;Initial Catalog=AdventureWorks;"+
                    "Integrated Security=True"))
            {
                sc1.Open();
                using (System.Data.SqlClient.SqlCommand command1 = new System.Data.SqlClient.SqlCommand())
                {
                    command1.CommandType = CommandType.Text;
                    command1.Connection = sc1;
                    // DIRECTION :: Input
                    command1.CommandText = "INSERT INTO [Purchasing].[ShipMethod] (Name,ShipBase,ShipRate)"+
                        "VALUES (@Name,@ShipBase,@ShipRate)";
                    //Method 1
                    command1.Parameters.AddWithValue("@Name", "MyShippingMethod");
                    //Method 2
                    System.Data.SqlClient.SqlParameter parameter2 =
                        new System.Data.SqlClient.SqlParameter("@ShipBase", 10.00M);
                    parameter2.Direction = ParameterDirection.Input;
                    command1.Parameters.Add(parameter2);
                    //Method 3 //Deprecated for AddWithValue
                    command1.Parameters.Add("@ShipRate", 10.00M);

                    command1.ExecuteNonQuery();
                }
            }

 

Here I have displayed three different syntactical ways of adding parameters to your SqlCommand.  The third and final way is deprecated, and you will see this if you try it inside Visual Studio.  Because of this it will not appear inside IntelliSense.
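For reference, here is a sketch of the non-deprecated equivalents of that third call. The parameter names and values are taken from the example above; no connection is opened, since nothing is executed:

```csharp
using System.Data;
using System.Data.SqlClient;

class ParameterAddSketch
{
    static void Main()
    {
        using (SqlCommand command1 = new SqlCommand())
        {
            // Either let AddWithValue infer the SQL type from the CLR decimal...
            command1.Parameters.AddWithValue("@ShipRate", 10.00M);

            // ...or declare the type explicitly and set the value afterwards.
            command1.Parameters.Add("@ShipBase", SqlDbType.Money).Value = 10.00M;
        }
    }
}
```

The explicit Add(name, SqlDbType) form avoids any ambiguity about how the SQL type is inferred from the value.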

  

Output

The Output direction is going to look like the above example, only I will not give the parameter an initial value and will simply assign the value inside the command text.

            using (System.Data.SqlClient.SqlConnection sc1 =
                new System.Data.SqlClient.SqlConnection(@"Data Source=REA_ANDREW-PC\SQLEXPRESS;Initial Catalog=AdventureWorks;"+
                    "Integrated Security=True"))
            {
                sc1.Open();
                using (System.Data.SqlClient.SqlCommand command1 = new System.Data.SqlClient.SqlCommand())
                {
                    command1.CommandType = CommandType.Text;
                    command1.Connection = sc1;
                    // DIRECTION :: Output
                    command1.CommandText = "select @MyParameter = Count(*) FROM [Purchasing].[ShipMethod]";
                    System.Data.SqlClient.SqlParameter parameter1 = command1.Parameters.Add("@MyParameter", SqlDbType.Int);
                    parameter1.Direction = ParameterDirection.Output;
                    command1.ExecuteNonQuery();
                    //The following value is now 6, the number of records inside the table
                    int newValue = (int)parameter1.Value;
                }
            }

  

Return Value

The return value is something which you return at the end of the statement(s).  For this example I need to create a short stored procedure; again I will simply return the count of records from the table:

ALTER PROCEDURE CountRows
AS
Declare @CountR int
SELECT @CountR = Count(*) FROM [Purchasing].[ShipMethod]
RETURN @CountR

And so the function which will now work with this, and using the ReturnValue direction is as follows:

            using (System.Data.SqlClient.SqlConnection sc1 =
                new System.Data.SqlClient.SqlConnection(@"Data Source=REA_ANDREW-PC\SQLEXPRESS;Initial Catalog=AdventureWorks;"+
                    "Integrated Security=True"))
            {
                sc1.Open();
                using (System.Data.SqlClient.SqlCommand command1 = new System.Data.SqlClient.SqlCommand())
                {
                    command1.CommandType = CommandType.StoredProcedure;
                    command1.Connection = sc1;
                    // DIRECTION :: ReturnValue
                    command1.CommandText = "CountRows";
                    System.Data.SqlClient.SqlParameter parameter1 = command1.Parameters.Add("@CountR", SqlDbType.Int);
                    parameter1.Direction = ParameterDirection.ReturnValue;
                    command1.ExecuteNonQuery();
                    //The following value is now 6, the number of records inside the table
                    int newValue = (int)parameter1.Value;
                }
            }

 

  

The simple example object

I now want to use these counting methods to construct a sample object which will simply contain:

  1. Total Record Count
  2. List<string> of Shipping Names which can be bound to

This is just for an example; I realise (lol) that I could simply use the List<string> Count property, BUT think of it like this: what if you wanted to create a paged object, so the list actually only contains say ten items because of a page size you set, BUT you still have knowledge of the total number of records through the Total Record Count property, from which you could calculate the total number of pages! 🙂
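To make that page calculation concrete, here is a quick sketch. The helper name is my own and is not part of the object below:

```csharp
using System;

class PagingMath
{
    // Total pages = total records divided by page size, rounded up.
    public static int TotalPages(int totalRecords, int pageSize)
    {
        return (totalRecords + pageSize - 1) / pageSize; // integer ceiling
    }

    static void Main()
    {
        Console.WriteLine(PagingMath.TotalPages(6, 10));  // prints 1
        Console.WriteLine(PagingMath.TotalPages(25, 10)); // prints 3
    }
}
```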

 

The object

namespace WindowsForm_Examples_NET_2
{
    class SimpleObjectOne
    {
        private int _recordCount;
        private List<string> _shippingNames;

        public List<string> ShippingNames
        {
            get { return _shippingNames; }
            set { _shippingNames = value; }
        }

        public int RecordCount
        {
            get { return _recordCount; }
            set { _recordCount = value; }
        }
        public SimpleObjectOne()
        {
            _shippingNames = new List<string>();
        }
        [System.ComponentModel.DataObjectMethod(
            System.ComponentModel.DataObjectMethodType.Select)]
        public static SimpleObjectOne GetSimpleObjectOne()
        {
            SimpleObjectOne objectOne = new SimpleObjectOne();
            objectOne.Execute();
            return objectOne;
        }

        private void Execute()
        {
            using (System.Data.SqlClient.SqlConnection sc1 =
    new System.Data.SqlClient.SqlConnection(@"Data Source=REA_ANDREW-PC\SQLEXPRESS;Initial Catalog=AdventureWorks;" +
        "Integrated Security=True"))
            {
                sc1.Open();
                using (System.Data.SqlClient.SqlCommand command1 = new System.Data.SqlClient.SqlCommand())
                {
                    command1.CommandType = System.Data.CommandType.Text;
                    command1.Connection = sc1;
                    // DIRECTION :: Output
                    command1.CommandText = "SET @CountRows = (SELECT Count(*) as COUNTR FROM [Purchasing].[ShipMethod]);" +
                        "SELECT Name FROM [Purchasing].[ShipMethod];";
                    System.Data.SqlClient.SqlParameter parameter1 =
                        command1.Parameters.Add("@CountRows", System.Data.SqlDbType.Int);
                    parameter1.Direction = System.Data.ParameterDirection.Output;
                    using (System.Data.SqlClient.SqlDataReader reader = command1.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            _shippingNames.Add(reader["Name"] as string);
                        }
                    }
                    //The following value is now 6, the number of records inside the table
                    RecordCount = (int)parameter1.Value;
                }
            }
        }
    }
}

An important note to make here: if you are using ExecuteReader on a statement and pass in Output parameters, you must close the data reader before you can access those output parameter values.  In the above you will see that I encase the data reader in a using statement block, which closes it as soon as the block exits.

And finally to consume this example I use the following three lines:

            SimpleObjectOne newObject = SimpleObjectOne.GetSimpleObjectOne();
            int RecordCount = newObject.RecordCount;
            List<string> names = newObject.ShippingNames;
 

I now have 6 as the record count and a list of shipping names.

C# DateTime.MinValue is not the same as SqlDateTime.MinValue

Working with non-nullable types in C# can be a bit of a pain. For instance, when I have a date as a string and need to parse it into a DateTime, what should the value be if the parse fails? I can't use null because DateTime is not a nullable type.

This is exactly the dilemma I encountered today. No worries, I thought: I'll use DateTime.MinValue; that way it is a constant and I don't have to worry about being consistent, as I would if I had chosen an arbitrary value of my own.

Well, as it turns out, I did have some worries. SQL Server 2000's minimum DateTime value is not the same; in fact it is quite different, and this kept causing errors.

For your reference here are some values you should take note of.

DateTime myDate = DateTime.MinValue; //=> 1/1/0001
SqlDateTime mySqlDate = SqlDateTime.MinValue; //=> 1/1/1753
//also note that SQL Server's smalldatetime min value is 1/1/1900

If you need to use smalldatetime, you will need to create your own property to handle this.
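A minimal sketch of such a property. The class and property names are my own; the 1900-01-01 floor is smalldatetime's documented minimum, for which there is no built-in constant:

```csharp
using System;

class Record
{
    // smalldatetime's documented minimum; there is no built-in constant for it.
    static readonly DateTime SmallDateTimeMin = new DateTime(1900, 1, 1);

    private DateTime _modified;

    // Clamp anything below the smalldatetime range up to its minimum,
    // so the value is always safe to send to a smalldatetime column.
    public DateTime Modified
    {
        get { return _modified < SmallDateTimeMin ? SmallDateTimeMin : _modified; }
        set { _modified = value; }
    }

    static void Main()
    {
        Record r = new Record();
        r.Modified = DateTime.MinValue;     // 1/1/0001 would overflow smalldatetime
        Console.WriteLine(r.Modified.Year); // prints 1900
    }
}
```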

So my problem was easily averted after a quick Google search: I just had to use SqlDateTime.MinValue instead of DateTime.MinValue.

How to debug stored procedures in Visual Studio .NET

SUMMARY

This step-by-step article explains two ways that you can debug SQL Server stored procedures and the necessary configuration settings and steps for each approach.

A Visual Studio .NET developer can use the Server Explorer to debug SQL Server stored procedures independently of any Visual Studio project, or the developer can step into the code of the stored procedure directly from managed code in a Visual Basic, Visual C#, or Visual J# project.

Debug a stored procedure in standalone mode

1. Open Server Explorer.

NOTE: It is not necessary to add a Data Connection to work with a SQL Server server because SQL Server servers are also listed under the Servers node. You will use the Servers node in the steps that follow; however, you can use a Data Connection to your SQL Server server in the same way.

2. Under the Servers node in Server Explorer, expand the SQL Server machine name, expand the SQL Servers node, expand the SQL Server instance, expand the Northwind database node, and then expand the stored procedures node.

3. Right-click the CustOrderHist stored procedure and then click Step Into Stored Procedure.

4. The Run stored procedure dialog box opens, which lists the parameters of the stored procedure. Type ALFKI as the value for the @CustomerID input parameter and then click OK.

5. In the Visual Studio design environment, a window opens that displays the text of the stored procedure. The first executable line of the stored procedure is highlighted. Press F11 to step through the stored procedure to completion.

6. In the Output window, the following message is displayed, which indicates successful execution:
The program 'SQL Debugger: T-SQL' has exited with code 0 (0x0). 

Best Windows Backup Tool

Carbonite

Well, I've been using Carbonite for over 2 years now, and with almost 100 GB backed up online it can't be bad.  Carbonite runs in the background all the time, so it is always backing up your data files.  It's a don't-worry type of backup: just install it and leave it.

If I find I need more processing speed, or I hear the hard drive working hard, I can pause Carbonite for 24 hours, which stops its searching for changed files.

It's so simple, and restoring files is as easy as copying files from one drive to another, as Carbonite creates a virtual drive of the backed-up files.

Carbonite

Structure to Stored Procedure

Everywhere I go, from client to client, I find that developers create Stored Procedures that just perform the task in hand; so often no exception handling, comments or transaction-based handling has been put in place.  With all this in mind I set out to write a structured Stored Procedure to be used as a template for future Stored Procedures that I create.

-- =============================================
-- Author:      Bryan Avery
-- Create date: 1st June 2007
-- Description: Creates a card if it does not exist against a PersonID
-- =============================================
ALTER PROCEDURE [dbo].[People_Person_Add_Card]
    @person_id Int,
    @Card_Number NVarChar(50),
    @Card_Type_id Int,
    @CardStart DateTime,
    @CardEnd DateTime,
    @UpdatedBy Int,
    @Debug bit = 0,
    @Card_NumberId Int OUTPUT,
    @ErrorMessage varchar(2000) OUTPUT,
    @return_code Int OUTPUT
AS
BEGIN
    -- ==========================
    -- Declare SProc variables
    -- ==========================
    DECLARE @Error int,
            @RowCount int

    -- ==========================
    -- Initialise SProc variables
    -- ==========================
    SET NOCOUNT ON;
    SET @Card_NumberId = 0

    -- ==========================
    -- Check SProc parameters
    -- ==========================
    IF Len(RTrim(IsNull(@Card_Number, ''))) = 0
    BEGIN
        SET @ErrorMessage = 'Invalid value of parameter:' + ' @Card_Number=' + IsNull(Cast(@Card_Number as varchar), '<NULL>')
        GOTO QuitWithParameterError
    END

    -- ==========================
    -- SProc content
    -- ==========================
    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        DECLARE @Debug_ProcStartTime datetime,
                @Debug_ProcEndTime datetime

        SET @Debug_ProcStartTime = Getdate()
        SET NOCOUNT OFF

        PRINT ''
        PRINT '-- =========================='
        PRINT '-- = PROCEDURE People_Person_Add_Card'
        PRINT '-- = PARAMETERS'
        PRINT '-- = @person_id    = ' + IsNull(Cast(@person_id as varchar), '<NULL>')
        PRINT '-- = @Card_Number  = ' + IsNull(@Card_Number, '<NULL>')
        PRINT '-- = @Card_Type_id = ' + IsNull(Cast(@Card_Type_id as varchar), '<NULL>')
        PRINT '-- = @CardStart    = ' + IsNull(Cast(@CardStart as varchar), '<NULL>')
        PRINT '-- = @CardEnd      = ' + IsNull(Cast(@CardEnd as varchar), '<NULL>')
        PRINT '-- = @UpdatedBy    = ' + IsNull(Cast(@UpdatedBy as varchar), '<NULL>')
        PRINT '-- = @Debug        = ' + IsNull(Cast(@Debug as varchar), '<NULL>')
        PRINT '-- =========================='
        PRINT ''
    END

    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        PRINT '-- ******************************'
        PRINT '-- * BEGIN TRANSACTION'
        PRINT '-- ******************************'
    END

    BEGIN TRANSACTION

    BEGIN TRY
        -- Update all the old cards, by setting them to deleted
        UPDATE [ccs_people].[dbo].[card]
        SET [last_updated_by] = @UpdatedBy,
            [is_deleted] = 'true'
        WHERE card_id in (SELECT card.card_id
                          FROM card INNER JOIN person_card
                               ON card.card_id = person_card.card_id
                          WHERE person_card.person_id = @person_id)
    END TRY
    BEGIN CATCH
        SELECT @Error = @@ERROR, @RowCount = @@ROWCOUNT; IF (@Error <> 0 OR @RowCount = 0) GOTO QuitWithRollback -- (check for error)
    END CATCH

    BEGIN TRY
        -- Create the card
        INSERT INTO [ccs_people].[dbo].[card]
            ([card_number]
            ,[card_type_id]
            ,[starts_at]
            ,[ends_at]
            ,[is_active]
            ,[last_updated_datetime]
            ,[last_updated_by]
            ,[is_deleted])
        VALUES
            (@Card_Number
            ,@Card_Type_id
            ,@CardStart
            ,@CardEnd
            ,'true'
            ,GetDate()
            ,@UpdatedBy
            ,'false')
    END TRY
    BEGIN CATCH
        SELECT @Error = @@ERROR, @RowCount = @@ROWCOUNT; IF (@Error <> 0 OR @RowCount = 0) GOTO QuitWithConcurrencyRollback -- (check for error)
    END CATCH

    BEGIN TRY
        SET @Card_NumberId = SCOPE_IDENTITY()

        -- Create the link between the person and the card
        INSERT INTO [ccs_people].[dbo].[person_card]
            ([person_id]
            ,[card_id]
            ,[last_updated_datetime]
            ,[last_updated_by]
            ,[is_deleted])
        VALUES
            (@person_id
            ,@Card_NumberId
            ,GetDate()
            ,@UpdatedBy
            ,'false')
    END TRY
    BEGIN CATCH
        SELECT @Error = @@ERROR, @RowCount = @@ROWCOUNT; IF (@Error <> 0 OR @RowCount = 0) GOTO QuitWithRollback -- (check for error)
    END CATCH

    -- ------------------------------------------
    -- Commit Transaction
    -- ------------------------------------------
    IF (@@TRANCOUNT > 0)
    BEGIN
        IF (IsNull(@Debug, 0) = 1) -- (debug info)
        BEGIN
            PRINT '-- ******************************'
            PRINT '-- * COMMIT TRANSACTION'
            PRINT '-- ******************************'
        END
        COMMIT TRANSACTION
    END

    GOTO QuitWithSuccess

-- ------------------------------------------
-- Error Handling
-- ------------------------------------------
QuitWithConcurrencyRollback:
    IF (@@TRANCOUNT > 0)
    BEGIN
        IF (IsNull(@Debug, 0) = 1) -- (debug info)
        BEGIN
            PRINT '-- ******************************'
            PRINT '-- * ROLLBACK TRANSACTION'
            PRINT '-- ******************************'
        END
        ROLLBACK TRANSACTION
    END
    GOTO QuitWithConcurrencyError

--------------------------------------------------
QuitWithRollback:
    IF (@@TRANCOUNT > 0)
    BEGIN
        IF (IsNull(@Debug, 0) = 1) -- (debug info)
        BEGIN
            PRINT '-- ******************************'
            PRINT '-- * ROLLBACK TRANSACTION'
            PRINT '-- ******************************'
        END
        ROLLBACK TRANSACTION
    END
    GOTO QuitWithError

--------------------------------------------------
QuitWithConcurrencyError:
    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        SET @Debug_ProcEndTime = Getdate()
        PRINT ''
        PRINT '-- =========================='
        PRINT '-- = People_Person_Add_Card'
        PRINT '-- = RETURN VALUE'
        PRINT '-- = QuitWithConcurrencyError'
        PRINT '-- = @ErrorMessage = ' + IsNull(Cast(@ErrorMessage as varchar), '<NULL>')
        PRINT '-- = EXECUTION TIME'
        PRINT '-- = Start = ' + IsNull(Convert(varchar(24), @Debug_ProcStartTime, 113), '<NULL>')
        PRINT '-- = End   = ' + IsNull(Convert(varchar(24), @Debug_ProcEndTime, 113), '<NULL>')
        PRINT '-- = Elapsed Time (ms) = ' + IsNull(Cast(Datediff(millisecond, @Debug_ProcStartTime, @Debug_ProcEndTime) as varchar), '<NULL>')
        PRINT '-- =========================='
        PRINT ''
    END
    SET @ErrorMessage = 'Concurrency Error'
    SET @return_code = 3
    RETURN(3)

--------------------------------------------------
QuitWithError:
    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        SET @Debug_ProcEndTime = Getdate()
        PRINT ''
        PRINT '-- =========================='
        PRINT '-- = People_Person_Add_Card'
        PRINT '-- = RETURN VALUE'
        PRINT '-- = QuitWithError'
        PRINT '-- = @Error = ' + IsNull(Cast(@Error as varchar), '<NULL>')
        PRINT '-- = @return_code = ' + IsNull(Cast(@return_code as varchar), '<NULL>')
        PRINT '-- = EXECUTION TIME'
        PRINT '-- = Start = ' + IsNull(Convert(varchar(24), @Debug_ProcStartTime, 113), '<NULL>')
        PRINT '-- = End   = ' + IsNull(Convert(varchar(24), @Debug_ProcEndTime, 113), '<NULL>')
        PRINT '-- = Elapsed Time (ms) = ' + IsNull(Cast(Datediff(millisecond, @Debug_ProcStartTime, @Debug_ProcEndTime) as varchar), '<NULL>')
        PRINT '-- =========================='
        PRINT ''
    END
    SET @ErrorMessage = 'SQL Error'
    SET @return_code = 2
    RETURN(2)

--------------------------------------------------
QuitWithParameterError:
    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        SET @Debug_ProcEndTime = Getdate()
        PRINT ''
        PRINT '-- =========================='
        PRINT '-- = People_Person_Add_Card'
        PRINT '-- = RETURN VALUE'
        PRINT '-- = QuitWithParameterError'
        PRINT '-- = @ErrorMessage = ' + IsNull(Cast(@ErrorMessage as varchar), '<NULL>')
        PRINT '-- = EXECUTION TIME'
        PRINT '-- = Start = ' + IsNull(Convert(varchar(24), @Debug_ProcStartTime, 113), '<NULL>')
        PRINT '-- = End   = ' + IsNull(Convert(varchar(24), @Debug_ProcEndTime, 113), '<NULL>')
        PRINT '-- = Elapsed Time (ms) = ' + IsNull(Cast(Datediff(millisecond, @Debug_ProcStartTime, @Debug_ProcEndTime) as varchar), '<NULL>')
        PRINT '-- =========================='
        PRINT ''
    END
    SET @return_code = 1
    RETURN(1)

--------------------------------------------------
QuitWithSuccess:
    IF (IsNull(@Debug, 0) = 1) -- (debug info)
    BEGIN
        SET @Debug_ProcEndTime = Getdate()
        PRINT ''
        PRINT '-- =========================='
        PRINT '-- = People_Person_Add_Card'
        PRINT '-- = RETURN VALUE'
        PRINT '-- = QuitWithSuccess'
        PRINT '-- = EXECUTION TIME'
        PRINT '-- = Start = ' + IsNull(Convert(varchar(24), @Debug_ProcStartTime, 113), '<NULL>')
        PRINT '-- = End   = ' + IsNull(Convert(varchar(24), @Debug_ProcEndTime, 113), '<NULL>')
        PRINT '-- = Elapsed Time (ms) = ' + IsNull(Cast(Datediff(millisecond, @Debug_ProcStartTime, @Debug_ProcEndTime) as varchar), '<NULL>')
        PRINT '-- =========================='
        PRINT ''
    END
    SET @return_code = 0
    RETURN(0)
-- --------------------------------------------------
END
  

StoredProcedure.SQL (15.39 kb)

The SqlCommand Object

I found this article on C# Station; it gives a very good insight into the SqlCommand object. Enjoy:

This lesson describes the SqlCommand object and how you use it to interact with a data base.  Here are the objectives of this lesson:

  • Know what a command object is.
  • Learn how to use the ExecuteReader method to query data.
  • Learn how to use the ExecuteNonQuery method to insert and delete data.
  • Learn how to use the ExecuteScalar method to return a single value.

Introduction

A SqlCommand object allows you to specify what type of interaction you want to perform with a database.  For example, you can do select, insert, modify, and delete commands on rows of data in a database table.  The SqlCommand object can be used to support disconnected data management scenarios, but in this lesson we will only use the SqlCommand object alone.  A later lesson on the SqlDataAdapter will explain how to implement an application that uses disconnected data.  This lesson will also show you how to retrieve a single value from a database, such as the number of records in a table.

Creating a SqlCommand Object

Similar to other C# objects, you instantiate a SqlCommand object via the new instance declaration, as follows:

    SqlCommand cmd = new SqlCommand("select CategoryName from Categories", conn);

The line above is typical for instantiating a SqlCommand object.  It takes a string parameter that holds the command you want to execute and a reference to a SqlConnection object.  SqlCommand has a few overloads, which you will see in the examples of this tutorial.

Querying Data

When using a SQL select command, you retrieve a data set for viewing.  To accomplish this with a SqlCommand object, you would use the ExecuteReader method, which returns a SqlDataReader object.  We'll discuss the SqlDataReader in a future lesson.  The example below shows how to use the SqlCommand object to obtain a SqlDataReader object:

// 1. Instantiate a new command with a query and connection
SqlCommand cmd = new SqlCommand("select CategoryName from Categories", conn);

// 2. Call Execute reader to get query results
SqlDataReader rdr = cmd.ExecuteReader();

In the example above, we instantiate a SqlCommand object, passing the command string and connection object to the constructor.  Then we obtain a SqlDataReader object by calling the ExecuteReader method of the SqlCommand object, cmd. 

This code is part of the ReadData method of Listing 1 in the Putting it All Together section later in this lesson.

Inserting Data

To insert data into a database, use the ExecuteNonQuery method of the SqlCommand object.  The following code shows how to insert data into a database table:

// prepare command string
 string insertString = @"
     insert into Categories
     (CategoryName, Description)
     values ('Miscellaneous', 'Whatever doesn''t fit elsewhere')";
 
 // 1. Instantiate a new command with a query and connection
 SqlCommand cmd = new SqlCommand(insertString, conn);
 
 // 2. Call ExecuteNonQuery to send command
 cmd.ExecuteNonQuery();

The SqlCommand instantiation is just a little different from what you've seen before, but it is basically the same.  Instead of a literal string as the first parameter of the SqlCommand constructor, we are using a variable, insertString.  The insertString variable is declared just above the SqlCommand declaration. 

Notice the two apostrophes ('') in the insertString text for the word "doesn''t".  This is how you escape the apostrophe so that the string populates the column properly.
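As an aside (my addition, not part of the original lesson): passing the text as a parameter sidesteps the escaping issue entirely, because the value never passes through the SQL text. A sketch, assuming `conn` is an open SqlConnection as in the lesson's listing:

```csharp
using System.Data.SqlClient;

class ParameterizedInsertSketch
{
    // "conn" is assumed to be an open SqlConnection, as in the lesson's listing.
    static void InsertCategory(SqlConnection conn)
    {
        SqlCommand cmd = new SqlCommand(
            "insert into Categories (CategoryName, Description) values (@name, @desc)", conn);
        cmd.Parameters.AddWithValue("@name", "Miscellaneous");
        // No doubled apostrophe needed; the provider handles the value safely.
        cmd.Parameters.AddWithValue("@desc", "Whatever doesn't fit elsewhere");
        cmd.ExecuteNonQuery();
    }
}
```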

Another observation to make about the insert command is that we explicitly specified the columns CategoryName and Description.  The Categories table has a primary key field named CategoryID.  We left this out of the list because SQL Server will add this field itself; trying to add a value to a primary key field such as CategoryID will generate an exception.

To execute this command, we simply call the ExecuteNonQuery method on the SqlCommand instance, cmd.

This code is part of the Insertdata method of Listing 1 in the Putting it All Together section later in this lesson.

Updating Data

The ExecuteNonQuery method is also used for updating data.  The following code shows how to update data:

// prepare command string
 string updateString = @"
     update Categories
     set CategoryName = 'Other'
     where CategoryName = 'Miscellaneous'";
 
 // 1. Instantiate a new command with command text only
 SqlCommand cmd = new SqlCommand(updateString);
 
 // 2. Set the Connection property
 cmd.Connection = conn;
 
 // 3. Call ExecuteNonQuery to send command
 cmd.ExecuteNonQuery();

Again, we put the SQL command into a string variable, but this time we used a different SqlCommand constructor that takes only the command.  In step 2, we assign the SqlConnection object, conn, to the Connection property of the SqlCommand object, cmd. 

This could have been done with the same constructor used for the insert command, with two parameters.  It demonstrates that you can change the connection object assigned to a command at any time.

The ExecuteNonQuery method performs the update command.

This code is part of the UpdateData method of Listing 1 in the Putting it All Together section later in this lesson.

Deleting Data

You can also delete data using the ExecuteNonQuery method.  The following example shows how to delete a record from a database with the ExecuteNonQuery method:

// prepare command string
 string deleteString = @"
     delete from Categories
     where CategoryName = 'Other'";
 
 // 1. Instantiate a new command
 SqlCommand cmd = new SqlCommand();
 
 // 2. Set the CommandText property
 cmd.CommandText = deleteString;
 
 // 3. Set the Connection property
 cmd.Connection = conn;
 
 // 4. Call ExecuteNonQuery to send command
 cmd.ExecuteNonQuery();

This example uses the SqlCommand constructor with no parameters.  Instead, it explicitly sets the CommandText and Connection properties of the SqlCommand object, cmd. 

We could have also used either of the two previous SqlCommand constructor overloads, used for the insert or update command, with the same result.  This demonstrates that you can change both the command text and the connection object at any time. 

The ExecuteNonQuery method call sends the command to the database.

This code is part of the DeleteData method of Listing 1 in the Putting it All Together section later in this lesson.

Getting Single Values

Sometimes all you need from a database is a single value, which could be a count, sum, average, or other aggregated value from a data set.  Performing an ExecuteReader and calculating the result in your code is not the most efficient way to do this.  The best choice is to let the database perform the work and return just the single value you need.  The following example shows how to do this with the ExecuteScalar method:

// 1. Instantiate a new command
 SqlCommand cmd = new SqlCommand("select count(*) from Categories", conn);
 
 // 2. Call ExecuteNonQuery to send command
 int count = (int)cmd.ExecuteScalar();

The query in the SqlCommand constructor obtains the count of all records from the Categories table.  This query will only return a single value.  The ExecuteScalar method in step 2 returns this value.  Since the return type of ExecuteScalar is type object, we use a cast operator to convert the value to int.
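One caveat worth noting (my addition, not part of the original lesson): ExecuteScalar returns null when the result set is empty, and DBNull.Value when the single value is SQL NULL, so a direct cast can throw on queries other than count(*). A small defensive helper, under those assumptions; the helper name is my own:

```csharp
using System;

static class ScalarHelper
{
    // Convert an ExecuteScalar result to int:
    // null   -> no rows were returned
    // DBNull -> the single value was SQL NULL
    public static int ToInt32OrDefault(object scalar)
    {
        return (scalar == null || scalar == DBNull.Value) ? 0 : Convert.ToInt32(scalar);
    }
}

class ScalarDemo
{
    static void Main()
    {
        Console.WriteLine(ScalarHelper.ToInt32OrDefault(DBNull.Value)); // prints 0
        Console.WriteLine(ScalarHelper.ToInt32OrDefault(9));            // prints 9
    }
}
```

Usage against the example above would be `int count = ScalarHelper.ToInt32OrDefault(cmd.ExecuteScalar());`.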

This code is part of the GetNumberOfRecords method of Listing 1 in the Putting it All Together section later in this lesson.

Putting it All Together

For simplicity, we showed snippets of code in previous sections to demonstrate the applicable techniques.  It is also useful to have an entire code listing to see how this code is used in a working program.  Listing 1 shows all of the code used in this example, along with a driver in the Main method to produce formatted output.

Listing 1.  SqlConnection Demo
 using System;
 using System.Data;
 using System.Data.SqlClient;
 
 /// <summary>
 /// Demonstrates how to work with SqlCommand objects
 /// </summary>
 class SqlCommandDemo
 {
     SqlConnection conn;
 
     public SqlCommandDemo()
     {
         // Instantiate the connection
         conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
     }
 
     // call methods that demo SqlCommand capabilities
     static void Main()
     {
         SqlCommandDemo scd = new SqlCommandDemo();
 
         Console.WriteLine();
         Console.WriteLine("Categories Before Insert");
          Console.WriteLine("------------------------");
 
         // use ExecuteReader method
         scd.ReadData();
 
         // use ExecuteNonQuery method for Insert
         scd.Insertdata();
         Console.WriteLine();
         Console.WriteLine("Categories After Insert");
          Console.WriteLine("------------------------------");
 
        scd.ReadData();
 
         // use ExecuteNonQuery method for Update
         scd.UpdateData();
 
         Console.WriteLine();
         Console.WriteLine("Categories After Update");
          Console.WriteLine("------------------------------");
 
         scd.ReadData();
 
         // use ExecuteNonQuery method for Delete
         scd.DeleteData();
 
         Console.WriteLine();
         Console.WriteLine("Categories After Delete");
          Console.WriteLine("------------------------------");
 
         scd.ReadData();
 
         // use ExecuteScalar method
         int numberOfRecords = scd.GetNumberOfRecords();
 
         Console.WriteLine();
         Console.WriteLine("Number of Records: {0}", numberOfRecords);
     }
 
     /// <summary>
     /// use ExecuteReader method
     /// </summary>
     public void ReadData()
     {
        SqlDataReader rdr = null;
 
         try
         {
             // Open the connection
             conn.Open();
 
             // 1. Instantiate a new command with a query and connection
             SqlCommand cmd = new SqlCommand("select CategoryName from Categories", conn);
 
             // 2. Call Execute reader to get query results
             rdr = cmd.ExecuteReader();
 
             // print the CategoryName of each record
             while (rdr.Read())
             {
                 Console.WriteLine(rdr[0]);
             }
         }
         finally
         {
             // close the reader
             if (rdr != null)
             {
                 rdr.Close();
             }
 
             // Close the connection
             if (conn != null)
             {
                 conn.Close();
             }
         }
     }
 
     /// <summary>
     /// use ExecuteNonQuery method for Insert
     /// </summary>
     public void Insertdata()
     {
         try
         {
             // Open the connection
             conn.Open();
 
             // prepare command string
             string insertString = @"
                 insert into Categories
                 (CategoryName, Description)
                 values ('Miscellaneous', 'Whatever doesn''t fit elsewhere')";
 
             // 1. Instantiate a new command with a query and connection
             SqlCommand cmd = new SqlCommand(insertString, conn);
 
             // 2. Call ExecuteNonQuery to send command
             cmd.ExecuteNonQuery();
         }
         finally
         {
             // Close the connection
             if (conn != null)
             {
                 conn.Close();
             }
         }
     }
 
     /// <summary>
     /// use ExecuteNonQuery method for Update
     /// </summary>
     public void UpdateData()
     {
         try
         {
             // Open the connection
             conn.Open();
 
             // prepare command string
             string updateString = @"
                 update Categories
                 set CategoryName = 'Other'
                 where CategoryName = 'Miscellaneous'";
 
             // 1. Instantiate a new command with command text only
             SqlCommand cmd = new SqlCommand(updateString);
 
             // 2. Set the Connection property
             cmd.Connection = conn;
 
             // 3. Call ExecuteNonQuery to send command
             cmd.ExecuteNonQuery();
        }
         finally
         {
             // Close the connection
             if (conn != null)
             {
                 conn.Close();
             }
         }
     }
 
     /// <summary>
     /// use ExecuteNonQuery method for Delete
     /// </summary>
     public void DeleteData()
     {
         try
         {
             // Open the connection
             conn.Open();
 
             // prepare command string
             string deleteString = @"
                 delete from Categories
                 where CategoryName = 'Other'";
 
             // 1. Instantiate a new command
             SqlCommand cmd = new SqlCommand();
 
             // 2. Set the CommandText property
             cmd.CommandText = deleteString;
 
             // 3. Set the Connection property
             cmd.Connection = conn;
 
             // 4. Call ExecuteNonQuery to send command
             cmd.ExecuteNonQuery();
         }
         finally
         {
             // Close the connection
             if (conn != null)
             {
                 conn.Close();
             }
         }
     }
 
     /// <summary>
     /// use ExecuteScalar method
     /// </summary>
     /// <returns>number of records</returns>
     public int GetNumberOfRecords()
     {
         int count = -1;
 
         try
         {
             // Open the connection
             conn.Open();
 
             // 1. Instantiate a new command
             SqlCommand cmd = new SqlCommand("select count(*) from Categories", conn);
 
            // 2. Call ExecuteScalar to send the command and retrieve the count
             count = (int)cmd.ExecuteScalar();
         }
         finally
         {
            // Close the connection
             if (conn != null)
             {
                 conn.Close();
             }
         }
         return count;
     }
 }

In Listing 1, the SqlConnection object is instantiated in the SqlCommandDemo constructor.  This is okay because the object itself will be cleaned up when the CLR garbage collector executes.  What is important is that we close the connection when we are done using it.  This program opens the connection in a try block and closes it in a finally block in each method.

The ReadData method displays the contents of the CategoryName column of the Categories table.  We use it several times in the Main method to show the current status of the Categories table, which changes after each of the insert, update, and delete commands.  Because of this, it is a convenient way to see the effect of each method call.

Summary

A SqlCommand object allows you to query and send commands to a database.  It has methods that are specialized for different commands.  The ExecuteReader method returns a SqlDataReader object for viewing the results of a select query.  For insert, update, and delete SQL commands, you use the ExecuteNonQuery method.  If you only need a single aggregate value from a query, the ExecuteScalar method is the best choice.
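Building on the examples above, the same count query can also be written with a SqlParameter.  This is a hedged sketch: the filter on CategoryName and the 'Beverages' value are illustrative assumptions, and Input is the default ParameterDirection.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ParameterizedScalarDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(
            "select count(*) from Categories where CategoryName = @name", conn))
        {
            // Input is the default direction, so it need not be set explicitly
            cmd.Parameters.Add("@name", SqlDbType.NVarChar, 15).Value = "Beverages";

            conn.Open();
            int count = (int)cmd.ExecuteScalar();
            Console.WriteLine("Matching categories: {0}", count);
        }
    }
}
```

Parameterizing the query this way also protects you against SQL injection, unlike concatenating user input into the command text.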

 

Determine the Connection Speed of your client

How many times have you written the perfect application, spent hours getting it right, only to be told it is running slowly?  You have done all the performance tuning, and users still say it is slow.

One possible cause is the client's connection speed: they may be on GPRS or a dial-up connection.  You have no control over users' connection speeds, but you can check the connection speed and, if it is slow, display a warning message.

The following code works out the client's connection speed.  The only drawback is that you have to wait until the transfer completes before you know the speed.  To overcome this, I use an iframe on the web page, so the connection speed is worked out in a separate frame from the main application, meaning the client can continue to work while the calculation takes place.

What you tell the client, and how, is really up to you.  One way is to display a JavaScript alert giving them a warning, but be careful not to display it too many times: users may get a little annoyed at being told they have a slow connection when they already know they have.

int numKB = 512;
DateTime start;
DateTime end;
TimeSpan duration;
int length = 0;
string checklen = "\n";

private double CalculateSpeed()
{
    Response.Flush();
    start = DateTime.Now;
    length = checklen.Length;

    // send numKB kilobytes of filler to the client, flushing each kilobyte
    for (int i = 0; i < numKB; i++)
    {
        hfSpeedTest.Value += "".PadRight(1024 - length, '/') + "\n";
        Response.Flush();
    }

    end = DateTime.Now;
    duration = end - start;

    // kilobytes sent divided by seconds elapsed = speed in KB/s
    double timediff = numKB / duration.TotalSeconds;
    double speed = System.Math.Round(timediff, 3);

    hfSpeedTest.Value = "";
    return speed;
}

 

 

10 Tips for Writing High-Performance Web Applications

Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I’d review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you’re faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I’m sure that some of the tips that have helped me will help you as well.
You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.
Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.

Performance on the Data Tier

When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I’m not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let’s look at ten tips that can help your application perform better. I’ll begin with the changes that can make the biggest difference.

Tip 1—Return Multiple Resultsets

Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You’ll be making your system more scalable, too, as you’ll cut down on the work the database server is doing managing requests.
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It’s arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it’s a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.
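The pattern just described can be sketched as follows.  This is an illustrative fragment, not the actual Community Server code: the query, the Forum and ForumThread classes, and connectionString are all assumptions.

```csharp
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "select ForumID, Name from Forums; " +
    "select ThreadID, Subject from Threads", conn))
{
    conn.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        ArrayList forums = new ArrayList();
        while (rdr.Read())
            forums.Add(new Forum((int)rdr["ForumID"], (string)rdr["Name"]));

        // Move the resultset pointer forward to the second resultset
        rdr.NextResult();

        ArrayList threads = new ArrayList();
        while (rdr.Read())
            threads.Add(new ForumThread((int)rdr["ThreadID"], (string)rdr["Subject"]));
    }
}
```

Both collections are filled with one round-trip to the database instead of two.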

Tip 2—Paged Data Access

The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, a paging UI is shown at the bottom of the DataGrid that lets you navigate backwards and forwards through the displayed data.
There’s one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you’re paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you’re doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.
In Community Server, we wrote a paging server control to do all the data paging. You’ll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
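Putting Tips 1 and 2 together, the paging control's data access might look like the following hedged sketch; the stored procedure name and its parameters are assumptions standing in for the one in Figure 2.

```csharp
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetOrdersPaged", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@PageIndex", SqlDbType.Int).Value = 2;
    cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = 25;

    conn.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        // First resultset: the total record count, needed to render the paging UI
        rdr.Read();
        int totalRecords = rdr.GetInt32(0);
        int totalPages = (totalRecords + 24) / 25;   // ceiling of totalRecords / pageSize

        // Second resultset: only the 25 requested rows
        rdr.NextResult();
        while (rdr.Read())
            Console.WriteLine(rdr["OrderID"]);
    }
}
```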

Tip 3—Connection Pooling

Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.
Of course you need to watch out for leaking connections. Always close your connections when you’re finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It’s okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you’re using integrated authentication). If you don’t use the same connection string, for example customizing the connection string based on the logged-in user, you won’t get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
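The open-late, close-early rule can be sketched like this; because every call uses an identical connection string, each new SqlConnection is served from the same pool rather than opening a fresh TCP connection.

```csharp
static int GetCategoryCount()
{
    using (SqlConnection conn = new SqlConnection(
        "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI"))
    using (SqlCommand cmd = new SqlCommand("select count(*) from Categories", conn))
    {
        conn.Open();                      // borrowed from the connection pool
        return (int)cmd.ExecuteScalar();
    }                                     // Dispose returns the connection to the pool
}
```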
Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.

Tip 4—ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
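A minimal sketch of the cache-aside pattern with HttpRuntime.Cache follows; LoadCategoriesFromDatabase is a hypothetical data-access helper, and the ten-minute expiration is an arbitrary choice.

```csharp
public static DataSet GetCategories()
{
    // Check the cache first; fall back to the database on a miss
    DataSet categories = (DataSet)HttpRuntime.Cache["Categories"];
    if (categories == null)
    {
        categories = LoadCategoriesFromDatabase();   // hypothetical helper
        // Keep the entry for at most ten minutes; ASP.NET may still
        // evict it earlier if memory runs low.
        HttpRuntime.Cache.Insert("Categories", categories, null,
            DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);
    }
    return categories;
}
```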
There are several rules for caching data. First, if data can be used more than once it’s a good candidate for caching. Second, if data is general rather than specific to a given request or user, it’s a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.

Common Performance Myths

One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible to accomplish in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. To put it more succinctly, similar code produces similar results.
Another myth is that codebehind is faster than inline, which is absolutely false. It doesn’t matter where your code for your ASP.NET application lives, whether in a codebehind file or inline with the ASP.NET page. Sometimes I prefer to use inline code as changes don’t incur the same update costs as codebehind. For example, with codebehind you have to update the entire codebehind DLL, which can be a scary proposition.
Myth number three is that components are faster than pages. This was true in Classic ASP when compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page, within a codebehind, or in a separate component makes little performance difference. Organizationally, it is better to group functionality logically this way, but again it makes no difference with regard to performance.
The final myth I want to dispel is that every functionality that you want to occur between two apps should be implemented as a Web service. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While easy to use, there are much better alternatives.
The worst thing you can do is use Web services for communicating between ASP and ASP.NET applications running on the same server, which I’ve witnessed all too frequently.

Figure 3 ASP.NET Cache 
There are several great features of the Cache that you need to know about. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito’s Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.

Tip 5—Per-Request Caching

Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I’ve termed per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let’s consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you’ll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn’t exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
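The lookup logic just described can be sketched as follows; GetSkinFromDatabase and the "Skin" key are illustrative assumptions.

```csharp
static string GetCurrentSkin()
{
    HttpContext context = HttpContext.Current;
    string skin = (string)context.Items["Skin"];
    if (skin == null)
    {
        skin = GetSkinFromDatabase();    // hypothetical helper, hit once per request
        context.Items["Skin"] = skin;    // reused by every control on the page
    }
    return skin;
}
```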

Tip 6—Background Processing

The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validation of incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a “badword” filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that’s a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn’t need to happen on each request. Ideally, we wanted to batch this work together, indexing 25 posts at a time or sending all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
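A hedged sketch of wiring up such a timer, for example from Application_Start, might look like this; SendQueuedEmails is a hypothetical worker and the five-minute interval matches the batching goal described earlier. The field reference keeps the timer from being garbage collected.

```csharp
using System.Threading;

static Timer emailTimer;

static void StartBackgroundWork()
{
    emailTimer = new Timer(
        delegate(object state) { SendQueuedEmails(); },  // hypothetical worker
        null,
        TimeSpan.FromMinutes(5),     // delay before the first callback
        TimeSpan.FromMinutes(5));    // interval between subsequent callbacks
}
```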
There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.
One simple method is to queue the work on the ThreadPool, as shown below. With any background process, make sure you capture any exceptions so that you know when something has gone wrong.

ThreadPool.QueueUserWorkItem(delegate { MyCode.Import(); });
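As written, an exception thrown by MyCode.Import would go unobserved, so a hedged variant with the exception capture recommended above might look like this (the event-log call is an illustrative choice of logging target):

```csharp
ThreadPool.QueueUserWorkItem(delegate
{
    try
    {
        MyCode.Import();
    }
    catch (Exception ex)
    {
        // Record the failure somewhere durable so it is not silently lost
        EventLog.WriteEntry("MyApp", ex.ToString(), EventLogEntryType.Error);
    }
});
```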

Tip 7—Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page
<%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam happens to be required, but it allows you to specify the HTTP GET or HTTP POST parameters used to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.
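As a minimal sketch of the lower-level programmatic route mentioned above, the same caching behavior can be set from within the page itself (the 60-second window mirrors the directive):

```csharp
// Programmatic equivalent of the OutputCache directive, e.g. in Page_Load
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetValidUntilExpires(true);
```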
Many people don’t realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by the Microsoft Internet Security and Acceleration Server or by Akamai, can understand. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it’s downstream, you won’t see the requests anymore and can’t perform authentication to prevent access to it.

Tip 8—Run IIS 6.0 (If Only for Kernel Caching)

If you’re not running IIS 6.0 (Windows Server 2003), you’re missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you’re using IIS 6.0, there is a nice little feature called kernel caching that doesn’t require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you’ll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis.
The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization.
It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.

Tip 9—Use Gzip Compression

While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it.
To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn’t for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There’s also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.

Tip 10—Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page’s tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. The default behavior of the ViewState property is enabled, but if you don’t need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.

Conclusion

I’ve offered you some tips that I’ve found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.
It’s also worth taking a look at Mads Kristensen’s blog: