Azure Resource Manager Templates Tips and Tricks for an App Service

Over the past week, I spent some time automating Azure App Service infrastructure with Azure Resource Manager (ARM) templates. I discovered a few tips and tricks along the way that I’ll describe in detail below…

  • Take an environment name as a parameter
  • Create sticky slot settings
  • Variables aren’t just for strings
  • Change the time zone of your web app

Big thanks to David Ebbo’s talk at BUILD and his sample “WebAppManyFeatures” template; they are super handy resources for anyone who is working on writing their own ARM templates.

Environment Name Parameter

ARM templates are useful for creating the entire infrastructure for an application. Often this means creating infrastructure for each deployment environment (Dev, QA, Production, etc). Instead of passing in names for the service plans and web apps, I find it useful to pass in a name for the environment and use it as a suffix for all the names. This way we can easily use the same template to set up any environment.

{
    // ...
    "parameters": {
        "environmentName": {
            "type": "string"
        }
    },
    "variables": {
        "appServicePlanName": 
            "[concat('awesomeserviceplan-', parameters('environmentName'))]",
        "siteName": 
            "[concat('awesomesite-', parameters('environmentname'))]"
    }
    // ...
}

In the above example, if we pass in “dev” as the environmentName, we get “awesomeserviceplan-dev” and “awesomesite-dev” as our resource names.
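
If you deploy the template with PowerShell, the environment name is just another parameter. Here’s a minimal sketch, assuming the Azure “RM” cmdlets, with placeholder resource group and template file names:

# Deploy the same template to any environment by changing one value.
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "awesome-dev" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ environmentName = "dev" }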

Sticky Slot Settings

App settings and connection strings for a Web App slot can be marked as a “slot setting” (i.e., the setting is sticky to the slot). How to specify sticky slot settings in an ARM template is not well documented. It turns out we can do this using a config resource named “slotConfigNames” on the production site. Simply list the app setting and connection string keys that need to stick to the slot:

{
    "apiVersion": "2015-08-01",
    "name": "slotconfignames",
    "type": "config",
    "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', variables('siteName'))]"
    ],
    "properties": {
        "connectionStringNames": [ "ConnString1" ],
        "appSettingNames": [ "AppSettingKey1", "AppSettingKey2" ]
    }
}

Here’s how it looks in the portal:

Object Variables

Variables are not only useful for declaring text that is used in multiple places; they can be objects too! This is especially handy for settings that apply to multiple Web Apps in the template.

{
    // ...
    "variables": {
        "siteProperties": {
            "phpVersion": "5.5",
            "netFrameworkVersion": "v4.0",
            "use32BitWorkerProcess": false, /* 64-bit platform */
            "webSocketsEnabled": true,
            "alwaysOn": true,
            "requestTracingEnabled": true, /* Failed request tracing, aka 'freb' */
            "httpLoggingEnabled": true, /* IIS logs (aka Web server logging) */
            "logsDirectorySizeLimit": 40, /* 40 MB limit for IIS logs */
            "detailedErrorLoggingEnabled": true, /* Detailed error messages  */
            "remoteDebuggingEnabled": true,
            "remoteDebuggingVersion": "VS2013",
            "defaultDocuments": [
                "index.html",
                "hostingstart.html"
            ]
        }
    },
    // ...
}

And now we can use the siteProperties variable for the production site as well as its staging slot:

{
    "apiVersion": "2015-08-01",
    "name": "[variables('siteName')]",
    "type": "Microsoft.Web/sites",
    "location": "[parameters('siteLocation')]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanName'))]"
    ],
    "properties": {
        "serverFarmId": "[variables('appServicePlanName')]"
    },
    "resources": [
        {
            "apiVersion": "2015-08-01",
            "name": "web",
            "type": "config",
            "dependsOn": [
                "[resourceId('Microsoft.Web/Sites', variables('siteName'))]"
            ],
            "properties": "[variables('siteProperties')]"
        },
        {
            "apiVersion": "2015-08-01",
            "name": "Staging",
            "type": "slots",
            "location": "[parameters('siteLocation')]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/Sites', variables('siteName'))]"
            ],
            "resources": [
                {
                    "apiVersion": "2015-08-01",
                    "name": "web",
                    "type": "config",
                    "dependsOn": [
                        "[resourceId('Microsoft.Web/Sites/Slots', variables('siteName'), 'Staging')]"
                    ],
                    "properties": "[variables('siteProperties')]"
                }
            ]
        }
    ]
}

Custom Time Zone

There’s a little-known setting in Azure App Service that allows you to set the time zone on a per-app basis: an app setting called WEBSITE_TIME_ZONE, which can be created in the portal. This means we can also set it in an ARM template:

{
    "apiVersion": "2015-08-01",
    "name": "appsettings",
    "type": "config",
    "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', variables('siteName'))]"
    ],
    "properties": {
        "WEBSITE_TIME_ZONE": "Pacific Standard Time"
    }
}

For more information on the time zone setting, check out this article.

Source code: WebAppManyFeatures.json

“Allow access to Azure services”: what does this actually mean?

I did some quick research on the SQL Server Firewall “Allow Access to Azure Services” option in Azure today.

I’m sorry to say that my fears were right: setting this option does pose a significant security risk and leaves the SQL Server vulnerable.

Here is an extract from the article I found by Gaurav Hind at Microsoft:

Access within Azure: This can be toggled by “Allow access to Azure services” Yes/No button on the portal (Firewall settings page). Please note, enabling this feature would allow any traffic from resources/services hosted in Azure (not just your Azure subscription) to access the database.

https://blogs.msdn.microsoft.com/azureedu/2016/04/11/what-should-i-know-when-setting-up-my-azure-sql-database-paas/

The big question now is: how do you plug this gap in the firewall? One possible solution is to build a virtual network within Azure, or to filter network traffic with network security groups, but both are beyond the scope of this article.
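
A lighter-weight stopgap is to leave the option off and add narrowly scoped firewall rules for the specific IP addresses that genuinely need access. A sketch using the AzureRM cmdlets, with placeholder names and addresses:

# Allow a single known address through the firewall instead of all Azure traffic.
New-AzureRmSqlServerFirewallRule `
    -ResourceGroupName "my-resource-group" `
    -ServerName "my-sql-server" `
    -FirewallRuleName "WebAppOutboundIp" `
    -StartIpAddress "104.40.2.25" `
    -EndIpAddress "104.40.2.25"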

Setting up Azure Key Vault

Setting up Azure Key Vault for use within a .NET application is not an easy process: it is a secure environment, and gaining access involves Active Directory application registrations and access policies.

I’ve created a quick setup guide based on the Azure Portal as it appears at the time of writing this article.
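
The core of the setup can also be scripted. A rough sketch with the AzureRM cmdlets; the vault name, resource group, and client ID are placeholders, and the Active Directory application registration itself is assumed to exist already:

# Create the vault, then grant the AD application read access to secrets.
New-AzureRmKeyVault -VaultName "MyAppVault" `
    -ResourceGroupName "my-resource-group" -Location "West Europe"

Set-AzureRmKeyVaultAccessPolicy -VaultName "MyAppVault" `
    -ServicePrincipalName "<ad-app-client-id>" `
    -PermissionsToSecrets get,list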

A web-developer’s guide to help eliminate non-coding tasks and get code done faster

Microsoft have just published this great eBook, which helps web developers eliminate non-coding tasks and get code done faster by providing an overview of key services in the cloud, guidance on which services to use based on your needs, step-by-step instructions, sample code, sample applications, and a free account to get started.


Here is a list of the areas it covers:

  • When to Use It
  • Moving an Existing ASP.NET Website to App Service
  • Identity Management
  • Scaling your Web App
  • Caching for Performance
  • Better Customer Experience with a CDN
  • Detect Failures Faster
  • Building a New Website on Azure App Service


Deployments Best Practices

Introduction

This guide aims to help you better understand how to deal with deployments in your development workflow, and provides some best practices. Sometimes a bad production deployment can ruin all the effort you have invested in the development process. Having a solid deployment workflow can become one of the greatest advantages of your team.

Before you start, I recommend reading our Developing and Deploying with Branches guide first to get a general idea of how branches should be set up in your repository, so that you can fully utilize the tips from this guide. It’s a great read!

Note on Development Branch

In this guide you will see a lot of references to a branch called development. In your repository you can use master (Git), trunk (Subversion) or default (Mercurial) for the same purpose, there’s no need to create a branch specifically called “development”. I chose this name because it’s universal for all version control systems.

The Workflow

Deployments should be treated as part of a development workflow, not as an afterthought. If you are developing a web site or an application, your workflow will usually include at least three environments: Development, Staging and Production. In that case the workflow might look like this:

  • Developers work on bugs and features in separate branches. Really minor updates can be committed directly to the stable development branch.
  • Once features are implemented, they are merged into the staging branch and deployed to the Staging environment for quality assurance and testing.
  • Once testing is complete, feature branches are merged into the development branch.
  • On the release date, the development branch is merged into production and then deployed to the Production environment.

Let’s take a closer look at each environment to see the most efficient way to deploy each one of them.

Development Environment

If you make web applications, you don’t need a remote development environment: every developer should have their own local setup.

Many customers have Development environments set up with automatic deployments on every commit or push. While this gives developers a small advantage of not installing the site or the application on their computers to perform testing locally, it also wastes a lot of time. Every tiny change must be committed, pushed, deployed, and only then it can be verified. If the change was made by mistake, a developer will have to revert it, push it, then redeploy.

Testing on a local computer removes the need to commit, push and deploy completely. Every change can be verified locally first, then, once it’s more or less stable, it can be pushed to a Staging environment for proper quality assurance testing.

However, what this does provide is an environment that confirms the automatic deployment is successful, running through an independent installation process far removed from the developers’ own machines.

We do not recommend using deployments for rapidly changing development environments. Running your software locally is the best choice for that sort of testing.

We recommend deploying to the development environment automatically on every commit or push. This will ensure that the build process is fully working.

Staging Environment

Once the features are implemented and considered fairly stable, they get merged into the staging branch and then automatically deployed to the Staging environment. This is when quality assurance kicks in: testers go to staging servers and verify that the code works as intended.

It is very handy to have a separate branch called staging to represent your staging environment. It will allow developers to deploy multiple branches to the same server simultaneously, simply by merging everything that needs to be deployed to the staging branch. It will also help testers understand what exactly is on staging servers at the moment, just by looking inside the staging branch.
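
In Git terms, putting two in-progress features on staging at the same time is just a couple of merges (the branch names are illustrative):

# Everything merged into staging ends up on the staging servers together.
git checkout staging
git merge feature-login
git merge feature-search
git push origin staging    # triggers the automatic deployment to Staging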

We recommend always deploying major releases to staging at a scheduled time of which the whole team is aware.

Production Environment

Once the feature is implemented and tested, it can be deployed to production. If the feature was implemented in a separate branch, it should be merged into a stable development branch first. The branches should be deleted after they are merged to avoid confusion between team members.

The next step is to make a diff between the production and development branches, to take a quick look at the code that will be deployed to production. This gives you one last chance to spot something that’s not ready or not intended for production: things like debugger breakpoints, verbose logging, or incomplete features.
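
With Git, that diff review is a one-liner (again assuming branches named development and production):

# Show everything that would go live if development were merged into production.
git diff production...development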

Once the diff review is finished, you can merge the development branch into production and then initiate a deployment of the production branch to your production environment by hand. Specify a meaningful message for your deployment so that your team knows exactly what you deployed.

Make sure to only merge development branch into production when you actually plan to deploy. Don’t merge anything into production in advance. Merging on time will make files in your production branch match files on your actual production servers and will help everyone better understand the state of your production environment.

We recommend always deploying major releases to production at a scheduled time of which the whole team is aware. This should be a MANUAL process, not an automated one (it can be as simple as clicking a link to start the process or moving some files; it just needs to be a human who activates it). Find the time when your application is least active and use that time to roll out updates. This may sound obvious, but make sure that it’s not too late in the day, because someone needs to be around after the deployment for at least a few hours to monitor the application and make sure the deployment went fine. Urgent production fixes can be deployed at any time.

After deployment finishes make sure to verify it. It is best to check all the features or fixes that you deployed to make sure they work properly in production. It is a big win if your deployment tool can send an email to all team members with a summary of changes after every deployment. This helps team members to understand what exactly went live and how to communicate it to customers. Beanstalk does this for you automatically.

Your deployment to production is now complete, time to pop champagne and celebrate with your team!

Rolling Back

Sometimes deployments don’t go as planned and things break. In that case you have the option to roll back. However, you should be as careful with rollbacks as with production deployments themselves. Sometimes a rollback brings more havoc than the issue it was trying to fix. So first of all, stay calm and don’t make any sudden moves. Before performing a rollback, answer the following questions:

Did it break because of the code that I deployed, or did something else break?

You can only roll back files that you deployed, so if the source of the issue is something else, a rollback won’t be much help.

Is it possible to rollback this release?

Not all releases can be rolled back. Sometimes a release introduces a new database structure that is incompatible with the previous release. In that case, if you roll back, your application will break.

If the answer to both questions is “yes”, you can roll back safely. After the rollback is done, make sure to fix the bug that you discovered and commit it to either the development branch (if it was minor) or a separate bug-fix branch. Then proceed with the regular integration workflow: bug-fix → staging for testing, then bug-fix → development → production.

Deploying Urgent Fixes

Sometimes you need to deploy a bug-fix to production quickly, when your development branch is not ready for release yet. The workflow in that case stays the same as described above, but instead of merging the development branch into production you actually merge your bug-fix branch first into the development branch, then separately into production, without merging development into production. Then deploy the production branch as usual. This will ensure that only your bug-fix will be deployed to the production environment without all the other stuff from the development branch that’s not ready yet.
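
As a Git sketch, with an illustrative branch name:

# Merge the fix into development so it isn't lost in future releases.
git checkout development
git merge bug-fix-123

# Merge the same branch into production separately; development itself stays unmerged.
git checkout production
git merge bug-fix-123

# Then deploy the production branch as usual.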

It is important to merge the bug-fix branch to both the development and production branches in this case, because your production branch should never include anything that doesn’t exist in your stable development branch. The development branch is where developers work all day, so if your fix is only in the production branch they will never see it and it can cause confusion.

Automatic Deployments to Production?

I can’t stress enough how important it is for all production deployments to be performed and verified by a responsible human being. Using automatic deployments for Production environment is dangerous and can lead to unexpected results. If every commit is deployed to your production site automatically, imagine what happens when someone commits something by mistake or commits an incomplete feature in the middle of the night when the rest of the team is sleeping? Using automatic deployments makes your Production environment very vulnerable. Please don’t do that, always deploy to production manually.

Permissions

Every developer should be able to deploy to the Staging environment. They just need to make sure they don’t overwrite each other’s changes when they do. That’s exactly why the staging branch is a great help: all changes from all developers are getting merged into it so it contains all of them.

Your Production environment, ideally, should only be accessible to a limited number of experienced developers. These guys should always be prepared to fix the servers immediately after a deployment went rogue.

Conclusion

We’ve been using this workflow with many customers for many years to deploy their applications. Some of these things were learned the hard way, through broken production servers. Follow these guidelines, and all your production deployments will become incredibly smooth and won’t cause any stress at all.

Original Article

Visual Studio Team Services Security

Using Azure can open up a can of worms around security, and many customers have concerns.

Microsoft do a lot of things to keep your Team Service project safe and secure, refer to this link for details: Visual Studio Team Services Data Protection Overview.

You can deploy your own build agent, over which you have full control, and it is easy to configure your machines to accept deployments only from that build agent.

Another URL link I found from Microsoft Virtual Learning which might be useful:

Getting Started with Azure Security for the IT Professional

Do IT security concerns keep you up at night? You’re not alone! Many IT Pros want to extend their organization’s infrastructure but need reassurance about security. Whether you are researching a hybrid or a public cloud model with Microsoft Azure, the question remains the same: Does the solution meet your own personal and your organization’s bar for security, including industry standards, attestations, and ISO certifications? In this demo-filled course, explore these and other hot topics, as a team of security experts and Azure engineers takes you beyond the basic certifications and explores what’s possible inside Azure. See how to design and use various technologies to ensure that you have the security and architecture you need to successfully launch your projects in the cloud. Dive into datacenter operations, virtual machine (VM) configuration, network architecture, and storage infrastructure. Get the information and the confidence you need, from the pros who know, as they demystify security in the cloud.

This article is very useful if you need to deploy from a remote server:

http://myalmblog.com/2014/04/configuring-on-premises-build-server-for-visual-studio-online/

Azure Security Center

It is important that when using Azure you take security very seriously. I’ve been looking around to gather as much information as I can to help businesses protect the environments that they rely on so much to run their businesses.

I found these short introductions to Azure Security Center which are worth looking over:

Introduction to Azure Security Center

A brief overview of how Azure Security Center helps you protect, detect and respond to cybersecurity threats.

Azure Security Center Overview

With Azure Security Center, you get a central view of the security state of all of your Azure resources. At a glance, verify that the appropriate security controls are in place and configured correctly. Scott talks to Sarah Fender who explains it all!

Azure Security Center – Threat Detection

With Azure Security Center, you get a central view of the security state of all of your Azure resources. At a glance, verify that the appropriate security controls are in place and configured correctly. Scott talks to Sarah Fender who explains how Security Center integrates Threat Detection.

Azure Security Center – Focus on Prevention

Staying ahead of current and emerging threats requires an integrated, analytics-driven approach. By combining Microsoft global threat intelligence and expertise with insights into security-related events across your Azure deployments, Security Center helps you detect actual threats early, and it reduces false positives. Scott talks to Sarah Fender who breaks down the details.

Azure Resource Manager Compute Templates

Things have moved forward in Microsoft’s Azure cloud platform since I first started using Azure. One thing is for sure: the number of services available has grown, and with this growth comes the responsibility of ensuring that you can successfully deploy what you have built with the minimum amount of trouble in the quickest possible time. Along came the Azure Resource Manager (ARM), which is the service used to provision resources in your Azure subscription. It was first announced at Build 2014, when the new Azure portal (portal.azure.com) was announced, and it provides a new set of JSON APIs that are used to provision resources. Before ARM, developers and IT professionals used the Azure Service Management API and the old portal (manage.windowsazure.com) to provision resources. Both portals and both sets of APIs are still supported. However, going forward you should be using ARM, the new APIs, and the new Azure portal to provision and manage your Azure resources.

Azure Friday has a nice easy overview of the Azure Resource Manager.  Scott talks to Mahesh Thiagarajan about the new Azure JSON-based APIs for Azure that now include Compute, Network, and Storage. Orchestrating large system deployments is easier and more declarative than ever.


The Benefits of using the Azure Resource Manager

There are many benefits of ARM that the original Service Management APIs (and old portal) could not deliver. The advantages that ARM provides can be broken down into a few areas:

Declaratively Provision Resources

Azure Resource Manager provides us with a way to describe the resources in a resource group using JSON documents. A resource group can contain many resources, for example a Web App, a Storage Account, and a SQL Database. In the JSON document, you can describe the properties for each of the resources, such as the type of storage account, the size of the SQL Database, and settings for the Web App; there are many more options available. The advantage here is that we can describe the environment we want and send that description to Azure Resource Manager to make it happen.
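
For illustration, here is a minimal sketch of such a document, describing a single storage account (the account name and type are placeholder values):

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "apiVersion": "2015-06-15",
            "type": "Microsoft.Storage/storageAccounts",
            "name": "awesomestoragedev",
            "location": "[resourceGroup().location]",
            "properties": {
                "accountType": "Standard_LRS"
            }
        }
    ]
}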

Smarter and Faster Provisioning of Resources

Before ARM, you had to provision resources independently to support your application, and it was up to you to figure out the dependencies between resources and accommodate them in your deployment scripts.

ARM is also able to determine when it can provision resources simultaneously (based on the dependencies you declare), allowing for faster provisioning of all the resources described in the resource group.

Resource Groups as a Unit of Management

Before ARM, the relationship between resources (i.e. a web app and a database) was something you had to manage yourself, including trying to compute costs across multiple resources to determine the all-up cost of an application. In the ARM era, since the resource group containing the resources for your application is a single unit of deployment, the resource group also becomes a unit of management. Resource grouping enables you to do things like determining costs for the entire resource group (and all the resources within), making accounting and chargebacks easier to manage.

Before ARM, if you wanted to enable a user or group of users to manage the resources for a particular application, you had to make them a co-administrator on your Azure subscription, giving them full capability to add, delete, and modify any resource in the subscription, not just the resources for that application. With ARM, you can configure Role-Based Access Control for resources and resource groups, enabling you to assign management permissions on just the resource group, and only to the users who need access to manage it. When those users sign in to Azure, they will be able to see the resource group you gave them access to, but not the rest of the resources in your subscription. You can even assign Role-Based Access Control permissions to individual resources if you need to.
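
As a sketch of what that looks like with the AzureRM cmdlets (the user and resource group names are placeholders):

# Grant one user Contributor rights on a single resource group only.
New-AzureRmRoleAssignment `
    -SignInName "developer@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "awesome-dev"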

Idempotent Provisioning of Resources

Before ARM, automating the provisioning of resources meant that you had to account for situations where some, but not all, of your resources would be successfully provisioned. With ARM, when you send it the JSON document describing your environment, ARM knows which resources already exist and which ones do not, and will provision only the missing resources to complete the resource group.

Tools

The Azure portal is an excellent way to get started using Azure Resource Manager with nothing more than your browser. When you create resources using the Azure portal (portal.azure.com) you are using ARM. Eventually, you will need to write a deployment template that describes all the resources you want ARM to provision. And to be scalable, you will need to automate the deployment. You could start with something from the Azure Quickstart Templates, but at some point you will likely need (or want) to build your deployment template from scratch. For this, Visual Studio and PowerShell are the two tools I strongly recommend.

Visual Studio

When it is time to write your ARM deployment templates, you will find that Visual Studio and the Azure Tools deliver an incredible experience that includes JSON IntelliSense in the editor and a JSON outline tool to visualise the resources. Writing ARM deployment templates is not a trivial task. By using Visual Studio and the Azure Tools, you will be able to create deployment templates from scratch in a matter of minutes.

A version of the Azure Tools SDK is available for VS 2015. The Azure Tools provides an Azure Resource Group project template to get you started, as shown below. In subsequent posts, I’ll be demonstrating how to use this project template.

PowerShell

The project template mentioned in the previous section generates a PowerShell deployment script that can be used to send your deployment to ARM in an automated manner. The script uses the latest Azure “RM” Cmdlets that you can download and install from here. Most of the code in the deployment script handles uploading artefacts that ARM may need to deploy your environment. For example, if your deployment template describes a virtual machine that requires a DSC extension to add additional configuration to the VM after Azure has created it, then the DSC package (zip file) would need to be generated and uploaded to a storage account for ARM to use while it is provisioning the VM. But, if you scroll down past all that to the bottom of the script file you will see two commands in the script (shown below).

The first command, New-AzureRmResourceGroup, only creates the resource group using a resource group name that you provide.

The second command, New-AzureRmResourceGroupDeployment, pushes your deployment template and parameters for the deployment template to ARM. After ARM receives the files, it starts provisioning the resources described in your deployment template.
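
In essence, the tail of the generated script boils down to something like the following; treat this as a sketch using the generated script’s variable naming conventions, not the exact generated code:

# Create (or update) the target resource group.
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Force

# Push the deployment template and its parameters to ARM to start provisioning.
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
    -TemplateFile $TemplateFile `
    -TemplateParameterFile $TemplateParametersFile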

A workflow diagram of how these tools are used to deploy an environment using Azure Resource Manager is shown here.

Summary
In this post, we’ve introduced Azure Resource Manager and highlighted some significant advantages it brings to the platform, and concluded with a couple of tools you should consider using to author your deployment templates.

Original post by Rick Rainey

Backup SQL Azure

There are a number of options for backing up SQL Azure, which can be found here:

Different ways to Backup your Windows Azure SQL Database

I like the Azure way, which is just exporting, importing, and setting up a schedule.

Before You Begin

The SQL Database Import/Export Service requires you to have a Windows Azure storage account because BACPAC files are stored here. For more information about creating a storage account, see How to Create a Storage Account. You must also create a container inside Blob storage for your BACPAC files by using a tool such as the Windows Azure Management Tool (MMC) or Azure Storage Explorer.

If you want to import an on-premise SQL Server database to Windows Azure SQL Database, first export your on-premise database to a BACPAC file, and then upload the BACPAC file to your Blob storage container.

If you want to export a database from Windows Azure SQL Database to an on-premise SQL Server, first export the database to a BACPAC file, transfer the BACPAC file to your local server (computer), and then import the BACPAC file to your on-premise SQL Server.

Export a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the required storage account, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, your SQL Database server, and then your database that you want to export.

  6. On the ribbon, click Export. This opens the Export Database to Storage Account window.

  7. Verify that the Server Name and Database match the database that you want to export.

  8. In the Login and Password boxes, type the database credentials to be used for the export. Note that the account must be a server-level principal login – created by the provisioning process – or a member of the dbmanager database role.

  9. In New Blob URL box, specify the location where the exported BACPAC file is saved. Specify the location in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/exportedfile.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the export operation.

  10. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  11. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  12. Click Finish to start the export. You should see a message saying Your request was successfully submitted.

  13. After the export is complete, you should attempt to import your BACPAC file into a Windows Azure SQL Database server to verify that your exported package can be imported successfully.

Database export is an asynchronous operation. After starting the export, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Note: An export operation performs an individual bulk copy of the data from each table in the database, so it does not guarantee the transactional consistency of the data. You can use the Windows Azure SQL Database copy database feature to make a consistent copy of a database, and perform the export from the copy. For more information, see Copying Databases in Windows Azure SQL Database.
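
If you prefer to script the export rather than click through the portal, the newer AzureRM.Sql module offers a cmdlet-based route. A minimal sketch with placeholder names; the same server-level credential requirement applies:

# Kick off an asynchronous export of the database to a BACPAC file in Blob storage.
$creds = Get-Credential    # server-level principal or dbmanager member
New-AzureRmSqlDatabaseExport `
    -ResourceGroupName "my-resource-group" `
    -ServerName "myserver" `
    -DatabaseName "mydatabase" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey "<primary-access-key>" `
    -StorageUri "https://myblobstorage.blob.core.windows.net/dac/exportedfile.bacpac" `
    -AdministratorLogin $creds.UserName `
    -AdministratorLoginPassword $creds.Password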

Configure Automated Exports

Use the Windows Azure SQL Database Automated Export feature to schedule export operations for a SQL database, specify the storage account, set the frequency of export operations, and set the retention period for storing export files.

To configure automated export operations for a SQL database, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click the SQL database name you want to configure, and then click the Configuration tab.

  3. On the Automated Export work space, click Automatic, and then specify settings for the following parameters:

    • Storage Account
    • Frequency
      • Specify the export interval in days.
      • Specify the start date and time. The time value on the configuration work space is UTC time, so note the offset between UTC time and the time zone where your database is located.
    • Credentials for the server that hosts your SQL database. Note that the account must be a server-level principal login – created by the provisioning process – or a member of the dbmanager database role.
  4. When you have finished setting the export settings, click Save.

You can see the time stamp for the last export under Automated Export in the Quick Glance section of the SQL Database Dashboard.

To change the settings for an automated export, select the SQL database, click the Configuration tab, make your changes, and then click Save.

Create a New SQL Database from an Existing Export File

Use the Windows Azure SQL Database Create from Export feature to create a new SQL database from an existing export file.

To create a new SQL database from an existing export file, use the following steps:

  1. Log on to the Windows Azure Platform Management Portal.

  2. Click a SQL database name and then click the Configuration tab.

  3. On the Create from Export work space, click New Database, and then specify settings for the following parameters:

    • Bacpac file name – This is the source file for your new SQL database.
    • A name for the new SQL database.
    • Server – This is the host server for your new SQL database.
    • To start the operation, click the checkmark at the bottom of the page.

Import and Export a Database Using the API

You can also programmatically import and export databases by using an API. For more information, see the Import Export example on Codeplex.

Import a Database

  1. Using one of the tools listed in the Before You Begin section, ensure that your Blob has a container, and the BACPAC file to be imported is available in the container.

  2. Log on to the Windows Azure Platform Management Portal.

  3. In the navigation pane, click Hosted Services, Storage Accounts & CDN, and then click Storage Accounts. Your storage accounts display in the center pane.

  4. Select the storage account that contains the BACPAC file to be imported, and make a note of the following values from the right pane: Primary access key and BLOB URL. You will have to specify these values later in this procedure.

  5. In the navigation pane, click Database. Next, select the subscription, and then your SQL Database server where you want to import the database.

  6. On the ribbon, click Import. This opens the Import Database from Storage Account window.

  7. Verify that the Target Server field lists the SQL Database server where the database is to be created.

  8. In the Login and Password boxes, type the database credentials to be used for the import.

  9. In the New Database Name box, type the name for the new database created by the import. This name must be unique on the SQL Database server and must comply with the SQL Server rules for identifiers. For more information, see Identifiers.

  10. From the Edition list, select whether the database is a Web or Business edition database.

  11. From the Maximum Size list, select the required size of the database. The list only specifies the values supported by the Edition you have selected.

  12. In the BACPAC URL box, type the full path of the BACPAC file that you want to import. Specify the path in the following format: “https://” + Blob URL (as noted in step 4) + “/<container_name>/<file_name>”. For example: https://myblobstorage.blob.core.windows.net/dac/file.bacpac. The Blob URL must be in lowercase without any special characters. If you do not supply the .bacpac suffix, it is applied by the import operation.

  13. In the Access Key box, type the storage access key or shared access key that you made a note of in step 4.

  14. From the Key Type list, select the type that matches the key entered in the Access Key box: either a Storage Access Key or a Shared Access Key.

  15. Click Finish to start the import.

Database import is an asynchronous operation. After starting the import, you can use the Import Export Request Status window to track the progress. For information, see How to: View Import and Export Status of Database (Windows Azure SQL Database).

Original Article