Saturday, December 22, 2018

Setting up an Azure DevOps Post Deployment Gate that checks Application Insights for any Server Exceptions in the last 10 minutes.

We have a release pipeline which deploys a web site to an Azure App Service and then runs a bunch of Selenium tests against the website.

However, that does not test the Azure Function we also deploy, nor does it check for any errors reported by the server.

We will make use of a Post Deployment Gate, which looks inside Application Insights and checks whether any Server Exceptions were logged in the time since the web site was released.

If any server exceptions are found then the release will not progress to the next environment.


1. Enable API Access in Application Insights

Create an API Key if you do not already have one.


2. Add a ‘Post Deployment’ gate

In the release definition click ‘Post-deployment conditions’ then enable ‘Gates’.


Set 'The delay before evaluation' to 2 minutes. This allows extra time for the Application Insights data to come in before the gates are evaluated (although the data should already be there, because the Selenium tests run before this step).


Add an Invoke REST API task

Connection type: Generic

Generic service connection: choose the one for the Application Insights account, or click 'Manage' and add one:

New Service Connection -> Generic

Connection name: give it a name e.g. Application Insights Connection

Server URL: https://api.applicationinsights.io/v1/apps/

Username & password can remain empty


Method: POST

Headers (replace with this):

{
    "Content-Type": "application/json",
    "x-api-key": "$(ApplicationInsightsAPIKey)"
}


Body:

For all server exceptions use:

{
    "query": "exceptions | where client_Type != 'Browser' | where timestamp > ago(10m) | count"
}

To restrict the exceptions to a particular assembly you can use (e.g. for all Unity server exceptions):

{
    "query": "exceptions | where client_Type != 'Browser' | where assembly contains 'Unity' | where timestamp > ago(10m) | count"
}

Or filter on trace messages. For example, if you have an Azure Function that sends emails and writes a trace message when it runs, you can check those trace messages to ensure emails are not failing:

{
    "query": "traces | where operation_Name == 'SendEmailFunction' | where message contains 'Success' | where client_Type != 'Browser' | where timestamp > ago(10m) | count"
}


Note: the timestamp window is something to play around with. It basically needs to cover:

  • the time the automated Selenium tests take to run (start to finish) - for us approx. 2 minutes
  • plus 'the delay before evaluation' (the 2 minutes we set above)
  • plus, if the gates fail, they re-evaluate until the 6 minute timeout, so the time range needs to allow for this too.

The reason for this is that once the Selenium tests stop they no longer generate Application Insights data, so by the time the gate re-evaluates there may be no data in App Insights and the gate would incorrectly pass.

Adding those together (roughly 2 + 2 + 6 minutes) means we need to check around the last 10 minutes' worth of Application Insights data.


URL suffix and parameters:

$(ApplicationInsightsApplicationID)/query
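Before wiring this into the gate, it can help to test the call outside the pipeline. Below is a minimal PowerShell sketch; $apiKey and $appId are placeholders for the same values as the ApplicationInsightsAPIKey and ApplicationInsightsApplicationID release variables set in step 3:

# Minimal sketch: run the gate's query directly against the Application Insights REST API.
$apiKey = "<your-api-key>"
$appId  = "<your-application-id>"

$body = '{ "query": "exceptions | where client_Type != ''Browser'' | where timestamp > ago(10m) | count" }'

$response = Invoke-RestMethod -Method Post `
    -Uri "https://api.applicationinsights.io/v1/apps/$appId/query" `
    -Headers @{ "x-api-key" = $apiKey } `
    -ContentType "application/json" `
    -Body $body

# The count is the first cell of the first row of the first table,
# which is what the gate's success criteria below inspect.
$response.tables[0].rows[0][0]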


Advanced

Completion event:

ApiResponse

Success criteria (the expression below passes when a count of 0 records is returned):

    eq(jsonpath('$.tables[0].rows[0]')[0][0],0)

Or this passes only when records are being returned (count > 0), which suits the trace query above:

    ne(jsonpath('$.tables[0].rows[0]')[0][0],0)
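For reference, the query API returns its results as a set of tables; a count query comes back shaped roughly like this (a hedged example, your column name and values will differ), which is what the jsonpath expressions above index into:

{
    "tables": [
        {
            "name": "PrimaryResult",
            "columns": [ { "name": "Count", "type": "long" } ],
            "rows": [ [ 0 ] ]
        }
    ]
}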


Evaluation options

Time between re-evaluation of gates: 5 minutes

Minimum duration for steady results after a successful gate evaluation: 0 minutes

The timeout after which all gates fail: 6 minutes

Gates and approvals: On successful gates, ask for approvals


3. Set Release variables

Add these keys, scoped to the Environment you are releasing to:

ApplicationInsightsAPIKey - this one should be a secret, so hide the value

ApplicationInsightsApplicationID



4. Save and test it by creating a release



Useful Websites

The API Explorer web site was very useful in constructing the correct message

https://dev.applicationinsights.io/apiexplorer/postQuery

Application Insights ‘Analytics’ was useful in constructing the query


Saturday, February 17, 2018

Zero downtime Azure App Service deployments with EF6 Code First Migrations and MVC5

This post is about how we deploy our production web sites to an Azure App Service and execute Entity Framework 6 code first migrations as part of a VSTS Release process.


A little bit of history:

We used to run dbMigrator.Update() on website startup.

So in Startup.cs we had this:

var efConfiguration = new Configuration();
var dbMigrator = new System.Data.Entity.Migrations.DbMigrator(efConfiguration);
dbMigrator.Update();

This worked well: it upgraded the database and applied any seed data. The downside was that the site was unusable until all this had finished, which for us was about 1 minute.

It also had a major downside once we turned on Azure auto scaling on the App Service Plan: when it scaled to more than 1 instance, all instances started up and all of them called dbMigrator.Update(), resulting in a fair number of exceptions and the site failing to start.


What we now do:

We needed to remove the database logic from Startup.cs and do it as part of the VSTS Release process instead. So we deploy the site to a staging slot, execute database migrations via Migrate.exe into the production database (on the build server), and then swap the staging slot to production.

This does mean the production web site code runs against a newer database schema until the site swaps over, but this is fine as long as the developers code to handle the current and current-1 database versions. Production and stage share the same database.


How to implement this:

1. The build Process

As well as packaging the website into its own artifact, we now package as another artifact all the files we need in order to run migrate.exe, so we have 2 extra build tasks in our Main branch build.


In the Contents section:

line 1: copies the DLLs containing our Entity Framework migrations
line 2: copies the DeployDatabase.ps1 PowerShell script below
line 3: copies migrate.exe, provided by Entity Framework in the packages folder

The DeployDatabase.ps1 PowerShell file contains:

#
# DeployDatabase.ps1
#
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$True,Position=1)]
    [string]$webAppName,

    [Parameter(Mandatory=$False,Position=2)]
    [string]$slotName,

    [Parameter(Mandatory=$False,Position=3)]
    [string]$slotResourceGroup
)

$ErrorActionPreference = "Stop" # Stop as soon as an error occurs

if ($slotName -ne $null -and $slotName -ne '') {
    $isSlot = $True
}
else {
    $isSlot = $False
}

Write-Host "PSScriptRoot : " $PSScriptRoot
Write-Host "Web app: " $webAppName
Write-Host "Using slot: " $isSlot " " $slotName
Write-Host "SlotResourceGroup: " $slotResourceGroup

$dll = "SiteDataAccess.Extended.Customer.dll"
Write-Host "Using dll: " $dll

if ($isSlot) {
    $GetWebSite = Get-AzureRmWebAppSlot -Name $webAppName -Slot $slotName -ResourceGroupName $slotResourceGroup
}
else {
    $GetWebSite = Get-AzureRmWebApp -Name $webAppName
}

$Connection = $GetWebSite.SiteConfig.ConnectionStrings | Where {$_.Name -eq "ExtendedSiteDBContext"}
$ConnectionString = $Connection.ConnectionString

Write-Host "Executing Database migrations and seeding with Migrate.exe"

& "$PSScriptRoot\migrate.exe" $dll /connectionString=$ConnectionString /connectionProviderName="System.Data.SqlClient" /verbose

if ($LastExitCode -ne 0) {
    throw 'migrate.exe returned a non-zero exit code...'
}

Write-Host "Finished executing Database migrations and seeding with Migrate.exe"
Write-Host "Finished"

What that script does, when called from an Azure PowerShell task in a Release definition, is look up the connection string from the web app service in Azure and then use that to call migrate.exe. We did this so we did not have to store any connection strings in VSTS. If the SQL user in the connection string used by migrate.exe needs different permissions to the website's, you could change this to use VSTS release variables instead.


2. The Release definition

When we release a site our process is:

• Stop the deployment slot 'stage'
• Deploy the website zip to slot 'stage'
• Update the database using the PowerShell script
• Start the stage site
• Swap stage with production
• Ping the production site (see the sketch after this list)
• Stop the stage site (to save resources)
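The ping step can be as simple as an inline PowerShell task. A hedged sketch, assuming the site answers on $(WebAppName).azurewebsites.net (the URL is illustrative; use your real domain):

# Fail the release if the production site does not answer promptly after the swap.
# Invoke-WebRequest throws on a non-2xx response, which fails the task.
$response = Invoke-WebRequest -Uri "https://$(WebAppName).azurewebsites.net" -UseBasicParsing -TimeoutSec 30
Write-Host "Production responded with status" $response.StatusCode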


Our release definition implements these steps, with the environment-specific pieces grouped into Task Groups.

The Update Database task is just an Azure PowerShell task that calls DeployDatabase.ps1.
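Roughly what we pass to the script, as a sketch; $(WebAppName) and $(ResourceGroupName) are release variables and the names here are illustrative:

# Call the script from the extracted database artifact, pointing it at the stage slot.
.\DeployDatabase.ps1 -webAppName "$(WebAppName)" -slotName "stage" -slotResourceGroup "$(ResourceGroupName)"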

In Azure the App Service Plan is configured so that the production site and the stage slot are almost identical (barring a few appSetting values). They share the same connection string, and every appSetting and connection string value has the 'Slot Setting' checkbox ticked.


Our big gotcha

During the swap slots task the website is warm-started automatically by the swap process hitting localhost over http (or the domain name of the site, but again over http). We had this in our FilterConfig:

// Ensure all http connections are redirected to https
filters.Add(new RequireHttpsAttribute());


This meant the warm-start request instantly failed, the swap process continued anyway, and the site then started from cold in production. We could see this because our ping task against the production site was taking a minute to respond.

What we had to do was create a custom version of the RequireHttpsAttribute; once we did this, our ping task responded in 1-3 seconds:

// Ensure all http connections are redirected to https
filters.Add(new CustomRequireHttpsAttribute());


And the code for the attribute is:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = false)]
public class CustomRequireHttpsAttribute : RequireHttpsAttribute
{
    protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
    {
        string ipAddress = filterContext.RequestContext.HttpContext.Request.UserHostAddress;

        // 0.0.0.0, 127.0.0.1 and ::1 are used by the Azure App Service swap slot
        // process to warm up the site before swapping the slot to production.
        // (Matching on the warm-up user agent instead does not work.)
        if (ipAddress == "0.0.0.0" || ipAddress == "::1" || ipAddress == "127.0.0.1")
        {
            return;
        }

        base.HandleNonHttpsRequest(filterContext);
    }
}

Sunday, April 23, 2017

Setup Application Gateway & the Internal Front-End Load Balancer for an application

This post follows on from the one on creating a service fabric environment in Azure http://jonlanceley.blogspot.co.uk/2017/04/3-node-service-fabric-environment-with.html

After you have deployed an application to Service Fabric you need to add its port to the Service Fabric cluster's front-end load balancer and then to the Application Gateway.

1. Front End load balancer

The port is the one you have defined in your ServiceManifest.xml, e.g.

<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8702" />
  </Endpoints>
</Resources>

Create a new Health Probe, e.g. 8702Probe.

Create a new Load balancing rule, e.g. 8702Rule; make sure you set the port and the health probe you just created correctly.


2. Application Gateway

Add a new Health Probe


The path just needs to be an endpoint which can return a response so the health probe knows if the application is alive.
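As a quick sanity check, you can hit the probe path yourself from a machine that can reach the backend nodes. A hedged sketch; the node address is illustrative and 8702 is the service port from above:

# Any 2xx response is enough for the gateway's health probe to mark the backend healthy.
(Invoke-WebRequest -Uri "http://10.0.0.4:8702/api/values/get" -UseBasicParsing).StatusCode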



Add a new Http Setting



Add a new Multi-Site Listener



Add a new Basic rule

Make sure you choose the httpSetting you created earlier


Remove rule1 for the appGatewayHttpListener



Check the backend health

Before connecting via a browser it's worth checking that the Backend health report is Healthy; otherwise you have missed something.


If it’s healthy, try opening your application in a browser e.g.

http://sfapptest.com/api/values/get

Wednesday, April 19, 2017

3 Node Service Fabric Environment with an Azure Application Gateway

This is an article I put together as I was experimenting with Service Fabric for a real world solution to a problem we had. 

In it we will create a service fabric environment in Azure which contains 3 node types (FrontEnd, BackEnd and Management), plus an Application Gateway in front through which all internet traffic is routed to the FrontEnd nodes. We will also be using an existing Virtual Network and Subnets that we will put the service fabric cluster into.

This post helped me a lot with producing this solution:

https://brentdacodemonkey.wordpress.com/2016/08/01/network-isolationsecurity-with-azure-service-fabric/

My template originally came from the Azure Portal: when creating a new service fabric cluster there is the option of saving it as a template. It was then customised, as the portal wizard does not let you do certain things. Most of the customisations came from this site:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-patterns-networking

This is what we will build:

The FrontEnd node type is where we put any stateless services.

The BackEnd node type is where we would put any stateful services.

The Azure service fabric services will run on the Management / Primary node type.

· This has a public static outbound IP number, so we can connect to view the status of the cluster.

· It can also host services which need to connect out to a third party which have IP security on their firewall. The third party then only needs to add this IP number to their firewall.

· We can also use this to securely access an Azure SQL database that has IP restricted access.

 

The steps below are my notes for creating the service fabric environment.  All the scripts and ARM template are available on Github:

https://github.com/jonlanceley/jonlanceley/tree/master/CreateServiceFabricEnvironment

1. Create Service Fabric dependencies.

· Public Static IP (for Management nodeType)

· Key Vault (for service fabric certificates)

· Active Directory Application (for authentication)

· Resource Group to put service fabric cluster in

· Existing Virtual Network with 4 subnets for:

    o FrontEnd

    o BackEnd

    o Management

    o WAF / Application Gateway

Edit & change the parameters as required in this script:

Azure-CreateDependanciesForServiceFabricPlatform.ps1

Execute the script

Note: this script will prompt you yes/no to create each of the above items.

If you're creating a non-development environment you do not want to use a self-signed certificate, so say 'no' when prompted. After the script has run you then need to manually add certificates into the key vault. Details here:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-via-portal#add-certificates-to-key-vault

https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/


2. Create the Service Fabric Environment

This will create a service fabric environment with 3 node types: FrontEnd, BackEnd and Management.

The management node type is set as the Primary.

Note: the NSGs are created by the script but are not assigned to the subnets.

Go to folder:

secureTemplateAnd3NodeTypeWithApplicationGatewayAndExistingSubnet

Copy the parameters.json file and then change the parameters.

Note: By default the script creates the minimum number of VMs, all at Standard A0 size. If this is a non-development environment you will want to change the VM settings:

    • Minimum number of instances:
        o set to 5 on the Management/Primary node type
        o set to 5 on the BackEnd node type (stateful)
        o set to 2 on the FrontEnd node type (stateless)
    • Size (set to Standard D1_V2, the minimum supported spec for all node types)
    • Reliability level of the cluster (should be a minimum of Silver in production; the default is Bronze)

Then change these parameter groups:

    • Static IP parameters (change to match those you just set up):
        o existingStaticIPResourceGroup
        o existingStaticIPName
        o existingStaticIPDnsFQDN
    • Existing virtual network and subnet names:
        o virtualNetworkName
        o existingVNetRGName
        o subnet0Name
        o subnet1Name
        o subnet2Name
        o subnetWAFName
    • Active Directory parameters (change to match those you just set up):
        o aadTenantId
        o aadClusterApplicationId
        o aadClientApplicationId
    • Certificate parameters (change to match those you just set up):
        o SourceVaultValue
        o certificateUrlValue
        o certificateThumbprint
    • VM login parameters (used if you ever need to RDP into a cluster machine):
        o adminUserName
        o adminPassword
    • Other parameters as required

Execute the deploy script:

.\deploy.ps1 -subscriptionId <yourAzureSubscriptionIdHere> -resourceGroupName mycluster -deploymentName mycluster -parametersFilePath .\parameters.json

If after a long time it errors with the message 'Monitoring Agent not reporting success after launch', you should still be fine, as Service Fabric will automatically recover the nodes this failed for.

 

3. After deployment

Go to the Azure portal and find your service fabric cluster and you should eventually see the nodes (they may take some time to appear).


Once the deployment has finished and you can see in the Azure Portal that the nodes in the cluster are running you should be able to view the cluster e.g.

https://jonscluster.northeurope.cloudapp.azure.com:19080/Explorer

This should prompt you to login. If you see a message:

AADSTS50105: The signed in user 'jon.lanceley_xxxxxxxxx.com#EXT#@jonlanceleyxxxxxxxx.onmicrosoft.com' is not assigned to a role for the application '9df93f43-6682-4004-addd-1522a4e13439'.

Go to Azure Active Directory -> Enterprise Applications -> All Applications


Find the cluster server application (not the client one)

Add the user as an Admin


That’s it, you should now have a running Service Fabric Cluster. 
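You can also connect from PowerShell rather than the browser. A hedged sketch using the Service Fabric SDK module; the endpoint and thumbprint are illustrative:

# Connect to the management endpoint (port 19000) using Azure Active Directory,
# verifying the cluster by its certificate thumbprint.
Connect-ServiceFabricCluster -ConnectionEndpoint "jonscluster.northeurope.cloudapp.azure.com:19000" `
    -AzureActiveDirectory -ServerCertThumbprint "<cluster-certificate-thumbprint>"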

You now just need to deploy some code to it.  And then open the Front End Internal load balancer and the Application Gateway ports for the application: http://jonlanceley.blogspot.co.uk/2017/04/setup-application-gateway-internal.html

Tuesday, April 12, 2016

Release Management in TFS 2015.2

With the TFS 2015.2 update we now have the ability to use Release Manager in TFS on-premises. This is how I have set up our branching structure, gated check-in and Release Manager to control the deployment (with approvals at some stages) of an MVC5 web site with Entity Framework into multiple Azure Web Application environments.

We have our branching structure set as:

Dev\developer name 1

Dev\developer name 2 etc

Main

Developers take a branch from Main into the Dev folder under their name and work on changes. When development is complete they check in to their dev branch and then merge the changes into the Main branch. This allows the developer to pull other changes from Main into their dev branch, and to check in at least daily to their dev branch so their code is backed up overnight.

We are using the new TFS build tasks; the setup against the Main branch is:

  • A gated check-in.


  • And a number of build tasks that compile the code in Release mode, run some unit tests and finally copy the files/build artifacts we need to deploy the web site into the drop folder (essentially a web deploy zip file is created).


 

The key to creating a web deploy zip file is the MSBuild arguments:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true
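For reference, a hedged sketch of the same arguments run locally from a developer command prompt (the project path is illustrative):

# Build in Release mode and produce the web deploy zip package alongside the build output.
msbuild .\MySite\MySite.csproj /p:Configuration=Release /p:DeployOnBuild=true `
    /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true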

The Copy and Publish Build Artifacts task then copies the web deploy package into the drop folder.

This automatically builds all changes checked into the Main branch, with the final step producing the web deploy files. Assuming the build passes, Release Manager takes over to deploy the web site into an Azure Web Application.

We have 4 environments in Azure that we need to deploy to: 'UAT Staging', 'UAT', 'Production Staging' and 'Production'. For us these are split across 2 Azure subscriptions, one for UAT and one for Production. Within each subscription is 1 web application (e.g. UAT) with 1 slot ('staging'), and each has its own Azure SQL database. This gives us 2 websites and 2 databases. We also make use of sticky slot settings so the connection string / app settings stay with their environment, because in production we make use of the Azure web application swap slot functionality.

So we configure Release Manager to deploy the code to UAT Staging, UAT and Production Staging; for Production, Release Manager just initiates a swap slot PowerShell command.

So in Release Manager we need to set up the deployment tasks for each of these environments.

But first, Release Manager is set up to watch the Main branch for new build artifacts, which is done by setting the release trigger to Continuous Deployment.

Setting the UAT Staging environment trigger to 'Automated after release creation' initiates a deployment into that environment as soon as a new version is checked in (for us with no approvals required, because that is the first environment in Azure our developers test against).

On the environments tab we define the deployment steps for each environment, so for UAT Staging we have:

  • Stop the Azure web application
  • Deploy the new code/web site
  • Start the Azure web application

We are making use of the new TFS Marketplace extension 'Run Inline Azure Powershell' task, which allows us to stop the Azure web application with:

Stop-AzureWebsite -Name $(WebAppName) -Slot stage

The main property of the Azure Web App Deployment task is the path to the Web Deploy Package.

Note: Our web applications are always running in Azure; we are not creating them on demand.

The UAT and Production Staging environments all have the same 3 tasks. Deployment into an environment takes about 1 minute 20 seconds.

The Production environment has 1 task which swaps Production Staging with Production by executing this powershell script:

Switch-AzureWebsiteSlot -Name $(WebAppName) -Slot1 stage -Slot2 production -Force -Verbose


That allows us to do a quick production deployment (saving us that 1 minute 20).

UAT, Production Staging and Production all have Pre-Deployment approvers set up.

So deployment to the 1st Azure environment ‘UAT Staging’ happens automatically upon a successful check-in to the Main branch.

The developer now has the chance to manually test the site. When they want to deploy to the next environment, 'UAT', they open the release in Release Manager and start the deployment.

Using the 'Deploy' button to request deployment into the UAT environment sends an email to the approvers, who will 'hopefully' approve the release; if they do, Release Manager executes the 3 deployment tasks set up for the UAT environment.

The same process is followed for the remaining environments: the developer tests, then uses the Deploy button to move the same web application code to the next environment.

At any point we can check what release is in what environment by looking at the overview tab.


Database changes are done via Entity Framework Code First Migrations, which are executed upon web site startup by this code added to the MVC site's Startup.cs class:

var efConfiguration = new Configuration();
var dbMigrator = new System.Data.Entity.Migrations.DbMigrator(efConfiguration);
dbMigrator.Update();

This will execute any schema changes and then run the seed data method. The key to keeping the seed data updated is using the extension method ‘AddOrUpdate’.

 

Useful Links

https://msdn.microsoft.com/library/vs/alm/release/overview

Sunday, August 30, 2015

Unit Testing (part 4) - Faking Entity Framework code first DbContext & DbSet


This is the 4th in a series of posts about unit testing:

Unit Testing (part 1) - Without using a mocking framework

Unit Testing (part 2) - Faking the HttpContext and HttpContextBase

Unit Testing (part 3) - Running Unit Tests & Code Coverage

Unit Testing (part 4) - Faking Entity Framework code first DbContext & DbSet

 

Following on from the last 3 articles we can use the same approach of faking with test doubles on our database repository methods. 

I'm using Entity Framework 6 code first and I want to be able to call the code in my repository layer so I can test the where clauses etc., but I do not want to actually call the database. Entity Framework has a DbContext and DbSets; we just need to fake them.

All of our model classes implement the interface IDbEntity.

public interface IDbEntity<TPrimaryKey>
{
    /// <summary>
    /// Unique identifier for this entity.
    /// </summary>
    TPrimaryKey ID { get; set; }
}

What this does is say we must have a primary key called ID on each model.  We will use this later to implement a fast generic DbSet Find method.

public class Country : IDbEntity<Int32>
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Int32 ID { get; set; }

    [Required]
    [MaxLength(100)]
    public String Name { get; set; }
}

First we need to set up our DbContext, so we start with our interface, which just contains the DbSets (this is what we need in order to fake it for the unit tests).

public interface ISiteDBContext
{
    DbSet<Country> Countries { get; set; }
}


Our concrete class that the MVC web site uses looks like this:

public class SiteDBContext : DbContext, ISiteDBContext
{
    public SiteDBContext()
        : base()
    {
        // Disable database initialisation (e.g. when the site is first run)
        Database.SetInitializer<SiteDBContext>(null);
    }

    public SiteDBContext(string nameOrConnectionString)
        : base(nameOrConnectionString)
    {
        // Disable database initialisation (e.g. when the site is first run)
        Database.SetInitializer<SiteDBContext>(null);
    }

    public DbSet<Country> Countries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent API commands go here e.g.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        base.OnModelCreating(modelBuilder);
    }
}

 

And our fake one for unit testing looks like this. We are implementing the interface, but in the constructor we set the Countries DbSet to use our FakeDbSet instead (and we override Set<TEntity>()):

public class FakeSiteDBContext : DbContext, ISiteDBContext
{
    public FakeSiteDBContext() : base()
    {
        // Disable code first auto creation of a database
        Database.SetInitializer<FakeSiteDBContext>(null);
        Countries = new FakeDbSet<Country>();
    }

    public DbSet<Country> Countries { get; set; }

    public override DbSet<TEntity> Set<TEntity>()
    {
        foreach (PropertyInfo property in typeof(FakeSiteDBContext).GetProperties())
        {
            if (property.PropertyType == typeof(DbSet<TEntity>))
            {
                var value = property.GetValue(this, null) as DbSet<TEntity>;
                return value;
            }
        }

        // If the above fails fall back to the base default
        return base.Set<TEntity>();
    }
}

And this is the FakeDbSet:

public sealed class FakeDbSet<TEntity> : DbSet<TEntity>, IQueryable, IEnumerable<TEntity>, IDbAsyncEnumerable<TEntity>
    where TEntity : class
{
    ObservableCollection<TEntity> _data;
    IQueryable _query;

    public FakeDbSet()
    {
        _data = new ObservableCollection<TEntity>();
        _query = _data.AsQueryable();
    }

    public override TEntity Find(params object[] keyValues)
    {
        // Find by the primary key (ID) as defined in the interface IDbEntity,
        // which is set on all of our model classes. This is a fast generic way
        // to implement Find.
        // There is currently only 1 primary keyValue that can be passed in,
        // so we use [0] to find it.
        var result = _data.OfType<IDbEntity<Int32>>().Where(m => m.ID == (Int32)keyValues[0]);
        var myEntity = (TEntity)result.SingleOrDefault();
        return myEntity;
    }

    public override TEntity Add(TEntity item)
    {
        // In our FakeDbSet, when an item is added to the context we increment its
        // primary key (ID column), otherwise it would always be 0.
        // All our model classes implement IDbEntity, which defines an ID column
        // as the primary key.
        // Note this will not update navigation properties; apparently there is
        // no way in EF to do that yet (so you have to work around it).
        if (item is IDbEntity<Int32>)
        {
            var myItem = (IDbEntity<Int32>)item;
            if (myItem.ID == 0)
            {
                // Get the last record entered, so we can take its ID and
                // add 1 to it for the new record
                var lastItem = _data.LastOrDefault();
                if (lastItem == null)
                    myItem.ID = 1;
                else
                {
                    var myLastItem = (IDbEntity<Int32>)lastItem;
                    myItem.ID = myLastItem.ID + 1;
                }
            }
        }

        _data.Add(item);
        return item;
    }

    public override TEntity Remove(TEntity item)
    {
        _data.Remove(item);
        return item;
    }

    public override TEntity Attach(TEntity item)
    {
        _data.Add(item);
        return item;
    }

    public override TEntity Create()
    {
        return Activator.CreateInstance<TEntity>();
    }

    public override TDerivedEntity Create<TDerivedEntity>()
    {
        return Activator.CreateInstance<TDerivedEntity>();
    }

    public override ObservableCollection<TEntity> Local
    {
        get { return _data; }
    }

    Type IQueryable.ElementType
    {
        get { return _query.ElementType; }
    }

    Expression IQueryable.Expression
    {
        get { return _query.Expression; }
    }

    IQueryProvider IQueryable.Provider
    {
        get { return new TestDbAsyncQueryProvider<TEntity>(_query.Provider); }
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return _data.GetEnumerator();
    }

    IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
    {
        return _data.GetEnumerator();
    }

    IDbAsyncEnumerator<TEntity> IDbAsyncEnumerable<TEntity>.GetAsyncEnumerator()
    {
        return new TestDbAsyncEnumerator<TEntity>(_data.GetEnumerator());
    }
}

 

internal class TestDbAsyncQueryProvider<TEntity> : IDbAsyncQueryProvider
{
    private readonly IQueryProvider _inner;

    internal TestDbAsyncQueryProvider(IQueryProvider inner)
    {
        _inner = inner;
    }

    public IQueryable CreateQuery(Expression expression)
    {
        return new TestDbAsyncEnumerable<TEntity>(expression);
    }

    public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
    {
        return new TestDbAsyncEnumerable<TElement>(expression);
    }

    public object Execute(Expression expression)
    {
        return _inner.Execute(expression);
    }

    public TResult Execute<TResult>(Expression expression)
    {
        return _inner.Execute<TResult>(expression);
    }

    public Task<object> ExecuteAsync(Expression expression, CancellationToken cancellationToken)
    {
        return Task.FromResult(Execute(expression));
    }

    public Task<TResult> ExecuteAsync<TResult>(Expression expression, CancellationToken cancellationToken)
    {
        return Task.FromResult(Execute<TResult>(expression));
    }
}

internal class TestDbAsyncEnumerable<T> : EnumerableQuery<T>, IDbAsyncEnumerable<T>, IQueryable<T>
{
    public TestDbAsyncEnumerable(IEnumerable<T> enumerable)
        : base(enumerable)
    { }

    public TestDbAsyncEnumerable(Expression expression)
        : base(expression)
    { }

    public IDbAsyncEnumerator<T> GetAsyncEnumerator()
    {
        return new TestDbAsyncEnumerator<T>(this.AsEnumerable().GetEnumerator());
    }

    IDbAsyncEnumerator IDbAsyncEnumerable.GetAsyncEnumerator()
    {
        return GetAsyncEnumerator();
    }

    IQueryProvider IQueryable.Provider
    {
        get { return new TestDbAsyncQueryProvider<T>(this); }
    }
}

internal class TestDbAsyncEnumerator<T> : IDbAsyncEnumerator<T>
{
    private readonly IEnumerator<T> _inner;

    public TestDbAsyncEnumerator(IEnumerator<T> inner)
    {
        _inner = inner;
    }

    public void Dispose()
    {
        _inner.Dispose();
    }

    public Task<bool> MoveNextAsync(CancellationToken cancellationToken)
    {
        return Task.FromResult(_inner.MoveNext());
    }

    public T Current
    {
        get { return _inner.Current; }
    }

    object IDbAsyncEnumerator.Current
    {
        get { return Current; }
    }
}

 

Then using dependency injection in your unit test you register the FakeSiteDBContext.  Using Unity it would be:

container.RegisterType<DbContext, FakeSiteDBContext>(new PerRequestLifetimeManager());

And in the website you’d do:

container.RegisterType<DbContext, SiteDBContext>(new PerRequestLifetimeManager());

 

The unit test would look like this. In this example I'm calling a basketService method which makes all the same calls as if we were running the MVC web site, except that in the test it calls our FakeDbSet and FakeSiteDBContext instead of hitting a database, because that's what we told our dependency injection to do: swap out every instance of DbContext with our FakeSiteDBContext.

 

[TestMethod]
public void AddToBasket_AddUSDItemToNewBasket()
{
    HttpContext.Current = new FakeHttpContext().CreateFakeHttpContext();
    unityContainer = UnityConfig.GetConfiguredContainer();
    var basketService = unityContainer.Resolve<IBasketService>();
    var httpContextWrapper = new FakeHttpContextWrapper(httpContext: HttpContext.Current);

    var model = basketService.AddSubscriptionItemToBasket(httpContextWrapper, params go here…);

    Assert.AreEqual("en-US", model.CurrencyFormat.CurrencyCulture, "CurrencyCulture");
}

 

If you want, you can also seed the Entity Framework models with the same seed data you'd use in the real database; remember it's all in memory and it's fast. It's also useful because you're working with the same data rather than creating fake data for every test.

There is nothing stopping you creating specific test data for one test as well, all you have to do is add populated models to the Entity Framework DbContext at the start of a unit test.

var order = new Order()
{
    UserBasketID = userBasketId,
    OrderItems = new Collection<OrderItem>(),
    DateCreated = DateTime.Now
};

var orderItem1 = new OrderItem
{
    Price = 2.00M,
    Quantity = 1,
    DateCreated = DateTime.Now,
    Order = order
};
order.OrderItems.Add(orderItem1);
dbContext.Orders.Add(order);
dbContext.OrderItems.Add(orderItem1);

No need to save (remember it's all in memory; all you have to do is .Add).

The only downside with this approach I've found so far is that the site will run but the tests may fail because:

  • a navigation property might be null
  • not all the data was added to the context

Remember we are faking out the DbContext and DbSet, so we do not get all the Entity Framework functionality.

To address both of these points:

  • If you have reference/navigation properties, make sure you set them as in the example above (we assign the order to the orderItem). This way the navigation properties in your repository methods will work in your unit test and won't be null.
  • If you think back to EF v1 days, you had to add every item you wanted saved to the context. So in the example above EF would be fine with just the dbContext.Orders.Add(order) line; it would know there are orderItems that also need saving. But the fake DbContext won't! So if we test a lookup directly for orderItems, our test would show 0 records.

    We just have to attach the orderItems to the dbContext too. It won't affect the way the site runs, and our tests will pass. So there is a compromise to be made here, but in my opinion a small one. Some people will argue that we are changing our site code to make the tests pass, and yes we are (slightly, and only when we hit this scenario, which has not happened to me much so far).

You should always follow up with UI integration tests such as Selenium, Microsoft's Coded UI etc. to test that all the site functions and 3rd party calls work. These will be slower, but at least now we have a way to run lots of fast unit tests before the code leaves Visual Studio.

Using this approach the unit tests run very quickly.