Project – Colocated Infrastructure Migration to Microsoft Azure Cloud


A few years ago I was tasked with moving a company’s entire colocated infrastructure to the Microsoft Azure Cloud. The company was tired of paying me to head out to the colocation site in the middle of the night to deal with hardware issues. Plus, the servers and networking gear were getting older and had started to break down more frequently. So the choice was to either upgrade the hardware and continue to pay for colocated infrastructure, or move to a cloud solution. The cloud solution looked appealing from a pricing point of view, but initially we ran into some issues with available features. Up until that point Microsoft Azure didn’t support “static” public IPs, and we couldn’t move until we were guaranteed that after Azure Virtual Machine reboots and re-provisioning (in case of severe software issues) our virtual machines would get the same IPs. Once this feature was available we evaluated it and ultimately ended up virtualizing all of the company’s infrastructure into the Microsoft Azure Cloud.

Original Colocated Configuration

Initially my client company had a colocated infrastructure responsible for the company’s online presence. I was primarily responsible for the colocation infrastructure setup and configuration, and I was also the person responsible for maintaining it. We actually had 2 different generations of software: the original engine, which we called Gen3, and a newer, more feature-rich engine called Gen4. Once the new colocation site went up, most of the clients were upgraded from Gen3 to Gen4 with the eventual goal of deprecating Gen3.

Both the Gen3 and Gen4 systems were set up in the same colocated site, but they were completely independent of one another, including different IP subnets. We made sure that both systems were logically isolated from one another so that, after migrating customers into the new system, the old system could simply be deprecated and shut down.

The Gen3 system consisted of several servers: 2 domain controllers – primary and backup, 2 web servers – primary and backup, and a general legacy Windows 2003 server performing some utility tasks.

The Gen4 system consisted of more servers: again 2 domain controllers – primary and backup, 2 web servers – primary and backup, a dedicated SQL database server, and 2 application servers performing various tasks.

Both the Gen3 and Gen4 systems hosted their own DNS servers, whose IPs were registered as name servers with the eNom registrar.

Features We Had to Have in Order to Move to the Azure Cloud

One of the biggest hurdles we faced when trying to migrate our infrastructure into Azure (or any cloud, for that matter) was static vs. dynamic IPs. We couldn’t have the IPs of our DNS servers change after we stopped (de-allocated) and restarted our virtual machines. Another issue was that it would have taken some time to re-code our ASP.NET web applications to be cloud-friendly; instead, we had to virtualize our Windows machines and run them in the cloud as virtual machines. So, right after Azure started to offer VMs we began to seriously consider moving into the cloud, but it took a couple of years until Azure started offering “Reserved IPs” for their virtual machines.
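In today’s Azure CLI, that same “the IP must survive a stop/deallocate” requirement is met by reserving a static public IP up front. A minimal sketch – all resource, VM, and user names here are placeholders of my own, not the actual production names, and it is only a configuration outline (it needs a live Azure subscription to run):

```shell
# Create a resource group and a static (reserved) public IP in it.
# --allocation-method Static guarantees the address survives VM
# stop/deallocate cycles -- the hard requirement for our DNS servers.
az group create --name gen4-rg --location eastus

az network public-ip create \
  --resource-group gen4-rg \
  --name gen4-dns-ip \
  --allocation-method Static

# Attach the reserved IP by name when creating the VM.
az vm create \
  --resource-group gen4-rg \
  --name gen4-web1 \
  --image Win2016Datacenter \
  --public-ip-address gen4-dns-ip \
  --admin-username azureadmin
```

With a dynamic allocation the address is returned to the pool on deallocation; with Static it stays reserved to the resource group whether the VM is running or not.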

Initial Performance Issues when running Virtual Machines

The next hurdle after static IPs was Azure Virtual Machine performance. At first the performance was pretty slow, especially when running SQL Server on an Azure VM. Moving to Azure SQL Database was not possible for us – we had some compatibility issues. These issues were small, but updating the code and then testing it all would have taken too long. Luckily for us, Microsoft was updating the CPUs for Azure VMs pretty quickly, and soon enough we were getting acceptable performance from Azure VMs – with one caveat: we had to provision the Azure VMs with more memory and allocate most of that memory to SQL Server to keep sufficient performance levels during our nightly website re-generation and update jobs.
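That “allocate most of the memory to SQL Server” step can be done explicitly with `sp_configure`, so SQL Server takes a fixed ceiling and leaves headroom for the OS. A hedged sketch run via `sqlcmd` on the VM itself – the 24 GB figure is purely illustrative, not our actual number:

```shell
# Give SQL Server an explicit memory ceiling so nightly regeneration
# jobs run mostly from cache. 'max server memory (MB)' is an advanced
# option, so advanced options must be enabled first.
sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -S localhost -E -Q "EXEC sp_configure 'max server memory (MB)', 24576; RECONFIGURE;"
```

Without a ceiling SQL Server will grow toward all available RAM, which on a shared-purpose VM can starve IIS and the OS during the heavy batch windows.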

Actual Process of Migrating to Azure Cloud

Sometime in 2014 we had all of the features we needed in Azure VMs to move our enterprise into the Azure Cloud. I had done some performance testing between Amazon EC2 and Microsoft Azure. If memory serves me right, the Amazon VMs were slightly faster, but the Azure VMs were cheaper. In the end, being a developer comfortable with C#/ASP.NET development, we chose Microsoft Azure. At some point we thought we might update our applications to be Azure Web Role friendly, so Azure felt like a better fit.

I was considering 2 migration paths:

  1. Virtualize our Existing Machines and upload them into Azure Cloud
  2. Create new Azure Virtual Machines and migrate our databases and web application to these new servers

I did find an interesting article online (sorry, I forgot the actual link) that talked about how to virtualize Windows servers for the move into the Azure Cloud; it was even possible to set up and configure IPs locally and then move the machines into an Azure Virtual Network with the same subnets. But in the end I decided to migrate our applications and data onto newly created Azure Virtual Machines. This approach seemed cleaner to me – no need to move a legacy OS into the cloud; Windows 2008 R2 was getting pretty old by the time we migrated into the Azure Cloud.

When the migration was all done we ended up with a single server for the Gen3 system – this housed all legacy customers that refused to migrate their websites – and 2 servers for the Gen4 system. We had to have 2 servers for the Gen4 system because we had 2 ASP.NET websites where using host headers as the HTTP binding was not an option. Each of the IPs we used was hosting several hundred customer websites, and at the time a single Azure Virtual Machine could only have 1 IP address. So the 1st Gen4 server hosted the main ASP.NET website – which responded to requests for each of the hundreds of customer websites – plus all ancillary websites and applications. The 2nd Gen4 server hosted the secondary ASP.NET website, which responded for several other custom business domains. Combined, these 2 Gen4 Azure Virtual Machines cost less than our entire Gen4 colocated infrastructure when including the colocation and bandwidth costs as well as maintenance costs.
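The reason host headers were not an option shows up in how the IIS bindings have to look: each main site claims an entire IP with a blank host header, so it answers for any customer domain whose DNS points at that address. A sketch using `appcmd` – site names, paths, and IPs are hypothetical, and this is a configuration fragment for an IIS host, not a runnable script here:

```shell
REM Site 1 answers on its dedicated IP for *any* host name (note the
REM empty host-header segment after the port), so hundreds of customer
REM domains can simply point their DNS records at this IP.
%windir%\system32\inetsrv\appcmd add site /name:"Gen4Main" ^
  /bindings:"http/203.0.113.10:80:" /physicalPath:"C:\inetpub\gen4main"

REM Site 2 needs the same wildcard-style binding on its own IP --
REM hence the second server back when one Azure VM meant one IP.
%windir%\system32\inetsrv\appcmd add site /name:"Gen4Secondary" ^
  /bindings:"http/203.0.113.20:80:" /physicalPath:"C:\inetpub\gen4secondary"
```

Host-header bindings would have required enumerating every customer domain in IIS; an IP-per-site binding sidesteps that, at the cost of one IP (and, at the time, one VM) per site.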

Latest Upgrade to the Azure Cloud System

Microsoft has been steadily updating the features in the Azure Cloud as well as increasing the performance and capacity of Azure Virtual Machines. Recently I again spent some effort on our Azure Gen4 system. As part of the latest upgrade I migrated the 2 original Azure Gen4 servers (they were running Windows Server 2012 R2) onto one Azure Gen4 server running Windows Server 2016. This new Azure Virtual Machine was added to an Azure Virtual Network and configured with 2 static IPs – another new feature of the Azure Cloud. Plus, the server now runs on SSD drives, further increasing its performance. And all of that costs even less money than the original 2-server Azure configuration. So I am happy – fewer machines to manage – and the customer is happy – it all costs less money.
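The “two static IPs on one VM” setup boils down to adding a second IP configuration to the VM’s network interface. Roughly, with the Azure CLI – NIC, IP, and address values are placeholders, and this is an outline rather than a tested deployment script:

```shell
# Add a second IP configuration to the VM's existing NIC, so one
# machine can carry both websites that used to need separate VMs.
# Each ip-config can pair a static private address with its own
# reserved public IP.
az network nic ip-config create \
  --resource-group gen4-rg \
  --nic-name gen4-web1-nic \
  --name ipconfig2 \
  --private-ip-address 10.0.0.5 \
  --public-ip-address gen4-dns-ip2
```

Inside the guest OS, IIS then binds each website to its own address, exactly as in the two-server layout, just on one machine.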

Step-by-Step Documentation of the Azure Virtual Machine deployment

As both a developer and a systems guy I always plan on keeping the documentation updated, but rarely get to do it. This time around, however, I made sure to first write a rough draft of the documentation as I was configuring the single server to run the entire enterprise. Once the configuration and setup of everything was done – including the Azure Virtual Network, virtual machines, SSDs, Windows and SQL Server software, and the Azure Point-to-Site VPN – I had a detailed document that outlined each and every step. Then I deleted all of the Azure objects and followed the documentation step by step to re-create the Azure environment. After that, an IP change for the name servers routed all traffic from the old Azure cluster to the new one. And now I have a VERY DETAILED step-by-step document describing how to re-create the Azure environment if I ever have to. Basically, this will allow anyone with a bit of technical skill to follow the procedure and re-do the entire setup.
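A document proven by a delete-and-rebuild pass is one step away from being disaster-recovery automation. If scripted with the Azure CLI, the skeleton would look something like the following – every name, prefix, and SKU here is hypothetical, and the real procedure lives in the document itself:

```shell
# Rebuild skeleton: network first, then the VM, then data restore.
# Each command corresponds to a section of the step-by-step document.
az group create --name gen4-rg --location eastus

az network vnet create --resource-group gen4-rg --name gen4-vnet \
  --address-prefix 10.0.0.0/16 --subnet-name web --subnet-prefix 10.0.0.0/24

# Premium_LRS places the disks on SSD-backed storage.
az vm create --resource-group gen4-rg --name gen4-web1 \
  --image Win2016Datacenter --vnet-name gen4-vnet --subnet web \
  --storage-sku Premium_LRS --admin-username azureadmin

# ...then restore SQL backups, import IIS sites, and finally update
# the name-server IPs at the registrar to cut traffic over.
```

The payoff of scripting (or at least script-shaped documentation) is that the recovery time becomes a property of the procedure, not of whoever happens to be on call.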

This lets my client sleep well at night, knowing that even if the Azure Virtual Machines are completely hosed we can still rebuild and get back to operations – probably within 12 hours. A pretty tall order when you have to re-create a company’s entire online presence!

Next step – adding SSL certificates for each customer domain, all served by a single ASP.NET web application.
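With SNI (Server Name Indication, supported since IIS 8 / Windows Server 2012), per-domain certificates no longer require per-domain IPs, so all of the customer certificates can share the single site’s one `IP:443` endpoint. A hypothetical binding – the domain, thumbprint, and application GUID are all placeholders:

```shell
REM Binding by hostnameport= (rather than ipport=) is the SNI form:
REM the certificate is selected by the host name the client sends,
REM so many certificates can share a single IP:443 endpoint.
netsh http add sslcert hostnameport=customer1.example.com:443 ^
  certhash=0102030405060708090a0b0c0d0e0f1011121314 ^
  appid={00112233-4455-6677-8899-aabbccddeeff} certstorename=MY
```

One such binding per customer domain (or a scripted loop over all of them) would cover the whole fleet without touching the IP layout.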

More to Follow …