Challenges in Migrating Legacy .NET Apps into Pivotal Cloud Foundry – Customer Case Study
Migrating legacy .NET apps to Pivotal Cloud Foundry poses many challenges for developers and companies. In this article, I will walk you step by step through the migration process, based on my own experience.
My first objective is to show that the migration is possible and requires only a handful of changes to the application code base. The second one is to share the knowledge that I gained while working on that project. Let’s forget that .NET Core even exists and discuss only the world of the .NET Framework. This article is about migrating existing ASP.NET Web Forms applications to Cloud Foundry.
One of the third-party libraries works only on Windows Server 2012 R2, so we could not use Docker, which is supported only on Windows Server 2016; this made the migration more complex. Fortunately, with some workarounds, the project was successfully completed.
The first goal of the migration was to implement full Continuous Integration based on Cloud Foundry: building the application, running unit tests, deploying the application to the development environment on Cloud Foundry, and finally running acceptance tests against that environment.
Another requirement was to allow the operator to easily create a release package from the last version that passed the acceptance tests, one that could be deployed to any environment: QA, UAT and Production. The final goal was to implement blue-green deployment in production.
After setting our goals, it was time to review the existing process and check what we could reuse from the existing infrastructure and processes. Since all steps were manual, nothing could be reused: the build was made on a developer machine and all deployments to the production instances were manual. The only good thing was that the application had already been scaled horizontally, which gave some hope of reducing the amount of necessary refactoring.
Once we had gotten familiar with the process, it was time to get to know the application itself. It turned out that the application was already very mature: it was built with ASP.NET Web Forms and some parts of the code dated back to .NET 1.1. On top of that, the application required the Oracle provider, Crystal Reports and a custom archiving tool to be installed on the machine. Unfortunately, despite our hopes, it was not sessionless, although the session was at least stored in the database.
The next problem we identified was the use of Active Directory: one of the web applications in the solution used Windows Identity. Finally, we learned that some of the sites used mutual authentication and the servers had private keys loaded. Having gathered all the requirements, we could begin the migration.
The process of migrating legacy .NET apps
The first important decision was the choice of CI/CD tool. In the end, our shortlist came down to two: TFS and Concourse.
TFS offers an integrated work tracker, very good NuGet support and Active Directory integration. Unfortunately, the biggest disadvantage of on-premise TFS is the lack of support for describing the CI/CD process as code or scripts (e.g. YAML files). VSTS currently has such support for CI, so there is hope that the on-premise version (TFS) will get it as well.
Concourse’s indisputable advantage is its use of Docker. Every time a pipeline is started, it creates a new container instance, so we can be sure that our build environment is immutable. Concourse provides native support for Cloud Foundry (a resource type) and lets us use the same authorization service as Cloud Foundry. Unlike TFS, it allows us to connect our process to several resources (Git repositories) and to observe them and trigger builds when any of them changes. The biggest advantage, however, is the very simple representation of pipelines in plain YAML files. As for the drawbacks, the biggest issue is the lack of a NuGet resource, so we cannot automatically trigger the pipeline when a new package version appears.
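To illustrate the "pipeline as code" advantage, a Concourse pipeline for a setup like ours could look roughly like this (all names, URLs and credentials below are placeholders, not our real configuration):

```yaml
resources:
  - name: app-source                     # Git repository with the solution
    type: git
    source:
      uri: https://git.example.com/legacy-app.git
      branch: master

  - name: cf-dev                         # development space in Cloud Foundry
    type: cf
    source:
      api: https://api.cf.example.com
      username: ((cf-user))
      password: ((cf-password))
      organization: legacy-org
      space: dev

jobs:
  - name: build-and-deploy
    plan:
      - get: app-source
        trigger: true                    # run the pipeline on every commit
      - task: build
        file: app-source/ci/build.yml    # MSBuild, unit tests, pre-compilation
      - put: cf-dev
        params:
          manifest: app-source/manifest.yml
```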
After considering all the pros and cons, we chose Concourse. With the CI/CD tool selected, it was time to prepare Cloud Foundry and Concourse to work with our application; and since we were forced to use Windows Server 2012 R2, we needed to prepare Windows images for both Concourse and PCF.
Concourse:

We had to upload MSBuild, Web Deploy and all the third-party dependencies. For the PCF image, all we needed were the third-party dependencies. Once we managed to get through the entire setup process and had chosen all the tools, we could start working with Concourse on building the solution. Unfortunately, at the very beginning it turned out that the application needed cleaning and a small refactoring: the dependencies were so messed up that the application could not be built on a clean machine and required the developer to configure the machine in a very specific way, which was unacceptable if we wanted to use Concourse.
Another problem was the multitude of MSBuild versions. Microsoft unfortunately did not make this easy, and we had to create custom variables in the Concourse image where we kept the paths to the individual MSBuild versions. And finally, we come to the reason why IIS was installed on the Concourse image.
After deployment, the application warm-up took a very long time, which made it difficult to run acceptance tests and made horizontal scaling less smooth than we desired, so we decided to pre-compile the ASP.NET pages during the build process, which is what required IIS. Unfortunately, the pre-compilation did not turn out to be as fast as we had assumed, because all pages were parsed, including those currently blocked by a feature toggle or simply not in use at the time.
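The pre-compilation itself can be done with the aspnet_compiler tool that ships with the .NET Framework; a rough example of the kind of command we mean (paths are placeholders):

```
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe ^
    -v / -p C:\build\WebApp C:\build\WebApp_precompiled
```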
We reached the point at which the application was ready to be uploaded to our PaaS. In Cloud Foundry, to run ASP.NET applications we needed to use the HWC buildpack, which uses the Microsoft Hosted Web Core API to run its own IIS instance and host a single application in a single web domain.
To perform the Cloud Foundry push, we needed a deployment manifest written in YAML. Of course, none of us wanted to hardcode this file in the application code; we wanted to create it dynamically based on environment variables. Unfortunately, after a day of research it turned out that the only PowerShell module that was maintained and supported at the time did not provide sufficient functionality, so we used Python to generate the YAML files. After preparing the manifest, uploading the application was only a formality which, thanks to the good integration between Concourse and CF, was smooth and problem-free. After uploading the application to Cloud Foundry, the first basic tests showed that the application did not work and some refactoring was required.
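For reference, the kind of manifest we generate for an HWC-hosted application looks roughly like this (the application name, stack name and values below are illustrative, not our real configuration):

```yaml
applications:
  - name: legacy-webforms-app
    stack: windows2012R2          # the Windows Server 2012 R2 cell stack
    buildpack: hwc_buildpack
    instances: 2
    memory: 2G
    env:
      ENVIRONMENT_NAME: development
```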
Refactoring
The first problem appeared when running more than two instances of the application: ViewState decryption failed after a postback whenever a user, after submitting a form, was routed to a different instance. To fix it, we had to set the machineKey at the application level.
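A shared machineKey in web.config looks roughly like this (the key values are placeholders; generate your own):

```xml
<system.web>
  <!-- Every instance must use the same keys so that ViewState encrypted
       by one instance can be decrypted by another. -->
  <machineKey validationKey="REPLACE-WITH-GENERATED-VALIDATION-KEY"
              decryptionKey="REPLACE-WITH-GENERATED-DECRYPTION-KEY"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>
```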
Once the application had been deployed and the first manual tests had passed, it was time to add automated tests; for this we used Selenium and headless Chrome. During the implementation of the tests we did not come across any bigger problems. The only things worth mentioning were a problem with the UpdatePanel control, which could easily be bypassed with custom JavaScript, and a problem with checkboxes: in headless mode the Select method did not work and we had to work around it by sending a space bar character instead. Selenium was also the reason why Chrome had to be installed on the Windows image for Concourse.
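A minimal sketch of the checkbox workaround in C# Selenium, assuming headless Chrome and a hypothetical page URL and element id:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class CheckboxWorkaround
{
    static void Main()
    {
        // Run Chrome in headless mode, as in our acceptance tests.
        var options = new ChromeOptions();
        options.AddArgument("--headless");

        using (IWebDriver driver = new ChromeDriver(options))
        {
            driver.Navigate().GoToUrl("https://myapp.example.com/form.aspx"); // hypothetical URL

            // Clicking/selecting the checkbox did not work in headless mode,
            // so we send a space bar key press to the element instead.
            IWebElement checkbox = driver.FindElement(By.Id("chkAcceptTerms")); // hypothetical id
            checkbox.SendKeys(Keys.Space);
        }
    }
}
```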
Another piece of code that required refactoring was the usage of RequestContext. Keep in mind that the Cloud Foundry router changes the request quite significantly (e.g. the ports), so redirects or the famous ASP.NET ~ (tilde) path resolution would not work properly. We also had to change all the elements in the application that used LocalMachine as a store (e.g. for certificates); in our case it was enough to switch to CurrentUser (PCF always creates a new user for each application instance, and we took advantage of this for certificates).
The biggest problem, which unfortunately could not be fixed, was Windows Identity, mainly because there is no way to use this form of authentication in Cloud Foundry. The only workaround is to use Forms Authentication and communicate with Active Directory over the LDAP protocol.
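A minimal sketch of that workaround, assuming a placeholder domain name and the System.DirectoryServices.AccountManagement API (an illustration of the idea, not our exact code):

```csharp
using System.DirectoryServices.AccountManagement;
using System.Web.Security;

public static class AdFormsLogin
{
    // Validates the user against Active Directory over LDAP and, if successful,
    // issues a Forms Authentication cookie. "corp.example.com" is a placeholder domain.
    public static bool SignIn(string userName, string password)
    {
        using (var context = new PrincipalContext(ContextType.Domain, "corp.example.com"))
        {
            if (!context.ValidateCredentials(userName, password))
                return false;

            FormsAuthentication.SetAuthCookie(userName, createPersistentCookie: false);
            return true;
        }
    }
}
```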
The final change in the application was creating a custom health endpoint, because ASP.NET Web Forms has no ready-made solution for monitoring and debugging applications. However, you must remember that all pages, including our health endpoints, will not be available if there is a problem in global.asax, web.config or the handlers, so debugging this type of problem has to be done outside of the application.
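Such a health endpoint can be as simple as a plain IHttpHandler returning HTTP 200; a minimal sketch (the class name and route are ours):

```csharp
using System.Web;

// Returns HTTP 200 with a short body so the platform (or a monitoring tool)
// can check that the application is up. Registered in web.config under
// <system.webServer><handlers> for a path such as /health.
public class HealthCheckHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = 200;
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }
}
```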
After implementing the tests, we could say that we had the first phase of migration done with a working application on CF – the first successful milestone.
Second milestone
It was time for milestone two. The application was working, but only in one environment, and we wanted our application to be deployable to many different environments, with the configuration stored in environment variables.
Unfortunately, ASP.NET, unlike ASP.NET Core, does not allow you to configure the application after it has started. The web.config file must be ready at the moment the application is launched, and if you change it, for example from global.asax, the application will restart.
We saw two ways of handling this issue:
- A massive application refactor to limit or eliminate the web.config files (which would be very time-consuming, or almost impossible in the case of sessions or certificates).
- Preparing the entire environment and the necessary files (web.config, certificates) before starting the HWC server.
We tested both approaches with a proof of concept, and option number two, a custom buildpack, was the winner.
Our custom buildpack is nothing more than a set of PowerShell scripts that run before HWC starts and prepare the environment and the application. On Windows Server 2012 R2, Cloud Foundry cannot use Docker containers to separate application instances; instead, it creates a separate user for each application instance.
Steps:
The first step is to read and parse all the configuration from the environment variables and download the necessary files from the Git server (credit to TFS for making a highly available HTTP API available).
Part of the configuration kept in Git are XDT (XML Document Transform) files, which we use to create the appropriate entries in our web.config file.
After preparing the web.config file, the scripts install the necessary certificates in the user store and map the network drives, all of course in the context of that user.
These few simple workarounds were enough to separate the application code from the environment configuration; a rough sketch of such a pre-start script is shown below.
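The sketch assumes placeholder paths, environment variable names and file share, and uses the Microsoft.Web.XmlTransform library for the XDT step; it illustrates the idea rather than our exact scripts.

```powershell
# Read configuration that Cloud Foundry injects as environment variables.
$environmentName = $env:ENVIRONMENT_NAME          # e.g. "qa", "uat", "prod"
$configRepo      = $env:CONFIG_GIT_URL            # Git repository with xdt files, certs, etc.

# Download the environment-specific configuration from the Git server.
git clone --depth 1 $configRepo "C:\config\$environmentName"

# Apply the xdt transform to produce the final web.config.
Add-Type -Path "C:\buildpack\Microsoft.Web.XmlTransform.dll"
$doc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument
$doc.PreserveWhitespace = $true
$doc.Load("C:\app\Web.config")
$transform = New-Object Microsoft.Web.XmlTransform.XmlTransformation("C:\config\$environmentName\Web.$environmentName.config")
$transform.Apply($doc) | Out-Null
$doc.Save("C:\app\Web.config")

# Install the certificate in the current user's store (PCF creates a dedicated
# user per application instance, so CurrentUser is effectively per instance).
$pfxPassword = ConvertTo-SecureString $env:CERT_PASSWORD -AsPlainText -Force
Import-PfxCertificate -FilePath "C:\config\$environmentName\client.pfx" `
                      -CertStoreLocation Cert:\CurrentUser\My `
                      -Password $pfxPassword | Out-Null

# Map the network drive in the context of that user.
New-PSDrive -Name "R" -PSProvider FileSystem -Root "\\fileserver\reports" -Persist
```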
Once the configuration was separated, we could easily take care of releasing the application and deploying it to all environments: QA, UAT and production. All you have to do is set up proper versioning (we used semantic versioning), create a separate set of tests that can be run on production, and prepare blue-green deployment scripts that will, for example, switch the routing to the newly created environment once it is stable and remove the previous version.
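Such a blue-green switch with the cf CLI boils down to a few commands along these lines (application names, domain and hostname are placeholders):

```
cf push legacy-app-green -f manifest.yml                                  # deploy the new version next to the old one
# run the production smoke tests against legacy-app-green here
cf map-route   legacy-app-green apps.example.com --hostname legacy-app   # start sending traffic to the new version
cf unmap-route legacy-app-blue  apps.example.com --hostname legacy-app   # drain the old version
cf delete legacy-app-blue -f                                              # remove the previous version
```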
The application has been migrated, it is running on many instances and it is independent of the environment, but can we call it cloud-native already?
Unfortunately not, and some more work is required to make it happen. The most important changes that still lie ahead, and that can be made at a small cost in ASP.NET Web Forms, are adding telemetry and systematizing the logs.
Another step is adding Sonar to the continuous integration pipeline. The entire solution consists of 17 projects (including two web applications) that depend on each other, so any change forces a deployment of all the applications; it should therefore be broken down into a few smaller solutions, with the shared libraries published to NuGet.
The database is not the best or most efficient place to store sessions, so the absolute minimum is to move to Redis and to stop using the session at all in new features. New features should also be created as separate microservices.
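As for the session, moving it to Redis in ASP.NET Web Forms is mostly a configuration change; with Microsoft's RedisSessionStateProvider NuGet package the web.config entry looks roughly like this (host and access key are placeholders):

```xml
<system.web>
  <sessionState mode="Custom" customProvider="RedisSessionProvider">
    <providers>
      <!-- From the Microsoft.Web.RedisSessionStateProvider NuGet package -->
      <add name="RedisSessionProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="redis.example.com"
           port="6379"
           accessKey="REPLACE-WITH-ACCESS-KEY"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```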
Summary
To sum up, migrating legacy .NET applications to Pivotal Cloud Foundry can pose plenty of challenges that even the most experienced specialists may have to face and tackle. I hope that the process that I presented and the experience that I shared will help many of you overcome them.