The hidden cost of overengineering microservices: How we cut cloud costs by 82%

Kostiantyn Honcharenko, Software Engineer
April 11, 2025 • 5 min read

When microservices are overused, complexity and costs skyrocket. Here’s how we consolidated 25 services into 5, simplifying the architecture and slashing cloud spend without sacrificing stability.

It’s hard to predict exactly how microservice architecture will evolve, what pros and cons will surface, and what long-term impact it will have. Microservices can offer significant benefits — like scalability, independent deployments, and improved fault isolation — but they also introduce hidden challenges, such as increased complexity, communication overhead, and maintenance costs.

While this architectural approach brings flexibility in managing systems, prioritizing critical components, and streamlining release and testing processes, it won’t magically fix everything — architecture still needs to make sense. Applying the wrong architecture can create more problems than it solves. Poorly designed microservices may lead to inefficiencies, tight coupling in unexpected places, and operational overhead that outweighs their advantages.

Entry point: reclaiming architectural simplicity

The project we took on was an example of microservice architecture applied without tailoring it to the actual shape and needs of the system. Relatively small and simple applications were over-decoupled. Not only were different modules and domains split into separate services, but even individual layers — such as REST API, services containing business logic, and database repositories — were extracted into separate microservices. This is a classic case of solving a simple problem with a complex tool, without adapting to the context.

Our mission was to refactor the system not just at the code level but at the architectural level, with a primary focus on reducing long-term maintenance costs. To achieve this, we decided to retain the microservice approach, but at a more pragmatic level of granularity. Instead of 25 microservices, we consolidated the system into just 5 thoughtfully grouped services, reduced cache instances from 3 to 1, and merged 10 databases into 5.

Auditing the system

Before making any decisions, we conducted a thorough audit of the system’s architecture, application performance, efficiency, and overall cost. Looking at the raw architectural diagram alone is rarely enough — we wanted to observe the system in action and pay close attention to key metrics. This live analysis provided critical insights into configuring the new applications to better meet the system's original requirements while reducing operational costs.

Cloud Provider access

To truly understand a system’s architecture, it’s essential to have access to the cloud provider's environment — with a wide set of permissions. This level of visibility pays off significantly. The more detailed your understanding at this stage, the more opportunities you uncover for optimization and cost savings during consolidation.

Monitoring tools access

Most systems include monitoring tools to track their health and performance. These insights help identify which metrics are most critical for the system. Depending on the use case, the key factor might be computing power, memory usage, instance count, or concurrency. In our case, we discovered that some microservices were being unnecessarily autoscaled: CPU usage was rising not because of a lack of resources, but because requests were accumulating in downstream microservices in the chain that performed heavy calculations and interacted with external APIs. Understanding these patterns enabled us to make informed decisions about application container configurations and autoscaling strategies.

Refactoring, consolidating, and optimizing cloud architecture

We successfully consolidated 25 microservices into 5 independent, self-sufficient applications, each backed by one of 5 standardized databases (down from a previously fragmented set of 10), with a single cache instance instead of 3. Throughout this transformation, we stuck to a core refactoring principle: system inputs and outputs must remain unchanged. Internally, however, the architecture and data flow were redesigned to improve efficiency and maintainability.

We carefully defined domain boundaries to determine which services could be merged. In most cases, previously separated layers (REST proxies, service logic, and repositories) were brought together in a unified application within a single domain. Some applications required database migrations, resulting in consolidated databases structured into multiple schemas to preserve legacy boundaries.
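
The article does not name the technology stack involved, so the sketch below is purely illustrative (Python, with a made-up "orders" domain). It shows the consolidation idea: a REST facade, business logic, and a repository that previously ran as three network-separated microservices become in-process layers of one deployable, while the external contract stays unchanged.

```python
# Illustrative sketch only -- the original stack is not named in the article.
# Three formerly separate microservices (REST proxy, service, repository)
# become three in-process layers of a single consolidated application.

from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    total: float


class OrderRepository:
    """Persistence layer -- previously its own 'repository' microservice."""

    def __init__(self) -> None:
        self._store: dict[str, Order] = {}

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

    def get(self, order_id: str) -> Order | None:
        return self._store.get(order_id)


class OrderService:
    """Business logic -- previously its own 'service' microservice."""

    def __init__(self, repository: OrderRepository) -> None:
        self._repository = repository

    def place_order(self, order_id: str, total: float) -> Order:
        order = Order(order_id=order_id, total=round(total, 2))
        self._repository.save(order)
        return order


class OrderApi:
    """Thin HTTP-facing facade -- previously a separate 'REST proxy' service.

    Request and response shapes stay the same; only the hop between layers
    turns from a network call into an in-process call.
    """

    def __init__(self, service: OrderService) -> None:
        self._service = service

    def post_order(self, payload: dict) -> dict:
        order = self._service.place_order(payload["order_id"], payload["total"])
        return {"order_id": order.order_id, "total": order.total}


if __name__ == "__main__":
    api = OrderApi(OrderService(OrderRepository()))
    print(api.post_order({"order_id": "A-1", "total": 19.999}))
```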

Although we estimated resource requirements for the new services, production behavior can be unpredictable — especially when pre-launch workload testing isn't possible. To stay safe, we provisioned a performance buffer to handle unexpected spikes.

While cost reduction was our main goal, we knew we were dealing with customer-facing apps where stability and user experience come first. That’s why we took a safe and thoughtful approach — focusing on smart consolidation and optimization without risking reliability. Our goal wasn’t just to cut costs, but to do it in a way that also improved the system without impacting end-users.

Challenges and risks of architecture refactoring

Limited business domain knowledge

It’s a tough challenge when you're working with applications and domains without deep insight into the business logic. On one hand, it wasn’t strictly required since we were operating on a higher architectural level. But every time we needed to test and fix issues after consolidation, we had to investigate from scratch — often without clear guidance or domain expertise.

Lack of testing opportunities

In maintenance-phase projects, it's common that dedicated QA support or testers with deep system knowledge aren’t available — which is totally understandable. At this point, we often rely on the work done by previous developers: verifying what types of tests exist, how well they cover the code and business logic, and how effective they are at catching real issues.

Parallel consolidation limitations

The original system’s granularity made it difficult for more than one developer to work on consolidating a single microservice simultaneously. Typically, each domain was handled by one developer, but in some cases, having multiple people working together could have helped prevent issues during such a complex process.

Backward compatibility

Every consolidated application had to be 100% backward-compatible with the pre-consolidation microservices to allow for rollbacks if needed. That meant we couldn’t introduce any breaking changes during the transition — adding extra pressure to get things right the first time.
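
One simple way to guard that guarantee is to replay the same requests against the old and new deployments and compare the responses. The sketch below is a hypothetical illustration in Python (service URLs and request paths are invented), not the project's actual test suite.

```python
# Hypothetical backward-compatibility check: replay the same requests against
# the legacy microservice and its consolidated replacement and require
# identical JSON responses. Base URLs and paths below are made up.

import json
import urllib.request

LEGACY_BASE = "http://legacy-orders.internal"   # pre-consolidation service
CONSOLIDATED_BASE = "http://orders.internal"    # new consolidated application

SAMPLE_REQUESTS = ["/api/orders/A-1", "/api/orders/A-2/status"]


def fetch_json(base_url: str, path: str) -> dict:
    with urllib.request.urlopen(base_url + path, timeout=10) as response:
        return json.loads(response.read())


def check_compatibility() -> list[str]:
    mismatches = []
    for path in SAMPLE_REQUESTS:
        old = fetch_json(LEGACY_BASE, path)
        new = fetch_json(CONSOLIDATED_BASE, path)
        if old != new:
            mismatches.append(path)
    return mismatches


if __name__ == "__main__":
    failed = check_compatibility()
    print("compatible" if not failed else f"mismatched paths: {failed}")
```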

Distributed configuration

The old system’s over-granular design scattered configuration across multiple services and a config server. Rebuilding that into a unified configuration required careful investigation to locate, align, and centralize everything in one application.
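
As a rough illustration of that centralization step, the sketch below merges key-value configuration scattered across several hypothetical per-service property files into one map and flags conflicting keys for manual alignment; the file names and the simple key=value format are assumptions.

```python
# Hypothetical sketch: merge configuration scattered across per-service
# property files into a single map, reporting keys whose values conflict.

SOURCES = [
    "orders-rest-proxy.properties",
    "orders-service.properties",
    "orders-repository.properties",
]


def load_properties(path: str) -> dict[str, str]:
    """Parse simple 'key=value' lines, ignoring blanks and '#' comments."""
    values: dict[str, str] = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values


def merge(sources: list[str]) -> tuple[dict[str, str], dict[str, list[str]]]:
    merged: dict[str, str] = {}
    conflicts: dict[str, list[str]] = {}
    for path in sources:
        for key, value in load_properties(path).items():
            if key in merged and merged[key] != value:
                conflicts.setdefault(key, [merged[key]]).append(value)
            merged[key] = value
    return merged, conflicts


if __name__ == "__main__":
    unified, clashes = merge(SOURCES)
    print(f"{len(unified)} keys merged, {len(clashes)} conflicts to resolve: {sorted(clashes)}")
```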

End-user impact

Since the system was customer-facing, any bug or functionality gap after consolidation could directly affect users. This raised the stakes for every change and reinforced the need for a cautious, thoughtful rollout.

Architectural refactoring comes with risks, and understanding them upfront is key to delivering both system reliability and cost efficiency.

What we gained: lower costs, higher reliability, and a sustainable system

Cloud cost reduction

After consolidation, overall cloud infrastructure costs were reduced by 82%. This was a direct result of architectural refactoring, microservices reduction, and more efficient resource usage.

Monitoring tool efficiency

The new architecture also lowered the load on external monitoring tools, leading to a drop of up to 70% in related costs.

Indirect cost savings

While we didn’t have full access to some billing metrics, we know that many tools charge based on factors like request volume, microservice count and internal traffic. Simplifying the core of the system brought savings across these areas too.

Simplified maintenance

Shrinking from 25 microservices to 5 dramatically reduced the effort required for feature development, domain-specific releases, and CI/CD pipeline management. Once we removed the facade of complexity, it became clear the system wasn’t as complicated as it seemed. Onboarding new developers is now much faster and easier — which also opens the door to rethinking how many engineers are truly needed for ongoing support.

Zero downtime deployment

Since we were working with a customer-facing system, minimizing downtime for each release was critical. By consolidating functionality into 5 clearly defined, domain-scoped applications, we made it possible to achieve zero-downtime deployments in production.

Reduced complexity

Consolidation clarified how the system works and gave developers a wider view of its components. With cohesive domains and logic housed in fewer applications, it’s now easier to follow business flows, implement efficient solutions, debug issues, and write effective tests.

---

Every decision made at a given moment usually feels like the right one — and often it is. But if something remains important over time, it’s worth revisiting that decision in light of new context and evolving circumstances. As our case clearly shows, taking the time to reevaluate can truly pay off — both literally and figuratively.

Check related articles

Choosing the right approach: How generative AI powers legacy system modernization

In today's rapidly evolving digital landscape, the need to modernize legacy systems and applications is becoming increasingly critical for organizations aiming to stay competitive. Once the backbone of business operations, legacy systems are now potential barriers to efficiency, innovation, and security.

As technology progresses, the gap between outdated systems and modern requirements widens, making modernization not just beneficial but essential.

This article provides an overview of different legacy system modernization approaches, including the emerging role of generative AI (GenAI). We will explore how GenAI can enhance this process, making it not only faster and more cost-effective but also better aligned with current and future business needs.

Understanding legacy systems

Legacy systems are typically maintained due to their critical role in existing business operations. They often feature:

  •  Outdated technology stacks and programming languages.
  •  Inefficient and unstable performance.
  •  High susceptibility to security vulnerabilities due to outdated security measures.
  •  Significant maintenance costs and challenges in sourcing skilled personnel.
  •  Difficulty integrating with newer technologies and systems.

Currently, almost 66% of enterprises continue to rely on outdated applications to run their key operations, and 60% use them for customer-facing tasks.

Why is this the case?

Primarily because of a lack of understanding of the older technology infrastructure and the technological difficulties associated with modernizing legacy systems. However, legacy application modernization is often essential. In fact, 70% of global CXOs consider mainframe and legacy modernization a top business priority.

The necessity of legacy software modernization

As technology rapidly evolves, businesses find it increasingly vital to update their aging infrastructure to keep pace with industry standards and consumer expectations. Legacy systems modernization is crucial for several reasons:

  •  Security Improvements: Outdated software dependencies in older systems often lack updates, leaving critical bugs and security vulnerabilities unaddressed.
  •  Operational Efficiency: Legacy systems can slow down operations with their inefficiencies and frequent maintenance needs.
  •  Cost Reduction: Although initially costly, the long-term maintenance of outdated systems is often more expensive than modernizing them.
  •  Scalability and Flexibility: Modern systems are better equipped to handle increasing loads and adapt to changing business needs.
  •  Innovation Enablement: Modernized systems can support new technologies and innovations, allowing businesses to stay ahead in competitive markets.

Modernizing legacy code presents an opportunity to address multiple challenges from both a business and an IT standpoint, improving overall organizational performance and agility.

Different approaches to legacy modernization

When it comes to modernizing legacy systems, there are various approaches available to meet different organizational needs and objectives. These strategies can vary greatly depending on factors such as the current state of the legacy systems, business goals, budget constraints, and desired outcomes.

Some modernization efforts might focus on minimal disruption and cost, opting to integrate existing systems with new functionalities through APIs or lightly tweaking the system to fit a new operating environment. Other approaches might involve more extensive changes, such as completely redesigning the system architecture to incorporate advanced technologies like microservices or even rebuilding the system from scratch to meet modern standards and capabilities.

Each approach has its own set of advantages, challenges, and implications for the business processes and IT landscape. The choice of strategy depends on balancing these factors with the long-term vision and immediate needs of the organization.

Rewriting legacy systems with generative AI

One of the approaches to legacy system modernization involves rewriting the system's codebase from scratch while aiming to maintain or enhance its existing functionalities. This method is especially useful when the current system no longer meets the evolving standards of technology, efficiency, or security required by modern business environments.

By starting anew, organizations can leverage the latest technologies and architectures, making the system more adaptable and scalable to future needs.

Generative AI is particularly valuable in this context for several reasons:

  •  Uncovering hidden relations and understanding embedded business rules: GenAI supports the analysis of legacy code to identify complex relationships and dependencies crucial for maintaining system interactions during modernization. It also deciphers embedded business rules, ensuring that vital functionalities are preserved and enhanced in the updated system.
  •  Improved accuracy: GenAI enhances the accuracy of the modernization process by automating tasks such as code analysis and documentation, which reduces human errors and ensures a more precise translation of legacy functionalities to the new system.
  •  Optimization and performance: With GenAI, the new code can be optimized for performance from the outset. It can integrate advanced algorithms that improve efficiency and adaptability, which are often lacking in older systems.
  •  Reducing development time and cost: The automation capabilities of GenAI significantly reduce the time and resources needed for rewriting systems. Faster development cycles and fewer human hours needed for coding and testing lower the overall cost of the modernization project.
  •  Increasing security measures: GenAI can help implement advanced security protocols in the new system, reducing the risk of data breaches and associated costs. This is crucial in today's digital environment, where security threats are increasingly sophisticated.

By integrating GenAI in this modernization approach, organizations can achieve a more streamlined transition to a modern system architecture, which is well-aligned with current and future business requirements. This ensures that the investment in modernization delivers substantial returns in terms of system performance, scalability, and maintenance costs.

Legacy system modernization with generative AI

How generative AI fits into the legacy system modernization process

Generative AI enables faster speeds and provides a deeper understanding of the business context, which significantly boosts development across all phases, from design and business analysis to code generation, testing, and verification.

Here's how GenAI transforms the modernization process:

1. Analysis Phase

Automated documentation and in-depth code analysis: GenAI's ability to assist in automatic documenting, reverse engineering, and extracting business logic from legacy codebases is a powerful capability for modernization projects. It overcomes the limitations of human memory and outdated documentation to help ensure a comprehensive understanding of existing systems before attempting to upgrade or replace them.

Business-context awareness: By analyzing the production source code directly, GenAI helps comprehend the embedded business logic, which speeds up the migration process and improves the safety and accuracy of the transition.

2. Preparatory Phase

Tool compatibility and integration: GenAI tools can identify and integrate with many compatible development tools, recommend necessary plugins or extensions within supported environments, and enhance the existing development environment by automating routine tasks and providing intelligent code suggestions to support effective modernization efforts.

LLM-assisted knowledge discovery: Large Language Models (LLMs) can be used to delve deep into a legacy system’s data and codebase to uncover critical insights and hidden patterns. This knowledge discovery process aids in understanding complex dependencies, business logic, and operational workflows embedded within the legacy system. This step is crucial for ensuring that all relevant data and functionalities are considered before beginning the migration, thereby reducing the risk of overlooking critical components.
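
As a rough illustration of what such LLM-assisted discovery can look like, the sketch below sends one legacy source file to a chat-completion model and asks for the business rules and dependencies it embeds. The SDK, model name, prompt, and file name are assumptions, not a prescribed toolchain.

```python
# Illustrative sketch only: the article does not prescribe a specific model
# or SDK. This assumes the OpenAI Python client and a hypothetical model
# name; swap in whatever LLM your organization has approved.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are assisting a legacy modernization audit. Read the following "
    "legacy source file and list, in plain language:\n"
    "1. The business rules it implements.\n"
    "2. External systems or tables it depends on.\n"
    "3. Anything that looks like dead or unreachable code.\n\n"
    "Source:\n{source}"
)


def summarize_legacy_file(path: str, model: str = "gpt-4o") -> str:
    """Return an LLM-written summary of one legacy source file."""
    with open(path, encoding="utf-8", errors="replace") as handle:
        source = handle.read()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(source=source)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_legacy_file("LEGACY_ORDERS.cbl"))  # hypothetical file name
```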

3. Migration/Implementation Phase

Code generation and conversion: Using LLMs, GenAI aids in the design process by transforming outdated code into contemporary languages and frameworks, thereby improving the functionality and maintainability of applications.

Automated testing and validation: GenAI supports the generation of comprehensive test cases to ensure that all new functionalities are verified against specified requirements and that the migrated system operates as intended. It helps identify and resolve potential issues early, ensuring a high level of accuracy and functionality before full deployment.

Modularization and refactoring: GenAI can also help break down complex, monolithic applications into manageable modules, enhancing system maintainability and scalability. It identifies and suggests strategic refactoring for areas with excessive dependencies and scattered functionalities.

4. Operations and Optimization Phase

AI-driven monitoring and optimization: Once the system is live, GenAI continues to monitor its performance, optimizing operations and predicting potential failures before they occur. This proactive maintenance helps minimize downtime and improve system reliability.

Continuous improvement and DevOps automation: GenAI facilitates continuous integration and deployment practices, automatically updating and refining the system to meet evolving business needs. It ensures that the modernized system is not only stable but also continually evolving with minimal manual intervention.

Across All Phases

  •  Sprint execution support: GenAI enhances agile sprint executions by providing tools for rapid feature development, bug fixes, and performance optimizations, ensuring that each sprint delivers maximum value.
  •  Security enhancements and compliance testing: It identifies security vulnerabilities and compliance issues early in the development cycle, allowing for immediate remediation that aligns with industry standards.
  •  Predictive analytics for maintenance and monitoring: It also helps anticipate potential system failures and performance bottlenecks using predictive analytics, suggesting proactive maintenance and optimizations to minimize downtime and improve system reliability.

Should enterprises use GenAI in legacy system modernization?

To determine if GenAI is necessary for a specific modernization project, organizations should consider the complexity and scale of their legacy systems, the need for improved accuracy in the modernization process, and the strategic value of faster project execution.

If the existing systems are cumbersome and deeply intertwined with critical business operations, or if security, speed, and accuracy are priorities, then GenAI is likely an indispensable tool for ensuring successful modernization with optimal outcomes.

Conclusion

Generative AI significantly boosts the legacy system modernization process by introducing advanced capabilities that address a broad range of challenges. From automating documentation and code analysis in the analysis phase to supporting modularization and system integration during implementation, this technology provides critical support that speeds up modernization, ensures high system performance, and aligns with modern technological standards.

GenAI integration not only streamlines processes but also equips organizations to meet future challenges effectively, driving innovation and competitive advantage in a rapidly evolving digital landscape.


Challenges in migrating legacy .NET apps into Pivotal Cloud Foundry – customer case study

Migrating legacy .NET apps to Pivotal Cloud Foundry poses many challenges for most developers and companies. In this article, I will walk you step by step through the migration process based on my own experience and show you how to do it.

My first objective is to show that the migration is possible and requires only a few changes in the application code base. The second is to share the knowledge that I gained while working on that project. Let’s forget that .NET Core even exists and discuss only the world of the .NET Framework. This article is about migrating existing ASP.NET Web Forms applications to Cloud Foundry.

Because one of the third-party libraries works only with Windows Server 2012 R2, we could not use Docker, which is supported only on Windows Server 2016; this made the migration more complex. Fortunately, with some workarounds, the project was completed successfully.

The first goal of the migration was to implement full continuous integration based on Cloud Foundry: building the application, running unit tests, deploying the application to the development environment on Cloud Foundry, and finally running acceptance tests against that environment.

Another requirement was to allow the operator to easily create a release package from the last version that passed the acceptance tests, which could then be deployed to all environments such as QA, UAT, and Production. The final goal was to implement blue-green deployment in production.

After setting our goals, it was time to review the existing process and check what we could reuse from the existing infrastructure and processes. Since all steps were manual, nothing could be reused. The build was made on a developer machine, and all deployments to all production instances were manual. The only good thing was that the application had already been scaled vertically, which gave us some hope of reducing the amount of necessary refactoring.

Once we had gotten familiar with the process, it was time to get to know the application itself. It turned out that the application was already very mature: it was built with ASP.NET Web Forms, and some parts of the code dated back to .NET 1.1. The application required the Oracle Provider, Crystal Reports, and a custom archiving tool installed on the machine. Unfortunately, despite our hopes, it was not sessionless; the session was stored in the database.

The next identified problem was the use of Active Directory; one of the web applications in the solution was using Windows Identity. Finally, we learned that some of the sites were using mutual authentication and that the servers had private keys loaded. Having gathered all the requirements, we could begin the migration.

The process of migrating legacy .NET apps

The first important decision was the choice of CI/CD tools. In the end, we narrowed the choice down to two: TFS and Concourse.

TFS offers an integrated work tracker, very good support for NuGet, and Active Directory integration. Unfortunately, the biggest disadvantage of TFS on-premises is the lack of support for describing the CI/CD process as code or scripts (i.e., YAML files). VSTS already has such support for CI, so there is hope that the on-premises version (TFS) will get it as well.

Concourse’s indisputable advantage is its use of Docker. Each time a pipeline is started, it creates a new container instance, so we can be sure that our build environment is immutable. Concourse provides native support for Cloud Foundry (as a resource type) and lets us use the same service for authorization as Cloud Foundry. Unlike TFS, it allows us to connect our process to several resources (Git repositories) and to observe and trigger builds when any of them changes. The biggest advantage, however, is the very simple representation of pipelines in YAML files. As for drawbacks, the biggest issue is the lack of a NuGet resource, so we cannot automatically launch the pipeline when a new package version appears.

After considering all the pros and cons, we chose Concourse. With the CI/CD tool selected, it was time to prepare Cloud Foundry and Concourse to work with our application. And since we were forced to use Windows Server 2012 R2, we needed to prepare Windows images for Concourse and PCF.

For the Concourse image, this meant uploading MSBuild, Web Deploy, and all third-party dependencies; for the PCF image, all we needed were the third-party dependencies. Once we got through the entire setup process and chose all the tools, we could start working with Concourse on building the solution. Unfortunately, at the very beginning it turned out that the application needed cleaning and some small refactoring: the dependencies were so messed up that the application could not be built on a clean machine and required the developer to configure the machine in a very specific way, which was unacceptable if we wanted to use Concourse.

Another problem was the multitude of MSBuild versions. Microsoft unfortunately did not make this easy, and we had to create custom variables in the Concourse image where we kept the paths to the MSBuild versions. And finally, we come to the reason for installing IIS on the Concourse image.

After deploying the application, the warm-up took a very long time, which made it difficult to run acceptance tests and made vertical scaling less smooth than we desired, so we decided to pre-compile the ASP.NET pages during the build process, which required IIS. Unfortunately, the pre-compilation did not turn out to be as fast as we assumed, because all pages were parsed, including those currently blocked by a feature toggle or not in use at the time.

We reached the point at which the application was ready to be uploaded to our PaaS. In Cloud Foundry, to run ASP.NET applications we needed to use the HWC buildpack, which uses the Microsoft Hosted Web Core API to run its own IIS instance and host a single application in a single web domain.

To perform the Cloud Foundry push, we needed a deployment manifest written in YAML. Of course, none of us wanted to hardcode this file in the application code; we wanted to create it dynamically based on environment variables. Unfortunately, after a one-day research spike it turned out that the only PowerShell module that was maintained and supported at the time did not provide sufficient functionality, so we used Python to create the YAML files. After preparing the manifest, uploading the application was only a formality which, thanks to the good integration between Concourse and CF, was smooth and problem-free. After uploading the application to Cloud Foundry, the basic tests showed that the application did not work and that some refactoring was required.
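
A minimal sketch of that idea, generating manifest.yml from environment variables with Python and PyYAML, might look like the following; the variable names, defaults, buildpack, and stack values are illustrative assumptions rather than the project's actual configuration.

```python
# Minimal sketch: build the Cloud Foundry deployment manifest from
# environment variables instead of hardcoding it in the repository.
# All names and defaults below are illustrative assumptions.

import os
import yaml  # PyYAML


def build_manifest() -> dict:
    return {
        "applications": [
            {
                "name": os.environ.get("APP_NAME", "legacy-webforms"),
                "instances": int(os.environ.get("APP_INSTANCES", "2")),
                "memory": os.environ.get("APP_MEMORY", "1G"),
                "buildpacks": [os.environ.get("APP_BUILDPACK", "hwc_buildpack")],
                "stack": os.environ.get("APP_STACK", "windows2012R2"),
                "env": {
                    "TARGET_ENVIRONMENT": os.environ.get("TARGET_ENVIRONMENT", "Development"),
                },
            }
        ]
    }


if __name__ == "__main__":
    with open("manifest.yml", "w", encoding="utf-8") as handle:
        yaml.safe_dump(build_manifest(), handle, default_flow_style=False, sort_keys=False)
```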

Refactoring

The first problem appeared with more than two application instances: ViewState decryption failed after a postback when a user, after submitting a form, landed on another instance. To fix it, we had to set the machineKey at the application level. After the application had been deployed and the first manual tests had passed, it was time to add automated tests. For this we used Selenium and headless Chrome.

During the implementation of the tests, we did not come across any bigger problems. The only things worth mentioning were the "UpdatePanel" control, which could easily be bypassed using custom JavaScript, and a problem with checkboxes: in headless mode the Select method did not work and needed a workaround of sending a "space bar" character. Selenium was also the reason why Chrome needed to be installed on the Windows image for Concourse.
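
For illustration, here is the checkbox workaround sketched with the Python Selenium bindings (the project presumably used the .NET bindings; the URL and element id are made up): instead of clicking the checkbox, the test sends a space character to it.

```python
# Illustrative sketch with the Python Selenium bindings; the project itself
# presumably used the .NET bindings. URL and element id are hypothetical.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://app.example.com/registration")
    checkbox = driver.find_element(By.ID, "acceptTerms")
    # Selecting the checkbox directly fails in headless mode, so send a
    # space character instead, toggling it the way a keyboard user would.
    checkbox.send_keys(Keys.SPACE)
    assert checkbox.is_selected()
finally:
    driver.quit()
```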

Another piece of code that required refactoring was the usage of RequestContext. Keep in mind that the router changes its content quite significantly (e.g. changing ports), so redirects or the famous ASP.NET ~ (tilde) would not work properly. We also had to change all the elements in the application that used LocalMachine as a store (e.g. for certificates); in our case it was enough to switch to CurrentUser (PCF always creates a new user for each application instance, and we used this fact for certificates).

The biggest problem, which unfortunately could not be fixed, was Windows Identity, mainly because there is no way to use this form of authentication in Cloud Foundry. The only workaround is to use Forms Authentication and communicate with AD over the LDAP protocol. The final change in the application was creating a custom health endpoint, because ASP.NET Web Forms does not have ready-made solutions for monitoring and debugging applications.

However, you must remember that all pages, including our health endpoints, will not be available if there is any problem in global.asax, web.config, or the handlers, and debugging these types of problems must be done outside of the application.

After implementing the tests, we could say that we had the first phase of the migration done, with a working application on CF: the first successful milestone.

Second milestone

It was time for milestone two. The application was working, but only in one environment, and we wanted our applications to be deployable to many different environments, with the configuration stored in environment variables.

Unfortunately, ASP.NET, unlike ASP.NET Core, does not allow you to configure the application after it has started. The web.config file must be ready at the moment the application launches, and if you change it, for example from global.asax, the application will restart.

We saw two ways of handling this issue:

  • Massive application refactoring to limit or eliminate web.config files (which would be very time-consuming or almost impossible in the case of sessions or certificates).
  • Preparing the entire environment and the necessary files (web.config, certificates) before starting the HWC server.

We tested both options with a proof of concept (POC). Option number two, a custom buildpack, was the winner.

Our custom buildpack is nothing more than a set of PowerShell scripts that run before HWC starts and prepare the environment and the application. On Windows Server 2012, Cloud Foundry is not able to use Docker containers to separate application instances; instead, it creates a separate user for each application instance.

Steps:

The first step is to read and parse all configuration from environment variables and download the necessary files from the Git server (credit to TFS here for exposing a highly available HTTP API).

Part of the configuration kept in Git are XDT (XML Document Transform) files, which we used to create the appropriate entries in our web.config file.

After preparing the web.config file, the scripts install the necessary certificates in the user store and map the network drives, of course in the context of that user.

These few simple workarounds were enough to separate the application code from the environment configuration.

Once we had separated the configuration, we could easily take care of releasing the application and deploying it to all environments: QA, UAT, and Production. All you have to do is set up correct versioning (we used semantic versioning), create a separate set of tests that can be run in production, and prepare blue-green deployment scripts that will, for example, switch the routing once the newly created environment is stable and remove the previous version.
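
As a rough sketch of such a blue-green switch, the script below drives the cf CLI from Python: it maps the public route to the freshly deployed "green" application, unmaps it from the previous "blue" one, and finally deletes blue. The application names, domain, and hostname are invented.

```python
# Hypothetical blue-green route switch driving the cf CLI; app names,
# domain, and hostname are illustrative, not the project's actual values.

import subprocess


def run(*args: str) -> None:
    """Echo and run a CLI command, failing loudly on a non-zero exit code."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)


def switch_route(green: str, blue: str, domain: str, hostname: str) -> None:
    run("cf", "map-route", green, domain, "--hostname", hostname)    # send traffic to green
    run("cf", "unmap-route", blue, domain, "--hostname", hostname)   # stop traffic to blue
    run("cf", "delete", blue, "-f")                                  # remove the old version


if __name__ == "__main__":
    switch_route(green="webapp-green", blue="webapp-blue",
                 domain="apps.example.com", hostname="webapp")
```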

The application has been migrated, it is running on multiple instances, and it is independent of the environment. But can we call it cloud-native already?

Unfortunately not, and some more work is required to make it happen. The most important changes that are still ahead, and that can be made at a small cost in ASP.NET Web Forms, are adding telemetry and systematizing the logs.

Another is adding Sonar to the continuous integration pipeline. The entire solution consists of 17 projects (including two web applications) that depend on each other, which in the case of any change forces a deployment of all the applications. It should therefore be broken down into a few smaller solutions, with the shared libraries published to NuGet.

The database is not the best or most efficient place to store sessions, so the absolute minimum is to move to Redis and stop using the session in new features altogether. New features should also be built as separate microservices.

Summary

To sum up, migrating legacy .NET applications to Pivotal Cloud Foundry can pose plenty of challenges that even the most experienced specialists may have to face and tackle. I hope that the process that I presented and the experience that I shared will help many of you overcome them.
