As GenAI adoption grows, LLM projects multiply across departments - each with its own models, data sources, and security posture. The enterprise AI platform provides a single system to host, orchestrate, and govern all of them, reducing time-to-market while maintaining enterprise-grade compliance and control.

Reusable agent blueprints, shared LLM access, and standardized deployment pipelines reduce the effort required to launch new LLM-based projects from weeks to days.
Manage compliance, model access, cost, and security across every department and use case - with full visibility into which models are deployed, who uses them, and how data flows through the system.
Built-in guardrails, filtering, and model governance aligned with EU AI Act requirements — so compliance is embedded in the platform, not bolted on as an afterthought.
A self-service platform with catalogued models, agents, and data sources reduces the manual burden on IT teams and enables business units to discover, request, and deploy AI capabilities independently.
We design and build enterprise AI platforms with plug-in architecture that integrates new agents, data sources, and LLM models into a governed, scalable system. The platform connects to your existing enterprise infrastructure - from data warehouses and CRMs to identity management and CI/CD pipelines.

A central orchestrator handles intent detection, agent selection, context management, and routing - enabling complex, multi-step reasoning and task execution across agent plugins.

A single gateway to all language models - public (OpenAI, Anthropic, Gemini) and private (Ollama, DeepSeek, LLaMA) — with cost metering, rate limiting, and secure access control.

New chatbots and AI agents integrate as plugins into the platform, reusing shared frontends, data connections, and authentication - without building from scratch each time.

Data access through managed Data Applications with role-based access control, filtering, and monitoring - ensuring agents only access data users are authorized to see.

All agents, MCP data sources, and LLM models registered in a single discovery catalog - making it easy to find, request access to, and reuse existing resources.
Track token consumption, API costs, and usage patterns across teams, projects, and models - with dashboards for budget control and chargeback.
Configurable input/output filtering, ethical compliance rules, and content moderation applied consistently across all agents and endpoints.
Full visibility into agent behavior, chain-of-thought reasoning, and model responses - critical for debugging, auditing, and understanding hallucination root causes.
Connect to any combination of public SaaS models and privately hosted models, with the flexibility to route requests based on task requirements, cost, or data sensitivity - a minimal routing sketch follows below.
Automated deployment pipelines for agents and model updates, integrated with existing enterprise CI/CD, logging, and monitoring infrastructure.
Integration with Active Directory and enterprise identity providers for single sign-on and consistent access control across the platform.
Agents trigger external tools and scripts through serverless execution - connecting to APIs, databases, and enterprise systems within workflow steps.
Tools and automation that let teams quickly develop, deploy, and manage agents. The system allows agents to communicate with each other to build better answers in a cost-effective manner.
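To make the routing idea above concrete, here is a minimal sketch of how a gateway might choose a model based on data sensitivity and cost. The class, record, and method names are hypothetical illustrations, not part of the platform's actual API.

```java
import java.util.List;

// Hypothetical illustration of policy-based model routing in an LLM gateway.
public class ModelRouter {

    // A registered model endpoint: name, hosting type, and relative cost per 1k tokens.
    record ModelEndpoint(String name, boolean privatelyHosted, double costPer1kTokens) {}

    // A request carries the prompt plus the policy-relevant attributes.
    record CompletionRequest(String prompt, boolean containsSensitiveData, boolean costSensitive) {}

    private final List<ModelEndpoint> endpoints;

    public ModelRouter(List<ModelEndpoint> endpoints) {
        this.endpoints = endpoints;
    }

    // Sensitive data goes only to privately hosted models; cost-sensitive requests
    // take the cheapest allowed model, other requests take the first allowed match.
    public ModelEndpoint route(CompletionRequest request) {
        return endpoints.stream()
                .filter(e -> !request.containsSensitiveData() || e.privatelyHosted())
                .min((a, b) -> request.costSensitive()
                        ? Double.compare(a.costPer1kTokens(), b.costPer1kTokens())
                        : 0)
                .orElseThrow(() -> new IllegalStateException("No model satisfies the routing policy"));
    }
}
```

A production gateway would layer the metering, rate limiting, and access control described above on top of this routing decision.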

A single platform governing all LLM projects across R&D, customer service, operations, and back-office - replacing fragmented, department-level PoCs with a managed, compliant system.

AI agents connected to Confluence, SharePoint, and internal wikis that provide employees with instant, context-aware answers - with access controlled by their organizational role.

Deploy customer-facing chatbots and voice agents through the platform - with consistent security, compliance, and brand standards across every customer touchpoint.

Agents that query enterprise data warehouses, CRMs, and analytics platforms to deliver actionable insights to business users - without requiring them to write SQL or navigate BI tools.
If you’re running Sybase in 2026, you already know the uncomfortable truth: the platform still works, but the ecosystem around it is quietly contracting. SAP still patches ASE, yet engineering investment, tooling innovation, and community momentum are all flowing toward HANA and the cloud. ASE gets maintenance. Everything else moves on.
For most IT managers and DBA teams, the question is no longer whether to migrate. It’s how to do it without breaking things that have run reliably for 20 years.
When leadership hears “database migration”, they picture moving tables. What’s actually in a Sybase estate is far broader: stored procedures and triggers, platform-specific features, and a web of connected applications accumulated over decades.
The most dangerous category isn’t missing features - it’s features that look identical but behave differently across platforms. ASE and SQL Server share a T-SQL lineage, which creates a false sense of safety. The syntax compiles - but the runtime behavior diverges in ways that pass testing and surface under production load. The most common examples include identity scoping, nested transaction and rollback semantics, locking behavior, and collation defaults.
Beyond syntax, cross-database patterns compound the problem: USE, db..object references, and CIS passthrough are everywhere in ASE estates and break the moment the engine changes. Migration tools like Microsoft's SSMA handle syntax conversion, but they don't detect behavioral divergence. The dangerous gaps aren't in what fails to convert - they're in what converts cleanly but runs differently.
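To make behavioral divergence concrete, the sketch below shows one way to check for it: run the same statement against ASE and SQL Server over JDBC and diff the results. This is an illustrative harness, not how SSMA or G.Tx works; connection details, credentials, and the query are placeholders.

```java
import java.sql.*;
import java.util.*;

// Illustrative only: run the same statement on Sybase ASE (jConnect driver) and
// SQL Server (mssql-jdbc driver) and compare result sets row by row. Divergent
// rows hint at behavioral differences - identity scoping, rollback semantics,
// collation - that compile cleanly on both engines but return different data.
public class BehaviorDiff {

    static List<List<String>> fetch(String url, String user, String password, String sql) throws SQLException {
        List<List<String>> rows = new ArrayList<>();
        try (Connection c = DriverManager.getConnection(url, user, password);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                List<String> row = new ArrayList<>();
                for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                rows.add(row);
            }
        }
        return rows;
    }

    public static void main(String[] args) throws SQLException {
        String sql = "SELECT ..."; // placeholder: a statement exercising a suspect construct
        List<List<String>> ase = fetch("jdbc:sybase:Tds:ase-host:5000/appdb", "user", "pwd", sql);
        List<List<String>> mssql = fetch("jdbc:sqlserver://sql-host;databaseName=appdb;encrypt=true", "user", "pwd", sql);
        System.out.println(ase.equals(mssql) ? "Results match" : "Behavioral divergence detected");
    }
}
```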
A full feature mapping of ASE (versions 15 and 16) against SQL Server and Oracle on Azure across all deployment tiers gives a clearer picture (based on Grape Up G.Tx internal analysis across enterprise migration assessments).
In ASE alone, roughly 30 features have no direct equivalent or simple workaround, but every one of them has a modern replacement approach on Azure - although some require significant architectural redesign rather than direct substitution.

The right target depends on what’s actually in your codebase.
SQL Server has the shorter path for most ASE estates. The T-SQL lineage reduces rewrite volume, and the platform carries fewer high-risk items across the board. Thirty years of divergence still mean real work, particularly around transaction semantics and locking behavior.
Oracle carries higher effort by default - PL/SQL vs. T-SQL is a language rewrite, NULL handling differs, and there is no direct replacement for ASE’s nested transaction rollback semantics.
Deployment tier matters too. Azure VM, Managed Instance, and SQL Database each involve different trade-offs on compatibility, operational overhead, and cost. The right answer depends on your specific feature usage.
The engineers who know the platform deeply - who understand the undocumented behaviors, the operational quirks, the edge cases in the locking model - are retiring. That institutional knowledge compounds the migration effort every year it walks out the door.
A reliable migration starts with knowing exactly what you have. That means automated discovery across your live codebase - versions, features, dependencies, behavioral edge cases - not a manual audit based on what the team remembers.
This is what Grape Up’s G.Tx platform does in the assessment phase. G.Tx runs automated inventory against your environment, maps features against the target platform, identifies behavioral differences that won’t surface in standard testing, and produces a high-level risk report. The same platform then powers execution - code conversion, schema migration, test generation, and validation - so the assessment and the migration run on a single consistent picture of your estate, not on handover documents.
The engagement runs in four phases - Assessment, Feasibility, Proof of Concept, and full-scale migration. Each ends with a deliverable and a client sign-off before the next begins.
You control the pace. Nothing moves to the next phase without your approval.
For most estates, a like-for-like migration is the most pragmatic path — it preserves existing logic and minimizes rewrite scope. But depending on your goals and architectural dependencies, a full redesign may be the better long-term investment. This means decomposing the monolithic database into independent components that communicate with each other rather than relying on shared database logic. The result is a more flexible, extensible, and maintainable architecture that is no longer constrained by the boundaries of a single database engine. The best approach can be decided during the Feasibility phase.
SQL Server on Azure is the recommended target for most Sybase ASE estates. The shared T-SQL lineage reduces rewrite volume and lowers the share of high-risk migration items by approximately 20% compared to Oracle. Oracle remains viable for estates where PL/SQL integration or Oracle-specific features are already part of the architecture, but it carries higher baseline effort. The final choice depends on your codebase - a G.Tx assessment will map your specific feature usage against both targets before you commit.
Timeline depends on estate size and complexity, but the structured phases give reliable checkpoints: Assessment runs 1–2 weeks, Feasibility 2–4 weeks, Proof of Concept varies by scope, and full-scale migration is planned during Feasibility. The phased approach is designed to migrate and validate the most business-critical functionality first, progressively offloading the original system only as each phase proves stable on the target platform. Feasibility analysis is the most reliable way to get an accurate estimate for your specific environment.
The highest-risk category is not missing features - it's behavioral divergence: code that converts cleanly but runs differently in production. Transaction semantics, identity scoping, locking behavior, NULL handling in Oracle, and collation defaults are examples of gaps that may not be caught in testing and only surface under real load. Standard migration tools like SSMA automate schema conversion but are not designed to detect behavioral differences. Automated analysis of your codebase can surface these discrepancies early, making sure they never reach production.
SAP ASE 16.0 reached End of Mainstream Maintenance on December 31, 2025, meaning SAP no longer provides new security patches or fixes for this version. ASE 16.1 retains mainstream support until December 31, 2030, giving organizations on that version more runway, but there are no new ASE versions on SAP's roadmap. New capabilities, cloud-native features, and tooling investment are being directed at HANA and cloud products. The surrounding ecosystem is contracting in measurable ways. The practical risk is not that ASE will stop working, but that maintaining it becomes increasingly expensive as the specialist talent pool shrinks, integration tooling is deprecated, and the burden of filling those gaps falls on internal teams — as unpatched vulnerabilities quietly accumulate.
ASE stored procedures are written in T-SQL, which shares a lineage with SQL Server T-SQL - but decades of platform divergence mean direct conversion is rarely clean. Syntax differences are mostly handled by automated tools. The harder problems are behavioral: code that converts cleanly can still run differently in production, and standard conversion tools are not designed to catch them. Stored procedures rarely exist in isolation — understanding their true migration scope requires analysis in the context of the full system.
An initial assessment covers automated discovery of a representative part of your system — architectural overview, feature usage, stored procedures, connected applications mapped against your target platform. The result is a report covering system health (including security findings), AI transformation feasibility, and top risks ranked by severity and impact — identifying behavioral differences that standard tools miss, with concrete next steps and PoC proposals. Grape Up’s G.Tx Assessment runs in 1–2 weeks, is free with no commitment, and becomes the foundation for all subsequent migration phases.
Legacy software is the backbone of many organizations, but as technology advances, these systems can become more of a burden than a benefit. Migrating from a legacy system to a modern solution is a daunting task fraught with challenges, from grappling with outdated code and conflicting stakeholder interests to managing dependencies on third-party vendors and ensuring compliance with stringent regulatory standards.
However, with the right strategies and leveraging advanced technologies like Generative AI, these challenges can be effectively mitigated.
The average lifespan of business software can vary widely depending on several factors, such as the type of software or the industry it serves. Nevertheless, whether the software is 5 or 25 years old, it is highly likely that its creators and subject matter experts are no longer accessible (or barely remember what they built and how it really works), the documentation is incomplete, the code is messy, and the technology was forgotten long ago.
Lack of knowledge of the legacy solution not only blocks its further development and maintenance but also negatively affects its migration – it significantly slows down the analysis and replacement process.
Mitigation:
The only way to understand what functionality, processes, and dependencies the legacy software covers - and what really needs to be migrated - is in-depth analysis, delivered through an extensive discovery phase that initiates every migration project.
The discovery phase, although long (and boring!), demanding, and very costly, is crucial for the migration project’s success. Resist the temptation to take shortcuts there.
At Grape Up, we do not. We make sure we learn the legacy software in detail while optimizing the analytical effort. We support the discovery process by leveraging Generative AI tools. They help us understand the legacy spaghetti code, its forgotten purpose, dependencies, and limitations. GenAI enables us to make use of existing incomplete documentation and to work through technologies that nobody has expertise in anymore. This approach significantly speeds up the discovery phase, making it smoother and more efficient.
Unfortunately, understanding the legacy software and having a complete idea of the target replacement are two separate things. A decision to build a new solution, especially in a corporate environment, usually encourages multiple stakeholders (representing different groups of interests) to promote their visions and ideas. Often conflicting, to be precise.
This nonlinear stream of contradicting requirements leads to an uncontrollable growth of the product backlog, which becomes extremely difficult to manage and prioritize. In consequence, efficient decision-making (essential for the product’s success) is barely possible.
Mitigation:
A strong Product Management community with a single product leader - empowered to make decisions and respected by the entire organization - is the key factor here. Combined with a matching delivery model (which may vary depending on the product and project specifics), it sets the goals and frames for the mission and guides its crew.
For huge legacy migration projects with a blurry scope, requiring constant validation and prioritization, an Agile-based, continuous discovery & delivery process is the only possible way to go. With a flexible product roadmap (adjusted on the fly), both creative and development teams work simultaneously, and regular feedback loops are established.
High pressure from stakeholders always makes the Product Leader’s job difficult. Bold scope decisions become easier when an MVP/MDP (Minimum Viable / Desirable Product) approach and the MoSCoW (must-have, should-have, could-have, won't-have) prioritization technique are in place.
At Grape Up, we assist our clients with establishing and maintaining efficient product & project governance, supporting the in-house management team with our experienced consultants such as Business Analysts, Scrum Masters, Project Managers, or Proxy Product Owners.
Migrating the legacy software gives the organization a unique opportunity to sunset outdated technologies, remove all the infrastructural pain points, reach out for modern solutions, and sketch a completely new architecture.
However, these are very heavy decisions. They must not only address the current needs but also be adaptable to future growth. Wrong choices can result in technical debt, forcing another costly migration – much sooner than planned.
Mitigation:
A careful evaluation of the current and future needs is a good starting point for drafting the first technical roadmap and architecture. Conducting a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) for potential technologies and infrastructural choices provides a balanced view, helping to identify the most suitable options that align with the organization's long-term plan. For Grape Up, one of the key aspects of such an analysis is always industry trends.
Another crucial factor that supports this difficult decision-making process is maintaining technical documentation through Architectural Decision Records (ADRs). ADRs capture the rationale behind key decisions, ensuring that all stakeholders understand the choices made regarding technologies, frameworks, or architectures. This documentation serves as a valuable reference for future decisions and discussions, helping to avoid repeating past mistakes or making unnecessary changes (e.g., when a new architect joins the team and pushes for their own technical preferences).

When migrating from a legacy system, one of the significant challenges is managing dependencies on the numerous other applications and services that are integrated with the old solution and need to remain connected to the new one. Many of these are provided by third-party vendors that may not be willing or able to respond quickly to the project’s needs and adapt to changes, posing a significant risk to the migration process. Unfortunately, some of the dependencies are likely to remain hidden and not be spotted early enough, affecting the project’s budget and timeline.
Mitigation:
To mitigate this risk, it's essential to establish strong governance over third-party relationships before the project really begins. This includes forming solid partnerships and ensuring that clear contracts are in place, detailing the rules of cooperation and responsibilities. Prioritizing demands related to third-party integrations (such as API modifications, providing test environments, SLA, etc.), testing the connections early, and building time buffers into the migration plan are also crucial steps to reduce the impact of potential delays or issues.
Furthermore, leveraging Generative AI, which Grape Up does when migrating the legacy solution, can be a powerful tool in identifying and analyzing the complexities of these dependencies. Our consultants can also help to spot potential risks and suggest strategies to minimize disruptions, ensuring that third-party systems continue to function seamlessly during and after the migration.
A legacy migration requires expertise and resources that most organizations lack internally. It is 100% natural: these kinds of tasks occur rarely, so in most cases maintaining a huge in-house IT department would be irrational.
Without prior experience in legacy migrations, internal teams may struggle with project initiation; for that reason, external support becomes necessary. Unfortunately, quite often, the involvement of vendors and contractors results in new challenges for the company by increasing its vulnerability (e.g., becoming dependent on externals, having data protection issues, etc.).
Mitigation:
To boost insufficient internal capabilities, it's essential to partner with experienced and trusted vendors who have a proven track record in legacy migrations. Their expertise can help navigate the complexities of the process while ensuring best practices are followed.
However, it's recommended to maintain a balance between internal and external resources to keep control over the project and avoid over-reliance on external parties. Involving multiple vendors can diversify the risk and prevent dependency on a single provider.
By leveraging Generative AI, Grape Up manages to optimize resource use, reducing the amount of manual work that consultants and developers do when migrating the legacy software. With a smaller external headcount involved, it is much easier for organizations to manage their projects and keep a healthy balance between their own resources and their partners.
Due to their size, complexity, and importance for the business, budget constraints and time pressure are always common challenges for legacy migration projects. Resources are typically insufficient to cover all the requirements (that keep on growing), unexpected expenses (that always pop up), and the need to meet hard deadlines. These pressures can result in compromised quality, incomplete migrations, or even the entire project’s failure if not managed effectively.
Mitigation:
Those are the other challenges where strong governance and effective product ownership would be helpful. Implementing an iterative approach with a focus on delivering an MVP (Minimum Viable Product) or MDP (Minimum Desirable Product) can help prioritize essential features and manage scope within the available budget and time.
For tracking convenience, it is useful to budget each feature or part of the system separately. It’s also important to build realistic time and financial buffers and continuously update estimates as the project progresses to account for unforeseen issues. There are multiple quick yet sufficient (“magic”) estimation methods your team may use for that purpose, such as silent grouping.
As stated before, at Grape Up, we use Generative AI to reduce the workload on teams by analyzing the old solution and generating significant parts of the new one automatically. This helps to keep the project on track, even under tight budget and time constraints.
A critical but often overlooked aspect of legacy migration is ensuring the new system meets not only all the business demands but also compliance, security, performance, and accessibility requirements. What if some of the implemented features turn out to be illegal? Or the new system lets only a few concurrent users log in?
Without proper planning and continuous validation, these non-functional requirements can become major issues shortly before or after the release, putting the entire project at risk.
Mitigation:
Implementation of comprehensive validation, monitoring, and testing strategies from the project's early stages is a must. This should encompass both functional and non-functional requirements to ensure all aspects of the system are covered.
Efficient validation must not be a one-time activity but a regular occurrence, and it needs to involve a broad range of stakeholders and experts.
Prioritizing non-functional requirements, such as performance and security, is essential to prevent potential issues from undermining the project’s success. For each legacy migration, there are also individual, very project-specific dimensions of validation. At Grape Up, during the discovery phase, our analysts, empowered by GenAI, take the time to recognize all the critical aspects of the new solution’s quality, proposing the right thresholds, testing tools, and validation methods.
Migrating data from a legacy system is one of the most challenging tasks of a migration project, particularly when dealing with vast amounts of historical data accumulated over many years. It is complex and costly, requiring meticulous planning to avoid data loss, corruption, or inconsistency.
Additionally, the release of the new system can have a significant impact on customers, especially if not handled smoothly. The risk of encountering unforeseen issues during the rollout phase is high, which can lead to extended downtime, customer dissatisfaction, and a prolonged stabilization period.
Mitigation:
Firstly, it is essential to establish comprehensive data migration and rollout strategies early in the project. Perhaps migrating all historical data is not necessary? Selective migration can significantly reduce the complexity, cost, and time involved.
A base plan for the rollout is equally important to minimize customer impact. This includes careful scheduling of releases, thorough testing in staging environments that closely mimic production, and phased rollouts that allow for gradual transition rather than a big-bang approach.
At Grape Up, we strongly recommend investing in Continuous Integration and Continuous Delivery (CI/CD) pipelines that can streamline the release process, enabling automated testing, deployment, and quick iterations. Test automation ensures that any changes or fixes (that are always numerous when rolling out) are rapidly validated, reducing the risk of introducing new issues during subsequent releases.
Post-release, a hypercare phase is crucial to provide dedicated support and rapid response to any problems that arise. It involves close monitoring of the system’s performance, user feedback, and quick deployment of fixes as needed. By having a hypercare plan in place, the organization can ensure that any issues are addressed promptly, reducing the overall impact on customers and business operations.
Legacy migration is undoubtedly a complex and challenging process, but with careful planning, strong governance, and the right blend of internal and external expertise, it can be navigated successfully. By prioritizing critical aspects such as in-depth analysis, strategic decision-making, and robust validation processes, organizations can mitigate the risks involved and avoid common pitfalls.
Managing budgets and expenses effectively is crucial, as unforeseen costs can quickly escalate. Leveraging advanced technologies like Generative AI not only enhances the efficiency and accuracy of the migration process but also helps control costs by streamlining tasks and reducing the overall burden on resources.
At Grape Up, we understand the intricacies of legacy migration and are committed to helping our clients transition smoothly to modern solutions that support future growth and innovation. With the right strategies in place, your organization can move beyond the limitations of legacy systems, achieving a successful migration within budget while embracing a future of improved performance, scalability, and flexibility.
As digital transformation accelerates, modernizing legacy applications has become essential for businesses to stay competitive. The application modernization market, valued at USD 21.32 billion in 2023, is projected to reach USD 74.63 billion by 2031 (1), reflecting the growing importance of updating outdated systems.
With 94% of business executives viewing AI as key to future success and 76% increasing their investments in Generative AI due to its proven value (2), it's clear that AI is becoming a critical driver of innovation. One key area where AI is making a significant impact is application modernization - an essential step for businesses aiming to improve scalability, performance, and efficiency.
Based on two projects conducted by our R&D team, we've seen firsthand how Generative AI can streamline the process of rewriting legacy systems.
Let’s start by discussing the importance of rewriting legacy systems and how GenAI-driven solutions are transforming this process.
In the rapidly evolving software development landscape, keeping applications up-to-date with the latest programming languages and technologies is crucial. Rewriting applications to new languages and frameworks can significantly enhance performance, security, and maintainability. However, this process is often labor-intensive and prone to human error.
Generative AI offers a transformative approach to code translation, automating much of the rewriting work and reducing the scope for human error.
As businesses strive to stay competitive, adopting Generative AI for code translation becomes increasingly important. It enables them to harness the full potential of modern technologies while minimizing risks associated with manual rewrites.
Legacy systems, often built on outdated technologies, pose significant challenges in terms of maintenance and scalability. Modernizing legacy applications with Generative AI provides a viable solution for rewriting these systems into modern programming languages, thereby extending their lifespan and improving their integration with contemporary software ecosystems.
This automated approach not only preserves core functionality but also enhances performance and security, making it easier for organizations to adapt to changing technological landscapes without the need for extensive manual intervention.
Generative AI offers a powerful solution for rewriting applications, providing several key benefits that streamline the modernization process.
Modernizing legacy applications with Generative AI proves especially beneficial in this context for the following reasons:
By leveraging Generative AI, organizations can achieve a smooth transition to modern system architectures, ensuring substantial returns in performance, scalability, and maintenance costs.
In this article, we will explore two case studies from our R&D work: rewriting a simple CRUD application and converting a microservice-based application to a modern stack with Generative AI.
For these case studies, we used OpenAI's GPT-4 with a 32k-token context window to automate the rewriting process, demonstrating its advanced capabilities in understanding and generating code across different application architectures.
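For a sense of the mechanics, the sketch below shows one way a conversion prompt can be sent to OpenAI's Chat Completions API from plain Java. The model name, prompt wording, and hand-rolled JSON handling are simplified assumptions for illustration, not the exact tooling used in these case studies.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Simplified sketch: ask a chat model to convert a C# snippet to Java via the
// OpenAI Chat Completions endpoint. Real tooling would use a JSON library and
// proper escaping instead of hand-built strings.
public class CodeConversionClient {

    public static String convert(String csharpCode) throws Exception {
        String body = """
            {
              "model": "gpt-4-32k",
              "messages": [
                {"role": "system", "content": "You convert C# code to idiomatic Java 17 with Spring Boot 3."},
                {"role": "user", "content": %s}
              ]
            }
            """.formatted(toJsonString(csharpCode));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON containing the converted code in choices[0].message.content
    }

    // Minimal JSON string escaping, sufficient for the illustration only.
    private static String toJsonString(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}
```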
We'll also present the benefits of using a data analytics platform designed by Grape Up's experts. The platform utilizes Generative AI and neural graphs to enhance its data analysis capabilities, particularly in data integration, analytics, visualization, and insights automation.
The source CRUD project was used as an example of a simple CRUD application - one written using .NET Core as the framework, Entity Framework Core for the ORM, and SQL Server as the relational database. The target project contains a backend application created using Java 17 and Spring Boot 3.
Rewriting a simple CRUD application using Generative AI involves a series of methodical steps to ensure a smooth transition from the old codebase to the new one. Below are the key actions undertaken during this process:
Throughout each step, some manual intervention by developers was required to address code errors, compilation issues, and other problems encountered after using OpenAI's tools.
The first stage in rewriting a simple CRUD application using Generative AI is to conduct a thorough investigation of the existing architecture and data flow. This foundational step is crucial for understanding the current system's structure, dependencies, and business logic.
This involved reviewing the project structure, identifying the responsibilities of individual components and their dependencies, and tracing how data flows between the layers.
While OpenAI's GPT-4 is powerful, it has some limitations when dealing with large inputs or generating comprehensive explanations of entire projects. For example, it could not produce a reliable description of the whole project at once: feeding several project files into a single prompt tended to yield unclear, overly general output.
However, OpenAI excels at explaining large pieces of code or individual components. This capability aids in understanding the responsibilities of different components and their data flows. Despite this, developers had to conduct detailed investigations and analyses manually to ensure a complete and accurate understanding of the existing system.
This is the point at which we used our data analytics platform. In comparison to OpenAI, it focuses on data analysis. It's especially useful for analyzing data flows and project architecture, particularly thanks to its ability to process and visualize complex datasets. While it does not directly analyze source code, it can provide valuable insights into how data moves through a system and how different components interact.
Moreover, the platform excels at visualizing and analyzing data flows within your application. This can help identify inefficiencies, bottlenecks, and opportunities for optimization in the architecture.
As with OpenAI's inability to analyze the entire project, the attempt to generate the skeleton of the target application was also unsuccessful, so the developer had to create it manually. To facilitate this, Spring Initializr was used, configured for Java 17 and Spring Boot 3.
Attempts to query OpenAI for the necessary Spring dependencies faced challenges due to significant differences between dependencies for C# and Java projects. Consequently, all required dependencies were added manually.
Additionally, the project included a database setup. While OpenAI provided a series of steps for adding database configuration to a Spring Boot application, these steps needed to be verified and implemented manually.
After setting up the backend, the next step involved converting all project files - Controllers, Services, and Data Access layers - from C# to Java Spring Boot using OpenAI.
The AI proved effective in converting endpoints and data access layers, producing accurate translations with only minor errors, such as misspelled function names or calls to non-existent functions.
In cases where non-existent functions were generated, OpenAI was able to create the function bodies based on prompts describing their intended functionality. Additionally, OpenAI efficiently generated documentation for classes and functions.
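As an illustration of the kind of output this step produces, the sketch below shows what a converted data access layer and endpoint can look like in the target stack (a JPA entity, a Spring Data repository, and a REST controller). The entity and route names are invented for the example and are not taken from the project.

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.util.List;

// Hypothetical target-side code: an Entity Framework Core entity and repository
// translated into a JPA entity, a Spring Data repository, and a REST controller.
@Entity
class Product {
    @Id @GeneratedValue
    private Long id;
    private String name;
    private double price;
    // getters and setters omitted for brevity
}

interface ProductRepository extends JpaRepository<Product, Long> {
    List<Product> findByNameContainingIgnoreCase(String name);
}

@RestController
@RequestMapping("/api/products")
class ProductController {

    private final ProductRepository repository;

    ProductController(ProductRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<Product> findAll() {
        return repository.findAll();
    }

    @GetMapping("/{id}")
    public ResponseEntity<Product> findById(@PathVariable Long id) {
        return repository.findById(id)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }
}
```

Code in this category converted largely mechanically, with only the minor naming errors mentioned above.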
However, it faced challenges when converting components with extensive framework-specific code. Due to differences between frameworks in various languages, the AI sometimes lost context and produced unusable code.
Overall, OpenAI excelled at converting endpoints and data access layers, generating function bodies from natural-language descriptions of their intended behavior, and producing documentation for classes and functions. However, it struggled with heavily framework-specific code, where differences between the .NET and Spring ecosystems caused it to lose context and produce unusable output, and it occasionally misspelled function names or called functions that did not exist.
Despite these limitations, OpenAI significantly accelerated the conversion process, although manual intervention was required to address specific issues and ensure high-quality code.
Generating tests for the new code is a crucial step in ensuring the reliability and correctness of the rewritten application. This involves creating both unit tests and integration tests to validate individual components and their interactions within the system.
To create a new test, the entire component code was passed to OpenAI with the query: "Write Spring Boot test class for selected code."
OpenAI performed well at generating both integration tests and unit tests, with one notable distinction: when the test name was self-descriptive, the generated unit tests contained fewer errors.
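For illustration, a controller slice test in the style the model typically produced might look like the sketch below, reusing the hypothetical ProductController from the earlier example; it is an assumption about shape, not the project's actual output.

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

import java.util.Optional;

import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Hypothetical example of a generated controller slice test: a self-descriptive
// test name, a mocked repository, and a single HTTP assertion.
@WebMvcTest(ProductController.class)
class ProductControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private ProductRepository repository;

    @Test
    void findById_returnsNotFound_whenProductDoesNotExist() throws Exception {
        when(repository.findById(42L)).thenReturn(Optional.empty());

        mockMvc.perform(get("/api/products/42"))
                .andExpect(status().isNotFound());
    }
}
```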

As an example of a microservice-based application, we used the Source microservice project - an application built using .Net Core as the framework, Entity Framework Core for the ORM, and a Command Query Responsibility Segregation (CQRS) approach for managing and querying entities. RabbitMQ was used to implement the CQRS approach and EventStore to store events and entity objects. Each microservice could be built using Docker, with docker-compose managing the dependencies between microservices and running them together.
The target project includes Java 17 and Spring Boot microservices built with Gradle, the Axon Framework for CQRS and event sourcing, a React frontend, and Dockerfiles with docker-compose to build and run the services together.
Similarly to the CRUD application rewriting project, converting a microservice-based application using Generative AI requires a series of steps to ensure a seamless transition from the old codebase to the new one. Below are the key steps undertaken during this process:
Throughout each step, developers were required to intervene manually to address code errors, compilation issues, and other problems encountered after using OpenAI's tools. This approach ensured that the new application retains the functionality and reliability of the original system while leveraging modern technologies and best practices.
The first step in converting a microservice-based application using Generative AI is to conduct a thorough investigation of the existing architecture and data flows. This foundational step is crucial for understanding the responsibilities of each microservice, the dependencies between them, and how commands, events, and data flow through the system.
Challenges with OpenAI
Similar to the process for a simple CRUD application, at the time, OpenAI struggled with larger inputs and failed to generate a comprehensive explanation of the entire project. Attempts to describe the project or its data flows were unsuccessful because inputting several project files at once often resulted in unclear and overly general outputs.
OpenAI’s strengths
Despite these limitations, OpenAI proved effective in explaining large pieces of code or individual components. This capability helped in understanding the responsibilities of individual microservices, their dependencies, and the data flows between them.
Developers can create a comprehensive blueprint for the new application by thoroughly investigating the initial architecture and data flows. This step ensures that all critical aspects of the existing system are understood and accounted for, paving the way for a successful transition to a modern microservice-based architecture using Generative AI.
Again, our data analytics platform was used in project architecture analysis. By identifying integration points between different application components, the platform helps ensure that the new application maintains necessary connections and data exchanges.
It can also provide a comprehensive view of your current architecture, highlighting interactions between different modules and services. This aids in planning the new architecture for efficiency and scalability. Furthermore, the platform's analytics capabilities support identifying potential risks in the rewriting process.
Rewriting the backend of a microservice-based application involves several intricate steps, especially when working with specific architectural patterns like CQRS (Command Query Responsibility Segregation) and event sourcing. The source C# project uses the CQRS approach, implemented with frameworks such as NServiceBus and Aggregates, which facilitate message handling and event sourcing in the .NET ecosystem.
Challenges with OpenAI
Unfortunately, OpenAI struggled with converting framework-specific logic from C# to Java. When asked to convert components using NServiceBus, OpenAI responded:
"The provided C# code is using NServiceBus, a service bus for .NET, to handle messages. In Java Spring Boot, we don't have an exact equivalent of NServiceBus, but here's how you might convert the given C# code to Java Spring Boot..."
However, the generated code did not adequately cover the CQRS approach or event-sourcing mechanisms.
Choosing Axon framework
Due to these limitations, developers needed to investigate suitable Java frameworks. After thorough research, the Axon Framework was selected, as it offers comprehensive support for CQRS, event sourcing, and command and event handling in the Java ecosystem.
Moreover, Axon provides out-of-the-box solutions for message brokering and event handling and has a Spring Boot integration library, making it a popular choice for building Java microservices based on CQRS.
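To show what this looks like in practice, here is a minimal, hypothetical Axon aggregate with one command handler and one event-sourcing handler. The command and event types are invented for the example and do not correspond to the project's actual domain.

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

// Hypothetical command and event for the illustration.
record CreateOrderCommand(@TargetAggregateIdentifier String orderId) {}
record OrderCreatedEvent(String orderId) {}

// Minimal event-sourced aggregate: the command handler applies an event,
// and the event-sourcing handler rebuilds state from the event stream.
@Aggregate
class OrderAggregate {

    @AggregateIdentifier
    private String orderId;

    protected OrderAggregate() {
        // Required by Axon to reconstruct the aggregate from past events.
    }

    @CommandHandler
    public OrderAggregate(CreateOrderCommand command) {
        AggregateLifecycle.apply(new OrderCreatedEvent(command.orderId()));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        this.orderId = event.orderId();
    }
}
```

The source project's NServiceBus handlers have no one-to-one mapping onto this structure, which is why the framework-specific parts required the manual work described below.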
Converting microservices
Each microservice from the source project could be converted to Java Spring Boot using a systematic approach, similar to converting a simple CRUD application: investigating the component, converting its code with OpenAI, and generating tests for the result.
Manual Intervention
Due to the lack of direct mapping between the source project's CQRS framework and the Axon Framework, manual intervention was necessary. Developers had to implement framework-specific logic manually to ensure the new system retained the original's functionality and reliability.
The source project included a frontend component written using aspnetcore-https and aspnetcore-react libraries, allowing for the development of frontend components in both C# and React.
However, OpenAI struggled to convert this mixed codebase into a React-only application due to the extensive use of C#.
Consequently, it proved faster and more efficient to generate a new frontend application from scratch, leveraging the existing REST endpoints on the backend.
Similar to the process for a simple CRUD application, when prompted with “Generate React application which is calling a given endpoint”, OpenAI provided a series of steps to create a React application from a template and offered sample code for the frontend.
Despite its capabilities, OpenAI-generated components often exhibited issues that required manual correction. Additionally, there were challenges related to CORS policy and web security that had to be configured by hand before the frontend could communicate with the backend services.
Once the frontend components were completed, the next task was to generate tests for these components. OpenAI proved to be quite effective in this area. When provided with the component code, OpenAI could generate simple unit tests using the Jest library.
OpenAI was also capable of generating integration tests for the frontend application, which are crucial for verifying that different components work together as expected and that the application interacts correctly with backend services.
However, some manual intervention was required to fix issues in the generated test code.
The source application contained Dockerfiles that built images for C# applications. OpenAI successfully converted these Dockerfiles to a new approach using Java 17, Spring Boot, and Gradle build tools by responding to the query:
"Could you convert selected code to run the same application but written in Java 17 Spring Boot with Gradle and Docker?"
Some manual updates, however, were needed to fix the actual jar name and file paths.
Once the React frontend application was implemented, OpenAI was able to generate a Dockerfile by responding to the query:
"How to dockerize a React application?"
Still, some manual fixes were required before the containerized frontend built and ran correctly.
While OpenAI was effective in converting individual Dockerfiles, it struggled with writing docker-compose files due to a lack of context regarding all services and their dependencies.
For instance, some microservices depend on database services, and OpenAI could not fully understand these relationships. As a result, the docker-compose file required significant manual intervention.
Modern tools like OpenAI's ChatGPT can significantly enhance software development productivity by automating various aspects of code writing and problem-solving. Leveraging large language models such as OpenAI's GPT-4, accessed through ChatGPT, can help generate large pieces of code, solve problems, and streamline certain tasks.
However, for complex projects based on microservices and specialized frameworks, developers still need to do considerable work manually, particularly in areas related to architecture, framework selection, and framework-specific code writing.
In these projects, Generative AI proved good at explaining individual components, converting endpoints and data access layers, generating missing function bodies and documentation, and producing unit and integration tests. At the time of project implementation, however, it still had several limitations: it could not analyze an entire project at once, it lost context when converting framework-specific code, and it lacked the cross-service context needed for artifacts such as docker-compose files.
The data analytics platform, in turn, proved valuable for analyzing data flows and project architecture: visualizing interactions between modules and services, identifying integration points, and highlighting potential risks in the rewriting process.
By leveraging Generative AI’s strengths and addressing its limitations through manual intervention, developers can achieve a more efficient and accurate transition to modern programming languages and technologies. This hybrid approach to modernizing legacy applications with Generative AI ensures that the new application retains the functionality and reliability of the original system while benefiting from the advancements in modern software development practices.
It's worth remembering that Generative AI technologies are rapidly advancing, with improvements in processing capabilities. As Generative AI becomes more powerful, it is increasingly able to understand and manage complex project architectures and data flows. This evolution suggests that in the future, it will play a pivotal role in rewriting projects.
Do you need support in modernizing your legacy systems with expert-driven solutions?
Sources: