Modernizing Legacy Applications with AI: A Specification-First Approach
Modernizing legacy applications is one of the most common yet daunting challenges in software development. Many organizations depend on critical business logic locked away in aging VB.NET and Windows Forms applications. The goal is often clear: move to a modern, cloud-native, multi-tenant SaaS platform. However, the path to get there is rarely a straight line.
Recently, I undertook the process of porting a complex legacy VB.NET desktop application to a modern .NET 8 Blazor Server architecture using VS Code and GitHub Copilot. Instead of manually rewriting code line-by-line, I employed a structured, AI-driven methodology: generating comprehensive specifications from the legacy code, iterating on those docs until they were perfect, and then supervising the AI as it executed a precise implementation plan.
This article outlines the exact process I used to deconstruct the monolith, generate 32 distinct implementation tasks, and successfully stand up a new SaaS platform with a fully passing test suite.
Phase 1: AI-Assisted Discovery and Specification
The first instinct in a rewrite is often to start coding immediately. I took a different approach: AI-assisted architectural archaeology.
Using GitHub Copilot within VS Code, I analyzed the legacy codebase—specifically the VB.NET forms, the SQL Server schema, and the underlying business logic classes. This deep analysis allowed me to understand the true complexity of the system before writing a single line of new code.
The Iterative Specification Loop
Instead of writing code, I asked Copilot to write documentation. I prompted it to generate a specification document based on its analysis of the legacy code:
Prompt: "This repository contains an old estimating application that was written in vb.net and used wpf. I would like to create a modern version of this in c# with blazor. The database should utilize martendb with postgres instead of sql server. The resulting application will be hosted online as a tenanted saas application. Review the existing application and create a set of specifications I can use to port it to the desired target."
This wasn't a one-shot process. I treated the specification document as a living product, iterating on it multiple times:
- Initial Draft: Copilot generated the baseline specs, identifying the data models and core workflows.
- Refining the Domain: I directed Copilot to refine the domain models, ensuring that complex workflows were treated as first-class citizens.
- Deepening the Logic: I explicitly requested that Copilot extract and document the calculation engines. The legacy app was heavy on math, so I ensured the spec included example formulas for every calculation type.
Prompt: "Provide example formulas for all of the calculations."
Prompt: "Let's use dependency injection for the factory instead of newing them up manually."
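The article doesn't reproduce the actual spec formulas, but it helps to see what "example formulas for every calculation type" buys you at implementation time. Here is a hedged sketch of how one such spec entry might translate into C# (the calculator name, formula, and 10% waste factor are illustrative assumptions, not taken from the legacy app):

```csharp
using System;

// Hypothetical spec entry: tiles = ceil(ceil(roomArea / tileArea) * (1 + waste))
// A formula written this precisely in the spec becomes unambiguous code.
public sealed class TileQuantityCalculator
{
    public int Calculate(double roomAreaSqFt, double tileAreaSqFt, double wasteFactor = 0.10)
    {
        if (tileAreaSqFt <= 0)
            throw new ArgumentOutOfRangeException(nameof(tileAreaSqFt));

        // Whole tiles needed to cover the area, then a waste allowance on top.
        var baseCount = Math.Ceiling(roomAreaSqFt / tileAreaSqFt);
        return (int)Math.Ceiling(baseCount * (1 + wasteFactor));
    }
}
```

When the spec pins down the formula and its rounding behavior this explicitly, the AI has no room to guess, and the expected outputs double as test cases later.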
By the end of this phase, I didn't have a single line of new code, but I had a blueprint so detailed that the implementation details were unambiguous.
Lesson: Don't assume the legacy architecture is correct. Use AI to find natural "seams" in the business logic and document them before writing code.
Phase 2: From Specs to Tasks
Once the specifications were solid, I didn't just start coding. I started a new chat session with Copilot to ensure a clean context.
I fed the finalized specification documents back into Copilot and asked it to convert them into a step-by-step implementation plan:
Prompt: "Create tasks for implementing the ceiling estimating application based on the specifications. Note that the new application should land in the src folder."
Copilot analyzed the requirements and generated a list of 32 distinct tasks, ranging from setting up the project structure to implementing specific calculation logic and UI components.
This step is crucial for two reasons. First, it turned a massive "rewrite" project into a checklist of manageable, bite-sized units of work. Second, it solved the Context Window problem. If I had simply asked Copilot to "convert this application," the sheer volume of legacy code would have overwhelmed the model's memory, leading to hallucinations or forgotten requirements. By distilling the legacy code into a clean specification, and then feeding only that spec into a fresh chat, I ensured Copilot had exactly the context it needed—nothing more, nothing less.
Phase 3: Supervised AI Implementation
With the task list in hand, my role shifted from "coder" to "supervisor." I instructed Copilot to execute the tasks one by one.
Prompt: "Execute todos"
Modernizing on the Fly
As we worked through the tasks, I ensured we weren't just porting legacy patterns. For example, the legacy app used manual object instantiation for its calculator factories. I directed Copilot to implement Dependency Injection (DI) in the new .NET 8 architecture.
Legacy Approach (Conceptual):
// Old way: manual instantiation and tight coupling.
// The factory must be modified every time a new calculator is added.
public class LegacyCalculationFactory
{
    private readonly Dictionary<string, ICalculator> _calculators;

    public LegacyCalculationFactory()
    {
        _calculators = new Dictionary<string, ICalculator>
        {
            { "TypeA", new TypeACalculator() },
            { "TypeB", new TypeBCalculator() },
            // ... manual registration continues forever
        };
    }
}
Modern Approach (AI-Generated):
// New way: dependency injection and automatic registration.
// The container injects all registered implementations of ICalculator.
public class ModernCalculationFactory(IEnumerable<ICalculator> calculators)
{
    // Automatically populates the dictionary from the DI container
    private readonly Dictionary<string, ICalculator> _calculators =
        calculators.ToDictionary(c => c.Name, c => c);

    public ICalculator GetCalculator(string name) => _calculators[name];
}
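For this pattern to work, each calculator implementation must be registered with the DI container so the factory's `IEnumerable<ICalculator>` receives all of them. A minimal registration sketch (assuming an `ICalculator` interface with a `Name` property, as the factory above implies; the `TypeA`/`TypeB` names are illustrative):

```csharp
// Program.cs sketch: register each implementation against the ICalculator interface.
// The container resolves IEnumerable<ICalculator> as the full set of registrations.
builder.Services.AddSingleton<ICalculator, TypeACalculator>();
builder.Services.AddSingleton<ICalculator, TypeBCalculator>();
builder.Services.AddSingleton<ModernCalculationFactory>();
```

Adding a new calculator now means adding one class and one registration line (or using assembly scanning), rather than editing the factory itself.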
Testing as a First-Class Citizen
Part of the 32-task plan included writing tests. Because the specifications included exact formulas and expected outputs, Copilot was able to generate comprehensive unit and integration tests alongside the feature code.
We didn't move to the next task until the current one passed its tests. This discipline, enforced by the task list, meant that we were building on stable ground at every step.
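Because the spec contained exact formulas and expected outputs, the generated tests could be table-driven. A hedged xUnit sketch of what such a test looks like (the formula and values are the hypothetical tile-quantity example, not data from the real spec):

```csharp
using System;
using Xunit;

public class TileQuantityTests
{
    // Hypothetical spec formula: tiles = ceil(ceil(area / tileArea) * (1 + waste))
    private static int TileQuantity(double area, double tileArea, double waste) =>
        (int)Math.Ceiling(Math.Ceiling(area / tileArea) * (1 + waste));

    [Theory]
    [InlineData(400.0, 4.0, 0.10, 110)] // 100 tiles + 10% waste
    [InlineData(100.0, 4.0, 0.00, 25)]  // exact fit, no waste
    public void TileQuantity_MatchesSpecExamples(
        double area, double tileArea, double waste, int expected)
        => Assert.Equal(expected, TileQuantity(area, tileArea, waste));
}
```

The `InlineData` rows come straight from the spec's worked examples, which is what makes "don't advance until tests pass" a meaningful gate rather than a formality.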
Phase 4: The "First Run" and Validation
The true test of this methodology came at the end of the task list. Because we had been validating incrementally with unit tests, the final integration was surprisingly smooth.
We did encounter one specific hurdle with the multi-tenant configuration (a JSON structure mismatch in appsettings.json for the Finbuckle library), but because the rest of the system was stable, we could isolate and fix it quickly.
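For context, Finbuckle.MultiTenant's configuration store reads tenants from a specific section of `appsettings.json`; a structure mismatch like the one described typically means the JSON doesn't match the shape the store binds to. The expected shape is roughly as follows (the tenant values here are placeholders, not the application's actual tenants):

```json
{
  "Finbuckle": {
    "MultiTenant": {
      "Stores": {
        "ConfigurationStore": {
          "Tenants": [
            { "Id": "tenant-1", "Identifier": "acme", "Name": "Acme Co" }
          ]
        }
      }
    }
  }
}
```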
The Result:
- Build: 6 projects compiled with 0 errors.
- Tests: 100% pass rate on all unit and integration tests.
- Runtime: The application successfully started on localhost, serving the correct tenant context immediately.
Key Takeaways
- Specs are the Prompt: The quality of the AI's output is directly proportional to the quality of the specification. Iterating on the spec document is high-leverage work.
- Context Management: Direct conversion overwhelms the AI's context window. Distilling legacy code into specs, and then specs into tasks, ensures the model stays focused and accurate.
- Supervisor Mode: Don't write the code; review the code. Guide the AI to use modern patterns (like DI) instead of blindly copying legacy logic.
- Task-Based Flow: Breaking the project into 32 distinct tasks allowed for steady progress and isolated testing, preventing the "big bang" integration hell common in rewrites.
Modernization is no longer about brute-force coding. By leveraging VS Code and Copilot to analyze, specify, and then implement, you can turn a legacy burden into a scalable, cloud-native asset with a fraction of the manual effort.