Profiling .NET 10 Applications: The 2026 Guide to Performance
I still remember the days of squinting at jagged CPU charts, trying to mentally map a timestamp on a graph to a specific log entry, guessing which line of code caused the spike. It felt more like reading tea leaves than engineering.
Fortunately, those days are behind us. In 2026, profiling .NET 10 applications has shifted from a manual, investigative art to an AI-assisted diagnostic workflow. Whether I'm debugging a memory leak locally in Visual Studio or automating performance gates in a Kubernetes cluster, the tooling has evolved to give me answers, not just raw data.
This guide explores the state of profiling in .NET 10, common "villains" I still see in production code, and how to catch them using the latest tools.
The New Tooling Landscape
The biggest change in .NET 10 isn't just a faster runtime—it's how the tools understand our code. The friction of "starting a session" is almost gone.
1. Visual Studio 2026: The AI Investigator
Visual Studio remains my heavyweight champion for deep dives, but it has delegated the tedious parts to AI.
- Copilot Profiler Agent (@profile): I open Copilot Chat, type @profile, and ask "What's causing the latency spike?" The agent kicks off a profiling session, collects the trace, and delivers a plain-language diagnosis, pointing me directly at the offending code. It feels like having a performance engineer sitting next to me.
- Unified Memory Analysis: The "Allocation" and "Object Retention" views are finally merged. It's now trivial to distinguish between "temporary trash" (Gen 0) and "actual leaks" (Gen 2 retention).
Quick Start:
- Open your solution in Visual Studio 2026.
- Open the Copilot Chat panel (Ctrl+Alt+I) and type @profile to invoke the Profiler Agent.
- Ask it directly: "Profile my app and tell me what's causing the slowdown."
- The @profile agent launches a profiling session, runs your app, and collects a trace automatically.
- When collection completes, the agent presents a plain-language summary of the top hotspots and navigates you to the offending code in the editor.
- Follow up with targeted questions like "Why is OrderService.GetAll allocating so much memory?" to drill deeper without ever leaving the chat.
2. dotnet-trace: Profiling for Everyone
For anyone on Mac, Linux, or Windows who wants a lightweight cross-platform option, dotnet-trace is the go-to CLI profiler.
- No IDE Required: Captures a trace from any running .NET process with a single command—perfect for remote servers or CI pipelines where you can't attach a GUI.
- SpeedScope & Flame Graph Support: Traces can be exported in SpeedScope format and opened directly at speedscope.app for an interactive Flame Graph view.
Quick Start:
Install the tool globally once:
dotnet tool install -g dotnet-trace
Attach to a running process and collect a 30-second CPU trace:
# Find the PID of your running app
dotnet-trace ps
# Collect a CPU trace for 30 seconds (without --duration, it runs until Ctrl+C)
dotnet-trace collect --process-id <PID> --duration 00:00:30 \
--output ./trace.nettrace
Convert the trace to SpeedScope format for Flame Graph visualization:
dotnet-trace convert ./trace.nettrace --format Speedscope
Open trace.speedscope.json at speedscope.app and switch to the Left Heavy view. The widest bars at the top are your hotspots.
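To see the workflow end to end, it helps to have an obvious hotspot. The toy program below (a deliberately wasteful example, not from any real codebase) does repeated string concatenation in a loop, exactly the kind of method that shows up as a wide bar in the Left Heavy view:

```csharp
using System;

public class HotPathDemo
{
    // Deliberately allocation-heavy: each += copies the entire string so far,
    // so this method dominates both the CPU trace and the allocation profile.
    public static string BuildReport(int n)
    {
        var report = "";
        for (int i = 0; i < n; i++)
            report += i + ";";
        return report;
    }

    public static void Main()
    {
        Console.WriteLine(BuildReport(1000).Length); // 3890
    }
}
```

dotnet-trace can also launch a process directly (dotnet-trace collect -- <command>), so you can trace a run of this program without hunting for a PID first.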
3. dotnet-monitor & dotnet-counters: The Silent Guardians
For production and CI/CD, these tools are my best friends.
- dotnet-monitor: Now standard in Kubernetes strategies. It supports Trigger-based Profiling, meaning it can automatically capture a trace only when CPU > 80% for more than a minute.
- dotnet-counters: The "Task Manager" for .NET now includes specific counters for .NET 10's GC tuning, giving visibility into pause times without pausing the app.
Quick Start:
Install the tools globally once:
dotnet tool install -g dotnet-monitor
dotnet tool install -g dotnet-counters
To watch live counters for a running process:
# Lists all running .NET processes and their PIDs
dotnet-counters ps
# Monitor GC and request metrics in real time
dotnet-counters monitor --process-id <PID> \
System.Runtime Microsoft.AspNetCore.Hosting
To configure dotnet-monitor for automatic triggered tracing in Kubernetes, add a collection rule to its settings.json. Note that the CollectTrace action also needs a named egress provider (where the captured trace gets written); "monitorFile" below is a placeholder for a file-system egress you configure separately:
{
  "CollectionRules": {
    "HighCpuTrace": {
      "Trigger": {
        "Type": "EventCounter",
        "Settings": {
          "ProviderName": "System.Runtime",
          "CounterName": "cpu-usage",
          "GreaterThan": 80,
          "SlidingWindowDuration": "00:01:00"
        }
      },
      "Actions": [
        {
          "Type": "CollectTrace",
          "Settings": {
            "Profile": "Cpu",
            "Duration": "00:00:30",
            "Egress": "monitorFile"
          }
        }
      ]
    }
  }
}
With this in place, dotnet-monitor captures a 30-second CPU trace automatically whenever usage exceeds 80% for a sustained minute—no human intervention needed.
When Should You Profile?
I used to wait for a user complaint before opening a profiler. That was a mistake. In 2026, we follow a strict "Shift-Left" approach.
In the Loop (Development):
Before merging a PR, I run a micro-benchmark (BenchmarkDotNet) on any "hot path" logic. If it feels slow, I either ask @profile in Copilot Chat or run a quick dotnet-trace collect to confirm I haven't accidentally introduced a closure allocation in a loop.
In the Pipeline (CI/CD):
We treat performance like a unit test. If the critical path latency increases by > 10% compared to the baseline, the build fails.
In Production (On-Demand):
Use Triggered Profiling. Don't guess; let dotnet-monitor be your sentry. It captures the exact moment of failure so you can replay the crime scene later.
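The CI gate itself is just arithmetic over two measurements. A minimal sketch of the comparison (the numbers are hypothetical; how you store the baseline and harvest the current measurement is up to your pipeline):

```csharp
using System;

public class PerfGate
{
    // Returns false when the current measurement regresses more than
    // `tolerance` (10% by default) beyond the stored baseline.
    public static bool WithinBudget(double baselineMs, double currentMs, double tolerance = 0.10)
        => currentMs <= baselineMs * (1 + tolerance);

    public static void Main()
    {
        double baseline = 120.0; // hypothetical stored baseline latency (ms)
        double current  = 128.0; // hypothetical latency measured in this build
        // 128 <= 132, so this build is inside the 10% budget
        Console.WriteLine(WithinBudget(baseline, current) ? "PASS" : "FAIL");
    }
}
```

In the pipeline, a FAIL result maps to a non-zero exit code, which is what actually breaks the build.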
Common Issues & How to Fix Them
Even with .NET 10's optimized runtime, application code can still be the bottleneck. Here are the classic villains I still encounter in 2026, and how to fix them.
1. Memory Pressure (The "Death by a Thousand Cuts")
- Symptom: High Gen 0 allocation rates. The GC runs constantly, creating "micro-pauses" that kill throughput.
- The Suspect: String.Concat, extensive use of LINQ in hot paths, or boxing value types.
- The Fix: Switch to Span<T> for slicing strings without allocating.
// ❌ Old Way: Allocates a new string just to check a substring
public bool IsIdValid(string id) {
string prefix = id.Substring(0, 3);
return prefix == "USR";
}
// ✅ Modern Way: Zero-allocation span slicing
public bool IsIdValid(ReadOnlySpan<char> id) {
// Slices the 'view' of the string, no new memory allocated
var prefix = id.Slice(0, 3);
return prefix.SequenceEqual("USR");
}
2. The "Sync-over-Async" Trap
- Symptom: ThreadPool grows indefinitely ("Hill Climbing"), yet CPU usage is low. Requests simply time out.
- The Suspect: Blocking calls like .Result or .Wait() on an async task.
- The Fix: Await all the way down. In .NET 10, the profiler explicitly flags "Blocking Waits" in async chains as a warning.
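The trap and its fix, side by side in the same ❌/✅ style (a minimal sketch with a simulated I/O call standing in for a real HTTP or database request):

```csharp
using System;
using System.Threading.Tasks;

public class SyncOverAsyncDemo
{
    static async Task<string> FetchAsync()
    {
        await Task.Delay(10); // simulate I/O latency
        return "payload";
    }

    // ❌ Trap: blocks a ThreadPool thread until the task completes.
    //    Under load this starves the pool, and in the presence of a
    //    SynchronizationContext it can deadlock outright.
    public static string GetDataBlocking() => FetchAsync().Result;

    // ✅ Fix: await all the way down; the thread goes back to the pool
    //    while the I/O is in flight.
    public static async Task<string> GetDataAsync() => await FetchAsync();

    public static async Task Main()
    {
        Console.WriteLine(await GetDataAsync()); // payload
    }
}
```

Both methods return the same value; the difference only becomes visible under load, which is exactly why the profiler's "Blocking Waits" warning is so useful.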
3. Lock Contention
- Symptom: CPU usage is low, but throughput is capped. Threads spend most of their time in Monitor.Enter.
- The Suspect: Using lock on a shared resource in a high-traffic endpoint.
- The Fix: Replace lock (object) with System.Threading.Lock (introduced in .NET 9). It has a cleaner API and better performance under contention.
// New .NET 9+ Lock type
private readonly System.Threading.Lock _syncRoot = new();
public void UpdateResource() {
// Cleaner scope-based syntax
using (_syncRoot.EnterScope()) {
// Critical section
_sharedState++;
}
}
4. Database N+1 Queries
- Symptom: A single API call generates 50+ SQL queries in quick succession.
- The Suspect: Accessing a lazy-loaded navigation property inside a loop.
- The Fix: Use Eager Loading (.Include()) or Split Queries (.AsSplitQuery()) in EF Core to fetch data efficiently.
// ❌ Dangerous: Triggers a SQL query for every Order
foreach (var customer in context.Customers) {
Console.WriteLine(customer.Orders.Count);
}
// ✅ Fix: Fetch everything in one (or split) round trip
var customers = context.Customers
.Include(c => c.Orders)
.ToList();
Conclusion
Profiling is no longer a dark art—it's a standard part of our engineering toolkit.
In 2026, we stopped searching for needles in haystacks. Visual Studio 2026's Copilot Profiler Agent doesn't just show you the CPU spike; it circles the line of code causing it. Meanwhile, dotnet-monitor has become the silent guardian of our Kubernetes clusters, automatically capturing traces before we even know an outage is starting.
Use the tools, automate the triggers, and keep your .NET 10 apps flying.