This particular problem is one I have come across several times here in support. Since my focus is on the web side of things, I've only seen it in ASP.NET Core apps; however, the problem is not specific to ASP.NET Core and can occur in any .NET app.
The issue can happen on .NET 6 and lower; however, it manifests more readily and is more apparent on .NET 7 and higher, due to differences in how .NET manages the blocks of memory used for the GC heaps: .NET 6 and below (and .NET Framework) use larger, heap-specific segments, while .NET 7+ uses smaller, reusable regions. For extra reading about segments vs. regions, check out this post from Maoni Stephens, the .NET GC architect: https://devblogs.microsoft.com/dotnet/put-a-dpad-on-that-gc/
In addition, based on the relevant .NET source code, this particular leak appears to occur only on Windows, though this post is still worth skimming if your app is hosted elsewhere, just in case.
Lastly, all the docs and source code links in this post are .NET 8-specific, as that is the minimum version of .NET in a supported state at the time of writing (not including ASP.NET Core 2.3 on .NET Framework). If your app is on .NET 7 or older, most of the concepts still apply and are accurate, and as far as I know everything is the same for .NET 9+.
How You Might Spot This
If your application is affected by this issue, you might notice one or more of the following symptoms:
- Gradual memory usage growth over time, even under consistent or light traffic
- If memory growth occurs over a long-enough period of time, it could lead to an OutOfMemoryException and performance degradation
- GC statistics showing a large amount of free memory in Gen2 that never seems to shrink
- Memory dumps containing a high number of pinned byte[] objects and FileSystemWatcher+AsyncReadState instances
These signs often appear well before the system actually runs out of memory, so early detection via monitoring tools and periodic dump analysis can help prevent user-facing impact.
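To make the "GC statistics" symptom more concrete, here is a minimal monitoring sketch (my own illustration, not part of the original investigation) that uses GC.GetGCMemoryInfo() to log heap size, committed size, and fragmented bytes. A fragmented-bytes figure that keeps climbing while the live heap stays roughly flat is exactly the kind of early signal described above.

using System;

static class GcFragmentationProbe
{
    // Call periodically (e.g., from a background timer) and send the values to your logs.
    public static void LogHeapInfo()
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();

        // FragmentedBytes is free space sitting between live objects on the GC heap.
        // Steady growth here, alongside a stable live-object size, is a warning sign.
        Console.WriteLine($"Heap size:  {info.HeapSizeBytes:N0} bytes");
        Console.WriteLine($"Committed:  {info.TotalCommittedBytes:N0} bytes");
        Console.WriteLine($"Fragmented: {info.FragmentedBytes:N0} bytes");
    }
}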
Since there are endless possibilities for what could cause memory leaks in an app, we'll dive right into what this particular one looks like in a memory dump. The command output shown is from WinDbg with .NET's SOS extension, but the dotnet-dump CLI versions of the commands produce the same output. The dump output here is from an ASP.NET Core app running on .NET 8 on Windows+IIS.
Investigation
Let’s start with the output of the SOS !gcheapstat command (I’ve slightly cleaned up the output here to make it more presentable, but it’s close enough):
0:000> !gcheapstat
Heap Gen0 Gen1 Gen2 LOH POH
Heap0 40524872 38061216 485740568 0 0
Heap1 61045264 40133344 478243040 0 0
Heap2 15486520 39008632 479224200 0 53128
Heap3 49219288 35761584 478258096 0 0
Heap4 66295048 38873144 478597280 85776 81672
Heap5 15009984 40180176 488333256 0 1048
Heap6 42155696 38223640 470915848 0 8240
Heap7 90212936 38588136 479554176 98384 0
Total 379949608 308829872 3838866464 184160 144088
Free space:
Heap Gen0 Gen1 Gen2 LOH POH
Heap0 133544 32562168 454600200 0 0 SOH:86%
Heap1 193888 34698976 446975904 0 0 SOH:83%
Heap2 70128 33914136 447623832 0 0 SOH:90%
Heap3 161296 30709896 446524992 0 0 SOH:84%
Heap4 226584 33579616 447172344 32 0 SOH:82%
Heap5 56168 34728256 456538104 0 0 SOH:90%
Heap6 152896 33156456 440122928 0 0 SOH:85%
Heap7 313336 33419096 447947832 32 0 SOH:79%
Total 1307840 266768600 3587506136 64 0
Committed space:
Heap Gen0 Gen1 Gen2 LOH POH
Heap0 40570880 40505344 497610752 126976 4096
Heap1 61083648 40833024 489603072 4096 4096
Heap2 15536128 40177664 489631744 126976 69632
Heap3 49287168 36831232 490213376 4096 4096
Heap4 66326528 39550976 490278912 86016 135168
Heap5 15077376 41226240 500224000 4096 4096
Heap6 42209280 38801408 482791424 4096 69632
Heap7 90247168 39718912 489099264 102400 4096
Total 380338176 317644800 3929452544 458752 294912
Of the ~4GB committed space in Gen2, ~3.6 GB of that is free space.
Output from the SOS !eeheap -gc command shows all the regions (note they are smaller than the pre-.NET 7 segments) as well as the total .NET GC heap size.
(Note: there are 8 heaps in this app and all of them looked extremely similar, so for brevity I removed heaps 1-7 and most of the entries in the middle; just know that Gen2 contained many more entries than Gen0 and Gen1.)
0:000> !eeheap -gc
========================================
Number of GC Heaps: 8
----------------------------------------
Heap 0 (0000026a598e5f80)
Small object heap
segment begin allocated committed allocated size committed size
generation 0:
02aa6dd82d48 026b7bc00028 026b7bffffc8 026b7c000000 0x3fffa0 (4194208) 0x400000 (4194304)
…
02aa6dd86c88 026b91c00028 026b91eab088 026b91eb1000 0x2ab060 (2797664) 0x2b1000 (2822144)
generation 1:
02aa6dd598c0 026a96000028 026a963f35c8 026a96400000 0x3f35a0 (4142496) 0x400000 (4194304)
…
02aa6dd82c90 026b7b800028 026b7ba962e0 026b7baa1000 0x2962b8 (2712248) 0x2a1000 (2756608)
generation 2:
02aa6dd51af8 026a6a400028 026a6a7ebf20 026a6a800000 0x3ebef8 (4112120) 0x400000 (4194304)
02aa6dd51bb0 026a6a800028 026a6abef2e0 026a6ac00000 0x3ef2b8 (4125368) 0x400000 (4194304)
[whole bunch of entries]
02aa6dd7f3c8 026b67c00028 026b67ff5f68 026b68000000 0x3f5f40 (4153152) 0x400000 (4194304)
02aa6dd7f818 026b69400028 026b697fa5b8 026b69800000 0x3fa590 (4171152) 0x400000 (4194304)
NonGC heap
segment begin allocated committed allocated size committed size
026a59072fb0 02aaef970008 02aaefa00f28 02aaefa10000 0x90f20 (593696) 0xa0000 (655360)
Large object heap
segment begin allocated committed allocated size committed size
02aa6dd52b80 026a70000028 026a70000028 026a7001f000 0x1f000 (126976)
Pinned object heap
segment begin allocated committed allocated size committed size
02aa6dd4ec40 026a5a000028 026a5a000028 026a5a001000 0x1000 (4096)
------------------------------
[cut]
------------------------------
GC Allocated Heap Size: Size: 0x10dec7650 (4528567888) bytes.
GC Committed Heap Size: Size: 0x113e69000 (4628844544) bytes.
The post width on this blogging platform, at the time of this writing, is not great for wide, tabular data, so it might be easier to copy and paste the output above into a text editor to make it more readable.
In short, it shows a huge number of Gen2 regions that have each committed 0x400000 (4,194,304) bytes. This is the standard initial region size for the small object heap (SOH) as of this writing. Some regions may differ slightly in size for various reasons, but overall this adds up to a large amount of memory in Gen2.
If we dump one of those regions/segments:
0:000> !dumpheap -segment 2aa6dd51af8
Address MT Size
026a6a400028 026a59a88160 129,240 Free
026a6a41f900 7ff9fa8e5d28 8,216
026a6a421918 7ff9fabcfdb8 40
026a6a421940 026a59a88160 92,552 Free
026a6a4382c8 7ff9fa8e5d28 8,216
026a6a43a2e0 7ff9fabcfdb8 40
026a6a43a308 026a59a88160 91,792 Free
026a6a450998 7ff9fa8e5d28 8,216
026a6a4529b0 7ff9fabcfdb8 40
026a6a4529d8 026a59a88160 75,648 Free
026a6a465158 7ff9fa8e5d28 8,216
026a6a467170 7ff9fabcfdb8 40
026a6a467198 026a59a88160 103,816 Free
026a6a480720 7ff9fa8e5d28 8,216
026a6a482738 7ff9fabcfdb8 40
026a6a482760 026a59a88160 117,904 Free
026a6a49f3f0 7ff9fa8e5d28 8,216
026a6a4a1408 7ff9fabcfdb8 40
026a6a4a1430 026a59a88160 92,504 Free
026a6a4b7d88 7ff9fa8e5d28 8,216
026a6a4b9da0 7ff9fabcfdb8 40
026a6a4b9dc8 026a59a88160 148,560 Free
026a6a4de218 7ff9fa8e5d28 8,216
026a6a4e0230 7ff9fabcfdb8 40
026a6a4e0258 026a59a88160 106,976 Free
026a6a4fa438 7ff9fa8e5d28 8,216
026a6a4fc450 7ff9fabcfdb8 40
026a6a4fc478 026a59a88160 79,408 Free
026a6a50faa8 7ff9fa8e5d28 8,216
026a6a511ac0 026a59a88160 161,488 Free
026a6a539190 7ff9fa8e5d28 8,216
026a6a53b1a8 7ff9fabcfdb8 40
026a6a53b1d0 026a59a88160 301,024 Free
026a6a5849b0 7ff9fa8e5d28 8,216
026a6a5869c8 026a59a88160 145,400 Free
026a6a5aa1c0 7ff9fa8e5d28 8,216
026a6a5ac1d8 7ff9fabcfdb8 40
026a6a5ac200 026a59a88160 99,216 Free
026a6a5c4590 7ff9fa8e5d28 8,216
026a6a5c65a8 7ff9fabcfdb8 40
026a6a5c65d0 026a59a88160 92,552 Free
026a6a5dcf58 7ff9fa8e5d28 8,216
026a6a5def70 7ff9fabcfdb8 40
026a6a5def98 026a59a88160 160,024 Free
026a6a6060b0 7ff9fa8e5d28 8,216
026a6a6080c8 7ff9fabcfdb8 40
026a6a6080f0 026a59a88160 92,544 Free
026a6a61ea70 7ff9fa8e5d28 8,216
026a6a620a88 7ff9fabcfdb8 40
026a6a620ab0 026a59a88160 81,576 Free
026a6a634958 7ff9fa8e5d28 8,216
026a6a636970 7ff9fabcfdb8 40
026a6a636998 026a59a88160 158,296 Free
026a6a65d3f0 7ff9fa8e5d28 8,216
026a6a65f408 7ff9fabcfdb8 40
026a6a65f430 026a59a88160 103,816 Free
026a6a6789b8 7ff9fa8e5d28 8,216
026a6a67a9d0 7ff9fabcfdb8 40
026a6a67a9f8 026a59a88160 89,176 Free
026a6a690650 7ff9fa8e5d28 8,216
026a6a692668 7ff9fabcfdb8 40
026a6a692690 026a59a88160 297,232 Free
026a6a6dafa0 7ff9fa8e5d28 8,216
026a6a6dcfb8 7ff9fabcfdb8 40
026a6a6dcfe0 026a59a88160 116,688 Free
026a6a6f97b0 7ff9fa8e5d28 8,216
026a6a6fb7c8 7ff9fabcfdb8 40
026a6a6fb7f0 026a59a88160 92,552 Free
026a6a712178 7ff9fa8e5d28 8,216
026a6a714190 7ff9fabcfdb8 40
026a6a7141b8 026a59a88160 149,184 Free
026a6a738878 7ff9fa8e5d28 8,216
026a6a73a890 7ff9fabcfdb8 40
026a6a73a8b8 026a59a88160 91,248 Free
026a6a750d28 7ff9fa8e5d28 8,216
026a6a752d40 7ff9fabcfdb8 40
026a6a752d68 026a59a88160 91,232 Free
026a6a7691c8 7ff9fa8e5d28 8,216
026a6a76b1e0 7ff9fabcfdb8 40
026a6a76b208 026a59a88160 92,544 Free
026a6a781b88 7ff9fa8e5d28 8,216
026a6a783ba0 7ff9fabcfdb8 40
026a6a783bc8 026a59a88160 170,760 Free
026a6a7ad6d0 7ff9fa8e5d28 8,216
026a6a7af6e8 7ff9fabcfdb8 40
026a6a7af710 026a59a88160 91,216 Free
026a6a7c5b60 7ff9fa8e5d28 8,216
026a6a7c7b78 7ff9fabcfdb8 40
026a6a7c7ba0 026a59a88160 140,096 Free
026a6a7e9ee0 7ff9fa8e5d28 8,216
026a6a7ebef8 7ff9fabcfdb8 40
Statistics:
MT Count TotalSize Class Name
7ff9fabcfdb8 29 1,160 System.Threading.ThreadPoolBoundHandle
7ff9fa8e5d28 31 254,696 System.Byte[]
026a59a88160 31 3,856,264 Free
Total 91 objects, 4,112,120 bytes
Notice the majority of memory here is “Free” and the rest is mostly 8KB Byte[] objects. Why isn’t the GC reclaiming or moving them? Well, those System.Byte[] objects are referenced and pinned:
0:000> !gcroot 026a6a7e9ee0
HandleTable:
0000026a5966a150 (strong handle)
-> 026a930cba68 System.Threading.ThreadPoolBoundHandleOverlapped
-> 026a930cb9e8 System.IO.FileSystemWatcher+AsyncReadState
-> 026a6a7e9ee0 System.Byte[]
0000026a59669f10 (pinned handle)
-> 026a6a7e9ee0 System.Byte[]
Found 2 unique roots.
As long as they are pinned, they won't be moved. Since the space between the pinned Byte[] objects is free, and this is Gen2, that free space is essentially unusable.
Why is it unusable? Because SOH allocations are only made in Gen0, so all of these regions and the free memory inside them, sitting in Gen2, can't be used to satisfy new allocations.
Over time, as more and more of these Byte[] objects are allocated and pinned, more and more regions will continue getting promoted to Gen2 (assuming those objects are leaked and don’t go away) and get essentially locked away.
Also notice here the “System.IO.FileSystemWatcher+AsyncReadState” object that is referencing our pinned Byte[].
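To illustrate why pinning matters, here is a small, self-contained sketch (my own example, not the actual FileSystemWatcher code) of pinning a buffer with GCHandle. The 8KB buffers in the dump are pinned in essentially this way so that native code can write to a fixed address, and as long as the handle is alive the GC cannot move the array, which is what produces the free gaps seen above.

using System;
using System.Runtime.InteropServices;

class PinnedBufferDemo
{
    static void Main()
    {
        byte[] buffer = new byte[8192];

        // While this handle exists, the GC may collect objects around the array but can
        // never relocate it, so neighboring free space cannot be compacted away.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        IntPtr address = handle.AddrOfPinnedObject(); // the address handed to native code

        Console.WriteLine($"Buffer pinned at 0x{address.ToInt64():X}");

        // A well-behaved component eventually releases the pin; the leak in this post
        // happens because these pinned buffers keep accumulating and are never released.
        handle.Free();
    }
}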
Here's another view of this problem. These are from a different set of dumps than the data above, but they demonstrate the problem more visibly. This time I've filtered the output to type names containing "FileSystemWatcher":
First dump of the process:
0:000> !dumpheap -stat -type FileSystemWatcher
Statistics:
MT Count TotalSize Class Name
7fff3be68d20 1 24 System.IO.FileSystemWatcher+c
7fff3be363b8 2 48 System.IO.FileSystemWatcher+NormalizedFilterCollection
7fff3be36c98 2 48 System.IO.FileSystemWatcher+NormalizedFilterCollection+ImmutableStringList
7fff3be35600 2 240 System.IO.FileSystemWatcher
7fff3be623e8 9,569 229,656 System.WeakReference
7fff3be61718 9,569 612,416 System.IO.FileSystemWatcher+AsyncReadState
Second dump taken a bit later:
0:000> !dumpheap -stat -type FileSystemWatcher
Statistics:
MT Count TotalSize Class Name
7fff3be68d20 1 24 System.IO.FileSystemWatcher+c
7fff3be363b8 2 48 System.IO.FileSystemWatcher+NormalizedFilterCollection
7fff3be36c98 2 48 System.IO.FileSystemWatcher+NormalizedFilterCollection+ImmutableStringList
7fff3be35600 2 240 System.IO.FileSystemWatcher
7fff3be623e8 18,037 432,888 System.WeakReference
7fff3be61718 18,037 1,154,368 System.IO.FileSystemWatcher+AsyncReadState
The count and total size of these objects roughly doubled, but notice that neither total size is particularly notable (<2 MB for both in the second dump). Of course, the Byte[] associated with each one adds up, but because Byte[] is such a common type in .NET, it often does not stand out and gets skipped over in !dumpheap output.
Cause
So where are those objects coming from? In all the cases I’ve seen so far, it’s from code like this being present in a hot or somewhat frequently used code path:
IConfiguration configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json",
        optional: true,
        reloadOnChange: true)
    .Build();
var someConfig = configuration["someConfig"];
The impact of this code grows the more often it runs. The problem is not obvious at a glance, but the trigger is reloadOnChange: true.
By default, reloadOnChange is false, but it is explicitly enabled in the code above. I suspect this code shows up in apps because of the file provider examples in the ASP.NET Core configuration docs (it's not specific to the JSON provider; all of the file-based provider examples show it): ASP.NET Core Configuration File Providers. The general .NET configuration providers doc shows something similar for all of the file providers: .NET Configuration Providers. All of those examples, at the time of this writing, show reloadOnChange: true.
This pattern is really only meant to be used during app startup, when consuming a custom config file that ASP.NET Core does not already load automatically (assuming those defaults haven't been changed). Instead, as mentioned above, some folks have mistakenly used this code in something like a controller action or middleware component to get at a needed config value, not knowing what it does under the hood, and not realizing that the value they wanted was typically already loaded (and monitored) by the app's configuration system.
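For context, here is a hedged sketch of what that anti-pattern often looks like in practice (the controller and setting names are made up for illustration). Every request that hits this action builds a new configuration root, and with reloadOnChange: true each call wires up another file watcher with its own pinned buffer.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("[controller]")]
public class ExampleController : ControllerBase
{
    [HttpGet]
    public string Get()
    {
        // Anti-pattern: rebuilding configuration on every request.
        IConfiguration configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json",
                optional: true,
                reloadOnChange: true)   // sets up file monitoring again on every call
            .Build();

        return configuration["someConfig"] ?? "(not set)";
    }
}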
Resolution
First, you should understand whether you actually need to tell .NET to load the requested configuration file in the first place, as ASP.NET Core already loads (and monitors) several for you (again, assuming those defaults haven't been modified in your app). Normal .NET console apps also consume some standard configuration files when using the Generic Host. Other app types may have their own set as well. The best solution is to access app config using the designed methods, such as through dependency injection.
If it’s determined the app needs to ingest and monitor a custom configuration file, add it once as early as possible (ideally during app startup), then only use retrieval methods later.
These methods for ASP.NET Core are described here: Configuration in ASP.NET Core.
Here is the general .NET configuration doc: Configuration in .NET.
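As a concrete illustration of that guidance, here is a hedged sketch (the file name customsettings.json and the controller are illustrative, not from the docs above; it assumes a standard ASP.NET Core project with implicit usings): the custom file is registered once at startup, and values are later read through the IConfiguration instance that dependency injection already provides.

// Program.cs: the extra file is added exactly once, at startup, so it is loaded
// and monitored a single time for the lifetime of the app.
var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddJsonFile("customsettings.json",
    optional: true,
    reloadOnChange: true);

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();

// Elsewhere: inject IConfiguration instead of building a new configuration root per request.
public class ConfigController : Microsoft.AspNetCore.Mvc.ControllerBase
{
    private readonly IConfiguration _configuration;

    public ConfigController(IConfiguration configuration) => _configuration = configuration;

    [Microsoft.AspNetCore.Mvc.HttpGet("/some-config")]
    public string Get() => _configuration["someConfig"] ?? "(not set)";
}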
For a quick, short-term fix, or if your app absolutely must dynamically load a config file multiple times during normal execution, be sure to set reloadOnChange to false so the file-monitoring path is not taken. This is still inefficient and relatively slow compared to using DI or the normal retrieval methods, however, because .NET still has to open and parse the file every time it's called.
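If that short-term route is taken, the only change from the problematic snippet shown earlier is the flag itself; a minimal sketch:

IConfiguration configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json",
        optional: true,
        reloadOnChange: false)   // the default; the file-monitoring path is never taken
    .Build();

var someConfig = configuration["someConfig"];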
More Information
What is the pathway from reloadOnChange: true to the pinned buffer? Under the hood, when Build() is called on the configuration builder and reloadOnChange is true, the configuration code does some work with the goal of monitoring the specified file for changes. Specifically, this series of calls is made:
The AllocateBuffer() call at the top is where the 8KB Byte[] buffer is allocated on the heap (in Gen0); this is the same 8,216-byte object shown in the !dumpheap output earlier in this post.
The StartRaisingEvents() call is the one that does much of the legwork here: it gets the buffer and leads down the path of getting it pinned. Here's a link to the latest .NET 8 (LTS, 8.0.16) code where this is done:
Notice this particular method is in FileSystemWatcher.Win32.cs; this is because Windows, Linux, and macOS all handle file monitoring differently. On Windows, the actual Win32 API call that does the monitoring work is ReadDirectoryChangesW, which is called from here.
Here’s the signature for this function at the time of this writing:
BOOL ReadDirectoryChangesW(
[in] HANDLE hDirectory,
[out] LPVOID lpBuffer,
[in] DWORD nBufferLength,
[in] BOOL bWatchSubtree,
[in] DWORD dwNotifyFilter,
[out, optional] LPDWORD lpBytesReturned,
[in, out, optional] LPOVERLAPPED lpOverlapped,
[in, optional] LPOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine
);
The buffer allocated in managed code is pinned, and its address is passed as the lpBuffer parameter above. In short, Windows populates the buffer with the requested change notifications as they happen, which is why .NET needs to pin it: native code writes to that address, so the GC must not move the array. Since macOS and Linux do this differently, no pinned buffer is needed there.
All of this happens each time IConfigurationBuilder.Build() is called with reloadOnChange==true. This means that over time, more and more buffers are created and pinned, eventually leading to heavy memory fragmentation, as well as entire Gen2 regions that contain mostly free blocks and essentially go unused.
This is not actually a new issue; it has existed for several years and popped up in various places. Here are some old GitHub issues describing it: