Effective .NET Memory Management

You're reading from Effective .NET Memory Management: Build memory-efficient cross-platform applications using .NET Core

Product type Paperback
Published in Jul 2024
Publisher Packt
ISBN-13 9781835461044
Length 270 pages
Edition 1st Edition
Author: Trevoir Williams
Table of Contents (12)

  • Preface
  • Chapter 1: Memory Management Fundamentals (Free Chapter)
  • Chapter 2: Object Lifetimes and Garbage Collection
  • Chapter 3: Memory Allocation and Data Structures
  • Chapter 4: Memory Leaks and Resource Management
  • Chapter 5: Advanced Memory Management Techniques
  • Chapter 6: Memory Profiling and Optimization
  • Chapter 7: Low-Level Programming
  • Chapter 8: Performance Considerations and Best Practices
  • Chapter 9: Final Thoughts
  • Index
  • Other Books You May Enjoy

Profiling memory usage and allocation

Profiling memory effectively requires several sophisticated techniques, each of which can provide insights into how an application uses memory. Each method serves different purposes and offers unique benefits. Assessing how memory is allocated is vital for understanding a program’s memory footprint and behavior. Generally, we need to evaluate the following:

  • Allocation patterns: Understanding whether memory allocation is static, dynamic, or stack-based helps identify how memory management should be approached
  • Allocation hotspots: Identifying parts of the code where a high volume of memory allocations occurs can help optimize memory usage and improve application performance
  • Object lifetime management: Properly managing objects’ lifecycles ensures that memory is freed up when it is no longer needed, avoiding memory leaks
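To get a first look at allocation hotspots without an external profiler, you can measure how many bytes a region of code allocates. The following is a minimal sketch (the AllocationMeter name and the string-building comparison are illustrative assumptions); it relies on GC.GetAllocatedBytesForCurrentThread, available since .NET Core 3.0:

```csharp
using System;

public static class AllocationMeter
{
    // Measures bytes allocated on the current thread while the action runs.
    public static long Measure(Action action)
    {
        long before = GC.GetAllocatedBytesForCurrentThread();
        action();
        return GC.GetAllocatedBytesForCurrentThread() - before;
    }
}

class Demo
{
    static void Main()
    {
        // Hypothetical comparison: string concatenation versus StringBuilder.
        long concat = AllocationMeter.Measure(() =>
        {
            string s = "";
            for (int i = 0; i < 1000; i++) s += i; // allocates a new string each pass
        });
        long builder = AllocationMeter.Measure(() =>
        {
            var sb = new System.Text.StringBuilder();
            for (int i = 0; i < 1000; i++) sb.Append(i);
        });
        Console.WriteLine($"Concat: {concat} bytes, StringBuilder: {builder} bytes");
    }
}
```

Wrapping suspect regions like this is a cheap way to rank candidate hotspots before reaching for a full profiler.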

Implementing memory profiling effectively involves more than just selecting the right tools. The following best practices are crucial for obtaining accurate results and making informed optimization decisions. Profiling should be practiced as early as possible in the development process to catch potential issues before they become ingrained in the codebase. This proactive approach can save significant time and resources down the line. We must also choose tools that best fit the specific needs of the project and development environment. Consider factors such as the type of application, the deployment environment, and specific performance goals when selecting profiling tools.

Automating the collection and analysis of profiling data can help consistently monitor memory usage across different application lifecycle stages, including testing and production. Once the data is collected, focus on insights that can lead to actionable improvements and prioritize issues based on their impact on application performance and user experience.

Most of us work in teams, and it helps when every team member has some experience performing these kinds of assessments. Education is paramount: we must ensure that all team members understand the principles of memory management and are proficient with the profiling tools in use. This shared understanding helps prevent memory issues and improves the overall quality of the code. It is also worth documenting the findings from memory profiling sessions and the actions taken to address them, which makes it possible to track performance improvements over time. This documentation can be invaluable for new team members and for future reference.

Now, let’s review techniques for identifying allocation patterns and logging/documenting them with custom code.

Identifying allocation patterns with custom code

Identifying allocation patterns involves understanding how and where memory is allocated for objects and data structures. This can help developers optimize memory usage, improve performance, and reduce issues related to memory leaks and excessive garbage collection.

On the topic of garbage collection, we know by now that this is a marquee feature of .NET: the GC automatically frees memory for us after determining which objects are still in use and which are out of scope. We also know that we can write code that inadvertently keeps objects reachable and defeats the GC, which is how memory leaks arise. Because it can be difficult to pinpoint where the faulty code executes, we can add snippets at different execution points in our code to assess whether garbage collection has occurred, and how many times. The following code snippet shows how the GC can be monitored during application runtime:

public void MonitorGarbageCollections()
{
    Console.WriteLine($"GC Gen 0 has been collected {GC.CollectionCount(0)} times");
    Console.WriteLine($"GC Gen 1 has been collected {GC.CollectionCount(1)} times");
    Console.WriteLine($"GC Gen 2 has been collected {GC.CollectionCount(2)} times");
}

Calling this method at various points in your application can show how frequently different generations of objects are collected, indicating their longevity and allocation rate. Understanding when and why garbage collections occur can provide insights into memory allocation patterns and help to differentiate between short-lived and long-lived objects.
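Because GC.CollectionCount returns cumulative totals, comparing counts before and after a specific operation is often more telling than a single reading. A small sketch of that delta pattern (the GcDelta name is an illustrative assumption):

```csharp
using System;

public static class GcDelta
{
    // Reports how many collections of each generation occurred while the action ran.
    public static void Report(string label, Action action)
    {
        int[] before = new int[GC.MaxGeneration + 1];
        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
            before[gen] = GC.CollectionCount(gen);

        action();

        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
            Console.WriteLine(
                $"{label}: Gen {gen} collected {GC.CollectionCount(gen) - before[gen]} times");
    }
}
```

For example, GcDelta.Report("Build list", () => BuildLargeList()) would show whether that one operation triggered Gen 0 churn or forced more expensive Gen 2 collections.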

We can also use the GC Notifications API to interact with the garbage collector and implement custom memory profiling. Calling GC.RegisterForFullGCNotification allows an application to receive notifications about impending and completed garbage collections, specifically for full collection cycles. This capability is handy for applications that need to manage memory or resources carefully, as they can perform optimizations or cleanups before and after a full garbage collection.

An example of this is as follows:

GC.RegisterForFullGCNotification(10, 10);
GCNotificationStatus status = GC.WaitForFullGCApproach();
if (status == GCNotificationStatus.Succeeded)
{
    Console.WriteLine("GC is about to happen. Preparing...");
    // Perform necessary pre-GC operations here
}
status = GC.WaitForFullGCComplete();
if (status == GCNotificationStatus.Succeeded)
{
    Console.WriteLine("GC has completed.");
    // Perform necessary post-GC operations here
}

In the preceding code snippet, we start by registering for notifications, providing the maxGenerationThreshold and largeObjectHeapThreshold parameters. maxGenerationThreshold specifies when the notification should be raised based on allocations in generation 2, while largeObjectHeapThreshold does the same based on allocations in the large object heap. Both accept values from 1 to 99: the larger the value, the earlier the notification is raised and the more time the application has to respond, at the cost of potentially waiting longer before the collection actually occurs. Next, we wait for the approach notification and, when it succeeds, execute whatever preparatory code is required. Similarly, we wait for the completion of the garbage collection and execute code accordingly.
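In practice, these wait calls are usually placed in a loop on a dedicated background thread, with GC.CancelFullGCNotification called when monitoring is no longer needed. The following is a rough sketch of that arrangement; the class name, thread setup, and timeout values are illustrative assumptions:

```csharp
using System;
using System.Threading;

class GcNotificationMonitor
{
    private volatile bool _running = true;

    public void Start()
    {
        GC.RegisterForFullGCNotification(10, 10);
        new Thread(WaitLoop) { IsBackground = true }.Start();
    }

    public void Stop()
    {
        _running = false;
        GC.CancelFullGCNotification(); // unregister when monitoring is done
    }

    private void WaitLoop()
    {
        while (_running)
        {
            // Block (with a timeout) until a full GC is approaching.
            if (GC.WaitForFullGCApproach(1000) == GCNotificationStatus.Succeeded)
            {
                Console.WriteLine("Full GC approaching; releasing caches...");
                if (GC.WaitForFullGCComplete(1000) == GCNotificationStatus.Succeeded)
                    Console.WriteLine("Full GC completed; resuming normal work.");
            }
        }
    }
}
```

Running the loop on a background thread keeps the main application responsive while still giving it a chance to react before and after each full collection.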

Another code-based method for identifying application activity is logging and instrumentation. You can instrument your code to log memory metrics at critical points, which can help you track down unexpected allocations. This method involves adding custom code to monitor and record information about memory operations, allowing developers to understand memory usage patterns, detect leaks, and optimize performance:

  • Instrumentation: Modifying the application to include code that measures and records performance and memory usage statistics at crucial stages in the application lifecycle
  • Logging: Writing the collected data to a log file, a console, or a more sophisticated monitoring system, where it can be reviewed to understand how the application manages memory over time

.NET provides various APIs to access memory-related information, which can be logged periodically or triggered by specific events or thresholds. The following class shows how the GC.GetTotalMemory method logs the total memory the application uses and how GC.CollectionCount provides the number of times garbage collection has occurred for each generation:

using System.Diagnostics;

public class MemoryProfiler
{
    public static void LogMemoryUsage()
    {
        long totalMemory = GC.GetTotalMemory(false);
        Debug.WriteLine($"Total memory used: {totalMemory} bytes");
    }
    public static void LogDetailedMemoryUsage()
    {
        for (int i = 0; i <= GC.MaxGeneration; i++)
        {
            int count = GC.CollectionCount(i);
            Debug.WriteLine($"Generation {i} collections: {count}");
        }
    }
}

Calling these methods before and after a potentially heavy operation can help identify unexpected increases in memory usage. To simulate this, we can create a sample method that builds a large list and review the usage before and after the heavy operation, as in the following example:

public void ExampleMethod()
{
    MemoryProfiler.LogMemoryUsage();
    // Perform memory-intensive operations here
    var largeList = new List<int>();
    for (int i = 0; i < 1000000; i++)
    {
        largeList.Add(i);
    }
    MemoryProfiler.LogMemoryUsage();
    MemoryProfiler.LogDetailedMemoryUsage();
}

Here, we use the GC.GetTotalMemory method to log the total memory used by the application, while GC.CollectionCount provides the number of times garbage collection has occurred for each generation. We use the Debug.WriteLine method to emit the diagnostic messages to the Output window in Visual Studio and to other debugging tools that listen to the debug output stream. It is part of the System.Diagnostics namespace and is a simple, effective tool for inspecting what is happening in code during development; because calls to the Debug class are compiled out of Release builds, this diagnostic code won't affect the performance or behavior of the application once it is in production.
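One caveat: GC.GetTotalMemory(false) returns a quick estimate that may include garbage not yet collected. Passing true asks the runtime to collect first, producing a reading closer to the size of live objects. A short sketch of both variants (the MemorySnapshot name is an illustrative assumption):

```csharp
using System;

public static class MemorySnapshot
{
    // Forces a collection first, so the reading approximates live-object size.
    public static long LiveBytes() => GC.GetTotalMemory(forceFullCollection: true);

    // Fast estimate of the current heap, including uncollected garbage.
    public static long RawBytes() => GC.GetTotalMemory(forceFullCollection: false);
}
```

Forcing a collection is itself expensive, so the true variant is best reserved for diagnostic sessions rather than routine logging.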

Now let us step away from writing custom logs and explore the Make Object ID feature in Visual Studio.

Make Object ID to find memory leaks

During debugging, it is common practice to track a variable to observe its usage and lifetime during the application’s execution paths. In Visual Studio, we usually use debugger windows such as the Watch window. However, when a variable goes out of scope in the Watch window, you may notice it becomes grayed out. In some scenarios, the value of a variable may change even when the variable is out of scope, and the new value(s) cannot be tracked via the debugger Watch window. If you must continue watching it closely, you can track the variable by creating an Object ID in the Watch window. This is where we use the Make Object ID feature.

Make Object ID is a valuable feature in Visual Studio that helps track and debug objects in memory, particularly for identifying memory leaks in .NET applications. This identifier remains consistent across different breakpoints and even as you step through the application, allowing developers to track an object’s state changes over time and across various parts of the application, regardless of where the object is in the call stack. It’s advantageous when objects are passed around through multiple methods or threads.

One of the primary uses of this feature is in detecting memory leaks. By marking an object with an ID, you can quickly check if it remains in memory longer than it should, indicating a potential leak. This is particularly crucial in long-running applications where memory leaks can lead to significant performance degradation or even application crashes over time. When paired with Visual Studio’s diagnostic tools, Make Object ID can provide a more comprehensive analysis. For instance, after marking an object, you can take memory snapshots at various phases of execution and compare them to see how the object’s memory allocation changes, which can help in optimizing memory usage.

Let us consider a typical scenario where the Make Object ID feature can help debug a memory leak involving event handlers in a .NET application. Memory leaks often occur when event handlers are not adequately detached, preventing the garbage collector from reclaiming the memory allocated for objects. In the following code, we define an EventPublisher class that raises an event and multiple subscribers (listeners) that attach handlers to this event. Suppose there's a bug causing one of the subscribers not to detach its event handler, leading to a memory leak:

public class EventPublisher
{
    public event EventHandler MyEvent;
    public void TriggerEvent()
    {
        MyEvent?.Invoke(this, EventArgs.Empty);
    }
}
public class EventSubscriber
{
    public void Subscribe(EventPublisher publisher)
    {
        publisher.MyEvent += HandleEvent;
    }
    public void Unsubscribe(EventPublisher publisher)
    {
        publisher.MyEvent -= HandleEvent;
    }
    private void HandleEvent(object sender, EventArgs e)
    {
        Console.WriteLine("Event handled.");
    }
}

The Program.cs class file contains the following code:

var publisher = new EventPublisher();
var subscriber = new EventSubscriber();
// Subscriber attaches to the event
subscriber.Subscribe(publisher);
for (int i = 0; i < 15; i++)
{
    // Simulate event triggering
    publisher.TriggerEvent();
    // Uncomment the following line to test memory management
    // with unsubscribing
    // subscriber.Unsubscribe(publisher);
}
// Keep the console window open
Console.WriteLine("Press any key to exit...");
Console.ReadKey();

Now that we have the code, we can place a breakpoint on the line that contains the subscriber.Subscribe(publisher) method call. Once the code hits the breakpoint, open the Locals window, right-click subscriber in the list of objects, and select Make Object ID. This assigns a unique ID to the subscriber object, say {$1}. Review Figure 6.1 for further insight.

Figure 6.1 – How to add an Object ID to an object during runtime


The apparent bug in the program’s execution path is that the event is not unsubscribed. Suppose we allow the program to run beyond the breakpoint by pressing F5 and check the memory usage after publisher.TriggerEvent() is called several times.

Open the Memory Usage tool under Debug > Windows > Show Diagnostic Tools and take a snapshot before and after the event is triggered. This is the same diagnostic tool we used briefly in the previous chapter. You may place another breakpoint at the final line of code to prevent the program from completing its execution before we can analyze the memory usage. When it hits the final breakpoint, take another snapshot, and check whether the subscriber object still exists by entering its Object ID ({$1}) in the Watch window. As seen in Figure 6.2, it still points to a valid object. If it does, and no other references are holding it, yet it is not collected, it is likely a memory leak.

Figure 6.2 – The Object ID still points to a valid object that should have been collected


The Object ID allows us to directly trace this object’s lifecycle and gather concrete proof that it’s not being collected as garbage due to lingering event handler references.
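One common remedy is to make the subscriber's lifetime explicit by detaching the handler in Dispose. The following sketch (the SafeEventSubscriber name is an assumption) builds on the EventPublisher class defined above:

```csharp
using System;

// A disposable subscriber that guarantees the handler is detached,
// so the publisher no longer keeps the subscriber alive.
public sealed class SafeEventSubscriber : IDisposable
{
    private readonly EventPublisher _publisher;

    public SafeEventSubscriber(EventPublisher publisher)
    {
        _publisher = publisher;
        _publisher.MyEvent += HandleEvent;
    }

    private void HandleEvent(object sender, EventArgs e)
        => Console.WriteLine("Event handled.");

    public void Dispose() => _publisher.MyEvent -= HandleEvent;
}

// Usage: the using block detaches the handler deterministically.
// using (var subscriber = new SafeEventSubscriber(publisher))
// {
//     publisher.TriggerEvent();
// }
```

With this pattern, forgetting to unsubscribe becomes a visible bug (an undisposed IDisposable) that analyzers can flag, rather than a silent leak.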

Beyond debugging and tracing capabilities in Visual Studio and writing code, we can set up Event Tracing for Windows to gather performance data and diagnostics. We will explore this next.

Event Tracing for Windows

Event Tracing for Windows (ETW) is a high-performance, low-overhead event logging system built into Windows. It is commonly used for performance monitoring, debugging, and tracing application execution. When creating a .NET Core application, you can use ETW to log and trace events. Here’s how to set it up and use it, with detailed explanations and code examples.

To use ETW, you need to install the Microsoft.Diagnostics.Tracing.EventSource package. This package provides the necessary APIs to create and manage ETW events:

dotnet add package Microsoft.Diagnostics.Tracing.EventSource

Now that the package is added, we must define the event source. We can use the EventSource class as a base class for our custom class and define the events:

using System.Diagnostics.Tracing;

[EventSource(Name = "SampleEventSource")]
class SampleEventSource : EventSource
{
    public static SampleEventSource Log { get; } = new SampleEventSource();

    [Event(1, Keywords = Keywords.Startup)]
    public void AppStarted(string message) => WriteEvent(1, message);

    [Event(2, Keywords = Keywords.Requests)]
    public void RequestStart(int requestId) => WriteEvent(2, requestId);

    [Event(3, Keywords = Keywords.Requests)]
    public void RequestStop(int requestId) => WriteEvent(3, requestId);

    [Event(4, Keywords = Keywords.Startup, Level = EventLevel.Verbose)]
    public void DebugMessage(string message) => WriteEvent(4, message);

    // Keyword names appear in the ETW manifest only when they are defined
    // in a nested class named Keywords.
    public class Keywords
    {
        public const EventKeywords Startup = (EventKeywords)0x0001;
        public const EventKeywords Requests = (EventKeywords)0x0002;
    }
}

We define different methods to log a message with a specific log level. A level is a number (a value of the EventLevel enumeration) that helps categorize and filter log messages during analysis. The preset levels are as follows:

  • 0 = LogAlways
  • 1 = Critical
  • 2 = Error
  • 3 = Warning
  • 4 = Informational
  • 5 = Verbose

Doing this gives us a reusable class and methods to maintain a standard for logging messages in our application. Most event collection and analysis tools use these options to decide which events should be included in a trace:

  • Provider names: A list of one or more EventSource names. Only events defined on EventSources in this list are eligible to be included. To collect events from the SampleEventSource class above, you must include the EventSource name SampleEventSource in the list of provider names.
  • Event verbosity level: Each provider can define a verbosity level, and events with higher verbosity levels will be excluded from the trace. For example, configuring the application to collect Information verbosity-level events will exclude DebugMessage events since this is higher.
  • Event keywords: Each provider can define keywords and only events tagged with at least one of the keywords will be included. For example, only the AppStarted and DebugMessage events would be included if we specify the Startup keyword. If no keywords are specified, then events with any keyword will be included.
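These same provider-name, level, and keyword filters can also be exercised in-process using the framework's EventListener base class, which is a handy way to check what an external tool would see. A sketch, assuming the SampleEventSource defined above:

```csharp
using System;
using System.Diagnostics.Tracing;

// An in-process listener that subscribes to SampleEventSource using the same
// provider-name / level / keyword filters that external trace tools apply.
sealed class ConsoleEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "SampleEventSource")
        {
            // Informational level + the Startup keyword (0x1):
            // AppStarted passes; DebugMessage is Verbose, so it is filtered out.
            EnableEvents(source, EventLevel.Informational, (EventKeywords)0x1);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        string payload = e.Payload == null ? "" : string.Join(", ", e.Payload);
        Console.WriteLine($"Event {e.EventId} ({e.EventName}): {payload}");
    }
}
```

Creating the listener before the log calls run (for example, using var listener = new ConsoleEventListener(); at the top of Program.cs) is enough to start receiving the filtered events.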

The following is the code that goes into the Program.cs to create sample log entries:

SampleEventSource.Log.AppStarted("Application Started!");
SampleEventSource.Log.DebugMessage("Process 1");
SampleEventSource.Log.DebugMessage("Process 1 Finished");
SampleEventSource.Log.RequestStart(3);
SampleEventSource.Log.RequestStop(3);

Now, you can run your application and use tools such as PerfView, Windows Performance Recorder (WPR), or Windows Performance Analyzer (WPA) to monitor and analyze the ETW events. For this demo, however, we will use the event viewer built into Visual Studio’s diagnostic tools. Open the Performance Profiler in Visual Studio (Alt + F2) and select the Events Viewer check box. Then, select the small gear icon to the right of the option to open the configuration window. This is shown in Figure 6.3.

Figure 6.3 – Event Viewer option in Visual Studio’s Performance Tools


The new window will contain a table that allows you to specify Additional Providers. Proceed to add a row for the SampleEventSource provider, click the Enabled checkbox, specify that the Enabled Keyword is 0x1, and change the level to Informational, as seen in Figure 6.4.

Figure 6.4 – Configuring the additional provider event source for the event viewer


Once all the options have been entered, click Start to run the app and collect logs. Select Stop Collection or exit the application to stop collecting logs and show the collected data. As seen in Figure 6.5, we can filter through the tens of thousands of events generated to view logs generated by our custom event provider.

Figure 6.5 – Filter events coming from a custom event provider


This was a simple demo, but it shows how ETW functionality can be embedded into your application. ETW is a powerful logging technology built into many parts of the Windows infrastructure. It is leveraged in the .NET CLR to collect system-wide data and profile all resources (CPU, disk, network, and memory) to help us obtain a holistic view of the application’s performance. Given that it has a low overhead, which can be further tuned, it is a suitable solution for monitoring production application diagnostics.

At this point, we have seen several ways to modify both our code and environments to provide additional information about the inner workings of our application at a system and runtime level. These approaches can be beneficial in finding elusive issues but also introduce new challenges for our dev teams. We will discuss some of the downsides to these next.

Downsides of profiling

Memory profiling, while a powerful tool for improving software performance and stability, introduces overhead while it runs. This overhead can affect both the performance of the application being profiled and the accuracy of the measurements, which is one reason profiling remains a road less traveled by many software developers. Some of the more common challenges are as follows:

  • Additional code execution: Memory profiling tools inject additional code into your application or run alongside it to monitor memory usage. This code tracks every allocation and deallocation, adding extra instructions the processor must execute. The more detailed the profiling (e.g., tracking each memory allocation), the higher the overhead. Also, maintaining and updating logging and instrumentation code can require significant effort, especially as the application grows.
  • Performance overhead: As the profiling tool tracks memory allocations and deallocations, it needs to store this data somewhere. This involves using additional memory and potentially significant I/O operations to write this data to disk. These operations are not part of the application’s normal execution flow and can significantly slow overall performance, especially if the I/O subsystem is already a bottleneck.
  • Increased CPU usage: Profilers need CPU cycles to run monitoring code, process the collected data, and possibly analyze it on the fly. This additional CPU usage can compete with the application for resources, particularly in CPU-bound scenarios, leading to slower overall performance.
  • Profiler memory footprint: In addition to tracking the application's memory usage, profilers require memory to operate. This includes memory for storing the collected data and overhead for the profiler's own operations (such as its runtime environment). This increased demand can leave less memory available to the application, potentially causing more frequent garbage collections or paging, which can degrade performance.
  • Impact on garbage collector: Profilers can affect the garbage collector’s operations. By tracking object allocations and deallocations, the profiler may keep references to objects that would otherwise be collected, thus delaying garbage collection cycles or making them more frequent or prolonged. Each of these scenarios can introduce delays and performance hits to the application.

There are a few strategies that can be employed to mitigate the aforementioned concerns, as follows:

  • Use selective profiling: Running the profiler only on specific parts of the application or under certain conditions rather than profiling the entire application continuously
  • Off-peak profiling: Schedule profiling sessions during development or testing phases or during off-peak hours to minimize the impact on production performance
  • Incremental profiling: Gradually profiling different application parts in successive runs instead of all at once to reduce the load during any single profiling session
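Selective profiling can be as simple as gating the instrumentation behind an opt-in flag, so production runs pay almost nothing unless profiling is explicitly enabled. A sketch of that idea (the environment variable name and class name are assumptions):

```csharp
using System;

public static class SelectiveProfiler
{
    // Gate instrumentation behind an environment variable so production
    // runs incur no profiling cost unless explicitly opted in.
    public static readonly bool Enabled =
        Environment.GetEnvironmentVariable("ENABLE_MEMORY_PROFILING") == "1";

    public static void LogIfEnabled(string label)
    {
        if (!Enabled) return; // near-zero cost when profiling is off
        Console.WriteLine($"{label}: {GC.GetTotalMemory(false)} bytes in use");
    }
}
```

Because the flag is read once at startup, the disabled path reduces to a single branch, keeping the overhead negligible in normal operation.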

Understanding and planning for these overheads is essential when setting up memory profiling, especially in performance-sensitive environments.

Now that we have reviewed some of the challenges and concerns with adding profilers and some ways to manage the potential effects, let’s explore how we can detect possible memory leaks using unit testing.
