In this post, we will discuss the 6 best memory optimization tips for .NET applications. Memory problems silently kill large .NET applications. It's a bit like high blood pressure: keep eating junk food and, without you noticing, it will one day cause you serious medical problems. For .NET programs, those problems are high memory consumption, performance degradation, and outright crashes. This post will show you how to keep your application's blood flowing healthily.
How can you tell whether your memory usage is healthy? Are you doing everything you can to keep it that way? That is exactly the question this article addresses. Detecting memory problems at the earliest possible stage is key to keeping your memory healthy. Among other things, you'll learn how to optimize garbage collections and improve the speed of your application.
1. Make sure objects are collected quickly
For your program to run quickly, it's essential that objects are collected as soon as possible. To appreciate why, you need to understand how .NET's generational garbage collector works. New objects are allocated in Generation 0 on the heap, whose memory space is small. Objects that survive a Gen 0 collection because they are still referenced are promoted to Gen 1, which has a larger memory space. Objects that survive a Gen 1 collection are promoted to Gen 2.
Gen 0 collections are very frequent and very fast. Gen 1 collections are more expensive because they go over both memory spaces. Gen 2 collections, which also include the Large Object Heap (LOH), are extremely expensive. The GC is designed around this hierarchy: many cheap Gen 0 collections, fewer Gen 1 collections, and very few Gen 2 collections. If, on the other hand, many of your objects are promoted to a higher generation, you'll experience the opposite: memory pressure and poor performance.
Allocating new objects is extremely cheap; it's the collections you need to worry about.
So how do you make sure objects are collected while still in a low generation? Simply make sure they stop being referenced as soon as possible. Some objects do need to stay in memory for the application's lifetime, such as singleton services, but those are usually not memory-intensive.
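To illustrate, here's a minimal sketch (the `ReportService` and `Transform` names are hypothetical) contrasting a reference that outlives its usefulness with one that dies as soon as the method returns:

```csharp
using System;

class ReportService
{
    // BAD: holding the buffer in a field keeps it referenced between
    // calls, so it survives Gen 0 collections and gets promoted.
    private byte[] _lastBuffer;

    public int ProcessBad(byte[] input)
    {
        _lastBuffer = Transform(input);   // reference outlives the call
        return _lastBuffer.Length;
    }

    // GOOD: the buffer is a local; once the method returns it is
    // unreachable and can die cheaply in a Gen 0 collection.
    public int ProcessGood(byte[] input)
    {
        byte[] buffer = Transform(input);
        return buffer.Length;             // no reference escapes
    }

    private static byte[] Transform(byte[] input) => (byte[])input.Clone();
}
```

The behavior is identical; only the lifetime of the temporary buffer differs, and that lifetime decides which generation pays for it.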
2. Make use of caching, but be cautious
Mechanisms like caching have an inherent problem: cached objects are long-lived, so they are likely to be promoted to Gen 2. Caching can really boost performance, but it's bad for GC pressure, so it must be monitored.
You can relieve some of this pressure by using mutable cache objects. The idea is to update existing cache objects rather than replace them with new ones. As a result, the GC has less work to do promoting objects and initiates fewer Gen 0 and Gen 1 collections.
As an example, here's what I mean. Suppose your online grocery store caches stock items: prices and data for frequently accessed products, say frozen pizzas, are stored in a cache mechanism. Let's say the details change every five minutes, so each time the cache entry is invalidated, the database has to be re-queried. Instead of creating a new Pizza object on every refresh, you would modify the existing one.
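A minimal sketch of this idea might look as follows (the `StockItem` and `StockCache` types are hypothetical stand-ins for your own cache):

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical stock item; mutable so refreshes can update it in place.
class StockItem
{
    public string Name { get; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }

    public StockItem(string name, decimal price, int quantity)
    {
        Name = name; Price = price; Quantity = quantity;
    }
}

class StockCache
{
    private readonly ConcurrentDictionary<string, StockItem> _items = new();

    // Instead of replacing the cached object (which orphans the old
    // long-lived instance and allocates a new one for the GC to
    // promote), mutate the existing object in place.
    public void Refresh(string name, decimal price, int quantity)
    {
        _items.AddOrUpdate(
            name,
            _ => new StockItem(name, price, quantity),   // first sighting
            (_, existing) =>
            {
                existing.Price = price;        // same object, no allocation
                existing.Quantity = quantity;
                return existing;
            });
    }

    public StockItem Get(string name) => _items[name];
}
```

One caveat: the two property writes aren't atomic together, so if readers must never see a half-updated item you'd need a lock or a snapshot scheme; for simple price/quantity data this is often acceptable.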
3. Monitor the percentage of time in the GC
It's fairly easy to find out how much garbage collections hurt your execution time. Look at the .NET CLR Memory | % Time in GC performance counter, which shows the percentage of elapsed time spent inside the garbage collector. Performance counters can be viewed with a variety of tools: PerfMon on Windows, dotnet-trace on Linux. Here's how to measure memory, CPU, and everything else using performance counters in .NET.
The following numbers are magic numbers, so take them with a grain of salt because every situation is different. As a general rule, up to 10% time in GC is reasonable for a large application; anything above 20% is suspect.
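If you'd rather get a rough figure from inside the process than from a counter tool, here's a sketch assuming .NET 7 or later, where `GC.GetTotalPauseDuration()` reports the total time the runtime has paused for GC:

```csharp
using System;
using System.Diagnostics;

static class GcPressureCheck
{
    // Rough in-process approximation of "% time in GC": total GC pause
    // time divided by the process's wall-clock uptime. Requires .NET 7+.
    public static double PercentTimeInGc()
    {
        TimeSpan paused = GC.GetTotalPauseDuration();
        using var process = Process.GetCurrentProcess();
        TimeSpan uptime = DateTime.Now - process.StartTime;
        return 100.0 * paused.TotalMilliseconds / uptime.TotalMilliseconds;
    }
}
```

Note this measures pause time, which is close to but not identical to what the performance counter reports; treat it as a trend indicator, not an exact match.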
4. Watch out for those Gen 2 collections
In addition to % Time in GC, you should monitor the number of Gen 2 collections, or rather the rate at which they occur. The goal is to keep them to a minimum. Remember that these are full-memory heap collections: while the GC collects everything, all of the application's threads are effectively frozen.
There's no magic number for how many Gen 2 collections are acceptable. Monitor that number periodically; if it rises, you've probably introduced some very bad behavior. The .NET CLR Memory | # Gen 2 Collections counter shows it.
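You can also track this from inside the application with `GC.CollectionCount`, which returns how many times a given generation has been collected since the process started. A minimal sketch (the `Gen2Watcher` name is hypothetical):

```csharp
using System;

class Gen2Watcher
{
    private int _last = GC.CollectionCount(2);

    // Call periodically (say, once a minute from a timer) and alert
    // if the rate of full Gen 2 collections suddenly jumps.
    public int CollectionsSinceLastCheck()
    {
        int current = GC.CollectionCount(2);
        int delta = current - _last;
        _last = current;
        return delta;
    }
}
```

Log the delta over time; a sustained increase in the rate is the signal to investigate.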
5. Keep track of memory consumption
Consider an application in its regular state. It keeps doing the same kind of work: a server serves requests, a service pulls messages from a queue, a desktop application moves between its many screens. All this time you are continuously creating new objects, performing some operations on them, and returning to a normal state afterward. Memory consumption should therefore remain roughly the same over time. It may reach high levels during peak times or heavy operations, but it should return to normal once the work is done.
Monitoring many applications over time, however, may reveal that memory keeps increasing. Average consumption gradually climbs even though logically it shouldn't. That behavior is almost always caused by memory leaks: objects that are no longer used but are still referenced for some reason, and therefore never collected.
Each time an operation leaks an object, a little more memory is consumed. As time passes, memory keeps growing until the process approaches its limit: 4GB for 32-bit processes, while 64-bit processes are constrained only by the machine. When you get that close to the limit, the garbage collector panics: allocations that run out of memory trigger full-memory Gen 2 collections, which can easily slow your application to a crawl. After even more time passes, the limit is reached and the application crashes with an OutOfMemoryException. A heart attack in the making.
To avoid ever reaching that state, I recommend actively monitoring memory consumption over time. You can do so by checking the Process | Private Bytes performance counter, which is easy with Process Explorer or PerfMon.
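If you want the application to log this figure itself, `Process.PrivateMemorySize64` exposes the same private-memory number from code. A minimal sketch (the `MemoryMonitor` name is hypothetical):

```csharp
using System;
using System.Diagnostics;

static class MemoryMonitor
{
    // Returns the process's private memory in bytes, the figure that
    // the "Process | Private Bytes" counter reports. Log it periodically
    // and watch the trend: a steady climb suggests a leak.
    public static long PrivateBytes()
    {
        using var process = Process.GetCurrentProcess();
        return process.PrivateMemorySize64;
    }
}
```

Writing this value to your logs once a minute gives you a history you can chart when you suspect a leak.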
6. Analyze memory leaks periodically
Memory leaks are without a doubt the biggest culprit behind memory problems. They do long-term damage because they're easy to cause and can go unnoticed for quite a while. Fixing memory leaks once your application is already crashing consistently is very hard: you'll have to change old code, which invites plenty of regression bugs. In my opinion, preventing memory leaks should be a prime objective of any application that wants healthy memory.
There is no way to guarantee your team will never introduce memory leaks, and it isn't feasible to check for leaks on every new commit. A better approach is to check for memory leaks periodically: every week, every month, or every quarter, whatever fits your needs.
You can also check for leaks whenever your memory rises, as in tip #5. The catch is that leaks with a minimal memory footprint can cause a great deal of trouble too: objects that should have been collected might still be executing code, resulting in incorrect behavior.
Memory leaks are best detected and fixed with a memory profiler. My article Demystifying Memory Profilers in C# .NET Part 2: Memory Leaks shows how to do that.
My article 8 Ways You can Cause Memory Leaks in .NET explains which design patterns cause memory leaks.
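To give a flavor of how easy leaks are to cause, here's a sketch of one classic pattern: subscribing to a static event and never unsubscribing. The delegate list of the long-lived event roots every subscriber, so the GC can never collect them (the `AppEvents` and `Screen` names are hypothetical):

```csharp
using System;

// A static event lives for the whole process. Its delegate list
// keeps every subscriber alive until they unsubscribe.
static class AppEvents
{
    public static event EventHandler SettingsChanged;
    public static void Raise() => SettingsChanged?.Invoke(null, EventArgs.Empty);
}

class Screen : IDisposable
{
    public int Refreshes { get; private set; }

    // Subscribing roots this Screen in AppEvents' delegate list.
    public Screen() => AppEvents.SettingsChanged += OnSettingsChanged;

    private void OnSettingsChanged(object sender, EventArgs e) => Refreshes++;

    // The fix: unsubscribe when done, so the static event no longer
    // references this object and the GC can collect it.
    public void Dispose() => AppEvents.SettingsChanged -= OnSettingsChanged;
}
```

Forget the `Dispose` call (or the unsubscribe inside it) and every `Screen` ever created stays in memory for the life of the process, still reacting to events it should no longer see.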
That's it: a healthy memory state is within your reach. Follow these recommendations and your application will run quickly and consume little memory.