The ideal world we want to live in
Today, I want to tackle a topic I believe the internet needs more resources on! For an industry that seems obsessed with performance, I often see advice that, while functional, is far from optimal, frequently with many more memory allocations than necessary. In hopes of remedying this, or at least providing more resources for everyone, I want to share everything I’ve learned about C# so we can all write the best possible code!
C# is a wonderful language that has served me well over the years. However, it can often be cryptic about what happens under the hood unless you know where to look. Most of what I’ve learned comes from working in the .NET backend industry, where performance optimization is a key focus, or by inspecting the intermediate language code that the compiler generates. This is particularly useful when working outside of an engine, as it allows me to infer optimizations that an engine compiler, such as Unity’s IL2CPP, may or may not implement.
Before we even start discussing optimal code, I want to stress this: Performance should be about whether your code is fast enough, not whether it’s the fastest possible. In other words, you only need to optimize enough to reach your target FPS on your lowest-spec target device. It’s easy to fall into the trap of chasing perfection, but in reality, players will care far more about how fun and well-put-together your game is than whether it’s the most optimized game ever made. Additionally, I believe that readability, ease of use, and maintainability are more important than optimization, but that’s a discussion for another article (maybe).
Garbage collection
Let’s start by talking about garbage collection and how to work with it instead of against it.
Working with C# is especially interesting compared to unmanaged languages because you don’t have direct control over when objects are freed. This requires us to learn which data structures are most appropriate for different scenarios, how to use them correctly, and when to use them. Let’s start with some more obvious topics:
Don’t try to control the garbage collector
This may not be obvious, especially to those coming from unmanaged languages, but in C#, even if you call GC.Collect, there’s no guarantee that everything will be freed immediately, or even in the same frame, the next frame, or the next few seconds! The documentation states that there is no guarantee that all unreferenced memory will be collected, meaning that even when you force a collection, it may not recognize that the objects you expect to be collected are actually ready to be disposed.
Additionally, garbage collection can behave significantly differently between development and release builds. In my experience, you often get wildly different results from editor to release builds, especially on consoles or mobile devices with entirely different chipsets. Because of this, I don't recommend trying to force the garbage collector to work in a certain way; instead, learn to work with it.
Don’t use destructors
This follows the same reasoning as above: since you can't rely on when garbage collection occurs, you can't rely on anything being cleaned up through a destructor, because you can't know when it will be executed, potentially leaving object dependencies in an undefined state. To be completely honest, I don't even know why Microsoft hasn't just completely removed the ability to write destructors at a high level at this point.
So what can we do? Instead, I suggest striving to write code that generates as little garbage as reasonably possible. Let's go over some ways we can achieve this!
Plan your data in advance
Rather than allocating lists, dictionaries, or other heap-based structures on demand, structure your game data so that it’s preloaded at appropriate times, either when the game starts, when loading a save file, or during whatever specific loading screens you have.
For example, in Unity, you can store your data efficiently in Scriptable Objects (and go even further with Addressables). Complemented by some good editor tooling, this gives your game designers a way to work with your data that is easy for them to understand, while everything gets serialised and stored efficiently in the asset itself, minimising memory allocation at runtime. Godot has a similar concept in its Resources, which can be leveraged the same way.
If managing your data this way isn’t an option, try to generate memory in places where losing performance is less noticeable, such as during loading screens or menus.
If we can’t use destructors, what can we use?
The most obvious alternative is to leverage your engine’s lifecycle hooks:
- In Unity, use OnDestroy (or OnDisable)
- In Godot, use _ExitTree (or _Notification, catching NOTIFICATION_PREDELETE)
Alternatively, you can leverage the IDisposable interface, along with using statements. Most IDEs nowadays will warn you when you forget to wrap an IDisposable in a using block, which really helps ensure resources are cleaned up correctly.
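As a minimal sketch, here is what that pattern looks like. TextureHandle is a hypothetical resource wrapper invented for this example, not an engine API:

```csharp
using System;

var handle = new TextureHandle();
using (handle)
{
    // Use the resource inside the block...
} // Dispose() is called automatically here, even if an exception is thrown.
Console.WriteLine(handle.IsDisposed); // True

class TextureHandle : IDisposable
{
    public bool IsDisposed { get; private set; }

    // Dispose is where you release unmanaged or engine resources
    // (unload a texture, close a file handle, and so on).
    public void Dispose() => IsDisposed = true;
}
```

The key point is that cleanup happens at a deterministic moment, the end of the using block, rather than whenever the garbage collector gets around to it.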
IEnumerable is your friend
For the longest time, the IEnumerable interface was often conflated with EF (Entity Framework), since both are commonly used through LINQ statements in different ways, when in reality it's just an interface that allows iterating over a set of data.
This is not to be confused with IEnumerator. It helps me to think of the two like this:
- Something that implements IEnumerable can be iterated over
- Something that implements IEnumerator knows how to iterate something
When you implement your own IEnumerable, you must also implement a separate IEnumerator: you are saying something can be iterated, so you then have to define how it is iterated.
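In practice, you rarely write the IEnumerator by hand: the yield return keyword makes the compiler generate it for you. A small sketch (EvenNumbers is a made-up method for illustration):

```csharp
using System;
using System.Collections.Generic;

foreach (var n in EvenNumbers(3))
    Console.WriteLine(n); // prints 0, 2, 4

// 'yield return' makes the compiler generate the IEnumerator<int>
// state machine behind this IEnumerable<int>; you never implement
// MoveNext() or Current yourself. Values are produced lazily, one
// per iteration, with no intermediate collection allocated.
static IEnumerable<int> EvenNumbers(int count)
{
    for (int i = 0; i < count; i++)
        yield return i * 2;
}
```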
How does this help with memory management?
The first thing to know is that most built-in C# collections implement IEnumerable<T>, which allows you to accept any collection and iterate over it without exposing any of its specific functionality. By coding your methods to accept IEnumerable<T>, rather than specific collection types like lists or dictionaries, you can potentially avoid unnecessary computations. For example, an enumerable has no Count property like a list does: you have to call the .Count() extension method explicitly, which, at least in Rider, warns you of a possible extra enumeration.
That's the trade-off with enumerables: you are given only the ability to iterate over elements, losing access to functionality specific to any one collection. However, this is also their strength; in cases where you only want to enumerate over data and don't need to modify collections, you only stand to gain. Combined with my earlier suggestion of laying out your data so that everything is in the "right" state from the start, it's possible to have your entire codebase working with enumerables!
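A quick sketch of what accepting IEnumerable<T> looks like in practice (SumHealth is a hypothetical method name for this example):

```csharp
using System;
using System.Collections.Generic;

// Accepting IEnumerable<int> means any collection works: lists,
// arrays, iterator methods, LINQ results... and the method can only
// read the data, never modify the caller's collection.
int fromList  = SumHealth(new List<int> { 10, 20, 30 });
int fromArray = SumHealth(new[] { 5, 5 });
Console.WriteLine(fromList);  // 60
Console.WriteLine(fromArray); // 10

static int SumHealth(IEnumerable<int> healthValues)
{
    int sum = 0;
    foreach (var h in healthValues)
        sum += h;
    return sum;
}
```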
As a little side note, enumerables give you access to the To methods (ToList, ToArray, and so on); however, these allocate new memory for a copy of the data in the structure of your choice, so be careful and use them appropriately.
I highly recommend watching this video by Nick Chapsas where he talks about how enumerables can be wielded the wrong way, followed by how to use them the right way.
Span<T> and Memory<T> are slowly taking over
Span<T> and Memory<T> are relatively new types that represent contiguous regions of arbitrary memory. The difference is that Span<T> is a ref struct and can only ever live on the stack, whilst Memory<T> can be stored on the heap.
These types changed everything, because up until this point you had no safe way to work with contiguously allocated data on the stack (much like a C++ standard library vector). Iterating a span can be noticeably faster than iterating a list, and in some scenarios it can shave time off even plain arrays.
Before you go ahead and replace everything with these, I'd say be wary: you will often end up relying on MemoryMarshal, which is effectively entering sort-of-but-not-really unsafe memory territory that may give you unexpected or unintended behaviour.
My suggestion is to reserve Span and Memory for cases where you have really expensive operations that can't be optimised further with built-in types. A very good example is operations on strings that can't be handled with a StringBuilder, such as replacing values or sorting/re-arranging characters.
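Here is a small sketch of both ideas: slicing a string without allocating substrings, and using stackalloc as a scratch buffer. The coordinate format is an arbitrary example:

```csharp
using System;

// Parse "x,y" coordinates without allocating substrings:
// AsSpan and Slice create views over the original string, not copies.
string input = "12,34";
ReadOnlySpan<char> span = input.AsSpan();
int comma = span.IndexOf(',');
int x = int.Parse(span.Slice(0, comma));
int y = int.Parse(span.Slice(comma + 1));
Console.WriteLine($"{x} {y}"); // 12 34

// stackalloc gives you a scratch buffer on the stack: no GC pressure.
Span<char> buffer = stackalloc char[input.Length];
input.AsSpan().CopyTo(buffer);
buffer.Reverse();
string reversed = new string(buffer); // one allocation, for the result only
Console.WriteLine(reversed); // 43,21
```

Note that stackalloc buffers should stay small (the stack is a limited resource), which is why they suit short-lived scratch work like this.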
I’d recommend reading this article if you want to learn further.
And also watch this video for more examples on how to use them.
When all else fails, make it unsafe
If there is truly nothing else you can do, I'd like to remind you that C# is perfectly capable of performing almost as fast as an unmanaged language if you decide to make everything unsafe. This forces you to use value types or pointers, and requires you to manage memory allocation and deallocation yourself.
I really do not recommend doing this unless you are confident in what you're doing, and why you need to do it. Once you enter unsafe territory you are on your own, and the compiler will no longer perform safety checks for you. This is best left for very specific performance-critical blocks of code that you need to squeeze as much performance out of as possible, although with Span and Memory it's becoming less and less necessary over time.
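For completeness, a minimal sketch of what unsafe code looks like. This assumes AllowUnsafeBlocks is enabled in the project file; the array-summing task itself is an arbitrary example:

```csharp
using System;

// 'fixed' pins the managed array so the GC can't move it while we
// hold a raw pointer to it, letting us walk the memory directly.
int[] values = { 1, 2, 3, 4 };
int sum = 0;
unsafe
{
    fixed (int* p = values)
    {
        for (int i = 0; i < values.Length; i++)
            sum += p[i]; // raw pointer indexing, no bounds checks
    }
}
Console.WriteLine(sum); // 10
```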
You can also disable garbage collection
For the sake of exposing as much knowledge as possible, if none of the other suggestions here are even possible and you are left with no other option, you could also disable the garbage collector during tight loops of gameplay that need high FPS and re-enable it later.
You can do so by calling GC.TryStartNoGCRegion, and later calling GC.EndNoGCRegion to re-enable it.
I strongly recommend against this as in all of my years working with C#, I never found a situation that couldn’t be solved with one of the other approaches.
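If you do ever reach for it, the shape is roughly this. The 16 MB budget is an arbitrary example value; TryStartNoGCRegion fails if the runtime can't reserve that much, so the return value must be checked:

```csharp
using System;
using System.Runtime;

// Ask the runtime to suspend GC until we allocate more than the budget.
bool started = GC.TryStartNoGCRegion(16 * 1024 * 1024);
try
{
    // ... tight gameplay loop that must not be interrupted ...
}
finally
{
    // The runtime silently exits the region if the budget is exceeded,
    // so check we are still in it before calling EndNoGCRegion
    // (calling it outside a region throws InvalidOperationException).
    if (started && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
        GC.EndNoGCRegion();
}
```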
On the topic of LINQ
LINQ is often debated when it comes to performance. In general, traditional for loops are more performant, because LINQ allocates memory for at least one enumerator to iterate over a collection. However, in the latest versions of .NET, certain methods such as Any, All and Count have been heavily optimised (in some cases using Span internally), making them competitive with, or even faster than, hand-written loops.
As of the time of writing, my personal recommendation remains: if you like using LINQ (I do, it makes code more compact and easier to read), then use it as much and wherever you want, and consider optimising your LINQ statements whenever they become a performance problem. If the thought of slightly less performant code (we're talking nanoseconds here) with a bit more memory allocation bothers you, then stick to traditional loops. It's your game; do whatever works for you!
That said, LINQ is progressively improving, in some cases to the point of being faster than hand-written code, so it's worth keeping an eye on .NET updates!
Other miscellaneous improvements
Methods that start with To from built-in libraries
In C#, methods that start with To (see ToArray, ToList, ToDictionary, even ToString) allocate new memory and create a new instance of that data structure. Use this as a very quick pointer when searching for places that are generating memory you might want to optimise.
Avoid generating memory in update loops
Whether in Unity's Update or Godot's _Process methods, avoid allocating memory, because doing so means allocating every single frame, which is far more detrimental to your game than a one-off allocation.
These per-frame allocations pile up into constant garbage collection pressure, and as more and more objects containing those scripts are added to the game, they also lead to memory fragmentation, preventing the heap from shrinking.
Instead, prefer one of the other approaches I mentioned here, such as pre-allocating memory in start events and releasing it in destroy events. You could also implement some kind of object pooling strategy to reuse objects efficiently.
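A minimal object pool sketch, engine-agnostic (recent Unity versions also ship UnityEngine.Pool.ObjectPool<T>, which follows the same idea; SimplePool here is a made-up name):

```csharp
using System;
using System.Collections.Generic;

var pool = new SimplePool<List<int>>();

var list = pool.Get();   // allocates only the first time
list.Add(42);
list.Clear();            // reset state before returning it to the pool
pool.Release(list);

var reused = pool.Get(); // same instance comes back, no new allocation
Console.WriteLine(ReferenceEquals(list, reused)); // True

class SimplePool<T> where T : new()
{
    private readonly Stack<T> _items = new Stack<T>();

    // Hand out a pooled instance if one is available, else create one.
    public T Get() => _items.Count > 0 ? _items.Pop() : new T();

    // Return an instance to the pool for later reuse.
    public void Release(T item) => _items.Push(item);
}
```

The caller is responsible for resetting an object's state before (or after) releasing it, which is the usual contract with pools.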
Disable logging when profiling
Logging often involves allocating strings for debugging purposes. This extra memory shows up in your profiler's stats but doesn't correspond to what players will actually experience in release builds (assuming you disable logging in release builds, as you should).
For example, Unity's built-in Debug class allocates memory every time any of its log methods is called, mostly due to using String.Format internally. Godot presumably has the same issue (although I have not confirmed this).
Make sure to disable logging when profiling for the most accurate results.
Reuse collections instead of creating new ones
Avoid repeatedly destroying and creating collection instances. Instead, clear the existing collection using its .Clear() method (available on anything that implements ICollection<T>). This prevents unnecessary memory allocations.
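A quick sketch of the difference (the "frame" loop just stands in for any repeated game logic):

```csharp
using System;
using System.Collections.Generic;

// Allocated once, then reused across frames.
var hits = new List<string>(capacity: 32);

for (int frame = 0; frame < 3; frame++)
{
    hits.Clear();            // keeps the backing array: no new allocation
    hits.Add($"hit-{frame}");
    // ... process this frame's hits ...
}
Console.WriteLine(hits.Count);          // 1
Console.WriteLine(hits.Capacity >= 32); // True: Clear() doesn't shrink it
```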
Set the capacity in your collections where possible
If you know the size of your collections ahead of time, most collection constructors allow you to pass an integer that defines their initial capacity. This not only makes memory more contiguous, it also prevents the collection from resizing as it grows. For example, lists double their capacity each time they hit the current limit, unless a capacity is specified up front.
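The doubling behaviour is easy to observe on List<T> (capacities shown are those of current .NET runtimes):

```csharp
using System;
using System.Collections.Generic;

// Without a capacity hint, the backing array resizes as the list grows
// (0 -> 4 -> 8 -> 16 ...), copying all elements on each resize.
var grown = new List<int>();
for (int i = 0; i < 5; i++) grown.Add(i);
Console.WriteLine(grown.Capacity); // 8

// With the size known up front, one backing array is allocated, once.
var sized = new List<int>(5);
for (int i = 0; i < 5; i++) sized.Add(i);
Console.WriteLine(sized.Capacity); // 5
```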
Avoid concatenating strings
String concatenation in C# allocates a third string on top of the two provided. You can avoid this by creating a StringBuilder with a specific capacity and appending strings to it. You can also reuse the builder by clearing it when it's no longer needed.
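A short sketch of that pattern (the labels are arbitrary example strings):

```csharp
using System;
using System.Text;

// Each '+' on strings allocates a brand-new string; a StringBuilder
// with a capacity hint appends into one internal buffer instead.
var sb = new StringBuilder(capacity: 64);
sb.Append("Score: ").Append(1500).Append(" pts");
string label = sb.ToString(); // one final allocation for the result
Console.WriteLine(label); // Score: 1500 pts

sb.Clear(); // reuse the same builder (and its buffer) for the next string
sb.Append("Lives: ").Append(3);
Console.WriteLine(sb.ToString()); // Lives: 3
```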
Keep lambda and closure allocations to a minimum
Every time you create a lambda that captures local variables, C# allocates a closure object on the heap to hold the captured state. Avoid creating these every frame; create the delegate once and reuse it, or only run such code when the game state has actually changed.
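A small sketch of caching a capturing lambda instead of recreating it (the log list is an arbitrary example of captured state):

```csharp
using System;
using System.Collections.Generic;

var log = new List<string>();

// Capturing 'log' makes the compiler allocate a closure object when
// this lambda is created. Creating it once and reusing the delegate
// means that allocation happens once, not every frame.
Action<string> cached = msg => log.Add(msg);

cached("spawn");
cached("despawn");
Console.WriteLine(log.Count); // 2
```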
Don’t return null for collections
Returning null for collections is considered bad practice since it forces the consumer of your methods to pollute their code with null checks.
In C#, for and foreach loops over an empty collection simply don't execute their body. Therefore, prefer returning an empty instance of the corresponding collection, such as Array.Empty<T>() or Enumerable.Empty<T>().
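For example (FindLoot is a hypothetical method invented for this sketch):

```csharp
using System;

foreach (var item in FindLoot(false))
    Console.WriteLine(item); // body never runs for the empty result

// Array.Empty<T>() returns a cached, shared instance: no allocation,
// and callers never need a null check.
static string[] FindLoot(bool hasLoot)
    => hasLoot ? new[] { "sword" } : Array.Empty<string>();
```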
Leverage non-null references
C# provides a compiler option that enforces explicit handling of nullable references. This option is called Nullable (not to be confused with the Nullable<T> struct used for nullable value types), and enabling it makes the compiler warn you about possible null dereferences, helping you avoid the dreaded NullReferenceException, famously known as the billion-dollar programming mistake.
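A minimal sketch of the feature in action. Here it's enabled with the #nullable directive for a single file; project-wide you'd set <Nullable>enable</Nullable> in the .csproj. The environment variable name is an arbitrary example:

```csharp
#nullable enable
using System;

// The '?' annotation tells the compiler this may legitimately be null.
string? maybeName = Environment.GetEnvironmentVariable("PLAYER_NAME");

// Using maybeName.Length directly would produce a compiler warning;
// handling the null case explicitly does not.
string name = maybeName ?? "Anonymous";
Console.WriteLine(name);
```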
Use the Result pattern for multiple values or null
This is a personal preference, but when a method can have multiple return outcomes, I prefer to use the Result pattern instead of returning null, -1, or some other arbitrary sentinel value that is subjective and requires the full context of the situation to understand.
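A minimal sketch of the idea. Result<T> here is a hand-rolled type invented for this example (it is not a BCL class), and FindIndex is an arbitrary demo method:

```csharp
using System;

var found = FindIndex(new[] { 3, 7, 9 }, 7);
Console.WriteLine(found.Success ? $"index {found.Value}" : found.Error);
// prints "index 1"

// The caller must check Success; there is no magic -1 or null to misread.
static Result<int> FindIndex(int[] items, int target)
{
    for (int i = 0; i < items.Length; i++)
        if (items[i] == target)
            return Result<int>.Ok(i);
    return Result<int>.Fail("target not found");
}

readonly struct Result<T>
{
    public bool Success { get; }
    public T Value { get; }
    public string Error { get; }

    private Result(bool success, T value, string error)
    {
        Success = success;
        Value = value;
        Error = error;
    }

    public static Result<T> Ok(T value) => new Result<T>(true, value, "");
    public static Result<T> Fail(string error) => new Result<T>(false, default!, error);
}
```

Being a readonly struct, this particular sketch also avoids allocating on the heap for each result.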
I use exceptions for critical errors that should never happen or situations where I can’t handle the error myself. For example if using an external library improperly or some built-in error from C# outside of my control.
I also use asserts to check for correct state in code that can’t be structured at compile time in a way that guarantees the state is correct.
Final thoughts
And that's everything about C# I've learned in over 10 years of working in software development! I have a few more articles I want to write; those will be focused more on Unity, as that's what I have the most documentation on at the moment.