Introduction

For years, C# has been dismissed as a viable choice for low-latency applications. I have been working in the financial industry for over 15 years, and I have often heard that any low-latency software, such as HFT (High-Frequency Trading), had to be implemented in a native language such as C++.

Low Latency

I think the primary requirement for low latency is not raw performance but consistency and predictability of execution time. This is exactly why the “unpredictable” behaviour of the GC puts developers off using C# (and other .NET languages) in low-latency scenarios. However, I have recently started to notice positions being advertised where “Low Latency” features together with one of the .NET languages. I have also worked as a Trade Desk Algo Developer at Credit Suisse, developing various trading automation with C# and .NET as my “weapons” of choice, without any problems.
The language itself is not the only criterion for low-latency applications. Low-latency applications rarely (if ever) live in isolation. In the world of finance, you need to connect to various third-party services, such as market data and order execution providers, which is where network connectivity and its performance also play a significant role.
Another aspect of managed languages is the perceived overhead of the various runtime checks. In this article, I will examine to what degree these affect performance.

Disadvantages

Garbage Collector

The standard argument is that the CLR (Common Language Runtime) employs a non-deterministic way of cleaning up unused memory – Garbage Collection (GC). “Non-deterministic” in this context means that the developer has no direct control over when memory is freed. It is up to the runtime (CLR) to decide the most appropriate time to perform a garbage collection, based on the available memory and the application’s behaviour. Although the developer can initiate a collection by calling GC.Collect(), this practice is commonly discouraged.
The CLR GC employs a tracing method of identifying and freeing up memory, whereby the algorithm traverses the graph of objects allocated on the heap and identifies the ones that are no longer used. The CLR has to pause some threads during GC to be able to identify, free up and compact memory. Because the memory is compacted (i.e. defragmented), all the live references must also be updated.
All of that takes time and, more importantly, it is unpredictable when a collection will start, causing a small pause in the execution of the application. This behaviour is deemed unacceptable for low-latency applications.
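That said, the CLR does give the application some influence over when the most disruptive collections happen. Here is a minimal sketch using GCSettings.LatencyMode (the wrapper method is illustrative; SustainedLowLatency requires .NET 4.5 and is a hint to the GC, not a guarantee):

using System;
using System.Runtime;

static class LatencyScope
{
    public static void RunLatencySensitiveWork(Action work)
    {
        GCLatencyMode previous = GCSettings.LatencyMode;
        try
        {
            // Suppresses blocking Gen2 collections where possible;
            // Gen0 and Gen1 collections still occur as needed.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
            work();
        }
        finally
        {
            GCSettings.LatencyMode = previous;
        }
    }
}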

Just In Time Compilation

Just-In-Time compilation (JIT) is an approach to compilation where each method is compiled on demand. This leads to slower start-up times and occasional pauses when a method is accessed for the first time. However, this only affects “cold starts” and is easily solved by running the NGen utility against the assemblies, which generates native code for them ahead of time. There is also another way – manually “touching” the methods at the beginning of the program, forcing the JIT to do its job once and for all rather than on demand. But this is more of a hack.
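For illustration, here is a minimal sketch of the “touching” approach using RuntimeHelpers.PrepareMethod to force JIT compilation up front (generic and abstract methods need extra handling, which is omitted here; the type name in the usage comment is hypothetical):

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class Warmup
{
    // Force the JIT to compile every concrete method declared on a type.
    public static void PreJit(Type type)
    {
        const BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic |
                                   BindingFlags.Instance | BindingFlags.Static |
                                   BindingFlags.DeclaredOnly;

        foreach (MethodInfo method in type.GetMethods(flags))
        {
            // PrepareMethod cannot compile abstract or open generic methods.
            if (method.IsAbstract || method.ContainsGenericParameters)
                continue;

            RuntimeHelpers.PrepareMethod(method.MethodHandle);
        }
    }
}

// Usage at application start-up, e.g.:
// Warmup.PreJit(typeof(OrderBook));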

Advantages

Garbage Collector

Despite its perceived disadvantages, the GC eliminates a lot of potential bugs and enhances the productivity of the developer, who doesn’t have to be concerned with memory deallocation. This significantly reduces the possibility of memory-related bugs and means that software can be delivered much faster.
Because of the work done by the GC, memory allocations are extremely fast. One of the last stages of a collection is memory compaction. Since the memory is compacted (with a few exceptions that leave “holes”, such as pinned objects and the Large Object Heap), allocating a new object is extremely fast: all the CLR has to do is advance the “next object pointer”. In C/C++, by contrast, the memory would have to be scanned to find an unused block for the new object.
Even though the GC causes small pauses, they are usually unnoticeable. The GC team works hard to make sure the GC interrupts program execution as little as possible. They have also added additional flavours of GC, giving developers the option of specifying the behaviour they want depending on the application type. In addition, more recent changes to the .NET Framework and C#, such as return by reference, open up the possibility of using large structs rather than reference types to save on allocations. Previously such a practice would most probably have backfired, as copying large structs is relatively expensive.
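As a sketch of what return by reference (C# 7.0 and later) enables – the MarketTick and TickBuffer types here are hypothetical:

using System;

struct MarketTick              // stands in for a large struct with many fields
{
    public long Timestamp;
    public double Bid, Ask;
}

class TickBuffer
{
    private readonly MarketTick[] _ticks = new MarketTick[1024];

    // Returns a reference into the array – no copy of the struct is made.
    public ref MarketTick At(int index) => ref _ticks[index];
}

class Demo
{
    static void Main()
    {
        var buffer = new TickBuffer();

        ref MarketTick tick = ref buffer.At(0);
        tick.Bid = 100.25;                    // mutates the element in place

        Console.WriteLine(buffer.At(0).Bid);  // prints 100.25
    }
}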

GC Flavours

Source: http://www.lybecker.com/blog/2007/04/03/garbage-collection-flavors/

Concurrent Workstation Garbage Collection

This mode is optimised for interactivity through frequent, short garbage collections. Each collection is split into smaller pieces and interleaved with the managed application’s threads. This improves responsiveness at the cost of CPU cycles and is ideal for interactive desktop applications, where a freezing application is an annoyance to the user and idle CPU time is abundant while waiting for user input. Concurrent workstation mode improves usability through perceived performance.
Note that the concurrent behaviour only applies to Gen2 (full heap) collections, because Gen0 and Gen1 collections are by nature very fast.

Non-concurrent Workstation Garbage Collection

Non-concurrent workstation mode works by suspending managed application threads when a GC is initiated. It provides better throughput, but it can manifest as application hiccups, where everything freezes for the user.

Server Garbage Collection

In server mode, a managed heap and a dedicated garbage collector thread are created for each CPU. This means that each CPU allocates memory in its own heap, which results in lock-free allocation. When a collection is initiated, all the managed application threads are suspended and all the GC threads collect in parallel.
Another thing to note is that the size of the managed heap segments is larger in server mode than in workstation mode. A segment is the unit in which memory is allocated on the managed heap.

It is possible to choose the type of GC for a managed application in its configuration file. Under the <runtime> element, add one of the three settings shown below, and the garbage collector will run in the configured mode, subject to the number of CPUs.
When running a standalone managed application, the GC mode is concurrent workstation by default. Managed application hosts like ASP.NET and SQLCLR run with server GC by default.
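The corresponding app.config entries (as documented for the .NET Framework runtime configuration schema) look like this:

<configuration>
  <runtime>
    <!-- pick one of the three: -->
    <gcConcurrent enabled="true"/>   <!-- concurrent workstation (the default) -->
    <gcConcurrent enabled="false"/>  <!-- non-concurrent workstation -->
    <gcServer enabled="true"/>       <!-- server GC -->
  </runtime>
</configuration>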

JIT

.NET assemblies contain IL code, which is not directly executable and requires one last step – compilation to native instructions by the JIT. Since this happens on the machine where the code will run, rather than on a build server, the JIT compiler has full knowledge of the hardware, primarily the CPU, and can tailor the generated native instructions to the particular CPU architecture and its features, such as extensions, additional registers, etc. Theoretically, this ought to yield faster code. I haven’t seen any benchmarks, but my guess would be that the difference is very marginal.
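One place where this hardware tailoring is visible is SIMD support through System.Numerics, where the JIT maps Vector&lt;T&gt; onto whatever vector width the CPU offers (on the .NET Framework this requires the System.Numerics.Vectors package and RyuJIT). A minimal sketch:

using System;
using System.Numerics;

class SimdDemo
{
    static void Main()
    {
        // Vector<float>.Count depends on the CPU the JIT targets,
        // e.g. 8 lanes with AVX, 4 with SSE.
        Console.WriteLine($"Accelerated: {Vector.IsHardwareAccelerated}, lanes: {Vector<float>.Count}");

        var a = new Vector<float>(1.5f);      // broadcast 1.5 to every lane
        var b = new Vector<float>(2.5f);
        Vector<float> sum = a + b;            // a single SIMD add on capable CPUs

        Console.WriteLine(sum[0]);            // prints 4
    }
}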

Less Control over Optimisations

Whereas in C++ you can embed assembler code, you cannot do so in C#. Personally, I don’t see a reason to use assembler in 2015, unless of course you’re writing something really low-level.
Other features, like inlining, are present only as a hint. The [MethodImpl(MethodImplOptions.AggressiveInlining)] attribute merely tells the compiler not to include the size of the method in its list of criteria when deciding whether or not to inline it. Even then, there is no guarantee. The CLR already does an excellent job of inlining methods and property accessors automatically where necessary. The AggressiveInlining option should be used only in very rare circumstances; forcing large methods to be inlined can lead to less efficient use of the CPU cache.
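For reference, this is what the hint looks like in code (the PriceMath type is a hypothetical example):

using System.Runtime.CompilerServices;

static class PriceMath
{
    // A hint only: the JIT can still refuse to inline, for example
    // for virtual calls or methods containing exception handlers.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static long ToTicks(double price, double tickSize)
    {
        return (long)(price / tickSize);
    }
}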

Automatic Checks

The beauty of managed code is that, together with the memory management, many checks have been introduced to eliminate the most common bugs.
One example is array boundary checks. The pro-native-language camp will say that those checks cost significant performance. This is not exactly the case: while the checks are made – and this is an essential feature of the CLR – they are not always performed at runtime. The CLR uses static analysis to eliminate checks it can prove unnecessary. Even when a check does have to be performed on an array access, the JIT team used a nice trick to reduce the number of instructions. Here is an example (x86):

cmp     EDX, dword ptr [ECX+4]         
jae     SHORT G_M60672_IG03            // unsigned comparison

EDX contains the array index, and [ECX+4] the length of the array. The “jae” instruction performs the comparison index < length. The trick is that the code doesn’t appear to check for a negative index. The JIT developers exploit the properties of unsigned arithmetic and use “jae”, which performs an unsigned comparison: if EDX is negative, reinterpreting it as unsigned yields a value of at least 2^31, causing it to fail the validation.
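On the static-analysis side, the canonical case where the JIT drops the check entirely is a loop bounded by the array’s own Length. A sketch of the two patterns (the exact code generated depends on the JIT version):

static long Sum(int[] values)
{
    long sum = 0;

    // The JIT can prove i is always within [0, values.Length),
    // so the per-element bounds check is eliminated.
    for (int i = 0; i < values.Length; i++)
        sum += values[i];

    return sum;
}

static long SumFirst(int[] values, int count)
{
    long sum = 0;

    // Bounded by an unrelated variable: the JIT cannot prove safety,
    // so each values[i] keeps its cmp/jae bounds check.
    for (int i = 0; i < count; i++)
        sum += values[i];

    return sum;
}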

Does it matter?

There is quite a long list of features available in native languages that are not available in C#. The question is whether they make a significant difference.
I would argue that productivity and the speed at which software can be developed play a more important role, especially these days, when it is cheaper to throw more powerful hardware at a problem than to spend more time writing, optimising and testing code.
Low latency depends not only on the language the application and the framework are written in, but also on other factors, such as the performance of the network. For instance, you might want to disable Nagle’s algorithm, which optimises traffic over TCP at the expense of latency. This can easily be done in C# by setting the Socket.NoDelay option to true:
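A small sketch of both ways to set it:

using System.Net.Sockets;

static class SocketTuning
{
    public static void DisableNagle(TcpClient client)
    {
        // Send small packets immediately instead of coalescing them:
        // lower latency at the cost of more packets on the wire.
        client.NoDelay = true;

        // Equivalent, on the underlying socket:
        client.Client.SetSocketOption(
            SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
    }
}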
Equipped with decent hardware and sufficient memory, GC pauses should be imperceptible, and there are more components in the chain that can contribute to latency than the code itself.

Linq and extension methods are wonderful features, since they allow extending types without having to modify the original ones. Extension methods are special static methods that are called as if they were instance methods. The most prominent example is Linq to Objects, where a plethora of methods is available for the IEnumerable&lt;T&gt; type.

However, since many types implement the IEnumerable&lt;T&gt; interface, this inadvertently creates a problem. For example, an array of int is IEnumerable&lt;int&gt;, so we can call the Count() extension method on it. But this has significant performance drawbacks in comparison to simply reading .Length on the array. The problem becomes obvious if we look at the implementation of the Count() extension method:

public static int Count<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw Error.ArgumentNull(nameof(source));
    }

    if (source is ICollection<TSource> collectionoft)
    {
        return collectionoft.Count;
    }

    if (source is IIListProvider<TSource> listProv)
    {
        return listProv.GetCount(onlyIfCheap: false);
    }

    if (source is ICollection collection)
    {
        return collection.Count;
    }

    int count = 0;
    using (IEnumerator<TSource> e = source.GetEnumerator())
    {
        checked
        {
            while (e.MoveNext())
            {
                 count++;
            }
        }
    }
    
    return count;
}

Because the extension method is applied to any IEnumerable&lt;T&gt;, it has to be quite clever about how it computes the count: if the source is a collection, it can simply return its .Count; otherwise it has to enumerate all the elements. Because of its size alone, this method will not be inlined by the JIT, so the cost of calling .Count() vs reading .Length on an array is going to be very high.

One solution to this problem is to implement the Linq extensions for array types separately, in which case the implementation of .Count() becomes very simple and is likely to be inlined:

public static int Count<TSource>(this TSource[] items)
{
     return items.Length;
}

But there are many other types where the count can be obtained cheaply from the .Count property, such as List&lt;T&gt;, IList&lt;T&gt;, ICollection&lt;T&gt;, ImmutableList&lt;T&gt;, etc. The proper solution would be to implement all the Linq to Objects extension methods for each of these types separately to ensure the best performance. Even in the case of List&lt;T&gt; vs IList&lt;T&gt;, an implementation against the concrete type would be slightly faster, since it avoids the interface dispatch.

In other Linq to Objects extension methods we could improve performance by substituting foreach with for, which is normally slightly faster, as in the sketch below.
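A sketch of the idea for a hypothetical List&lt;int&gt;-specific Sum (indexing avoids going through the enumerator’s MoveNext/Current calls; for arrays the compiler already lowers foreach to a plain loop):

using System.Collections.Generic;

public static int Sum(this List<int> items)
{
    int sum = 0;

    for (int i = 0; i < items.Count; i++)
    {
        checked { sum += items[i]; }
    }

    return sum;
}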

However, this inevitably results in an explosion of the codebase that has to be maintained. One plausible solution, which keeps all the benefits of having an extension method per type, is to write a template and then use T4 files to generate a specific implementation for each type.

Implementation challenges aside, integrating such a solution into an existing application is trivial. We just need to add the assembly and, provided it uses the same namespace as the original Linq to Objects extensions, recompiling the application is enough for the compiler to pick up the “better”-suited extension methods, resulting in immediate performance benefits.
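This works because, when both extension methods are in scope, C# overload resolution prefers the more specific parameter type, so no call sites need to change. A sketch (the FasterLinq namespace is assumed here):

using System;
using System.Linq;   // Enumerable.Count<TSource>(this IEnumerable<TSource>)
using FasterLinq;    // Count<TSource>(this TSource[]) from above

class Demo
{
    static void Main()
    {
        int[] prices = { 1, 2, 3 };

        // Binds to the TSource[] overload – more specific than
        // IEnumerable<TSource> – which simply returns Length.
        Console.WriteLine(prices.Count());
    }
}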

I have started such a project on GitHub:

https://github.com/ebalynn/FasterLinq


stripeyfish

As per the developer setup guide for the VMware vSphere Web Services SDK 5.0, you have to build your own DLLs from the WSDL files provided.
There are a few changes from SDK 4.0, most notably the lack of the genvimstubs.cmd and OptimiseWsStubs.exe files. However, it is still fairly straightforward… apart from the initial wsdl command! Read on…

Depending on where you extract the SDK zip file to, you may end up with spaces in your path. After lots of head scratching, it turns out wsdl.exe does not like this! If you have spaces in your path, then running the command “wsdl.exe /n:VimyyApi vim.wsdl vimService.wsdl” produces an error like the one below, despite trying all combinations of environment variables and running from a Visual Studio command prompt in the actual vim25 directory:
Error: Could not find a part of the path ‘c:\program%20files\microsoft%20visual%20studio%209.0\sdk\vsphere5\vsphere-ws\wsdl\vim25\query-messagetypes.xsd’.

So rather than copy to e.g. Visual Studio/SDK path…


Again, I’m late to the party but here you go: https://github.com/dotnet/coreclr

__DynamicallyInvokableAttribute

It is undocumented, but it looks like one of the optimizations in .NET 4.5. It appears to be used to prime the reflection type info cache, making subsequent reflection code on common framework types run faster. There’s a comment about it in the Reference Source, in System.Reflection.Assembly.cs, under the RuntimeAssembly.Flags property:

// Each blessed API will be annotated with a "__DynamicallyInvokableAttribute".
// This "__DynamicallyInvokableAttribute" is a type defined in its own assembly.
// So the ctor is always a MethodDef and the type a TypeDef.
// We cache this ctor MethodDef token for faster custom attribute lookup.
// If this attribute type doesn't exist in the assembly, it means the assembly
// doesn't contain any blessed APIs.
Type invocableAttribute = GetType("__DynamicallyInvokableAttribute", false); 
if (invocableAttribute != null) 
{ 
    Contract.Assert(((MetadataToken)invocableAttribute.MetadataToken).IsTypeDef); 
    ConstructorInfo ctor = invocableAttribute.GetConstructor(Type.EmptyTypes); 
    Contract.Assert(ctor != null); 
    int token = ctor.MetadataToken;
    Contract.Assert(((MetadataToken)token).IsMethodDef); 
    flags |= (ASSEMBLY_FLAGS)token & ASSEMBLY_FLAGS.ASSEMBLY_FLAGS_TOKEN_MASK; 
}

So here is where I landed…

For more details check out http://www.virtualclarity.com


Tell me what’s wrong with this simple piece of code:

public class MyService : IMyService, IDisposable 
{
    private readonly IOtherService _otherService;
    public MyService(IOtherService otherService)
    {
        _otherService = otherService;
    }

    public void Dispose()
    {
       _otherService.Dispose();
    }
}

The answer is very simple. The principle is “I created you, and I will be your end”: MyService didn’t create IOtherService, so it has no right to call Dispose() on it.
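A sketch of the corrected ownership, where whoever creates the dependency is the one who disposes it (OtherService is a hypothetical concrete implementation):

public class MyService : IMyService
{
    private readonly IOtherService _otherService;

    public MyService(IOtherService otherService)
    {
        _otherService = otherService;
    }

    // No Dispose here: MyService only borrows IOtherService.
}

public class CompositionRoot : IDisposable
{
    private readonly OtherService _otherService = new OtherService();
    private readonly MyService _myService;

    public CompositionRoot()
    {
        _myService = new MyService(_otherService);
    }

    public void Dispose()
    {
        // The creator of OtherService is the one who ends it.
        _otherService.Dispose();
    }
}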

Yet, at Mizuho, I saw many developers who lacked this simple yet critical design concept!

</rant>


Unit Testing – Setup pattern (Moq)

Frank Code

In my previous post I mentioned that large unit test setups can be difficult to maintain and understand. This problem can be reduced by following a test pattern. I want to share a pattern I use, which I think works really well. In this post I will be using C#, the Moq library and the Visual Studio test tools.

To start here are some things I believe make a good suite of unit tests:

  • Easy to identify data dependencies of tests.
  • Mocking is not repeated every test.
  • Tests are clear and easy to understand / maintain.
  • New tests can be added with relative ease.

In order to explain this testing pattern I have forked my buddy’s TodoMVC app and introduced a service layer, repository pattern and test project. You can see my source code, with an example test class, on GitHub.
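A minimal sketch of this kind of setup pattern (the ITodoRepository, TodoService and TodoItem names are hypothetical stand-ins for the types in the actual repository):

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class TodoServiceTests
{
    private Mock<ITodoRepository> _repository;
    private TodoService _service;

    [TestInitialize]
    public void Setup()
    {
        // Mocking is done once here rather than repeated in every test.
        _repository = new Mock<ITodoRepository>();
        _service = new TodoService(_repository.Object);
    }

    [TestMethod]
    public void Get_ReturnsItemsFromRepository()
    {
        // The test states only the data it depends on.
        _repository.Setup(r => r.GetAll())
                   .Returns(new List<TodoItem> { new TodoItem { Title = "Write tests" } });

        var items = _service.Get();

        Assert.AreEqual(1, items.Count);
    }
}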

The TodoModule (service layer) has a number of public methods; Get…
