Sorin Dolha's Blog

I’ve said it before: in my opinion, WPF is the one contemporary programming technology (disclaimer: among those I’ve considered myself) that really requires a book to learn. Otherwise, trying to dig in using just hands-on testing – as is indeed possible in many other cases – you might think you know enough before you actually do, and you’ll get frustrated every day afterwards because things won’t work the way you’d think they should. This StackOverflow question – which triggered this post – is only one example. I have personally been there too (and after the WPF experience, I’ve decided that I’ll always learn complex new technologies from books, although it may be unnecessary in some cases, as I’d seen before WPF).

But don’t get scared. Mastering WPF by starting to learn it from the core instead of from the surface will provide many, many benefits that will overcome any…

View original post 146 more words


Definitely something to play with when my brain has more oxygen (suffering from anaemia right now)

Testing 1 2 3!

AForge.NET is a C# framework designed for developers and researchers in the fields of Computer Vision and Artificial Intelligence – image processing, neural networks, genetic algorithms, machine learning, robotics, etc.

  • AForge.Imaging – library with image processing routines and filters;
  • AForge.Vision – computer vision library;
  • AForge.Neuro – neural networks computation library;
  • AForge.Genetic – evolution programming library;
  • AForge.Fuzzy – fuzzy computations library;
  • AForge.MachineLearning – machine learning library;
  • AForge.Robotics – library providing support of some robotics kits;
  • AForge.Video – set of libraries for video processing;
  • etc.

These are the main libraries in the framework. If you want more information and help, visit:

View original post

Battling Anaemia 

The week before last, on the Thursday, I suddenly felt something in my stomach; it wasn’t painful, it felt more like a cramp. That happened around 14:30. Then around 17:40 I decided to go home, stood up, and could feel that my heart was racing. I initially thought nothing of it and decided to go home anyway. However, as soon as I tried to cross the road I felt that something was seriously wrong with me and I had to sit down. I then slowly proceeded to the closest tube station, but even that proved extremely difficult.

I finally gave in and got a cab that took me to the train station. Thinking about the stomach cramp I’d had earlier, I assumed I’d eaten a bad sandwich and that one day off sick should do it.

Meanwhile I could barely walk, and after getting to the second floor of my house my blood pressure would skyrocket to 160/110 and my pulse would be around 130 bpm. Everyone around me was commenting on how pale I looked, suggesting I go to A&E. Initially I refused, but later I realised that I wasn’t getting any better and that A&E wouldn’t be such a bad idea.

So my dad drove me to the local hospital (my mum works there as well) and we went to A&E. Expecting to wait for ages, I was actually admitted relatively quickly – by then I really looked very sick. A nurse took a few vials of my blood and sent me back to the waiting room.

Half an hour later they asked for me again, but this time I was greeted by a guy with a wheelchair! This is where I got really, really scared. Then the nurse told me that the lab had called to say that my haemoglobin level was 49 g/L (dangerously low, versus a normal of about 160 g/L), that I required emergency blood transfusions, and that I would remain in the hospital. I was beyond scared by this point.

They said that normally people collapse when their haemoglobin levels are that low, and the fact that I was relatively “alive” meant that my body had got used to it. After even the first unit of blood (308 ml) I felt almost high and full of energy. The consultant said that I required at least 4 units to get my levels above critical (1.2 litres in total). Each bag takes about 2-3 hours, and my transfusions lasted well into the night. By then I had been moved to the observation ward.

The next morning I was taken for an ultrasound, and then for an upper endoscopy (which was “fun”, as I didn’t feel any sedation), as they were thinking that I must have a bleed somewhere resulting in slow blood loss. Nothing was found.

The next day at the hospital they took about 20 vials of my blood for various tests. The consultant haematologist couldn’t find anything either, apart from my still very low haemoglobin (81 g/L after all the transfusions).

In the end I was discharged with a diagnosis of “iron deficiency anaemia”, which means that I either don’t absorb enough iron (coeliac disease would do that, for example) or have a bleed somewhere in my lower GI tract. I was given a bunch of ferrous sulphate pills to help maintain my iron levels.

Upon discharge from the hospital I was still a bit high from the blood transfusions, but that quickly subsided. Then all the “pleasant” symptoms of anaemia appeared – extreme chills (I would wear my hoodie even if it was +24 in a room), shortness of breath, headaches, low-grade fever, anxiety, inability and unwillingness to do anything, insomnia, night sweats, inability to regulate body temperature. Every day I would measure my blood pressure to gauge how hard my heart was working. If I had been sitting for 2 minutes or more my blood pressure was a perfect 117/77; however, if I did something even slightly physically demanding it would again skyrocket to about 140/100. After 5 days of taking the medication I wasn’t feeling any better. Every day was like Groundhog Day to me – I would wake up at 6 after only sleeping 4-5 hours and having nightmares. I would then sit quietly on the sofa wearing everything I had to get warm, even though the room temperature was never below 23C. I would then spend the whole day watching telly and sleeping, which in turn made my insomnia even worse.

However, today is day #10 and it’s the second consecutive day that I’ve suddenly started to feel better – my temperature is down, I can walk without stopping, my headaches are gone and so are the chills.

I still have several outpatient procedures to go through to establish the exact cause of such an acute onset of anaemia.

All in all, anaemia sucks big time. Because of the lack of oxygen in your blood it hurts to think – I couldn’t sit in front of my PC at all – and you hate yourself for not being able to do anything. One of the most unbearable symptoms is the constant feeling of cold. It basically sucks all the will to live out of you!

In hindsight, my symptoms began long before August this year; I now think that I might have had some degree of anaemia for at least 6-7 years. I’m now looking forward to recovery, and if I have had it for a while, I wonder how I’ll feel with normal haemoglobin levels!

Day #11 and I certainly don’t feel any worse. About to drive to the clinic all by myself!

This is getting quite interesting…

I Love C#

This post is written with respect to the features C# 8.0 is proposed to have. The basis for this write-up is the following video, where Mads Torgersen explains to Seth Juarez the newest proposed C# 8.0 features. Please bear in mind that none of these features are released or finalised yet.

You can watch the video here.

Nullable Reference Types

I felt like this is somewhat of a misnomer, since reference types are nullable by default – they can be null, and a lot of developers would expect them to be. Although, to be technically correct, nullability (is that a word? who knows!) is not a mandatory need for all reference types. We are just used to seeing it that way, and thus the naming is correct.

This specific feature points to the fact that someone is actually capable of declaring reference type instances which are not supposed to be null. Even in…

View original post 1,286 more words
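Based on the description in the excerpt, the proposed feature can be sketched roughly like this (a sketch only – the syntax was still subject to change at the time, and the class and members here are made up for illustration):

```csharp
#nullable enable
// With nullable reference types enabled, 'string' is non-nullable by default
// and 'string?' explicitly opts in to null.
class Person
{
    public string Name = "";   // not supposed to be null
    public string? Nickname;   // may be null; the compiler warns on unguarded use

    public int NameLength() => Name.Length; // fine: Name is declared non-nullable

    public int NicknameLength() =>
        Nickname != null ? Nickname.Length : 0; // a null check avoids the warning
}
```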

As part of the series of posts announced at this initial blog post (.NET Application Architecture Guidance), which explores each of the architecture areas currently covered by our team, this blog post focuses on “Microservices and Docker containers: Architecture, Patterns and Development guidance”.

Just as a reminder, the four introductory blog posts of this series will be the following:

The microservices architecture is emerging as an important approach for distributed mission-critical applications. In a microservice-based architecture, the application is built on a collection of services that can be developed, tested, deployed, and versioned independently. In addition, enterprises are increasingly realizing cost savings, solving deployment problems, and improving DevOps and production operations by using containers (Docker engine based as de facto standard).

Microsoft has been releasing container innovations for Windows and Linux by creating products like Azure Container Service and Azure Service Fabric, and by partnering with industry leaders like Docker, Mesosphere, and Kubernetes. These products deliver container solutions that help companies build and deploy applications at cloud speed and scale, whatever their choice of platform or tools…

We can use the Range method to build two integer sequences. You can then join the two sequences using the Concat method, and you’ll see that the concatenated sequence holds all the numbers from the first and the second sequence:

via Concatenate two IEnumerable sequences in C# .NET — Exercises in .NET with Andras Nemes
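The steps described above can be sketched as follows (the sequence values here are my own illustration, not necessarily the ones from the linked post):

```csharp
using System;
using System.Linq;

class ConcatDemo
{
    static void Main()
    {
        // Enumerable.Range(start, count) builds an integer sequence.
        var first = Enumerable.Range(1, 3);    // 1, 2, 3
        var second = Enumerable.Range(10, 3);  // 10, 11, 12

        // Concat lazily yields all of the first sequence, then all of the second.
        var joined = first.Concat(second);

        Console.WriteLine(string.Join(", ", joined)); // 1, 2, 3, 10, 11, 12
    }
}
```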

In parallel programs it is very important to regard cache size and hit rates on a single CPU, but it’s even more important to consider how the caches of multiple processors/cores interact. Let’s consider a single representative example, which demonstrates an important cache optimisation and emphasises the value of good tools when it comes to performance optimisation in general.

Let’s first examine the sequential method; it performs the rudimentary task of summing all the elements in a two-dimensional array of integers and returns the result:

public static int MatrixSumSequential(int[,] matrix)
{
    int sum = 0;
    int rows = matrix.GetLength(0); // GetUpperBound(0) returns the last index and would miss a row
    int cols = matrix.GetLength(1);
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            sum += matrix[i, j];
    return sum;
}

We could have used the TPL, but let’s ignore the huge arsenal of tools the TPL provides in our simple example. The following attempt at parallelisation may appear sufficiently reasonable to harvest the fruits of multi-core execution, and even implements a crude aggregation to avoid synchronisation on the shared sum variable:

public static int MatrixSumParallel(int[,] matrix)
{
    int rows = matrix.GetLength(0);
    int cols = matrix.GetLength(1);
    const int THREADS = 4;
    int chunk = rows / THREADS; // assumes rows divides evenly by THREADS
    int[] localSums = new int[THREADS];
    Thread[] threads = new Thread[THREADS];
    for (int i = 0; i < THREADS; i++)
    {
        int start = chunk * i;
        int end = chunk * (i + 1);
        int threadNum = i;
        threads[i] = new Thread(() =>
        {
            for (int row = start; row < end; row++)
                for (int col = 0; col < cols; col++)
                    localSums[threadNum] += matrix[row, col];
        });
        threads[i].Start();
    }
    foreach (var thread in threads)
        thread.Join();
    return localSums.Sum();
}


Executing each of the two methods several times on an i7 machine with 6 cores produced the following results for a 2,000 x 2,000 matrix of integers:

  • 325ms average for sequential method
  • 935ms average for the parallel method – three times as slow as the sequential method!

The obvious question is: why?
This is not an example of too-fine-grained parallelism, because the number of threads is only 4. However, if you accept the premise that the problem is somehow cache-related, it would make sense to measure the number of cache misses introduced by the two methods above.

When sampling the execution of each method with a 2,000 x 2,000 matrix, the Visual Studio profiler reported 963 exclusive samples in the parallel version and only 659 exclusive samples in the sequential version, with the vast majority of samples on the inner-loop line that reads from the matrix.

Why would a line of code writing to localSums introduce so many cache misses in comparison to writing to the sum local variable? The answer is that the writes to the shared array invalidate cache lines on other processors/cores, causing every += operation to be a cache miss.
When a processor writes to a memory location that is in the cache of another processor/core, the hardware causes a cache invalidation, which marks that cache line as invalid. Accessing the line then results in a cache miss.
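A common way to avoid this (my own sketch, not part of the original example) is to have each thread accumulate into a local variable and touch the shared array only once, when it finishes. The thread body from MatrixSumParallel would become:

```csharp
threads[i] = new Thread(() =>
{
    // The local accumulator stays in a register or this core's own cache line,
    // so the hot += no longer invalidates the other cores' caches.
    int local = 0;
    for (int row = start; row < end; row++)
        for (int col = 0; col < cols; col++)
            local += matrix[row, col];
    localSums[threadNum] = local; // single write to the shared array
});
```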

The moral of the story: do not blindly introduce parallelisation in the hope that it will result in a performance increase. Always test both versions – you might be surprised at the results!

Fraction Implementation in C#

I’m not really sure why Microsoft have never bothered with implementing a Fraction primitive in .NET. I’m sure there are plenty of uses, as fractions allow you to preserve the maximum possible precision. I have therefore decided to create my own implementation (albeit a somewhat primitive one at this stage).

My implementation automatically simplifies the fraction, so if you were to create new Fraction(6, 3) it would be simplified to 2. The Fraction struct implements all the arithmetic operators on itself and on Int64, float, double and decimal.

Internally the Fraction is represented as two Int64 values, Numerator and Denominator, and is always simplified upon initialisation. I initially intended to make simplification optional; however, profiling showed that its cost is not that great and the benefits outweigh the performance drawbacks.

Fraction has explicit conversions to Int64 (although that is bound to lose precision), float, double and decimal. It supports comparison with Int64, float, double and decimal, and even supports the ++ and -- operators.

So far I have provided a more or less complete implementation with plenty of unit tests. Now the hard work of optimising the performance begins!

Design of Fractions

Fraction is implemented as a struct (a pretty obvious choice). It takes a numerator as the first argument and a denominator as the second; it then tries to simplify the fraction using the Euclidean algorithm, so if you were to specify 333/111 it would become 3.
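The simplification step can be sketched like this (a minimal sketch; the method name is my assumption, not necessarily what the actual code uses):

```csharp
// Euclidean algorithm: the GCD of the numerator and denominator is the
// factor both can be divided by to reach the simplest form.
static long Gcd(long a, long b)
{
    a = Math.Abs(a);
    b = Math.Abs(b);
    while (b != 0)
    {
        long t = a % b;
        a = b;
        b = t;
    }
    return a;
}
// e.g. 333/111: Gcd(333, 111) == 111, so the fraction reduces to 3/1.
```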

The implementation supports all arithmetic operations with long, float, double and decimal, and can also be converted to those types by either calling the corresponding methods or using an explicit cast.

You can also create a Fraction from either a long, float, double or decimal. Conversion from a long is quite trivial; however, conversion from a float, double or decimal goes through a while loop that multiplies the floating-point number until it has no decimal places. This method is relatively slow and is therefore not recommended.
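A sketch of the kind of conversion loop described, shown for decimal (the helper name is hypothetical, and the real code would also have to worry about overflow and float/double rounding):

```csharp
// Scale the value by 10 until no fractional part remains, tracking the
// denominator; the result would then be simplified like any other Fraction.
static (long Numerator, long Denominator) FromDecimal(decimal value)
{
    long denominator = 1;
    while (value != decimal.Truncate(value))
    {
        value *= 10;
        denominator *= 10;
    }
    return ((long)value, denominator);
}
// FromDecimal(0.25m) yields (25, 100), which simplifies to 1/4.
```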

Apart from that, the Fraction behaves like a first-class citizen: you can compare a Fraction to any other number, divide, multiply, add, subtract, increment, decrement, etc.

For example:

var oneThird = new Fraction(1, 3);
var reciprocal = oneThird.Reciprocal();

Console.WriteLine(oneThird * reciprocal); // "1"
Console.WriteLine(oneThird * oneThird);   // "1/9"
Console.WriteLine(++oneThird);            // "4/3" - just like with an integer, ++ adds 1


Please feel free to contribute to the codebase if you feel like it.

If you are about to begin implementing your own version of a thread-safe generic weak dictionary – STOP!
As of .NET 4.0 there is already a class that implements that functionality: it’s called ConditionalWeakTable and it lives in the System.Runtime.CompilerServices namespace.

There are, however, several limitations: both the key and the value have to be reference types (TKey : class and TValue : class).

Here is the comment from the source file:

** Description: Compiler support for runtime-generated "object fields."
** Lets DLR and other language compilers expose the ability to
** attach arbitrary "properties" to instanced managed objects at runtime.
** We expose this support as a dictionary whose keys are the
** instanced objects and the values are the "properties."
** Unlike a regular dictionary, ConditionalWeakTables will not
** keep keys alive.
** Lifetimes of keys and values:
** Inserting a key and value into the dictonary will not
** prevent the key from dying, even if the key is strongly reachable
** from the value.
** Prior to ConditionalWeakTable, the CLR did not expose
** the functionality needed to implement this guarantee.
** Once the key dies, the dictionary automatically removes
** the key/value entry.
** Relationship between ConditionalWeakTable and Dictionary:
** ConditionalWeakTable mirrors the form and functionality
** of the IDictionary interface for the sake of api consistency.
** Unlike Dictionary, ConditionalWeakTable is fully thread-safe
** and requires no additional locking to be done by callers.
** ConditionalWeakTable defines equality as Object.ReferenceEquals().
** ConditionalWeakTable does not invoke GetHashCode() overrides.
** It is not intended to be a general purpose collection
** and it does not formally implement IDictionary or
** expose the full public surface area.
** Thread safety guarantees:
** ConditionalWeakTable is fully thread-safe and requires no
** additional locking to be done by callers.
** OOM guarantees:
** Will not corrupt unmanaged handle table on OOM. No guarantees
** about managed weak table consistency. Native handles reclamation
** may be delayed until appdomain shutdown.

Just by looking at the comments alone we can see that what we have is equivalent to a generic, thread-safe weak dictionary! Further internet research confirms the findings.
There are several critical limitations though:
  • TKey and TValue both have to be reference types
  • Equality is defined using ReferenceEquals()
  • GetHashCode() overrides are never called
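A minimal usage sketch (the Metadata class and the note text here are just for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

class Metadata { public string Note = ""; }

class Demo
{
    // Attaches a Metadata "property" to arbitrary objects without keeping them alive.
    static readonly ConditionalWeakTable<object, Metadata> Table =
        new ConditionalWeakTable<object, Metadata>();

    static void Main()
    {
        var key = new object();
        Table.Add(key, new Metadata { Note = "attached" });

        Metadata found;
        if (Table.TryGetValue(key, out found))
            Console.WriteLine(found.Note); // "attached"

        // Once 'key' becomes unreachable, the table drops the entry automatically.
    }
}
```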

The JIT compiler uses its own logic to determine which methods to inline, but sometimes we know better than it does. With AggressiveInlining we give the compiler a hint: we tell it that the method should be inlined. Actually, the only hint we give the compiler is to ignore the size restriction on the method or property we want to inline. Using this attribute does not guarantee that the method will be inlined – there are a thousand and one reasons why it might not be (the method being virtual, for one).


This example benchmarks a method with no attribute against a method with AggressiveInlining. Each method body contains several lines of useless code; this makes the method large in bytes, so the JIT compiler may decide not to inline it.

And: we apply the MethodImplOptions.AggressiveInlining option (it is an enum) to Method2.

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class Program
{
    const int _max = 10000000;

    static void Main()
    {
        // ... Compile the methods before timing them.
        Method1();
        Method2();

        int sum = 0;
        var s1 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
            sum += Method1();
        s1.Stop();

        var s2 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
            sum += Method2();
        s2.Stop();

        Console.WriteLine(((double)(s1.Elapsed.TotalMilliseconds * 1000000) / _max).ToString("0.00 ns"));
        Console.WriteLine(((double)(s2.Elapsed.TotalMilliseconds * 1000000) / _max).ToString("0.00 ns"));
    }

    static int Method1()
    {
        // ... No inlining suggestion.
        return "one".Length + "two".Length + "three".Length +
            "four".Length + "five".Length + "six".Length +
            "seven".Length + "eight".Length + "nine".Length +
            "ten".Length;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int Method2()
    {
        // ... Aggressive inlining.
        return "one".Length + "two".Length + "three".Length +
            "four".Length + "five".Length + "six".Length +
            "seven".Length + "eight".Length + "nine".Length +
            "ten".Length;
    }
}


7.34 ns No options
0.32 ns MethodImplOptions.AggressiveInlining

We see that with no options, the method calls required about seven nanoseconds each. But with inlining specified (via AggressiveInlining), the calls required less than one nanosecond each.

Tip 1: Consider for a moment all the things you could do with those seven nanoseconds.

Tip 2: If you are scheduling your life based on nanoseconds, please consider reducing your coffee intake.