.NET Core performance revolution in Rider 2020.1


This blog post was originally posted on the JetBrains .NET blog.

Many Rider users may know that the IDE has two main processes: the frontend (a Java application based on the IntelliJ platform) and the backend (a .NET application based on ReSharper). Since the first release of Rider, we’ve used Mono as the backend runtime on Linux and macOS. A few years ago, we decided to migrate to .NET Core. After resolving hundreds of technical challenges, we are finally ready to present the .NET Core edition of Rider!

In this blog post, we want to share the results of some benchmarks that compare the Mono-powered and the .NET Core-powered editions of Rider. You may find this interesting if you are also thinking about migrating to .NET Core, or if you just want a high-level overview of the improvements to Rider in terms of performance and footprint, following the migration. (Spoiler: they’re huge!)


Read more


Introducing perfolizer


Over the last 7 years, I’ve been maintaining BenchmarkDotNet, a library that helps you transform methods into benchmarks, track their performance, and share reproducible measurement experiments. Today, BenchmarkDotNet is the most popular .NET benchmarking library; it has been adopted by 3500+ projects, including .NET Core.

While it has tons of features for obtaining reliable and accurate measurements, it offers only a limited set of features for performance analysis, and that’s a problem for many developers. Lately, I’ve been getting a lot of emails in which people ask me, “OK, I benchmarked my application and got tons of numbers. What should I do next?” It’s an excellent question, and answering it requires special tools. So, I decided to start another project that focuses specifically on performance analysis.

Meet perfolizer — a toolkit for performance analysis! The source code is available on GitHub under the MIT license.



Read more


Distribution comparison via the shift and ratio functions


When we compare two distributions, it’s not always enough to detect a statistically significant difference between them. In many cases, we also want to evaluate the magnitude of this difference. Let’s look at the following image:

On the left side, we can see a timeline plot with 2000 points (in the middle of this plot, the distribution changes significantly). On the right side, you can see density plots for the left and the right halves of the timeline plot (before and after the change). It’s a pretty simple case: the difference between the distributions can be expressed via the difference between their mean values.

Now let’s look at a more tricky case:

Here we have a bimodal distribution; after the change, the left mode “moved right.” Now it’s much harder to evaluate the difference between the distributions because the mean and median values have barely changed: the right mode has a bigger impact on these metrics than the left one.

And here is a much more tricky case:

Here we also have a bimodal distribution; after the change, both modes moved: the left mode “moved right” and the right mode “moved left.” How should we describe the difference between these distributions now?
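The shift and ratio functions from the title compare two distributions quantile by quantile instead of relying on a single summary statistic. Below is a minimal C# sketch of the idea (the names and the simple quantile estimator are mine, not taken from the post): for several probabilities p it reports Q_B(p) − Q_A(p) (shift) and Q_B(p) / Q_A(p) (ratio).

using System;
using System.Linq;

public static class ShiftAndRatio
{
    // Basic quantile estimate via linear interpolation between order statistics
    // (chosen for brevity; the post may use a more robust estimator).
    private static double Quantile(double[] sorted, double p)
    {
        double position = p * (sorted.Length - 1);
        int lower = (int)Math.Floor(position);
        int upper = (int)Math.Ceiling(position);
        double weight = position - lower;
        return sorted[lower] * (1 - weight) + sorted[upper] * weight;
    }

    public static void Compare(double[] before, double[] after)
    {
        double[] a = before.OrderBy(x => x).ToArray();
        double[] b = after.OrderBy(x => x).ToArray();
        foreach (double p in new[] { 0.1, 0.25, 0.5, 0.75, 0.9 })
        {
            // Shift: how far quantile p moved; Ratio: by what factor it changed
            Console.WriteLine($"p={p:F2}  shift={Quantile(b, p) - Quantile(a, p):F3}  ratio={Quantile(b, p) / Quantile(a, p):F3}");
        }
    }
}

For the bimodal examples above, such a quantile-by-quantile view shows which parts of the distribution moved and by how much, even when the mean and median stay almost the same.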


Read more


Normality is a myth


In many statistical papers, you can find the following phrase: “assuming that we have a normal distribution.” You have probably seen plots of the normal distribution density function in statistics textbooks; it looks like this:

The normal distribution is a pretty user-friendly mental model when we are trying to interpret statistical metrics like the mean and standard deviation. However, it may also be an insidious and misleading model when your distribution is not normal. There is a great sentence in the paper “Testing for normality” by R. C. Geary, 1947 (the quote was found here):

Normality is a myth; there never was, and never will be, a normal distribution.

I 100% agree with this statement. At least if you are working with performance distributions (which are based on multiple iterations of benchmarks that measure the performance metrics of your applications), you should forget about normality. This is what a typical performance distribution looks like (I built the picture below based on a real benchmark that measures assembly load times when we open the Orchard solution in Rider on Linux):
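To make the point concrete, here is a small self-contained C# sketch (my own illustration, not from the post): it builds a bimodal “performance” sample and shows that the mean ± standard deviation summary points at a region where almost no actual measurements fall.

using System;
using System.Linq;

public static class BimodalDemo
{
    public static void Main()
    {
        var random = new Random(42);
        // Bimodal sample: a fast path around 10 ms and a slow path around 100 ms
        double[] sample = Enumerable.Range(0, 1000)
            .Select(_ => random.NextDouble() < 0.5
                ? 10 + random.NextDouble()      // mode 1: ~10-11 ms
                : 100 + random.NextDouble())    // mode 2: ~100-101 ms
            .ToArray();

        double mean = sample.Average();
        double sd = Math.Sqrt(sample.Sum(x => (x - mean) * (x - mean)) / (sample.Length - 1));

        // Under a normal model most points would be close to the mean;
        // here the interval [mean - 0.5*sd, mean + 0.5*sd] is practically empty.
        int near = sample.Count(x => Math.Abs(x - mean) < 0.5 * sd);
        Console.WriteLine($"mean={mean:F1} ms, sd={sd:F1} ms, points within 0.5*sd of the mean: {near} of {sample.Length}");
    }
}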


Read more


Implementation of an efficient algorithm for changepoint detection: ED-PELT


Changepoint detection is an important task that has a lot of applications. For example, I use it to detect changes in the Rider performance test suite. It’s very important to detect not only performance degradations, but any kind of performance change (e.g., the variance may increase, or a unimodal distribution may split into several modes). You can see examples of such changes in the following picture (the color changes when a changepoint is detected):

Unfortunately, it’s pretty hard to write a reliable and fast algorithm for changepoint detection. Recently, I found a cool paper (Haynes, K., Fearnhead, P. & Eckley, I.A. “A computationally efficient nonparametric approach for changepoint detection,” Stat Comput (2017) 27: 1293) that describes the ED-PELT algorithm. It has O(N*log(N)) complexity and pretty good detection accuracy. The reference implementation can be used via the changepoint.np R package. However, I can’t use R on our build server, so I decided to write my own C# implementation.
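ED-PELT itself is too long for a short snippet, but the following self-contained C# sketch (my own simplification, not the algorithm from the paper or from the post) shows the basic shape of the problem: it finds the single split point that minimizes the sum of squared deviations within the two resulting segments. ED-PELT generalizes this to multiple changepoints, a nonparametric cost function, and O(N*log(N)) running time.

using System;
using System.Linq;

public static class NaiveChangePoint
{
    // Finds the index where the series most likely changes its mean
    // (exhaustive O(N^2) search; ED-PELT solves the general problem much faster).
    public static int FindSingleChangePoint(double[] data)
    {
        int bestIndex = -1;
        double bestCost = double.MaxValue;
        for (int split = 1; split < data.Length - 1; split++)
        {
            double cost = SegmentCost(data, 0, split) + SegmentCost(data, split, data.Length);
            if (cost < bestCost)
            {
                bestCost = cost;
                bestIndex = split;
            }
        }
        return bestIndex;
    }

    // Sum of squared deviations from the segment mean for data[start..end)
    private static double SegmentCost(double[] data, int start, int end)
    {
        double[] segment = data.Skip(start).Take(end - start).ToArray();
        double mean = segment.Average();
        return segment.Sum(x => (x - mean) * (x - mean));
    }

    public static void Main()
    {
        var random = new Random(42);
        // Synthetic series: 50 points around 10, then 50 points around 20
        double[] series = Enumerable.Range(0, 100)
            .Select(i => (i < 50 ? 10.0 : 20.0) + random.NextDouble())
            .ToArray();
        Console.WriteLine($"Detected changepoint at index {FindSingleChangePoint(series)}"); // expected: around 50
    }
}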


Read more


A story about slow NuGet package browsing


In Rider, we have integration tests that interact with api.nuget.org. We also have an internal service that monitors the performance of these tests. Two days ago, I noticed that some of these tests sometimes run for too long. For example, nuget_NuGetTest_shouldUpgradeVersionForDotNetCore usually takes around 10 seconds. However, in some cases it takes around 110, 210, or 310 seconds:


This looks very suspicious and increases the duration of the whole test suite. Also, our dashboard of performance degradations gets filled with such tests, so real degradations (introduced by changes in our codebase) can go unnoticed. So, my colleagues and I decided to investigate.


Read more


Cross-runtime .NET disassembly with BenchmarkDotNet


BenchmarkDotNet is a cool tool for benchmarking. It has a lot of useful features that help you with performance investigations. However, you can use these features even if you are not actually going to benchmark something. One of these features is DisassemblyDiagnoser: it shows you a disassembly listing of your code for all required runtimes. In this post, I will show you how to get a disassembly listing for .NET Framework, .NET Core, and Mono with one click! You can do it with a very small code snippet like this:

[DryCoreJob, DryMonoJob, DryClrJob(Platform.X86)] // run a quick Dry job on .NET Core, Mono, and x86 .NET Framework
[DisassemblyDiagnoser]                            // export a disassembly listing for each runtime
public class IntroDisasm
{
    [Benchmark]
    public double Sum()
    {
        // a simple loop whose generated machine code we want to inspect
        double res = 0;
        for (int i = 0; i < 64; i++)
            res += i;
        return res;
    }
}
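To actually produce the listings, the class is run the usual BenchmarkDotNet way; a minimal entry point might look like this (the exported listings then show up among the BenchmarkDotNet result artifacts):

using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Runs the Dry jobs for all three runtimes and lets
        // DisassemblyDiagnoser export one listing per runtime.
        BenchmarkRunner.Run<IntroDisasm>();
    }
}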

Read more


BenchmarkDotNet v0.10.14


BenchmarkDotNet v0.10.14 has been released! This release includes:

  • Per-method parameterization (Read more; a minimal usage sketch appears below the release notes)
  • Console histograms and multimodal distribution detection (Read more)
  • Many improvements for Mono disassembly support on Windows (A blog post is coming soon)
  • Many bugfixes

In the v0.10.14 scope, 8 issues were resolved and 11 pull requests were merged. This release includes 47 commits by 8 contributors.
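Here is a minimal sketch of per-method parameterization, assuming the [Arguments] attribute is the mechanism behind this feature (the benchmark body is my own example):

using BenchmarkDotNet.Attributes;

public class IntroArguments
{
    // Each [Arguments] set produces a separate benchmark case in the summary
    [Benchmark]
    [Arguments(10, 100)]
    [Arguments(20, 200)]
    public int Sum(int a, int b) => a + b;
}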


Read more


BenchmarkDotNet v0.10.13


BenchmarkDotNet v0.10.13 has been released! This release includes:

  • Mono support for DisassemblyDiagnoser: Now you can easily get an assembly listing not only on .NET Framework/.NET Core, but also on Mono. It works on Linux, macOS, and Windows (Windows requires Cygwin with obj and as installed). (See #541)
  • Support for ANY CoreFX and CoreCLR builds: BenchmarkDotNet allows users to run their benchmarks against ANY CoreCLR and CoreFX builds. You can compare your local build vs. the MyGet feed, Debug vs. Release, or one version vs. another. (See #651)
  • C# 7.2 support (See #643)
  • .NET 4.7.1 support (See 28aa94)
  • Support for Visual Basic project files (.vbproj) targeting .NET Core (See #626)
  • DisassemblyDiagnoser now supports generic types (See #640)
  • Now it’s possible to benchmark both Mono and .NET Core from the same app (See #653; a minimal sketch appears after this list)
  • Many bug fixes (See details below)
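For the Mono + .NET Core item above, here is a minimal sketch of targeting both runtimes from one project, assuming the per-runtime job attributes that BenchmarkDotNet provides for this purpose (the benchmark body is my own example):

using BenchmarkDotNet.Attributes;

[MonoJob, CoreJob] // one run per runtime; results end up in the same summary table
public class MonoVsCore
{
    [Benchmark]
    public int ParseInt() => int.Parse("12345");
}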

Read more


Analyzing distribution of Mono GC collections


Sometimes I want to quickly understand the GC performance impact on an application. I know that there are many powerful diagnostic tools and approaches, but I’m a fan of the “right tool for the job” idea. In simple cases, I prefer simple noninvasive approaches that provide a quick way to get an overview of the current situation (if everything is terrible, I can always switch to a more advanced approach). Today I want to share my favorite way to quickly get statistics on GC pauses in Mono and generate nice plots like this:
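As a rough illustration of the kind of quick summary meant here (my own sketch; the approach actually used in the post is behind the link), once the GC pause durations have been collected into an array, a few lines of C# are enough to print summary statistics and a console histogram:

using System;
using System.Linq;

public static class GcPauseStats
{
    // pausesMs: GC pause durations in milliseconds, however they were collected
    public static void Print(double[] pausesMs)
    {
        double[] sorted = pausesMs.OrderBy(x => x).ToArray();
        Console.WriteLine($"count={sorted.Length}  min={sorted.First():F2}  " +
                          $"median={sorted[sorted.Length / 2]:F2}  max={sorted.Last():F2}");

        // Console histogram: 10 equal-width bins between min and max
        const int binCount = 10;
        double binWidth = (sorted.Last() - sorted.First()) / binCount;
        for (int bin = 0; bin < binCount; bin++)
        {
            double from = sorted.First() + bin * binWidth;
            double to = from + binWidth;
            int hits = sorted.Count(x => x >= from && (x < to || bin == binCount - 1));
            Console.WriteLine($"[{from,6:F2}; {to,6:F2}) {new string('*', hits)}");
        }
    }
}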



Read more