In many statistical papers, you can find the following phrase: "assuming that we have a normal distribution." You have probably seen plots of the normal distribution density function in statistics textbooks; it looks like this:
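For reference, the density in question is the familiar bell curve. A minimal sketch of evaluating it (plain Python; the function name `normal_pdf` is mine):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The density peaks at the mean and is symmetric around it.
print(round(normal_pdf(0.0), 4))  # peak of the standard normal, ~0.3989
print(normal_pdf(1.0) == normal_pdf(-1.0))  # symmetry around the mean
```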
The normal distribution is a pretty user-friendly mental model when we try to interpret statistical metrics like the mean and standard deviation. However, it can also be an insidious and misleading model when your distribution is not normal. There is a great sentence in the "Testing for normality" paper by R.C. Geary, 1947 (the quote was found here):
Normality is a myth; there never was, and never will be, a normal distribution.
I 100% agree with this statement. At least, if you are working with performance distributions (built from multiple iterations of benchmarks that measure the performance metrics of your applications), you should forget about normality. This is what a typical performance distribution looks like (I built the picture below from a real benchmark that measures the load time of assemblies when we open the Orchard solution in Rider on Linux):
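To see why summary statistics mislead here, consider a hypothetical bimodal "load time" distribution with a fast path and a slow path (the mode locations and mix below are made up for illustration):

```python
import random

random.seed(42)
# Hypothetical bimodal benchmark: a fast path (~50 ms, 70% of runs)
# and a slow path (~200 ms, 30% of runs).
samples = ([random.gauss(50, 5) for _ in range(700)] +
           [random.gauss(200, 10) for _ in range(300)])

mean = sum(samples) / len(samples)
# The mean (~95 ms) lands between the two modes, a value that almost
# never occurs in practice, so it describes no real measurement.
print(round(mean))
```

This is exactly the situation where "assuming a normal distribution" quietly breaks the interpretation of the mean and standard deviation.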
Performance is an important feature for many projects. Unfortunately, it's an all too common situation: a developer accidentally degrades performance while adding new code. After a series of such incidents, people often start to think about performance regression testing.
As developers, we write unit tests all the time. These tests check that our business logic works as designed and that new features don't break existing code. It seems like a good idea to write some perf tests as well, which would verify that we don't have any performance regressions.
It turns out this is harder than it sounds. A lot of developers don't write perf tests at all. Some teams do write them, but almost all of them use their own infrastructure for analysis (which is not a bad thing in general, because it's usually designed for specific projects and requirements). There are a lot of books about test-driven development (TDD), but there are no books about performance-driven development (PDD). There are well-known libraries for unit testing (like xUnit/NUnit/MSTest for .NET), but there are almost no libraries for performance regression testing. Yes, of course, there are some libraries you can use, but there are no well-known, universally recognized libraries, approaches, or tools. Ask your colleagues about it: some of them will give you different answers, and the rest will start Googling it.
There is no common understanding of what performance testing should look like. This situation exists because it's really hard to develop a solution that solves all problems for all kinds of projects. However, that doesn't mean we shouldn't try: we should share our experience and discuss best practices.
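To make the problem concrete, here is a deliberately naive sketch of a perf regression test (the workload, threshold, and helper names are my own inventions, not from any library mentioned above):

```python
import time

def measure(action, iterations=10):
    """Run an action several times and return the best wall-clock time in seconds."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        action()
        timings.append(time.perf_counter() - start)
    return min(timings)

def test_no_regression():
    # Hypothetical workload standing in for real business logic.
    workload = lambda: sum(i * i for i in range(100_000))
    # Naive absolute-threshold assertion: brittle across machines and
    # noisy environments, which is one reason good tooling is hard to build.
    assert measure(workload) < 1.0

test_no_regression()
print("ok")
```

The brittleness of this threshold-based approach (hardware differences, CI noise, non-normal timing distributions) is precisely why teams end up building custom analysis infrastructure.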
In Rider, we care a lot about performance. I like improving application responsiveness and doing interesting optimizations all the time. Rider is already well-optimized, and it's often hard to make significant performance improvements, so I usually do micro-optimizations that don't have a very big impact on the whole application. However, sometimes it's possible to make a feature 100 times faster with just a few lines of code.
Rider is based on ReSharper, so we get a lot of cool features out of the box. One of these features is Solution-Wide Analysis, which lets you constantly keep track of issues in your solution. Sometimes, solution-wide analysis takes a lot of time to run because there are many files to analyze. Of course, it works super fast on small and medium-sized projects.
Let's talk about a performance bug (#RIDER-3742) that we recently had.