Navruz-Özdemir quantile estimator


The Navruz-Özdemir quantile estimator is defined by the following equation for the \(p^\textrm{th}\) quantile of sample \(X\):

\[\begin{split} \operatorname{NO}_p = & \Big( (3p-1)X_{(1)} + (2-3p)X_{(2)} - (1-p)X_{(3)} \Big) B_0 +\\ & \sum_{i=1}^n \Big((1-p)B_{i-1}+pB_i\Big)X_{(i)} +\\ & \Big( -pX_{(n-2)} + (3p-1)X_{(n-1)} + (2-3p)X_{(n)} \Big) B_n \end{split} \]

where \(B_i = B(i; n, p)\) is the probability mass function of the binomial distribution \(B(n, p)\), and \(X_{(i)}\) are the order statistics of sample \(X\).
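For reference, here is a minimal R sketch of this equation (the no_quantile name is mine, and the snippet assumes \(n \geq 3\); it is not the reference implementation from the post):

```r
# Minimal sketch of the NO equation above (illustrative, assumes n >= 3).
no_quantile <- function(x, p) {
  x <- sort(x)                            # order statistics X_(1), ..., X_(n)
  n <- length(x)
  B <- dbinom(0:n, n, p)                  # B[i + 1] = B(i; n, p)
  left  <- ((3*p - 1)*x[1] + (2 - 3*p)*x[2] - (1 - p)*x[3]) * B[1]
  mid   <- sum(((1 - p)*B[1:n] + p*B[2:(n + 1)]) * x)
  right <- (-p*x[n - 2] + (3*p - 1)*x[n - 1] + (2 - 3*p)*x[n]) * B[n + 1]
  left + mid + right
}
```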

In this post, I derive these equations following the paper “A new quantile estimator with weights based on a subsampling approach” (2020) by Gözde Navruz and A. Fırat Özdemir. I also add some explanations, simplify the final equation, and provide reference implementations in C# and R.


Read more


Sfakianakis-Verginis quantile estimator


There are dozens of different ways to estimate quantiles. One of these ways is to use the Sfakianakis-Verginis quantile estimator. To be more specific, it’s a family of three estimators. If we want to estimate the \(p^\textrm{th}\) quantile of sample \(X\), we can use one of the following equations:

\[\begin{split} \operatorname{SV1}_p =& \frac{B_0}{2} \big( X_{(1)}+X_{(2)}-X_{(3)} \big) + \sum_{i=1}^{n} \frac{B_i+B_{i-1}}{2} X_{(i)} + \frac{B_n}{2} \big(- X_{(n-2)}+X_{(n-1)}+X_{(n)} \big),\\ \operatorname{SV2}_p =& \sum_{i=1}^{n} B_{i-1} X_{(i)} + B_n \cdot \big(2X_{(n)} - X_{(n-1)}\big),\\ \operatorname{SV3}_p =& \sum_{i=1}^n B_i X_{(i)} + B_0 \cdot \big(2X_{(1)}-X_{(2)}\big). \end{split} \]

where \(B_i = B(i; n, p)\) is the probability mass function of the binomial distribution \(B(n, p)\), and \(X_{(i)}\) are the order statistics of sample \(X\).
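Here is a minimal R sketch of these three equations (the sv_quantile name and its type argument are mine; the snippet assumes \(n \geq 3\) and is not the reference implementation from the post):

```r
# Minimal sketch of SV1, SV2, SV3 (illustrative, assumes n >= 3).
sv_quantile <- function(x, p, type = 1) {
  x <- sort(x)                            # order statistics X_(1), ..., X_(n)
  n <- length(x)
  B <- dbinom(0:n, n, p)                  # B[i + 1] = B(i; n, p)
  if (type == 1) {
    B[1] / 2 * (x[1] + x[2] - x[3]) +
      sum((B[2:(n + 1)] + B[1:n]) / 2 * x) +
      B[n + 1] / 2 * (-x[n - 2] + x[n - 1] + x[n])
  } else if (type == 2) {
    sum(B[1:n] * x) + B[n + 1] * (2 * x[n] - x[n - 1])
  } else {
    sum(B[2:(n + 1)] * x) + B[1] * (2 * x[1] - x[2])
  }
}
```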

In this post, I derive these equations following the paper “A new family of nonparametric quantile estimators” (2008) by Michael E. Sfakianakis and Dimitris G. Verginis. I also add some explanations, reconstruct missing steps, simplify the final equations, and provide reference implementations in C# and R.


Read more


Winsorized modification of the Harrell-Davis quantile estimator


The Harrell-Davis quantile estimator is one of my favorite quantile estimators because of its efficiency. It has a small mean squared error, which allows accurate estimations. However, it has a severe drawback: it’s not robust. Indeed, since the estimator includes all sample elements with positive weights, its breakdown point is zero.
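To make the zero breakdown point concrete, here is a minimal R sketch of the standard Harrell-Davis formula (a common textbook formulation, not code from this post): every order statistic receives a positive Beta-based weight.

```r
# Sketch of the standard Harrell-Davis estimator: each order statistic gets a
# positive weight from Beta(p*(n+1), (1-p)*(n+1)), so a single corrupted
# element always affects the estimation (breakdown point is zero).
hd_quantile <- function(x, p) {
  x <- sort(x)
  n <- length(x)
  a <- p * (n + 1)
  b <- (1 - p) * (n + 1)
  w <- pbeta(1:n / n, a, b) - pbeta(0:(n - 1) / n, a, b)  # W_i > 0 for all i
  sum(w * x)
}
```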

In this post, I want to suggest modifications of the Harrell-Davis quantile estimator that increase its robustness while keeping almost the same level of efficiency.


Read more


Misleading standard deviation


The standard deviation may be an extremely misleading metric. Even minor deviations from normality could make it completely unreliable and deceptive. Let me demonstrate this problem using an example.

Below you can see three density plots of some distributions. Could you guess their standard deviations?

The correct answers are \(1.0, 3.0, 11.0\). And here is a more challenging problem: could you match these values with the corresponding distributions?


Read more


Unbiased median absolute deviation based on the Harrell-Davis quantile estimator


The median absolute deviation (\(\textrm{MAD}\)) is a robust measure of scale. In the previous post, I showed how to use the unbiased version of the \(\textrm{MAD}\) estimator as a robust alternative to the standard deviation. “Unbiasedness” means that the estimator’s expected value equals the true value of the standard deviation. Unfortunately, there is such a thing as the bias–variance tradeoff: when we remove the bias of the \(\textrm{MAD}\) estimator, we increase its variance and mean squared error (\(\textrm{MSE}\)).

In this post, I want to suggest a more efficient unbiased \(\textrm{MAD}\) estimator. It’s also a consistent estimator for the standard deviation, but it has a smaller \(\textrm{MSE}\). To build this estimator, we should replace the classic “straightforward” median estimator with the Harrell-Davis quantile estimator and adjust the bias-correction factors. Let’s discuss this approach in detail.
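As a structural sketch only (it reuses the hd_quantile function sketched above and keeps the asymptotic factor as a placeholder; the adjusted bias-correction factors are derived in the post itself), the replacement looks like this:

```r
# Structural sketch: the straightforward median is replaced with Harrell-Davis.
# The default factor below is the asymptotic one; the post derives adjusted
# bias-correction factors for finite n.
hd_median <- function(x) hd_quantile(x, 0.5)
mad_hd <- function(x, C = 1 / qnorm(0.75)) {
  C * hd_median(abs(x - hd_median(x)))
}
```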


Read more


Unbiased median absolute deviation


The median absolute deviation (\(\textrm{MAD}\)) is a robust measure of scale. For distribution \(X\), it can be calculated as follows:

\[\textrm{MAD} = C \cdot \textrm{median}(|X - \textrm{median}(X)|) \]

where \(C\) is a constant scale factor. This metric can be used as a robust alternative to the standard deviation. If we want to use the \(\textrm{MAD}\) as a consistent estimator for the standard deviation under the normal distribution, we should set

\[C = C_{\infty} = \dfrac{1}{\Phi^{-1}(3/4)} \approx 1.4826022185056, \]

where \(\Phi^{-1}\) is the quantile function of the standard normal distribution (or the inverse of the cumulative distribution function). If \(X\) follows the normal distribution, we get \(\textrm{MAD} = \sigma\) where \(\sigma\) is the standard deviation.

Now let’s consider a sample \(x = \{ x_1, x_2, \ldots, x_n \}\). Let’s denote the median absolute deviation for a sample of size \(n\) as \(\textrm{MAD}_n\). The corresponding equation looks similar to the definition of \(\textrm{MAD}\) for a distribution:

\[\textrm{MAD}_n = C_n \cdot \textrm{median}(|x - \textrm{median}(x)|). \]

Let’s assume that \(\textrm{median}\) is the straightforward definition of the median (if \(n\) is odd, the median is the middle element of the sorted sample; if \(n\) is even, the median is the arithmetic average of the two middle elements of the sorted sample). We can still use \(C_n = C_{\infty}\) for extremely large sample sizes. However, for small \(n\), \(\textrm{MAD}_n\) becomes a biased estimator. If we want to get an unbiased version, we should adjust the value of \(C_n\).
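For illustration, here is the above equation in R with \(C_n = C_{\infty}\) (a minimal sketch with a hypothetical mad_n name; note that R’s built-in mad() function uses the same construction with constant = 1.4826 by default):

```r
# MAD_n with the asymptotic factor C_inf = 1 / qnorm(0.75); for small n this
# estimator is biased, which is the subject of this post.
mad_n <- function(x) {
  C_inf <- 1 / qnorm(0.75)                # ~1.4826
  C_inf * median(abs(x - median(x)))
}
```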

In this post, we look at the possible approaches and learn how to get the exact value of \(C_n\) that makes \(\textrm{MAD}_n\) an unbiased estimator of the median absolute deviation for any \(n\).


Read more


Comparing distribution quantiles using gamma effect size


There are several ways to describe the difference between two distributions. Here are a few examples:

  • Effect sizes based on differences between means (e.g., Cohen’s d, Glass’ Δ, Hedges’ g)
  • The shift and ratio functions that estimate differences between matched quantiles

In one of the previous posts, I described the gamma effect size, which is defined not for the mean but for quantiles. In this post, I want to share a few case studies that demonstrate how the suggested metric combines the advantages of the above approaches.


Read more


A single outlier could completely distort your Cohen's d value


Cohen’s d is a popular way to estimate the effect size between two samples. It works excellently for perfectly normal distributions. Usually, people think that slight deviations from normality shouldn’t produce a noticeable impact on the result. Unfortunately, it’s not always true. In fact, a single outlier can completely distort the result even in large samples.
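Here is a quick R illustration with synthetic data (the cohen_d helper and the numbers are mine, chosen only to show the effect):

```r
# A single outlier inflates the pooled standard deviation and thus
# shrinks the Cohen's d estimate.
cohen_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  s <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / s                 # pooled-SD definition of Cohen's d
}
set.seed(42)
x <- rnorm(1000, mean = 1)
y <- rnorm(1000, mean = 0)
cohen_d(x, y)                             # ~1: close to the true effect size
cohen_d(x, c(y, 100))                     # one outlier noticeably shrinks it
```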

In this post, I will present some illustrations of this problem and show how to fix it.


Read more


Better moving quantile estimations using the partitioning heaps


In one of the previous posts, I have discussed the Hardle-Steiger method. This algorithm allows estimating the moving median using \(O(L)\) memory and \(O(\log(L))\) element processing complexity (where \(L\) is the window size). Also, I have shown how to adapt this approach to estimate any moving quantile.

In this post, I’m going to present further improvements. The Hardle-Steiger method always returns an order statistic, which is the \(k^\textrm{th}\) smallest element of the sample. It means that the estimated quantile value always equals one of the last \(L\) observed numbers. However, many of the classic quantile estimators use two elements. For example, if we want to estimate the median for \(x = \{4, 5, 6, 7\}\), some estimators return \(5.5\) (which is the arithmetic mean of \(5\) and \(6\)) instead of \(5\) or \(6\) (which are order statistics).
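In R, for instance, the default median estimator behaves exactly like this:

```r
median(c(4, 5, 6, 7))     # 5.5: the average of the two middle order statistics
sort(c(4, 5, 6, 7))[2:3]  # 5 6: the order statistics themselves
```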

Let’s learn how to implement a moving version of such estimators using the partitioning heaps from the Hardle-Steiger method.


Read more


MP² quantile estimator: estimating the moving median without storing values


In one of the previous posts, I described the P² quantile estimator. It allows estimating quantiles on a stream of numbers without storing them. Such sequential (streaming/online) quantile estimators are useful in software telemetry because they help to evaluate the median and other distribution quantiles without a noticeable memory footprint.
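For context, here is a compact R sketch of the classic P² procedure for a single quantile (a standard formulation of the Jain-Chlamtac algorithm, not this post’s code; MP² modifies this procedure to track a moving window):

```r
# Classic P-squared (Jain & Chlamtac, 1985): five markers approximate the
# p-quantile of a stream in O(1) memory.
p2_estimator <- function(p) {
  q <- c()                                   # marker heights
  n <- 1:5                                   # actual marker positions
  np <- c(1, 1 + 2*p, 1 + 4*p, 3 + 2*p, 5)   # desired marker positions
  dn <- c(0, p/2, p, (1 + p)/2, 1)           # desired position increments
  add <- function(x) {
    if (length(q) < 5) {                     # initialization phase
      q <<- sort(c(q, x))
      return(invisible(NULL))
    }
    if (x < q[1]) {                          # new minimum
      q[1] <<- x; k <- 1
    } else if (x >= q[5]) {                  # new maximum
      q[5] <<- x; k <- 4
    } else {
      k <- findInterval(x, q)                # cell with q[k] <= x < q[k+1]
    }
    n[(k + 1):5] <<- n[(k + 1):5] + 1
    np <<- np + dn
    for (i in 2:4) {                         # adjust the three middle markers
      d <- np[i] - n[i]
      if ((d >= 1 && n[i + 1] - n[i] > 1) || (d <= -1 && n[i - 1] - n[i] < -1)) {
        d <- sign(d)
        qp <- q[i] + d / (n[i + 1] - n[i - 1]) *   # parabolic prediction
          ((n[i] - n[i - 1] + d) * (q[i + 1] - q[i]) / (n[i + 1] - n[i]) +
           (n[i + 1] - n[i] - d) * (q[i] - q[i - 1]) / (n[i] - n[i - 1]))
        if (q[i - 1] < qp && qp < q[i + 1]) {
          q[i] <<- qp
        } else {                             # fall back to linear interpolation
          q[i] <<- q[i] + d * (q[i + d] - q[i]) / (n[i + d] - n[i])
        }
        n[i] <<- n[i] + d
      }
    }
    invisible(NULL)
  }
  value <- function() if (length(q) < 5) quantile(q, p, names = FALSE) else q[3]
  list(add = add, value = value)
}
```

Usage: create est <- p2_estimator(0.5), call est$add(x) for every incoming value, and call est$value() to read the current estimation.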

After the publication, I got a lot of questions about moving sequential quantile estimators. Such estimators return quantile values not for the whole stream of numbers, but only for the recent values. So, I wrote another post about a quantile estimator based on partitioning heaps (inspired by the Hardle-Steiger method). This algorithm gives you the exact value of any order statistic for the last \(L\) numbers (\(L\) is known as the window size). However, it requires \(O(L)\) memory, and it takes \(O(\log(L))\) time to process each element. This may be acceptable in some cases. Unfortunately, it doesn’t allow implementing low-overhead telemetry in the case of large \(L\).

In this post, I’m going to present a moving modification of the P² quantile estimator. Let’s call it MP² (moving P²). It requires \(O(1)\) memory, it takes \(O(1)\) time to process each element, and it supports windows of any size. Of course, we have a trade-off with the estimation accuracy: it returns a quantile approximation instead of the exact order statistics. However, in most cases, the MP² estimations are pretty accurate from a practical point of view.

Let’s discuss MP² in detail!


Read more