Finite-sample Gaussian efficiency of the trimmed Harrell-Davis median estimator
In the previous post, we obtained the finite-sample Gaussian efficiency values of the sample median and the Harrell-Davis median. In this post, we extend these results and obtain the finite-sample Gaussian efficiency values of the trimmed Harrell-Davis median estimator based on the highest density interval of width \(1/\sqrt{n}\).
Read more
Finite-sample Gaussian efficiency of the Harrell-Davis median estimator
In this post, we explore finite-sample and asymptotic Gaussian efficiency values of the sample median and the Harrell-Davis median.
Read more
Weighted quantile estimation for a weighted mixture distribution
Let \(\mathbf{x} = \{ x_1, x_2, \ldots, x_n \}\) be a sample of size \(n\). We assign non-negative weight coefficients \(w_i\) with a positive sum for all sample elements:
\[\mathbf{w} = \{ w_1, w_2, \ldots, w_n \}, \quad w_i \geq 0, \quad \sum_{i=1}^{n} w_i > 0. \]
For simplicity, we also consider normalized (standardized) weights \(\overline{\mathbf{w}}\):
\[\overline{\mathbf{w}} = \{ \overline{w}_1, \overline{w}_2, \ldots, \overline{w}_n \}, \quad \overline{w}_i = \frac{w_i}{\sum_{i=1}^{n} w_i}. \]
In the non-weighted case, we can consider a quantile estimator \(\operatorname{Q}(\mathbf{x}, p)\) that estimates the \(p^\textrm{th}\) quantile of the underlying distribution. We want to build a weighted quantile estimator \(\operatorname{Q}(\mathbf{x}, \mathbf{w}, p)\) so that we can estimate the quantiles of a weighted sample.
In this post, we consider a specific problem of estimating quantiles of a weighted mixture distribution.
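To make the weighted setting above more tangible, here is a minimal sketch of one possible weighted quantile estimator: it normalizes the weights and linearly interpolates the weighted empirical CDF. The function name and the interpolation rule are illustrative assumptions; this is not the specific estimator discussed in the post.

```python
import numpy as np

def weighted_quantile(x, w, p):
    """Estimate the p-th quantile of a weighted sample (a simple sketch).

    Sorts the sample, normalizes the weights, and linearly interpolates
    the weighted empirical CDF. This is only one of many possible
    weighted quantile estimators.
    """
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    order = np.argsort(x)
    x, w = x[order], w[order]
    w_norm = w / w.sum()                    # normalized weights
    cdf = np.cumsum(w_norm) - 0.5 * w_norm  # midpoints of the weighted CDF steps
    return np.interp(p, cdf, x)

x = [10.0, 20.0, 30.0, 40.0]
w = [1.0, 1.0, 1.0, 0.0]  # the zero-weight element is effectively ignored
print(weighted_quantile(x, w, 0.5))  # 20.0
```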
Read more
Preprint announcement: 'Finite-sample Rousseeuw-Croux scale estimators'
Recently, I published a preprint of a paper ‘Finite-sample Rousseeuw-Croux scale estimators’. It’s based on a series of my research notes.
The paper preprint is available on arXiv: arXiv:2209.12268 [stat.ME]. The paper source code is available on GitHub: AndreyAkinshin/paper-frc. You can cite it as follows:
- Andrey Akinshin (2022) “Finite-sample Rousseeuw-Croux scale estimators” arXiv:2209.12268
Abstract:
The Rousseeuw-Croux \(S_n\), \(Q_n\) scale estimators and the median absolute deviation \(\operatorname{MAD}_n\) can be used as consistent estimators for the standard deviation under normality. All of them are highly robust: the breakdown point of all three estimators is \(50\%\). However, \(S_n\) and \(Q_n\) are much more efficient than \(\operatorname{MAD}_n\): their asymptotic Gaussian efficiency values are \(58\%\) and \(82\%\) respectively compared to \(37\%\) for \(\operatorname{MAD}_n\). Although these values look impressive, they are only asymptotic values. The actual Gaussian efficiency of \(S_n\) and \(Q_n\) for small sample sizes is noticeably lower than in the asymptotic case.
The original work by Rousseeuw and Croux (1993) provides only rough approximations of the finite-sample bias-correction factors for \(S_n,\, Q_n\) and brief notes on their finite-sample efficiency values. In this paper, we perform extensive Monte-Carlo simulations in order to obtain refined values of the finite-sample properties of the Rousseeuw-Croux scale estimators. We present accurate values of the bias-correction factors and Gaussian efficiency for small samples (\(n \leq 100\)) and prediction equations for samples of larger sizes.
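To illustrate the general idea behind such simulations, here is a minimal Monte-Carlo sketch that approximates the finite-sample Gaussian efficiency of \(\operatorname{MAD}_n\) as the ratio of relative variances (variance divided by squared mean, which is insensitive to scaling constants) against the sample standard deviation. The paper's actual methodology, estimators, and iteration counts may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def mad(x):
    """Median absolute deviation around the median (no scaling constant)."""
    return np.median(np.abs(x - np.median(x)))

def relative_variance(values):
    """Variance divided by squared mean; invariant to constant rescaling."""
    return np.var(values) / np.mean(values) ** 2

# Monte-Carlo sketch of finite-sample Gaussian efficiency for n = 10.
n, iterations = 10, 100_000
sd_values = np.array([np.std(rng.standard_normal(n), ddof=1) for _ in range(iterations)])
mad_values = np.array([mad(rng.standard_normal(n)) for _ in range(iterations)])

print("Approximate Gaussian efficiency of MAD for n = 10:",
      relative_variance(sd_values) / relative_variance(mad_values))
```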
Read more
Sensitivity curve of the Harrell-Davis quantile estimator, Part 3
In the previous posts (1, 2), I explored the sensitivity curves of the Harrell-Davis quantile estimator on the normal distribution, the exponential distribution, and the Cauchy distribution. In this post, I build these sensitivity curves for some additional distributions.
Read more
Sensitivity curve of the Harrell-Davis quantile estimator, Part 2
In the previous post, I explored the sensitivity curves of the Harrell-Davis quantile estimator on the normal distribution. In this post, I continue the same investigation on the exponential and Cauchy distributions.
Read more
Sensitivity curve of the Harrell-Davis quantile estimator, Part 1
The Harrell-Davis quantile estimator is an efficient replacement for the traditional quantile estimator, especially in the case of light-tailed distributions. Unfortunately, it is not robust: its breakdown point is zero. However, the breakdown point is not the only descriptor of robustness. While the breakdown point describes the portion of the distribution that should be replaced by arbitrarily large values to corrupt the estimation, it does not describe the actual impact of finite outliers. The arithmetic mean also has a breakdown point of zero, but the practical robustness of the mean and of the Harrell-Davis quantile estimator is not the same. The Harrell-Davis quantile estimator is an L-estimator that assigns extremely low weights to sample elements near the tails (especially for reasonably large sample sizes). Therefore, the actual impact of potential outliers is not so noticeable. In this post, we use the standardized sensitivity curve to evaluate this impact.
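As a rough sketch of this approach, the snippet below computes the Harrell-Davis median (a weighted sum of order statistics with Beta-distribution weights) and evaluates a sensitivity curve by appending a single point \(x\) to an “ideal” baseline sample of normal quantiles. The construction of the baseline sample and the standardization are my own assumptions and may differ from the ones used in the post.

```python
import numpy as np
from scipy import stats

def harrell_davis(x, p=0.5):
    """Harrell-Davis estimate of the p-th quantile:
    order statistics weighted by increments of the Beta(a, b) CDF,
    where a = p*(n+1) and b = (1-p)*(n+1)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = p * (n + 1), (1 - p) * (n + 1)
    weights = np.diff(stats.beta.cdf(np.arange(n + 1) / n, a, b))
    return np.sum(weights * x)

# Sensitivity curve sketch: SC(x) = n * (T(base + {x}) - T(base)),
# where base is an "ideal" sample of n-1 normal quantiles (an assumption).
n = 20
base = stats.norm.ppf((np.arange(1, n) - 0.5) / (n - 1))
t0 = harrell_davis(base, 0.5)
for point in [0.0, 2.0, 5.0, 20.0]:
    t1 = harrell_davis(np.append(base, point), 0.5)
    print(f"x = {point:5.1f}  ->  SC(x) = {n * (t1 - t0):.4f}")
```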
Read more
Weighted quantile estimators for exponential smoothing and mixture distributions
There are various ways to estimate quantiles of weighted samples. The choice of the most appropriate weighted quantile estimator depends not only on the estimator's own properties but also on the goal.
Let us consider two problems:
- Estimating quantiles of a weighted mixture distribution.
In this problem, we have a weighted mixture distribution given by \(F = \sum_{i=1}^m w_i F_i\). We collect samples \(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_m\) from \(F_1, F_2, \ldots, F_m\) and want to estimate the quantile function \(F^{-1}\) of the mixture distribution based on the given samples.
- Quantile exponential smoothing.
In this problem, we have a time series \(\mathbf{x} = \{ x_1, x_2, \ldots, x_n \}\). We want to describe the distribution “at the end” of this time series. The latest series element \(x_n\) is the most “relevant” one, but we cannot build a distribution based on a single element. Therefore, we have to consider more elements at the end of \(\mathbf{x}\). However, if we take too many elements, we may corrupt the estimations due to obsolete measurements. To resolve this problem, we can assign weights to all elements according to the exponential law and estimate weighted quantiles.
In both problems, the usage of weighted quantile estimators looks like a reasonable solution. However, in each problem, we have different expectations of the estimator behavior. In this post, we provide an example that illustrates the difference in these expectations.
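As a small illustration of the weighting scheme in the second problem, here is a minimal sketch of exponentially decaying weights; the specific decay law and its half-life parameterization are illustrative assumptions, and the resulting weights can then be passed to any weighted quantile estimator.

```python
import numpy as np

def exponential_weights(n, half_life):
    """Weights for a time series of length n that decay by half every
    `half_life` observations counted back from the latest element x_n."""
    i = np.arange(1, n + 1)
    return 0.5 ** ((n - i) / half_life)

w = exponential_weights(n=10, half_life=3)
print(np.round(w / w.sum(), 3))  # normalized weights: the latest elements dominate
```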
Read more
The Huggins-Roy family of effective sample sizes
When we work with weighted samples, it's essential to introduce adjustments for the sample size. Indeed, let's consider the following two weighted samples:
\[\mathbf{x}_1 = \{ x_1, x_2, \ldots, x_n \}, \quad \mathbf{w}_1 = \{ w_1, w_2, \ldots, w_n \}, \]
\[\mathbf{x}_2 = \{ x_1, x_2, \ldots, x_n, x_{n+1} \}, \quad \mathbf{w}_2 = \{ w_1, w_2, \ldots, w_n, 0 \}. \]
Since the weight of \(x_{n+1}\) in the second sample is zero, it's natural to expect that both samples have the same set of properties. However, there is a major difference between \(\mathbf{x}_1\) and \(\mathbf{x}_2\): their sample sizes, which are \(n\) and \(n+1\) respectively. In order to eliminate this difference, we typically introduce the effective sample size (ESS), which is estimated based on the list of weights.
There are various ways to estimate the ESS. In this post, we briefly discuss the Huggins-Roy family of effective sample sizes.
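As a simple illustration of the concept, the sketch below computes the widely used Kish ESS, \(\left(\sum_{i=1}^n w_i\right)^2 / \sum_{i=1}^n w_i^2\), and shows that appending a zero-weight element leaves it unchanged. The Kish formula is only one common ESS definition and is used here purely for illustration; the Huggins-Roy family itself is the subject of the post.

```python
import numpy as np

def kish_ess(w):
    """Kish's effective sample size: (sum of weights)^2 / (sum of squared weights)."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

w1 = np.array([1.0, 1.0, 1.0, 1.0])  # n = 4, equal weights
w2 = np.append(w1, 0.0)              # n = 5, but the extra element has zero weight
print(kish_ess(w1), kish_ess(w2))    # both equal 4.0: the zero weight does not count
```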
Read more
Finite-sample bias correction factors for Rousseeuw-Croux scale estimators
The Rousseeuw-Croux scale estimators \(S_n\) and \(Q_n\) are efficient alternatives to the median absolute deviation (\(\operatorname{MAD}_n\)). While all three estimators have the same breakdown point of \(50\%\), \(S_n\) and \(Q_n\) have higher statistical efficiency than \(\operatorname{MAD}_n\). The asymptotic Gaussian efficiency values of \(\operatorname{MAD}_n\), \(S_n\), and \(Q_n\) are \(37\%\), \(58\%\), and \(82\%\) respectively.
Using scale constants, we can make \(S_n\) and \(Q_n\) consistent estimators for the standard deviation under normality. The asymptotic values of these constants are well-known. However, for finite samples, only approximate values of these constants are known. In this post, we provide refined values of these constants with higher accuracy.
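To illustrate how such constants can be obtained, here is a minimal Monte-Carlo sketch that estimates the overall finite-sample scale constant for \(Q_n\) (the factor by which the raw estimate is multiplied so that it becomes unbiased for the standard deviation under normality). The paper relies on far larger and more careful simulations; this only shows the idea.

```python
import math
import numpy as np

def qn_raw(x):
    """Unscaled Q_n: the k-th order statistic of the pairwise distances
    |x_i - x_j|, i < j, with k = C(h, 2) and h = floor(n/2) + 1
    (Rousseeuw & Croux, 1993); no consistency constant is applied."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = math.comb(n // 2 + 1, 2)
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, 1)]
    return np.sort(d)[k - 1]

# Monte-Carlo sketch: choose the constant so that the scaled estimator
# has mean 1 on standard normal samples of size n.
rng = np.random.default_rng(42)
n, iterations = 10, 50_000
values = np.array([qn_raw(rng.standard_normal(n)) for _ in range(iterations)])
print(f"Estimated overall scale constant for Q_n, n = {n}: {1 / values.mean():.4f}")
```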
Read more