Notes / Timer Characteristics


There are many terms for describing basic timer characteristics. We are going to cover the following group of terms:

Nominal and actual Frequency, Resolution, and Granularity

You may think that the minimum achievable positive difference between two timestamps is 1 tick. It’s not always true; it’s better to say that this difference is not less than 1 tick. A tick is the measurement unit of a timer, but that doesn’t mean you can always measure a 1-tick interval. For example, 1 DateTime tick is 100 ns, but it’s impossible to measure such a small interval with the help of DateTime (read more in the next section).

The frequency term itself can cause some terminology trouble. Sometimes “frequency” means how many ticks there are in one second: this is the nominal frequency. Sometimes “frequency” means how many counter increments happen per second: this is the actual frequency.

If we have a value of frequency, we can calculate reciprocal frequency. The formula is simple: <reciprocal frequency> = 1 / <frequency>. Thus, if we are talking about the nominal frequency, the nominal reciprocal frequency is the duration of 1 tick. If we are talking about the actual frequency, the actual reciprocal frequency is the time interval between two sequential counter increments.

An example.
The Stopwatch.Frequency value is the nominal stopwatch frequency because it can be used only to calculate the duration of 1 tick. The specification and the documentation don’t promise any particular value, so Stopwatch.Frequency may return anything, and we can’t draw any conclusions about the actual Stopwatch frequency based on it. For example, Stopwatch.Frequency in Mono is always 10000000.
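
For illustration, here is a minimal C# sketch (assuming a plain .NET console project) that prints Stopwatch.Frequency and the nominal tick duration derived from it. Note that the API exposes only the nominal values; nothing here tells us the actual frequency of the underlying hardware counter.

```cs
using System;
using System.Diagnostics;

class NominalFrequencyDemo
{
    static void Main()
    {
        // Nominal frequency: the number of Stopwatch ticks per second.
        long nominalFrequency = Stopwatch.Frequency;

        // Nominal reciprocal frequency: the duration of a single tick.
        double nominalTickDurationNs = 1e9 / nominalFrequency;

        Console.WriteLine($"Nominal frequency:     {nominalFrequency} Hz");
        Console.WriteLine($"Nominal tick duration: {nominalTickDurationNs:F2} ns");
    }
}
```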

“Reciprocal frequency” may sound clumsy, so we have another handy term: resolution. Unfortunately, here we also have some trouble with the definition. Sometimes people say “resolution” and mean the duration of 1 tick: this is the nominal resolution. Sometimes people say “resolution” and mean the minimum positive interval between two different measurements: this is the actual resolution.

There is another term for resolution: granularity. Usually, people use both words as synonyms (so we can also talk about the nominal granularity and the actual granularity), but more often “granularity” describes the actual reciprocal frequency (the actual resolution) rather than the duration of 1 tick.

If we actually can measure a 1-tick interval, everything is fine: there is no difference between the nominal and actual values, they are equal. Thus, people often say just “frequency” or “resolution” without any prefixes. However, if the actual resolution is more than 1 tick, terminology troubles may arise. Be careful and always look at the context.

An example.
The standard duration of one DateTime tick is 100 ns. On modern versions of Windows, the default system timer frequency (which is responsible for DateTime.Now) is 64 Hz. Thus, the actual resolution is the following:

$$ \textrm{ActualResolution} = \frac{1}{\textrm{ActualFrequency}} = \frac{1}{64\textrm{Hz}} = 15.625\textrm{ms} = 15625000\textrm{ns} = 156250\textrm{ticks}. $$

Let’s look once again at all the values:

NominalResolution = 100 ns
ActualResolution  = 15.625 ms
NominalFrequency  = 10 MHz
ActualFrequency   = 64 Hz
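
The actual resolution can also be estimated empirically. The sketch below spins on DateTime.UtcNow and reports the observed increments; the results depend on the operating system and its current timer configuration, so you may see anything from a fraction of a millisecond up to the 15.625 ms mentioned above.

```cs
using System;

class DateTimeResolutionDemo
{
    static void Main()
    {
        // One DateTime tick is 100 ns, so the nominal resolution is 100 ns.
        Console.WriteLine("Nominal resolution: 100 ns");

        // Estimate the actual resolution: spin until the returned value changes
        // and report the size of the observed increment.
        for (int i = 0; i < 10; i++)
        {
            long start = DateTime.UtcNow.Ticks;
            long next = start;
            while (next == start)
                next = DateTime.UtcNow.Ticks;

            double incrementMs = (next - start) * 100 / 1e6; // ticks -> ns -> ms
            Console.WriteLine($"Observed increment: {incrementMs:F3} ms");
        }
    }
}
```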

As you can see, it’s important to distinguish between the nominal and actual values.

Frequency offset

As mentioned before, it’s easy to assume that the frequency is fixed. Usually, this assumption doesn’t affect the calculations. However, it’s good to know that the frequency may differ from the declared value. The actual frequency may deviate from the declared value by at most the so-called maximum frequency offset, which is expressed in parts per million (ppm, $10^{-6}$).

An example. The declared timer frequency is 2 GHz with a maximum frequency offset of 35 ppm. This means that the actual frequency should be in the range 1'999'930'000 Hz..2'000'070'000 Hz. Let’s say we measure a time interval, and the measured value is 1 second (or 2'000'000'000 ticks). If the actual frequency is 1'999'930'000 Hz, the actual time interval is:

$$ \textrm{ElapsedTime} = \frac{2\,000\,000\,000~\textrm{ticks}}{1\,999\,930\,000~\textrm{ticks/sec}} \approx 1.000035001225\textrm{sec}. $$

If the actual frequency is 2'000'070'000 Hz, the actual time interval is:

$$ \textrm{ElapsedTime} = \frac{2\,000\,000\,000~\textrm{ticks}}{2\,000\,070\,000~\textrm{ticks/sec}} \approx 0.999965001225\textrm{sec}. $$

Thus, the actual value of the measured interval (assuming there are no other errors) is in the range 0.999965001225 sec..1.000035001225 sec.
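
To make the arithmetic explicit, here is a small sketch that reproduces the numbers of this example from the declared frequency and the maximum frequency offset (the constants are simply the values used above).

```cs
using System;

class FrequencyOffsetDemo
{
    static void Main()
    {
        const double declaredFrequency = 2_000_000_000; // 2 GHz (ticks per second)
        const double maxOffsetPpm = 35;                  // maximum frequency offset
        const double measuredTicks = 2_000_000_000;      // measured interval: 1 nominal second

        double minFrequency = declaredFrequency * (1 - maxOffsetPpm / 1e6); // 1'999'930'000 Hz
        double maxFrequency = declaredFrequency * (1 + maxOffsetPpm / 1e6); // 2'000'070'000 Hz

        // The slower the counter actually runs, the longer the real interval is.
        double maxElapsedSec = measuredTicks / minFrequency; // ~1.000035001225 sec
        double minElapsedSec = measuredTicks / maxFrequency; // ~0.999965001225 sec

        Console.WriteLine($"Actual frequency range: {minFrequency:F0}..{maxFrequency:F0} Hz");
        Console.WriteLine($"Actual elapsed time:    {minElapsedSec:F12}..{maxElapsedSec:F12} sec");
    }
}
```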

Once again: usually we shouldn’t care about it because other errors have a greater impact on the final error.

Timestamp latency, Access time, and Timer overhead

When we discussed Figure {@fig:timers-terms-quantizing}, the timestamps were shown as instant events. In fact, a call to a timestamping API method also takes some time. Sometimes it interacts with the hardware, and such a call can be quite expensive. You may find different terms for this value: timestamp latency, access time, or timer overhead. All of these terms usually mean the same thing: the time interval between calling a timestamping API and getting the value back.
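
A rough way to observe this overhead is to call the timestamping API many times in a loop and divide the total duration by the number of calls. The sketch below does exactly that; it’s not a rigorous benchmark (no statistics, no outlier handling), but it gives the right order of magnitude.

```cs
using System;
using System.Diagnostics;

class TimestampLatencyDemo
{
    static void Main()
    {
        const int iterations = 1_000_000;

        // Warm-up so that the method is JIT-compiled before we measure it.
        for (int i = 0; i < 1000; i++)
            Stopwatch.GetTimestamp();

        long start = Stopwatch.GetTimestamp();
        for (int i = 0; i < iterations; i++)
            Stopwatch.GetTimestamp();
        long finish = Stopwatch.GetTimestamp();

        // Average cost of one timestamping call, in nanoseconds.
        double totalNs = (finish - start) * 1e9 / Stopwatch.Frequency;
        Console.WriteLine($"Approximate Stopwatch.GetTimestamp() latency: {totalNs / iterations:F1} ns");
    }
}
```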

Precision and Accuracy

There are two more important terms: precision and accuracy.

Precision (or random error) is the maximum difference between different measurements of the same time interval. Precision describes how repeatable the measurements are. In other words, precision is determined by the random errors that scatter the measured values around their expected value.

Accuracy (or systematic error) is the difference between the average measured value and the actual value; it describes how close the measurements are to the true value.

In most cases, the timestamp latency is negligibly small compared to the actual resolution. However, sometimes the latency is huge, and it can affect total accuracy. We can say that the accuracy, in this case, is the sum of the latency and the resolution.

An example. On Windows 10 with HPET enabled (read more in further sections), the frequency of Stopwatch is 14.31818 MHz, and the latency of Stopwatch.GetTimestamp() is about 700 ns. It’s easy to calculate the Stopwatch resolution: (1/14318180) second $\approx$ 70 ns. Unfortunately, the latency is much bigger, so it’s impossible to actually measure 70 ns intervals:

$$ \nonumber \textrm{Accuracy} \approx \textrm{Latency} + \textrm{Resolution} \approx 700\textrm{ns} + 70\textrm{ns} \approx 770\textrm{ns}. $$

A typical measurement for such a situation is presented in the figure below:


Thus, if you want to calculate the accuracy level, you should know both values: the actual resolution and the timestamp latency.

People often confuse resolution, precision, and accuracy. Let’s look at the difference in a simple example.

An example.
We have a timer with frequency = 100 Hz (which means that 1 sec = 100 ticks). We try to measure an interval of exactly 1 second 5 times. Here are our results: 119 ticks, 121 ticks, 122 ticks, 120 ticks, 118 ticks. In this case, the resolution is 1 tick = 1/100 sec = 10 ms; the average measured value is 120 ticks, while the actual value is 100 ticks, so the systematic error is 20 ticks = 200 ms; the difference between the maximum measurement (122 ticks) and the minimum measurement (118 ticks) is 4 ticks, so the random error is 4 ticks = 40 ms (a small sketch after the list below reproduces these numbers).

Thus, we get

Resolution = 10 ms
Accuracy   = 200 ms
Precision  = 40 ms
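
Here is a minimal sketch that reproduces these three values from the five sample measurements; the accuracy is computed as the systematic error of the average measurement, and the precision as the spread between the measurements.

```cs
using System;
using System.Linq;

class PrecisionAccuracyDemo
{
    static void Main()
    {
        const double frequencyHz = 100;   // 1 tick = 10 ms
        const double actualTicks = 100;   // the true interval is 1 sec = 100 ticks
        long[] measured = { 119, 121, 122, 120, 118 };

        double resolutionMs = 1000 / frequencyHz;

        // Systematic error: how far the average measurement is from the true value.
        double accuracyMs = Math.Abs(measured.Average() - actualTicks) / frequencyHz * 1000;

        // Random error: the spread between the measurements themselves.
        double precisionMs = (measured.Max() - measured.Min()) / frequencyHz * 1000;

        Console.WriteLine($"Resolution = {resolutionMs:F0} ms"); // 10 ms
        Console.WriteLine($"Accuracy   = {accuracyMs:F0} ms");   // 200 ms
        Console.WriteLine($"Precision  = {precisionMs:F0} ms");  // 40 ms
    }
}
```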

As we can see, all three terms define different values. However, people confuse them because very often the observed values coincide. Precision is limited by the nominal resolution (we can’t get precision finer than 1 tick). Accuracy is limited by precision and the actual resolution (if measurements of the same value differ from each other by x, we can’t expect the error of a single measurement to be much smaller than x). Usually, if we work with a high-precision timer with a low access time, precision, resolution, and accuracy have the same order of magnitude (sometimes these values are equal). So, if everyone knows the context (the exact values of all timer properties), the terms can replace each other (e.g., we can say “precision level” instead of “accuracy level” because they are the same). Formally, this is wrong. Despite this, people do it anyway. If you read a description of some measurements, always look at the context and be ready for incorrect statements.