Notes / 14.31818 MHz

The HPET frequency is 14.31818 MHz and the ACPI PM frequency is 3.579545 MHz. Let us discuss the origin of these magic numbers.

In 1950, the National Television System Committee (NTSC) started developing a new standard for color television; it was approved in 1953. It was a complicated technical task because the new standard had to be backward compatible with old black-and-white televisions. The new standard uses a luminance-chrominance encoding system: a color image is represented as a sum of luminance and chrominance signals. The luminance signal corresponds to the monochrome signal in black-and-white television, so B&W TVs can accept the new standard. The chrominance signal contains only color information (two additional signals with different phases). Now let’s solve a simple task: we should choose a chrominance signal frequency $f_c$ (also known as the color subcarrier frequency) that doesn’t affect B&W television. Consider several basic conditions that we should satisfy.

Condition 1 “Bandwidth for the chrominance signal”
The frequency of the chrominance signal $f_c$ should be as high as possible: the higher it is, the finer and less visible the noise structure on the screen. By the American standard, the maximum video frequency is $f_{max}=4.18$ MHz. After a series of experiments, it turned out that the difference $f_{max}-f_c$ can’t be less than 0.6 MHz (otherwise, we get image distortions). Thus, we have the following requirement:

$$ f_{max}-f_c \geq 0.6~\textrm{MHz} $$

Now we have the upper limit for $f_c$:

$$f_c \leq 3.58~\textrm{MHz}$$

Condition 2 “Line frequency”
To minimize the visibility of the color subcarrier on B&W screens, its frequency should be chosen as an odd half-integer multiple of the horizontal line rate $f_h$:

$$f_c=(2n+1)\frac{f_h}{2}$$

Thanks to this, the chrominance signal peaks fit neatly between the luminance signal peaks, which minimizes the interference. With an even multiplier $2n$, we would get a strong noise pattern (a set of vertical lines) instead.

Condition 3 “Audio signal”
We also have to minimize the interference between the audio signal (sound carrier) and the chrominance signal (chrominance carrier). So, by analogy with Condition 2, we introduce an additional requirement for the difference between the sound carrier spacing $f_{\Delta s}$ and the chrominance carrier frequency $f_c$:

$$f_{\Delta s}-f_c=(2m+1)\frac{f_h}{2}$$

Substituting $(2n+1)\cdot f_h/2$ for $f_c$, we get

$$f_{\Delta s}=(2m+1)\frac{f_h}{2}+(2n+1)\frac{f_h}{2}$$

It follows that

$$\frac{f_{\Delta s}}{f_h}=m+n+1=k$$

where $k$ is an integer number.
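The algebra above is easy to sanity-check numerically. A tiny Python sketch (not part of the original derivation):

```python
# The sum of two odd half-multiples of f_h is always a whole multiple of f_h:
# (2m+1)/2 + (2n+1)/2 = m + n + 1, an integer for any integers m, n.
for m in range(10):
    for n in range(10):
        assert (2 * m + 1) / 2 + (2 * n + 1) / 2 == m + n + 1
print("ok")
```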

The original standard had a frame rate of 30 Hz with 525 lines per frame (15750 lines per second). These numbers were chosen because of the limitations of the vacuum-tube technology of the day. Thus, the original horizontal line rate was $f_h$=15750 Hz. By the American B&W television standard, the sound carrier spacing $f_{\Delta s}$ between the audio and video carriers is exactly 4.5 MHz. Thereby, we have

$$\frac{f_{\Delta s}}{f_h}=\frac{4.5~\textrm{MHz}}{15750~\textrm{Hz}}\approx285.714285714.$$

To minimize interference between the audio and color video signals, $f_{\Delta s}/f_h$ should be an integer number. It was decided to make $f_{\Delta s}$ the 286${}^{\textrm{th}}$ harmonic of $f_h$ (286 is the closest integer to 285.714285714). However, we can’t change the audio carrier frequency (legacy TV receivers would not decode it), but we can change the horizontal line frequency! It’s easy to calculate the new horizontal line rate:

$$f_h=\frac{f_{\Delta s}}{286}=\frac{4.5~\textrm{MHz}}{286}\approx 15734.26573~\textrm{Hz}$$

The frequency reduction coefficient is 15750 Hz / 15734.26573 Hz $\approx$ 1.001. An interesting implication is that we now have a frame rate of 29.97 Hz instead of 30 Hz and a field rate of 59.94 Hz instead of the common 60 Hz.^[It's a special kind of fun to convert "24 frames per second" films to the 59.94 Hz NTSC video standard. Long story short, it requires slowing the film down by 1/1000 to 23.976 frames per second, which lengthens a 1.5-hour film by 5.4 seconds. Google for “Three-two pull down.”]
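These numbers can be double-checked with a few lines of Python, using only the values stated in the text:

```python
# NTSC line/frame rates after locking the line rate to the 4.5 MHz
# sound carrier spacing (286 = the chosen harmonic, 525 = lines per frame).
SOUND_CARRIER_SPACING_HZ = 4.5e6
HARMONIC = 286
LINES_PER_FRAME = 525

line_rate = SOUND_CARRIER_SPACING_HZ / HARMONIC  # new horizontal line rate
frame_rate = line_rate / LINES_PER_FRAME         # new frame rate
field_rate = 2 * frame_rate                      # two interlaced fields per frame

print(round(line_rate, 5))   # 15734.26573
print(round(frame_rate, 3))  # 29.97
print(round(field_rate, 2))  # 59.94
```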

Condition 4 “Simple construction”
We also want an oscillator that is easy to implement. It is easier to create frequency divider chains when $(2n+1)$ is a product of small prime numbers. We know that

$$f_c=(2n+1)\frac{f_h}{2}$$
From $f_c \leq 3.58~\textrm{MHz}$ and $f_h=15734.26573~\textrm{Hz}$, we have:

$$ 2n+1 = \frac{2f_c}{f_h} \leq \frac{2\cdot 3580000~\textrm{Hz}}{15734.26573~\textrm{Hz}} \approx 455.0578 $$

We know that the chrominance signal frequency $f_c$ should be as high as possible. The maximum possible value for $(2n+1)$ (which should be an odd integer) is 455. It’s a great number because it factors into the small primes 5, 7, and 13:

$$(2n+1)=5 \cdot 7 \cdot 13=455.$$

Hooray, now we can calculate $f_c$, which became the default NTSC color burst frequency:

$$f_c=(2n+1)\frac{f_h}{2}=455\cdot\frac{15734.26573~\textrm{Hz}}{2}\approx 3.579545~\textrm{MHz}.$$
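The whole chain of conditions can be verified numerically. A short Python sketch (not from the original text; it just replays the arithmetic above):

```python
# Derive the NTSC color subcarrier from the conditions in the text.
line_rate = 4.5e6 / 286                # f_h ~= 15734.26573 Hz (Condition 3)
upper_bound = 2 * 3.58e6 / line_rate   # (2n+1) <= 455.06    (Condition 1)

# Largest odd integer not exceeding the bound (Condition 2 requires odd):
n_odd = int(upper_bound)
if n_odd % 2 == 0:
    n_odd -= 1
assert n_odd == 455 == 5 * 7 * 13      # small prime factors (Condition 4)

f_c = n_odd * line_rate / 2            # the color subcarrier frequency
print(round(f_c))                      # 3579545
```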

If you like the history of television, you can find a lot of interesting technical details in Analog TV Broadcast Systems and [@Stephens1999]. The 3.579545 MHz value had a significant impact on modern hardware timers. But how? Well, it’s time to learn about one of the first clock generators: the Intel 8284.

Let’s recall some old-fashioned processor models and clock generators. The Intel 8284 is a clock generator for Intel 8086/8088 microprocessors. By specification, the maximum frequency for the 8088 is 5 MHz. The clock signal should have a 33% duty cycle (1/3 of the time high, 2/3 of the time low), so the source signal should be around 15 MHz (the 8284 divides the crystal frequency by 3 to get 5 MHz).

At that time, it was common practice to use TVs instead of monitors. Thus, the Color Graphics Adapter (CGA) required a 3.579545 MHz signal to create the NTSC color subcarrier.

Also, it was expensive to place several crystal oscillators on the same board, so it was decided to use the same crystal for both CGA and CPU. Thereby, the master oscillator has a frequency of 14.31818 MHz (4 × NTSC). It allows getting 3.579545 MHz for the CGA video controller (dividing the master frequency by 4) and 4.772727 MHz for the CPU (dividing it by 3). Yes, that is less than the 5 MHz limit (about a 4.5% performance drop), but it was a good trade-off that allowed the production of cheap chips. You can find this story in a blog post by Tim Paterson, the original author of MS-DOS (see [@Paterson2009]). It is also worth reading the same story by Calvin Hsia (see [@Hsia2004]). Some additional technical information about the Intel 8284 can be found in [@Karna2017] and [@Govindarajalu2002].

Now we can understand the origin of the ACPI PM and HPET frequencies. The ACPI PM timer reuses the 3.579545 MHz NTSC frequency because hardware support for it already exists. HPET has a minimum frequency requirement of 10 MHz. Since it was expensive to introduce an additional oscillator for HPET, it was decided to reuse the 14.31818 MHz frequency, which is also already implemented at the hardware level. Another hardware timer affected by these magic numbers is the PIT (Programmable Interval Timer, also known as the Intel 8253/8254 chip). The frequency of this timer is 1.193182 MHz: the same 14.31818 MHz master frequency divided by 12, so it’s compatible with CGA (CGA frequency = 3 × PIT frequency) and the CPU (CPU frequency = 4 × PIT frequency).
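The relations between these derived clocks can be summarized in a quick check (plain Python, exact rational arithmetic; frequencies are the ones from the text):

```python
from fractions import Fraction

# All the PC clocks below derive from the 14.31818 MHz master crystal
# (4x the NTSC color subcarrier frequency).
MASTER = Fraction(14_318_180)  # Hz

cga = MASTER / 4    # 3 579 545 Hz: NTSC color burst for CGA
cpu = MASTER / 3    # ~4.772727 MHz: Intel 8088 clock
pit = MASTER / 12   # ~1.193182 MHz: Programmable Interval Timer

assert cga == 3 * pit   # CGA frequency = 3 * PIT frequency
assert cpu == 4 * pit   # CPU frequency = 4 * PIT frequency
print(float(cga), float(cpu), float(pit))
```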
