Most people take it for granted that one day comprises the same number of hours, minutes and seconds as the next. But this isn’t always the case – June 30 2015 will be a second longer than usual, thanks to a leap second inserted to reconcile two definitions of time: one astronomical, the other provided by atomic clocks.
Before the 1950s, time was defined by the position of the sun in the sky, as measured by instruments that monitor the Earth’s rotation. But this rotation is not constant. It is slowing due to the gravitational pull of the moon, with days lengthening by 1.7 milliseconds per century.
The varying length of the day has been known for centuries but only became a practical concern (outside astronomy) with the invention of atomic clocks in the 1950s. These provide a far more stable and easy-to-use definition of time, based on a particular microwave frequency absorbed by caesium atoms. Atomic clock signals were soon used to control standard-frequency radio transmitters, which telecommunication engineers could use to calibrate and synchronise equipment.
Matching the Astronomic to the Atomic
When these transmitters were upgraded to also emit a one-pulse-per-second signal and a time-and-date code, the International Telecommunication Union in Geneva was asked to come up with a standard definition of time. The result was “Coordinated Universal Time”, abbreviated to UTC (to keep French speakers happy), which defined an atomic clock-generated time signal that would also stay within a second of an astronomical definition of time, known as UT1.
The question was how to keep these timescales synchronised. Initial efforts adjusted the transmission frequency, thereby changing the length of the second, or inserted millisecond delays at pre-arranged times. Both approaches caused problems, disrupting time-keeping electronics upon which other standards relied – for example the 50Hz frame rate of European television broadcasts.
So in the late 1960s the definition of UTC was changed to keep the length of the second constant. Instead, the atomic and astronomical definitions of time encompassed within UTC had to be synchronised by inserting or skipping a whole second – and so the leap second was introduced, for the first time in June 1972. There have been 24 more since, announced by the Earth Orientation Centre in Paris.
Computers Don’t Like Change
While this worked well, by the late 1990s there were concerns. A large effort was underway to tackle the millennium bug in computer systems, which led engineers to start worrying about other time-related disruptions. High-precision time broadcasts from the GPS navigation system had enabled new safety-critical applications, such as aircraft navigation and control, where time variables immediately affect the trajectory of vehicles. And it had become common practice to synchronise computer clocks over the Internet using the Network Time Protocol (NTP), which posed the question of how computers should implement leap seconds.
The leap second’s inventors envisaged that a digital clock displaying UTC, which would normally step from 23:59:59 to 00:00:00, would instead insert an additional 61st second, displayed as 23:59:60. This turned out to be impractical, however, as computer software rarely breaks time into separate variables for hours, minutes and seconds. Instead, it’s more convenient to represent time as a single number, a running count of seconds. Looked at this way, adding one to any time value representing 23:59:59 will always end up with 00:00:00. There are no numbers left on the scale that could represent the time 23:59:60.
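A minimal Python sketch makes the problem concrete, assuming the POSIX convention (used by Unix-like systems) of counting seconds while pretending every day has exactly 86,400 of them:

```python
from datetime import datetime, timezone

# Under POSIX rules, 23:59:59 plus one second is always 00:00:00 of the next day.
before = datetime(2015, 6, 30, 23, 59, 59, tzinfo=timezone.utc)
after = datetime(2015, 7, 1, 0, 0, 0, tzinfo=timezone.utc)
print(after.timestamp() - before.timestamp())  # 1.0 -- no slot in between

# The leap second itself simply cannot be expressed:
try:
    datetime(2015, 6, 30, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as err:
    print(err)  # "second must be in 0..59" -- 23:59:60 is unrepresentable
```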
Unfortunately, the way NTP implemented leap seconds in the Unix and Linux operating systems (which run most internet servers) made things worse: the clock leapt back in time to the beginning of the final second and repeated it. Any software reading the clock twice within that second could encounter the deeply confusing situation of the second time-stamp predating the first. A combination of this behaviour and a particular bug in Linux caused computers to behave erratically and led to failures in some datacentres the last time a leap second was introduced, in 2012, notably in one large airline booking system. Alternative implementations now instead slow the computer’s clock slightly in the run-up to a leap second – a technique known as “leap smearing” – absorbing the extra second gradually.
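The contrast between the two strategies can be sketched in a few lines of Python. This is a toy model only – the leap timestamp is real, but the smear window is invented, and actual kernel and vendor implementations differ in detail. Here `true_t` is real elapsed time expressed on the pre-leap POSIX scale:

```python
LEAP = 1435708800.0    # POSIX timestamp of 2015-07-01 00:00:00 UTC
SMEAR_WINDOW = 1000.0  # toy smear window in seconds; real smears span hours

def stepped(true_t):
    """Step back: at the leap, the clock jumps back one second, so the
    final second repeats and successive readings can run backwards."""
    return true_t - 1 if true_t >= LEAP else true_t

def smeared(true_t):
    """Slew: absorb the extra second gradually over a window before the
    leap, so the clock stays monotonic but ticks slightly slowly."""
    if true_t < LEAP - SMEAR_WINDOW:
        return true_t
    if true_t >= LEAP:
        return true_t - 1
    return true_t - (true_t - (LEAP - SMEAR_WINDOW)) / SMEAR_WINDOW

# Two readings 0.1 s of real time apart, straddling the leap:
print(stepped(LEAP - 0.05), stepped(LEAP + 0.05))  # second value is smaller!
print(smeared(LEAP - 0.05), smeared(LEAP + 0.05))  # still increasing
```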
Standards, So Many to Choose From
A leap second-free form of atomic time also exists, known as International Atomic Time or TAI (again, via the French). UTC currently lags exactly 35 seconds behind TAI, and this will increase to 36 seconds by July 1. Systems where leap seconds can cause serious disruption, such as GPS or spacecraft, have used variants of TAI for a long time. But use of TAI is not widespread, as legal definitions of time are based on UTC.
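For systems that need both scales, converting between them is a table lookup. Here is a minimal sketch using only the two offsets quoted above; a real implementation would carry the full leap-second table published by the IERS:

```python
from datetime import datetime, timedelta, timezone

# Partial table of TAI - UTC offsets (only the two values quoted above).
TAI_MINUS_UTC = [
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
]

def utc_to_tai(t_utc):
    """Convert a UTC datetime to TAI (valid between, not during, leaps)."""
    offset = max(sec for start, sec in TAI_MINUS_UTC if t_utc >= start)
    return t_utc + timedelta(seconds=offset)

print(utc_to_tai(datetime(2015, 6, 30, 12, 0, tzinfo=timezone.utc)))  # +35 s
print(utc_to_tai(datetime(2015, 7, 1, 12, 0, tzinfo=timezone.utc)))   # +36 s
```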
For over 15 years a debate has raged over whether to abolish leap seconds altogether, such that from some date onwards the difference between UTC and TAI becomes fixed. This would solve concerns based on how to implement leap seconds in computers, but would also break many existing specialist systems, including satellite-tracking ground stations, astronomical instruments, and any systems built with the assumption that UTC and UT1 never differ by more than a second.
There is also a more philosophical question arising from decoupling our definition of time from the position of the sun in the sky. Astronomical instruments like sundials and sextants would become useless without regular recalibration. And the meridian on which local mean solar time matches UTC, which currently passes through Greenwich in London, would begin to drift eastwards at an ever-accelerating rate: reaching Paris within a few hundred years and eventually circling the globe, many times over. Perhaps this is part of what motivated the UK government to oppose the change.
This article was originally published on The Conversation.