the math of atrf-xtal (when used for atusb)
werner at almesberger.net
Thu Jun 2 19:43:43 EDT 2011
An important part of the production testing process for ben-wpan is the
measurement of the accuracy of the crystal oscillator. In the case of
atben, this is a little complicated (*), but it's easier in atusb:
- the transceiver clock drives the microcontroller
- the microcontroller has hardware counters
- the hw counters can use the system clock as their clock source
So it's easy for the microcontroller to count clock cycles, and to
return the counter value via USB. Now we can get one sample of PC
time and the corresponding counter value, wait a little, then get
another sample, and we should be able to divide the difference in
clock count by the difference in real time, and we have the
frequency. Right ?
Well, no. It's a little harder. First of all, the PC's system time
may not be very precise. The main source of imprecision is drift,
e.g., the PC's oscillator is a little faster or slower than it
should be, and this speed difference doesn't change very much over
time. Fortunately, we can compensate for this by synchronizing the
clock with NTP. NTP measures the clock drift in relation to distant
atomic clocks, and then applies a correction that more or less
eliminates it.
The second problem is that we don't know when exactly the counter in
the microcontroller is being read. For a simple retrieval via USB,
all we can know is:
1) the PC time when the USB transfer was requested
2) the PC time when the USB transfer completed
3) the counter value, retrieved at some point in time between 1) and 2)
1) and 2) are just the times taken right before and after the call
that retrieves the counter:
gettimeofday(&start, NULL);             // 1)
usb_control_msg(..., ATUSB_TIMER, ...); // retrieves 3)
gettimeofday(&end, NULL);               // 2)
Now, the PC time of 3) must be somewhere between the two. It's
difficult to tell where exactly it is, but knowing the size of the
interval, we can simply assume that, if we pick any time within the
interval as our reference, the real time the counter was read will
not be off by more than +/- the interval width.
So far, so good. Now, we could use this to determine the real clock
frequency with arbitrary precision. If we call the start time of the
i-th measurement ai, the end time bi, and the counter value ci, then
we could calculate the frequency from two measurements as
f = (c1-c0)/(a1-a0)
and the maximum measurement error would be
e = (|b0-a0|+|b1-a1|)/(a1-b0)
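As a sketch, these two formulas in C (hypothetical struct and function
names; a_i, b_i, c_i as defined above, times in seconds):

```c
#include <math.h>

/* One measurement: PC time right before (a) and right after (b) the
 * USB transfer, and the counter value (c) read somewhere in between. */
struct sample {
	double a;	/* gettimeofday() before the transfer, seconds */
	double b;	/* gettimeofday() after the transfer, seconds */
	double c;	/* counter value retrieved by the transfer */
};

/* f = (c1-c0)/(a1-a0), the estimated frequency in Hz */
static double freq(const struct sample *s0, const struct sample *s1)
{
	return (s1->c - s0->c) / (s1->a - s0->a);
}

/* e = (|b0-a0|+|b1-a1|)/(a1-b0), the maximum relative error */
static double max_err(const struct sample *s0, const struct sample *s1)
{
	return (fabs(s0->b - s0->a) + fabs(s1->b - s1->a)) /
	    (s1->a - s0->b);
}
```

E.g., with two samples 100 seconds apart and 1 ms intervals, max_err
comes out around 2 ms / 100 s = 20 ppm.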
Unfortunately, this doesn't work too well. The typical interval size
is about 1 ms, so the numerator above is about 2 ms. For a measurement
error of at most 10 ppm, we'd therefore have to wait about
2 ms / 0.00001 = 200 seconds between the two measurements. We can
improve this a little by making a lot of measurements, and looking
for some with a shorter than average interval, but this still only
means an improvement by maybe a factor of 2-3.
100 seconds is very long for what should be a simple measurement.
Just imagine doing this in a real mass production line. I used this
approach in the "cntr" project, where I didn't care so much about
run time. There, the algorithm is in the function "measure".
For ben-wpan production testing, we need something better. I first
tried the same approach as in "cntr", but it was just too slow. With
the code frantically searching for "narrow" samples, and thereby
discarding thousands of measurements, it occurred to me that there's
a lot of information lost.
So I tried to just gather a decent number of samples and do a linear
regression on them. The GNU Scientific Library has a few nice
functions for this, and they even come with examples.
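The GSL call does the actual fitting, but the idea can be written out
by hand. A minimal sketch (hypothetical names, plain C instead of the
GSL calls): fit a line through the counter values as a function of the
reference times, and read the frequency off as the slope.

```c
#include <math.h>
#include <stddef.h>

/* Ordinary least-squares fit of counter values c[i] against reference
 * times t[i]; the slope of the fitted line is the frequency in Hz.
 * (GSL's gsl_fit_linear computes the same slope, plus covariances.) */
static double fit_freq(const double *t, const double *c, size_t n)
{
	double st = 0, sc = 0, stt = 0, stc = 0;
	size_t i;

	for (i = 0; i != n; i++) {
		st += t[i];
		sc += c[i];
		stt += t[i] * t[i];
		stc += t[i] * c[i];
	}
	/* slope = covariance(t, c) / variance(t) */
	return (n * stc - st * sc) / (n * stt - st * st);
}
```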
The result was impressive. With a number of samples that would have
previously given me an accuracy of maybe 100 ppm on a good day, I
got results repeatable within about 1 ppm.
To improve this even further, I added weights to the samples. A
weight is a value indicating how much the sample should be taken
into consideration. I used the reciprocal square of the interval
size as the weight, thus giving less significance to results that
came out of long intervals. This brought the repeatability to within
+/- 0.1 ppm. (The production test process uses a limit of only about
+/- 5 ppm, because thermal variations alone will add a few ppm.)
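A weighted version of the same sketch (again with hypothetical names;
GSL provides this as gsl_fit_wlinear), where a sample whose
request-to-completion interval has the given width gets weight
1/width^2:

```c
#include <math.h>
#include <stddef.h>

/* Weighted least-squares fit: each sample i is weighted by
 * w = 1 / width[i]^2, so samples that came out of a long USB
 * round-trip count for less. The slope is again the frequency. */
static double wfit_freq(const double *t, const double *c,
    const double *width, size_t n)
{
	double sw = 0, st = 0, sc = 0, stt = 0, stc = 0;
	size_t i;

	for (i = 0; i != n; i++) {
		double w = 1 / (width[i] * width[i]);

		sw += w;
		st += w * t[i];
		sc += w * c[i];
		stt += w * t[i] * t[i];
		stc += w * t[i] * c[i];
	}
	return (sw * stc - st * sc) / (sw * stt - st * st);
}
```

With all weights equal, this reduces to the unweighted fit.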
My final code is in .
Sometimes, a little math goes a long way.
(*) atben has no direct way to measure the clock, but we can measure
with fairly good accuracy how long it takes to send a frame into
the ether (without caring about reception). This time is a
function of the oscillator frequency. We can then compare that
duration to a known reference ... and voila, we know how accurate
the oscillator is.
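As a sketch of that comparison (hypothetical function name): since the
transmission time scales inversely with the oscillator frequency, the
relative frequency error follows directly from the measured and
reference durations:

```c
/* t_meas = t_ref * f_nominal / f_actual, so the relative frequency
 * error is f_actual / f_nominal - 1 = t_ref / t_meas - 1 (in ppm). */
static double freq_error_ppm(double t_meas, double t_ref)
{
	return (t_ref / t_meas - 1) * 1e6;
}
```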
The code that does this is actually an ancestor of the ubb-vga
driver and is quite aggressive about disabling concurrent system
activity.