Thu 28 Oct 2010 04:57:30 PM UTC, comment #7:
John,
Out of curiosity I did some research. It turns out I was both right and wrong at the same time.
Friedrich Leisch is a statistician. In one of the perverse accidents of mathematics, statisticians define autocorrelation and autocovariance quite differently from the signal processing community. While I have a few statistically oriented texts, they are not where I learned the mathematics. If I was ever aware of the discrepancy, it was forgotten long ago.
The presence of autocor.m and autocov.m in the signal directory led me to expect the signal-processing definitions presented in:
Random Data (Bendat & Piersol)
Digital Signal Processing (Oppenheim & Schafer)
Theory and Application of Digital Signal Processing (Rabiner & Gold)
The Fourier Transform & Its Applications (Bracewell)
The Fourier Integral & Its Applications (Papoulis)
and many more.
In all of these, the autocorrelation is the inverse Fourier transform of the power spectrum, and the autocovariance is the inverse Fourier transform of the power spectrum with the DC component set to zero. If the peak value is normalized to 1, the result is explicitly described as "normalized".
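To make the difference concrete, here is a rough Octave sketch of that signal-processing convention. This is my own illustration, not code from any of the texts above, and by construction it yields the circular autocorrelation (zero-pad x first if you want the linear one):

    x = randn (1, 256);        # example signal
    X = fft (x);
    P = abs (X).^2;            # power spectrum
    R = real (ifft (P));       # autocorrelation (circular)
    P(1) = 0;                  # zero the DC component
    C = real (ifft (P));       # autocovariance
    Rn = R / R(1);             # "normalized" autocorrelation, peak = 1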
However, Box & Jenkins in "Time Series Analysis" define the autocorrelation and autocovariance as Friedrich Leisch implemented them: the sample autocovariance is computed from the mean-removed series (divided by N), and the autocorrelation is the autocovariance normalized by its lag-0 value.
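For comparison, a sketch of the Box & Jenkins sample quantities (again my own illustration, not Leisch's actual code):

    x = randn (1, 256);                             # example signal
    N = numel (x);
    xc = x - mean (x);                              # remove the sample mean first
    maxlag = 20;
    c = zeros (1, maxlag + 1);
    for k = 0:maxlag
      c(k+1) = sum (xc(1:N-k) .* xc(k+1:N)) / N;    # sample autocovariance at lag k
    endfor
    r = c / c(1);                                   # autocorrelation = c_k / c_0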
Rather than using my "corrected" versions, I'd like to suggest that the documentation for autocor(), autocov(), and periodogram() explicitly note that they implement the Box & Jenkins definitions rather than the Oppenheim & Schafer/Bendat & Piersol definitions.
All of Leisch's contributions probably belong in the statistics directory; however, most are unlikely to be used by anyone other than a statistician, so there is little risk of confusion.
Have Fun!
Reg