I make no secret of the fact that I'm type-oriented. My advisor's cure-all prescription was "look at the types." In denotational semantics, I was impressed by the exercises where, in a parametrically polymorphic language, you could often prove what a function did by looking only at its type. But I'm not a typist, in the sense of claiming universal superiority of statically typed systems: I'll happily code away in Lisp, JavaScript, Perl, Ruby, whatever.
However, on the utility of using types to document the meaning of functions, I have to report their underuse, and in particular my dissatisfaction with how software libraries document the Discrete Fourier Transform. I'm adopting my strategy of "complain that something isn't true" in hopes that someone who knows the truth will correct me.
I know about continuous Fourier analysis from basic electrical engineering courses: you take a time-domain signal (I like to think of audio waveforms) and show its frequency-domain representation in a graph, more or less like in this screenshot from the Spectrum Analyzer Wikipedia article:
However, I never took a course, such as signal processing, where the Discrete Fourier Transform was studied. I did know what to expect from it: you get a Fourier analysis from discrete signal samples, as in a digitized audio clip, rather than from a continuous signal that a piece of analog hardware could process.
I tried for two days to find out what type the Discrete Fourier Transform (DFT) actually returns to you. OK, I'm exaggerating on one point: in software terms, it was fairly well defined as taking a list/vector/array of N samples and returning a list/vector/array of N complex numbers. What I was missing was the physical interpretation of those complex numbers. It was made reasonably clear that you could plot the magnitude of these complex numbers to arrive at something that looked like the above graph (loosely, the "signal spectrum"), except that I had no idea what the units of either axis were. I wanted the independent axis in Hertz (cycles/second) and the dependent axis in something comprehensible: watts or joules or decibels or something.
It appears after all my investigation that the complex numbers returned by the DFT are indexed by frequency: the zero-based index i into the returned vector corresponds to i cycles per sampled interval, which works out to i × (sample rate)/N Hertz, so if your N samples span exactly one second, index i is simply i cycles/second. But I pieced this together in spite of the documentation and writings on the subject, mostly through experimentation, and certainly not from anything I found in any software library documentation.
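To make that mapping concrete, here is a short sketch of my own, using numpy's FFT (my choice of library for illustration, not one whose documentation I was surveying). It generates a 1000 Hz tone and shows which bin the peak lands in:

    import numpy as np

    # My own illustration (not from any library's documentation): a 1000 Hz
    # tone sampled at 8000 samples/second for exactly one second, so that
    # N equals the sample rate and each DFT bin lands on a whole Hertz.
    sample_rate = 8000
    N = 8000
    t = np.arange(N) / sample_rate
    signal = np.sin(2 * np.pi * 1000 * t)      # a 1000 Hz sine wave

    spectrum = np.fft.fft(signal)              # N real samples in, N complex numbers out

    # Bin k corresponds to k * sample_rate / N Hertz; with N == sample_rate
    # that is simply k Hz. numpy's fftfreq spells out the same mapping.
    freqs = np.fft.fftfreq(N, d=1.0 / sample_rate)

    k = np.argmax(np.abs(spectrum[:N // 2]))   # a plot of |spectrum| peaks here
    print(k, freqs[k])                         # prints: 1000 1000.0

Halving N (half a second of samples) moves the same tone to bin 500, which is the easiest way to see that the mapping really is k × (sample rate)/N rather than plain k Hertz.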
What in the world? This is one of the most fundamental algorithms in one of the most widely used technologies (communications) in modern society, and I can't find a clear explanation of the interpretation of the results. The closest I came is this summary:
http://local.wasp.uwa.edu.au/~pbourke/other/dft
This author includes graphics of the DC component, first harmonic, and so on, and points out that the DFT is limited by the Nyquist frequency: the highest frequency that can be represented in N samples is N/2 cycles per sampled interval, i.e. half the sampling rate. Hence only the first half of the values returned by the DFT are "unique" in a sense; the other half are the mirror image (the negative frequencies).
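Here, too, a few lines of numpy (again my own sketch, not anything from that page) confirm the mirror-image property for a real-valued signal: bin N-k is the complex conjugate of bin k, so only bins 0 through N/2 carry independent information.

    import numpy as np

    # Check the conjugate symmetry of the DFT of a real signal:
    # X[N - k] == conj(X[k]), so only the first N/2 + 1 bins are unique.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)                   # any real-valued signal
    X = np.fft.fft(x)

    N = len(x)
    k = np.arange(1, N // 2)
    print(np.allclose(X[N - k], np.conj(X[k])))   # prints: True

    # numpy even has rfft, which returns only those N // 2 + 1 unique bins.
    print(len(np.fft.rfft(x)))                    # prints: 33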
It could be argued that anyone using the DFT is already familiar with the subject and with the interpretation of its result. That may be the final word. Nevertheless, I'm baffled, and my challenge to JFKBits readers is to find a documented DFT or Fast Fourier Transform library function that clearly spells out the physical interpretation of the return value in terms of Hertz, and, for bonus points, the interpretation of the magnitude.
1 comment:
Heh. I wrote a template class that transforms an integral numeric type into a distinct class for very similar reasons. It was so annoying to have a method, even one I wrote, that took (int, int, int). Now it takes (ObjectID, AxisID, int), which is a bit clearer. And if you get the order wrong, the compiler lets you know.