Friday, February 29, 2008

Types and the Meaning of DFT

Sometimes it's good to get outside your comfort zone. These past few weeks have been like that for me, leaving behind the comfortable world of lambda calculus, type-directed translation, and web frameworks, and instead trying my hand at some digital signal processing (DSP). In this post we're going to wrap up the discussion of types, meaning, contracts, and documentation using the topic of the Discrete Fourier Transform (DFT).

DSP: A Different World

This all started with some customer visits I made earlier this month, customers at large corporations doing lots of DSP, with more of a fondness for programming field-programmable gate arrays than writing news aggregators in 600 lines or less. It's refreshing to peek under the lid of other technical fields and see their complexity, to read things like "this project required tight coordination between the FPGA designers and DSP programmers who typically do not share design tools." It's the realization that it's a big technical world out there - that there are whole categories of developers with different skillsets and toolchains, where in a sense C is not close enough to the metal. Remember the advice that you always need to ask "for what" when hearing a recommendation for a new language or framework? Yeah. It matters what you're doing.

One outgrowth of those visits was an initiative on my part to write some basic signal processing programs. For example, I wanted to simulate the Amplitude Modulation process: modulate a .wav file sound clip onto a carrier frequency from the standard US commercial AM radio band (600 kHz to 1600 kHz), look at the frequency domain graph to see the expected carrier frequency spike and sidebands (corresponding to the original sound clip), and then demodulate it and play back the resulting sound file. Apparently this style of simulating the over-the-air bitstream is commonplace for those in the business of making things that go over the air (or through optic fibers).
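
Here is roughly what that simulation looks like in code: a minimal numpy sketch of my own, not anyone's production DSP. A 1 kHz tone stands in for the sound clip, and the carrier is scaled down to 100 kHz so an ordinary sample rate can represent it.

import numpy as np

fs = 1_000_000                               # sample rate, samples/second
t = np.arange(fs) / fs                       # one second of sample times
message = np.sin(2 * np.pi * 1_000 * t)      # stand-in for the .wav clip
carrier = np.cos(2 * np.pi * 100_000 * t)    # scaled-down carrier
am = (1 + 0.5 * message) * carrier           # amplitude modulation

# Frequency-domain check: the biggest spike should sit at the carrier,
# flanked by sidebands 1 kHz away on each side
spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(am.size, d=1/fs)     # bin centers in Hz
print(freqs[spectrum.argmax()])              # 100000.0, the carrier spike

# Crude demodulation: rectify, then low-pass with a moving average
envelope = np.convolve(np.abs(am), np.ones(200) / 200, mode="same")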

I Heard This is Supposed to Hertz

The essence of the DFT is not hard to grasp, once you understand the basic premise of Fourier analysis: that any given time-based function (a.k.a. signal) can be equivalently represented as a sum of periodic functions (sines) of various frequencies and magnitudes. In very simple terms, if you hit four tuning forks of different musical pitches, the resulting sound could be represented either as a complex-looking wavy line or as the sum of four sine waves of different frequencies (and phases). The DFT simply works with discrete samples of the input signal or function and is more suited to processing with a digital computer.

What I discovered and reported in Look at the Types... If You're Lucky is that the writings about the Discrete Fourier Transform (DFT) are surprisingly slippery on the topic of physical interpretation. What I wanted most from the DFT was a frequency spectrum plot with Hertz (SI unit Hz, defined as cycles per second) on the horizontal axis and power on the vertical axis. I could tell what programming language type the DFT functions took and returned (arrays of complex numbers), and the array of complex numbers is easily converted to an array of magnitudes which you can then plot. But the units of the plot? That was another matter.

Both in general discussions of the DFT and in library function documentation, I got lots of (a) definitions of the DFT array and (b) useful tips like "you can plot the magnitude to see the spectrum" and "the first element in the returned vector is the DC component".

But over and over I could not get a straight answer to the questions "what are the actual frequencies?" and "what units are the magnitudes in?"; essentially, what are the units when plotting the magnitudes of the DFT complex numbers.

Typical DFT Documentation

Here's a typical example of DFT documentation, the Apache Math library's FastFourierTransformer class:


public Complex[] transform(double[] f)
Transform the given real data set.

The formula is $ y_n = \sum_{k=0}^{N-1} e^{-2 \pi i nk/N} x_k $

Parameters:
f - the real data array to be transformed

Returns:
the complex transformed array

Notice the DFT definition is given, though the free variables y_n, N, and x_k are typically not defined. The reader is assumed to be familiar with the definition.

How to Get the Hertz, the Power

Eventually I stumbled on Julius O. Smith III's excellent summary of the DFT. I say "excellent" because it spells out in satisfying detail what the notation means and what the units are.

Briefly, if the DFT function returns a zero-based array X, then X[k] represents the frequency k/Ttotal, where Ttotal is the time duration of your input sample. That's it!
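
To make that concrete, here's a quick numpy check of my own (the 440 Hz test tone and the parameters are just things I picked):

import numpy as np

fs = 8000                                 # sampling rate, samples/second
Ttotal = 0.5                              # input duration, seconds
t = np.arange(int(fs * Ttotal)) / fs
x = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone

X = np.fft.fft(x)
k = np.abs(X[:len(X) // 2]).argmax()      # peak bin, positive frequencies only
print(k / Ttotal)                         # prints 440.0: bin k is k/Ttotal Hz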

So, Smith gives us the Hertz. For the vertical axis, I had to consult a friend with a PhD in communication theory to get the explanation for the magnitude. Of course the meaning of the magnitude depends on the meaning of your input signal; after all, the time domain samples could have a range of 1 volt or 1 millivolt, and the input array doesn't carry units either. Briefly, whatever units of power the squared input has, the squared DFT magnitude also has.
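
As a sanity check on the power claim, Parseval's relation ties the two sums together. Note the 1/N factor: numpy's fft, like many libraries, is unnormalized, and the exact factor varies by convention. The snippet is my own.

import numpy as np

x = np.random.default_rng(0).normal(size=1024)    # arbitrary test signal
X = np.fft.fft(x)
print(np.sum(x ** 2))                             # power-like sum, time domain
print(np.sum(np.abs(X) ** 2) / len(x))            # same number, frequency domain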

Types, Contracts and the Real World

Every programmer learns how to model the real world with types as well as code. We realize there's not a one-to-one match between data and reality, and finding the right tradeoff is important to success. Some kinds of bugs can trace their origin to a data model that is too simple: there are real-world states that can't be represented. Other bugs arise when the data representation has states that correspond to no real-world state. Still other bugs arise when the data model is so complex that the programmers can't keep up with it. I claim that the best software solutions are written by experts in the problem domain, and have a data model that models the problem domain as exactly as possible. The problem domain experts have no problem keeping up with the data model because it matches the understanding of the real-world processes that they carry with them when they're away from the source code.

The Discrete Fourier Transform is a natural fit for a library function, because it has applications in so many fields and areas. The best software solutions using the DFT are undoubtedly written by people who have studied the topic away from a source code editor. Sitting in front of a Javadoc is no place to learn DSP. It's also no place to learn about Swing, or X.509 certificates. But I do think a modicum of domain explanation is appropriate in documentation for these kinds of libraries, for newcomers who know what they need out of the library without needing to know implementation details. It's a tricky line to walk, but after being in this situation, I'm more sympathetic to the library consumer's position.

In summary, I'd like to propose that library documentation generally have three kinds of information:

1. The function prototype, describing the types of parameters and the return value.

2. Low-level invariants and contract information about the arguments and return value.

3. A little bit about how the function relates to the real world.

Improved Documentation

Finally, here's my documentation attempt for the DFT function:


Complex[] dft(double[] signal);
returns the Discrete Fourier Transform of the input array signal.

If the input signal is an array of N signal samples taken at sampling period T, implying that the input signal has time duration NT seconds, then the returned Complex array also has N frequency components. The kth element of the array corresponds to the frequency k/(NT) cycles/second. If the squared values of the elements of signal are in power units P, then the squared values of the elements of the returned array also have power units P.

The elements in the first half of the returned array are mirrored by the elements in the second half, as they correspond to the positive and negative frequencies respectively.

Parameters:
signal - Input signal, a length N array of signal samples taken at sampling period T.

Returns:
Array of length N of complex numbers. The kth element of the array corresponds to the frequency k/(NT) cycles/second. The magnitude squared of a complex-valued element in the array corresponds to the power at that frequency, and the angle of the complex-valued element corresponds to its phase.


IANADSPG (I Am Not A Digital Signal Processing Guy), so don't bet your company on this description, and do please send me corrections (citing sources). This is simply the kind of thing I'd have liked to know when first coming to the DFT, and I hope by analogy you can find insights into how users of your library will view it and how you may want to document it.

Wednesday, February 27, 2008

Look at the Types...If You're Lucky

I make no secret of the fact that I'm type-oriented. My advisor's cure-all prescription was "look at the types." In denotational semantics, I was impressed with the exercises where, in a language with parametric polymorphism, you could often prove what a function did by looking only at its type. But I'm not a typist, in the sense of claiming universal superiority of statically typed systems: I'll happily code on in Lisp, Javascript, Perl, Ruby, whatever.

However, as to the utility of using types to document the meaning of functions, I have to report their underuse and my dissatisfaction with the state of affairs regarding the Discrete Fourier Transform in software libraries. I'm adopting my strategy of "complain that something isn't true" in hopes that someone who knows the truth will correct me.

I know from basic electrical engineering courses about continuous Fourier analysis, where you take a time-oriented signal (I like to think of audio waveforms) and show the frequency-domain representation in a graph, more or less like in this screenshot from the Spectrum Analyzer Wikipedia article:

[screenshot of a spectrum analyzer display, from the Spectrum Analyzer Wikipedia article]

However, I had never taken courses such as signal processing where the Discrete Fourier Transform was studied. I did know what to expect from it: you get a Fourier analysis from discrete signal samples, as in a digitized audio clip, rather than from a continuous signal which a piece of analog hardware could process.

I tried for two days to find out what type the Discrete Fourier Transform (DFT) actually returns to you. OK, I'm exaggerating on one point: in software terms, it was fairly well defined as taking a list/vector/array of N samples and returning a list/vector/array of N complex numbers. What I was missing was the physical interpretation of these complex numbers. It was fairly well represented to me that you could plot the magnitude of these complex numbers to arrive at something that looked like the above graph (loosely, the "signal spectrum"), except that I had no idea what the units of either axis were. I wanted the independent axis in Hertz (cycles/second) and the dependent axis in something comprehensible, watts or joules or decibels or something.

It appears after all my investigation that the complex numbers returned by the DFT are already indexed by frequency, in the sense that the zero-based index i into the returned vector represents the frequency of i cycles per the duration of the input sample -- plain Hertz when that duration is exactly one second. But this came in spite of the documentation and writings on the subject, mostly through experimentation, and certainly not from anything I found in any software library documentation.

What in the world? This is one of the most fundamental algorithms in one of the most widely used technologies (communications) in modern society, and I can't find a clear explanation of the interpretation of the results. The closest I came was this summary:

http://local.wasp.uwa.edu.au/~pbourke/other/dft

This author includes graphics of the DC component, first harmonic, etc., and points out that a limitation of the DFT is the Nyquist frequency; that is, the highest frequency that can be represented in N samples is N/2 cycles per the duration of the sample (half the sampling rate). Hence only the first half of the values returned by the DFT are "unique" in a sense; the other half are the mirror image (negative frequencies).
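
That mirror-image claim is easy to check numerically: for real input, bin N-k is the complex conjugate of bin k. (The test signal below is my own.)

import numpy as np

x = np.random.default_rng(1).normal(size=16)     # any real-valued signal
X = np.fft.fft(x)
print(np.allclose(X[1:][::-1], np.conj(X[1:])))  # True: X[N-k] == conj(X[k])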

It could be argued that anyone using the DFT is already familiar with the subject and interpretation of its result. That may be the final word. Nevertheless, I'm baffled and my challenge to the JFKBits readers is to find a documented DFT or Fast Fourier Transform library function that clearly spells out the physical interpretation of the return value in terms of Hertz, and for bonus points, the interpretation of the magnitude.

Friday, February 22, 2008

Evaluation Animation and Debugging

In Presenting Step-by-Step Evaluations, I outlined some things I'd like to do for studying and presenting lambda calculus (or other) evaluations. In recent reflections on that, I noticed the word "step", and realized that what I wanted was almost identical to single-step debugging support. I'd like an animation of reducing expr to follow these steps, each rendered as a still shown for a few seconds, possibly with a dissolve between them:

  1. Show the expression, highlighting the next application to be reduced, possibly using coloring to distinguish the function and argument expressions
  2. If an occurs check finds the reduction would result in variable capture, display a message announcing there will be an alpha-conversion, followed by expr rewritten with the alpha-conversion
  3. Announce the beta-conversion
  4. Show the result of the beta conversion
Writing this as a function f(expr, stage), where stage is one of the above steps, returning the once-reduced expression and the next stage, should be sufficient to wrap in either a simple loop to produce an animation sequence, or a debugger GUI driver. The core reduction step is sketched below.
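
Here's a minimal sketch of that core step in Python, a toy of my own devising rather than any existing library. The stage bookkeeping is reduced to a driver loop at the bottom, where an animation frame or debugger stop would go; the alpha-conversion that step 2 would announce happens silently inside subst.

import itertools

fresh = (f"v{i}" for i in itertools.count())   # supply of fresh variable names

def Var(n): return ("var", n)
def Lam(p, b): return ("lam", p, b)
def App(f, a): return ("app", f, a)

def free(e):
    """Free variables of an expression."""
    if e[0] == "var": return {e[1]}
    if e[0] == "lam": return free(e[2]) - {e[1]}
    return free(e[1]) | free(e[2])

def subst(e, name, val):
    """Capture-avoiding substitution of val for name in e."""
    if e[0] == "var":
        return val if e[1] == name else e
    if e[0] == "app":
        return App(subst(e[1], name, val), subst(e[2], name, val))
    p, b = e[1], e[2]
    if p == name:                      # binder shadows name: stop here
        return e
    if p in free(val):                 # would capture: alpha-convert first
        q = next(fresh)
        p, b = q, subst(b, p, Var(q))
    return Lam(p, subst(b, name, val))

def step(e):
    """One leftmost-outermost beta-reduction; None if e is in normal form."""
    if e[0] == "app":
        f, a = e[1], e[2]
        if f[0] == "lam":
            return subst(f[2], f[1], a)          # the beta step itself
        r = step(f)
        if r is not None: return App(r, a)
        r = step(a)
        if r is not None: return App(f, r)
        return None
    if e[0] == "lam":
        r = step(e[2])
        return None if r is None else Lam(e[1], r)
    return None

def show(e):
    if e[0] == "var": return e[1]
    if e[0] == "lam": return "(\\" + e[1] + "." + show(e[2]) + ")"
    return "(" + show(e[1]) + " " + show(e[2]) + ")"

# Driver loop: each iteration is one animation frame or debugger stop
expr = App(Lam("x", App(Var("x"), Var("x"))), Lam("y", Var("y")))
while expr is not None:
    print(show(expr))
    expr = step(expr)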

It is especially obvious from this that even at the lambda calculus level, sometimes you want to skip lots of steps at a time and sometimes you need to drill down to the finest level of evaluation possible. Seeing even the alpha conversion step may be useful at some points; at other times you want to set a breakpoint.

That brings up another point -- what is a "single step" in functional language debugging? It's all function applications; there are no lines to step over. Breakpoints have to be set on function expressions: "stop when this function is about to be evaluated."

In conclusion, even this exercise points out how useful lambda calculus can be for studying all kinds of phenomena at their core: database-like transactional storage, undo/redo stacks, or marshalling. It doesn't have to be lambda calculus, it can be some arbitrary toy language, but lambda calculus has the essential features: variable reference, function definition, and function application. If you're studying a phenomenon like persisting code, you'll find that it's not the data structures (figuring out a way to store lambdas) that are a challenge but the variable references between structures (the swizzling problem). Lambda calculus is a simple three-feature model that's easy to fit in your head and lets you concentrate on the phenomenon of interest.

Wednesday, February 20, 2008

What to Expect When You're Computing for Speed

I've been surprised how many times I'll be discussing programming system X and hear a complaint about the performance of X on task T, when on hearing the details of T, I respond with something akin to "A five ounce bird cannot carry a one pound coconut." Some tasks are inherently asking too much of the hardware, and some common sense tests can often be applied to understand what to expect. The other thing is that when you're staring at a screenful of code, all monospaced, all the same font size, it can be hard to spot the performance bottlenecks. Even a simple diagnostic like logging statements timestamped with a fine-grained timer is valuable in identifying runtime hotspots.
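
As an illustration, Python's standard logging module can stamp every line with a fine-grained relative timer; the phases in the comments are hypothetical placeholders for whatever your program does.

import logging, time

logging.basicConfig(level=logging.INFO,
                    format="%(relativeCreated)9.1f ms  %(message)s")
log = logging.getLogger(__name__)

t0 = time.perf_counter()
log.info("starting input load")        # imagine load_input() here
log.info("starting transform")         # imagine transform() here
log.info("done in %.3f s", time.perf_counter() - t0)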

If anything, this is why you study computer architecture: to understand the basic operation and limitations of modern hardware, and to build a reasonable mental model of performance as you create the software architecture sitting atop the hardware. Planning a cross-country trip differs based on whether you're using an RV, a Camry, or an F16, and what kind of load you expect to carry. Know your platform, and know your basic performance requirements or performance-sensitive use cases.

If you do get into trouble, here are some things to try:

1. I/O tends to trump everything. Listen to the disk. Is it working? A lot? Look at your disk's rotation speed. Can you get a faster disk? I once kept an older computer with a CPU half as fast for testing because it had a fast hard drive that made all the difference. Think about other types of I/O access, like web service HTTP requests or net-mounted file access. Would a fatter network pipe help?

2. Try to have enough main memory for the problem at hand. Is the CPU busy all the time when you expect it to be? Or is CPU idle because your problem is swapping?

3. Don't ignore cache effects. A cache miss is expensive because main memory, compared to the cache, is slow and narrow (not many bits transferred at once).

4. Use wall clock timings as your best measure of performance, rather than CPU-only times, to account for I/O performance (see the sketch below). If your machine's background level of other activity is high, you may consider that part of the problem to be addressed.
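
Here's a quick illustration of the wall clock point in Python; the sleep stands in for an I/O-bound phase that a CPU-only timer misses entirely.

import time

wall0, cpu0 = time.perf_counter(), time.process_time()
time.sleep(1.0)                        # pretend this is disk or network I/O
print("wall: %.2f s" % (time.perf_counter() - wall0))   # about 1.00
print("cpu:  %.2f s" % (time.process_time() - cpu0))    # about 0.00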

Sometimes issues do come down to something silly in a language implementation. Don't be afraid to add what are essentially tests of the language and its implementation to your product test suite. Is method dispatch for an object with hundreds of fields disproportionately slow? It matters for your application, so it matters for you. And if you fix it with a mod to your open-source Python interpreter, you now have a regression test in case someone ever runs the test suite with an updated Python that lacks the fix.

Friday, February 08, 2008

Call by Name: "Yo Elvis"

So Groovy has this Elvis operator:


def gender = user.male ? "male" : "female" //traditional ternary operator usage

def displayName = user.name ?: "Anonymous" //more compact Elvis operator
OK, I can dig it. I've got lots of Java code that could use this operator, following a form like x.getSomething()==null ? "defaultSomething" : x.getSomething().

Let's think for a second what this operator means, inventing some syntax for defining an operator:

?:(expr, defaultExpr) => ((expr)==null? defaultExpr : expr)
At one point in time at least, the Groovy implementation followed this implementation pattern, and if the twice-mentioned expr contained any side-effects they would be repeated. Usually something you don't want, always something you want to document, unless you want blunt objects headed toward your cranium. The odd thing about this implementation: it sounds just like call-by-name (CBN) evaluation. Unfortunately CBN is not accessible to the Groovy programmer, and in what is an obviously side-effecting language it's the worst possible form of CBN.
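
A toy demonstration of the repeated side effect, sketched in Python standing in for the Groovy expansion (the names are mine):

calls = 0
def get_name():
    global calls
    calls += 1                      # a visible side effect
    return "Ava"

# the naive expansion: expr==null ? default : expr, with expr repeated
result = "Anonymous" if get_name() is None else get_name()
print(result, calls)                # Ava 2 -- expr was evaluated twice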

Having dug into Lisp macros last week, I'd say this looks like a great candidate for a macro. Alternatively, imagine if Groovy let you define call-by-name functions yourself; then you could write this operator the way you'd expect:

?:(expr, defaultExpr) {
    def exprEvaluated = expr // force evaluation
    return exprEvaluated==null? defaultExpr : exprEvaluated
}
In call-by-name, a function's arguments are not evaluated before calling the function. Instead they are passed in as expression trees at runtime, and only if "needed" are they fully evaluated.
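
In a call-by-value language you can simulate this with explicit thunks: zero-argument functions passed in place of values. Here's a Python sketch of the Elvis operator done that way (the function names are my own):

def elvis(expr_thunk, default_thunk):
    v = expr_thunk()                        # force the tested expression once
    return v if v is not None else default_thunk()

user_name = None
print(elvis(lambda: user_name, lambda: "Anonymous"))   # Anonymous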

The advantage of a call-by-name function over a macro is code size: expanded macro invocations would add up, increasing the code size, and possibly leading to poor instruction cache locality.

The funny thing is that call-by-name, in a sense, is not an unfamiliar concept to any programmer. Any language reference's section on "control structures" is telling us all the built-in call-by-name functions: repeat(body,test), while(test,body), unless(test,body), if(test,truecase,falsecase) are all potentially definable as call-by-name functions. Macros serve pretty much the same purpose, which is why Arc includes so many cool control structures as macros. The tricky part for adding call-by-name definitions to statically-typed languages is that they work on expression trees, not values, and writing the type annotations most likely requires parametric polymorphism. A sketch of while-as-a-function follows.
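
For example, while becomes an ordinary function once its two arguments arrive as thunks. A sketch, mine and not any language's builtin:

def my_while(test, body):
    while test():                 # re-evaluate the test thunk each iteration
        body()

counter = {"i": 0}
def bump():
    counter["i"] += 1
    print(counter["i"])

my_while(lambda: counter["i"] < 3, bump)   # prints 1, 2, 3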

Thursday, February 07, 2008

Summer in the Sun

From the Types list today, it looks like some interesting work can be had at Sun this summer:


The Programming Languages Research Group at Sun Labs has five internship slots available for Summer 2008. Each slot is available in either Burlington MA or Austin TX. These internships will focus on implementation of various aspects of static checking and type inference for the Fortress Programming Language. Resumes may be submitted through any of the following links.

http://www.sun.com/corp_emp/zone/search.cgi?req=558053
http://www.sun.com/corp_emp/zone/search.cgi?req=558054
http://www.sun.com/corp_emp/zone/search.cgi?req=558052
http://www.sun.com/corp_emp/zone/search.cgi?req=558056

Thank you for your consideration.

-------------
Eric Allen
Co-Principal Investigator, Project Fortress
Staff Engineer, Sun Microsystems

Wednesday, February 06, 2008

Language Politics and Technology Decisions

When someone says "Java sux", or "Ruby rules", we need to always, always, always ask "for what?" The machinery of politics is alive and well in programming forums the world around. Everyone who has ever written a program or learned how to use a technology has a vested interest. They will continually be about the business of convincing others to use or learn it too. We need to understand that machinery in order to protect ourselves and have a more productive discussion. Better understanding of someone else's decisions as well as our own will also help us arrive at better decisions overall.

Decision Making

Politics is about decision making, and in technology we constantly make decisions. We make evaluations: Ruby or Java, XML or JSON, Flash or Ajax. We like to think these decisions are made on technical grounds, but I think if we look carefully we'll realize that when we do the promoting, we do so based on a mixture of reasons. (Do you make your decisions based on in-depth research, or whether you hate curly braces?)

Promotion of Interests

Politics has an important aspect when it comes to making decisions: it is always done in an environment of the promotion of interests. Not to be tedious, but let me repeat those two parts:

1. promotion, where the interesting thing is how things are promoted, and
2. interests, where the interesting thing is identifying the interests.

Forms of Promotion

Promotion has different forms: direct or indirect, emotional or rational, personal or media-broadcast. We concentrate on rational discussion of technologies, and for good reason -- if something doesn't work, there's a price to pay in cleaning up the mess and, if we're not dead or fired, starting over. The rational basis for promoting a technology has to be there. But the emotional side is at work too, whether you like it or not. My wife maintains that if someone dresses sloppily they won't be liked -- even if others don't understand why. Having an attractive website may or may not add to your presentation, but an ugly website almost certainly will detract from it. The take-home for technical people is that presentation and form play a part along with rational arguments. Iron your arguments, iron your shirt, iron your website. ("Iron your Python"?)

Identifying Interests

All the time we encounter people promoting this language or that tool. And often enough we ourselves are doing the promoting, even if not in Superbowl ads. As already noted by many others, even if you're not a salesman who stands to make a commission from convincing someone to adopt technique T, you stand to gain more benefit from your vested interest in T if another person uses it. (Note that "technology" derives from the same root as "technique"; design patterns are materially as much a technology as the Python 2.5.1 distribution.) You can act as a consultant for newcomers to T, and you attract more people to maintain and improve work already using T. Cisco Systems actively promotes network administration training programs for high-school students, as part of making sure network usage grows as much as they'd like it to. Microsoft was visiting programming language research groups, which are notoriously ill-funded, offering funding to people designing new languages to target the .NET platform. Some in industry like to talk about this as "growing the ecosystem." The point is that when you hear someone promoting a technique, they're promoting it from their interests, not necessarily yours. It's not always as obvious as when you get a call from a salesman.

Evaluating Proposals

The primary question to ask when you evaluate a technical proposal then is not unfamiliar: what's in it for me. Or rather you should be asking "what would this technique/language/framework do for me in the environment I have." The Hennessy and Patterson computer architecture books hammer home this maxim about evaluating a computer architecture:
The true benchmark is the wall clock time of the program or programs you will actually use most often
Paul Graham designed a language that Paul Graham liked. Maybe there was a different way he could have handled its announcement. But I venture that part of the over-hype problem with the Arc announcement last week is that when Paul Graham wrote about how great a language he was designing, people were hearing how great it would be for them, mentally filling in unsaid things with features they'd like to see, and not pausing to align Paul's interests with their own. The thing is... though we may have an opinion, we may not always know what would help us.

Know Yourself

When someone tells you "you need better tools: try Lisp", ask "what about Lisp do you think would help me?" If they start listing reasons without first trying to understand who you are, may I say there's a problem.

This has me thinking about a conversation from the film Joe vs. the Volcano between Marshall the limo driver and Joe, who is preparing for a trip at the end of which he plans to jump into a live volcano. I've adapted it to acknowledge the need to know what we're trying to do, and at the same time how hard that can be. Instead of the limo driver, JFKBits plays the part of the technical consultant's technical consultant, and instead of the volcano jumper preparing for his big trip with carte blanche, Joe plays the part of a programmer for hire:

JFKBits: So what would you like to do?
JOE: Excuse me?
JFKBits: What would you like to do, sir?
Joe thinks for a moment.
JOE: I thought I might like to learn some programming language.
JFKBits: Okay. What would you like to program?
JOE: I don't know.
JFKBits: Alright.
JOE: What would you program?
JFKBits: For what? What do you need?
JOE: A web application.
JFKBits: What kind of web application? What does your client need?
JOE: I don't exactly know.
JFKBits pulls the car over and stops.
JOE: Why'd you stop?
JFKBits: I'm just hired to consult with you, mister. I'm not here to tell you who you are.
JOE: I didn't ask you to tell me who I am.
JFKBits: You were hinting around about programming languages. It happens that programming languages are very important to me, Mister...
JOE: Reader.
JFKBits: Reader. The programming language makes the man. I believe that. You say to me you wanna learn some programming, you wanna write a webapp, but you don't know what kind. You leave that hanging in the air, like I'm going to fill in the blank, that to me is like asking me who you are, and I don't know who you are, I don't wanna know. It's taken me my whole life to find out who I am and I'm tired now, you hear what I'm sayin'?

In the movie, the topic is clothes: "clothes make the man." As with clothes, there is a very personal element about the things we work with on a daily basis, and it's worthwhile understanding why you like what you like. I'd wager that for most people, past a certain age, their preferences and reasons are locked in and choice of tools is largely based on what's familiar.

Unlike Marshall, I think too many of us are far too eager to advocate our personal technical decisions to people with whose needs and environments we are not well acquainted.

Some Applications

When being consulted by others about what technical choice to make, we should always ask "what do you need it for? What are your interests?"

When being persuaded, we should always ask "what is this good for, and does that line up with my interests?"

When persuading, we should exercise due diligence to identify why someone should care; in particular, what interests do we share such that what makes this decision appealing for me also makes it appealing to you.

Conclusion

When you see judgments ("Ruby rulz"; "the real WTF is they were using PHP"; "the tools for Python stink") in an online forum, realize they represent the end result of a decision-making process by a person who is not you and most likely has a very different set of interests. If you have an axe to grind, i.e. they're challenging your judgment, you may unnecessarily expend effort "debating", effort which may be more productively spent understanding the underlying interests and reasons for arriving at a conclusion.

If you can determine they really are your political opponent, that you are in direct competition for the same target audience, I suppose you could go at it. But I think we too often assume that different conclusions imply either competition or faulty reasoning processes. Just a little bit of effort in understanding "where are you coming from" may be more appropriate and ultimately more enlightening.

Friday, February 01, 2008

Pizza to Scala: The Ten Year Road of Language Design

James Gosling, yesterday:

There has been a lot of chatter about the closures proposal penned by Neal Gafter. And, in particular, whether or not I support it. I absolutely do.
--James Gosling, Thursday January 31, 2008
He goes on to add "Closures were left out of Java initially more because of time pressures than anything else." Java was introduced in 1995. Let's look a bit back into history to see what's been happening since then.

In 1998, my advisor advised me to take a look at some related work, something called Pizza, "a substantial companion to Java." Pizza was a joint project between Philip Wadler and Martin Odersky. They wanted to add closures, parametric polymorphism, and abstract datatypes to Java.

Then anonymous classes were introduced to Java, which went 80% of the way toward giving Java the utility of closures (4 out of 5 programmers thought it was good enough). Still a syntactic pain, but a net win considering the convenience of putting your code where it makes sense rather than in a class definition elsewhere. Wadler and Odersky decided to focus their energies on generics, and moved on to Generic Java. I used the Generic Java compiler at the time simply because it was faster than Sun's javac for compiling ordinary Java code. The work on Generic Java was then actually adopted into the Java standard.

That brings us to 2008. Martin Odersky is now realizing the original goals of Pizza with Scala, and closures are being brought to Java.

I appreciate the efforts of Wadler and Odersky in bringing functional programming closer to the average programmer. In 1998 Wadler wrote an enjoyable piece "Why No One Uses Functional Languages", a kind of State of the Functional Programming Union, in which he lists 8 barriers to adoption of functional programming. At a casual glance, I count 5 out of 8 of these barriers broken down by the association with the Java platform. These include portability, libraries, and compatibility (e.g. foreign function interfaces and other ways to interface with legacy systems). A sixth barrier, tools, is addressed to a large degree by Eclipse, which, while written in Java, largely owes its design spirit and its years of hard-won experience to the Smalltalk community.

Joel Spolsky and they that hire have been bemoaning the dilution of fresh CS grads coming out of the Java mills. In light of the above technical developments, this is an ironic trend, considering the "training" barrier Wadler mentions:
To programmers practiced in C, C++, or Java, functional programs look odd. It takes a while to come to grips with writing f(x,y) as f x y. Curried food and curried functions are both acquired tastes.
--Philip Wadler
My understanding of Spolsky's point is that "the kids aren't learning Scheme - they don't know the great ideas of computer science anymore, they just learn Java and design patterns." If that is the case, yes, it does bother me. I don't know if it is the case; I don't hire people, and the people I've worked with have been (mostly) great.

What I will say is this. The churn of progress has been steadily raising the level of abstraction over time. We have needed, and will continue to need, people who create the new abstractions (languages and frameworks) and people who apply the new abstractions. Bright engineers out of the Java mill will eventually be challenged with problems their existing tools don't address (hey, this is tedious and error prone...) and if nothing else they'll reinvent the great ideas of the past, and relearn them the hard way. Learning things the hard way is effective (you tend to internalize lessons better when they involve words like "shots fired" and "you [and your slipped schedule] are gonna cost me a billion dollars!") and expensive. Managers should be noticing things like that. I hope some bright engineering managers, who sit on their local university's advisory board, notice these trends as they arise and say a few words on their next visit to the department.



Community service footnote: I sit on the advisory board for the CS department at my alma mater and provide feedback as an alum and industry representative. If a school you care about doesn't have such a board and you think they need to hear from you, start it. Even if you don't think you have influence, alums in the technical fields can always pretend they'll strike it rich someday and therefore should have some influence.