I'm torn between Fisking this essay on how privacy protections are hampering intelligence and programming with the rest of tonight. I can't quite put this essay down but we'll see how little I manage to say about it.
- First, consider the source: Stewart Baker heads the technology law practice at Steptoe & Johnson in Washington, D.C. From 1992 to 1994, he was general counsel of the National Security Agency. A former intelligence official calls for more intelligence powers. A policeman (of sorts) calls for more police powers. Police always think they need more power, and never willingly see the abuses that power can be and is put to. They can't; it causes cognitive dissonance. This doesn't mean we can simply ignore his arguments, nor does it mean he can't possibly have insight, but the rest of us certainly need to keep it in mind.
- Second, everywhere he says something to the effect of "privacy laws are hampering our efforts", you may freely substitute the idea "we need more power". Because that's what this boils down to, a cry for more power; the putative thing keeping them from the power doesn't really matter.
- And on Sept. 11, 2001, [the wall of separation between law enforcement and intelligence] probably cost us 3,000 American lives. That's a highly skewed way of looking at "the wall". Minus 3,000 lives? Absolutely nobody else ever hurt by it? Absolutely nobody else ever helped? It's good rhetoric, but it's bad logic. The wall needs to be considered in totality, and remember it can't take 100% of the credit for those deaths either, if indeed it can take any; it is a hypothesis by Stewart Baker that this is the "root cause" of a complicated event, supported by descriptions of two email messages from a single frustrated New York agent, whose content we have only Stewart's word for. Many other failures occurred, and those must take their share of the credit too.
- We couldn't find al-Mihdhar and al-Hazmi in August 2001 because we had imposed too many rules designed to protect against privacy abuses that were mainly theoretical. I can't speak to the specific rules, but even without pulling out specific examples, it is a logical certainty that those rules were put in place because of privacy violations that hurt real people. In typical bureaucratic fashion, the rules may have been pointless, contrived, absurd, useless, or even dangerous, but that's a failure of the bureaucracy, not of the idea of privacy protections. The rules themselves may have been guarding against theoretical privacy violations, but privacy violations themselves are not theoretical in the slightest.
- They were focused on the hypothetical risk to privacy if foreign intelligence and domestic law enforcement were allowed to mix... Again, I challenge this word "hypothetical". I for one would not welcome a return to the days when the two could mix. It's such a bad idea that it's almost universally rejected in civilized countries, which is the entire reason we "needed" Echelon in the first place: so those countries could spy on each other's citizens, getting around these restrictions, and pass the intelligence to each other. The very fact that Stewart can refer to these abuses as "hypothetical" is strong evidence that the protections are working, since we have apparently rendered these issues "hypothetical"; we weaken them at our own peril, and it would be foolishness to strip them away again.
- The second lesson is that we cannot write rules that will both protect us from every theoretical risk to privacy and still allow the government to protect us from terrorists. We cannot fine-tune the system to perfection, because systems that ought to work can fail. Emphasis mine. The bolded statement can stand on its own with no further elaboration; arguments that implicitly claim we could fine-tune the system to "perfection" if we just got rid of that pesky privacy stuff should be recognized for what they are: sleazy salesmanship. No system can be tuned to perfection, and those designed for perfection fail miserably. Douglas Adams captured this perfectly in the Hitchhiker's Guide series: "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair." This describes bureaucratically designed systems to a T. A perfect defense is a promise the intelligence community cannot make, whether we have privacy or not.
- So the effort to build information technology tools to find terrorists has stalled. And what respect I may have had goes out the window. He's a lawyer, so explanations of why what they were building was useless would probably shoot over his head, but I wish for once they'd listen to the people who know what they're talking about. Casting a dragnet over all data can't work. The tools can't work. It's been established that the data was already there; collecting more data will just hurt them.
Well, that's enough to satisfy me. But the article is very infuriating; the worst of it is what isn't said and is simply taken for granted, like slipping in the assumption that the privacy wall really is responsible for the failures, and the hidden assumption that this privacy wall is somehow all cost and no benefit, in flat contradiction to history. A more limited article focused on how a particular privacy protection had a particular effect, and demonstrating what we might do about it other than "Throw it all away!", might have been worth discussing, but this guy just tries to "define the win".
I reject nearly every premise this article is built on. It's a great example of an unfalsifiable emotional argument ("What, you think 3000 lives are worth some privacy?" "No, I challenge your entire assertion that privacy cost those lives in the first place." "But look, 3000 deaths! If Chewbacca lives on Endor, you must eliminate the privacy wall!"), but it still just boils down to a call for more power over us all.