After the U.S. Government Accountability Office (GAO) evaluated a screening program put in place by the Transportation Security Administration (TSA), Nate Anderson of Ars Technica wrote "TSA's got 94 signs to ID terrorists, but they're unproven by science". On two fronts, ouch.
The first is the science. Anderson explains how the GAO audited this program, which reportedly costs $200 million a year:
For the report, GAO auditors looked at the outside scientific literature, speaking to behavioral researchers and examining meta-analyses of 400 separate academic studies on unmasking liars. That literature suggests that "the ability of human observers to accurately identify deceptive behavior based on behavioral cues or indicators is the same as or slightly better than chance (54 percent)." That result holds whether or not the observer is a member of law enforcement.
The second is the absurdity of trying to spot terrorists using 94 leading indicators:
[The TSA program] relies on a network of 3,000 behavior detection officers (BDOs) deployed at 176 airports around the country. BDOs observe passengers waiting to cross security checkpoints into the "sterile" section of an airport. They are trained to observe 94 different signs of stress, fear, and deception, with the goal of calculating a "point total" for an observed individual in less than 30 seconds. The 94 signs remain a secret, but we do know that anyone displaying enough of them is referred for a patdown and secondary screening, during which officers will engage in "casual conversation" to determine whether the traveler poses a potential threat.
There's not much value in jumping all over the TSA; they have plenty of critics (and I will have to fly again sooner or later). But 94 signs? Forget 30 seconds; you couldn't evaluate that many signs in 30 minutes.
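To make the arithmetic concrete, here is a minimal sketch in Python of how a point-total screen over many indicators might work. The 94 signs and their weights are secret, so everything below — the number of points per sign, the referral threshold, and the per-sign probabilities — is hypothetical and purely illustrative; it is not the TSA's actual method, only a picture of how many separate judgment calls have to happen inside that 30-second window.

```python
import random

# Hypothetical stand-ins for the secret indicators: 94 yes/no observations,
# each assigned a made-up point weight between 1 and 3.
NUM_SIGNS = 94
WEIGHTS = [random.choice([1, 2, 3]) for _ in range(NUM_SIGNS)]
REFERRAL_THRESHOLD = 10  # hypothetical cutoff for secondary screening


def point_total(observations):
    """Sum the weights of the signs an officer believes they observed.

    `observations` is a list of 94 booleans -- one judgment call per sign,
    all of which must be made in under 30 seconds per passenger.
    """
    return sum(w for w, seen in zip(WEIGHTS, observations) if seen)


def refer_for_screening(observations):
    """Return True if the point total meets the (hypothetical) threshold."""
    return point_total(observations) >= REFERRAL_THRESHOLD


# Simulate one passenger with each sign "noticed" at random -- illustrative
# numbers only, not a claim about real base rates.
passenger = [random.random() < 0.05 for _ in range(NUM_SIGNS)]
print(point_total(passenger), refer_for_screening(passenger))
```

Even with the bookkeeping reduced to a one-line sum, the hard part is the 94 observations feeding it, which is the author's point about 30 seconds versus 30 minutes.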
Naturally I think there's a lesson for publishing here: what Peter Collingridge has described as "the surprising power of little data". Rather than spend years building comprehensive systems to cover every possibility, pick a small set of things you'd like to test and pilot them. Collect (some) data along the way, evaluate what you find and refine where needed.
A good example of this approach can be found in Sourcebooks. The company's CEO, Dominique Raccah, has been recognized twice this fall as a publishing innovator and an inspiration among digital pioneers. Although the awards are new, they follow years of smaller-scale innovation in areas like poetry and education.
More recently, Sourcebooks has teamed up with Wattpad to sign and market young-adult titles. In making the announcement, Raccah said, "This partnership is all about helping connect talented authors with readers." I don't think Sourcebooks or Wattpad will need 94 ways to measure when that happens.