This week, I am expanding on five lessons I've learned while designing and consulting on content-related workflows. Yesterday, I wrote that "Data collection and process checking should be used to identify and resolve problems, not find a function or person to blame." Today, I'd like to tackle a related thought: "Quality is best supported by processes with fewer handoffs (and over time, fewer inspections to maintain quality)."
Content workflows typically involve a number of discrete steps that take place across multiple functions. Each one of these steps has a measurable error rate, a predictable (if small) likelihood that a mistake will be missed or a problem will be introduced.
With data collection and process improvements, almost any error rate can be reduced. Still, the more steps you have, the more likely it is that an error will occur. A two-step process in which each step has a 99% success rate is reliable, on average, 98.01% of the time.
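The compounding arithmetic behind that claim can be sketched in a few lines (the function name here is mine, not from the original):

```python
from math import prod

def end_to_end_success(step_rates):
    """End-to-end success rate of a workflow: the product of its per-step rates."""
    return prod(step_rates)

# Two steps at 99% each: 0.99 * 0.99
print(round(end_to_end_success([0.99, 0.99]), 4))  # → 0.9801

# Ten steps at 99% each compound to roughly 90.4%
print(round(end_to_end_success([0.99] * 10), 4))  # → 0.9044
```

The same function makes the broader point visible: every step you remove takes a multiplier out of the product, and the end-to-end rate can only go up.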
This is the maddening part of checking content for quality: a check is itself a discrete step with its own error rate, so it can miss problems or introduce new ones. Sometimes the checks are considered necessary ("Fix grammar" and "Protect us from libel" are examples), but performing them doesn't increase the end-to-end quality of content as much as getting it right at the start does.
A second benefit of a workflow with fewer steps: it takes less time. Teach a blogger to write, and you get to market a lot faster.
The idea that quality is derived from a series of content reviews is kind of baked into the way we think about publishing. Certainly, there are editors who work with authors to significantly improve their work (Max Perkins seems to be the most frequently cited example).
But to the extent that these editors add long-term value, they do so by providing feedback that helps writers improve. I'd argue, as well, that extensive editing is closer to collaboration than it is to a content workflow.
The countervailing argument: "If we don't check everything, a (significant?) share of what we publish won't be of adequate quality." I counter: if you don't gather data that helps writers increasingly get it right from the start, that's a self-fulfilling prediction. It's also not scalable.
I'm not cavalier about quality, but I do reject the idea that inspection offers the only way to guarantee it. If you're going to check, do it to improve the quality of inputs, not outputs.
Tomorrow, I'll be writing about process mapping and the value of clearly identifying the functions involved.