My heart goes out to Dannen and Dishman, who are trying to make sense of an industry that sets retail price as a multiple of physical cost (not to mention backing into press runs to manufacture a unit cost that fits the multiple for the price it wants to hit). They both get some things right, and I think they also get one core point wrong.
Dannen successfully debunks the recent claims that cost equals published quality, and he calls for publishers to get their business models in line with market reality (in his view, lower prices). Dishman, who has written lately about the impact of piracy on paid content sales, argues that using price to drive maximum volume decreases publisher revenues and ultimately hurts readers.
From my vantage point, Dannen and Dishman (and most established publishers) miss a microeconomic argument that underpins start-ups like Richard Nash's Cursor project. By their own argument, publishers' upfront costs are increasingly fixed. In that environment, as the marginal cost of goods trends to zero, "profit maximization" actually becomes "revenue maximization": with nothing meaningful left to subtract per unit, the price that maximizes revenue is also the price that maximizes profit.
Finding the sweet spot, the price beyond which any further change lowers total revenue, is a science that involves testing, data collection and analysis. Clearly, this is a skill set that retailers like Amazon and Barnes & Noble bring to the table. It is not a skill set found in most trade publishing houses.
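The convergence of profit and revenue maximization can be sketched with a toy model. Everything here is hypothetical for illustration: the linear demand curve, the price grid, and all dollar figures are assumptions, not data from any publisher.

```python
# Toy model: as marginal cost trends to zero, the profit-maximizing
# price converges on the revenue-maximizing price.
# The linear demand curve Q(p) = a - b*p is a hypothetical assumption.

def units_sold(price, a=10_000, b=500):
    """Hypothetical demand: copies sold at a given retail price."""
    return max(a - b * price, 0)

def profit(price, marginal_cost, fixed_cost=50_000):
    """Profit = margin per unit * units sold, minus fixed upfront costs."""
    return (price - marginal_cost) * units_sold(price) - fixed_cost

# Candidate prices from $0.00 to $20.00 in $0.50 steps.
prices = [p / 2 for p in range(0, 41)]

best_with_cost = max(prices, key=lambda p: profit(p, marginal_cost=4.0))
best_zero_cost = max(prices, key=lambda p: profit(p, marginal_cost=0.0))
best_revenue = max(prices, key=lambda p: p * units_sold(p))

print(best_with_cost)  # 12.0 -- positive unit cost pushes price up
print(best_zero_cost)  # 10.0
print(best_revenue)    # 10.0 -- same as the zero-cost profit optimum
```

With a $4 unit cost the optimal price sits above the revenue peak; drop the unit cost to zero and the two targets coincide, which is the point of the argument. Fixed costs shift profit up or down but never move the optimal price, which is why testing for the revenue sweet spot is the skill that matters.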
Publishers do need to get their workflows in order, but they also need to cultivate new and different skill sets. Understanding the impact of price on total revenue is increasingly important for content providers, especially those who have made significant upfront investments in their content.