Earlier this year, I was asked to join a panel, “Digital Piracy Review and Discussion”, that took place on Friday at the annual meeting of the Association of American University Presses (AAUP). Moderated by Claire Lewis Evans, Editor for Digital and Electronic Publishing at the University of Alabama Press, the panel also included Lydia Crowe, Rights & Permissions Manager for the University of Iowa Press, and Michael Schwartz, Contracts, Copyright, & Permissions Supervisor at Princeton University Press.
I structured my remarks to cover four areas I feel are important for publishers to keep in mind when they think about piracy:
- “Instance” vs. “impact” of piracy
- The sonar of data collection
- Knowing “where” and “when”
- “The consequence of a bad API”
“Instance” vs. “impact” of piracy. There’s a difference between the “instance” of piracy – a file seeded using BitTorrent, or a download from a file-sharing site – and its “impact”. Virtually every study done in this area makes assumptions, most often about the substitution rate – the likelihood that a download costs an IP owner an otherwise paid sale.
The most common assumption is that the substitution rate is close to 1:1 – that almost every pirated copy is a lost sale. For books, which have long been promoted using free copies – review copies, galleys, and ARCs – that assumption flies in the face of some fairly common practices. We also recognize the value of word of mouth in promoting the sales of some works, however that viral support is engendered.
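To see how heavily that assumption drives any impact estimate, consider a quick bit of arithmetic; the download count and price below are invented for illustration:

```python
downloads = 10_000        # observed instances of piracy (invented figure)
list_price = 24.95        # price per copy (invented figure)

# The estimated "impact" scales linearly with the assumed substitution rate.
for substitution_rate in (1.0, 0.25, 0.05):
    lost_sales = downloads * substitution_rate
    print(f"rate {substitution_rate:.2f}: {lost_sales:>8,.0f} lost sales, "
          f"${lost_sales * list_price:>11,.2f} in lost revenue")
```

The same set of downloads yields a twenty-fold difference in claimed damage depending on the rate you pick, which is why the assumption deserves more scrutiny than it usually gets.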
In the study that we did for books published by O’Reilly, we actually found that sales increased after piracy was first detected. That study is limited, but it was controlled – we looked only at front-list titles, and we tracked actual sales before and after the instance of piracy was detected. Our findings suggest that piracy can result in fewer, the same, or even more sales, depending on the situation and the characteristics of the works in question.
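To make that methodology concrete, here is a minimal sketch of the before/after comparison. It is not the actual O’Reilly analysis; the sales figures, detection date, and column names are all invented for illustration:

```python
import pandas as pd

# Synthetic weekly unit sales for one front-list title.
sales = pd.DataFrame({
    "week": pd.date_range("2011-01-03", periods=12, freq="W-MON"),
    "units": [120, 115, 130, 125, 110, 140, 150, 145, 160, 155, 150, 165],
})

# Hypothetical date the first pirated copy was detected.
first_detected = pd.Timestamp("2011-02-14")

before = sales.loc[sales["week"] < first_detected, "units"]
after = sales.loc[sales["week"] >= first_detected, "units"]

print(f"Mean weekly sales before detection: {before.mean():.1f}")
print(f"Mean weekly sales after detection:  {after.mean():.1f}")
```

A fuller version would repeat this for each front-list title and control for confounders such as seasonality, promotion, and a title’s age before attributing any change to piracy.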
The sonar of data collection. A couple of years ago, I wrote a post about “the sonar of fighting piracy”. Around that time, Attributor and the U.K.-based Publishers Association had received a fair amount of press for their work monitoring instances of piracy and issuing takedown notices of various sorts.
I like to tell stories, and in this post I talked about The Hunt for Red October, in which a Russian submarine commander is planning to defect. He leaves behind a letter that reveals his plan, and the Red October is then chased across the Atlantic by a fleet of Soviet ships.
A U.S. sub commander, watching the pursuit, says:
> And there’s something else strange. They’re not listening to their sonar. At 30 knots, they could run over my daughter’s stereo and not hear it. They’re not trying to find Ramius. They’re trying to drive him.
Around that time, I had written a lot about openness and interoperability as options for fighting piracy. Out of that thinking came a clearer articulation of what bothers me about making enforcement the go-to strategy: it keeps us from listening.
This is not an argument against all enforcement. The research I just described was structured on the premise that piracy could hinder or help sales, and we wanted to test for those outcomes.
But it’s hard to listen when the plan is to remove first and ask questions later. At best, that approach gives the enforcer a false sense of security. At worst, it hurts paid sales. Neither seems like the right choice.
Knowing “where” and “when”. One of the first listening exercises involves “where”. It’s no secret that most books are still sold by territory. We may secure worldwide rights, but we parse them out as we are able, sometimes over an extended period of time.
These days, a book published anywhere is visible everywhere. Territorial rights don’t make a lot of sense for folks looking to just buy a book. They don’t care much that you haven’t cleared rights in Romania or Australia or South Africa. They just want the book.
With the O’Reilly books, a chunk of the downloads were tracked in countries where rights to sell the book hadn’t (yet) been cleared. Sometimes it appeared to be sampling – readers checked out a pirated copy and, if it met their needs, bought a physical copy and had it shipped. Other times it was piracy – maybe a version of “I tried to buy it, but they won’t sell it to me, and I need it now”. But tackling piracy in markets you don’t currently support is less about enforcement and more about meeting market demand.
The same is true for “when”. Growth in pirate activity in a concentrated period of time can be a leading indicator of increased demand. Even in a home market, it can signal opportunities to make sure a book is stocked or actively promoted within a given community. Data can help lead the way, if you parse it.
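As a sketch of what that parsing might look like, here is a minimal Python example. The detection records and the set of licensed territories are hypothetical:

```python
from collections import Counter
from datetime import date

# Hypothetical piracy-detection records: (ISO country code, detection date).
detections = [
    ("RO", date(2011, 5, 2)), ("RO", date(2011, 5, 3)),
    ("RO", date(2011, 5, 4)), ("AU", date(2011, 5, 2)),
    ("ZA", date(2011, 5, 9)), ("US", date(2011, 5, 9)),
]

# Territories where this title is currently licensed for sale (assumed).
licensed = {"US", "GB"}

# "Where": detections clustered in unlicensed territories point to unmet
# demand rather than lost sales.
by_country = Counter(country for country, _ in detections)
for country, count in by_country.most_common():
    status = "licensed" if country in licensed else "unlicensed"
    print(f"{country}: {count} detections ({status})")

# "When": a spike in weekly counts can be a leading indicator of demand.
by_week = Counter(day.isocalendar()[1] for _, day in detections)
print("Detections by ISO week:", dict(by_week))
```

Even a tally this crude separates the two questions that matter: where the demand is, and whether you are currently able to meet it.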
“The consequence of a bad API”. The better part of three years ago, I was planning a presentation that I first called a “unified field theory of publishing”. Along the way, someone suggested that I revisit every blog post I’d written in the prior year to make sure that they “fit” my theory. That seemed overwhelming, so I tried to put it off.
Asked, “How would you explain piracy, for example?”, I said, “Piracy is the consequence of a bad API.”
A simple lesson: people want to consume content when they want it, where they want it, how they want it and in the forms that make the most sense to them. Failing to meet those requirements creates the conditions for piracy.
This could mean forging new ways to bridge territorial rights. It could mean selling components or recombinant content. It could mean using piracy data to figure out where the market for translations is robust (as Paulo Coelho has done to his benefit).
It could mean many things, but it cannot mean enforcement alone. Data collection is a step toward innovation. That’s the path forward.