[Earlier today, I delivered a keynote presentation to open the second day of Klopotek’s 10th annual Publishers Forum, held once again in Berlin. What follows are my prepared remarks for the talk, titled “Disaggregating supply”.]
Last year, I had a chance to join this meeting and share my thoughts about the value of putting “context first”. In that presentation, I explained how what I described as the “container model of publishing” limits how we think about creating, maintaining and disseminating content.
If you heard my remarks, you know how strongly I feel about changing publishing workflows. We need to migrate from thinking about products to instead planning for and offering services and solutions. To get there, four principles apply:
- Our content must become open, accessible and interoperable. Adherence to standards will not be an option;
- Because we compete on context, we’ll need to focus more clearly on using it to promote discovery;
- Because we’re competing with providers that already use low- and no-cost tools, trying to beat them on the cost of content is a losing proposition. We need to develop opportunities that encourage broader use of our content; and
- We will distinguish ourselves if we can provide readers with tools that draw upon context to help them manage abundance.
Now, those principles were developed as I thought about book content. But, it’s a badly kept secret that I am not just a book guy.
For almost the first half of my time in publishing, I worked on ad-driven weekly magazines, particularly TIME magazine. I still consult with a variety of periodical publishers, and while I blog more about book publishing, I write quite a bit about magazines as well.
You might be listening and thinking, “He works with magazine publishers?” I hear that reaction more often than you think. Although there is more overlap in Europe than in the United States, I’ve found that book publishers often think that magazines are fundamentally a different business, much as magazine publishers feel that book publishing has only limited relevance to them.
Certainly there are differences between book and magazine publishing, and we could dwell on them. But when I was in business school (a while ago, but bear with me), one of my teachers gave us a piece of advice that sticks with me to this day: it’s easy to spot the differences in things; it’s much harder to see the similarities.
His point is an important one. As members of the book publishing community, we tend to see our situations as unique, distinct from the structure and trends of other content-driven businesses. But there are more similarities than differences, and understanding trends in those things that are similar can help us plan and prepare for changes down the road.
I see at least two useful similarities: Both periodical and book publishers provide content in mostly standard, provider-determined containers; and we both sell those containers to readers, increasingly in digital form.
So what got me thinking about “disaggregating supply” is a similarity that you might first think of as a difference: trends in the provision of advertising content. From a publisher perspective, there is a world of difference between editorial and advertising content. But I don’t want you to use a publisher perspective.
Let’s think like readers. From that perspective, advertising and editorial content is all part of a single container, or stream, or search. It is accessed, sometimes bought and consumed for reader-determined purposes. In many cases, advertising content provides equal or greater value than traditional editorial content. Think about fashion magazines, for example, or many business-to-business periodicals.
Yes, there are business-model differences. With advertising, the reader is monetized, and a publisher is rewarded for having gathered an audience. But that reality is changing even as I talk with you today.
A story from last October helps explain why.
You’ll have seen reports of how a huge storm, Hurricane Sandy, caused flooding and widespread power outages in the eastern United States, particularly in New York, New Jersey and Connecticut.
Gawker Media was knocked offline for almost a week, instead publishing a version of its blogs on Tumblr, which is a fast-growing multimedia web platform. That brief alliance led some Gawker fans to hope that the company would use Tumblr’s platform on a permanent basis.
Tumblr does offer a pretty good set of cross-media authoring tools. These tools have further lowered barriers to entry for anyone interested in publishing content on the web. One reporter even called it “the ultimate CMS”.
But I think that wishing Gawker (or any other branded presence) will set up shop on Tumblr reveals a fervent hope for a return to gatekeeper media. Here, the advocates are mainly folks who favor ad-supported media, which has a hard time figuring out how to monetize audiences that aren’t built and sold by the millions.
That is to say, it has a hard time trying to monetize the web.
Two generations ago, the United States had just three television networks, a well-known set of national magazines and no Internet. The situation here in Germany was even more limited. Monetizing media involved mass markets and predictable patterns for purchase and messaging.
Each time traffic for a given web presence – a platform – grows to a size that rivals what networks and national magazines once offered, we hear a call to standardize the content and figure out the ad model.
I understand this. It’s much easier to deal with an overarching platform – CBS Television, TIME magazine, the Saturday Evening Post – than it is to figure out how to market at a small scale. But the web isn’t a community of millions; it’s millions of communities, even inside a platform like Tumblr.
It’s hard to keep track of millions of communities, unless you start with the idea that you’re not going to bundle them. That’s part of the power of a company like Google.
No one buys “Google”; they buy access to ad hoc communities defined by the searches people perform, the e-mail they send, and other contextually relevant activities. For its part, Google invested in understanding what users are doing so that it can effectively serve them in the moment.
Now, I think, there is an ultimate CMS: the internet. It’s just messy. It was built that way. Trying to use the web in ways that emulate gatekeeper media is a mistake.
Rather, advertisers and media buying services are taking a page from Google and Facebook and getting out of the aggregation business. Some marketers have started to improve their ability to capture and automatically act on information that matches customer requirements with their products and services.
The more that you can match requirements with solutions in a way that feels lightweight, open and network savvy, the greater the chance that the communities you want to reach will opt in.
What is happening to advertising is also happening to editorial content. You see this most clearly with periodical publishing. Readers search for and read articles, sometimes just headlines. Sales of both physical and digital collections – containers like newspapers and magazines – persist, but demand is generally weak and the average prices paid for these formats have been under pressure throughout the last decade.
This is what got me thinking that periodical publishers could compete more effectively if they stopped trying to aggregate demand and started figuring out ways to disaggregate supply. You see companies that already operate this way – Bloomberg L.P. and Thomson Reuters are examples.
Disaggregating supply requires a new approach to structuring content. To help make this happen, a number of firms that support contextually sophisticated content management strategies have come on the scene in the last several years. You’ll have a chance to hear a good version of just such a story when Daniel Mayer of TEMIS presents here later today.
The short story is this: the notion of disaggregating supply is a useful metaphor in book publishing, as well. In 2011, I wrote a presentation, “The opportunity in abundance”, that outlined how the prevailing book industry supply chain is under significant pressure. Increasingly, book publishing is a business built upon the cumulative sales of low-demand titles. Bestsellers remain, but the long tail gets longer by the day.
Whether it is used to sell and buy physical or digital versions of published content, the Internet is not a digital manifestation of the old supply chain. It functions as a multi-sided market, a platform that enables direct interaction between at least two distinct types of affiliated customers.
Andrei Hagiu and Julian Wright, the business school professors who helped define multi-sided markets, said this about content on the web:
Another important example of an interlocking multi-sided platform is the Internet itself, which can be viewed as a platform that primarily enables users (content seekers) and content providers to directly interact. The platform … collectively provide[s] end-to-end connectivity across the world wide web.
Think about those ideas for a moment. For decades, perhaps centuries, the primary platform for publishers and their supply-chain intermediaries relied on the ability to exclude. Now, we’re starting to see the dominance of a platform that includes everything and excludes nothing.
In return, we get access to global communities and the ability to meet latent desires. But we are competing with everyone else to reach and serve these communities.
Book-based communities are not new, and across certain digital realms they have been served reasonably well for a decade. An early example is Safari Books Online, the joint venture between O’Reilly Media and Pearson, which provides subscription access to digital versions of book content published by a number of companies, including but not limited to O’Reilly, Pearson and Microsoft.
The service generally focuses on technology-related content, a useful distinction in defining its community. It sells access, but it does not sell or provide components of any given work: you can view, but not download, parts of a book. This is not a bug; it is a feature, one that makes publishers more comfortable understanding and supporting Safari.
We’ve seen other book publishers that have succeeded in creating and sustaining community ties. These include Canadian romance publisher Harlequin Inc., which sells both physical and digital books directly to consumers, and science fiction publisher Baen Books, whose commitment to making its books interoperable across reading devices has earned it a loyal following among science fiction readers.
But these models still provide the book, the whole book and pretty much nothing but the book. They are flexible, as are online subscription services like 24 Symbols, already operating in Spain, and Oyster, announced but not operating in the United States.
I am a fan of the “pay-as-you-read” model embodied by ValoBox, whose principals, Anna Lewis and Oli Brooks, speak here this afternoon. But through no fault of their own, they are limited to providing what publishers are willing and able to share. We need to improve what they are able to offer.
While we’ve begun to learn how to form and serve online communities, we continue to be bound by the form of the book as our container of choice. As a result, we’re trying to catch up with an audience whose needs may be already moving away from us.
Recently, I had a conversation with a colleague who runs a publishing business that includes newsletters, magazines, research and an evolving database that is sold on a subscription basis. The database includes much of the content that appears in other formats as well as original research and data relevant to the businesses they serve.
Subscriptions are sold to institutions. There is a good deal of high-value content in the database. Early efforts to convince companies to buy an institutional license focused on the price relative to the combined value of the published products whose content it would include. For the most part, my colleague’s business was unsuccessful in selling these institutional licenses.
That is, they were unsuccessful until they started offering high-quality charts, graphs and analyses that users could download and use as “white label” content in slide presentations given to their customers. Almost immediately, demand for site licenses began to rise, and it remains strong today.
The charts had always been part of the database; the only change in the offer was an effort to unbundle content – to disaggregate supply. The publisher’s clients found great value in being able to borrow from the archive and present, easily and with great quality, the analyses already done by their publisher-partner. In a competitive marketplace, the visibility of these disaggregated charts and graphs amplified both awareness of and willingness to buy the publisher’s data service.
Earlier, I touched upon trends affecting advertising as well as periodical content. Of course, there are other examples of disaggregating supply. Of those, the sale of recorded music may provide the most relevant and uncomfortable example for book publishers.
Many things are said about the music business, including conjecture about the impact of piracy on its overall health. This morning, I’d like to focus on two things that are generally undisputed: music publishers benefited from selling full collections of songs – that is, albums; and a significant share of music consumers wanted to be able to buy individual songs of their choosing. For quite some time, they were unable to do so.
It’s not a stretch to say that, in the music business, the “album” is their book. In fact, when the technology used to create vinyl records could store only a short amount of recorded music on a single side, studios took to bundling multiple pieces of vinyl in sleeves with a cover. This book-like object gave birth to the very notion of an album.
Technology improved to the point at which a single piece of vinyl could hold five to ten songs on a side, but the notion of an album as the discrete unit stayed with us even after digital technology took hold in the later 1980s. For the better part of a decade, overall music revenues grew, but so too did consumer expectations that they should be able to buy components – songs – as well as albums.
While this came to a head with the launch of Napster, an early file-sharing service, the trend dates back a generation. For decades, consumers tried to improve their ability to buy and share songs. Anyone who grew up with music in the 1970s and 1980s has likely created or received a mix tape of individual songs. If you haven’t done either, I guarantee you danced to one.
Physical distribution gave suppliers the opportunity to limit what songs would be published as singles. Digital technologies removed that aspect of the gatekeeper role. When music labels tried to maintain the album as the core component, consumers – and soon after, Apple – took matters into their own hands.
I understand the reluctance of music publishers to trade sales of songs for revenues from albums. Song revenues are lower, at least at the outset. But refusing to meet a latent or emerging requirement is likely to result in a more severe falloff in revenues. At least part of what happened to music publishers in the last decade is tied to their reluctance to disaggregate supply.
Now, book publishers approach an inflection point of their own. Defense of format makes us reluctant to unbundle content. Even something as basic as “cut and paste” functionality is disabled or severely restricted across many commercial works. Publishers restrict use with the idea that this kind of thing somehow cuts down on piracy and other illicit activities. I’d argue it ends up having the opposite effect.
Last year, I talked about various types of content that might be more readily disrupted by new business models and non-traditional competitors. These genres or categories included reference, travel and tourism, education and testing, cooking, certain religious works, particularly Biblical content, and some business writing.
In the last year, disruption across most of these categories has accelerated. After 244 years, Encyclopedia Britannica closed down its print edition. Google bought a restaurant guide, Zagat, and then acquired rights to travel content from Frommer’s, formerly a Wiley imprint.
Last year saw major universities offer several highly successful “MOOCs” – massive open online courses – to individuals well outside their registered student body. In all cases, the best-performing students included a cross-section of people who had not attended the bricks-and-mortar university.
Digital-first Biblical content is available now from firms like Logos, which offers readers the opportunity to study anywhere, on any device, while also providing tools to aid the development of sermons, improve and inform academic study and interact with other members of the Logos community.
A simple example of Logos’ academic functionality: the software lets you search various Bible translations by a part of a clause, so that different interpretations can be examined, understood and perhaps refined. The service is available on a subscription basis for US $57 to US $134 a month. Try doing that work using printed texts and you’ll quickly see the value of a subscription.
When I present these ideas and examples, publishers sometimes push back around their applicability to works of fiction. I agree with the sense that fiction itself is less prone to disruption from non-traditional sources, but I think that’s missing the opportunity. Good works of fiction are typically supported by a rich backdrop of historical research, settings, backstory development and character profiles.
This is context – metadata of the highest order – and it distinguishes those works for which it exists. Imagine the boost for online visibility and discovery for those titles for which this context is made visible. And imagine the increase in awareness and sales for those titles whose authors and publishers are willing to share at least some of this data with their fan base, even their fan-fiction base.
There’s a pretty good example of that kind of thinking. It’s called Pottermore.
So, lots of changes, some faster than others, not distributed equally. It’s tempting to wait until the market shows itself, but that’s a high-risk strategy. We need to prepare for the networked present in at least three ways:
Standards. Beyond product-level identifiers, we’ll need ways to provide for a much more robust and extensive use of internal tagging. RDF, ISNI and ISTC provide some examples, but we need greater clarity to guarantee access and interoperability. As we think about selling components, we’ll need to significantly revise the way that we acquire, maintain and account for rights. Lessons learned from services like Safari Books Online and ValoBox will help here, but it’s more than that. We need to rethink how we handle content and rights, or someone outside the business will do it for us.
Structure. If we’re serious about creating, managing and delivering components that meet market-determined requirements, we are going to have to develop, partner for, or adapt to systems and structures that make content acquisition and monetization possible at levels more granular than most publishers have ever considered. Many good start-ups are eager to work with and even receive funding from publishers, should publishers find it in their hearts to do so.
Sense. We’re talking about organizing and serving communities of readers. Success no longer emanates from a series of well-planned, top-down efforts. Publishers will need to develop a market understanding that helps them prepare for and address at least some consumer needs that are not yet articulated.
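To make the “Standards” and “Structure” points above a bit more concrete, here is a minimal sketch – in Python, with hypothetical identifiers and field names of my own invention, not any real publisher’s schema – of how a single chapter-level component might carry a work identifier (ISTC), a contributor identifier (ISNI) and rights data that travel with the component rather than with the container:

```python
# A minimal sketch of component-level metadata. The identifiers and field
# names below are hypothetical, for illustration only: an ISTC for the
# underlying text work, an ISNI for the contributor, and a rights record
# attached to the component itself rather than to the book that contains it.

component = {
    "component_id": "chapter-07",            # publisher-internal component ID
    "work_istc": "A02-2013-000000001-5",     # hypothetical ISTC
    "contributor_isni": "0000 0001 2345 6789",  # hypothetical ISNI
    "rights": {
        "territory": "world",
        "formats": ["web", "epub"],
        "excerptable": True,                 # may be licensed on its own
    },
}

def can_license_component(comp, fmt):
    """Return True if this component can be licensed on its own in `fmt`."""
    return comp["rights"]["excerptable"] and fmt in comp["rights"]["formats"]

print(can_license_component(component, "web"))   # True under these assumptions
print(can_license_component(component, "pdf"))   # False: format not cleared
```

The point of the sketch is the design choice, not the syntax: once rights and identifiers live at the component level, a question like “can I sell this chart, or this chapter, on its own?” becomes answerable by a system rather than by a rights manager reading a contract.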
These initiatives demand broad-scale change, but not universally. Authors and customers are not the ones at risk. Rather, publishers and in some cases technology vendors face the greatest threat, as authors and consumers develop their own coping strategies around slower-moving content providers.
Investing in discovery standards, content structure and the maintenance of a market-facing (rather than a product-centered) sensibility are hedges toward long-term sustainability. The four principles outlined at the start of my remarks today can provide a useful and important filter for those investments.
Toward the end of last year, a colleague and friend, John Maxwell, joined a panel discussion on the future of academic publishing in the digital age. There, John suggested that “we needn’t take boundedness and completeness as a prescription for what serious media ought to be. Our challenge is to look beyond that.”
This brings us back to the idea of a container. “Context first” proposed that we not use containers as the primary source of information, instead considering them as vehicles to transmit what Hugh McGuire calls an “internally complete representation.” But here, “internally complete” is not the same as “complete”.
I think we’re inevitably moving toward what I’d call a “pre-book world”: a living representation of the development, refinement and extension of a particular work. At various points, an object – a book or an eBook, as examples – may be rendered, but as a subset of the greater representation.
That trend is easier to see in scholarly or academic publishing, where digital has been the norm for a decade or more. But I think it will grow to include many forms of publishing. As it does, we’ll need to think more about components, solutions and ultimately the potential inherent in disaggregating supply.