[What follows is an update to "The library within us". As the introduction notes, I've made a set of changes to the initial draft. They hopefully strengthen the idea that publishers need to think like community organizers to succeed in a digitally networked world.]
I initially gave this talk last month at “Books in Browsers”, an event organized by Peter Brantley and Kat Meyer. I’ve revised it a fair amount since then. In itself, that’s not unusual – I tinker with every presentation I do.
But this version is more than tinkered. To be honest, I wasn’t happy with the first version. It missed a core component of what I think publishing needs to do going forward, something I’ve fixed here. Both drafts start with the same idea:
"It’s time to think about content, not as a product or a service, but a vehicle to reach an outcome. Literacy is important as a step toward informing and empowering individuals, groups and communities, but on its own it is not enough. As reading experiences become both portable and increasingly universal, we need to reshape our sense of publishing and build "the library within us": a collection of tools and resources that individuals can draw upon to connect with and change the world around us."
Okay, a loose idea, a bit utopian, but it’s an idea that can be more than utopian. Marc Andreessen, who has bet on five pretty big ideas in the last 20 years, recently told Wired:
“The Internet has spread to the size and scope where it has become economically viable to build huge companies in single domains.”
We typically think of these “huge companies” as intermediaries – the platforms that now dictate the landscape for digital distribution. We think of our domain as “books”. But I believe we’re at a point at which we can do more than just send our content to aggregators. To get there, though, we’re obligated to reframe our models and focus on the needs of individuals.
In at least one sense, this is not a new idea for me. I’m on the record as being somewhat skeptical about the long-term efficacy of containers, or at least of containers as a starting point for content. In an era of content abundance, we’re going to have to compete on more than just availability.
And, the idea of content as a service is already moving along reasonably well, at least in some quarters. Just recently, when Springer announced an interface redesign, it described its goals as “speed, simplicity and (customer) optimization.” Those don’t sound like container qualities to me.
In fact, they are qualities consistent with the underpinnings of the “lean consumption” model that James Womack and Daniel Jones described in 2005 for the Harvard Business Review. I’ve invoked their work before, most directly in the initial drafts of “Context first” that I wrote in 2010. When that presentation grew too long for its own good, I deleted all but a passing mention of the source, but I think it’s worth returning to their six principles or characteristics of lean consumption:
- Solve the customer’s problem completely by ensuring that all the goods and services work, and work together.
- Don’t waste the customer’s time.
- Provide exactly what the customer wants.
- Provide what’s wanted exactly where it’s wanted.
- Provide what’s wanted where it’s wanted exactly when it’s wanted.
- Continually aggregate solutions to reduce the customer’s time and hassle.
The challenge we face in publishing is discovery in the face of abundance. What Jones and Womack provide is a filter or a lens through which we can reconfigure our approach to publishing. Unfortunately, we’ve left much of that innovation to others.
Let’s take a moment to apply these principles to traditional publishing. I think you’ll start to see why market power has shifted from providers to platforms.
Before Amazon introduced the Kindle in 2007, there was no interoperable solution for digital content. Fearing piracy, publishers favored DRM (its own story) and took little interest in the generally miserable experience of readers who tried to read books on early e-reading platforms.
That people bought digital books and struggled to load, read and maintain them on third-party systems was of no consequence to publishers. The market was small, fragmented and of little interest, at least to trade publishers.
During the last decade, some professional publishers did try to organize content to better match the needs of their client base. Bloomberg and Thomson Reuters maintained their own platforms, deepened customer relationships and over time increased their returns.
Publishers who wanted to provide what customers wanted, where and when they wanted it – in professional terms, “as part of workflow” – also joined up with companies like Silverchair, which works with scientific, technical and medical publishers to deliver information in ways that make it useful and readily accessible to those publishers’ clients.
Still, most publishing activities remain focused on the development and dissemination of containers, mostly in print, sometimes as PDFs. Whether the containers are physical or digital, they remain publisher-determined, relatively immutable and almost always one-way. To the extent that systems have come along to change this dynamic, they have been developed by third parties, most of whom have to beg for access to published content.
Hugh McGuire has written that the book and the Internet will soon merge, and I agree, but these market trends make me wonder if any traditional publishers will be part of the parade.
Over the past decade, Clay Christensen, perhaps the person most closely associated with the ideas behind disruptive innovation, has been exploring the idea that consumers “hire” products and services to fulfill an identified need. In Christensen’s view, there is no stopping consumers from hiring the products and services they want. There are jobs to be done, problems to be solved, needs to be filled.
This solutions-based approach to creating and delivering results ultimately breaks apart prevailing media business models, many of which are more or less container-driven. It’s not enough to act as if digital is just another channel; it’s much more than that. It opens up possibilities that one-way containers fail to address.
The gap, already visible and growing, leaves us vulnerable to continued and escalating disruption. If we’re going to be disrupted, though, it would be helpful to see it coming and develop a few promising alternatives.
Peter Brantley recently wrote, “The embedding of books in a networked environment is something we have not seen before.” In his view, it presents opportunities and challenges at the level of an individual reader: what platforms to support, how much access to provide, and what levels of privacy to expect or demand, as examples.
But it also gives content providers a new and wholly transformative opportunity to meet the needs of a market that does not yet consciously exist. As I said last year, we need to become more outcomes-based. Continuing to create and distribute static books, in whatever format you choose, is a strategy that is likely to result in declining prices, lower margins, less income to reinvest and … you get the idea.
Alternatives start with data-gathering. Innosight, a consultancy that works with businesses facing disruptive innovation, makes three recommendations:
- Understand the criteria customers apply in choosing between solutions. There are plenty of assumptions about why people acquire content, and there is even some reasonable data about book-buying behavior. But there’s precious little data about content-consumption behavior. To the extent that it exists, the data is often held captive by platforms whose interests align only loosely with those of the people producing that content.
- Pinpoint an important job that isn't being done adequately. This can include things that I once called “the consequence of a bad API”: workarounds, compensating behaviors and expressed dissatisfaction with products and services.
- Unlock markets by eliminating barriers for customers. I think of the work done by NISO to create and hopefully implement single sign-on standards as a good example here.
A year ago, I was optimistic that publishers and supply-chain partners would soon see their mutual need for a data-driven reconsideration of why publishing exists and the purposes it can serve.
I’m no longer optimistic.
Another year spent wrangling over the role of libraries, another year spent kicking the can down the road with respect to the widespread and debilitating use of DRM, another year spent fostering the idea that we really have embraced “digital”: these things and more have convinced me that the “opportunity in abundance” will not accrue to the incumbents.
This became all too clear to me last summer. In January, I had made the somewhat ambitious pledge to “post something useful every day”. By June, 180 or so posts in, the optimism well had run dry. I just didn’t believe my own story any more.
Around that time, Helmut von Berg, who works with Klopotek to plan its annual conference in Berlin, asked if I might be interested in developing a short address about “networked publishing”, to be delivered at the Frankfurt Book Fair. My first question was (perhaps naturally) “What do you mean when you say ‘networked publishing’?”
As ever, Helmut was very prepared, and soon I was swimming in a stack of documents that I found informative, challenging and a bit daunting. Here’s a selection of some of the things Helmut had written:
“Traditional publishing is consequently a ‘gatekeeper-defined culture’, while that of networked content provision [is] a cultural network.”
And …
“These content units, whether we call them items, entities or chunks, must be prepared, so that they can be found and used in accordance with expectations.”
One more, longer excerpt:
“If we go a step further … we discover that we are dealing with two different areas with regard to content: firstly with creation and preparation and secondly with distribution and use. The first can perhaps best be summed up as ‘content clusters’, the second as ‘market’ … Market power is created not by sales strength, but by the quality of usability in non-predetermined user environments.”
Pretty good stuff there. It made me wonder if I could avoid the work and just ask Helmut to deliver the keynote. I didn’t ask him, though, because his thinking, along with that of others whose work he had sent me, started to restore my native optimism about all things publishing.
Now, we know that publishing has always been networked, in the sense that getting your book or article published depended on who you knew. Quality mattered, or at least it helped to differentiate the work of an author, but the economics of publishing necessarily fostered what Helmut aptly called a ‘gatekeeper’ role.
So, the interesting thing about ‘networked publishing’ is not just the fact that publishing is networked, but which networks are now dominant.
As the Nieman Foundation’s James Allworth noted, “If history is our guide, the platforms do gain an edge” when business models change. Trade publishers know the short list: Amazon. Apple. Barnes & Noble. Kobo. On a clear day, Google. They are already well-established. With respect to digital containers, the platform ship has sailed.
Some publishers have turned their thinking to focus on how they might build platforms of their own. If the goal is to create a way to distribute eBooks directly, I say, good luck with that. What we must do starts with the recognition that we need to find a platform to build on, not just build.
That’s where the network comes in.
I typically think of three primary functions that underpin any sort of publishing: authoring, repository and distribution.
Although there are plenty of new tools available for authors to use, “authoring” itself is not that much changed. An idea must at some point be turned into a work of interest, suitable for distribution. But barriers are now lower, and authoring has been increasingly democratized.
“Repository” once meant things like plates, then film, then files. Old economics dictated the “minimum viable product”, typically a book, as the package that could be created and sold. Now, the “minimum viable product” can be a book, a chapter, a component, an extract, a snippet – anything that can be monetized, as well as some that can’t, or won’t.
“Authoring” and “repository” must be organized to deliver content that is distribution-ready. Competition now takes place at the level of use. The “minimum viable product” may not be a book; it’s whatever the end-user, a reader, values enough to pay for. This new order potentially undermines the value of scale for content providers who previously prevailed on their ability to manage institutional and trade relationships.
It is a sea change from an era that ended in the last decade. New skills are required to manage content at a much different level.
To manage the new repository, many publishers, particularly larger ones, have invested in a plethora of systems – typically large-scale, specialized investments that can be difficult or expensive to maintain.
These investments were made before an era in which cross-platform data mining was considered a competitive weapon. Highly specialized systems perform better for certain uses, but they don’t always play well with others. The ability to look broadly is critical: as a Google executive recently said, “We don’t have better analytics. We have more data.”
These sizable investments by publishers included spending on ERP and related IT systems. That spending paved paths – sometimes, it paved cowpaths – and made it harder for traditional publishers to adjust to a world in which the minimum viable product might not have an identifier at all.
Systems investments made to create and track dumb, one-way products are fundamentally incapable of fostering the two-way dialogue that smart products and services engender.
In 1998, then-Wired executive editor Kevin Kelly wrote “New Rules for the New Economy: 10 Radical Strategies for a Connected World”. Almost 15 years later, the book remains a worthwhile read. In it, Kelly says several things that are truer today than they were when he first observed them:
- “Value is carried by abundance, not scarcity, inverting traditional business propositions.”
- “As networks entangle all commerce, a firm’s primary focus shifts from maximizing the firm’s value to maximizing the network’s value.”
- “As innovation accelerates, abandoning the highly successful in order to escape from its eventual obsolescence becomes the most difficult and yet most essential task.”
Think about these things for a moment. “Value in abundance” – clearly, proprietary platforms that offer content from a subset of providers are at a disadvantage. “Maximizing the network’s value” favors those who can cost-effectively use existing tools to address a market need.
And “abandoning the highly successful” … well, that’s where I lost my optimism.
But it is the first of Kelly’s 10 rules that is most sobering for publishers considering a networked reality: “As power flows away from the center, the competitive advantage belongs to those who learn how to embrace decentralized points of control.”
Kelly’s thinking here starts with work done in the mid-1980s by Saltzer, Reed and Clark, who wrote “End-to-End Arguments in System Design”. They observed: “The intelligence that matters most exists in boundless variety at the ends of a network, rather than in the mediated systems in the middle”.
They went on to claim: “Therefore, network protocols should be designed primarily as means for those ends, rather than to serve the parochial interests of intermediary operators.”
If I could translate loosely and simply: the “ends of the network” are readers, information consumers and recombinant opportunities – communities of identified and latent content requirements.
For our purposes, the “mediated systems in the middle” are traditional publishers, though they are also the telcos and ISPs who face similar, perhaps equally daunting challenges.
And the “protocols designed primarily as means for those network ends” … that’s what we got with the Internet. That’s our Sea of Stories.
This is why we struggle with networked publishing: we keep trying to apply our models to the network, when what we need to do is apply the network’s models to our business. At the level of a user, networked publishing tells a story. It can be utilitarian, aspirational, inspirational or reflective, but it is not constrained.
A shift to networked publishing lowers barriers to the creation of content, but it amplifies the return for content providers who can leverage two-way communication and create, refine and evolve content products around the needs of the readers they serve. Rather than focus on filling shelves full of books, in physical or digital form, we can open up new markets by filling those shelves with solutions.
Some of those solutions will remain what we have come to know as books, but many more will be conceived, developed and delivered in forms and for purposes that we have yet to fully grasp. If an agile approach affords an opportunity to improve discovery, it also supports the ability to deliver what Helmut von Berg called “the quality of usability in non-predetermined user environments.”
We are already evolving from a world in which we decided what would be published, toward one in which we think about how what is published will be discovered, evaluated and consumed. If this is a future we want to enable, we have to be mindful in how we go about it.
You may begin to think that I’m saying, “We should reorganize the information age around the individual.” I’m not. The individual is doing that on her own. I’m talking about this today because I am hoping that we’ll be ready when the time comes. I’m sure someone with content solutions will be.
Think back to the principles of lean consumption – what the customer wants, when and where she wants it, delivered with a minimum of hassle and waste. Smaller-scale solutions – a recipe rather than a cookbook, continuing education rather than an MBA – increasingly pose threats to established players.
The answer involves using content as a means to build and serve communities of like interest. The Internet helps publishers reach widely dispersed, even global markets in ways that were not possible before. But simply putting content on the web is not enough.
There is already an abundance of content, and it will continue to grow, making discovery harder and marketing more expensive. Publishers instead need to look at their work as “community organizers”, investing in the development, management and sustainability of groups affiliated by place, purpose or preference.
Those communities need help forming boundaries (defining what they are, as well as what they are not), advancing community awareness and consciousness and (by a variety of means) negotiating new ways of thinking and acting. These are deliberate efforts, not accidents or opportunities to exploit a near-term advantage.
Those efforts can be rewarded as publishers learn to nurture the communities they serve, providing both goods (physical and digital books, for example) and services (continuing education, or opportunities to meet virtually or face-to-face, to name some options) that meet those communities’ needs. This is a significant shift from the roles most publishers have played in the past, although there are some examples (Baen Books in the science fiction community, O’Reilly Media for technology publishing) that can be instructive.
Echoing a theme from “The opportunity in abundance”, we need to define the purpose of publishing, on a scale that extends from utilitarian to transformative (both valid ends of a spectrum). That’s distinct from the act of publishing, which is our historical end point and validation, but one that has diminishing value.
Waiting until the market shows itself is a high-risk strategy. We need to prepare for the networked present in at least three ways:
Standards. Beyond product-level identifiers, we need a much more robust and extensive use of internal tagging. RDF, ISNI and ISTC provide some examples, but we need greater clarity to guarantee access and interoperability.
Structure. If we’re serious about creating, managing and delivering a minimum viable product that meets market-determined requirements, we are going to have to develop, partner on, or adapt to systems and structures that make content acquisition and monetization possible at levels more granular than most publishers have ever considered.
Sense. Because success no longer emanates from a series of well-planned, top-down efforts, publishers will need to develop a market understanding and the ability to develop and manage community. These skills can help them prepare for and address consumer needs that are not yet articulated.
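To make the Standards point a bit more concrete, here is a minimal, purely illustrative sketch of what chapter-level tagging might look like, expressed in RDF (Turtle syntax) using the Dublin Core and schema.org vocabularies. The URIs, names and identifier values are hypothetical, and this is only one of many ways such a record could be modeled:

```turtle
# A minimal, illustrative sketch of chapter-level tagging in RDF (Turtle).
# All URIs, names and identifier values below are hypothetical.
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix schema:  <http://schema.org/> .

<http://example.org/works/sea-of-stories/chapter-3>
    a schema:Chapter ;
    dcterms:title "Chapter 3" ;
    dcterms:isPartOf <http://example.org/works/sea-of-stories> ;
    # An ISTC identifies the textual work itself, independent of any container.
    schema:identifier "ISTC 0A9-2012-00012345-7" ;
    schema:author [
        a schema:Person ;
        schema:name "Jane Author" ;
        # An ISNI identifies the contributor across works and publishers.
        schema:identifier "ISNI 0000 0001 2345 6789"
    ] .
```

The particular vocabulary matters less than the principle: identifiers and relationships travel with the content, so that chunks can be found and used “in accordance with expectations”, to borrow Helmut’s phrase.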
These initiatives demand broad-scale change, but not universally. Authors and customers are not the ones at risk. Rather, publishers and in some cases technology vendors face the greatest threat, as authors and consumers develop their own coping strategies around slower-moving content providers.
Investing in discovery standards, content structure and the maintenance of a market-facing (rather than a product-centered) sensibility is a hedge toward long-term sustainability.
The traditional functions of a publishing repository – preparation, management and monetization of content – will need to be maintained at a much greater level of detail and supplemented by a new skill that may be available only to those with sufficient scale: market insight. Without it, authors will have much more reason to sell directly.
Fixed-format sales – physical and digital containers – will persist, but prices will likely decline, with margins shifting to publishers who can monetize the components of a customer-valued minimum viable product. The ability to make that happen will increasingly accrue to content providers who cultivate communities.
At the start of my talk, I invoked Marc Andreessen and his view that “it has become economically viable to build huge companies in single domains.” I alluded to five bets that Andreessen had made in the last 20 years. At a high level, these are his bets:
- “Everyone will have the web” (1992)
- “The browser will be the OS” (1995)
- “Web businesses will live in the cloud” (1999)
- “Everything will be social” (2004)
- “Software will eat the world” (2009)
I don’t think it’s hard to see what happened with the first four ideas. By 2015, we expect that half the world’s population will be connected to the web. These days, I need not make a case for browsers.
Cloud computing, the impact of social – these are givens.
Which brings us to Andreessen’s most recent bet: “Software will eat the world.” In the Wall Street Journal last year, Andreessen claimed that “Prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses.”
That’s something we’ve heard before. But here’s Andreessen talking about something we feel more than we discuss or acknowledge:
“Today, the world's largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.”
His idea reminds me of something Richard Nash asked last month at a NISO conference in Boston: “What if the book is the algorithm?”
If it is, or if it can be, we have the tools now to serve widely dispersed, networked audiences in ways that would never have scaled in an earlier era. These tools are not products; they are vehicles to reader-valued outcomes.
As Kevin Kelly claimed, “abandoning the highly successful in order to escape from its eventual obsolescence becomes the most difficult and yet most essential task.” If you’re part of the traditional order, it’s hard to imagine that this is the week you need to decide, but … maybe it is.
In his recent book, “The Intention Economy”, Doc Searls captured the principles of network design in three simple statements:
- Nobody owns it
- Everyone can use it
- Anybody can improve it
Think about those ideas for a moment. For decades, perhaps centuries, the primary platform for publishers and their supply-chain intermediaries relied on the ability to exclude. Now, we’re starting to see the dominance of a platform that includes everything and excludes nothing. In return, we get access to global communities and the ability to meet latent desires.
We can “pre-empt and co-opt”, resisting the change and buying time, perhaps even some short-term wins. Or we can learn the new rules and prepare for the opportunities inherent in networked publishing.
I hope we do the latter, because there are plenty of boneyards we don’t want to end up in.
An additional note: I've closed comments early on this post, after several days in which it attracted dozens of spam comments. Apologies for any inconvenience; feel free to post a comment elsewhere and I'll move it here, if you want.