
The future of content, Part 2

(This is the second in a series of posts about the future of content creation and sharing online. Part 1 contains my original discussion, while Part 3 considers monetization.)

Yesterday I imagined a web architecture that depends on individual pieces of highly tagged content, rather than streams of content. Today I’d like to expand on that.

Right now when a creator posts something to the web, they must take all their cues from the environment in which they are posting. YouTube has a certain category and tag structure. Different blogging software handles post tagging differently. News organizations and other media companies have their own specialized CMSes, either built by third parties, built in-house, or built by third parties and then customized. This ultimately leads to content that is typically only shareable through linking, copy-and-paste, or embedding via a content provider or CMS’s proprietary solution.

None of this is standardized. Different organizations adhere to different editorial guidelines, and these likely either include different rules for content tagging or neglect to discuss content tagging at all. And of course, whether content posted by individuals gets tagged at all depends on each person's time and interest in semantic markup.

The upshot is that there is no way, other than through a search engine, to find all content (not just content from one specific creator) that relates to a certain keyword or phrase. And since content is tagged inconsistently across creators, and spammers flood the web with useless content, search engines are a problematic solution to content discovery.

In my idealized web, creators would adhere to a certain set of standards when posting content. The content posting interface would automatically give each section of content its own unique identifier, and the creator would be able to adjust these for accuracy: marking an article as an article, marking the title as the title, and making sure each paragraph was denoted as a paragraph. If this sounds like HTML5, well, that's intentional. I believe that in the interest of an accessible, contextualized web of information, we need all content posting interfaces to conform to web standards (and we need web standards to continue to evolve to meet the needs of content).
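To make that a little more concrete, here's a rough sketch (in Python, with invented field names; none of this is a real API) of what such a posting interface might produce behind the scenes: every unit of content gets its own identifier plus a semantic role, so the structure travels with the content.

```python
import uuid

def tag_content(title, paragraphs):
    """Sketch: wrap a submitted article in semantic units, each with
    its own unique identifier. Hypothetical structure, not a real API."""
    def unit(role, text):
        return {"id": str(uuid.uuid4()), "role": role, "text": text}

    article = unit("article", None)
    article["children"] = [unit("title", title)] + [
        unit("paragraph", p) for p in paragraphs
    ]
    return article

post = tag_content("The future of content", ["First thought.", "Second thought."])
roles = [child["role"] for child in post["children"]]
# roles == ["title", "paragraph", "paragraph"]
```

The point of the sketch is only that identity and meaning are assigned at posting time, not reverse-engineered later by a search engine.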

Further, I think such systems should tag each unit of content such that the context and sharing and linking history of that unit of content can be logged. This would provide extraordinarily rich information for data analysts, a field that is already growing and would explode upon adoption of this model.
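What might that logging look like? Here's a minimal sketch, assuming an append-only history per content unit (the event names and fields are my own invention):

```python
from datetime import datetime, timezone

class ContentLog:
    """Sketch: append-only history of where a content unit has been
    shared or linked. Hypothetical event names and fields."""
    def __init__(self):
        self.events = {}  # content_id -> list of event dicts

    def record(self, content_id, action, context):
        self.events.setdefault(content_id, []).append({
            "action": action,    # e.g. "shared", "linked", "remixed"
            "context": context,  # e.g. the site or stream involved
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def history(self, content_id):
        return self.events.get(content_id, [])

log = ContentLog()
log.record("unit-123", "shared", "example-stream")
log.record("unit-123", "linked", "some-blog")
```

Every share appends an event rather than copying the content, which is exactly the trail a data analyst would want to mine.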

In my vision, content would not be dependent on an individual or an organization to host it on a website at a particular IP address. Instead, there would be independent but interconnected content repositories around the world where all content would reside. “Permalinks” would go straight to the content itself.
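One way to get permalinks that point at content rather than at a host is content addressing, the technique systems like Git use: derive the link from the bytes themselves. A tiny sketch (the "content:" scheme is invented for illustration):

```python
import hashlib

def permalink(content: bytes) -> str:
    """Sketch: derive a location-independent permalink from the
    content itself, so any repository holding these bytes can serve it."""
    digest = hashlib.sha256(content).hexdigest()
    return "content:" + digest

link = permalink(b"Hello, web of content.")
# The same bytes always yield the same permalink, wherever they live.
```

Because the address depends only on the content, it keeps working no matter which repository, or how many of them, actually stores the bytes.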

Browsers would become content interpreters, bringing up individual pieces of content in a human-comprehensible way. Users could have their own browser settings for the display of different kinds of content. Theming would be big. And a user’s browser history could allow that browser to suggest content, if the user opted in.
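A browser-as-interpreter could be as simple as applying the user's own display settings to each unit's semantic role. A toy sketch, with invented settings keys:

```python
# Sketch: render the same content unit differently depending on the
# user's display settings. The settings keys are illustrative only.
def render(unit, settings):
    if unit["role"] == "title":
        marker = settings.get("title_marker", "# ")
        return marker + unit["text"]
    return unit["text"]

unit = {"role": "title", "text": "The future of content"}
plain = render(unit, {})                        # "# The future of content"
themed = render(unit, {"title_marker": ">> "})  # ">> The future of content"
```

The content never changes; only the interpretation does, which is what would make theming a user-side choice rather than a publisher-side one.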

But websites would still exist; content interpretation would not be the sole domain of browsers. Rather than places where content is stored and then presented, websites would be contextualized areas of related content, curated by people or by processes or both. Perhaps a site owner would always be able to post and remix their own content, but would need to acquire a license to post or remix someone else’s. Perhaps different remix sites would pop up, sites with in-browser video and image editing, that would allow users to become creators. All remixes would become bona fide content, stored in the repository; anyone could simply view the remix from their browser, but community sites could also share streams of related remixes.

With properly tagged content that is not tied to millions of different websites, content streams would be easy for anyone to produce. Perhaps browsers would facilitate this; perhaps websites would do so; perhaps both. The web would shift from being about finding the right outlets for content to finding the right content interpreter to pull in the content the user wants, regardless of source.
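In this model a "stream" stops being something a platform owns and becomes just a query over the shared repository, runnable by a browser, a website, or anything else. A sketch with illustrative data:

```python
# Sketch: a content stream as a tag query over a shared repository.
# All names and data here are invented for illustration.
repository = [
    {"id": "a1", "tags": {"news", "technology"}, "title": "Tagged web"},
    {"id": "b2", "tags": {"media"},              "title": "Remix culture"},
    {"id": "c3", "tags": {"technology"},         "title": "Browser themes"},
]

def stream(repo, *wanted):
    """Return every unit carrying all the requested tags,
    regardless of who created it or where it was first posted."""
    wanted = set(wanted)
    return [unit for unit in repo if wanted <= unit["tags"]]

tech = stream(repository, "technology")
# matches units "a1" and "c3"
```

Anyone who can phrase the query gets the stream; no outlet sits in the middle.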

Such a system would have "social media" aspects, in that a user could set their browser or favorite content interpretation website to find certain kinds of content that are shared or linked by their friends, colleagues, and people of interest to them. This information, of course, would be stored with each piece of content in the repository, accessible to everyone. But users would also be able to opt out of such a system, should they wish to share and remix without having their name attached. The rest of the trail would still be there, linking from the remix to the original pieces, such that the content could be judged on its worth regardless of whether the creator was "anonymous user" or a celebrity or a politician or a mommy blogger.
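The key property here is that attribution and provenance are separate: a sharer can withhold their name without breaking the trail back to the originals. A minimal sketch, with hypothetical field names:

```python
# Sketch: a remix records links back to its source units, while the
# creator's name can be withheld without losing the lineage.
def make_remix(remix_id, sources, creator=None):
    return {
        "id": remix_id,
        "sources": list(sources),                # trail to the originals
        "creator": creator or "anonymous user",  # opt-out of attribution
    }

remix = make_remix("r9", ["unit-123", "unit-456"])
trail = remix["sources"]
# The remix is anonymous, but its lineage is intact.
```

Judging content on its merits becomes possible precisely because the lineage survives even when the name does not.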

Under this sort of system, content creators could be as nit-picky about the presentation of their content as they wanted. They could be completely hands-off, submitting content to the repository without even creating a website or stream to promote or contextualize it. Or they could dig in deep and build websites with curated areas of related content. Media companies that produce a lot of content could provide content interpretation pages and content streams that take the onus of wading through long options lists away from the user and instead present a few options the creator thinks users might want to customize. The point is, users would be able to customize as much as they wanted if they dug into the nitty-gritty themselves, but content creators would still be able to frame their content and present it to casual users in a contextualized way. They could also use this framework, along with licensing agreements, to provide content from other creators.

Comments would be attached to content items, but also tagged with the environment in which they were made, so if a comment was made on a company's website, that information would be known, but anyone could also see the comment on the original piece of content. Content streams made solely of comments would be a possibility (something like Branch).

This system would be extremely complex, especially given the logging involved, but it would also cut down on a lot of duplication and IP theft. If sharing is made simple, just a few clicks, and all content lives in the same place, there’s no reason for someone to save someone else’s picture, edit out the watermark, and post it elsewhere. Since all content would be tagged by author, there would actually be no reason for watermarks to exist. The content creator gets credit for the original creation, and the person who shares gets credit for the share. This would theoretically lead to users following certain sharers, and perhaps media companies could watch this sort of thing and hire people who tend to share content that gets people talking.

Obviously such a paradigm shift would mean a completely different approach to content creation, content sharing, commenting, and advertising…a whole new web. I haven’t even gotten into what advertising might be like in such a system. It would certainly be heavily dependent on tagging. I’ll think more about the advertising side and perhaps address it in a Part 3.
