Everything happens so much

For years now, I have used Twitter as a microblog. I started on a public account in February of 2007 under my real name. During GamerGate, circa 2014, my Twitter usage dropped dramatically and I spent more time on Tumblr and Facebook. But in the late 2010s I split off a couple of other Twitter accounts: one public, not under my real name, for engaging in fandom stuff after Tumblr banned explicit content; and one private, also not under my real name, for sharing personal stuff after Facebook’s lack of ethics spurred me to close my account.

Basically, I used that third Twitter account as my journal, as the place where I connected with people I cared about. That’s where my photos and quick blurbs about my life went. I shared thoughts and feelings and big decisions. I didn’t have everyone there, but I had a lot of people. It was private, and it was comfortable.

That’s all changed now that Twitter is run by a thin-skinned fascist. I no longer feel safe sharing personal life details, even on a private account.

In considering what I want to do about this, I have finally accepted that I can’t trust third-party sites with my personal information. It’s taken me a long time to embrace this truth, because it’s so easy to use social media, and social media is where most people are. But this kind of thing is just going to keep happening.

So what I’m going to do instead is start using this blog again. Whenever I feel like rambling about something or sharing something, I’m going to post here.

WordPress doesn’t have robust content permissions out of the box: there’s no way to set up roles for user accounts and then limit certain posts to certain roles. My only option that doesn’t involve installing a plugin is to password-protect any posts I don’t want the general public to see. I’m not keen on delegating security to a plugin, so for now I’m going to use the password method.

If you see a password-protected post you’d like to read, and you’re a friend of mine, reach out to me directly and I’ll share the password.

Getting off oil

backed-up interstate traffic
Traffic. September 22, 2012, Atlanta. Copyright Heather Meadows

Occasionally I ponder whether the US will ever get off oil as its primary source of transportation fuel–in other words, whether we will trade our gas-burning cars for other modes of transportation, like trains and subways for longer distances and walking and cycling for shorter ones. Electric cars are perhaps a more realistic possibility, given the way our current infrastructure grossly favors cars. To switch to electric cars, we wouldn’t need to redesign roads or add rail; we’d just need to convert existing gas stations into charging stations.

I got to thinking about this today thanks to my Twitter friend Ara posting a link to an article about world gas prices, which includes this intriguing paragraph:

The world’s most expensive gas, according to the survey, can be found in Norway, where drivers pay $10.12 for a gallon of premium gas. While the country has significant oil reserves of its own, instead of using the money to subsidize vehicle fuel it goes to fund social spending such as free college education and national infrastructure.

This made me wonder if we could subsidize the shift away from oil by stepping down our subsidization of it. According to a study by the Environmental Law Institute, the US spent approximately $72 billion from 2002 to 2006 (roughly $14 billion a year) subsidizing fossil fuels. Unfortunately, this figure isn’t broken out by fuel type, so we have to assume some of this money went to coal as well. And if the move were targeted toward reducing oil use, I can’t imagine ending coal subsidies at the same time. Any replacement transportation, whether it be trains or electric cars, would depend on electricity, which in this country is largely generated through burning coal. (Of course, burning coal, another fossil fuel, has its own health, environmental, and non-renewable resource issues, and eventually we’ll have to get off coal as well.)

The upshot is, I’m not sure how much money we’d be able to free up for infrastructure change. And then there’s the question of how much that change would cost. Going completely to a rail-based system would be extraordinarily expensive; entire cities would have to be redesigned. Sprawl has spaced everything out so much that it’s rare for someone to be able to walk or cycle to their nearest grocery store, let alone to work. City planners would have to come up with ways either to link neighborhoods to shopping centers or to redistribute commodities along communal travel routes.

It’s hard to imagine how this would play out in rural areas, where people drive ten, twenty, or forty minutes just to get into town. Obviously there wouldn’t be a rail system out to those people’s distant houses, so it doesn’t seem possible to completely replace cars with mass transit there. A friend once showed me an overhead view of farming communities in South America in which homes are built around a cul-de-sac and the farms radiate out around them. While this model clusters rural residents together and makes communal transportation more feasible, it would require completely redistributing people’s land, and it would also require a huge cultural shift away from our current style of farmhouse planted in the middle of its own acreage.

Each city, town, incorporated area, and unincorporated area would have to come up with its own plan to reduce oil consumption. I can’t see this happening on a broad scale without a federal mandate–after all, individual states also subsidize oil–but then there would need to be federal oversight. I expect the same states that oppose national health mandates would oppose national transit mandates, so any such movement would take years just to get started.

Without more information, I can’t say whether or not this sort of thing will ever happen in the US. But I do hope our elected officials are at least thinking about it. I’d love to see us get off oil, for diplomatic as well as environmental reasons.

The tablet-laptops are on their way

I previously wrote about my new obsession with tablet-laptop hybrids, and my specific fascination with the Acer Iconia W5. Now, thanks to Sean and Ars Technica, I am aware of the Ativ Smart PC 700T1C from Samsung.

Lookit.

Samsung Smart PC Pro 700T

It’s sleeker than the W5. It also supports 64-bit Windows and has an Ivy Bridge processor. The resolution is better–1920×1080 vs. the W5’s bizarre 1366×768. The max 4GB of RAM is a concern, but this problem seems to plague all tablets.

I like that it works with a stylus (“pen”). The reviewer had concerns about the size, but I guarantee my hands are smaller than his. You can’t flip the screen around on the dock, but as I mentioned in my post on the W5, that feature was more of a bonus than a must-have.

It bothers me a little that it appears you have to buy the pen and the keyboard dock separately, meaning the list price is misleading. But I don’t think they add too much to the price–the stylus certainly doesn’t.

Ultimately I’d love to see one of these in person and get a feel for how it would be to own one.

Keyword hell

mass of unnecessary keywords

I have spent a lot of time these past few months organizing my personal photo site at SmugMug. I made sure all the photos I wanted uploaded were uploaded; I created new Categories and Subcategories and moved galleries around; I did a bunch of captioning and keywording that I’d put off (with still more left to go).

Then I thought I’d organize my keywords to make my photos easier to browse. I knew I had some photos with the keyword “southcarolina” while others used “south carolina”, for example; I was able to go through and bulk change all of these to match each other.

keywords-130415

In the process of doing this, though, I discovered something horrible.

You see, for as long as I’ve been taking photos, I’ve been renaming all my files to use a variation on the filename template of my first digital camera, the Olympus C3030 Zoom. This means photos taken today will have filenames like P04150001.jpg. I did this so I could easily view my photos locally and sort them on SmugMug in the order of my choosing–simply sorting by date taken doesn’t always work, especially with multiple photos from the same day but different cameras, or photos copied from someone else, or scanned files.
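
The renaming itself is mechanical–a quick Python sketch of the idea (simplified, not my actual process):

    import os
    from datetime import date

    def rename_photos(folder, taken=None, start=1):
        """Rename every .jpg in a folder to the old Olympus C3030-style
        template: P + month + day + a four-digit sequence, so the first
        photo from April 15 becomes P04150001.jpg."""
        prefix = (taken or date.today()).strftime("P%m%d")
        names = sorted(f for f in os.listdir(folder)
                       if f.lower().endswith(".jpg"))
        for seq, name in enumerate(names, start=start):
            new_name = "%s%04d.jpg" % (prefix, seq)
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, new_name))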

Some time back, SmugMug implemented a change that now causes keywords to be automatically generated from filenames. And ever since then, each of my photos has been tagged with an unnecessary keyword. This has resulted in a ludicrous list of keywords; I’ve made a screen capture of about one-fourth of it.

I’m not holding this against SmugMug; it seems like automated keywords would be useful for many people, and I am able to turn the feature off (and I did as soon as I realized what was happening). Unfortunately, though, this unique set of conditions has left me with a metric ass-ton of work.

You see, there is no way to manage many keywords globally at once–deleting, renaming, and the like. You can manage one keyword at a time across the whole site, which is how I changed “southcarolina” to “south carolina” everywhere. And you can of course manage keywords by gallery, which is how I assign and edit them in the first place. But you can’t, say, select a group of keywords and hit a delete button.

What I can do is click on the keyword, which takes me to the photo, then click on the “See photo in original gallery” link. From there I can use the Caption / Keyword tool to edit all the keywords in that gallery.

However, even from that screen there is no way to delete keywords in bulk. I can’t use wildcards to find and replace or find and remove keywords. And though SmugMug has a “Remove Numeric” keyword option, it only works with keywords that are made up of all numbers, not alphanumeric keywords. So I must scroll through the entire gallery and manually delete the unnecessary keyword from each photo.
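
The maddening part is that identifying the junk is trivial; it’s the lack of bulk editing that hurts. Here’s the entire filter as a Python sketch, assuming the keyword lists could somehow be handed to a script as plain strings:

    import re

    # Keywords auto-generated from my renamed files look like
    # "p04150001": a "p" followed by eight digits.
    FILENAME_KEYWORD = re.compile(r"^p\d{8}$", re.IGNORECASE)

    def real_keywords(keywords):
        """Drop filename-derived keywords; keep everything else."""
        return [k for k in keywords if not FILENAME_KEYWORD.match(k)]

    # real_keywords(["south carolina", "p04150001"]) -> ["south carolina"]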

Take a look at the keyword screenshot again and try to imagine how much work this is going to be.

Actually, I can help you with the scope. I highlighted all the keywords and pasted them into Word to get a count.

There are 9,638 keywords, each of which applies to either a single photo or a small handful of photos.

Nine thousand, six hundred thirty-eight.

That’s nearly ten thousand keywords, with even more photos, scattered across hundreds of galleries.

Yeah, this is going to be fun.

Tablet-laptops

My current computer is a laptop from Sager. It’s fairly large. I’ve taken it on trips home to Kentucky, and I even lugged it to a web convention once, but given its size I prefer to use it at home, with speakers, a mouse, and a keyboard plugged into it. The screen is nice and big and I use the keyboard and mouse to sit a respectable distance back from it. The speakers add to the desktop feel. This is not a computer I really like taking places; I have to detach everything, crawl under my desks to unplug and slip out cords, and then pack it all up in a large laptop bag. It’s very heavy.

For some time now I’ve dreamed of owning a tiny laptop, perhaps a netbook, something I could use for the general mobile computing currently achieved through my iPhone. While the phone is very convenient for things like Twitter and Facebook and even email, I hate browsing the web or writing documents on it. After using the phone for extended periods, I sorely miss having a keyboard.

I’ve never particularly wanted a tablet. They’ve always seemed inconvenient to me; you have to hold them, typically, unless you have a stand, and the stands generally aren’t as adjustable as I’d like. I usually watch movies and such on my laptop or TV, and I do most of my digital reading on my iPhone. (In fact, though I own a Kindle, I’ve read most of my Kindle books on the iPhone Kindle app.) The only thing I’ve ever wanted a tablet for is cooking: I’ll pull up a recipe on my phone sometimes, but the screen is small and shuts off automatically, and as I don’t feel like adjusting settings every time I cook, I end up having to unlock the phone multiple times throughout the cooking process. A tablet, I’ve thought, would be nicer, something I could mount in the kitchen, something with a larger screen that would stay on and be easy to read. But since that would pretty much be the only thing I’d use it for, I could hardly see how that would be a worthwhile purchase. It would be better, I thought, to get that tiny laptop.

I haven’t spent a lot of time looking for something, though. Ultimately I figured I could just deal with the hassle of moving my giant laptop when the situation warranted it. I had no pressing need for a spare laptop, anyway.

But then I was at Fry’s with friends last month and I saw the Acer Iconia W5.

Acer W510

Look at that thing.

It is gorgeous.

It’s a touchscreen Windows tablet with a keyboard dock. You can bend the screen back at any angle, twist it around, flip the keyboard to serve as an adjustable stand for tablet display and use. You can undock the tablet and just use it by itself. The keyboard layout is great, and the touchscreen means there’s no need for a mouse.

It’s tiny. It’s light. It would fit in a small bag. The display, at least to me, looks great.

Upon sighting this extraordinary creation I was filled with a technological longing I haven’t experienced in perhaps a decade. Sure, I like new things, but for the most part I’m pragmatic about gadgets. I view everything with a healthy dose of do I really need that?

But this. This would do everything.

I could stand it up in the kitchen for recipes. I could take it to Kentucky instead of lugging my huge laptop. I could bring it to conferences and not take up egregious amounts of space. Heck, I could take it to coffee shops. I could run regular desktop applications, not just apps. I could install programs.

Sean has no problem budgeting for something like this, but he said I should look into other options to make sure we find the best one. Since then I’ve done a little poking around here and there, and I’m not really finding anything like the W5. There are plenty of tablets with stands, and some that come with keyboards, and some that have keyboards built into their covers, but nothing quite like this. No adjustable hinge for the screen. No built-in keyboard with standard layout (and extra battery).

Ars Technica’s review of the W5 isn’t exactly glowing, however. There are concerns about the hardware not supporting 64-bit Windows and about the plastic construction. The performance apparently isn’t as great as that of tablets with Intel’s Ivy Bridge processor.

I read a review of the Microsoft Surface Pro by Gabriel from Penny Arcade, and he says that device is great for hand-drawing and gaming, something I’m not sure the W5 can do. Then again, I don’t do either of those things. (Ars has some issues with the Surface Pro, and it doesn’t have a keyboard dock, just a case with a kickstand.)

The Asus VivoTab RT has a keyboard dock, and its construction seems to be much better than that of the W5, but it’s a Windows RT machine, meaning it runs apps, not full programs.

This brief rundown of upcoming tablets seems to indicate that there are or will be plenty of these “hybrids”, which the author says can be classified as tablets with keyboard docks or as laptops with removable screens.

At this point I’m not sure there’s anything else out there that would give me what the W5 offers. But it seems like this trend of hybrids is only beginning, so perhaps I should wait until more products are on the market.

Based on my research, I can at least conclude that I want the following features:

  • Touchscreen tablet
  • Keyboard dock
  • Form factor adjustments better than a simple kickstand; a stiff hinge is necessary, and the option to swivel and/or present the screen differently would be a bonus
  • Windows 8 full (not RT)

We have the technology

I often feel that there are so many things we could be doing. So many things we are capable of. So many things we just aren’t achieving that we should be readily able to.

Sometimes I discover that we are at least partially doing those things, but we’re not doing them in a way that people know about or can find or share easily.

This morning I heard a tornado siren. It’s only the second time I’ve heard it since I’ve lived here. The first time, nothing happened, so this time, I didn’t think much of it. An hour or so later I saw a tweet remarking on Atlanta’s “tornado-y” weather, so I thought I’d see what the deal was.

I went to my go-to weather site, The Weather Channel’s weather.com, and clicked on my local forecast, which is saved in a tile at the top of the page. Then I clicked on the Alerts, and in the drop-down I saw Tornado Watch until 4pm. That was all I needed to know, so I left the page.

Some time later, I saw this tweet:

If you follow that image link, you get…a cell phone picture of a TV screen.

A cell phone picture. Of a TV screen.

I understand wanting to share important information quickly. Actually, I think the ability to do that is rather important. But it astonished me that the most efficient way to rapidly share vital information online was apparently to post a picture of it.

We have the data. We have the technology. We can do better.

I went poking around weather.com to find the source of that image–better yet, something that would stay up-to-date no matter when someone got the link. First I went to the Atlanta forecast page. I clicked on things, but never saw a map like the TV picture. I did find a list of affected counties, which is useful, especially for people who can’t see pictures. But I wanted to duplicate the experience a viewer of the picture would have–duplicate and enhance it.

Finally I clicked on the Map link in the sidebar, and that took me to the interactive Weather Map. This was the same thing I’d seen on the forecast page and ignored because it didn’t have the tornado warning areas highlighted. But I gave it a chance; I clicked on Map Options. Scrolling all the way to the very bottom, I finally found the Weather Alert Overlays, and I clicked the radio button next to Severe Alerts.

And there, at last, it was.

Weather Map screencap 01/30/2013

I quickly sent a link and instructions in response to the tweet. Then it occurred to me to check the link on my phone. I opened Tweetbot and tapped the link and sure enough…the interactive map doesn’t work in iOS, because it uses Flash.

Sigh.

Here’s what I want. I want a map that works regardless of the device I’m using. I want the ability to share a direct link to the view I am using–in this case, Severe Alerts–not just a generic link to the default map (which is what you currently get from those sidebar social media buttons). I want a forecast page that calls up versions of the map that are relevant to any weather alerts currently in effect.
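
None of this is technically hard. If the map’s state lived in its URL, every view would automatically be a shareable link. Here’s the whole idea as a Python sketch (the parameter names are mine, not weather.com’s):

    from urllib.parse import urlencode

    def map_view_link(base_url, **view_state):
        """Build a link that reproduces the current map view.
        The parameters are hypothetical, purely for illustration."""
        return base_url + "?" + urlencode(sorted(view_state.items()))

    # map_view_link("http://www.weather.com/maps",
    #               overlay="severe_alerts", center="atlanta", zoom=8)
    # -> "http://www.weather.com/maps?center=atlanta&overlay=severe_alerts&zoom=8"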

As I said, we are capable of so many things. So many useful things. So many things that would be a genuine help to society.

The thing is, if we try to do those things, we can’t just throw something together and say we’re done. We have to make it easy.

Otherwise, people will skip right past it and keep taking pictures of their TVs.

Please help fund this Kickstarter for diabetics

I am so inspired by Nial Giacomelli’s The Diabetic Journal. It began as a personal project, an application to help manage all the overwhelming variables in a diabetic’s life, and it has grown beyond personal use into something Mr. Giacomelli wants to distribute absolutely free as a smartphone app. He’s made no profit on it and will make none. He’s looking to Kickstarter simply to allow him to focus on the app, to get it out the door with more features and a more streamlined UI.

I have no horse in this race; I’m not diabetic. But I’ve been through a lot of health-related crap. I can only imagine what it must be like for diabetics to have to manage their illness every single day for the rest of their lives.

This app would help them. And it would be free.

But things are looking bad. The word’s not getting out, or people don’t understand, or some other problem is keeping the project far from its goal.

I’m a backer and I want The Diabetic Journal to get my money.

If you’ve got anything you can send, anything at all, please. For once, here’s a Kickstarter that isn’t about personal profit or entertainment or special perks. It’s about helping people.

Isn’t that something that’s really worth Kickstarting?

Back The Diabetic Journal

A breath of fresh air

I spent this past week in Kentucky with my family, and while there I didn’t check ADN, Twitter, or Google Plus at all. I got on Facebook about three times total, to check private messages and make sure no one had posted anything important to my timeline. The day after the election I tried Facebook again, but a quick scan through the news feed made me wonder why I ever used Facebook to begin with.

I realize a lot of this is just election exhaustion, and that will pass. But I truly enjoyed spending a week not checking social media obsessively. I left my phone in my purse most of the time and didn’t use it for anything but one phone call and maybe three text messages. (I may have also played a turn in chess, but I don’t remember.) I also didn’t unpack my computer right away, and when I did I mostly used it to review Japanese on WaniKani and to watch lectures and do assignments for my Coursera Python class. I also added to my Goals document, which I started working on in October. It’s a simple list of ideas I’ve had that I want to see to fruition.

The rest of my time was spent with family members, talking or playing games or enjoying meals. I got to celebrate Halloween, Connor’s 13th birthday, an early Thanksgiving, and Daphne’s second birthday. I didn’t really go anywhere beyond my parents’ house and my brothers’ houses, but it was relaxing, and I didn’t get too stir-crazy. (When I started feeling antsy, AJ took me to a cool walking trail so I could enjoy the fall leaves. It totally rejuvenated me.)

While I was staying with my parents, I also wrote a few entries in a journal, by hand. It takes a lot longer for me to write by hand than it does for me to type, and I found myself crafting my sentences more carefully so I wouldn’t write anything poorly. I also found that I had no desire to share the brief brags, complaints, and jokes that I normally would post to social media without hesitation.

I used to despair that all my thoughts were lost to the ether. When social media came around, I thought it was my salvation. Finally there was a way to chronicle everything that went through my head. This was important to me, for some reason. I’ve always wanted other people to understand me, but I’ve rarely felt like anyone does. I suppose I thought the more I shared, the more others would learn about me, and maybe eventually they would come to understand me. (This might be a large reason why I have such a problem with lying or with being misrepresented.)

I’ve gone overboard with sharing here on the blog, and I think my social media participation is probably even worse. It’s so much easier. Just taking a week off from it, I feel very different…like I have so much more time.

I’m not sure yet what I’m going to do. As a professional in the web world, I probably need to maintain a social media presence of some sort. And it would help to stay on the cutting edge with things like ADN. But of all the social media I abstained from this week, the only one I really worry about quitting completely is Facebook, due, as I’ve said before, to the possibility of losing touch with far-flung friends and family. Maybe I will find a way to limit my participation, perhaps by scheduling a time each week or so to “catch up”. Or maybe I’ll do something else. For right now, I’m putting that decision off, as I don’t really have any desire to get back on social media.

Social media quandary

Some time ago, I reached a point of crisis with Facebook. I was (and am) terribly unhappy with the company’s lack of respect for its users. Facebook users are not the customer; they’re the product. Mark Zuckerberg has little respect for privacy and seems only interested in pleasing advertisers. While I realize Facebook needs to make money, I don’t think that should happen at the cost of people’s feeling of personal security.

However, despite that huge issue, I continue to use Facebook, because that’s where everyone is. Or, more specifically, that’s where a majority of my far-flung real life friends are. Facebook makes it simple for me to keep up with people I otherwise wouldn’t hear from for months, years, or at all. I have always been terrible with keeping up with people myself, so this has been a godsend. And through Facebook I have developed deeper friendships with people who were once simple acquaintances. I’ve planned travel. I’ve shared and received affirmations and support. Facebook is where I go for community. It’s not a paradigm that can be replicated.

Twitter, I’ve come to discover over the past few days of trying very hard not to use it, is also a non-replicable paradigm.

I never thought I would have to try to find an experience to replace what I have on Twitter. Unlike Facebook, where I reveal information only behind tiered walls of (questionable) privacy, my tweets have always been public. Anyone is welcome to them. I have very few real followers, but over the years since I joined in February of 2007, I have curated a following list of interesting, funny people and accounts, one that enriches my life with daily musings, links to important news articles, beautiful photos, and more. I’ve also enjoyed sharing my own thoughts and occasionally receiving feedback.

As Twitter works toward profitability, things keep changing. I had always believed Twitter was more interested in its users than Facebook was, that Twitter would ultimately have its users’ backs. But one thing always bothered me: Why, if Twitter still has all my tweets as it claims, won’t it let me have them?

Unhappy that my tweets were seemingly going into a void from which they could never be recovered, I recently set up a rule with If This Then That (IFTTT) that saves every tweet I post into a text file on Dropbox. With that in place, I was confident that, at least going forward, I would have access to my own content.
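
For what it’s worth, the same archiving can be scripted directly against the API. A rough Python sketch using the tweepy library, assuming registered-app OAuth credentials and ignoring pagination and error handling:

    import tweepy

    def archive_tweets(consumer_key, consumer_secret,
                       access_token, access_secret, path="tweets.txt"):
        """Append my most recent tweets to a local text file."""
        auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_token, access_secret)
        api = tweepy.API(auth)
        with open(path, "a", encoding="utf-8") as out:
            for status in api.user_timeline(count=200):
                out.write("%s\t%s\n" % (status.created_at, status.text))

As it turns out, though, the code was never the obstacle.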

But then Twitter changed its API terms for developers, directly affecting my solution. IFTTT sent me an email about it, directing me to the Developer Rules of the Road and specifically this paragraph under “Twitter Content”:

You may export or extract non-programmatic, GUI-driven Twitter Content as a PDF or spreadsheet by using “save as” or similar functionality. Exporting Twitter Content to a datastore as a service or other cloud based service, however, is not permitted.

This rather creepily makes it sound like my content, the stuff I write, belongs to Twitter, not me. And as the content belongs to Twitter, I apparently have no right to use a process to save it. I would have to manually copy and paste from the GUI, if I’m reading this correctly. They know no one’s going to actually do that.

I realize this section exists to stop people from cross-posting their tweets to other services (which also seems draconian, no matter how annoying I find cross-posted content), but it effectively locks me out of my own writing, again. Let’s say I instead decide to post on some other service that allows me full access to my content, and then cross-post to Twitter. I could save the original posts I write that way, but not replies. I also wouldn’t be able to save retweets, which, while secondary, provide context to what I’m writing and insight into what I was thinking about while writing.

When I read the email from IFTTT on Thursday, I tweeted a little about it with shock and dismay, and then stopped tweeting altogether. It’s been about three days…but it feels more like a month.

In the meantime, I did what I could to get the content I enjoy on Twitter elsewhere. I went over to Google+ and added everyone I could find. I even pulled in news organizations I’m interested in and removed them from Facebook–but it looks like most of them post more to Facebook than to Google+. Similarly, most of the people I followed on Google+ don’t post there much. The bulk of the content is back on Twitter.

I’ve also been using App.net Alpha and the iOS app Spoonbill to participate in the new App.net-powered community that I’ll just refer to as ADN for simplicity’s sake. (App.net has the capability to support multiple communities, though I’m not sure that’s been done yet.) While that community is interesting, it’s sort of weird. (One conversation I witnessed, Person A: “Don’t you have a personal lawyer?” Person B: “Of course; I have several.”) There are a few people who, like me, talk about their lives, but for the most part I see people talking about tech trends, social media theory, marketing, and occasionally politics. It’s good content, but it’s not everything I want. Not by a long shot. There’s no @Lileks there. Little to nothing about journalism, photography, design, language, culture, or travel. @Horse_ebooks is there, but I hate @Horse_ebooks. The people I actually know who have signed up haven’t posted much of anything. It feels like a large number of the active people on ADN live in the Bay Area, adding to the sort of tech elitist ambiance. I have had very few conversations there.

So no, ADN can’t replace Twitter for me, at least not now. There isn’t enough adoption, I suppose. I even sort of feel weird posting there, like I’m spamming up a special place with my worthless thoughts. Rather the opposite of how I assumed I would feel about using a paid service that puts the users first.

ADN can’t do it, Google+ can’t do it, and I refuse to change the way I use Facebook (especially since that would give Facebook more data about me). So it would appear that I have no choice but to use Twitter, at least in terms of reading.

I’ve heard rumors that Twitter will start allowing users to download their tweets by the end of the year. But rumors like that have existed for a while. I’ll believe it when I see it.

For now, I’ll probably keep reading Twitter. But I’m not sure I’ll be actually posting much there.

The future of content, Part 4

I’ve talked about redesigning the web into a collection of interconnected pieces of content, and I’ve discussed monetizing such a paradigm. Now I’d like to go further into the value this reconstruction would bring to content creators, sharers, and users.

The way the web works right now, content creators and sharers typically must either have their own website or use third-party services in order to build an audience and make money. Under this paradigm, the websites (or their content streams) are the main point of interest, and the onus is on the site owners and managers to “keep the content fresh”. In the case of businesses, this includes finding and hiring/contracting creators and negotiating licensing agreements with third-party content providers. The now-now-now pace puts pressure on creators to write something, anything, in order to keep people coming back to the site. This has resulted in a glut of content that is posted for the sake of having new content posted. SEO marketing has exacerbated the issue with content posted for the sake of higher search engine rankings. People are wasting more and more time reading navel-gazing content that adds little value to the human community.

With a web that is truly content-driven, the focus would shift from trying to keep thousands of disparate sites and streams “fresh” to trying to produce and share content that is meaningful, impactful, and important. With IP issues handled through robust tagging, content would be available for anyone to share. Licensing would be streamlined, and creators would be directly paid for their work. Media houses could more confidently keep creators on staff; sharing would provide an obvious metric of a creator’s value. Creators could focus on more long-form pieces, knowing that their existing work would continue to be shared and monetized. There would be less pressure to post something, anything, every day.

The web has suffered from the adoption of the “always on” mindset. If there is nothing new to report, there is no need to invent something to report. Someone, somewhere, is always producing content; it’s a big world. Rather than polluting millions of streams with junk, media companies, news organizations, marketers, and individuals should shift their focus to finding and sharing value. Simply aggregating RSS feeds or repurposing content the way we’ve been doing it so far is not enough; it does not meet the needs of the user and it does not ensure that content creators are paid for their work. We need to rebuild the system from the ground up.

The future of content, Part 3

Over the past two days I’ve described a new model for web architecture, one whose primary unit is an individual piece of content stored in a universal repository, rather than a product (page, feed, API, etc.) hosted on a web server. (Read Part 1; read Part 2.) Today I’ll discuss how such a system might be monetized.

Currently, content is shared in many disparate ways. The Associated Press has its own proprietary format for allowing other news sites to automatically repost its content; it also allows its lower-tier affiliates to manually repost (i.e., by copying and pasting into their own content management system), so long as the copyright notice remains intact. Sites pay to be affiliates. Bloggers, of course, have done the manual copy-and-paste thing for years; nowadays a pasted excerpt with a link to the original is considered standard, and this of course brings little money to the original creator. Video sites, too, have their own ways of allowing users to share. Embedded video advertising allows the content creator to make some money on shares…assuming someone hasn’t simply saved the video and reposted it. Data is far more difficult to share or monetize. Some sites offer an API, but few laypeople know what to do with such a thing. The typical social media way of sharing data is by posting a still image of a graph or infographic–not contextualized or accessible at all.

In a system where every piece of content is tagged by creator, wherein sharing of any type of media is simple, IP could be more easily secured and monetized. Content tags could include copyright types and licensing permission levels. A piece of content might, for example, be set to freely share so long as it is always accompanied by the creator’s advertising. Ads could be sponsorship watermarks, preroll video, display banners or text that appear within the content unit, or something else entirely. The content creator would determine what advertising would be available for each piece of content, and the content sharers would each individually decide what advertising they are willing to have appear, or if they’d rather purchase an ad-free license. Resharers who took the content from someone else’s share would not avoid the advertising choice, because while they would have found the content at another sharer’s site or stream, the content itself would still be the original piece, hosted at the original repository, with all the original tags intact–including authorship and advertising.
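
To make that concrete, here’s what per-unit licensing tags might look like as data, sketched in Python. Every field name is invented purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class ContentLicense:
        creator: str
        copyright_type: str        # e.g. "all-rights-reserved", "cc-by"
        free_share_with_ads: bool  # shareable if the creator's ads stay attached
        ad_formats: list = field(default_factory=list)  # e.g. ["preroll", "banner"]
        ad_free_fee: float = 0.0   # price of an ad-free sharing license

    @dataclass
    class ContentUnit:
        unit_id: str
        license: ContentLicense
        share_log: list = field(default_factory=list)  # history travels with the unit

A sharer’s software would read these tags and either keep the required advertising attached or pay the ad-free fee before presenting the content.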

Content could also be set to automatically enter the public domain at the proper time, under the laws governing its creator, or perhaps earlier if the creator so wishes.

The first step in making all of this work is to have all content properly tagged and a system wherein content tags are quickly updated and indexed across the internet. The second step would be making sharing the “right” way so easy that very few would attempt to save someone else’s content and repost it as their own. As I mentioned in Part 2, I’m imagining browsers and sites that offer a plethora of in-browser editing and sharing options, far easier (and less expensive!) than using desktop applications. Making sharing and remixing easy and browser-based would also cut down on software piracy. Powerful creation suites would still be purchased by the media producers who need them to make their content, but the average person would no longer require a copy of Final Cut Pro to hack together a fan video based on that content.

The kind of tagging I’m talking about goes somewhat beyond the semantic web. Tags would be hard-coded into content, not easily removed (or avoided by a simple copy and paste). A piece of content’s entire history would be stored as part of the unit. Technologically, I’m not sure what this would involve, or what problems might arise. It occurs to me that over time a piece of content would become quite large through the logging of all its shares. But making that log indivisible from the content would solve many issues of intellectual property rights on the internet today. Simply asking various organizations who host disparate pieces of content to tag that content properly and then hoping they comply will not lead to a streamlined solution, especially given the problem of “standards” (as spoofed by xkcd).
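
One way to make the log indivisible in practice is to chain each entry cryptographically to the one before it, the way append-only ledgers work. A toy Python sketch, reusing the ContentUnit idea from above and ignoring distribution and verification entirely:

    import hashlib
    import json
    import time

    def append_share(unit, sharer):
        """Append a share event whose hash covers the previous entry,
        so tampering anywhere breaks the chain. Illustrative only."""
        prev = unit.share_log[-1]["hash"] if unit.share_log else unit.unit_id
        event = {"sharer": sharer, "time": time.time(), "prev": prev}
        event["hash"] = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()).hexdigest()
        unit.share_log.append(event)

Whether a chain like this could stay compact and verifiable at the scale of the whole web is one of those open problems.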

With a system like this, the web rebuilt from the bottom up, there would be no need for individual content creators to reinvent the wheels of websites, APIs, DRM, and advertising. They could instead focus on producing good content and contextualizing it into websites and streams. Meanwhile, the hardcore techies would be the ones working on the underlying system: the content repository itself, the way streams are created, how tagging and logging occurs, tracking sharing, and so on. Media companies–anyone–could contribute to this process if they wanted, but the point is they wouldn’t have to.

The future of content, Part 2

(This is the second in a series of posts about the future of content creation and sharing online. Part 1 contains my original discussion, while Part 3 considers monetization.)

Yesterday I imagined a web architecture that depends on individual pieces of highly tagged content, rather than streams of content. Today I’d like to expand on that.

Right now when a creator posts something to the web, they must take all their cues from the environment in which they are posting. YouTube has a certain category and tag structure. Different blogging software handles post tagging differently. News organizations and other media companies have their own specialized CMSes, either built by third parties, built in-house, or built by third parties and then customized. This ultimately leads to content that is typically only shareable through linking, copy-and-paste, or embedding via a content provider or CMS’s proprietary solution.

None of this is standardized. Different organizations adhere to different editorial guidelines, and these likely either include different rules for content tagging or neglect to discuss content tagging at all. And of course, content posted by individuals is going to be tagged or not tagged depending on the person’s time and interest in semantic content.

The upshot is, there is no way, other than through a search engine, to find all content (not just content from one specific creator) that relates to a certain keyword or phrase. And since content is tagged inconsistently across creators, and spammers flood the web with useless content, search engines are a problematic solution to content discovery.

In my idealized web, creators would adhere to a certain set of standards when posting content. The content posting interface would automatically give each section of content its own unique identifier, and the creator would be able to adjust these for accuracy–for example, marking an article as an article, marking the title as the title, and making sure each paragraph was denoted as a paragraph. If this sounds like HTML5, well, that’s intentional. I believe that in the interest of an accessible, contextualized web of information, we need all content posting interfaces to conform to web standards (and we need web standards to continue to evolve to meet the needs of content).
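
Deriving those identifiers would be the easy part. Here’s a Python sketch of one possible scheme (entirely made up) for giving every section of a unit a stable, addressable ID:

    import hashlib

    def section_id(unit_id, index, text):
        """Derive a stable identifier for one section of a content
        unit (a title, a paragraph, a figure) so it can be addressed,
        linked, and shared on its own."""
        digest = hashlib.sha256(("%s:%d:%s" % (unit_id, index, text)).encode())
        return "%s-s%03d-%s" % (unit_id, index, digest.hexdigest()[:8])

    # section_id("a1b2c3", 4, "Opening paragraph text...")
    # -> "a1b2c3-s004-" plus eight hex characters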

Further, I think such systems should tag each unit of content such that the context and sharing and linking history of that unit of content can be logged. This would provide extraordinarily rich information for data analysts, a field that is already growing and would explode upon adoption of this model.

In my vision, content would not be dependent on an individual or an organization to host it on a website at a particular IP address. Instead, there would be independent but interconnected content repositories around the world where all content would reside. “Permalinks” would go straight to the content itself.

Browsers would become content interpreters, bringing up individual pieces of content in a human-comprehensible way. Users could have their own browser settings for the display of different kinds of content. Theming would be big. And a user’s browser history could allow that browser to suggest content, if the user opted in.

But websites would still exist; content interpretation would not be the sole domain of browsers. Rather than places where content is stored and then presented, websites would be contextualized areas of related content, curated by people or by processes or both. Perhaps a site owner would always be able to post and remix their own content, but would need to acquire a license to post or remix someone else’s. Perhaps different remix sites would pop up, sites with in-browser video and image editing, that would allow users to become creators. All remixes would become bona fide content, stored in the repository; anyone could simply view the remix from their browser, but community sites could also share streams of related remixes.

With properly-tagged content that is not tied to millions of different websites, content streams would be easy for anyone to produce. Perhaps browsers would facilitate this; perhaps websites would do so; perhaps both. The web would shift from being about finding the right outlets for content to finding the right content interpreter to pull in the content the user wants, regardless of source.

Such a system would have “social media” aspects, in that a user could set their browser or favorite content interpretation website to find certain kinds of content that are shared or linked by their friends, colleagues, and people of interest to them. This information, of course, would be stored with each piece of content in the repository, accessible to everyone. But users would also be able to opt out of such a system, should they wish to be able to share and remix but not have their name attached. The rest of the trail would still be there, linking from the remix to the original pieces, such that the content could be judged on its worth regardless of whether the creator was “anonymous user” or a celebrity or a politician or a mommy blogger.

Under this sort of system, content creators could be as nit-picky about the presentation of their content as they wanted. They could be completely hands-off, submitting content to the repository without even creating a website or stream to promote or contextualize it. Or they could dig in deep and build websites with curated areas of related content. Media companies that produce a lot of content could provide content interpretation pages and content streams that take the onus of wading through long options lists away from the user and instead present a few options the creator thinks users might want to customize. The point is, users would be able to customize as much as they wanted if they dug into the nitty-gritty themselves, but content creators would still be able to frame their content and present it to casual users in a contextualized way. They could also use this framework, along with licensing agreements, to provide content from other creators.

Comments would be attached to content items, but also tagged with the environment in which they were made–so if they were made on a company’s website, that information would be known, but anyone could also see the comment on the original piece of content. Content streams made solely of comments would be a possibility (something like Branch).

This system would be extremely complex, especially given the logging involved, but it would also cut down on a lot of duplication and IP theft. If sharing is made simple, just a few clicks, and all content lives in the same place, there’s no reason for someone to save someone else’s picture, edit out the watermark, and post it elsewhere. Since all content would be tagged by author, there would actually be no reason for watermarks to exist. The content creator gets credit for the original creation, and the person who shares gets credit for the share. This would theoretically lead to users following certain sharers, and perhaps media companies could watch this sort of thing and hire people who tend to share content that gets people talking.

Obviously such a paradigm shift would mean a completely different approach to content creation, content sharing, commenting, and advertising…a whole new web. I haven’t even gotten into what advertising might be like in such a system. It would certainly be heavily dependent on tagging. I’ll think more about the advertising side and perhaps address it in a Part 3.

The future of content

(This is the first in a series of posts about the future of content creation and sharing online. Part 2 expands on the ideas in this post, while Part 3 considers monetization.)

I recently read Stop Publishing Web Pages, in which author Anil Dash calls for content creators to stop thinking in terms of static pages and instead publish customizable content streams.

Start publishing streams. Start moving your content management system towards a future where it outputs content to simple APIs, which are consumed by stream-based apps that are either HTML5 in the browser and/or native clients on mobile devices. Insert your advertising into those streams using the same formats and considerations that you use for your own content. Trust your readers to know how to scroll down and skim across a simple stream, since that’s what they’re already doing all day on the web. Give them the chance to customize those streams to include (or exclude!) just the content they want.

At first I had the impression that this would mean something like RSS, where content would be organized by publish date, but customizable, so a user could pick which categories/tags they wanted. This sounded like a great way to address how people currently approach content.

Upon further contemplation, though, I don’t think it would go far enough. Sorting by date and grouping by category seem like good options for stream organization, but why limit ourselves? What if I want to pull in content by rating, for example?

What if, alongside a few curated content streams, users visiting a content creator had access to all possible content tags–so that power users could not only simply customize existing streams, but create their own? As they start to choose tags, the other options would narrow dynamically based on the content that matches the tags and what tags are in place on that content. I’d want to be able to apply sub-tags when customizing a stream, so, for example, I could build a recipe stream that included all available beef entree recipes, but only sandwiches for the available chicken entree recipes. The goal would be to give users as much or as little power as they want, while maintaining ownership of the content.
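
The dynamic narrowing itself is the easy part; its core fits in a few lines of Python (treating each piece of content as a flat set of tags–real sub-tag hierarchies would need more):

    def remaining_tags(items, selected):
        """Given items (each a set of tags) and the tags chosen so far,
        return the tags still available to choose: every tag on the
        items matching all selected tags, minus the chosen ones."""
        selected = set(selected)
        matching = [tags for tags in items if selected <= tags]
        available = set().union(*matching) if matching else set()
        return available - selected

    # recipes = [{"entree", "beef", "sandwich"},
    #            {"entree", "chicken", "sandwich"},
    #            {"entree", "chicken", "soup"}]
    # remaining_tags(recipes, {"entree", "chicken"}) -> {"sandwich", "soup"}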

Think of all the fun ways users could then curate and remix the content. Personal newspaper sites like paper.li have already given us a glimpse of the possibilities, but with properly tagged content, the customization could be even better, especially if the content curation system they’re using is flexible. Users could pick the images they want, create image galleries, pull in video, and put everything wherever they wanted it, at whatever size they wanted, using whatever fonts and colors they wanted. And what if each paragraph, or perhaps even sentence, in an article had a unique identifier? A user could select the text they want to be the highlight/summary for the piece, without having to copy and paste (and without the possibility of inadvertently misrepresenting the content).

And what if the owner of the content could tell what text was used to share the content? With properly tagged content within a share-tracking architecture, each sharing instance would serve as a contextualized trackback to the content owner. Over time, they’d have aggregate sharing data that would provide valuable audience information: who shared the content, what text they used, what pictures they used, what data they used, what video they used. Depending on how the sharing architecture is built, perhaps the content owner could even receive the comments and ratings that are put on the content at point-of-share, helping them determine where to look for feedback. They could see who shared the content directly and who reshared it from someone else’s share. Whose shares are getting the most reshares? How do those content sharers share the content? What is the context; what other content are they sharing in that space? This could inform how the content owner chooses to share the content on their own apps and pages.

Websites would still exist, of course. They would just be far more semantic and dynamic. Rather than being static page templates, they’d be context-providing splash pages, pulled together by content curators. Anything could be pulled into these pages and placed anywhere; curators could customize the look and feel and write “connector text” to add context (such as a custom image caption referring back to an article). This connector text would then become a separate tagged unit associated with the content it is connecting, available for use elsewhere. The pages themselves would serve as promotional pieces for content streams users could subscribe to; the act of visiting such a page could send the user the stream information. And content shared alongside other content would then be linked to that content. Whenever a content creator presented two pieces of their own content together, that would tag those pieces of content as being explicitly linked. Content would also be tagged as linked whenever sharers presented it together, regardless of creator. Perhaps explicit links would be interpreted by the sharing architecture as stronger than other links; perhaps link strength could be dynamically determined by number of presentations, whether the content had the same creator, and the trust rating of the sharers involved. Regardless, users could then browse through shares based on link strength if they chose.
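
To make “link strength” a little less hand-wavy, here’s a deliberately arbitrary Python sketch of the shape such a score might take; every weight in it is invented:

    import math

    def link_strength(presentations, same_creator, sharer_trust):
        """Toy score for the link between two pieces of content: grows
        with co-presentations (with diminishing returns), gets a boost
        for an explicit same-creator link, and adds the average trust
        rating of the sharers involved."""
        score = math.log1p(presentations)
        if same_creator:
            score += 2.0
        if sharer_trust:
            score += sum(sharer_trust) / len(sharer_trust)
        return score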

Author and copyright information would be built into this sharing system. Ideally, authors would be logged into their own account on a content management system such that their author information (name, organization, website, etc.) would be automatically appended to any content they create or curate. There would probably need to be a way for users to edit the author, to allow for cases where someone posts something for someone else, but this would only be available at initial content creation, to avoid IP theft. This author information would then automatically become available for a “credits” section in whatever site, blog, app, or other managed content area that content is pulled into. Copyright would be protected in that author information is always appended and the content itself isn’t being changed as it’s shared, just contextualized differently. Every piece of content would link back to its original author.

I’m imagining all of this applying to everything–not just text-based articles and still images, but spreadsheets, interactive graphs, video. Users would have in-browser editing capabilities to grab video clips if they didn’t want to present the entire video. They’d have the ability to take a single row out of a table to make a graph. Heck, they’d have the ability to crop an image. But no matter how they chopped up and reassembled the content, it would always retain its original author and copyright information and link back to the whole original. Remixes and edits would append the information of the sharer who did the remix/edit.

Essentially, rather than pages or even streams, I’m seeing disparate pieces of content, linked to other content by tags and shares. All content would be infinitely customizable but still ultimately intact. This would serve the way people now consume content and leave possibilities open for many different means of content consumption in the future. Meanwhile, it would provide valuable data to content creators while maintaining their content ownership.

I would love to work on building such a system. Anybody else out there game?

Since writing this post, I’ve found some related sites and thoughts:

Blacking out tomorrow

Like many sites across the internet, pixelscribbles will be blacking out on January 18 from 8am to 8pm Eastern US time in protest of proposed US legislation that ostensibly seeks to stop online piracy but would ultimately result in curtailing free speech across the world.

I’ll be using the SOPA Blackout WordPress plugin. However, I wasn’t a fan of the default intro text, so I wrote my own. Here’s what I came up with; please feel free to use it yourself. I encourage writing your statement in your own words if you can, though; that makes it all the more powerful.

pixelscribbles.com is currently blacked out in protest of SOPA and PIPA, two fundamentally flawed pieces of legislation currently being considered by the US government. If enacted, these bills or others like them would have far-reaching consequences across the globe. Their flawed reasoning and careless wording would give censorship power to corporations, blocking the free flow of information from country to country, isolating us from one another. It would put US citizens’ knowledge of important events such as the Arab Spring in jeopardy. You can watch the video below for more information, and this blog post summarizes the timeline of events. I’m including links to more information below.

Many of the sites I’m linking to are likely blacked out in protest today–Wikipedia is just one such example. If that’s the case, please save the links and read them later.

SOPA, HR 3261: The Stop Online Piracy Act, was until recently being discussed in the US House of Representatives. It has been shelved, for now, due to the online backlash.

PIPA, S. 968: The Protect IP Act, is a bill in the US Senate with similar problems. This bill has not yet been shelved.

Here is more detailed information and analysis about the bills:

While this legislation has many supporters in the entertainment industry (click here for a list as of January 14), many other companies have come out against it, including Facebook, Twitter, Google, Mozilla, Wikipedia, and Reddit. Here are a few articles on the subject:

Where do your Senators and Representatives stand on PIPA and SOPA? Click here to find out. You can then use the form below to contact the people who are supposed to be representing your interests, not the interests of big companies.