Idea: Shared experience movie theater

How about a movie theater where talking and texting during the movie is encouraged? Where you know that’ll happen going in, so you enjoy the experience in a different way?

The theater could also have its own app for “live tweeting” movies, and it would quarantine the tweets so people wouldn’t see spoilers unless they wanted to.

Published
Categorized as Idea

A possible new approach to meal planning

For years I have struggled to prepare meals at home regularly. I tend to dislike following routines for any length of time, and I also tend to dislike coming up with vast organizational schemes more frequently than perhaps monthly (or even bimonthly), so I have rarely managed to create a weekly meal plan, shop for it, and then cook according to that plan every night.

Thinking about it today, I started to wonder if I couldn’t space out the planning and work more. Buy in bulk, take the first few steps of a recipe, package up single or double servings, then freeze the servings to cook later. I do have times when I want to do a huge project; perhaps I could use those times to stock up on freezable meal beginnings. And then on regular nights all I’d really have to do to make dinner would be to pick a pre-prepared item and get the fresh ingredients I might need to complement it. To save freezer space, I could even branch out into unfrozen vacuum-packed food, if it’s possible to do that safely. And of course there’s always canning.

It’s a thought. This may be a good way for me to go so I don’t feel as overwhelmed during the week.


We have the technology

I often feel that there are so many things we could be doing. So many things we are capable of. So many things we just aren’t achieving that we should be readily able to.

Sometimes I discover that we are at least partially doing those things, but we’re not doing them in a way that people know about or can find or share easily.

This morning I heard a tornado siren. It’s only the second time I’ve heard it since I’ve lived here. The first time, nothing happened, so this time, I didn’t think much of it. An hour or so later I saw a tweet remarking on Atlanta’s “tornado-y” weather, so I thought I’d see what the deal was.

I went to my go-to weather site, The Weather Channel’s weather.com, and clicked on my local forecast, which is saved in a tile at the top of the page. Then I clicked on the Alerts, and in the drop-down I saw Tornado Watch until 4pm. That was all I needed to know, so I left the page.

Some time later, I saw this tweet:

If you follow that image link, you get…a cell phone picture of a TV screen.

A cell phone picture. Of a TV screen.

I understand wanting to share important information quickly. Actually, I think the ability to do that is rather important. But it astonished me that the most efficient way to rapidly share vital information online was apparently to post a picture of it.

We have the data. We have the technology. We can do better.

I went poking around weather.com to find the source of that image–better yet, something that would stay up-to-date no matter when someone got the link. First I went to the Atlanta forecast page. I clicked on things, but never saw a map like the TV picture. I did find a list of affected counties, which is useful, especially for people who can’t see pictures. But I wanted to duplicate the experience a viewer of the picture would have–duplicate and enhance it.

Finally I clicked on the Map link in the sidebar, and that took me to the interactive Weather Map. This was the same thing I’d seen on the forecast page and ignored because it didn’t have the tornado warning areas highlighted. But I gave it a chance; I clicked on Map Options. Scrolling all the way to the very bottom, I finally found the Weather Alert Overlays, and I clicked the radio button next to Severe Alerts.

And there, at last, it was.

Weather Map screencap, 01/30/2013

I quickly sent a link and instructions in response to the tweet. Then it occurred to me to check the link on my phone. I opened Tweetbot and tapped the link and sure enough…the interactive map doesn’t work in iOS, because it uses Flash.

Sigh.

Here’s what I want. I want a map that works regardless of the device I’m using. I want the ability to share a direct link to the view I am using–in this case, Severe Alerts–not just a generic link to the default map (which is what you currently get from those sidebar social media buttons). I want a forecast page that calls up versions of the map that are relevant to any weather alerts currently in effect.
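For what it’s worth, the “shareable view” part doesn’t seem technically hard: the map just needs to encode its current state in the URL, so that opening the link restores the exact view. Here’s a rough Python sketch of the idea (the parameter names and example URL are mine, not weather.com’s):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_map_link(base, lat, lon, zoom, overlay):
    """Encode the current map view in the URL so a shared link restores it."""
    params = {"lat": lat, "lon": lon, "zoom": zoom, "overlay": overlay}
    return f"{base}?{urlencode(params)}"

def restore_map_state(url):
    """Read the view parameters back out of a shared link."""
    qs = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in qs.items()}

# Share the Severe Alerts view of the Atlanta area.
link = build_map_link("https://example.com/map", 33.749, -84.388, 9, "severe-alerts")
state = restore_map_state(link)
```

Anyone who opened that link would land on the same overlay the sharer was looking at, instead of the default map.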

As I said, we are capable of so many things. So many useful things. So many things that would be a genuine help to society.

The thing is, if we try to do those things, we can’t just throw something together and say we’re done. We have to make it easy.

Otherwise, people will skip right past it and keep taking pictures of their TVs.

Idea: A malleable restaurant experience

Reading through Tofugu’s Famous Foods of Every Japanese Prefecture [North, East, Central] makes me feel two things: hungry, and wistful. I want to go to all those places and try all those foods.

It occurred to me that it would be cool for a Japanese restaurant to have a small regular menu and then switch out other menu items, perhaps every quarter, to feature different items from different regions. It would be a little difficult logistically, as they’d have to source the ingredients and train the chefs and whatnot, but it would make for a fascinating dining experience. They could even change their decor to match the city or prefecture whose food they were featuring at any given time. A map and photos at the entryway could show guests what the current region is and what kinds of specialty items to expect.

The restaurant could also try weeklong events, such as an udon event or a ramen event, and go crazy with different selections. Maybe they could bring in guest chefs, specialists, to take some of the pressure off the main staff.

No matter how big a restaurant’s menu is–the menu at our current go-to Japanese restaurant is pretty huge–there’s always going to be something missing. And if Kitchen Nightmares has taught me anything, it’s that a smaller menu improves food quality all around. A smaller menu that changes regularly would offer refreshing variety and the chance to try new things, while letting the chefs focus more on each dish.

Digital publishing idea from 2008

I was going through my old project ideas folder and came across this gem from November of 2008:

A means to publish works for reading on screens and handhelds–different resolutions that are all legible without zooming and possibly without scrolling.  Each “page” is now a “screen”.

No need for a separate device for reading.

Should be able to create with existing tools.  Perhaps pdfs that are then imported into a locked system of some sort.  Or something even more interactive.

Will work for newspapers and magazines.  No need for print versions!

Users would purchase the browsing software and then purchase each “issue” they wanted to read, or subscribe.  Their accounts would always be available to them online, with every issue they had access to.  They can also download each issue to any device on which they’ve registered the software.


The future of content, Part 4

I’ve talked about redesigning the web into a collection of interconnected pieces of content, and I’ve discussed monetizing such a paradigm. Now I’d like to go further into the value this reconstruction would bring to content creators, sharers, and users.

The way the web works right now, content creators and sharers typically must either have their own website or use third-party services in order to build an audience and make money. Under this paradigm, the websites (or their content streams) are the main point of interest, and the onus is on the site owners and managers to “keep the content fresh”. In the case of businesses, this includes finding and hiring/contracting creators and negotiating licensing agreements with third-party content providers. The now-now-now pace puts pressure on creators to write something, anything, in order to keep people coming back to the site. This has resulted in a glut of content that is posted for the sake of having new content posted. SEO marketing has exacerbated the issue with content posted for the sake of higher search engine rankings. People are wasting more and more time reading navel-gazing content that adds little value to the human community.

With a web that is truly content-driven, the focus would shift from trying to keep thousands of disparate sites and streams “fresh” to trying to produce and share content that is meaningful, impactful, and important. With IP issues handled through robust tagging, content would be available for anyone to share. Licensing would be streamlined, and creators would be directly paid for their work. Media houses could more confidently keep creators on staff; sharing would provide an obvious metric of a creator’s value. Creators could focus on more long-form pieces, knowing that their existing work would continue to be shared and monetized. There would be less pressure to post something, anything, every day.

The web has suffered from the adoption of the “always on” mindset. If there is nothing new to report, there is no need to invent something to report. Someone, somewhere, is always producing content; it’s a big world. Rather than polluting millions of streams with junk, media companies, news organizations, marketers, and individuals should shift their focus to finding and sharing value. Simply aggregating RSS feeds or repurposing content the way we’ve been doing it so far is not enough; it does not meet the needs of the user and it does not ensure that content creators are paid for their work. We need to rebuild the system from the ground up.

The future of content, Part 3

Over the past two days I’ve described a new model for web architecture, one whose primary unit is an individual piece of content stored in a universal repository, rather than a product (page, feed, API, etc.) hosted on a web server. (Read Part 1; read Part 2.) Today I’ll discuss how such a system might be monetized.

Currently, content is shared in many disparate ways. The Associated Press has its own proprietary format for allowing other news sites to automatically repost its content; it also allows its lower-tier affiliates to manually repost (i.e., by copying and pasting into their own content management system), so long as the copyright notice remains intact. Sites pay to be affiliates. Bloggers, of course, have done the manual copy-and-paste thing for years; nowadays a pasted excerpt with a link to the original is considered standard, and this of course brings little money to the original creator. Video sites, too, have their own different ways of allowing users to share. Embedded video advertising allows the content creator to make some money on shares…assuming someone hasn’t simply saved the video and reposted it. Data is far more difficult to share or monetize. Some sites offer an API, but few laypeople know what to do with such a thing. The typical social media way of sharing data is by posting a still image of a graph or infographic–not contextualized or accessible at all.

In a system where every piece of content is tagged by creator, wherein sharing of any type of media is simple, IP could be more easily secured and monetized. Content tags could include copyright types and licensing permission levels. A piece of content might, for example, be set to freely share so long as it is always accompanied by the creator’s advertising. Ads could be sponsorship watermarks, preroll video, display banners or text that appear within the content unit, or something else entirely. The content creator would determine what advertising would be available for each piece of content, and the content sharers would each individually decide what advertising they are willing to have appear, or if they’d rather purchase an ad-free license. Resharers who took the content from someone else’s share would not avoid the advertising choice, because while they would have found the content at another sharer’s site or stream, the content itself would still be the original piece, hosted at the original repository, with all the original tags intact–including authorship and advertising.
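To make this concrete, here’s a rough Python sketch of what licensing tags on a content unit might look like, with a sharer choosing between an ad-supported license and a paid ad-free one. All the names and numbers here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class License:
    kind: str                 # e.g. "share-with-ads", "ad-free-paid"
    ad_formats: tuple = ()    # ad formats the creator permits with this license
    fee: float = 0.0          # cost of an ad-free license, if offered

@dataclass(frozen=True)
class ContentUnit:
    unit_id: str
    creator: str
    licenses: tuple           # every license option the creator offers

def choose_license(unit, sharer_accepts_ads):
    """A sharer picks the first license compatible with their ad policy."""
    for lic in unit.licenses:
        if sharer_accepts_ads and lic.ad_formats:
            return lic
        if not sharer_accepts_ads and not lic.ad_formats:
            return lic
    return None

unit = ContentUnit(
    unit_id="c-001",
    creator="alice",
    licenses=(
        License("share-with-ads", ad_formats=("preroll", "banner")),
        License("ad-free-paid", fee=4.99),
    ),
)
```

Because these tags travel with the content unit itself, a resharer pulling the piece from someone else’s stream would face exactly the same license choices as the first sharer did.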

Content could also be set to automatically enter the public domain at the proper time, under the laws governing its creator, or perhaps earlier if the creator so wishes.

The first step in making all of this work is to have all content properly tagged and a system wherein content tags are quickly updated and indexed across the internet. The second step would be to make sharing the “right” way so easy that very few would attempt to save someone else’s content and repost it as their own. As I mentioned in Part 2, I’m imagining browsers and sites that offer a plethora of in-browser editing and sharing options, far easier (and less expensive!) than using desktop applications. Making sharing and remixing easy and browser-based would also cut down on software piracy. Powerful creation suites would still be purchased by the media producers who need them to make their content, but the average person would no longer require a copy of Final Cut Pro to hack together a fan video based on that content.

The kind of tagging I’m talking about goes somewhat beyond the semantic web. Tags would be hard-coded into content, not easily removed (or avoided by a simple copy and paste). A piece of content’s entire history would be stored as part of the unit. Technologically, I’m not sure what this would involve, or what problems might arise. It occurs to me that over time a piece of content would become quite large through the logging of all its shares. But making that log indivisible from the content would solve many issues of intellectual property rights on the internet today. Simply asking various organizations who host disparate pieces of content to tag that content properly and then hoping they comply will not lead to a streamlined solution, especially given the problem of “standards” (as spoofed by xkcd).
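One technological possibility for that indivisible history is a hash chain: each share entry includes a hash of the entry before it, so editing or deleting an earlier entry invalidates everything after it. A toy Python sketch of the idea, not a real implementation of any existing system:

```python
import hashlib
import json

def entry_hash(entry, prev_hash):
    """Hash an entry together with the previous entry's hash."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ShareLog:
    """Append-only share history; each entry chains to the previous hash,
    so tampering with an earlier share breaks every later hash."""

    def __init__(self):
        self.entries = []

    def append(self, sharer, context):
        prev = self.entries[-1]["hash"] if self.entries else ""
        entry = {"sharer": sharer, "context": context}
        entry["hash"] = entry_hash({"sharer": sharer, "context": context}, prev)
        self.entries.append(entry)

    def verify(self):
        prev = ""
        for e in self.entries:
            expected = entry_hash({"sharer": e["sharer"], "context": e["context"]}, prev)
            if e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = ShareLog()
log.append("bob", "bobs-blog")
log.append("carol", "carols-stream")
```

This doesn’t solve the storage-growth problem I mentioned, but it does show how a share history could be made verifiable rather than merely asserted.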

With a system like this, the web rebuilt from the bottom up, there would be no need for individual content creators to reinvent the wheels of websites, APIs, DRM, advertising. They could instead focus on producing good content and contextualizing it into websites and streams. Meanwhile, the hardcore techies would be the ones working on the underlying system, the content repository itself, the way streams are created, how tagging and logging occurs, tracking sharing, etc. Media companies–anyone–could contribute to this process if they wanted, but the point is they wouldn’t have to.

The future of content, Part 2

(This is the second in a series of posts about the future of content creation and sharing online. Part 1 contains my original discussion, while Part 3 considers monetization.)

Yesterday I imagined a web architecture that depends on individual pieces of highly tagged content, rather than streams of content. Today I’d like to expand on that.

Right now when a creator posts something to the web, they must take all their cues from the environment in which they are posting. YouTube has a certain category and tag structure. Different blogging software handles post tagging differently. News organizations and other media companies have their own specialized CMSes, either built by third parties, built in-house, or built by third parties and then customized. This ultimately leads to content that is typically only shareable through linking, copy-and-paste, or embedding via a content provider or CMS’s proprietary solution.

None of this is standardized. Different organizations adhere to different editorial guidelines, and these likely either include different rules for content tagging or neglect to discuss content tagging at all. And of course, content posted by individuals is going to be tagged or not tagged depending on the person’s time and interest in semantic content.

The upshot is, there is no way, other than through a search engine, to find all content (not just content from one specific creator) that relates to a certain keyword or phrase. And since content is tagged inconsistently across creators, and spammers flood the web with useless content, search engines are a problematic solution to content discovery.

In my idealized web, creators would adhere to a certain set of standards when posting content. The content posting interface would automatically give each section of content its own unique identifier, and the creator would be able to adjust these for accuracy–for example, marking an article as an article, marking the title as the title, and making sure each paragraph was denoted as a paragraph. If this sounds like HTML5, well, that’s intentional. I believe that in the interest of an accessible, contextualized web of information, we need all content posting interfaces to conform to web standards (and we need web standards to continue to evolve to meet the needs of content).
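As a sketch of what such a posting interface might do behind the scenes, here’s some illustrative Python that assigns each semantic unit of an article its own identifier. The structure is my own invention, loosely mirroring the article/title/paragraph roles above:

```python
import uuid

def tag_article(title, paragraphs):
    """Give the article, its title, and each paragraph a unique identifier,
    so any unit can be referenced, shared, or tracked on its own."""
    return {
        "id": str(uuid.uuid4()),
        "type": "article",
        "units": [{"id": str(uuid.uuid4()), "type": "title", "text": title}]
                + [{"id": str(uuid.uuid4()), "type": "paragraph", "text": p}
                   for p in paragraphs],
    }

article = tag_article("We have the technology",
                      ["First paragraph.", "Second paragraph."])
```

In the posting interface, the creator would then confirm or correct the role assigned to each unit before publishing.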

Further, I think such systems should tag each unit of content such that the context and sharing and linking history of that unit of content can be logged. This would provide extraordinarily rich information for data analysts, a field that is already growing and would explode upon adoption of this model.

In my vision, content would not be dependent on an individual or an organization to host it on a website at a particular IP address. Instead, there would be independent but interconnected content repositories around the world where all content would reside. “Permalinks” would go straight to the content itself.

Browsers would become content interpreters, bringing up individual pieces of content in a human-comprehensible way. Users could have their own browser settings for the display of different kinds of content. Theming would be big. And a user’s browser history could allow that browser to suggest content, if the user opted in.

But websites would still exist; content interpretation would not be the sole domain of browsers. Rather than places where content is stored and then presented, websites would be contextualized areas of related content, curated by people or by processes or both. Perhaps a site owner would always be able to post and remix their own content, but would need to acquire a license to post or remix someone else’s. Perhaps different remix sites would pop up, sites with in-browser video and image editing, that would allow users to become creators. All remixes would become bona fide content, stored in the repository; anyone could simply view the remix from their browser, but community sites could also share streams of related remixes.

With properly-tagged content that is not tied to millions of different websites, content streams would be easy for anyone to produce. Perhaps browsers would facilitate this; perhaps websites would do so; perhaps both. The web would shift from being about finding the right outlets for content to finding the right content interpreter to pull in the content the user wants, regardless of source.

Such a system would have “social media” aspects, in that a user could set their browser or favorite content interpretation website to find certain kinds of content that are shared or linked by their friends, colleagues, and people of interest to them. This information, of course, would be stored with each piece of content in the repository, accessible to everyone. But users would also be able to opt out of such a system, should they wish to be able to share and remix but not have their name attached. The rest of the trail would still be there, linking from the remix to the original pieces, such that the content could be judged on its worth regardless of whether the creator was “anonymous user” or a celebrity or a politician or a mommy blogger.

Under this sort of system, content creators could be as nit-picky about the presentation of their content as they wanted. They could be completely hands-off, submitting content to the repository without even creating a website or stream to promote or contextualize it. Or they could dig in deep and build websites with curated areas of related content. Media companies that produce a lot of content could provide content interpretation pages and content streams that take the onus of wading through long options lists away from the user and instead present a few options the creator thinks users might want to customize. The point is, users would be able to customize as much as they wanted if they dug into the nitty-gritty themselves, but content creators would still be able to frame their content and present it to casual users in a contextualized way. They could also use this framework, along with licensing agreements, to provide content from other creators.

Comments would be attached to content items, but also tagged with the environment in which they were made–so if they were made on a company’s website, that information would be known, but anyone could also see the comment on the original piece of content. Content streams made solely of comments would be a possibility (something like Branch).

This system would be extremely complex, especially given the logging involved, but it would also cut down on a lot of duplication and IP theft. If sharing is made simple, just a few clicks, and all content lives in the same place, there’s no reason for someone to save someone else’s picture, edit out the watermark, and post it elsewhere. Since all content would be tagged by author, there would actually be no reason for watermarks to exist. The content creator gets credit for the original creation, and the person who shares gets credit for the share. This would theoretically lead to users following certain sharers, and perhaps media companies could watch this sort of thing and hire people who tend to share content that gets people talking.

Obviously such a paradigm shift would mean a completely different approach to content creation, content sharing, commenting, and advertising…a whole new web. I haven’t even gotten into what advertising might be like in such a system. It would certainly be heavily dependent on tagging. I’ll think more about the advertising side and perhaps address it in a Part 3.

The future of content

(This is the first in a series of posts about the future of content creation and sharing online. Part 2 expands on the ideas in this post, while Part 3 considers monetization.)

I recently read Stop Publishing Web Pages, in which author Anil Dash calls for content creators to stop thinking in terms of static pages and instead publish customizable content streams.

Start publishing streams. Start moving your content management system towards a future where it outputs content to simple APIs, which are consumed by stream-based apps that are either HTML5 in the browser and/or native clients on mobile devices. Insert your advertising into those streams using the same formats and considerations that you use for your own content. Trust your readers to know how to scroll down and skim across a simple stream, since that’s what they’re already doing all day on the web. Give them the chance to customize those streams to include (or exclude!) just the content they want.

At first I had the impression that this would mean something like RSS, where content would be organized by publish date, but customizable, so a user could pick which categories/tags they wanted. This sounded like a great way to address how people currently approach content.

Upon further contemplation, though, I don’t think it would go far enough. Sorting by date and grouping by category seem like good options for stream organization, but why limit ourselves? What if I want to pull in content by rating, for example?

What if, alongside a few curated content streams, users visiting a content creator had access to all possible content tags–so that power users could not only customize existing streams but also create their own? As they start to choose tags, the other options would narrow dynamically based on the content that matches the tags and what tags are in place on that content. I’d want to be able to apply sub-tags when customizing a stream, so, for example, I could build a recipe stream that included all available beef entree recipes, but only sandwiches for the available chicken entree recipes. The goal would be to give users as much or as little power as they want, while maintaining ownership of the content.
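This dynamic narrowing is essentially faceted search: each tag the user picks filters the pool, and the tags remaining on the surviving items become the next set of options. A minimal Python sketch, with recipe data invented for illustration:

```python
def narrow(items, chosen):
    """Return the items matching every chosen tag, plus the other tags
    still present on those items (the options left to offer the user)."""
    chosen = set(chosen)
    matches = [item for item in items if chosen <= item["tags"]]
    if not matches:
        return [], set()
    remaining = set().union(*(item["tags"] for item in matches)) - chosen
    return matches, remaining

recipes = [
    {"name": "beef stew",        "tags": {"recipe", "entree", "beef"}},
    {"name": "chicken sandwich", "tags": {"recipe", "entree", "chicken", "sandwich"}},
    {"name": "chicken curry",    "tags": {"recipe", "entree", "chicken"}},
]

# Picking "recipe" + "chicken" keeps two items and offers only the
# tags those two items still carry.
matches, options = narrow(recipes, {"recipe", "chicken"})
```

Layering a sub-tag choice (say, "sandwich" only within "chicken") is just a second call to the same function on the narrowed pool.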

Think of all the fun ways users could then curate and remix the content. Personal newspaper sites like paper.li have already given us a glimpse of the possibilities, but with properly tagged content, the customization could be even better, especially if the content curation system they’re using is flexible. Users could pick the images they want, create image galleries, pull in video, and put everything wherever they wanted it, at whatever size they wanted, using whatever fonts and colors they wanted. And what if each paragraph, or perhaps even sentence, in an article had a unique identifier? A user could select the text they want to be the highlight/summary for the piece, without having to copy and paste (and without the possibility of inadvertently misrepresenting the content).

And what if the owner of the content could tell what text was used to share the content? With properly tagged content within a share-tracking architecture, each sharing instance would serve as a contextualized trackback to the content owner. Over time, they’d have aggregate sharing data that would provide valuable audience information: who shared the content, what text they used, what pictures they used, what data they used, what video they used. Depending on how the sharing architecture is built, perhaps the content owner could even receive the comments and ratings that are put on the content at point-of-share, helping them determine where to look for feedback. They could see who shared the content directly and who reshared it from someone else’s share. Whose shares are getting the most reshares? How do those content sharers share the content? What is the context; what other content are they sharing in that space? This could inform how the content owner chooses to share the content on their own apps and pages.

Websites would still exist, of course. They would just be far more semantic and dynamic. Rather than being static page templates, they’d be context-providing splash pages, pulled together by content curators. Anything could be pulled into these pages and placed anywhere; curators could customize the look and feel and write “connector text” to add context (such as a custom image caption referring back to an article). This connector text would then become a separate tagged unit associated with the content it is connecting, available for use elsewhere. The pages themselves would serve as promotional pieces for content streams users could subscribe to; the act of visiting such a page could send the user the stream information. And content shared alongside other content would then be linked to that content. Whenever a content creator presented two pieces of their own content together, that would tag those pieces of content as being explicitly linked. Content would also be tagged as linked whenever sharers presented it together, regardless of creator. Perhaps explicit links would be interpreted by the sharing architecture as stronger than other links; perhaps link strength could be dynamically determined by number of presentations, whether the content had the same creator, and the trust rating of the sharers involved. Regardless, users could then browse through shares based on link strength if they chose.
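Just to illustrate, a link-strength calculation along those lines might look something like this in Python; the weights are arbitrary and purely illustrative, not a proposal for the actual formula:

```python
def link_strength(co_presentations, same_creator, sharer_trust):
    """Toy link-strength score: the number of times two pieces were
    presented together, scaled by how trusted the sharers involved are
    (0.0-1.0), and doubled when the creator explicitly paired them."""
    base = co_presentations * sharer_trust
    return base * 2.0 if same_creator else base

# An explicit pairing by the creator outranks more numerous but
# less-trusted third-party pairings.
explicit = link_strength(co_presentations=3, same_creator=True, sharer_trust=0.9)
casual = link_strength(co_presentations=5, same_creator=False, sharer_trust=0.5)
```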

Author and copyright information would be built into this sharing system. Ideally, authors would be logged into their own account on a content management system such that their author information (name, organization, website, etc.) would be automatically appended to any content they create or curate. There would probably need to be a way for users to edit the author, to allow for cases where someone posts something for someone else, but this would only be available at initial content creation, to avoid IP theft. This author information would then automatically become available for a “credits” section in whatever site, blog, app, or other managed content area that content is pulled into. Copyright would be protected in that author information is always appended and the content itself isn’t being changed as it’s shared, just contextualized differently. Every piece of content would link back to its original author.

I’m imagining all of this applying to everything–not just text-based articles and still images, but spreadsheets, interactive graphs, video. Users would have in-browser editing capabilities to grab video clips if they didn’t want to present the entire video. They’d have the ability to take a single row out of a table to make a graph. Heck, they’d have the ability to crop an image. But no matter how they chopped up and reassembled the content, it would always retain its original author and copyright information and link back to the whole original. Remixes and edits would append the information of the sharer who did the remix/edit.

Essentially, rather than pages or even streams, I’m seeing disparate pieces of content, linked to other content by tags and shares. All content would be infinitely customizable but still ultimately intact. This would serve the way people now consume content and leave possibilities open for many different means of content consumption in the future. Meanwhile, it would provide valuable data to content creators while maintaining their content ownership.

I would love to work on building such a system. Anybody else out there game?

Since writing I’ve found some related sites and thoughts:

A Harry Potter TV series

…would be awesome, right? And here’s how I would do it.

Ever since the third movie, I’ve felt that movies can’t adequately tell the Harry Potter story. The world is too rich. There are too many characters, too many magical creatures, too much backstory. Subplots upon subplots must be left out for time, but this causes confusion and sometimes story changes.

The first two books translated well enough, as they were short and simple stories, but as soon as complexity came in, the overall tale started to suffer. It probably would have been best to make two films each from Azkaban onward.

Still, movies can’t beat television when it comes to telling complex stories, because television has the time to do more. It’s why I’ve all but stopped going to the movies, but still watch TV (though not much, to be honest). It’s why when I’m scanning through Netflix to find something to watch, I usually avoid the movies and look for a series to sink my teeth into. Maybe this says something about a decline in the quality of movies in general; I don’t know. I just know that I like to be engaged with many characters and a deep plot and an interesting setting, and I don’t get enough mental stimulation from most movies.

In any case, I’ve long thought Harry Potter should be done as an animated series with 30-minute episodes. I’d prefer some really pretty anime art, but the brilliant Iron Man: Armored Adventures has me reconsidering the potential of CGI 3D cel-shading. Regardless of how it’s animated, it needs to look beautiful and magical. (This was one thing that kind of felt off to me about the Harry Potter films; many things that should have been beautiful were not, including the centaurs. The mermaids were supposed to be fearsome, of course, but that doesn’t mean everything had to be frightening.)

The series would have a team of regular writers, and, assuming she wanted a hand in it, J.K. Rowling would be the producer, and she’d sign off on all the story concepts. In general the main plots would follow the books pretty much to the letter. We’d get to see all the scenes we’ve imagined, maybe not the way we imagined them, but in a new and interesting way.

There would also be some original one-off episodes thrown into the mix. I’d let the writing team write some of these, but I’d also woo guest writers, people who’ve written good stuff for other shows, and just see what they might do for Harry Potter. Rowling would have to have veto power, but I imagine she’d be open to some different interpretations and situations for her characters, and would only speak up if she felt like a writer had misunderstood a character. These one-offs would be stand-alone; they could not affect overall continuity in any major way (although minor effects would be fine, and background characters could get more spotlight. Wouldn’t you love “A Day in the Life of Luna Lovegood”?).

Story arcs would flow from the canon material pretty naturally. I’m not sure how the episodes would break up into seasons, since the books are all different lengths, but this is something the writers could discuss and figure out. The original episodes could help with padding a season out when needed. Also, just because the series would be following the books wouldn’t necessarily mean there couldn’t be expanded flashback episodes. I’d love to see a story arc about Dumbledore, a story arc about Snape, a story arc about James and Lily, a story arc about Moony, Wormtail, Padfoot, and Prongs. Even the diary flashback about Tom Riddle from The Chamber of Secrets could be expanded into an episode, or perhaps worked into a longer series about Tom Riddle and Voldemort that would go with the Half-Blood Prince episodes. Following Harry’s story doesn’t mean the series would have to only follow him. Maybe some of the original episodes could spend more time with Hermione. The possibilities are endless.

The point would be a robust series, with a known beginning and end, and a lot of known stuff in the middle, but then plenty of possibilities for new stories and new visuals and new music and new actors and a fresh, full way of experiencing the universe J.K. Rowling envisioned.

Would that not rule?!

Idea: Weekly roundups for social media autoposting

A lot of location-based/check-in apps have an option to automatically post your latest activity to a social network. This is really fun, but after a while, especially if you’re using the application a lot, people can get burned out on all the posts. They might, depending on their social networking tool, block those posts, or even block you!

Another problem is that there is no real meaning to this information. Sure, it’s nice to let people know what spots you like, but by posting each individual check-in, you’re putting the onus on yourself and your friends to aggregate that information and find the meaning. Nobody’s going to go back and read all your check-ins to try to come up with a conclusion, and you’re not likely to do it either.

A third problem, which I mentioned in my post about using iPhones as travel tools, is safety and privacy. Simply put, it can be unsafe to constantly broadcast your location or other information to the world.

To deal with these issues, I suggest services offer a “weekly roundup” option. Instead of posting to social networks multiple times a day, services would do one post a week. That post would include a link to a webpage with that user’s activities for the whole week. For example, Gowalla’s weekly roundup on Twitter might look like this:

@cosleia went to Teresa’s Mexican Restaurant and 15 other places this week! Click here for roundup: http://bit.ly/link

Already, by having one post a week, the deluge of information is dammed up and released as a very manageable stream. And roundups would provide much-needed context, as well. For example, there could be various messages for different situations; if a person went to the same place several times, the message could say:

@cosleia went to Teresa’s Mexican Restaurant five times this week! See where else she went: http://bit.ly/link

And then I would know that I need to stop eating out so much ;)
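The roundup logic described above is simple enough to sketch. Here’s a minimal, hypothetical version in Python: the function name, the three-visit threshold for the “repeat favorite” message, and the input format are all my own illustrative choices, not any real app’s API.

```python
from collections import Counter

def weekly_roundup(username, checkins, link):
    """Build a one-line weekly roundup post.

    `checkins` is a list of venue names the user checked into this week;
    `link` is a short URL pointing to the full roundup page.
    """
    if not checkins:
        return None  # nothing to post this week

    counts = Counter(checkins)
    top_place, top_count = counts.most_common(1)[0]

    if top_count >= 3:
        # Highlight a repeat favorite instead of the generic message.
        return (f"@{username} went to {top_place} {top_count} times this week! "
                f"See where else they went: {link}")

    others = len(checkins) - 1
    return (f"@{username} went to {checkins[0]} and {others} other places "
            f"this week! Click here for roundup: {link}")
```

The same shape would work for any of the situations mentioned: the service just picks the most interesting statistic from the week’s data and templates it into a single post, with the link carrying the rest of the detail.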

Similarly, with RunKeeper, a weekly roundup could let people know how many times I ran that week, how far I went, how many calories I burned, what my best time was that week, or any number of things. People who saw the post would be able to tell if I’d improved over the course of the week, and if so, how.

We’re entering an age where information on almost anything and anyone is constantly available…but once the initial novelty has worn off, what’s the use of all that data? Putting a week’s worth of data together would provide context for both the viewer and the user of the app.

And finally, weekly roundups would eliminate the danger of posting your exact location in real time. You’d still be sharing your favorite places with your friends, but in a less immediate way.

Ultimately, it’s great that we have the ability to store and broadcast so much data. But if we don’t turn that data into something useful, it’s pretty pointless. Weekly roundups would be a great first step towards generating real, meaningful information from all that data.

Perhaps eventually apps could offer a special section on their sites where users could view their activity trends for weeks, months, or years at a time. It would be like Mint.com for activities! In the case of location-based apps, this part would probably need to be private, so it wouldn’t be easy for someone else to track where the user is likely to be at any given time.

Here’s hoping app creators start thinking about how user data can be utilized: not just for advertising revenue, but for the user’s own benefit.