I attended the Scripting News meetup in Portland on Friday and got a chance to meet Dave Winer.

The topic of preserving content came up, a subject that is very important to me.

How can we preserve content so it is not trapped in old, obsolete hardware or software?

Continuous Migration

I started blogging in 2000 when I read an article about "weblogs".

Since my website was already data-driven, it was pretty trivial to add a weblog category. I used bulletin-board-style codes (BBCode) and wrote a simple parser to add bold and italics to plain text. This was before AsciiDoc or Markdown were invented.
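
A parser like that can be as small as a couple of regular-expression substitutions. Here is a minimal sketch of the idea in Python (illustrative, not my original code):

[source,python]
----
import re

def bbcode_to_html(text):
    # Replace [b]...[/b] and [i]...[/i] tags with their HTML equivalents.
    text = re.sub(r"\[b\](.*?)\[/b\]", r"<strong>\1</strong>", text)
    text = re.sub(r"\[i\](.*?)\[/i\]", r"<em>\1</em>", text)
    return text

print(bbcode_to_html("This is [b]bold[/b] and [i]italic[/i]."))
# -> This is <strong>bold</strong> and <em>italic</em>.
----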

Keeping my website online meant enduring an endless series of server changes that broke my code.

After a few years of fighting that, I took a break from maintaining my website and started using Blogger.com.

But all my old posts were now stuck in a database, and the website code that used to fetch them no longer worked.

This is the root of the problem we discussed at the meetup.

Preserving Content as Text

Ward Cunningham shared his story of trying to resurrect the work he’d done with HyperCard. His solution today is to move all the content into JSON stored as simple text. Take a look at his latest invention, Federated Wiki. Click the JSON link at the bottom of each page to see what that looks like.
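
To give a flavor of the approach, a page stored that way might look roughly like this (an illustrative sketch, not the exact Federated Wiki schema):

[source,json]
----
{
  "title": "Preserving Content",
  "story": [
    {
      "type": "paragraph",
      "text": "Plain text survives; proprietary binary formats often do not."
    }
  ]
}
----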

I’ve come to the same realization: if I want to preserve my content into the future, it has to be in a format that will still be readable in 20 years.

So I’m slowly dredging my old content out of those old databases and saving it into plain text files.
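
The export is mostly mechanical. Here is a sketch of that kind of script, assuming a SQLite database and a hypothetical posts table (the names are illustrative, not my actual schema):

[source,python]
----
import sqlite3
from pathlib import Path

# Assumed schema: posts(slug, title, date, body) -- illustrative,
# not the actual tables from my old sites.
conn = sqlite3.connect("old_blog.db")
out = Path("content/posts")
out.mkdir(parents=True, exist_ok=True)

for slug, title, date, body in conn.execute(
    "SELECT slug, title, date, body FROM posts"
):
    # One plain-text AsciiDoc file per post.
    doc = f"= {title}\n:date: {date}\n\n{body}\n"
    (out / f"{slug}.adoc").write_text(doc, encoding="utf-8")

conn.close()
----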

I’m formatting my text using AsciiDoc, which is something like a superset of Markdown.
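
For example, basic AsciiDoc formatting is readable even as raw text:

----
*bold text* and _italic text_

== A Section Heading

* a bullet item
* another bullet item
----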

Then I use a static website generator called Hugo to build my websites. Hugo can use JSON as the front matter, and it calls out to Asciidoctor to render the AsciiDoc formatting before building the website.
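
Put together, a post on disk is just one text file: JSON front matter followed by AsciiDoc content (the file name and field values here are illustrative):

----
{
  "title": "Preserving Content as Text",
  "date": "2016-05-20"
}

Plain text with *bold* and _italic_ formatting, readable
with or without a site generator.
----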

This blog, powered by HubPress, uses the same concept.

Hopefully I’ve minimized the migration frustration now that my content is in plain text. Twenty years from now I should still be able to open and read my text files.