The other day I joined a discussion on content migration with a client who is beginning a KM project. While preparing for the meeting I ran into this post. It’s from a customer support company, but many of the migration issues it describes are similar to those I’ve encountered in KM work. I’d like to share some of the strategies we’ve used over the years to turn this early implementation challenge into an opportunity for KM success.
As with many things in life, the 80/20 rule applies to content migrations. Any existing repository contains old, inaccurate, or duplicated data, along with information that is simply no longer relevant. (For high-quality repositories the split may not be exactly 80/20, but some level of cleanup is always possible, and usually advisable.) Why migrate content you don’t need? Migrating only the best content into your new repository gives your KM project a huge boost: with less overall data but higher-quality content, information discovery becomes more efficient and far more pleasant for your users.
Sort the wheat from the chaff
How do you identify the 80%? A couple of ways:
- Usage statistics from your current repository can tell you a lot about which content is being used and which is not. (Hopefully your current repository can provide this information. Lucidea’s Presto KM product can tell you which records users have viewed, as well as which files have been downloaded, an important distinction in a document repository.)
- Social statistics can also help: low ratings identify low-quality content, as do tags or comments containing negative phrases.
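To make the idea concrete, the two signals above can be combined into a rough keep-or-drop score. This is only a sketch with invented field names, weights, and sample records — not a Lucidea API or a prescribed formula — but it shows how usage and social statistics exported from a repository might rank content for migration.

```python
# A minimal sketch of scoring repository records for migration, assuming
# usage and social statistics are available as simple per-record counts.
# All field names, weights, and sample data here are hypothetical.

from dataclasses import dataclass

@dataclass
class Record:
    title: str
    views: int         # times the record was viewed
    downloads: int     # times an attached file was downloaded
    avg_rating: float  # mean user rating, 0-5 (0 = unrated)

def migration_score(r: Record) -> float:
    """Combine usage and social signals into a single rough score."""
    usage = r.views + 3 * r.downloads   # weight downloads above views
    social = r.avg_rating / 5.0
    return usage * (0.5 + social)       # low ratings dampen heavy usage

def select_for_migration(records, keep_fraction=0.2):
    """Keep only the top-scoring fraction of records (the valuable '20%')."""
    ranked = sorted(records, key=migration_score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

records = [
    Record("Onboarding guide", views=120, downloads=40, avg_rating=4.5),
    Record("Old price list",   views=3,   downloads=0,  avg_rating=1.0),
    Record("API handbook",     views=80,  downloads=25, avg_rating=4.0),
    Record("Duplicate memo",   views=1,   downloads=0,  avg_rating=0.0),
]

for r in select_for_migration(records, keep_fraction=0.5):
    print(r.title)
```

In practice the cutoff and weights would be tuned with the content owners, and borderline records reviewed by hand rather than dropped automatically.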
Unlock the power of what you have
Sometimes migration is not a good idea. Many repositories are great at capturing information and ensuring regulatory compliance, but they don’t enable the knowledge sharing you want. If this is the case, keep the repository for content capture and use your KM system as a discovery layer to enable access and information exchange. We refer to this as “unlocking the vault.” And if your KM system has connectors that allow indexing of third-party repositories, you get the best of both worlds: information capture and regulatory compliance stay outside your KM project, while your goal of sharing knowledge more widely is still achieved. (Lucidea’s KM product has connectors to a wide variety of repositories and is perfect for this scenario.)
Crowdsourced success
Sometimes you do need to migrate the whole repository, because the 80/20 rule isn’t relevant or connecting to the content won’t work. For example, when an aging mainframe-based repository is going offline next quarter, you just need to get the data out. In these cases we recommend turning on usage statistics and social features in the new repository and beginning an aggressive “content weeding” process to clean up the data. Enlisting users’ help via commenting, rating, or tagging documents often invigorates the launch of a new repository, increasing its chances of success.
Measure it, monitor it, manage it
We once worked with a client that had 200,000 totally disorganized documents on a shared drive. They needed to make the documents more accessible, but no metadata existed, and they knew that a significant percentage of the files were “dustbin-worthy.” What to do? We helped them import the documents into a KM repository, making them immediately more secure and manageable, more accessible via the web, and more findable via full-text indexing. The client then monitored usage and used social tools to identify high-quality content. They also monitored users’ searches to understand what content people were looking for and what they found useful. Within six months the repository was much smaller, far better organized, and filled with much higher-quality content. The client used search statistics to create a taxonomy, then used that taxonomy to organize the content.
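The search-statistics-to-taxonomy step can be sketched simply: tally the terms users actually search for, and let the most frequent ones become candidate categories for a human editor to review. This is an illustrative sketch only — the query log below is invented, and any real project would use far richer analysis than word counts.

```python
# A minimal sketch of mining a search log to seed a taxonomy, assuming the
# log is available as plain query strings. The sample queries are invented.

from collections import Counter

STOPWORDS = {"the", "a", "of", "for", "how", "to"}

def candidate_terms(queries, top_n=5):
    """Count non-stopword terms across all queries; frequently searched
    terms become candidate taxonomy categories for editorial review."""
    counts = Counter()
    for q in queries:
        for term in q.lower().split():
            if term not in STOPWORDS:
                counts[term] += 1
    return counts.most_common(top_n)

queries = [
    "onboarding checklist",
    "expense policy",
    "travel expense form",
    "onboarding schedule",
    "expense reimbursement",
]

print(candidate_terms(queries))
```

Here “expense” and “onboarding” would surface as the strongest category candidates; a librarian or taxonomist would then refine, merge, and label the final structure.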
How have you handled content migrations in the past? Any lessons you would care to share?