Last week Kristian Norling from VGR (Västra Götaland Regional Council) posted a really interesting and important blog post about information flow. For those of you who don't know what VGR has been up to previously, here is a short background.
For a number of years VGR has been working to realize a model for how information is created, managed, stored and distributed. And perhaps the most important part: integrated.
Why is Information Flow Important?
In order to give your users access to the right information, it is essential to get control of the whole information flow, that is, from the time it is created until it reaches the end user. Without this knowledge, it is almost impossible to ensure quality and accuracy.
The fact that we have control also gives us endless possibilities when it comes to distributing the right information at the right time (an old cliché that is finally becoming reality). To sum up: that is what search is all about!
When information is being created, VGR uses a metadata service that helps editors tag their content by giving keyword suggestions.
In practice this means that the information can be distributed the way it is intended. News items are, for example, tagged with subject, target group and organizational information (apart from dates, author, expiry date etc., which are automated), meaning that people belonging to specific groups with certain roles get the news that is important to them.
Once the information is tagged correctly and published it is indexed by search. This is done in a number of different ways: by HTML-crawling, through RSS, by feeding the search engine or through direct indexing.
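One of the indexing paths above can be sketched in a few lines: reading an RSS feed and handing each item to the search engine's indexer. This is an illustrative sketch only; the feed content and the `index_document` function are invented, not VGR's actual API.

```python
# Sketch: indexing content from an RSS feed. The feed and the
# index_document function are hypothetical examples.
import xml.etree.ElementTree as ET

RSS = """<rss><channel>
<item><title>New guideline published</title><link>http://example.org/1</link></item>
<item><title>Board meeting minutes</title><link>http://example.org/2</link></item>
</channel></rss>"""

def index_document(doc):
    # In a real system this would push the document to the search engine.
    print("indexing:", doc["title"])

for item in ET.fromstring(RSS).iter("item"):
    index_document({"title": item.findtext("title"),
                    "url": item.findtext("link")})
```

The same indexer entry point could be fed by an HTML crawler or by direct pushes from the source systems, which is what makes the search index a single access point.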
After this, the information is available through search and ready to be distributed to the right target groups. Portlets are used to give single sign-on access to a number of information systems, and template pages in the WCM (Web Content Management system) use search alerts to show updated information.
Simply put: a search alert for, e.g., meeting minutes that contain your department's name will give you an overview of all such information as soon as it is published, regardless of which system it resides in.
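The alert mechanism can be thought of as a stored query run against newly indexed documents. Here is a minimal sketch under the assumption that each document carries the metadata fields described earlier; the field names and data are made up for illustration.

```python
# Sketch of a saved-search alert: each alert is a stored query that is
# matched against newly indexed documents, whatever their source system.
# Field names and data are illustrative, not VGR's actual schema.

def matches(alert_query: dict, document: dict) -> bool:
    """A document matches if every field/value pair in the alert is present."""
    return all(document.get(field) == value
               for field, value in alert_query.items())

def run_alerts(alerts, new_documents):
    """Return (alert name, document title) pairs for triggered alerts."""
    hits = []
    for alert in alerts:
        for doc in new_documents:
            if matches(alert["query"], doc):
                hits.append((alert["name"], doc["title"]))
    return hits

alerts = [{"name": "dept-minutes",
           "query": {"type": "meeting minutes", "department": "IT"}}]
docs = [
    {"title": "IT dept minutes", "type": "meeting minutes",
     "department": "IT", "source": "intranet"},
    {"title": "HR minutes", "type": "meeting minutes",
     "department": "HR", "source": "archive"},
]
print(run_alerts(alerts, docs))  # [('dept-minutes', 'IT dept minutes')]
```

Because the match is done on metadata rather than on the source system, the alert works across all systems the search engine indexes.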
Furthermore, the blog post describes VGR's work with creating short and persistent URLs (through a URL service) and how to "monitor" and "listen to" the information flow (for real-time indexing and distribution) – areas where we all have things to learn. Over time Kristian will describe the different parts of the model in detail, so be sure to keep an eye on the blog.
What are your thoughts on how to get control of the information flow? Have you been developing similar solutions for part of this?
I realise we are a bit late. Fredrik Wackå, a senior IT strategist, has already written an excellent article on his blog (in Swedish). He has, among other things, interviewed Kristian Norling (on Twitter), who has been working with portal strategies and better search for many years at Region Västra Götaland. However, for all our non-Swedish-speaking guests, here is a short summary:
Findwise has during the last few months been working on a new search solution for Region Västra Götaland. The two main goals have been to deliver a search experience that is both fast and accurate. The result? Today a search at VGR takes about 0.1-0.2 seconds, faster than a Google search on the web.
Furthermore, there was a need for context. Large amounts of information require ways to filter and sort – otherwise the users will drown in the result list. By giving end users the ability to filter and sort the search results, they can look for general information within an area as well as quickly narrow down to a specific piece (for example, seeing only the PDF files created in 2009 with two clicks). The filters (and thereby the metadata standard) include:
- Information type
- Where the document resides
- Where it belongs in the organization
- What source it has
- When it was last changed
- Who has written it
- What format it resides in
- Keywords that have been added
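The "two clicks to the PDFs from 2009" narrowing can be sketched as successive metadata filters over the result list. This is a simplified illustration, assuming each hit carries the fields listed above; the field names and sample data are invented.

```python
# Sketch of faceted narrowing over search results. Each chosen facet
# removes the results that do not match. Field names are hypothetical.

def narrow(results, **facets):
    """Keep only the results whose metadata matches every chosen facet."""
    return [r for r in results
            if all(r.get(key) == value for key, value in facets.items())]

results = [
    {"title": "Budget 2009", "format": "pdf", "year": 2009},
    {"title": "Budget 2008", "format": "pdf", "year": 2008},
    {"title": "Press release", "format": "html", "year": 2009},
]

# "Two clicks": first format=pdf, then year=2009.
print(narrow(results, format="pdf", year=2009))  # keeps only 'Budget 2009'
```

A real search engine computes the facets (and their document counts) at index time, but the narrowing logic the user experiences is essentially this intersection of metadata constraints.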
The search solution also includes a metadata service. Like so many others, VGR has been struggling to get its metadata in place.
Apart from the metadata supported by the system (where Dublin Core is used), the metadata service does two things:
- Analyses the text content, compares it to a taxonomy and gives the writer suggestions for keywords that he/she can use
- Gives the writer the ability to add additional keywords
Apart from this, the end users will be able to add tags. These are compared with two lists: if a tag appears on the "white list" it is published right away; if it is on the "blacklist" it is deleted. Anything in between is reviewed before it is published.
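The moderation rule just described is a simple three-way decision, which can be sketched as follows. The list contents and function name are illustrative, not VGR's actual implementation.

```python
# Sketch of the white-list/blacklist tag moderation described above.
# The list contents are hypothetical examples.

WHITELIST = {"healthcare", "budget"}
BLACKLIST = {"spam"}

def moderate_tag(tag: str) -> str:
    """Return 'publish', 'delete' or 'review' for a user-submitted tag."""
    if tag in WHITELIST:
        return "publish"
    if tag in BLACKLIST:
        return "delete"
    return "review"

print([moderate_tag(t) for t in ["healthcare", "spam", "dentistry"]])
# ['publish', 'delete', 'review']
```

Tags approved during review can be promoted to the white list, so the system gradually learns from the users' collective vocabulary.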
To conclude: a lot of effort has been put into creating a good search experience, and VGR continues to deliver functionality and solutions that are light-years ahead of many others. The combination of supporting systems and the "collected intelligence" of the writers and end users will make it even better over time. Search is about supporting systems, content and people alike.
Read more on Fredrik Wackå's blog.