Content Governance – life cycle and reach

This is the fifth post in a series (1, 2, 3, 4, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.

Starting from our first post we have covered different aspects you need to consider as you take each step, including information structure and how it is managed, using Office 365 and SharePoint as a technology example. In this post we cover governance and how content should be managed in the cloud.

Content buckets

Content created within a context, such as a departmental site or team habitat, usually has reach and bearing only for the local context of fellow members of staff within that unit. Other pieces of content have a coverage that stretches across all parts of the business. One simple example is the bucket of content that makes up the management system, with governing principles, strategies, policies and guidelines that describe the core processes, activities, roles and so forth within an organisation.

Yet other content, such as the outcome from a project, will build a bucket of content that either lives on in a new context, improves an existing bucket of content or feeds into a following project.

From an information management perspective, it is vital that you have organising principles for all your content that cover all these layers: both the reach and the life cycle of each set of content.

You need a governance framework that reaches out to every bucket of content. This covers what is still on your estate as well as the growing amount in the cloud. All content needs to be managed to remove the risk of leakage of sensitive information and to prevent people having an inconsistent user experience as they move from one bucket of content in the cloud to another content bucket still on the estate.

You need to make sure people do not see the difference between buckets of content on the estate and content buckets in the cloud. People using your content to help with their work don’t need to know where the content is kept. They need to find it as easily as before, preferably even more easily! Content in the cloud should feel the same and be a natural extension of the digital environment people are already used to. Manage it with a governance framework that covers every bucket of content and it becomes easier to adopt quickly and to use often, without caution or delay.

Part of your governance needs to cover publishing standards based on business needs, so that content is easy to access from any device, e.g. laptops, tablets and smartphones, and to view without unnecessary authentication levels. This helps to create the consistent, good user experience that encourages people to use your content whether the bucket is in the cloud or not.

A professional team from group HR might work in their local team site, with ongoing conversations, work-in-progress documents and so forth. Pieces of their content production lead to governing policies that have a global reach within the organisation and need to be linked from the corporate intranet spaces, with versioning and good-quality resource descriptions (metadata). This practice and professional network of HR people also share content on a departmental site, with links and resources that have a direct impact on their internal processes. The group has both outreaching triggers and in-bound conversations, and has to balance these two states.

When it comes to temporal content buckets, like a project team site, there are several considerations to capture. First, where will the outcome and result be stored when the project is finished, and in which context will these content pieces contribute? Second, what should be captured from all the ongoing conversations (social elements), work-in-progress and drafts developed during the project’s life cycle? Should a project habitat be searchable after closing down? Or does the habitat change status, so that all documentation stays within the collection but the overarching state of the habitat changes? Within SharePoint, these temporal states, versions, workflows and properties sum up the organising principles.

If these principles haven’t been ironed out, described and decided, ghost towns of dead habitats and lost collections of content will inevitably emerge, with no governance or ownership whatsoever. All this will become a digital landfill.

We will cover more about SharePoint in our next post in this series. Please visit Michael Sampson’s recent slides, where he takes you through strategy, planning, governance and user adoption for collaboration!
Please join our Live Stream on YouTube the 20th November 8.30AM – 10AM Central European Time
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

The Curator – how to cultivate the habitat

This is the fourth post in a series (1, 2, 3, 5, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.

In the first post we set out the most common challenges you are likely to face and how you may overcome these.  In the second post we focused on how Office 365 and SharePoint can play a part in moving to the cloud.  In the third post we covered how they can help join up your organisation online using their collaboration tools and features.

In this post we cover engagement, and how sorting and categorisation of artifacts, according to a simple-to-understand and easy-to-use standard, form the building blocks of the curation and cultivation process.

All document libraries should have one standard listing of all items – with two very distinct audiences: first, the actors within the habitat, the people contributing, acting and joining the daily conversation; and second, the visitors who pass by the habitat to collect, link and act upon the content presented within the habitat’s realm.

This makes it very easy for visitors to find their way around a habitat, provided the visitors’ area (business lounge) is pretty much aligned to the overarching theme of the site and all artifacts that the project team would like to share more widely are listed in a virtual bookshelf, with major versions only. The visitors’ area has all the relevant data presented up front – basically the answers to the questions set when starting the project. The visitors’ area shouldn’t be a backdrop, but rather a storefront, and the content has to be of good quality. Then there should be options to engage with the inner living room of the habitat and enter the messy ongoing conversations, depending on access rights. But the default setting should always be open to unexpected “internal” visitors (within the realm of the organisation). If the visitors’ area is compiled in a nice and easy-to-use manner, most visitors are happy just to pick the best read from the bookshelf, or at least raise a question for the team! The social construct for this is “welcoming a stranger”, since that visitor might link to your team’s content, cross-linking it into their own social spaces.

The habitat’s living room and its social conversations will call for new, context-specific organising principles. A team might want to add new list items, sort categories or introduce very local what-goes-where themes. This may be especially so when the team consists of actors who have different roles and responsibilities with regard to the overall outcome. Because of this, there may be a particular mix of tools or services in this one habitat of many, where they hang out for project tasks.

This contextual adjustment is where the curator has to work on a cultivation process that glues the team together. The shared terminology within a group conversation is what binds their practices together. At inception, the curator picks a bouquet of on-topic terms from the controlled vocabularies. Mixed with everyday use and contributions from all members, this can lead to fruitful, semantically-enhanced conversations with end-user generated tags, or “folksonomies”. The same goes for the interior design of links, tools, chosen content types and other forms of artifacts that the team will need to fulfil its goals and outcomes.
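
As a small illustrative sketch (the term set, tag names and function are invented, not tied to any SharePoint term store API), the curator’s blend of controlled vocabulary and folksonomy could be modelled like this:

```python
# Minimal sketch: blending a controlled vocabulary with end-user folksonomy tags.
# The vocabulary, tags and function names are illustrative only.

CONTROLLED_VOCABULARY = {"recruitment", "onboarding", "compensation", "hr-policy"}

def curate_tags(curator_picks, user_tags):
    """Return the habitat's tag set, recording where each term came from."""
    tags = {}
    for term in curator_picks:
        if term in CONTROLLED_VOCABULARY:
            tags[term] = "controlled"                 # picked from the managed term set
    for term in user_tags:
        tags.setdefault(term.lower(), "folksonomy")   # everyday, user-generated terms
    return tags

if __name__ == "__main__":
    habitat_tags = curate_tags(
        curator_picks=["recruitment", "hr-policy"],
        user_tags=["New-Hire-FAQ", "recruitment", "team-offsite"],
    )
    print(habitat_tags)
```

Keeping the provenance of each term makes it possible to promote popular folksonomy tags into the controlled vocabulary over time, which is one way the cultivation process can work in practice.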

The governance of the habitat leans very much on the shared experiences in the group and on assigned responsibilities for stewardship and curation – where publishing standards, guidelines and training should be part of the mix.

We will cover more on governance and how content should be managed in the cloud in our next post.
Please join our Live Stream on YouTube the 20th November 8.30AM – 10AM Central European Time
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

Housekeeping rules within the Habitat

This is the third post in a series (1, 2, 4, 5, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.

 In the first post we set out the most common challenges you are likely to face and how you may overcome these.  In the second post we focused on how Office 365 and SharePoint can play a part in moving to the cloud.  Here we cover how they can help join up your organisation online using their collaboration tools and features.

Habitat

When arranging the habitat, it is key to address the theme of collaboration, since each theme calls for different settings of artifacts and services. In many cases, teamwork is situated in the context of a project. Other themes for collaboration are line-of-business unit teamwork, or learning networks, a.k.a. communities of practice. I will leave these latter themes for now.

Most enterprises have some project management process (PMP) that all projects have to adhere to, with complementary documentation and reporting mechanisms. This is so the leadership within the organisation can align resources and govern the change portfolio across different business units. Given this structure, it is very easy to depict measurable outcomes, as project documents have to be produced regardless of what the project is supposed to contribute.

The construction of a habitat, or the design of a joint workplace, boils down to pragmatic steps that are aligned with the overarching project framework at hand: answering a few simple questions (the inverted pyramid), each mapped to Dublin Core elements:

  • Who? Who will participate, and who (which organisation) will own the outcome of the joint effort pulling the project together (dc.contributor; dc.creator; dc.provenance), and what is its reach (dc.coverage; dc.audience)?
  • What? What is the project all about – its topic and theme (dc.subject; dc.title; dc.description; dc.type)?
  • When? When will the project run, and what is the timeline for ending it – all temporal themes around the life of a project (dc.date)?
  • Where? Where will participants contribute – what goes where and why (dc.source; dc.format; dc.identifier)?
  • Why? Usually defined in the project description, setting common ground for the goals and expected outcome (dc.description).
  • How? Defines the processes, practices and tools used to create the expected outcome of the project, with links to common resources such as the PMP framework, but also links to other key data sets, like ERP record keeping and master data for the project number and other measures not stored in the habitat but still pillars to align with the overarching model (dc.relation).

When these questions have been answered, the resource description for the habitat is set – in SharePoint, for example, via the property bag feature. During the lifespan of the ongoing project, all contributions, conversations and created artifacts can inherit rule-based metadata from the collection’s resource description. This reduces the burden on the actors producing the content, by enabling automagic metadata completion where applicable. And for wayfinding and findability within and between habitats, these resource descriptions become the building blocks of a sustainable information architecture.
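
To make the idea concrete, here is a minimal sketch, deliberately not tied to the actual SharePoint property bag API, of how artifacts might inherit rule-based metadata from the habitat’s resource description; all names and values are illustrative:

```python
# Minimal sketch of rule-based metadata inheritance from a habitat's
# resource description, using Dublin Core style keys. Illustrative only.

HABITAT_DESCRIPTION = {
    "dc.title": "Project Wagon Train",
    "dc.creator": "Group HR",
    "dc.subject": "cloud migration",
    "dc.coverage": "global",
    "dc.date": "2014-11-20",
}

def describe_artifact(filename, overrides=None):
    """Create an artifact record that inherits the habitat's metadata.

    Explicit values in `overrides` win over the inherited defaults.
    """
    record = dict(HABITAT_DESCRIPTION)   # inherit collection-level metadata
    record["dc.identifier"] = filename
    record.update(overrides or {})       # artifact-specific additions
    return record

if __name__ == "__main__":
    doc = describe_artifact("kickoff-minutes.docx", {"dc.type": "meeting minutes"})
    print(doc)
```

The point is simply that the actors never fill in the inherited fields by hand; only the artifact-specific values need a human decision.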

In our next post we will cover how to encourage employee engagement with your content.

Please join our Live Stream on YouTube the 20th November 8.30AM – 10AM Central European Time
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

Wagon Trains to the Cloud

This is the first post in a series (2, 3, 4, 5, 6, 7) on the challenges organisations face when they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.

In this first post we show you the most common challenges that you are likely to face and how you may overcome these.

A fast migration path to becoming tenants in cloud apartment housing unfolds a set of business-critical issues that have to be mitigated:

  • Wayfinding in a maze of content buckets and social habitats.
  • Emerging digital Ghost Towns due to lack of information governance.
  • Digital Landfills without organising principles for information and data.
  • Digital Litter with little or no governance or principles for ownership, with redundant, outdated and trivial (ROT) content.
  • No strategy or plan, which erodes any possibility of a positive business outcome from moving to the cloud.

“WagonTrn” by Tillman at en.wikipedia, transferred from en.wikipedia by SreeBot. Licensed under public domain via Wikimedia Commons.

The way forward is to settle on a sustainable information architecture that supports an information environment in constant flux, with information and data interoperable on any platform, everywhere, anytime and on any device.

You need to show how everything is managed and how everyone fits together. A governance framework can help do this. It can show who is responsible for the intranet, what their responsibilities are and how they fit with the strategy and plan. Making it available to everyone on the intranet helps them understand how it is managed and how it supports the business.

The main point is to have a governance framework and an information architecture with the same scope, to avoid gaps where content is not managed or cannot be found.

Both need to be in harmony and included in any digital strategy. This avoids competing information architectures and governance frameworks being created by different people, which causes inconsistent experiences: people do not find what they need and resort to alternative, less efficient ways of finding what they need to help with their work.

Background

Building huts, houses and villages is an emerging social construction. As humans we coordinate our common resources, tools and practices. A habitat populated by people needs housekeeping rules, with available resources for cooking, cleaning, social life and so on: routines that define who does which task and by when, in order to keep everything in order.

A framework of governing principles that sets out roles and responsibilities, along with standards that set the expected level of quality and quantity for each task, which everyone is engaged with and complies with, is similar to how the best intranets and digital workplaces are managed.

In the early stages, with a small number of habitats, the rules for coordination are pretty simple, both for shared resources between the groups and for the pathways that connect them. The bigger a village gets, the more it taxes the structures that keep things smooth. When we move into mega cities with 20+ million people living close together, it boils down to a general overarching plan and common infrastructure, but you also need local networked communities in order to find feasible solutions for living together.

Like villages and mega cities, there is a need for consistency that helps everyone to work and live together. Whenever you go out you know that there are pavements to walk on, roads for driving, traffic lights that we stop at when they turn red and signs to show us the easiest way to get to our destination.

Sustainable architecture and governance create a consistent user experience. A well-structured information architecture aligned with a clear governance framework sets out roles and responsibilities, and publishing standards based on business needs support the publishers who follow them. This means that wherever content is published, whether it is accredited or collaborative, it will appear consistent to people and be located where they expect it to be. This encourages a natural way to move through a digital environment, with recognisable headings and consistently placed search and other features.

This allegory fits like a glove when moving into large, enterprise-wide shared spaces for collaboration, whether cloud based, on-premises or a mix thereof. The social constructions and constraints remain the same. As IT services on tap, the cloud certainly has constraints: habitats have to be flexible and adjustable so that as many similar habitats as possible can be hosted. But it offers a key solution you can move into instantly! Tenants share the same apartment building (SharePoint Online).

When the set of habitats grows, navigating this maze becomes a hazard for most of us. Wayfinding in a digital mega city is extremely difficult. To a large extent, enterprises moving into collaboration suites suffer from the same stigma, regardless of whether it is SharePoint, IBM Connections, Google Apps for Work or a similar setting. It is not a discussion of which type of house to choose, but rather which architecture and plan will work in the emerging environment.

Information Architecture for Digital Habitats

If one leans upon linked data, linked open data and the emerging semantic web and web-of-data standards, there is a set of very simple guidelines to adhere to when building a digital village or mega city: the 5 stars, our beacon of light!

All collections and shared spaces should have persistent URIs, which is the fourth star on the ladder. The third star, non-proprietary formats, obviously becomes a bit tricky, since e.g. MS SharePoint and MS Office like to encourage their own formats. But if one adds resource descriptions to collections and artifacts using Dublin Core elements, it becomes possible to connect different types of matter. With feasible and standardised resource descriptions it is possible to add schemas and structures that tell us a little more about the artifacts or collections thereof – hence the option to adhere to the second star. The first star, inside the corporate setting, becomes key to connecting different business units: areas with open licences, areas restricted to internal use only and, in some cases, areas open to external parties.
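
As a minimal sketch of such resource descriptions (assuming the Python rdflib library; the URIs below are invented, persistent-looking identifiers), a habitat and one of its artifacts could be described with Dublin Core terms like this:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

# Invented, persistent URIs for a habitat (collection) and one of its artifacts.
habitat = URIRef("https://intranet.example.org/id/habitat/project-wagon-train")
artifact = URIRef("https://intranet.example.org/id/doc/kickoff-minutes")

g = Graph()
g.add((habitat, DCTERMS.title, Literal("Project Wagon Train")))
g.add((habitat, DCTERMS.creator, Literal("Group HR")))
g.add((habitat, DCTERMS.subject, Literal("cloud migration")))

g.add((artifact, DCTERMS.title, Literal("Kick-off minutes")))
g.add((artifact, DCTERMS.isPartOf, habitat))   # link the artifact to its collection

print(g.serialize(format="turtle"))
```

Because the descriptions live outside the (possibly proprietary) document formats, they can be harvested and linked even when the artifacts themselves stay in SharePoint or Office formats.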

Linking data sets – that is, collections or habitats with different artifacts – is the fifth star. This is where it all starts to make sense, enabling a connected digital workplace: building a city plan with pathways, traffic signals and rules, highways, roads, neighbourhoods, infrastructural services and more. In other words, placemaking!

Placemaking is a multi-faceted approach to the planning, design and management of public spaces. Placemaking capitalizes on a local community’s assets, inspiration, and potential, with the intention of creating public spaces that promote people’s health, happiness, and well being.

We will cover more about how this applies to Office 365 and SharePoint in our next post.

Please join our Live Stream on YouTube the 20th November 8.30AM – 10AM Central European Time
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

Enterprise-Linked-Data and the Connected Digital Workplace

The emerging hyper-connected and agile enterprises of today are stigmatised by their IS/IT-legacy, so the question is: Will emerging web and semantic technologies and practices undo this stigma?

The Shift

Semantic Technologies and Linked-Open-Data (LOD) have evolved since Tim Berners-Lee introduced their basic concepts, and they are now part of everyday business on the Internet, thanks mainly to their uptake by information- and data-driven companies like Google, social networks like Facebook and large content sites like Wikipedia. The enterprise information landscape is ready to be enhanced by the semantic web, to increase findability and usability. This change will enable a more agile digital workplace where members of staff can use cloud-based services anywhere, anytime, on any device, in combination with the set of legacy systems backing their line of business. All in all, more efficient organising principles for information and data.

The Corporate Information Landscape of today

In the everyday workplace we use digital tools to cope with the tasks at hand. These tools have been set into action with meta models that structure the social life of dealing with information and data. The legacy of more than 60 years of digital record keeping has left us in an extremely complex environment, where most end-users have a multitude of spaces in which they are supposed to contribute. In many cases their information environment lacks interoperability.

A good, or rather bad, example of this is the electronic health record (EHR) of a hospital, where several different health professionals try to codify their ongoing work in order to make better-informed decisions regarding different medical treatments. While this is a good thing, it is heavily hampered by closed-down silos of data that do not work in conjunction with new, more agile work practices. It is not uncommon to have more than 20 different information systems in use for provisioning during a workday.

The information systems architecture in any organisation or enterprise may comprise home-grown legacy systems, off-the-shelf software suites and extremely complex enterprise-wide information systems like ERP, BI, CRM and the like. The connections between these information systems (or integration points) often resemble point-to-point “spaghetti”. The work practice for many IT professionals is to map this landscape of connections and information flows using, for example, Enterprise Architecture models. Many organisations use information integration engines, like enterprise service bus applications or master data applications, as a means to decouple the tight integration and get away from proprietary software lock-in.

On top of all these schema-based, structured-data information systems lies the social and collaborative layer of services, with things like the intranet (web-based applications), document management, enterprise-wide social networks (e.g. Yammer), collaborative platforms (e.g. SharePoint) and, more obviously, e-mail, instant messaging and voice/video meeting applications. All of these platforms and spaces in which one carries out work tasks hold either semi-structured (document management) or unstructured data.

Wayfinding

Survival in the enterprise information environment requires a large dose of endurance and skill. Many end-users get lost in their quest to find the relevant data when they should be concentrating on making well-informed decisions. Wayfinding is our in-built, adaptive way of coping with the unexpected: finding different pathways and means to solve the issues. In other words… Findability.

Outside-in and Inside-Out

Today most organisation and enterprise workers act on the edge of the corporate landscape – in networked conversations with customers, clients, patients/citizens, partners, or even competitors, often employing means not necessarily hosted inside the corporate walls. On the Internet we see newly emerging technologies adopted and adapted at a faster rate, and in a more seamless fashion, than the cumbersome ones of the internal information landscape. So the obvious question raised by all this flux is: why can’t our digital workplace (the inside information landscape) be as easy to use, and to find things and information in, as the external digital landscape? Why do I find knowledgeable peers in communities of practice more easily outside than I do on the inside? Knowledge sharing at the outposts of the corporate wall is vivid and truly passionate, whereas inside it is pretty stale and lame, to say the least.

Release the DATA now

Aggregate technologies, such as business intelligence and data warehouses, use a capture, clean-up, transform and load mechanism (ETL) across all the existing supporting information systems. The problem is that the schemas and structures of things do not compile that easily. Different uses and contexts make even the most central terms difficult to carry into a new context. This simply does not work. The same problem can be seen in the enterprise search realm, where we try to cope with both unstructured and semi-structured data. One way of solving all this is to create one standard that all the others have to follow, embodying a least common denominator, combined with master data management. In some cases this can work, but often the failures from such efforts are bigger than those arising from trying to squeeze an enterprise into a one-size-fits-all mega-matrix ERP system.

Why is that, you might ask? On the blueprint it sounds compelling: just align the business processes and all data flows will follow a common path. The reality, unfortunately, is far more complex, because any organisation comprises several different processes, practices, professions and disciplines, and these all have different perspectives on the information and data to be shared. This is precisely why we have so many applications in the first place! To what extent can we solve this with emerging semantic technologies? These technologies are not a silver bullet, far from it! The Web, however, shows a very different way of thinking about integration, with interoperability and standards as the main pillars everything else relies on. If you use agreed, controlled vocabularies and standards, there is a better chance of actually being able to sort out all the other things.

Remember that most members of staff work on the edges of the corporate body, so they have to align themselves to the lingo of all the external actor-networks and then translate it all into codified knowledge for the inside.

Semantic Interoperability

Today most end-users use internet applications and services that already apply semantic enhancements to bridge the gap between things, without ever having to think about it. One omnipresent social network is Facebook, whose Open Graph draws on the FOAF (Friend-of-a-Friend) standard. Using a graph to connect data is the very cornerstone of linked data and the semantic web: a thing (entity) has descriptive properties and relations to other entities, and one entity’s property might be another entity in the graph – the simple subject-predicate-object relationship. From the graph we thus get a very flexible and resilient platform, in stark contrast to more traditional fixed schemas.
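
As a small illustration of the subject-predicate-object model (again assuming the Python rdflib library; the URIs are invented for the example), two people and a “knows” relation can be expressed as triples:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

# Invented example URIs; any persistent identifier scheme would do.
alice = URIRef("https://example.org/id/person/alice")
bob = URIRef("https://example.org/id/person/bob")

g = Graph()
g.add((alice, FOAF.name, Literal("Alice")))   # subject - predicate - object
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))               # one entity's property is another entity

# Everything is just triples, so new relations can be added without schema changes.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```

New properties or entities can be added at any time without migrating a schema, which is exactly the flexibility that fixed relational models lack.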

The Semantic Web and Linked-Data are a way to link different data sets that may grow from a multitude of schemas and contexts into one fluid, interlinked experience. If all internal supporting systems, or at least the aggregate engines, could simply apply a semantic texture to all the bits and bytes flowing around, it could well provide a solution where other set-ups have failed. Remember that these linked-data sets are resilient by nature.

There is a set of controlled vocabularies (thesauri, ontologies and taxonomies) that capture the topics, themes and entities that make up the world. These vocabularies have to some extent already been developed, classified and given sound resource descriptions (RDF). The Linked-Open-Data clouds are experiencing a rapid growth of meaningful expressions. Wikidata, DBpedia, Freebase and many more ontologies hold a vast set of crisp and useful data that, when intersected with internal vocabularies, can make things so much easier. A very good example of such useful vocabularies, developed by professional information science people, is the Getty Institute’s recently released thesauri: AAT (Art & Architecture Thesaurus), CONA (Cultural Objects Name Authority) and TGN (Thesaurus of Geographic Names). These are very trustworthy resources, and using linked data anybody developing a web or mobile app can reuse their namespaces for free and with high accuracy. The same goes for all the other data sets in the linked-open-data cloud. Many governments have declared open data the main innovation space in which to release their resources, under the realm of the “Commons”.
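
For illustration, reusing one of these open data sets can be as simple as a SPARQL lookup. Here is a minimal sketch with the Python SPARQLWrapper library against DBpedia’s public endpoint (the endpoint’s availability and the exact resource IRI used are assumptions on my part, not something from the original post):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# DBpedia's public SPARQL endpoint (assumed available; subject to change).
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

# Fetch the English label of an (assumed) DBpedia resource.
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Linked_data> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")

for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["label"]["value"])
```

The same pattern applies to the Getty vocabularies or any other endpoint in the linked-open-data cloud; only the endpoint URL and the query change.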

In addition to this, all major search engines have agreed on a set of very simple-to-use schemas captured in the www.schema.org world. These schemas have been very well received by the webmaster community from their very inception. All of these feed into the Google Knowledge Graph and all the other smart (search-enabled) things we use daily.
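
To make that concrete, schema.org descriptions are commonly published as JSON-LD embedded in web pages. A small, invented example (the property names are schema.org terms, the values are placeholders), built here with plain Python:

```python
import json

# Invented schema.org/Organization description; property names are schema.org
# terms, the values are placeholders for illustration only.
organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Logistics AB",
    "url": "https://www.example.com",
    "description": "Fictitious organisation used to illustrate schema.org markup.",
}

# On a web page this would sit inside <script type="application/ld+json"> ... </script>.
print(json.dumps(organisation, indent=2))
```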

Seen from the corporate world, these Internet mega-trends have, or should have, a big impact on the way we do information management inside the corporate walls. This would be particularly the case if the siloed repositories and records were semantically enhanced from their inception (creation), for subsequent use and archiving. We would then see more flexible and fluid information management within the digital workplace.

The name of the game is interoperability at every level: not just technical device specifics, but interoperability at the semantic level and at the level of the governing principles for how we organise our data and information, regardless of their origin.

Stepping down to some real-life examples

In the law enforcement system of any country there is a set of actor-networks at play: the police, attorneys, courts, prisons and the like. All of them work within an inter-organisational process that runs from capturing a suspect, filing a case, running a court session, judgement, sentencing and imprisonment, followed at the end by a reassimilated member of society. Each of these actor-networks or public agencies has its own internal information landscape with supporting information systems, and they all rely on a coherent and smooth flow of information and data between each other. The problem is that while they may use similar vocabularies, the contexts in which these are used may be very different, due to their different responsibilities and enacted environments (laws, regulations, policies, guidelines, processes and practices) when looking from a holistic perspective.

IA LOD Innovation

A way to overcome this would be to infuse semantic technologies and shared controlled vocabularies throughout, so that the mix of internal information systems could become interoperable regardless of the supporting information system or storage type. In such a case linked-open-data and semantic enhancements could glue and bridge the gaps to form one united composite around a single individual’s record keeping. In this way the actual content would not be exposed; rather, a metadata schema would be employed to cross the previously existing boundaries.

This is a win-win situation, as semantic technologies and any linked-open-data tinkering use the shared conversation (terms and terminologies) that already exists within the various parts of the process. As long as all parts cohere to the semantic layers, there is no need to reconfigure internal processes or to apply other parties’ resource descriptions and elements. In this way only those parts of schemas that are context specific for a given part of the process are used, allowing the lingo of the related practices and professions to be aligned.

This is already happening in practice in the internal workplace environment of an existing court, where a shared intranet is based on the organising principles already mentioned and applies sound, pragmatic information management practices and metadata standards like Dublin Core and common vocabularies – all of which are infused into content provisioning.

For members of staff working inside a court setting this is a major improvement, as they use external databases every day to gain the insights they need to carry out their duties. And when the internal workplace uses such a set-up, their knowledge sharing can grow – leading to both improved wayfinding and findability.

Yet another interesting case is a service company that operates on a global scale. They are an authoritative resource in their line of business, maintaining a body of rules and regulations that has become a canonical reference. By moving into a new, expanded digital workplace environment (internet, extranet and intranet) and using semantic enhancement and search, they get a linked-data set that can be used by clients, competitors and all others working within their environment. At the same time their members of staff can use the very same vocabularies to semantically enhance their provision of information and data into the different internal information systems.

The last example is an industrial company with a mix of products within their line of business. They have grown through M&A over the years, and ended up in a dead-end mess of information systems that do not interoperate at all. A way to overcome the effect of past mergers and acquisitions was to create an information governance framework. Applying it together with MDM and semantic search, they were able to decouple data and information, as a result making their workplace more resilient in a world of constant flux.

One could potentially apply these pragmatic steps to any line of business, since most themes and topics have already been created and captured by the emerging semantic web and linked-data realm. It is only a matter of time before more organisations jump on this bandwagon to take advantage of changes that can make them a canonical reference and a market leader. Just think of the film industry’s IMDb.

A final thought: Are the vendors ready and open-minded enough to alter their software and online services in order to realise this outlined future enterprise information landscape?

For more information please read these online resources, or go for the executive brief video clip:
Enterprise-Linked-Data
http://testing.rachaelkalicun.info/led_book/led-contents.html

Exec Brief

Europeana brief for memory institutions using linked-open-data:
http://en.wikipedia.org/wiki/File:Linked-open-data-Europeana-video.ogv

Linked-Open-Data network Sweden 2014 presentation:
http://livingarchives.mah.se/2014/03/linked-data-2014/
and Fredric’s talk about semantically enhanced citizen participation and slides.

The future linked-data enterprise, from Intranätverk conference in Göteborg, in May 2014
Fredric Landqvist’s and Kerstin Forsberg’s talk, and slides.

Gamification in Information Retrieval

My last article was mainly about Collaborative Information Seeking (CIS) – one of the trends in enterprise search. Another interesting topic is the use of game mechanics in CIS systems. I first came across this idea at the previously mentioned ESE 2014 conference, but interest is so high that this year a GamifIR workshop (Gamification for Information Retrieval) took place in Amsterdam. The IR community has been debating what kind of benefits IR tasks can gain from game techniques. The workshops covered gamified tasks in the context of searching, natural language processing, analysing user behaviour and collaborating. The last of these was discussed in the article “Enhancing Collaborative Search Systems Engagement Through Gamification” and was mentioned by Martin White in his great presentation about search trends at the last ESE summit.

Gamification is a concept that applies game elements in a non-game environment. Its goal is to improve the motivation of customers or employees to use some service. In the case of information retrieval this means, for example, encouraging people to find information in a more efficient way. It is quite instinctive, because competition is an inherent part of human nature. Business sectors noticed long ago that higher engagement, activating new users, establishing interaction between them and rewarding effort lead to measurable results, even if the quality of the data provided by users could be higher. Typical game elements include leaderboards, levels, badges, achievements, time or resource limitations, challenges and many others. There are even documented design patterns and models connected with gameplay, components, game practices and processes. Such rules are essential, because a virtual badge has no value until users assign value to it.
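
As a toy sketch of how such elements can fit together (the actions, point values and badge thresholds below are invented purely for illustration):

```python
from collections import defaultdict

# Toy gamification layer: points and badges for search-related actions.
# Action names, point values and badge thresholds are invented.
POINTS = {"query_issued": 1, "result_rated": 3, "answer_shared": 5}
BADGES = [(100, "Gold Seeker"), (50, "Silver Seeker"), (10, "Bronze Seeker")]

scores = defaultdict(int)

def record_action(user, action):
    """Award points for an action and return the badge the user now holds, if any."""
    scores[user] += POINTS.get(action, 0)
    for threshold, badge in BADGES:            # highest threshold first
        if scores[user] >= threshold:
            return badge
    return None

def leaderboard(top_n=3):
    """Return the most active seekers, keeping up the spirit of competition."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for _ in range(4):
        record_action("alice", "result_rated")
    record_action("bob", "answer_shared")
    print(record_action("alice", "query_issued"))  # alice reaches 13 points -> 'Bronze Seeker'
    print(leaderboard())
```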

Collaborative Information Seeking is an idea suited to people cooperating on a complex task that leads to finding specific information. Systems like this support teamwork, coordinate actions and improve communication in many different ways, using various mechanisms. At first glance it seems that gamification is perfectly suited to CIS projects: seekers become more social, and a feeling of competence fosters actions which in turn are rewarded.

The most important thing is to know why we need a gamified system and what kind of benefits we will get. The next step is to understand the fundamental elements of a game and find out how to adapt them to the IR case. In their article “Enhancing Collaborative Search Systems Engagement Through Gamification”, researchers from the universities of Granada and Holguín list propositions for how to gamify a CIS system. Based on their suggestions, I think the essential points are to prepare a highly sociable environment for seekers. Every player (seeker) needs their own personal profile, which stores previous achievements and can be customised. Constant feedback on progress, lists of successful members, time limitations and keeping up the spirit of competition with all kinds of widgets are important for motivating players and building loyalty. It is worth remembering that points collected after achieving goals need to be converted into virtual values which can distinguish the most active players. A crucial thing is to construct clear and fair principles, because information seeking with such elements is often fun, and that must not be ruined.

Researchers from Finnish universities, who published the article “Does Gamification Work?”, have broken the problem of gamifying down into components and studied them thoroughly. Their conclusion was that the concept of gamification can work, but there are some weaknesses – the context that is going to be gamified and the quality of the users. Probably the main problem is a lack of knowledge about which elements really provide benefits.

Gamification can be treated as a new way to deal with complex data structures. The limitations of data analysis can be offset by mechanisms that increase user activity in the information retrieval process. Even more, such a concept may lead to higher-quality data, because of increased motivation. I believe Collaborative Information Seeking, gamification and similar ideas are among the solutions for improving the search experience by helping people become better searchers, rather than by just tuning up algorithms.

Reaching Findability #4 – Build for the long term!

Findability is surprisingly complex due to the large number of measures that need to be understood and undertaken. I believe that one of the principal challenges lies within the pedagogical domain. This is my fourth post in a series of simple tips for reaching Findability.

Build for the long term!

A platform for Findability takes time to build. It is partly about technology development but equally about organisational maturity.

Mechanisms for managing both information and search technology need to be established and adopted. The biggest effect, though, is realized when people start thinking about information differently – when they start wanting to share and find information more easily, and desire the possibility to do so.

As with any change project, it helps to not make too many changes at the same time. It is often easier to establish a long-term goal and take small steps along the way. To reach the goal of Findability, the first step is to define a Findability strategy. While information is refined and the technical platform is developed step-by-step, the organization is allowed time to mature.

Choose your technical search platform carefully, and think long term based on the specific requirements of your business. Give priority to supporting the processes and target groups that stand to receive the most tangible benefits from finding information more easily and quickly. One valuable method for defining goals, target group needs and the means by which to fulfill them is Effect Mapping, developed by InUse AB, which can be used early on in the transformation to gain and communicate important insights.

The technical architecture, as with the information structure, must be well thought out when laying the foundation of the platform. Start out with a first application, perhaps an intranet or public web search. The extent and influence of the search platform can then be gradually built out by adding new information sources and components in accordance with the long-term plan. With the right priorities, business value is created every step of the way. New ideas can be tested and problems mitigated before the consequences become difficult to handle.

A good example of an organization with target-group-focused development is the Municipality of Norrköping. You can watch an entertaining presentation of how they do it here!

Results from Enterprise Search and Findability Survey 2013

Although data is growing rapidly, information is still difficult to find in most organizations. That is the most obvious conclusion from our annual global Enterprise Search and Findability Survey. 64 percent of respondents from organizations with more than 1000 employees say it is difficult to find the right information internally – surprising, since 79 percent think it is of high importance. But there is some good news too: 36 percent of respondents have a search strategy in place and 38 percent plan to implement one.

A recent report by the European Union’s Joint Research Centre, “Enterprise Search in the European Union: A Techno-economic Analysis”, found two main reasons for adopting a strategy for enterprise search: the growth in data generation and, more worryingly, the fact that this huge amount of information is largely unstructured. An estimated 80% of the information stored is either unstructured or lacks adequate metadata for the needs of employees.

The survey findings show the need for enterprises to adopt enterprise search solutions to overcome the burden of information overload facing the knowledge workers of today’s organizations. Not finding information, or even worse, finding the wrong information, is still the reality for most organizations. Companies may be sitting on a considerable stock of digital assets but still be unable to capture value from them.

Get budget for search

McKinsey & Company recently wrote an article, “Measuring the full impact of digital capital”, saying that the need for growth and competitiveness will force companies to build strong digital capabilities. For those concerned about how to get funding for search, I want to point out that the benefits from this investment are clear, with improved quality in business decisions, increased efficiency and a more harmonized global offering as direct results. Much happier employees are a positive side effect.

Read more about the survey and download the report for free here!

Why search and Findability are critical for the customer experience and NPS on websites

To achieve a high NPS (Net Promoter Score), the customer experience (CX) is crucial, and a critical factor behind a positive customer experience is the ease of doing business. For companies that interact with their customers through the web (which ought to be almost every company these days), this of course implies a need for good Findability and search on the website, so that visitors can find what they are looking for without effort.

The concept of NPS was created by Fred Reichheld and his colleagues at Bain and Co, who increasingly recognised that measuring customer satisfaction on its own wasn’t enough to draw conclusions about customer loyalty. After some research together with Satmetrix they came up with a single question that they deemed the only relevant one for predicting business success: “How likely are you to recommend company X to a friend or colleague?” Depending on the answer to that single question, using a scale of 0 to 10, the respondent is considered one of the following:


The Net Promoter Score model

The idea is that Promoters—the loyal, enthusiastic customers who love doing business with you—are worth far more to your company than passive customers or detractors. To obtain the actual NPS score the percentage of Detractors is deducted from the percentage of Promoters.
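
As a quick worked example of that calculation, using the standard NPS bands (9–10 promoter, 7–8 passive, 0–6 detractor):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Standard bands: 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
    NPS = % promoters - % detractors, a value between -100 and 100.
    """
    if not ratings:
        raise ValueError("ratings must not be empty")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

if __name__ == "__main__":
    # 5 promoters, 3 passives, 2 detractors out of 10 -> NPS = 50 - 20 = 30
    print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))
```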

How the customer experience drives NPS

Several studies indicate four main drivers behind NPS:

  • Brand relationship
  • Experience of / satisfaction with product offerings (features; relevance; pricing)
  • Ease of doing business (simplicity; efficiency; reliability)
  • Touch point experience (the degree of warmth and understanding conveyed by front-line employees)

According to ‘voice of the customer’ research conducted by the British customer experience consultancy Cape Consulting, the ease of doing business and the touch point experience account for 60% of the Net Promoter Score, with some variation between industry sectors. Both factors are directly correlated with how easily customers can find what they are looking for on the web and how easily front-line employees can find the right information to help and guide the customer.

Successful companies devote much attention to the user experience on their website, but when trying to figure out how most visitors will behave, website owners tend to overlook the search function. Hence visitors who are unfamiliar with the design struggle to find the product or information they are looking for, causing unnecessary frustration, and quite possibly the customer or potential customer runs out of patience with the company.

Ideally, Findability on a company website or ecommerce site is a state where desired content is displayed immediately, without any effort at all. Product recommendations based on the behavior of previous visitors are an example, but they have limitations and require a large set of data to be accurate. When a visitor has a very specific query, a long-tail search, accuracy becomes even more important because there is no such thing as a close-enough answer. Imagine a visitor to a logistics company website looking for information about delivery times from one city to another, an ecommerce site where the visitor has found the right product but wants to know the company’s return policy before making a purchase, or a visitor to a hospital’s website looking for contact details for a specific department. Examples like these are situations where there is only one correct answer, and failure to deliver that answer in a simple and reliable manner will negatively impact the customer experience and probably create a frustrated visitor who might leave the site and look at the competition instead.

Investing in search has a positive impact on NPS and the bottom line

Google has taught people how to search and what to expect from a search function. Step one is to create a user-friendly search function on your website, but then you must actively maintain the master data, business rules, relevance models and zero-result hits to make sure the customer experience stays aligned. Also, take a look at the keywords and phrases your visitors use when searching; this is useful business intelligence about your customers and can also indicate what type of information you should highlight on your website. Achieving good Findability on your website requires more than just the right technology and modern website design. It is an ongoing process that, successfully managed, can have a huge impact on the customer experience and your NPS, which means your investment in search will generate positive results on your bottom line.
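
As a small sketch of that kind of follow-up, assuming a simple (invented) CSV search log with one row per search and columns ‘query’ and ‘hits’:

```python
import csv
from collections import Counter

def analyse_search_log(path):
    """Summarise top queries and zero-result queries from a simple CSV log.

    Assumed, invented format: one row per search with columns 'query' and 'hits'.
    """
    queries = Counter()
    zero_results = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            q = row["query"].strip().lower()
            queries[q] += 1
            if int(row["hits"]) == 0:
                zero_results[q] += 1
    return queries.most_common(10), zero_results.most_common(10)

if __name__ == "__main__":
    top, misses = analyse_search_log("search_log.csv")
    print("Most frequent queries:", top)
    print("Queries with zero results:", misses)
```

The zero-result list points directly at missing content or missing synonyms, and the frequent-query list tells you what to surface more prominently on the site.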

More posts on this topic will follow.

/Olof Belfrage