Tutorial: Optimising Your Content for Findability

This tutorial was given by Kristian Norling on the 6th of November at the J. Boye 2012 conference in Aarhus, Denmark.

Findability and Your Content

As the amount of content continues to increase, new approaches are required to provide good user experiences. Findability has been introduced as a new term among content strategists and information architects and is most easily explained as:

“A state where all information is findable and an approach to reaching that state.”

Search technology is readily used to make information findable, but as many have realized, technology alone is unfortunately not enough. To achieve findability, additional activities are needed across several important dimensions, such as business, users, information and organisation.

Search engine optimisation is one aspect of findability, and many of the principles from SEO work in an intranet or website search context. This is sometimes called Enterprise Search Engine Optimisation (ESEO). Getting findability to work well for your website or intranet is a difficult task that needs continuous work. It requires stamina, persistence, endurance, patience and of course time and money (resources).

Tutorial Topics

In this tutorial you will take a deep dive into the many aspects of findability, with some good practices on how to improve findability:

  • Enterprise Search Engines vs Web Search
  • Governance
  • Organisation
  • User involvement
  • Optimise content for findability
  • Metadata
  • Search Analytics

Brief Outline

We will start with some very brief theory, then use real examples, and also talk about what the organisations that are most satisfied with their findability do.

Experience level

Participants should have some intranet/website experience. A basic understanding of HTML and some previous work with content management will make your tutorial experience even better. It is a bonus if you have done some Search Engine Optimisation (SEO) for public websites.

Presentation: Enterprise Search – Simple, Complex and Powerful

Every second, more and more information is created and stored in various applications: corporate websites, intranets, SharePoint sites, document management systems, social platforms and many more. Inside the firewall, the growth of information is similar to that of the internet. However, even though major players on the web have shown that navigation can’t compete with search, the Enterprise Search and Findability Report shows that most organisations have only a small or even non-existent budget for search.

Web Search and Enterprise Search

Web search engines like Google have made search look easy. For enterprise search, some vendors promise a magic box: buy a search engine, plug it in and wait for the magic to happen! Imagine the disappointment when both search results and performance are poor and users can’t find what they are looking for…

When you start planning your enterprise search project you soon realize the complexity and challenge – how do you meet the expectations created by Google?

The Presentation

This presentation was originally given at the joint NSW KM Forum and IIM September event in Sydney, Australia by Mattias Brunnert. It covers topics such as:

  • Why search is important and how to measure success
  • Why Enterprise Search and Information Management should be friends
  • How to kick off your search program

Video: Introducing Hydra – An Open Source Document Processing Framework

Introducing Hydra – An Open Source Document Processing Framework, presented at Lucene Revolution (video hosted on Vimeo).

Presented by Joel Westberg, Findwise AB
This presentation details the document-processing framework called Hydra that has been developed by Findwise. It is intended as a description of the framework and the problem it aims to solve. We will first discuss the need for scalable document processing, outlining that there is a missing link in the open source tool chain between the source system and the search engine. We will then move on to describe the design goals of Hydra, as well as how it has been implemented to meet the demands for flexibility, robustness and ease of use. The session will end by discussing some of the possibilities that this new pipeline framework can offer, such as seamlessly scaling up the solution during peak loads, metadata enrichment, and proposed integration with Hadoop for MapReduce tasks such as PageRank calculations.
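To make the pipeline idea concrete, here is a minimal sketch of a chain of independent processing stages sitting between a source system and a search engine. This is not Hydra’s actual API; the Document and Stage classes below are invented purely for illustration.

```python
# A minimal sketch of the document-processing-pipeline idea.
# NOT Hydra's actual API; Document and Stage are invented names used
# only to illustrate a chain of independent stages.

class Document:
    """A document flowing through the pipeline as a bag of named fields."""
    def __init__(self, fields=None):
        self.fields = dict(fields or {})

class Stage:
    """One independent processing step; stages can be added or scaled separately."""
    def process(self, doc):
        raise NotImplementedError

class LowercaseTitle(Stage):
    def process(self, doc):
        doc.fields["title"] = doc.fields.get("title", "").lower()
        return doc

class AddLanguageMetadata(Stage):
    """Toy metadata enrichment: tag every document as English."""
    def process(self, doc):
        doc.fields["language"] = "en"
        return doc

def run_pipeline(doc, stages):
    for stage in stages:
        doc = stage.process(doc)
    return doc

if __name__ == "__main__":
    doc = Document({"title": "Introducing Hydra", "body": "A document processing framework."})
    doc = run_pipeline(doc, [LowercaseTitle(), AddLanguageMetadata()])
    print(doc.fields)
```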

Semantic Search Engine – What is the Meaning?

The shortest dictionary definition of semantics is: the study of meaning. A more complex explanation of the term would lead to a relationship that maps words, terms and written expressions to a common-sense understanding of objects and phenomena in the real world. It is worth mentioning that objects, phenomena and the relationships between them are language independent. This means that the same semantic network of concepts can map to multiple languages, which is useful in automatic translation or cross-lingual search.

The approach

In the proposed approach, semantics will be modeled as a defined ontology, making it possible for the web to “understand” and satisfy the requests and intents of the people and machines using web content. The ontology is a model that encapsulates knowledge from a specific domain and consists of a hierarchical structure of classes (a taxonomy) that represents concepts of things, phenomena, activities etc. Each concept has a set of attributes that represent the mapping of that particular concept to the words and phrases that represent it in written language (as shown at the top of the figure below). Moreover, the proposed ontology model will have horizontal relationships between concepts, e.g. linguistic relationships (synonymy, homonymy etc.) or domain-specific relationships (medicine, law, military, biological, chemical etc.). Such a defined ontology model will be called a Semantic Map and will be used in the proposed search engine. An example of part of an enriched ontology of beverages is shown in the figure below. The ontology is enriched so that the concepts can be easily identified in text, using attributes such as the representation of the concept in written text.
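As a rough illustration of what such an enriched ontology could look like in code, here is a small sketch of a toy beverage domain. The class and field names are invented for this example and do not come from any particular ontology tool.

```python
# A toy sketch of an enriched ontology: concepts with a taxonomy (parent),
# written surface forms, and horizontal relationships. All names and data
# here are invented for illustration.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parent: str | None = None                                # taxonomy (hierarchical) relationship
    surface_forms: list[str] = field(default_factory=list)   # words/phrases in written language
    related: dict[str, str] = field(default_factory=dict)    # horizontal relationships

beverages = {
    "beverage": Concept("beverage"),
    "coffee": Concept("coffee", parent="beverage",
                      surface_forms=["coffee", "espresso", "latte"],
                      related={"synonym_of": "java"}),
    "tea": Concept("tea", parent="beverage",
                   surface_forms=["tea", "green tea", "earl grey"]),
}

def concepts_for(phrase: str, ontology: dict) -> list[str]:
    """Map a written phrase back to the concepts it may represent."""
    phrase = phrase.lower()
    return [c.name for c in ontology.values() if phrase in c.surface_forms]

print(concepts_for("espresso", beverages))  # ['coffee']
```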

Semantic Map

The Semantic Map is an ontology that is used for bidirectional mapping between the textual representation of concepts and the space of their meaning and associations. In this manner, it becomes possible to transform user queries into the concepts, ideas and intents behind them, which can then be matched against an indexed set of similar concepts (and their relationships) derived from documents and returned in the form of a result set. Moreover, users will be able to refine and describe their intents using visualized facets of the concept taxonomy, concept attributes and horizontal (domain) relationships. The search module will also be able to discover users’ intents based on the history of queries and other relevant factors, e.g. ontological axioms and restrictions. A potentially interesting approach would be to retrieve additional information about the specific user profile from publicly available information in social portals like Facebook, blog sites etc., as well as from the user’s own bookmarks and similar private resources, enabling deeper intent discovery.

Semantic Search Map

Semantic Search Engine

The search engine will be composed of the following components:

  • Connector – This module will be responsible for acquiring data from external repositories and passing it to the search engine. The purpose of the connector is also to extract text and relevant metadata from files and external systems and pass them on to the further processing components.
  • Parser – This module will be responsible for text processing, including activities like tokenization (breaking text into lexemes – words or phrases), lemmatization (normalization of grammatical forms), exclusion of stop words, and paragraph and sentence boundary detection. The result of the parsing stage is structured text with additional annotations that is passed to the semantic Tagger.
  • Tagger – This module is responsible for adding semantic information to each lexeme extracted from the processed text. Technically, this means attaching the identifiers of relevant concepts stored in the Semantic Map to each lexeme. Moreover, phrases consisting of several words are identified, and disambiguation is performed based on the derived context. Consider the example illustrated in the figure.
  • Indexer – This module is responsible for taking all the processed information, transforming it and storing it in the search index. This module will be enriched with methods for semantic indexing using the ontology (Semantic Map) and language tools.
  • Search index – The central storage of processed documents (document repository), structured to manage the full text of the documents, their metadata and all relevant semantic information (document index). The structure is optimized for search performance and accuracy.
  • Search – This module is responsible for running queries against the search index and retrieving relevant results. The search algorithms will be enriched to use user intents (while complying with data privacy) and the prepared Semantic Map to match semantic information stored in the search index.
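For a feel of how these components could fit together, here is a much-simplified sketch in Python. It is not the actual implementation; the tiny semantic map and the tagging, indexing and search functions are invented for illustration only.

```python
# Much-simplified sketch of the parse -> tag -> index -> search flow.
# Everything here (the semantic map, the documents, the logic) is toy data.
from collections import defaultdict

# Semantic Map: written forms -> concept identifiers (the Tagger's lookup table)
semantic_map = {"espresso": "coffee", "latte": "coffee", "coffee": "coffee",
                "green tea": "tea", "tea": "tea"}

def parse(text):
    """Parser: naive tokenization and normalisation."""
    return text.lower().replace(",", " ").split()

def tag(lexemes):
    """Tagger: attach a concept id to each lexeme found in the Semantic Map."""
    return [semantic_map[lex] for lex in lexemes if lex in semantic_map]

index = defaultdict(set)  # Search index: concept id -> set of document ids

def index_document(doc_id, text):
    """Indexer: store concept annotations for later retrieval."""
    for concept in tag(parse(text)):
        index[concept].add(doc_id)

def search(query):
    """Search: map the query to concepts and return matching documents."""
    hits = set()
    for concept in tag(parse(query)):
        hits |= index[concept]
    return hits

index_document("doc1", "A latte is an espresso drink with milk")
index_document("doc2", "Green tea contains antioxidants")
print(search("coffee"))  # {'doc1'} – matched via the shared concept, not the literal word
```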

What do you think? Please let us know by writing a comment.

Searching for Zebras: Doing More with Less

There is a very controversial and highly cited 2006 British Medical Journal (BMJ) article called “Googling for a diagnosis – use of Google as a diagnostic aid: internet based study” which concludes that, for difficult medical diagnostic cases, it is often useful to use Google Search as a tool for finding a diagnosis. Difficult medical cases are often represented by rare diseases, which are diseases with a very low prevalence.

The authors use 26 diagnostic cases published in the New England Journal of Medicine (NEJM) to compile a short list of symptoms describing each patient case, and use those keywords as queries for Google. The authors, blinded to the correct disease (a rare disease in 85% of the cases), select the most ‘prominent’ diagnosis that fits each case. In 58% of the cases they succeed in finding the correct diagnosis.

Several other articles also point to Google as a tool often used by clinicians when searching for medical diagnoses.

But is that really convenient, is it enough, or can this process easily be improved? Indeed, Google has two major advantages: clinicians’ familiarity with it, and its fresh and extensive index. But how would a vertical search engine with focused and curated content compare to Google when given the task of finding the correct diagnosis for a difficult case?

Well, take an open-source search engine such as Indri, index around 30,000 freely available medical articles describing rare or genetic diseases, use an off-the-shelf retrieval model, and there you have Zebra. In medicine, the term “zebra” is slang for a surprising diagnosis. In comparison with a search on Google, which often returns results that point to unverified content from blogs or content aggregators, the documents in this vertical search engine are crawled from 10 web resources containing only rare and genetic disease articles, mostly maintained by medical professionals or patient organizations.

Evaluated on a set of 56 queries extracted in a similar manner to the one described above, Zebra easily beats Google. Zebra finds the correct diagnosis in the top 20 results in 68% of the cases, while Google succeeds in 32% of them. And this is only the performance of Zebra with the baseline relevance model — imagine how much more could be done (for example, displaying results as a network of diseases, clustering or even ranking by diseases, or automatic extraction and translation of electronic health record data).
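As a reference for how such an evaluation is typically computed, here is a small sketch of a success-at-20 measure in Python. The example data is made up; it is not the actual NEJM case set or the Zebra/Google result lists.

```python
# Toy sketch of "success at 20": for each query, does the correct diagnosis
# appear among the top 20 results? All data below is invented.

def success_at_k(results_per_query, correct_answers, k=20):
    """Fraction of queries whose correct answer appears in the top-k results."""
    hits = 0
    for query, results in results_per_query.items():
        if correct_answers[query] in results[:k]:
            hits += 1
    return hits / len(results_per_query)

correct = {"fever, rash, joint pain": "still disease"}
zebra_results = {"fever, rash, joint pain": ["still disease", "lupus", "lyme disease"]}
print(success_at_k(zebra_results, correct, k=20))  # 1.0 for this toy example
```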

Analytics and Big Data at IBM Information On Demand 2011

The big trend these days is Big Data and how you can analyze large amounts of information in order to gain important insights, and from those insights be able to take the right action. This trend was a hot topic at the IBM Information On Demand (IOD) conference in Las Vegas earlier this year. IBM has a very strong position in this field; it’s hard to have missed how their computer system Watson recently challenged the all-time top players in Jeopardy, and won! Read more about Watson.

Now IBM has taken the technology behind Watson and started to apply it in their different analytics products, where one specific area that is being targeted is healthcare. For this area IBM released a new product during IOD called IBM Content and Predictive Analytics for Healthcare, which can for example be used as a tool for physicians to support them in their diagnosis of patients.

In April this year IBM merged two of their products: their search engine OmniFind and Content Analytics, their product for analyzing large amounts of unstructured information. The new product is called IBM Content Analytics with Enterprise Search, and it too is based on much of the same technology that is used in Watson; more specifically, it utilizes the same Natural Language Processing techniques. This means that it has the ability to understand text on a level just as sophisticated as that of Watson.

Content Analytics with Enterprise Search scales very well to many millions of documents. However, when there is a need to analyze really enormous data sets, on the order of petabytes or even exabytes, IBM has developed what they call their Big Data platform. This platform mainly revolves around two products, InfoSphere Streams and InfoSphere BigInsights, and it builds on a foundation of open source software such as Apache Hadoop and Apache Lucene. InfoSphere Streams is used for real-time analysis of information in motion. This helps you understand what’s happening right at this moment in your organization and supports you in taking appropriate action as things are happening. InfoSphere BigInsights, on the other hand, lets you analyze and draw insights from massive amounts of already existing data.

Studies have shown how organizations that fall short in this area are overtaken by those who understand how to use the power of analytics.

IBM has surely chosen an interesting path when merging Analytics with Findability.

Book Review: Search Analytics for Your Site

Lou Rosenfeld is the founder and publisher of Rosenfeld Media and also the co-author (with Peter Morville) of the best-selling book Information Architecture for the World Wide Web, which is considered one of the best books about information management.

In Lou Rosenfeld’s latest book he lets us know how to successfully work with Site Search Analytics (SSA). With SSA you analyse the saved search logs of what your users are searching for, trying to find emerging patterns. This information can be a great help in figuring out what users want and need from your site. The search terms used on your site will offer more clues to why the user is on your site, compared to search queries from Google (which reveal how users get to your site).
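As a taste of how simple it can be to start digging into a search log, here is a minimal sketch in Python. It assumes a plain CSV log with a "query" column; the file name and column name are invented for the example.

```python
# Minimal site search analytics sketch: count the most frequent queries
# in a saved search log. The CSV layout ("query" column) is an assumption.
import csv
from collections import Counter

def top_queries(log_path, n=10):
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["query"].strip().lower()] += 1
    return counts.most_common(n)

# Example usage (hypothetical file name):
# print(top_queries("search_log.csv"))
```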

So what’s in the book?

Part I – Introducing Site Search Analytics

In part one the reader gets a great example of why to use SSA and an introduction to what SSA is. In the first chapters you follow John Ferrara, who worked at a company called Vanguard, and see how he analysed search logs to prove that a newly bought search engine performed poorly, and then used the same statistics to improve it. This is a great real-world example of how to use SSA for measuring the quality of search AND for setting goals for improvement.

A word cloud is one way to play with the data

Part II – Analysing the data

In this part Lou gets hands-on with user logs and shows you how to analyse the data. He makes it fun and emphasizes the need to play with user data. Without that emphasis on playing, the task of analysing user data might seem daunting. Also, with real-world examples from different companies and institutions it is easy to understand the different methods for analysis. Personally, I feel the use of real data in the book makes the subject easier (and more interesting) to understand.

From which pages do users search?

Part III – Improving your site

In the third part of the book, Rosenfeld shows how to apply the findings from your analysis. If you’ve worked with SSA before, most of it will be familiar (improving best bets, zero hits, query completion and synonyms), but even for experienced professionals there is good information about how to improve everything from site navigation to site content, and even how to connect your SSA to your site KPIs.
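One of those improvements, zero-hit analysis, is easy to sketch: frequent queries that return no results are natural candidates for new synonyms or best bets. The log format below (columns "query" and "hits") is assumed for illustration.

```python
# Sketch: list the most frequent zero-hit queries from a search log.
# The CSV columns "query" and "hits" are an assumed, illustrative format.
import csv
from collections import Counter

def frequent_zero_hit_queries(log_path, n=10):
    zero_hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["hits"]) == 0:
                zero_hits[row["query"].strip().lower()] += 1
    return zero_hits.most_common(n)

# Example usage (hypothetical file name):
# print(frequent_zero_hit_queries("search_log.csv"))
```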

Conclusion

Search Analytics for Your Site shows how easy it is to get started with SSA, but also its depth and usefulness. The book is easy to read and also quite funny. It is quite short, which in this day and age isn’t a negative. For me, this book was a reminder of the importance of search analytics, and I really hope more companies and sites take the lessons in it to heart and focus on search analytics.

Bridging the Gap Between People and (Enterprise Search) Technology

Tony Russell-Rose recently wrote about the changing face of search, a post that summed up the discussion about the future of enterprise search that took place at the recent Search Solutions conference. This is indeed an interesting topic. My colleague Ludvig also touched on it in his recent post, where he expressed his disappointment in the lack of visionary presentations at this year’s KMWorld conference.

At our last monthly staff meeting we had a visit from Dick Stenmark, associate professor of Informatics at the Department of Applied IT at Gothenburg University. He spoke about his view of the intranets of the future. One of the things he talked about was the big gap between the user’s vague representation of her information need (e.g. the search query) and the representation of the documents indexed by the intranet enterprise search engine. If a user has a hard time defining what she is looking for, it will of course be very hard for the search engine to interpret the query and deliver relevant results. What is needed, according to Dick Stenmark, is a way to bridge the gap between the technology (the search engine) and the people (the users of the search engine).

As I see it there are two ways you can bridge this gap:

  1. Help users become better searchers
  2. Customize search solutions to fit the needs of different user groups

Helping users become better searchers

I have mentioned this topic in one of my earlier posts. Users are not good at describing what information they are seeking, so it is important that we make sure the search solutions help them do so. Existing functionalities, such as query completion and related searches, can already help users create and use better queries.

Query completion often includes common search terms, but what if we combined them with the search terms we would have wanted users to search for? This requires that you learn something about your users and their information needs. If you take the time to learn about this, it is possible to create suggestions that help the user not only spell correctly, but also create a more specific query. Some search solutions (such as homedepot.com) also use a sort of query disambiguation, where the user’s search returns not only results but also a list of matching categories, and the user is asked to choose which category of products her search term belongs to. This helps the search engine not only return the correct set of results, but also display the most relevant set of facets for that product category. Likewise, Google displays a list of related searches at the bottom of the search results list.
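To make the first idea concrete, here is a tiny sketch of query completion that mixes frequent user queries with the queries we would prefer users to run. Both lists are invented sample data, not taken from any real search log.

```python
# Toy query completion: show curated suggestions before raw popular queries.
# Both lists are made-up sample data.
popular_queries = ["vacation", "vacation policy", "vat number", "vacancy list"]
preferred_queries = ["vacation request form", "vacation policy 2012"]

def suggest(prefix, limit=5):
    """Return completions, listing curated suggestions before popular queries."""
    prefix = prefix.lower()
    curated = [q for q in preferred_queries if q.startswith(prefix)]
    popular = [q for q in popular_queries if q.startswith(prefix) and q not in curated]
    return (curated + popular)[:limit]

print(suggest("vac"))
# ['vacation request form', 'vacation policy 2012', 'vacation', 'vacation policy', 'vacancy list']
```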

These are some examples of functionalities that can help users become better searchers. If you want to learn more, have a look at Dan Russell’s presentation linked from my previous post.

Customize search solutions to fit the needs of different user groups

One of the things Dick Stenmark talked about in his presentation for us at Findwise was how different users’ behaviour is when it comes to searching for information. Users have different information needs as well as different ways of searching for information. However, when it comes to designing the experience of finding information, most companies still try to achieve a one-size-fits-all solution. A public website can maybe get by with supporting 90% of its visitors, but an intranet that only supports part of the employees is a failure. Still, very few companies work on personalizing their search applications for their different user groups. (Some don’t even seem to care that they have different user groups, and therefore treat all their users as one and the same.) The search engine needs to know and care more about its users in order to deliver better results and a better search experience as a whole. For search to be really useful, personalization in some form is a must, and I think and hope we will see more of this in the future.