Beyond Office 365 – knowledge graphs, Microsoft Graph & AI!

This is the first joint post in a series where Findwise & SearchExplained together decompose Microsoft's realm, with a focus on knowledge graphs and AI. The advent of graph technologies, and more specifically knowledge graphs, has placed them at the epicentre of the AI hype.


The use of a symbolic representation of the world, as with ontologies (domain models), within AI is by no means new. The Cyc project, for instance, started back in the 80s. The most familiar example for the average Joe is the Google Knowledge Graph, which links things and concepts. In the world of Microsoft, this has become a foundational platform capacity with the Microsoft Graph.

It is key to separate the wheat from the chaff, since the Microsoft Graph is by no means a knowledge graph. It is a highly platform-centric way to connect things, applications, users, information and data. That is useful, but it still lacks the capacity to disambiguate complex things of the world, since building a knowledge graph (i.e. an ontology) is not its core function.
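To make the platform-centric nature concrete, here is a minimal sketch (not from the original post) of calling the Microsoft Graph REST API with HttpClient. Token acquisition is assumed to happen elsewhere (e.g. via MSAL), and the delegated permissions User.Read and People.Read are assumptions about a typical setup:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class GraphSketch
{
    //Fetch the signed-in user and the people they work with.
    //Assumes an OAuth access token with suitable delegated permissions has been obtained elsewhere.
    public static async Task<string> GetMyWorldAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            //Who am I, and who do I collaborate with? Platform-centric signals, not an ontology.
            string me = await client.GetStringAsync("https://graph.microsoft.com/v1.0/me");
            string people = await client.GetStringAsync("https://graph.microsoft.com/v1.0/me/people");
            return me + Environment.NewLine + people;
        }
    }
}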

From a Microsoft-centric worldview, one should combine the Microsoft Graph with different applications and AI to automate and augment working life with Microsoft at Work. The reality is that most enterprises do not rely on Microsoft alone to envelop the enterprise information landscape. The information environment goes far beyond it, into a multitude of organising systems within and outside the company walls.

Question: How does one connect the dots in this maze-like workplace? By using knowledge graphs and infusing them into the Microsoft Graph realm?

Office 365 MDM

The model, artefacts and pragmatics

People at work continuously have to balance between modalities (provision/find/act), independent of work practice or discipline, when dealing with data and information. People also have to interact with groups and imagined entities (i.e. organisations, corporations and institutions). These interactions become the mould from which shared narratives emerge.

Knowledge graphs (ontologies) are the pillar artefacts where users find a level playing field for communication and the codification of knowledge in organising systems. When linking knowledge graphs with a smart semantic information engine utility, we get enterprise linked data that connects the dots – a sustainable, resilient model in the content continuum.

Microsoft at Work – the platform, as with Office 365 – has some key building blocks: the content model that cuts across applications and services. The Meccano pieces, like collections [libraries/sites] and resources [documents, pages, feeds, lists], should be configured with sound resource descriptions (metadata) and organising principles. One of the back-end services to deal with this is the Managed Metadata Service and its cumbersome Term Store (it is not a taxonomy management system!). The pragmatic approach is to infuse/integrate the smart semantic information engine (knowledge graphs) with these foundation blocks. One outstanding question is why Microsoft has left these services largely unchanged, with few improvements, for so many years.

The unabridged pathway and lifecycle of content provision, such as the creation of sites curating documents, will, in the best of worlds, be a guided (automated and augmented [AI & semantics]) route. The Microsoft Graph, with its set of APIs and connectors, pushes the envelope with people at the centre. As mentioned, it is a platform-centric graph service, but it lacks a connection to shared narratives (as with knowledge graphs). It offers fuzzy logic, where end-user profiles and behaviour patterns connect content and people, but no, or very limited, opportunity to fine-tune or align these patterns to the models (concepts and facts).

Akin to the provision-modality pragmatics above is the find (search, navigate and link) domain in Office 365. Microsoft's search roadmap, like a yellow brick road, envisions a cohesive experience across all applications. In reality, it is still siloed search 😉 The Microsoft Graph goes hand in hand with personalised search, but it is still constrained in its means to deliver a targeted search experience (a search-driven application) in modern search – problematic, to say the least. Neither the back-end processing steps nor the user experience lean upon the models to deliver, for instance, semantic search that connects the dots. Relying only on end-user behaviour patterns and end-user tags (/system/keyword) surfaces a disjointed experience with low precision and recall.

The smart semantic information engine will usually be a mix of services or platforms that work in tandem, for example:

  1. Semantic tools (PoolParty, Semaphore)
  2. Search and analytics (i3, Elastic Stack)
  3. Data integration (MarkLogic, BizTalk)
  4. AI modules (the MS Cognitive stack)

In the forthcoming post on the theme Beyond Office 365 – unpacking the promised land with knowledge graphs and AI – there will be some more technical assertions.
Fredric Landqvist research blog
Agnes Molnar SearchExplained


Reflection, part 2

Some time ago I wrote about the Reflection mechanism in the .NET Framework.

This time I will show you a use case, where it’s better NOT TO USE the Reflection.

Introduction

In the previous post about Reflection I mentioned some doubts about using this mechanism, and one of them actually has its justification.

So, when it’s better not to use Reflection and why?

Consider a method that accepts an object, where we want to access some property on that object inside the method.

private void MyUniversalMethod(object obj)
{
    if (obj.GetType().GetProperty("MyPropertyName") is System.Reflection.PropertyInfo myProperty) //Check if our object actually has the property we're interested in.
    {
        var myPropertyValue = myProperty.GetValue(obj); //Get the property value on our object.
        myProperty.SetValue(obj, new object()); //Set the property value on our object.
    }
} 

Although, technically, there’s nothing wrong with this approach, it should be avoided in most cases, because it totally breaks the concept of strong typing.

How do we do it properly?

If we are in control of the classes we are using, we should always extract the property we want to access in such a method into an interface.

interface IMyInterface
{
    object MyProperty { get; set; }
} 

So our method looks a lot simpler and, most importantly, the compiler upholds code integrity, so we don't have to worry about the property not existing or being inaccessible on our object, because the interface guarantees its accessibility for us:

private void MyUniversalMethod(IMyInterface obj)
{
    var myPropertyValue = obj.MyProperty; //Get the property value on our object.
    obj.MyProperty = new object(); //Set the property value on our object.
} 

But, what if we have no control over the classes?

There are scenarios where we have to use someone else's code and adapt our code to the already existing one. And, what's worse, the property we are interested in is not defined in any interface, but there are several classes that may contain such a property.

Even then, it is still recommended not to use Reflection.

Instead, we should filter the objects that come into our method to the specific types that actually contain the property we are interested in.

private void MyUniversalMethod(object obj)
{
    if (obj is TheirClass theirClass) //Check if our object is of type that has the property we're interested in. If so, assign it to a temporary variable.
    {
        var theirPropertyValue = theirClass.TheirProperty; //Get the property value on our object.
        theirClass.TheirProperty = new object(); //Set the property value on our object.
    }
} 

There’s an inconvenience in the example above that we have to specify all the types that might contain the property we are interested in and handle them separately but this protects us from cases where in different classes a property of the same name is of a different type. Here we have the full control over what’s happening with strongly typed property.

Then, what is the Reflection good for?

Although I said it is not recommended in most cases, there are cases where the Reflection approach is the preferred way.

Consider a list containing names of objects represented by some classes.

We create a method that will retrieve the name for us to display:

private string GetName(object obj)
{
    var type = obj.GetType();
    return (type.GetProperty("Name") as System.Reflection.PropertyInfo)? //Try to get the "Name" property.
        .GetValue(obj)? //Try to get the "Name" property value from the object.
        .ToString() //Get the string representation of the value, if it's string it just returns its value.
        ?? type.Name; //If the tries above fail, get the type name.
}

The property “Name” is commonly used by many classes, though it's very rarely defined in an interface. We can also be almost certain that it will be a string. We can then just look for this property via Reflection and, if we don't find it, use the type name instead. This approach is commonly used in the collection editors of the Windows Forms PropertyGrid.

Use of dynamic keyword

Once we are certain we don't want to rely on strong typing, we can access the properties at run time in an even simpler way, by using the dynamic keyword, which introduces the flexibility of duck typing.

private void MyUniversalMethod(dynamic obj)
{
    var theirPropertyValue = obj.TheirProperty; //Get the property value on our object.
    obj.TheirProperty = new object(); //Set the property value on our object.
} 

This is very useful in cases where we don’t know the type of the object passed to the method at the design time. It is also required by some interop interfaces.

But be careful what you pass to this method, because if you try to access a member that doesn't exist or is inaccessible, you will get a RuntimeBinderException at run time.

Note that all members you access on a dynamic object are also dynamic, and IntelliSense is disabled for them – you're on your own.
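A minimal sketch of guarding such a call (the wrapper name is mine; RuntimeBinderException lives in the Microsoft.CSharp.RuntimeBinder namespace):

//Requires: using Microsoft.CSharp.RuntimeBinder;
private void CallSafely(dynamic obj)
{
    try
    {
        var value = obj.TheirProperty; //Bound at run time, not checked by the compiler.
        Console.WriteLine(value);
    }
    catch (RuntimeBinderException ex)
    {
        //The object has no accessible member called TheirProperty.
        Console.WriteLine($"Binding failed: {ex.Message}");
    }
}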

Reflection… is like violence

“If it doesn’t solve your problems, you are probably not using enough of it”

Introduction

What is Reflection?

You can think of your class as your body. The code contained in it doesn't “know” anything about its contents (unless they are statically referenced, but I wouldn't consider that “knowing”), just like you don't know what is inside your body.

Reflection is then like an encyclopedia of the human body that contains all the information about it. You can use it to discover everything about all your body parts.

So, what is that Reflection in computer programming?

Reflection is a mechanism that allows you to discover what your class is made of.

You can enumerate at run time through properties, methods, events and fields, whether they are static (Shared in Visual Basic) or instance, public or non-public.

But it does not limit you to discovering information about your own classes only; you can actually get information about all the classes currently loaded into your application domain.

This means Reflection gives you access to places you can't reach in a “normal” way.

The purpose of using reflection

Some say that there’s no point in getting access to class members through Reflection because:

  • Public members are accessible in the “normal” way, so using Reflection just breaks the strong typing – the compiler will no longer tell you if you are doing something wrong;
  • Accessing non-public members is just hacking.

They may be right, in both cases, but

  • Sometimes it’s way more convenient to let the program discover members in runtime so we don’t need to explicitly access them one by one, it’s more refactoring-proof, sometimes it’s the only way to implement a well-known design pattern;
  • And sometimes, hacking is the only reasonable way of achieving a goal.

Still in doubt? Let's take a look at some examples below that describe when using Reflection pays off and when it's more of a bad practice…

Use-case 1: Enumerate your own members

One of the most popular use cases for Reflection is when you have to serialize your class and send the prepared data over the network, e.g. in JSON format. A serializer (a class containing one essential method responsible for turning your class object into data ready to send, and probably a second method to restore that data back into a class object) does nothing more than enumerate all the properties of the class, read their values from the object and format them into the desired output (such as JSON).

Static typing approach – specify properties explicitly when assembling the output string

Now imagine you have to serialize a class without using Reflection. Let's take a look at the example below.

Suppose we have a model class

public class SampleClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

And we want to serialize an instance of this class to a comma separated string.

Sounds simple, so let’s do it. We create a serializer class:

public static class SampleClassSerializer
{
    public static string Serialize(SampleClass instance)
    {
        return $"{instance.Id},{instance.Name}";
    }
} 

Looks simple and straightforward.

But what if we need to add a property to our class? What if we have to serialize another class? All these operations require changing the serializer class, which means additional frustrating work each time something changes in the model class.

Reflection approach – enumerate properties automatically when assembling the output string

How to do it properly, then?

We create a seemingly more complicated serializer method:

 

//Make the method generic so object of any class can be passed.
//Optionally constrain it so only object of a class specified in a constraint,
//a class derived from a class specified in a constraint
//or a class or an interface implementing an interface specified in a constraint can be passed.
public static string Serialize<T>(T obj) where T : SampleClass
{
    //Get System.Type that the passed object is of.
    //An object of System.Type contains all the information provided by the Reflection mechanism we need to serialize an object.
    System.Type type = obj.GetType();

    //Get an IEnumerable of System.Reflection.PropertyInfo containing a collection of information about every discovered property.
    IEnumerable<System.Reflection.PropertyInfo> properties = type.GetProperties()
    //Filter properties to ones that are not decorated with Browsable(false) attribute
    //in case we want to prevent them from being discovered by the serializer.
        .Where(property => property.GetCustomAttributes(false).OfType<BrowsableAttribute>().FirstOrDefault()?.Browsable ?? true);

    return string.Join(",", //Format a comma separated string.
        properties.Select(property => //Enumerate through discovered properties.
        {
            //Get the value of the passed object's property.
            //At this point we don't know the type of the property but we don't care about it,
            //because all we want to do is to get its value and format it to string.
            object value = property.GetValue(obj);

            //Return value formatted to string. If a custom formatting is required,
            //in this simple example the type of the property should override the Object.ToString() method.
            return value.ToString();
        }));
} 
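A quick usage sketch, assuming the generic method above lives in (or replaces the original method of) SampleClassSerializer:

var data = new SampleClass { Id = 1, Name = "Alice" };
string csv = SampleClassSerializer.Serialize(data); //Typically "1,Alice" – note that Reflection does not guarantee property order.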

The example above shows that adding a few simple instructions to the serializer means we can virtually forget about it. Now we can add, delete or change any of the properties, or create new classes, and the Reflection used in the serializer will still discover all the properties we need.

With this approach all the changes we make to the class are automatically reflected by the serializer and the way of changing its behavior for certain properties is to decorate them with appropriate attributes.

For example, we add a property to our model class we don’t want to be serialized. We can use the BrowsableAttribute to achieve that. Now our class will look like this:

public class SampleClass
{
    public int Id { get; set; }

    public string Name { get; set; }

    [Browsable(false)]
    public object Tag { get; set; }
} 

Note that after the GetProperties call in the serializer we use the Where method to filter out properties that have the Browsable attribute specified and set to false. This prevents the newly added Tag property (decorated with [Browsable(false)]) from being serialized.

Use-case 2: Enumerate class’ methods

This is an academic example. Suppose we are taking a test at a university where our objective is to create a simple calculator that processes two decimal numbers. We’ve been told that to pass the test, the calculator must perform addition and subtraction operations.

So we create the following Form:

There are two rows; each one uses a different approach to the problem. The upper row uses the common “academic” approach, which involves a switch statement. The lower row uses Reflection.

The upper row consists of the following controls (from left to right): numericUpDown1, comboBox1, numericUpDown2, button1, label1

The lower row consists of the following controls (from left to right): numericUpDown3, comboBox2, numericUpDown4, button2, label2

In a real case we would create only one row.

Static typing approach – use the switch statement to decide which method to use based on user input

Next, to do it properly, we create a class containing some static arithmetic methods that will do the math for us:

public class Calc
{
    public static decimal Add(decimal x, decimal y)
    {
        return x + y;
    }

    public static decimal Subtract(decimal x, decimal y)
    {
        return x - y;
    }
} 

Then we leverage our newly created class. Since it's an academic test, we don't want to waste precious time thinking about how to do it properly, so we choose the “switch” approach:

private void button1_Click(object sender, EventArgs e)
{
    switch (comboBox1.SelectedItem)
    {
        case "+":
            label1.Text = Calc.Add(numericUpDown1.Value, numericUpDown2.Value).ToString();
            break;

        case "-":
            label1.Text = Calc.Subtract(numericUpDown1.Value, numericUpDown2.Value).ToString();
            break;

        default:
            break;
    }
} 

Plus the WinForms Designer-generated piece of code that populates the operation combobox:

this.comboBox1.Items.AddRange(new object[] 
{
    "+",
    "-"
}); 

The code is pretty simple and self-explanatory.

But then the lecturer says that if we want an A grade, the calculator must perform multiplication and division operations as well; otherwise, we will get a B grade at most.

An additional hazard (by hazard I mean a potential threat that may, in some circumstances, lead to an unhandled exception or other unexpected behavior) in this approach is that we are using “magic constants” – the “+” and “-” strings. Note that for each arithmetic operation one instance of the string is generated by the designer and another instance is used in the corresponding case statement.

If for any reason we decide to change one instance, the other will remain unchanged; there's no chance that IntelliSense will catch such a misspelling, and for the compiler everything is fine.

But, apart from that – let's hope the lecturer will like the “+” and “-” strings – we need to implement two additional arithmetic operations in order to get our scholarship.

So, we implement two additional methods in our little arithmetic class:

public static decimal Multiply(decimal x, decimal y)
{
    return x * y;
}

public static decimal Divide(decimal x, decimal y)
{
    return x / y; //Since we are using decimals instead of doubles, this can produce divide by zero exception, but error handling is out of scope of this test.
} 

But obviously that’s not enough. We have to add two another cases in our switch statement:

case "×":
    label1.Text = Calc.Multiply(numericUpDown1.Value, numericUpDown2.Value).ToString();
    break;

case "÷":
    label1.Text = Calc.Divide(numericUpDown1.Value, numericUpDown2.Value).ToString();
    break; 

and two more symbols to the combobox Items collection, so the WinForms Designer-generated piece of code now looks like this:

this.comboBox1.Items.AddRange(new object[] 
{
    "+",
    "-",
    "×",
    "÷"
}); 

We must also remember to change the method names used if we copy/paste an existing piece of code – it can be a real hell if we have to add multiple methods or if there are multiple usages.

Reflection approach – enumerate class methods and let user choose one of them

So, how can we do it better?

Instead of using the evil switch statement, we use Reflection.

First, we discover our arithmetic methods and populate the combobox with them – literally, the combobox items are the System.Reflection.MethodInfo objects, which can be invoked directly. This can be done in the form constructor or in the form's Load event handler:

private void Form1_Load(object sender, EventArgs e)
{
    //Populate the combobox.
    comboBox2.DataSource = typeof(Calc).GetMethods(BindingFlags.Static | BindingFlags.Public).ToList();

    //Set some friendly display name for combobox items.
    comboBox2.DisplayMember = nameof(MethodInfo.Name);
} 

Then, in the “=” button's Click event handler, we just have to retrieve the System.Reflection.MethodInfo from the selected combobox item and call its Invoke method, which will directly invoke the desired method:

private void button2_Click(object sender, EventArgs e)
{
    label2.Text =                              //Set the result directly in the label's Text property.
        ((MethodInfo)comboBox2.SelectedValue)? //Direct cast the selected value to System.Reflection.MethodInfo,
                                               //we have the control over what is in the combobox, so we are not expecting any surprises.
        .Invoke(                               //Invoke the method.
            null,                              //Null for static method invoke.
            new object[] { numericUpDown3.Value, numericUpDown4.Value }) //Pass parameters in almost the same way as in normal call,
                                                                         //except that we must pack them into the object[] array.
        .ToString();
} 

That’s all. We reduced whole switch, designer populated list with symbols and magic strings to 3 simple lines.

Not only is the code smaller, its refactoring tolerance is greatly improved. Now, when the lecturer says we need to implement two more arithmetic operations, we simply add the static methods to our arithmetic class (as shown before) and we're done.

However, this approach has a hazard. The usage in the button2_Click event handler assumes that all arithmetic methods in the Calc class accept exactly two parameters of a type that is implicitly convertible from decimal.

In other words, if someone adds a method that processes three numbers, the operation represented by this method will show up in the combobox, but an attempt to execute it will fail at run time (with a TargetParameterCountException).

If we tried to use it in the previous approach, we would get an error as well. The difference is that it would be a compile-time error saying that we are trying to use a method that accepts three parameters while providing only two arguments.

That’s one advantage of static typing, the compiler upholds the code integrity and the IntelliSense will inform us about any misusages long before we even hit the compile button. When we are using dynamic invoke, we must predict any exceptions and handle them ourselves.

Conclusion

The examples above describe how the Reflection mechanism can improve code maintainability and robustness. They also show that using Reflection, handy as it is, puts more responsibility on the developer, as compatibility checking moves from design time to run time and the compiler will no longer lend a hand when invoking dynamically typed members.

In this article we learned how we can use Reflection to retrieve data from properties and how to invoke methods that are not explicitly specified at design time – they can change over time, but the code that uses them will remain unchanged.

In the next article I'll show you more advanced uses of Reflection, such as storing data, accessing private members and instantiating classes, with some interesting examples like building an object from a database or implementing a plugin system. You will also eventually find out why Reflection is like violence.

Digital recycling & knowledge growth

How do we prevent the digital debris of human clutter and mess? And to what extent will future digital platforms guide us in knowledge creation and use?

Start making sense, and the art of making sense!

People and the Post, Postal History from the Smithsonian's National Postal Museum

Mankind's preoccupation for much of this century has been to become fully digitalized. Utilities, software, services and platforms are all becoming an 'intertwingled' reality for all of us. Mobility, the blurring of the border between the workplace and recreational life, and the ease of digital creation are producing information overload and (out-of-sight) digital landfills. While digital content is cheaper to create and store, its volume and its uncared-for status make it harder for everyone else to find and consume the bits they really need (with some provenance for peace of mind).

Fear not. A collection of emerging digital technologies exists that can both support and maintain future sustainable digital recycling – things like cognitive computing, artificial intelligence, natural language processing and machine learning, semantics adding meaning to shared concepts, and graphs linking our content and information resources. With good information management practice and the appropriate supporting tools to tinker with, there is a great opportunity not only to automate knowledge digitization but to augment it.

Automation

In the content continuum (from its creation to its disposal) there is a great need to automate processes as much as possible in order to reduce the amount of obsolete or hidden (currently value-less) digital content. Digital knowledge recycling is difficult, as nearly every document or content creator is, by nature, reluctant to add further digital tags (a.k.a. metadata) describing their content or documents once they have been created. What's more, experience shows this is inefficient on a number of counts, one of which is inconsistency.

Most digital documents (and most digital content, unless intended to sell something publicly) therefore lack the proper recycling resource descriptors that can help with e.g. classification, topic description or annotation with domain specific (shared, consistent) concepts. Such descriptions add appropriate meaning or context to content, aiding its further digital reuse (consumption). Without them, the problem of findability is likely to remain omnipresent across many intranets and searched resources.

Smartphones generate content automatically, often without the user thinking or realizing. All kinds of resource descriptors (time, place etc.) are created automatically through movement and mobile usage. With the addition of further machine learning and algorithms, online services such as Google Photos use these descriptors (and some automatic annotation of their own) to add more contextual data before classifying pictures into collections. This improved data quality (read: metadata addition and improved findability) allows us to find the pictures or timeline we want more easily.

In the very same manner, workplace content or documents can now have this same type of supporting technical platform that automatically adds additional business specific context and meaning. This could include data from users: their profiles, departments or their system user behaviour patterns.

For real organizational agility, though, a further layer of automatic annotation (tagging) and classification is needed – achieved using shared models of the business. These models can be expressed through a combination of various controlled vocabularies (taxonomies) that can be further joined through relationships (ontologies) and finally published (publicly or privately) as domain models in linked data (graphs). Within this layer exist not just synonyms but alternative and preferred labels, and, more importantly, relationships can be expressed between concepts – hence the graph: concepts being the dots (nodes) and relationships the joining lines (edges). Using certain tools, individual relationships between concepts can also be given a weighting.
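As a rough illustration (the class shape, labels and weights below are invented – a sketch rather than any particular product's model), such a layer can be thought of as labelled concepts joined by typed, weighted relations:

//Requires: using System.Collections.Generic;
//A deliberately tiny sketch of a concept graph: SKOS-like labels on the nodes,
//typed and weighted relationships on the edges.
public class Concept
{
    public string PrefLabel { get; set; }                            //Preferred label.
    public List<string> AltLabels { get; } = new List<string>();     //Synonyms / alternative labels.
    public List<Relation> Relations { get; } = new List<Relation>();
}

public class Relation
{
    public string Type { get; set; }   //e.g. "broader", "narrower", "related".
    public Concept Target { get; set; }
    public double Weight { get; set; } //Optional strength of the relationship.
}

public static class ConceptGraphDemo
{
    public static Concept Build()
    {
        var fruit = new Concept { PrefLabel = "Fruit" };
        var apple = new Concept { PrefLabel = "Apple" };
        apple.AltLabels.Add("Malus domestica");
        apple.Relations.Add(new Relation { Type = "broader", Target = fruit, Weight = 0.9 });
        return apple;
    }
}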

This added layer generates a higher quality of automated context, meaning and consistency for the annotation (tagging) of content and documents alike. The very same layer feeds information architecture in the navigation of resources (e.g. websites). In Search, it helps to disambiguate between queries (e.g. apple the fruit, or apple the organization?).

This digital helper application layer works very much in the same smooth manner as e.g. Google Photos, i.e. in the background, without troubling the user.

This automation, however, will not work without sustainable organizing principles applied in information management practices and tools. We still need a bit of human touch! (Just as Google Photos added theirs behind the scenes earlier, as a work in progress.)

Augmentation

This codification or digitalization of knowledge allows content to be annotated, classified and navigated more efficiently. We are all becoming more aware of the Google Knowledge Graph and the Microsoft Graph, which can connect content and people. Connecting the dots in a graph is analogous to linking digital concepts through their known relationships and values.

Augmentation can take shape in a number of forms. A user searching for a particular query can be presented not only with the most appropriate search results (via the sense-making connections and relationships) but also with related ideas they had not thought of or were unaware of – new knowledge and serendipity!

Search, semantic, and cognitive platforms have now reached a much more useful level than in earlier days of AI. Through further techniques new knowledge can also be discovered by inference, using the known relationships within the graph to fill in missing knowledge.

Key to all of this, though, is building a supporting back-end platform for continuous improvement in the content continuum – technically, something that is easier to start than one might first suspect.

Sustainable Organising Principles to the Digital Workplace

 


Fredric Landqvist research blog
Peter Voisey

Digital wizardry for customers & employees – the next elements

A reflection on Mobile World Congress topics mobility, digitalisation, IoT, the Fourth Industrial Revolution and sustainability

Commerce has always had a conversational nature; today it is digital. Organisations are asking how to organise clean, effective data for an open digital conversation.

Digitalization’s aim is to answer customer/consumer-centric demands effectively (with relevant and related data) and in an efficient manner. [for the remainder of the article read consumer and customer interchangeably]

This essentially means joining the dots between clean data and information, and being able to recognise the main and most consumer-valuable use cases, be it common transaction behaviour or negating the most painful user experiences.

This includes treading the fine line between offering “intelligent” information (intelligent in terms of relevance and context) to the consumer effectively and not seeming freaky or stalker-like. The latter is dealt with by forming a digital conversation in which the consumer understands that their information is only being used for their own needs or wants.

While clean, related data from the many multi-channel customer touch-points forms the basis of an agile digital organisation, it is the combination of data-analysis insight into user demand and behaviour (clicks, log analysis etc.), machine learning and sensible prediction that forms the basis of artificial intelligence. Artificial intelligence, broken down, is essentially action resulting from inferences drawn from known information – the elementary Dr Watson, but done by computers.

This new digital data basis means being able to take data from what were previously data silos and combine it effectively, in a meaningful way, for a valuable purpose. While the Big Data tag grows weary in a generalised context, the key is picking the data/information that gives relevant answers to the most valuable questions – or, in consumer speak, gets a question answered or a job done effectively.

Digitalisation (and the artificial intelligence that follows) obviously relies on computer automation, but it still requires some thoughtful human input. Important steps in the move towards digitalization include:

  • Content and data inventory, to clean data and information (the cleansing of data);
  • Its architecture (information modelling, content analysis, automatic classification and annotation/tagging);
  • Data analysis in combination with text analysis (or NLP, natural language processing, for the more abundant unstructured data and content), the latter putting flesh on the bones, as it were, by adding meaning and context;
  • Information governance: the process of being responsible for the collection, proper storage and use of important digital information (now made less ignorable by new citizen-centric data laws (GDPR) and the need for data agility or anonymization of data);
  • Data/system interoperability: which data formats, structures and standards are most appropriate for you? Which stores suit your data collections best (relational databases, linked/graph data, data lakes etc.)?
  • Language/cultural interoperability: letting people with different perspectives access the same information topics using their own terminology;
  • Interoperability for the future, which also means being able to link everything in your business ecosystem for collaboration, in- and outbound conversations, endless innovation and sustainability;
  • IoT, or the Internet of Things, making the physical world digital and adding further to the interlinked network, soon to be superseded by the AoT (the analysis of things);
  • Newer steps of machine learning (learning about consumer preferences and behaviour etc.) and artificial intelligence (being able to provide seemingly impossible relevant information, clever decision-making and/or a seamless user experience).

The fusion of technologies continues as the lines between the physical, digital and biological spheres blur, with developments in the immersive Internet such as Augmented Reality (AR) and Virtual Reality (VR).

The next elements are here already: semantic (‘intelligent’) search, virtual assistants, robots, chat bots… with 5G around the corner to move more data, faster.

Progress within mobility paves the way for a more sustainable world for all of us (UN Sustainable Development), with a future based on participation. In emerging markets we are seeing giant leaps in societal change. Rural areas now have access to the vast human resources of knowledge to service innovation, e.g. through free access to Wikipedia on cheap mobile devices and open campuses. Gender equality advances with changed monetary and mobile financial practices, and blockchain means rising to the challenge of interoperability. We have to address the open paradigm (e.g. open data) and the participation economy, building the next elements: shared experience and an information commons. This also falls back on the intertwingled digital workplace, and on practices for moving into new cloud-based arenas.

Some last remarks on the telecom industry: it is loaded with acronyms and, for laymen in the area, is sometimes a maze to navigate and make sense of.

So are these steps straightforward, or is the reality still a potential headache for your organisation? 

Contact Findwise now to ease the process, before your competitor does 😉
Fredric Landqvist research blog
Peter Voisey

Sensemaking or Digital Despair

Finding our way in the bright, futuristic, data-driven and intertwined world often taxes us and our digital-hungry senses. Fast rewind to the recent FindabilityDay 2015 and the parade of brilliant speaker talents on stage, starting off with our dear friend and peer, Martin White, on the topic of the future of search.

Human factors run from idea inception to the design and practical UX of our digital artifacts. The key has been make-do and ship. This is the reason the more technically advanced mobiles fell by the wayside eight years ago with the arrival of Apple's iPhone.

The social life of information shapes our daily lives in a hyper-connected world. It is still very hard to find that information needle in the haystack, and most days we feel despair when losing the scent of information nuggets. The results from the Findability Survey spoke clearly: without sound organising principles for information and data, and a pliable recorded vision, we won't find anything of value.

Next, Luna's and Sara's presentation moved into an old business model – a great example where we see that the orchestration and choreography of their data assets will determine their survival or demise, in conjunction with infused information management practices, processes and tools. They showed a new set of facets for delivering on their mission in their line of business.

Regardless of the line of business, it becomes clear that our fragmented workplace setting is now only partly “on tap”. It makes our daily lives a mess, since things do not interoperate. The vision should show the way to a shared information commons, which we all cultivate.

So finally: how do we make sense of any mess?

Answer: architect a place where you can find comfort in shared social conventions around the information used. Abby Covert laid out a beautiful tapestry of things we all need to take on to make sense of everyday life and life at work. With clear and distinct guardrails and signposts, we don't feel so distracted or lost. Her talk was a true enlightenment for me, being of the same profession, information architect.

Fredric Landqvist research blog

A Health Care Information Commons Vision: from frozen assets to liquid gold

This is the second post in a series (1) unpacking interoperability in the healthcare system. The basis of this post is semantic and technical interoperability, hence a systemic overview.

The future of health care relies on the improved flow of captured patient health information across the whole care continuum. This means a shared information system linking systems and devices from participating health care organisations while maintaining patient privacy and security standards. Such a realization would not only enhance the clinician and patient experience but also enable faster treatment and better care coordination for patients.

Information Commons is an information system, …, that exists to produce, conserve, and preserve information for current and future generations.

 A seamless and secure hub, heavily-linked, providing point-of-care access to critical patient data and care decision support information for the delivery of timely care, reducing the duplication of tests and procedures.

All in all, this has to be built upon a participatory community paradigm, where clinicians, policy makers, leaders and patients share a vision to create an interoperable information space – one that is sustainable regardless of previous lock-in mechanisms set by different technical and semantic standards, vendors, processes and policy making.

Healthcare Information Commons

How do we create an interoperability climate?

Changes for interoperability lie in the development of new pilots with strong collaboration. They are generally more successful where they are based on patient or illness groups and are value-orientated, open and scalable. After the requirements phase, iteration based on early adopters' feedback can identify the need for improvements and enhancements around the relevancy, format and visual display of data and information and the usability of the solution, and provide insight into workflow impact. The Information Commons is also a good arena for clinicians to share positive anecdotes from their experiences, upon which scalable pilots can be expanded.

Such developed infrastructure and services can also support or be leveraged by other national or regional health initiatives.

Technical Layers of interoperability

Interoperability can cover many layers but at its basis would be an interoperable access layer that integrates and securely shares clinical data from multiple sources giving one point of access. The user interface (GUI) could then provide and display data and information based on stakeholder users and medical/situational context.

Such a layer would have to accommodate and support varied data from the distributed system of actors, aligning to open standards while at the same time being plastic enough in design and instantiation.

Interoperability not only covers the sharing of information but also its usage. This may include added functionality by the EHR vendor themselves or the creation of further value-adding knowledge layers that can take advantage of both structured and (the untapped wealth of) unstructured data within EHRs.

Findwise in its EU funded KConnect project is doing just that. It is currently collecting use case studies from Jönköping (RJI/Qulturum) in order to create a pilot solution for clinicians to take advantage of ‘hidden’ textual data.

Questions of interoperability also lie in the physical user experience of the systems themselves. Should the basic layer provided by EHR vendors be open to include value-added software from other parties, should it be embedded or be made into another GUI? Which ultimately is best for the clinician workflow and the agility of software solutions in supporting new value-based outcomes and reiteration for improvements in efficiency and effectiveness?

Semantic Transformer

The annotations made in healthcare systems across different domains all have a very similar outset, but lack a coherent, interoperable mechanism to work smoothly outside the local context. At international, national and regional levels there should be services that act like the electric grid that provides society with energy to be used in many contexts: a semantic grid that hosts controlled vocabularies within the domain, but also shares practices and processes. With the use of open standards, these could bridge across organisational boundaries and help clean up the currently messy healthcare information space.

The healthcare information commons does not per se have to be one system, but rather an interoperable set of services/systems that share standards so that they can exchange information and data – very similar to the way the Internet and linked data work today, not restricted by walled gardens. The governance of the commons should be a matter of public services, with sustainable resources and an open governance agenda that invites participation and engagement. No single actor in the network, be it a large hospital, private caretaker or regional public governing body, will be able to take care of this single-handedly. It should be a true “commons” undertaking!

The infusion of the Information Commons into everyday healthcare provisioning use cases with semantic transformer applications could be in several modalities: finding and acting upon information or contributing in the local context.

At the data entry or capture point, there will be options to add semantic layers and attributes to the type of content and data provisioned. An easy way to illustrate this is the emerging use of schema.org templated entities and properties for MedicalTypes, MedicalConditions, Drugs, Guidelines and Codes from controlled vocabularies like SNOMED CT, MeSH, ICD-10 and the like.

Analogously, using digital cameras in smartphones or other devices means that the user might add “some” metadata or tags about the picture. Devices and sensors add more layers of granularity, with attributes that most end-users never see or bother about. These extra resource descriptions interplay with cloud-based services such as Google Photos, where different algorithms reformat and package the content into new forms, such as contextual albums, scenes and so forth.

A set of semantic transformer application layers should be intertwingled with the Healthcare Information Commons – firstly to make easy linkages between data sets, as the Web of Data scenarios and Linked Data propose, but also to provide smarter integration points into back-end supporting processes in healthcare systems, where more private and locked-in data sets about patient conditions, treatments, drugs and so on exist.

The semantic transformer applications could be open APIs developed by the community for the commons, but also commercial applications provided by line-of-business specialist software vendors – as long as all of these layers are compliant with the open standards!

For legacy systems such as EHRs, and for off-the-shelf healthcare and business applications that are semantically impaired, these semantic transformer applications could work as a repair kit for old, broken systems. Consequently, there would be no need to overhaul all legacy software within the caretaker's organisation – a smoother migration path to interoperability.

There also exists the need for semantic interoperability between the contextual patient information within the EHR and the provision of clinical decision support information. This could be in the form of internal medical guidelines and best practices, or from external resources such as medical journals or clinical trial reports.

The KConnect project is providing semantic annotation and semantic search services in different languages for clinicians and researchers to access the very latest medical literature. This is achieved by semantically annotating the required medical information (EHRs, guidelines, journals etc.) and having the semantic search engine take full advantage of known key medical entities/concepts and their relationships.

Through the indexing of new information about drug usage, best practices, guidelines, new clinical trials and journals, clinicians then access up-to-date, relevant information whenever they need it.
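As a toy sketch of the annotation idea (not the KConnect implementation; the vocabulary entries are invented, and real systems would load SNOMED CT, MeSH or ICD-10), dictionary-based matching of concept labels in free text might look like this:

//Requires: using System; using System.Collections.Generic; using System.Linq;
public static class ToyAnnotator
{
    //Invented sample vocabulary mapping concept labels to codes.
    private static readonly Dictionary<string, string> Vocabulary =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            ["type 2 diabetes"] = "ICD-10 E11",
            ["hypertension"] = "ICD-10 I10"
        };

    //Return every known concept label found in the text, together with its code.
    public static IEnumerable<(string Label, string Code)> Annotate(string text)
    {
        return Vocabulary
            .Where(entry => text.IndexOf(entry.Key, StringComparison.OrdinalIgnoreCase) >= 0)
            .Select(entry => (entry.Key, entry.Value));
    }
}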

In the near future, to maximise both clinician and patient engagement with EHRs, different uses and views of the EHR will have to be driven by suitable context and stakeholder semantics.

Shared Decision making

When moving into value-based health care and outcome measurement (as presented here by Sveus), it is critical that all actors participate on a connected, level playing field, so that communication between healthcare practitioners, patients and their social networks works. This includes the need for shared norms and definitions, as well as systems to support decision making – and obviously a harmonised set of metrics to measure outcomes.

As presented by Peter Ubel in his talks and recent book Critical Decisions, it is key that the clinician and the patient are able to share a common view. All practitioners share jargon that does not always communicate well to the receiver, so there are plenty of communication breakdowns recorded in everyday practice, leading, in the worst cases, to “malpractice” for the patient. Over the last couple of decades there has been a shift in power relations between healthcare professionals and patients and their families. Patient empowerment is a good thing, but if things get lost in translation there is a risk that critical decisions are not fully supported.

With a Healthcare Information Commons pool of resources, there lie opportunities to guide patients and practitioners in their critical decision making, but also to strengthen learning and innovation within the communities of practice, with open feedback loops back to the pool.

Privacy & Security upfront

Just as data interoperability can be seen as the sharing of data, data security can be seen as the sharing of data in the right way and data privacy seen as the sharing of data with the right person in the right way. We are naturally concerned as to who may be using our data and want to be able to control its use.

The boundary between citizens’ App data and their medical data is blurring rapidly as App developments and sensors continue to provide new and different data that the individual, health care and clinical research can capitalise on in the effort to move towards better wellbeing and more value-based healthcare.

While data privacy and security have become the headline darlings of the media, they can often be distractors from innovation, masking the true benefits of the flow of information. Just as with physical assets, there are best practices for data misuse prevention, protection and policing. The majority of misuse or abuse of personal data is more often caused by human error and misjudgement than by the failure of technology.

Data interoperability can be better supported when services have clear guidelines to inform citizens as to who, when and how their data is shared, for what purpose, and the available steps to alter said process. A better-informed public would then see more free data resources being used for clinical research, e.g. the Million Hearts initiative in the US, where citizen data is being used to reduce heart attacks and strokes.

Open regulations, collaboration and co-ordination, along with risk assessment and protection practices such as encryption, anonymisation and de-identification, can all go a long way towards allowing secure data interoperability, be it for personal or aggregated data. IT also offers the potential of rule-based access and forensic data-access reports. No system can be made fool-proof; however, precautions and a well-designed data-breach response plan are achievable.

Obviously we do not want all our healthcare records to be out in the open for anybody to use or read, any more than we want our financial records to be. Privacy is really key! The Information Commons should work with aggregated data, not the singular set of records for one patient.

Patient safety drives the need for a freer flow of data between actor systems. The medical conditions and contexts set the standards for sharing, where it should be possible to share extracts or segments in line with privacy policies.

Future real-life experience exposé

With a recent Swedish report on diabetes care and outcome measurement in mind, it makes sense to illustrate the case of a diabetes patient living and acting in Göteborg, in the west of Sweden. They have a medical condition that is a lifelong journey with an endocrine system out of order. This has a great impact on the patient's everyday life and on diabetes-related complications. With a good life balance of training, exercise and eating habits, it is possible to keep glucose patterns in such a way that your life expectancy will equal anybody else's.

The use of personal choices to trigger improved behaviour gives the person options to choose selected wellbeing (e.g. Weight Watchers), fitness (e.g. Runkeeper) and health-monitoring applications. In most cases these are closed ecosystems, e.g. the Health app included with iOS, with options to share progress in social media (about eating well or improving your personal training). Many life science corporations are developing medical-condition, disease-area or treatment-specific health-monitoring applications (e.g. FreeStyle Libre from Abbott for improved glucose monitoring) that clinicians recommend during patient consultations.

For clinical researchers there are ecosystem-specific toolkits, like the open-sourced Apple ResearchKit. The existence of closed ecosystems naturally makes it more problematic to share and exchange data. In this space, open standards based on the Information Commons idea make sense too – where semantic translators could improve the transmission of data from one closed ecosystem to another without privacy infringement.

A Personal Health Record (PHR) is a health record where health data and information related to the care of a patient are maintained by the patient.

In a future, more seamlessly interoperable world, the citizen/patient should be provided with one secure access point to his or her health account, e.g. in Sweden 1177, Mina Vårdkontakter and Hälsa för mig.

The outstanding question: how do we get interoperability between the PHR and wellbeing, fitness and health apps, where it is easy to share vital bits of data in a sound manner?

In this scene, open standards should be applied to create a make-do semantic transformation.

Lastly – interoperability within the Professional Clinician Workplace?

The statements and real-life stories from the trenches in any clinical workplace show a mess of supporting information systems: EHRs that neither cooperate nor interoperate. Many clinicians realise that they have to provision data into a handful of systems, with a significant doubled manual workload. This comes with risks, given the stressful environment, and many “malpractice” incidents can arise from this workplace disorder.

Each system supports its part of the process. While some software suites try to close down into a 'one system to rule them all' paradigm, they still barely lean upon any open standards, and they lack semantic and structured ways to use data and information outside of the supporting system's narrow scope.

A diabetes nurse (after a patient consultation) has to enter data into more than 10 different areas, including quality assurance and measurement systems, e.g. the NDR in Sweden. In some cases integrated point-to-point solutions have been put in place, but mostly this is not the case, and so unnecessary frustration is created.

In every intervention where clinicians and patients communicate, regardless of whether it is online, remote or on-site, there should be opportunities to tap into the Healthcare Information Commons space, with the potential to find new medical treatments, emerging standards and guidelines, and breaking news for clinicians, as well as patient-oriented and formatted communications. In the best of worlds, semantic translator applications will bridge between ecosystems inside the personal health space as well as into the clinicians' workplace environment – helping, guiding and improving all dimensions of interoperability.

Concluding remarks

Having the value-based healthcare and outcome measurement domain as a specific health care change driver will push the use of standards at all levels to the limit. In the following blog post in this series the ambition is to unpack information governance, since data ownership and trust also have to be ironed out. As stated by Prof. Michael E. Porter, the capture of data for proper outcome measurement is one of the major roadblocks ahead. The orchestration of all resources and governance still has to unfold. Happily, some building blocks for the Healthcare Information Commons have emerged, so we do not need to reinvent the wheel:

  • The Wikimedia realm of “commons”, with all entries of semantically useful data in wikidata.org;
  • Standard sets for medical conditions from international collaboration at ICHOM, and in Sweden Sveus; standards from HL7 FHIR, W3C and the Web of Data / Semantic Web. The Swedish National Board of Health and Welfare has an embryonic information structure (not in a semantic, machine-readable RDF format), and information intermediaries like Google have settled on simple schemas for health and medicine;
  • Open innovation and the “open” paradigm, which will change evidence-based medicine, Bad Pharma and science on a societal level, as stated by Ben Goldacre (TED), where we as patients, together with clinicians, are able to question treatments based on open data and improve the quality of the Healthcare Information Commons;
  • The technology stack of smarter devices, sensors and things, along with Internet anywhere, cognitive computing and computational knowledge on top of the commons, which will bring forward semantic translators;
  • New leaps in collaborative work and development using the notebook theme, in language- and platform-agnostic ways.

Making sense of, and defrosting, health data into liquid gold will improve healthcare for all.

For more information on Findwise research, please visit KConnect and Orios (Open Standards).


Fredric Landqvist research blog
Peter Voisey

Stay Cleaning and moving boxes for cloud

This is the seventh post in a series (1, 2, 3, 4, 5, 6) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud.  We will help you to consider the options and guide you on the steps you need to take.

Starting from our first post we have covered different aspects you need to consider as you take each step, including information structure and how it is managed, using Office 365 and SharePoint as a technology example, as well as planning for migration.

Moving Boxes

Do not even think about moving into the cloud apartment without a proper cleaning of the content buckets. Moving from an architected household to a rented place demands a structured audit. Clean out all redundant, outdated and trivial matter (ROT) – the very same habit you have when cleaning up the attic before moving out of your old house.

It is also a good idea to decorate and add any features to your new cloud apartment before the content furniture is there.  It means the content will fit with any new design and adapt to any extra functionality with new features like windows and doors.  This can be done by reviewing and updating your publishing templates at the same time.  This will save time in the future.

Leaning upon information governance standards, it should be easy for all content owners who have been appointed to a set of collections or habitats to address the cleaning before moving. Most organisations could use a content vacuum cleaner – or rather, use the search facilities and metrics to deliver up-to-date reports on (see the sketch after this list):

  1. Active / inactive habitats
  2. Habitats with no clear ownership, or where the owner has left the building
  3. Metadata and link quality for content and collections to be moved across to the cloud apartments
  4. Publishing templates to review, with features or design to update for use in the cloud
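As a sketch of what such a report could look like in practice, the snippet below (Python, illustrative only) uses the Microsoft Graph SharePoint site usage report to flag sites with no recorded activity within a chosen period. It assumes you already hold a valid access token with permission to read reports; the 180-day threshold and the handling of missing owners are assumptions to adapt to your own governance rules.

```python
# Minimal sketch: flag inactive SharePoint sites from the Microsoft Graph usage report.
import csv
import io
from datetime import datetime, timedelta

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def inactive_sites(token: str, days: int = 180):
    """Return (site URL, owner, last activity) for sites idle longer than `days`."""
    # The report is returned as CSV with columns such as "Site URL",
    # "Owner Display Name" and "Last Activity Date".
    url = f"{GRAPH}/reports/getSharePointSiteUsageDetail(period='D180')"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()

    cutoff = datetime.utcnow() - timedelta(days=days)
    stale = []
    for row in csv.DictReader(io.StringIO(resp.text)):
        last = row.get("Last Activity Date") or ""
        if not last or datetime.strptime(last, "%Y-%m-%d") < cutoff:
            stale.append((row.get("Site URL"),
                          row.get("Owner Display Name") or "no owner recorded",
                          last or "never"))
    return stale
```

A report like this gives content owners a concrete worklist of habitats to keep, archive or purge before anything is packed into moving boxes.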

When all active habitats and qualified content buckets have been revisited by their curators and information owners, the preparation and use of moving boxes can begin.

All moving boxes need proper tagging, so that any moving company will be able to sort out whereabouts the stuff should be placed in the new house or building. For collections and habitats, this means using the very same set of questions as when adding a new habitat or collection to the cloud apartment house – who, why, where and so forth – through the use of a structured workflow and form. When these first cleaning steps have been addressed, there should be automatic metadata enhancement, aligned with the information management processes to be used in the new cloud.

With decent resource descriptions, and content cleaned up through the audit (ROT), this last step will auto-tag content based upon the business rules applied for the collection or habitat. The content is then loaded onto the content moving truck, or loading dock, ready to be added to the cloud.
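As an illustration of what that auto-tagging step could look like, here is a minimal sketch (Python) where hypothetical business rules map a collection's path or title keywords onto managed metadata terms. The rules and term names are invented for the example; real rules would come from each collection's information owner.

```python
# Minimal sketch of rule-based auto-tagging ahead of migration.
import re
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    path: str
    title: str
    tags: set = field(default_factory=set)

# Illustrative rules only: pattern on path or title -> metadata terms to apply.
RULES = [
    (re.compile(r"/finance/", re.I), {"Department:Finance"}),
    (re.compile(r"invoice|purchase order", re.I), {"DocType:Invoice"}),
    (re.compile(r"policy|guideline", re.I), {"DocType:Policy"}),
]

def auto_tag(item: ContentItem) -> ContentItem:
    """Apply every rule whose pattern matches the item's path or title."""
    for pattern, tags in RULES:
        if pattern.search(item.path) or pattern.search(item.title):
            item.tags |= tags
    return item

print(auto_tag(ContentItem("/sites/finance/2018/", "Travel policy")).tags)
# e.g. {'Department:Finance', 'DocType:Policy'}
```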

All content that either lacks properly assigned information ownership, or is in such a shape that migration can’t be done, should persist on the estate, be archived, or be purged. This means that all metadata and links to any content bucket or habitat that won’t be moved in the first instance should at least have correct and unique URIs (addresses) pointing to that content. And where a bucket or habitat has been run down by a demolition firm – that is, purged – all inter-linkage to that piece of content or collection has to be changed.

This is typically a perfect quality report for the information owners and content editors: something they need to work through prior to actually loading the content onto the content dock.
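A minimal sketch of such a quality report (Python, with hypothetical inputs): given a map from each item to its outbound links, and the set of URIs that will be purged or left behind without redirects, it lists every item that still points at an address that will not exist after the move.

```python
# Minimal sketch: report items whose links point at purged or left-behind URIs.
from typing import Dict, List, Set

def link_quality_report(items: Dict[str, List[str]],
                        removed: Set[str]) -> Dict[str, List[str]]:
    """items maps an item URI to its outbound links; removed holds URIs that
    will be purged or stay on the old estate without a redirect."""
    return {uri: [link for link in links if link in removed]
            for uri, links in items.items()
            if any(link in removed for link in links)}

report = link_quality_report(
    {"/sites/hr/handbook": ["/sites/legacy/policy-2009", "/sites/hr/forms"],
     "/sites/hr/forms": ["/sites/hr/handbook"]},
    removed={"/sites/legacy/policy-2009"})
print(report)  # {'/sites/hr/handbook': ['/sites/legacy/policy-2009']}
```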

Rubbish and Weed
Finally, when all rotten data, deserted habitats and unmanageable buckets have been weeded out, it is time to prepare the moving truck and send the content to its new destination.

Our final thread will cover how the organisation and its inhabitants will be able to find content in this mix of clouds and things left behind on the old estate. Cloud Search and Enterprise Search – seamless or a nightmare?

Please join our Live Stream on YouTube on the 20th of November, 8.30AM – 10AM Central European Time.
Fredric Landqvist research blog
Mark Morrell, intranet pioneer

Placemaking, wayfinding and game rules in the Clouds

This is the sixth post in a series (1, 2, 3, 4, 5, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud.  We will help you to consider the options and guide you on the steps you need to take.

Starting from our first post we have covered different aspects you need to consider as you take each step including information structure and how it is managed using Office 365 and SharePoint as a technology example.  We will cover more about SharePoint in this post, and placemaking in the cloud.
Funky Village
In SharePoint there is a set of logical chunks. One could decompose the digital workplace into intranet sites, as departmental and organisational buckets; team sites, where groups collaborate; and lastly your personal domain, the My Site collection. Navigating between these is a mix of traditional information architecture and search-driven content.  When inside a habitat such as a team site, it is not always obvious how to cross-link or navigate to other domains within the digital workplace hosted in SharePoint.

One way to overcome this is to render different forms of portals, based upon dynamic navigation. These intersections and aggregates help users to move around the maze of buckets and collections of content. SharePoint has very good features and options to create search-based content delivery mechanisms, as sketched below.
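As a hedged sketch of such a search-driven block, the snippet below (Python) calls the SharePoint Search REST endpoint to pull a list of pages matching a term, which a portal page could then render as dynamic navigation. The site URL, query text and authentication are placeholders; how you authenticate will depend on your tenant setup.

```python
# Minimal sketch: query the SharePoint Search REST API for a search-driven portal block.
import requests

def recent_pages(site_url: str, auth_headers: dict, term: str):
    """Return Title/Path pairs for site pages matching a term (illustrative KQL)."""
    resp = requests.get(
        f"{site_url}/_api/search/query",
        params={
            "querytext": f"'{term} ContentType:\"Site Page\"'",
            "selectproperties": "'Title,Path'",
            "rowlimit": "10",
        },
        headers={**auth_headers, "Accept": "application/json;odata=verbose"},
    )
    resp.raise_for_status()
    rows = (resp.json()["d"]["query"]["PrimaryQueryResult"]
            ["RelevantResults"]["Table"]["Rows"]["results"])
    # Flatten each result row's key/value cells into a small dict.
    return [{cell["Key"]: cell["Value"] for cell in row["Cells"]["results"]}
            for row in rows]
```

The same pattern, driven by managed metadata terms rather than free text, is what turns a static bucket structure into connected, wayfinding-friendly habitats.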

A metadata- and search-based content model gives us cues for the future design of the digital workplace, with connected habitats and a sustainable information architecture – where people don’t get lost, and have the wayfinding means to survive everyday work practices.

This is where how you manage content in SharePoint and Office 365 becomes critical.  As we said in our first post, it is important you have a good information architecture combined with a good governance framework that helps you to transform your buckets of content from the estate into the cloud.  We have covered information architecture, so we now move more towards how governance completes the picture for you.

There are three approaches to the governance your organisation needs to have with SharePoint and Office 365.  You don’t have to use just one: you can combine some of each to find the right blend for your organisation, and what works best for you will depend on a number of different factors.  The three approaches are:

  • Restricting use – stopping some features from being used e.g. SharePoint Designer
  • Encouraging best practice – guidance and training available
  • Preventing problems – checking content before it is published

Each of these approaches can support your governance strategy.  The key is to understand what you need to use.

Restricting use

You need to be clear why your organisation is using SharePoint and Office 365 and the benefits expected.  This will shape how tight or loose your governance needs to be.

Once you are clear on this, you then need to consider the strategic benefits and drawbacks of restricting features such as SharePoint Designer and site collection administration rights.

Benefits

  • You control what is being used.
  • You decide who uses a feature e.g. SharePoint Designer.
  • You manage the level of autonomy each site owner has.
  • You find out why someone needs to use a feature.
  • You monitor costs for licences, users, servers, etc.
  • You measure who is using what and why for reporting.

Drawbacks

  • You stifle innovation by not allowing people to test out ideas.
  • You stop legitimate use by asking for permission to use features.
  • You prevent people from being able to share knowledge how they wish to.
  • You may be unable to realise the maximum potential of SharePoint.
  • You create unnecessary administration.
  • You risk adding costs without any value to offset them with.

You need to get the balance right with governance that gives you maximum value for the effort needed managing SharePoint and Office 365.

Encourage best practice

The goal from implementing SharePoint and Office 365 is to have an environment that enables employees to publish, share, find and use information easily to help with their work.  They are confident the information is reliable and appropriate, whatever their need for it is.  People also feel comfortable using these tools rather than alternative methods like calling helpdesks or emailing other employees for help.

Encouraging best practice, by giving people the opportunity to test the tools against their needs, is one approach to achieving this.  There are factors you need to consider that can help or hinder the success of this approach.

Benefits

  • You inform employees of all the benefits to be gained.
  • You train people to use the right tools.
  • You design a registration process to direct people to the right tools.
  • You point employees to guidance on how to follow best practice.
  • You encourage innovation by giving everyone freedom of use.

Drawbacks

  • You can’t prevent people using different tools to those you recommend.
  • You risk confusing employees who use content while unsure of its integrity.
  • You can’t prevent everyone ignoring best practice when publishing.
  • You may make it difficult for people to share knowledge effectively.
  • Your governance model may be ineffective and need improving.

Getting the balance right between encouraging best practice and the level of governance to deter behaviour which can destroy the value from using SharePoint and Office 365 is critical.

Preventing problems

As well as encouraging best practice, preventing problems helps to reduce the time and costs wasted on sorting out unnecessary issues.  While that is the aim of most organisations, the practical realities as it is rolled out can divert plans from achieving this.

You need to get the right level of governance in place to prevent problems.  Is it encouraging innovation and keeping governance light touch?  Is it a heavier touch to prevent the ‘wrong’ behaviour and minimise risk of your brand and reputation being damaged?  How much do you want to spend preventing problems?  What does your cost/benefit analysis show?

Benefits

  • People using SharePoint and Office 365 have a great experience (especially the first time they use it).
  • Everyone is confident they can use it for what they need it for without experiencing problems.
  • Employees don’t waste time calling the helpdesk because many problems have been prevented.
  • Effective governance encourages early adoption and increased knowledge sharing.
  • Costs spent preventing problems are justified by increased productivity and reduced risk of errors.

Drawbacks

  • People find registering difficult and lengthy because of extra steps taken to prevent problems and don’t bother.
  • People find it too restrictive for their needs and it stifles innovation.
  • People turn to other tools (maybe not approved) to meet their needs and ask other people for help to use them.
  • Overly restrictive governance prevents the most beneficial use by raising the barrier too high for people.
  • Costs of preventing problems are higher than benefits to be gained and not justified.

You need to consider the potential benefits and drawbacks before deciding on the level of governance that is right for your organisation.

Remember, it is possible and probably desirable to have different levels of governance for each feature.  It may be lighter for personal views and opinions expressed in MyProfile and MySite but tighter for policies and formal news items in TeamSites.

That is the challenge!  You have so much flexibility to configure the tools to meet your organisation’s needs.  Don’t be afraid to test out on part of your intranet to see what effect it has and involve employees to feed back on their experience before launching it.

The way forward is to create a sustainable information architecture that supports an information environment available on any platform, everywhere, anytime and on any device.  A governance framework can show roles and responsibilities, and how they fit with a strategy and plan, with publishing standards as the foundation for a consistently good user experience.

Combining a governance framework and information architecture with the same scope avoids any gaps in your buckets of content being managed or not being found.  It helps you transform from your estate to the cloud successfully.

In our concluding posts we will dive into more design-oriented topics, with a helping hand from findability experts and developers, and add migration thoughts in the next post. But first we navigate the social graph, being people-centric, which leaves some outstanding questions: how will the graph interoperate if your business runs several clouds, and still has buckets of content elsewhere?

Please join our Live Stream on YouTube on the 20th of November, 8.30AM – 10AM Central European Time.
Fredric Landqvist research blog
Mark Morrell, intranet pioneer