Beyond Office 365 – knowledge graphs, Microsoft Graph & AI!

This is the first joint post in a series where Findwise & SearchExplained together decompose Microsoft’s realm with a focus on knowledge graphs and AI. The advent of graph technologies, and more specifically knowledge graphs, has put them at the epicentre of the AI hype.


The use of a symbolic representation of the world, as with ontologies (domain models) within AI, is nothing new. The Cyc project, for instance, started back in the 80’s. The most common use for the average Joe would be the Google Knowledge Graph, which links things and concepts. In the world of Microsoft, this has become a foundational platform capacity with the Microsoft Graph.

It is key to separate the wheat from the chaff, since the Microsoft Graph is by no means a knowledge graph. It is a highly platform-centric way to connect things, applications, users, information and data. Which is good, but it still lacks the capacity to disambiguate complex things of the world, since building a knowledge graph (i.e. an ontology) is not its core functionality.

From a Microsoft-centric worldview, one should combine the Microsoft Graph with different applications and AI to automate and augment life with Microsoft at Work. The reality is that most enterprises do not use Microsoft alone to envelop the enterprise information landscape. The information environment goes far beyond it, into a multitude of organising systems within or outside the company walls.

Question: How does one connect the dots in this maze-like workplace? By using knowledge graphs and infusing them into the Microsoft Graph realm?


The model, artefacts and pragmatics

People at work continuously have to balance between modalities (provision/find/act), independent of work practice or discipline, when dealing with data and information. People also have to interact with groups and imagined entities (i.e. organisations, corporations and institutions). These interactions become the mould whereupon shared narratives emerge.

Knowledge graphs (ontologies) are the pillar artefacts where users will find a level playing field for communication and codification of knowledge in organising systems. When the knowledge graphs are linked with a smart semantic information engine utility, we get enterprise linked data that connects the dots: a sustainable, resilient model in the content continuum.

Microsoft at Work – the platform, as with Office 365 – has some key building blocks: the content model that goes across applications and services. The Meccano pieces, like collections [libraries/sites] and resources [documents, pages, feeds, lists], should be configured with sound resource descriptions (metadata) and organising principles. One of the back-end services to deal with this is the Managed Metadata Service and the cumbersome TermStore (it is not a taxonomy management system!). The pragmatic approach will be to infuse/integrate the smart semantic information engine (knowledge graphs) with these foundation blocks. One outstanding question is why Microsoft has left these services unchanged, with few improvements, for many years?
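As a rough illustration of tinkering with these building blocks, the TermStore can be read programmatically through the Microsoft Graph term-store API, which is one integration point for reconciling its terms against an external knowledge graph. A minimal Python sketch, assuming a registered app and a valid OAuth access token; the site and term-set ids are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<your-site-id>"        # placeholder
SET_ID = "<your-term-set-id>"     # placeholder
TOKEN = "<oauth2-access-token>"   # placeholder

def list_top_terms(site_id: str, set_id: str) -> list:
    """Fetch the first-level terms of a term set from the Graph term store."""
    url = f"{GRAPH}/sites/{site_id}/termStore/sets/{set_id}/children"
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

for term in list_top_terms(SITE_ID, SET_ID):
    labels = [label["name"] for label in term.get("labels", [])]
    print(term["id"], labels)
```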

In the best of worlds, the unabridged pathway and lifecycle of content provision, such as the creation of sites curating documents, will be a guided route (automated and augmented [AI & semantics]). The Microsoft Graph and its set of APIs and connectors push the envelope with people at the centre. As mentioned, it is a platform-centric graph service, but it lacks a connection to shared narratives (as with knowledge graphs). It offers fuzzy logic, where end-user profiles and behaviour patterns connect content and people, but no, or very limited, opportunity to fine-tune or align these patterns to the models (concepts and facts).

Akin to the provision modality pragmatics above is the find (search, navigate and link) domain in Office 365. The search road-map from Microsoft, like a yellow brick road, envisions a cohesive experience across all applications. The reality: it is still silo search 😉 The Microsoft Graph will go hand in hand with realising personalised search, but it is still constrained in its means to deliver a targeted search experience (search-driven application) in modern search, which is problematic, to say the least. And the back-end processing steps, as well as the user experience, do not lean upon the models to deliver e.g. semantic search to connect the dots. Using only end-user behaviour patterns and end-user tags (/system/keyword), search surfaces as a disjointed experience with low precision and recall.

The smart semantic information engine will usually be a mix of services or platforms that work in tandem, for example:

  1. Semantic Tools (PoolParty, Semaphore)
  2. Search and Analytics (i3, Elastic Stack)
  3. Data Integration (MarkLogic, BizTalk)
  4. AI modules (MS Cognitive stack)

In the forthcoming post on the theme Beyond Office 365, unpacking the promised land with knowledge graphs and AI, there will be more technical assertions.
Fredric Landqvist research blog
Agnes Molnar SearchExplained


Tinkering with knowledge graphs

I don’t want to sail with this ship of fools, on the opulent data sea, where people are drowning without any sense-making knowledge shores in sight. You don’t see the edge before you drop!

Echoencephalogram (Lars Leksell) and neural networks

How do organisations reach a level playing field, where it is possible to create a sustainable learning organisation [cybernetics]?
(Enacted Knowledge Management practices and processes)

Sadly, in many cases, we face the tragedy of the commons!

There is an urgent need to iron out the social dilemmas and focus on motivational solutions that strive for cooperation and collective action: knowledge deciphered with the notion of intelligence, and emerging utilities with AI as an assistant to us humans. We the peoples!

To make a model of the world, to codify our knowledge and apply worldviews to complex data, is nothing new per se. A knowledge graph is in its essence a constituted shared narrative within the collective imagination (i.e. an organisation), where facts about things and their inherited relationships and constraints define the model used to master the matrix. These concepts and topics are our means of communication to bridge between groups of people: shared nomenclatures and vocabularies.


Knowledge Engineering in practice


At work – building a knowledge graph – there are some pillars that the architecture rests upon. First and foremost is the language we use every day to undertake our practices within an organisation: the corpus of concepts, topics and things that revolve around the overarching theme. No entity acts in a vacuum with no shared concepts. Humans coordinate work practices through shared narratives embedded into concepts and their translations from person to person. This communication might use different means, like cuneiform (in ancient Babel) or the digital tools of today. To curate, cultivate and nurture a good organisational vocabulary, we also need to develop practices and disciplines that to some extent bear similarities to those of the ancient clay-tablet librarians: organising principles applied to the organising system (information systems, applications). This discipline could be defined as that of the taxonomist (taxonomy manager), knowledge engineer or information architect.

Set the scope – no need to boil the ocean


All organisations, independent of business vertical, have known domain concepts that are defined by standards, code systems or open vocabularies. A good idea will obviously be to first go foraging in the sea of terminologies, to link, re-hash/re-use and manage the domain. The second task in this scoping effort will be to audit and map the internal terrain of content corpora. Information is scattered across a multitude of organising systems, but within these there are pockets of structure: here we will find glossaries, controlled vocabularies, data models and the like. The taxonomist will then, together with subject matter experts, arrange governance principles and engage in conversations on how the outer and inner loops of concepts link, and start to build domain-specific taxonomies, preferably using the Simple Knowledge Organization System (SKOS) standard.
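To make the SKOS suggestion concrete, here is a minimal sketch of a domain taxonomy built with Python’s rdflib; the namespace and the concepts are invented for illustration:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS, RDF

EX = Namespace("http://example.org/taxonomy/")  # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)

scheme = EX["domainTerms"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))

policy = EX["policy"]
g.add((policy, RDF.type, SKOS.Concept))
g.add((policy, SKOS.prefLabel, Literal("insurance policy", lang="en")))
g.add((policy, SKOS.altLabel, Literal("cover note", lang="en")))
g.add((policy, SKOS.inScheme, scheme))

claim = EX["claim"]
g.add((claim, RDF.type, SKOS.Concept))
g.add((claim, SKOS.prefLabel, Literal("claim", lang="en")))
g.add((claim, SKOS.broader, policy))  # hierarchical link to the broader concept
g.add((claim, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```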

Participatory Design from inception


Concepts and their resource descriptions will need to be evaluated and semantically enhanced with several different worldviews from all practices and disciplines within the organisation. Concepts might have different meanings. Meaning is subjective, demographic, socio-political, and complex. Meaning sometimes gets lost in translation (between different communities of practice).

The best approach to achieving a highly participatory design in the development of a sustainable model is to simply publish the concepts as open thesauri. A great example is the HealthDirect thesaurus. This service becomes a canonical reference that people are able to search, navigate and annotate.

It is smart to let people edit, refine and comment (annotate) in the same manner as Wikipedia evolves, i.e. editing Wikidata entries. These annotations then feed back to the governance network of the terminologies.


Link to organising systems

All models (taxonomies, vocabularies, ontologies etc.) should be interlinked with the existing base of organising systems (information systems [IS]) or platforms. Most ISs have schemas, built-in models and business rules to serve as applications for a specific use case. This also implies the use of concepts to define and describe the data in metadata, in reference data tables or as user experience controls. In all these Lego pieces within an IS or platform, there are opportunities to link the concepts to the shared narratives in the terminology service: linked enterprise data building a web of meaning, and opening up a more interoperable information landscape.

One omnipresent quest is to set up a sound content model and design for e.g. Office 365, where content types, collections, resource descriptions and metadata have to be concerted in back-end services such as the managed metadata service. Within these features and capacities, it is wise to integrate with the semantic layer (terminologies and graphs). Other highly relevant integrations relate to search-as-a-service, where the semantic layer co-acts in the pipeline steps: adding semantics, linking, auto-classifying and disambiguating with entity extraction. In the user experience journey, the semantic layer augments and connects things, which is, for instance, how the Microsoft Graph has been ingrained all through the platform. Search and semantics push the envelope 😉

Data integration and information mechanics

A decoupled information systems architecture using an enterprise service bus (messaging techniques) is by far the most used model. To enable sustainable data integration, there is a need for a data architecture and a clear integration design. Adjacent to the data integration are means for cleaning up data and harmonising data sets into a cohesive whole: extract-transform-load [ETL]. Data governance is essential! In this ballpark we also find cues to master data management. Data and information have fluid properties, and the flow has to be seamless and smooth.

When defining the (asynchronous) message structure in information exchange protocols and packages, it is highly desirable to rely on standards and well-defined models (ontologies), as within the healthcare & life science domain using HL7 FHIR. These standards have domain models with entities, properties, relations and graphs. The data serialisation for data exchange might use XML or RDF (JSON-LD, Turtle etc.). The value sets (namespaces) for properties can be linked to SKOS vocabularies with terms.
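As a sketch of such a standards-based exchange payload, here is a trimmed FHIR-style Observation in Python, where the coded value is bound to a controlled vocabulary (SNOMED CT). It is an illustration, not a complete, validated FHIR resource:

```python
import json

# A trimmed, FHIR-style Observation: the "coding" block binds the payload to a
# shared vocabulary (SNOMED CT), which is what makes the message interoperable.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "271649006",          # SNOMED CT: systolic blood pressure
            "display": "Systolic blood pressure",
        }]
    },
    "valueQuantity": {"value": 120, "unit": "mmHg"},
}

print(json.dumps(observation, indent=2))
```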

Query the graph

Knowledge engineering is both setting the useful terminologies into action, and loading, refining and developing ontologies (information models, data models). There are many very useful open ontologies that could or should be used and refined by the taxonomists, e.g. the ISA² Core Vocabularies. With data sets stored in a graph (triplestore), there are many ways to query the graph for results and insights (links): by using SPARQL (similar to SQL in schema-based systems), by combining it with SHACL (constraints), or via RESTful APIs.
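A minimal sketch of querying such a graph with SPARQL, using Python’s rdflib; the Turtle file is a hypothetical export of the kind of SKOS data sketched earlier:

```python
from rdflib import Graph

g = Graph()
g.parse("taxonomy.ttl", format="turtle")  # hypothetical file

# Find every concept with its preferred label and (optional) broader concept
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label ?broader
WHERE {
    ?concept a skos:Concept ;
             skos:prefLabel ?label .
    OPTIONAL { ?concept skos:broader ?broader }
}
"""
for row in g.query(query):
    print(row.concept, row.label, row.broader)
```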

These means of querying the knowledge graph are one reason to add semantics to data integration, as described above.

Adding smartness and we are all done…

Semantic AI, or the means to bridge between symbolic representation (semantics) and machine learning (ML), natural language processing (NLP) and deep learning, is where all things come together.

The work (knowledge engineering) of building the knowledge graph, and governing it, taxes many manual steps, such as mapping models, standards and large corpora of terminologies. Here, AI capacities enable automation and continuous improvement with learning networks. Understanding human capacities and intelligence, unpacking the neurosciences (as Lars Leksell did), combined with neural networks, will be our road ahead to safe and sustainable uses of AI.
Fredric Landqvist research blog

Benevolent & sustainable smart city development

The digitisation of society emerges in all sectors, and the key driver of it all is the abundance of data that needs to be brought into context and use.


When discussing digitisation, people commonly think of data highways and server farms as being the infrastructure. Access to comprehensive information resources is increasingly becoming a commodity, enabling and enhancing societal living conditions. To achieve this, sense-making of data has to be an integral part of the digital infrastructure. Reflecting this against traditional patterns: digital roads need junctions, signs and semaphores to function, just as their physical counterparts do.

The ambition with AI and the smart society and cities should be the benefit of their inhabitants, but without a blueprint for a coherent model that works across all these utilities, it will all break. Second to this, benevolence, participation and sustainability have to be the overarching theme, to counter dystopian visions of citizen surveillance and fraudulent behaviour.

Data needs context to make sense and create value, and this frame of reference will be realised through domain models of the world, with shared vocabularies to disambiguate concepts. In short a semantic layer. It is impossible to boil the ocean, which makes us rather lean toward a layered approach.

All complex systems (or complex adaptive system, CAS) revolve around a set of autonomous agents, for example, cells in a human body or citizens in an urban city. The emergent behaviour in CAS is governed by self-organising principles. A City Information Architecture is by nature a CAS, and hence the design has to be resilient and coherent.

What infrastructural dimensions should a smart city design build upon?

  • Urban Environment: the physical spaces comprised of geodata means, registers of cadastre (real estate), roads and other things in the landscape.
  • Movable Objects: mobile sensing platforms capturing things like vehicles, traffic and more; in short, the dynamics of a city environment.
  • Human Actor Networks: the social and economic mobility, culture and community in the habitat.
  • Virtual Urban Systems: augmented and immersive platforms to model the present or envision future states of the city environment.

Each of these organising systems and categories holds many different types of data, but the data flows also intertwine. Many of the things described in the geospatial and urban environment domain might be enveloped in a set of building information models (BIM) and geographical information systems (GIS). The resource descriptions link the objects, moving from one building to a city block or area. Similar behaviour will be found in the movable objects domain, because the agents moving around will by nature do so in the physical spaces. So when building information infrastructures, the design has to be able to cross boundaries, with linked models for all useful concepts. One way to express this is through a city information model (CIM).

When you add the human actor networks layer to your data, things will become messy. In an urban system, there are many organisations, and some of these act as public agencies serving the citizens through all life and business events. This socially knitted interaction model uses the urban environment and, in many cases, movable objects. The social life of information, when people work together, co-act and collaborate, becomes the shared content continuum.
Lastly, data from all the above-mentioned categories also feeds into the virtual urban system, which either augments the perceived real city environment or feeds the city information modelling used to create instrumental scenarios of future states of the complex system.

Everything is deeply intertwingled

Connect people and things using semantics and artificial intelligence (AI) companions. There will be no useful AI without a sustainable information architecture (IA). Interoperability on all levels is the prerequisite: systemic (technical and semantic) and organisational (process and climate).

Only when we follow the approach of integration and the use of a semantic layer to glue together all the different types and models – thereby linking heterogeneous information and data from several sources to solve the data variety problem – are we able to develop an interoperable and sustainable City Information Model (CIM).

Such a model can not only be used inside one city or municipality – it should also be used to interlink and exchange data and information between cities, as well as between cities and provinces, regions and countries in the societal digitalisation transformation.

A semantic layer completes the four-layered data & content architecture that most systems have in place:


Fig.: Four layered content & data architecture

Use standards (such as ISA²) and meld them into contextual schemas and models (ontologies); disambiguate concepts and link them with thesauri and taxonomies (e.g. SKOS). Start making sense and let AI co-act as a companion (deep-learning AI) in the real and virtual smart city, applying semantic search technologies over various sources to provide new insights. Participation and engagement from all actor networks will be the default value chain, the drivers being new, cheaper and more efficient smart services: the building blocks of the city innovation platform.

The recorded webinar and the slides presented:

 

Fredric Landqvist research blog
Peter Voisey
Martin Kaltenböck
Sebastian Gabler

Trials & Jubilations: the two sides of the GDPR coin

We have all heard about the totally unhip GDPR and the potential wave of fines and lawsuits. The long arm of the law and its stick have been noted. Less talked about, but infinitely more exciting, is the other side. Turn over the coin and there’s a whole A–Z of organisational and employee carrots. How so?

Sign up to the joint webinar the 18th of April 3PM CET with Smartlogic & Findwise, to find out more.

Signal Tools (photo: https://flic.kr/p/fJD1eA)

We all leave digital trails behind us, trails about us. Others with access to these trails can use our data and information. The new European General Data Protection Regulation (GDPR) intends that the usage of such Personally Identifiable Information (PII) be correct and regulated, with the power to decide given to the individual.

Some organisations are wondering how on earth they can become GDPR compliant when they already have a business to run. But instead of a chore, setting a pathway to allow for some more principled digital organisational housekeeping can bring big organisational gains sooner rather than later.

Many enterprises are now beginning to realise the extra potential gains of having introduced new organisational principles to become compliant. The initial fear of painful change soon subsides when better-quality data comes along to make business life easier. With the further experience of new initiatives from new data analysis, NLP, deep learning and AI comes the feeling: why didn’t we just do this sooner?

Most organisations have one or more systems in place holding PII data, even if getting the right data out in the right format remains problematic. The organisation of data for GDPR compliance is best achieved by transforming it to become part of a semantic data layer. With such a layer, knowing all the related data from different sources that you have on Joe Bloggs becomes so much easier when he asks for a copy of the data you hold about him. Such a semantic data layer will also bring other far-reaching and organisation-wide benefits.
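To make the Joe Bloggs example concrete: once the sources are mapped into a semantic layer, a subject-access request can be approximated with a single graph query. A sketch using rdflib, where the layer export, the use of foaf:name and the person’s name are all illustrative assumptions:

```python
from rdflib import Graph

g = Graph()
g.parse("pii_layer.ttl", format="turtle")  # hypothetical export of the layer

# One query across all mapped sources: everything linked to a given person
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?record ?property ?value
WHERE {
    ?person foaf:name "Joe Bloggs" .
    ?record ?p ?person .          # records that point at the person
    ?record ?property ?value .    # every fact on those records
}
"""
for row in g.query(query):
    print(row.record, row.property, row.value)
```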

Semantic Data Layer

For example, heterogeneous data in different formats and from different sources can become unified for all sorts of new smart applications, new insights and new innovation that would have been previously unthinkable. Data can stay where it is… no need to change that relational database yet again because of a new type of data. The same information principles and technologies involved in keeping an eye on PII use, can also be used to improve processes or efficiencies and detect consumer behaviour or market changes.

But it’s not just the business operations that benefit: empowered employees become happier having the right information at hand to do their job. This is often difficult to achieve, as in many organisations no one area “owns” search, making it usually somebody else’s problem to solve. For the Google-loving employee, not finding stuff at work to help them in their job can be downright frustrating. Well-ordered data (better still, in a semantic layer) can give them the empowering results page they need. It’s easy to forget that Google only deals with the best structured and linked documentation; why shouldn’t we do the same in our organisations?

Just as the combination of (previously heterogeneous) datasets can give us new insights for innovation, we also observe that innovation increasingly comes in the form of external collaboration. Such collaboration of course increases the potential GDPR risk through data sharing, Facebook being a very current case in point. This brings in the need for organisational policy covering data access, the use and handling of existing data, and any new (extra) data created through its use. Such policy should, for example, cover newly created personal data from statistical inference analysis.

While having a semantic layer may in fact make human error in data usage more possible through increased access, it also provides a better solution to prevent misuse, as metadata can be baked into the data both to classify information “sensitivity” and to control user access rights.
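A toy illustration of that idea: a sensitivity label baked into each record’s metadata is checked against a user’s clearance before anything is returned. The levels and labels are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical sensitivity levels baked into each record's metadata
LEVELS = {"public": 0, "internal": 1, "pii": 2}

@dataclass
class Record:
    id: str
    body: str
    sensitivity: str  # metadata label set at annotation time

def readable(record: Record, clearance: str) -> bool:
    """Allow access only if the user's clearance covers the record's label."""
    return LEVELS[clearance] >= LEVELS[record.sensitivity]

docs = [Record("r1", "press release", "public"),
        Record("r2", "customer claim with personal data", "pii")]

for doc in docs:
    if readable(doc, clearance="internal"):
        print(doc.id, doc.body)  # r2 is filtered out
```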

So how does one start?

The first step is to apply some organising principles to any digital domain, be it inside or outside the corporate walls [the discipline of organising, Robert Glushko], and to ask the key questions:

  1. What is being organised?
  2. Why is it being organised?
  3. How much of it is being organised?
  4. When is it being organised?
  5. Where is it being organised?

Secondly, start small: apply organising principles by focusing on the low-hanging fruit, the already structured data within systems. The creation of quality data with added metadata in a semantic layer can have a magnetic effect within an organisation (build that semantic platform and they will come).

Step three: start being creative and agile.

A case story

A recent case within the insurance industry reveals some cues as to why this set of tools will improve signals and attention when becoming more compliant with regulations dealing with PII. Our client knew about a set of collections (file shares) where PII might be found. Adding search and NLP/ML opened up Pandora’s box with visual analytic tools. The simple starting point is finding e.g. names or personal-number concepts in the text. Second to this is adding semantics, where industry-standard terminologies and ontologies can further help define the meaning of things.
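That simple starting point can be as plain as a pattern match for Swedish personal numbers, before any ML or semantics is added. A minimal sketch; the regex covers only the common YYMMDD-XXXX form, and a real pipeline would add checksum validation and NER for names:

```python
import re

# Flag candidate Swedish personal numbers (personnummer) in free text.
PERSONNUMMER = re.compile(r"\b\d{6}[-+]\d{4}\b")

def flag_pii(text: str) -> list:
    """Return candidate personal numbers found in the text."""
    return PERSONNUMMER.findall(text)

sample = "Claim filed by customer 640823-3234 regarding water damage."
print(flag_pii(sample))  # ['640823-3234']
```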

In all corporate settings, there exist well-cultivated and governed collections of information resources, but usually also a massive unmapped terrain of content collections where no one has a clue whether PII might be hidden amongst them. The strategy of using a semantic data layer should always be combined with operations to narrow down the collections that become part of the signalling system – it is generally not a good idea to boil the whole data ocean in the enterprise information environment. Rather, through such work practices, workers become aware of the data hot-spots, the well-cultivated collections of information and the unmapped terrain. Having the additional notion of PII to contend with will make it that bit easier to recognise the places where semantic enhancement is needed.


Running with the same pipeline (with the option of further models to refine and improve certain data) will allow for the discovery not only of multiple occurrences of named entities (individuals) but also of the narrative and context in which they appear.
Having a targeted model & terminology for the insurance industry will only improve this semantic process further. This process can certainly ease what may currently be manual processes, or processes that don’t exist because of their manual pain: for example, finding sensitive textual information in documents within applications or in online textual chats. Developing such a smart information platform enables the smarter linking of other things from the model, such as service packages, service units or organisational entities, spatial data as named places or timelines, or medical treatments – things you perhaps currently have less control over.

There’s not much time before the 25th May and the new GDPR, but we’ll still be here afterwards to help you with a compliance burden or a creative pathway, depending on your outlook.

Alternatively, sign up to the joint webinar the 11th of April 3PM CET with Smartlogic & Findwise, to find out more.

Fredric Landqvist research blog
Peter Voisey
James Morris

Digital recycling & knowledge growth

How do we prevent the digital debris of human clutter and mess? And to what extent will future digital platforms guide us in knowledge creation and use?

Start making sense, and the art of making sense!

People and the Post, Postal History from the Smithsonian’s National Postal Museum

Mankind’s preoccupation for much of this century has been to become fully digitalized. Utilities, software, services and platforms are all becoming an ‘intertwingled’ reality for all of us. Being mobile, the blurring of the borders between the workplace and recreational life, plus the ease of digital creation, are creating information overload and (out-of-sight) digital landfills. While digital content is cheaper to create and store, its volume and its uncared-for status make it harder for everyone else to find and consume the bits they really need (and to have some provenance for peace of mind).

Fear not. A collection of emerging digital technologies exists that can both support and maintain future sustainable digital recycling – things like cognitive computing and artificial intelligence, natural language processing, machine learning and the like, semantics adding meaning to shared concepts, and graphs linking our content and information resources. With good information management practice, and the appropriate supporting tools to tinker with, there is a great opportunity not only to automate knowledge digitization but to augment it.

Automation

In the content continuum (from creation to disposal) there is a great need to automate processes as much as possible, in order to reduce the amount of obsolete or hidden (currently value-less) digital content. Digital knowledge recycling is difficult, as nearly every document or content creator is by nature reluctant to add further digital tags (a.k.a. metadata) describing their content or documents once they have been created. What’s more, experience shows this is inefficient on a number of accounts, one of which is inconsistency.

Most digital documents (and most digital content, unless intended to sell something publicly) therefore lack the proper recycling resource descriptors that can help with e.g. classification, topic description or annotation with domain specific (shared, consistent) concepts. Such descriptions add appropriate meaning or context to content, aiding its further digital reuse (consumption). Without them, the problem of findability is likely to remain omnipresent across many intranets and searched resources.

Smartphones generate content automatically, often without the user thinking or realizing. All kinds of resource descriptors (time, place etc.) are created automatically through movement and mobile usage. With the addition of further machine learning and algorithms, online services such as Google Photos use these descriptors (and some automatic annotation of their own) to add more contextual data before classifying pictures into collections. This improved data quality (read: metadata addition and improved findability) allows us to find the pictures or timeline we want more easily.

In the very same manner, workplace content or documents can now have this same type of supporting technical platform that automatically adds additional business specific context and meaning. This could include data from users: their profiles, departments or their system user behaviour patterns.

For real organizational agility, though, a further layer of automatic annotation (tagging) and classification is needed – achieved using shared models of the business. These models can be expressed through a combination of various controlled vocabularies (taxonomies) that can be further joined through relationships (ontologies) and finally published (publicly or privately) as domain models as linked data (in graphs). Within this layer exist not just synonyms, but alternative and preferred labels, and more importantly relationships can be expressed between concepts – hence the graph: concepts being the dots (nodes), with relationships the joining lines (edges). Using certain tools, the relationships between concepts can further be given a weighting.
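As a toy illustration of that annotation layer (concept ids and labels invented), preferred and alternative labels can be matched against text. Note how naive string matching tags both senses of “apple” – exactly the disambiguation problem that the graph’s relationships and context help solve:

```python
# Toy auto-tagger: match preferred/alternative labels from a taxonomy against
# text. Real platforms add lemmatisation, disambiguation and ML-based ranking.
TAXONOMY = {  # hypothetical concept -> labels, e.g. exported from a SKOS graph
    "ex:apple_fruit": ["apple", "apples"],
    "ex:apple_inc": ["Apple Inc", "Apple Computer"],
    "ex:policy": ["insurance policy", "cover note"],
}

def annotate(text: str) -> set:
    """Return ids of concepts whose labels occur in the text (case-insensitive)."""
    lowered = text.lower()
    return {concept
            for concept, labels in TAXONOMY.items()
            if any(label.lower() in lowered for label in labels)}

tags = annotate("The cover note was issued by Apple Inc last week.")
print(tags)  # ex:policy and ex:apple_inc, plus ex:apple_fruit from the bare
             # substring "apple" -- naive matching cannot tell the senses apart
```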

This added layer generates a higher quality of automated context, meaning and consistency for the annotation (tagging) of content and documents alike. The very same layer feeds information architecture in the navigation of resources (e.g. websites). In Search, it helps to disambiguate between queries (e.g. apple the fruit, or apple the organization?).

This digital helper application layer works very much in the same smooth manner as e.g. Google Photos, i.e. in the background, without troubling the user.

This automation however, will not work without sustainable organizing principles, applied in information management practices and tools. We still need a bit of human touch! (Just as Google Photos added theirs behind the scenes earlier, as a work in progress)

Augmentation

This codification or digitalization of knowledge allows content to be annotated, classified and navigated more efficiently. We are all becoming more aware of the Google Knowledge Graph or the Microsoft Graph that can connect content and people. The analogy of connecting the dots in a graph is like linking digital concepts and their known relationships or values.

Augmentation can take shape in a number of forms. A user searching for a particular query can be presented not only with the most appropriate search results (via the sense-making connections and relationships) but also can be presented with related ideas they had not thought of or were unaware of – new knowledge and serendipity!

Search, semantic, and cognitive platforms have now reached a much more useful level than in earlier days of AI. Through further techniques new knowledge can also be discovered by inference, using the known relationships within the graph to fill in missing knowledge.

Key to all of this though is the building of a supporting back-end platform for continuous improvement in the content continuum. Technically, something that is easier to start than one may first suspect.

Sustainable Organising Principles to the Digital Workplace

 


Fredric Landqvist research blog
Peter Voisey

Digital wizardry for customers & employees – the next elements

A reflection on Mobile World Congress topics mobility, digitalisation, IoT, the Fourth Industrial Revolution and sustainability

Commerce has always been conversational; today it is digital. Organisations are asking how to organise clean, effective data for an open digital conversation.

Digitalization’s aim is to answer customer/consumer-centric demands effectively (with relevant and related data) and in an efficient manner. [for the remainder of the article read consumer and customer interchangeably]

This essentially means joining the dots between clean data and information, and being able to recognise the main and most consumer-valuable use cases, be it common transaction behaviour or negating their most painful user experiences.

This includes treading the fine line between offering “intelligent” information (intelligent in terms of relevance and context) to the consumer effectively and not seeming freaky or stalker-like. The latter is dealt with by forming a digital conversation in which the consumer understands that their information is only being used for their own needs or wants.

While clean, related data from the many various multi-channel customer touch-points forms the basis of an agile digital organisation, it is the combination of significant data-analysis insight into user demand & behaviour (clicks, log analysis etc.), machine learning and sensible prediction that forms the basis of artificial intelligence. Artificial intelligence, broken down, is essentially resultant action based on the inferences from knowing certain information – “elementary, my dear Watson”, but done by computers.

This new digital data basis means being able to take data from what were previously data silos and combine it effectively in a meaningful way, for a valuable purpose. While the tag of Big Data becomes weary in a generalised context, the key is picking the data/information that gets relevant answers to the most valuable questions – or, in consumer speak, getting a question answered or a job done effectively.

Digitalisation (and then the following artificial intelligence) relies obviously on computer automation, but it still requires some thoughtful human-related input. Important steps in the move towards digitalization include:

  • Content and data inventory, to clean data / the cleansing of data and information;
  • Its architecture (information modelling, content analysis, automatic classification and annotation/tagging);
  • Data analysis in combination with text analysis (or NLP, natural language processing, for the more abundant unstructured data and content), the latter to put flesh on the bone as it were, adding meaning and context;
  • Information governance: the process of being responsible for the collection, proper storage and use of important digital information (now made less ignorable with new citizen-centric data laws (GDPR) and the need for data agility or anonymization of data);
  • Data/system interoperability: which data formats, structures and standards are most appropriate for you (relational databases, linked/graph data, data lakes etc.)?
  • Language/cultural interoperability: letting people with different perspectives access the same information topics using their own terminology;
  • Interoperability for the future also means being able to link everything in your business ecosystem for collaboration, in- and outbound conversations, endless innovation and sustainability;
  • IoT, or the Internet of Things, is making the physical world digital and adding further to the interlinked network, soon to be superseded by the AoT (the analysis of things);
  • Newer steps of machine learning (learning about consumer preferences and behaviour etc.) and artificial intelligence (being able to provide seemingly impossible relevant information, clever decision-making and/or seamless user experience).

The fusion of technologies continues as the lines blur between the physical, digital and biological spheres, with developments in the immersive Internet, as with Augmented Reality (AR) and Virtual Reality (VR).

The next elements are here already: semantic (‘intelligent’) search, virtual assistants, robots, chat bots… with 5G around the corner to move more data, faster.

Progress within mobility paves the way for a more sustainable world for all of us (UN Sustainable Development), with a future based on participation. In emerging markets we are seeing giant leaps in societal change. Rural areas now have access to the vast human resources of knowledge to service innovation, e.g. through free access to Wikipedia on cheap mobile devices and open campuses. Gender equality advances with changed monetary and mobile financial practices, and blockchain offers means to rise to the challenge of interoperability. We have to address the open paradigm (e.g. Open Data) and the participation economy, building the next elements: shared experience and information commons. This also falls back on the intertwingled digital workplace, and practices to move into new cloud-based arenas.

Some last remarks on the telecom industry: it is loaded with acronyms, and for laymen in the area it is sometimes a maze to navigate and make sense of.

So are these steps straightforward, or is the reality still a potential headache for your organisation? 

Contact Findwise now to ease the process, before your competitor does 😉
Fredric Landqvist research blog
Peter Voisey

Sensemaking or Digital Despair

Finding our way in the bright, futuristic, data-driven & intertwined world often taxes us and our digital-hungry senses. Fast rewind to the recent FindabilityDay 2015 and the parade of brilliant speaker talents on stage, starting off with our dear friend and peer, Martin White, on the topic of the future of search.

Human factors run from idea inception to the design and practical UX of our digital artifacts. The key has been make-do and ship. This is the reason the more technically advanced mobiles fell by the wayside 8 years ago against Apple’s iPhone.

The social life of information shapes our daily lives in a hyper-connected world. It’s still very hard to find that information needle in the haystack, and most days we feel despair when losing the scent of information nuggets. The results from the Findability Survey spoke clearly: without sound organising principles for information and data, and a pliable recorded vision, we won’t find anything of value.

Next, moving into an old business model with Luna’s and Sara’s presentation: a great example where we see that the orchestration and choreography of their data assets will determine their survival or demise – in conjunction with infused means for information management practices, processes and tools. They showed a new set of facets for delivering on their mission in their line of business.

Regardless of the line of business, it becomes clear that our fragmented workplace setting is now only partly “on tap”. It makes our daily lives a mess, since things do not interoperate. The vision should show the way to a shared information commons, which we all cultivate.

So finally, how do we make sense of any mess?

Answer: architect a place where you can find comfort in the social conventions shared around the information used. Abby Covert laid out a beautiful tapestry of things we all need to take on to make sense of everyday life, and life at work. With clear and distinct guardrails and signposts, we don’t feel so distracted or lost. Her talk was a true enlightenment for me, being of the same profession, Information Architect.

Fredric Landqvist research blog

A Health Care Information Commons Vision: from frozen assets to liquid gold

This is the second post in a series (1), unpacking interoperability in the healthcare system. The basis in this post is semantic and technical interoperability, hence a systemic overview.

The future of health care relies on the improved flow of captured patient health information across the whole care continuum. This means a shared information system linking systems and devices from participating health care organisations while maintaining patient privacy and security standards. Such a realization would not only enhance the clinician and patient experience but also enable faster treatment and better care coordination for patients.

Information Commons is an information system, …, that exists to produce, conserve, and preserve information for current and future generations.

 A seamless and secure hub, heavily-linked, providing point-of-care access to critical patient data and care decision support information for the delivery of timely care, reducing the duplication of tests and procedures.

All in all, this has to be built upon a participatory community paradigm, where clinicians, policy makers, leaders and patients share a vision to create an interoperable information space – one that is sustainable, regardless of previous lock-in mechanisms set by different technical and semantic standards, vendors, and process and policy making.


How do we create an interoperability climate?

Changes for interoperability lie in the development of new pilots with strong collaboration. They are generally more successful where they are based on patient or illness groups, value-oriented, open and scalable. Post requirements phase, iteration based on early adopters’ feedback can identify the need for improvements and enhancements around the relevancy, format and visual display of data and information, and the usability of the solution, and can provide insight into workflow impact. The Information Commons is also a good arena for clinicians to share positive anecdotes from their experiences, upon which scalable pilots can be expanded.

Such developed infrastructure and services can also support or be leveraged by other national or regional health initiatives.

Technical Layers of interoperability

Interoperability can cover many layers but at its basis would be an interoperable access layer that integrates and securely shares clinical data from multiple sources giving one point of access. The user interface (GUI) could then provide and display data and information based on stakeholder users and medical/situational context.

Such a layer would have to accommodate and support various data from the distributed system of actors, aligning both to open standards while at the same time being plastic enough in design and instantiation.

Interoperability not only covers the sharing of information but also its usage. This may include added functionality by the EHR vendor themselves or the creation of further value-adding knowledge layers that can take advantage of both structured and (the untapped wealth of) unstructured data within EHRs.

Findwise in its EU funded KConnect project is doing just that. It is currently collecting use case studies from Jönköping (RJI/Qulturum) in order to create a pilot solution for clinicians to take advantage of ‘hidden’ textual data.

Questions of interoperability also lie in the physical user experience of the systems themselves. Should the basic layer provided by EHR vendors be open to include value-added software from other parties, should it be embedded or be made into another GUI? Which ultimately is best for the clinician workflow and the agility of software solutions in supporting new value-based outcomes and reiteration for improvements in efficiency and effectiveness?

Semantic Transformer

The annotations made in healthcare systems across different domains all have a very similar outset, but lack a coherent, interoperable mechanism to work smoothly outside the local context. On international, national and regional levels there should be services that act like the electric grid that provides society with energy to be used in many contexts: a semantic grid that hosts controlled vocabularies within the domain, but also shares practices and processes. With the use of open standards, these could bridge across organisational boundaries and help clean up the current messy healthcare information space.
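A sketch of one small piece of such a semantic grid: a SKOS mapping that lets a semantic transformer translate a local ICD-10 code into its SNOMED CT counterpart. The namespaces are illustrative, and the codes are shown only as an example pairing:

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import SKOS

# Hypothetical namespaces for two coding systems kept in the semantic grid
ICD10 = Namespace("http://example.org/icd10/")
SCT = Namespace("http://snomed.info/id/")

g = Graph()
g.bind("skos", SKOS)

# Assert that a local ICD-10 concept and a SNOMED CT concept mean the same
# thing: E11 (type 2 diabetes mellitus) ~ SNOMED CT 44054006 (illustrative).
g.add((ICD10["E11"], SKOS.exactMatch, SCT["44054006"]))

def translate(code: URIRef) -> list:
    """Follow skos:exactMatch links to express a code in the other system."""
    return list(g.objects(code, SKOS.exactMatch))

print(translate(ICD10["E11"]))  # [URIRef('http://snomed.info/id/44054006')]
```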

The Healthcare Information Commons does not per se have to be one system, but rather an interoperable set of services/systems that share standards to be able to exchange information and data – very similar to the way the Internet and linked data work today, not restricted by walled gardens. The governance of the commons should be a matter for public services, with sustainable resources and an open governance agenda that invites participation and engagement. No single actor in the network, be it a large hospital, private caretaker or regional public governing body, will be able to take care of this single-handedly. It should be a true “commons” undertaking!

The infusion of the Information Commons into everyday healthcare provisioning use cases with semantic transformer applications could be in several modalities: finding and acting upon information or contributing in the local context.

At the data entry or capture point, there will be options to add semantic layers and attributes to the type of content and data provisioned. An easy way to illustrate this is the emerging use of schema.org-templated entities and properties for the medical types, medical conditions, drugs and guidelines, with codes from controlled vocabularies like SNOMED CT, MeSH, ICD-10 and the like.
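A minimal sketch of such a schema.org-templated entity, serialised as JSON-LD from Python; the values are illustrative:

```python
import json

# A schema.org MedicalCondition carrying a code from a controlled vocabulary
# (ICD-10 here); the property names follow the schema.org medical types.
condition = {
    "@context": "https://schema.org",
    "@type": "MedicalCondition",
    "name": "Type 2 diabetes mellitus",
    "code": {
        "@type": "MedicalCode",
        "codeValue": "E11",
        "codingSystem": "ICD-10",
    },
}

print(json.dumps(condition, indent=2))
```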

Analogously, using digital cameras in smartphones or other devices means that the user might add “some” metadata or tags about the picture. Devices and sensors add more layers of granularity, with attributes that most end-users never see or bother about. These extra resource descriptions interplay with cloud-based services such as Google Photos, where different algorithms reformat and package the content into new forms, such as contextual albums, scenes and so forth.

A set of semantic transformer application layers should be intertwingled with the Healthcare Information Commons: firstly to make easy linkages between data sets – as the Web of Data scenarios and Linked Data propose – but also to provide smarter integration points in back-end supporting processes in the healthcare systems, where more private and locked-in data sets about the patient’s conditions, treatments, drugs etc. exist.

The semantic transformer applications could be both open APIs developed by the community for the commons and commercial applications provided by line-of-business specialist software vendors – as long as all of these layers are compliant with the open standards!

For legacy systems such as EHRs, and for off-the-shelf healthcare and business applications that are semantically impaired, these semantic transformer applications could work as a repair kit for already old, broken systems. Consequently, there would be no need to overhaul all legacy software within the caretaker’s organisation: a kind of smoother migration path to interoperability.

There also exists the need for semantic interoperability between the contextual patient information within the EHR and the provision of clinical decision support information. This could be in the form of internal medical guidelines and best practices, or from external resources such as medical journals or clinical trial reports.

The KConnect project is providing semantic annotation and semantic search services in different languages for clinicians and researchers to access the very latest in medical literature. This is achievable by semantically annotating the required medical information (EHRs, guidelines, journals etc.) and having the semantic search engine take full advantage of known key medical entities/concepts and their relationships.

Through the indexing of new information about drug usage, best practices, guidelines, new clinical trials and journals, clinicians can then access up-to-date, relevant information whenever they need it.

In the near future to maximise both clinician and patient user engagement with EHRs, different uses and views of the EHR will have to be driven by suitable context and stakeholder semantics.

Shared Decision making

When moving into value-based health care and outcome measurement (as presented here by Sveus), it is critical that all actors participate on a connected level playing field, so that communication between healthcare practitioners, patients and their social networks works. This includes the need for shared norms and definitions, as well as systems to support the decision making – and obviously a harmonised set of metrics to measure outcomes.

As presented by Peter Ubel in his talks and recent book on Critical Decisions, it is key that the clinician and the patient are able to share a common view. All practitioners share jargon that does not always communicate well to the receiver. Hence there are plenty of communication breakdowns recorded in everyday practice, leading to “malpractice” in the worst cases for the patient. In the last couple of decades, there has been a shift in power relations between healthcare professionals and patients and their families. Patient empowerment is a good thing, but if things get lost in translation, there is the risk that critical decisions are not fully supported.

With a Healthcare Information Commons pool of resources, there lie opportunities to guide patients and practitioners in their critical decision making, but also to strengthen learning and innovation within the communities of practice, with open feedback loops to the pool.

Privacy & Security upfront

Just as data interoperability can be seen as the sharing of data, data security can be seen as the sharing of data in the right way and data privacy seen as the sharing of data with the right person in the right way. We are naturally concerned as to who may be using our data and want to be able to control its use.

The boundary between citizens’ App data and their medical data is blurring rapidly as App developments and sensors continue to provide new and different data that the individual, health care and clinical research can capitalise on in the effort to move towards better wellbeing and more value-based healthcare.

While data privacy and security have become the headline darlings of the media, they can often be distractors of innovation, often masking the true benefits of the flow of information. Just as with physical assets there are best practices for data misuse prevention, protection and policing. The majority of misuse or abuse of personal data is more often caused by human error and misjudgement than by the failure of technology.

Data interoperability can be better supported when services have clear guidelines to inform citizens as to who, when and how their data is shared, for what purpose and the available steps to alter said process. A better informed public would then see more free data resources being used for clinical research e.g. the Million Hearts initiative in the US where citizen data is being used to lower heart attacks and strokes.

Open regulations, collaboration and co-ordination, along with risk assessment and protection practices such as encryption, anonymisation and de-identification, can all go a long way to allowing secure data interoperability, be it for personal or aggregated data. IT also has the potential for rule-based access and forensic data-access reports. No system can be made fool-proof; however, precautions and the presence of a well-designed data-breach response plan are achievable.

Obviously we do not want all our healthcare records to be open in the air for anybody to use or read, just as little as we want our financial records to be in the open. Privacy is really key! The Information Commons should work with aggregated data, not the singular set of records for one patient.

Patient safety drives the need for a freer flow of data between actor systems. The medical conditions and contexts set the standards for sharing, where it should be possible to share extracts or segments in alignment with privacy policies.

Future real-life experience exposé

With a recent Swedish report on diabetes care and outcome measurement in mind, it makes sense to illustrate the case of a diabetes patient living and acting in Göteborg, in the west of Sweden. They have a medical condition that is a lifelong journey with an endocrine system out of order. This has a great impact on the patient’s everyday life and on diabetes-related complications. With a good life balance of training, exercise and eating habits, it is possible to keep the glucose patterns in such a way that your life expectancy will equal anybody else’s.

The use of personal choices to trigger improved behaviour gives the person options to choose selected wellbeing (e.g. Weight Watchers), fitness (e.g. Runkeeper) and health-monitoring applications. In most cases these are closed ecosystems, e.g. the iOS Health app, with options to share progress on social media (about eating well or improving your personal training). Many life science corporations are developing health-monitoring applications specific to a medical condition, disease area or treatment (e.g. FreeStyle Libre from Abbott for improved glucose monitoring) that clinicians recommend during patient consultations.

For clinical researchers there are ecosystem-specific toolkits, like the open-sourced Apple ResearchKit. The existence of a closed ecosystem naturally makes it more problematic to share and exchange data. In this space, open standards based on the Information Commons idea make sense too, where semantic translators could improve the transmission of data from one closed ecosystem to another without privacy infringement.

A Personal Health Record (PHR) is a health record where health data and information related to the care of a patient is maintained by the patient.

In a future, more seamlessly interoperable world, the citizen/patient should be provided one secure access point to his/her health account, e.g. in Sweden 1177, Mina Vårdkontakter and Hälsa för mig.

The outstanding question: how do we get interoperability between PHRs and wellbeing, fitness and health apps, where it is easy to share vital data bits in a sound manner?

In this scene, open standards should be applied to create a make-do semantic transformation.

Lastly – interoperability within the Professional Clinician Workplace?

The statements and real-life stories from the trenches of any clinical workplace show a mess of supporting information systems: EHRs that neither cooperate nor interoperate. Many clinicians realise that they have to provision data into a handful of systems, with a significant doubled manual workload. This comes with risks, given the stressful environment, and many “malpractice” incidents can arise from this workplace disorder.

Each system supports its part of the process. While some software suites try to close down into a ‘one system to rule them all’ paradigm, they still barely lean upon any open standards, and they lack semantic and structured ways to use data and information outside the supporting system’s narrow scope.

A diabetes nurse (post patient consultation) has to enter data into more than 10 different areas, including quality assurance and measurement systems, e.g. the NDR in Sweden. In some cases integrated point-to-point solutions have been put in place, but mostly this is not the case, and so unnecessary frustration is created.

In every intervention where clinicians and patients communicate, whether online, remote or on-site, there should be opportunities to tap into the Healthcare Information Commons: to find new medical treatments, emerging standards and guidelines, and breaking news for clinicians, as well as patient-oriented and patient-formatted communications. In the best of worlds, semantic translator applications will bridge between ecosystems inside the personal health space as well as into the clinicians’ workplace environment, helping, guiding and improving all dimensions of interoperability.

Concluding remarks

Having Value-Based Healthcare and Outcome Measurement as a specific healthcare change driver will push the use of standards on all levels to the limit. In the following blog post in this series, the ambition is to unpack information governance, since data ownership and trust also have to be ironed out. As stated by Prof. Michael E. Porter, the capture of data to do proper Outcome Measurement is one of the major roadblocks ahead, and the orchestration of all resources and governance still has to unfold. Happily, some building blocks for the Healthcare Information Commons have already emerged, so we do not need to reinvent the wheel:

  • The Wikimedia realm of “commons”, with its entries of semantically useful data in wikidata.org (see the query sketch after this list).
  • Standard Sets for Medical Conditions from international collaboration at ICHOM, and in Sweden Sveus; standards from HL7 FHIR, the W3C and the Web of Data / Semantic Web. The Swedish National Board of Health and Welfare has an embryonic information structure (not yet in a semantic, machine-readable RDF format). Information intermediaries like Google have settled for simple schemas for health and medicine.
  • Open Innovation and the “open” paradigm will change evidence-based medicine, Bad Pharma and science at a societal level, as stated by Ben Goldacre (TED): patients, together with clinicians, become able to question treatments based on open data and improve the quality of the Healthcare Information Commons.
  • The technology stack of smarter devices, sensors and things, together with Internet anywhere, cognitive computing and computational knowledge on top of the commons, will bring forward semantic translators.
  • New leaps in collaborative work and development through the notebook paradigm, in language- and platform-agnostic ways.
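As a small taste of the wikidata.org building block mentioned above, here is a sketch that queries the public Wikidata SPARQL endpoint for treatments linked to diabetes mellitus. The identifiers Q12206 (diabetes mellitus) and P2176 (drug or therapy used for treatment) are our assumptions for this sketch and should be verified on wikidata.org before any real use.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Q12206 (diabetes mellitus) and P2176 (drug or therapy used for treatment)
# are assumed identifiers for this sketch; verify them on wikidata.org.
QUERY = """
SELECT ?treatment ?treatmentLabel WHERE {
  wd:Q12206 wdt:P2176 ?treatment .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"})
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["treatmentLabel"]["value"])
```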

Making sense: defrosting health data into liquid gold, improving healthcare for all.

For more information on Findwise research, please visit KConnect and Orios (Open Standards).


Fredric Landqvist research blog
Peter Voisey

Interoperability in Healthcare using Open Standards

The emerging major overhaul of the healthcare system, aided by Value-Based Healthcare and Outcome Measurement, is inevitable, and that is good!

The outstanding question is: how do we infuse Sensemaking into the future healthcare realm?

The cues for a more interoperable worldview are nothing new. The main obstacles and roadblocks can be narrowed down to the following: closed-down data and information silos, and governance and policy making that do not apply the open-innovation paradigm. This is the first post in a series of two unpacking interoperability in the healthcare system.

Open Standards – the remedy for the Healthcare system’s incurable prognosis?

The use of open standards to reach interoperability on all levels should be the main driver for all policy making in the healthcare system, regardless of country, region, hospital or clinic. Moving into patient engagement, health monitoring and consumer-centric applications and services, this becomes even more obvious.

In a recent thesis, “Standardization of interoperability in health care information systems” (exec brief presentation), the different levels of interoperability were presented, using the Value-Based Healthcare change in Sweden as background.

[Figure: Interoperability map]

The results showed that without a good “interoperability climate”, determined by sustainable resources and clear governance, the other interoperability levels will be problematic. With healthcare provisioning in Sweden as the bedrock, this could unfold into a better-orchestrated interoperability practice: from Government, to the National Board of Health and Welfare, to regional healthcare providers, hospitals and private clinics, as well as citizen-centric health services and consumer health and wellbeing apps on any platform. For policy makers, this implies that new policies should stress and enforce the use of open standards as a way to open up closed data silos and practices.

In future blog posts we will discuss semantic and technical interoperability, given that Findwise works in the EC-funded project KConnect. The final blog post will relate to information-governance models, and why the use of open standards makes sense in the organisational interoperability domain.

This is a brief conversation with the students presenting their thesis. The introduction is in Swedish (5–10 min); the walkthrough of the thesis is in English.


Finding business values in the emerging digital workplace

How does one experience the promised business rewards of the emerging digital workplace (a.k.a. the intranet)?

A group of renowned intranet professionals have taken on this question and offer sound practical advice on how to achieve real business value in their new book “intranets that create business value”, or in Swedish “intranät som skapar värde”.

[Image: “Intranät som skapar värde” book cover]

Today, in fact most days, end-users feel bewildered when using the intranet. It is to some extent impossible to navigate. There is a hodgepodge of mixed user experiences, given that the intranet often serves as the access point to several tools. And findability is low! With a coherent, smooth and interoperable workplace, users should be able to find information and data, peers and colleagues, to solve their everyday tasks in an efficient way… anywhere, on any device and at any time.

The authors’ narrative describes how the intranet can best be used to produce beneficial business transformation, with detailed chapters on strategy, content and information architecture, search/findability, governance and stakeholder management, and end-user engagement and adaptation. Measures and metrics are also included to qualify the sought-after business values.

Findwise has contributed to the sections relating to organising principles. Put simply, it should be easy for a user to know where and how to contribute information and content in a good manner, so that others are able to find and co-act on such codified knowledge.

Without sound and sustainable organising principles there will be no findability: shit in = shit out! This holds regardless of the technology platform employed for search or the intranet.

Buy the e-book today, ahead of the printed version published in May!