A reflection on Mobile World Congress topics: mobility, digitalisation, IoT, the Fourth Industrial Revolution and sustainability
Commerce has always been conversational; today the conversation is digital. Organisations are asking how to organise clean, effective data for an open digital conversation.
Digitalization’s aim is to answer customer/consumer-centric demands effectively (with relevant and related data) and efficiently. [For the remainder of this article, read consumer and customer interchangeably.]
This essentially means joining the dots between clean data and information, and being able to recognise the main and most consumer-valuable use cases, be it common transaction behaviour or negating their most painful user experiences.
This includes treading the fine line between offering “intelligent” information (intelligent in terms of relevance and context) to the consumer effectively and not seeming freaky or stalker-like. The latter is dealt with by forming a digital conversation in which the consumer understands that their information is used only for their own needs or wants.
While clean, related data from the many multi-channel customer touch-points forms the basis of an agile digital organisation, it is the combination of significant data-analysis insight into user demand and behaviour (clicks, log analysis etc.), machine learning and sensible prediction that forms the basis of artificial intelligence. Artificial intelligence, broken down, is essentially action resulting from inferences drawn from known information, i.e. the elementary Dr Watson, but done by computers.
This new digital data basis means being able to take data from what were previously data silos and combine it effectively, in a meaningful way and for a valuable purpose. While the tag of Big Data has become wearisome in a generalised context, the key is picking the data/information that yields relevant answers to the most valuable questions, or in consumer speak, gets a question answered or a job done effectively.
Digitalisation (and the artificial intelligence that follows it) obviously relies on computer automation, but it still requires some thoughtful human input. Important steps in the move towards digitalization include:
Content and data inventory: the cleansing of data and information;
Information architecture (information modelling, content analysis, automatic classification and annotation/tagging);
Data analysis in combination with text analysis (or NLP, natural language processing, for the more abundant unstructured data and content), the latter putting flesh on the bone, adding meaning and context;
Information governance: the process of being responsible for the collection, proper storage and use of important digital information (now made less ignorable by new citizen-centric data laws (GDPR) and the need for data agility or anonymization of data);
Data/system interoperability: which data formats, structures and standards are most appropriate for you? Which stores suit which data collections: relational databases, linked/graph data, data lakes etc.?
Language/cultural interoperability: letting people with different perspectives access the same information topics using their own terminology.
Interoperability for the future also means being able to link everything in your business ecosystem for collaboration, in- and outbound conversations, endless innovation and sustainability.
IoT, or the Internet of Things, is making the physical world digital and adding further to the interlinked network, soon to be superseded by the AoT (the Analysis of Things).
Newer steps are machine learning (learning about consumer preferences and behaviour etc.) and artificial intelligence (providing seemingly impossibly relevant information, clever decision-making and/or a seamless user experience).
The fusion of technologies continues further as the lines between the physical, digital and biological spheres blur with developments in the immersive Internet, such as Augmented Reality (AR) and Virtual Reality (VR).
The next elements are here already: semantic (‘intelligent’) search, virtual assistants, robots, chat bots… with 5G around the corner to move more data, faster.
Progress within mobility paves the way for a more sustainable world for all of us (the UN Sustainable Development Goals), with a future based on participation. In emerging markets we are seeing giant leaps in societal change. Rural areas now have access to the vast human resources of knowledge to drive service innovation, e.g. through free access to Wikipedia on cheap mobile devices and open campuses. Gender equality advances through changed monetary and mobile financial practices, and blockchain, if we rise to the challenge of interoperability. We have to address the open paradigm (e.g. Open Data) and the participation economy, building the next elements: shared experience and the information commons. This also falls back on the intertwingled digital workplace, and the practices needed to move into new cloud-based arenas.
Some last remarks on the telecom industry: it is loaded with acronyms, and for laymen it is sometimes a maze to navigate and make sense of.
So are these steps straightforward, or is the reality still a potential headache for your organisation?
There have been discussions surrounding the great generational renewal in the workplace for a while. The 50’s generation, who have spent a large part of their working lives within the same company, are being replaced by an agile bunch born in the 90’s. We are not taken by tabloid claims that this new generation does not want to work, or that companies do not know how to attract them. What concerns us is that businesses are not adapting fast enough to the way the new generation handles information, which hampers the transfer of knowledge within the organisation.
Working for the same employer for decades
Think about it for a while: for how long have the 50’s generation been allowed to learn everything they know? We see it all the time: large groups of employees ready to retire after spending their whole working lives within the same organisation. They began their careers as teenagers working on the factory floor or in a similar role, step by step growing within the company, together with the company. These employees tend to carry a deep understanding of how their organisation works, and after years of training they possess a great deal of knowledge and experience. How many companies nowadays are willing to offer the 90’s workers the same kind of journey? Or should they even?
2016 – It’s all about constant accessibility
The world is different today than it was 50 years ago. A number of key factors are shaping the change in knowledge-intense professions:
Information overload – we produce more and more information. Thanks to the Internet and the World Wide Web, the amount of information available is greater than ever.
Education has changed. Employees of the 50’s grew up during a time when education was about learning facts by rote. The schools of today focus more on teaching how to learn through experience, to find information and how to assess its reliability.
Ownership is less important. We used to think it was important to own music albums, have them in our collection for display. Nowadays it’s all about accessibility, to be able to stream Spotify, Netflix or an online game or e-book on demand. Similarly we can see the increasing trend of leasing cars over owning them. Younger generations take these services and the accessibility they offer for granted and they treat information the same way, of course. Why wouldn’t they? It is no longer a competitive advantage to know something by heart, since that information is soon outdated. A smarter approach of course is to be able to access the latest information. Knowing how to search for information – when you need it.
Factors supporting the need for organising the free flow of the right information:
Employees no longer stay as long in the same workplace, which, for example, requires a more efficient onboarding process. It’s no longer feasible to invest the same amount of time and effort in training one individual, since he/she might be changing workplace soon enough anyway.
It is much debated whether it is possible to transfer knowledge or not. Current information on the other hand is relatively easy to make available to others.
Access to information does not automatically mean that the quality of information is high and the benefits great.
Organisations lack the right tools
Knowing a lot of facts about a gradually evolving industry was once a competitive advantage. Companies and organisations have naturally built their entire IT infrastructure around this way of working. A lot of IT applications used today were built for a previous generation with another way of working and thinking. Today most challenges involve knowing where and how to find information. This is something we experience in our daily work with clients. Organisations more or less lack the necessary tools to support the needs of the newer generation in their daily work.
To summarize the challenge: organisations need to be able to supply their new workforce with the right tools to constantly find (and also manipulate) the latest and best information required for them to shine.
Success depends on finding the right information
In order for the new generation to succeed, companies must regularly review how information is handled, as well as the tools supporting information-heavy work tasks.
New employees need to be able to access the information and knowledge left by retiring employees, while creating and finding new content and information in such a way that information realises its true value as an asset.
Efficiency, automation… And Information Management!
There are several ways of improving efficiency. The first step is often to investigate whether parts, or perhaps the entire creating-and-finding process, can be automated. Secondly, attack the information challenges:
What kind of information is it?
Where is the information located?
What is important, the information objects in their entirety or the subsets?
How will the information be consumed?
What prior knowledge is needed to interpret the information?
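One way to make these questions actionable is to record the answers per information asset in a lightweight inventory. A minimal sketch in Python; all field names and example values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class InformationAsset:
    """One record in a simple information inventory (illustrative fields)."""
    kind: str                  # what kind of information is it?
    location: str              # where is the information located?
    unit_of_value: str         # the objects in their entirety, or subsets?
    consumed_via: list[str] = field(default_factory=list)  # how will it be consumed?
    prior_knowledge: str = ""  # what is needed to interpret it?

inventory = [
    InformationAsset(
        kind="product manual",
        location="file share, manuals folder",
        unit_of_value="subsets (individual chapters)",
        consumed_via=["intranet search", "mobile"],
        prior_knowledge="basic product terminology",
    ),
]
print(f"{len(inventory)} asset(s) catalogued")
```

Even a flat list like this gives the later steps (choosing supporting IT systems, search tuning) something concrete to work from.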
When we get a grip on the information we are to handle, it’s time to look into the supporting IT systems. How are employees supposed to find what they are looking for? How do they want to?
We have gotten used to finding answers by searching online. This is in the DNA of the 90’s employee. By investing in a great search platform and developing processes to ensure high information quality within the organisation, we are certain the organisation will not only manage the generational renewal but excel in continuously developing new information-centric services.
According to sources, the intranet was born between 1994 and 1996: true prehistory from an IT systems point of view. Intranet history is bound up with the development of the Internet, the global network. The idea of the WWW, proposed in 1989 by Tim Berners-Lee and others, whose aim was to enable connection to and access across many different sources, became the prototype for the first internal networks. The goal of the intranet was to increase employee productivity through easier access to documents, their faster circulation and more effective communication. Although access to information was always the crucial matter, the intranet in fact offered many more functionalities: e-mail, group work support, audio-video communication, and searching of text or personal data.
Overload of information
Over the years, the content placed on WWW servers became more important than other intranet components. First, managing increasingly complicated software and the required hardware led to the development of new specialisations. Second, paradoxically, the ease of publishing information became a source of serious problems. There was too much information; documents were partly outdated, duplicated, and lacked a homogeneous structure or hierarchy. Difficulties in content management, and the lack of people responsible for the process, led to a situation where the end user could not reach the desired piece of information, or doing so required too much effort.
Google to the rescue
As early as 1998, Gartner produced a document describing this state of the Internet as a “Wild West”. On the Internet, the problem was addressed by Yahoo and then Google, which became the global leader in information searching. In internal networks it had to be solved by rules for publishing information and by CMS and Enterprise Search software. In many organisations the struggle for easier access to information is still ongoing; in others, it has only just begun.
And then came search
It is the search engine that has most influenced how the intranet is perceived. On one side, the search engine is directly responsible for realising the basic assumptions of knowledge management in the company. On the other, it is the main source of complaints and frustration among internal network users. There are many reasons for this status quo: wrong or unreadable search results, missing documents, security problems and poor access to some resources. What are the consequences of such a situation? First and foremost, they show up in high work costs (duplication of tasks, diminished quality, wasted time, less efficient cooperation) as well as in lost business opportunities. It must not be forgotten that search engine problems often overshadow the use of the intranet as a whole.
How to measure efficiency?
In 2002, Nielsen Norman Group consultants estimated the productivity difference between employees using the best and the worst corporate networks at about 43%. Meanwhile, the annual Enterprise Search and Findability Survey report shows that while almost 60% of companies stress the high importance of information searching for their business, nearly 45% of employees have problems finding information.
Leaving aside comfort and employee satisfaction, the natural effect of implementing and improving Enterprise Search solutions is financial benefit. Contrary to popular belief, the profits and savings from reaching information faster are entirely countable. Preparing such calculations is not easy, though. The first step is to estimate the time employees spend searching for information, to calculate what percentage of searches end in failure, and to measure how long it takes to perform a task without the necessary materials. It is worth noting that findings from companies such as IDC or AIIM show that office workers spend at least 15-35% of their working hours searching for necessary information.
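As a back-of-the-envelope sketch of such a calculation: the 15-35% range comes from the IDC/AIIM findings cited above, but every other figure below (headcount, hourly cost, failure rate, rework factor) is an illustrative assumption, not data from the sources:

```python
# Back-of-the-envelope cost of failed information searches.
# Only search_share reflects the cited 15-35% range; the rest are assumed.
employees = 500
hourly_cost = 50.0             # fully loaded cost per employee hour (assumed)
work_hours_per_year = 1600
search_share = 0.15            # lower bound of the cited 15-35% range
failure_rate = 0.45            # share of searches ending in failure (assumed)
rework_factor = 2.0            # a failed search roughly doubles task time (assumed)

search_hours = employees * work_hours_per_year * search_share
wasted_hours = search_hours * failure_rate * (rework_factor - 1.0)
annual_waste = wasted_hours * hourly_cost

print(f"Hours spent searching per year: {search_hours:,.0f}")
print(f"Extra hours lost to failed searches: {wasted_hours:,.0f}")
print(f"Annual cost of failed searches: {annual_waste:,.0f}")
```

Even with conservative inputs like these, the waste is measured in millions per year for a mid-sized organisation, which is what makes search investments countable.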
Problems with searching are rarely technical. The search engines currently on the market are mature products, regardless of technology type (commercial or open source). Usually the problem is a default installation, with the system left untouched straight “out of the box”. Each search engine deployment is different because it deals with a different document collection. On top of that, user expectations and business requirements change continually. In conclusion, ensuring good-quality search is an unremitting process.
The knowledge worker’s main tool?
The intranet has become a comprehensive tool for accomplishing company goals. It supports employee commitment and effectiveness, internal communication and knowledge sharing. However, its main task is to find information, which is often hidden in stacks of documents or dispersed among various data sources. Equipped with a search engine, the intranet has become an invaluable working tool in practically all sectors, especially in departments such as customer service or administration.
So, how is your company’s access to information?
This text is an introduction to a series of articles dedicated to intranet search. Subsequent articles will deal with: the search engine’s function in the organisation, the benefits of using Enterprise Search, the requirements of an information search system, the most frequent errors and obstacles in implementations, and systems architecture.
This is the seventh post in a series (1, 2, 3, 4, 5, 6) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide on the steps you need to take.
Starting from our first post we have covered different aspects you need to consider as you take each step, including information structure and how it is managed, using Office 365 and SharePoint as a technology example. This post covers planning for migration.
Do not even think about moving into the cloud apartment without properly cleaning the content buckets. Moving from an architected household to a rented place demands a structured audit. Clean out all redundant, outdated and trivial matter (ROT), the very same habit you have of cleaning out the attic when moving out of your old house.
It is also a good idea to decorate and add any features to your new cloud apartment before the content furniture is there. It means the content will fit with any new design and adapt to any extra functionality with new features like windows and doors. This can be done by reviewing and updating your publishing templates at the same time. This will save time in the future.
Leaning on the information governance standards, it should be easy for all content owners appointed to a set of collections or habitats to address the cleaning before moving. Most organisations could use a content vacuum cleaner, or rather the search facilities and metrics, to deliver up-to-date reports on:
Active/inactive habitats
No clear ownership or the owner has left the building
Metadata and link quality to content and collections to be moved across to the cloud apartments.
Review publishing templates and update features or design to be used in the Cloud
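A report like the one above can be sketched as a small script over an exported content inventory. The field names, example items, dates and the inactivity threshold below are all illustrative assumptions:

```python
from datetime import date, timedelta

# Illustrative content inventory, e.g. exported from a CMS or search index.
items = [
    {"title": "Travel policy", "owner": "hr-team",
     "modified": date(2024, 3, 1), "links_ok": True},
    {"title": "Old project wiki", "owner": None,
     "modified": date(2019, 6, 12), "links_ok": False},
    {"title": "Brand guidelines", "owner": "comms",
     "modified": date(2020, 1, 5), "links_ok": True},
]

stale_after = timedelta(days=3 * 365)   # inactivity threshold (assumed)
today = date(2025, 1, 1)

inactive = [i for i in items if today - i["modified"] > stale_after]
orphaned = [i for i in items if i["owner"] is None]
broken_links = [i for i in items if not i["links_ok"]]

print("Inactive:", [i["title"] for i in inactive])
print("No ownership:", [i["title"] for i in orphaned])
print("Link problems:", [i["title"] for i in broken_links])
```

In practice the inventory would come from your search platform or CMS audit export rather than a hard-coded list, but the shape of the report is the same.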
When all active habitats and qualified content buckets have been revisited by their curators and information owners, the preparation and use of moving boxes should begin.
All moving boxes need proper tagging, so that any moving company can sort out where the stuff should be placed in the new house or building. For collections and habitats, this means using the very same set of questions stated for adding a new habitat or collection to the cloud apartment house: who, why, where and so forth, through a structured workflow and form. When these first cleaning steps have been addressed, automatic metadata enhancement should follow, aligned with the information management processes to be used in the new cloud.
With decent resource descriptions and content cleaned up through the audit (ROT), this last step will auto-tag content based upon the business rules applied for the collection or habitat. The content is then loaded onto the content moving truck, or loading dock, ready to be added to the cloud.
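A minimal sketch of such rule-based auto-tagging, assuming a simple keyword-to-tag mapping per collection; the rules and tag names are invented for illustration:

```python
# Business rules per collection: keyword -> metadata tag (illustrative).
RULES = {
    "finance": {"invoice": "record:finance", "budget": "record:finance"},
    "hr": {"vacancy": "record:hr", "onboarding": "record:hr"},
}

def auto_tag(collection: str, text: str) -> set[str]:
    """Return the tags whose keywords appear in the document text."""
    rules = RULES.get(collection, {})
    lowered = text.lower()
    return {tag for keyword, tag in rules.items() if keyword in lowered}

tags = auto_tag("finance", "Q3 budget review and invoice backlog")
print(tags)  # {'record:finance'}
```

Real deployments would typically use the classification features of the search or ECM platform rather than hand-rolled keyword matching, but the principle of collection-scoped business rules is the same.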
All content that either lacks properly assigned information ownership, or is in such a shape that migration can’t be done, should persist on the estate or be archived or purged. This means that all metadata and links to any content bucket or habitat that won’t be moved in the first instance should at least have correct and unique URIs, i.e. addresses, to this content. And in case a bucket or habitat has been run down by a demolition firm, i.e. purged, all inter-linkage to that piece of content or collection has to be changed.
This is typically a perfect quality report for the information owners and content editors, which they need to work through prior to actually loading the content onto the content dock.
Finally, when all rotten data, deserted habitats and unmanageable buckets have been weeded out, it is time to prepare the moving truck and send the content to its new destination.
Our final thread will cover how the organisation and its inhabitants will be able to find content in this mix of clouds and things left behind on the old estate. Cloud Search and Enterprise Search: seamless or a nightmare?
This is the sixth post in a series (1, 2, 3, 4, 5, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide on the steps you need to take.
Starting from our first post we have covered different aspects you need to consider as you take each step including information structure and how it is managed using Office 365 and SharePoint as a technology example. We will cover more about SharePoint in this post, and placemaking in the cloud.
In SharePoint there is a set of logical chunks. One could decompose the digital workplace into intranet sites, as departmental and organisational buckets; team sites, where groups collaborate; and lastly your personal domain, the My Site collection. Navigating between these is a mix of traditional information architecture and search-driven content. When inside such a habitat as a team site, it is not always obvious how to cross-link or navigate to other domains within the digital workplace hosted in SharePoint.
One way to overcome this is to render different forms of portals based upon dynamic navigation. These intersections and aggregates help users move around the maze of buckets and collections of content. SharePoint has very good features and options for creating search-based content delivery mechanisms.
A metadata- and search-based content model gives us cues for the future design of the digital workplace, with connected habitats and a sustainable information architecture, where people don’t get lost and have the wayfinding means to survive everyday work practices.
This is where how you manage content in SharePoint and Office 365 becomes critical. As we said in our first post, it is important you have a good information architecture combined with a good governance framework that helps you transform your buckets of content from the estate into the cloud. We have covered information architecture, so we now move on to how governance completes the picture for you.
There are three approaches to the governance your organisation needs to have with SharePoint and Office 365. You don’t have to use just one; you can combine elements of each to find the right blend for your organisation. What works best for you will depend on a number of different factors. The approaches are:
Restricting use – stopping some features from being used e.g. SharePoint Designer
Encouraging best practice – guidance and training available
Preventing problems – checking content before it is published
Each of these approaches can support your governance strategy. The key is to understand what you need to use.
You need to be clear why your organisation is using SharePoint and Office 365 and the benefits expected. This will shape how tight or loose your governance needs to be.
Once you are clear on this, you then need to consider the benefits and drawbacks of restricting features such as SharePoint Designer and site collection administration rights.
You control what is being used.
You decide who uses a feature e.g. SharePoint Designer.
You manage the level of autonomy each site owner has.
You find out why someone needs to use a feature.
You monitor costs for licences, users, servers, etc.
You measure who is using what and why for reporting.
You stifle innovation by not allowing people to test out ideas.
You stop legitimate use by asking for permission to use features.
You prevent people being able to share knowledge how they wish to.
You may be unable to realise the maximum potential of SharePoint.
You create unnecessary administration.
You risk adding costs without any value to offset them with.
You need to get the balance right with governance that gives you maximum value for the effort needed managing SharePoint and Office 365.
Encourage best practice
The goal from implementing SharePoint and Office 365 is to have an environment that enables employees to publish, share, find and use information easily to help with their work. They are confident the information is reliable and appropriate, whatever their need for it is. People also feel comfortable using these tools rather than alternative methods like calling helpdesks or emailing other employees for help.
Encouraging best practice by giving employees the opportunity to test tools that meet their needs is one approach to achieving this. There are factors you need to consider that can help or hinder the success of this approach.
You inform employees of all the benefits to be gained.
You train people to use the right tools.
You design a registration process to direct people to the right tools.
You point employees to guidance on how to follow best practice.
You encourage innovation by giving everyone freedom of use.
You can’t prevent people using different tools to those you recommend.
You risk confusing employees, who may use content unsure of its integrity.
You can’t prevent everyone ignoring best practice when publishing.
You may make it difficult for people to share knowledge effectively.
Your governance model may be ineffective and need improving.
Getting the balance right between encouraging best practice and the level of governance to deter behaviour which can destroy the value from using SharePoint and Office 365 is critical.
As well as encouraging best practice, preventing problems helps to reduce the time and costs wasted on sorting out unnecessary issues. While that is the aim of most organisations, the practical realities of a rollout can divert plans from achieving this.
You need to get the right level of governance in place to prevent problems. Is it encouraging innovation and keeping governance light touch? Is it a heavier touch to prevent the ‘wrong’ behaviour and minimise risk of your brand and reputation being damaged? How much do you want to spend preventing problems? What does your cost/benefit analysis show?
People using SharePoint and Office 365 have a great experience (especially the first time they use it).
Everyone is confident they can use it for what they need it for without experiencing problems.
Employees don’t waste time calling the helpdesk because many problems have been prevented.
Effective governance encourages early adoption and increased knowledge sharing.
Costs spent preventing problems are justified by increased productivity and reduced risk of errors.
People find registering difficult and lengthy because of extra steps taken to prevent problems and don’t bother.
People find it too restrictive for their needs and it stifles innovation.
People turn to other tools (maybe not approved) to meet their needs and ask other people for help to use them.
Too-restrictive governance prevents the most beneficial use by raising the barrier too high for people.
Costs of preventing problems are higher than benefits to be gained and not justified.
You need to consider the potential benefits and drawbacks before deciding on the level of governance that is right for your organisation.
Remember, it is possible and probably desirable to have different levels of governance for each feature. It may be lighter for personal views and opinions expressed in MyProfile and MySite but tighter for policies and formal news items in TeamSites.
That is the challenge! You have so much flexibility to configure the tools to meet your organisation’s needs. Don’t be afraid to test out on part of your intranet to see what effect it has and involve employees to feed back on their experience before launching it.
The way forward is to create a sustainable information architecture that supports an information environment available on any platform, everywhere, anytime and on any device. A governance framework can set out roles and responsibilities, and how they fit with a strategy and plan, with publishing standards as the foundation of a consistently good user experience.
Combining a governance framework and information architecture with the same scope avoids any gaps in your buckets of content being managed or not being found. It helps you transform from your estate to the cloud successfully.
In our concluding posts we will dive into more design-oriented topics with a helping hand from findability experts and developers, adding migration thoughts in the next post. But first, navigating the social graph, being people-centric, leaves some outstanding questions. How will the graph interoperate if your business runs several clouds and still has buckets of content elsewhere?
This is the fourth post in a series (1, 2, 3,5, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide on the steps you need to take.
In the first post we set out the most common challenges you are likely to face and how you may overcome these. In the second post we focused on how Office 365 and SharePoint can play a part in moving to the cloud. In the third post we covered how they can help join up your organisation online using their collaboration tools and features.
In this post we will cover engagement and how sorting and categorisation of artifacts, according to a simple-to-understand and easy-to-use standard, will form the bits and parts of the curation and cultivation process.
All document libraries should have one standard listing of all items, with two very distinct audiences: first, the actors within the habitat, the people contributing, acting and joining the daily conversation; and secondly, the visitors who pass by the habitat to collect, link and act upon the content presented within the habitat’s realm.
This makes it very easy for visitors to find their way around a habitat, if the visitors’ area (business lounge) is pretty much aligned to the overarching theme of the site, and all artifacts that the project team would like to share more widely are listed in a virtual bookshelf, with major versions only. The visitors’ area has all the relevant data presented upfront: basically the answers to the questions set when starting the project. The visitors’ area shouldn’t be a backdrop, but rather a storefront, and the content has to be of good quality. Then there should be options to engage with the inner living room of the habitat and enter the messy ongoing conversations, depending on access rights. But the default setting should always be open for unexpected “internal” (within the realm of the organisation) visitors. If the visitors’ area is compiled in a nice and easy-to-use manner, most visitors are just happy to pick the best read from the bookshelf, or at least raise a question for the team! The social construct for this is “welcoming a stranger”, since that visitor might link to your team’s content, cross-linking into their own social spaces.
The habitat’s living room and social conversations will call for new context-specific organising principles. A team might want to add new list items, sort categories or introduce very local what-goes-where themes. This may be especially so when the team consists of actors who have different roles and responsibilities with regard to the overall outcome. And because of this, there may be a certain mix of tools or services in this one habitat of many, where they hang out for project tasks.
The contextual adjustment is where the curator has to work on a cultivation process that glues the team together. The shared terminology within a group conversation is what binds their practices together. At inception, the curator picks a bouquet of on-topic terms from the controlled vocabularies. Mixing this with everyday use, and contributions from all members, can yield fruitful and semantically-enhanced conversations with end-user generated tags, or “folksonomies”. The same goes for the interior design of links, tools, chosen content types and other artifacts that the team will need to fulfil their goals and outcomes.
The governance of the habitat, leans very much on the shared experiences in the group, and assigned responsibilities for stewardship and curation – where publishing standards, guidelines and training should be part of the mix.
This is the third post in a series (1, 2, 4, 5, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.
In the first post we set out the most common challenges you are likely to face and how you may overcome these. In the second post we focused on how Office 365 and SharePoint can play a part in moving to the cloud. Here we cover how they can help join up your organisation online using their collaboration tools and features.
When arranging the habitat, it is key to address the theme of collaboration, since each theme calls for different feature settings for artifacts and services. In many cases, teamwork is situated in the context of a project. Other themes for collaboration are line-of-business unit teamwork, or learning networks, a.k.a. communities of practice. I will leave these latter themes for now.
Most enterprises have some project management process (e.g. PMP) that all projects have to adhere to, with complementary documentation and reporting mechanisms. This is so the leadership of the organisation is able to align resources and govern the change portfolio across different business units. Given this structure, it is very easy to depict measurable outcomes, as project documents have to be produced regardless of what the project is supposed to contribute towards.
Why? Usually defined in the project description, setting common ground for the goals and expected outcome. (dc.description)
How? Defines the processes, practices and tools used to create the expected outcome for the project, with links to common resources such as the PMP framework, but also to other key data sets, like ERP record keeping and master data for the project number and other measures not stored in the habitat but still pillars to align to the overarching model. (dc.relation)
When these questions have been answered, the resource description for the habitat is set; in SharePoint this can be stored using the property bag feature. During the lifespan of the on-going project, all contributions, conversations and created things can inherit rule-based metadata from the collection’s resource description. This reduces the burden weighing on the actors building the content, by enabling automagic metadata completion where applicable. And for wayfinding and findability within and between habitats, these resource descriptions will be the building blocks of a sustainable information architecture. In our next post we will cover how to encourage employee engagement with your content.
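The rule-based inheritance of metadata from a collection’s resource description can be sketched roughly like this; the field names and values are invented, and this is not SharePoint’s actual property bag API:

```python
# Sketch: artifacts inherit rule-based metadata from their habitat's
# resource description, in the spirit of Dublin Core. The project name,
# values and ERP link are hypothetical.
habitat_description = {
    "dc.title": "Project Falcon",          # hypothetical project name
    "dc.description": "Replace legacy invoicing",
    "dc.relation": "erp://projects/4711",  # hypothetical ERP record link
}

def describe_artifact(name: str, habitat: dict) -> dict:
    """Auto-complete an artifact's metadata from its collection."""
    metadata = dict(habitat)          # inherit the collection's description
    metadata["dc.identifier"] = name  # artifact-specific field
    return metadata

doc = describe_artifact("kickoff-notes.docx", habitat_description)
print(doc["dc.relation"])  # inherited automagically from the habitat
```

The design point is that contributors only supply what is specific to their artifact; everything that describes the project as a whole rides along for free.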
This is the second post in a series (1, 3, 4, 5, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.
In the first post we set out the most common challenges you are likely to face and how you may overcome these. In this post we focus on how Office 365 and SharePoint online can play a part in moving to the cloud.
Let us be pragmatic and down-to-earth! It is time to roll up our sleeves and consider Office 365 as one example of how organisations can make this transition from their estate to the cloud. Given that this is the collaborative space many organisations consider using, Office 365 is compelling as a one-size-fits-all, instant-build, just-roll-out enterprise-wide approach, sometimes taken without any Information Architecture plan whatsoever!
In the Office 365 environment, one has to map the terrain so that there are distinct districts where related things live; the same goes for structuring neighborhoods of clustered habitats. Where it gets tough is having an agile and resilient city plan for the real-world experience. This is the pillar construction of a digital domain: aiming for resilience and emerging uses over time, but with a simple and agreed-upon game plan.
Pace-layering the information architecture
Most organisations have an ontology of generic entities, or things, as described in the W3C Organization Ontology. These perspectives, domain models, vocabularies and ontologies add up to become the districts and neighborhoods of the Information Architecture map, seen from a few angles:
Organisation Units (Business Unit, Division, Function, Group)
Governing agencies, or regulatory entities, intermediaries
Locations (Sites, Geographical places as /world/continent/country/region/city/address …)
Business Processes (Process & Activities)
Professions and Disciplines (Roles), Practices
Topics (derived from line of Business, and controlled vocabularies)
Regardless of an organisation’s line of business, these pan out as pretty good structural elements to build upon. Since an enterprise is a social construct with agreed borders, it is populated with people who act and interplay in various ways, with a multitude of facets to their everyday work. Some entities change more frequently, generally in the organisational units further down in the leaves, and less so in the top main branches. The vocabularies within an organisation need to be the centre pillar, to reduce linguistic insecurities.
From an Information Architecture perspective, when using Office 365 or SharePoint it is wise to apply pace-layering to the building blocks on which navigational constructs are built. This means that, using the highest level of the organisational unit tree, a pretty stable foundation for the site structure can be built. This is where content and team sites live. More fluid navigational themes (temporal or topic entities) can then be added.
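A minimal sketch of that pace-layering, assuming invented organisational units and topics, could look like this:

```python
# Sketch: pace-layered navigation. The slow layer (top org units) anchors
# the site structure; fast layers (topic themes) are merged in per unit.
# All unit and topic names are invented for illustration.
stable_layer = ["Operations", "Finance", "R&D"]        # changes rarely
fluid_topics = {"R&D": ["battery-tech", "recycling"]}  # changes often

def navigation_for(unit: str) -> list:
    """Stable branch first, then any fluid topic themes layered on top."""
    return [unit] + fluid_topics.get(unit, [])

print(navigation_for("R&D"))      # stable branch plus fluid themes
print(navigation_for("Finance"))  # stable branch only, no fluid layer yet
```

The point of the separation is that topic themes can be renamed or retired without ever touching the site structure underneath.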
This goes for activities undertaken within daily practices, where a set of professions and disciplines interact. All of these activities lay out a tapestry of overarching business processes. The outcome or result might be a thing that is detailed as topic taxonomies, for example a product structure for a specific manufacturing industry. Since all organisations have actor networks in their ecology, it is preferable to add these entities into the structure too: clients, partners, competitors, regulatory agencies, social networks, communities of practice and so forth.
All of these sets of terms have to be maintained in a Managed Metadata Service, a.k.a. the Term Store. In most organisations there are other sources of controlled vocabularies, hence mapping is key to keeping master term sets aligned, either through subscription models (batch) or enterprise linked-data sets. All these actions define the terrain, so we map the ecosystem as taxonomic cartographers.
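The mapping of an external controlled vocabulary onto internal term sets can be sketched roughly like this; the identifiers and labels are hypothetical, and this is not the actual Term Store API:

```python
# Sketch: align an external controlled vocabulary with internal term sets
# before loading them into a managed metadata service. Source identifiers,
# labels and internal term ids are invented examples.
external_vocab = {"Q1": "Lithium battery", "Q2": "Recycling"}  # ext id -> label
internal_terms = {"lithium battery": "TERM-001"}               # label -> term id

def align(external: dict, internal: dict) -> dict:
    """Map external ids to internal term ids by normalised label."""
    mapping = {}
    for ext_id, label in external.items():
        internal_id = internal.get(label.lower())
        if internal_id:
            mapping[ext_id] = internal_id   # aligned term
    return mapping

print(align(external_vocab, internal_terms))
```

Anything left unmapped ("Recycling" in this toy example) is a candidate for a new term set entry, which is exactly the cartographer’s backlog.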
The Building Blocks: Artifacts and Collections
Office 365 comes with a pretty organised set of tools, themes and things to build upon. For more website-related things, one can use either published web sites/portals or enterprise wikis. The other main services are the digital habitats, or collaborative spaces: team sites. And lastly there are ESNs (Enterprise Social Networks) like Yammer, instant messaging tools like Lync, and Exchange services like mail and calendar. SharePoint Online and Office 365 are a Swiss Army knife.
This is the first post in a series (2, 3, 4, 5, 6, 7) on the challenges organisations face when they move from having online content and tools hosted firmly on their estate to renting space in the cloud. We will help you to consider the options and guide you on the steps you need to take.
In this first post we show you the most common challenges that you are likely to face and how you may overcome these.
A fast migration path to becoming tenants in cloud apartment housing uncovers a set of business-critical issues that have to be mitigated:
The way forward is to settle on a sustainable information architecture that supports an information environment in constant flux, with information and data interoperable on any platform, everywhere, anytime and on any device.
You need to show how everything is managed and how everyone fits together. A governance framework can help do this. It can show who is responsible for the intranet, what their responsibilities are, and how these fit with the strategy and plan. Making it available to everyone on the intranet helps their understanding of how it is managed and how it supports the business.
The main point is to have a governance framework and an information architecture with the same scope, to avoid gaps where content is unmanaged or cannot be found.
Both need to be in harmony and included in any digital strategy. This avoids competing information architectures and governance frameworks being created by different people, which gives people inconsistent experiences, leaves them unable to find what they need, and pushes them towards alternative, less efficient ways of finding what they need to do their work.
Building huts, houses and villages is an emergent social construction. As humans we coordinate our common resources, tools and practices. A habitat populated by people needs housekeeping rules for the available resources: cooking, cleaning, social life and so on. Routines that define who does which task, and by when, in order to keep everything in order.
A framework of governing principles that sets out roles and responsibilities, along with standards that set out the expected level of quality and quantity for each task, with which everyone is engaged and complies, is similar to how the best intranets and digital workplaces are managed.
In the early stages, with a small number of habitats, the rules for coordination are pretty simple, both for resources shared between the groups and for the pathways that connect them. The bigger a village gets, the more it taxes the structures needed to keep things running smoothly. When we move up to mega cities with 20+ million people living close together, it boils down to a general overarching plan and common infrastructure, but you also need local networked communities in order to find feasible solutions for living together.
Like villages and mega cities, there is a need for consistency that helps everyone to work and live together. Whenever you go out you know that there are pavements to walk on, roads for driving, traffic lights we stop at when they turn red, and signs to show us the easiest way to our destination.
Sustainable architecture and governance create a consistent user experience. A well-structured information architecture, aligned with a clear governance framework, sets out roles and responsibilities. Publishing standards based on business needs support the publishers who follow them. This means that wherever content is published, whether accredited or collaborative, it will appear consistent to people and be located where they expect it to be. This encourages a natural way of moving through a digital environment, with recognizable headings and consistently placed search and other features.
This allegory fits like a glove when moving into large enterprise-wide shared spaces for collaboration, whether cloud based, on-premises or a mix thereof. The social constructions and constraints remain the same. As IT services on tap, the cloud certainly constrains how flexible and adjustable the habitual construction can be, since it has to host as many similar habitats as possible. But it offers a key solution you can move into instantly! Tenants share the same apartment building (SharePoint Online).
When the set of habitats grows, navigating the maze becomes a hazard for most of us. Wayfinding in a digital mega city is extremely difficult. To a large extent, enterprises moving into collaboration suites suffer from the same stigma, regardless of whether it is SharePoint, IBM Connections, Google Apps for Work or a similar setting. It is not a discussion of which type of house to choose, but rather which architecture and plan will work in the emerging environment.
All collections and shared spaces should have persistent URIs, which is the fourth star on the ladder. The third star, non-proprietary formats, obviously becomes a bit tricky, since e.g. MS SharePoint and MS Office encourage their own formats for things. But if one adds resource descriptions to collections and artifacts using Dublin Core elements, it becomes possible to connect different types of material. With feasible and standardised resource descriptions it also becomes possible to add schemas and structures that tell us a little bit more about an artifact or a collection thereof; hence the option to adhere to the second star. The first star, inside the corporate setting, becomes key to connecting different business units and areas, with open licenses in some cases restricted to internal use only and in other cases open to external parties.
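As a rough sketch, a Dublin Core style resource description sitting behind a persistent URI might look like this in plain Python; the URI, field values and the `is_linkable` check are invented for illustration:

```python
# Sketch: a resource description for a collection artifact, using Dublin
# Core element names over a persistent URI. All values are hypothetical.
artifact = {
    "uri": "https://intranet.example.org/habitats/falcon/kickoff-notes",
    "dc.title": "Kickoff notes",
    "dc.format": "application/vnd.openxmlformats-officedocument"
                 ".wordprocessingml.document",
    "dc.relation": "https://intranet.example.org/habitats/falcon",
}

def is_linkable(resource: dict) -> bool:
    """Fourth-star check: a stable URI plus a relation to link against."""
    return resource["uri"].startswith("https://") and "dc.relation" in resource

print(is_linkable(artifact))
```

Even when the payload stays in a proprietary Office format, the description around it is plain, standard and linkable, which is what the ladder rewards.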
Linking data-sets, that is collections or habitats, with different artifacts is the fifth star. This is where it all starts to make sense, enabling a connected digital workplace. Building a city plan, with pathways, traffic signals and rules, highways, roads, neighborhoods and infrastructural services and more. In other words, placemaking!
Placemaking is a multi-faceted approach to the planning, design and management of public spaces. Placemaking capitalizes on a local community’s assets, inspiration, and potential, with the intention of creating public spaces that promote people’s health, happiness, and well-being.
We will cover more about how this applies to Office 365 and SharePoint in our next post.
The emerging hyper-connected and agile enterprises of today are stigmatised by their IS/IT-legacy, so the question is: Will emerging web and semantic technologies and practices undo this stigma?
Semantic Technologies and Linked-Open-Data (LOD) have evolved since Tim Berners-Lee introduced their basic concepts, and they are now part of everyday business on the Internet, thanks mainly to their uptake by information- and data-run companies like Google, social networks like Facebook and large content sites like Wikipedia. The enterprise information landscape is ready to be enhanced by the semantic web, to increase findability and usability. This change will enable a more agile digital workplace where members of staff can use cloud-based services anywhere, anytime, on any device, in combination with the set of legacy systems backing their line of business. All in all, more efficient organising principles for information and data.
The Corporate Information Landscape of today
In the everyday workplace we use digital tools to cope with the tasks at hand. These tools have been set into action around meta-models that structure the social life of dealing with information and data. The legacy of more than 60 years of digital record keeping has left us in an extremely complex environment, where most end-users have a multitude of spaces to which they are supposed to contribute. In many cases their information environment lacks interoperability.
A good, or rather bad, example of this is the electronic health records (EHR) of a hospital, where several different health professionals try to codify their on-going work in order to make better-informed decisions about different medical treatments. While this is a good thing, it is heavily hampered by closed-down silos of data that do not work in conjunction with newer, more agile work practices. It is not uncommon for more than 20 different information systems to be used for provisioning during a workday.
The information systems architecture of any organisation or enterprise may comprise home-grown legacy systems from the past, off-the-shelf software suites, and extremely complex enterprise-wide information systems like ERP, BI, CRM and the like. The connections between these information systems (the integration points) often resemble spaghetti: point-to-point. The work practice for many IT professionals is to map this landscape of connections and information flows, using for example Enterprise Architecture models. Many organisations use information integration engines, such as enterprise service bus applications or master data applications, as a means to decouple the tight integration and get away from proprietary software lock-in.
On top of all these schema-based, structured-data information systems lies the social and collaborative layer of services: the intranet (web-based applications), document management, enterprise-wide social networks (e.g. Yammer), collaborative platforms (e.g. SharePoint) and, more obviously, e-mail, instant messaging and voice/video meeting applications. All of these platforms and spaces where one carries out work tasks hold either semi-structured (document management) or unstructured data.
Survival in the enterprise information environment requires a large dose of endurance and skill. Many end-users get lost in their quest to find the relevant data when they should be concentrating on making well-informed decisions. Wayfinding is our in-built, adaptive way of coping with the unexpected and dealing with it: finding different pathways and means to solve the issue. In other words… findability.
Outside-in and Inside-Out
Today most organisations’ and enterprises’ workers act on the edge of the corporate landscape, in networked conversations with customers, clients, patients/citizens, partners, or even competitors, often employing means not necessarily hosted inside the corporate walls. On the Internet we see newly emerging technologies adopted and adapted at a faster rate, and in a more seamless fashion, than the cumbersome ones of the internal information landscape. So the obvious question raised in all this flux is: why can’t our digital workplace (the inside information landscape) be as easy to use, and to find things and information in, as the external digital landscape? Why do I find knowledgeable peers in communities of practice more easily outside than I do on the inside? Knowledge sharing at the outposts of the corporate wall is vivid and truly passionate, whereas inside it is pretty stale and lame, to say the least.
Release the DATA now
Aggregate technologies, such as Business Intelligence and Data Warehousing, use a capture, clean-up, transform and load mechanism (ETL) against all the existing supporting information systems. The problem is that the schemas and structures of things do not compile that easily. Different uses and contexts make even the most central terms difficult to unleash into a new context. This simply does not work. The same problem can be seen in the enterprise search realm, where we try to cope with both unstructured and semi-structured data. One way of solving all this is to create one standard that all the others have to follow, including a least common denominator, combined with master data management. In some cases this can work, but often such efforts fail as badly as trying to squeeze an enterprise into a one-size-fits-all mega-matrix ERP system.
Why is that, you might ask, when from the blueprint it sounds compelling? Just align the business processes and all data flows will follow a common path. The reality unfortunately is way more complex, because any organisation comprises several different processes, practices, professions and disciplines, all with different perspectives on the information and data that is to be shared. This is precisely why we have so many applications in the first place! To what extent can we solve this with emerging semantic technologies? These technologies are not a silver bullet, far from it! The Web, however, shows a very different way of thinking about integration, with interoperability and standards becoming the main pillars on which everything else relies. If you use agreed and controlled vocabularies and standards, there is a better chance of actually being able to sort out all the other things.
Remember that most members of staff work on the edges of the corporate body, so they have to align themselves with the lingo of all the external actor-networks and then translate it all into codified knowledge for the inside.
Today most end-users use Internet applications and services that already use semantic enhancements to bridge the gap between things, without ever having to think about it. One omnipresent social network is Facebook, whose Open Graph builds on the FOAF (Friend-of-a-Friend) standard. Using a graph to connect data is the very cornerstone of linked data and the semantic web. A thing (entity) has descriptive properties and relations to other entities; one entity’s property might be another entity in the graph. The simple subject-predicate-object relationship. The graph thus gives us a very flexible and resilient platform, in stark contrast to more traditional fixed schemas.
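The subject-predicate-object model can be illustrated in a few lines of plain Python; the entities and relations below are invented examples, not real FOAF data:

```python
# Sketch: a tiny graph as subject-predicate-object triples, FOAF-style.
# All names and relations are invented for illustration.
triples = [
    ("alice", "knows", "bob"),
    ("bob",   "knows", "carol"),
    ("alice", "worksFor", "acme"),  # one entity's property is another entity
]

def objects(subject: str, predicate: str) -> list:
    """Query the graph: what does `subject` relate to via `predicate`?"""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("alice", "knows"))   # direct relations
# Follow the graph one hop further: friends of friends.
print([f for friend in objects("alice", "knows")
         for f in objects(friend, "knows")])
```

Notice that adding a brand-new predicate is just appending a triple; nothing resembling a schema migration is needed, which is exactly the resilience the text refers to.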
The Semantic Web and Linked Data are a way to link different data sets, grown from a multitude of schemas and contexts, into one fluid interlinked experience. If all internal supporting systems, or at least the aggregate engines, could simply apply a semantic texture to all the bits and bytes flowing around, it could well provide a solution where other setups have failed. Remember that these linked data sets are resilient by nature.
There is a set of controlled vocabularies (thesauri, ontologies and taxonomies) that capture all of the topics, themes and entities that make up the world. These vocabularies have to some extent already been developed, classified and given sound resource descriptions (RDF). The Linked-Open-Data clouds are experiencing a rapid growth of meaningful expressions. WikiData, DBpedia, Freebase and many more ontologies hold a vast set of crisp and useful data that, when intersected with internal vocabularies, can make things so much easier. A very good example of such useful vocabularies, developed by professional information science people, is the Getty Institute’s recently released thesauri: AAT (Art and Architecture), CONA (Cultural Objects Name Authority) and TGN (Geographic Names). These are very trustworthy resources, and using linked data, anybody developing a web or mobile app can reuse their namespaces for free and with high accuracy. The same goes for all the other data sets in the linked-open-data cloud. Many governments have declared open data the main innovation space in which to release their things, under the realm of the “Commons”.
In addition to this, all major search engines have agreed on a set of very simple-to-use schemas, captured in the schema.org world. These schemas have been very well received by the webmaster community from their very inception. All of this feeds into the Google Knowledge Graph and all the other smart (search-enabled) things we use daily.
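As a sketch, here is how such a schema.org description might be built and serialized as JSON-LD from Python; the organisation details are invented:

```python
# Sketch: a schema.org Organization description serialized as JSON-LD,
# the format the major search engines agreed upon. Details are invented.
import json

doc = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Industries",
    "url": "https://www.example.com",
    "department": {"@type": "Organization", "name": "R&D"},
}

# Serialized JSON-LD, typically embedded in a page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(doc, indent=2))
```

The same handful of keys is all a crawler needs to slot the organisation and its department into a knowledge graph.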
For the corporate world, these Internet mega-trends have, or should have, a big impact on the way we do information management inside the corporate walls. This would particularly be the case if siloed repositories and records were semantically enhanced from their inception (creation), for subsequent use and archiving. We would then see more flexible and fluid information management within the digital workplace.
The name of the game is interoperability at every level: not just technical device specifics, but interoperability at the semantic level and at the level we use governing principles for how we organise our data and information, regardless of their origin.
Stepping down to some real-life examples
In the law enforcement system of any country, there is a set of actor-networks at play: the police, attorneys, courts, prisons and the like. All of them work within an inter-organisational process, from capturing a suspect, filing a case, running a court session, judgement, sentencing and imprisonment, followed at the end by a reassimilated member of society. Each of these actor-networks or public agencies has its own internal information landscape with supporting information systems, and they all rely on a coherent and smooth flow of information and data between each other. The problem is that while they may use similar vocabularies, the contexts in which those are used may be very different, due to their different responsibilities and enacted environments (laws, regulations, policies, guidelines, processes and practices) when looked at from a holistic perspective.
A way to overcome this would be to infuse semantic technologies and shared controlled vocabularies throughout, so that the mix of internal information systems becomes interoperable regardless of the supporting information system or storage type. In such a case linked open data and semantic enhancements could glue and bridge the gaps to form one united composite around a single individual’s record keeping. The actual content would not be exposed; rather, a metadata schema would be employed to cross the previously existing boundaries.
This is a win-win situation, as semantic technologies and any linked-open-data tinkering use the shared conversation (terms and terminologies) that already exists within the various parts of the process. As long as all parts adhere to the semantic layers, there is no need to reconfigure internal processes or adopt other parties’ resource descriptions and elements. In this way only the parts of schemas that are context-specific to a given part of the process are used, allowing the lingo of the related practices and professions to stay aligned.
This is already happening in practice in the internal workplace environment of an existing court, where a shared intranet is based on the organising principles already mentioned and applies sound, pragmatic information management practices and metadata standards like Dublin Core and common vocabularies, all infused into content provisioning.
For members of staff working inside a court setting this is a major improvement, as they use external databases every day to gain the insights needed to carry out their duties. And when the internal workplace uses such a set-up, their knowledge sharing can grow, leading to both improved wayfinding and findability.
Yet another interesting case is a service company that operates on a global scale. They are an authoritative resource in their line of business, maintaining a body of rules and regulations that has become a canonical reference. By moving into a new, expanded digital workplace environment (internet, extranet and intranet) and using semantic enhancement and search, they get a linked data set that can be used by clients, competitors and everyone else working within their environment. At the same time their members of staff can use the very same vocabularies to semantically enhance their provision of information and data into the different internal information systems.
The last example is an industrial company with a mix of products within their line of business. They have grown through M&A over the years and ended up in a dead-end mess of information systems that do not interoperate at all. A way to overcome the effects of past mergers and acquisitions was to create an information governance framework; applying it with MDM and semantic search, they were able to decouple data and information, and as a result made their workplace more resilient in a world of constant flux.
One could potentially apply these pragmatic steps to any line of business, since most themes and topics have already been created and captured by the emerging semantic web and linked data realm. It is only a matter of time before more jump on this bandwagon to take advantage of changes that can make them a canonical reference and a market leader. Just think of the film industry’s IMDb.
A final thought: Are the vendors ready and open-minded enough to alter their software and online services in order to realise this outlined future enterprise information landscape?