Analytical power at your fingertips with natural language and modern visualisation

Today we are all getting used to interactive dashboards and plots in self-service business intelligence (BI) solutions to drill down and slice our facts and figures. The market for BI tools has seen increased competition recently, with Microsoft Power BI challenging proven solutions such as Tableau, Qlik, IBM Cognos, SAP Lumira and others. At the same time, it is hard to benchmark the tools against each other, as they all come with very similar features. Has BI development saturated?

Compared to how we are used to consuming graphics and information, the BI approach to interactive analysis is somewhat different. For instance: a dashboard or report is typically presented in a printer-oriented flat layout on a white background, weeks of user training are typically needed before “self-service” can be reached, and interactions are heavily click-oriented – you can almost feel it in your mouse elbow when opening the BI frontend.

On the other hand, when surfing top internet sites and utilizing social media, our interactions are centred around the search box and the natural interface of typing or speaking. Furthermore, there is typically no training needed to make use of Google, Facebook, LinkedIn, Pinterest, Twitter, etc. Through an intuitive interface we learn along the way. And looking at graphics and visualization, we can learn a lot from the gaming industry where players are presented with well-designed artwork – including statistics presented in an intuitive way to maximize the graphical impression.

Take a look at this live presentation to see what a visual analysis using natural language can look like.


Rethink your business analytics

It appears as if BI tools are sub-optimized for a limited scope and use case. To really drive digitalization and make use of our full information potential, we need a new way of thinking about business analytics. Not just continuous development, but a revolution in the business intelligence approach. Remember: e-mail was not a consequence of the continuous development of post offices and mail handling. We need to rethink business analytics.

At Findwise, we see that the future for business analytics involves:

  • added value by enriching information with new unstructured sources,
  • utilizing the full potential of visualization and graphics to explore our information,
  • using natural language to empower colleagues to draw their own conclusions intuitively and securely.

 

Enrich data

There is a lot of talk about data science today; how we can draw conclusions from our data and make predictions about the future. This power largely depends on the value in the data we possess. Enriching data is all about adding new value. The enrichment may include a multitude of sources, internal and external, for instance:

  • detailed customer transaction logs
  • weather history and forecasts
  • geospatial data (locations and maps)
  • user tracking and streams
  • social media and (fake) news

Compared with existing data, a new data source can be orthogonal to what already exists and add a completely new understanding. Business solutions of today are often limited to highly structured information sources or information providers. There is great power in unstructured, often untouched, information sources. However, it is not as straightforward as launching a data warehouse integration, since big data techniques are required to handle the volume, velocity and variety.

At Findwise, utilizing unstructured data has always been key in developing unique solutions for search and analytics. The power of our solutions lies in incorporating multiple sources online and continuously enriching them with new aspects. For this we even developed our own framework, i3, with over a hundred connectors for unstructured data sources. A modern search engine (or insight engine) scales horizontally for big data applications and easily consumes billions of texts, logs, geospatial records and other unstructured – as well as structured – data. This is where search meets analytics, and where all the enrichment takes place to add unique information value.

 

Visually explore

As human beings we have very strong visual and cognitive abilities, developed over millions of years to distinguish complex patterns and scenarios. Visualization of data is all about packaging information in such a way that we can utilize our cognitive skills to make sense out of the noise. Great visualization and interaction unleash the human powers of perception and derivation, allowing us to make sense of the complex world around us.

When it comes to computer visualization, we have seen strong development in the use of graphical processing units (GPUs) for games and, recently, also for analytics – not least in deep learning, where powerful GPUs solve heavy computations. For visualisation, however, typical business intelligence tools today use only a minimal fraction of the total power of our modern devices. As a comparison: a typical computer game renders millions of pixels in 3D several times per second (even in the web browser), while in a modern BI tool we may struggle to display 20 000 distinct points in a plot.

There are open standards and interfaces to fully utilize the graphical power of a modern display. Computer games often build on OpenGL to interact with the GPU. In web browsers, similar performance can be reached with WebGL and JavaScript libraries. Nor is this only about regular computers or installed applications: The Manhattan Population Explorer (built with JavaScript on D3.js and Mapbox GL JS) is a notable example of an interactive and visually appealing analysis application that runs well on a regular smartphone.


Example from one of our prototypes: analysing the housing market – plotting 500 000 points interactively utilizing OpenGL.

Current analysis solutions and applications built with advanced graphical analysis are typically custom-made for a specific purpose and topic, as in the example above. This is very similar to how BI solutions were built before self-service BI came into play – specific solutions hand-crafted for a few use cases. In contrast, open graphical libraries, incorporated as the core of visualizations and with inspiration from gaming artwork, can spark a revolution in how we visually consume and utilize information.


Natural language empowers

The process of interpreting and working with speech and text is referred to as Natural Language Processing (NLP). NLP interfaces are moving towards becoming the default interface for interaction. For instance, Google’s search engine can give you instant replies to questions such as “weather London tomorrow”, and with Google Duplex (under development) NLP is used to automate phone calls, making appointments for you. Other examples include the search box popping up as a central feature on many larger web sites, and voice services such as Amazon Alexa, Microsoft Cortana, Apple Siri, etc.

When it comes to analysis tools, we have seen some movement in this direction lately. In Power BI Service (web), Cortana can be activated to allow simple Q&A on your prepared reports. Tableau has started talking about NLP for data exploration, with “research prototypes you might see in the not too distant future”. The clearest example in this direction is probably ThoughtSpot, built with a search-driven analytics interface. Still, for most of the business analytics carried out today, clicking remains the focus, and clicking is what is taught in trainings. How can this be, when our other interactions with information are moving towards natural language interfaces? The key to moving forward is to give NLP and advanced visualization a vital role in our solutions, allowing for an entirely natural interface.

Initially it may appear hard to know exactly what to type to get the data right. Isn’t training needed with an NLP interface too? This is where AI comes in, helping us interpret our requests and providing smart feedback. Looking at Google again, we continuously get recommendations, automatic spelling correction and synonym lookup to optimize our search and hits. With a modern NLP interface, we learn along the way as we use it. Frankly speaking, though, a natural language interface is best suited for common queries that are not too advanced. For more advanced data munging and customized analysis, a data scientist’s skill set and environment may well be needed. However, the power of e.g. Scientific Python or the R language could easily be incorporated into an NLP interface, where query suggestions turn into code completion. Scripting is a core part of the data science workflow.
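The idea of natural language turning into executable analysis can be sketched in a few lines of Python. Everything here is hypothetical: a toy data set and a crude rule-based parser that maps an aggregate word and a “by” clause onto a grouped computation – real NLP interfaces use far richer language understanding.

```python
from collections import defaultdict
from statistics import mean

# Toy records standing in for a BI data set (hypothetical data).
ROWS = [
    {"region": "north", "price": 100},
    {"region": "north", "price": 140},
    {"region": "south", "price": 90},
]

def answer(query, rows):
    """Interpret a query like 'average price by region' with simple
    keyword rules: an aggregate word, a value field, a grouping field."""
    tokens = query.lower().split()
    agg = mean if "average" in tokens else sum
    group_field = tokens[tokens.index("by") + 1]   # word after "by"
    value_field = tokens[tokens.index("by") - 1]   # word before "by"
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_field]].append(row[value_field])
    return {key: agg(values) for key, values in groups.items()}

print(answer("average price by region", ROWS))
```

Even a parser this naive shows the principle: the user types a question, and the system responds with a grouped figure instead of forcing a sequence of clicks.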

An analytical interface built around natural language helps direct focus and fine-tunes your analysis to arrive at intuitive facts and figures, explaining relevant business questions. This is all about empowering all users, friends and colleagues to draw their own conclusions and spread a data-driven mentality. Data science and machine learning techniques fit well into this concept to leverage deeper insights.

 

Conclusion – Business data at everyone’s fingertips

We have highlighted the importance of enriching data with particular attention to unstructured sources, demonstrated the importance of visual exploration in engaging our cognitive abilities, and finally shown how a natural language interface empowers colleagues to draw their own conclusions.

Compared with the current state of the art for analysis and business intelligence tools, we stand before a paradigm shift. Standardized self-service tools built on clicking, basic graphics and a focus on structured data will be overrun by a new way of thinking about analysis. We all want to create intuitive insights without the need for thorough training in how to use a tool. And we all want our insights and findings to be visually appealing. Seeing is believing. To communicate our findings, conclusions and decisions we need to show the why – convincingly. This is where advanced graphics and art will help us. Natural language is the interface we use for more and more services, and it can easily be powered by voice as well. With a natural interface, anyone can learn to utilize the analytical power in the information and draw conclusions. Business data at everyone’s fingertips!

To experience our latest prototype where we demonstrate the concept of data enrichment, advanced visualization and natural language interfaces, take a look at this live presentation.

 

Author: Fredrik Moeschlin, senior Data Scientist at Findwise

Major highlights from Elastic{ON} 2018 – Findwise reporting

Two Elastic fans have just returned from San Francisco and the Elastic{ON} 2018 conference. With almost 3,000 participants this year, Elastic{ON} is the biggest Elastic conference in the world.

Findwise regularly organises events and meetups, covering, among other topics, Elastic. Keep an eye out for an event close to you.

Here are some of the main highlights from Elastic{ON} 2018.

Let’s start with the biggest announcement of them all: Elastic is opening the source code of X-Pack. This means that you will now be able to access not only the Elastic stack source code, but also the subscription-based X-Pack code that until now has been inaccessible. This opens the opportunity for you as a developer to contribute code back.


 

Data rollups are a great new feature for anyone who needs to look at old data but feels the storage costs are too high. With rollups, only predetermined metrics and terms are stored – still allowing you to analyze these dimensions of your data, but no longer letting you view the individual documents.
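As a sketch of what “predetermined metrics and terms” means in practice, a rollup job definition in the 6.x-era API looks roughly like the JSON below, here built as a Python dict. The index pattern and the field names (“timestamp”, “host”, “cpu”) are made-up examples; check the Elasticsearch rollup documentation for the exact request shape in your version.

```python
import json

# Hypothetical rollup job: would be created with a request like
#   PUT _xpack/rollup/job/sensor_rollup   (6.x-era endpoint)
rollup_job = {
    "index_pattern": "metrics-*",        # raw indices to roll up
    "rollup_index": "metrics_rollup",    # where summarized docs are stored
    "cron": "0 0 1 * * ?",               # when the job runs
    "page_size": 1000,
    "groups": {
        # only these dimensions survive; individual documents do not
        "date_histogram": {"field": "timestamp", "interval": "1h"},
        "terms": {"fields": ["host"]},
    },
    # only these precomputed metrics remain queryable afterwards
    "metrics": [{"field": "cpu", "metrics": ["avg", "max"]}],
}

print(json.dumps(rollup_job, indent=2))
```

Anything not listed under `groups` or `metrics` is simply gone from the rolled-up index, which is exactly where the storage savings come from.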

Azure monitoring will be available in X-Pack Basic. Elastic will, in an upcoming 6.x release, ship an Azure monitoring module, which will consist of a bundle of Kibana dashboards and make it really easy to get started exploring your Azure infrastructure. The monitoring module will be released as part of X-Pack Basic – in other words, it will be free to use.

Forecasting was the big new thing in X-Pack’s machine learning component. As the name suggests, the machine learning module can now not only spot anomalies in your data but also predict how it will change in the future.

Security in Kibana will get an update to make it work more like the security module in Elasticsearch. This also means that one of the most requested Kibana security features will be delivered: giving users access to only some dashboards.

Dashboards are great and a fundamental part of Kibana, but sometimes you want to present your data in more dynamic ways, with less focus on data density. This is where Canvas comes in. Canvas is a new Kibana module for producing infographics rather than dashboards, still using live data from Elasticsearch.

Monitoring of Kubernetes and Docker containers will be made a lot easier with the Elastic stack. A new infrastructure component will be created just for this growing use case, powered by data collected by Beats, which now also has auto-discovery functionality within Kubernetes. This will give an overview not only of your Kubernetes cluster but also of the individual containers within it.

Geo capabilities within Kibana will be extended to support multiple map layers, making more kinds of map visualizations possible. Furthermore, work is being done on supporting not only geo points but also shapes.

One problem some have had with maps is that you need access to the Elastic Maps Service, and if you deploy the Elastic stack within a company network this might not be reachable. To solve this, work is being done to make it possible to deploy the Elastic Maps Service locally.

Elastic acquired the SaaS solution Swiftype last year. Since then, Swiftype has been busy adding even more features to its portfolio. Currently, Swiftype comes in three different versions:

  • Swiftype Site Search – An out-of-the-box (OOTB) solution for website search
  • Swiftype Enterprise Search – Currently in beta, focused (for now) on internal, cloud-based data sources such as G Suite, Dropbox, O365, Zendesk etc.
  • Swiftype App Search – A set of APIs and developer tools that makes it quick to build user-facing search applications

 

Elastic has also started to look at replacing the Zen protocol used to keep clusters in sync. Currently, a PoC is under way to create a consensus algorithm that follows modern academic best practices, with the added benefit of removing the minimum master nodes setting – currently one of the most common pitfalls when running Elasticsearch in production.

ECE – Elastic Cloud Enterprise – is a big focus for Elastic, making it possible for customers to set up a fully service-based search solution maintained by Elastic.

If you are interested in hearing more about Elastic or Findwise visit https://findwise.com/en/technology/elastic-elasticsearch


 

Writers: Mads Elbrond, regional manager Findwise Denmark & Torsten Landergren, senior expert consultant

Pragmatic or spontaneous – What are the most common personal qualities in IT-job ads?

Open Data Analytics

At Findwise we regularly have companywide hackathons with different themes. The latest theme was insights in open data, which I personally find very interesting.

Our group chose to fetch data from the Arbetsförmedlingen (Swedish Employment Agency), where ten years of job ads are available. There are about 4 million job ads in total during this time-period, so there is some material to analyze.

To enable ad hoc analysis, we started off by extracting competences and personal traits mentioned in the job ads. This allows us to spot trends in competences over time or across regions, and to correlate competences and traits. Lots of possibilities.
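A minimal sketch of that extraction step, assuming a simple vocabulary-matching approach (our actual extraction was more elaborate, and the vocabularies here are tiny hypothetical samples):

```python
import re

# Tiny vocabulary-based extractor: scan an ad for known competences
# and traits (hypothetical sample vocabularies).
COMPETENCES = {"javascript", "solr", "system architecture"}
TRAITS = {"driven", "social", "creative"}

def extract(ad_text):
    """Return the competences and traits mentioned in one job ad."""
    text = ad_text.lower()
    def found(vocab):
        return {term for term in vocab
                if re.search(r"\b" + re.escape(term) + r"\b", text)}
    return {"competences": found(COMPETENCES), "traits": found(TRAITS)}

ad = "We seek a driven and creative developer with JavaScript experience."
print(extract(ad))
```

Run over millions of ads, the resulting (ad, competences, traits) tuples become the table that all the analyses below are built on.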

 

Personal qualities and IT competences

As an IT ninja, I find it most exciting to focus on jobs, competences and traits within the IT industry. A lot is happening, and it is easy for me to relate to this area, of course. A report from Almega suggests that there is a huge demand for IT competences in the coming years, and it brings up many examples of technical skills in short supply. What is rarely addressed is which personality types are connected to these specific competences. We are able to answer this interesting question from our data:

 

What personal traits commonly complement the competences that are in demand?


Figure 1 – Relevant work titles, competences and traits for the search term “big data”

 

The most wanted personal traits are, in general, “social, driven, passionate, communicative”. All these results should of course be taken with a grain of salt, since a few staffing and general IT consulting companies account for a large share of the job ads within IT. But we can also look at a single competence and answer the question:

 

What traits are more common with this competence than in general? (Making the question a bit more specific.)

Some examples of competences in demand are system architecture, support and JavaScript. The most outstanding traits for system architecture are sharp, quality-orientated and experienced. It can be discussed whether experienced is a trait (although our model thinks so), but it makes sense in any case, since system architecture tends to be more common among senior roles. For support we find traits such as service-orientated, happy and nice, which is not unexpected. Lastly, for job ads requiring JavaScript competence, personal traits such as quality-orientated, quality-aware and creative are the most predominant.
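The “more common with this competence than in general” comparison boils down to a lift ratio, P(trait | competence) / P(trait). A toy sketch with made-up ads:

```python
from collections import Counter

# Toy corpus: each ad as a (competence, traits) pair (hypothetical data).
ADS = [
    ("javascript", {"creative", "driven"}),
    ("javascript", {"creative", "social"}),
    ("support", {"social", "happy"}),
    ("support", {"social"}),
]

def trait_lift(ads, competence):
    """Lift of each trait among ads requiring `competence`:
    P(trait | competence) / P(trait). Values above 1 mean the trait
    is over-represented for that competence."""
    overall = Counter(t for _, traits in ads for t in traits)
    subset = [traits for comp, traits in ads if comp == competence]
    sub_counts = Counter(t for traits in subset for t in traits)
    return {t: (sub_counts[t] / len(subset)) / (overall[t] / len(ads))
            for t in sub_counts}

print(trait_lift(ADS, "javascript"))
```

In this toy corpus, “creative” and “driven” are over-represented for JavaScript ads (lift 2.0), while “social” is under-represented, even though it is the most common trait overall.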

 

Differences between Stockholm and Gothenburg

Or let’s have a look at geographical differences between Sweden’s two largest cities when it comes to personal qualities in IT-job ads. In Gothenburg there is a stronger correlation to the traits spontaneous, flexible and curious while Stockholm correlates with traits such as sharp, pragmatic and delivery-focused.

 

What is best suitable for your personality?

You can also look at it the other way around and start with the personal traits to see which jobs and competences suit you. If you are analytical, then jobs as a controller or accountant could be for you. If you are an optimist, then job coach or guidance counselor seems to be a good fit. We created a small application where you can type in competences or personal traits and get suggested jobs this way. Try it out here!

 

Learn more about Open Data Analytics

In addition, we’re hosting a breakfast seminar on December 12th, where we’ll use the open data from Arbetsförmedlingen to show a process for making more data-driven decisions. More information and registration (the seminar will be held in Swedish)

 

Author: Henrik Alburg, Data Scientist

Summary from Enterprise Search and Discovery Summit 2017

This year at Enterprise Search and Discovery Summit, Findwise was represented by us – search experts Simon Stenström and Amelia Andersson. With over a thousand attendees at the event, we’ve enjoyed the company of many peers. Let’s stay in touch for inspiration and to create magic over the Atlantic – you know who you are!


Amelia Andersson and Simon Stenström, search experts from Findwise

 

Back to the event: we opened the Enterprise Search track with our talk on how you can improve your search solutions by taking several aspects of relevance into account. (The presentation can be found in full here; no video, unfortunately.) If you want to know more about how to improve relevancy, feel free to contact us or download the free guide Improved search relevancy.

A few themes kept recurring during the Enterprise Search track: machine learning and NLP, bots and digital assistants, statistics and logs, and GDPR. We’ve summarized our main takeaways from these topics below.

 

Machine learning and NLP

Machine learning and NLP were the unchallenged buzzwords of the conference. Everybody wants to do it, some have already started working with it, and some provide products for it. Unfortunately, not many concrete examples of how organizations are actually using machine learning were presented, giving us the feeling that few organizations are there yet. We’re at the forefront!

 

Bots, QA systems and digital assistants

Everyone is walking around with Siri or Google Assistant in their pocket, but our enterprise search solutions still don’t make use of them. Panels discussed voice-based search (TV remote controls that can search content across all TV channels to set the right channel, a demo of Amazon Alexa providing answers to simple procedures for medical treatments, etc.), pointing out that voice-to-text now works well enough (at least in English) for many mobile use cases.

But bots can of course be used without voice input. A few different examples of using bots in a dialogue setting were shown. One of the most exciting demos showed a search-engine-powered bot that used facet values to ask questions to narrow down what information the user was looking for.

 

Statistics and logs

Collect logs! And when you’ve done that: use them! A clear theme was how logs are stored, displayed and used. Knowledge management systems where content creators can monitor how users find their information inspired us to consider dashboards for intranet content creators as well. If we can help our content creators understand how their content is found, maybe they will be encouraged to use better metadata or wording, or to create the information their users are missing.

 

GDPR

Surprisingly, GDPR is not only a “European thing”, but will have a global impact following the legislation change in May. American companies will have to look at how they handle the personal information of their EU customers. This statement took many attendees by surprise, and there were many worried questions about what is considered non-compliant with GDPR.

 

We’ve had an exciting time in Washington and can happily say that we are able to bring back inspiration and new experience to our customers and colleagues at Findwise. On the same subject, a couple of weeks ago some of our fellow experts at Findwise wrote the report “In search for Insight”, addressing the new trends (machine learning, NLP, etc.) in enterprise search. Make sure to get your copy of the report if you are interested in this area.

Most of the presentations from Enterprise Search and Discovery Summit can be found here.

 

Authors: Amelia Andersson and Simon Stenström, search experts from Findwise

Microsoft Ignite 2017 – from a Search and Findability perspective

Microsoft Ignite – the biggest Microsoft conference in the world. 700+ sessions, insights and roadmaps from industry leaders, and deep dives and live demos on the products you use every day. And yes, Findwise was there!

But how do you summarize a conference with more than 700 different sessions?

Well – you focus on one subject (search and findability, in this case) and then collaborate with some of the most brilliant and experienced people around the world within that subject. Add a little bit of your own knowledge – and the result is this podcast.

Enjoy!

Expert Panel Shares Highlights and Opportunities in Microsoft’s Latest Announcements


Do you want to know more about Findwise and Microsoft? Find out how you can make SharePoint and Office 365 more powerful than ever before.

 

Time of intelligence: from Lucene/Solr revolution 2017

Lucene/Solr revolution 2017 has ended with me, Daniel Gómez Villanueva, and Tomasz Sobczak from Findwise on the spot.

First of all, I would like to thank LucidWorks for such a great conference, gathering this talented community and engaging companies all together. I would also like to thank all the companies reaching out to us. We will see you all very soon.

Some takeaways from Lucene/Solr revolution 2017

The conference basically met all of my expectations, especially when it comes to the session talks. They gave ideas, inspired, and reflected the capabilities of Solr and how competent a platform it is when it comes to search and analytics.

So, what is the key takeaway from this year’s conference? As usual, the talks about relevance attracted the largest audiences, showing that it is still a concern of search experts and companies out there. What is different in this year’s relevance talks from previous years is the message that, if you want to achieve better results, you need to add intelligent layers above or into your platform. It is no longer lucrative or profitable to spend time tuning field weights and boosts to satisfy the end users. “User Behaviour Driven Intelligent Query Re-ranking and Suggestion” from The Home Depot, “Learning to rank with Apache Solr and bees” from Bloomberg, and “An Intelligent, Personalized Information Retrieval Environment” from Sandia National Laboratories are just a few examples of the many talks showing how intelligence comes to the rescue and lets us achieve what is desired.

Get smarter with Solr

Even when we want to use what is provided out of the box by Solr, we need to be smarter. “A Multifaceted Look at Faceting – Using Facets ‘Under the Hood’ to Facilitate Relevant Search” by Lucidworks shows how they use faceting techniques to extract keywords, understand query language and rescore documents. “Art and Science Come Together When Mastering Relevance Ranking” by Wolters Kluwer is another example, where they change and tune Solr’s default similarity model and apply advanced index-time boost techniques to achieve better results. All of this shows that we need to be smarter when it comes to relevance engineering. The time of tuning and tweaking is over. It is the time of intelligence – human intelligence, if I may call it that.

Thanks again to LucidWorks and the amazing Solr Community. See you all next year. If not sooner.

Writer: Mohammad Shadab, search consultant / head of Solr and fusion solutions at Findwise

Choosing an Open Source search engine: Solr or Elasticsearch?

There has never been a better time to be a search and open source enthusiast than 2017. Far behind are the old days of information retrieval being a field only available for academia and experts.

Now we have plenty of search engines that allow us not only to search, but also to navigate and discover our information. We are going to focus on two of the leading search engines that happen to be open source projects: Elasticsearch and Solr.

Comparison (Solr and Elasticsearch)

Organisations and Support

Solr is an Apache sub-project developed in parallel with Lucene. Thanks to this, it has synchronized releases and benefits directly from any new Lucene feature.

Lucidworks (previously Lucid Imagination) is the main company supporting Solr. They provide development resources for Solr, commercial support, consulting services, technical training and commercial software around Solr. Lucidworks is based in San Francisco and offers its services in the USA and the rest of the world through strategic partners. Lucidworks has historically employed around a third of the most active Solr and Lucene committers, contributing most of the Solr code base and organizing the Lucene/Solr Revolution conference every year.

Elasticsearch is an open-source product driven by the company Elastic (formerly known as Elasticsearch). This approach creates a good balance between the open-source community contributing to the product and the company making long-term plans for future functionality, as well as ensuring transparency and quality.

Elastic comprises not only Elasticsearch but a set of open-source products called the Elastic stack: Elasticsearch, Kibana, Logstash and Beats. The company offers support for the whole Elastic stack and a set of commercial products called X-Pack, all included in different subscription tiers. They offer trainings every second week around the world and organize the Elastic{ON} user conferences.

Ecosystem

Solr is an Apache project, and by being so it benefits from the large variety of Apache projects that can be used along with it. The first and foremost example is its Lucene core (http://lucene.apache.org/core/), which is released on the same schedule and provides all of Solr’s main functionality. The other main project is ZooKeeper, which handles SolrCloud cluster configuration and distribution.

On the information-gathering side there are Apache Nutch, a web crawler, and Flume, a distributed log collector.

When it comes to processing information, there is no end to the Apache projects; the ones most commonly used alongside Solr are Mahout for machine learning, Tika for document text and metadata extraction, and Spark for data processing.

The big advantage lies in big data management and storage, with the highly popular Hadoop library as well as the Hive, HBase and Cassandra databases. Solr supports storing the index in the Hadoop Distributed File System (HDFS) for high resilience.

Elasticsearch is owned by the Elastic company, which drives and develops all the products in its ecosystem, making them very easy to use together.

The main open-source products of the Elastic stack alongside Elasticsearch are Beats, Logstash and Kibana. Beats is a modular platform for building different lightweight data collectors. Logstash is a data processing pipeline. Kibana is a visualization platform where you can build your own data visualizations, and it already has many built-in tools to create dashboards over your Elasticsearch data.

Elastic also develops a set of products that are available under subscription: X-Pack. Right now, X-Pack includes five products: Security, Alerting, Monitoring, Reporting and Graph. They all deliver a layer of functionality over the Elastic stack that is described by their names. Most of them are included as part of Elasticsearch and Kibana.

Strengths

Solr

  • Many interfaces, many clients, many languages.
  • A query is as simple as solr/select?q=query.
  • Easy to preconfigure.
  • The base product is always complete in functionality; commercial offerings are add-ons.
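As a sketch of that query simplicity, a Solr select query is just a URL, here assembled in Python (the host, collection and field names are hypothetical; any HTTP client can then fetch it):

```python
from urllib.parse import urlencode

# Build a Solr select URL with optional faceting (hypothetical
# host "localhost:8983" and collection "products").
def solr_query_url(base, collection, text, rows=10, facet_field=None):
    params = {"q": text, "rows": rows, "wt": "json"}
    if facet_field:
        params.update({"facet": "true", "facet.field": facet_field})
    return f"{base}/{collection}/select?{urlencode(params)}"

url = solr_query_url("http://localhost:8983/solr", "products",
                     "bearing", facet_field="category")
print(url)
```

The result is a plain, bookmarkable URL, which is exactly why Solr is so easy to poke at from any client, language or even a browser address bar.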

Elasticsearch

  • Everything can be done with a JSON HTTP request.
  • Optimized for time-based information.
  • Tightly coupled ecosystem.
  • The base product contains the core and is expandable; commercial offerings are additional features.
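As a sketch of the JSON-over-HTTP style: a typical Elasticsearch search request is a single JSON document, here built in Python. The index pattern and field names (“logs-*”, “message”, “@timestamp”) are hypothetical examples; the body would be sent as POST /logs-*/_search.

```python
import json

# A full-text match filtered to a time range, expressed as one
# JSON request body (hypothetical index and field names).
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "timeout"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-24h"}}}],
        }
    },
    "size": 20,
}

body = json.dumps(query)
print(body)
```

Everything, from queries to index settings to cluster administration, follows this same request shape, which is what makes the ecosystem feel so uniform.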


Conclusion – Solr or Elasticsearch?

If you are already using one of them and do not explicitly need a feature exclusive of the other, there is no big incentive in making a migration.

In any case, the common answer when it comes to hardware sizing recommendations applies to choosing between them as well: “It depends.” It depends on the amount of data, the expected growth, the type of data, the available software ecosystem around each and, above all, the features that your requirements and ambitions demand – just to name a few factors.

 

At Findwise we can help you with a platform evaluation study to find the perfect match for your organization and your information.

 

Written by: Daniel Gómez Villanueva – Findability and Search Expert

Using search technologies to create apps that even leave Apple impressed

At Findwise we love to see how we can use the power of search technologies in ways that go beyond the typical search box application.

One thing that has exploded in the last few years is of course apps for smartphones and tablets. It’s no longer enough to store your knowledge in databases kept behind locked doors. Professionals of today want instant access to knowledge and information right where they are, whether working on the factory floor or showcasing new products for customers.

When you think of enterprise search today, you should consider it a central hub of knowledge rather than just a classical search page on the intranet. Because once an enterprise search solution is in place, and information from different places has been normalized and indexed in one place, there really are no limits to what you can do with the information.

By building this central hub of knowledge it's simple to make that knowledge available to other applications and services within or outside of the organization. Smartphone and tablet applications are one great example.

Integrating mobile apps with search engine technologies works really well for four reasons:

  • It’s fast. Search engines can find the right information using advanced queries or filtering options in a very short time, almost regardless of how big the index is.
  • It’s lightweight. The information handled by the device should only be what is needed by the device, no more, no less.
  • It’s easy to work with. Most search engine technologies provide a simple REST interface that’s easy to integrate with.
  • A unified interface for any content. If the content is already indexed by the enterprise search solution, you can use the same interface to access any kind of information.
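To illustrate how lightweight such a REST integration can be, here is a sketch of how a mobile client might build a query URL against a search engine's HTTP API. The host, core name and filter fields are invented for the example; the parameter style follows Solr's `/select` endpoint.

```python
from urllib.parse import urlencode

# A mobile app only needs to build a URL with query parameters and
# parse the JSON response -- no heavy client libraries required.
def build_query_url(base_url, text, rows=10, filters=None):
    params = [("q", text), ("rows", rows), ("wt", "json")]
    for field, value in (filters or {}).items():
        params.append(("fq", f"{field}:{value}"))  # one filter query per field
    return f"{base_url}/select?{urlencode(params)}"

url = build_query_url("http://search.example.com/solr/products",
                      "bearing", filters={"type": "brochure"})
print(url)
```

A few lines like these, plus a JSON parser that every mobile platform ships with, is essentially the whole integration surface.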

 

We are working together with SKF, a company that has transformed itself from a traditional industry company into a knowledge engineering company over the last few years. I think it's safe to say that Findwise has been a big part of that journey by helping SKF create their enterprise search solution.

And of course, since we love new challenges, we have also helped them create a few mobile apps. In particular there are two different apps that we have helped out with:

  • SKF Shelf – A portable product brochures archive. The main use case is quick and easy access to product information for sales reps when visiting customers.
  • MPLP (mobile product landing page) – A mobile web app that you reach by scanning QR codes printed on the packaging.

 

And more recently, the tech giant Apple has noticed how the apps make the day-to-day work of employees easier.



Read more about Enterprise Search at Findwise

Elastic{ON} 2017 – breaking all the records!

Elastic{ON} 2017 draws 2200 participants to Pier 48 during these somewhat chilly San Francisco days in March. That's roughly a 40% increase from the 1600 or so participants last year, in line with the growing interest in the Elastic Stack and Elastic's commercial successes.

From Findwise – we are a team of 4 Findwizards, networking, learning and reporting.

Shay Banon, the creator of Elasticsearch and Elastic's CTO, is giving both the opening and the closing keynote. It is apparent that the transition of the CEO role from Steven Schuurman has already started.


2016 in retrospective with the future in mind

Elastic reached 100 million downloads in 2016, and has managed to land approximately 4000 paying subscription customers out of this installed base to date. A lot of presentations during the conference are centered around new functionality that is being developed and will be released freely to the open source community. Other functionality goes into the commercial X-Pack subscriptions. Some X-Pack functionality is available freely under the Basic subscription level, which only requires registration.

Most presentations are centered around search-powered analytics, and fewer around regular free text search. Elasticsearch and the Elastic Stack have their main use cases within logging and analytics, and in various applications as a data platform or middle layer, with free text search as a strong sidekick.

A strong focus on analytics

There are 22 sponsors at the event, and most of them offer cloud-based monitoring or machine learning services. IBM, the platinum sponsor, is promoting its Bluemix cloud services for cognitive Watson functionality and uses the conference to reach out to the predominantly developer-focused audience.

Prelert was acquired in September last year and is now being integrated into the Elastic Stack as the Machine Learning component, used for unsupervised anomaly detection to give operational log insights. Together with the new modular Beats architecture and various Kibana improvements, it seems apparent that Elastic is chasing the huge market that Splunk currently controls within logging and analytics.

Elasticsearch SQL – giving BI what it needs

Elasticsearch SQL will give the search engine SQL capability, just like Solr got with its parallel SQL interface. Elasticsearch is becoming more and more of a "data platform", increasingly a competitor to HPE Vertica and Amazon Redshift, as it hits a sweet-spot use case where a combination of fast data loading and extreme scalability is needed and the tradeoffs of limited functionality (such as the lack of JOIN operations) are acceptable. With SQL support the platform can be used with existing visualization tools such as Tableau, and it expands the user base, as many people in the business intelligence sector know SQL by heart.
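The SQL interface was only previewed at the conference, so the exact endpoint and request shape may change; as a sketch, a request payload could look roughly like this (index and field names are invented for the example):

```python
import json

# Illustrative payload for an SQL query against Elasticsearch.
# The announced interface accepts a plain SQL string in a JSON body;
# treat this as a sketch of the preview, not reference documentation.
sql_request = {
    "query": (
        "SELECT host, COUNT(*) AS errors "
        "FROM logs "
        "WHERE level = 'ERROR' "
        "GROUP BY host "
        "ORDER BY errors DESC "
        "LIMIT 5"
    )
}
print(json.dumps(sql_request))
```

For a BI user, a familiar aggregation query like this replaces a fairly involved Query DSL request with nested `terms` aggregations, which is exactly the point.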

Fast and simple Beats is music to our ears

Beats will become modular in the next release, and more Beats modules will be created either by Elastic or by the open source and commercial community. This increases simple connectivity to various data sources and adds standardized dashboards per data source, which will improve simplicity and speed of implementation.

Heartbeat is a new Beat (with a beautiful name!) that sends pings to check that services are alive and functioning.
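Conceptually, the kind of TCP liveness check Heartbeat performs can be sketched in a few lines of Python; the host and port below are just examples:

```python
import socket

# Roughly what a TCP uptime monitor does: try to open a connection
# and report whether the service answered within a timeout.
def is_alive(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_alive("localhost", 9200))  # e.g. check a local Elasticsearch
```

Heartbeat's value over a homegrown check like this is the scheduling, the ICMP/TCP/HTTP monitor types, and that results land in Elasticsearch with ready-made dashboards.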

Kibana goes international

Kibana is maturing with some key updates coming soon: a time series visual builder that gives graphical guidance when building dashboards, Kibana Canvas for custom dynamic reports and slide-show presentations with live data, and a GUI frontend translated into various languages.

There's a new tile service for maps, so instead of relying on external map services, Elastic now has control over the maps functionality. The service can be used free of charge but requires registration (the Basic subscription) to use all 18 zoom levels.


 

To conclude, we've had three good days with exciting product news and lots of interesting meetings at what could very well be the biggest show for search and search-driven analytics right now! Be sure to see us at next year's Elastic{ON}. If not before, see you then!

 

From San Francisco with love,

/Andreas, Christian, Joar and Peter

What will happen in the information sector in 2017?

As we look back at 2016, we can say that it has been an exciting and groundbreaking year that has changed how we handle information. Let’s look back at the major developments from 2016 and list key focus areas that will play an important role in 2017.

3 trends from 2016 that will lay the basis for shaping 2017

Cloud

There has been a massive shift towards the cloud, not only using the cloud for hosting services but building on top of cloud-based services. This has affected all IT projects, and especially the enterprise search market, since Google decided to discontinue GSA and replace it with the cloud-based Springboard. More official information on Springboard is still to be published, but reach out to us if you are keen to hear about the latest developments.

There are clear reasons why search is moving towards the cloud, the main ones being machine learning and the sheer amount of data. We have an astonishing amount of information available, and the cloud is simply the best way to handle this overflow. Development in the cloud is faster, the cloud gives practically unlimited processing power, and the latest developments are available in the cloud at an affordable price.

Machine learning

One area that has taken huge steps forward has been machine learning. It is nowadays being used in everyday applications. Google wrote a very informative blog post about how they use Cloud machine learning in various scenarios. But Google is not alone in this space – today, everyone is doing machine learning. A very welcome development was the formation of Partnership on AI by Amazon, Google, Facebook, IBM and Microsoft.

We have seen how machine learning helps us in many areas. One good example is health care, with IBM Watson managing to find a rare type of leukemia in 10 minutes. This type of expert assistance is becoming more common. While we know there is still a long way to go before AI becomes smarter than human beings, we are taking leaps forward, as seen when DeepMind's AlphaGo beat a human at the complex board game Go.

Internet of Things

Another important area is IoT. In 2016 most IoT projects have, in addition to consumer solutions, touched industry: smart cities, energy utilization or connected cars. Companies have realized that they can now track almost any physical object, with the benefits of being able to service machines before they break, streamline or build better services, or even create completely new businesses based on data knowledge. On the consumer side, 2016 showed how IoT has become mainstream, with the unfortunate effect of poorly secured devices being used for massive attacks.

 

3 predictions for key developments happening in 2017

As we move towards the year 2017, we see that these trends from 2016 have positive effects on how information will be handled. We will have even more data and even more effective ways to use it. Here are three predictions for how we will see the information space evolve in 2017.

Insight engine

Our collaboration with computers is changing. For decades, we have been giving tasks to computers and waiting for their answers. This is slowly changing: we are starting to collaborate with computers and even expect them to take the initiative. The developments behind this are in machine learning and human language understanding. We no longer only index information and search it with free text; nowadays, we can build computers that understand information. This information includes everything from IoT data points to human-created documents and data from other AI systems. It enables building an insight engine that can help us formulate the right question, or even give us insight based on information for a question we never asked. This will revolutionize how we handle our information and how we interact with our user interfaces.

We will see virtual private assistants that users will be happy to use and train, so that they can help us use information like never before in our daily lives. Google Now, in its current form, is merely a first step towards something like this, actively bringing information to the user.

Search-driven analytics

The way we use and interact with data is changing. With information collected about pretty much anything, we have almost any fact right at our fingertips and need effective ways to learn from it – in real time. In 2017, we will see a shift away from classic BI systems towards search-driven evolutions of them. We already have Kibana dashboards with Timelion, and ThoughtSpot, but these are only the first examples of how search is revolutionizing how we interact with data. Advanced analytics available to anyone in the organization, with answers and predictions delivered directly in graphs and diagrams, is what 2017 insights will be all about.

Conversational UIs

We saw the rise of chatbots in 2016. In 2017, this trend will also shape how we interact with enterprise systems. A smart conversational user interface builds on the same foundations as an enterprise search platform: it is highly personalized, contextually smart and builds its answers from information in various systems and in many forms.

Imagine discussing future business focus areas with a machine that challenges our ideas and backs everything with data-based facts. Imagine your enterprise search responding to your search with a question, asking you to detail what you are actually trying to achieve.

 

What are your thoughts on the future development?

How do you see 2017 changing the way we interact with our information? Comment or reach out in other ways to discuss this further, and have a Happy Year 2017!

 

Written by: Ivar Ekman