Last century, market researchers’ perspectives were largely shaped by the universe of projects they commissioned and reports they purchased. Aggregating all this structured and unstructured information in an insights platform enabled market researchers and their stakeholders to look beyond their own projects and reports, and to share and leverage information across regions, categories and brands. Today, vast amounts of consumer data from social media, together with structured journey and sensory data from the IoT, have flipped the researchers’ challenge from finding information to staying afloat on top of it all. So you’d think that adding relevancy-ranking algorithms to your insights platform would solve the problem, right? Unfortunately, not quite.
Let’s take a look at how ranking algorithms work. Simple statistics tell us that if you compare two bodies of information, one small and one large, and assume both contain the same percentage of relevant documents for a given query, the larger body will contain more hits than the smaller one. Ranking alone will not solve the problem. To sort results, most search engines therefore also apply an unrelated set of factors – what the user likes to look at and their social-network connections, for example. The underlying principle is that if you want to generate a more relevant set of results from a large mass of equally important results, it pays to select the results closest to the type of documents the user typically looks at, using cognitive computing. The idea comes largely from e-commerce, where a “recommender system” tries to predict what a user wants to buy by modelling what they tap on the best-seller list. The problem is that the underlying objective is to sell the user something, not to widen their perspective as a starting point for creativity.
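To make the mechanism concrete, here is a minimal sketch of a personalised re-ranker of the kind described above. All names, weights and data are hypothetical illustrations, not any real product’s API: it blends a document’s base relevance score with how often the user’s reading history covers that document’s topic.

```python
from collections import Counter

def rerank(candidates, user_history, personal_weight=0.5):
    """Re-rank (doc_id, topic, base_score) tuples by blending base relevance
    with the share of the user's history that matches each document's topic."""
    topic_prefs = Counter(topic for _, topic, _ in user_history)
    total = sum(topic_prefs.values()) or 1

    def score(doc):
        _, topic, base = doc
        # Personalisation term: fraction of the user's past reading on this topic.
        return (1 - personal_weight) * base + personal_weight * (topic_prefs[topic] / total)

    return sorted(candidates, key=score, reverse=True)

candidates = [
    ("doc_a", "value_shoppers",   0.70),
    ("doc_b", "premium_shoppers", 0.72),
    ("doc_c", "value_shoppers",   0.65),
]
# A user who has only ever read about value shoppers:
history = [("old1", "value_shoppers", 0.9), ("old2", "value_shoppers", 0.8)]
ranked = rerank(candidates, history)
# doc_b has the highest base relevance, yet personalisation pushes both
# value-shopper documents above it: the filter bubble in miniature.
```

Even in this toy version, the most relevant document on an unfamiliar topic is demoted below weaker matches the user already favours – which is exactly the effect discussed next.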
This phenomenon is known as the information “filter bubble”: users simply don’t see the bigger picture because the algorithms favour the results they typically consume. Witness the US presidential election, where metropolitan liberals were stunned by Trump’s victory because their news feeds were biased towards negative coverage of him. In technical terms, not being a critical observer is more energy-efficient: even if an observer suspects that orthogonal insights must exist, they are never searched for, because the algorithm works against that goal.
At Market Logic, our clients need search algorithms that are explicitly designed to retrieve the most unbiased selection of information possible. To do this, the Market Logic knowledge graph organizes information at the business level, not the topic level. The graph aggregates huge amounts of information along the dimensions commonly observed as the basic truths of marketing: for example, a market consists of consumers, and those consumers buy products because they offer functional and emotional benefits. The aggregation is not biased towards specific topic distributions. For example, say some research reports promote value shoppers as a prominent target group, while others promote the premium-shopper segment. Both insights are considered equally important, so the graph displays frequencies and importance for both observations.
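The aggregation idea can be sketched in a few lines. This is a hypothetical illustration of the principle, not the actual knowledge graph: observations from many reports are grouped along a fixed marketing dimension (here, consumer segments), and every observed segment surfaces with its frequency, rather than only the topics a given user tends to consume.

```python
from collections import defaultdict

def aggregate(observations):
    """Group (segment, insight) pairs by segment, keeping counts for all segments."""
    graph = defaultdict(list)
    for segment, insight in observations:
        graph[segment].append(insight)
    # Every segment is reported with its frequency; none is filtered out.
    return {seg: {"frequency": len(ins), "insights": ins}
            for seg, ins in graph.items()}

# Invented example observations drawn from the scenario in the text:
observations = [
    ("value_shoppers",   "price promotions drive trial"),
    ("value_shoppers",   "bulk packs preferred"),
    ("premium_shoppers", "provenance matters"),
]
summary = aggregate(observations)
# Both segments appear side by side, each with its own frequency,
# even though value shoppers are mentioned more often.
```

The design point is that frequency is reported as metadata rather than used as a filter, so a minority observation such as the premium-shopper segment stays visible next to the majority one.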
As a result, cognitive marketing helps avoid the filter bubble and delivers a holistic view of the market and consumers, while market researchers surf the wave of consumer data.