The Lab

The fruits of riotous experimentation.

Dots and patterns: how to make your competitors colour blind

An alternative title for this piece could have been “Narrative research—a practical introduction.”

As stated previously, most of my clients are hard-nosed and commercially orientated, and typically prefer to see a cause-and-effect relationship between spending and return. They often have a science or engineering background, and a predilection for anything that can be measured and spreadsheeted—preferably with error bars. They have huge intellectual horsepower and readily assimilate the concepts around complex adaptive systems, but are less ready to deal with the attendant implications of managing ambiguity.

This preference for certainty (with error bars) is reflected in their allocation of market research spend. External reports are usually industry reports from large publication houses (also bought by their competitors), and internal research is mostly Likert-scale surveys—as is any customer satisfaction survey. Between survey periods the numbers are pored over incessantly, but I have yet to see any business link the return on investment for these surveys directly to the actions they prompted. I think there is an opportunity to shift this experience for the better, but how can it be done?

My preference when introducing new material is to use the existing tools and culture as the foundation; don’t build a new house, build on what we have. In this spirit, I introduce the power of narrative research through an accepted quantitative tool. Some clients are happy users of the Net Promoter Score (NPS). NPS was developed by Fred Reichheld as a means of determining a level of customer satisfaction, having established a link between a high NPS score and business performance. The Net Promoter Score is obtained by asking customers a single question on a 0 to 10 rating scale, where 10 is “extremely likely” and 0 is “not at all likely”: “How likely is it that you would recommend our company to a friend or colleague?”

Data suggests that businesses which provide a higher level of customer satisfaction are generally more profitable, and this is reflected in a higher NPS score. You can see how this tool, simple to use and with immediate “quantitative” feedback, might be attractive to a resource-constrained business. But the real value, often overlooked, is held in the second, supporting question: “What story, anecdote, experience or reasons prompted that particular score?” This feedback often provides the richest source of improvement opportunity, and a perfect entry into a discussion on the power of narrative research.
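For readers unfamiliar with the arithmetic behind the single question, the standard NPS convention is that respondents scoring 9–10 are “promoters”, 0–6 are “detractors”, and 7–8 are “passives”; the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the ratings list is made up for illustration):

```python
def net_promoter_score(ratings):
    """Return NPS as a percentage from a list of 0-10 ratings.

    Standard convention: promoters score 9-10, detractors 0-6,
    passives 7-8 (counted in the total but in neither group).
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses: 5 promoters, 2 detractors out of 10.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(round(net_promoter_score(ratings)))  # prints 30
```

The score can range from −100 (all detractors) to +100 (all promoters), which is why it reads as a “quantitative” headline number even though each response is a single ordinal judgement.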

This is how we introduce narrative research.

1. At a typical quantitatively-oriented client we introduce NPS if it is not used already.
2. We ask the senior business managers (usually a group of 5-8) to undertake at least 10 market-related conversations each, and ask them to use the NPS questions. 
3. We organise a half-day workshop to review the data, having first transcribed the anecdotes onto stickies, one anecdote per sticky.
4. We place all the stickies on the wall, and ask the team to seek patterns in the data (opportunities for new offers, overall themes for improvement), and to identify dots (opportunities for immediate operational improvement).
5. We ask the team members to start to summarise the opportunities on the chart (‘make your competitors colour blind’) in such a way that they are actionable, practical and pragmatic.
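The pattern/dot sort in steps 4 and 5 is done by people on a wall, not by software, but its logic can be sketched as a simple grouping exercise. The anecdotes and theme tags below are entirely hypothetical:

```python
from collections import defaultdict

# Toy sketch of the workshop sort: each anecdote (one sticky) carries a
# theme tag assigned by the team. Anecdotes and tags are invented.
anecdotes = [
    ("Delivery arrived two days late", "delivery"),
    ("Courier left the parcel in the rain", "delivery"),
    ("Tracking page never updated", "delivery"),
    ("Invoice had the wrong PO number", "billing"),
    ("Support engineer fixed it on the first call", "support"),
]

by_theme = defaultdict(list)
for text, theme in anecdotes:
    by_theme[theme].append(text)

# Themes gathering several anecdotes suggest patterns (new offers,
# systemic improvement); one-off anecdotes are dots (immediate fixes).
patterns = {t: notes for t, notes in by_theme.items() if len(notes) > 1}
dots = {t: notes for t, notes in by_theme.items() if len(notes) == 1}

print("patterns:", list(patterns))  # prints patterns: ['delivery']
print("dots:", list(dots))          # prints dots: ['billing', 'support']
```

The value in the room comes from the arguing over where each sticky belongs; the code only illustrates the distinction between a recurring theme and an isolated operational signal.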

The output can be very powerful indeed, on at least three levels.

1. It demonstrates the power of narrative research (it is holistic and detailed, qualitative and quantitative).
2. It demonstrates to the senior teams the benefits of involvement in the complexity of the marketplace (access to granular information, reduction of cognitive bias, disintermediation).
3. On more than one occasion, this ‘taster’ has led to a Big Idea which has shifted a marketplace (and business performance).

If all goes well, this exercise makes the subsequent adoption of SenseMaker for mass data capture a leap of budget, not a leap of faith.