
Experimental design: the key to unlocking business value with analytics

Willingly or unwillingly, we are all modern-day lab rats. Every time you visit a website, buy something at the supermarket, or ignore an email you received from your favourite retailer, you are being observed, measured, and analysed. As storage and computing power costs continue to decrease at exponential rates, storing and analysing large data sets is becoming increasingly common. Combine this ability with new knowledge in cognitive and behavioural psychology, and you have a powerful way of understanding what works and what doesn’t in the messy, noisy reality of your customers’ world.

By designing experiments with scientific rigour, businesses and organisations (including governments) have found that they can nudge and influence the way we behave.

1. Jawbone data scientists set out to determine how to motivate users of their fitness tracker to increase their step count. Through careful experimentation, they found that users who simply accepted a 'challenge' in the mobile app to be more active during Thanksgiving increased their step count by an average of 1,500 steps.

2. Robert Cialdini, the father of 'the commitment principle' and Opower's chief scientist, wanted to know the best way to encourage customers to reduce their energy consumption. A study that tested multiple interventions found that, as behavioural psychologists had suggested, social pressure provided the best results: simply hanging notices on customers' doors saying that 'most people in the neighbourhood' were conserving energy proved to be the most effective stimulus.

3. Chicago's LearnMetrics founder, Julian Miller, needed to determine which technology platform his company should invest in to provide the best possible educational outcomes for its customers. Through a series of experiments, Julian found that Chromebook laptops were better at engaging students than iPads at a K-12 school in Atlanta (using grades, attendance, and log-on time as proxies for engagement).

The examples above come from a wide range of industries and applications, and yet they share a few critical characteristics: there was a clear, desirable outcome to achieve; multiple options were available to achieve that outcome; and identifying the best option depended on understanding which one would most effectively influence and shape individuals' behaviours.

A new mindset

In cases like this, analytics can often provide the answer. With newer and faster ways to collect information on customer behaviour – from online analytics to geo-tagging of devices and biometric trackers – we are now more capable than ever of collecting information that can inform small but crucial business decisions.

The secret to unlocking business value from analytics, however, does not lie in the use of new technologies. Rather, it depends on a mindset of developing a clear and rigorous approach to the design of experiments.

At the core of experimental design is the quest to identify correlation and causality. A good experiment starts with a hypothesis about the factors that influence an outcome. A sufficiently large number of tests is then run in which, ideally, all factors are held equal except one, which is varied at random. By analysing the recorded outcomes of all the tests, the experimenter can determine how changes in that single factor affect the outcome.

In the real world, it is nearly impossible to ensure all other factors remain constant across tests. The key is to design the experiment so that there is enough variation across those other factors to make analysis of their impact possible and robust.
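To make the idea concrete, here is a minimal sketch in Python of the randomise-and-compare logic described above. All the numbers are simulated for illustration (a step-count scenario loosely inspired by the Jawbone example); none come from a real study.

```python
# A minimal sketch of a randomised experiment, using simulated data:
# subjects are assigned to control or treatment at random, so the only
# systematic difference between the groups is the factor under test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n = 2000  # number of subjects (illustrative)

# Each subject has an underlying daily step count; assignment is random.
baseline = rng.normal(loc=7000, scale=2000, size=n)
in_treatment = rng.random(n) < 0.5   # coin-flip assignment
effect = 500                         # assumed true effect of the nudge
steps = baseline + effect * in_treatment

# A two-sample t-test asks whether the observed difference in means is
# larger than random variation alone would plausibly produce.
t_stat, p_value = stats.ttest_ind(steps[in_treatment], steps[~in_treatment])
print(f"Observed difference: {steps[in_treatment].mean() - steps[~in_treatment].mean():.0f} steps")
print(f"p-value: {p_value:.4f}")
```

Because assignment is random, any factor the experimenter did not think of is, on average, spread evenly across both groups, which is what licenses the causal reading of the difference in means.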

For example, when Jawbone set out to understand the impact of different approaches to increasing step count, its data scientists would need to collect enough information from users of different genders, with different average activity levels, and so forth, to ensure that the results of one test are not influenced (biased) by variations in those other factors.
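One practical way to gain that assurance is a balance check: before reading anything into the results, confirm that the other factors are similarly distributed across test groups. The sketch below uses simulated data and invented column names purely to illustrate the check.

```python
# A sketch of a covariate balance check: factors such as gender and
# baseline activity should look similar across groups if randomisation
# worked; large gaps signal a biased comparison. Data is simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "group": rng.choice(["control", "challenge"], size=2000),
    "gender": rng.choice(["F", "M"], size=2000),
    "baseline_steps": rng.normal(6500, 1500, size=2000),
})

# Compare covariates by group: means and proportions should be close.
print(df.groupby("group")["baseline_steps"].mean())
print(pd.crosstab(df["group"], df["gender"], normalize="index"))
```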

Actionable insights can therefore be derived from experiments that enable identification of causality, not just correlation. To identify causality, an experimenter must be able to answer three key questions: 

1. Are the factors correlated?

The first step in identifying causality is to determine whether factors are correlated or not. When analysing correlation, it is worth looking beyond what the numbers say, and trying to understand whether the observed correlation has a feasible explanation. There are hundreds of cases of ‘spurious correlation’ that would be hard to argue show causality.

For example, the level of US spending on science, space, and technology is almost perfectly correlated, for the years 1999 to 2009, with the number of suicides by hanging, strangulation, and suffocation in America. Can we realistically suggest that one is the cause of the other?
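It is easy to see how such spurious correlations arise: any two series that both trend over the same period will correlate strongly, whether or not they have anything to do with each other. The sketch below uses invented figures (not the real spending or mortality data) to demonstrate the effect.

```python
# A sketch of why a high correlation is weak evidence on its own: two
# unrelated series that both trend over time will correlate strongly.
# The figures below are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
years = np.arange(1999, 2010)

spending = 18.0 + 0.5 * (years - 1999) + rng.normal(0, 0.2, len(years))  # $bn, made up
suicides = 5400 + 180 * (years - 1999) + rng.normal(0, 60, len(years))   # counts, made up

r, p = stats.pearsonr(spending, suicides)
print(f"Pearson r = {r:.3f} (p = {p:.4g})")  # near-perfect, yet no causal link
```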

2. Does one factor happen prior to the other?

For causality to exist, results need to account for when the causal factor occurs relative to the outcome. For example, if a marketer believes that exposing customers to educational videos on the benefits of a product will make them buy more of it, it is not enough to show that people who viewed the video more often also purchased more. Causality requires that they viewed the video PRIOR to making the purchase.

The experimenter must determine whether watching the video really leads to more purchases. Could it be, instead, that people who purchase the product naturally look for instructional videos to get more value out of their purchase? Poor experimental designs do not account for these temporal relationships in the analysis of results.
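In practice, enforcing temporal order often comes down to how event data is joined and filtered. Here is a hypothetical sketch of the video-and-purchase scenario; the table and column names are illustrative, not a real schema.

```python
# A sketch of enforcing temporal order: when testing whether video views
# drive purchases, only views that happened BEFORE the purchase can
# count as potential causes. Column names here are illustrative.
import pandas as pd

views = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "view_time": pd.to_datetime(
        ["2024-01-02", "2024-01-20", "2024-01-05", "2024-01-12"]),
})
purchases = pd.DataFrame({
    "customer_id": [1, 2],
    "purchase_time": pd.to_datetime(["2024-01-10", "2024-01-03"]),
})

# Join views to purchases and keep only views preceding the purchase.
merged = views.merge(purchases, on="customer_id", how="left")
prior_views = merged[merged["view_time"] < merged["purchase_time"]]
print(prior_views.groupby("customer_id").size())  # views that could be causal
```

Note that customer 2's view is excluded because it came after the purchase: counting it would be exactly the reverse-causality mistake described above.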

3. Have all the other factors been accounted for?

In other words, the experimenter needs to be confident that the results she is observing are not due to a bias in the samples from which the results were measured, or to a common factor that drives both metrics. Back to the example of 'spurious correlation' between suicides and investment: could there be a common factor with a causal impact on both metrics (for example, population levels having a direct impact on the total level of government spending, as well as on the absolute number of deaths)?
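A simple way to probe a suspected common cause like population is to remove it from both metrics and see whether the relationship survives. The sketch below simulates this with invented figures: both totals are driven by population, so the correlation collapses once the series are expressed per capita.

```python
# A sketch of testing for a common cause: if population drives both
# total spending and total deaths, correlating per-capita figures
# instead of totals should weaken or remove the relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
population = np.linspace(280e6, 307e6, 11)  # illustrative growth, 1999-2009

# Both totals scale with population, plus independent noise.
spending_total = population * 80 + rng.normal(0, 2e8, 11)
deaths_total = population * 2.5e-5 + rng.normal(0, 50, 11)

r_total, _ = stats.pearsonr(spending_total, deaths_total)
r_percap, _ = stats.pearsonr(spending_total / population,
                             deaths_total / population)
print(f"Correlation of totals:  {r_total:.3f}")   # high, driven by population
print(f"Correlation per capita: {r_percap:.3f}")  # near zero once confounder removed
```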

Delivering true business value from analytics depends on multiple factors. You need to be able to capture and store the relevant information about your customers, their characteristics and their environment.

It also requires the ability to link analytical objectives to business objectives, and it critically depends on translating these technical and business capabilities into a robust design of experiments that provides actionable insights.

About the author

Diego Villaveces
Diego is a thought leader in business analytics with over 20 years of experience across Asia, Latin America, and Australia in financial services, media, the public sector, and utilities. Currently leading the Customer Value Analytics practice of the Digital Services Unit at Capgemini, Diego delivers value to clients by providing actionable, commercially minded, insights-driven recommendations and advice.
