Customer Experience

Data quality management and the difference between data and oil [Part 7 of 10]

“Data is the new oil.” Hardly any speech about the future of the automotive industry in the digital age gets by without this expression (or a similar one).

The data hype has already had a significant impact: three years ago, big data and analytics initiatives were the rare exception, but today a variety of pilot projects are running at all established OEMs. The Volkswagen Group, for instance, has already launched a dedicated data lab to pilot analytics use cases (also read our article on the implementation of Big Data & Analytics). However, looking at the practice of data analytics in the automotive industry, there is a major challenge: the biggest share of the raw material “data” is typically stored within the retail IT system landscape, with very limited access for headquarters (HQs) and national sales companies (NSCs).

Moreover, data is often fragmented across multiple silos and of insufficient quality. Incomplete, outdated, redundant, or simply incorrect datasets are the industry “standard”. Our project experience shows that more than 50% of customer datasets in automotive retail systems don’t meet basic criteria for data quality. This can have severely detrimental effects on the customer experience.
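To illustrate what such basic criteria can look like, here is a minimal sketch in Python. The field names and validation rules (for example, a German-style five-digit postal code) are assumptions for this example, not an industry standard:

```python
import re

# Hypothetical basic data quality criteria for a customer record;
# all field names and rules are illustrative assumptions.
REQUIRED_FIELDS = ["first_name", "last_name", "street", "city", "postal_code"]

def meets_basic_criteria(record: dict) -> bool:
    """Check completeness and simple format validity of one customer record."""
    # Completeness: every required field must be present and non-empty.
    if any(not record.get(field, "").strip() for field in REQUIRED_FIELDS):
        return False
    # Validity: German-style five-digit postal code (illustrative rule).
    if not re.fullmatch(r"\d{5}", record["postal_code"].strip()):
        return False
    # Email is optional, but if present it must at least look like an address.
    email = record.get("email", "").strip()
    if email and not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return False
    return True

customers = [
    {"first_name": "Anna", "last_name": "Meier", "street": "Hauptstr. 1",
     "city": "Berlin", "postal_code": "10115", "email": "anna@example.com"},
    {"first_name": "", "last_name": "Schmidt", "street": "Ring 2",
     "city": "Köln", "postal_code": "5067", "email": "not-an-email"},
]
share_ok = sum(meets_basic_criteria(c) for c in customers) / len(customers)
print(f"{share_ok:.0%} of records meet the basic criteria")  # 50%
```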

The causes of poor data quality

In the past, the systems, processes, and job roles at the retail level were not set up to create and process data as a strategic asset. In the pre-digital age, data was primarily used to support and manage the business processes in sales and after-sales; dedicated data quality management beyond these processes did not take place.

A simple case: in the IT systems, customers are usually assigned to a personal sales employee and locked for access by other employees. If a customer is then served by a sales employee other than their primary contact, a new dataset is created. In extreme cases, we have seen 11 redundant datasets for a single customer in our projects.

Another example of poor data quality is customer addresses: it can be assumed that around 15% of customers change their residence every year. Even if the address was recorded properly at the first customer contact (which is often not the case), roughly 55% of the address data becomes obsolete over an average customer life cycle.
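The arithmetic behind that figure is straightforward: if 15% of customers move each year, then after n years only 0.85^n of the addresses are still current. The sketch below assumes an average customer life cycle of about five years, which reproduces the quoted order of magnitude (the article itself states only the end result):

```python
# Share of addresses that become obsolete over a customer life cycle,
# assuming 15% of customers move per year. The life cycle length is an
# illustrative assumption; the article states only the ~55% end result.
move_rate = 0.15
years = 5  # assumed average customer life cycle
obsolete = 1 - (1 - move_rate) ** years
print(f"Obsolete after {years} years: {obsolete:.0%}")  # ~56%
```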

Poor data quality is massively damaging to business

Insufficient data quality is a significant obstacle to the analytical exploitation of data. In our analytics projects, we observe that up to 80% of the total effort goes into compensating for insufficient data quality. In some cases, the quality is so poor that even expensive processing cannot make up for it.

Beyond limiting big data analytics initiatives, poor data quality often damages the business directly: high costs for undeliverable postal campaigns, very limited email marketing reach (usually fewer than 5% of customers provide an email address to their dealer), high customer churn rates, and negative customer experiences. Last but not least, poor data quality can make business processes extremely inefficient.

To put it simply, poor data quality costs a lot of money on both the revenue and expenditure side.

Data is not a natural resource

Yes, data will shape the upcoming economic era. But unlike oil, data is not simply a given natural resource: it must be generated systematically and optimized continuously to create business value. Even as digital channels gain importance, personal contact at the dealership remains an essential data source in automotive sales and service processes.

If businesses strive to establish data as a strategic “asset”, systems, processes, and roles need to be realigned accordingly. Keeping data quality high should, for instance, be defined as a bonus-relevant KPI and made measurable via the established IT systems. Moreover, there is a wide range of service offerings in the market for optimizing data quality.
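To make the KPI idea concrete, here is a minimal sketch of how such a figure could be computed per dealer from record-level quality checks; the dealer_id field and the grouping logic are assumptions for illustration:

```python
from collections import defaultdict

# Aggregate a record-level quality check into a per-dealer KPI.
# Illustrative sketch: dealer_id and the check function are assumptions.
def data_quality_kpi(customers: list[dict], check) -> dict[str, float]:
    """Return the share of records passing `check` for each dealer."""
    passed, total = defaultdict(int), defaultdict(int)
    for record in customers:
        dealer = record.get("dealer_id", "unknown")
        total[dealer] += 1
        passed[dealer] += check(record)  # bool counts as 0 or 1
    return {dealer: passed[dealer] / total[dealer] for dealer in total}

# Example usage, tying the KPI to the record-level check sketched earlier:
# kpi = data_quality_kpi(customers, meets_basic_criteria)
# print(kpi)  # e.g. {"dealer_042": 0.48, "dealer_107": 0.61}
```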

Besides algorithms for the automatic detection and merging of redundant datasets, service providers can also correct and enrich data on the basis of external databases (e.g., address directories and relocation information). There is a set of effective ways to tackle data quality challenges, and one thing is certain: an investment in data quality promises a business case that is hard to beat.
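As a rough illustration of the detection step, the following sketch flags candidate duplicates via simple name similarity and matching postal codes. Production-grade matching engines use far more sophisticated techniques (phonetic matching, address normalization, external reference data):

```python
from difflib import SequenceMatcher
from itertools import combinations

# Minimal duplicate-detection sketch: flag record pairs whose names are
# highly similar and whose postal codes match exactly.
def name_similarity(a: dict, b: dict) -> float:
    key = lambda r: f"{r['first_name']} {r['last_name']}".lower()
    return SequenceMatcher(None, key(a), key(b)).ratio()

def find_duplicate_candidates(records: list[dict], threshold: float = 0.85):
    for a, b in combinations(records, 2):
        if a["postal_code"] == b["postal_code"] and name_similarity(a, b) >= threshold:
            yield a, b

records = [
    {"first_name": "Anna", "last_name": "Meier", "postal_code": "10115"},
    {"first_name": "Ana",  "last_name": "Meier", "postal_code": "10115"},
    {"first_name": "Jan",  "last_name": "Braun", "postal_code": "50667"},
]
for a, b in find_duplicate_candidates(records):
    print("Possible duplicate:", a["first_name"], "/", b["first_name"])
```

In practice, the merging step that follows detection would then consolidate the candidate pairs into a single golden record, typically keeping the most recent or most complete values per field.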

About the author

Christoph Lauenstein

Christoph is an expert in Digital Transformation in the manufacturing and automotive industries. As a consultant at Capgemini Consulting, he focuses on the fourth industrial revolution and develops innovative strategies for the next industrial era, both in customer projects and in a joint research project with the Massachusetts Institute of Technology (MIT) Center for Digital Business.
