Over the past few years, most businesses have come to recognize that the ability to collect and analyze the data they generate has become a key source of competitive advantage.
ZF, a global automotive supplier based in Germany, was no exception. Digital startups had begun producing virtual products that ZF did not know how to compete against, and engineers in logistics, operations, and other functions were finding that their traditional approaches couldn’t handle the complex issues they faced. Some company executives had begun to fear they were in for their own “Kodak moment”: a fatal disruption that could redefine their business and erase overnight the advantages they had accumulated over decades.
With automotive analysts forecasting major changes ahead in mobility, they began to think that the firm needed a dedicated lab that focused entirely on data challenges.
But how?
At the time, one of us, Niklas, a data scientist for ZF, was pursuing a PhD part-time at the University of Freiburg. Niklas took the first step and recruited his advisors at the university, Dirk Neumann and Tobias Brandt, to help set up a lab for the company. This gave ZF access to top-notch expertise in data analytics and the management of information systems.
The hardest part was figuring out how the lab would work. After all, industrial data laboratories are a fairly new phenomenon: you can’t just download a blueprint. After a number of stumbles, however, we won acceptance for the lab and identified several best practices that we think are broadly applicable to almost any data lab.
- Focus on the Right Internal Customers
ZF had dozens of departments filled with potentially high-impact data-related projects. Although we were tempted to tackle many projects across the entire company, we realized that to create visibility within a 146,000-employee firm, we had to focus on the most promising departments and projects first.
But how would we define “most promising”? Because the goal of the data lab was to create value by analyzing data, we initially focused on the departments that generated the most data. Unfortunately, this didn’t narrow things down much. Finance, Logistics, Marketing, and Sales, as well as Production and Quality, all produced large amounts of data that could be interesting for data science pilot projects.
However, we knew from experience that the lowest-hanging fruit for high-impact projects in a manufacturing company like ZF would be in Production and Quality. For years, ZF’s production lines had been connected and controlled by manufacturing execution systems (MES) and enterprise resource planning (ERP) systems, but the data they generated had yet to be deeply tapped. We decided, therefore, to begin by concentrating on production issues, such as interruptions, rework rates, and throughput speed, where we could have an immediate impact.
- Identify High-Impact Problems
Next, we selected those projects within Production and Quality that promised the highest-value outcomes. Our experience with the first few projects provided the basis for a project evaluation model that we have continued to refine. The model contained a set of criteria along three dimensions that helped us rank projects; a simple scoring sketch follows the list below.
- The problem to be solved had to be clearly defined. We could not adopt an abstract aim such as “improve production.” We needed a clear idea of how the analysis would create business value.
- Hard data had to play a major role in the solution. And the data had to be available, accessible, and of good quality. We needed to shield the team from being flooded by business intelligence reporting projects.
- The team had to be motivated. We gave project teams independence in choosing how they solved the problems they took on. And while we made the budget tight enough to enforce focus, we made sure that it was not so tight that the team couldn’t make basic allocation decisions on its own. To sustain motivation and enthusiasm, we prioritized projects that could be subdivided into smaller, more easily achieved goals.
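To make the ranking idea concrete, here is a minimal sketch of how an evaluation model along these three dimensions could be expressed in code. The criterion names, the 1–5 scale, the weights, and the example projects are our own illustrative assumptions, not ZF’s actual figures; as noted above, the real model is richer and has kept evolving.

```python
# Illustrative sketch of a project evaluation model along the three
# dimensions described above. The weights and the 1-5 scoring scale
# are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class ProjectScore:
    name: str
    problem_clarity: int   # 1-5: how clearly the business problem is defined
    data_readiness: int    # 1-5: availability, accessibility, and quality of the data
    team_motivation: int   # 1-5: team buy-in and ability to split work into achievable goals

    def total(self, weights=(0.4, 0.4, 0.2)) -> float:
        """Weighted sum across the three dimensions (weights are illustrative)."""
        w_clarity, w_data, w_motivation = weights
        return (w_clarity * self.problem_clarity
                + w_data * self.data_readiness
                + w_motivation * self.team_motivation)


# Example: rank a few hypothetical pilot projects by their weighted score.
candidates = [
    ProjectScore("Reduce line interruptions", 5, 4, 4),
    ProjectScore("Lower rework rates", 4, 5, 3),
    ProjectScore("General BI reporting backlog", 2, 5, 2),
]
for project in sorted(candidates, key=lambda p: p.total(), reverse=True):
    print(f"{project.name}: {project.total():.1f}")
```

In this toy example, a poorly defined business-intelligence reporting request scores below the well-scoped production projects, which mirrors how the criteria above were meant to shield the team from low-impact work.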