Across virtually every discipline, the world today is swimming in data. Its value, however, doesn’t lie in the data itself but in what people do with it―in its interpretation. Surprisingly, this has always been the case, but the fact is so rarely emphasized that many people think of it as a modern challenge.
Consider the skies above us. The sun, moon, stars, and planets have circled overhead since long before humans evolved. And once people appeared on the scene, the relevant motions were there for everyone to see, in what may be the most democratic data set that ever existed, to which nobody enjoyed privileged access, at least until telescopes were invented. Yet for thousands of years humans repeatedly drew the wrong insights from this data. Plainly visible as it was, it provided little value until correctly interpreted by Copernicus.
Even when exploited to the point of providing what some would call “predictive analytics” (pre-Copernican models made it possible to forecast solar eclipses quite well), the lack of a correct interpretation prevented the data from adding value beyond the obvious. Being able to predict the motions of heavenly bodies was of little use beyond, well … predicting the motions of the bodies.
The correct interpretation, on the other hand, is so valuable that it is now part of elementary school curricula (the earth turns about its axis, the moon revolves around the earth, the earth and planets around the sun, with the stars as distant background). Its value resides in its spillover effect, for science as a whole would have been held back for centuries in the absence of Copernican insight, which in turn hinges on how the motions of the various celestial objects are linked to one another.
In the context of HR analytics―an admittedly younger discipline than astronomy―we face similar situations. To be sure, patterns are now visible that were previously hidden, but the value to be drawn from those patterns is far from self-evident. Data becomes valuable only when it informs business decisions through correct interpretations of observed patterns.
Let us look at an example.
We know from an April 2020 survey that two of the three most frequently measured variables in HR analytics are Employee Engagement (mentioned by 90% of respondents) and Mobility & Attrition (mentioned by 83%). Companies usually want engagement to be high and want attrition―at least that of high-performing employees―to be low. But the two variables tend to be tracked and analyzed in isolation. Subtle but rich connections between the two are hard to quantify and therefore all too often overlooked. The problem with this approach is that it is like tracking the motions of two planets while neglecting the way in which those motions are linked.
Engagement―no matter how a company chooses to define it―is usually held to be inversely linked to attrition, yet the nature of this link is frequently glossed over. Six years ago, a global company with 120,000 employees in the networks and telecommunications industry decided to explore it. Its HR analytics team segmented the company’s 2013 “engagement index” data into six groups corresponding to teams with scores of 91-100, 81-90, 71-80, 61-70, 51-60, and below 50 (every team with at least six respondents received a score). Each segment contained several hundred data points, enough for any link to other variables to be statistically detectable.
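The segmentation step is mechanically simple. As a minimal sketch (the team names and scores below are illustrative placeholders, not the company’s data), assigning each scored team to one of the six bands might look like this:

```python
# Hypothetical sketch of the banding step described above.
# Team names and scores are illustrative, not real data.

def segment(score):
    """Map an engagement index (0-100) to one of the six bands."""
    if score >= 91:
        return "91-100"
    if score >= 81:
        return "81-90"
    if score >= 71:
        return "71-80"
    if score >= 61:
        return "61-70"
    if score >= 51:
        return "51-60"
    return "below 50"

# Hypothetical teams that met the six-respondent threshold.
teams = {"team_a": 94, "team_b": 78, "team_c": 47}
bands = {name: segment(score) for name, score in teams.items()}
print(bands)
```

With several hundred teams per band, the same function simply runs over the full list of scored teams.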
The first results were disappointing: engagement levels at the start of the year showed absolutely no correlation with attrition by year end. In hindsight, this shouldn’t have been a surprise, since attrition is driven by many factors, including commuting distance, life stage, job role, compensation level, etc. If the analysis had found a straightforward link with engagement scores, it should have aroused suspicion.
But patience paid off. The following year, the 2014 survey data was processed. Again, the correlation between attrition and engagement was nil. However, due to a business downturn, company-wide attrition had in the meantime increased by a few percentage points. Interestingly, it had not increased by the same amount across all segments. For teams with an engagement index either below 50 or in the 51-60 range, it had jumped by 10 percentage points, a huge change for such a large company. At higher engagement levels, the increase shrank steadily, until for teams with engagement scores of 91-100 attrition was nearly identical to the previous year’s. Moreover, the correlation between engagement level and the increase in attrition was -0.92 (as good as it gets in this sort of analysis).
In summary, while the link between static (single-year) attrition and engagement levels was undetectable, there was a solid, quantifiable link between dynamic (year-over-year) changes in attrition and team engagement levels. Teams with an engagement score above 90 were “protected” against company-wide increases in attrition.
This was the point at which the data became valuable, for it justified continued investment in keeping engagement levels high (or raising them where low). Even more interesting, using rough but widely accepted estimates of the average cost of replacing an employee, the analysis went further to show how many regrettable resignations would have been prevented by boosting every team’s engagement to a score of at least 80. For the first time, a financial value had been placed on what had formerly been a rather “soft” HR metric: employee engagement.
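That final step is a back-of-the-envelope calculation. A minimal sketch under stated assumptions (the headcounts, attrition increases, target figure, and replacement cost below are all illustrative placeholders, not from the analysis):

```python
# Hypothetical sketch of pricing the resignations that higher engagement
# would have prevented. Every number here is an assumption for illustration.

REPLACEMENT_COST = 50_000  # assumed average cost of replacing one employee, USD
TARGET_INCREASE = 2.0      # assumed YoY attrition increase if every team scored 80+

# Hypothetical low-engagement segments: headcount, observed YoY attrition
# increase in percentage points.
segments = {
    "below 50": (2_000, 10.0),
    "51-60": (3_000, 10.0),
    "61-70": (5_000, 7.0),
}

# Resignations attributable to the excess increase over the target level.
prevented = sum(
    headcount * max(increase - TARGET_INCREASE, 0) / 100
    for headcount, increase in segments.values()
)
savings = prevented * REPLACEMENT_COST
print(f"{prevented:.0f} resignations avoided, about ${savings:,.0f}")
```

Crude as such an estimate is, attaching even an approximate dollar figure is what turned engagement from a “soft” metric into a business case.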