Talent Mobility Trends and Data Interpretation Pitfalls
A checklist for the implementation of people analytics
We are constantly bombarded with new trends and buzzwords in the field of international talent management. At the same time, the pressure to build robust business cases and adopt analytics to justify the cost and value of talent mobility initiatives and showcase their impact to management is mounting. It’s challenging to navigate this landscape, where constant benchmarking and embracing popular trends are hard to avoid.
Blindly following every new trend and implementing talent analytics carelessly could create the perception that benchmarking and analytics are misleading or yield limited business value for talent mobility management.
The saying “correlation does not imply causation” highlights the complexities of analyzing new information and working with data. Establishing correlations and recognizing patterns can be deceptive: having a good understanding of the context and employing a rigorous methodology are required to draw accurate conclusions.
This does not mean we should reject all analyses or completely disregard trending topics. Buzzwords and public discussions often shed light on current issues — but they don’t always tell the whole story. Sometimes, it’s necessary to challenge assumptions and rephrase questions when discussing mobility issues with management and peers.
Here is a (non-exhaustive) data/trend interpretation checklist for a quick sanity check.
Reverse Causality
Issue: Confusing the cause and the effect.
Example: Are employees successful because they go on assignments, or are assignments mainly given to already successful employees?
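This selection effect can be made concrete with a small simulation. The sketch below uses purely hypothetical numbers: performance scores are drawn at random, and assignments are then handed out mostly to the top quartile. The assignment itself has zero effect in this model, yet assignees still look more successful on average.

```python
import random

random.seed(42)

# Hypothetical baseline performance scores; the assignment has NO causal
# effect on performance in this toy model.
employees = [random.gauss(50, 10) for _ in range(10_000)]

# Selection rule (assumption): top-quartile performers are far more
# likely to be chosen for an assignment.
cutoff = sorted(employees)[int(0.75 * len(employees))]
assigned, not_assigned = [], []
for score in employees:
    if score >= cutoff and random.random() < 0.8:
        assigned.append(score)
    else:
        not_assigned.append(score)

avg = lambda xs: sum(xs) / len(xs)
print(f"average performance, assignees:     {avg(assigned):.1f}")
print(f"average performance, non-assignees: {avg(not_assigned):.1f}")
# Assignees score higher on average even though going on assignment
# changed nothing here: the gap comes entirely from who was selected.
```

Comparing assignees with non-assignees without accounting for how assignees were selected will therefore overstate the impact of mobility programs.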
Oversimplification and Complex Events
Issue: Assuming that changing one aspect of an assignment package significantly impacts assignee decisions and satisfaction.
Example: Assignee decisions are influenced by a complex interplay of factors, including pay, career, organizational dynamics, and personal matters. Consider this broader context and do not discuss an allowance or a benefit in isolation from it.
Generalization
Issue: Making assumptions about employee groups based on observations, which may lack universal or lasting value.
Example: Assuming that the generational attributes frequently mentioned in articles and HR groups are stable over time and geography, unique to one generation, distinct from other factors (such as the simple opposition between young and old), and directly applicable can lead to flawed policies. Organizations should conduct their own analyses based on their specific employee groups and monitor the evolution of expectations on an ongoing basis.
False Attribution
Issue: Attributing success solely to individuals, when it may result from team efforts or broader circumstances, misleads about the real success factors.
Example: Giving credit to team leaders or assignees for achievements when it is the team members who did the real work. Setting performance metrics and measuring assignment success is a delicate exercise. Organizations increasingly mix individual goals with team and unit performance metrics.
Beginning with the End in Mind
Issue: Preconceived goals can bias analysis results, leading to wishful thinking. The analysis becomes a mere validation of a decision already taken.
Example: Excessive claims are made about the willingness of employees to trade pay for a sense of purpose, or about the growing willingness of assignees to move where the company wants them to be. In reality, even if a sense of purpose is a strong motivation, there is little evidence that it fully replaces competitive pay — and preferences can change rapidly in times of financial hardship. Global mobility is increasingly driven by employees asking to move, but there is little correlation between where these employees want to move and where companies need them to be.
Unwittingly Integrating Biases in the Analysis
Issue: Unconscious biases can taint analysis, even with rigorous processes.
Example: Success criteria based on a specific group’s past actions may disadvantage others. This is fuelling a debate about the real impact of analytics and AI on diversity progress.
Signal Independence
Issue: Respondents may influence each other, which can undermine survey validity: you capture the buzz but not the real issues. Make sure you collect feedback from diverse sources with independent opinions.
Example: When trying to capture the opinions of employees or crowdsource information, remember that assignees and mobility stakeholders influence and talk to each other. This is also valid for discussions within the “mobility industry” where people tend to echo and amplify the opinions of their peers without thoroughly double-checking all the facts behind the buzz.
Inadequate Samples
Issue: Small samples can lead to faulty generalizations and exaggerated trends. Results from small samples tend to have more variations and display more extreme patterns than larger ones.
Example: Effective benchmarking requires a thorough analysis of the survey samples. All too often, conclusions are drawn from a small number of responses or from minor variations caused by changes in survey participation over the years.
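The small-sample effect is easy to demonstrate. In the hypothetical sketch below, every employee has the same true 50% chance of reporting satisfaction; the only thing that varies is the survey size. Small samples produce "extreme" results (under 30% or over 70% satisfied) far more often than large ones.

```python
import random

random.seed(0)

def extreme_rate(sample_size: int, surveys: int = 2_000) -> float:
    """Share of simulated surveys whose satisfaction rate looks extreme
    (below 30% or above 70%), given a true rate of exactly 50%."""
    extremes = 0
    for _ in range(surveys):
        satisfied = sum(random.random() < 0.5 for _ in range(sample_size))
        share = satisfied / sample_size
        if share < 0.3 or share > 0.7:
            extremes += 1
    return extremes / surveys

for n in (10, 50, 500):
    print(f"sample of {n:>3}: extreme result in {extreme_rate(n):.0%} of surveys")
```

A "striking" 70%+ result from a dozen respondents is thus entirely compatible with no real trend at all, which is why sample sizes should be reported alongside benchmark findings.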
The Dark Side of Storytelling
Issue: Storytelling is important to communicate results, but not all data tells a story. Perceived or real expectations from management might lead professionals to look for "a story" (i.e. something to change) anyway, even if it is based on minor or irrelevant points.
Example: New managers can be tempted to make their mark by taking new approaches and consultants are expected to provide suggestions to fix issues. This could lead to change for the sake of change and could trigger “change fatigue” among employees — a cause of disengagement and one of the reasons some changes are not effectively implemented.
One-time Results and Lack of Persistence
Issue: One-time exceptionally good or bad performances tend to be followed by average ones when luck or isolated, non-recurring circumstances were involved — a phenomenon called “regression to the mean”. Persistence over time is the mark of a meaningful trend; be cautious about interpreting a one-time exceptional performance as the result of new measures.
Example: After a one-off poor performance by some assignees, management sets up a new process or new goals, and results improve the following year. Management assumes the new measures are effective. In reality, the one-time poor performance was due to specific circumstances; when those disappeared, performance returned to its long-term average.
This is also a reminder that success during an assignment is not a predictor of success for a new assignment in different circumstances. Assumptions about success predictors for assignees should be determined with great caution.
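Regression to the mean can also be simulated directly. In this hypothetical sketch, each assignee's yearly score is a stable skill component plus independent year-to-year noise, and nothing changes between the two years. Yet the worst performers of year 1 "improve" in year 2 with no intervention at all.

```python
import random

random.seed(1)

# Toy model: yearly score = stable skill + independent yearly noise.
# No process change happens between year 1 and year 2.
skills = [random.gauss(70, 5) for _ in range(5_000)]
year1 = [s + random.gauss(0, 10) for s in skills]
year2 = [s + random.gauss(0, 10) for s in skills]

# Pick the 10% worst performers of year 1 and check them again in year 2.
order = sorted(range(len(year1)), key=lambda i: year1[i])
worst = order[: len(order) // 10]

avg = lambda xs: sum(xs) / len(xs)
print(f"worst group, year 1: {avg([year1[i] for i in worst]):.1f}")
print(f"same group,  year 2: {avg([year2[i] for i in worst]):.1f}")
# The group improves on its own: year 1 captured unlucky noise that
# simply does not repeat in year 2.
```

Before crediting a new process with such an improvement, it is worth checking whether the same rebound would have happened anyway, for example by comparing against a group that was not subject to the new measures.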