Quantifying Human Resources - Clotilde Coron
Description

Since the late 20th Century, Human Resources (HR) has had a legal obligation to produce reports for management in most firms. However, these have long been considered restrictive and are seldom used to improve decision-making. More recently, the emergence of analytics, Big Data and algorithms has enabled a reconfiguration of the uses of quantification in HR. Accompanied by empirical examples, this book presents and defines the different tools and uses of quantification in HR. It studies the effect of these tools on decision-making and, without subscribing to the myth of objective and rational quantification, presents the contributions and limits of the use of data in HR, and analyzes the potential risks of excessive quantification. It also discusses the appropriation of these tools by the various players in a company and examines their effects on the position of HR.


Number of pages: 397

Year of publication: 2020




Table of Contents

Cover

Acknowledgments

Introduction

1 From the Statisticalization of Labor to Human Resources Algorithms: The Different Uses of Quantification

1.1. Quantifying reality: quantifying individuals or positions

1.2. From reporting to HR data analysis

1.3. Big Data and the use of HR algorithms

2 Quantification and Decision-making

2.1. In search of objectivity

2.2. In search of personalization

2.3. In search of predictability

3 How are Quantified HR Management Tools Appropriated by Different Agents?

3.1. The different avatars of the link between managerial rationalization and quantification

3.2. Distrust of data collection and processing

3.3. Distrust of a disembodied decision

4 What are the Effects of Quantification on the Human Resources Function?

4.1. Quantification for HR policy evaluation?

4.2. Quantifying in order to legitimize the HR function?

4.3. Quantification and the risk of HR business automation

5 The Ethical Issues of Quantification

5.1. Protection of personal data

5.2. Quantification and discrimination(s)

5.3. Opening the “black box” of quantification

Conclusion

References

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1. The characteristics of HR commensuration

Chapter 2

Table 2.1. The influences of the myth of objective quantification on perceived j...

Chapter 4

Table 4.1. The appropriation of monitoring indicators

Conclusion

Table C.1. The functional, structural and procedural dimensions of quantificatio...

List of Illustrations

Chapter 4

Figure 4.1. From selective policy appropriation to selective management tool app...

Figure 4.2. The staircase model (sources: Le Louarn 2008; Cossette et al. 2014)

Conclusion

Figure C.1. Summary of the work

Figure C.2. Theoretical framework for analyzing HR quantification


Technological Changes and Human Resources Set

coordinated by

Patrick Gilbert

Volume 2

Quantifying Human Resources

Uses and Analyses

Clotilde Coron

First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.wiley.com

© ISTE Ltd 2020

The rights of Clotilde Coron to be identified as the author of this work have been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2019957535

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-446-9

Acknowledgments

I would like to warmly thank all the people working at IAE Paris, both the administrative staff and the teacher-researchers, for the stimulating working atmosphere and exchanges. In particular, I would like to thank Patrick Gilbert for his trust, support and wise advice.

My gratitude also goes to Pascal Braun for his attentive review and enriching remarks.

Finally, I would like to thank the team at ISTE, without whom this book would not have been possible.

Introduction

This book arises from an initial observation: quantification has gradually invaded all modern Western societies, and organizations and companies are not exempt from this trend. As a result, the human resources (HR) function is increasingly using quantification tools. However, quantification raises specific questions when it concerns human beings. Consequently, HR quantification gives rise to a variety of approaches, in particular: an approach that values the use of quantification as a guarantee of objectivity, of scientific rigor and, ultimately, of the improvement of the HR function; and a more critical approach that highlights the social foundations of the practice of quantification and thus challenges the myth of totally neutral or objective quantification. These two main approaches make it possible to clarify the aim of this book, which seeks to take advantage of their respective contributions to maintain a broad vision of the challenges of HR quantification.

I.1. The omnipresence of quantification in Western societies

In The Measure of Reality, Crosby (1998) describes the turning point in Medieval and Renaissance Europe that led to the supremacy of quantitative over qualitative thinking. Crosby gives several examples illustrating how widespread this phenomenon was in various fields: the invention and diffusion of the mechanical clock, double-entry accounting and perspective painting, for example. Even music could not escape this movement of “metrologization” (Vatin 2013). It became “measured”, rhythmic and obeyed quantified rules. Crosby goes so far as to link the rise of quantification to the supremacy that Europeans enjoyed in the following centuries.

The author reminds us that the transition to measurement and the quantitative method was part of a very important change in mentality, and that the deeply rooted habits of a society dominated by quantification today make us partly blind to the implications of this upheaval. Crosby gives several reasons for this upheaval. First, he evokes the development of trade and the State, which manifested itself in two emblematic places, the market square and the university, and then the renewal of science. But above all, he underlines the importance attached to visualization in the Middle Ages. According to him, the transition from oral to written transmission, whether in literature, music or account books, and the appearance of geometry and perspective in painting, accompanied and catalyzed the transition to quantification, which became necessary for these different activities: tempo and pitch measurement to write music, double-entry bookkeeping to keep account books and the calculation of perspectives are all ways of introducing quantification into areas that had not previously benefited from it.

Supiot (2015, p. 104, author’s translation) also notes the growing importance of numbers, particularly in the Western world: “It is in the Western world that expectations of them have constantly expanded: initially objects of contemplation, they became a means of knowledge and then of forecasting, before being endowed with a strictly legal force with the contemporary practice of governance by numbers.” Supiot thus insists on the normative use of quantification, particularly in law and in international treaties and conventions, among others. More precisely, he identifies four normative functions conferred on quantification: accountability (an illustration being the account books that link numbers and the law), administration (knowing the resources of a population to be able to act on them), judging (the judge having to weigh up each testimony to determine the probability that the accused is guilty) and legislation (using statistics to decide laws in the field of public health, for example the 18th-Century practice of preventive inoculation against smallpox, which could reduce the disease as a whole but prove fatal for some of those inoculated).

I.2. The specific challenges of human resources quantification: quantifying the human being

Ultimately, these authors agree on the central role of quantification in our history and in our societies today. More recently, the rise in the amount of available data has further increased the importance of this role, and has raised new questions, leading to new uses and even new sciences: the use of algorithms in different fields (Cardon 2015; O’Neil 2016), the rise of social physics that uses data on human behavior to model it (Pentland 2014), the study of social networks, etc.

Organizations are no exception to this rule: quantification is a central practice in organizations. Many areas of the company are affected: finance, audit, marketing, HR (human resources), etc. This book focuses on the HR function. This function groups together all the activities that enable an organization to have the human resources (staff, skills, etc.) necessary for it to operate properly (Cadin et al. 2012). Thus, it brings together recruitment, training, mobility, career management, dialog with trade unions, promotion, staff appraisal, etc. In other words, it is a function that manages the “human”, insofar as the majority of these missions are related to human beings (candidates during recruitment, employees, trade unionists, managers, etc.). HR quantification actually covers a variety of practices and situations, which we will elaborate on throughout the book:

– quantification of individuals: measurement of individual performance, individual skills, etc. This practice, the stakes of which are specified in Chapters 1 and 2, can be identified during decisions regarding recruitment, salary raises and promotion, for example;

– work quantification: job classification, workload quantification, etc. This measure does not concern human beings directly, but rather the work they must do. Chapters 1 and 2 will examine this practice at length;

– quantification of the activity of the HR function: evaluation of the performance of the HR function, the effects of HR policies on the organization, etc. This practice, which is discussed in detail in Chapter 4, becomes all the more important as the HR function is required to prove its legitimacy.

These uses may seem disparate, but it seemed important to us to deal with them jointly, as they overlap on a number of issues. Thus, their usefulness for the HR function, or their appropriation by various agents, constitutes transversal challenges. In addition, in these three types of practices, quantification refers to the human being and/or their activities. However, the possibility of quantifying the human and human activities has given rise to numerous methodological and ethical debates in the literature. Two main positions can be identified. The first, which is the basis of the psychotechnical approach, seeks to broaden the scope of what is measurable in human beings: skills, behaviors, motivations, etc. The second, resulting from different theoretical frameworks, criticizes the postulates of the psychotechnical approach and considers on the contrary that the human being is never reducible to what can be measured.

The psychotechnical approach was developed at the beginning of the 20th Century. It is based on the idea that people’s skills, behaviors and motivations can be measured objectively. As a result, the majority of psychotechnicians’ research focuses on measuring instruments. They highlight four qualities necessary to make a good measuring instrument: standardization, ranking power, fidelity (reliability) and validity (Huteau and Lautrey 2006). Standardization refers to the fact that all subjects must take exactly the same test (hence the importance of formalizing the conditions for taking the test, for example). Similarly, the marking of the test must leave as little margin as possible to the marker. The stated objective of formalization is to make the assessment as objective as possible, avoiding having the test results influenced by the test conditions or the assessor’s subjectivity. Next, the test must make it possible to differentiate individuals, in other words to rank them, usually on a scale (e.g. a rating scale). This characteristic implies having items whose difficulty is known in advance, with a variation in the levels of difficulty. Indeed, items that are too easy, passed by the vast majority of individuals, discriminate just as poorly as items that are too difficult, passed by very few individuals. As a result, psychotechnicians recommend that items of varying levels of difficulty be mixed in the same test in order to achieve a more differentiated ranking of individuals. Fidelity refers to the fact that test results must be stable over time: individual results are influenced by random factors, such as the individual’s form on the day, and the objective is to minimize this randomness. Finally, validity refers to the fact that the test must contribute to an accurate diagnosis or prognosis, one that is close to reality. This is called the “predictive value” of the test.
This predictive value can be assessed by comparing the results obtained on a test with the actual situation that follows: for example, comparing a test-based ranking of the applications received for a position with the scores that the successful candidates later obtain on individual assessments, so as to infer the match between the test used for recruitment and the skills of candidates in real situations. Two typical examples of this approach are the measurement of the intelligence quotient (IQ) and the measurement of the g factor (Box I.1).
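As a simple illustration of how such a predictive-value check might be computed (a sketch with invented numbers, not a method described in the text), one can correlate candidates' ranks on a recruitment test with their ranks on later job-performance assessments, for instance via a Spearman rank correlation:

```python
def ranks(values):
    """Rank values from 1 (smallest) to n, averaging tied blocks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of equal values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical recruitment test scores and later on-the-job ratings.
test_scores = [62, 75, 48, 90, 70]
job_ratings = [3.1, 3.8, 2.5, 4.6, 3.5]
print(round(spearman(test_scores, job_ratings), 2))  # 1.0: rankings agree perfectly
```

A correlation close to 1 would suggest that the test ranking anticipates later performance well; values near 0 would suggest little predictive value.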

The psychotechnical approach is therefore very explicitly part of an approach aimed at measuring the human being and demonstrating the advantages of such a measurement. Thus, psychotechnical work emphasizes that measurement allows for greater objectivity and better decision-making, provided that three assumptions hold (McCourt 1999). First, a good evaluation is universal and impersonal. Second, it must follow a specific procedure (the psychotechnical procedure). Third, organizational performance is the sum of individual performances.

Box I.1. Two incarnations of the psychotechnical approach: the IQ test and the theory of the g factor (sources: Gould 1997; Huteau and Lautrey 2006)

IQ tests are probably the most widely known tests of human intellectual ability among the general public. There are actually two definitions of IQ: an index of the speed of intellectual development (IQ-Stern) and an index of positioning within a group (IQ-Wechsler). IQ-Stern depends on the age of the individual and measures the intellectual development of children. The IQ-Wechsler, defined in the late 1930s, is not a quotient, despite its name, but a device for calibrating individuals’ scores on an intellectual test. For example, an IQ of 130 corresponds to the 98th percentile (98% of the population scores below 130), while an IQ of 115, one standard deviation above the mean, corresponds to roughly the 84th percentile (about 84% of the population scores below 115). There are many debates about IQ tests. In particular, their opponents point out that the tests measure only one form of intelligence, or that test results may depend to a large extent on educational inequalities, which makes them of little use in formulating educational policies.
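As a numerical check on this percentile calibration, the figures can be reproduced under the standard convention (an assumption of this sketch, not stated in the box) that Wechsler IQ scores are scaled to a normal distribution with mean 100 and standard deviation 15:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Share of the population expected to score below `iq`, assuming
    scores are calibrated to a normal(mean, sd) distribution."""
    z = (iq - mean) / sd
    # Normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(100 * iq_percentile(130)))  # 98: IQ 130 sits at the 98th percentile
```

The same function shows why calibration, rather than any absolute "quantity" of intelligence, drives the interpretation: changing the assumed mean or standard deviation changes every percentile.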

Less well known to the general public, Spearman’s theory of the g factor is based on the observation that the results of the same individual on different intelligence tests are strongly correlated with each other, and infers that there is a common factor of cognitive ability. The challenge is therefore to measure this common factor. Multiple models were thus proposed during the 20th Century.
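The statistical observation behind the g factor, that scores on different tests are positively correlated and so seem to share a common factor, can be sketched with simulated data. The principal-component extraction below is a simplified stand-in for Spearman's factor analysis, and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 200 people: one latent ability plus test-specific noise,
# so the five test scores are positively correlated (invented data).
latent = rng.normal(size=200)
scores = np.column_stack([latent + 0.8 * rng.normal(size=200) for _ in range(5)])

corr = np.corrcoef(scores, rowvar=False)    # 5x5 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()         # variance captured by the 1st component
loadings = eigvecs[:, -1]                   # each test's "loading" on that component

print(f"first component explains {share:.0%} of the variance")
```

Because all the tests load on the same latent ability, a single component dominates; the critiques cited above point out that this dominance is a property of the test battery and sample, not proof of a context-free general intelligence.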

The second stance takes the opposite approach to this one by demonstrating its limits. Several arguments are put forward to this effect. The first challenges the notion of objectivity by highlighting the many evaluation biases faced by the psychotechnical approach (Gould 1997). These evaluation biases constitute a form of indirect discrimination: an apparently neutral test actually disadvantages some populations (women and ethnic minorities, for example). For example, intelligence tests conducted in the United States at the beginning of the 20th Century produced higher average scores for whites than blacks (Huteau and Lautrey 2006). These differences could be interpreted as hereditary differences, and could have contributed to racist theories and discourse, whereas in fact they illustrated the importance of environmental factors (such as school attendance) for test success, and thus showed that the test did not measure intelligence independently from a social context, but rather intelligence largely acquired in a social context (Marchal 2015). Moreover, this type of test, like craniometry, is based on the idea that human intelligence can be reduced to a measurement, subsequently allowing us to classify individuals on a one-dimensional scale, which is an unproven assumption (Gould 1997).

The second argument criticizes the decontextualization of psychotechnical measures, even though many individual behaviors and motivations are closely linked to their context (e.g. work). This argument can be found in several theoretical currents. Thus, sociologists, ergonomists and some occupational psychologists argue that the measurement of intelligence is all the more impossible to decontextualize since intelligence is also distributed beyond the limits of the individual: it depends strongly on the people and tools the individual draws on (Marchal 2015). Moreover, as Marchal (2015) points out, work activities are “situated”, i.e. it is difficult to extract an activity from the context (professional, relational) in which it is embedded. This criticism is all the more valid for tests aimed at measuring a form of generic intelligence or performance that is supposed to guarantee superior performance in specific areas. The g factor theory (Box I.1) is an instructive example of this decontextualized generalization, since it claims to measure a generic ability that would guarantee better performance in specific work activities. In practice, the same person, with the same measured g factor, may prove highly effective or, on the contrary, rather ineffective depending on the work context in which he or she is placed.

The third argument questions the ethical legitimacy of the measurement of the individual and highlights in particular the possible excesses of this approach. Thus, the racist or sexist abuses to which craniometry or intelligence tests have given rise are pointed out to illustrate the dangers of measuring intelligence (Gould 1997). In the more specific field of evaluation, many studies have highlighted the harms of the quantified, standardized evaluation of individuals. In particular, Vidaillet (2013) denounces three of them. The first harm of quantified evaluation is that it contributes to changing people’s behavior, and not always in the desired direction. A known example of such a perverse effect is that of teachers who, being scored on the basis of their students’ results on a multiple-choice test, are encouraged either to concentrate all their teaching on the skills needed to succeed on the test, to the detriment of other, often fundamental skills, or to cheat to help their students when taking the test (Levitt and Dubner 2005). The second harm is that it may damage the working environment by accentuating individual differences in treatment, thus increasing competition and envy. The third harm is that it substitutes an extrinsic motivation (“I do my job well because I want a positive evaluation”) for an intrinsic motivation (“I do my job well because I like it and I am interested”). However, extrinsic motivation may reduce the interest of the work for the person and therefore the intrinsic motivation: the two motivations are substitutable, not complementary.

Finally, the fourth argument emphasizes that, unlike objects and things, human beings can react and interact with the quantification applied to them. Thus, Hacking (2001, 2005) studies classification processes and more particularly human classifications, i.e. those that concern human beings: obesity, autism, poverty etc. He then refers to “interactive classification”, in the sense that the human being can be affected and even transformed by being classified in a category, which can sometimes lead to changes in category. Thus, a person who is entering the “obese” category after gaining weight may, due to this simple classification, want to lose weight and may therefore leave the category. This is what Hacking (2001, p. 9) calls the “loop effect of human specifications”. He recommends that the four elements underlying human classification processes (Hacking 2005) be studied together: classification and its criteria, classified people and behaviors, institutions that create or use classifications, and knowledge about classes and classified people (science, popular belief, etc.). Therefore, the possibility of quantifying human beings in a neutral way comes up against these interaction effects.

Ultimately, the confrontation between these two stances clearly shows the questions raised by the use of quantification when it comes to humans, notably in HR: is it possible to measure everything when it comes to human beings? At what price? What are the implications, risks and benefits of quantification? Can we do without quantification?

I.3. HR quantification: effective solution or myth? Two lines of research

In response to these questions on the specificities of human quantification, two theoretical currents can be identified on the use of HR quantification.

One, generally normative, tends to consider quantification as an effective solution to improve HR decision-making, whether in recruitment or other areas. This approach thus supports evidence-based management (EBM), in other words management based on evidence which is most often made up of figures and measurements. In the EBM approach, quantification is therefore proof and can cover a multiplicity of objects: quantifying to better evaluate individuals (in line with the psychotechnical approach), or to know them better, or to better understand global HR phenomena (absenteeism, gender equality), all in order to make better decisions. The EBM approach thus considers that quantification improves decision-making, processes and policies, including HR. Lawler et al. (2010) thus believe that the use of figures and the EBM approach have become central to making the HR function a strategic function of the company. For example, they identify three types of metrics of interest in an EBM approach: the efficiency and effectiveness of the HR function, and the impact of HR policies and practices on variables such as organizational performance. More generally, according to the work resulting from this approach, quantification makes it possible to meet several HR challenges. The first challenge is to make the right human resources management decisions: recruitment, promotion and salary increases, for example. The psychotechnical approach already mentioned seems to provide an answer to this first challenge: by measuring individuals’ skills, motivations and abilities in an objective way, it seems to guarantee greater objectivity and rigor in HR decision-making.

The second challenge is to define the right HR policies. Rasmussen and Ulrich (2015) thus give an example where an offshore drilling company uses quantification to define a policy linking management quality, operational performance and customer satisfaction (Box I.2). This example therefore illustrates how quantification can help identify problems and links between different factors in order to define more appropriate and effective HR policies.

Box I.2. Quantification as a source of improvement in the definition of HR policies (source: Rasmussen and Ulrich 2015)

An offshore drilling company commissioned a quantitative study that demonstrated several links and relationships of influence between different factors. First, the study shows that the quality of management (measured through an annual internal survey) influences turnover, on the one hand, and customer satisfaction, on the other hand (measured through the company’s customer relationship management tool). Staff turnover influences the competence of teams (measured according to industry standards) and their safety and maintenance performance (measured using internal company software, with indicators such as falling objects), which also has an impact on customer satisfaction, and is also strongly linked to the team’s operational performance. This study therefore provided the company with evidence of the links between these various factors, which made it possible to define a precise plan of action: improving the quality of management through training and a better selection of managers, and improving team competence through training and increased control, among other things.

Finally, the third challenge is to prove the contribution of the HR function to the company’s performance. As Lawler et al. (2010) point out, the HR function suffers from the lack of an analytical model to measure the link between HR practices and policies, and the organizational performance, unlike the finance and marketing functions for example. To fill this gap, they suggest collecting data on the implementation of HR practices and policies aimed at improving employee performance, well-being or commitment, but also on organizational performance trends (such as increasing production speed or the more frequent development of innovations).

This trend therefore values quantification as a tool to improve the HR function via several factors: more objective decision-making, the definition of more appropriate and effective HR policies and proof of the link between HR practices and organizational performance, which can encourage the company to allocate more financial resources to HR departments.

The other, more critical trend is part of a sociological approach and takes a more analytical look at the challenges of quantification. Desrosières’ work (1993, 2008a, 2008b) founded the sociology of quantification, which focuses on quantification practices and shows how they are socially constructed (Diaz-Bone 2016). This analytical framework is based, among other things, on the concept of conventions, which are interpretative frameworks produced and used by actors to assess situations and decide how to act (Diaz-Bone and Thévenot 2010). The economics of conventions focuses on coordination that allows institutions and values to emerge, and shows how this coordination is based on conventions, which make it possible to share a framework for interpreting and valuing objects, acts and persons, and thus acting in situations of uncertainty (Eymard-Duvernay 1989). The originality of Desrosières’ work lies in mobilizing this concept of convention to analyze quantification operations, which amounts to studying “quantification conventions” (Desrosières 2008a), namely a set of representations of quantification that will make it possible to coordinate behaviors and representations (Chiapello and Gilbert 2013).

Desrosières thus seeks to deconstruct the assumptions that accompany the myths surrounding quantification (the myth of statistics that are ostensibly a transparent and neutral reflection of the world, for example, and that constitute a guarantee of objectivity, rigor and impartiality), in particular by emphasizing the extent to which quantification is based on social constructions, and not on physical or natural quantities. He suggests that statistical indicators should be considered as social conventions rather than measures in the sense of the natural sciences (e.g. air temperature) (Desrosières 2008a). Gould (1997), without claiming to be part of the sociology of quantification, also provides very illuminating illustrations of how quantification can be influenced by social prejudices, making objectivity impossible. In one of his books, Desrosières (2008a) also highlights the extent to which statistics, far from being merely a transparent reflection of the world, create a new way of thinking about it, representing it, measuring it and, ultimately, acting on it. However, his work also focuses on the history of statistics and the dissemination of new methods in the field. Thus, Desrosières (1993) highlights the link between the State and statistics. The latter, historically confined to population counting, has gradually been enriched by new methods and theories (probabilities with the law of large numbers, then econometrics with regression methods, to cite only two examples), which have partially loosened its ties with the State, and have brought it closer to other sciences, such as biology, physics and sociology. In another book, Desrosières (2008b) highlights the developments in modern statistics after the Second World War (reorganization and unification of official statistics, willingness to act on indicators such as the unemployment rate, etc.). These founding works have since been widely adopted by many authors.

Chiapello and Walter (2016), for example, are interested in the dissemination of calculation conventions used in finance. They show that, contrary to a rational ideology that would have the algorithms mobilized in finance be so because they are the most effective and rigorous, this dissemination is sometimes entangled in the power games between different functions or professions in the world of finance. Similarly, Juven (2016) shows that the activity-based pricing policy introduced in French hospitals does not always respond solely to the rational logic of improving hospital performance, but comes from choices and trials and errors that can only be understood by looking at the sociological foundations of the decisions taken (Box I.3). Finally, Espeland and Stevens (1998) focus on the social and sociological processes underlying “commensuration” operations, which make it possible to compare different entities (individuals and positions, for example) according to a common metric.

Box I.3. Example of the introduction of activity-based pricing in French hospitals (source: Juven 2016)

The introduction of activity-based pricing in French hospitals was a long-term process spanning several years. It required, among other things, a quantification of medical procedures and patients: determining how much a particular medical procedure costs and should be remunerated, or how much the management of a particular type of patient costs. However, this statisticalization has been the subject of many controversies between doctors, health authorities and patient associations. These different actors obviously have divergent interests, ranging from reducing hospital costs to improving the management of a specific pathology. This case therefore illustrates the way in which the quantification of reality, far from being a neutral reflection of that reality, proceeds from choices, negotiations and controversies that reveal its sociologically constructed dimension.

Finally, this second trend takes a more critical approach to quantification. While the first trend rests on the idea that quantification can provide objectivity, transparency, neutrality and rationalization, the second questions this vision and these assumptions, and thus, more generally, the contribution of quantification to management.

I.4. The positioning of this work

Our book seeks to provide a nuanced and didactic perspective on the use of HR quantification. It therefore draws on both of these trends in order to reflect, as far as possible, both the advantages and the limitations of quantification. More precisely, we ask what use companies can make of HR quantification, what changes the rise of quantification may represent for HR, and how the various agents involved appropriate these new devices. In parallel, this book attends to the different theoretical and disciplinary currents that help us better understand the challenges of HR quantification.

To do this, this book mobilizes several types of sources and examples. Some of the information used comes from academic work. Another part is based on empirical surveys carried out within companies. These empirical materials are of several kinds: interviews with HR professionals, employees and trade union representatives; participant observation during a professional experience as a Big Data HR project manager; company documents on the use of HR quantification; and quantitative analyses conducted on personnel data.

Thus, this book aims to provide both theoretical and empirical knowledge on HR quantification. Finally, a few semantic clarifications must be added. The concepts of quantification, statistics and measurement are frequently used throughout this book. Quantification corresponds to a very broad set: all the tools and uses producing figures (or quantified data), and the figures thus produced. It therefore includes the concepts of statistics and measurement. The term “statistics” is employed when referring to the scientific and epistemological dimension of quantification, as Desrosières does, for example. Finally, the term “measurement” will be used when discussing the specific activity of quantifying a phenomenon, an object or a reality.

I.5. Structure of the book

The book is divided into five chapters of equal importance.

Chapter 1 seeks to delineate the subject by providing definitions and examples of the three major uses of HR quantification: the statisticalization of individuals and work, reporting and analysis, and Big Data and algorithms. The following chapters take up elements of this introductory chapter, each analyzing them from a different angle, and can therefore be read independently of each other, in the order desired by the reader.

Chapter 2 deals with the issue of decision-making. Indeed, as we have seen, the “EBM” approach sees the benefits of quantification as coming mainly from improving decision-making. Therefore, Chapter 2 examines the paradigms and beliefs that drive this link between quantification and decision-making.

Chapter 3 focuses on the appropriation of the different uses of quantification by the multiple actors involved in HR – managers, employees and trade unions, in particular.

Chapter 4 examines the potential changes introduced by the increasing use of HR quantification, and questions the consequences of these changes for the HR function.

Finally, Chapter 5 deals with the ethical issues of quantification, particularly with regard to the protection of personal data and questions of discrimination.

1From the Statisticalization of Labor to Human Resources Algorithms: The Different Uses of Quantification

Quantification can be used in many HR processes, such as recruitment, evaluation and remuneration (with job classification, for example). In fact, human resources management gives rise to a very wide variety of uses of figures. The first use refers to decision-making concerning individuals (section 1.1), i.e. using quantified information to inform or justify decisions about specific individuals: candidates in recruitment, or employees in career management or remuneration. The second use corresponds to a more general mobilization of figures at the collective level, no longer at the individual level (section 1.2). Historically, this use involved legal reporting and dashboards. It is therefore a question of defining relatively basic indicators and metrics aimed at monitoring or steering a situation (e.g. headcount) or a phenomenon (e.g. absenteeism). However, these basic indicators are not always sufficient, particularly because of the complexity of certain HR phenomena. Absenteeism can certainly be measured and monitored with basic bivariate indicators, but these will not suffice to identify its determinants, and therefore to define appropriate policies to reduce it. As a result, more sophisticated statistical methods have gradually been introduced into the HR field, both in HR research and in business practice: this approach is regularly referred to as “HR analytics”. More recently, the emergence of Big Data and the mobilization of algorithms in different sectors of society have gradually spread to the HR sphere, even if the notion of “Big Data HR” remains vague (section 1.3). This new horizon raises new questions and challenges for the HR function.
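The step from a monitoring indicator to a first analytical result can be sketched in a few lines. The example below is purely illustrative (the data, team names and variables are invented): it first computes a basic descriptive indicator (average days of absence per team), then a Pearson correlation between commute time and absence, the kind of bivariate statistic that begins to point toward determinants rather than merely describing levels.

```python
# Illustrative sketch only: hypothetical records of the form
# (team, commute_minutes, days_absent).
records = [
    ("sales", 15, 2), ("sales", 60, 8), ("sales", 45, 6),
    ("it",    10, 1), ("it",    20, 2), ("it",    70, 9),
]

# 1. Basic monitoring indicator: average days absent per team.
teams = {}
for team, _, absent in records:
    teams.setdefault(team, []).append(absent)
by_team = {t: sum(v) / len(v) for t, v in teams.items()}

# 2. A first "analytics" step: Pearson correlation between commute
# time and absence, hinting at a determinant the team-level
# indicator alone does not reveal.
xs = [r[1] for r in records]
ys = [r[2] for r in records]
n = len(records)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
r = cov / (sx * sy)

print(by_team)
print(round(r, 2))
```

In practice, HR analytics would go further (for example, multivariate regression to control for confounding factors), but the logic of moving from description toward explanation is the same.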

It should be stressed that the boundaries between these different uses are tenuous and shifting, so this distinction remains partly arbitrary and personal. Thus, a dashboard can mobilize figures initially constructed with a view to decision-making about individuals. Likewise, traditional reporting, when particularly rich in cross-tabulations, can be the beginning of a more sophisticated quantitative analysis and produce similar results. Similarly, prediction and personalization algorithms, such as job or training recommendation tools, which we will classify under the category of Big Data and algorithms, are essentially based on statistical analysis tools (correlation, linear or logistic regression, etc.).
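The statistical core of such a recommendation tool can be sketched very simply. The example below is a hypothetical illustration, not a description of any actual system: it recommends to an employee the training courses attended by their most similar colleague, using cosine similarity on course-attendance vectors.

```python
import math

# Hypothetical data: rows are employees, columns are courses
# attended (1) or not (0).
profiles = {
    "emp_a": [1, 1, 0, 0],
    "emp_b": [1, 1, 1, 0],
    "emp_c": [0, 0, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two attendance vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def recommend(target, profiles):
    """Return the course indices attended by the most similar other
    employee but not yet by the target."""
    others = {k: v for k, v in profiles.items() if k != target}
    nearest = max(others, key=lambda k: cosine(profiles[target], others[k]))
    return [i for i, (mine, theirs)
            in enumerate(zip(profiles[target], profiles[nearest]))
            if theirs and not mine]

print(recommend("emp_a", profiles))
```

Despite the “Big Data” label often attached to such tools, the underlying operation is a classical similarity computation of the kind found in any statistics textbook.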

However, this chapter will focus on defining the outlines of these three types of uses, using definitions and examples.

1.1. Quantifying reality: quantifying individuals or positions

The HR function is regularly confronted with the need to make decisions about individuals: recruitment, promotion, remuneration, etc. Under the joint pressure of ethical and legal issues, particularly around non-discrimination, it is also driven to support these decisions as much as possible in order to establish their legitimacy. One response to this search for justification is to mobilize quantified assessments of individuals or of work (Bruno 2015). These operations of statisticalization of the concrete world (Juven 2016), or of commensuration (Espeland and Stevens 1998), aim both to inform decisions and to justify them.

1.1.1. The statisticalization of individuals and work

To give an account of these operations, the focus here is on two types of activity. The first concerns the quantification of individuals and refers, among other things, to tools derived from the psychotechnical approach briefly described in the introduction. The second refers to the quantification of work, which is necessary, for example, to classify jobs and thus make decisions related to remuneration, but which raises just as many questions because of the particular nature of the “work commodity” (Vatin 2013).

1.1.1.1. Different tools for the quantified assessment of individuals

Faced with the need to make decisions at the individual level (which candidate to recruit, which employee to promote, etc.), the HR function has drawn on different types of quantified evaluation tools (Boussard 2009). Some of these tools are partly the result of psychotechnical work, but HR agents do not necessarily master the epistemology of this approach: the tools are often used without real awareness of their underlying methodological assumptions. The adoption of quantified HR assessment tools has been relatively progressive, and two main factors have promoted it (Dujarier 2010). First, the transition to a market economy was accompanied by a division of labor and a generalization of salaried employment, which required reflection on the formation and justification of pay levels and of pay differences within the same company. Second, the practices of selecting and assigning individuals within this division of labor stimulated the quantified assessment of individuals.

Several examples are given here, highlighting both the uses made of these tools by the HR function and the criticisms they have attracted. However, this chapter does not dwell on possible biases, and thus on challenges to the notion of objectivity, as these will be the subject of section 2.1.

Psychological testing is a first example of a quantified assessment tool. It is frequently used in recruitment, and it can have several objectives. First, it can aim to match a candidate’s values with the company’s values; in this case, the test focuses on the values and behaviors of the individual. Second, it may aim to match the personality of a candidate with what the company generally seeks; in this case, the test includes questions on behavior under stress, uncertainty or conflict, for example. Finally, it may aim to match the personality of a candidate with the psychological composition of the team in which the position is to be filled. This variety of uses underlines the fact that implementing and using this type of test requires upstream reflection in order to answer the following questions: what are we trying to measure, and for what purpose?

Once these questions have been answered, the second step is to determine how to measure what we are trying to measure. To this end, the academic and managerial literature provides many scales for measuring different characteristics and attributes of individuals. Finally, once the test has been taken, a last reflection must address how to use its results: to classify individuals, to support the recruitment interview, or to aid decision-making. A characteristic of these tests is that they can lead to a classification of individuals into different profiles that are not necessarily ordered hierarchically. Thus, a test on one’s relationship to authority may distinguish different types of relationship (submission, rebellion, negotiation, etc.) without any one of them being unanimously considered preferable to the others. The preference for one type of profile over the others may depend, for example, on the sector of activity or the type of company: recruitment in the army will probably place a higher value on an obedient profile, unlike recruitment in a start-up or in a company with a flatter hierarchy. Psychological tests are still widely used in recruitment today, although their format and administration methods have changed (Box 1.1).

Box 1.1. Psychological tests and recruitment (source: Piotrowski and Armstrong 2006)

Psychological tests have been used for recruitment purposes since the second half of the 20th Century. However, they evolved at the beginning of the 21st Century, mainly due to the increasing use of the Internet and of tests measuring the fit between a person and a job (person–job fit tests).

In 2006, among the companies in the American Fortune 500 index, 20% used personality tests as part of their recruitment, and 9% used online tests as a pre-recruitment tool. However, these tests are criticized for their lack of standardization and for persistent doubts about their predictive validity.

Personality tests are now generating renewed interest due to the development of “affinity recruitment”, based on the matching models operated by dating platforms such as Meetic or Tinder.

The aptitude, competence or intelligence test is a second frequently used tool, for example in recruitment. Although the distinctions between aptitude, competence and intelligence remain relevant, these tests are grouped in a single category here because they all serve to measure a characteristic of the individual considered useful and relevant for success in a given position. In addition, unlike psychological tests, aptitude, competence or intelligence tests are most often used to rank individuals on a one-dimensional scale. As with psychological tests, however, they require upstream reflection, in this case on the competencies or skills required for successful performance in the role (Marchal 2015). Although theories such as the g factor, or measures such as IQ, outlined in the introduction, assume that a single measure can predict or evaluate a set of cross-cutting competencies, most aptitude and competency tests are designed to correspond to a specific position. However, the division of a position into skills or aptitudes is not without its difficulties (Box 1.2).
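The one-dimensional ranking logic that distinguishes aptitude tests from profile-based personality tests can be illustrated with a toy example (the names and item scores below are invented): each correct item adds to a single score, and candidates are then ordered on that single scale.

```python
# Hypothetical aptitude-test results: 1 = correct item, 0 = incorrect.
answers = {
    "alice": [1, 1, 0, 1, 1],
    "bob":   [1, 0, 0, 1, 0],
    "carol": [1, 1, 1, 1, 1],
}

# A single aggregate score per candidate...
scores = {name: sum(items) for name, items in answers.items()}

# ...yields a one-dimensional ranking, unlike a personality test,
# which assigns non-hierarchical profiles.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The simplicity of this aggregation is precisely what the criticisms discussed below call into question: collapsing a position into a list of scorable items presupposes that the relevant skills can be isolated and counted.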

Box 1.2. The difficult division of a position into skills (source: Marchal 2015)

As early as the 20th Century, some psychotechnicians recommended making an inventory of the skills and abilities needed to hold a position, and suggested conducting extensive in situ analyses. However, this procedure entails significant costs, especially since a rigorous approach requires repeating the analysis each time there is a change, even a minor one, in the organization or in working and employment conditions. In addition, early in situ analyses were very focused on the physical actions performed (by typists, for example), an emphasis that lost its relevance with the tertiarization of employment. Under these two combined effects, the analyses evolved, focusing on behaviors rather than actions, and on identifying behaviors specific to a group of jobs rather than to a single job. In doing so, however, the tests produced from this type of analysis lost their specificity, their accuracy and, ultimately, their predictive validity.

In addition, these analyses and tests have attracted many criticisms. The first type of criticism highlights the many biases to which job analyses are exposed, particularly because of the weight of the person observing the situation. The second type highlights the fact that the same job does not correspond to the same reality in every organizational context: being an engineer or a nurse does not require the same skills or competences in every organization. More precisely, exercising a trade requires not only skills intrinsically linked to that trade, but also cognitive and relational skills linked to the organization. As a result, it becomes illusory to hope to isolate the skills needed by an employment group regardless of organizational context. The third type of criticism stems from the fact that observing work situations does not make it possible to observe skills directly, but only manifestations of skills; moving from the manifestation of a competence to the competence itself requires a translation that is far from obvious.

These criticisms have led to the development of new aptitude tests based on the simulation of work activities. This type of test is used in assessment centers, which are sometimes mobilized in recruitment. The aim is to put candidates in a situation close to the actual working situation. However, once again, the limitations of these methods are regularly highlighted. In particular, they require identifying the most common or important work situations to be tested, which can be difficult depending on the position concerned. They also represent a significant cost, since specific simulations must be defined for each position.

A third tool, used in particular to decide on promotions, is quantified evaluation by the manager or other stakeholders using a grid of criteria. This tool is therefore based on a direct assessment by a third party, but the definition of fairly precise criteria generally seeks to limit the intervention of this third party and the intrusion of their subjectivity into what is supposed to constitute an objective and fair assessment (Erdogan 2002; Cropanzano et al. 2007). Two scenarios can be distinguished according to the number and status of the assessors: a situation where workers are assessed by their manager, and a situation where they are assessed by all the clients with whom they come into contact. Evaluation by the manager is an extremely common situation in organizations (Gilbert and Yalenios 2017). However, it also varies greatly depending on the organizational context: the degree of formalization, the frequency, the criteria and the uses may differ. In terms of formalization, there are companies where the manager conducts an assessment interview with their subordinate without a prior grid, and others where the manager must complete an extremely precise grid on their subordinate, sometimes without this giving rise to any exchange with the person being assessed. In terms of frequency, some companies request annual assessments, others semiannual ones. In terms of criteria, situations where the criteria focus on the achievement of objectives should be distinguished from situations where they concern the implementation of behaviors. Finally, in terms of use, some companies may take managerial evaluation into account in remuneration, others in promotion, others in development, etc. (Boswell and Boudreau 2000).

It should also be recalled that evaluation methods have evolved over time (Gilbert and Yalenios 2017). Thus, the Taylorism of the first half of the 20th Century gave rise to a desire to rate workers on precise criteria relating to their activity and the achievement of objectives, while the human relations school of the second half of the 20th Century valued dialogue, and thus the implementation of appraisal interviews aimed both at evaluating and at establishing an exchange between managers and subordinates (Cropanzano et al. 2007). Evaluation by third parties, and in particular by clients, is a very different but increasingly common situation (Havard 2008), particularly in professions involving contact with third parties (Box 1.3).