The WiW methodological paradigm for HRM research



The results of HRM research do not yield immutable laws; they remain socially, culturally, and historically limited generalisations[1]. Formulating a research program requires not only delimiting the area of research but also specifying the research problem itself and the purpose of the research[2]. The research instruments used in a given case are determined by the research objective and its feasibility.

We study what is observable, measurable, and susceptible to experimentation. Science is based on empirical evidence.

Key terms

All data obtained by asking employees questions are called survey data. All participants, regardless of whether they took part in surveys, experiments, or interviews, are called respondents, because the object of analysis is their reactions (answers).

Results of measuring people can take the form of numbers, in which case we speak of quantitative research/analysis, or of words, which are most often a component of qualitative research/analysis.

Quantitative data are sets of numbers that are subjected to statistical analysis. Qualitative data are sets of words that attempt to describe different visions of the researched phenomenon (reality is in the eye of the beholder); they are subjected to the researcher's interpretative analysis, which may include objectivising elements such as the classification of statements by independent judges or counting the frequency of different phrases.
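One such objectivising element, phrase-frequency counting, can be sketched in a few lines of Python; the open answers and target phrases below are invented for illustration:

```python
from collections import Counter

# Open answers of three respondents (invented) and the phrases we count.
answers = [
    "my boss supports me but the workload is too high",
    "the workload is too high and pay is low",
    "my boss supports me",
]
phrases = ["boss supports", "workload is too high"]

# For each phrase, count in how many answers it occurs.
freq = Counter({p: sum(p in a for a in answers) for p in phrases})
print(dict(freq))  # {'boss supports': 2, 'workload is too high': 2}
```

In a real analysis, the list of phrases would itself come from prior qualitative coding, not from the researcher's guesswork.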

Quantitative research differs from qualitative research in the degree of proceduralization of methods of analysis. The aim of quantitative research is most often the objective testing of hypotheses assuming relations between variables. The aim of qualitative research is most often to identify individual ways of perceiving reality.

Methodological pluralism/eclecticism & pragmatism in the choice of problem

The WiW paradigm rejects both anarchism (accepting arbitrary methods and techniques, drawn even from individual experience) and methodological fundamentalism, in which different research methods cannot be mixed. It agrees with the postulate that research methods in HRM should be applied reflexively, as they are heuristic in nature, which makes full algorithmization impossible. It therefore recommends pluralism, and even methodological eclecticism, which accepts the use of methods drawn from different disciplines and theoretical approaches to solve a research problem[3].

At the stage of selecting the research problem, a pragmatic approach is recommended: if the analysed research problem has no important practical consequences, it is not worth pursuing, and such considerations should be left to the basic sciences.

Specificity of the research object

Methodologists tend to forget that the study of inanimate objects is governed by different laws than the study of people. To make matters worse, we are dealing with "people-by-people" research. The specificity of HRM research lies in the fact that the objects of measurement are people who create meanings: their reactions to stimuli are mediated by their expectations and interpretations, determined to a large extent by the record of their previous experiences. Therefore, in contrast to the natural sciences, in HRM each replication of a study is a success, because the group of surveyed employees, their experience, and the cultural context are always changing.

The objects of analysis in HRM research are mental facts: most often, people's answers (verbal or categorized on numerical scales) to the questions asked. It should be remembered that this type of quantitative data is almost always distorted, as many studies have shown[4]. The model of the question-answer process explains why there is such great variation in respondents' answers.

Answering an evaluative question, e.g., about job satisfaction, requires the activation of various pieces of information contained in long-term memory, in both its semantic part (e.g., what it means to be satisfied) and its episodic part (e.g., recollections of various emotional states). According to the multiple drafts model of consciousness, the recalled information is subject to continuous editing. At no point in this process can it be said that the editing is complete and the final outcome is consciously experienced. At a given moment we recall the worst episodes; an hour later we may recall information that radically changes our judgment. When we are in a good mood, we look for positive aspects of working in the company; when we are in a bad mood, we "look for holes in the whole". Respondents filling in a questionnaire very rarely have ready-made judgments of satisfaction "in their heads". The assumption that we constantly archive different opinions is not very convincing; an alternative assumption is that we construct them on an ongoing basis, when they are needed. We have various general opinions, goals, standards, and attitudes encoded in our minds, with a high capacity to generate further opinions. These are essential for the formation of emotions, because without them it is impossible to give any meaning to the events we encounter. Most of the cognitive representations we ask about (e.g., views about the role of work in life) are not represented in the mind before the evaluation is initiated. Such representations can be described as virtual, because they do not exist before the question is asked.

Our approach differs significantly from the traditional approach of measurement theory, which assumes that the respondent already has a fixed "true" answer (one they would give themselves), so that the primary concern is to minimize measurement error caused by the form of the question or the social context. Every evaluation requires the ability to focus one's attention in order to select information and to omit, or at least block out, information of peripheral importance. In the process of transforming a thought into an utterance, a chain of associations emerges in the mind. Each word, especially an ambiguous one, triggers a sequence of associations that often run in different, even very divergent, directions. Many cognitive schemas encoded in long-term memory are "ready" to interpret such a word. The mind usually sifts through the associations and selects only those related to the thought we want to express. The more accurate this sifting of information, the more effective the next stage of processing, associated with conscious attention, can be. Only a modest fraction of this process can be made conscious, but this does not mean that we cannot take control and turn our attention to different aspects of the issue. In this way, awareness modifies the operation of the filter: we can call up information from long-term memory, and it will filter the incoming information. To sum up, we must be aware that respondents very often do not have a ready answer and form it only when the questions are asked. Very often they do not reproduce their opinions but construct them. What opinion they form depends on which of four opinion-forming strategies they apply: (1) reproducing ready-made judgements, (2) motivated processing, (3) heuristic (simplified) processing, or (4) analytical (detailed) processing.

The information processing strategy chosen is determined by the respondent's cognitive abilities (e.g., level of reflexivity), the state of the organism (overload, mood), and the goals determining the degree of involvement. The choice is also influenced by the characteristics of the object of assessment (degree of familiarity and complexity) and the characteristics of the situation (time pressure, social approval, how costly mistakes are). In surveys, due to time constraints and the absence of costs for making an incorrect judgment, respondents extremely rarely use the analytical strategy. Therefore, we should keep in mind:

(1)   The importance of the psychological realism of the research: it is very important to maintain respondents' engagement, e.g., by offering personalized feedback where possible. The respondent wants to understand not only WHAT is being asked about, but also WHY.

(2)   Respondents do not have ready answers in their heads and must have the right to answer "I don't know" or "not applicable", or to omit the answer. Forcing them to answer can lead to irritation and to random answers to subsequent questions.

(3)   Respondents, if they can, will avoid mental effort: they love to use the middle options of a rating scale, so a scale with an even number of points and a Don't Know (Difficult to Say) option placed outside the rating scale is recommended. Research[5] has shown that the absence of a middle option does not significantly increase the number of Don't Know (Difficult to Say) answers.

Conclusion: Respondents’ answers have different validity and reliability. Sophisticated methods of data analysis are of no use if these data are distorted in various ways.

Scientific concepts and operational definitions

In science, we use the language of observation and the language of theory in parallel. In the language of theory, we use scientific concepts (theoretical constructs, latent variables), e.g., leadership style, need for dominance, or the emotional well-being of an employee, which have to be translated into the language of observation.

The WiW paradigm recognizes that the theoretical constructs under study are natural concepts that cannot be defined in the classical way, by means of necessary and sufficient conditions. The solution to this problem is operationism[6], which assumes that scientific concepts do not capture the essence of things but only specify the scientist's actions, the psychophysical operations needed to define the thing under study.

We use various measurement tools to build indicators. An example would be a set of questions built to measure an employee characteristic. Such sets of questions are called scales (e.g., an Anxiety Scale) or psychological tests, and can be treated as a kind of calibrated tool[7].
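As an illustration of how such a scale yields an indicator, the sketch below scores a hypothetical three-item scale by summing item answers and checks its internal consistency with Cronbach's alpha; the items and the respondents' answers are invented:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Standard Cronbach's alpha; items is a list of per-item answer lists."""
    k = len(items)
    item_vars = sum(variance(it) for it in items)
    totals = [sum(vals) for vals in zip(*items)]  # each respondent's scale score
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Answers of five respondents to three 5-point items (invented data).
item1 = [4, 3, 5, 2, 4]
item2 = [4, 2, 5, 1, 4]
item3 = [3, 3, 4, 2, 5]

# The indicator: each respondent's scale score is the sum of their item answers.
scores = [sum(vals) for vals in zip(item1, item2, item3)]
alpha = cronbach_alpha([item1, item2, item3])
print(scores, round(alpha, 2))
```

In practice, the alpha obtained on one's own sample (not the value reported in a test manual) decides whether the scale can be treated as calibrated for that group.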

The positivist approach[8] to quantitative research assumes that the objects of research are facts, which are presented in the language of variable values. Hundreds of variables and their operationalizations have been described in scientific HRM studies; one can get the impression that the introduction of yet another scientific concept to describe a person is accepted too readily. That is why the researcher must choose the variables under study by describing both the theoretical model of the phenomenon and the measurement model of the theoretical constructs.

The task of the researcher is not limited to registering facts and the laws governing them; it consists in ordering them in theoretical models in such a way that subsequent facts can be predicted on their basis.

Theoretical Models

In HRM, cognition is achieved mainly through model testing rather than observation[9]. Therefore, the first step is to select, based on a literature review, the theoretical variables (scientific concepts) that will be used to model the phenomenon of interest to the researcher.

A theoretical model should be:

-   simple - the fact that reality is complex does not imply that the model should be complex[10],

-   congruent with available scientific facts, unless it is intended to question their interpretation,

-   logical, internally consistent[11],

-   able to generate predictions,

-   empirically verifiable.

A theoretical model that has been confirmed by many studies can be called a theory.

Each model in HRM consists of an a priori part (the assumption that the selected variables are valid and relevant) and a set of hypothetical relationships between variables, which are subjected to precise empirical tests. In addition to the theoretical model, a measurement model must be specified, that is, a way of operationalizing all the variables.

Hypotheses are falsifiable statements about the relationships between the variables specified in the theoretical model.

 

Five types of triangulation

The WiW paradigm recommends five types of triangulation: (1) of methods, (2) of data, (3) of operationalizations, (4) of modes of analysis, and (5) of researchers.

Triangulation of methods

Even in online surveys, we can combine correlational, experimental, and qualitative methods. We analyse numerical answers to closed questions with quantitative methods, and verbal answers to open questions with qualitative methods.

Data triangulation

The availability of population-representative random samples is very limited in the social sciences, since people can be drawn but cannot be forced to participate in surveys. Therefore, in most cases surveys are conducted on convenience samples, consisting of people who have agreed to participate. We increase external validity by replicating studies on different convenience samples, which means testing the same hypotheses on different data sets.

Triangulation of operationalizations

There are no standard operationalizations of variables in HRM. The operationalization of variables should be carefully selected with the specifics of the sample in mind; e.g., the item "I make decisions under time pressure more easily" is a good indicator of low reactivity in a group of young employees, but not among managers. Even if we use standardized, ready-made measurement tools, their psychometric properties should be checked on the sample at hand.

Triangulation of analytical methods

Although quantitative analyses assume the axiological neutrality of science and the non-interference of the researcher, even in highly proceduralised, objectified statistical analyses the researcher has to decide how to "clean" the data set, how to build indicators, what to assume about the level of measurement, and which statistical tests to use. The decision whether to treat a questionnaire score as a continuous or an ordinal variable (e.g., after a median split) may lead to different conclusions. Therefore, the WiW paradigm recommends applying different methods of analysis, such as parametric vs. nonparametric tests, to the same data, with increased trust in results that are robust to the change of statistical test.
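A minimal sketch of such analytic triangulation, using invented data: the same relationship is estimated with a parametric (Pearson) and a nonparametric (Spearman rank) correlation, and conclusions that survive the switch deserve more trust:

```python
def pearson(x, y):
    """Pearson product-moment correlation, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Ranks 1..n; the example data contain no ties, so no tie correction."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks (tie-free case)."""
    return pearson(ranks(x), ranks(y))

# Invented scores of seven respondents on two variables.
satisfaction = [3, 5, 4, 2, 6, 7, 1]
tenure       = [2, 6, 5, 1, 7, 9, 3]

print(round(pearson(satisfaction, tenure), 2),
      round(spearman(satisfaction, tenure), 2))
```

Here the two coefficients are close, so the conclusion about the direction and strength of the relationship is robust to the choice of test; a large discrepancy would be a warning sign (e.g., outliers driving the parametric result).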

Researcher triangulation

When analysing qualitative data (words), researcher triangulation is recommended: the data should be coded by at least two people, independently of each other.
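Agreement between two independent coders can then be quantified, for example with Cohen's kappa, which corrects raw agreement for chance; the sketch below uses invented category codes:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one category per item."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    # Chance agreement: product of each coder's marginal category shares.
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two judges independently coded eight open answers (invented codes).
coder_a = ["pay", "pay", "boss", "tasks", "boss", "pay", "tasks", "boss"]
coder_b = ["pay", "pay", "boss", "boss",  "boss", "pay", "tasks", "pay"]

print(round(cohen_kappa(coder_a, coder_b), 2))
```

Disagreements (here, two of the eight answers) are typically discussed and recoded until an acceptable kappa is reached.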

External and internal validity of research

We increase external validity by using different types of triangulation – in particular, by testing the same hypotheses on different data sets.

Where possible, we should take care to ensure the INTERNAL VALIDITY of the study. Even in surveys we can manipulate the independent variables - that is, we can conduct experimental research by assigning volunteers randomly to different experimental conditions.
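A minimal sketch of such random assignment; participant identifiers and condition labels are invented, and the fixed seed only makes the example reproducible:

```python
import random

def assign(participants, conditions, seed=42):
    """Randomly assign participants to conditions in (near-)equal groups."""
    rng = random.Random(seed)   # fixed seed: reproducible illustration
    shuffled = participants[:]  # copy, so the input list is untouched
    rng.shuffle(shuffled)
    # Walk the shuffled list and alternate over the conditions.
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

groups = assign(["p1", "p2", "p3", "p4", "p5", "p6"],
                ["dominant boss", "partner-like boss"])
print(groups)
```

With six volunteers and two conditions, each experimental group receives exactly three randomly chosen participants, which is what licenses causal interpretation of between-group differences.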

Where possible, in both surveys and interviews, we introduce DESCRIPTIONS of the objects whose evaluation we want to know. For example, when we ask employees for opinions about their boss, we are unable to determine to what extent an opinion results from the employee's perception and to what extent from the objective characteristics of the boss. By asking for evaluations of model descriptions of, e.g., a dominant or a partner-like boss, we can investigate individual differences in the evaluation of the various features on which these descriptions were built.

Quality of Data

Before analysis, data sets should be carefully cleaned of "false" respondents, e.g., those who gave random answers[12]. Standard measurement tools used in research should be checked for their psychometric properties and adapted to the group of respondents studied.
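One simple screening heuristic is flagging straight-liners, i.e., respondents who give (almost) the same answer to every item. The sketch below uses an invented threshold and invented data; it illustrates the idea and is not a complete cleaning procedure:

```python
def is_straightliner(answers, max_share=0.9):
    """Flag a respondent whose most frequent answer covers > max_share of items."""
    top = max(answers.count(v) for v in set(answers))
    return top / len(answers) > max_share

# Ten answers on a 5-point scale for three respondents (invented data).
data = {
    "r1": [3, 4, 2, 5, 3, 4, 2, 5, 1, 3],
    "r2": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],  # suspicious: identical answers
    "r3": [2, 3, 2, 3, 4, 2, 5, 3, 2, 4],
}
flagged = [r for r, a in data.items() if is_straightliner(a)]
print(flagged)  # only r2 is flagged
```

Real cleaning would combine several indicators (completion time, attention-check items, intra-individual response variability) rather than rely on any single rule.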

Quantitative, experimental case studies[13]

Findings on relationships between two or three variables (ceteris paribus) are difficult to apply in practice because of the multidimensionality of reality. Therefore, the WiW methodological paradigm promotes QUANTITATIVE experimental case studies, in which the values of selected variables are manipulated at selected time points and quantitative measurements are made over a long period.



[1] (Sułkowski, 2011)

[2] (Niemczyk, 2011)

[3] (Sułkowski, 2011)

[4] (Wieczorkowska & Wierzbiński, 2013)

[5] (Wieczorkowska & Wierzbiński, 2011)

[6] Bridgman after: (Tatarkiewicz, 1950)

[7] (Brzeziński, 2019)

[8] (Tatarkiewicz, 1950)

[9] (McKelvey & Henrickson, 2002; Czakon, 2011)

[10] As Professor Robert Zajonc used to say.

[11] (Burniewicz, 2021)

[12] (Wieczorkowska & Wierzbiński, 2011; Kabut, 2021)

[13] (Wieczorkowska-Wierzbińska, 2014)
