Abstract
This study proposes a technological dimension of organizational justice and empirically investigates how this dimension takes form in a measure of justice. The work was conducted within the higher education sector in Oman, with data collected from nine universities and higher institutes of learning. Participants span a range of academic ranks (lecturers, assistant professors, associate professors, and full professors), teaching experiences, cultures, and ages. The study used exploratory factor analysis (EFA) of data from 416 survey respondents to test the dimensionality of organizational justice, and Cronbach's alpha to check the reliability of the intended measure. The findings show an interpretable and distinct factor of what one might call "technological justice." The outcomes provide a better understanding of how employees perceive the fairness of using technological tools and the sustainability of that use. This study can help organizations investigate the extent to which employees perceive their workplace to be technologically just, and it can help them reconsider the procedures and policies that secure justice in this domain. The current scholarly endeavor focuses on an area of research that has received little attention compared with research on other dimensions of organizational justice. This new domain adds insights to the literature on organizational justice and enhances knowledge of the dimensionality of justice.
Introduction
Even though the technological revolution has introduced products and services with a significant impact on individuals' and organizations' welfare, it is not surprising to find that some employees have not been granted access to the technological tools they need to do their jobs. One consequence is that such employees perceive that they are unfairly treated by their superiors regarding the use of those tools. This holds for the higher education sector as well: Black and Minority Ethnic (BME) academics experience both institutional and everyday racism (Lander & Santoro, 2017). The belief is that injustice damages the synergistic relationship between the organization and its members. By contrast, justice can enhance the personal satisfaction of employees and secure organizational effectiveness. Justice therefore represents a significant concern for both the organization and its members.
There are a number of theories of organizational justice, but operationalizing its measurement scales has been slow (Jepsen & Rodwell, 2009). Distributive justice grew out of Adams's (1965) equity theory, which states that perceptions of distributive justice result from comparing personal outcome-contribution ratios. Procedural justice, which Thibaut and Walker (1975) developed, grew out of observations of courtroom settings. Within this context, equity theory has been regarded as "among the more useful middle-range theories of organizational behavior" (Weick, 1966, p. 439). Other scholars, however, have not supported this view (e.g., Furby, 1986; Miner, 1984). Miner classified equity theory among the "not so useful" theories of organizational behavior, and Furby argued that the theory has fallen into disfavor because of limitations in its applicability and internal validity (as cited in Greenberg, 1987).
The literature shows an ongoing debate as to how organizational justice takes place in practice, especially regarding the issue of dimensionality (cf. Colquitt, 2001; Jepsen & Rodwell, 2009; Moorman, 1991). Furthermore, the literature includes differing taxonomies of organizational justice theories (see Greenberg, 1987) and calls for more studies based on employees in organizational settings (e.g., Greenberg, 1990). What is hoped for here is to go beyond the well-established theories, as the evolution of workplaces (e.g., the integration of technology in the workplace) demands their revision.
This study investigates organizational justice where technology integration takes place in the higher education sector. Higher education has become increasingly important (Zulfqar, Valcke, Devos, Tuytens, & Shahzad, 2016), and the past decade has seen a continuously increasing internationalization of higher education institutions (Vasilache, 2017). This makes it critical that instructors and authorities concern themselves with developing teaching skills and equitably allocating teaching tools (including technology-influenced teaching tools) to secure a sound teaching process.
The study demonstrates a new form of justice stemming from employees' perceptions of the fairness of using technological tools and the sustainability of that use. The results could help organizations investigate the extent to which employees perceive what one might call "technological justice" at work and reconsider their procedures and policies, thus securing better justice in this domain.
Rationale and Development of the Research Question
Since Rawls’s (1958) publication of Justice as Fairness, there has been an ongoing debate as to whether justice should be treated as a single- or a multidimensional construct. Both distributive and procedural justice are the main forms of organizational justice. The study of organizational justice has generally focused on both employees’ responses to the things they receive (i.e., outcomes) and the means by which they obtain those outcomes (i.e., procedures) (Cropanzano & Greenberg, 1997). The literature, however, has demonstrated other types of organizational justice.
Generally, organizational justice expresses the extent to which employees perceive fairness within an organizational setting. In addition to distributive and procedural justice, this definition also carries with it the other forms of justice that Colquitt (2001) discussed, that is, interpersonal and informational justice. Whereas interpersonal justice refers to employees' perceptions of respect and dignity, informational justice refers to perceptions of how sufficient and truthful the explanations provided to employees are. According to Colquitt, interpersonal and informational justice are independent kinds of justice. However, some studies (e.g., Skarlicki & Folger, 1997; Tata & Bowes-Sperry, 1996) have treated the concept of interactional justice, which Bies and Moag (1986) first introduced, as a third type of justice, while others (e.g., Moorman, 1991; Niehoff & Moorman, 1993) considered it a subset of procedural justice. Beyond that, Jepsen and Rodwell (2009) introduced "procedural voice" as a new factorial component of justice; it refers to employees' desire to express their views and feelings about organizational procedures. Jepsen and Rodwell argued that the number of justice dimensions is not well defined.
Undoubtedly, both equity theory and social exchange theory assume a prominent place when addressing the issue of justice. The literature indicates that equity theory has been given the greatest attention by organizational scholars interested in justice (Greenberg, 1990). Social exchange theory has also become a leading approach to understanding the employee behavior that results from perceptions of justice in the workplace. This theory states that individuals direct their reciprocation efforts toward the source of any benefit they receive. Greenberg (1987) introduced different theories to track trends in organizational justice research, believing that neither equity theory nor social exchange theory is equipped to address all matters of justice.
However, the current study used the principles of organizational justice demonstrated by Hoy and Tarter (2004), together with the proactive and reactive process theories discussed by Greenberg (1987), to conceptually support the current idea of justice and its implications and to formulate its components. This study demonstrates a new form of organizational justice that would help authorities in organizations measure employees' perceptions of the fairness of using technological tools and the sustainability of that use. More specifically, the intended measure includes items expressing the perceptions of academic staff in the higher education sector about the fairness of using technological tools intended for teaching purposes and the extent to which use of those tools can be maintained. Hoy and Tarter (2004) discussed nine principles of organizational justice. The current study finds that the perception principle, the voice principle (i.e., the ability to express one's opinion), and the ethical principle support the new construct of justice and provide a better understanding of its components. Whereas the perception principle refers to an individual's perception of fairness in the workplace and the extent to which that workplace promotes a general sense of justice, the voice principle states that participation in the decision-making process enhances fairness. The ethical principle requires following universal moral and ethical standards. Table 1 shows the source on which each item is based.
Technological Justice Measure Items.
Note. The authors formulated the items.
The intended measure would help authorities predict the reactions of their subordinates to unfair policies or procedures (in this case, reactive process theory provides an explanation, as it expresses individuals' reactions to decision-making procedures). The measure would also help authorities anticipate subordinates' attempts at creating fair policies or procedures (here, proactive process theories work better, particularly the allocation preference theory introduced in Leventhal, 1980). The preceding discussion proposes that the procedures most preferred will be those that help individuals attain their goals and secure justice (see Greenberg, 1987). Eventually, this would enhance the procedures and policies that authorities design according to the perceptions of their subordinates.
In recognition of the importance of justice in the workplace, and in recognition of the likelihood that organizations (e.g., the education sector) have some degree of ongoing technology uptake, a scale measuring the fairness of using technological tools in the workplace from employees’ perspectives would enhance organizational justice. The current study included the Web of Science Core Collection and used the following search term for a precision search for articles and reviews published from 1950 until 2017:
TI = technolog* AND (fairness OR organizational justice OR distributive justice OR procedural justice OR interactional justice OR informational justice OR interpersonal justice OR discrimination).
The study found, however, that this area has received little attention compared with other dimensions of organizational justice. Accordingly, this study aims to answer the following question: How would the technological dimension take form in a measure of justice?
Method
Sample and Data Collection Instrument
The sample includes academic staff of different ranks (full professors, associate professors, assistant professors, and lecturers) working at higher education bodies in Oman (nine universities and higher colleges located in different provinces). The literature includes studies conducted on organizational justice in education (e.g., Ishaq, Munazer, Hussain, Asim, & Cheema, 2012; Niessen, Meijer, & Tendeiro, 2017). The current study distributed a total of 620 questionnaires; 416 responses were valid for statistical analysis, yielding a response rate of 67%, which is considered good for management and behavioral sciences (Babbie, 1995). Respondent characteristics showed that the sample was diverse. In total, 59.6% of the sample were men and 40.4% were women. The majority (60.6%) were between 40 and 49 years old. About 52.2% of the sample had more than a decade of teaching experience, and the majority (57.7%) were lecturers. Interestingly, the majority of respondents (64.9%) were originally from Asia. Table 2 describes the sample in detail.
Sample Characteristics.
The authors initially developed the questionnaire, which expresses the technological dimension of organizational justice as defined by the six items shown in Table 3. A measure is considered to have face validity if the items are reasonably related to the perceived purpose of the measure (Kaplan & Saccuzzo, 2017). A panel of academic staff checked the content of the measure before the full study was launched, after which the wording of the questions was refined. Respondents were asked to express their perceptions of the fairness of using technological tools for teaching and to rate each item on a 5-point scale. A 5-point scale eases the survey process for respondents; the literature indicates no significant difference between 5- and 7-point Likert-type scales and only slight differences between a 5-point Likert-type scale and higher ratings (Dawes, 2008).
Assessing the Appropriateness of Factor Analysis.
Note: All bolded correlation values are significant (p < .001); Kaiser–Meyer–Olkin measure of sampling adequacy (KMO) = 0.812; Bartlett's test of sphericity approximate chi-square = 574.94; df = 15; p < .001.
The Procedures
The current study used exploratory factor analysis (EFA) to test the dimensionality of organizational justice. This approach is a useful way to search for the smaller set of latent factors that represents a larger set of variables (Henson & Roberts, 2006) and to define the underlying structure among the variables in the analysis (Hair, Black, Babin, & Anderson, 2010). The data proved suitable for EFA: the variables are metric, and the sample includes 416 observations. Williams, Onsman, and Brown (2010), citing supporting references in their guide, stated that sample sizes of 300 and 500 are good and very good, respectively. The sample-to-variable ratio (about 69:1 for six items) is also more than adequate. Furthermore, the current study examined the factorability of the six items of the technological dimension. Table 3 shows that all intercorrelations among the variables are significant (p < .001).
This inspection provides an adequate argument for proceeding to the examination of adequacy for factor analysis at the overall level and at the level of each variable. Whereas Bartlett's test of sphericity examines the statistical significance of the entire correlation matrix, the KMO statistic quantifies the degree of intercorrelation among the variables and the appropriateness of factor analysis. Table 3 shows that Bartlett's test of sphericity is significant (p < .05), indicating that sufficient correlations exist among the variables to proceed, and that the KMO value (.812) exceeds the .80 level that Hair et al. (2010) describe as meritorious.
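Both diagnostics can be computed directly from the correlation matrix. The following sketch, using simulated data rather than the study's actual responses, illustrates the standard formulas: Bartlett's statistic is based on the log-determinant of the correlation matrix, and KMO compares zero-order correlations with partial correlations obtained from the inverse correlation matrix (the variable names and simulated sample are illustrative only).

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test: is the correlation matrix significantly
    different from an identity matrix?"""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    R_inv = np.linalg.inv(R)
    # Partial correlations derived from the inverse correlation matrix
    d = np.sqrt(np.diag(R_inv))
    partial = -R_inv / np.outer(d, d)
    np.fill_diagonal(partial, 0.0)
    np.fill_diagonal(R, 0.0)  # keep only off-diagonal correlations
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Simulated example: 416 respondents, 6 items sharing one latent factor
rng = np.random.default_rng(0)
common = rng.normal(size=(416, 1))                  # shared latent factor
items = common + rng.normal(scale=1.2, size=(416, 6))
stat, p = bartlett_sphericity(items)
print(f"Bartlett chi-square = {stat:.2f}, p = {p:.4f}, KMO = {kmo(items):.3f}")
```

Values of KMO above .80 are conventionally read as meritorious, matching the interpretation applied to the study's observed value of .812.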
Although both principal component analysis (PCA) and principal axis factoring (PAF) tend to be the most common ways to extract factors (Henson & Roberts, 2006), the practical difference between these methods is often negligible, especially when the variables have high reliability, as in the present case (Thompson, 1992). The current study used PCA to extract factors, adopting loadings of ≥0.50 as the retention criterion; this reduces the set of variables into factors. As a rule of thumb, loadings of ±0.50 or greater are taken to be practically significant (Hair et al., 2010).
To produce measure unidimensionality and to simplify the factor solutions, the current study also used both the eigenvalue criterion (EV > 1) and the cumulative percentage of variance. Using multiple criteria to determine factor extraction is recommended (Thompson & Daniel, 1996). However, the criteria used to determine the number of factors to retain do not necessarily lead to the same decision (Henson & Roberts, 2006); for this reason, analysis requires careful and thoughtful judgment as to which option fits best and which of the extracted factors make the most conceptual sense (Williams et al., 2010). Whereas there is a measure of consensus that EV must exceed 1, there is no fixed threshold for the cumulative percentage of variance; Williams et al. (2010) cited a reference showing that the explained variance is commonly as low as 50% to 60% in the humanities. It is worth noting that the current study used orthogonal varimax rotation, which Hair et al. (2010) regard as among the most reasonable methods. This rotation technique helps provide more meaningful and interpretable factors with a smaller set of items.
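The extraction logic described above can be sketched in a few lines: PCA-style extraction eigendecomposes the correlation matrix, loadings are eigenvectors scaled by the square root of their eigenvalues, and the Kaiser criterion retains components with eigenvalues above 1. This is a minimal illustration on simulated data, not the study's analysis pipeline; rotation is omitted because it only matters when two or more factors are retained.

```python
import numpy as np

def extract_pca_factors(data, loading_cutoff=0.50):
    """PCA-style extraction: eigendecompose the correlation matrix,
    keep components with eigenvalue > 1 (Kaiser criterion), and
    report loadings and proportion of variance explained."""
    R = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                       # Kaiser criterion (EV > 1)
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    var_explained = eigvals[keep] / len(eigvals)
    salient = np.abs(loadings) >= loading_cutoff  # rule-of-thumb cutoff
    return loadings, var_explained, salient

# Simulated example: 416 respondents, 6 items driven by one latent factor
rng = np.random.default_rng(1)
factor = rng.normal(size=(416, 1))
items = factor + rng.normal(scale=1.0, size=(416, 6))
loadings, var_explained, salient = extract_pca_factors(items)
print("factors retained:", loadings.shape[1])
print("variance explained:", var_explained.round(3))
```

On data with a single strong common factor, this yields one retained component whose loadings all clear the ±0.50 threshold, mirroring the one-factor solution the study reports.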
Discussion
The results yielded one interpretable and distinct factor, titled "technological justice." As shown in Table 4, the factor explains 47.14% of the variance, with EV = 2.828 > 1.
Principal Component Analysis With Varimax Rotation.
Note. Factor loadings of (0.50) and above are adopted.
All items have loadings above 0.5, and none was deleted. The Cronbach's alpha of this factor is .773, an acceptable value (Hair et al., 2010; Nunnally, 1978) that indicates high reliability (Griffee, 2012) and is appropriate for questionnaire-based studies (Reid, 1990). This value means that the items of the current measure reflect and measure the construct of technological justice well; in other words, all six items are attributable to technological justice. The result is consistent with Hair et al. (2010), who noted that analyses with fewer than 20 variables tend to extract a conservative (too small) number of factors. The current study is content with these results, as the items conceptually express the construct of technological justice. As Henson and Roberts (2006, p. 396) noted, "the meaningfulness of latent factors is ultimately dependent on researcher definition."
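For readers unfamiliar with the reliability statistic, Cronbach's alpha is computed as k/(k-1) multiplied by one minus the ratio of summed item variances to the variance of the total score. The sketch below, on simulated 5-point responses rather than the study's data, shows the calculation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    over the variance of the summed total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 5-point responses driven by one latent trait
rng = np.random.default_rng(2)
trait = rng.normal(size=(416, 1))
raw = trait + rng.normal(scale=1.0, size=(416, 6))
responses = np.clip(np.round(raw + 3), 1, 5)    # map onto a 1-5 scale
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Alpha rises with the number of items and with their average intercorrelation, which is why a six-item scale with moderate intercorrelations can reach the .70-plus range reported here.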
Within the context revealed above, technological justice is defined as employees' perceptions of the fairness of using technological tools intended for work purposes and the sustainability of that use. Sustainability here expresses an employee's ability to maintain a certain level of use of those tools. By way of explanation, the first three questions—"Can you access the technological tools intended for teaching purposes?" "Can you consistently access those technological tools during the teaching process?" and "Are those technological tools distributed equitably and in a timely manner among the lecturers during the teaching process?"—can be understood under the equity principle (in the abstract sense) and, more specifically, under the perception principle that Hoy and Tarter (2004) demonstrated. In other words, when individuals perceive that all academic staff have equal opportunities to access the technological teaching tools and that those tools are distributed equitably and in a timely manner, they will be satisfied. The source of their perceptions is usually comparison with colleagues; field experience has shown that employees use a variety of comparisons and that reactions to injustice vary across different comparisons (Scholl, Cooper, & McKenna, 1987).
The measure also addresses a number of issues in the domain of technological justice. Whereas the fourth question—"Are you able to express your views about the modernity of, and the problems of using, those technological tools?"—reveals the extent to which individuals can voice their opinions about the modernity and problems of educational technologies, the last two questions—"Does the university keep pace with new technological tools for teaching purposes?" and "Does the university keep on training its academic staff in how to use the modern technological tools intended for teaching purposes?"—involve the sustainability of technological justice. One can argue that the voice principle of organizational justice best clarifies the fourth question. It is imperative that individuals have a voice about the issues influencing them in the workplace, especially when they experience those issues themselves; educational technologies and the problems resulting from their use are examples of such issues. Having a voice enhances the decision-making process, both because such individuals constitute the main source of concern about those issues and because they may hold information that contributes to good administrative decisions. The issue of voice in decision making, labeled "the voice principle," is categorized as one of the main principles of organizational justice (Hoy & Tarter, 2004). The literature includes studies showing a similar trend; Jepsen and Rodwell (2009), for example, investigated a new construct titled "procedural voice," dealing with employees' desire to express their views about organizational procedures.
Concerning sustainability, technological justice assumes a different trend. The tendency here is to perceive that organizational decisions do not overlook the importance of keeping pace with new technologies and that organizations keep their members trained in the use of such technologies. The source of such perceptions might be identified through comparison with those working in other organizations that adopt educational technologies in teaching; individuals use a variety of standards of comparison, including standards external to the organization (Scholl et al., 1987). The standard for training potential educational leaders underscores administrative action characterized by fairness and ethical behavior (Hoy & Tarter, 2004). The current study expressed this using the ethical principle of organizational justice.
Limitations and Implications
The current study used only the Web of Science Core Collection to check the extent to which the literature has paid attention to this domain, and it concentrated mainly on articles and reviews. Although the results included leading journals with high impact factors, one should not overlook the fact that other studies, not indexed there, may influence one's judgment; the matter here concerns quantity rather than quality. The current study therefore suggests that further work include other research platforms and citation sources, which would help future studies yield more extensive and varied findings.
Another issue is generalizability, which the present results support only to a degree. There are different methods of factor analysis validation, and the stability of the factor measure represents one aspect of generalizability (Hair et al., 2010). Factor stability depends primarily on the sample size and on the number of cases per variable. As discussed earlier, the current study sampled over 400 employees (240 lecturers and more than 150 professors) from a range of cultures, with a variety of experiences, and of different ages. Nevertheless, additional studies are required to refine construct validity even further. Merging the current items with those already established in the domain of justice could be a further step toward validating the construct.
Last but not least, this study focused mainly on revealing a new construct of justice. It would be worthwhile for future studies to investigate extensively the relationships between technological justice and its potential antecedents and consequences. The literature is rich with works addressing organizational justice, its types, and its antecedents and consequences (e.g., Colquitt, Conlon, Wesson, Porter, & Ng, 2001; Colquitt et al., 2013; Fassina, Jones, & Uggerslev, 2008; Kernan & Hanges, 2002; Rupp, Shao, Jones, & Liao, 2014), and further studies can build on those works. Notwithstanding the limitations mentioned above, the new construct of justice revealed in this study can be developed further.
Conclusion
This study attempted to provide a measure of justice stemming from employees' perceptions of the fairness of using technological tools intended for work purposes and the extent to which use of those tools can be sustained. Within the context of interpretation, the current measure can help authorities in organizations predict subordinates' reactions to unfair procedures, or anticipate subordinates' attempts at creating fair procedures. As discussed earlier, both reactive and proactive process theories can provide a better understanding of these perspectives. What is hoped for here is that further studies and debate across different organizational settings will help develop an even more comprehensive framework for the construct of technological justice.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
