Employees increasingly share workplaces and tasks with artificial intelligence (AI). Intelligent technologies have been developing so rapidly that they can take on the role of a co-worker (e.g., a robot that works in a shared workspace) or even a supervisor (e.g., an algorithm that makes decisions). Both types of relationships between AI and employees affect employee motivation, well-being, and performance. In three studies, the present work therefore examines AI as robotic co-worker and as supervisor. More specifically, I investigated which robot design features make human-robot interaction (HRI) at work most successful, and how and why the effects of procedural justice differ depending on whether humans or AI act as the decision agent.
In Study 1, we focussed on AI as co-worker and meta-analytically integrated 81 studies on the relations of five robot design features (i.e., feedback and visibility of the interface, adaptability and autonomy of the controller, and human likeness of the appearance) with seven indicators of successful HRI (i.e., task performance, cooperation, satisfaction, acceptance, trust, mental workload, and situation awareness). Results showed that the design features of the interface and the controller significantly affected successful HRI, whereas human likeness did not. Moderation analyses revealed that, beyond the general effects on task performance and satisfaction, only the design features of the controller had significant specific effects: adaptability additionally affected cooperation and acceptance, and autonomy additionally affected mental workload.
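To illustrate the meta-analytic approach, the following is a minimal sketch of a random-effects integration (Fisher's z transform with a DerSimonian-Laird estimate of between-study variance) for a single feature-outcome relation. The correlations, sample sizes, and variable names are hypothetical placeholders; the actual analyses in Study 1 covered 81 studies and multiple feature-outcome pairs.

```python
import numpy as np

# Hypothetical correlations (r) and sample sizes for studies linking
# one design feature (e.g., adaptability) to one HRI outcome.
r = np.array([0.32, 0.18, 0.45, 0.25])
n = np.array([40, 120, 60, 85])

# Fisher's z transform stabilises the variance of correlations.
z = np.arctanh(r)
v = 1.0 / (n - 3)          # sampling variance of each z
w = 1.0 / v                # fixed-effect weights

# DerSimonian-Laird estimate of the between-study variance (tau^2).
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(z) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights, pooled estimate, and back-transform to r.
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
r_pooled = np.tanh(z_re)
ci = np.tanh([z_re - 1.96 * se_re, z_re + 1.96 * se_re])

print(f"pooled r = {r_pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], tau^2 = {tau2:.4f}")
```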
In Studies 2 and 3, we focussed on AI as supervisor and examined and compared the procedural justice effects of human and AI decision agents on employee attitudes and behaviour. To this end, we conducted two vignette experiments in each study. In Study 2, we investigated whether the type of decision agent (human vs. AI) influenced the effects of procedural justice on employee attitudes and behaviour. The results showed no differences in effect sizes between humans and AI as decision agents, emphasising the importance of procedural justice for both. In Study 3, we compared the strength and specificity of four mediators of procedural justice effects, investigated differences between the decision agents, and examined attributed responsibility as the mechanism explaining these differences. For both types of decision agents, trust emerged as the strongest mediator of effects on attitudes, and negative affect as the strongest mediator of effects on behaviour. Comparing the two types of decision agents, the mediation via trust was less pronounced for AI than for human decisions, whereas no difference between the two was found for negative affect. Additionally, we confirmed the responsibility attributed to the decision agent as the mechanism underlying these differences.
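To illustrate the mediation logic of Study 3, the sketch below estimates a single indirect effect (procedural justice → trust → attitude) as the product of the a- and b-paths, with a percentile bootstrap for its confidence interval. The data are simulated and the variable names are placeholders, not the study's actual materials, measures, or models, which compared four mediators across two decision agents.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated vignette data: procedural justice (X, 0 = low, 1 = high),
# trust in the decision agent (M), and a job attitude rating (Y).
n = 400
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(0, 1, n)            # a-path: justice -> trust
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)  # b-path: trust -> attitude

def indirect_effect(x, m, y):
    # a-path: regress M on X; b-path: regress Y on M, controlling for X.
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a * b.
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```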
In summary, the present work extends the understanding of employee interactions with AI as co-worker and as supervisor by integrating theories from industrial and organisational psychology as well as from engineering and information science. The results provide valuable insights for theory development in HRI and organisational justice concerning the integration and investigation of context factors, the effects of robot design characteristics on successful HRI, and the characteristics of decision agents that might influence justice effects. Moreover, the results offer recommendations for engineers, AI designers and human resource practitioners on what to bear in mind when developing and implementing AI in the workplace.