Artificial intelligence (AI) has recently taken flight. What does an instrument like ChatGPT mean for the HR field? These and other developments in AI should ring alarm bells for CHROs.
By Dr. Maarten Renkema, Assistant Professor HRM, University of Twente – this article is an English translation and adaptation of the article for CHRO.nl
A chatbot that can function as a personal assistant. One that you can converse with, that writes texts that are almost indistinguishable from those written by a human, that can suggest dinner options, and that can help you reflect on your own behavior.
Last December, the American artificial-intelligence research organization OpenAI introduced the online application ChatGPT to the general public. You probably didn't miss it. The Dutch newspaper NRC has already written at least 17 articles about ChatGPT, and it has appeared in 77 articles in the New York Times. On Google, the term "ChatGPT" now yields 671 million hits. In other words, ChatGPT is booming! But is it hype, or does it really have revolutionary consequences? And what does this mean for the field of HRM?
Last year, I wrote in a blog post about AI and its consequences for knowledge work; at the time, the occasion was the DALL·E application, also from OpenAI. This time, OpenAI has attracted even more attention by making ChatGPT freely available to the public, an application based on the GPT-3.5 series of language models.
Generative Language Models
GPT stands for Generative Pre-trained Transformer: ChatGPT is a language model that can generate text, developed with deep learning and refined with human feedback (supervised fine-tuning combined with reinforcement learning from human feedback). ChatGPT creates text based on an instruction ('prompt') entered by the user.
The core idea is that the model predicts the most likely next word (token) given the text so far. The underlying GPT-3 model has 175 billion parameters, making it one of the largest language models in the world.
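The next-word principle can be illustrated with a toy bigram model. This is of course not how GPT works internally (GPT is a transformer network, not a table of word counts, and the corpus below is my own invention), but the objective is the same: predict the next token from the text so far.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always
# pick the most frequently observed follower. GPT does something
# far more sophisticated, but with the same goal: next-token prediction.
corpus = (
    "the employee writes a report . "
    "the employee writes an email . "
    "the manager writes a report ."
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return followers[word].most_common(1)[0][0]

def generate(start, length=5):
    """Greedily chain predictions into a short 'text'."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(predict_next("employee"))  # writes
print(generate("the"))           # the employee writes a report .
```

Even this crude model produces plausible-looking sentences from its training data, which hints at why a model with 175 billion parameters, trained on a large slice of the internet, can sound so convincing.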
There are many possible applications based on ChatGPT. Given that it is a language model, producing text seems the most obvious one, for example structuring and writing communication (think of emails, news articles, and policy documents). It can also support the initial phase of creative work by generating a list of proposals or by producing arguments for and against a viewpoint.
It is also capable of summarizing texts or creating titles – as I did for this article. I find it interesting that there is currently a lot of experimentation and pioneering with ChatGPT: what works and what doesn't?
My original idea was also to have this article partially written by ChatGPT, to demonstrate the possibilities, but since then many writers and journalists have already shown that this can be done (see, for example, NRC). Therefore, in this blog, I will address the underlying developments and implications for the work of HR professionals.
As for applications in the HRM field, it can be used for writing HR policies (such as absence and reward policies), onboarding materials, job descriptions (see text box 1), personas, and job vacancy texts, as well as for screening job application letters. The chatbot can also be used to practice (difficult) conversations.
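To make this concrete: such a task usually starts with a carefully structured prompt, which can even be assembled programmatically. The template below is purely illustrative (the role, tasks, and wording are my own invention); the resulting prompt would then be submitted to ChatGPT, either in the chat interface or via an API.

```python
# Hypothetical prompt template for drafting a job vacancy text.
# The structure (role, responsibilities, tone, length) is illustrative;
# in practice, HR professionals would iterate on this wording.
TEMPLATE = """Write a job vacancy text for the position of {role}.
Key responsibilities: {tasks}.
Tone: {tone}. Length: about 150 words."""

def build_prompt(role, tasks, tone="professional but approachable"):
    """Fill the template with role-specific details."""
    return TEMPLATE.format(role=role, tasks=", ".join(tasks), tone=tone)

prompt = build_prompt(
    "HR Business Partner",
    ["advising line managers", "coordinating recruitment", "absence policy"],
)
print(prompt)
```

The quality of the output depends heavily on details like these; as noted later in this article, the prompt the user provides is crucial for the usability of what comes back.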
The introduction of ChatGPT is therefore a milestone in something that has been in development for some time: the application of artificial intelligence (AI) in (knowledge) work. This is a topic we are currently studying in a research project (see blog).
Collaboration between humans and AI
Within our research project, we are particularly interested in the perspective of employees who work together with AI. During the Dutch HRM Network conference, we presented the first findings of the literature review we conducted.
It appeared that knowledge workers are not necessarily positive about the collaboration: although AI can be supportive in the execution of certain tasks, it can also lead to less transparency and autonomy.
Over the past few months, we have also conducted interviews with developers and (potential) end-users of AI applications. What stood out is that end-users do not always care whether AI is used in a particular application.
Impact on expertise development
Another important aspect is the implication for knowledge and expertise development. Although ChatGPT, and AI in general, can make knowledge more widely available and thus increase knowledge dissemination, there are also limitations to the development of professional expertise [1].
Do knowledge workers (such as accountants, consultants, medical specialists, and scientists) not lose part of their value when they no longer know the basis of their own decision-making? Does writing with AI not limit the learning of the skill itself?
Although Susskind and Susskind (2022) describe in their book on the future of the professions that technologies like AI can indeed help make knowledge available to a broader audience, they also recognize the development of knowledge and expertise as an important limitation [2]. After all, when professionals are no longer needed, who will ensure that knowledge keeps developing?
Caution is required
It is therefore important that we also mention the limitations of these AI applications. The prompts that the human user gives are crucial for the usability of the output.
In addition, it is important not to accept the output of AI blindly. ChatGPT remains a language model: it has no awareness and therefore no idea what it has written. OpenAI itself indicates that even though answers are presented convincingly, they may contain errors and nonsensical information. Some speak of "artificial hallucinations", where incorrect or fabricated statements are made with conviction.
It is therefore crucial that a knowledgeable person analyzes and integrates the output of AI and takes responsibility for it. This approach is also called human-in-the-loop: tasks are not automated, but employees are augmented by AI [3].
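The human-in-the-loop principle can be sketched in a few lines of code: AI output is treated as a draft that cannot be published until a named person has reviewed and approved it. The class and function names below are illustrative, not part of any existing system.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal human-in-the-loop sketch: AI-generated text is never
# published directly; a responsible person must sign off first.
@dataclass
class Draft:
    text: str
    source: str = "ai"                 # who produced the draft
    approved_by: Optional[str] = None  # set only after human review

def approve(draft: Draft, reviewer: str, ok: bool) -> Draft:
    """Record the human decision; rejection leaves the draft unapproved."""
    if ok:
        draft.approved_by = reviewer
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release any draft that lacks human sign-off."""
    if draft.approved_by is None:
        raise ValueError("AI output must be approved by a human first")
    return draft.text

draft = Draft("Draft absence policy: ...")
approve(draft, reviewer="hr_manager", ok=True)
print(publish(draft))
```

The point of the design is the hard gate in `publish`: accountability stays with a person, while the AI merely accelerates the drafting step.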
One development we are dealing with at universities is authorship: how do we know whether a student has written a text, when ChatGPT can generate texts that are difficult to distinguish at first glance?
Teachers are not the only ones affected, however: HR managers also evaluate texts. Think of job application letters (are they still written by the applicants themselves?) or of policy documents produced within the HR team.
The future: collaborating with AI?
Human work in the HRM field seems unlikely to be replaced by technology any time soon, but there is a real chance that HR professionals will be overtaken by colleagues who do use these smart technologies. CHROs therefore need to learn about AI applications and conduct experiments, but also remain critical!
A reconstruction by the New York Times has shown that ChatGPT triggered a 'code red' at Google headquarters. Developments in AI should also set off alarm bells for CHROs: this is a development with real consequences, and HR professionals must have, or develop, the (technical) knowledge needed to join the conversation and represent the interests of employees.
To further investigate the (possible) impact of AI on the HRM field and on HR professionals, Prof. Andy Charlwood and I will organize conversations and workshops with HR professionals, AI experts, and scientists in the context of my Digit fellowship. If you would like to participate or learn more, please contact firstname.lastname@example.org.
>>> Please also take part in our survey <<<
References
1. Ardichvili, A., The Impact of Artificial Intelligence on Expertise Development: Implications for HRD. Advances in Developing Human Resources, 2022. 24(2): p. 78-98.
2. Susskind, R.E. and D. Susskind, The future of the professions: How technology will transform the work of human experts, updated edition. 2022: Oxford University Press.
3. Raisch, S. and S. Krakowski, Artificial Intelligence and Management: The Automation-Augmentation Paradox. Academy of Management Review, 2020. 46(1): p. 192-210.