Since it launched on 30 November 2022, ChatGPT has garnered a great deal of publicity, not all of it positive. A quick internet search reveals numerous lists of how best to make use of it, but also headlines reporting that companies including Microsoft and Amazon have warned employees not to share sensitive information with it.
What is ChatGPT?
If you ask ChatGPT to tell you what it is, its response is that it is a "deep learning-based conversational language model trained on a large dataset of text, with the aim of generating human-like responses to questions". On its launch date, OpenAI, the company that developed it, described it as a model that is "trained to follow an instruction in a prompt and provide a detailed response". It is capable of understanding natural human language and responding in a human-like, conversational but highly detailed way. It has been described as everything from an alternative to Google to a replacement for humans, particularly those working in professions dependent upon content production, such as journalism.
So what are the main risks of ChatGPT for employers?
Perhaps the most significant risk to employers is not being aware that employees are using ChatGPT. According to recent research, 68% of workers using it are doing so without their employer's knowledge. When Amazon and Microsoft issued their warnings to employees, they clearly knew the tool was being used and were taking steps to control the extent of that use. Employers need to get on the front foot with this - assume it is being used and take steps to manage that use.
ChatGPT also brings confidentiality risks. If employees divulge confidential information when using it, there is a risk that the information is then "learnt" by the system. The OpenAI terms provide that users agree to OpenAI using any input data, and the output it produces, to "develop and improve" the system unless the user specifically opts out. The discussions users have with ChatGPT also take place over the internet, which in itself brings security issues. Amazon reportedly found ChatGPT using data that "closely matches existing material" from inside the company when answering questions. Microsoft, which recently confirmed a multibillion-dollar investment in OpenAI, also advised employees not to divulge confidential information when using ChatGPT. If an investor is issuing that warning, it is worth heeding.
The next issue is the accuracy of information. When it launched, OpenAI's website warned that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers", so misinformation could be presented as fact. ChatGPT can only be as good as the sources it is trained on, and we all know that the internet is not a wholly reliable source. ChatGPT was also trained on datasets available only up to 2021, rendering queries or searches that require more up-to-date information pointless. Employees using ChatGPT to create documents for clients will need to fact-check and fill in the blanks.
As well as accuracy issues, document generation by ChatGPT could also bring with it the risk of copyright infringement. The internet materials on which ChatGPT has been trained could include material subject to copyright. If those materials are reproduced when ChatGPT creates a document, that could constitute copyright infringement. That risk extends not just to OpenAI but also to the user who created the document at issue. A similar risk arises where the dataset ChatGPT is trained upon includes personal information: processing that information, even unintentionally, may breach data protection laws.
Employee development is also worth considering when deciding how, or if, ChatGPT should be used. Where the system can be used effectively, is there a risk that employees will miss out on development opportunities or become de-skilled? A balance will need to be found between increasing efficiency and potentially disenfranchising employees.
Those working in Human Resources will already be well aware of the risk of bias when using AI. ChatGPT will not be immune to that problem either, given that any bias in the information it is trained upon will be reflected in the way it answers questions. Recruitment has historically been where this problem has most frequently arisen, but any decision-making ChatGPT is involved in would run the same risk. Currently at least, these types of decisions should not be made without being checked by a human.
It is not just employees that businesses need to think about. One of the most commonly highlighted ways of using ChatGPT is to create CVs and application letters. This has the potential to create a false picture of a job applicant, making face-to-face meetings with potential new recruits all the more important.
What should an employer be doing?
In the first instance, employers should not assume that their employees are not using ChatGPT. By January it was estimated to have reached over 100 million users. Publicity surrounding it continues to grow, so that number will be significantly higher now. If businesses do not want it used at all, that should be made clear to their employees. For those who do intend to use it, controls should be put in place that clearly set out expectations around its use. That may be done by amending an existing policy, introducing a new one, or simply making a statement to employees about it.
If there was any scepticism about whether AI would be increasingly used in the workplace, ChatGPT will have silenced many of the doubters. What is more, an updated version is expected later this year. The influence of AI in the workplace is undoubtedly on the rise - for many employers the aspiration will be to balance the risks of using it against the benefits it may bring.