Talent Canada

Regulating the invisible: Employers grapple with AI policies in the workplace

August 1, 2023
By Todd Humber


Photo: Adobe Stock

In an age of relentless technological advancement, employers are struggling with what — exactly — to do about some of the powerful AI tools that are bubbling up and being embraced by staff both with and without permission.

Companies are grappling with a dilemma: How do you regulate the use of a tool that is as invisible as it is omnipresent? It’s raising ethical questions around usage in the workplace and whether employers need to implement new policies or update existing ones to cover their bases.

A survey by Tech.co conducted in July found 68.5% of business leaders said employees should not be using AI tools like ChatGPT without express permission from their boss. On the flip side, roughly one in eight (12.3%) had no qualms with the tools being used without permission.

New policy versus updating existing ones

Kiljon Shukullari, HR advice manager at Peninsula Canada, said he hasn't yet seen a groundswell of companies implementing AI-specific policies for their workers. In some cases, updating existing IT and privacy policies has been sufficient to cover off the use of AI tools.

“Use of AI opens up a bit of a risk. Privacy is one of the concerns in terms of what information is being shared,” he said. For example, a worker might upload the copy of an email into an AI tool to help generate an appropriate response.

“That email could have sensitive information that should not be shared externally,” said Shukullari.

Determining how AI is used at your company

A good start is identifying how the company wants to use AI tools and for what purpose. Then, there is a need to put operational infrastructure around it. For example, is there going to be a process to review the information generated by AI?

“Who’s going to validate it? Who’s going to put the stamp of approval on it as something that can leave your walls as work that’s been done collaboratively between ChatGPT and the employee?” he said.

The way companies respond will also be influenced by how sophisticated their staff are at using these tools. It may need to start with training and awareness to identify potential uses, weigh the pros and cons, and determine whether a formal, AI-specific policy is needed.

“You don’t want to overcomplicate it. Policy creation can sometimes get a bit out of hand, and we start repeating ourselves and overlapping,” he said. “That’s why you want to be a bit aware as to what you have in place and maybe expand on it.”

If you don’t have any relevant policies in place, and find that the use of AI by your teams is disrupting your business, then it is probably worth pursuing as a standalone policy, said Shukullari.

Enforcing the policy

When it comes to enforcing policies on AI, Shukullari pointed to recent legislative changes around employee monitoring, particularly in Ontario, for guidance.

“Technically speaking, when you’re using your employer’s equipment and resources, there is no expectation of privacy,” said Shukullari. “Employers can actually see what kind of traffic you’re generating to the devices on a daily basis as you do your work.”

Companies could start by telling staff to inform their manager when using ChatGPT, for example. Or, if they already have sufficient IT and privacy policies, they could just raise awareness and let employees know that if they use ChatGPT or other AI tools, the company will know about it.

Then they can set boundaries on when and how to use the tools and what the approval process looks like. “I know a lot of companies already use it when it comes to insurance and so forth in terms of identifying potential risks,” he said.

Making mistakes

The Tech.co survey, which got responses from 69 business leaders and decision-makers, found a split among them when it comes to who is responsible for mistakes generated by AI.

Almost one-third (31.9%) said the worker using the tool is to blame, and about the same share (33.3%) would point the finger jointly at the worker and their manager. A slightly smaller number (26.1%) thought all three parties share some responsibility for any errors coming out of AI.

At the same time, the vast majority (82.4%) saw no ethical concerns about using tools like ChatGPT to write responses to employees, and nearly the same proportion (80.8%) believed it was ethical to proactively disclose that AI had been used.

