Talent Canada

AI is tempting, but there are potential risks in relying on it for employers

March 26, 2024
By John Hyde

Photo: Adobe Stock

In recent years, the development and use of AI have become increasingly widespread, and a variety of AI technologies and tools have become available for public and commercial use.

It may be tempting for employers to integrate the ever-increasing number of AI tools into their business practices. However, employers need to be aware that, because AI use is relatively new in many contexts, there may not be much (if any) legislative or case law guidance regarding the legal implications of using AI in certain ways. This means it may be difficult to thoroughly assess the potential liabilities an employer may face as a result of using AI.

Company liability for AI

Recent case law suggests that businesses will likely be held responsible for the consequences of, and any missteps caused by, their use of AI.

A very recent example of this is Moffatt v Air Canada, 2024 BCCRT 149, where Air Canada was held liable for the misinformation its AI chatbot provided to a customer.


The customer had been looking for information regarding Air Canada’s bereavement travel policy, and the chatbot had provided inaccurate information, stating that the customer could book the flight immediately and then request a refund within 90 days. In fact, Air Canada’s bereavement policy does not permit such refunds.

Air Canada unsuccessfully tried to argue that it could not be held liable for the chatbot’s misleading information. The BC Civil Resolution Tribunal did not accept that argument, stating that the chatbot was a part of Air Canada’s website and Air Canada “is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

Accordingly, employers need to be aware that they will likely be held responsible for any consequences that result from their use of AI and will need to exercise caution when assessing whether to integrate AI and how to integrate it.

Potential areas of concern in the HR context

In the employment context, AI is already being used by some companies in their hiring process. It could also potentially be used in other HR areas, such as records management and administrative tasks, or even performance management (e.g., employee monitoring).

Given that employers will likely remain liable for any decision or assessment AI makes within the workplace, employers will want to maintain a degree of oversight and review in any area AI is used.

Human rights law is one example of a legal area that could be violated by relying on AI without oversight or an understanding of how the AI process works. Whether it is used during the hiring process or elsewhere in the business, the risk is that the AI program may make discriminatory assessments and recommendations. This may happen, for example, because the training data set was biased, or because human rights considerations were not programmed into the AI.

Another potential area of risk is privacy law, and whether the AI uses (and stores) personal information in a legally compliant manner. Although AI programs will likely be provided to an employer by a third party, it will be the employer’s responsibility to ensure privacy law obligations are being met.

AI and legislation

While there may not yet be an extensive body of legislation or case law directly addressing the use of AI, it is clear that this is an emerging area of law that will continue to develop.

While most statutes currently do not directly address the use of AI, that is likely to change as new legislation is drafted or amendments are proposed to existing legislation, with the result that the use of AI will gradually be regulated and addressed in statute.

A prime example of this process is Ontario’s Bill 149, Working for Workers Four Act, 2023, which proposed amendments to the Ontario Employment Standards Act, 2000, that would require employers to disclose in publicly advertised job postings whether AI is being used in the hiring process.

Lessons for employers

It is important for employers to recognize that the regulation of AI use is an area of law that is continuing to develop, and that the use of AI may come with legal liabilities that have not yet been addressed by the courts or by legislation.

Employers who do use AI should make best efforts to understand how that use interacts with their legal obligations as employers. This may require extensive oversight, along with an understanding of how the AI works and how it handles personal data.

Finally, employers who use AI will want to pay attention to any changes or additions to the law that may introduce regulations or restrictions on AI use.

John Hyde is the managing partner at Hyde HR Law in Toronto. He advises management on all aspects of employment and labour law, including representation before administrative tribunals, collective agreement negotiation, arbitrations, wrongful dismissal defence and human rights.
