AI Negligence: When Is a Company Liable for Damages?

When a customer suffers a financial loss or other harm as a result of misinformation negligently communicated by a company, we’d hope that the business would accept responsibility and offer the customer appropriate compensation.

But what if a company tries to shift blame elsewhere? Fortunately, consumer protection laws and other common law remedies have restricted some of the avenues used in an effort to escape liability. Unfortunately, one company recently tried a novel argument in court that could have opened up a new one.

“It was the employee’s error, not ours.”

- Perhaps, but as an employer you have vicarious liability for an employee’s actions in the course of their work.

“It was a computer malfunction.”

- Nevertheless, common law has generally held that a company is responsible for any acts or omissions (including misrepresentations) of a computer system it uses.

“The negligent party is a separate legal entity.”

- Hmmm, that can be more of a grey area. Tell me more. Who is the negligent party?

“A chatbot on our website.”

In this blog post, I examine a widely publicized small claims court case between Air Canada and a passenger. The airline attempted, unsuccessfully, to argue that it could not be responsible for negligent misrepresentation by a chatbot on its website because it could not be held liable for information provided by its ‘agents, servants or representatives’.

While the stakes were relatively small in monetary terms, this type of argument may be used again. I suggest that we closely monitor corporate attempts to evade liability when using artificial intelligence technology to interact with or serve their clients.

Moffatt v. Air Canada, 2024 BCCRT 149.

In late 2022, Jake Moffatt sought out information about bereavement fare discounts offered by Air Canada. Moffatt, who was determining whether he could afford to travel by plane for his grandmother’s funeral, opened up a chat window to consult an online ‘agent’ on the airline’s website.

He was informed he could purchase tickets at a reduced rate, or at least receive a partial reimbursement of the full cost of a ticket already purchased, if he submitted his claim within 90 days of the travel date. The chatbot’s reply included a link to a page on the airline’s website outlining its bereavement policy. That policy noted: “Please be aware that our Bereavement policy does not allow refunds for travel that has already happened.”

Which source should he believe? The information provided in the chat window on the airline’s website, or the written policy on that same website that the chatbot itself had linked to?

Jake relied on the former over the latter. Air Canada subsequently declined to honour the bereavement fare rate for the tickets he had purchased and later submitted for a partial refund. After months of communication with the airline in an attempt to resolve the dispute, Jake took the matter to small claims court.

British Columbia's Civil Resolution Tribunal ruled that both sources of information were equally valid because the chatbot and the static web page describing the bereavement policy were both elements of the same website. Air Canada had not given the prospective passenger any indication as to why one part of the website should be considered more trustworthy than another.

In its defence, Air Canada submitted that it could not be held liable for information provided by its agents, servants, or representatives. As the Tribunal noted in its decision: “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada's website.”

Using this rationale, and drawing an adverse inference from Air Canada’s decision not to provide a copy of the relevant portion of the tariff which it had argued limited its liability, the Tribunal found the airline liable for negligent misrepresentation. The company had a responsibility to provide accurate information on its website - whether the information was part of a static or interactive component.

It is exceedingly rare for a small claims court case to draw international attention. However, many news agencies - both foreign and domestic - picked up the story because they incorrectly assumed Air Canada’s negligent misrepresentation came from an “AI chatbot”. In fact, an Air Canada spokesperson later stated that the airline’s chatbot predated generative AI technology and large language models.

While the facts of this case may not have been exactly what reporters were looking for as they chased stories about the deleterious effects of artificial intelligence, the case does offer the legal community an opportunity to reflect on liability in the brave new world of artificial intelligence.

Liability and Efforts to Limit Liability.

Companies that employ people or technology in the course of their business bear a certain amount of responsibility for their acts or omissions. Good training practices, quality assurance, and appropriate supervisory management of employees and machinery should greatly reduce a company's risk of being found vicariously liable for negligence causing harm.

Of course, businesses are customers themselves. When they purchase technology produced by outside sources for use in their businesses, they should expect the technology to work as intended. If these products are defective in design or manufacture, or if the manufacturer has not provided adequate user instruction or warnings of foreseeable potential hazards, the company that produced or sold the technology could be liable for certain damages caused to a business using the product.

There are, of course, ways that a company might attempt to limit its liability through contractual language. But even in those cases, courts would likely examine whether the language used in such contracts or terms of service is comprehensible, clearly defined, and compliant with all applicable laws.

In the Air Canada case, the airline’s website contained no disclaimer cautioning users about the chatbot’s potential for inaccuracy. Nor was there any language advising the chatbot’s users to treat the written policies on Air Canada’s static web pages as paramount.

If there was a defect in the chatbot’s programming, or if the firm selling the chatbot technology failed in its own duty to instruct the purchaser in its proper use, presumably the airline could sue the manufacturer and/or seller for any foreseeable losses sustained. In the Air Canada case, however, the Tribunal noted the absence of any suggestion of liability on the part of the chatbot’s maker. Moreover, it was presented with no evidence as to how the chatbot had been programmed.

There was, however, a suggestion that the chatbot was essentially its own legal entity. Such an assertion should give all of us pause. In an age when the science fiction behind Max Headroom is quickly becoming fact, we need to think carefully about the consequences of absolving AI makers and users of responsibility for the actions of their creations (or tools).

A Product With a Mind of Its Own.

Early generations of chatbots and automated phone attendants were adept at answering simple questions or directing customer inquiries to the appropriate department for further discussion with a human. They fared less well with inquiries that required more complex or nuanced responses: while common questions could be anticipated and scripted, their replies would often falter when a conversation deviated from the script.

With artificial intelligence technology now advancing at a rapid pace, chatbots and their like will soon not only answer questions with far more precision, but also learn from their own experiences interacting with people and adjust their output accordingly.

As readers of this blog will know, I am a strong proponent of using technological advances to help people. We should not instinctively fear or reject artificial intelligence. Technology is not inherently good or bad - it is a tool. The trade-off when using that tool to do good is bearing responsibility for it, taking steps to mitigate any foreseeable adverse consequences, and repairing any harm done.

To consider early-generation chatbots (or any AI-equipped version planned or currently in operation) to be separate legal entities completely responsible for their own actions would be to absolve oneself of any part in their creation, programming, or directives. The Tribunal’s incredulous reaction to this submission in the Air Canada case is spot-on.

Imagine if we were to ask an artificial intelligence product to provide a literary analogy of this argument through a parody of a popular work of science fiction:

In Mary Shelley’s Frankenstein, a doctor who discovers a way to impart “artificial intelligence” into inanimate matter does not struggle with the moral implications of implementing his discovery. After designing an artificially intelligent creature who wreaks havoc, the doctor decides that he bears no moral responsibility for what his creation has done. Rather than being ravaged by guilt and vowing to do something, the doctor lives happily ever after, safe in the knowledge that any consequences of his own choices and actions are not his problem or concern.

A literary masterpiece this is not; nor is its inspiration a legal argument we should ever accept as meritorious.

The Bottom Line: Ensure Accuracy and Take Precautions.

As society begins to realize the full potential (and potential problems) of this technological breakthrough, the legal profession will be looking for answers to some fascinating questions relating to liability and the direction of case law. For example:

  • Will courts continue to view AI technology as equivalent to technology without generative capability, or will vicarious liability similar to that in an employment context eventually apply?
  • If a company that is sued by a customer for negligent misrepresentation attempts to sue an AI chatbot manufacturer, could the customer expand the claim to include this third party in the absence of any contractual privity between them?
  • How would a court determine whether an AI chatbot had been properly programmed or trained?
  • How will disclaimers, terms of service, waivers, and consumer protection laws evolve as AI becomes prevalent?

When new technology is employed, courts generally assign the burden of risk to the company or entity delivering the technology, not to the consumer as an end-user. Whether that burden will shift toward a “buyer beware” model as consumers become more familiar with this technology and its risks remains to be seen. In my view, it is in a consumer’s best interests to have a simple and straightforward idea of who should be liable for negligence involving artificial intelligence technology: the company using it.

Until we know the ultimate direction the courts will take, the Air Canada case provides an excellent opportunity to remind any company that is already using or planning to use AI technology for client-facing service to exercise caution. Taking adequate and reasonable steps to provide accurate information to clients or consumers remains paramount. Reducing exposure to potential liability through appropriate instructions, warnings, or effective language in terms of use is also likely to serve the best interests of both the company and its clients.
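
For companies already deploying client-facing chatbots, one simple precaution along these lines is to attach a standing disclaimer - and a pointer to the authoritative written policy - to every reply the chatbot gives. The short Python sketch below is purely illustrative: the wording, function name, and URL are hypothetical assumptions, not anything drawn from Air Canada's system or the Tribunal's decision.

    # Hypothetical sketch (not any real company's system): wrap every
    # chatbot reply with a standing disclaimer and a link to the
    # authoritative written policy, so users know which source governs.

    DISCLAIMER = (
        "Note: this automated assistant can make mistakes. The written "
        "policies published on our website are authoritative and take "
        "precedence over anything stated in this chat."
    )

    # Hypothetical placeholder; a real deployment would point to the
    # company's actual policy page.
    POLICY_URL = "https://example.com/policies/bereavement"

    def wrap_reply(raw_reply: str) -> str:
        """Append the disclaimer and policy link to a chatbot reply."""
        return f"{raw_reply}\n\n{DISCLAIMER}\nFull written policy: {POLICY_URL}"

    print(wrap_reply("You may request a bereavement fare within 90 days of travel."))

Whether any particular disclaimer would be effective in law is, of course, a separate question for the courts; the point is simply that telling users which source is paramount costs a company very little.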

When a company is found negligent in carrying out its specific duty to clients, this can be quite costly. An ounce of prevention is always better for a company’s bottom line (and reputation) than a pound of cure.

At Gluckstein Lawyers, we pride ourselves on being technological innovators in personal injury law. We understand our responsibility to our clients and to our profession, so you can trust us to always keep your safety and security top of mind. It’s part of our commitment to full-circle care.

If you or a loved one has suffered a serious injury and would like to learn more about your legal rights and options, contact one of our personal injury lawyers for a no cost, no obligation consultation.
