Philippe Dufresne, the Privacy Commissioner of Canada, has found that OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI's approach to matters like data collection and consent ran afoul of several laws, including Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information in the normal course of business.
The commissioners participating in the investigation identified several privacy issues with OpenAI's approach, including that the company "collected vast amounts of personal information without adequate safeguards to prevent use of that information to train its models," and that it failed to obtain consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI can be used in training, but the third-party data OpenAI has purchased or scraped also includes personal details people likely aren't even aware of. The fact that ChatGPT users have no way to access, correct or delete that data was another issue the commissioners identified, according to a summary of the investigation's findings, along with OpenAI's lackluster attempts to acknowledge the inaccuracy of some of ChatGPT's responses.
Canada's Privacy Commissioner says that OpenAI was open and responsive during the investigation, and has already committed to making several changes to ChatGPT to comply with Canadian privacy laws. OpenAI has retired earlier models that violated Canadian privacy law, and now uses "a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly available web data and licensed datasets used to train its models," the Commissioner says. The company has also agreed, within the next three months, to add a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information should not be shared, and within the next six months:
While Canada's investigation into OpenAI's privacy practices was opened in 2023, the company has received scrutiny from regulators more recently because of its connection to the mass shooting that occurred in Tumbler Ridge in February 2026. OpenAI had reportedly flagged the alleged shooter's account in 2025 for containing warnings of real-world violence, but did not escalate those concerns to Canadian law enforcement. Following the shooting, regulators demanded the company change its approach to safety, and OpenAI ultimately agreed to be more collaborative with Canadian law enforcement and health agencies going forward.

