At a glance.
- Tim Hortons caught violating Canadian privacy regulations.
- Voice identification technology.
- Australian health data exposed.
Where did you buy that donut, huh?
According to a two-year investigation by the Office of the Privacy Commissioner of Canada (OPC), the official app of Canadian coffee and donut chain Tim Hortons violated the country's privacy laws by regularly tracking and recording the locations of its users, even when the app was not in use and without proper consent. Privacy Commissioner Daniel Therrien told Reuters: “Tim Hortons has clearly crossed the line by amassing an enormous amount of highly sensitive customer information.”

Begun in 2020, the joint investigation by federal and provincial authorities sheds light on the pitfalls of what Therrien called “poorly designed technologies”: the app lacked a privacy management program that could have helped the company prevent these problems. According to the report released Wednesday, the tracking data was collected by third-party service provider Radar for targeted advertising and product promotions, but Tim Hortons apparently never actually used it. Additionally, the contract with Radar contained “vague and permissive” language that could have allowed Radar to use personal data in aggregated or anonymized form for its own business. “While we accept that Radar did not engage in any use or disclosure for its own purposes, the contractual language in this instance would not appear to provide adequate protection by Tim Hortons of users’ personal information,” the report says.

CBC reports that Tim Hortons removed the location technology from the app in August 2020 and has also agreed to delete all granular location data and to have all third-party service providers do so as well. The company also says it will establish a privacy management program for all of its apps to ensure they comply with federal and provincial privacy laws.
The value of a voice.
Wired explores the world of voice recognition technology and the multi-billion dollar industry that has grown around it. Advances in artificial intelligence and machine learning have made it possible to detect not only what you say, but also your identity, your mood and even the shape of your face, all from the sound of your voice. Siri and Alexa have long been able to identify a user’s voice, TikTok has begun collecting users’ “voiceprints”, and call centers are using voice recognition to determine caller behaviors and emotions.

As the value of a voice increases, privacy researchers race to find ways to protect users from having their voices used against them. Emmanuel Vincent, a senior researcher specializing in voice technologies at the National Institute for Research in Digital Sciences and Technologies, says a user’s voice can reveal details about their emotions and even their state of health. “This additional information helps to create a more complete profile, which would then be used for all kinds of targeted advertising.” Additionally, some hackers have even found ways to clone a victim’s voice in order to impersonate them.

Researchers are exploring various ways to protect voice data, including obfuscation, distributed and federated learning, and speech anonymization. Additionally, privacy legislation like the EU’s General Data Protection Regulation counts voices as protected biometric data, and companies like McDonald’s and Amazon have already faced court scrutiny over their use of voice data.
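To make the idea of voice obfuscation concrete, here is a deliberately minimal NumPy sketch of the simplest possible approach: shifting a speaker's pitch by naively resampling the waveform. This is a toy illustration only, not any of the anonymization systems the researchers above are building; `naive_pitch_shift` is a hypothetical helper written for this example.

```python
import numpy as np

def naive_pitch_shift(samples: np.ndarray, factor: float) -> np.ndarray:
    """Crudely shift pitch by resampling the waveform.

    factor > 1 raises the pitch (and shortens the clip); factor < 1
    lowers it. Real speech anonymization is far more sophisticated
    (it must preserve intelligibility while stripping speaker identity);
    this toy version just illustrates the obfuscation idea.
    """
    n = len(samples)
    # Fractional sample positions to read from the original signal.
    positions = np.arange(0, n - 1, factor)
    # Linear interpolation between neighboring samples.
    return np.interp(positions, np.arange(n), samples)

# Example: a 440 Hz test tone shifted by a factor of 1.5
# becomes a ~660 Hz tone (and a shorter clip).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
shifted = naive_pitch_shift(tone, 1.5)
```

A real anonymizer would also have to mask timbre, speaking rate, and prosody, which is why researchers pair signal-level tricks like this with learning-based approaches such as federated training, where raw voice data never leaves the user's device.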
Australian health data exposed in third-party breach.
CTARS, a cloud-based customer management system used by the Australian National Disability Insurance Scheme, revealed that it suffered a data breach on May 15 and that the stolen data has already been published on the dark web. CTARS said: “While we cannot confirm details of all data in the time available, to be very cautious, we are treating all information in our database as compromised.”

Although CTARS says the volume of data collected makes it difficult to determine exactly what was compromised, Have I Been Pwned owner Troy Hunt says at least 12,000 email addresses were affected, ZDNet reports. Hunt described the exposed records: “Mental health problems. Drug use (prescription and illicit). Violent behavior. Sexual abuse… This was posted on a hacking forum and viewed by countless people. It’s horrible.”

In its formal apology, however, CTARS dismissed the idea that a cybercriminal might be interested in patient health information. “Health and other sensitive personal information by itself is generally not helpful to a cybercriminal,” the company said, then urged anyone who may be experiencing distress to seek medical attention.