
Google AI flagged parents’ accounts for potential abuse over nude photos of their sick kids

Illustration by Alex Castro / The Verge

A concerned father says that after he used his Android smartphone to take photos of an infection on his toddler’s groin, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation and highlighting the complications of telling the difference between potential abuse and an innocent photo once it becomes part of a user’s digital library, whether on a personal device or in cloud storage.

Concerns about the consequences of blurring the line around what should be considered private were aired last year when Apple announced its Child Safety plan. As part of the plan, Apple would locally scan images on Apple devices before they were uploaded to iCloud and then match them against NCMEC’s hashed database of known CSAM. If enough matches were found, a human moderator would review the content and, if it was confirmed as CSAM, lock the user’s account.
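At its core, hash matching is set membership against a database of image fingerprints, with a threshold before anything escalates to a human. The sketch below illustrates that flow only; the hash function, threshold, and database are all hypothetical stand-ins, since real systems like Apple’s NeuralHash and Microsoft’s PhotoDNA use proprietary perceptual hashes rather than cryptographic ones:

```python
import hashlib
from pathlib import Path

# Hypothetical stand-ins. Production systems use perceptual hashes that
# survive resizing and re-encoding; SHA-256 only matches byte-identical
# files, so it appears here purely to illustrate the matching flow.
KNOWN_CSAM_HASHES: set[str] = set()  # would be populated from NCMEC's database
REVIEW_THRESHOLD = 3                 # illustrative only, not Apple's real value

def image_fingerprint(path: Path) -> str:
    """Compute a fingerprint for an image file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def should_escalate(photos: list[Path]) -> bool:
    """Return True once enough photos match known hashes to warrant human review."""
    matches = sum(1 for p in photos if image_fingerprint(p) in KNOWN_CSAM_HASHES)
    return matches >= REVIEW_THRESHOLD
```

The threshold is the design choice doing the work here: it separates a single accidental collision from a pattern worth a moderator’s time, which is why Apple framed its proposal around multiple matches rather than a single hit.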

The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, slammed Apple’s plan, saying it could “open a backdoor to your private life” and that it represented “a decrease in privacy for all iCloud Photos users, not an improvement.”

Apple eventually put the stored-image scanning part of the plan on hold, but with the launch of iOS 15.2, it proceeded with an optional feature for child accounts included in a family sharing plan. If parents opt in, then on a child’s account, the Messages app “analyzes image attachments and determines if a photo contains nudity, while maintaining the end-to-end encryption of the messages.” If it detects nudity, it blurs the image, displays a warning for the child, and presents them with resources intended to help with safety online.
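Mechanically, that opt-in flow reduces to an on-device classify-then-blur decision. The outline below is a rough sketch under stated assumptions: the `nudity_score` stub and the threshold are invented, since Apple has not published the model’s internals:

```python
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.9  # hypothetical cutoff; Apple has not published its value

@dataclass
class AttachmentDecision:
    blur: bool
    show_child_warning: bool

def nudity_score(image_bytes: bytes) -> float:
    """Dummy stand-in for the on-device classifier (returns a 0..1 score).

    Running the model locally is what lets the feature coexist with
    end-to-end encryption: the plaintext image never leaves the device.
    """
    return 0.0  # placeholder

def handle_attachment(image_bytes: bytes, parent_opted_in: bool) -> AttachmentDecision:
    """Blur and warn only when the feature is enabled and the image is flagged."""
    if not parent_opted_in:
        return AttachmentDecision(blur=False, show_child_warning=False)
    flagged = nudity_score(image_bytes) >= NUDITY_THRESHOLD
    return AttachmentDecision(blur=flagged, show_child_warning=flagged)
```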

The main incident highlighted by The New York Times took place in February 2021, when some doctors’ offices were still closed due to the COVID-19 pandemic. As noted by the Times, Mark (whose last name was not revealed) noticed swelling in his child’s genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection.

According to the NYT, Mark received a notification from Google just two days after taking the photos, stating that his accounts had been locked due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”

Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft’s PhotoDNA to scan uploaded images for matches with known CSAM. In 2012, this approach led to the arrest of a man who was a registered sex offender and had used Gmail to send images of a young girl.

In 2018, Google announced the launch of its Content Safety API, an AI toolkit that can “proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible.” Google uses the tool for its own services and, along with CSAI Match, a video hash-matching solution developed by YouTube engineers, offers it for use by others as well.
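Where hash matching only recognizes previously catalogued images, a classifier like this scores arbitrary images and queues the riskiest for human review. A minimal sketch of that triage pattern, with a hypothetical `csam_likelihood` function standing in for Google’s proprietary model:

```python
def csam_likelihood(image_bytes: bytes) -> float:
    """Hypothetical stand-in for Google's proprietary classifier (0..1 score)."""
    return 0.0  # placeholder

def triage_for_review(images: list[bytes], cutoff: float = 0.8) -> list[int]:
    """Return indices of images queued for human review, highest score first.

    Because a classifier generalizes to never-before-seen imagery, it can
    also misfire on benign content such as medical photos (the failure mode
    at the center of this story), which is what the human-review step is
    meant to catch.
    """
    scored = sorted(
        ((csam_likelihood(img), i) for i, img in enumerate(images)),
        reverse=True,
    )
    return [i for score, i in scored if score >= cutoff]
```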

From Google’s “Fighting abuse on our own platforms and services”:

We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a “hash”, or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.

A Google spokesperson told the Times that Google only scans users’ personal images when a user takes “affirmative action,” which can apparently include backing their pictures up to Google Photos. When Google flags exploitative images, the Times notes, it is required by federal law to report the potential offender to the CyberTipLine at the NCMEC. In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, while the NCMEC alerted the authorities about 4,260 potential victims, a list that the NYT says includes Mark’s son.

Mark ended up losing access to his emails, contacts, photos, and even his phone number, as he used Google Fi’s mobile service, the Times reports. He immediately appealed Google’s decision, but Google denied his request. The San Francisco Police Department opened an investigation into Mark, who lives in the city, in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately found that the incident “did not meet the elements of a crime and that no crime occurred,” the NYT notes.

“Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms,” Google spokesperson Christa Muldoon said in an emailed statement to The Verge. “We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”

While protecting children from abuse is undeniably important, critics argue that the practice of scanning a user’s photos unreasonably encroaches on their privacy. Jon Callas, a director of technology projects at the EFF, called Google’s practices “intrusive” in a statement to the NYT. “This is precisely the nightmare that we are all concerned about,” Callas told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”



Source: The Verge
