Experts warn against YouTube’s “creepy” AI age estimation system launching in the US


YouTube will begin rolling out its AI-powered age estimation model in the US on August 13, and privacy experts are calling it a “creepy” expansion of surveillance.

The Google-owned platform says the tool will determine if a user is under 18, regardless of the birthdate on their Google account. It uses AI to analyze account age, viewing history, search queries, and even the categories of videos watched.
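YouTube has not disclosed how the model weighs these signals, but conceptually it works like any behavioral classifier: account-level features go in, a probability that the user is under 18 comes out. The sketch below is purely illustrative; the feature names, weights, and threshold are assumptions for the sake of the example, not anything YouTube has published.

```python
# Purely illustrative sketch of a behavioral age-estimation classifier.
# The features, weights, and threshold are invented for illustration;
# YouTube has not published how its actual model works.
from dataclasses import dataclass
import math


@dataclass
class AccountSignals:
    account_age_years: float    # how long the Google account has existed
    teen_category_share: float  # fraction of watch history in teen-skewing categories (0-1)
    search_style_score: float   # hypothetical score for age-typical search language (0-1)
    daytime_viewing_share: float  # fraction of viewing during school-day hours (0-1)


def estimate_under_18_probability(s: AccountSignals) -> float:
    """Combine account signals into a probability with a simple logistic model."""
    # Hypothetical hand-set weights: newer accounts and teen-skewing
    # viewing push the score up; a long-lived account pushes it down.
    z = (
        -0.6 * s.account_age_years
        + 2.5 * s.teen_category_share
        + 1.8 * s.search_style_score
        + 0.7 * s.daytime_viewing_share
        + 0.5  # bias term
    )
    return 1.0 / (1.0 + math.exp(-z))


signals = AccountSignals(
    account_age_years=1.5,
    teen_category_share=0.7,
    search_style_score=0.6,
    daytime_viewing_share=0.4,
)
p = estimate_under_18_probability(signals)
if p > 0.8:  # the cutoff is also an assumption
    print(f"Flag as likely under 18 (p={p:.2f}) and apply teen protections")
else:
    print(f"Treat as an adult account (p={p:.2f})")
```

In a real system the weights would be learned from labeled data rather than hand-set, which is exactly why experts want accuracy figures and external audits before the model gates access to the platform.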

If a user is flagged as a teen, YouTube will automatically restrict content, enable wellbeing tools, limit personalized ads, and reduce their exposure to what it calls “problematic” videos.

Users who believe the AI is wrong will have to verify their age via government ID, selfie, or credit card.

While the system has already been deployed in other countries, its arrival in the US is drawing heavy criticism from digital rights groups like the Electronic Frontier Foundation (EFF) and the Electronic Privacy Information Center (EPIC).

YouTube’s AI age estimation system sparks privacy concerns

David Greene of the EFF told Ars Technica that the lack of transparency means users have no idea how YouTube will retain or repurpose data from the appeals process, warning that leaks or breaches could expose vulnerable users who rely on anonymity.

Suzanne Bernstein from EPIC said the company hasn’t explained how long such data is stored, whether it’s sold, or how quickly it’s deleted — calling the increased surveillance “not privacy protective.”

YouTube has only confirmed that it won’t retain data from a user’s “ID or Payment Card for the purposes of advertising,” a carve-out that leaves Greene convinced the Google-owned platform will keep the information for “other purposes.”


“The most privacy protective option involves retaining the least amount of information and certainly not sharing it with third parties, which is not something that YouTube here has promised to do,” Bernstein added.


Experts also note that YouTube hasn’t revealed the AI’s accuracy, with no external audits or academic studies on its performance. Even the best systems have a two-year margin of error, meaning 16- to 20-year-olds could easily be misclassified.

Without strong US privacy laws, both Bernstein and Greene say users who appeal have “all bad” options, especially when it comes to submitting biometric data. Greene called the process “really bad and creepy,” warning that a biometric breach is “far more significant” than other kinds of data leaks.

Biometric data is especially dangerous because, unlike a password, you can’t change your face if it’s stolen. A breach could expose that information forever, leaving users vulnerable with no real safeguards.

If the tool is wrong, users could soon face a stark choice: hand over sensitive personal information or lose access to one of the world’s largest online platforms.
