BigTech's Betrayal: AI Training on Kids' Photos

When privacy fails: BigTech is using your children's photos to train AI without consent.
Human Rights Watch (HRW) has uncovered a disturbing trend: AI models are being trained on photos of children, even ones posted under strict privacy settings. HRW researcher Hye Jung Han found 190 photos of Australian children in LAION-5B, a dataset built from snapshots of the public web. These photos, scraped without consent, expose children to privacy risks and potential deepfake misuse.
The problem goes beyond the images themselves. URLs stored in the dataset can reveal children's identities, including names and locations, posing serious safety threats. Even with strict YouTube privacy settings in place, AI scrapers still archive and use these images, highlighting BigTech's failure to protect user data.
AI-generated deepfakes are already harming children: in Melbourne, explicit deepfakes of 50 girls were circulated online. For Indigenous communities, the unauthorized use of images also disrupts cultural practices, compounding the harm. How can we trust BigTech to safeguard our future when it exploits our children's privacy?
Read the full article on Ars Technica.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, you can bookmark and summarize any article, report, or video in seconds, tailored to your tone, interests, and language. Visit Futurwise.com to get started for free!
