BigTech's Betrayal: AI Training on Kids' Photos
When privacy fails: BigTech is using your children's photos to train AI without consent.
Human Rights Watch (HRW) has uncovered a disturbing trend: AI models are being trained on photos of children, including images posted under strict privacy settings. HRW researcher Hye Jung Han found 190 photos of Australian children in the LAION-5B dataset, which was built from snapshots of the public web. These photos, scraped and used without consent, expose children to privacy risks and potential deepfake misuse.
The problem extends beyond the images themselves. URLs stored in the dataset can reveal children's identities, including names and locations, posing serious safety threats. Even images uploaded under strict YouTube privacy settings have been archived and used by AI scrapers, underscoring BigTech's failure to protect user data.
AI-generated deepfakes are already harming children, as seen in Melbourne, where explicit deepfakes of 50 girls were circulated online. For Indigenous communities, the unauthorized use of images also disrupts cultural practices, compounding the harm. How can we trust BigTech to safeguard our future when it exploits our children's privacy?
Read the full article on Ars Technica.