
AI image training dataset found to include child sexual abuse imagery

[Image: Logo of LAION, which created the LAION datasets. Photo illustration by Rafael Henrique / SOPA Images / LightRocket via Getty Images]

A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.

LAION-5B, a dataset used by Stable Diffusion creator Stability AI and by Google's Imagen image generator, included links to at least 1,679 illegal images scraped from social media posts and popular adult websites.

The researchers began combing through the LAION dataset in September 2023 to investigate how much, if any, child sexual abuse material (CSAM) was present. Rather than viewing the images directly, they worked with hashes, identifiers derived from each image's contents, which were sent to CSAM detection platforms such as PhotoDNA and verified by the Canadian Centre for Child Protection.
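The hash-matching approach described above can be sketched roughly as follows. Note that PhotoDNA uses a proprietary perceptual hash resilient to resizing and re-encoding; the SHA-256 stand-in below, along with the example byte strings and the `KNOWN_HASHES` set, are illustrative assumptions and would only catch byte-for-byte identical files.

```python
import hashlib

# Hypothetical set of known-bad hashes, as a detection service might maintain.
# (Stand-in for a PhotoDNA-style database; contents here are invented.)
KNOWN_HASHES = {
    hashlib.sha256(b"example-flagged-image-bytes").hexdigest(),
}

def is_flagged(image_bytes: bytes) -> bool:
    """Hash the image and check membership in the known-hash set,
    without ever inspecting the image content itself."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(is_flagged(b"example-flagged-image-bytes"))  # True
print(is_flagged(b"benign-image-bytes"))           # False
```

The appeal of this design is that researchers and platforms can screen for known illegal material by comparing fingerprints alone, so no one has to store or view the images to detect them.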

The dataset does not keep…

