
The dark secret behind those cute AI-generated animal images


It’s no secret that large models, such as DALL-E 2 and Imagen, trained on vast numbers of documents and images taken from the web, absorb the worst aspects of that data as well as the best. OpenAI and Google explicitly acknowledge this.

Scroll down the Imagen website, past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses, to the section on societal impact and you get this: “While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized [the] LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”

It is the same kind of acknowledgement that OpenAI made when it revealed GPT-3 in 2019: “internet-trained models have internet-scale biases.” And as Mike Cook, who researches AI creativity at Queen Mary University of London, has pointed out, it’s in the ethics statements that accompanied Google’s large language model PaLM and OpenAI’s DALL-E 2. In short, these firms know that their models are capable of producing awful content, and they have no idea how to fix that.
