As the controversy swelled on Wednesday, Google PR wrote, "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it."

Wednesday night, Elon Musk chimed in on the politically charged debate by posting a cartoon depicting AI progress as having two paths, one with "Maximum truth-seeking" on one side (next to an xAI logo for his company) and "Woke Racist" on the other, beside logos for OpenAI and Gemini.

The episode reflects an ongoing struggle in which AI researchers find themselves stuck in the middle of ideological and cultural battles online. Different factions demand different results from AI products (such as avoiding bias or keeping it), with no one cultural viewpoint fully satisfied. It's difficult to provide a monolithic AI model that will serve every political and cultural viewpoint, and some experts recognize that.

"We need a free and diverse set of AI assistants for the same reasons we need a free and diverse press," wrote Meta's chief AI scientist, Yann LeCun, on X. "They must reflect the diversity of languages, culture, value systems, political opinions, and centers of interest across the world."

This isn't the first time a company with an AI image-synthesis product has run into issues with diversity in its outputs. When AI image synthesis launched into the public eye with DALL-E 2 in April 2022, people immediately noticed that the results were often biased due to biased training data. For example, critics complained that prompts often resulted in racist or sexist images ("CEOs" were usually white men, "angry man" resulted in depictions of Black men, just to name a few).

To counteract this, OpenAI invented a technique in July 2022 whereby its system would insert terms reflecting diversity (like "Black," "female," or "Asian") into image-generation prompts in a way that was hidden from the user. When OpenAI went through these issues in 2022, its technique for diversity insertion led to some awkward generations at first, but because OpenAI was a relatively small company (compared to Google) taking baby steps into a new field, those missteps didn't attract as much attention. Over time, OpenAI has refined its system prompts, now included with ChatGPT and DALL-E 3, to purposely include diversity in its outputs while mostly avoiding the situation Google is now facing.

That took time and iteration, and Google will likely go through the same trial-and-error process, but on a very large public stage.
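To make the hidden diversity-insertion technique reported in the article concrete, here is a minimal sketch of how such a prompt-rewriting layer could work: before an image prompt reaches the model, a descriptor is injected next to an underspecified person-denoting word. All function names, word lists, and the insertion heuristic below are illustrative assumptions, not OpenAI's actual implementation.

```python
import random

# Hypothetical descriptor and trigger-word lists (assumptions for
# illustration only; the real system's vocabulary is not public).
DESCRIPTORS = ["Black", "female", "Asian", "Hispanic", "male"]
PERSON_WORDS = {"person", "people", "ceo", "doctor", "man", "woman", "engineer"}

def augment_prompt(prompt: str) -> str:
    """Insert a randomly chosen descriptor before the first
    person-denoting word, unless the user already specified one."""
    words = prompt.split()
    lowered = [w.lower().strip(",.") for w in words]
    descriptor_set = {d.lower() for d in DESCRIPTORS}
    if any(w in descriptor_set for w in lowered):
        return prompt  # user already specified an attribute; leave as-is
    for i, w in enumerate(lowered):
        if w in PERSON_WORDS:
            words.insert(i, random.choice(DESCRIPTORS))
            return " ".join(words)
    return prompt  # no person mentioned; nothing to augment

print(augment_prompt("a photo of a ceo at a desk"))
```

The key property, and the source of the controversy the article describes, is that the rewriting happens server-side: the user never sees the modified prompt that the model actually receives.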