Machine learning systems routinely exhibit bias against women and people of color, and DALL-E is no different. In the project’s documentation on GitHub, OpenAI admits that “models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content” and that the system “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”