Google’s ambitious foray into generative AI, the Gemini suite of models, has hit a snag. The company has temporarily suspended Gemini’s ability to generate images of people, citing historical inaccuracies in its outputs. The move follows a flurry of social media criticism highlighting instances where the AI rendered well-known historical figures with ahistorical ethnicities and in other incongruous ways.

Gemini’s image generation feature, launched earlier this month, promised a powerful way to create unique images from text prompts. However, examples soon emerged of the AI rendering historical figures in inaccurate and insensitive ways: some depictions of the US Founding Fathers portrayed them as Native American, Black, or Asian, prompting accusations of bias and historical revisionism.

The criticism reached a crescendo with a scathing LinkedIn post from venture capitalist Michael Jackson, who called Gemini’s AI “a nonsensical DEI parody” (DEI standing for Diversity, Equity, and Inclusion). In response, Google acknowledged the problem in a statement, conceding “inaccuracies in some historical image generation depictions” and pledging to improve them immediately.

The underlying issue lies in the very nature of generative AI tools. These models learn from vast amounts of training data, and any biases or inaccuracies in that data can be reproduced, and even amplified, in the generated outputs. Concerns about bias in AI are nothing new: earlier image generators drew criticism for producing stereotypical depictions, such as sexualized images of women or images associating high-status jobs with white men.
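To make that dependence concrete, consider a deliberately simplified, hypothetical sketch (nothing like Gemini’s actual architecture): a toy “generator” that merely samples from the empirical distribution of its training labels will mirror whatever skew that data contains.

```python
import random
from collections import Counter

# Hypothetical toy illustration only: a "generator" that samples outputs
# according to the frequencies seen in its training data. Real text-to-image
# models are vastly more complex, but they likewise learn the statistics of
# their training corpus, so skew in the data tends to surface in the outputs.
training_labels = ["group_a"] * 900 + ["group_b"] * 100  # a 90/10 imbalance

def toy_generate(n_samples: int) -> Counter:
    """Draw n_samples outputs with the same label frequencies as the training set."""
    return Counter(random.choices(training_labels, k=n_samples))

print(toy_generate(10_000))  # roughly 9,000 "group_a" vs 1,000 "group_b"
```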

Perhaps the most infamous example of AI bias came in 2015, when Google’s own image classification tool mislabeled photos of Black people as gorillas. While the company promised a fix, it turned out to be a mere workaround: blocking the technology from recognizing gorillas altogether. The approach suppressed the offensive labels, but it raised concerns that sweeping fixes can paper over the root cause of a problem rather than address it.

In the case of Gemini, the temporary suspension appears to be a more nuanced approach. By acknowledging the problem and pledging to improve the technology, Google is signaling its commitment to responsible AI development. However, the question remains: how will Google address the historical inaccuracies without simply resorting to another workaround?

One potential solution lies in improving the quality and diversity of the training data used for Gemini. This could involve incorporating historical records, images, and information from a wider range of sources, producing a more accurate and representative portrayal of the past. Additionally, developing more robust tooling that can identify and flag skew in the data before training could be crucial, as sketched below.
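As a rough illustration of what such a check might look like, the hypothetical sketch below compares a dataset’s metadata distribution against an expected baseline and flags categories whose share deviates sharply. The field names, baseline shares, and tolerance are illustrative assumptions, not details of Google’s actual pipeline.

```python
from collections import Counter

def flag_skew(records, field, expected_shares, tolerance=0.10):
    """Flag categories whose observed share deviates from an expected baseline.

    Hypothetical audit sketch: `records` are metadata dicts, `field` is the
    attribute to audit, and `expected_shares` maps categories to target proportions.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flags = {}
    for category, expected in expected_shares.items():
        observed = counts.get(category, 0) / total
        if abs(observed - expected) > tolerance:
            flags[category] = {"observed": round(observed, 3), "expected": expected}
    return flags

# Toy data: a collection heavily weighted toward one region.
records = (
    [{"region": "europe"}] * 700
    + [{"region": "africa"}] * 100
    + [{"region": "asia"}] * 200
)
print(flag_skew(records, "region", {"europe": 0.4, "africa": 0.3, "asia": 0.3}))
# europe (0.7 vs 0.4) and africa (0.1 vs 0.3) are flagged; asia (0.2 vs 0.3) stays within tolerance.
```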

Furthermore, fostering transparency and collaboration with historians, cultural experts, and the public can be vital in guiding the development of responsible AI tools. Open dialogue and feedback loops can help identify potential pitfalls and ensure that AI technology is used ethically and accurately, especially when dealing with sensitive topics like history.

While the temporary pause on generating images of people might look like a setback, it is also an opportunity for Google to learn and improve. By addressing the concerns about historical accuracy and bias head-on, the company can help ensure that Gemini lives up to its potential as a powerful and responsible tool for creative expression.