Bing AI Image Creator Crosses The Line – Unsettling 9/11-esque Images Spark Outrage

Microsoft Corporation’s MSFT Bing AI Image Creator has come under scrutiny for generating images of the popular video game character Kirby in scenarios reminiscent of terrorism, prompting discussions about the limitations and consequences of artificial intelligence-generated content.

What Happened: Despite Microsoft’s efforts to implement content filters and restrictions, individuals have managed to manipulate the AI into generating images of Kirby, the lovable pink character from Nintendo, piloting airplanes towards skyscrapers in what appears to be a grim reinterpretation of the tragic events of September 11, 2001.

See Also: Tom Hanks Exposes AI Clone In Misleading Ad, Says ‘The Polar Express’ Was Early Warning Sign

The problem lies in the nature of generative AI itself: while it can autonomously produce images from text prompts, it cannot comprehend context or intent. Simply put, even with bans on keywords like “9/11,” “Twin Towers,” and “terrorism,” users can craft alternative descriptions that slip past these filters, leading to unsettling and potentially offensive imagery, reported Kotaku.

It is important to mention that while these AI-generated images are not actual depictions of 9/11, the uncanny resemblance to the attacks can be deeply upsetting to those affected.

A Microsoft spokesperson responded to Kotaku, stating, “We have large teams working on the development of tools, techniques and safety systems that are aligned with our responsible AI principles. As with any new technology, some are trying to use it in ways that were not intended, which is why we are implementing a range of guardrails and filters to make Bing Image Creator a positive and helpful experience for users.”

Why It’s Important: The incident highlights the ongoing challenges posed by AI-generated content. Because AI cannot discern the broader implications and sensitivities surrounding certain subjects, the responsibility for using the technology appropriately falls on its users.

The episode has also deepened concerns about misinformation, particularly around elections.

While both Democratic and Republican campaign teams are experimenting with AI tools such as OpenAI’s ChatGPT for their digital operations, some observers worry the technology could be used to spread false information about matters like when and where to vote.

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read Next: Microsoft’s Bing Chat Lets Users Reap Benefits Of OpenAI’s DALL-E 3 For Free
