Guardian Accuses Microsoft of Damage from AI-Generated Poll

Microsoft Shut Off AI-Generated Polls

Microsoft got rid of its news division three years ago and replaced it with AI and algorithmic automation. Unfortunately, the content its systems produce contains errors that human editors could have prevented.

The controversy surrounding its AI systems has not stopped there. Recently, Microsoft made headlines again, this time over an automated poll that damaged the reputation of The Guardian.

The automated poll was published next to a Guardian story about Lilie James, a water polo coach who was found dead with head injuries. The AI program generated a poll asking readers to speculate on the cause of her death, offering three options: murder, accident, or suicide.

Not surprisingly, The Guardian’s readers reacted angrily. 

One reader suggested that the reporter who wrote the article, and who had nothing to do with the poll, should be fired.

Most readers were not aware that the poll had been created by Microsoft, not The Guardian.

In a letter to Microsoft, The Guardian’s chief executive said the incident was “potentially distressing” for James’s family and that the poll damaged the reputation of the journalists who wrote the story.

“This is clearly an inappropriate use of genAI [generative AI] by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists.” – The Guardian

The poll was removed, but the damage had been done.

The Verge asked Microsoft what happened, and the company replied that it was still investigating how the content had been generated.

“We have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future.” – Microsoft, in a statement to The Verge

Human Oversight 

The incident clearly shows that AI must be implemented with a multi-faceted approach, one that combines technology, policy, and human oversight.

Combining AI with human moderators balances automation with human judgment, since AI alone may be unable to recognize nuanced or context-dependent content.
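To make the idea concrete, here is a minimal Python sketch of what a human-in-the-loop gate might look like. It is a hypothetical illustration, not Microsoft's actual pipeline: the names classify_sensitivity, ReviewQueue, and attach_poll are invented, and a real system would use a trained classifier rather than a keyword list. The point is simply that anything flagged as sensitive is held for a human instead of being published automatically.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-generated content.
# Names like classify_sensitivity and ReviewQueue are illustrative
# assumptions, not a real Microsoft or publisher API.

from dataclasses import dataclass, field
from enum import Enum


class Sensitivity(Enum):
    ROUTINE = "routine"      # e.g. sports scores, product launches
    SENSITIVE = "sensitive"  # e.g. deaths, crime, tragedy


SENSITIVE_KEYWORDS = {"dead", "death", "killed", "murder", "suicide", "victim"}


def classify_sensitivity(article_text: str) -> Sensitivity:
    """Crude keyword check; a production system would use a trained model."""
    words = set(article_text.lower().split())
    if words & SENSITIVE_KEYWORDS:
        return Sensitivity.SENSITIVE
    return Sensitivity.ROUTINE


@dataclass
class ReviewQueue:
    """Holds AI-generated items until a human moderator approves them."""
    pending: list = field(default_factory=list)

    def submit(self, item: str) -> None:
        self.pending.append(item)


def attach_poll(article_text: str, generated_poll: str, queue: ReviewQueue) -> str | None:
    """Only auto-publish polls on routine stories; route the rest to a human."""
    if classify_sensitivity(article_text) is Sensitivity.SENSITIVE:
        # A human decides whether a poll is appropriate here at all.
        queue.submit(generated_poll)
        return None
    return generated_poll  # safe to publish automatically
```

Under a gate like this, a story about a death would never receive an auto-published poll; a moderator would first decide whether any poll belongs there at all.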

It also helps when platforms collaborate with external fact-checking organizations to verify information and label content accordingly.

Furthermore, AI algorithms must be made transparent, so that users can understand how content is ranked, recommended, and flagged. Algorithmic transparency also means providing clear explanations for why certain content is shown to users, enabling them to make informed decisions.
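As a rough illustration of what such an explanation could look like in practice, the sketch below attaches plain-language reasons to each recommendation. The RankedItem structure and explain_ranking function are hypothetical names assumed here for illustration; a real platform would derive the reasons from its actual ranking features.

```python
# Hypothetical sketch: attaching a human-readable explanation to every
# recommendation decision. All field and function names are illustrative.

from dataclasses import dataclass


@dataclass
class RankedItem:
    content_id: str
    score: float
    reasons: list[str]  # plain-language factors behind the score


def explain_ranking(item: RankedItem) -> str:
    """Render the factors behind a recommendation so a user can inspect them."""
    bullet_list = "\n".join(f"  - {reason}" for reason in item.reasons)
    return f"Shown because (score {item.score:.2f}):\n{bullet_list}"


# Example: a transparent record a platform could expose alongside content.
item = RankedItem(
    content_id="article-42",
    score=0.87,
    reasons=[
        "matches topics you follow: technology",
        "popular with readers in your region",
    ],
)
print(explain_ranking(item))
```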

Tech companies must be held accountable for the spread of misinformation on their platforms by establishing clear policies and enforcement mechanisms. Implementing consequences for users and organizations that repeatedly share false information can also help. 

AI systems must also be designed with ethical principles in mind, prioritizing the well-being of users and society over profit. Regular audits of those systems can then detect and mitigate biases.
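One simple form such an audit might take is comparing how often a model flags content across different groups. The sketch below applies the widely used "four-fifths" disparity heuristic; the group labels, data, and 0.8 threshold are illustrative assumptions, not any company's actual standard.

```python
# Hypothetical bias-audit sketch: compare a model's flag rate across groups.
# The groups, sample data, and 0.8 threshold (the common "four-fifths rule")
# are illustrative assumptions only.

def flag_rates(decisions: dict[str, list[bool]]) -> dict[str, float]:
    """Fraction of items flagged per group; True means the model flagged it."""
    return {group: sum(flags) / len(flags) for group, flags in decisions.items()}


def disparity_check(decisions: dict[str, list[bool]], threshold: float = 0.8) -> bool:
    """Report potential bias if the lowest group rate falls below
    `threshold` times the highest group rate."""
    rates = flag_rates(decisions)
    worst, best = min(rates.values()), max(rates.values())
    return best > 0 and (worst / best) < threshold


# Example audit run over per-group moderation decisions.
audit_data = {
    "group_a": [True, False, False, False],  # 25% flagged
    "group_b": [True, True, True, False],    # 75% flagged
}
print(flag_rates(audit_data))
print("Disparity detected:", disparity_check(audit_data))
```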

Most of all, tech companies must invest in research to develop more effective AI models for detecting misinformation, disinformation, and deepfakes. 



Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing and online entrepreneurship and all things about social media. She also has a slight addiction to pizza and coffee.
