Image Credit: Innovation Origins
As part of measures being put in place to combat fake news and misinformation from Russia, the European Union has urged Google and Facebook to start labeling content and images generated by artificial intelligence.
The EU has warned of serious consequences and possible “swift” sanctions should its new digital content laws, scheduled to come into effect across the bloc on August 25, not be met.
“This is not business as usual; what the Russians want is to undermine the support of the public opinion of our citizens for the support of Ukraine,” said Věra Jourová, a European Commission vice-president, while announcing the new package, per The Guardian.
“We simply have to defend our interests, our democracy; we have also to defend our, I will say it, fight and war, because what we do is support your claim to win the war.”
Meanwhile, the EU has described Twitter’s decision to quit the voluntary code as “a mistake.”
“Twitter has chosen the hard way. They chose confrontation. This was noticed very much in the commission. I know the code is voluntary but make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinised vigorously and urgently,” Jourová said.
What the EU wants Google and Facebook to do is label AI content in such a way that it registers with users even while they are scrolling and distracted by other things.
The European Union wants users to be able to “clearly see” that such content is not produced by real people, with labels using words such as “this is the robot talking.”
The EU VP said it is the responsibility of social media companies to fight against the potential “dark side” of AI, which has the potential to fake events and voices within seconds.
In April, Italy’s Privacy Guarantor ordered that ChatGPT be blocked in the country over concerns that OpenAI violated the EU’s General Data Protection Regulation (GDPR) through the way it handles data.
The Italian regulator also said OpenAI was not doing enough to protect children. Though the company says ChatGPT is designed for users above the age of 13, there is no age check to stop those below the age limit from accessing sensitive information, the Privacy Guarantor officials said.
OpenAI was then given 20 days to address all areas of concern or face a fine of up to $21.8 million, or a maximum of four percent of its annual worldwide turnover.