Microsoft Retires AI-Powered Facial Analysis Tools


Microsoft is phasing out public access to its AI-powered facial analysis tools, including one that claims to identify a person’s emotions from pictures and videos. Experts have criticized these tools, noting that facial expressions differ across populations and that equating outward expressions with internal feelings is scientifically unsound.

For instance, a person in an image may be scowling, but a scowl and anger are two different things.

The decision to phase out these tools is part of broader changes to the company’s AI ethics policies. Microsoft recently released its Responsible AI Standard, a framework that guides how it builds AI systems and a necessary step toward developing better AI.

According to the post:

“The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.”

The company’s new standards emphasize accountability, so Microsoft will know who is using its services. The tools will still be around, but access to some features will be limited, while other features are being removed entirely.

Users who want to use the facial recognition tools will have to apply for access, telling the company how they intend to use the tools and where they will deploy the systems.

Features with a lower risk of causing harm will remain available. For example, tools used only to blur faces in videos and images are still accessible.

Even though the company is sunsetting public access to those features, it will still use them in one of its products, Seeing AI. The app uses machine vision to describe the world to visually impaired users.

In a blog post, Microsoft stated:

“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue use of these attributes before they are retired.”

The company will also place restrictions on its Custom Neural Voice feature, which enables customers to create AI voices from recordings of real people. This technique is also known as an audio deepfake.

Even though the tool can be useful in education and entertainment, it can also be used to inappropriately impersonate people and deceive listeners.

Customers subscribed to these tools won’t immediately lose access to them. Rather, they’ll have one year to use the tools before access is removed.

Last year, Google conducted a similar evaluation and, as a result, blocked 13 emotions from its tool. It is now building a new system that can describe movements, like frowning and smiling, without attaching them to an emotion.


Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing, online entrepreneurship, and all things social media. She also has a slight addiction to pizza and coffee.
