Microsoft Is Eliminating Emotion Recognition Capabilities From Its Facial Recognition Technology

Natasha Crampton, Microsoft’s chief responsible AI officer, offered a warning when the company revealed last week that it would remove the emotion recognition components from its facial recognition system: “The science of emotion is far from settled.”

Microsoft’s decision, part of a larger announcement of its “Responsible AI Standard” framework, makes it the most prominent company yet to step away from emotion recognition AI, a relatively new technology that has drawn harsh criticism, especially from academics.

Emotion recognition technology typically uses software to analyze a variety of signals, such as word choice, tone of voice, and facial expressions, in an attempt to automatically infer a person’s emotional state. Many technology companies have built software that promises to read, recognize, or measure emotions for use in business, education, and customer service.

“Even if there were proof that AI could accurately predict emotions, its use would still not be justified,” said Sandra Wachter, an associate professor and senior research fellow at the University of Oxford. Human rights such as the right to privacy, she added, safeguard our thoughts and feelings because they are the most intimate aspects of who we are.

It’s unclear how many large tech companies use algorithms designed to identify people’s emotional states. In May, more than 25 human rights organizations published a letter urging Zoom CEO Eric Yuan not to adopt emotion AI, responding to a report from the technology news site Protocol that Zoom might deploy such technology based on its recent research in the field. Zoom did not respond to a request for comment.

The policy changes mainly target Azure, Microsoft’s cloud platform, which sells software and other services to businesses and organizations. When Azure’s emotion detection AI was announced in 2016, Microsoft said it could identify a range of emotions, including “happiness, sadness, fear, anger, and more.”
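For context, here is a minimal sketch of how the now-retired capability was exposed through Azure’s Face client library for Python (the azure-cognitiveservices-vision-face package). The endpoint, key, and image URL below are placeholders, and the field names reflect the pre-retirement SDK; requests for the emotion attribute are no longer honored for new customers.

```python
# Sketch only: requests the (now retired) emotion attribute from the Azure Face API.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Ask the service to return emotion scores alongside each detected face.
faces = face_client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image URL
    return_face_attributes=[FaceAttributeType.emotion],
)

# Each detected face carried per-emotion confidence scores between 0 and 1.
for face in faces:
    emotion = face.face_attributes.emotion
    print(f"happiness={emotion.happiness:.2f} sadness={emotion.sadness:.2f} "
          f"anger={emotion.anger:.2f} fear={emotion.fear:.2f}")
```

Notably, the scores were statistical confidences about facial configurations, not direct measurements of what a person feels, which is the gap critics of the technology point to.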

Microsoft has also committed to reevaluating emotion detection AI across all of its systems to weigh the technology’s benefits and risks in different contexts.

In a written statement, Andrew McStay, professor of digital life at Bangor University and director of the Emotional AI Lab, said he would have preferred to see Microsoft discontinue all of its work on emotion AI. There is no point in continuing to use emotion AI in products, he argued, because the technology is well known not to work.

Other changes in the new guidelines include a commitment to ensuring equity in speech-to-text technology, which research has shown produces nearly twice as many errors for Black users as for white users. Microsoft has also restricted the use of its Custom Neural Voice, which can create a nearly identical replica of a person’s voice, over concerns that it could be used as a tool for deception.

Crampton said the need for the changes was driven in part by the lack of government regulation of AI systems.

“AI is increasingly a part of our lives, but our laws are not keeping up,” she said. “They have kept up with neither the specific risks of AI nor the expectations of society.” There are signs that the government is becoming more involved in AI, she added, but Microsoft also recognizes its duty to act and believes it must work to ensure that AI systems are accountable by design.