
Judi Lynn

(160,450 posts)
Tue Apr 6, 2021, 04:33 AM

Time to regulate AI that interprets human emotions

06 APRIL 2021


The pandemic is being used as a pretext to push unproven artificial-intelligence tools into workplaces and schools.
Kate Crawford

During the pandemic, technology companies have been pitching their emotion-recognition software for monitoring workers and even children remotely. Take, for example, a system named 4 Little Trees. Developed in Hong Kong, the program claims to assess children’s emotions while they do classwork. It maps facial features to classify each pupil’s emotional state into a category such as happiness, sadness, anger, disgust, surprise or fear. It also gauges ‘motivation’ and forecasts grades. Similar tools have been marketed for surveillance of remote workers. By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.
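The article doesn't describe how 4 Little Trees works internally, but the general pattern such products rest on, reducing a face to a few measurements and mapping them onto a fixed set of labels, can be sketched. What follows is a minimal, purely hypothetical Python illustration; the feature names, thresholds and rules are invented for this sketch and are not taken from 4 Little Trees or any real product.

# Hypothetical sketch of the kind of inference emotion-recognition products rest on:
# a handful of facial-landmark measurements mapped onto a discrete emotion label.
# Feature names, thresholds and rules are invented for illustration only.

from dataclasses import dataclass

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

@dataclass
class FaceFeatures:
    mouth_curvature: float   # > 0 means mouth corners turned up
    brow_height: float       # normalised distance of brows above the eyes
    eye_openness: float      # 0 (closed) to 1 (wide open)

def classify_emotion(f: FaceFeatures) -> str:
    """Map landmark measurements onto one of six labels with ad-hoc rules.

    This is the 'phrenological impulse' the article describes: the mapping
    from outward appearance to inner state is asserted, not validated.
    """
    if f.mouth_curvature > 0.3:
        return "happiness"
    if f.eye_openness > 0.8 and f.brow_height > 0.6:
        return "surprise" if f.mouth_curvature >= 0 else "fear"
    if f.brow_height < 0.2:
        return "anger" if f.mouth_curvature < 0 else "disgust"
    return "sadness"

if __name__ == "__main__":
    pupil = FaceFeatures(mouth_curvature=-0.1, brow_height=0.7, eye_openness=0.9)
    print(classify_emotion(pupil))  # prints "fear" -- a high-stakes claim drawn from three numbers

A commercial system would replace the hand-written rules with a trained neural network, but the structural claim, that a few outward measurements reveal an inner state, is the same one for which the 2019 review cited below found no reliable evidence.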

There is deep scientific disagreement about whether AI can detect emotions. A 2019 review found no reliable evidence for it. “Tech companies may well be asking a question that is fundamentally wrong,” the study concluded (L. F. Barrett et al. Psychol. Sci. Public Interest 20, 1–68; 2019).

And there is growing scientific concern about the use and misuse of these technologies. Last year, Rosalind Picard, who co-founded an artificial intelligence (AI) start-up called Affectiva in Boston and heads the Affective Computing Research Group at the Massachusetts Institute of Technology in Cambridge, said she supports regulation. Scholars have called for mandatory, rigorous auditing of all AI technologies used in hiring, along with public disclosure of the findings. In March, a citizens' panel convened by the Ada Lovelace Institute in London said that an independent, legal body should oversee development and implementation of biometric technologies (see go.nature.com/3cejmtk). Such oversight is essential to defend against systems driven by what I call the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.

Countries around the world have regulations to enforce scientific rigour in developing medicines that treat the body. Tools that make claims about our minds should be afforded at least the same protection. For years, scholars have called for federal entities to regulate robotics and facial recognition; that should extend to emotion recognition, too. It is time for national regulatory agencies to guard against unproven applications, especially those targeting children and other vulnerable populations.

More:
https://www.nature.com/articles/d41586-021-00868-5
