A.I. Thinks 'White Names' Are More 'Pleasant' Than 'Black Names'

We're well aware of the problem of implicit bias: "the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner," as defined by the Kirwan Institute.

We see implicit bias in the way police officers target certain citizens, the way managers choose prospective employees, and countless other ways we navigate daily life.

According to a new study, artificial intelligence suffers from the same problem. Conducted by Princeton University researchers Aylin Caliskan-Islam, Joanna Bryson and Arvind Narayanan, the study found that A.I. can be just as racist as humans, associating "white names" with being more pleasant than "black names."

The report was inspired by one done in the late '90s by researchers at the University of Washington, who showed a panel of white participants common names for white people and black people. Their results? The subjects rated the white-sounding names as pleasant and the black-sounding names as unpleasant. Granted, drawing conclusions from an all-white panel was a flawed setup, but the result is telling nevertheless.

What the Princeton researchers did differently was apply the same kind of test to GloVe, a popular algorithm for learning word embeddings used in natural language processing, and they found the same results.
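
To get a sense of how an association like that can be measured, here's a minimal Python sketch, assuming a pre-trained GloVe vector file in the standard text format. The file name, word lists, and names below are illustrative stand-ins, not the study's actual stimuli or method.

```python
# Minimal sketch of a name-to-"pleasantness" association score over GloVe
# vectors. Assumes a local GloVe .txt file (one word per line followed by
# its vector components). Word lists and names here are illustrative only.
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, vecs):
    """Mean similarity to pleasant words minus mean similarity to
    unpleasant words; a positive score means the word leans pleasant."""
    pos = np.mean([cosine(vecs[word], vecs[p]) for p in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[u]) for u in unpleasant])
    return pos - neg

vecs = load_glove("glove.6B.300d.txt")  # hypothetical local file
pleasant = ["joy", "love", "peace", "wonderful"]
unpleasant = ["agony", "terrible", "horrible", "nasty"]
for name in ["emily", "greg", "lakisha", "jamal"]:  # illustrative names
    print(name, round(association(name, pleasant, unpleasant, vecs), 4))
```

A systematic gap in these scores between one group of names and another is the kind of bias the researchers reported.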

Now, this is entirely our fault, considering humans (read: the predominantly non-POC working environments that allow these problems to become a reality) are the ones creating these algorithms and the text they learn from.

It's terrifying to think about, especially since A.I. is becoming a more integral part of the world every day, deciding who may or may not get a loan, or how likely an offender is to commit a future crime. A.I. is undeniably flawed, and problems such as these speak to that.

Hopefully, this study will push developers to be more mindful about the algorithms they create, and to guard the A.I. they build against implicit bias.