Industry Update

THE INHERENT BIAS IN COMPUTER PROGRAMS

Nirali Shah
Business Analyst
4th April 2020 · 7 mins read

Computer algorithms organize and select information across a wide range of applications and industries, from search results to social media. Abuses of power by Internet platforms have led to calls for algorithm transparency and regulation. Algorithms have a particularly problematic history of processing information about race. Yet some analysts have warned that foundational computer algorithms are not useful subjects for ethical or normative analysis due to complexity, secrecy, technical character, or generality. We respond by investigating what it is an analyst needs to know to determine whether the algorithm in a computer system is improper, unethical, or illegal in itself. We argue that an “algorithmic ethics” can analyze a particular published algorithm. We explain the importance of developing a practical algorithmic ethics that addresses virtues, consequences, and norms: We increasingly delegate authority to algorithms, and they are fast becoming obscure but important elements of social structure.

Today, computer algorithms play a critical role in producing and curating our communications and shared culture. Among other things, they:

  • Determine how our questions are answered
  • Decide what is relevant for us to see
  • Craft our personal and professional networks
  • Suggest who we should date and what we should watch
  • Profile our behavior to determine what advertising we will receive

Hewlett-Packard (HP) suffered a serious public relations crisis because of its face-tracking algorithms, as explained below:

  • It was revealed that HP’s implementation of what was probably a bottom-up, feature-based face localization algorithm (Yang, Kriegman, & Ahuja, 2002) did not detect Black people as having a face (Simon, 2009).
  • Cameras on new HP computers did not track the faces of Black people in some common lighting conditions. 
  • In an amusing YouTube video with millions of views, Wanda Zamen (who is White) and Desi Cryer (who is Black) demonstrate that the HP camera eerily tracks Zamen’s face while ignoring Cryer, leading Cryer to exclaim jokingly, “Hewlett-Packard computers are racist” (Zamen, 2009).

A St. Louis tech executive named Emre Şarbak noticed something strange about Google Translate. He was translating phrases from Turkish, a language that uses a single gender-neutral pronoun, “o,” instead of “he” or “she.” But when he asked Google’s tool to turn the sentences into English, they seemed to read like a children’s book. The ungendered Turkish sentence “o is a nurse” would become “she is a nurse,” while “o is a doctor” would become “he is a doctor.” Google’s translation program decided that soldiers, doctors and entrepreneurs were men, while teachers and nurses were women; overwhelmingly, the professions defaulted to male pronouns.
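A minimal sketch of how one might probe a translation system for this kind of gendered default is shown below. The translate() function is a hypothetical stand-in for whatever translation API or model is being audited, and the Turkish occupation words are only illustrative.

```python
# A minimal sketch: feed gender-neutral Turkish sentences of the form
# "o bir <occupation>" ("o is a <occupation>") into the system under test and
# tally which English pronoun comes back. translate() is a hypothetical wrapper.

PROBES = {
    "o bir doktor": "doctor",
    "o bir hemşire": "nurse",
    "o bir asker": "soldier",
    "o bir öğretmen": "teacher",
}


def translate(text: str, source: str = "tr", target: str = "en") -> str:
    """Hypothetical wrapper around the translation system being audited."""
    raise NotImplementedError("plug in the translation system you want to audit")


def audit_pronoun_defaults(probes: dict) -> dict:
    """Record which English pronoun the system picks for each gender-neutral sentence."""
    tally = {}
    for sentence, occupation in probes.items():
        english = translate(sentence).lower()
        if english.startswith("he "):
            tally[occupation] = "he"
        elif english.startswith("she "):
            tally[occupation] = "she"
        else:
            tally[occupation] = "other"
    return tally
```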

A 2016 ProPublica investigation found that software used to predict inmates’ risk of recidivism was nearly twice as likely to falsely flag African-American inmates as high risk as it was white inmates.
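A minimal sketch of the kind of per-group check behind such findings follows, assuming hypothetical records of the form (group, predicted high risk, reoffended); the point is simply to compare false positive rates across groups.

```python
# A minimal sketch: compare false positive rates (people flagged high risk who
# did not reoffend) across demographic groups. The records here are hypothetical.
from collections import defaultdict


def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk: bool, reoffended: bool)."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk, per group
    negatives = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {group: flagged[group] / negatives[group] for group in negatives}


# Made-up numbers, purely to show the computation:
sample = [("A", True, False), ("A", False, False), ("B", False, False), ("B", True, True)]
print(false_positive_rates(sample))  # {'A': 0.5, 'B': 0.0}
```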

Many companies have been using algorithms in software programs to help filter job applicants in the hiring process, typically because sorting through applications manually can be overwhelming when many people apply for the same job. A program can do that instead by scanning resumes, searching for keywords or numbers (such as school grade point averages) and then assigning an overall score to the applicant.
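A minimal sketch of this keyword-and-score style of screening is below; the keywords, weights and GPA threshold are illustrative assumptions, not any particular vendor’s rules.

```python
# A minimal sketch of keyword-based resume scoring. Keywords, weights and the
# GPA bonus threshold are illustrative assumptions.
import re

KEYWORDS = {"python": 2.0, "sql": 1.5, "project management": 1.0}
GPA_PATTERN = re.compile(r"gpa[:\s]+([0-4]\.\d+)", re.IGNORECASE)


def score_resume(text: str) -> float:
    """Assign an overall score from keyword hits plus a bonus for a high GPA."""
    text_lower = text.lower()
    score = sum(weight for keyword, weight in KEYWORDS.items() if keyword in text_lower)
    match = GPA_PATTERN.search(text_lower)
    if match and float(match.group(1)) >= 3.5:
        score += 1.0
    return score


print(score_resume("GPA: 3.8. Experienced in Python and SQL."))  # 4.5
```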

These programs can also learn as they analyze more data. Known as machine-learning algorithms, they change and adapt much as humans do so they can better predict outcomes. Amazon uses similar algorithms to learn the buying habits of customers and target ads more accurately, and Netflix uses them to learn the movie tastes of users when recommending new viewing choices.
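A minimal sketch of this learning-based flavor of screening is below, assuming hypothetical features and past hiring decisions; whatever patterns, including biases, sit in the historical decisions are what the model learns to reproduce.

```python
# A minimal sketch: instead of fixed keyword rules, a model learns from past
# hiring decisions. The feature names and data are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, gpa, referred_by_employee]; label: 1 = hired previously.
X_train = [[5, 3.9, 1], [1, 3.2, 0], [7, 3.5, 1], [2, 3.8, 0]]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# Scores for new applicants mirror whatever drove the historical outcomes.
print(model.predict([[3, 3.7, 0], [6, 3.4, 1]]))
```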

But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.

Researchers led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have discovered a technique to determine whether such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities.

“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.

Venkatasubramanian’s research revealed that a test can determine whether the algorithm in question is possibly biased. If the test, which ironically uses another machine-learning algorithm, can accurately predict a person’s race or gender from the data being analyzed, even though race and gender are hidden from that data, then there is a potential for bias under the definition of disparate impact.
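Under the assumption that the test can be framed as "hide the protected attribute, then see how well another model can recover it from the remaining features," a minimal sketch looks like the following; the data and features are hypothetical, and this is not the researchers’ exact procedure.

```python
# A minimal sketch: if an auxiliary model can predict the hidden protected
# attribute from the "neutral" features well above chance, those features act
# as a proxy for it, and decisions based on them may have a disparate impact.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Features the screening algorithm actually uses (protected attribute removed):
# [years_experience, gpa, referred_by_employee] (hypothetical).
X = [[5, 3.9, 1], [1, 3.2, 0], [7, 3.5, 1], [2, 3.8, 0], [4, 3.6, 1], [3, 3.1, 0]]
# The hidden protected attribute (e.g. group membership) for the same people.
protected = [1, 0, 1, 0, 1, 0]

# Cross-validated accuracy of predicting the protected attribute from the features.
scores = cross_val_score(LogisticRegression(), X, protected, cv=2)
print("protected-attribute predictability:", scores.mean())
```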

A researcher at the MIT Media Lab thinks that facial recognition software has problems recognizing Black faces because its algorithms are usually written by white engineers who dominate the technology sector. The argument runs as follows:

  • The engineers build on pre-existing code libraries, typically written by other white engineers.
  • As the coder constructs the algorithms, they focus on facial features that may be more visible in one race but not another.
  • These considerations can stem from previous research on facial recognition techniques and practices, which may carry its own biases, or from the engineer’s own experiences and understanding.
  • The code that results is geared toward white faces and is mostly tested on white subjects.

And even though the software is built to get smarter and more accurate with machine-learning techniques, the training data sets it uses are often composed largely of white faces. The code “learns” by looking at more white people, which does little to help it improve across a diverse array of races.
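One way to make that skew visible is to evaluate the same detector on a demographically annotated test set and compare detection rates per group; a minimal sketch follows, where detect_face() is a hypothetical wrapper around whatever detector is being audited.

```python
# A minimal sketch: run one face detector over a labeled test set and compare
# detection rates per demographic group. detect_face() is a hypothetical wrapper.
from collections import defaultdict


def detect_face(image) -> bool:
    """Hypothetical wrapper: returns True if the detector finds a face."""
    raise NotImplementedError("plug in the face detector you want to audit")


def detection_rates_by_group(labeled_images):
    """labeled_images: iterable of (group_label, image). Returns per-group hit rates."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, image in labeled_images:
        totals[group] += 1
        if detect_face(image):
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}
```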

Nirali Shah
Business Analyst at Creole Studios, and an unapologetic binge-watching millennial who can do unlimited reruns of Doctor Who.
