Artificial Intelligence (AI) is defined as the use of technology, including computers, to simulate intelligent behavior and critical thinking comparable to that of human beings.1 Many AI systems rely on algorithms trained on labeled data; face recognition software is one such system.4 Organizations and corporations may use automated facial analysis, which includes facial recognition, face detection, and face classification.4 AI has also been used to determine whom to hire or fire, who receives a loan, and how long a person spends in prison.4 This rapidly emerging field was developed to help close the equity gap and advance society, but is it actually having the opposite effect?
Using labeled data (such as word embeddings) to train AI systems can directly and indirectly create gender biases within AI applications.3 Word embedding is a technique that learns vector representations of words from text data and underpins many natural language processing tasks.3 Systems trained this way can reproduce gender stereotypes and gender biases: gender-neutral words often become associated with gendered definitions (e.g., "receptionist" with "female," or "homemaker" with "female").3
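To make this concrete, the brief sketch below probes a set of publicly available word vectors for exactly these kinds of associations. It is illustrative only, not a reproduction of any study cited here, and it assumes the open-source gensim library and its downloadable "glove-wiki-gigaword-50" GloVe vectors.

```python
# A minimal sketch of probing a word embedding for gender associations.
# Assumes the gensim library and the public "glove-wiki-gigaword-50" vectors.
import gensim.downloader as api

# Download (~66 MB on first run) and load pre-trained GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-50")

# Classic analogy probe: "man is to programmer as woman is to ?"
# Biased embeddings tend to complete this with stereotyped occupations.
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=3))

# Compare how strongly gender-neutral job titles associate with gendered pronouns.
for word in ["receptionist", "homemaker", "engineer"]:
    print(word,
          "she:", round(vectors.similarity(word, "she"), 3),
          "he:", round(vectors.similarity(word, "he"), 3))
```

If the similarity to "she" is consistently higher for words like "receptionist" and "homemaker," the embedding has absorbed the gendered associations described above from its training text.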
Beyond gender biases, racial biases have proliferated in Artificial Intelligence technology. AI benchmarks are often defined by binary labels: gender classes such as male or female, and, for race, skin types such as darker or lighter. Using phenotype as a proxy for race and ethnicity is problematic because individual features vary within racial and ethnic categories, and the categories themselves are not consistent across geographic locations.4
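One way auditors surface these gaps is to disaggregate a classifier's error rate by subgroup rather than reporting a single aggregate figure. The sketch below illustrates the idea with entirely hypothetical predictions; the subgroup labels simply echo the darker/lighter and male/female binary labels discussed above.

```python
# A minimal sketch of a disaggregated audit: error rates are computed per
# subgroup instead of in aggregate. All records below are hypothetical.
from collections import defaultdict

# (true_label, predicted_label, subgroup) from a hypothetical face classifier.
results = [
    ("female", "female", "darker-female"),
    ("female", "male",   "darker-female"),   # misclassification
    ("male",   "male",   "darker-male"),
    ("female", "female", "lighter-female"),
    ("male",   "male",   "lighter-male"),
    ("female", "male",   "darker-female"),   # misclassification
]

totals, errors = defaultdict(int), defaultdict(int)
for true, pred, group in results:
    totals[group] += 1
    errors[group] += (true != pred)

# A single aggregate accuracy figure can mask large gaps between subgroups.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

In this toy example the overall error rate looks modest, yet every error falls on one subgroup, which is precisely the pattern intersectional audits of commercial face classifiers have reported.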
Around 117 million Americans are included in law enforcement face recognition networks. Garvie and colleagues determined that African Americans are more likely to be stopped by law enforcement and subjected to face recognition searches.6 Furthermore, facial recognition systems have misidentified women, young people (ages 18–30), and people of color at high rates.7
Given these misidentification problems for women, young people, and people of color, we must wonder how AI will affect those who are non-binary, transgender, or members of another gender minority group. Gender-nonconforming and transgender individuals have higher rates of mental health problems linked to gender minority stress, which stems from stigma, discrimination, and the social stressor of being misgendered.2,5,8 As public health professionals, in order to protect the well-being of gender minority groups, we need to ask how AI systems that misclassify gender will affect these populations.
Artificial Intelligence was thought to be the tool that would close gaps in health equity and in various other aspects of daily living. Instead, AI has been shown to create racial and gender gaps within the general population.
The call is this: future research must examine how AI might impact gender minority populations, and bias-free AI must be developed.