Bergstresser S. Disability and the emergence of digital barriers to health screening. HPHR. 2021;44.
DOI: 10.54111/0001/RR3
For the disability community, there are many barriers to inclusion in public health initiatives, including screening programs and illness prevention. Physical barriers and social biases have long been impediments to access, but with the rapid creation of new digital health technologies, digital barriers have now become an additional problem. This paper uses an illustrative example to show how new digital barriers to inclusion are emerging and how existing forms of physical and social exclusion become encoded into digital health technologies. The conclusion outlines why disability inclusion in digital health is necessary for social justice and equality in public health.
Health systems and institutions have historically been designed around a normative vision of an average person. For people with disabilities who differ from these assumed norms, the result is often exclusion and denial of needed health resources.1 These problems also reach screening and prevention initiatives within public health, which can present multiple types of barriers for the disability community. Physical barriers, such as inaccessible buildings and medical equipment, can block access to care. Exclusion from health education can result from a lack of information presented in alternative formats such as large print or braille.2-3 Social bias and stigma can also lead to exclusion. For example, people with disabilities are less likely to be screened for cancer; this is particularly apparent in disparities in breast and cervical cancer screening, treatment, and survival for women with disabilities.4-5 One source of bias is the preconceived notion that disabled people are asexual, which can lessen provider attention to breast and cervical health.
Now, a new digital health world is being created, and it is once again being based on and made for people who fit within the statistical representations and social notions of average and normal. There is already ample evidence that bias exists in algorithmic and machine-learning applications for health screening and care; the biases identified so far are primarily related to notions of race, ethnicity, economic class, sex, and gender.6-9 In the area of disability, historic barriers of physical and social exclusion have been slowly addressed over time, through physical accessibility mandates in the Americans with Disabilities Act (ADA), the presentation of public health information in multiple formats, and more education for health care providers. Nevertheless, disability bias continues, and multiple forms of bias often intersect.10 As the digital health world is being created anew, the same persistent biases are making their way into data sampling methods and algorithmic design.11-12 Many of the past lessons and initiatives to increase access have been ignored or forgotten within the digital world, leaving public health researchers and health providers scrambling to address the excluded populations that their new digital tools are not being built to serve.
The following example is based on an initiative to model risk for Clostridium difficile infection (CDI) from electronic health records (EHR) using a “generalizable machine-learning approach” that produces a model for in-hospital screening.13 It shows the specific process through which physical and social exclusion can translate directly into systematic exclusion in digital health, and it focuses on the emerging area of digital health tools that use artificial intelligence (AI) and machine-learning algorithms to predict and screen for illness and risk. The model of interest was developed to identify patients at high risk for CDI in order to better target infection prevention strategies, and the aim was generalizability within institutions.13 The second aspect of this example relates to psychiatric disability: individuals diagnosed with and hospitalized for severe mental illness constitute a population that has historically faced extreme social and physical exclusion through stigma and institutionalization in isolated facilities.
The CDI risk prediction model was developed at two major academic health centers. At one of these, the University of Michigan Hospitals (UM), the study population was defined as adult inpatients admitted over a 6-year period (n=205,050 visits).13 From those data, visits with discharge in fewer than three days (n=60,927), patients who tested positive for CDI within two days of admission (n=797), and those with recent prior or duplicate CDI tests (n=1,495) were excluded, leaving 194,831 visits. These exclusions concerned variables directly related to in-hospital CDI cases, but the following exclusion did not. UM also excluded patients admitted to the inpatient psychiatric unit, which came to 3,817 excluded visits, or almost 2% of the 194,831-visit subtotal.
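To make the mechanics concrete, the sketch below shows how sequential cohort exclusions of this kind might be applied to an EHR visit table. It is an illustrative reconstruction only, not the study authors' code, and the column names (length_of_stay_days, cdi_positive_day, prior_or_duplicate_cdi_test, admitting_unit) are hypothetical placeholders.

```python
# Minimal sketch of cohort construction with sequential exclusions.
# Hypothetical columns; not the published study's code or data.
import pandas as pd

def build_cohort(visits: pd.DataFrame) -> pd.DataFrame:
    cohort = visits.copy()

    # Exclusions tied directly to the in-hospital CDI case definition
    cohort = cohort[cohort["length_of_stay_days"] >= 3]        # drop discharges in fewer than 3 days
    cohort = cohort[~(cohort["cdi_positive_day"] <= 2)]        # drop CDI-positive within 2 days of admission
    cohort = cohort[~cohort["prior_or_duplicate_cdi_test"]]    # drop recent prior or duplicate CDI tests

    # The additional, nonrandom exclusion discussed above: dropping every
    # visit admitted to the inpatient psychiatric unit removes that
    # population from the training sample entirely.
    cohort = cohort[cohort["admitting_unit"] != "inpatient_psychiatry"]
    return cohort
```

The point of showing the last filter separately is that it is not derived from the CDI case definition; it is a convenience-driven exclusion that silently narrows who the resulting screening model is built to serve.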
This nonrandom exclusion is problematic for multiple reasons. First, systematically excluding one population leads to a biased data sample; because this sample served as the basis of subsequent UM model development, the model itself is biased in the same way. In addition, the existence of social and physical exclusion was taken as sufficient reason to further exclude this population digitally, showing a direct path by which historical forms of exclusion can be encoded into the new digital health world. The study authors state: “This decision was based on the fact that psychiatric inpatients at UM are located in a secure region of the hospital isolated from other patients and caregivers.”13 Though this nonrandom exclusion of a vulnerable population was noted in the methods section, and many limitations were listed in the discussion, there was no acknowledgement that excluding admissions to the psychiatric inpatient unit limits the model's future generalizability to all adult inpatient admissions.
Though psychiatric admissions may at first seem largely irrelevant to a CDI-focused study, in reality, there are circumstances where psychiatric symptoms and increased risk for CDI are associated. For example, individuals with inflammatory bowel disease (IBD) are at higher risk for CDI, and psychiatric disorder is a frequent comorbidity of IBD.14 There are many other possible upstream determinants, though because psychiatric populations are often excluded from research on physical diseases, these potential pathways remain under-researched.
Parikh et al.8 describe two types of bias in artificial intelligence and health care: social bias and statistical bias. Social bias refers to inequity in care that systematically leads to suboptimal outcomes for a particular group, and it can be caused by human factors such as implicit or explicit bias. Statistical bias, on the other hand, results from factors including suboptimal sampling, measurement error, and heterogeneity of effects. The previously discussed example, where psychiatric inpatient visits were excluded from an infection-control risk screening model, shows evidence of both types of bias. Social bias led the UM model developers to dismiss this population as expendable, rationalizing the exclusion based on the inconvenience of accessing a physically excluded population located in a locked hospital ward. This nonrandom exclusion led to a biased sample. Statistical bias was subsequently introduced, or perhaps even exacerbated, because the machine-learning algorithm and final screening model were based on this biased sample.
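The statistical consequence can be illustrated with a minimal synthetic sketch. The example below does not use the study's data or model; it simply trains a toy screening classifier on a sample from which one small subgroup has been nonrandomly excluded, and then shows that the classifier discriminates less well for that subgroup when its underlying risk relationship differs from the majority's.

```python
# Toy illustration with synthetic data: nonrandom exclusion of a subgroup
# from training data can degrade model performance for that subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
subgroup = rng.random(n) < 0.02                 # ~2% of visits belong to the excluded subgroup
x = rng.normal(size=(n, 3))

# Assume (for illustration) that one risk factor works differently in the subgroup
logit = 0.8 * x[:, 0] + 0.4 * x[:, 1] + np.where(subgroup, 1.2, -0.2) * x[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

train = ~subgroup                               # nonrandom exclusion: subgroup never seen in training
model = LogisticRegression().fit(x[train], y[train])

auc_included = roc_auc_score(y[train], model.predict_proba(x[train])[:, 1])
auc_excluded = roc_auc_score(y[subgroup], model.predict_proba(x[subgroup])[:, 1])
print(f"AUROC, included population: {auc_included:.2f}")
print(f"AUROC, excluded subgroup:  {auc_excluded:.2f}")
```

The gap in this contrived setting is deliberate, but the mechanism is the same one described above: the sample, and therefore the model, never reflects the excluded population.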
While AI and machine-learning based screening models hold great promise for the future of rapid and accurate disease-specific screening, they are limited when they are produced through exclusion and biased sampling. In the making of the new world of digital health, the exclusion of disabled people and other marginalized populations must be remedied rapidly. If this is not prioritized, vast disparities in public health will persist, including the continued exclusion of a disproportionate number of disabled people from health screening initiatives. Mathematical solutions for algorithmic fairness are not enough, since they do not account well for complex causal relationships between biological, environmental, and social factors, social determinants of health, or the structural factors that affect health across multiple intersecting identities.15 The only path toward social justice and equality in public health is to include disabled and other marginalized populations from the start, and to sustain this inclusion throughout the entire process of developing new digital health technologies. Disability inclusion as an afterthought will not suffice.
The author(s) have no relevant financial disclosures or conflicts of interest.
Dr. Sara M. Bergstresser is currently Lecturer in the Masters of Bioethics program at Columbia University. She earned a PhD in Anthropology from Brown University, an MPH from Harvard School of Public Health, and an MS in Bioethics from Columbia University. Her research addresses the intersection of health and society, including global bioethics, mental health policy, stigma, disability studies, social justice, and structural inequalities.