Machine learning (ML) is the use of algorithms trained on data to make decisions. ML and artificial intelligence (AI) are used in nearly every industry and have become an indispensable part of our lives; science fiction has become reality. MIT has launched a new $1 billion college dedicated to computing with a focus on AI.
Technology companies spent between $20 billion and $30 billion on AI and ML in 2017, with most of the funds going into research and development. These technologies have improved cancer screening, fast-tracked drug discovery, and enabled self-driving cars. Use of AI and ML grew by 270% from 2015 to 2019, the market is projected to reach $118.6 billion by 2025, and 90% of Americans already use AI products in their everyday lives. AI and ML will play an even more critical role now that the world is still grappling with the effects of Covid-19 and many people are working from home.
However, these technologies also have a downside: the same technology that predicts disease can also perpetuate systemic racism. For example, self-driving cars are more likely to recognize white pedestrians than Black pedestrians, which diminishes safety for Black and brown people as the technology is more broadly adopted.
UnitedHealth’s Optum division was unaware of the biases in the data its algorithm used to help hospitals decide which patients require additional care and management. The system used historical spending to estimate which individuals needed extra care, and because Black patients had historically spent less on healthcare, it wrongly assigned seriously ill Black patients the same level of care as healthy white patients. Once the algorithm was revised, the share of Black patients flagged for additional care rose from 17.7% to 46.5%. This algorithm and others like it inform care for some 200 million people in the US each year. We need systems that reduce inequalities in health care, not worsen them.
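The mechanism behind this failure is worth spelling out: the model predicted healthcare cost as a proxy for healthcare need. Below is a minimal sketch of how that proxy goes wrong, with entirely hypothetical numbers and thresholds, not Optum's actual model:

```python
# Two equally sick patients; historical spending differs because of
# unequal access to care, not because of different health needs.
patients = [
    {"id": "patient_A", "illness_burden": 8, "past_spending": 9000},
    {"id": "patient_B", "illness_burden": 8, "past_spending": 4000},
]

SPENDING_THRESHOLD = 6000  # hypothetical referral cutoff

for p in patients:
    # Proxy label: predicted cost stands in for health need, so only
    # the higher historical spender is referred for extra care.
    referred_by_cost = p["past_spending"] > SPENDING_THRESHOLD
    # Direct label: a clinical measure of illness treats both alike.
    referred_by_need = p["illness_burden"] >= 7
    print(p["id"], "by cost:", referred_by_cost, "| by need:", referred_by_need)
```

The fix Optum adopted amounts to the second line of reasoning: replace the spending proxy with a more direct measure of health need.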
In another incident, a computer program used by US courts was found to be biased against Black people: the software flagged Black defendants as nearly twice as likely to reoffend as white defendants. Software like this is used every day in US courts by judges and other officials, influencing final verdicts. A Black person is twice as likely to be arrested as a white person and five times as likely to be stopped without just cause. A study last year by the National Institute of Standards and Technology (NIST) found evidence of racial bias in almost 200 facial recognition algorithms.
At the same time, financial software companies have been shown to discriminate against Black and Latinx families by charging them higher mortgage interest rates. In addition, a Google photo service labeled several Black people as gorillas, and a Microsoft chatbot, built on unspecified public data with a feature that let it learn from user interactions, began posting anti-Semitic messages within a day of its release on Twitter after a group of users deliberately fed it biased input.
Even automated speech recognition systems show significant racial disparities, with voice assistants misidentifying 35% of words from Black users but only 19% of words from white users. An algorithm developed by Amazon to automate headhunting was biased against female candidates: because hiring over the previous ten years had skewed toward male candidates, the software learned that pattern and reproduced it, perpetuating the very bias Amazon was trying to reduce.
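Disparities like these only become visible when error rates are broken out by group rather than averaged across all users. A minimal sketch of such an audit, using invented labels and groups rather than any real system's output:

```python
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Compute the misclassification rate separately for each
    demographic group, so disparities are surfaced rather than
    averaged away in a single overall accuracy number."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy data: overall accuracy looks decent, but the
# errors fall entirely on group "B".
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
groups = ["A", "A", "B", "B", "B", "A"]
print(error_rate_by_group(y_true, y_pred, groups))
# {'A': 0.0, 'B': 0.666...}
```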
According to Mozilla’s 2020 Internet Health Report, around 80% of employees at Facebook, Apple, Google, and Microsoft are men. More diverse teams produce more diverse training data sets that represent society more accurately, and diverse groups are more likely to spot bias and to work with the communities affected by it. Organizations need to invest more, financially and otherwise, in diversifying the AI and ML fields.
Researchers now use a range of techniques, from pre-processing training data to adjusting an algorithm’s decisions after the fact. Another method is “counterfactual fairness,” which requires that an algorithm’s decision remain the same in a counterfactual world where a sensitive characteristic such as gender or race is changed. These approaches can help alleviate some biases, but more needs to be done by big tech.
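As a rough illustration of the idea (a naive sketch, not a full causal treatment, with a hypothetical model and column index):

```python
import numpy as np

def counterfactual_flip_check(model, X, sensitive_col):
    """Return the fraction of individuals whose prediction changes
    when only a binary sensitive attribute is flipped. A nonzero
    rate means the model depends directly on that attribute."""
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]  # flip 0 <-> 1
    changed = model.predict(X) != model.predict(X_cf)
    return changed.mean()
```

Note that true counterfactual fairness requires a causal model, because other features (such as ZIP code or spending history) can act as proxies for race or gender even when the attribute itself is untouched; flipping a single column tests only direct dependence.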
Systems that fail to serve people of a specific gender or race equitably exhibit what is known as cognitive AI bias. Models and algorithms do what they are trained to do: find patterns in data. Most engineers prefer open-source datasets, since building new datasets takes a great deal of money and time, but these datasets do not represent all communities and can encode biased human decisions that mirror historical inequalities. For example, a model trained only on white people or only on men will perform poorly for Black people or for other genders. Algorithms trained to detect skin cancer are trained mainly on light-skinned people and are worse at identifying skin cancer on darker skin.
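A small synthetic experiment makes this concrete: train a classifier on data dominated by one group, and its accuracy on the underrepresented group can fall to near chance. This sketch uses scikit-learn and invented data, not any real medical dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weight):
    # The true label rule differs by group: `weight` selects which
    # feature actually drives the outcome for that group.
    X = rng.normal(size=(n, 2))
    y = (X @ weight > 0).astype(int)
    return X, y

# Group A dominates training (950 vs 50 examples) and its labels
# depend on feature 0; group B's labels depend on feature 1.
Xa, ya = make_group(950, np.array([1.0, 0.0]))
Xb, yb = make_group(50, np.array([0.0, 1.0]))
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh data from each group.
Xa_t, ya_t = make_group(1000, np.array([1.0, 0.0]))
Xb_t, yb_t = make_group(1000, np.array([0.0, 1.0]))
print("group A accuracy:", model.score(Xa_t, ya_t))  # high
print("group B accuracy:", model.score(Xb_t, yb_t))  # near chance
```

The model fits the majority group's pattern almost exclusively, which is exactly the dynamic behind a skin-cancer detector trained mostly on light skin.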
AI is not a perfect system, and if left unchecked it will perpetuate existing biases against the very people many claim it will benefit. Algorithms detect patterns in data using a top-down approach, and these systems increasingly rely on deep learning, a type of ML that has become highly effective for specific problems. But such systems depend entirely on data and present no reasoning as part of their output: the AI cannot explain how it reached a particular conclusion.
If we are not careful while building these systems, we risk automating the same biases they are designed to eliminate. For example, software initially designed to predict where crime might occur can lead to over-policing of those same areas.