We have accepted AI into our daily lives, along with its use in complex processes such as self-driving cars, facial recognition and medical diagnosis. It is an inescapable innovation, one that has evolved beyond simple rule-based systems into sophisticated neural networks driven by deep learning algorithms. As computing power increases, so will AI's capabilities, carving out new frontiers of innovation and ushering in an unprecedented era of technological advancement across industries.
That is the perfect world we wish we lived in. But first, a pressing issue needs addressing: AI bias. This refers to the tendency of algorithms to reflect human biases and make unfair decisions about particular groups of people. It occurs because humans choose the training data and decide how the results are used; when prejudiced assumptions are made during the machine learning process, they shape the algorithm's development and growth.
An AI is only as good as the data it ingests. Biased data can lead to unfair recruitment, prejudice and discrimination. Let's take a look at some of the most common biases that arise when developing an AI:
Data bias. Often touted as an infallible form of information, data is still far from perfect when it is incomplete or of low quality. Data with these flaws does not accurately reflect what is happening in the real world.
Prejudice bias. This bias arises from human input as a result of stereotyping others, whether intentional or not. Our perceptions of age, gender, disability, social class, race and nationality can creep into datasets during the machine learning phase, skewing an AI's learning model.
Interaction bias. When an AI is left to extract information on its own and learn by interacting with others, there will be instances where it picks up erroneous assumptions from other users. Instead of rejecting bad data, the AI internalizes it, reflecting the opinions of the people who trained it.
Confirmation bias. This bias is prevalent whenever developers train an AI. We tend to trust information that aligns with our beliefs, experience and understanding while dismissing information that does not. As we continue to adjust and steer the process in a particular direction, the AI will not only reinforce our views but push them to extremes.
Association bias. When data is collected without proper monitoring, the same pattern emerges and re-emerges, training an AI to treat a false pattern as fact. As the same results are generated and fed back into its algorithm, a positive feedback loop forms that amplifies the AI's bias; the sketch after this list shows how quickly such a loop compounds.
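To make that feedback loop concrete, here is a minimal, purely illustrative Python simulation. It is not a model of any real system: the base rate, the initial skew and the "gain" (how much of the skewed output each retraining absorbs) are invented numbers, chosen only to show the compounding mechanism.

```python
def feedback_loop(rounds=5, base_rate=0.50, initial_skew=0.05, gain=0.5):
    """Toy positive feedback loop: each round the model's skewed
    predictions are recycled as training labels, so part of the skew
    is absorbed into the next model and the group gap compounds."""
    skew = initial_skew
    for r in range(1, rounds + 1):
        rate_a = base_rate + skew  # inflated favorable rate for group A
        rate_b = base_rate         # unchanged rate for group B
        print(f"round {r}: group A = {rate_a:.3f}, group B = {rate_b:.3f}")
        skew += gain * (rate_a - rate_b)  # retraining absorbs the gap

feedback_loop()
```

In five rounds the initial 5-point gap between the groups grows to roughly 25 points, and nothing inside the loop ever corrects it. That is why recycled labels need to be audited before they re-enter training.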
It is easy to assume that biases introduced during development are nothing to worry about, since they can still be fixed. In reality, however, AI bias has been observed in abundance, with significant consequences across industries and for those who have been discriminated against. Consider the following instances:
Amazon's recruiting engine. This AI was created to help Amazon evaluate resumes and decide which applicants would be called in for further interviews. Unfortunately, the algorithm learned from the company's past hiring practices, replicating their biases in the process. As a result, female candidates were penalized with lower scores, decreasing their chances of landing a job at Amazon.
PredPol. Software company PredPol, now known as Geolitica, deployed an AI-powered policing solution between 2018 and 2021 to predict crime hotspots based on data collected by US police departments. Learning from this historical data, the AI repeatedly sent officers to over-patrol minority neighborhoods. The new data those patrols generated was then fed back into the system, creating exactly the kind of feedback loop described above and amplifying the bias.
Google Photos. This app has an image labeling feature that adds descriptive tags identifying the objects and elements present in a photo. In 2015, a Black software developer found that the app had organized photos of him and his friends into a folder called "Gorillas". Despite being trained on millions of images in a controlled environment, harmful biases still existed in the app's algorithm.
Implicit bias in language models. Last year, research from the Penn State College of Information Sciences and Technology found that all 13 NLP models tested exhibited prejudice against people with disabilities. The adjectives the models generated for disability-related words also carried more negative sentiment than those generated for words without such associations.
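A drastically simplified version of this kind of perturbation test can be run with an off-the-shelf model. The sketch below is not the researchers' methodology: the sentence pairs are invented, and it assumes the Hugging Face transformers library with its default sentiment pipeline.

```python
from transformers import pipeline

# Default sentiment model (a DistilBERT variant at the time of writing);
# the sentence pairs below are invented for illustration only.
classifier = pipeline("sentiment-analysis")

pairs = [
    ("She is a talented musician.",
     "She is a deaf, talented musician."),
    ("He told a story about his friend.",
     "He told a story about his blind friend."),
]

for baseline, perturbed in pairs:
    for text in (baseline, perturbed):
        result = classifier(text)[0]  # {'label': ..., 'score': ...}
        print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```

If the perturbed sentences consistently flip toward negative labels or score lower than their baselines, that is the implicit-bias signal the study reported, reproduced in miniature.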
To address bias, business leaders need to establish ethical and technical strategies before AI systems are built. Organizations should hire and form teams with diverse backgrounds, disciplines, genders, races and cultures, as their real-world experiences and perspectives help imbue AI with an inclusive vision of society, one with minimal biases, prejudices and stereotypes.
It is also worth remembering that there is no such thing as a universal dataset. Developers and engineers have to identify problems and guide the AI to prevent the echo chambers, filter bubbles and misinformation that will eventually corrupt its machine learning model.
While there are many ways to minimize bias in AI, none is foolproof. It takes a multidisciplinary approach: diverse teams, effective data control and governance, continuous validation for fairness, established best practices, and unique and multifarious groups of testers enlisted to identify bias and remove it from the equation. A simple example of what such validation can look like is sketched below.
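As one concrete instance of continuous validation, here is a minimal demographic-parity check: it compares the rate of favorable outcomes a model assigns to each group and flags gaps above a tolerance. The data, group names and threshold are illustrative assumptions; real audits use richer metrics (equalized odds, calibration) on real predictions.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest gap in favorable-outcome rates across groups.

    predictions: iterable of 0/1 model outputs (1 = favorable).
    groups: group label for each prediction, in the same order.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += pred
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: ten predictions across two groups.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)          # {'A': 0.8, 'B': 0.4}
THRESHOLD = 0.1       # illustrative tolerance, not an industry standard
if gap > THRESHOLD:
    print(f"fairness alert: parity gap {gap:.2f} exceeds {THRESHOLD}")
```

Run on every retrained model, a check like this turns "continuous validation" from a slogan into a gate that can stop a biased model from shipping.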
Designing the perfect AI is extremely complicated because humans are complex. The machines we create reflect our flaws. And that is fine: our failures have allowed us to move forward. Being human is a wonderful thing; we always find ways to remedy problems, challenge our perspectives and overcome adversity. We should strive to let AI mirror only our best selves, intentions and aspirations. Its success hinges on our ability to work collaboratively and inclusively to create better AI outcomes.