
To make healthcare fairer for Black communities, AI needs to include Black voices and experiences

Photo: Getty Images

Artificial intelligence (AI) is becoming a big part of our lives, including in healthcare. Some people are worried AI might get too smart, take away jobs, or hurt people’s privacy.

Dr. Vanathi Gopalakrishnan, an associate professor in the University of Pittsburgh’s Department of Biomedical Informatics, wants to ease those fears. She hopes people will see how AI can help improve healthcare, especially for communities that don’t always get the care they need.

Dr. Gopalakrishnan believes that AI and generative AI can make healthcare fairer — but only if people from underserved communities help shape it. “It’s not enough to teach people how to use AI,” she says. “We need to teach them how to question it and teach it to reflect our values, not just our data.”

AI refers to computer systems that try to make decisions like humans. In healthcare, AI uses information to predict things like someone’s risk of getting cancer or to suggest treatments.


AI can also find patterns in health data and use them to make predictions, an approach called predictive modeling (PM). PM can help estimate how a cancer might grow or whether a patient might need to return to the hospital.
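For readers curious what a predictive model actually looks like, here is a minimal sketch in Python. Every feature name and number is invented purely for illustration; real readmission models are trained on far richer clinical records.

```python
# A toy predictive model: estimate the chance a patient returns to the
# hospital within 30 days. All data here is made up for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [age, number of prior admissions, days in hospital]
patients = [
    [45, 0, 2],
    [67, 2, 5],
    [72, 3, 7],
    [38, 0, 1],
    [59, 1, 4],
    [80, 4, 9],
]
# 1 = readmitted within 30 days, 0 = not readmitted
readmitted = [0, 1, 1, 0, 0, 1]

# Fit a simple model that learns which patterns go with readmission.
model = LogisticRegression().fit(patients, readmitted)

# Predict the readmission risk for a new, hypothetical patient.
new_patient = [[65, 2, 6]]
risk = model.predict_proba(new_patient)[0][1]
print(f"Estimated 30-day readmission risk: {risk:.0%}")
```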

Generative AI is different. It learns from large amounts of data and creates new things — like text, images, or artificial data that looks real. Researchers use this kind of artificial data to protect real patients’ privacy during studies and testing.
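As a rough sketch of the idea, the toy Python example below learns simple statistics from a handful of invented patient records, then samples brand-new artificial rows that resemble them but describe no actual person. Real generative models are far more sophisticated and also capture the relationships between variables, which this naive version ignores.

```python
# Toy synthetic-data generator: learn the mean and spread of each column
# from "real" records, then sample artificial records with the same
# overall statistics. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend these are real patient measurements: [age, systolic blood pressure]
real_records = np.array([
    [52, 128],
    [61, 135],
    [47, 121],
    [70, 142],
    [58, 130],
])

means = real_records.mean(axis=0)
stds = real_records.std(axis=0)

# Draw 3 synthetic records from a normal distribution with the same
# per-column mean and standard deviation as the real data.
synthetic = rng.normal(loc=means, scale=stds, size=(3, 2)).round(0)
print(synthetic)  # new rows that mimic the data without copying anyone
```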

For Black communities, AI can lead to better and more equal healthcare. With tools like apps and chatbots, doctors can give health advice 24/7. In Pittsburgh, for example, researchers at Carnegie Mellon University created the Auntie app (auntieai.org). It helps new moms navigate their feelings, relationships, and self-care.

Other helpful uses of AI include quickly looking at scans, blood tests, and medical records. This helps doctors find illnesses like cancer or heart problems faster, so they can start treatment sooner.

AI can also help determine which communities are most at risk for diseases like diabetes, high blood pressure, or mental health issues — and suggest ways to help.

Additionally, AI can be trained to understand and honor the culture and needs of a community. This can lead to more respectful care and health messages that make sense to that group.

Even though these benefits sound great, we still need to be careful. AI isn’t always clear about how it works and can sometimes be unfair — especially generative AI. That’s why it’s important to think about who trains the AI and how it’s trained.

If AI tools are trained with unfair or missing information, they can harm people, especially those who live in already-marginalized communities. Dr. Gopalakrishnan says, “If Black Pittsburghers aren’t part of the data or the process, the technology might ignore or misrepresent them. We also worry about data privacy and trust, which are key for using AI the right way.”

AI has a lot of promise — but also risks. AI systems that are “black boxes” don’t always explain how decisions are made. They can undermine people’s privacy if data is shared without permission.

Some current AI systems already have unfairness built in because of the historical data they were trained on. Black patients and communities are often missing from that data, partly through past exclusion and partly because of low participation in scientific research studies.

“For example, an AI program that decides who gets special care might give lower scores to Black patients if it’s based on unfair data from the past,” Dr. Gopalakrishnan explains. “If we’re not careful, AI can make health differences worse.”
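A small numerical sketch shows how this can happen. Suppose an algorithm is trained to predict future healthcare spending as a stand-in for medical need. If one group has historically had less access to care, its past spending is lower even at the same level of illness, so the model gives that group lower priority scores. The patients and numbers below are invented solely to illustrate the mechanism.

```python
# Two hypothetical patients who are equally sick (same illness score),
# but one belongs to a group with historically less access to care,
# so their past healthcare spending -- the training label -- is lower.
equally_sick_patients = [
    {"group": "A", "illness": 7, "past_spending": 9000},
    {"group": "B", "illness": 7, "past_spending": 5400},  # less access, less spending
]

# A system trained to predict spending ranks patient B as "lower need,"
# even though both patients are equally sick, so B may miss special care.
for p in equally_sick_patients:
    score = p["past_spending"] / 1000  # toy "priority score"
    print(f"Group {p['group']}: illness={p['illness']}, score={score:.1f}")
```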

That’s why Dr. Gopalakrishnan focuses on something called data justice. This means the data used to create AI should be fair, collected ethically, and used in ways that help the people it represents. “At the PRoBE lab at Pitt, we focus on models that are based on data and guided by the community — especially when it comes to things like maternal health and long-term/chronic diseases,” she says.

Dr. Gopalakrishnan strongly believes that for AI to truly help Black communities, Black voices must be included every step of the way.

First, AI researchers need to come from diverse backgrounds and share a commitment to fairness, good outcomes, and ethics. “It’s not just about fixing the data,” she says. “It’s about deciding who sets the rules, what success looks like, and who benefits. That’s what AI fairness and cultural understanding really mean.”

Next, researchers should partner with Black community groups that bring trust and experience to the process of building and training AI. “Local people should help decide what questions AI is built to answer,” Dr. Gopalakrishnan adds.

In Pittsburgh, teaching people about AI is becoming more important. Libraries, nonprofits, and other groups are offering classes and apps that show people not just how to use AI, but how to question it.

For example, Toyz STEAM (toyzelectronics.com), a group of Black engineers of all ages committed to increasing diversity in universities, is using an app called “DahVarsity AI” to help young people become STEAM superheroes. The app lets young people tell their own stories using video games and AI. CBS Pittsburgh highlighted their work in a recent broadcast, and the Pittsburgh Technology Council featured Toyz STEAM in a cover story.

Dr. Gopalakrishnan encourages everyone to learn about and take part in AI. “If there’s a chance to help with AI research, please take it — and tell the young people in your life to get involved at school, at a library, or at community centers,” she says. “The next generation will keep this work going by asking: Who made this AI tool, and who does it help—or hurt?”

 
