Women in AI: Chinasa T. Okolo researches AI's impact on the Global South


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Chinasa T. Okolo is a fellow at the Brookings Institution's Center for Technology Innovation, within the Governance Studies program, and in September 2024 she was named one of the 100 most influential people in AI by Time. Before Brookings, she served on the ethics and social impact committee that helped develop Nigeria's National Artificial Intelligence Strategy and has advised various organizations on AI policy and ethics, including the African Union Development Agency and the Quebec Artificial Intelligence Institute. She recently received a PhD in computer science from Cornell University, where she researched how AI impacts the Global South.

Briefly, how did you get your start in AI? What attracted you to the field?

I initially transitioned into AI because I saw how computational techniques could advance biomedical research and democratize access to healthcare for marginalized communities. During my last year of undergrad [at Pomona College], I began research with a human-computer interaction professor, which exposed me to the challenges of bias within AI. During my PhD, I became interested in understanding how these issues would impact people in the Global South, who represent a majority of the world’s population and are often excluded from and underrepresented in AI development. 

What work are you most proud of in the AI field?

I’m incredibly proud of my work with the African Union (AU) on developing the AU-AI Continental Strategy for Africa, which aims to help AU member states prepare for the responsible adoption, development, and governance of AI. The strategy took over 1.5 years to draft and was released in late February 2024. It is now in an open feedback period, with the goal of being formally adopted by AU member states in early 2025.

As a first-generation Nigerian American who grew up in Kansas City, Missouri, and didn’t leave the States until studying abroad during undergrad, I always aimed to center my career within Africa. Engaging in such impactful work so early in my career makes me excited to pursue similar opportunities to help shape inclusive, global AI governance.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

Finding community with those who share my values has been essential in navigating the male-dominated tech and AI industries. 

I’ve been fortunate to see many advances in responsible AI and prominent research exposing the harms of AI being led by Black women scholars like Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji, many of whom I’ve been able to connect with over the past few years. 

Seeing their leadership has motivated me to continue my work in this field and shown me the value of going “against the grain” to make a meaningful impact.

What advice would you give to women seeking to enter the AI field?

Don’t be intimidated by a lack of a technical background. The field of AI is multi-dimensional and needs expertise from various domains. My research has been influenced heavily by sociologists, anthropologists, cognitive scientists, philosophers, and others within the humanities and social sciences.

What are some of the most pressing issues facing AI as it evolves?

One of the most prominent issues will be improving the equitable representation of non-Western cultures in leading language and multimodal models. The vast majority of AI models are trained on English-language data that primarily represents Western contexts, leaving out valuable perspectives from the majority of the world.

Additionally, the race toward building ever-larger models will lead to greater depletion of natural resources and greater climate impacts, which already fall disproportionately on Global South countries.

What are some issues AI users should be aware of?

A significant number of AI tools and systems deployed to the public overstate their capabilities and simply don’t work. Many tasks people aim to use AI for could likely be solved with simpler algorithms or basic automation.

Additionally, generative AI has the capacity to exacerbate harms observed from earlier AI tools. For years, we’ve seen how these tools exhibit bias and lead to harmful decision-making against vulnerable communities, which will likely increase as generative AI grows in scale and reach. 

However, equipping people with the knowledge to understand the limitations of AI may help improve the responsible adoption and usage of these tools. Improving AI and data literacy within the general public will become fundamental as AI tools rapidly become integrated into society.

What is the best way to responsibly build AI?

The best way to responsibly build AI is to be critical of the intended and unintended use cases for these tools. People building AI systems have a responsibility to object to AI being used in harmful scenarios like warfare and policing, and they should seek external guidance on whether AI is appropriate for other use cases they may be targeting. Given that AI is often an amplifier of existing social inequalities, it is also imperative that developers and researchers be cautious in how they build and curate the datasets used to train AI models.

How can investors better push for responsible AI?

Many argue that rising VC interest in “cashing out” on the current AI wave has accelerated the rise of “AI snake oil,” [a term] coined by Arvind Narayanan and Sayash Kapoor. I agree with this sentiment and believe that investors must take leadership positions, along with academics, civil society stakeholders, and industry members, to advocate for responsible AI development. As an angel investor myself, I have seen many dubious AI tools on the market. Investors should also invest in AI expertise to vet companies and request external audits of tools demoed in pitch decks.

Anything else you wish to add?

This ongoing “AI summer” has led to a proliferation of “AI experts” who often detract from important conversations on present-day risks and harms of AI and present misleading information on the capabilities of AI-enabled tools. I encourage those interested in educating themselves on AI to be critical of these voices and seek reputable sources to learn from.
