AI THOUGHT LEADERSHIP & RESEARCH
2025 Yale School of Public Health Case Study: Girl Effect, Big Sis: Choosing a Path to Scale and Sustain a Mission-Driven, AI-Enhanced Chatbot
Dive into how Karina Rios Michel, Chief Creative and Technology Officer at Girl Effect, led the evolution of Big Sis—an AI-powered health companion for adolescent girls—at a pivotal moment for global health innovation. This Yale School of Public Health case study examines how she balanced rapid advances in generative AI with shifting donor priorities and funding constraints, exploring how mission-driven organizations can harness emerging technologies responsibly and sustainably to achieve impact at scale.
2025 Stanford Center for Digital Health White Paper: Generative AI for Health in LMICs
This white paper explores how large language models can be responsibly designed, deployed, and governed to improve health outcomes in low- and middle-income countries. It highlights opportunities for generative AI to close information and access gaps in underserved populations, while addressing challenges related to data bias, linguistic diversity, safety, and equity. The publication features real-world applications—including Girl Effect’s AI-powered health companions—as case studies demonstrating how human-centered design, local partnerships, and ethical frameworks can make AI health tools both scalable and trustworthy.
2025 Girl Effect White Paper: Building with GenAI: Girl Effect’s Journey to Smarter, Safer Health Chatbots
This white paper shares Girl Effect’s journey in integrating generative AI into adolescent health platforms across Africa and South Asia. It highlights how the team designed, tested, and scaled AI-powered chatbots that deliver trusted, culturally attuned health information to millions of young people. The paper outlines practical lessons on building safe and ethical GenAI systems—covering language modeling for low-resource settings, human-in-the-loop safeguards, and strong data governance. It serves as both a field guide and a call to action for creating AI that prioritizes empathy, equity, and user trust.
2025 Girl Effect: Ethical AI Guidelines for the Deployment of Social & Behavior Change Chatbots
Girl Effect’s Ethical AI Guidelines outline the principles and practices the organization follows to ensure responsible, safe, and equitable deployment of AI in adolescent health. The guidelines emphasize user safety, privacy, inclusivity, and transparency, providing a framework for human-centered design, human-in-the-loop moderation, and risk mitigation. By codifying these standards, the document serves as a practical roadmap for building AI health tools that are trustworthy, culturally sensitive, and designed to empower young people while minimizing harm.
2023 Girl Effect: Artificial Intelligence & Machine Learning Vision
2020 United Nations Children’s Fund Learning Brief: When Chatbots Answer Their Private Questions
This brief offers guidance for implementing safer chatbots in digital sexuality education and support, emphasizing safeguarding measures to protect children and adolescents—especially girls—when engaging with sensitive topics. It outlines practical steps for developers to enhance the safety and effectiveness of these tools and highlights Girl Effect’s early use of keyword-trigger systems to escalate sensitive disclosures to trained team members for timely support.
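For readers unfamiliar with the approach described in the brief, the sketch below shows the simplest form a keyword-trigger escalation check can take: scan an incoming message for sensitive terms and flag it for a trained team member. The keyword list, categories, and function names here are hypothetical illustrations, not drawn from Girl Effect’s or UNICEF’s actual systems.

```python
# Illustrative sketch of a keyword-trigger escalation check.
# Keywords, categories, and names are hypothetical, not a real deployment.

# Map of trigger phrases to the kind of support they should escalate to.
SENSITIVE_KEYWORDS = {
    "hurt myself": "self-harm",
    "suicide": "self-harm",
    "abuse": "safeguarding",
    "assault": "safeguarding",
    "pregnant": "health",
}


def flag_for_escalation(message: str) -> str | None:
    """Return an escalation category if the message contains a sensitive
    keyword, otherwise None. Matching is a case-insensitive substring check,
    the most basic version of a keyword-trigger system."""
    text = message.lower()
    for keyword, category in SENSITIVE_KEYWORDS.items():
        if keyword in text:
            return category
    return None


if __name__ == "__main__":
    incoming = "I think I might be pregnant and I'm scared"
    category = flag_for_escalation(incoming)
    if category:
        # In a real deployment this would route the conversation to a
        # trained team member rather than just printing a notice.
        print(f"Escalate to human support (category: {category})")
```

In practice, systems of this kind typically layer on fuzzy matching, multilingual keyword sets, and human review queues; the point of the sketch is only to show where the escalation decision sits in the message flow.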