The People Behind Mozilla’s Digital Revolution

It was a gray Tuesday morning when I first stepped into Mozilla Foundation’s modest research lab in downtown San Francisco. Not the gleaming, sterile environment you might expect from a tech powerhouse, but rather a converted industrial space with exposed brick and the faint aroma of fair-trade coffee lingering in the air. As a journalist who’s covered tech for nearly two decades, I’ve grown somewhat cynical about grand promises of “making the internet better for everyone.” But something about Mozilla’s approach feels authentically different.

“We’re not building products to sell you things,” says Dr. Eliza Chen, who greets me with a firm handshake and immediately apologizes for the mess on her desk—a chaotic array of sticky notes, academic papers, and what appears to be a half-eaten vegan muffin. “We’re researching how to make technology serve humans, not the other way around.”

Chen, whose background spans both computer science and social anthropology, represents the interdisciplinary approach that makes Mozilla’s foundation model research distinctive in a landscape dominated by profit-driven innovation.

The Human Side of Foundation Models

Mozilla Foundation’s work on foundation models—the sophisticated AI systems that power everything from chatbots to content recommendation engines—isn’t just about technological advancement. It’s fundamentally about people.

“Foundation models are reshaping our digital experiences, often invisibly,” explains Chen, pushing her wire-rimmed glasses up her nose as she gestures toward a whiteboard covered with flowcharts. “But who decides how they work? Who benefits? These are questions of power that affect everyone who uses the internet—which is basically everyone.”

Unlike many tech organizations that treat AI ethics as an afterthought, Mozilla has embedded these questions into their core research methodology. Their latest project examines how recommendation systems—like those used by streaming services and social media platforms—can be designed to prioritize user agency and transparency.
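To make that abstract idea concrete, here is a deliberately simplified sketch—my own illustration, not Mozilla’s code—of what a recommender built around user agency and transparency might look like. The Item fields, the weights, and the recommend function are all assumptions for the sake of the example:

```python
from dataclasses import dataclass

# Illustrative only: these fields and weights are assumptions for this
# sketch, not Mozilla's actual recommendation design.
@dataclass
class Item:
    title: str
    topical_match: float      # fit with the user's *stated* interests, 0-1
    engagement_score: float   # predicted click/watch likelihood, 0-1

def recommend(items, w_interests=0.8, w_engagement=0.2, top_n=3):
    """Rank items with weights the user controls, returning a plain-language
    explanation alongside each score so the ranking is inspectable."""
    ranked = []
    for item in items:
        score = w_interests * item.topical_match + w_engagement * item.engagement_score
        why = (f"{item.title}: {w_interests:.0%} stated interests "
               f"({item.topical_match:.2f}) + {w_engagement:.0%} predicted "
               f"engagement ({item.engagement_score:.2f})")
        ranked.append((score, why))
    ranked.sort(reverse=True)
    return ranked[:top_n]

# A user who distrusts engagement optimization can simply dial it to zero.
catalog = [Item("Community gardening basics", 0.9, 0.3),
           Item("Outrage compilation, part 12", 0.1, 0.95)]
for score, why in recommend(catalog, w_engagement=0.0):
    print(f"{score:.2f}  {why}")
```

The point isn’t the arithmetic. It’s that the user, not the platform, sets the weights—and every ranking arrives with an explanation a person can actually read.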

From Labs to Living Rooms

What struck me most during my visit wasn’t the technical brilliance (though there was plenty), but how researchers continually connected their work to ordinary lives.

Marcus Jimenez, a former advertising executive who now leads Mozilla’s user experience research, has a personal stake in this work. “My mother, who’s 78, got completely overwhelmed by misinformation during the pandemic,” he tells me as we walk through the office kitchen, where someone has posted a handwritten sign asking people to please wash their dishes. “It wasn’t because she’s not intelligent—she has a master’s degree—but because recommendation algorithms kept feeding her increasingly alarming content.”

This personal connection drives Mozilla’s commitment to making their research accessible. Unlike proprietary foundation models developed behind closed doors, Mozilla’s approach emphasizes transparency and community involvement.

“We actually want people to understand how these systems work,” says Jimenez, who speaks with a slight New Jersey accent that becomes more pronounced when he’s excited. “That’s why our publications are freely available and written in relatively plain language.”

Their recent paper on recommendation systems outlines a framework for personalization that respects user autonomy—a stark contrast to engagement-maximizing approaches that dominate the industry. The paper has been downloaded over 15,000 times, not just by academics but by nonprofit leaders, educators, and concerned citizens.

Mozilla’s research aligns with several critical technology trends affecting nonprofits and public interest organizations. While many tech companies talk about “AI for good,” Mozilla is specifically asking how AI can serve democratic values and public interest.

“Foundation models represent an inflection point,” explains Dr. Aisha Rahman, whose office is adorned with photographs from her fieldwork in rural communities across four continents. “They can either concentrate more power in fewer hands, or they can help democratize access to knowledge and tools.”

Rahman, who initially seems reserved but becomes animated when discussing her work, leads Mozilla’s efforts to ensure foundation models can benefit underserved communities. Her team is developing frameworks for nonprofits to evaluate whether AI systems align with their missions and values.

“Many nonprofits feel pressured to adopt AI just to keep up,” she notes, “but they need tools to assess whether these technologies actually serve their communities or just create new dependencies on big tech.”
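What might such an assessment tool look like in practice? The questions and weights below are my own sketch of the general idea, not Mozilla’s published framework:

```python
# A hypothetical mission-alignment checklist of the kind Rahman describes.
# The criteria and their weights are illustrative assumptions, not
# Mozilla's actual evaluation framework.
CRITERIA = {
    "Does the vendor disclose what data the system collects?": 2,
    "Can we export our data and leave without losing it?": 2,
    "Does the tool address a need our community actually named?": 3,
    "Can staff explain the system's outputs to the people it affects?": 2,
    "Is there a simpler non-AI alternative that serves the mission?": 1,
}

def assess(answers):
    """answers maps each criterion to True/False; returns an overall
    score plus the list of criteria that still need attention."""
    earned = sum(w for q, w in CRITERIA.items() if answers.get(q))
    total = sum(CRITERIA.values())
    unmet = [q for q in CRITERIA if not answers.get(q)]
    return earned / total, unmet

# Example: an organization that can answer yes to the first three questions.
score, unmet = assess({q: True for q in list(CRITERIA)[:3]})
print(f"Mission-alignment score: {score:.0%}")
for q in unmet:
    print("Needs attention:", q)
```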

The Ripple Effects

Mozilla’s research isn’t happening in isolation. Their findings on foundation models are influencing how other organizations—from public libraries to international development agencies—approach technology.

Samantha Wu, director of technology at a medium-sized environmental nonprofit (she asked that I not name her organization), told me by phone that Mozilla’s frameworks helped her team make critical decisions about their data strategy.

“We were being pitched expensive AI solutions that promised to revolutionize our donor engagement,” Wu explained. “But Mozilla’s research helped us ask the right questions about data ownership and algorithmic bias. We ended up with a simpler solution that actually respects our supporters’ privacy.”

This impact extends beyond formal organizations to individuals seeking to navigate increasingly complex digital environments. Mozilla’s research has informed educational resources that help people understand how their data fuels recommendation systems and how they can maintain more control over their online experiences.

Challenges and Contradictions

Of course, Mozilla isn’t without its contradictions and challenges. Funding remains a perpetual concern for research that doesn’t promise immediate commercial applications. Some researchers express frustration about the gap between their ideals and the current reality of the internet.

“Sometimes it feels like we’re designing fantastic seat belts while the car is already halfway off the cliff,” admits Chen with a wry smile. “But that doesn’t mean we stop trying.”

What distinguishes Mozilla’s approach is this willingness to acknowledge tensions without surrendering to cynicism. Their research on foundation models acknowledges both the potential benefits and harms of these technologies, offering pathways toward more human-centered implementations.

As I prepared to leave the office, Chen handed me a Mozilla sticker for my laptop—a small fox curled around a globe. “It’s a reminder that the internet belongs to all of us,” she said. “Even when it doesn’t feel that way.”

Walking back through San Francisco’s busy streets, past people absorbed in their phones, I found myself feeling something unexpected after a tech interview: hope. Not the breathless techno-optimism that permeates so much industry coverage, but something more grounded—a reminder that behind every algorithm and platform are people making choices. And different choices remain possible.