230+
nonprofits published their work in 30 days

Fast Company World Changing Ideas 2022 honorable mention
A Platform for Nonprofit Funding
Designed the incentives that got 230+ nonprofits onto an empty platform in the first month

My role
Lead Product Designer. I designed the contributor experience as a five-step participation loop, refining each step as users showed me what they needed.
Impact
230+ nonprofits posted their work on a previously empty platform in the first month. Fast Company World Changing Ideas 2022 honorable mention.
What we set out to fix
A workforce training program with an 80% job placement rate couldn't run its next cohort because it couldn't get funded. Meanwhile, $109 billion in foundation grants flows annually in the US. But the top 1% of foundations control 57% of all grant dollars, and the money moves largely through existing relationships.
Many nonprofits do impactful work but can't be seen. Programs that could change lives run out of runway before they can grow.

A nonprofit retraining auto workers for solar installation jobs. Important work. No way in.
So we built the most direct path: a clean form for nonprofits to document and publish their work. The assumption was straightforward: if nonprofits could clearly present their work, foundations that were already looking for aligned programs would be able to find and fund them.
The original form
The reality? Nobody showed up. We sent it out to more than 20 nonprofits in Washington. They said it sounded great, that they'd send something over. Weeks passed. Nothing came in.
REFRAMING THE PROBLEM
Lack of incentive
The form had a question that asked, "What impact did you make in the last fiscal year?" Most nonprofits don't have that ready. They'd start the form, hit the question, and close the tab.
Nonprofits wanted visibility, but the cost of participation was too high relative to the value they could see.
The question became: how do we give nonprofits enough incentive to publish their work?
THE HYPOTHESIS
A participation loop
The system stalled because it was designed as a one-way extraction of data rather than a functional loop. To start, I sketched a five-step participation loop:

Participation loop
Each step was meant to lower the cost of the next. Each design decision that followed was an attempt to validate these steps against reality.
STEP 1
Lowering the barrier to entry
Instead of a long form upfront, I split contribution into two paths: create and develop. Create was the lightweight entry point: a few essential questions, under five minutes. Develop was the deeper path, for people already invested who wanted to go further.
Create
Develop
We let organizations become visible before they were complete.
The number of new challenges jumped immediately. But many organizations stopped at the basic setup.
STEP 2
Building a reward loop
The better an organization writes about its work, the better its chances of getting funded. But nonprofits needed a stronger pull to get there. Funding kept coming up in conversations unprompted. Nonprofits wanted to know who funds work like theirs, and where the money was flowing. Dr. Ying Li had built a knowledge graph of 10 years of foundation grant data, every grant given by every foundation in the US.
We had what nonprofits were asking for, just not usable in its raw form.
The Money Flow dashboard, unlocked once challenge completion reached 75%
I structured the data around the two questions nonprofits kept asking: what they work on, and where. A geo map of the US showed where money was flowing. A UN SDG filter showed which issue areas were attracting support. A housing nonprofit in Texas could see whether housing was getting funded in their state, or whether the money was going elsewhere. I built the dashboard as a reward that unlocked when users finished 75% of the challenge writing.
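The gating rule behind the reward is simple to state. A minimal sketch in Python, assuming completion is measured as the fraction of challenge-writing prompts answered (function names and the exact mechanics are my illustration, not the platform's implementation):

```python
def completion_rate(answered: int, total: int) -> float:
    """Fraction of challenge-writing prompts the nonprofit has filled in."""
    if total == 0:
        return 0.0
    return answered / total


def dashboard_unlocked(answered: int, total: int, threshold: float = 0.75) -> bool:
    """The Money Flow dashboard unlocks once the writeup hits the threshold."""
    return completion_rate(answered, total) >= threshold
```

Tying the unlock to a ratio rather than a fixed question count meant the incentive scaled with however long a given challenge writeup was.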
STEP 3
Introducing social proof
We'd designed the platform assuming nonprofits came to be seen. Post their work, get noticed, get funded. That was the mental model. The usage data told a different story. Nonprofits were reading each other's challenges. An education nonprofit in Washington would read three or four challenges before starting their own.
The platform shifted from a place to post work to a place to find peers and see who else was working on the same thing.
I treated this as a good opportunity to introduce social proof:

Claim shared challenges

Reach out to peer
A button labeled "My organization has this challenge" let nonprofits claim a shared problem with one click
A counter on every challenge showed how many other organizations had claimed it ("245 organizations have this challenge"), turning the platform into a peer map
"Invite to collaborate" let nonprofits reach out to peers
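The claim mechanic reduces to an idempotent set of claiming organizations per challenge, with the counter derived from it. A minimal sketch, with names of my own invention:

```python
from dataclasses import dataclass, field


@dataclass
class Challenge:
    title: str
    # Org IDs that clicked "My organization has this challenge"
    claimed_by: set = field(default_factory=set)

    def claim(self, org_id: str) -> None:
        """One click to claim; claiming twice has no extra effect."""
        self.claimed_by.add(org_id)

    def counter_label(self) -> str:
        """The peer-map counter shown on every challenge card."""
        n = len(self.claimed_by)
        return f"{n} organization{'' if n == 1 else 's'} {'has' if n == 1 else 'have'} this challenge"
```

Because a claim is a set insertion, the counter can never be inflated by repeat clicks, which keeps the social proof honest.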
NEW PROBLEM
More content, less clarity
As more nonprofits found their way in, the platform got busier, and harder to navigate. A foundation looking for credible work to fund now had to wade through hundreds of challenges across dozens of issue areas.
The difficulty of navigation was hurting impactful work that should have been easy to surface, and putting the platform at risk of becoming a repository of contributions rather than a system for discovery and action.
The platform needed better tools for finding what mattered.
STEP 4
Designing for credibility
Discovery helped users find content they cared about. But a foundation deciding whether to engage with a challenge looks deeper and is really asking two things: Is this a real problem? And do the people writing it know what they're doing?
Is this a real problem?
Is the writeup credible?
The quality signals behind each challenge card: location, funding, people affected, frequency, peer validation, completeness
The first one was already answered by data that nonprofits contributed. The second one was harder. Nothing on the platform told a foundation whether the writeup itself was any good. I added two signals.
Search results designed with quality signals
Verified nonprofit members could upvote challenges. An upvote from someone who works in the sector means something specific: this looks like real work to me. Anonymous upvotes don't carry that weight, so the platform tracked verified votes separately and showed that count on the card. The second signal was completion rate, surfaced as a tag, so a foundation could see at a glance how much the nonprofit had invested in articulating their work.
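Both signals can be computed per card from nothing more than the vote log and the completion rate. A sketch under assumed field names (the real platform's data model may differ):

```python
def card_signals(votes: list[dict], completion: float) -> dict:
    """Summarize the two credibility signals shown on a challenge card.

    votes: one dict per upvote, e.g. {"verified": True} for a vote cast
    by a verified nonprofit member. Only verified votes are counted
    toward the displayed tally; completion is surfaced as a tag.
    """
    verified_upvotes = sum(1 for v in votes if v.get("verified"))
    return {
        "verified_upvotes": verified_upvotes,
        "completion_tag": f"{round(completion * 100)}% complete",
    }
```

Filtering to verified votes at display time, rather than discarding anonymous votes at write time, would leave the door open to showing both counts later.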
It changed behavior. Challenges that scored well drew more outreach from foundations and tech providers. Contributors could see what happened when they did impactful work and wrote it up well, which gave them a reason to keep doing both.
STEP 5
Designing for relevance
To get people visiting the platform more frequently, I worked on helping them find work in their issue area faster. The 17 UN SDGs are a shared framework across nonprofits, so I used them as the main organizing layer.
Before: dropdown list
After: color-coded wheel
The original UI was a dropdown: users had to read down the list to find what they wanted. I replaced it with a color-coded wheel that was mobile-friendly and scannable at a glance.
A "Refine Search" state that appeared when users showed signs of frustration
I added a downvote on each challenge meaning "this isn't what I'm looking for." After three or more downvotes in a session, the system surfaced a "Refine Search" prompt to help users narrow their scope instead of scrolling further.
What worked, and what didn't
What we built, in the end, was almost a five-step loop. Some of these moves were planned from the start. Others emerged as we watched nonprofits behave in ways we hadn't designed for. The first four steps activated. 230+ nonprofits posted their work in the first month. The dashboard drew them back during their first session. Peer browsing became a real behavior. Foundations and tech providers started reaching out to challenges that scored well on the credibility signals.

Participation loop
But the fifth step didn't. We'd given people a reason to come, but not a reason to return. Without people coming back, the network effects didn't form. Lowering the cost to enter was necessary, but it wasn't enough. A platform like this has to keep giving people reasons to return: new peer challenges to learn from, new foundation activity to track, new collaborations to join. Without that, the system stalls before it ever takes off.

What I'd do differently: the Money Flow dashboard should have kept updating after the first visit. All of this was already in the data: new funding in a nonprofit's area, other organizations claiming their challenge, foundations paying attention. The platform just never told the nonprofit. A loop isn't only about making it easy to join. Each step has to give people something new every time they come back. We built the entry. We didn't build the reasons to return.
Acknowledgement
A few people made this work possible. Dr. Ying Li developed the knowledge graph and data ingestion that powered the funding dashboard. The nonprofit and foundation interviews that shaped how I understood the problem came from the AI4PI Fellowship.