The Intelligence Lab for Public Interest

Layered incentive design that broke a three-sided cold start and brought 100+ organizations in month one

My role

Lead product designer. Framed the cold start as an incentive problem. Designed participation in layers: a lightweight entry flow, a funding-data dashboard that paid off before full contribution, and trust signals that made the platform credible to funders.

Impact

100+ orgs created challenges in the first month. High-scoring challenges drew outreach from foundations and tech providers. Fast Company World Changing Ideas 2022 honorable mention.

$109 billion in foundation grants flows annually in the US. But the top 1% of foundations control 57% of all grant dollars, and the money moves largely through existing relationships. Consider a nonprofit retraining displaced factory workers for solar installation jobs, based in a post-industrial city with no donor base. They're doing genuinely important work, and they have no reliable path in.

The constraint wasn't funding. It was who could be seen.

A qualified nonprofit and a willing funder, both in Ohio, separated by an invisible information gap

The obvious assumption

If nonprofits could clearly present their work, foundations already looking for aligned programs would be able to find and fund them.

The original form

So we built the most direct path: a structured way for nonprofits to document their challenges and publish them on the platform.

Nobody showed up

We started local, talking with more than 20 nonprofits and helping them populate content. When we weren't in the room to help, people said "this sounds great, we'll get something over to you." Weeks passed. Nothing came in. When we followed up, the reasons were consistent:

Short-staffed, too many competing priorities

The form required material they hadn't assembled, so they got stuck and never restarted

The platform was empty. Not worth the effort if no one was there yet

THE REAL PROBLEM

Design for participation

The issue wasn't that nonprofits didn't want visibility. The system asked for too much before proving any value.

A single hard question arriving before they were invested was enough to lose them. "What impact did you make in the last fiscal year?" was a common one. Most closed the tab.

Instead of asking for full articulation upfront, we let organizations become visible before they were complete.

The original challenge took the form of a single in-depth write-up with 20+ fields: Nature and Context, Negative Impact, Organization Type, Current Funding, Description, Data Sources, Symptoms and Causes, Economic Impact, Stakeholders, Potential Funding, Value Proposition, Sustainability, Sources, Success Metric, Problem, Impact, Beneficiaries, Funding, Ideas, Attributions, Mission, UN SDG Category, Keywords, Geographic Area, and Contributors.

Create: 4 fields, ~60 seconds · Develop: 20+ fields, added over time

Contribution split into two paths. Create was the lightweight entry point: a few essential questions, under five minutes. Develop was the deeper path, for people already invested who wanted to go further.

What participation earned

Lowering the barrier got people in the door. Keeping them there required something to show them.

Funding kept coming up unprompted in conversations. Nonprofits wanted to know who funds work like theirs, where the money was going, and what kinds of organizations got considered. Meanwhile, Dr. Ying Li's team had built a knowledge graph over 10 years of public funding data. We had exactly what nonprofits were asking for; it just wasn't usable in its raw form.

The reward had to show up early enough to make continuing feel worth it.

The Money Flow dashboard unlocked after first contribution

I structured the data around the two questions nonprofits kept asking: what they work on, and where. For the what, I used the UN's 17 SDG categories. For the where, a geo map showed the funding landscape by US state: where money was flowing, which issue areas were attracting support, and where gaps existed.
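Structurally, that dashboard view reduces to rolling grant dollars up along those two axes. A minimal sketch, with toy records and illustrative field names (not the platform's actual schema):

```python
from collections import defaultdict

# Hypothetical grant records; "state", "sdg", and "amount" are
# illustrative field names, not the knowledge graph's real schema.
grants = [
    {"state": "OH", "sdg": "Quality Education", "amount": 250_000},
    {"state": "OH", "sdg": "Affordable and Clean Energy", "amount": 400_000},
    {"state": "TX", "sdg": "Quality Education", "amount": 150_000},
]

def funding_by_state_and_sdg(records):
    """Roll grant dollars up into the dashboard's two axes:
    what (SDG category) and where (US state)."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["state"], r["sdg"])] += r["amount"]
    return dict(totals)

totals = funding_by_state_and_sdg(grants)
print(totals[("OH", "Quality Education")])  # 250000.0
```

The same rollup, keyed by state alone or SDG alone, would back the map coloring and the per-issue breakdowns respectively.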

Something we didn't design for

We'd built the dashboard assuming nonprofits came to the platform to be seen. Post a challenge, attract foundation interest, get funded. The usage data told a different story.

Nonprofits working on the same problem in different places had no easy way to find each other. The platform was, accidentally, the closest thing to a directory of who else was working on what.

Claim shared challenges

Reach out to peer

Nonprofits were reading each other's challenges, especially ones that overlapped with their own work. A housing nonprofit in Texas looking at how peers in other states framed similar problems. An education nonprofit in Washington reading three challenges before starting their own draft. So we designed for that explicitly, treating challenges as the thing to collaborate on.

NEW PROBLEM

More content, less clarity

The peer-browsing was reshaping contribution. Nonprofits who read three or four challenges before writing their own wrote better ones. Completion rates went up. More people found their way in, and the volume of challenges on the platform grew.

That's when the platform started getting harder to navigate.

A nonprofit looking for peer challenges, or a foundation looking for credible work to fund, now had to wade through hundreds of challenges across dozens of issue areas.

The platform needed better tools for finding what mattered.

Designing for relevance

The platform used the UN's 17 Sustainable Development Goals (SDGs) as its organizing taxonomy, presented as a traditional 17-entry dropdown. I replaced it with a scannable, color-coded wheel.

Before: dropdown list

After: color-coded wheel

From any SDG on the wheel, users could narrow search to that issue area before typing, so a housing nonprofit didn't have to filter through climate or education challenges to find peers.

A "Refine Search" state that appeared when users showed signs of frustration

For the refine search experience, I added downvotes for "this isn't what I'm looking for." When a user hit more than seven downvotes in a session, the system surfaced a refine search state, prompting them to narrow scope rather than scroll further.
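The trigger amounts to a small per-session counter. A minimal sketch, assuming only the threshold from the text (everything else, including names, is hypothetical):

```python
REFINE_THRESHOLD = 7  # "more than seven downvotes in a session"

class SearchSession:
    """Tracks downvotes within one search session and flips into a
    'refine search' state once frustration crosses the threshold."""

    def __init__(self):
        self.downvotes = 0
        self.show_refine_prompt = False

    def record_downvote(self):
        # Each "this isn't what I'm looking for" increments the counter;
        # past the threshold, prompt the user to narrow scope instead
        # of scrolling further.
        self.downvotes += 1
        if self.downvotes > REFINE_THRESHOLD:
            self.show_refine_prompt = True
        return self.show_refine_prompt

session = SearchSession()
for _ in range(8):
    prompt = session.record_downvote()
print(prompt)  # True: the 8th downvote exceeds the threshold
```

A per-session (rather than per-account) counter matters here: frustration is local to one search, so the state resets the next time the user comes back.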

Designing for credibility

Discovery helped users find content they cared about. The next layer was trust. Foundations evaluating a challenge needed a way to know what was credible. I think about credibility as two questions a foundation has to answer. Is this a real problem? Is the writeup credible?


The signals behind each challenge card: location, funding, people affected, peer validation, completeness, and frequency.


Search results designed with quality signals

It changed behavior. Challenges that scored well drew more outreach from foundations and tech providers. Contributors could see what happened when they did impactful work and wrote well, which gave them a reason to keep doing both.
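The case study names the six signals but not how they were combined, so the scoring below is only a sketch: the weights and field names are hypothetical, chosen to illustrate a weighted combination, not the platform's actual formula.

```python
# Hypothetical weights over the six signals from the challenge cards.
SIGNAL_WEIGHTS = {
    "location": 1.0,
    "funding": 1.0,
    "people_affected": 1.5,
    "peer_validation": 2.0,
    "completeness": 1.5,
    "frequency": 1.0,
}

def quality_score(signals):
    """Combine per-signal scores (each 0.0-1.0) into one normalized
    score in [0, 1]; missing signals count as 0."""
    total_weight = sum(SIGNAL_WEIGHTS.values())
    weighted = sum(w * signals.get(name, 0.0)
                   for name, w in SIGNAL_WEIGHTS.items())
    return weighted / total_weight
```

Whatever the real weighting, the design point survives: because the score is decomposable into visible signals, contributors can see which lever (say, completeness or peer validation) would raise it.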

What worked, and what didn't

Within the first month, 230+ nonprofits created challenges. The dashboard drove engagement, and early cross-sector connections began forming. But contribution happened once, not repeatedly. The system created value to enter, not value to return, and without a reason to return, participation stalled before network effects could form. Participation in systems like this has to be reinforced as an ongoing exchange, not just lowered to a one-time entry cost.

Acknowledgements

A few people made this work possible. Dr. Ying Li developed the knowledge graph and data ingestion that powered the funding dashboard. The nonprofit and foundation interviews that shaped how I understood the problem came from the AI4PI Fellowship.