
Flying Whale Masterclass:
Program Evaluation

Many Flying Whale organizations are rapidly developing solutions to age-old problems, but struggle to prove that they work.

 

They are ahead in program design

and behind in program evaluation.

You are allergic to:

  • Stacks of surveys developed to satisfy a funder 

  • Expensive databases you barely use 

  • Measuring what's easy instead of what matters

 

You’ve had a fellow work on data collection, but then they graduated from school and moved on. Your program director has set up systems for tracking data, but you're still measuring how busy you are — not how effective you are.

 

You’re not alone.

8 out of 10 Flying Whale organizations are struggling to measure their work. Measuring thriving, healing, equity, or recovery feels too elusive. Too abstract. Too impossible without a research team.

So they table it. 

 

Good News. 

You don't need a research team, an expensive database, or a PhD to build a meaningful evaluation system. Most organizations are one clear framework away from measuring what actually matters.

 

It's taken me years as both an academic who has studied and taught research methods and a practitioner who has had to make evaluation work under real pressure, with real constraints...

  • experimenting

  • failing

  • redesigning

  • throwing out the 40-page logic model

  • starting over with a napkin

  • and finally cracking the code...

 

Through all of it, I've found what makes program evaluation actually work for organizations like yours.

Is this you?

INTRODUCING

A Program Evaluation Masterclass for

10 nonprofit leaders who are serious about solving their evaluation challenges once and for all.

Build the evaluation framework that will help you:

Stop measuring outputs and start measuring outcomes that matter

Build a lean data collection system your team will actually use, and

Walk into every funder conversation — and every annual report — with confidence.

JULY 6-8

9:00am–1:00pm MST

online

$1500

per team

WHAT YOU'LL WALK AWAY WITH

A COMPLETED LOGIC MODEL

This is the foundation of every grant application, every board presentation, every program design conversation. A solid logic model takes 4–6 hours to build well with an outside consultant. We’ll do it in less than a day.

A WRITTEN EVALUATION PLAN

An instruction manual for how you’ll measure the outcomes and outputs on your logic model. You’ll understand it, own it, and can update it yourself.

A READY-TO-USE DATA COLLECTION SYSTEM

Not a recommendation for a system. An actual system, set up for your program, that your staff can use on day one.

A PLAN FOR YOUR ANNUAL REPORT'S IMPACT PAGE

Know exactly which metrics to feature. Describe each one in plain, compelling language. Make your numbers tell a story your donors, board, and funders won't forget.

WHAT WE'LL ACCOMPLISH

Part 1

Where Are You Now?

A self-assessment that places you on the organizational maturity spectrum.

 

In part 1 you’ll learn:

  • The stage of maturity of your current evaluation protocols

  • How this stage of maturity positions you in today's challenging funding landscape

Part 2

WHAT SHOULD YOU BE MEASURING?

Study how leading organizations in your sector measure impact. Rebuild your logic model so it reflects your theory of change — not just your activity list.

 

Hillary will curate a literature review based on your unique context: reading material and case studies that show you who is measuring what in your sector. This will allow you to confidently say that your evaluation methods are evidence-based.

 

In part 2 you’ll learn:

  • How to choose research-based metrics worth measuring

  • How to create a logic model that describes what your organization does as a whole, not just a specific program

  • How to set outcome and output goals for your work as a whole

  • How to write a bold impact statement that serves as a north star for your work

Part 3

DESIGN YOUR EVALUATION PLAN

Set up a practical, right-sized plan to measure even the most elusive outcomes — thriving, healing, equity, mobility from poverty — without a research team.

 

In part 3 you’ll learn:

  • Whether you're measuring individual, group, or systems change

  • To compare different measurement tools for usability (surveys, staff observations, focus groups)

  • To choose and revise a tool so it’s practical

  • To write out your plan for who will collect data, when, and what they will do with it

Part 4

BUILD YOUR SYSTEM

Set up your data collection using a user-friendly Google Sheets template you can start using immediately.

 

In part 4 you’ll learn:

  • How to pilot your evaluation plan with a Google Sheet before you build a CRM

  • How to customize a Google Sheets template to hold your specific data

  • How to integrate online survey tools with your sheet

WHAT WILL CHANGE?

You will compete for funding you couldn't touch before. Many mid-to-large funders now require evaluation plans and outcome data as part of the LOI. This course makes those grants accessible.

You will be able to make a compelling case for growth. Outcome data is what moves you from a $50K grant to a $500K grant. It's what gets you in front of major donors and government contracts.

You will retain your best program staff. People who do meaningful work want to see that it's working. A culture of learning and evidence keeps talented staff engaged.

You will protect your programs in a high-stakes funding environment. Evaluation is organizational insurance right now.
