Dana Snyder had never committed a Git in her life when we started working together in September 2025.
Tomorrow she is shipping a full-stack app that she built using AI agents: from the database and user authentication to Stripe payments and the AI engine trained on her proprietary methodology. If she had hired a development agency to build it, she would have paid $100,000 or more.
This is the story of how she did it in five months, with me as a coach, and what it taught me about the difference between AI fluency and AI literacy.
Before anything else, let me tell you what she built — because it's genuinely remarkable. Or better yet, listen to her video below (btw, her marketing makes me want to cry it's so good).
Dana is an industry authority on monthly giving for small nonprofits. She's got the podcast. She wrote the book. And she runs a mastermind program. Fundraisers message her regularly to say they got their first recurring donor because of her work.
That third one, the mastermind, is her genius. But it runs around $15k per engagement, and it doesn't scale: it reaches only a dozen or so organizations a year, while hundreds more need her methodology but can't afford her rates.
The Monthly Giving Builder solves that. It's a web application that walks nonprofit fundraisers through Dana's complete five-step framework — naming the program, building the tech stack, recruiting supporters, launching, and retaining donors — using AI agents trained on her IP. It delivers the same quality of guidance as her mastermind program, personalized to each organization's specific context — at a price point three digits shorter than her consulting fees.
Not a course. Not templates. Not a chatbot. A product that applies Dana's expertise to a fundraiser's actual situation — their mission, their community, their starting point.
One of her beta testers, Meghan Walsh, had previously fired an agency over bad landing page copy. She ran her nonprofit through the tool, read the AI-generated landing page content it produced, and said: "You're freaking kidding me. It's so good."
Then, unprompted: "It's just like the mastermind."
She paused and repeated herself, because she understood the weight of what she was saying: "It's huge for me to say this is just like the mastermind."
That's the product. Now let me tell you how she built it.
This month I've been writing about the difference between AI literacy and AI fluency. Literacy is where most of us live — comfortable prompting, useful outputs, basic competence. Fluency is what comes next: thinking with AI as a collaborator, not just using it as a tool.
Dana arrived at our first session fully literate.
She had built custom GPTs. She was prompting regularly. She was getting good outputs. She had already started prototyping in a no-code platform and had a clear picture of what she wanted to build: five AI-powered steps, priced under $500, ready for a February summit with 7,000 attendees.
She had also already hit the wall.
The prototype looked like a product. It wasn't. No database, no authentication, no way to save what users built. She had prompted her way into a surface with no plumbing. And when I asked her to describe what she wanted to build next — the full vision — I heard: login systems, databases, AI agents drawing from her IP, progress tracking, Stripe payments, custom deliverables.
She said "simple" several times, and I winced each time I heard it — because none of it was going to pass the Helicopter Test.
The Helicopter Test teases out whether things are actually simple or just sound simple. "I just want something that goes up and down in the air." See, sounds simple. But try to build one, and it turns out helicopters aren't simple.
And neither was Dana's vision. She needed login systems, databases, AI agents trained on her IP, progress tracking, Stripe payments, custom deliverables — that's a helicopter. And because she didn't yet have the vocabulary to name the components she was asking for, neither of us could have that conversation yet.
But the real thing I'm testing isn't whether the vision is simple or complex. It's what a builder does when they hit the scaling wall.
Almost every non-technical person who builds with AI tools hits the Vibe Coder Scaling Wall.
You start prompting. You build something that looks impressive. You get excited. You show people and they say "what the hell?" and you feel like a genius.
Then you try to make it actually work. You try to connect systems. You try to get data to save to a database. You try to get two platforms to talk to each other. And prompting stops working.
AI can help you build a decent interface. But it cannot architect the invisible plumbing that makes a real product function. That requires a different kind of knowledge — and most people don't know what to call it, let alone how to learn it.
This is where Dana was in October. The AI agent she had built returned beautiful outputs when I used it in the native app, but produced garbage when triggered from the prototype app. She described the problem as: "It works on your screen but not mine."
That conversation took almost the entire 45-minute coaching session. We spent most of it ruling out things that weren't the issue, not because she was failing, but because she didn't yet have the language to name what was actually happening.
Over the next three months, Dana's technical ability changed, and you could hear it: her vocabulary became the benchmark.
She learned what a webhook does, what a payload transports, how a field map works, why structured output matters, what happens when a source field and a target field aren't aligned: twenty, maybe thirty terms total — the minimum language layer for understanding how systems connect.
And the outcome was velocity.
When she learned webhook, a 45-minute diagnostic conversation became: "The webhook isn't seeing anything." When she understood payload, she could say "the payload structure doesn't match what the database expects" instead of spending half a session on symptoms.
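To make that sentence concrete, here's a minimal sketch of the kind of mismatch she was naming. Everything in it is hypothetical — the platforms, field names, and values are invented for illustration, not taken from Dana's actual stack — but it shows what "the payload structure doesn't match what the database expects" and "field map" mean in practice:

```python
# Hypothetical sketch: a webhook delivers a payload whose field names
# don't match what the database expects, and a field map translates
# between the two. All names and values are made up for illustration.

EXPECTED_FIELDS = {"org_name", "contact_email", "program_step"}

# What the sending platform actually posts:
incoming_payload = {
    "organization": "Hope Rescue",
    "email": "dana@example.org",
    "step": 2,
}

# The field map: source field -> target field the database expects.
FIELD_MAP = {
    "organization": "org_name",
    "email": "contact_email",
    "step": "program_step",
}

def translate(payload: dict) -> dict:
    """Rename source fields to the names the database expects."""
    return {FIELD_MAP[k]: v for k, v in payload.items() if k in FIELD_MAP}

record = translate(incoming_payload)
missing = EXPECTED_FIELDS - record.keys()
print("missing fields:", missing or "none")  # prints: missing fields: none
```

Without the field map, the database sees three fields it doesn't recognize and none of the ones it needs — beautiful output in one app, garbage in the other. Naming the layers (webhook, payload, field map) is what lets you point at the exact line where that breaks.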
She noticed this before I named it. Around session ten, she paused mid-conversation and said: "Language is efficiency!"
Exactly. That's the mechanism. That's what the leap from literacy to fluency actually looks like — not a new skill set, but the vocabulary to think precisely in a domain that used to be opaque.
In a matter of months, Dana stopped describing unmet expectations and started diagnosing causes.
Session nine is when I knew something had changed.
We were working through a webhook failure. I was about to walk through my diagnostic. Dana got there first.
"It has to be the input," she said.
She was right. She had watched enough failures, named enough pieces, that she could see the pattern without me pointing to it. She wasn't describing symptoms anymore, she was diagnosing causes.
That is fluency. Not independence from all help — Dana would tell you clearly what she could handle and what still needed reinforcement. But the mode had shifted. She was thinking through problems with AI as a collaborator, directing the tools with precision, and building real judgment about where they were likely to fail.
When something didn't work, she didn't ask me for answers. Her eyes lit up and she started confirming her guesses.
Her team noticed. Her husband texted her mid-build: "You're addicted now, aren't you?" When she came back after a three-week gap while I was traveling, she had been in the database independently, fixing bugs, making product decisions on her own. I had to consciously stop telling her how impressed I was by her progress after I'd said it for the fifth time in 10 minutes.
She started calling herself a Promptinator.
Session twelve was the deployment call. Dana brought wine. I brought root beer. We settled in for this paltry little last step...
What followed was a two-hour battle with a Stripe configuration — sandbox keys, live keys, webhook signing secrets, crash loops from missing dependencies. I knew we were in for it when she said, "This process is taking so long, I'm out of wine."
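For readers wondering what a "webhook signing secret" actually does in that list: it's how your server proves an incoming webhook really came from Stripe. A simplified sketch of Stripe's documented scheme — the signature is an HMAC-SHA256 of "timestamp.body" keyed by the secret — looks like this. In a real app you'd use the official stripe library's verification helper; every value below is fake:

```python
# Simplified, illustrative sketch of webhook signature verification,
# modeled on Stripe's documented scheme. Real code should use the
# official stripe library; all secrets and payloads here are fake.
import hmac
import hashlib

def sign(secret: str, timestamp: str, body: str) -> str:
    """HMAC-SHA256 over '<timestamp>.<raw body>' keyed by the secret."""
    signed_payload = f"{timestamp}.{body}"
    return hmac.new(secret.encode(), signed_payload.encode(),
                    hashlib.sha256).hexdigest()

def verify(secret: str, timestamp: str, body: str, signature: str) -> bool:
    """Constant-time comparison against the expected signature."""
    return hmac.compare_digest(sign(secret, timestamp, body), signature)

secret = "whsec_test_not_real"  # sandbox and live modes use different secrets
body = '{"type": "checkout.session.completed"}'
sig = sign(secret, "1700000000", body)

print(verify(secret, "1700000000", body, sig))          # True
print(verify("whsec_other_mode", "1700000000", body, sig))  # False
```

That second `False` is exactly the sandbox-vs-live trap: sign with one mode's secret, verify with the other's, and every webhook silently fails — which is how a "paltry little last step" becomes a two-hour battle.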
But when she said, "It's live!" I felt like she'd won an Olympic medal.
Two days later, five beta testers ran through the Monthly Giving Builder for the first time. The AI-generated program names produced audible reactions. The landing page copy made Meghan — a practitioner who had fired an agency over this exact problem — declare the output better than what she'd paid professionals to produce.
The obvious story here is Dana's app.
If you're a nonprofit trying to build a monthly giving program, she built exactly what you need — a tool that delivers her proven methodology directly to your situation, for a price point that doesn't require board approval. Go to monthlygivingbuilder.com and check it out.
The less obvious story is what it took to build it.
Dana isn't a senior developer. She still leans on other experts and AI agents to figure things out. There are still problems that require help — infrastructure decisions, security configurations, the kind of plumbing that falls outside what prompts can fix. That's honest and she'll tell you so.
But she knows the difference now. That's what changed. She knows which problems are hers to solve and which ones need reinforcement. She knows how to describe what is broken so that help — human or AI — can actually land in the right place.
That's AI fluency. Not omniscience, but the ability to think precisely in a domain that used to feel like a foreign language.
In five months, AI moved from Magical to Mechanical. And the product is the proof.
There are nonprofit consultants reading this right now who have spent fifteen years becoming the foremost expert in their corner of the sector. They have frameworks. They have methodologies they've delivered dozens of times. They have clients who would testify to the results.
And they think products like this are for people with technical backgrounds.
Eight months ago they were right. But not anymore.
The barrier between Dana's expertise and a working product wasn't a computer science degree or a $100k development budget. It was the willingness to persevere (which not everyone has), the vocabulary to name precisely what she was building and what was breaking, the nerve to cross the ugly middle where nothing works yet, and the trust to follow her coach.
She went from never using GitHub to navigating Stripe dashboards, debugging webhook payloads, and watching strangers validate her product with genuine emotion.
Maybe it's time to rethink what you can do.
This week is the Monthly Giving Summit (February 25–26, 2026), so I texted Dana about it and she texted me that she was locking the code for launch.
I had to sit with that for a second. Locking the code. Five months ago she didn't know a webhook from a payload; today she's locking code.
When we first talked about working together, she said: "Most people want the answer. Boiled down. But you don't give them the answer — you make them work to figure out the answer." She gets me.
And without thinking I said, "Dana, I'm a great coach, but a terrible chauffeur. I will not drive you somewhere. I will not hand you the answer. I will sit next to you, ask you questions, and make you figure out where to turn."
Because the answer — the one you drive yourself to — is the one you actually own. You can debug it. You can explain it. You can build the next thing without me in the room.
So come to the Monthly Giving Summit this week and see Dana showing the world what she built. If you're a nonprofit fundraiser: grab her app.
And if you're an expert wanting to advance in AI, grab a meeting with me.
Dana Snyder is the founder of Positive Equation and creator of the Monthly Giving Builder at monthlygivingbuilder.com — an AI-powered tool that walks nonprofits through building sustainable monthly giving programs.
The Human Stack coaches social impact industry experts. Find Tim Lockie on LinkedIn and follow him at thehumanstack.com/timlockie