Every January, someone somewhere resolves to "get better at prompting AI." By February, they're back to typing "write this for me" into ChatGPT. Here's why that failure is actually perfectly rational—and how to work with your very human (lazy) brain instead of against it.
Here's what you'll learn:
Prompts Genuinely Matter
Humans are lazy; it's a feature, not a bug.
The Solution: A Framework To Structure, AI To Enhance
When simple prompts are actually better
Try this right now
Systems Beat Habits
Try This Now
I've avoided saying ALMOST ANYTHING about prompting, but it matters.
Prompt engineering became a buzzword. Then it evolved into "content engineering." The topic developed a kind of fatigue around it, the way any overhyped skill does when everyone suddenly claims to be an expert. I watched this cycle happen and thought about staying quiet.
But prompts genuinely matter. The difference between asking an AI to "analyze this" and saying "I'm a grant writer, analyze this Ford Foundation application against these criteria" shows up immediately in the output. That specificity changes everything about what you get back.
What's new about AI, and what still surprises me, is the ability to give it a small piece of information and receive huge output in return. You can input a few sentences and get a 50-page deep research report that's well thought out and includes sources. The ratio of input to output is unlike anything we've worked with before. That small prompt becomes the lever that moves enormous analytical weight.
The hesitation was valid. The fatigue is real. But the impact is too important to ignore, especially when I see people struggling with tools that could genuinely make their work better if they knew how to ask better questions.
You're lazy. I'm lazy. We're all lazy. Who cares!
This isn't a character flaw. It's a feature of how human cognition works. We optimize for cognitive efficiency. We take shortcuts. We default to the simplest version of any task that will get us to the outcome we need.
Humans excel at simple prompts. "Write this post for me." "Summarize this document." "Make this sound better." These feel natural because they match how we think and communicate with other humans. We provide context through conversation, through back-and-forth, through the shared understanding that builds over multiple exchanges.
AI excels at structured prompts. It performs best when you give it a persona, a specific task, detailed context, and a format specification. The more structure you provide upfront, the better the output. This is the opposite of how humans naturally communicate.
Better structured prompts create better output. This is empirically true. But we're terrible at creating structured prompts naturally. It requires a kind of upfront thinking that runs counter to our cognitive preferences.
This is the same pattern as New Year's resolutions. Every January, people resolve to build better habits through sheer willpower. By February, those habits have collapsed. Systems outperform habits every single time. If you create a system to do something versus a habit to do something, your system will outlast your habit.
The solution isn't to become a different kind of thinker, and it isn't to stop being lazy. It's to build a system that accommodates how you actually think and works with your laziness.
Frameworks act as translation layers between simple human input and structured AI prompts. They're the system that converts your natural way of thinking into the format AI needs to give you excellent output.
Take the PTCF framework: Persona, Task, Context, Format. I worked with Beth Kanter and Project Evident to use this framework, though it's become fairly well-known in AI circles. It's not complicated, but it's structured.
Compare these two approaches:
Evaluate this Ford Foundation grant.
Persona: You are an experienced grant writer who has reviewed hundreds of foundation applications.
Task: Evaluate this Ford Foundation grant application against the criteria below and identify its strengths, gaps, and risks.
Context: I'm preparing this application for submission and need to know where it falls short before the deadline.
Format: Give me a summary assessment, a section-by-section critique, and a prioritized list of revisions.
That's a PTCF-structured prompt. It gives the AI everything it needs to understand who you are, what you're trying to accomplish, why you're doing it, and how you want the information presented.
The output quality difference is significant.
Other frameworks exist. Jonathan Edwards created Spectra through AI Cred—there's a fascinating story there where he discovered his system was essentially creating its own framework for prompting. I've built four different frameworks into a tool I'm creating for the Upskillerator called the Promptonator. (The Upskillerator is a group I work with to build our AI skills and capacity. We're all in it together, increasing our competence with AI work.)
The framework doesn't matter as much as having a framework. Pick one that makes sense for your work. Use it as the screen that helps AI think through the best structured prompt to give you. You'll end up with substantially better prompts without changing how you think.
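If it helps to see the translation layer made concrete, here's a minimal sketch of a PTCF prompt builder in Python. The class name, fields, and example wording are illustrative only, not part of any published framework or tool:

```python
from dataclasses import dataclass

@dataclass
class PTCFPrompt:
    """Holds the four PTCF pieces and renders them as one structured prompt."""
    persona: str
    task: str
    context: str
    format: str

    def render(self) -> str:
        # One labeled line per component, in PTCF order.
        return "\n".join([
            f"Persona: {self.persona}",
            f"Task: {self.task}",
            f"Context: {self.context}",
            f"Format: {self.format}",
        ])

# Example drawn from the grant-writing scenario above.
prompt = PTCFPrompt(
    persona="You are an experienced grant writer.",
    task="Evaluate this Ford Foundation grant application against these criteria.",
    context="I'm preparing the application for submission and want to find gaps early.",
    format="A summary assessment followed by a prioritized list of revisions.",
)
print(prompt.render())
```

The point of a helper like this isn't automation for its own sake; it's that filling in four labeled blanks is a much lazier act than composing a structured prompt from scratch.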
Plot twist: sometimes the simple prompt is exactly what you need.
Not everything needs a structured prompt. Iteration and redirection work best with simple commands. Once you're in a conversation with AI and it understands the context, simple course-corrections are more efficient than re-structuring everything.
The simple prompt I use most frequently: "I don't like that, do it again."
Sometimes I'll say, "That's garbage, try again." The point is that you can use a simple prompt to redirect. That's better as a simple prompt than a structured prompt. You're not starting fresh—you're adjusting within an established context.
Simple prompts excel at course-correction within a conversation. You've already provided the structure. You've already established the persona, task, context, and format. Now you're refining. "Make it shorter." "Change the tone to be more formal." "Focus more on the second point."
Know when to use each approach. Frameworks for initial prompts. Simple redirects for refinement. This isn't about always using the most complex tool—it's about matching the tool to the task.
Don't just read about this—test it in the next five minutes.
Pick a task you do frequently. Writing emails. Creating social media posts. Summarizing documents. Whatever you use AI for regularly.
Give your favorite AI these three prompts and see which gets the best output:
Approach 1 - Simple Prompt: "Research this website <your website>."
Approach 2 - Structured Prompt:
Persona: You are an expert at researching websites and extracting meaning.
Task: Research my website <website> and help me understand how the world sees our business.
Context: I'm building context to understand where to target an investment in our public brand.
Format: Give me a TLDR, Sitemap in a table, a basic diagnostic, and a list of improvements ordered by importance.
Approach 3 - Enhanced Structured Prompt: The meta-approach - two steps:
Chat 1: Enhance this structured prompt but keep the PTCF Format <paste prompt from above>
Chat 2 (New Chat): <paste the enhanced structured prompt>
Notice the output quality difference across all three approaches. The difference isn't subtle. It's the difference between getting something usable and getting something you can deploy immediately with minimal editing.
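The two-chat meta-approach is really just a wrapper around a prompt you already have. A rough sketch, with a hypothetical function name and instruction wording:

```python
def enhancement_request(structured_prompt: str) -> str:
    """Build the Chat 1 message: ask the model to improve the prompt
    while keeping the Persona/Task/Context/Format structure intact."""
    return (
        "Enhance this structured prompt but keep the PTCF format. "
        "Return only the improved prompt, nothing else.\n\n"
        + structured_prompt
    )

# Chat 1: send enhancement_request(your_ptcf_prompt) and copy the reply.
# Chat 2: paste the enhanced prompt into a fresh conversation and run it.
```

Starting Chat 2 fresh matters: it keeps the enhancement conversation from leaking into the context of the actual task.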
If you want to systematize this process, tools like the Promptonator can help. We're building it for the Upskillerator, and there's a waitlist if you're interested. But you don't need a tool—you just need to understand the pattern and apply it.
Build a system and your habits will follow.
This isn't about self-improvement or becoming a different kind of thinker. It's about building systems that accommodate how humans actually work.
The goal isn't to stop being lazy. The goal is to be strategically lazy—to let AI handle the structure while you handle the ideas. You provide the creative input, the domain expertise, the judgment. Let the framework translate your thinking into the format AI needs to give you excellent output.
Prompt Up Your Jan is about practical systems, not aspirational habits. It's about working with your cognitive preferences instead of fighting them. Accept that you're lazy. Build systems that work with that laziness. Get better outputs without becoming a different person.
Try the PTCF framework on your next AI task and notice the difference. Join the conversation about what frameworks work for you. And if you're interested in systematic approaches to upskilling with AI, check out the Upskillerator or join us for AI for Anyone Live on LinkedIn, Wednesdays at 9 Mountain time.