8 min read
Thinking about the box: how cognitive diversity shapes AI adoption
Tim Lockie
Jan 30, 2026 12:00:00 AM
Why collaboration, not conformity, drives the human side of AI transformation
Every time I try to explain something that feels obvious in my head, I'm reminded how differently we all think. It's not about being smarter—it's about the way we think differently. And that difference is exactly what makes or breaks how we build and use AI together. This has been a lifelong struggle for me. Something crystal clear in my mind—understood to the moon and back—becomes impossibly difficult to transfer into someone else's head. And I'm not alone in this. We all experience moments where what seems obvious to us remains opaque to others. The question isn't whether we think differently. It's how we think differently that determines whether our AI initiatives succeed or stall.

Here's what you'll learn:
- The way we think differently: inside, outside, and about the box
- From prompts to instructions: how understanding structure unlocks AI's behavior center
- The orchestra effect: why collective knowledge beats individual brilliance
- Culture, coordination, and the human stack
- Upskilling, not uploading: building real human-AI ecosystems
- Where are you in your organization's AI journey?
The way we think differently: inside, outside, and about the box
Some people think inside the box. Others think outside it. Some think about the box. I'm the last one.
And I used to be a snob about the other two: inside-the-box thinkers were boring, outside-the-box thinkers were better, and about-the-box thinkers were best. That hierarchy is wrong, counterproductive, and, frankly, condescending. I don't think that way anymore. The reality is simpler and more useful: these are different cognitive modes, each essential for different aspects of AI implementation.
Inside-the-box thinkers bring structure and execution—they know how to take an abstract concept and turn it into deliverable work. They ask: What are we doing this week? How do we get it done? Is it on the agenda? These questions sound mundane until you realize how many AI initiatives fail precisely because nobody asked them.
Outside-the-box thinkers bring innovation and possibility. They see connections others miss, imagine applications others overlook. They're the ones who first suggest, "What if we used AI for this?" when "this" hasn't been done before.
About-the-box thinkers bring systems-level coordination. They see how the box itself functions, how different thinking styles interact, where the gaps appear between conception and execution. They're not necessarily smarter—they're operating at a different layer of abstraction. And that layer is absolutely critical for AI adoption.
It's frustrating that I can't think inside the box. I wish I could. In many ways, my inability to default to inside-the-box thinking has made my work harder, not easier. The same is true for people whose default is one of the other modes. Each mode represents a genuine capability, and none is superior to the others.
What creates successful AI adoption isn't a culture with more of one type of thinker or another. It's having all three types working together, with clear communication channels between them. Cognitive diversity, these layered perspectives working in concert, creates the foundation for strong human-AI collaboration. (Or, as Dan Kershaw brilliantly puts it, conative layers.)
From prompts to instructions: how understanding structure unlocks AI's behavior center
The first time I built a prompt engineer, everything clicked. I finally saw that instructions are where AI's behavior actually lives.
This happened during Tyler and Sara's AI Build Lab course on Maven. I'd already made plenty of custom GPTs. I understood the concept intellectually. But I didn't get it get it until we built that prompt engineer. The moment I saw how structured prompts become instructions, and how those instructions control AI behavior with precision, everything else made sense. Suddenly I could see instructions everywhere in AI, and that insight has empowered most of the work I do now.
The progression matters: AI doesn't just respond to ideas—it responds to instructions, and those instructions are built on structured prompts. You cannot understand instructions until you understand structured prompts. It's three layers deep, and each layer depends on the one beneath it.
Think of it this way: a prompt is a request. A structured prompt is a request with a framework. An instruction is a structured prompt that's been refined into a repeatable pattern. When you write "summarize this article," that's a prompt. When you write "summarize this article in three bullet points, each under 20 words, focusing on actionable insights," that's a structured prompt. When you encode that pattern into an assistant that does it consistently, every time, for any article—that's an instruction.
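If it helps to see those three layers side by side, here's a minimal sketch in Python. It assumes the OpenAI Python SDK purely for illustration; the model name and the `summarize` helper are mine, not part of any particular course or product. The point is structural: the prompt and the structured prompt are one-off requests, while the instruction lives in the system message and repeats for any article you hand it.

```python
# A minimal sketch of prompt -> structured prompt -> instruction.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Layer 1: a prompt (a bare request).
prompt = "Summarize this article."

# Layer 2: a structured prompt (the same request with a framework).
structured_prompt = (
    "Summarize this article in three bullet points, "
    "each under 20 words, focusing on actionable insights."
)

# Layer 3: an instruction: the structured prompt encoded once as a
# system message, so the behavior repeats for any article you pass in.
INSTRUCTION = (
    "You are a summarizer. For any article you receive, reply with exactly "
    "three bullet points, each under 20 words, focused on actionable insights."
)

def summarize(article_text: str) -> str:
    """Run the encoded instruction against any article, consistently."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```

Notice that `prompt` and `structured_prompt` never get reused; the instruction does. That's the layer a custom assistant is built on.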
Learning to write layered prompts is like learning to think about the box. It reveals how AI interprets patterns, how it repeats behaviors, where it needs more structure versus where it needs more flexibility. The better you can write prompts, the better you can write instructions. And instructions are the behavior center of AI—the artifact that gives you the most control over outcomes.
This is where about-the-box thinking becomes essential. You need to see the structure itself, not just work within it or imagine alternatives to it. You need to understand that the instruction layer exists as a distinct thing, separate from both the AI model and the task you're trying to accomplish.
The moment you get structured instructions, you see how they power everything from custom assistants to team workflows. But here's the counterintuitive part: people like me, who are really good at writing instructions, are often bad at using what we make. We need inside-the-box thinkers to actually implement and operate the systems we design. The design capability and the execution capability are different, and both matter equally.
The orchestra effect: why collective knowledge beats individual brilliance
We've spent over a year learning how to coordinate our human stack—and it's still messy, still human, still worth it.
I wish you could see the coordination meetings we have. It's taken more than a year to build the level of coordination we need around our human stack to actually reach the systems and delivery we rely on. And we're like every other organization: learning how to use AI, struggling with adoption, struggling with implementation. All the things.
I can do AI really well. But I'm not great at thinking inside the box or outside the box. I'm really good at thinking about the box. That's a specific, limited capability. It's valuable, but it's not sufficient.
This is where the orchestra effect comes in. It's not about how good any one player is—it's about how good the orchestra is together. AI culture building requires inside-the-box, outside-the-box, and about-the-box thinkers working in coordination. No single role or thinker type can cover the full spectrum of understanding, implementation, and iteration.
Consider what happens when you're missing one of these perspectives:
- Without inside-the-box thinkers, you get brilliant ideas that never ship, elegant architectures that nobody uses, innovative approaches that don't fit into actual workflows.
- Without outside-the-box thinkers, you get efficient execution of mediocre ideas, optimization of processes that shouldn't exist, perfect implementation of the wrong thing.
- Without about-the-box thinkers, you get disconnected initiatives, miscommunication between innovators and executors, systems that don't integrate, and teams that work in parallel instead of in concert.
The measure of success isn't individual brilliance. It's collective coordination. This sounds obvious, but watch how most organizations approach AI adoption: they hire brilliant individual contributors, give them ambitious mandates, and expect magic. When it doesn't materialize, they blame the technology or the talent. They miss the actual problem: the absence of structured coordination between different thinking styles.
We need to value collective knowledge much more than individual contributors. That's what actually matters. The question isn't "Do we have the smartest AI expert?" It's "Do we have a coordinated team that can translate AI capability into organizational value?"
Culture, coordination, and the human stack
The tech is easy. Humans are hard. That's why we built our own "human stack" to keep us aligned.
AI adoption struggles aren't about broken technology. The models work. The tools exist. The capabilities are real and rapidly expanding. The problem is human coordination—specifically, the complexity of getting different thinkers, different roles, and different organizational layers to work together effectively.
The "human stack" is the real operating system. It's the mix of thinkers, doers, and translators who make implementation real. It's the people who understand what needs to happen (about-the-box), the people who figure out novel approaches (outside-the-box), and the people who make sure it actually gets done (inside-the-box).
Erin on our team exemplifies this last capability. She's done such a great job of helping us think about what boxes we need to tick. I don't care how big the idea is—what I care about is: What are we doing this week? How do we get it done? Is it on the agenda? Those questions ground abstract AI potential in deliverable reality. They transform "we could use AI for this" into "here's what we're shipping Tuesday."
This is leadership, though we don't always recognize it as such. Leaders like Erin help teams coordinate across thinking styles. They translate between about-the-box abstractions and inside-the-box execution. They keep outside-the-box innovation tethered to actual organizational constraints.
Building a functional human stack takes time. Ours has taken over a year, and we're still iterating. We still have messy coordination meetings. We still struggle with the same adoption and implementation challenges everyone else faces. But we've learned something crucial: the struggle isn't a sign of failure. It's a sign of the actual work.
Organizations that succeed with AI don't have better technology. They have better human coordination. They've built their human stack intentionally, with clear roles, clear communication pathways, and clear respect for different cognitive contributions. They've stopped trying to find the one brilliant person who can do everything and started building teams that can coordinate effectively.
Upskilling, not uploading: building real human-AI ecosystems
When we talk about AI readiness, we shouldn't be talking about bandwidth or data—we should be talking about people.
AI transformation isn't about replacing people. It's about developing new layers of human capability. This distinction matters more than most organizations realize. The "uploading" metaphor—where we somehow transfer human knowledge into machines—misses the point entirely. What we're actually doing is upskilling: teaching teams how to think with AI, not just use it.
The difference shows up in practice. Organizations that think about uploading ask: "How do we automate this role?" Organizations that think about upskilling ask: "How do we augment this person's capability?" The first question leads to replacement anxiety, resistance, and failed adoption. The second question leads to capability development, enthusiasm, and successful integration.
The Upskillerator approach focuses on cultural learning and transformation. It's about humans growing into new ways of working, not being replaced by machines. This requires developing three distinct capabilities:
- Prompt literacy: Understanding how to communicate with AI systems effectively. This isn't about memorizing tricks—it's about developing intuition for how AI interprets instructions, where it needs more structure, where it needs more context.
- Instruction design: Learning to create repeatable, reliable AI behaviors through structured prompts. This moves beyond one-off interactions to building persistent capabilities that serve your team over time.
- Systems thinking: Seeing how AI capabilities integrate into workflows, where human judgment remains essential, how to coordinate between AI outputs and human decision-making.
Notice these aren't technical skills in the traditional sense. You don't need to understand transformers or neural networks. You need to understand structure, communication, and coordination. These are learnable capabilities, and organizations that invest in teaching them see dramatically better AI adoption than organizations that just deploy tools and hope.
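As a concrete illustration of what "instruction design" and "systems thinking" can look like together, here's a minimal, deliberately vendor-neutral sketch. Every name in it (`TeamInstruction`, the meeting-recap instruction, the `approve` step) is hypothetical and invented for this example; the "model" is just any callable, so the pattern is the same whether the text comes from an API, a local model, or a stub.

```python
# A vendor-neutral sketch: an instruction as a shared team asset,
# with a human checkpoint before anything ships. All names are
# hypothetical, invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TeamInstruction:
    """A structured prompt refined into a repeatable, shareable pattern."""
    name: str
    system_prompt: str

    def run(self, model: Callable[[str, str], str], user_input: str) -> str:
        # `model` is any function taking (system_prompt, user_input) and
        # returning text: an API call, a local model, or a test stub.
        return model(self.system_prompt, user_input)

# Written once, versioned, and reused by the whole team.
meeting_recap = TeamInstruction(
    name="meeting-recap",
    system_prompt=(
        "Turn raw meeting notes into a recap with three sections: "
        "decisions made, owners, and deadlines. Flag anything unclear."
    ),
)

def approve(draft: str) -> str:
    """Human judgment stays in the loop: a person reviews before it ships."""
    print(draft)
    return draft if input("Send as-is? (y/n) ").strip().lower() == "y" else ""

if __name__ == "__main__":
    def stub_model(system_prompt: str, user_input: str) -> str:
        # Stand-in model so the sketch runs without any API key.
        return f"Recap ({system_prompt[:20]}...): {user_input}"

    draft = meeting_recap.run(stub_model, "Notes from Tuesday's planning call")
    approve(draft)
```

The code itself isn't the point. The point is that the instruction becomes a shared, versioned team asset rather than a one-off prompt in somebody's chat history, and a person still signs off before anything goes out, which is exactly where inside-the-box execution meets about-the-box design.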
Real human-AI ecosystems emerge when you stop trying to upload people and start upskilling them. When you recognize that successful AI adoption requires all three thinking styles—inside, outside, and about the box—working together. When you build your human stack with the same intentionality you bring to your technical stack.
---
Where are you in your organization's AI journey?
Take a moment to consider: Are you thinking inside, outside, or about the box? Each perspective brings essential value, and none of them works in isolation.
Maybe you're the leader trying to figure out how to get your team to use AI effectively. Maybe you're on the team trying to help your leader understand what's actually possible. Maybe you're somewhere in between, translating between different layers of understanding.
These are exactly the right questions to be asking. Because the tech is easy—humans are hard. The challenge isn't building better AI. It's building better coordination between the different ways we think, the different capabilities we bring, the different roles we play.
AI adoption works when we stop trying to make everyone think the same way and start learning to coordinate our differences. When we value collective knowledge over individual brilliance. When we recognize that the human stack—that messy, complicated, essential coordination between different thinkers—is what actually makes AI transformation possible.
This is our journey too
And just in case you think we've got this all buttoned down here at The Human Stack, we don't. We're still figuring this out ourselves. Our coordination meetings are still messy. We're still evolving. And that's exactly how it should be. The work of building effective human-AI collaboration isn't about reaching some perfect end state. It's about continuously learning to work together better, respecting what each thinking style contributes, and staying curious about how we can coordinate more effectively.
Where are you in this journey? How is your team building its human stack? What's working? What's still messy? Drop a comment and let's learn from each other's experiences. Because that collective knowledge, that shared understanding across different perspectives, is exactly what we need to make AI adoption actually work.



