Where Will AI Be In 3 Years?

Written by Tim Lockie | Jan 15, 2026

Nobody asked me where AI would be in three years. Here's my answer.

Three years ago, I was on a boring college campus tour with my son. I pulled out my phone and asked this new thing called ChatGPT to write me a poem about bad donor data. Five stanzas. Perfect rhyme scheme. Hilarious.

I'd tried Jasper before. I'd tried Copy.ai. All these tools had been around for a while, and they felt interesting but not quite there. ChatGPT was different. It gave me exactly what I asked for, and it was actually good. I shared it on social media and people went wild.

In that moment, I knew we were in for an upset.

That was three years ago, so it feels timely to look at the next three and say where I think things are going.

My Predictions

1. Global: AI is good and bad, not good or bad.

2. Social Sector: Focus on success, not software.

3. Leaders: Emergent cultures will outperform management cultures.

4. Everyone: Get AI fluent (literacy won't be enough).

Global: AI is good and bad, not good or bad

Let's start with the hard truth: AI will be weaponized.

I wish that weren't the case. I wish for once we could look at an amazing technology and ask: how can we use this to create as much good in the world as we can? And that is happening—that's why I work in the social impact space. Those questions are getting asked.

But across the street, across the country, somewhere else, people are asking different questions. How do we create power positions? How do we create world dominance? How do we kill other people more effectively? And they're getting AI to help with those problems.

I don't love that. I wish it weren't the case. But like every major invention before it—the wheel, writing, currency, the internet—AI is going to be used in both beneficial and harmful ways. I'm not naive enough to think we're not in an arms race, even though I wish we weren't.

So should you use AI? If it's creating these kinds of problems, should you be using it at all?

I think asking that question is like asking in 2004 whether you should use the internet. Or asking centuries ago whether we should use writing or money. These societal inventions are so large they're not good or bad—they're good and bad. The question isn't whether to engage, but how to engage responsibly.

If you believe that abstaining from AI will stop the harm, I understand that from a values perspective. If that's an ethical choice you're making, I'm completely behind it. But to the extent that people believe individual abstention will actually make a difference in preventing harmful applications, I think that's an illusion of control. And an illusion of control is just as dangerous as a limiting belief.

We need to think clearly about where we have control and where we don't.

We can control our own competency in responsible AI use—understanding when AI introduces bias, when it obscures decision-making, when it reduces human accountability. We can advocate for frameworks and regulation—pushing for industry standards and accountability mechanisms that constrain harmful uses while enabling beneficial ones. And we can maximize beneficial impacts by using AI to create as much positive outcome as possible in our own domains.

If you want to see what this looks like in practice, check out what Fundraising.AI did with their framework. It launched in 2023 with 12 elements for responsible AI use in the fundraising space. It's an impressive piece of work given how early it came out, and it's made a huge impact on how the social impact sector understands AI deployment. Nathan Chappell drove that work—if you're not following him, you should be.

That's what responsible engagement looks like: not abstention, but active participation in shaping how the technology gets used.

Social Sector: Success Not Software

The tech world is afraid its customers will start asking whether they'd rather pay for software or pay for success with their software.

EVERYONE agrees success is better than software, but tech companies don't know how to create it, and consultants have focused on setup and support.

The reality is, neither tech platforms nor professional services know how to make you successful, because tech success is a cultural transformation, not a technical implementation.

In 2019, after spending years implementing Salesforce for nonprofits, I looked at the organizations I'd spun up on the platform—they were struggling to use it effectively, struggling to get adoption, struggling to see value. I started asking why technology was so hard to use.

What I found was that consultants and the entire consulting industry optimize for software implementation, not successful software usage. This is why Salesforce made you pay for 18 months of licenses while taking 18 months to roll out the platform. That's not just inefficient—it's the entire business model. It's scammy AF (which I told them) and they just do it anyway (which they told me). The distance between introduction of technology and application of technology is built into the pricing structure.

They'll tell you it takes six weeks. It doesn't. It takes much longer because the focus is on governance—preventing behaviors you don't want—rather than guidance toward behaviors you do want.

Governance is the stick. Guidance is the carrot.

Governance is a control and compliance function. It's needed, but it can't create adoption, because it doesn't create the psychological safety adoption requires. Guidance creates engagement and fosters change. Both are needed, but in different proportions depending on an organization's culture. That's what I mean by Digital Guidance.

AI collapses the time-to-value gap and closes the distance between introduction and application. Non-technical people hop in, lead with curiosity rather than technology, and make a difference in their world instantly using AI. Even on free versions! (But I recommend upgrading.) It's happening without any of the fancy implementation processes that used to be required.

This matters because it shifts the professional service model. The future belongs to services focused on guidance, change management, culture, and engagement. When people can get value immediately, the question stops being "how do we implement this technology" and starts being "how do we use this technology well."

That's the sector shift happening over the next three years. Organizations that understand this will build for cultural transformation using methods like Guidance. Organizations that don't will keep building expensive governance structures that slow down adoption while trying to accelerate it.

Leaders: Emergent strategies will beat management strategies

Here's why that sector shift matters: it changes how organizations can operate strategically.

Remember MapQuest? You'd print out directions, take one wrong turn, and suddenly the whole thing was useless.

I remember getting in fights with my wife about this. I'd print the instructions, we'd take a different turn, and then we wouldn't be on the instructions anymore. They'd be completely useless. I wouldn't have brought a map and it would just be a train wreck.

Then GPS came along and changed everything. You didn't need to connect the dots yourself anymore. In fact, looking at a map beforehand could work against you, because you might pick a route that didn't start from where you actually were. GPS made it faster to just say: this is where I am now, this is where I want to go, you pick the route.

Pre-GPS Directions: Let's meet at the library. This is the BEST way to get there... [insert impossible-to-follow directions based on years of invisible context... then go through the old JCPenney parking lot to avoid that red light they just put in a decade ago.]

Post-GPS Directions: Let's meet at the library.

Most organizations are still printing out MapQuest directions for their AI strategy. They're trying to plan the whole route in advance when they should be setting the destination and letting the route emerge.

AI is doing to organizational strategy what GPS did to printed directions—and that's bigger than it sounds. Vision is the destination. Strategy is the route.

Management strategy is Pre-GPS. Emergent strategy is GPS.

Management: This is where we want to be in 3 years (vision). Let's map out a 3 year plan (strategy).

Emergence: This is where we want to be in 3 years (vision). Let's map out the next two quarters and build the tools to evaluate and plan the next two.

In the management world, you set a vision with a strategy to plan everything in advance, you manage that plan, and whoever executes best according to the plan wins. That's the map world we used to live in. It works best in a predictable world with limited information.

In the emergence world, you set vision with a strategy to emerge the route as you go. You don't need to know three blocks from now whether you'll turn right or left. The system handles that in real-time. Emergence works best in an unpredictable world with unlimited information.

The speed of change in the world combined with unlimited information is shifting what works from stable management to just-in-time emergence.

AI is now allowing organizations to operate this way. You don't need to know the whole plan, and in fact a whole plan might be a liability. What you need to know is where you want to be and if a route is possible. Once you know both of those things, AI is getting good enough that you can start that journey without knowing every turn along the way.

Here's what this looks like in the real world. Let's say you're seeking grants.

Traditional approach: Find all the grants, research every one, write a bunch of letters of interest, create criteria lists upfront, manage the entire process according to predetermined steps.

Emergent approach: You upload your last five successful grant applications to Claude. You give it access to your program data and organizational strategy documents. Then you ask it to analyze a new grant opportunity: "Does this align with our mission? How does it compare to grants we've won before? What's our probability of success?" AI creates a scoring rubric based on your actual history, runs the grant through it, and tells you whether this is worth your time—all before you write a single word. When you find one worth pursuing, AI drafts the letter of interest using language patterns from your successful applications.
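Here's a minimal sketch of that screening flow in code, assuming the Anthropic Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` in your environment. The file paths, prompt wording, and model string are illustrative placeholders, not a prescribed setup.

```python
# Minimal sketch of the emergent grant-screening flow, assuming the
# Anthropic Python SDK and ANTHROPIC_API_KEY set in the environment.
# File paths and the model name are placeholders.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

# Context the model scores against: past wins plus strategy docs.
past_wins = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("grants/won").glob("*.txt"))
)
strategy = Path("docs/org_strategy.txt").read_text()
opportunity = Path("grants/new_opportunity.txt").read_text()

prompt = (
    "Here are five grant applications we won:\n\n"
    f"{past_wins}\n\n"
    f"Our organizational strategy:\n\n{strategy}\n\n"
    "First, derive a scoring rubric from the patterns in our wins. "
    "Then score this new opportunity against that rubric:\n\n"
    f"{opportunity}\n\n"
    "End with: mission alignment, comparison to past wins, and an "
    "estimated probability of success, with your reasoning."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use your model
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)
```

Nothing in that sketch is exotic. The shift is that the rubric emerges from your own history at the moment you ask, instead of being designed up front.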

The best strategy maximizes information, mitigates risk, rewards coordination, and allows for unpredictability. That's the shift from managing to emerging.

This is for a different post, but AI isn't going to rot our brains or steal our agency, just as GPS didn't make us bad drivers. Every new invention gets accused of rotting our brains: books, calculators, TV, the internet, smartphones. These technologies change what we value, which can feel like losing our souls. But they don't actually make us dumber—they make us different.

Everyone: Fluency > Literacy

Chatting with AI isn't enough. And the gap between people who think it is and people who've moved past that is about to get really, really wide.

When I say AI, I'm only talking about generative AI—the kind you chat with. If you've got some predictive model running in the background, nobody thinks of that as AI anyway. We're talking about the chat interfaces everyone's using now.

There are three levels of capability here, and the distance between them matters more than most people realize.

AI illiterate: You're not using it at all, or you're using it so minimally it doesn't affect your work.

AI literate: You know how to have a conversation with it. You're suspicious enough to catch the flattery. You understand the mechanics of how the language works.

AI fluent: The mechanics are automatic and you don't think about them anymore. You're thinking about what you're saying and what output you're getting. You know how to use different tools and models—thinking versus instant versus pro in OpenAI, Sonnet versus Haiku versus Opus in Claude. You know how to use projects, custom GPTs, gems, skills. You know how to create repeatable processes that save you time.

If you're learning a language, literacy is focusing on how the language works. Fluency is when the mechanics become automatic and you're just thinking about what you want to say.
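To make that last piece of fluency concrete, here's a minimal sketch of one "repeatable process": a prompt you've refined once, saved as a function, and rerun on demand instead of retyped into a chat window. It assumes the same Anthropic Python SDK; the template, function name, and model string are illustrative, not a specific product feature.

```python
# Tiny sketch of one repeatable process: a vetted prompt saved as a
# function you rerun, instead of retyping it into a chat window.
# Assumes the Anthropic SDK; template and model name are placeholders.
import anthropic

client = anthropic.Anthropic()

BOARD_SUMMARY = """You are writing for a nonprofit board.
Summarize the program update below in three plain-language bullets,
and flag the single biggest risk at the end.

Update:
{update}"""

def board_summary(update: str) -> str:
    """Run the same vetted prompt against any program update."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use your model
        max_tokens=500,
        messages=[
            {"role": "user", "content": BOARD_SUMMARY.format(update=update)}
        ],
    )
    return response.content[0].text

print(board_summary("Q3 tutoring served 412 students, up 18% over Q2..."))
```

This is the same idea that projects, custom GPTs, gems, and skills give you without writing code: capture the prompt once, reuse it forever.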

Nate B Jones calls this the actual differentiator. People who are AI fluent get 10x productivity gains. People who are AI literate really aren't doing much more than people who aren't literate at all. (And check out AI Cred.)

I'm not saying literate people don't find value. They do. But the gap between fluent and literate will be wider than the gap between data-fluent and data-literate ever was. And the jump isn't easy—what we're seeing repeatedly is that people stop at decent literacy instead of pushing into real fluency.

To be clear: there's nothing moral about AI fluency. This isn't about being a better person. But if you've already decided you're on the AI train and you're not pushing toward fluency, that's worth examining. Stopping at literacy is like deciding to learn a language and stopping at "I can read street signs." Technically you're engaging. Practically you're not going to get much done.

The people who plateau at literacy are the ones creating beige content that sounds like something but says nothing—what everyone calls "AI slop." They're literate, they don't care about being more, and they want AI to do their thinking for them. That outsourcing prevents them from developing their own thinking about how to think about AI—and that matters more than any productivity gain.

Over the next three years, you're going to see a massive and widening gap between those who dug in early and use AI daily, and those who stopped at literacy or never started. That gap will be wider than any previous technology capability gap, because AI's accessibility creates the illusion that literacy is sufficient.

My take: it isn't.

So there you have it: my guesses for the next three years. AI is good and bad. Professional services will pivot to success, not software. Emergent cultures will outperform management cultures. AI fluency will be a marketable differentiator.

What do you think?

If you're thinking about these questions—how to use AI for good, how to move from literacy to fluency, how to build organizations that can adapt emergently—join the AI Club waitlist. This is the conversation we're having, and we'd love to have you in it.