
The Intent Test: Are You Using AI to Improve, or to Win?


The intent test is simple: before you use AI on a conflict, ask yourself whether you're trying to respond or trying to win. That single question determines whether AI makes you a clearer thinker or a more dangerous one.

Most people never ask it.


AI is an amplifier of intent

In a recent Upskillerator session, one participant described a dynamic I've seen play out more times than I can count. Someone is smart, verbally capable, and can win almost any argument, even when they're wrong. Add AI to that, and it gets worse. Now it's a smart person with a megaphone and an infinite prep team.

That's the risk nobody is talking about honestly.

The power imbalance isn't only AI versus humans. It's humans versus humans, with AI in the middle.

If you've ever dropped a workplace conflict into ChatGPT and asked it to help you respond, you already know this is true. We all do it. The fork in the road is your intent.


The trap: AI will let you off the hook if you ask it to

Psychologists who study motivated reasoning have documented for decades how skilled humans are at constructing justifications for conclusions they've already reached. We don't rationalize because we're bad people. We rationalize because we're human, and our minds are extraordinarily good at it.

AI doesn't fix that. It accelerates it.

AI will absolutely help you build the case against someone if that's what you're really asking for. Not because the model is broken, but because a tool trained to be helpful will help you do what you ask it to do — including helping you rationalize.

There are two versions of "AI, help me with this conflict." One is about becoming a better human. The other is about building the brief against the other party.

Different prompts. Different outcomes. Different ethics. Most people never notice which one they typed.


The counter-case, because this gets preachy fast

Sometimes the case-against-them version is exactly what's needed.

One participant shared examples of using AI to advocate inside healthcare systems that were failing her partner, and to translate consumer rights language that had been deliberately designed to confuse her. That's not petty dominance. That's survival inside systems that weren't built to be fair.

So the intent test isn't "never use AI to advocate for yourself."

It's "know which one you're doing."

Survival isn't pettiness. But reactivity dressed up as strategy isn't survival, either. The test is whether you can say your real goal out loud without flinching.


The policy: respond instead of react

The simplest guardrail I've found that actually works is one sentence written at the top of your prompt before you paste anything else in.

"What would it look like for me to respond instead of react?"

That question forces the intent shift. From "how do I win" to "how do I show up well." You can still be direct. You can still hold the line. You can still fire someone, end a contract, or ask a board member to step down. You just do it without becoming the thing you're objecting to.


Where AI actually helps in conflict

A good use of AI in conflict isn't writing the perfect takedown. It's getting honest about what's actually happening.

Use it to interpret what you're feeling without making it someone else's fault. Use it to name the real issue without theatrics. Use it to draft language that's clear, honest, and calm. Use it to choose a next action that matches your values.

Another participant shared how she uses AI in contract negotiation — intent-mapping both sides, translating legal language into plain consequences, making sure what both parties think they're agreeing to actually matches the document. That's AI as a de-escalation tool, not an escalation tool. That's the version worth building habits around.


A prompt I’ve actually used

Copy this. Use it the next time you're angry and reaching for AI.

I'm going to paste a message I want to send. Your job is NOT to make me sound persuasive. Your job is to make me honest, calm, and values-aligned: to help me respond rather than react.

Identify what I'm reacting to (1 sentence). Identify what I actually want (1 sentence). Identify the boundary I'm trying to set (1 sentence). Draft a reply under 120 words that doesn't blame, doesn't hedge, and includes a clear next step.

Also suggest one sentence from my original version that, if I were going to stick with that version, I should delete because it's me trying to win, not solve.
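If you want to make this a habit rather than something you retype while angry, you can wrap the prompt above in a small helper that always prepends the guardrail question before your draft. This is a minimal sketch, not part of the original piece; the function name is illustrative, and you'd pass the result to whatever model you use.

```python
# Illustrative helper: always prepend the intent guardrail and the
# "respond, don't react" instructions before a draft message.

GUARDRAIL = "What would it look like for me to respond instead of react?"

INSTRUCTIONS = (
    "I'm going to paste a message I want to send. Your job is NOT to make "
    "me sound persuasive. Your job is to make me honest, calm, and "
    "values-aligned: to help me respond rather than react.\n\n"
    "Identify what I'm reacting to (1 sentence). Identify what I actually "
    "want (1 sentence). Identify the boundary I'm trying to set "
    "(1 sentence). Draft a reply under 120 words that doesn't blame, "
    "doesn't hedge, and includes a clear next step.\n\n"
    "Also suggest one sentence from my original version that, if I were "
    "going to stick with that version, I should delete because it's me "
    "trying to win, not solve."
)

def build_intent_prompt(draft_message: str) -> str:
    """Assemble the full prompt: guardrail first, instructions, then draft."""
    return f"{GUARDRAIL}\n\n{INSTRUCTIONS}\n\n---\n{draft_message}"

if __name__ == "__main__":
    prompt = build_intent_prompt("Here's what I really think of your behavior...")
    # The guardrail question is always the first line the model sees.
    print(prompt.splitlines()[0])
```

The point of the wrapper is the ordering: the intent question comes before your draft, so you answer it before the model ever sees what you wrote.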

That last instruction is an important test.

If you feel internal resistance when AI tells you which sentence to cut, pay attention to that. It's usually the signal.


The bottom line

AI multiplies what you bring to it, not what you say you want. If your real intent is to win, it will help you win. If your real intent is to show up well, it will help you do that too.

Build the habit of naming your intent before you prompt. Write it at the top. One sentence. It takes ten seconds and it changes everything downstream.

The organizations that get this right won't just use AI more effectively. They'll build cultures where people trust each other more because AI made them more honest, not more capable of hiding.

That's the version worth working toward.
The Human Stack coaches social impact industry experts. Find Tim on LinkedIn and follow him at thehumanstack.com/timlockie