Post #9 – From Prediction to Partnership: Rethinking Generative AI

by Tom McAtee | 4 Aug 2025 | AI in Practice

Rethinking Generative AI as a Thinking Partner

In most workplaces, AI is still seen as a tool for getting things done faster.

“Write this.”
“Summarise that.”
“Give me five ideas.”

Useful? Absolutely. But that mindset only scratches the surface.

The real shift – the one now facing leaders, educators, consultants, and researchers – is not about speed. It’s about depth.

It’s about moving from prediction to partnership. From automation to augmentation.

From “using AI” to thinking with it.

This post reframes how generative AI works, what it actually does, and how to work with it more effectively – especially in roles that rely on language, judgment, creativity, and reflection.


Under the Hood: What Generative AI Actually Does

At its core, generative AI tools like ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), and Grok (xAI) are built on large language models (LLMs). And they’re designed to do one thing:

Predict the next token.

A token is a fragment of language – a word, part of a word, or a punctuation mark. The model predicts what comes next based on what has come before.
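You can watch this mechanic directly. Here’s a minimal sketch in Python, using the small open-source GPT-2 model via Hugging Face’s transformers library – my choice for illustration; the commercial tools named above work on the same principle, just at vastly greater scale:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small, public model. GPT-2 is an assumption for this demo;
    # ChatGPT, Claude, Gemini and Grok are not downloadable this way.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")  # text -> token IDs

    with torch.no_grad():
        logits = model(**inputs).logits  # a score for every vocabulary token

    # Only the final position matters for predicting the *next* token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top5 = torch.topk(probs, 5)

    for p, token_id in zip(top5.values.tolist(), top5.indices.tolist()):
        print(f"{tokenizer.decode([token_id])!r}  {p:.3f}")

For this prompt, a token like “ Paris” typically tops the list. Generating an entire reply is just this step repeated: the chosen token is appended to the text, and the prediction runs again.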

This ability becomes powerful through three core technologies:

    • Training at scale – Trillions of tokens drawn from books, websites, code, academic journals, and more.

    • Transformer architecture – Enables the model to track relationships across long sequences of text (a toy sketch follows this list).

    • Reinforcement Learning from Human Feedback (RLHF) – Adds human judgment to tune outputs for usefulness, tone, and safety.
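To make the transformer bullet concrete, here’s a toy version of its core operation, attention – heavily simplified, with random vectors standing in for real embeddings, and leaving out the learned projections, multiple heads, and masking that production models add:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Score every token against every other token, normalise the
        # scores into weights, then blend the value vectors accordingly.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # (n, n) pairwise relevance
        scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
        return weights @ V                            # context-mixed token vectors

    # Five tokens, each a 4-dimensional vector (toy numbers)
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(5, 4))

    # Self-attention: the sequence attends to itself (Q = K = V)
    out = scaled_dot_product_attention(tokens, tokens, tokens)
    print(out.shape)  # (5, 4) – every token now carries context from all five

The takeaway: after attention, every token’s representation has absorbed weighted context from every other token – which is exactly what lets the model track relationships across long passages.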

The result? A system that doesn’t “understand” like a human does – but can generate language that sounds human, feels intelligent, and supports complex thinking.


Why It Feels Intelligent (But Isn’t)

Generative AI gives a strong impression of competence. But it’s not conscious. It’s not fact-checking. And it doesn’t know what’s “true.”

So why does it work so well?

    • Pattern recognition at scale – The model finds language patterns that resemble knowledge.

    • Instruction tuning – It learns to follow natural-language instructions (e.g. “rewrite this in plain English”).

    • Emergent behaviours – At certain scales, surprising capabilities arise: logic, reasoning, even emotional tone.

    • Alignment for helpfulness – It’s trained to be agreeable, polite, and constructive – sometimes too much so.

That’s why it’s powerful – and why it must be used with care.

It simulates understanding. But it doesn’t think for itself.


From Content Tool to Cognitive Partner

For many, AI is still a magic typewriter – useful for quick drafts or summaries.
But something more powerful happens when you stop treating it like a tool – and start engaging it like a thinking partner.

This is the move from output generation to cognitive collaboration.

Instead of:
“Do this for me.”
Try:
“Let’s think this through together.”

Generative AI can act as:

    • A reflective surface for clarifying your ideas

    • A pattern-matching assistant that helps spot connections

    • A voice mirror that sharpens how you communicate

    • A sparring partner that helps test your logic

It can help you:

    • Clarify complex arguments

    • Translate rough ideas into structured drafts

    • Challenge blind spots and surface alternatives

    • Refine your message for different audiences

    • Reflect on your leadership, not just your language

When used this way, AI becomes part of your reflective practice – a mirror and a map.


Case Insight: Strategy Memo Reframed at ARMCO

During a leadership offsite at ARMCO, one senior executive had been struggling with a strategy memo. He’d rewritten it multiple times – but it still didn’t land.

I suggested a different approach: “Speak it aloud into ChatGPT.”

The result? A clear, structured draft in his own voice.
“That’s exactly what I meant,” he said. “I just didn’t know how to say it.”

This wasn’t about replacing his judgment. It was about unlocking it – using AI as a reflective amplifier.


The 1 + 1 = 3 Principle

When AI is used as a thinking partner, the value doesn’t come from the model alone – it comes from the interaction.

Here’s the model I use:

    • 1 – Your expertise and contextual judgment

    • 1 – AI’s ability to organise, translate, and mirror structure

    • + – An iterative dialogue

    • = 3 – An outcome that neither could produce alone

This is especially powerful in leadership, consulting, education, and research – fields where the thinking is the work.


Prompt Engineering: Start with CRAFT

Most people struggle with prompts because they start too vaguely.
The solution is CRAFT – a simple framework developed by Brian Albert:

    • C = Context – What are you working on?

    • R = Role – Who should the AI act as?

    • A = Ask – What do you need help with?

    • F = Format – What kind of output do you want?

    • T = Target Audience – Who’s it for?

Using CRAFT helps you clarify your own thinking before involving the model. It raises the quality of input – and dramatically improves the output.
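Here’s what that looks like assembled into a single prompt – the scenario is invented for illustration:

“Context: I’m preparing a briefing on our plant maintenance backlog for next week’s executive meeting.
Role: Act as an experienced operations consultant.
Ask: Help me structure the briefing around three options and a clear recommendation.
Format: A one-page memo with headings and short bullet points.
Target audience: Time-poor executives with no engineering background.”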


Meta-Prompting: Use AI to Help You Use AI

Want to get better at prompting? Ask the model to help you write better prompts:

“Can you help me write a strong CRAFT-style prompt for a stakeholder briefing?”

This kind of recursive prompting teaches you to think like a designer of the interaction – not just a consumer of the tool.

It’s not automation. It’s acceleration of judgment.


Beware the Sycophantic Trap

Generative AI is trained to be helpful – not critical. That’s why it often agrees with you, even when you’re wrong.

If you’re not careful, it becomes an echo chamber. Here’s how to avoid that:

    • Ask “What are the flaws in this argument?”

    • Use prompts like “Challenge my thinking” or “Play devil’s advocate”

    • Request counterpoints or edge cases

This intentionality ensures you stay in control – and don’t outsource your critical thinking.


Limitations and Ethical Boundaries

Even at its best, generative AI has known limits:

    • No live or proprietary data (unless integrated)

    • No genuine understanding or memory (outside the current session)

    • Prone to hallucination (inaccurate or fabricated claims)

    • May reflect biases in training data

So: Always treat outputs as drafts. Useful, but not definitive. Insightful, but not authoritative.

Your judgment matters more than ever.


Final Word: Own Every Word. Trust but Verify.

In an age of automation, human credibility is your edge.

AI can assist – with speed, structure, and scale. But it can’t own your words. You can.

    • Lead with judgment.

    • Think with clarity.

    • Use the machine – but don’t let it think for you.

Own every word. Trust but verify. That’s the essence of partnership.


Suggested Next Steps

    • Try applying CRAFT to your next prompt.

    • Or ask ChatGPT to help you shape one.

    • And instead of asking, “What can this tool do?” – ask:

“What can we think through together?”

That’s the mindset shift. From prediction to partnership.

Written by Tom McAtee

Curious by nature, grounded by experience – I explore the intersection of AI, culture, and leadership, drawing on four decades in heavy industry and high-stakes organisations. These days, I’m diving deep into research, building tools for thinking, and sharing personal reflections along the way. I also happen to love golf, music, cycling, travel, food – and building elegant things with Divi.

Related Posts

Post #15 – Which AI for Which Task?

Overwhelmed by AI choices? This post shows how to match the right tool to the right task – so you spend less time switching and more time working smart.
