A technical reflection for leaders, educators, consultants, and researchers navigating the AI shift
In the past 12 months, I’ve had dozens of conversations with senior leaders, clients, educators, and researchers – all circling around the same core issue:
“I know generative AI is important… but I don’t really understand what it does – or how I should be using it.”
That uncertainty is understandable. The field is evolving fast, the language is often unclear, and the media oscillates between hype and fear.
But if we strip it back to first principles, a different picture emerges – one that’s less about automation, and more about augmentation.
Generative AI is not a magical solution, nor a replacement for human expertise.
It is a probabilistic prediction engine – and, when engaged with thoughtfully, it becomes a powerful thinking partner.
This piece offers a technically grounded yet accessible lens to reframe what generative AI is, what it isn’t, and how to work with it effectively – particularly in roles that rely on judgment, language, and complexity.
Under the Hood: What Generative AI Actually Does
At its core, generative AI systems like ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), and Grok (xAI) are built on large language models (LLMs) trained to do one thing:
Predict the next token.
A token is the atomic unit of language – a word, part of a word, or a punctuation mark. The model’s job is to calculate, given everything that’s come before, what comes next.
This prediction capability is made powerful by three factors:
- Training at scale – LLMs are trained on datasets containing trillions of tokens from books, websites, code, transcripts, academic journals, and more.
- Transformer architecture – Introduced by Google researchers in 2017, the transformer enables models to detect patterns across long sequences of text, allowing for nuanced, context-aware responses.
- Reinforcement Learning from Human Feedback (RLHF) – After initial training, human feedback is used to align the model’s responses with human preferences – improving helpfulness, safety, and coherence.
So, while the model doesn’t “understand” in the way humans do, it generates outputs that appear intelligent, contextually relevant, and even creative – because it has statistically learned the structure of language and reasoning.
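To make this concrete, here is a minimal sketch of next-token prediction using the small, open-source GPT-2 model via the Hugging Face transformers library. The model, prompt, and top-5 cut-off are illustrative assumptions; commercial systems run far larger models, but the underlying mechanism is the same.

```python
# Minimal sketch: tokenise a prompt, then inspect which tokens the model
# considers most likely to come next. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The board approved the new strategy because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()))  # the prompt, as tokens

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Turn the final position's scores into a probability distribution over the
# vocabulary, then list the five most likely next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(probs, 5)
for p, tok_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([tok_id])!r}: {p:.3f}")
```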
Why It Feels Intelligent (But Isn't)
The illusion of intelligence stems from the interaction of several technical features:
- Pattern recognition at scale – The model doesn’t know facts; it recognises patterns that resemble knowledge.
- Alignment tuning – It is optimised to respond in ways that humans find helpful and polite – sometimes too polite.
- Emergent capabilities – Beyond certain scale thresholds, surprising behaviours emerge: reasoning, analogy, basic logic, even apparent empathy.
- Instruction following – Through fine-tuning, the model responds effectively to natural language instructions (e.g. “Summarise this,” “Play devil’s advocate,” “Rewrite in a professional tone”).
In short: it’s not sentient, but it simulates competence in a way that makes it incredibly useful – particularly for text-based workflows.
From Content Tool to Cognitive Partner
Where this matters most is not in content creation, but in cognitive collaboration.
For professionals whose work relies on judgment, synthesis, communication, and reflection – generative AI can function as:
- A second brain
- A pattern-matching collaborator
- A reflective surface for clarifying your own thinking
It can:
- Translate vague thoughts into structured drafts
- Unpack long documents into useful summaries
- Reflect your tone and phrasing in credible ways
- Push back on assumptions when prompted correctly
- Accelerate workflows without compromising on quality
But this shift – from tool to partner – requires a new mindset.
You’re not “using AI.”
You’re thinking with it.
Case Insight: A Strategy Memo Reframed at ARMCO
During a recent offsite with ARMCO’s executive team, one senior leader had been grappling with how to communicate a nuanced strategy pivot. He’d rewritten the memo multiple times — but it still didn’t land.
I suggested a different approach: speak the idea aloud into ChatGPT.
The result was a first draft that reflected his voice, respected the nuance, and clarified the logic.
“That’s exactly what I meant,” he said.
“I just didn’t know how to say it.”
This wasn’t about replacing his expertise. It was about amplifying it through structured reflection.
The 1 + 1 = 3 Principle
When AI is used as a thinking partner, the value comes from the interaction — not the automation.
Here’s the model I use:
- 1 – Your domain expertise and contextual judgment
- 1 – AI’s capacity to generate, organise, and mirror structure
- + – A deliberate iterative back-and-forth dialogue
- = 3 – An outcome that neither could produce alone
This principle is especially valuable in leadership, consulting, education, and research – where the process of thinking is as important as the outcome.
Prompt Engineering: Start with CRAFT
Most people struggle with prompting because they start with unclear intent.
The solution is CRAFT – a practical framework developed by Brian Albert at Lawton:
C = Context – What are you working on?
R = Role – Who should the AI act as? (e.g. strategist, editor, analyst)
A = Ask – What do you need help with?
F = Format – What kind of output do you want? (e.g. list, draft, table)
T = Target Audience – Who is this for? Yourself? A board? A client?
Applying CRAFT forces you to clarify your own thinking before expecting clarity from the model.
It improves the signal-to-noise ratio – and dramatically lifts the quality of the output.
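As a small illustration, here is a sketch of a helper that assembles the five CRAFT elements into a single prompt. The function name and wording are my own, not part of the framework – adapt the phrasing to your context.

```python
# Hypothetical helper: turn the five CRAFT elements into one prompt string.
def craft_prompt(context: str, role: str, ask: str, output_format: str, audience: str) -> str:
    return (
        f"Context: {context}\n"
        f"Role: Act as {role}.\n"
        f"Ask: {ask}\n"
        f"Format: {output_format}\n"
        f"Target audience: {audience}"
    )

print(craft_prompt(
    context="We are repositioning our consulting practice after a strategy pivot.",
    role="an experienced strategy consultant and editor",
    ask="Draft a one-page memo explaining the rationale for the pivot.",
    output_format="A short memo with three headed sections and a closing call to action.",
    audience="The executive leadership team.",
))
```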
Meta-Prompting: Design Better Interactions
You can go one step further by asking the AI to help you write your own CRAFT prompt:
“Can you help me write a strong CRAFT-style prompt for a research briefing I’m preparing?”
This not only accelerates setup – it teaches you to think like a prompt engineer.
It’s recursive prompting – using the model to shape how you use the model.
In doing so, you move from user to collaborator.
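In code terms, recursive prompting is a two-step exchange: the first call asks the model to draft a CRAFT-style prompt, you refine it, and the second call runs it. Here is a minimal sketch using OpenAI's Python client – the model name and wording are illustrative assumptions, not a prescription.

```python
# Sketch of recursive prompting with the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: ask the model to design the prompt itself.
draft_prompt = ask(
    "Help me write a strong CRAFT-style prompt (Context, Role, Ask, Format, "
    "Target audience) for a research briefing I'm preparing on generative AI adoption."
)

# Step 2: review and edit the draft prompt yourself, then run it.
print(ask(draft_prompt))
```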
Cognitive Bias and the Sycophantic Trap
One critical limitation: generative AI is optimised for alignment and helpfulness, not critique.
This means it may reinforce your assumptions unless explicitly prompted to do otherwise.
To mitigate this:
- Ask “What are the weaknesses in this argument?”
- Request counterpoints or alternative perspectives
- Use instructions like “Challenge my thinking” or “Play devil’s advocate”
Without this layer of intentionality, the model can become an echo chamber.
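One way to build that intentionality in is to bake critique into the request itself, for example as a standing system instruction. A minimal sketch, again using OpenAI's Python client with an illustrative model name:

```python
# Sketch: instructing the model to critique rather than agree.
# Assumes OPENAI_API_KEY is set; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Play devil's advocate. Identify weaknesses, missing evidence, "
            "and credible alternative perspectives. Do not soften the critique."
        )},
        {"role": "user", "content": "Here is my draft argument: <paste your draft here>"},
    ],
)
print(critique.choices[0].message.content)
```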
Known Limitations and Ethical Boundaries
Even at its best, generative AI has technical and ethical limitations:
- No built-in access to real-time or proprietary data
- No true contextual awareness (only simulated coherence)
- Prone to hallucination (fabricated facts or references)
- May embed latent biases from training data
Which is why every output must be treated as a first draft, not a final decision.
Generative AI supports your judgment – it does not replace it.
Final Word: Own Every Word. Trust but Verify.
As professionals, researchers, and leaders, we must remain accountable for what we say, write, publish, and sign.
AI can assist – with speed, breadth, and structure.
But ownership cannot be delegated.
Own every word. Trust but verify.
Use the machine – but lead with judgment.
In an age of accelerating automation, human credibility remains the most valuable asset.
Suggested Next Step
Try applying CRAFT to your next prompt.
Or better still, ask the AI to help you shape one.
And instead of asking “What can this tool do?” – ask:
“What can we think through together?”
Because the real shift isn’t about output.
It’s about partnership — and how you choose to engage.