How Your Ability to Articulate Ideas Determines What AI Can Do
And how individuals who master the art of articulating complex ideas in one attempt will outcompete those who do not.
Like many, I once wished for a smarter ChatGPT when my prompts failed. After a year of use, however, I learned that even advanced models stumble over imprecise directions. My experience revealed one truth: the issue isn't the AI; it's my failure to specify exactly what I want.
For example, two months ago, I set out to craft a prompt that transforms a dialogue with GPT into a unified, first-person narrative. This prompt would convert a conversation between two people—Person A (me) and Person B (ChatGPT)—into a continuous story told by one voice. It would preserve every detail so nothing vital is lost, recording every change and improvement made during the exchange; in the end, I would receive a complete narrative reflecting one person’s introspective thought process.
My early attempts fell short. The prompt often produced disjointed summaries or incoherent results—sometimes it worked, other times it missed the point entirely.
After sixty days of practicing how I articulate requests to GPT, I retried. Using the same model, I can now insert this prompt into any conversation and instantly generate an internal monologue fit for Substack, YouTube scripts, or LinkedIn posts. I was reminded that the problem wasn't GPT; it was my inability to express exactly what I wanted.
The articulation skills I sharpen by refining AI prompts resemble the ones I improve through my Master's in Liberal Arts at St. John's. Every week, I hone my ability to express ideas through frequent, real-time debates; challenging texts force me to interpret, argue, and defend ideas on the spot—mirroring the iterative process of refining my requests to GPT.
At the school where I direct product, students engage in dialogue for four hours a day. This intense practice hones their ability to convey complex ideas—often to a level I have yet to reach. Their daily habit of discussing, writing, and debating makes their thoughts sharper and their ideas more vivid.
My work with GPT trains me for all forms of communication, ensuring that my requests—whether to a machine or a colleague—are crystal clear.
As a leader, clear expression has tangible benefits. Crafting effective GPT prompts has helped me delegate tasks and set clear expectations for my team. Instead of issuing vague instructions that trigger endless revisions, I now deliver a single, precise request that hits the mark. When it falls short, I treat it as a chance to learn, refine my approach, and update my process.
A key challenge is that clear expression is deeply personal and hard to replicate across a team. Even if one person excels, copying that clarity isn’t simple. I could have Michael Strong—a pioneer in education and an advocate of Socratic dialogue—explain his approach to questioning, yet I still wouldn’t capture his unique thought process. It takes time and practice.
Yet this challenge is also an advantage. Teams that master the art of articulating complex ideas in one attempt will outcompete those that do not; even if those weaker at expression try to catch up, the gap won’t close in weeks, months, or even years.
I urge anyone who thinks they have a foolproof process to test it, both with people and with GPT. Run your framework in various scenarios. If GPT or your team can't deliver steady results, your articulation of the request needs work. Rigorous testing provides quick feedback and sharpens your ability to share complex ideas. Think of how teachers use rubrics: if different teachers interpret the same rubric differently, the rubric is ambiguous. Likewise, inconsistent results from your prompt reveal that it needs refinement.
Every vague prompt and round of revision is a chance to grow. Every unexpected result is a chance to tweak my wording, add context, and tighten my instructions. I turn the best version of a process into a template and refine it with each success and failure.