Guide to Refining Prompts & AI Prompts Terms

Generative AI

Imagine your intern runs up with a first draft of an article for the site. What are the chances it is perfect and requires no edits? Blank-slate generative AIs (chatbots that don’t have context on your organization or data) are like over-confident interns. Rarely will 0-shot replies (the first response to an initial prompt) be of the quality you need.

Enter refining and iterative prompts.

Iterative prompt refinement forces clearer articulation of intent and goals upfront, enabling more aligned responses. Constructive feedback identifies weak points to address through rewriting. The process surfaces vagueness, irrelevance, and complexity to streamline. Distilling ideas into concise summaries presses for greater coherence. Rewriting strengthens logic, structure, and transitions. Collaboration provides an outside perspective to counter biases and assumptions. Overall, this methodology structures regular critical analysis of output to curate higher quality through ongoing refinement.

Basically, great writing comes from great editors. You are the editor.

The next time you are chatting away, give some of these refining prompts a try to increase the quality of the final product. And remember, always check the facts, because this intern didn’t.

Refining Prompt Examples

Clarifying Voice of Audience

  • Please provide more context about the previous reply – what was the intended audience, goal, and circumstances?
    • Rewrite the response incorporating the additional details to make it more tailored and appropriate.
  • Evaluate if the style and tone fit the intended audience and goals.
    • Rewrite to craft a more suitable voice (more passionate, enthusiastic, authoritative etc).


Strengthening Logic

  • Identify any flawed logic, questionable assumptions, or gaps in the reasoning of the previous reply.
    • Rewrite the response addressing those issues to improve the strength of the arguments.
  • Critique the previous reply, pointing out the flaws in thinking
    • Rewrite the response based on this evaluation


Summarizing Key Points

  • Distill the core ideas from the previous reply into a concise five-sentence summary.
  • Provide feedback on how well the summary captures the essence.

Enhancing Clarity

  • Pinpoint confusing phrases, vague language, and complex sentences in the previous reply.
    • Rewrite the response improving clarity by simplifying wording and reducing perplexity.
  • Rewrite this for an 8th-grade reading level.

Building Persuasiveness

  • Suggest 3 specific ways the previous reply could be more persuasive for the target audience and goals.
    • Incorporate those suggestions into a rewritten version.

Strengthening Coherence

  • Identify ways the previous reply could have improved flow, transitions, and logical structure.
    • Rewrite the response enhancing coherence and organization.

Checking Relevance

  • Determine if any parts of the previous reply stray from the topic or intended goals.
    • Rewrite the response cutting extraneous content and sharpening the focus.

Updating Accuracy

  • Update the inaccuracies in the previous response based on the following information: {}
  • Research relevant statistics, expert opinions, or factual details that could back up the main arguments. (if AI has internet access)

Providing Examples

  • Identify places where specific examples or anecdotes would make the ideas more concrete and relatable. 
    • Add 1-2 vivid examples to illustrate the key points.

Considering Counterarguments

  • Anticipate what objections or opposing views might be raised against the main arguments.
    • Address 1-2 counterarguments preemptively to strengthen the overall case.

Increasing Approachability

  • Pinpoint any overly academic language, jargon, or complex vocabulary.
    • Rewrite these parts in a more conversational and accessible tone.

Adding Multimedia Elements

  • Consider what types of visuals, infographics, video or audio clips could help engage the intended audience. Create an image prompt for these.
  • Incorporate 1-2 multimedia elements to enhance interest and impact.
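
The refining prompts above can also be chained programmatically. Below is a minimal, hypothetical Python sketch of that loop: `ask_model` is a stand-in for a real chat API call (here it just echoes the prompt), and the prompt list is abbreviated from the examples above.

```python
# Abbreviated sequence of refining prompts from the examples above.
REFINING_PROMPTS = [
    "Identify any flawed logic or gaps in the previous reply.",
    "Rewrite the response addressing those issues.",
    "Pinpoint confusing phrases or vague language.",
    "Rewrite the response, simplifying the wording.",
]

def ask_model(history, prompt):
    """Placeholder model call: a real implementation would send the
    conversation history plus the new prompt to a chat API."""
    history = history + [("user", prompt)]
    reply = f"[reply to: {prompt}]"       # stub reply for illustration
    return history + [("assistant", reply)], reply

def refine(draft_request):
    """Ask for a draft, then run every refining prompt over it in turn."""
    history, draft = ask_model([], draft_request)
    for prompt in REFINING_PROMPTS:
        history, draft = ask_model(history, prompt)
    return draft  # the last rewrite, after all refining passes

final = refine("Draft a 300-word article on our annual fundraiser.")
print(final)
```

The point is the shape of the loop, not the stub: each refining prompt sees the full conversation so far, so every rewrite builds on the previous critique.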

You may have realized at this point that we’re basically talking about becoming a good editor with generative AI. So, here is a poem that captures the real goal of editing…

“A SHORT CONDENSED POEM IN PRAISE OF READER’S DIGEST CONDENSED BOOKS”

By Theodore Geisel (Dr. Seuss)

It has often been said
There’s so much to be read,
you never can cram
all those words in your head.

So the writer who breeds
more words than he needs
is making a chore
for the reader who reads.

That’s why my belief is
the briefer the brief is,
the greater the sigh
of the reader’s relief is.

And that’s why your books
have such power and strength.
You publish with shorth!
(Shorth is better than length.)

AI Prompting Terms to Know

0-shot learning: When an AI model can perform a task without additional training beyond its original dataset, relying only on capabilities acquired during pre-training.

Chain-of-thought prompting: Asking the model to work through a problem step by step, showing its intermediate reasoning before giving a final answer, which often improves accuracy on complex tasks.

Few-shot learning: When an AI model is given very few real examples to adapt to a new task, often 5-10 samples, before inferring patterns to apply more broadly.

Few-shot prompting: Providing a model with a small number of real examples to adapt to a new task or genre.
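
As an illustration of few-shot prompting, here is a small Python sketch that assembles a prompt from a handful of worked examples before the new input. The helper name and the sample data are made up for illustration.

```python
def few_shot_prompt(examples, new_input, instruction):
    """Build a few-shot prompt: instruction, worked examples, then the
    new input, leaving the final Output: for the model to complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("We raised $5,000 at the gala.", "Fundraising update"),
    ("Volunteers needed this Saturday.", "Volunteer call"),
]
prompt = few_shot_prompt(examples, "Our new board members start in May.",
                         "Classify each message with a short label.")
print(prompt)
```

Because the model sees two labeled examples in the prompt itself, it can infer the labeling pattern without any additional training.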


In-context learning: When a model picks up a task from examples embedded in the prompt itself, with no additional training or weight updates.

Prompt engineering: The crafting of effective prompts that clearly communicate the intent while avoiding assumptions and leading language.

Prompt chaining: Building up prompts in a logical sequence, with each depending on and referring back to previous prompts and responses.

Prompt embedding: Incorporating a complete prompt within another prompt, useful for providing examples, context, and scoping the desired response. 

Prompt curation: Thoughtfully collecting, organizing and refining a library of prompts for easy discovery, reuse, and sharing.

Prompt taxonomy: Classifying prompts into a hierarchical structure with categories and tags to capture relationships and usage patterns.

Checkpoint prompting: Intermittently inserting comprehension checks into a long prompt to validate the AI understands instructions and is on track.

Prompt shaping: Gradually adapting prompts through positive and negative feedback to reinforce desired responses and discourage unwanted ones. 

Prompt augmentation: Altering parts of a prompt to generate variations for greater output diversity with the same base structure.

Prompt sampling: Trying a range of prompt styles and approaches to determine the most effective phrasing, tone, and complexity.

Priming: Providing some initial text or content to establish context and orient the AI model before the main prompt.

Soft prompting: Using subtler cues and implicit guidance to shape the desired response, rather than explicit instructions.

Negative prompting: Specifying sample unwanted responses that the model should avoid generating.

Balanced prompting: Ensuring prompts cover multiple perspectives to mitigate bias, rather than skewing positive or negative.

Overprompting: Flooding the model with so much detailed prompting that it decreases creativity and autonomy. As context windows grow, this may become a bigger obstacle to getting quality outcomes.

Underprompting: Not providing enough context or guidance, resulting in superficial, generic, or nonsensical responses.

Multi-step prompting: Chaining a series of prompts together in a logical flow to make complex requests.

Prompt masking: Hiding parts of a prompt during training to improve generalization and reduce overfitting.

Context Window: The maximum amount of text (measured in tokens) an AI model can consider at once, including the prompt, any supplied background, and the generated response. Choosing what goes into the window focuses the model on the details needed to produce high-quality, targeted responses aligned to the prompt.
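
As a rough sketch of managing a context window, the Python example below drops the oldest messages once a token budget is exceeded. The whitespace word count is a crude stand-in for a real tokenizer, and the function names are hypothetical.

```python
def approx_tokens(text):
    """Crude token estimate: real tokenizers count differently,
    but whitespace words are close enough for a sketch."""
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages whose combined size fits the
    budget, dropping the oldest first."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["first long message " * 10, "a short follow-up", "latest question"]
print(fit_to_window(history, 10))  # the long opener no longer fits
```

Dropping the oldest turns first is only one strategy; summarizing older turns instead is a common alternative when early context still matters.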

Prompt Library: A curated, maintained list of custom prompts for a specific organization. This helps create higher quality and consistency in generative AI prompt results.

The Grey Jacket Problem: The result of using generic prompts in generative AI that yield results that may seem unique, but when viewed by the public look like everyone else’s content. Concept from Whole Whale, when the CEO wore the same grey jacket as another presenter on an AI panel.