
How to write prompts for B2B content

A tactical reference on prompt engineering for B2B content workflows. Structures that produce voice-faithful long-form, newsletters, and social posts. Not a generic prompting guide.

By Justin DeMarchi · May 6, 2026 · 10 min read

A prompt for B2B content is not a clever sentence. It is a structured brief written for a model the way you would write one for an engineer. The shape of the prompt determines the shape of the output more than any other factor in the system.

This is the tactical layer underneath the voice work. If voice.md is the engineering spec that captures how someone writes, the prompt is the instruction set that puts the spec into action on a specific piece. The spec is stable. The prompt changes every time you produce something. Both have to work for the system to produce content worth reading.

I run DUO's content engine this way and have tested every pattern below in production. The article is meant as a reference. Read it once, come back to it when you are constructing a prompt that is not landing.

What does every effective B2B content prompt include?

A working prompt has five sections. They are not optional. Drop any one of them and the output will be visibly worse.

Context. Who is writing, for whom, and why now. A B2B SaaS founder writing about pricing for early-stage operators is a different prompt than a senior marketing leader writing about the same topic for a board audience. The model needs to know the persona, the reader, and the moment before it picks anything else.

Voice reference. The pointer to the voice spec the model should apply. In practice this is the voice file loaded as system prompt input plus a sentence in the user prompt that says "apply the voice profile in voice.md to this draft." Without this the model defaults to its average register.

Format spec. The shape of the output. Word count range. Heading structure. Whether quotes are allowed. Whether examples are required. Whether the close is a stance or a question. The model is good at following format specs when they are precise and bad at inferring them when they are vague.

Constraints. What the output cannot do. Banned vocabulary. Maximum paragraph length. Forbidden structures (no rhetorical questions, no listicle disguised as an essay, no "in this article we will explore"). Constraints are how you prevent the AI defaults from leaking back in.

Exit criteria. What done looks like. Specific enough that you could check the output against the list and answer yes or no on each item. "The post opens with a specific observation, not a preamble. The argument has three named examples. The close states a position." If you cannot articulate what done looks like, the model definitely cannot.

A prompt with all five reads like an engineering brief. That is the point.

How do you inject a voice profile into a prompt?

The voice file goes in as system prompt input. Not paraphrased. Not summarized into bullets in the user prompt. The whole file.

The pattern looks like this in a Claude Code or API context:

SYSTEM:
[Full contents of voice.md]
[Format-specific rules if they apply across many runs]

USER:
Brief: [topic, angle, named example, audience]
Format: [long-form / newsletter / LinkedIn]
Apply the voice profile above. Output a draft that meets the exit criteria below.
Exit criteria:
- [item 1]
- [item 2]
- [item N]

The voice file lives at the system level because it does not change between runs. The brief lives at the user level because it changes every time. This separation is what keeps the system maintainable. If you put the voice rules in the user prompt, you have to repeat them on every call and any change has to be made in dozens of places at once.

A common mistake is paraphrasing the voice file into the user prompt to "save tokens." The paraphrase drifts from the source file and the model gets a less specific spec than the one you actually wrote. Paste the file. Let the model see what you wrote.
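The system/user split can be sketched as a small helper. This is a minimal illustration, not a real SDK call; `build_messages` and its field names are hypothetical, and the returned dict maps onto whatever API you actually use:

```python
def build_messages(voice_spec: str, brief: str, fmt: str, exit_criteria: list[str]) -> dict:
    """Keep the stable voice spec at the system level and the per-piece
    brief at the user level. `voice_spec` is the full text of voice.md,
    pasted, not paraphrased."""
    criteria = "\n".join(f"- {item}" for item in exit_criteria)
    user = (
        f"Brief: {brief}\n"
        f"Format: {fmt}\n"
        "Apply the voice profile above. "
        "Output a draft that meets the exit criteria below.\n"
        f"Exit criteria:\n{criteria}"
    )
    return {"system": voice_spec, "user": user}
```

Because the voice spec is read in whole, a change to voice.md flows into every run without touching the helper.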

What prompt patterns work for each B2B content format?

The structure of the prompt changes by format because the structure of the output is different.

Long-form (article, blog, guide)

Long-form is argument-led. The prompt has to scaffold the argument before the model writes a sentence.

Topic: [the question this piece answers]
Argument: [the position the piece takes]
Evidence: [3 to 5 specific points that support the position]
Named examples: [real examples, not invented ones]
Sources: [stats with citations, where applicable]
Audience: [the reader the piece is for]
Voice: apply voice.md
Length: 1500 to 2000 words
Structure: H2 headings as natural-language questions or statements. 2 to 4 sentence paragraphs. No em dashes.
Exit criteria:
- Opens with a specific observation, not a preamble
- Each H2 section advances the argument
- Closes with a stance, not a question
- No banned vocabulary
- Every claim is either sourced or qualified

The reason this works is that the model is not being asked to invent the argument. It is being asked to execute one you already supplied. Output quality on long-form drops sharply when the operator hands the model a topic and asks it to "write an article." The argument has to come from you.
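One way to enforce that the argument comes from you is to make the scaffolding a typed structure that fails fast when it is thin. The `LongFormBrief` class and its field names below are illustrative, not part of any library:

```python
from dataclasses import dataclass

@dataclass
class LongFormBrief:
    """The argument scaffolding the operator supplies before the model
    writes a sentence. Field names are illustrative."""
    topic: str
    argument: str
    evidence: list[str]        # 3 to 5 specific supporting points
    named_examples: list[str]  # real examples, not invented ones
    audience: str

    def __post_init__(self):
        # Fail fast: the argument has to come from the operator, not the model.
        if not self.argument.strip():
            raise ValueError("supply the argument; do not leave it to the model")
        if not 3 <= len(self.evidence) <= 5:
            raise ValueError("supply 3 to 5 evidence points")
```

A run that cannot construct the brief never reaches the model, which is the cheapest place to catch a vague prompt.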

Newsletter

Newsletters are voice-led and conversational. They run as a single thread, not as a structured argument.

Subject line angle: [the hook the subject line will work from]
Opening: [the specific observation or moment the issue starts with]
Single thread: [the one idea this issue carries]
Close: [the stance or the takeaway]
CTA: [what the reader is invited to do, if anything]
Voice: apply voice.md
Length: 400 to 700 words
Structure: short paragraphs, conversational rhythm, one thread, no subheads unless the issue is genuinely two parts
Exit criteria:
- Subject line is specific, not clever
- Opens in the voice, not in newsletter conventions
- Single idea is carried clean from start to close
- No padding, no recap of last issue, no "in this issue"

Newsletters break when the prompt asks for too much. The temptation is to add three updates, a quick tip, and a CTA. Most great B2B newsletters carry one idea per issue. The prompt should enforce that.

LinkedIn post

LinkedIn is hook-first and scannable. The prompt has to constrain the model into the format the platform actually rewards.

Topic: [the specific point]
Frame: [why this matters now, what makes it sayable]
Named example: [real, not invented]
Hook: [the first 2 lines, written tight, no setup]
Stance: [the position the post lands on]
Voice: apply voice.md
Length: 150 to 280 words
Structure: line breaks every 1 to 2 sentences. Hook first. Body builds the point. Close states the stance.
Exit criteria:
- First 2 lines work as a standalone hook (the reader can read just those and want more)
- Post earns its space (cutting any sentence would weaken it)
- Closes with a stance, not a question
- No emoji, no hashtags unless explicitly requested
- No banned vocabulary

A LinkedIn prompt that works is short. The model has less to do because the format is short. Most prompt mistakes on LinkedIn come from over-specifying the body and under-specifying the hook. The hook is where the prompt earns its keep.
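Several of these exit criteria are mechanical, which means they can be checked before a human reads the draft. A sketch, with a placeholder banned-vocabulary list you would swap for your own:

```python
def check_linkedin_draft(draft: str) -> list[str]:
    """Mechanical checks against the LinkedIn exit criteria above.
    Returns a list of failures; empty means the checks passed."""
    failures = []
    words = draft.split()
    if not 150 <= len(words) <= 280:
        failures.append("length outside 150-280 words")
    if draft.rstrip().endswith("?"):
        failures.append("closes with a question, not a stance")
    if "#" in draft:
        failures.append("contains a hashtag")
    banned = {"leverage", "delve", "game-changer"}  # placeholder list
    hits = banned & {w.strip(".,!?").lower() for w in words}
    if hits:
        failures.append(f"banned vocabulary: {sorted(hits)}")
    return failures
```

The subjective criteria (the hook works standalone, the post earns its space) still need a human pass; the checker only clears the floor.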

How do you iterate on a content prompt?

The prompt you ship in production is rarely the prompt you started with. The system gets sharper through structured iteration, not through trying to write a perfect prompt on round one.

The loop:

  1. Start with constraints. Write the first prompt as tight as you can. Five sections, exit criteria, voice reference.
  2. Generate. Get one output.
  3. Review against the exit criteria. Mark which criteria the output met and which it failed.
  4. Refine the prompt with what failed. If the model closed with a question, add an explicit constraint. If it used "leverage," it goes in the banned list. If the argument lost the thread, the prompt needs more scaffolding.
  5. Generate again. Compare against the previous round.
  6. Repeat three to five times.
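The loop above can be wired as a small driver. Here `generate` and `review` are stand-ins for your model call and your exit-criteria checks; the function name is illustrative:

```python
def iterate_prompt(prompt: str, generate, review, rounds: int = 5):
    """Generate, review against exit criteria, fold each failure back
    into the prompt as an explicit constraint, repeat."""
    draft = ""
    for _ in range(rounds):
        draft = generate(prompt)   # one output per round
        failures = review(draft)   # which exit criteria failed
        if not failures:
            break
        # Each real failure becomes a constraint on the next round.
        prompt += "\nConstraint: fix this failure: " + "; ".join(failures)
    return prompt, draft
```

The returned prompt is the artifact worth keeping: it carries every constraint a real failure earned.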

After five rounds the prompt is producing output that lands close to final. The prompt itself is also sharper than anything you would have written from scratch because each iteration was driven by a real failure.

This is the same loop that fixes the no-feedback-loop anti-pattern, applied to the prompt layer instead of the voice file. Both layers compound when you capture what you learned.

What are the most common B2B content prompt mistakes?

These are the prompt-level mistakes that show up most. They are different from the system-level anti-patterns and usually faster to fix.

Vague briefs. "Write a LinkedIn post about pricing." The model has to invent everything: angle, example, stance. It will pick the safest, most generic version of each. The fix is to supply the angle and the named example before the model writes a sentence.

Missing voice reference. The prompt does not point at voice.md. The model writes in its default register. Output sounds like every other AI post on LinkedIn. The fix is to load voice.md at the system level and reference it in the user prompt.

No exit criteria. The prompt asks for a draft but does not specify what good looks like. The model returns whatever it judges acceptable. The operator has no way to tell whether the output succeeded other than reading it and reacting. The fix is to write the exit criteria before the prompt runs.

Overstuffed prompts. Every edge case patched in over six months. Twenty rules competing with each other. The model hedges. This is prompt bloat at the prompt level. The fix is to refactor: pull stable rules into voice.md or the system prompt, delete rules that no longer earn their place, keep the working prompt tight.

Asking the model to do everything in one shot. Outline, draft, edit, optimize for SEO, format for LinkedIn, all in one call. The output is a compromised average across all those tasks. The fix is to split the workflow into discrete steps. Each step has its own prompt, its own exit criteria, and its own review pass.
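The split-workflow fix can be sketched as a pipeline where each step carries its own prompt logic and its own review pass. `run_pipeline` and the step shape are illustrative, assuming each step is a (name, run, check) triple:

```python
def run_pipeline(brief, steps):
    """Each step has its own prompt, its own exit criteria, and its own
    review pass. The output of one step is the input to the next; a
    failed check stops the run instead of passing a bad artifact along."""
    artifact = brief
    for name, run, check in steps:
        artifact = run(artifact)
        failures = check(artifact)
        if failures:
            raise RuntimeError(f"{name} failed exit criteria: {failures}")
    return artifact
```

Because each step is reviewed in isolation, a failure points at one step rather than at a single overloaded prompt.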

Where do voice spec, prompts, and outputs each live?

Not every rule belongs in the prompt. The boundary between voice.md, the system prompt, the brief, and the user prompt is what makes a content system maintainable.

voice.md. Rules about how the person writes. Sentence rhythm. Banned vocabulary. Argument shape. Dos and don'ts. These do not change between pieces. Loaded at the system level on every run.

System prompt. Rules about how the workflow runs. Format conventions that apply across many pieces. Role definitions ("you are drafting a LinkedIn post in [name]'s voice"). The system prompt is stable and rarely changes.

Brief. Context for this specific piece. Topic. Angle. Named example. Audience. Stance. The brief is the variable layer that the operator supplies before each run. In an AI content system, the brief is the human's main job; the model handles execution.

User prompt. The wiring. References voice.md, references the brief, asks the model to apply both, lists the exit criteria for this specific piece. The user prompt is the smallest of the four and the most rewritten.

When something in the output is consistently wrong, ask which layer is missing the rule. If the model uses a banned word, voice.md or the system prompt is missing it. If the angle is generic, the brief was generic. If the format is wrong, the user prompt under-specified the format. The diagnostic is which layer the rule belongs in.

What is the difference between system prompts and user prompts?

The practical orientation is short.

System prompts are persistent. They load at the start of a session or a workflow run and apply to every subsequent message. Voice.md belongs here. Format conventions that span many runs belong here. Role definitions belong here.

User prompts are per-message. They carry the variable input for this specific run. The brief belongs here. The specific exit criteria for this piece belong here. The reference back to the voice spec belongs here.

In a production workflow this separation matters because it keeps the system maintainable. A new content type gets a new user prompt template. A change to the voice file flows to every workflow on the next run. The two layers do different jobs and changing them at different cadences is what keeps the system from accumulating drift.

What does prompt discipline actually produce?

A prompt that produces B2B content worth reading is structured, specific, and tied to a voice spec that does the heavy lifting. The model executes. The operator supplies the judgment.

The systems that compound are the ones where the prompt layer is treated like code. Refactored. Versioned. Iterated against exit criteria. The systems that do not compound are the ones where every prompt is a one-off and the operator is rewriting from scratch every time.

Once a draft comes out of the system, the review layer is what closes the gap between voice-faithful and shippable. Prompt quality determines how much review the draft needs. Better prompts mean lighter review. That is the payoff for getting the prompt layer right.

Most of what people call "prompt engineering" in 2026 is actually content engineering. The prompt is one component of a system that also has a voice spec, a brief, a review pass, and a capture layer. Working at the prompt level alone produces marginal gains. Working at the system level, with the prompt as one piece of it, is what makes the content engineer role viable.

The shorter version. Write prompts as engineering briefs. Five sections, exit criteria, voice reference. Use a different shape per format. Iterate against what failed. Keep the layers separate. The model will keep getting better. The work is in how you write to it.

Frequently asked

Common questions.

  • What is a prompt in a B2B content workflow?

    A prompt is the instruction set passed to a language model to produce a specific piece of content. In a B2B content workflow it is not a one-line ask. It is a structured artifact with context, a voice spec reference, a format spec, constraints, and exit criteria. Treat it like a brief written for an engineer, not a creative request to a copywriter.

  • What does a good content prompt structure look like?

    Every effective content prompt has five sections: context (who is writing, for whom, why now), voice reference (the voice file or rules to apply), format spec (the shape of the output), constraints (banned vocabulary, length, structural rules), and exit criteria (what done looks like). Skipping any one of them is what produces generic output.

  • How do you reference a voice profile in a prompt?

    Load the voice file as system prompt input on every step that produces text in the person's voice. Reference it by name in the user prompt so the model knows which spec to apply. Do not paraphrase the voice file in the prompt. Paraphrasing introduces drift. The whole file goes in as context, the user prompt asks the model to apply it.

  • Should prompts be different for long-form, newsletters, and social posts?

    Yes. Long-form needs argument scaffolding and source instructions. Newsletters need single-thread structure and a CTA spec. LinkedIn posts need hook-first structure and stance-closing rules. The shape of the output is different in each format, so the prompt structure has to be different.

  • How do you iterate on prompts to get better content?

    Run a tight loop. Start with a constrained prompt. Generate one output. Review it against the exit criteria. Identify what failed and update the prompt to prevent that failure next time. Generate again. After three to five rounds the prompt is sharper than anything you would write from scratch.

  • What is the difference between system prompts and user prompts in a content workflow?

    System prompts hold the stable layer: voice spec, banned vocabulary, format rules, role definition. User prompts hold the variable layer: this specific topic, this specific angle, this specific brief. The system prompt rarely changes between content runs. The user prompt changes every time. Putting variable content in the system prompt is a common mistake that makes the system harder to maintain.

  • What are the most common mistakes in content prompts?

    Vague briefs (write a LinkedIn post about X), missing voice reference, no exit criteria, asking the model to do too many steps in one prompt, and prompt bloat where every edge case has been patched in. Each one produces output that drifts toward the model's default register, which is the register of every generic AI post on the internet.

  • Where does prompt logic belong: in the prompt, in voice.md, or in the brief?

    Voice.md holds rules about how the person writes (sentence length, banned vocabulary, argument shape). The brief holds context about this specific piece (topic, angle, named example, audience). The prompt is the wiring that joins them. Put a rule in the wrong place and either it is missing where you need it or it gets repeated where it should not be.

Written by

Justin DeMarchi

B2B content engineer and founder of DUO. Eight-plus years running marketing and content systems for brands in tech, SaaS, and AI.
