
AI Content Systems for B2B: The Complete Guide

What an AI content system actually is, the five layers it runs on, the stack a lean B2B operator can run for around $106 a month, and the failure modes most teams hit. A working reference from inside the practice.

By Justin DeMarchi · April 10, 2026 · 25 min read

Living guide. Updated as the practice, the tooling, and the underlying tech evolve. The date above is the last meaningful revision.


Most B2B companies using AI for content are running a single ChatGPT session against a vague brief and calling it a strategy. The output is generic, the workflow is fragile, and after thirty posts the system is exactly where it started. There is a better shape for this work, and it is not better prompts.

The short version. An AI content system is a connected pipeline with five layers: input, build, publish, analyze, distribute. It runs on a documented voice spec, structured prompts, a review layer, a feedback loop, and a measurement layer. The lean tooling stack runs around $106 a month. The work is run by one senior operator with AI as the production layer, not by AI alone. The systems that compound treat content as engineering. The ones that fail treat it as creative output run through a model.

What is an AI content system for B2B?

An AI content system is the infrastructure that turns a B2B operator's input into consistent, on-brand content across channels, with AI as the production layer and a human as the senior operator. It runs on five layers: input, build, publish, analyze, distribute. It is not a tool, a prompt, or a chat session.

Every word in that definition is doing work, so it is worth slowing down.

Infrastructure means the system exists between sessions. The voice spec, the prompt patterns, the review checklist, the dashboards, the channel templates. They live in files, not in someone's head. New work plugs into the existing layers rather than rebuilding them.

Operator's input means the raw material is human. Recorded conversations, voice notes, lived experience, sourced data, real opinions. AI is the production layer. The judgment, the angle, the named example, the stance: those come from the operator.

Consistent and on-brand means the output sounds like a specific person or brand across hundreds of pieces, not "professional" in the abstract. That fidelity is engineered, not wished for.

Across channels means the system is built for multi-format output from day one. LinkedIn posts, articles, newsletters, video clips, graphics. One source of input, several surfaces.

Senior operator means the human is the editor-in-chief, not the writer. They supply the spec and the judgment. They review every piece before it ships. They feed what works back into the system. The model handles execution.

This is the framing the rest of the guide builds on. Building and operating these systems is the work of a content engineer.

Why does this guide exist?

The pillar covers eight articles, each one a deeper cut on a specific layer. This guide pulls them into one shape so an operator can read it once and know which deeper article to go to next. It is also a working reference. I run DUO on this stack, and the patterns below are what produce duo.ca, the Studio Notes newsletter, and the LinkedIn cadence underneath both.

The audience is B2B founders and lean teams who already have a sense that AI changes how content gets produced and want a serious, operator-grounded view of how to actually run it.

What changed in 2025-2026?

The shift that made content engineering a real role happened in pieces over the last eighteen months.

AI tooling matured past chat-as-product. Claude Code, MCP, agentic workflows, and the developer-first stack collapsed the cost of building custom content infrastructure. Tasks that used to take a developer a week now take an afternoon. The implication for marketing teams is that the production layer no longer maps cleanly to headcount.

The hiring data tracks the shift. GTM Engineer postings roughly doubled in six months, from ~1,400 in mid-2025 to 3,000+ by January 2026, per Apollo. Content Marketing Manager postings, meanwhile, have dropped 73% since 2023 across 8,000 U.S. listings, per Semrush. The classic title is shrinking. The engineering-shaped title is growing.

Adoption inside B2B caught up. Roughly half of B2B marketers have adopted AI tools in their workflows: 51% on personalization engines, 49% on generative AI, 47% on predictive analytics, per Sagefrog's 2026 report. Tool counts are coming down at the same time. 62% of B2B teams plan to reduce their tool count over the next 12 months, per Martech Alliance 2025. Companies running five or fewer core tools see 23% higher marketing-attributed pipeline per headcount than those running ten or more, per Forrester's 2025 benchmark.

The buyer changed too. 73% of B2B buyers now use AI tools like ChatGPT and Perplexity in research, per PR Newswire. Google AI Overviews appear in roughly 48 to 60% of US searches depending on the tracker, up from near zero two years ago, per Search Engine Journal and Advanced Web Ranking. The buyer is in the AI surface before they ever land on a website. The job of B2B content quietly moved upstream.

Adoption is up, tool counts are down, leaner setups are pulling ahead, and the buyer is asking AI before they ask Google. That is the macro frame.

What are the five layers of an AI content system?

Most write-ups split AI content into "extraction, voice profile, generation, review, distribution." That framing was right in 2024. It treats production as the center of the system. The shape that holds up better in 2026 is broader. Five connected layers, with the production work distributed across them.

1. Input

Source the raw material the system runs on. Recorded conversations, voice notes, transcripts, customer signal, product data, sourced stats, the operator's actual opinions. The job is to turn ambient knowledge into structured material that downstream layers can use.

For lean B2B, two patterns produce most of the input. Voice-to-text is the one that has shifted the most. Super Whisper sits in my menu bar and a global shortcut lets me dictate into any app: code, prompts, drafts, project tickets. Typing used to be the bottleneck on getting an idea into the system, and now it isn't. The other pattern is async AI interviews. I send a structured set of questions to a founder, they answer on their own time, the AI compiles their responses into raw material I can draft against. The founder saves the meeting. I get more depth than a 30-minute call would produce.

Input is the layer most teams underbuild. Without enough specific raw material, the build layer has to invent things, and inventing things is what produces generic output. The full breakdown of how this layer runs in practice is in the content engineer manifesto.

2. Build

Produce the artifacts. Articles, landing pages, newsletters, LinkedIn posts, video clips, graphics. The build layer is where AI does most of the work because the work is structuring, drafting, and tightening rather than generating ideas from nothing.

The build layer needs three things to produce something worth reading. A voice spec to apply (covered below). A structured prompt for the format (covered below). A real brief from the operator: topic, angle, named example, stance. With those three, the model produces output that lands close to final. Without them, it averages toward the AI default register and you end up rewriting it from scratch.

This is also where one extraction session compounds. A 45-minute voice conversation with a founder produces enough raw material for a long-form article, two LinkedIn posts, a newsletter section, and three video clips. The build layer adapts the format while the voice spec keeps the output consistent across all of them.

3. Publish

Ship into production systems. For long-form, that means MDX files in a Git repo deployed by Vercel, not a CMS workflow with a paste step. For LinkedIn, that means a scheduled post via Buffer or LinkedIn's native scheduler. For video, an export from Descript or Remotion. The publish layer is the most automatable, which is why most operators over-invest in it and under-invest in the layers that produce what gets published.

The structural choice that matters at this layer is whether the publish target is something AI can write to directly. If your articles live in a CMS Claude can't touch, every piece has to be hand-pasted at the end. If they live as MDX files in a repo, the build layer writes the file, the publish layer is automatic. That single decision changes the friction of running the system. More on this below in the web infrastructure section, and in the deeper Webflow vs. Next.js breakdown.

4. Analyze

Read the data. Search performance, AI citation tracking, on-site engagement, conversion. The analyze layer is where most marketing stacks lose the plot, because analytics, search, and project tracking live in different tabs that don't talk to each other.

The shift is treating analysis as a conversation, not a dashboard ritual. Claude reads PostHog, Google Search Console, and Linear inside one session through MCP, surfaces patterns, drafts what it sees, opens tickets for the things worth actioning. Twenty minutes once a week instead of half a day across five tabs. The full setup is in the AI martech stack review.
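For reference, Claude Code discovers MCP servers through a .mcp.json file at the project root. A minimal sketch of the shape — the package names below are placeholders, not the real PostHog, Search Console, or Linear server entry points, so check each vendor's current MCP docs:

```json
{
  "mcpServers": {
    "posthog": {
      "command": "npx",
      "args": ["-y", "example-posthog-mcp-server"],
      "env": { "POSTHOG_API_KEY": "placeholder" }
    },
    "search-console": {
      "command": "npx",
      "args": ["-y", "example-gsc-mcp-server"]
    },
    "linear": {
      "command": "npx",
      "args": ["-y", "example-linear-mcp-server"]
    }
  }
}
```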

5. Distribute

Adapt source material into platform-native variants and feed performance data back into the system. LinkedIn variants of articles. Newsletter sections from extraction sessions. Video clips from recorded calls. Graphics from JSON-encoded style prompts. The distribute layer compounds when the cross-format adaptation runs through the same voice spec as the original piece, so the LinkedIn post sounds like the article, which sounds like the newsletter, which sounds like the founder.

The honest version of this layer in 2026 is that distribution is where most teams revert to manual work because the tooling is weakest here. JSON prompts for graphic style consistency, Remotion for code-driven video, and Descript for clip extraction all work but require the operator to build templates rather than press a button. The payoff is that one extraction session keeps producing content for weeks.

The five layers don't exist in isolation. They feed each other. Input determines what build can produce. Build determines what publish can ship. Analyze surfaces what worked. Distribute multiplies the reach of each piece. The feedback loop between analyze and the upstream layers is what makes the system compound rather than run as a treadmill.

What is the B2B Content Engineer role?

The role that runs the system varies by company size. At enterprise scale, it operates as a team building agentic content workflows for marketing and internal operations. At sales-led B2B, it overlaps with GTM engineering: producing sequences, identifying signals, pointing focus at specific accounts. At lean B2B, it compresses into one senior operator running input, build, publish, analyze, and distribute as a connected practice.

Vercel is hiring a content engineer right now and describes the role as two jobs: ship great content with technical narratives, and build the systems, agents, and tools that make great content repeatable. That framing matches the work whether the company is 10 people or 10,000. The output and the system that produces it are co-equal deliverables.

The titles around it are shifting too. AI Content Marketing Manager. Technical Content Strategist. AI Content Producer. Growth Content Manager. Different surface labels, same underlying shift: content and the workflows behind it are being treated as an independent function with engineering-shaped responsibilities, not as a marketing afterthought.

The deeper read on what the role does day-to-day, including five workflows from inside the practice, is in the content engineer manifesto.

How do you build the practical components?

The five layers are the architecture. The components below are what each layer is built out of in practice.

Voice infrastructure

The voice spec is the file that captures how a specific person writes and gets loaded as system prompt input on every AI step that produces text in their voice. Patterns, not aesthetics. Sentence rhythm, banned vocabulary, argument structure, reference points. Twelve to twenty unedited samples in. A structured markdown file out.

A brand voice document written for humans describes "confident but warm" and "data-driven with a human edge." Those phrases are useless to a language model. The model has billions of priors for what those mean in general, which is exactly the problem. A voice profile written as voice.md tells the model: paragraphs are 2 to 4 sentences, never 6. Open with an observation, not a preamble. Close with a stance, not a question. Never use these twelve words. The model can act on that.
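A sketch of that shape — the rules below are illustrative stand-ins, not a real operator's file:

```markdown
# voice.md — illustrative excerpt

## Rhythm
- Paragraphs are 2 to 4 sentences. Never 6.
- At least one short declarative sentence per paragraph.

## Structure
- Open with an observation, not a preamble.
- Close with a stance, not a question.

## Banned vocabulary
- leverage, delve, seamless, game-changer, unlock, robust

## Reference points
- Named tools and concrete examples over abstract categories.
```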

The voice file is the asset that makes the system reusable. It lives in the repo, version-controlled, with a commit history. A new workflow that needs the founder's voice imports voice.md the same way the existing ones do. A change to the file flows to every workflow on the next run. None of that is true for a Google Doc full of brand voice adjectives.

The full anatomy of voice.md, including how to build one and the trap of mistaking it for a brand voice doc, is in the VoiceMD spec.

A note from a different angle: I spent three years in political communications before B2B. The work was getting senior people on television five days a week, on message, in their actual voice, under conditions that punish drift. Voice fidelity wasn't aspirational. It was the job. That's where the engineering frame on voice came from for me. A politician who sounds like a press release on Tuesday loses the seat on Friday. The same posture, ported into a B2B context with AI as the production layer, is what voice.md is doing.

Production infrastructure

Production is the prompt layer plus the build workflows that sit on top of voice.md. A working prompt has five sections: context, voice reference, format spec, constraints, exit criteria. Drop any one and the output gets visibly worse.

The structure is different per format. Long-form is argument-led and the prompt has to scaffold the argument before the model writes a sentence. Newsletters are voice-led and run as a single thread. LinkedIn posts are hook-first and the prompt has to constrain the model into the format the platform actually rewards. The full reference, with templates per format and the iteration loop that sharpens prompts over time, is in prompts for B2B content.

The system prompt holds the stable layer (voice spec, banned vocabulary, role definition). The user prompt holds the variable layer (this specific topic, this angle, this brief). Put a rule in the wrong place and it is either missing where you need it or repeated where it shouldn't be.
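A minimal TypeScript sketch of that split, assuming a LinkedIn-post workflow; the Brief shape and file paths are illustrative, not a fixed schema:

```typescript
import { readFileSync } from "node:fs";

// Illustrative brief shape: the operator supplies all four fields.
type Brief = { topic: string; angle: string; example: string; stance: string };

// Stable layer: identical on every run, so it lives in the system prompt.
const systemPrompt = [
  "You draft LinkedIn posts in the operator's voice.",
  readFileSync("voice.md", "utf8"), // voice spec, banned vocabulary, rhythm rules
].join("\n\n");

// Variable layer: this topic, this angle, this brief.
function userPrompt(brief: Brief): string {
  return [
    `Topic: ${brief.topic}`,
    `Angle: ${brief.angle}`,
    `Named example to work in: ${brief.example}`,
    `Stance to land on: ${brief.stance}`,
    "Format: hook-first, 200-1300 characters, no hashtags.",
    "Exit criteria: reads aloud like the operator, zero banned words.",
  ].join("\n");
}
```

The split is operational: editing voice.md updates every workflow on the next run, while editing a brief touches one piece.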

The most common mistake at this layer is asking the model to do too much in one shot. Outline, draft, edit, optimize for SEO, format for LinkedIn, all in one call. The output is a compromised average across all those tasks. Splitting the workflow into discrete steps with their own exit criteria produces sharper output and a system that's easier to debug when something fails.

Web infrastructure

The publish target shapes what the system can do upstream. If your site lives on a hosted CMS Claude can't write to, every article ends in a paste step. If your site is a Next.js repo with MDX content, the build layer writes the file, the publish layer is automatic, and the schema and SEO infrastructure render from frontmatter on every page.
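A minimal sketch of what that looks like on a Next.js App Router site — the frontmatter fields and helper names are assumptions, not a prescribed setup:

```typescript
import type { Metadata } from "next";

// Illustrative frontmatter shape parsed out of an MDX file.
type Frontmatter = { title: string; description: string; datePublished: string };

// Page metadata derives from the file itself; there is no CMS field to sync.
export function metadataFromFrontmatter(fm: Frontmatter): Metadata {
  return { title: fm.title, description: fm.description };
}

// Article JSON-LD emitted from the same source of truth on every page.
export function articleJsonLd(fm: Frontmatter) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: fm.title,
    description: fm.description,
    datePublished: fm.datePublished,
  };
}
```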

This is the decision that matters most at this layer. Webflow vs. Next.js, broken down honestly: Webflow ships in two weeks and is the right call for sites with marketing-team independence, low total page count, ship-today urgency, or non-technical primary editors. Next.js fits content-heavy strategies, SEO and AEO as competitive advantages, engineering presence (including AI coding), and custom features beyond a brochure site. The full comparison is in Next.js vs Webflow.

For DUO specifically, moving off Webflow to Next.js + MDX cut $40 a month off the recurring bill, removed the per-collection-item cap, and made the publish layer automatic. duo.ca runs as a Git repo deployed by Vercel. Articles are MDX files. Claude Code writes them, runs the editorial checks, pushes to main, Vercel rebuilds. There is no copy-paste step because the format Claude works in is the same format the site renders.

Distribution infrastructure

Distribution is two things. Cross-format adaptation: the LinkedIn variant of an article, the newsletter section pulled from the same source, the video clip excerpted from a recording. Platform-native publishing: the schedule, the format, the timing. Cross-format adaptation is where most teams underinvest and where the system compounds when done well.

For LinkedIn specifically, the algorithm rewards consistent original content from individual profiles over company pages. An AI content system for LinkedIn handles hook-first structure, optimal length, format selection, and posting cadence. A single source piece typically yields 4 to 6 LinkedIn variants. Each post starts from a specific story, observation, or stance the founder put on the table. The build layer structures it, the review layer pressure-tests the voice, the distribute layer schedules it.

For newsletters, Resend handles the transactional layer. Templates live in the repo as React components. Editing an issue is the same workflow as editing a page on the site. The Studio Notes newsletter and the drip emails on duo.ca all run through it.
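A minimal sketch of that workflow with the Resend Node SDK — the template component and addresses are illustrative:

```tsx
import { Resend } from "resend";
import { StudioNotesIssue } from "./emails/studio-notes"; // hypothetical template component

const resend = new Resend(process.env.RESEND_API_KEY);

// The issue is a React component in the repo; sending it is a code path,
// not a CMS workflow with a paste step.
export async function sendIssue(to: string[]) {
  await resend.emails.send({
    from: "Studio Notes <notes@example.com>", // illustrative sender
    to,
    subject: "Studio Notes — this week's issue",
    react: <StudioNotesIssue />,
  });
}
```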

For video, Remotion is React for video, code-driven and version-controlled like the rest of the stack. Descript handles the recording and clipping side: founder interviews, testimonials, content production sessions, with transcripts available immediately and clip exports in seconds.
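To make "React for video" concrete, a minimal Remotion sketch — the component is illustrative, not a production template:

```tsx
import React from "react";
import { AbsoluteFill, interpolate, useCurrentFrame } from "remotion";

// A caption card that fades in over the first second (30 frames at 30 fps).
// Like any component, it diffs cleanly and lives in version control.
export const ClipTitle: React.FC<{ title: string }> = ({ title }) => {
  const frame = useCurrentFrame();
  const opacity = interpolate(frame, [0, 30], [0, 1], {
    extrapolateRight: "clamp",
  });
  return (
    <AbsoluteFill style={{ justifyContent: "center", alignItems: "center" }}>
      <h1 style={{ opacity, fontSize: 80 }}>{title}</h1>
    </AbsoluteFill>
  );
};
```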

For graphics, JSON-encoded style prompts produce consistent branded output across hundreds of pieces. The technique: reverse-engineer an existing style into a structured prompt that captures composition, palette, lighting, line weight, mood. Save it. Reuse it. Time investment shifts from each first-shot generation to one good prompt that earns its keep across the asset library.
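One workable shape for such a prompt — the field names are a convention, not a standard the image models enforce:

```json
{
  "style": "editorial line illustration",
  "composition": "single centered subject, generous negative space",
  "palette": ["#0F172A", "#F8FAFC", "#F59E0B"],
  "lighting": "flat, no gradients",
  "line_weight": "uniform, slightly rough",
  "mood": "calm, precise, a little dry"
}
```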

Measurement infrastructure

Measurement is two layers, kept simple. Voice fidelity is a manual check on every piece: did this sound like the operator? Authority compounding is a quarterly check on downstream signals: inbound DMs, sales call references, search rankings, AI citations. Both matter. Engagement metrics on individual posts are noisy and rarely the right primary signal.

The AI search piece is structural. AI engines like Perplexity, ChatGPT search, Google AI Overviews, and Claude with web search cite content that leads with a clean definition, uses question-shaped H2s, includes scannable comparison tables with concrete attributes, sources stats with named primary sources, emits FAQPage JSON-LD, and links tightly inside topical clusters. Generic content gets ignored across all of them. The full breakdown of structural patterns and engine-specific behavior is in content for AI search.
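The FAQPage markup is the most mechanical of those patterns. Reusing this guide's own first question as the example:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is an AI content system for B2B?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The infrastructure that turns a B2B operator's input into consistent, on-brand content across channels, with AI as the production layer and a human as the senior operator."
      }
    }
  ]
}
```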

For tooling, AthenaHQ tracks citation frequency for chosen prompts across ChatGPT, Perplexity, Claude, and Gemini. Google Search Console surfaces AI Overview impressions under the search appearance filter. Manual SERP checks on the top 20 buyer prompts once a month catch what tools miss. I'd skip most platforms claiming to "guarantee citation." The mechanics are the structural patterns, not a rank-tracking trick.

What does it cost?

The lean tooling stack runs around $106 a month for one operator.

| Tool | Monthly | Tier | Job |
| --- | --- | --- | --- |
| Claude Code | $20 | Pro | Orchestration, build layer, publish |
| Vercel | $20 | Pro | Hosting, deploys |
| Resend | $20 | Up to 50K emails | Newsletter, transactional email |
| Linear | $8 | Standard | Project tracking, sessions outliving Claude |
| Google Workspace | $14 | Business Starter | Mail, docs, drive |
| Descript | $24 | Creator | Recording, transcription, clipping |
| PostHog | $0 | Free up to 1M events | Analytics |
| GitHub | $0 | Free | Code, MDX content |
| Supabase | $0 | Free | Operational data when needed |
| Total | ~$106 | | |

That replaces a traditional setup that included Google Analytics, Mailchimp or HubSpot, Notion for half the work, and Google Sheets for the rest, plus the time tax of keeping all of them current. For a lean B2B function, the math is straightforward.

The deeper breakdown of why this stack and what it replaces is in the martech stack review.

Tools are not the cost. The work is. A content engineer at fractional rates is the line item that matters, whether the role runs in-house or through a partner. Tooling is the floor.

What should you watch out for?

Most AI content systems fail in the same five ways. Each one is a design choice, not a tooling problem.

No documented voice. The operator pastes three good LinkedIn posts into a chat window and asks for ten more in the same style. Output comes back. It's fine. It doesn't sound like them. Without a structured voice file feeding the model, output averages toward the model's default register, which is the register of every AI-generated post on LinkedIn.

Prompt bloat. Every edge case gets patched in. After three months the prompt is a wall of conflicting instructions and the model is hedging on every output. The fix is to treat prompts like code: refactor, pull stable rules into voice.md, delete rules that no longer earn their place.

No feedback loop. The system produces content and ships it. The operator never goes back. The voice file on day 90 is identical to the voice file on day 1. After 30 posts you have 30 posts and nothing else. Compounding requires a capture step.

Treating it like a ghostwriter. The operator hands the AI a vague brief, edits the worst parts, and ships. There is no voice spec, no review layer, no feedback loop. The system is a one-shot translation engine. AI content systems work the opposite way: the human supplies the judgment up front, the model handles execution, the human reviews before publish.

No measurement layer. The system runs as a hamster wheel. Volume goes up. Authority doesn't compound. The operator can't tell which framings drove the conversations that mattered because no one is tracking what worked.

The full breakdown, with why each pattern lands and what the fix looks like at the system level, is in the anti-patterns piece.

The five share a single root. The operator is treating the AI content system as creative output instead of as engineering. Creative output is judged piece by piece. Engineering is judged by what the system produces consistently over time.

How does the review layer work?

Review is the layer most operators want to skip and the layer that separates AI content that compounds from AI content that quietly erodes trust. The trick isn't doing less of it. The trick is matching the depth to the format.

Long-form needs argument and voice checks. Read the article once for argument. Forget every word and ask: what is this piece arguing? Does each section move that argument forward? If the argument holds, the second pass is voice. The fix at this layer is usually surgical: replace three words, cut a sentence, tighten a paragraph that has two ideas where it should have one. A 1,500 to 2,000 word piece, if the draft is close to final, takes 20 to 30 minutes.

Newsletters need subject line, hook, voice, link, and CTA checks because the send is irreversible. Read the subject line in isolation. Does it earn the click on its own? Then the opening hook in isolation. Then the CTA. The most useful trick is to mimic the audience scan pattern: subject line, first two sentences, every subhead, the CTA. If those alone do the work, the body is doing its job.

Social posts need a hook check and ruthless cuts. A LinkedIn post is 200 to 1,300 characters and the first two lines decide whether anyone reads the rest. Read them aloud. Do they sound like the operator talking, or like a competent generic professional? After the hook: voice match, brevity, what to cut. Two to five minutes per post once a voice profile is in place.

The review layer compounds when the patterns the operator keeps fixing get fed back into the voice file or the prompt. A draft that gets rewritten in the same way three weeks running points to a pattern the system should learn. Without the loop, you're paying the review tax forever. With the loop, it shrinks every quarter.

The full breakdown by format, with what to look for and how long it should take, is in the review guide.

What does this enable for B2B operators?

The operator outcome is voice fidelity at scale, freed cognitive load, and authority that compounds rather than dissipates.

Voice fidelity at scale means the LinkedIn post and the long-form article and the newsletter section all sound like one specific person. Not "professional" in the abstract. Specifically recognizable. That fidelity is what turns content into a trust signal instead of background noise.

Freed cognitive load means the operator's time goes into judgment, not production. The 45-minute extraction session is the bulk of the input commitment. Drafting, format adaptation, scheduling, schema, internal linking: the system handles those. The operator reviews. The system gets sharper because the operator captured what they fixed.

Authority compounds when the analyze layer feeds back into the upstream layers. The pillar topics that get cited in AthenaHQ inform what the next batch of articles cover. The framings that drive inbound get logged and reused. The voice file tightens. The prompts sharpen. Six months in, the output is more specific and voice-faithful than it was on day one.

The honest tradeoffs are worth naming. AI content systems make the input layer matter more, not less. If the operator has thin opinions, no specific examples, or no real point of view, the system amplifies the absence. AI content systems also require a real voice file, which most operators underestimate the work to produce. Twenty samples beat fifty mediocre ones, and the file drifts after a few months as the operator's thinking shifts. Treating the system as set-and-forget is what produces the failure modes above.

The systems that compound are the ones run by an operator who treats content as engineering. Specs, refactors, feedback loops, measurement. The work isn't glamorous. It's what makes the difference between a system that produces authority and a treadmill that produces volume.

Where should you start?

Three concrete next steps, depending on where you are.

If you have nothing built. Start with the voice file. Pull twelve to twenty unedited samples of your writing or transcribed speech. Extract the patterns: sentence rhythm, banned vocabulary, argument shape, reference points. Document each section with rules and three to five examples from your own work. The first version is intentionally rough. Generate three to five posts against it and refine. Read VoiceMD for the structure and the prompts piece for the prompt patterns that load it.

If you have a system but it isn't compounding. Audit it against the five anti-patterns. The most common failure point is the absence of a feedback loop: voice file untouched in months, no capture habit, every post a one-off. The fix is a small weekly capture pass that updates the voice file or the prompt with what worked.

If you're choosing tools. Start with the martech stack review. The criterion isn't "best tool for the job" anymore. It's "can Claude read it, write to it, and orchestrate around it." A stack designed around one orchestration layer pays the context-switching tax once instead of every time you sit down to work.

The pillar covers the rest. Next.js vs Webflow for the web infrastructure decision. content for AI search for the GEO layer. The review guide for the review posture. Each one goes deeper on the layer this guide walks through.

The systems that win the next two years are the ones being built now by operators who treat content as engineering and AI as the production layer. Not as a writer to outsource judgment to. Not as a magic prompt to find. As infrastructure that runs because someone designed it to. That's the work.

For the human-layer counterpart on how this plays out across founder communications, see the Founder Communications guide. If you want to talk about applying this to your own practice, book a discovery call.

Frequently asked

Common questions.

  • What is an AI content system for B2B?

    An AI content system is the infrastructure that turns a B2B operator's input (voice, expertise, source material) into consistent, on-brand content across channels, with AI as the production layer and a human as the senior operator. It runs on five connected layers: input, build, publish, analyze, and distribute. It is not a single tool, a chat session, or a prompt template.

  • How is an AI content system different from using ChatGPT for content?

    Using ChatGPT for content is a single chat session with a single prompt and a single output. An AI content system is a connected pipeline with a documented voice spec, structured prompts per format, a review layer, a feedback loop, and a measurement layer. The content system is to ChatGPT what an engineering practice is to a code snippet.

  • Who runs an AI content system in a B2B company?

    A content engineer. The role combines editorial judgment, marketing strategy, and technical execution, with AI as the production layer that makes the workload viable solo. At lean B2B scale it is one senior operator running the full system. At enterprise scale it is a team running agentic content workflows. The shape varies. The role is the same.

  • What does an AI content system cost to run?

    The lean tooling stack runs around $106 a month for one operator: Claude Code, Vercel, Resend, Linear, Google Workspace, Descript, plus free tiers for PostHog, GitHub, and Supabase. Tools are not the cost. The work is. A content engineer at fractional rates is the line item that matters.

  • What is voice.md and why does it matter?

    voice.md is a structured markdown file that captures how a specific person writes: sentence rhythm, vocabulary they use and never use, argument structure, dos and don'ts. It loads as system prompt input on every AI step that produces text in that person's voice. Without it, the output averages toward the model's default register, which is the register of every generic AI post on the internet.

  • Where does human review fit in an AI content system?

    Review is the layer that separates AI content that compounds from AI content that quietly erodes trust. The depth changes by format. Long-form gets two passes (argument, then voice), 20 to 30 minutes for 1,500 to 2,000 words. Newsletters get a scan-pattern review. Social posts get a hook check and ruthless cuts. Review is not a single workflow. It is a posture matched to the format.

  • How do AI content systems get B2B content cited by ChatGPT, Perplexity, and Google AI Overviews?

    Through structural patterns: a clean definition in the first 50 words, question-shaped H2s, comparison tables with concrete attributes, sourced stats with named primary sources, FAQ sections with FAQPage JSON-LD, and tight internal linking inside a topical cluster. AI engines cite specificity. Generic content gets ignored across all of them.

  • What are the most common ways AI content systems fail?

    Five failure modes: no documented voice (output drifts to default register), prompt bloat (rules contradict, model hedges), no feedback loop (every post is a one-off), treating the model like a ghostwriter (operator outsources judgment), and no measurement layer (volume without compounding). Each one is a design choice, not a tooling problem.

  • Is AI content for B2B a flash-in-the-pan trend?

    The titles tracking the shift are early signals, not endpoints. GTM Engineer postings doubled in six months from ~1,400 to 3,000+ per Apollo. Content Marketing Manager postings dropped 73% since 2023 across 8,000 U.S. listings per Semrush. Roughly half of B2B marketers have adopted AI tools in their workflows per Sagefrog's 2026 report. The role is forming because the work is real.

  • Should a small B2B team build the system in-house or work with a partner?

    Both work. In-house works when there is one operator with editorial judgment and a willingness to build into the tooling. A partner works when the founder wants the system running without learning the stack. The wrong path is buying production-capacity tools without designing the system around them. That is what produces the failure modes.


Written by Justin DeMarchi

B2B content engineer and founder of DUO. Eight-plus years running marketing and content systems for brands in tech, SaaS, and AI.