
What is a content engineer? Five AI workflows you can try today

Five AI-first content workflows from a content engineer's day: voice-to-text input, Claude Code site building, MDX content, AI as research synthesizer, and JSON prompts for visuals. A practical guide to AI content engineering and content automation for B2B marketing operations.

By Justin DeMarchi · April 29, 2026 · 9 min read

Role titles and the language around them signal where industries are headed. Growth hacker. Demand gen. Revenue operator. GTM engineer.

In the marketing space, the title of content engineer emerged in mid-2025. According to Google Trends, search volume has climbed steadily, exceeding that of roles like content strategist and content marketer. People who searched for content engineer also searched for AI engineer. That tells you something about what's expected of the role.

Google Trends data: search interest in 'content engineer' rising relative to content strategist and content marketer

Job boards and search patterns don't always tell the same story, though.

If I search "Content Engineer" in LinkedIn Jobs, I see no postings with that exact title.

The roles I do see when I search "content engineer": AI Content Marketing Manager, Content Strategist, AI Content Producer, Technical Content Strategist, Growth Content Manager, Content Creator, Digital Content Strategist.

These postings point to something real. Content is being valued as an independent function, where previously it might have fallen under the Marketing Lead's role. And companies want people who can leverage AI and build custom workflows for content automation, production, and distribution.

My first interaction with the role was on a job board. Vercel was hiring for a content engineer. They described the role as two jobs:

  1. Ship great content with technical narratives.
  2. Build the systems, agents, and tools that make great content repeatable.

Titles don't tell the full story, but they can provide a helpful starting point. Here's my working definition of a content engineer:

A content engineer is what emerges when AI workflows pull engineering thinking (system design, automation, infrastructure) into content production. The exact shape varies operator to operator. The role combines marketing strategy, storytelling, design, and creative direction with technical execution.

Examples include AI-first web development, AI-generated images and video, and content and analytics pipelines that leverage AI at their core. The work spans both content engineering and AI-driven marketing operations.

When I saw content engineer, it most closely matched the workflows we build and manage at DUO. So I swapped operator for content engineer. But a better response than renaming my role is showing how we content engineer in practice.

Five workflows we use weekly, some daily. The arc: input, build, publish, analyze, distribute.

1. Voice-to-text as the input layer

The first example starts off as less of a workflow and more of a tool. But it's arguably had the single greatest impact on my input and output quality. Voice-to-text is the input layer for an AI-orchestrated content practice. What's emerged are adjacent workflows that build on it.

I use Super Whisper. There are others, but this is what I started with. It sits in my menu bar. I hold a global shortcut, talk, and text appears in whatever app is in focus. Code, prompts, drafts, notes, project tickets.

Typing used to be the bottleneck on getting an idea into a system. Now it isn't.

Two sub-workflows turn voice into something structured:

AI interviewing me by voice. I ask ChatGPT or Claude to interview me on a topic. I answer via voice. The AI compiles the answers into a draft. This is how a lot of my LinkedIn content gets produced. I don't sit down to write a post. I have a voice conversation with the AI about an idea, and it shapes the result.

Async AI interviews for founders. I send a structured AI interview to a founder. They complete it on their own time, voice or text, no scheduled call needed. The AI compiles their answers into raw material I use to draft content. The founder saves the meeting time. I get more depth than a 30-minute call would produce. This is the same source material that powers DUO's founder storytelling work.

The pattern: voice removes the friction of getting ideas out of someone's head. Whether that someone is me or a client.

Limitations. Voice-to-text accuracy depends on accent and pace. Heavy jargon and proper names get transcribed phonetically and need cleanup. If you do a lot of code, voice doesn't replace typing for syntax. But for prose, prompts, and notes, the shift is real.

Tech stack. Super Whisper · ChatGPT · Claude


2. Build the site itself with Claude Code

The site you're reading was built without a no-code site builder like Webflow, and without a formal engineer or developer. A B2B content engineer can build a production marketing site directly through conversation with Claude Code, replacing the traditional designer-developer-CMS handoff. duo.ca runs on Next.js, hosted on Vercel, source on GitHub. The JSX itself was written almost entirely in those conversations. Articles, service pages, schema, navigation, SEO infrastructure. All of it.

A pause to debunk the myth that creating a site via AI is as simple as prompting your LLM. It's a serious undertaking even when AI is doing most of the work. Jumping straight to creation makes it easy to skip the research and planning needed to build something relevant and meaningful.

Done with proper process and discipline, this turns web design and development from a bottleneck into the marketing asset with the most potential.

The traditional handoff: designer to Figma, developer to code, content team to CMS, marketing to ticket queue. Two-week cycles for things that should take an hour. This collapses that.

The process I have landed on, in order:

  1. Plan, plan, plan, plan. Don't forget to plan. Before any code, work the planning layer. What does this page need to achieve? What's the content breakdown? What's the positioning? What does the visitor need at this stage? Half the build is solving these questions before opening the editor.

  2. Design via descriptive language and inspiration. Describe the look and feel in words: editorial, generous whitespace, terracotta accent. Name two or three inspiration sites and articulate why specific elements work. Logos, type, brand cues. The descriptive layer becomes the prompt.

  3. Skip Figma. I tried Claude → Figma → code as a pipeline. Figma is fine for graphic exploration, but the round trip back into a real coded site is messy. Easier to design directly in Claude Code, in actual JSX, where you can iterate live without translation.

  4. Repo structure matters. Clean breakdown: /app for routes, /lib for shared code, /content for MDX, /public for static assets. Each commit small and focused. Claude works better when the structure is legible.

  5. Start with the landing page. Once the repo is set up and the plan is clear, the landing page comes first. It sets the style guide for every page that follows.

  6. Build for the audience, not the vision. If you're like me, the new tools tempt you toward a big vision. Start small. Don't think of yourself, think about the audience. The site visitor. What is it they need to see? Do that, and nothing else. Make it easy for them to understand without sending them down a complicated web of pages.

Connect Claude to GitHub and Vercel for deployment.

Tech stack. Claude Code · Next.js · GitHub · Vercel

This is the AI-first web presence pattern in practice.


3. Publish long-form content as MDX files, not a CMS

I haven't used a CMS in three months. That's still strange for me to say out loud. Publishing long-form content as MDX files inside a Next.js repo replaces the traditional CMS workflow. For most of my career, I spent time weekly in the CMS, editing copy, adding new articles, adding internal and external links, and optimizing for SEO. When I worked with new clients, I would teach them how to operate within a CMS. Now articles, guides, and service pages live as MDX files in the duo.ca repo. The file is the content. There's no admin UI, no copy-paste-publish step.

Most teams still handle long-form the traditional way. Produce a draft (Notion, Google Docs, somewhere), maybe with AI help. Review. Copy the text into the CMS. Upload an image. Set the SEO fields. Hit publish. You can optimize the loop, but the shape stays the same: produce → review → paste-into-CMS → publish.

What changes with MDX: the file IS the content. Claude creates the file directly, fills in the metadata, writes the body, runs editorial checks, and pushes to the repo. Vercel deploys automatically. There's no paste step because the format Claude works in is the same format the site renders. You skip a whole layer.

The metadata at the top of each file (title, description, date, schema, FAQ entries) drives the page title, meta description, Article schema, FAQPage schema, sitemap entry, and internal-link signals. Once written, every change to one rule (say, FAQ structure) propagates across every article on the next deploy.
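
For illustration, that frontmatter can look something like this. The field names here are assumptions for the sketch, not duo.ca's actual schema:

```yaml
---
title: "What is a content engineer? Five AI workflows you can try today"
description: "Five AI-first content workflows from a content engineer's day."
date: "2026-04-29"
faq:
  - question: "What is a content engineer?"
    answer: "What emerges when AI workflows pull engineering thinking into content production."
---
```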

On top of MDX, Claude skills handle the SEO and GEO patterns. Audits surface gaps. Implementation applies the rules: FAQ frontmatter on every article, FAQPage JSON-LD, RSS feed, schema markup, internal linking suggestions, voice-rule enforcement. The operator writes the rules once. The system applies them across hundreds of pages.
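
For reference, the FAQPage JSON-LD that pattern emits looks roughly like this, one Question entry shown, built from the frontmatter sketch above:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is a content engineer?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "What emerges when AI workflows pull engineering thinking into content production."
      }
    }
  ]
}
```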

The system compounds

Voice rules, banned vocab, FAQ patterns, internal-link conventions, even editorial preferences (em dash policy, paragraph length, citation style) live in code or prompts. Every piece I ship feeds back. The voice doc tightens. The banned-vocab list grows. The FAQ patterns refine.

This is what people miss when they say AI content sounds generic. It does, when there's no compounding loop. It doesn't, when the loop exists. The system gets sharper than it was on day one. Not because of any single insight, but because the surface that captures decisions stays alive.

A note on shipping speed and quality

This site, four pillar guides, 50+ articles, and a rebuild off Webflow shipped in a week. Honest qualifier: any one of these articles could be sharper if I went through it line by line. I haven't yet, on purpose.

SEO and GEO signals take months to develop. Shipping fast and iterating after Google and AI engines start crawling is more effective than perfecting each piece before launch. Quality control is baked into the prompting layer (voice rules, banned vocab, FAQ structure, schema). The floor is high. The ceiling improves as data informs revisions.

This is content engineering thinking. The system, not the artifact, is the deliverable.

How to try this today.

  1. Add a /content/insights/ folder (or similar) to your Next.js repo.
  2. Each article is a single .mdx file with YAML metadata at the top.
  3. Set up lib/mdx.ts to parse frontmatter and render the body. (Use gray-matter for frontmatter, next-mdx-remote for body; a sketch follows this list.)
  4. Add a route like /insights/[slug]/page.tsx that reads the file at build time.
  5. Prompt Claude Code: "Write me an article about [topic]. Use the same metadata shape as the existing articles." It creates the file. You review the diff. Push to main.
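
A minimal sketch of steps 3 and 4, assuming the /content/insights layout above and a Next.js App Router project. The file paths, field names, and @/lib alias are illustrative, not duo.ca's actual implementation:

```typescript
// lib/mdx.ts — read MDX files and parse YAML frontmatter with gray-matter
import fs from "fs";
import path from "path";
import matter from "gray-matter";

const CONTENT_DIR = path.join(process.cwd(), "content", "insights");

export function getArticleSlugs(): string[] {
  return fs
    .readdirSync(CONTENT_DIR)
    .filter((file) => file.endsWith(".mdx"))
    .map((file) => file.replace(/\.mdx$/, ""));
}

export function getArticle(slug: string) {
  const raw = fs.readFileSync(path.join(CONTENT_DIR, `${slug}.mdx`), "utf8");
  // data = the YAML frontmatter object, content = the MDX body below it
  const { data, content } = matter(raw);
  return {
    meta: data as { title: string; description: string; date: string },
    body: content,
  };
}
```

And the route that reads the file at build time:

```tsx
// app/insights/[slug]/page.tsx — render the MDX body with next-mdx-remote/rsc
import { MDXRemote } from "next-mdx-remote/rsc";
import { getArticle, getArticleSlugs } from "@/lib/mdx";

// Pre-render every article found in /content/insights at build time.
export function generateStaticParams() {
  return getArticleSlugs().map((slug) => ({ slug }));
}

export default async function ArticlePage({ params }: { params: { slug: string } }) {
  const { meta, body } = getArticle(params.slug);
  return (
    <article>
      <h1>{meta.title}</h1>
      <MDXRemote source={body} />
    </article>
  );
}
```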

Limitations. This needs a backend Claude can write to directly. We host duo.ca on Vercel with the source on GitHub, so Claude Code edits files locally and pushes via git. If you're on a hosted CMS (Webflow, HubSpot, Wix, Squarespace), this won't work for you. There's no file system Claude can touch. The pattern is for teams who already use, or are willing to move to, a Next.js + Git + Vercel stack.

Honest trade-offs even when it does work:

  • Pro: incredibly fast. A new article is one MDX file. No data entry. Edits via Claude Code or direct file edit.
  • Con: fine-tuning is awkward. Editing the MDX directly means you're not seeing the styled view. Editing via Claude conversation means describing the change rather than seeing it.
  • Con: multi-contributor gets complicated. Git workflow is fine for technical contributors. Five marketers contributing simultaneously needs a CMS.

Tech stack. MDX · Next.js · GitHub · Vercel · Claude Code


4. Use AI as a research synthesizer for ops

The traditional morning: open PostHog tab, glance at dashboards, switch to GSC tab, scroll through queries, switch to Linear tab, type a ticket from memory. Half a day of context-switching across tabs that don't talk to each other.

Most mornings now I open Claude Code and ask it to read the data. "What did the pillar guides do this week? Are there cannibalization risks I missed? Any pages dropping in GSC?" Claude pulls PostHog and Google Search Console, drafts what it sees, opens Linear tickets for the things worth actioning. I review, decide, action. Twenty minutes.

Used to be five tabs and a coffee that went cold. Now it's one prompt and the coffee stays warm.

Using AI as a research synthesizer means analytics, search performance, and project tracking flow through one orchestration layer instead of separate tabs. This is where AI content engineering crosses over with marketing operations.

Tools in the loop:

  • PostHog for analytics (sessions, paths, conversions, dashboards)
  • Google Search Console for organic search performance
  • Linear for tracking what's open
  • Resend for newsletter delivery and audience

Each one connects to Claude:

  • PostHog via MCP (Model Context Protocol, Claude's standard for talking to outside tools). It runs dashboard queries, creates new events, builds dashboards by description. Telling it "make a dashboard that shows landing-page conversion by referrer for the last 28 days" produces the dashboard.
  • Linear via MCP. Issues open, status updates, ticket descriptions get drafted with full context, tickets close when work is done. Most of my tickets get created during a session, not by me typing into Linear.
  • Google Search Console via Claude in Chrome (browser-based, no native MCP yet). The browser tool opens GSC, pulls Pages and Queries data, surfaces patterns. Different mechanic, same outcome.
  • Resend via SDK in code. Newsletter drafts get composed, the audience gets managed, transactional emails fire (like notifications when someone subscribes). A sketch of that pattern follows this list.
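
A minimal sketch of the Resend pattern, assuming a RESEND_API_KEY environment variable; the addresses and helper name are hypothetical:

```typescript
// Fire a transactional email when someone subscribes (hypothetical helper).
import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY);

export async function notifyOnSubscribe(subscriberEmail: string) {
  // from/to are placeholders — swap in your verified sending domain.
  await resend.emails.send({
    from: "notifications@yourdomain.com",
    to: "you@yourdomain.com",
    subject: "New newsletter subscriber",
    html: `<p>${subscriberEmail} just subscribed.</p>`,
  });
}
```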

What this enables: I don't just publish content and hope. I publish, then prompt Claude to read the data three days later. It pulls PostHog and GSC, surfaces patterns, drafts recommendations into Linear. I review. I action. The next round of content is sharper because the data fed back. Sister piece: the AI MarTech stack as of May 2026 breaks down each tool further.

How to try this today.

  1. Start with one tool. PostHog is the easiest entry. Sign up, install the snippet on your site, then add the PostHog MCP server in Claude Desktop or Claude Code (claude mcp add --transport http posthog https://mcp.posthog.com/mcp).
  2. Ask Claude: "Show me top pages from the last 28 days, ordered by sessions." See if the answer matches your dashboard.
  3. Once that loop works, add Linear, Resend, and your Google Search Console (browser tool) one at a time.
  4. Build a morning routine: 10 minutes of "what do you see in the data?" conversations with Claude. Action what surfaces.

Limitations. This needs tools that have either (a) an MCP server or (b) a public API Claude can hit via browser tool. PostHog, Linear, Resend, and GitHub all have native MCP. Google Search Console doesn't yet, so we use the browser tool. If your stack is closed (legacy CRM, internal-only analytics platform), Claude can't read it without integration work first.

Tech stack. PostHog · Linear · Google Search Console · Resend · Claude Code · Claude in Chrome


5. Visual and video via reusable AI prompts

The newest workflow. Less mature, more experimental. This is the one I'm still finding the edges of. The first four are running daily. This one is closer to running experiments.

Visuals and video for AI-orchestrated content can be produced through reusable prompts (JSON-encoded style prompts for graphics, code-driven workflows for video) instead of one-off generations. Three threads.

JSON prompts for graphic style consistency

Most people generate a graphic on the first shot. The result is hit-or-miss. Style drifts between attempts. You end up with five graphics that almost look like a set, but not quite.

The technique I have landed on: reverse-engineer existing styles into JSON prompts. Find a style I like (mine, an artist's, a brand reference) and distill it into a structured prompt that captures composition, palette, lighting, line weight, and mood. Save the prompt. Reuse it.

When I want a new graphic, I feed the prompt plus the new content (an image, a topic). The AI applies the saved style. Output is consistent across pieces.

Time investment shifts: spend the time on the prompt, not on the first-shot generation. A good prompt earns its keep across hundreds of images.
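
Here's a hypothetical example of the shape a saved style prompt can take. Every field and value is illustrative; the point is that the style lives in structured, reusable form:

```json
{
  "style": {
    "composition": "single centered subject, generous negative space",
    "palette": ["terracotta", "off-white", "charcoal"],
    "lighting": "soft and diffuse, no hard shadows",
    "line_weight": "thin, consistent, editorial",
    "mood": "calm, precise, understated"
  },
  "subject": "swapped per generation, e.g. 'a laptop showing an analytics dashboard'",
  "constraints": ["no text in image", "16:9 aspect ratio"]
}
```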

Tech stack. ChatGPT · JSON prompts

Remotion for explainer video

Remotion is built into Claude Code. It's React for video. Code-driven, programmable, version-controlled like the rest of the stack.
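
To make "React for video" concrete, here's a minimal composition sketch. The component, copy, and timing are hypothetical, not the LinkedIn Voice video:

```tsx
// A title card that fades in over the first second (30 frames at 30fps).
import React from "react";
import { AbsoluteFill, Composition, interpolate, useCurrentFrame } from "remotion";

const Title: React.FC<{ text: string }> = ({ text }) => {
  const frame = useCurrentFrame();
  const opacity = interpolate(frame, [0, 30], [0, 1], { extrapolateRight: "clamp" });
  return (
    <AbsoluteFill style={{ justifyContent: "center", alignItems: "center" }}>
      <h1 style={{ opacity, fontSize: 80 }}>{text}</h1>
    </AbsoluteFill>
  );
};

// Registered in the Remotion root; renders a 5-second 1080p clip.
export const ExplainerVideo: React.FC = () => (
  <Composition
    id="Explainer"
    component={Title}
    defaultProps={{ text: "LinkedIn Voice" }}
    durationInFrames={150}
    fps={30}
    width={1920}
    height={1080}
  />
);
```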

The first time I used it: an explainer video for the LinkedIn Voice service page. Built in under 30 minutes. The video lives at the top of the LinkedIn Voice page. Go see it.

Caveat on the 30-minute number: the structure of the page itself helped inform the video. The narrative was already worked out via the page copy. Remotion turned narrative into motion, not narrative from scratch. Still, "30 minutes from intent to live video" is something my old workflow couldn't touch.

Tech stack. Remotion · Claude Code

Descript for clipping and testimonial recording

Descript is what I use to record calls and extract clips from those recordings. Founder interviews, client testimonials, content production sessions. The recordings are higher fidelity than Zoom captures, and the editing layer makes finding usable moments fast.

The workflow: record the call in Descript instead of Zoom. Once the call is done, the transcript is available immediately. I scan for moments worth pulling, mark them, and export a clip in seconds. For testimonials, I'll often pull two or three short clips from a 30-minute call. Each clip becomes its own asset for LinkedIn, the website, or a sales deck.

Tech stack. Descript

This is the frontier I'm watching. The first four workflows are settled practice. The fifth is still being shaped by what works.


What this enables

These workflows let one operator run an AI-orchestrated content practice across input, build, publish, analyze, and distribute. The system holds the parts that should compound. The operator focuses on the parts that don't compound: judgment, taste, the specific call.

That's the work of a content engineer at DUO. Not the writing. Not the design. Not the analytics. The system that connects them, plus the judgment to know what to ship.

This article was produced via the workflows it describes. Voice input via Super Whisper. Drafted in Claude Code. Published as MDX. The companion LinkedIn post is cross-format adaptation from the same source. Same insight, different surface, voice preserved across both. Related cluster reading: founder-led marketing and going to market.

If you're doing similar work, the DUO insights library has more of the pattern, including the AI content systems pillar guide. If you want to talk about applying it to your own practice, start with a discovery call.

Frequently asked


  • What is a content engineer?

    A content engineer is what emerges when AI workflows pull engineering thinking (system design, automation, infrastructure) into content production. The role combines marketing strategy, storytelling, design, and creative direction with technical execution. Examples include AI-first web development, AI-generated images and video, and content and analytics pipelines that have AI at their core.

  • How does voice-to-text fit into a content workflow?

    Voice-to-text is the input layer for an AI-orchestrated practice. Super Whisper lets you dictate to AI faster than you can type. From there, voice flows into two sub-workflows: AI interviewing you by voice on a topic, and async AI interviews sent to founders for content context. Voice removes the friction of getting ideas out of someone's head.

  • Can you really build a website with Claude Code?

    Yes. duo.ca is a Next.js site, source on GitHub, deployed on Vercel, written almost entirely through Claude Code. The process: planning phase first (purpose, content, positioning), then descriptive design language plus inspiration sites, then direct work in Claude Code. Skipping the Figma-to-code round trip saves hours. Pillar guides drafted in an afternoon are typical.

  • What tools does an AI-first content engineer use?

    Claude Code as the orchestration layer. Super Whisper for voice. MDX and Next.js for content. GitHub and Vercel for code and deploy. PostHog for analytics, Linear for tracking, Resend for email. Descript and Remotion for video. The criterion isn't 'best tool for the job' but 'best tool for an AI-orchestrated workflow,' meaning Claude can read it, write to it, and orchestrate around it.

  • Will this approach make AI content sound generic?

    Only if the system has no compounding loop. Voice rules, voice models, business context, and editorial preferences live in code and prompts that get sharper over time. Each piece of content feeds back into the system. Six months in, the output is more specific and voice-faithful than it was on day one. Generic AI content is a sign of a missing system, not a flaw in AI.

Written by Justin DeMarchi

B2B content engineer and founder of DUO. Eight-plus years running marketing and content systems for brands in tech, SaaS, and AI.
