How to Connect OpenAI to n8n to Automate Client Workflows
Connecting OpenAI to n8n is the single most impactful thing you can do as an AI automation agency owner. Once you have GPT wired into your workflows, you can build lead qualifiers, email writers, summarizers, chatbots, and dozens of other automations that clients will pay serious money for.
This guide walks you through the complete setup — from getting your OpenAI API key to chaining multiple AI steps together in production-ready workflows.
Prerequisites
- An active OpenAI account with billing enabled
- An n8n instance (Cloud or self-hosted)
- Basic familiarity with n8n's canvas interface
Step 1: Get Your OpenAI API Key
Go to platform.openai.com, sign in, and navigate to API Keys in the left sidebar. Click Create new secret key, give it a name like n8n-production, and copy the key immediately — you won't be able to see it again.
Set a usage limit under Billing → Usage Limits. For client projects, set a hard limit of $50/month until you understand your usage patterns. Nothing kills a client relationship faster than an unexpected $500 AI bill.
Step 2: Add OpenAI Credentials in n8n
In n8n, go to Settings → Credentials → Add Credential. Search for OpenAI and select it. You'll see a simple form with one field:
- API Key: paste your OpenAI secret key
Click Save. The credential is now available to all OpenAI nodes in your workspace. Name it OpenAI Production to distinguish it from test credentials.
Step 3: Add Your First OpenAI Node
Open a new workflow and add an OpenAI node. You'll see several resource options:
- Chat — conversational completions (most common)
- Text — legacy completions (older models)
- Image — DALL-E image generation
- Audio — Whisper transcription
- File — file uploads for fine-tuning
Select Chat as the resource and Message a Model as the operation, then configure:
- Credential: OpenAI Production
- Model: gpt-4o-mini (best price/performance for most tasks)
- Messages: click Add Message
- Role: system, Content: your system prompt
- Add another message with Role: user, Content: your dynamic input
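Under the hood, this node sends a standard Chat Completions request. If you ever need to make the same call from a Code or HTTP Request node, the payload looks roughly like this (a sketch — the model name and prompt strings are illustrative placeholders):

```javascript
// Sketch of the request body the Chat → Message a Model operation builds.
// Model and prompts here are placeholders, not fixed values.
function buildChatRequest(systemPrompt, userInput) {
  return {
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userInput },
    ],
  };
}

// POST this as JSON to https://api.openai.com/v1/chat/completions
// with an "Authorization: Bearer <your API key>" header.
const body = buildChatRequest(
  'You are a B2B sales qualification expert.',
  'Company: Acme Corp, Industry: Logistics'
);
```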
Step 4: Writing Effective System Prompts for Automation
The system prompt is what makes your AI node actually useful. Here are proven templates for common agency use cases:
Lead Qualifier:
You are a B2B sales qualification expert. Given company information, output a JSON object with: qualified (true/false), score (1-10), reason (string), and next_action (string). Be concise and output only valid JSON.
Email Writer:
You are a cold email copywriter. Write short, personalized cold emails under 100 words. Sound human, not salesy. Use the prospect's company and role to personalize. Output only the email body, no subject line.
Summarizer:
You summarize business content into 3-bullet executive summaries. Each bullet is under 15 words. Output only the 3 bullets, no intro text.
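The Lead Qualifier template promises a strict JSON shape, but models occasionally drift. Before routing leads on the result, it's worth a quick sanity check — the field names below match the template above, but the helper itself is my own sketch, not an n8n built-in:

```javascript
// Validate the Lead Qualifier's JSON output before branching on it.
// Field names follow the system prompt template above.
function isValidQualification(obj) {
  return (
    obj !== null && typeof obj === 'object' &&
    typeof obj.qualified === 'boolean' &&
    typeof obj.score === 'number' && obj.score >= 1 && obj.score <= 10 &&
    typeof obj.reason === 'string' &&
    typeof obj.next_action === 'string'
  );
}
```

Drop this into a Code node and send invalid results down an error branch instead of your outreach sequence.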
Step 5: Using Dynamic Data in Prompts
The real power comes from injecting data from previous nodes into your OpenAI prompts. In the user message field, use n8n expressions:
- {{$json.company_name}} — company name from the previous node
- {{$json.job_title}} — job title field
- {{$json.website_content}} — scraped website text
- {{$('Google Sheets').item.json.email}} — reference a specific node's data by name
Example user message for email generation:
Write a cold email for: Name: {{$json.first_name}}, Company: {{$json.company}}, Title: {{$json.title}}, Pain point based on industry: {{$json.industry}}.
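If you're unsure how an expression will resolve, it helps to picture the substitution n8n performs at runtime. This toy resolver mimics the {{$json.field}} pattern (an illustration only, not n8n's actual expression engine):

```javascript
// Toy stand-in for n8n's {{$json.field}} substitution -- illustration only.
function renderPrompt(template, json) {
  return template.replace(/\{\{\$json\.(\w+)\}\}/g, (_, field) => json[field] ?? '');
}

const msg = renderPrompt(
  'Write a cold email for: Name: {{$json.first_name}}, Company: {{$json.company}}',
  { first_name: 'Dana', company: 'Acme' }
);
// msg === 'Write a cold email for: Name: Dana, Company: Acme'
```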
Step 6: Parsing JSON Responses from OpenAI
When you need structured output (scores, classifications, extracted data), force JSON output. Add this to your system prompt: "Always respond with valid JSON only. No markdown code blocks, no extra text."
Then add a Code node after the OpenAI node to parse the response:
```javascript
const raw = $input.first().json.message.content;
const parsed = JSON.parse(raw);
return [{ json: { ...($input.first().json), ai_result: parsed } }];
```
For more robust parsing, add error handling:
```javascript
const raw = $input.first().json.message.content;
try {
  const parsed = JSON.parse(raw);
  return [{ json: parsed }];
} catch (e) {
  return [{ json: { error: true, raw } }];
}
```
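Models also sometimes wrap JSON in markdown code fences despite instructions. A common defensive step (my own helper, not part of n8n or the OpenAI node) is to strip fences before parsing:

```javascript
// Strip optional markdown code fences (``` or ```json) before parsing
// model output, and fall back gracefully when parsing still fails.
function parseModelJson(raw) {
  const cleaned = raw
    .replace(/^\s*`{3}(?:json)?\s*/i, '')
    .replace(/\s*`{3}\s*$/, '')
    .trim();
  try {
    return { ok: true, data: JSON.parse(cleaned) };
  } catch (e) {
    return { ok: false, raw };
  }
}
```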
Step 7: Chaining Multiple AI Steps
Complex workflows chain multiple OpenAI nodes together. Here's an example pipeline for automated prospect research:
- Node 1 — Summarize website: Take raw website text, summarize into 3 key value props
- Node 2 — Identify pain points: Based on summary, infer top 3 business pain points
- Node 3 — Write cold email: Use pain points to write a hyper-personalized opening line
- Node 4 — Generate subject line: Create 3 subject line variants for A/B testing
Each node passes its output to the next using n8n's expression syntax. The final output is a completely personalized outreach sequence generated without any human input.
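Conceptually, each stage is a function of the previous stage's output. With a mock model call standing in for the OpenAI node (the callModel stub below is a placeholder, not a real API call), the pipeline reads like this:

```javascript
// callModel is a mock stand-in for an OpenAI node -- not a real API call.
const callModel = (prompt) => `[AI: ${prompt}]`;

const websiteText = 'raw website text scraped earlier in the workflow';

// Each stage's output becomes part of the next stage's prompt.
const summary = callModel('Summarize into 3 value props: ' + websiteText);
const pains = callModel('Infer top 3 pain points from: ' + summary);
const email = callModel('Write a personalized opening line using: ' + pains);
const subjects = callModel('Write 3 subject line variants for: ' + email);
```

In n8n itself, each of these lines is an OpenAI node whose user message references the previous node via an expression.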
Step 8: Handling Rate Limits and Errors
OpenAI has rate limits that will break your workflows if you're processing large batches. Protect your workflow:
- Add a Wait node between batches: set to 1 second for gpt-4o-mini, 3 seconds for gpt-4o
- Use SplitInBatches node: batch size of 5 with a 2-second wait
- Wrap OpenAI nodes in Error Handler subworkflows to retry on 429 errors
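The SplitInBatches-plus-Wait pattern can be reasoned about in plain code. Two small helpers illustrate the logic: chunking items into batches, and computing an exponential backoff delay for 429 retries (the base delay and cap are my own illustrative choices):

```javascript
// Split items into batches of `size` (mirrors what SplitInBatches does).
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Exponential backoff delay in ms for retry attempt n (0-indexed),
// capped at 30s. A 2s base matches the 2-second wait suggested above.
function backoffMs(attempt, baseMs = 2000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```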
Real Client Workflow Examples
Here are three production workflows you can build immediately with this OpenAI+n8n setup:
1. Automated proposal generator: Client fills out a Typeform → n8n pulls form data → OpenAI writes a customized proposal → Google Docs creates the proposal → email sends it automatically. Saves 2–3 hours per proposal.
2. Support ticket classifier: New email arrives → OpenAI classifies urgency and category → routes to correct team member in Slack → creates task in ClickUp → sends auto-acknowledgment to customer. Response time drops from hours to seconds.
3. Content repurposer: Blog post published → n8n fetches content → OpenAI generates Twitter thread, LinkedIn post, and email newsletter version → all posted to respective platforms automatically. One post becomes five pieces of content.
Cost Optimization Tips
API costs can eat your margins if you're not careful. Here's how to keep costs low:
- Use gpt-4o-mini for classification, scoring, and short-form tasks — it's 95% cheaper than gpt-4o with comparable quality for structured tasks
- Set max_tokens to limit output length — for JSON scoring, 200 tokens is plenty
- Cache repeated lookups in a Google Sheet or Airtable to avoid re-calling the API
- Use temperature: 0 for structured/consistent outputs and temperature: 0.7 for creative writing
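To sanity-check margins before quoting a client, estimate per-run cost from token counts. The per-million-token rates below are illustrative snapshots, not guaranteed figures — confirm against OpenAI's current pricing page before relying on them:

```javascript
// Illustrative per-1M-token rates in USD; check OpenAI's pricing page
// for current numbers before using these in a quote.
const RATES = {
  'gpt-4o-mini': { input: 0.15, output: 0.60 },
  'gpt-4o':      { input: 2.50, output: 10.00 },
};

function estimateCost(model, inputTokens, outputTokens) {
  const r = RATES[model];
  return (inputTokens * r.input + outputTokens * r.output) / 1e6;
}

// e.g. a lead-scoring run at ~500 input / 200 output tokens:
const perRun = estimateCost('gpt-4o-mini', 500, 200);
```

At these sample rates, a thousand such runs costs well under a dollar on gpt-4o-mini.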
For more on building AI agent workflows, check out our complete guide on building AI agents in n8n and see how n8n compares to other tools in our n8n vs Make vs Zapier comparison.
Want to learn how to build and sell AI automations? Join the free AI Agency Sprint community.
