How to Connect Claude AI to n8n and Make.com
Step-by-step guide to integrating the Claude API into your n8n or Make.com workflows for lead scoring, email triage, and more.
Introduction
You've heard the pitch: "connect AI to your workflows and automate everything." The reality is more specific than that. You need to pick a model, figure out the API, wire it into your automation platform, and write prompts that return structured, usable output instead of rambling paragraphs.
This guide walks you through connecting Claude AI to n8n and Make.com, step by step. We'll cover the API setup, the exact HTTP configuration for both platforms, real workflow recipes you can steal, and the prompt engineering patterns that make automation outputs reliable. If you've been building AI-powered automation systems or thinking about it, this is the practical starting point.
Why Claude for workflow automation
There are several large language models you could wire into n8n or Make.com. GPT-4o, Gemini, Llama, Mistral. Claude earns its spot in automation workflows for a few specific reasons.
Instruction following. Claude is unusually good at following complex, multi-step system prompts. When you tell it to return JSON with exactly four fields, score a lead on a 1-10 scale, and explain the reasoning in under 50 words, it does that. Consistently. This matters in automation where the next node in your workflow expects a specific format.
200K context window. Claude can process long inputs. If you're classifying support tickets that include entire email threads, or generating content briefs from 20-page research documents, you won't hit context limits the way you do with smaller models.
Structured output. Claude handles JSON output well. You can instruct it to return nothing but valid JSON, and it will. No markdown wrappers, no extra commentary. This is critical for automation — your n8n or Make.com workflow needs to parse the response without manual intervention.
Reasoning on complex tasks. For tasks like lead scoring, email triage, or content generation, Claude tends to produce more nuanced output than alternatives. It won't just parrot back keywords from the input; it will reason about intent, context, and priority.
If you're building AI and machine learning systems for your business, Claude's API is one of the most automation-friendly options available.
Setting up the Claude API
Before you touch n8n or Make.com, you need an Anthropic API key.
Go to console.anthropic.com and create an account. Navigate to API Keys in the left sidebar and generate a new key. Copy it and store it somewhere secure — you won't be able to see it again.
Pricing. Anthropic charges per token, with separate rates for input and output. As of early 2026:
- Claude 3.5 Haiku: $0.80 per million input tokens, $4 per million output tokens
- Claude 3.7 Sonnet: $3 per million input tokens, $15 per million output tokens
For context, 1 million tokens is roughly 750,000 words. A typical automation call — say, scoring a lead from a 200-word form submission — uses about 500-800 tokens total and costs fractions of a cent.
The API endpoint you'll use is:
https://api.anthropic.com/v1/messages
Every request needs two headers:
x-api-key: your-api-key-here
anthropic-version: 2023-06-01
Content-Type: application/json
And here's the basic request body:
{
"model": "claude-sonnet-4-20250514",
"max_tokens": 1024,
"system": "You are a lead scoring assistant. Respond only with valid JSON.",
"messages": [
{
"role": "user",
"content": "Score this lead: Name: Sarah Chen, Company: Meridian HVAC, Employees: 45, Source: Google Ads, Message: 'Looking for help automating our dispatch scheduling and customer follow-ups.'"
}
]
}
The response comes back like this:
{
"id": "msg_01ABC123",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "{\"score\": 8, \"reasoning\": \"Mid-size service business actively seeking automation for core operations. High intent, specific use case, decision-maker likely.\", \"priority\": \"high\", \"suggested_action\": \"schedule_call\"}"
}
],
"model": "claude-sonnet-4-20250514",
"stop_reason": "end_turn",
"usage": {
"input_tokens": 142,
"output_tokens": 67
}
}
Notice the response text is inside content[0].text. You'll need to parse that in your automation workflow.
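Before wiring this into an automation platform, it can help to sanity-check the call in a short Node.js script (18+, which has a global fetch). This is a sketch: the helper names buildClaudeRequest and extractClaudeJson are illustrative, not part of any SDK.

```javascript
// Build the fetch options for a Messages API call.
// Helper names are illustrative, not from an SDK.
function buildClaudeRequest(apiKey, systemPrompt, userContent) {
  return {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      system: systemPrompt,
      messages: [{ role: 'user', content: userContent }],
    }),
  };
}

// Claude's reply is the text of the first content block; if the
// system prompt demanded JSON, parse that string here.
function extractClaudeJson(responseBody) {
  return JSON.parse(responseBody.content[0].text);
}

// Usage (needs a real API key):
// const res = await fetch('https://api.anthropic.com/v1/messages',
//   buildClaudeRequest(process.env.ANTHROPIC_API_KEY,
//     'Respond only with valid JSON.', 'Score this lead: ...'));
// const scored = extractClaudeJson(await res.json());
```

The two helpers map directly onto the two halves of any n8n or Make.com setup: one node builds the request, the next parses content[0].text.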
Connecting Claude to n8n
n8n gives you full control over HTTP requests, which makes the Claude integration straightforward. Here's the step-by-step setup.
Step 1: Add an HTTP Request node. In your n8n workflow, add an HTTP Request node after whatever trigger starts your flow (webhook, schedule, form submission, etc.).
Step 2: Configure the request.
- Method: POST
- URL:
https://api.anthropic.com/v1/messages
- Authentication: None (you'll pass the API key as a header)
- Headers:
x-api-key → your Anthropic API key
anthropic-version → 2023-06-01
Content-Type → application/json
- Body Content Type: JSON
- Body:
{
"model": "claude-sonnet-4-20250514",
"max_tokens": 1024,
"system": "Your system prompt here.",
"messages": [
{
"role": "user",
"content": "{{ $json.inputField }}"
}
]
}
The {{ $json.inputField }} syntax pulls data from the previous node. Replace inputField with whatever field name your trigger provides — like formData, emailBody, or ticketDescription.
Step 3: Parse the response. Add a Set node or Function node after the HTTP Request. The Claude response text lives at:
{{ $json.content[0].text }}
If Claude returned JSON (which it should, if your system prompt told it to), parse it:
// Parse the JSON string Claude returned inside content[0].text
const response = JSON.parse($input.first().json.content[0].text);
return { json: response };
Now you have the parsed fields available as individual values for downstream nodes — routing to different branches, updating a CRM, sending notifications, whatever your workflow needs.
Step 4: Add error handling. Wrap the HTTP Request node in an error workflow or use n8n's built-in retry logic. Set retry on fail to 2 attempts with a 5-second wait. Claude's API occasionally returns 429 (rate limit) or 529 (overloaded) responses, and a simple retry handles most of these.
Teams building AI agent systems often start with n8n because it's self-hosted and has no per-execution fees, which keeps costs predictable.
Connecting Claude to Make.com
Make.com (formerly Integromat) uses a similar approach but with its own module structure.
Step 1: Add an HTTP "Make a request" module. After your trigger module, add an HTTP module and select "Make a request."
Step 2: Configure the module.
- URL:
https://api.anthropic.com/v1/messages
- Method: POST
- Headers:
x-api-key → your API key
anthropic-version → 2023-06-01
- Body type: Raw
- Content type: JSON
- Request content: paste the same JSON body structure from the n8n example above, but use Make.com's double-curly-brace variable syntax to inject dynamic data from earlier modules.
Step 3: Parse the response.
Add a JSON Parse module after the HTTP module. Map the body output from the HTTP module into the parse module. This extracts the response into usable fields.
To get Claude's actual text response, you'll access:
content[].text — that is, the text field of the first item in the content array. (Make.com arrays are 1-indexed, so in a mapping you can use get(content; 1) to pull the first item.)
If Claude returned JSON text, add another JSON Parse module targeting that text field to break it into individual values.
Step 4: Route the output. Use a Router module after parsing to send the response down different paths based on the values Claude returned. For example, if Claude scored a lead as "high priority," route to a Slack notification and CRM update. If "low priority," route to an email drip sequence.
Make.com's visual scenario builder makes it easy to see the branching logic at a glance, which is helpful when you're managing workflows for AI-driven revenue systems.
Practical workflow recipes
Here are four workflows you can build today. Each one follows the same pattern: trigger → collect data → send to Claude → parse response → take action.
1. Lead scoring from form submissions
Trigger: Webhook receives form data from your website.
System prompt:
You are a lead scoring assistant for a B2B service company.
Score each lead from 1-10 based on company size, stated need,
and buying intent. Respond with ONLY valid JSON in this format:
{"score": number, "priority": "high"|"medium"|"low", "reason": "string under 30 words", "next_step": "schedule_call"|"send_info"|"add_to_nurture"}
User message: Insert the form fields — name, company, message, source.
Action: Route based on priority. High-priority leads get an immediate Slack ping and automated follow-up sequence. Medium leads go to an email nurture. Low leads get logged for review.
This is one of the highest-ROI automations you can build. We've written about AI lead qualification for service businesses in more detail if you want to go deeper on the scoring logic.
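The glue on either side of the API call for this recipe can be sketched as two small functions: one assembles the user message from the webhook's form fields, the other routes on the parsed result. The field names match the JSON schema in the system prompt above; the function names and action strings are placeholders for your own nodes.

```javascript
// Assemble the user message from webhook form fields.
// Field names (name, company, source, message) are illustrative.
function buildLeadMessage(form) {
  return `Score this lead: Name: ${form.name}, Company: ${form.company}, ` +
         `Source: ${form.source}, Message: '${form.message}'`;
}

// Route the parsed score. The action strings are placeholders
// for your Slack, CRM, and email nodes.
function routeLead(scored) {
  switch (scored.priority) {
    case 'high':
      return { actions: ['slack_ping', 'start_followup_sequence'], lead: scored };
    case 'medium':
      return { actions: ['add_to_nurture'], lead: scored };
    default:
      return { actions: ['log_for_review'], lead: scored };
  }
}
```

In n8n this logic lives in a Function node after the parse step; in Make.com the equivalent is a Router with filters on the priority field.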
2. Email triage and auto-response
Trigger: New email arrives (via IMAP node in n8n, or Gmail/Outlook module in Make.com).
System prompt:
You are an email triage assistant. Classify each email and
draft a response. Return ONLY valid JSON:
{"category": "support"|"sales"|"billing"|"spam"|"other",
"urgency": "high"|"medium"|"low",
"draft_response": "string",
"needs_human": true|false}
User message: The full email body and subject line.
Action: If needs_human is false and category is "support," send the draft response automatically. If needs_human is true, create a task for your team with the draft as a starting point. Spam gets archived. Sales inquiries get forwarded to your pipeline.
Businesses that handle high email volume can pair this with a conversational AI chatbot to cover both email and live chat channels simultaneously.
3. Content brief to draft article
Trigger: New row added to a Google Sheet or Airtable with a content brief (topic, target keywords, word count, audience).
System prompt:
You are a content writer. Write an article based on the brief
provided. Follow these rules:
- Match the specified word count within 10%
- Use the target keyword naturally 3-5 times
- Write in a conversational, expert tone
- Include a meta description under 155 characters
Return JSON: {"title": "string", "meta_description": "string", "body": "string in markdown"}
Action: Push the draft to a Google Doc, Notion page, or your CMS. Flag it for human review before publishing.
4. Customer support ticket classification
Trigger: New support ticket created in your helpdesk (via webhook or integration module).
System prompt:
Classify this support ticket. Return ONLY valid JSON:
{"category": "bug"|"feature_request"|"account"|"billing"|"how_to",
"product_area": "string",
"severity": "critical"|"high"|"medium"|"low",
"suggested_response": "string under 100 words"}
Action: Auto-assign the ticket to the right team based on category and product_area. If severity is critical, trigger a PagerDuty or Slack alert. Attach the suggested_response as an internal note for the support agent.
These automations are the kinds of systems we build through our AI automation services. The patterns are repeatable across industries — what changes is the system prompt and the specific actions after Claude responds.
System prompt best practices for automation
Writing prompts for automation is different from writing prompts for interactive chat. In automation, you don't get a second chance. The prompt runs, the response gets parsed, and the next node either works or breaks. Here's what matters.
Always demand structured output. Start your system prompt with "Respond with ONLY valid JSON" or "Return ONLY a JSON object." Include the exact schema you expect. Claude follows these instructions reliably when they're explicit.
Include an example. One-shot examples dramatically improve consistency. Add something like:
Example input: "John from Acme Corp, 200 employees, wants CRM integration"
Example output: {"score": 7, "priority": "high", "reason": "Mid-market company with specific integration need"}
Be specific about constraints. "Under 30 words" is better than "be concise." "Score from 1-10" is better than "rate the lead." Vague instructions produce vague output.
Set the role clearly. "You are a lead scoring assistant for a B2B HVAC service company" gives Claude the context it needs to make reasonable judgments about industry-specific signals.
Don't over-prompt. If your system prompt is 2,000 words long, something is wrong. Keep it under 500 words. The shorter and more specific the prompt, the more consistent the output.
For voice AI systems and real-time applications, these same principles apply — you just need even tighter constraints on response length and format.
Error handling and reliability
Any production automation needs to handle failures gracefully. Here's what to watch for with the Claude API.
Rate limits (429 errors). Anthropic enforces rate limits based on your usage tier. New accounts start with lower limits. When you hit them, you get a 429 response with a retry-after header. In n8n, set the HTTP Request node to retry on failure. In Make.com, use the built-in error handler module with a Sleep module before retrying.
Timeouts. Claude can take 10-30 seconds on complex requests. Set your HTTP timeout to at least 60 seconds in both platforms. In n8n, this is under the HTTP Request node's settings. In Make.com, it's in the HTTP module's advanced settings.
Invalid JSON responses. Even with clear instructions, Claude occasionally wraps JSON in markdown code blocks or adds a brief explanation. Defend against this in your parsing step:
let text = $input.first().json.content[0].text;
// Strip markdown code blocks if present
text = text.replace(/```json\n?/g, '').replace(/```\n?/g, '').trim();
const parsed = JSON.parse(text);
return { json: parsed };
Fallback logic. If Claude's API is down or returns an error after retries, don't let your workflow silently fail. Send the original input to a queue (Google Sheet, database, or internal notification) for manual processing. No automation should be a black hole where leads or tickets disappear.
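The retry behavior described above can be sketched as a small wrapper, with the status check and backoff calculation split into pure helpers. The function names are ours, not n8n's or Make.com's, and this is a minimal sketch rather than production middleware.

```javascript
// Only rate limits (429) and temporary overload (529) are worth
// retrying; anything else should fail fast to your fallback path.
function isRetryable(status) {
  return status === 429 || status === 529;
}

// Honor the retry-after header when present; otherwise use
// exponential backoff: 1s, 2s, 4s, ...
function retryDelayMs(attempt, retryAfterSeconds) {
  if (retryAfterSeconds) return retryAfterSeconds * 1000;
  return 1000 * 2 ** attempt;
}

// Call fn, retry retryable errors up to maxAttempts, rethrow the rest.
// fn is expected to throw an Error carrying a numeric `status`.
async function withRetries(fn, maxAttempts = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts || !isRetryable(err.status)) throw err;
      await new Promise((resolve) =>
        setTimeout(resolve, retryDelayMs(attempt, err.retryAfterSeconds)));
    }
  }
}
```

In n8n, the built-in "Retry On Fail" setting covers the same ground; this sketch is for cases where you need the logic inside a Code node or your own middleware.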
If you want a team to handle the reliability engineering for you, our process covers how we build and monitor these systems for clients.
Cost management
Claude API costs are low per-call, but they add up at scale. Here's how to keep them under control.
Use the right model for the job. Claude 3.5 Haiku costs roughly 75% less than Sonnet. For simple classification tasks (spam detection, basic ticket routing, sentiment analysis), Haiku is fast and accurate enough. Save Sonnet for tasks that need real reasoning — lead scoring with nuanced criteria, content generation, complex email drafting.
You can switch models by changing the model field in your request body:
"model": "claude-3-5-haiku-20241022" // for simple tasks
"model": "claude-sonnet-4-20250514" // for complex tasks
Cache repeated context. If your system prompt is long and doesn't change between calls, you're paying for those input tokens every time. Anthropic offers prompt caching that can reduce input costs by up to 90% for repeated system prompts. You enable it by adding a cache_control block to your system message.
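With caching enabled, the system field becomes an array of content blocks, and the block you want cached carries a cache_control marker. A sketch of the request body, based on Anthropic's prompt caching format:

```json
{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "Your long, unchanging system prompt goes here...",
      "cache_control": { "type": "ephemeral" }
    }
  ],
  "messages": [
    { "role": "user", "content": "{{ $json.inputField }}" }
  ]
}
```

Only the portion up to and including the cache_control block is cached, so put the stable system prompt there and keep the per-call data in the user message.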
Limit output tokens. Set max_tokens to the minimum you need. Lead scoring? 200 tokens is plenty. Article drafting? You might need 4,000. Don't leave it at the default when a smaller value works.
Batch where possible. Instead of sending one API call per form submission, collect submissions over a 5-minute window and score them in a single call. Claude can handle a prompt like "Score these 10 leads" and return an array of results. One call instead of ten.
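The batching step can be sketched as two helpers: one folds several submissions into a single prompt that asks for a JSON array back, and one pairs the parsed array up with the original submissions. The names and prompt wording are illustrative.

```javascript
// Fold several form submissions into one scoring prompt.
// Claude is asked to return a JSON array, one object per lead,
// in the same order as the numbered list.
function buildBatchPrompt(leads) {
  const lines = leads.map(
    (lead, i) =>
      `${i + 1}. Name: ${lead.name}, Company: ${lead.company}, Message: '${lead.message}'`
  );
  return `Score these ${leads.length} leads. Return ONLY a JSON array with ` +
         `one object per lead, in the same order:\n${lines.join('\n')}`;
}

// Pair the parsed score array back up with the original submissions.
function mergeScores(leads, scores) {
  return leads.map((lead, i) => ({ ...lead, ...scores[i] }));
}
```

Because the array comes back in submission order, mergeScores can zip the two lists positionally; include an explicit id per lead if you want a stronger guarantee.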
Understanding what an AI revenue system actually does helps you prioritize which workflows to automate first, so you're spending API credits on the highest-value tasks.
Frequently asked questions
Do I need coding skills to connect Claude to n8n or Make.com? Minimal. Both platforms are visual workflow builders. The main "code" you'll write is a small JavaScript snippet to parse Claude's JSON response, and you can copy the examples from this guide directly. If you can edit a JSON object, you can build this integration.
How fast does Claude respond through the API? Simple requests (classification, scoring) typically return in 2-5 seconds. Longer content generation tasks can take 10-30 seconds. Set your HTTP timeout to 60 seconds to be safe.
Is there a native Claude integration for n8n or Make.com? n8n has a community node for Anthropic that simplifies the setup. Make.com does not have a native Anthropic module as of early 2026, so you use the generic HTTP module. Both approaches work fine — the HTTP module method gives you more control over request parameters.
What happens if I exceed the API rate limits? Anthropic returns a 429 error with a retry-after header. Your workflow should retry automatically after the specified delay. For most small-to-medium businesses, the default rate limits are more than sufficient. If you need higher limits, you can request a tier upgrade through the Anthropic console.
Can I use Claude for real-time customer interactions, not just background workflows? Yes. Claude's response times are fast enough for near-real-time use. You can connect it to live chat widgets, SMS auto-responders, and phone systems. The same API call structure applies — you just need lower latency on your middleware.
How do I keep my API key secure in n8n or Make.com? In n8n, store the API key as a credential (Settings → Credentials → HTTP Header Auth). In Make.com, use the built-in connection storage. Never hardcode API keys into workflow configurations that might be shared or exported.
Get started
You now have everything you need to connect Claude to n8n or Make.com: the API setup, the HTTP configuration for both platforms, parsing logic, four workflow recipes, and the prompt patterns that produce reliable structured output.
Start with one workflow. Lead scoring is the easiest win for most businesses — it's a single API call that turns unstructured form data into a prioritized, actionable score. Once that's running, expand to email triage, content generation, or ticket classification.
If you'd rather have a team build and maintain these integrations for you, get in touch. We design AI and machine learning systems that connect to your existing tools and run reliably without babysitting. Schedule a free consultation and we'll map out which workflows will move the needle for your business.
Related Articles
How to Build AI Workflows with n8n (Practical Guide for Businesses)
A practical guide to building AI-powered automation workflows with n8n — covering email triage, lead qualification, content drafting, invoice processing, and support ticket routing.
n8n vs Make vs Zapier for AI Automation: Which One Fits Your Business
A direct comparison of n8n, Make, and Zapier for building AI-powered business workflows — covering pricing at different scales, AI-specific features, and when each tool is the right choice.
Claude API vs ChatGPT API for Business Tools
A direct comparison of the Claude API and ChatGPT API for building business tools — covering model lineups, context windows, instruction following, pricing, and when to pick each one.
Need Help Implementing This?
Our team at Luminous Digital Visions specializes in SEO, web development, and digital marketing. Let us help you achieve your business goals.
Get Free Consultation