n8n Loop Over Items and Split In Batches: Complete Guide (2026)
Looping is counterintuitive in n8n because n8n already processes items in a list automatically. Most nodes run once per item without any explicit loop. But when you need rate-limited iteration, nested processing, or batched API calls, you need explicit loop control. This guide covers Split In Batches, Loop Over Items, and the patterns that actually work in production.
Why n8n Does Not Need Loops (Usually)
When a node passes 100 items to the next node, that next node runs its logic 100 times automatically. You do not write a for loop. You write the logic for one item, and n8n applies it to all items. This is the most important concept to understand before you reach for Split In Batches.
If a Set node outputs 100 rows and you connect it to an HTTP Request node, n8n fires 100 HTTP requests without any explicit loop. The implicit loop is built into how items flow through a workflow.
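The implicit loop can be sketched in plain JavaScript. The `{ json: ... }` item shape is how n8n actually passes data between nodes; the `runNode` function and the email-domain logic are illustrative stand-ins for a node's per-item work:

```javascript
// n8n hands every node an array of items shaped like { json: {...} }.
// The node's logic is written for one item; n8n applies it to each item
// in the list -- the "loop" is just a map over the incoming array.
function runNode(items) {
  return items.map((item) => ({
    json: { ...item.json, emailDomain: item.json.email.split("@")[1] },
  }));
}

// 100 incoming items -> 100 outgoing items, no explicit for loop anywhere.
const incoming = Array.from({ length: 100 }, (_, i) => ({
  json: { email: `user${i}@example.com` },
}));
const outgoing = runNode(incoming);
```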
When You Actually Need Split In Batches
Four scenarios require explicit batch control:
1. Rate limiting: you need to space out API calls to stay under provider limits.
2. Nested data: for each item in a list you fetch another list, then process each item in that second list.
3. Large datasets: processing 10,000 items at once exhausts memory or times out.
4. Sequential dependencies: each iteration needs the result of the previous one.
Split In Batches (Legacy Node)
The original way to loop in n8n. The Split In Batches node outputs items in groups of N. After processing each batch, you connect the end of the workflow back to the Split In Batches node, which then outputs the next batch. The loop ends automatically when all batches are processed.
Configuration: batchSize sets how many items each iteration outputs. Leave Reset disabled on subsequent calls so n8n keeps tracking loop state internally; enabling Reset restarts the loop from the first batch. In recent versions the node has two outputs: a done output that fires once every batch has been processed, and a loop output that emits the current batch.
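Conceptually, the batching step is just slicing the item list into groups of batchSize. A standalone sketch of that behavior (not n8n's internal code):

```javascript
// Slice an item list into groups of `batchSize`, the way Split In Batches
// emits one group per loop iteration.
function splitInBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// 25 items with batchSize 10 -> batches of 10, 10, and 5.
const batches = splitInBatches(
  Array.from({ length: 25 }, (_, i) => i),
  10
);
```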
Loop Over Items (Newer, Simpler)
In newer n8n versions, the Loop Over Items node replaces the manual back-connection pattern. It has a clear loop body and an exit. You drop your processing inside the loop, and the node handles iteration without you needing to wire the output back manually. Use this if your n8n version supports it; it is cleaner.
When to Use Each Looping Pattern
Rate-Limited API Calls Pattern
You have 500 leads to enrich via an API that allows 10 requests per second. Use Split In Batches with batchSize 10, connect to HTTP Request, then a Wait node for 1 second, then loop back. 500 items divided into batches of 10 with a 1-second pause means the full run takes about 50 seconds and never exceeds the rate limit.
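The batch-then-wait shape can be sketched outside n8n as a throttled async loop. The `handler` parameter stands in for the HTTP Request step, and the pause plays the role of the Wait node (the function name and parameters are illustrative, not n8n API):

```javascript
// Process items in groups of `batchSize`, pausing `waitMs` between groups --
// the same shape as Split In Batches -> HTTP Request -> Wait -> loop back.
async function processInBatches(items, batchSize, waitMs, handler) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(handler))));
    if (i + batchSize < items.length) {
      // No pause after the final batch.
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
  return results;
}
```

With 500 leads, batchSize 10, and waitMs 1000, this yields 50 batches and roughly 50 seconds of wall-clock time, matching the estimate above.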
If the API has a burst allowance (e.g., 60 per minute but bursts of 20 allowed), adjust batchSize and Wait time accordingly. Do not assume rate limits are per-second; many APIs enforce per-minute or per-hour windows.
Nested Loop Pattern
You have 10 customers. For each customer, you fetch a list of their orders (varying counts). For each order, you fetch line items. This is two levels of nesting. In n8n, the way to do this is one Loop Over Items per level, or to rely on implicit item flow twice, which often works because each node's output is flattened into a single item list before the next node runs.
Pitfall: when you use Split In Batches for the outer loop, the inner nodes see items from the current batch only. You need to collect results across batches if the downstream expects the full list.
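The two-level expansion and the cross-batch accumulator can be sketched in plain JavaScript. The lookup tables stand in for the per-customer and per-order API calls (all names here are illustrative):

```javascript
// Stand-ins for the inner HTTP calls: orders per customer, line items per order.
const ordersByCustomer = { c1: ["o1", "o2"], c2: ["o3"] };
const lineItemsByOrder = { o1: ["a"], o2: ["b", "c"], o3: ["d"] };

function expandCustomers(customers) {
  const allLineItems = []; // collected across every outer-loop batch
  for (const customer of customers) {
    for (const order of ordersByCustomer[customer] ?? []) {
      for (const item of lineItemsByOrder[order] ?? []) {
        allLineItems.push({ customer, order, item });
      }
    }
  }
  return allLineItems; // the full list a downstream node expects
}

const expanded = expandCustomers(["c1", "c2"]);
```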
Aggregating Results After a Loop
A common pattern: loop over items, transform each one, then aggregate back into a single result. Use the Aggregate node after the loop to combine all iterations back into a single array. Without aggregation, downstream nodes see only the last batch.
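The shape the Aggregate node produces can be sketched as follows: N items collapse into a single item whose json carries the combined array (the function name is illustrative; the `{ json: ... }` item shape is n8n's):

```javascript
// Collapse N items into one item holding the combined array -- the same
// shape the Aggregate node produces when combining all item data.
function aggregateItems(items, fieldName) {
  return [{ json: { [fieldName]: items.map((item) => item.json) } }];
}

const looped = [{ json: { id: 1 } }, { json: { id: 2 } }, { json: { id: 3 } }];
const combined = aggregateItems(looped, "data");
// combined is a single item; combined[0].json.data holds all 3 results
```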
Memory Issues with Large Datasets
Running 100,000 items through an n8n workflow without batching will often crash the instance or cause timeouts. Split In Batches with a batchSize of 50 to 200 keeps memory usage flat. For truly large datasets, consider moving the work to an external queue (Redis, SQS) and processing one batch per n8n workflow execution.
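Why batching keeps memory flat can be illustrated with a generator: only one batch of items exists at a time, and each batch is eligible for garbage collection once handled. A minimal sketch (the range generator stands in for a paged read from a database or queue):

```javascript
// Yield one batch at a time so only `batchSize` items are materialized
// at any moment, instead of holding the full dataset in memory.
function* batchedRange(total, batchSize) {
  for (let start = 0; start < total; start += batchSize) {
    const end = Math.min(start + batchSize, total);
    yield Array.from({ length: end - start }, (_, i) => start + i);
  }
}

let processed = 0;
let batchCount = 0;
for (const batch of batchedRange(1000, 200)) {
  processed += batch.length; // handle one batch, then let it be collected
  batchCount += 1;
}
```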
Parallel vs Sequential Iteration
Default item flow hands a node all of its items in one pass: 100 items mean 100 operations fired back to back with no pause between them. Split In Batches forces the batches to run sequentially, but items within a batch are still handed to each node together. If you need fully sequential execution (one item fully processed before the next starts), use Split In Batches with a batchSize of 1.
Sequential is much slower but sometimes necessary for APIs that cannot handle concurrency, or for operations where ordering matters (posting messages to a chat, for example).
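The two execution styles can be sketched side by side. Sequential guarantees each item finishes before the next starts, which is what ordering-sensitive operations need (the function names are illustrative):

```javascript
// Concurrent: all handlers start at once; results come back in input order,
// but the underlying operations overlap in time.
async function runParallel(items, handler) {
  return Promise.all(items.map(handler));
}

// Sequential (batchSize 1): each item fully completes before the next begins,
// so side effects (e.g. chat messages) happen in input order.
async function runSequential(items, handler) {
  const results = [];
  for (const item of items) {
    results.push(await handler(item)); // next item waits for this one
  }
  return results;
}
```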
Batch Size Trade-offs
Small batches (1 to 10) keep memory low and make rate limiting precise, but add iteration overhead: more loop passes, more Wait pauses, longer total runtime. Large batches (200 and up) finish faster but invite the memory and timeout problems described above and make it easier to blow through rate limits. A batchSize of 50 to 200 is a reasonable default; drop to 1 only when strict ordering or sequential dependencies demand it.
Error Handling Inside Loops
If one item in a batch fails, do you want the whole loop to stop or continue? By default, n8n stops on error. To continue, enable "Continue On Fail" on the failing node. This lets one failed item be logged while the rest proceed. Pair with an error workflow to capture the failures for retry.
For truly critical loops, wrap the inner logic in a try-catch via the Code node. Return structured success/failure objects instead of throwing, and aggregate the failures at the end for separate handling.
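The structured success/failure pattern can be sketched as a wrapper that never throws out of the loop body (function and field names here are illustrative choices, not an n8n API):

```javascript
// Instead of letting one bad item abort the loop, tag each result as
// ok/failed and sort the failures out afterwards for separate handling.
async function safeProcess(items, handler) {
  const results = [];
  for (const item of items) {
    try {
      results.push({ ok: true, item, value: await handler(item) });
    } catch (err) {
      results.push({ ok: false, item, error: String(err) });
    }
  }
  return results;
}
```

After the loop, `results.filter((r) => !r.ok)` gives you the failed items to route into a retry or alerting branch.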
Common Gotchas
Forgetting to wire the loop back: if the last node of your processing branch is not connected back to the Split In Batches input, the loop runs once and stops after the first batch. The loop output feeds your processing nodes and the end of that branch must return to the node; the done output fires only when every batch has been handled.

Infinite loops: if the exit condition never triggers, the workflow hangs. Test with small datasets first.

Mixing Split In Batches with implicit item flow: some nodes ignore batch boundaries. If you see unexpected behavior, switch to Loop Over Items for explicit control.