
KB-006: Vercel Edge vs. Serverless Functions — When to Use Each

| Field | Detail |
| --- | --- |
| Document ID | KB-006 |
| Version | 1.0 |
| Date | March 2026 |
| Author | Gemi (Gemini — Atlantis AI) |
| Reviewed By | Shane Hardin |
| Applies To | All Atlantis Vercel projects with backend API routes |
| Difficulty | Intermediate |
| Est. Time | 10 minutes |
| Related Docs | KB-005 — Environment Variables, KB-007 — ISR, HT-003 — Vercel 502 Fix |

1. Overview

Vercel offers two types of backend compute for running server-side code: Serverless Functions (Node.js) and Edge Functions (V8 runtime). They look similar in code but behave very differently at runtime — choosing the wrong one for a given task causes performance problems, timeout errors, or unnecessary cold start delays.

This matters directly for AtlantisITS because the roofing lead engine API route (/api/roofing-lead) is a Serverless Function, and any future AI chat interfaces or streaming responses should use Edge Functions.


2. The Core Difference

The simplest way to think about it:

  • Serverless = a full Node.js server spun up in a single region, capable of heavy lifting
  • Edge = a lightweight V8 runtime running at the network edge closest to the user, optimized for speed

Serverless Function:
  User Request → Vercel (US-East region) → Full Node.js environment → Response
  [Can take 1–2s on cold start, but can do anything Node.js can do]

Edge Function:
  User Request → Nearest Vercel Edge Node (100+ locations globally) → V8 runtime → Response
  [Near-instant start, but limited runtime — no native Node.js modules]

3. Feature Comparison

| Feature | Serverless Functions (Node.js) | Edge Functions (V8) |
| --- | --- | --- |
| Runtime | Full Node.js | V8 JavaScript (no Node.js APIs) |
| Location | Single region (e.g., US-East-1) | Global edge — closest to user |
| Cold Start | 1–2 seconds if rarely used | Near-zero — always warm |
| Max Execution Time | 30s (Hobby) / 90s (Pro) | Must begin sending a response within 25 seconds (streaming can continue after) |
| Database Connections | ✅ Full support | ⚠️ Limited — use edge-compatible DBs |
| File System Access | ✅ Writable /tmp (rest of filesystem read-only) | ❌ Not available |
| Node.js Modules | ✅ All npm packages | ⚠️ Web-standard APIs only |
| Streaming Responses | ⚠️ Possible but not optimal | ✅ Native streaming support |
| AI/LLM Streaming | ⚠️ Workable | ✅ Ideal |
| PDF / File Generation | ✅ Best choice | ❌ Not supported |
| Webhook Processing | ✅ Best choice | ⚠️ Works for simple cases |

4. Atlantis Guidance — What to Use Where

Use Serverless Functions for:

  • Webhook endpoints — /api/roofing-lead, future plumbing/electrical lead handlers
  • n8n integrations — processing form data, calling n8n webhook, validating secrets
  • Database operations — MongoDB Atlas queries, Google Sheets writes
  • PDF generation — future invoice or report generation features
  • Twilio SMS — sending SMS alerts (requires Node.js Twilio SDK)
  • Any task running longer than a few seconds

Use Edge Functions for:

  • AI chat interfaces — streaming responses from Claude, OpenAI, or Gemini APIs
  • Auth checks — validating JWT tokens or session cookies before routing
  • Geo-routing — redirecting users based on location
  • Rate limiting — lightweight request throttling at the edge
  • A/B testing middleware — fast routing decisions without a full server
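To make the auth-check use case concrete, here is a minimal Edge sketch that rejects requests lacking a session cookie before they reach the rest of the app. The route and cookie name are assumptions for illustration, not existing Atlantis code:

```javascript
// Hypothetical Edge auth gate — route and cookie name are illustrative.
export const config = { runtime: 'edge' };

export default async function handler(req) {
  // Edge uses the Web-standard Request/Response API, not (req, res).
  const cookie = req.headers.get('cookie') || '';
  const hasSession = cookie
    .split(';')
    .some((part) => part.trim().startsWith('session='));

  if (!hasSession) {
    return new Response(JSON.stringify({ error: 'Unauthorized' }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' }
    });
  }

  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' }
  });
}
```

Because this does only string work on headers, it fits comfortably inside the Edge runtime's limits.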

💡 TIP: For the Atlantis AI Conference Room (future product), Edge Functions are the correct choice for the streaming chat interface. Serverless Functions handle the heavy backend work like saving transcripts or aggregating responses.


5. Code Examples

Serverless Function (existing Atlantis pattern)

// api/roofing-lead.js — Serverless Function
// Handles webhook processing, secret validation, n8n call
export default async function handler(req, res) {
  // Full Node.js — can use any npm package
  const secret = req.headers['x-atlantis-secret'];
  if (secret !== process.env.ATLANTIS_WEBHOOK_SECRET) {
    return res.status(401).json({ error: 'Unauthorized' });
  }

  // Call n8n — can await long-running operations
  const response = await fetch(process.env.N8N_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body)
  });

  if (!response.ok) {
    return res.status(502).json({ error: 'n8n webhook failed' });
  }

  return res.status(200).json({ success: true });
}
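To exercise a route like this locally (for example under `vercel dev`), a test request can be sent from a small script. The URL, secret value, and payload below are placeholders, not real Atlantis values:

```javascript
// Illustrative client call for a webhook route — all values are placeholders.
async function sendTestLead() {
  const res = await fetch('http://localhost:3000/api/roofing-lead', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-atlantis-secret': 'replace-with-real-secret'
    },
    body: JSON.stringify({ name: 'Test Lead', phone: '555-0100' })
  });
  return res.status;
}
```

A 401 here means the secret header does not match `ATLANTIS_WEBHOOK_SECRET`; a 200 means the route accepted the lead.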

Edge Function (future AI streaming pattern)

// api/chat.js — Edge Function
// Streams AI responses with near-zero latency
export const config = { runtime: 'edge' };

export default async function handler(req) {
  const { prompt } = await req.json();

  // Edge runtime — must use fetch, no Node.js modules
  const stream = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json'
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      stream: true,
      messages: [{ role: 'user', content: prompt }]
    })
  });

  // Return stream directly — Edge handles this natively
  return new Response(stream.body, {
    headers: { 'Content-Type': 'text/event-stream' }
  });
}

6. Cold Start — What It Is and When It Matters

A cold start happens when a Serverless Function hasn't been called recently and Vercel has to spin up a new Node.js instance from scratch. This adds 1–2 seconds to the first request.

Cold starts matter for:

  • AI chat interfaces where users expect instant responses
  • Real-time dashboards with frequent polling
  • Any user-facing interaction where latency is noticeable

Cold starts don't matter for:

  • Webhook endpoints (n8n doesn't care about a 1–2s delay)
  • Background processing tasks
  • Admin-only API routes called infrequently

💡 TIP: Edge Functions don't have cold starts — they're always warm because the V8 runtime is permanently resident at edge nodes globally. For the Atlantis Command Center dashboard, an Edge Function for the status API would eliminate any cold start latency.
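A sketch of what such a status endpoint could look like — the route name and payload shape are assumptions, not existing Command Center code:

```javascript
// Hypothetical api/status.js for the Command Center — illustrative only.
export const config = { runtime: 'edge' };

export default function handler() {
  // Always-warm Edge isolate: no cold start before this runs.
  const body = { status: 'ok', checkedAt: new Date().toISOString() };
  return new Response(JSON.stringify(body), {
    headers: { 'Content-Type': 'application/json' }
  });
}
```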


7. Troubleshooting

| Error / Symptom | Cause | Fix |
| --- | --- | --- |
| Function timeout on long operations | Serverless function exceeding 30s limit | Break into smaller operations or upgrade to Vercel Pro (90s limit) |
| Edge Function times out before responding | Heavy processing delays the initial response past the 25-second window | Move to a Serverless Function |
| require() not found in Edge Function | Node.js module used in Edge runtime | Edge only supports Web APIs — use fetch instead of axios, etc. |
| Cold start delay on webhook | Normal Serverless behavior | Acceptable for webhooks — use Edge only if truly latency-sensitive |
| AI streaming not working | Serverless Function buffering response | Switch to Edge Function with ReadableStream response |
| Database connection errors in Edge | Full DB client not supported in Edge | Use edge-compatible client (e.g., Prisma Data Proxy, Neon serverless) |
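For the require() symptom above, the fix is usually mechanical: the Edge runtime ships the Web-standard fetch, which covers what axios or node-fetch would do. A small illustrative helper (the function name is mine, not an Atlantis module):

```javascript
// Edge-safe JSON POST using the built-in fetch — no Node HTTP client needed.
export async function postJson(url, payload) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```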

8. Quick Reference

| Item | Value |
| --- | --- |
| Serverless = use for | Webhooks, DB ops, file generation, n8n calls, Twilio SMS |
| Edge = use for | AI streaming, auth checks, geo-routing, rate limiting |
| Serverless max time | 30s (Hobby) / 90s (Pro) |
| Edge initial response window | 25 seconds |
| Cold start | Serverless only — 1–2s if idle |
| Enable Edge runtime | export const config = { runtime: 'edge' } |
| Atlantis webhook route | /api/roofing-lead — Serverless (correct) |
| Future AI chat route | /api/chat — Edge (recommended) |
| Related doc | Environment Variables → KB-005 |
| Related doc | ISR for static content → KB-007 |

Document prepared by Gemi (Gemini — Atlantis AI Automations)

atlantisits.info | KB-006 | v1.0 | March 2026