Last updated: Aug 4, 2025, 11:26 AM UTC

Supabase Cloud Setup Guide

Status: Complete

Overview

This guide walks through setting up Supabase on the cloud platform (supabase.com) for production use. The cloud platform provides managed hosting, automatic backups, global CDN, and enterprise features.

Cloud vs Self-Hosted Comparison

Feature | Supabase Cloud | Self-Hosted
Setup Time | 2 minutes | 30+ minutes
Maintenance | Fully managed | Your responsibility
Backups | Automatic daily | Manual setup
Updates | Automatic | Manual
SSL/TLS | Included | Manual setup
CDN | Global edge network | Manual setup
Monitoring | Built-in dashboard | Manual setup
Cost | Free tier + usage-based | Infrastructure costs
Compliance | SOC 2, HIPAA available | Your responsibility

Pricing Tiers

Free Tier

  • 2 projects
  • 500MB database
  • 1GB storage
  • 2GB bandwidth
  • 50,000 monthly active users
  • 7-day log retention

Pro Tier ($25/month)

  • Unlimited projects
  • 8GB database
  • 100GB storage
  • 250GB bandwidth
  • 100,000 monthly active users
  • 90-day log retention
  • Point-in-time recovery

Team/Enterprise

  • Custom limits
  • SLA guarantees
  • Dedicated support
  • Compliance certifications

Account Setup

Step 1: Create Account

  1. Navigate to https://app.supabase.com
  2. Sign up with:
    • GitHub (recommended)
    • Email/Password
    • SSO (Enterprise)

Step 2: Verify Email

If you signed up with email, check your inbox for the verification link.

Project Creation

Step 1: New Project

  1. Click "New Project"
  2. Configure:
    • Organization: Select or create new
    • Project Name: Your application name
    • Database Password: Strong password (save securely!)
    • Region: Choose closest to users
    • Pricing Plan: Start with Free

Step 2: Region Selection

Choose based on your primary user base:

Region | Location | Best For
us-east-1 | N. Virginia | Eastern US
us-west-1 | N. California | Western US
eu-west-1 | Ireland | Europe
eu-central-1 | Frankfurt | Central Europe
ap-southeast-1 | Singapore | Southeast Asia
ap-northeast-1 | Tokyo | Japan/Korea
ap-south-1 | Mumbai | India
sa-east-1 | São Paulo | South America

Step 3: Wait for Provisioning

Takes 1-2 minutes. You'll receive:

  • Project URL
  • API Keys
  • Database credentials

Configuration

Step 1: Retrieve API Keys

Navigate to Settings > API:

# Project URL
https://[PROJECT_REF].supabase.co

# Anon Key (public)
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# Service Role Key (secret - server-side only!)
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
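
These two values map directly onto client initialization. A minimal sketch using supabase-js (the @supabase/supabase-js package), with the placeholder values above:

// Browser-safe client: the anon key is public, and RLS still applies.
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  'https://[PROJECT_REF].supabase.co', // Project URL from Settings > API
  'eyJhbGc...'                         // Anon key; never the service role key here
)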

Step 2: Database Connection

Settings > Database:

# Direct connection (long-running servers, migrations)
postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres

# Connection pooling via PgBouncer (recommended for serverless)
postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT_REF].supabase.co:6543/postgres?pgbouncer=true
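
If you connect with a plain Postgres driver instead of supabase-js, the same strings apply. A sketch using node-postgres (the pg package), assuming DATABASE_URL holds one of the strings above:

// npm install pg
const { Pool } = require('pg')

// Use the 6543 pooled string in serverless functions, and the 5432
// direct string for migrations and long-running servers.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false } // Supabase enforces SSL in production
})

async function main() {
  const { rows } = await pool.query('SELECT NOW()')
  console.log(rows[0])
  await pool.end()
}

main()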

Step 3: Configure Authentication

Authentication > Providers:

Email Authentication

// Settings
{
  "enable_signup": true,
  "enable_email_confirmation": true,
  "minimum_password_length": 8,
  "password_requirements": "letters_digits_symbols"
}
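
With signup enabled, the client-side flow is a single call. A supabase-js v2 sketch; the redirect URL is an assumption and must be on your redirect allow list:

// Sends a confirmation email when enable_email_confirmation is on.
const { data, error } = await supabase.auth.signUp({
  email: 'user@example.com',
  password: 'a-strong-password',
  options: {
    emailRedirectTo: 'https://example.com/welcome' // where the confirm link lands
  }
})
if (error) console.error(error.message)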

OAuth Providers

Google:

  1. Create OAuth 2.0 credentials at Google Cloud Console
  2. Add redirect URL: https://[PROJECT_REF].supabase.co/auth/v1/callback
  3. Enter Client ID and Secret in Supabase

GitHub:

  1. Create OAuth App at GitHub Settings
  2. Authorization callback URL: https://[PROJECT_REF].supabase.co/auth/v1/callback
  3. Enter Client ID and Secret in Supabase
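
Once a provider is configured, signing in from the client is one call. A supabase-js v2 sketch; the redirectTo URL is an assumption:

// Redirects the browser to the provider's consent screen, then back
// to the /auth/v1/callback URL configured above.
const { error } = await supabase.auth.signInWithOAuth({
  provider: 'github', // or 'google'
  options: {
    redirectTo: 'https://example.com/dashboard' // optional post-login landing page
  }
})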

Step 4: Email Templates

Authentication > Email Templates:

Customize templates for:

  • Confirmation email
  • Password recovery
  • Magic link
  • Invitation

Example customization:

<h2>Welcome to {{ .SiteURL }}!</h2>
<p>Please confirm your email:</p>
<a href="{{ .ConfirmationURL }}">Confirm Email</a>

Database Setup

Step 1: SQL Editor

Navigate to SQL Editor to run migrations:

-- Enable extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "pg_stat_statements";

-- Create schemas
CREATE SCHEMA IF NOT EXISTS app;
CREATE SCHEMA IF NOT EXISTS audit;

-- Set search path
ALTER DATABASE postgres SET search_path TO public, app, extensions;

Step 2: Create Tables

Use Table Editor or SQL:

-- Example: User profiles
CREATE TABLE profiles (
    id UUID REFERENCES auth.users PRIMARY KEY,
    username TEXT UNIQUE,
    full_name TEXT,
    bio TEXT,
    avatar_url TEXT,
    website TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Enable RLS
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;

-- Create policies
CREATE POLICY "Public profiles are viewable by everyone" 
ON profiles FOR SELECT 
USING (true);

CREATE POLICY "Users can update own profile" 
ON profiles FOR UPDATE 
USING (auth.uid() = id);
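
With these policies in place, client queries are filtered automatically and need no extra WHERE clauses for security. A supabase-js sketch of the resulting behavior (userId is assumed to be the signed-in user's id):

// SELECT succeeds for everyone, including anonymous clients.
const { data: profiles } = await supabase.from('profiles').select('*')

// UPDATE only touches rows where auth.uid() = id; RLS silently
// filters out everything else.
const { error } = await supabase
  .from('profiles')
  .update({ bio: 'Hello!' })
  .eq('id', userId)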

Step 3: Database Migrations

Best practices for production:

  1. Version control migrations:
migrations/
├── 001_initial_schema.sql
├── 002_add_profiles.sql
├── 003_add_organizations.sql
└── 004_add_rls_policies.sql
  2. Apply via CI/CD:
# GitHub Actions example
- name: Apply migrations
  run: |
    for file in migrations/*.sql; do
      psql "$DATABASE_URL" -f "$file"
    done

Storage Configuration

Step 1: Create Buckets

Storage > New Bucket:

// Public bucket (avatars, images)
{
  "name": "avatars",
  "public": true,
  "file_size_limit": 5242880, // 5MB
  "allowed_mime_types": ["image/jpeg", "image/png", "image/webp"]
}

// Private bucket (documents)
{
  "name": "documents",
  "public": false,
  "file_size_limit": 10485760 // 10MB
}
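
Buckets can also be created from code. A sketch using the supabase-js storage API; createBucket requires elevated privileges, so run it server-side with the service role key, never in the browser:

// Mirrors the "avatars" dashboard settings above.
const { data, error } = await supabase.storage.createBucket('avatars', {
  public: true,
  fileSizeLimit: 5242880, // bytes (5MB)
  allowedMimeTypes: ['image/jpeg', 'image/png', 'image/webp']
})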

Step 2: Storage Policies

-- Allow authenticated users to upload their own avatar
CREATE POLICY "Users can upload own avatar"
ON storage.objects FOR INSERT
WITH CHECK (
  bucket_id = 'avatars' AND
  auth.uid()::text = (storage.foldername(name))[1]
);

-- Allow public read access to avatars
CREATE POLICY "Avatar images are publicly accessible"
ON storage.objects FOR SELECT
USING (bucket_id = 'avatars');
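
The upload policy above keys on the first folder segment of the object path, so uploads must land under the user's own id. A supabase-js sketch, where file is assumed to be a File or Blob from an input:

// Upload to avatars/<user-id>/avatar.png so the INSERT policy passes.
const { data: { user } } = await supabase.auth.getUser()

const { error } = await supabase.storage
  .from('avatars')
  .upload(`${user.id}/avatar.png`, file, {
    contentType: 'image/png',
    upsert: true // overwrite an existing avatar instead of failing
  })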

Security Configuration

Step 1: Row Level Security (RLS)

Always enable RLS on public tables:

-- Enable RLS
ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;

-- Force RLS for table owner
ALTER TABLE your_table FORCE ROW LEVEL SECURITY;

Step 2: API Security

Settings > API:

  • Rate limiting: Enable and configure
  • CORS: Configure allowed origins
  • JWT expiry: Set appropriate timeout

Step 3: Database Security

Settings > Database:

  • SSL Enforcement: Always enable for production
  • Connection pooling: Use for serverless functions
  • IP restrictions: Whitelist if needed

Environment Variables

Development (.env.local)

NEXT_PUBLIC_SUPABASE_URL=https://[PROJECT_REF].supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGc...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGc...

Production (Vercel/Netlify)

# Add via dashboard, not in code
NEXT_PUBLIC_SUPABASE_URL=[URL]
NEXT_PUBLIC_SUPABASE_ANON_KEY=[ANON_KEY]
SUPABASE_SERVICE_ROLE_KEY=[SERVICE_KEY]
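
Because the service role key bypasses RLS entirely, it should only ever be read in server-side code (API routes, edge functions, scheduled jobs). A sketch of a server-only admin client using the variables above:

// Server-side only: this client ignores RLS policies.
import { createClient } from '@supabase/supabase-js'

const supabaseAdmin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY,
  { auth: { persistSession: false } } // no session persistence on the server
)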

Monitoring & Maintenance

Database Metrics

Monitor via Dashboard > Reports:

  • Query performance
  • Storage usage
  • Connection count
  • Cache hit ratio

Logs

Logs > [Service]:

  • API logs
  • Auth logs
  • Database logs
  • Function logs

Backups

Automatic daily backups included. For point-in-time recovery:

# Pro plan and above
Settings > Database > Backups > Point in Time Recovery

Performance Optimization

1. Database Indexes

-- Add indexes for frequently queried columns
CREATE INDEX idx_profiles_username ON profiles(username);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);

-- Composite indexes for complex queries
CREATE INDEX idx_posts_user_created ON posts(user_id, created_at DESC);

2. Connection Pooling

For direct SQL from serverless functions, use the pooled (6543) connection string. The supabase-js client talks to the API over HTTP and needs no pooling, but should skip session persistence in serverless environments:

// Serverless function
const { createClient } = require('@supabase/supabase-js')

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_KEY,
  {
    db: { schema: 'public' },
    auth: { persistSession: false }
  }
)

3. Caching Strategies

// Client-side caching with SWR
import useSWR from 'swr'

function useProfile(id) {
  // A null key tells SWR to skip fetching until an id exists.
  const { data, error } = useSWR(id ? `profiles-${id}` : null, async () => {
    const { data, error } = await supabase
      .from('profiles')
      .select()
      .eq('id', id)
      .single()
    if (error) throw error // surfaces as the SWR error value
    return data
  })
  return { profile: data, error }
}

Deployment Integration

Vercel

  1. Install Vercel Integration from Supabase Dashboard
  2. Environment variables auto-synced
  3. Preview environments get separate databases

Netlify

# netlify.toml (committed to the repo, so only the public anon key
# belongs here; prefer the Netlify UI for everything else)
[build.environment]
  NEXT_PUBLIC_SUPABASE_URL = "https://[ref].supabase.co"
  NEXT_PUBLIC_SUPABASE_ANON_KEY = "your-anon-key"

GitHub Actions

name: Deploy
on:
  push:
    branches: [main]

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run migrations
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: |
          for file in migrations/*.sql; do
            psql "$DATABASE_URL" -f "$file"
          done

Troubleshooting

Common Issues

Issue | Solution
Rate limit exceeded | Upgrade plan or implement caching
Connection refused | Check SSL enforcement and the connection string
RLS blocking queries | Test with the service role key first to isolate policy issues
Slow queries | Add indexes and review query performance
Storage upload fails | Check bucket policies and file size limits

Debug Mode

Enable verbose auth logging in the client during development (supabase-js v2; leave this off in production):

const supabase = createClient(url, key, {
  db: { schema: 'public' },
  auth: { debug: true } // logs auth state transitions to the console
})

Migration from Other Platforms

From Firebase

Use the community firebase-to-supabase tooling (https://github.com/supabase-community/firebase-to-supabase), which provides scripts for migrating Firestore data, Firebase Auth users, and Storage files. There is no single one-shot command; follow the repo's README for each step.

From Custom PostgreSQL

# Dump the existing database
pg_dump "$OLD_DATABASE_URL" > dump.sql

# Import into Supabase (use the direct 5432 connection string)
psql "$SUPABASE_DB_URL" < dump.sql

Best Practices

  1. Never expose service role key in client-side code
  2. Always use RLS for public tables
  3. Implement rate limiting for production
  4. Use connection pooling for serverless
  5. Regular backups beyond automatic ones
  6. Monitor usage to avoid surprises
  7. Use preview branches for testing
  8. Implement proper error handling
  9. Cache frequently accessed data
  10. Document your schema and policies

Next Steps

  1. Set up CI/CD pipeline
  2. Configure monitoring alerts
  3. Implement backup strategy
  4. Plan scaling approach
  5. Review security checklist
  6. Set up staging environment

Resources