Supabase Realtime Orchestration
| Feature | Supabase | Firebase | Appwrite | Nhost |
|---|---|---|---|---|
| Database | PostgreSQL | Firestore | MariaDB | PostgreSQL |
| Realtime | CDC + WebSocket | Built-in | WebSocket | Hasura |
| Open Source | Yes | No | Yes | Yes |
| Self-host | Docker/K8s | No | Docker | Docker |
Supabase Realtime Setup
# === Supabase Realtime ===
# JavaScript Client
# import { createClient } from '@supabase/supabase-js'
#
# const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY)
#
# // Subscribe to table changes
# const channel = supabase
#   .channel('messages')
#   .on('postgres_changes',
#     { event: '*', schema: 'public', table: 'messages' },
#     (payload) => {
#       console.log('Change:', payload)
#       if (payload.eventType === 'INSERT') {
#         addMessage(payload.new)
#       }
#     }
#   )
#   .subscribe()
#
# // Presence — Who's online
# const presenceChannel = supabase.channel('online-users')
# presenceChannel
#   .on('presence', { event: 'sync' }, () => {
#     const state = presenceChannel.presenceState()
#     console.log('Online:', Object.keys(state).length)
#   })
#   .subscribe(async (status) => {
#     if (status === 'SUBSCRIBED') {
#       await presenceChannel.track({
#         user_id: currentUser.id,
#         username: currentUser.name,
#         online_at: new Date().toISOString(),
#       })
#     }
#   })
#
# // Broadcast — Send messages to channel
# const broadcastChannel = supabase.channel('chat-room')
# broadcastChannel
#   .on('broadcast', { event: 'message' }, (payload) => {
#     console.log('Message:', payload)
#   })
#   .subscribe()
#
# broadcastChannel.send({
#   type: 'broadcast',
#   event: 'message',
#   payload: { text: 'Hello!', user: 'Alice' },
# })
from dataclasses import dataclass


@dataclass
class RealtimeChannel:
    name: str
    type: str
    subscribers: int
    events_per_sec: float
    latency_ms: int


channels = [
    RealtimeChannel("messages", "postgres_changes", 250, 45, 12),
    RealtimeChannel("online-users", "presence", 500, 5, 8),
    RealtimeChannel("chat-room-1", "broadcast", 80, 120, 5),
    RealtimeChannel("notifications", "postgres_changes", 1000, 20, 15),
    RealtimeChannel("live-dashboard", "postgres_changes", 50, 200, 10),
]

print("=== Realtime Channels ===")
for c in channels:
    print(f"  [{c.type}] {c.name}")
    print(f"    Subscribers: {c.subscribers} | Events/s: {c.events_per_sec} | Latency: {c.latency_ms}ms")
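The per-channel figures above can be rolled up into a quick load estimate. This is a minimal sketch using the same illustrative numbers; note that every event on a channel fans out to all of its subscribers, so outbound message volume is the product of the two columns:

```python
# Same illustrative figures as the channels list above:
# name -> (subscribers, events_per_sec, latency_ms)
channels = {
    "messages": (250, 45, 12),
    "online-users": (500, 5, 8),
    "chat-room-1": (80, 120, 5),
    "notifications": (1000, 20, 15),
    "live-dashboard": (50, 200, 10),
}

# Each inbound event is delivered once per subscriber (fan-out).
total_fanout = sum(subs * eps for subs, eps, _ in channels.values())
total_subscribers = sum(subs for subs, _, _ in channels.values())
# Latency as seen by the average subscriber (weighted by channel size).
weighted_latency = sum(subs * lat for subs, _, lat in channels.values()) / total_subscribers

print(f"Outbound messages/sec (fan-out): {total_fanout:,}")
print(f"Total subscribers: {total_subscribers}")
print(f"Subscriber-weighted avg latency: {weighted_latency:.1f}ms")
```

Fan-out, not inbound event rate, is what drives WebSocket server load, which is why the HPA later in this article scales on connection count rather than database throughput.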
Docker and Kubernetes
# === Container Orchestration ===
# Docker Compose — Self-hosted Supabase
# services:
#   db:
#     image: supabase/postgres:15.1.0
#     volumes: [./volumes/db:/var/lib/postgresql/data]
#     environment:
#       POSTGRES_PASSWORD:
#
#   realtime:
#     image: supabase/realtime:v2.25.0
#     depends_on: [db]
#     environment:
#       DB_HOST: db
#       DB_PORT: 5432
#       DB_USER: supabase_admin
#       DB_PASSWORD:
#       DB_NAME: postgres
#       PORT: 4000
#       SECRET_KEY_BASE:
#
#   kong:
#     image: kong:2.8.1
#     environment:
#       KONG_DECLARATIVE_CONFIG: /var/lib/kong/kong.yml
#     ports: ["8000:8000", "8443:8443"]
#
#   studio:
#     image: supabase/studio:latest
#     environment:
#       SUPABASE_URL: http://kong:8000
#       STUDIO_PG_META_URL: http://meta:8080
# Kubernetes Deployment
# apiVersion: apps/v1
# kind: Deployment
# metadata:
#   name: supabase-realtime
# spec:
#   replicas: 3
#   selector:
#     matchLabels: { app: supabase-realtime }
#   template:
#     metadata:
#       labels: { app: supabase-realtime }   # must match the selector above
#     spec:
#       containers:
#         - name: realtime
#           image: supabase/realtime:v2.25.0
#           ports: [{ containerPort: 4000 }]
#           resources:
#             requests: { cpu: "500m", memory: "512Mi" }
#             limits: { cpu: "2000m", memory: "2Gi" }
#           env:
#             - name: DB_HOST
#               valueFrom:
#                 secretKeyRef: { name: supabase-db, key: host }
@dataclass
class K8sResource:
    name: str
    kind: str
    replicas: int
    cpu: str
    memory: str
    status: str


resources = [
    K8sResource("supabase-db", "StatefulSet", 1, "2000m", "4Gi", "Running"),
    K8sResource("supabase-realtime", "Deployment", 3, "500m", "512Mi", "Running"),
    K8sResource("supabase-kong", "Deployment", 2, "250m", "256Mi", "Running"),
    K8sResource("supabase-auth", "Deployment", 2, "250m", "256Mi", "Running"),
    K8sResource("supabase-storage", "Deployment", 2, "250m", "512Mi", "Running"),
    K8sResource("supabase-studio", "Deployment", 1, "250m", "256Mi", "Running"),
]

print("\n=== Kubernetes Resources ===")
for r in resources:
    print(f"  [{r.status}] {r.name} ({r.kind})")
    print(f"    Replicas: {r.replicas} | CPU: {r.cpu} | Memory: {r.memory}")
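To see what the cluster must provide, the per-pod requests above can be summed across replicas. A sketch that parses the `m` / `Mi` / `Gi` suffixes used in the list (same figures as above):

```python
# name -> (replicas, CPU request, memory request), as listed above.
requests = {
    "supabase-db": (1, "2000m", "4Gi"),
    "supabase-realtime": (3, "500m", "512Mi"),
    "supabase-kong": (2, "250m", "256Mi"),
    "supabase-auth": (2, "250m", "256Mi"),
    "supabase-storage": (2, "250m", "512Mi"),
    "supabase-studio": (1, "250m", "256Mi"),
}

def cpu_millicores(value: str) -> int:
    # "2000m" -> 2000; a bare "2" would mean 2 full cores (2000m)
    return int(value[:-1]) if value.endswith("m") else int(value) * 1000

def memory_mib(value: str) -> int:
    # "512Mi" -> 512, "4Gi" -> 4096
    return int(value[:-2]) * 1024 if value.endswith("Gi") else int(value[:-2])

total_cpu = sum(n * cpu_millicores(cpu) for n, cpu, _ in requests.values())
total_mem = sum(n * memory_mib(mem) for n, _, mem in requests.values())
print(f"Cluster-wide CPU requests: {total_cpu}m ({total_cpu / 1000:.2f} cores)")
print(f"Cluster-wide memory requests: {total_mem}Mi ({total_mem / 1024:.2f}Gi)")
```

At these requests the stack fits comfortably on a 3-node cluster with 2 cores / 4Gi per node, with headroom for the HPA to add realtime replicas.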
Scaling and Monitoring
# === Production Scaling ===
# HPA — Horizontal Pod Autoscaler
# apiVersion: autoscaling/v2
# kind: HorizontalPodAutoscaler
# metadata:
#   name: supabase-realtime-hpa
# spec:
#   scaleTargetRef:
#     apiVersion: apps/v1
#     kind: Deployment
#     name: supabase-realtime
#   minReplicas: 2
#   maxReplicas: 10
#   metrics:
#     - type: Resource
#       resource:
#         name: cpu
#         target: { type: Utilization, averageUtilization: 70 }
#     - type: Pods
#       pods:
#         metric: { name: websocket_connections }
#         target: { type: AverageValue, averageValue: "500" }
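The HPA computes its target with the documented proportional rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A sketch of that calculation for the `websocket_connections` metric above (the sample connection counts are hypothetical):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 2,
                         max_replicas: int = 10) -> int:
    # Kubernetes autoscaling/v2 scaling rule:
    # desired = ceil(current * currentMetricValue / desiredMetricValue),
    # then clamped to [minReplicas, maxReplicas].
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 3 pods averaging 800 WebSocket connections each, target 500 per pod:
print(hpa_desired_replicas(3, 800, 500))   # → 5 (scales out)
# 5 pods averaging only 150 connections each:
print(hpa_desired_replicas(5, 150, 500))   # → 2 (scales in to minReplicas)
```

With multiple metrics (CPU plus connections, as in the manifest above), the HPA evaluates each one and takes the highest desired replica count.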
# Prometheus Metrics
# supabase_realtime_connected_clients
# supabase_realtime_channels_count
# supabase_realtime_messages_per_second
# supabase_realtime_latency_milliseconds
monitoring = {
    "Connected Clients": "2,500",
    "Active Channels": "150",
    "Messages/sec": "450",
    "Avg Latency": "12ms",
    "P99 Latency": "45ms",
    "Realtime Pods": "5/10 (HPA)",
    "DB Connections": "85/200",
    "WebSocket Memory": "1.2GB / 2GB",
    "CDC Lag": "< 100ms",
}

print("Supabase Realtime Dashboard:")
for k, v in monitoring.items():
    print(f"  {k}: {v}")
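Several of the dashboard values above are "used/capacity" pairs; a small helper can turn them into utilization percentages for alerting. A sketch over those sample values (the 80% alert threshold is illustrative):

```python
# "used/capacity" entries from the dashboard above.
capacity_metrics = {
    "Realtime Pods": "5/10 (HPA)",
    "DB Connections": "85/200",
}

def utilization(value: str) -> float:
    # "85/200" or "5/10 (HPA)" -> used / capacity as a percentage
    used, capacity = value.split()[0].split("/")
    return 100 * int(used) / int(capacity)

for name, value in capacity_metrics.items():
    pct = utilization(value)
    flag = "ALERT" if pct >= 80 else "ok"
    print(f"{name}: {pct:.1f}% [{flag}]")
```

DB connection utilization deserves the tightest threshold: each Realtime pod holds its own PostgreSQL connections, so scaling out pods consumes the connection budget too.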
# Cost Comparison
costs = {
    "Supabase Cloud (Pro)": "$25/mo + usage",
    "Self-hosted (K8s 3-node)": "$150/mo (cloud VMs)",
    "Self-hosted (Home Lab)": "$0/mo (electricity only)",
    "Firebase (Blaze)": "Pay-as-you-go (~$50-200/mo)",
}

print("\nCost Comparison:")
for setup, cost in costs.items():
    print(f"  [{setup}]: {cost}")
Tips
- RLS: enable Row Level Security on every table to prevent data leaks
- Channel: subscribe only to the tables/columns you actually need to reduce load
- HPA: configure an HPA so the Realtime server scales with connection count
- CDC: monitor CDC lag; investigate whenever it stays above 1s
- Backup: back up PostgreSQL regularly with pg_dump or Litestream
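The CDC tip can be made concrete: a sketch that checks replication-lag samples against the 1s threshold mentioned above, alerting only on sustained lag rather than a single spike (the sample values and alert text are illustrative):

```python
CDC_LAG_THRESHOLD_MS = 1000  # investigate when lag stays above 1s

def check_cdc_lag(samples_ms: list[int],
                  threshold_ms: int = CDC_LAG_THRESHOLD_MS) -> str:
    # Alert only when a majority of recent samples exceed the threshold,
    # so a single spike does not page anyone.
    over = [s for s in samples_ms if s > threshold_ms]
    if len(over) >= len(samples_ms) // 2 + 1:
        return "ALERT: sustained CDC lag, check replication slot / WAL volume"
    return "ok"

print(check_cdc_lag([80, 95, 110, 90, 100]))         # healthy
print(check_cdc_lag([1200, 1500, 900, 1300, 1400]))  # sustained lag
```

In production the samples would come from the Prometheus metrics listed earlier rather than a hard-coded list.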
Operating the System in Production
Good production operations require comprehensive monitoring. Use tools such as Prometheus + Grafana for metrics collection and dashboards, or the ELK Stack for log management. Configure alerts to fire when CPU exceeds 80%, RAM is nearly full, or disk usage is high.
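The alert rules just described can be sketched as a simple threshold check; the metric names and limits here are illustrative, and real rules would live in Prometheus/Alertmanager:

```python
# Illustrative thresholds matching the alerts described above.
THRESHOLDS = {"cpu_pct": 80, "ram_pct": 90, "disk_pct": 85}

def evaluate_alerts(metrics: dict[str, float]) -> list[str]:
    """Return one message per metric that exceeds its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name} at {value:.0f}% exceeds {limit}%")
    return alerts

sample = {"cpu_pct": 92.0, "ram_pct": 71.0, "disk_pct": 88.0}
for alert in evaluate_alerts(sample):
    print("ALERT:", alert)
```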
Plan the backup strategy carefully. Follow the 3-2-1 rule: keep at least 3 copies of every backup, on 2 different types of storage, with 1 copy off-site. Test restoring backups regularly, at least once a month, because a backup that cannot be restored is as good as no backup.
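The 3-2-1 rule is mechanical enough to validate in code. A sketch assuming each backup is described by its storage type and location (the backup names and storage types are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Backup:
    name: str
    storage_type: str  # e.g. "local-disk", "nas", "s3"
    offsite: bool

def satisfies_3_2_1(backups: list[Backup]) -> bool:
    # 3 copies, on 2 different storage types, with 1 copy off-site.
    return (len(backups) >= 3
            and len({b.storage_type for b in backups}) >= 2
            and any(b.offsite for b in backups))

backups = [
    Backup("pg_dump nightly", "local-disk", False),
    Backup("NAS snapshot", "nas", False),
    Backup("S3 replica", "s3", True),
]
print(satisfies_3_2_1(backups))  # → True
```

A check like this could run in CI or a cron job against the actual backup inventory, turning the rule from a guideline into an enforced invariant.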
Security hardening must start from day one. Close unnecessary ports, use SSH keys instead of passwords, set up Fail2ban to block brute-force attempts, apply security patches consistently, and run vulnerability scans at least monthly. Follow the Principle of Least Privilege: grant only the minimum permissions each component needs.
What is Supabase Realtime?
A real-time service built on Elixir/Phoenix that streams PostgreSQL changes to clients over WebSocket using Change Data Capture (CDC). It also provides Presence (who is online) and Broadcast (client-to-client messaging) channels, and subscribed clients receive updates immediately.
What is Container Orchestration?
Automated management of containers. Tools such as Kubernetes and Docker Swarm handle deployment, scaling, load balancing, health checks, auto-restart, and service discovery.
How do Supabase and Firebase differ?
Supabase is open source, built on PostgreSQL (SQL, RLS, RESTful APIs), and can be self-hosted. Firebase uses Firestore (NoSQL), is proprietary with Google vendor lock-in, and is geared toward rapid prototyping.
How do you deploy Supabase on Kubernetes?
Use a community Helm chart or translate the Docker Compose setup into K8s manifests: deploy PostgreSQL, Kong, GoTrue (auth), Realtime, Storage, and Studio, backed by PersistentVolumes, an Ingress, and Secrets for the JWT keys.
Summary
Supabase Realtime turns PostgreSQL changes into WebSocket events via CDC and adds Presence and Broadcast channels on top. Running it in containers on Docker or Kubernetes with an HPA gives production-grade scaling, while RLS, Prometheus monitoring, and regular backups keep a self-hosted deployment safe.
