What Are CircleCI Orbs?
CircleCI Orbs are reusable packages of CircleCI configuration that bundle commands, jobs, and executors for sharing across projects. Think of them as libraries for your CI/CD pipeline: they cut down on duplicated configuration.
Key benefits of orbs: the DRY principle (write the config once and reuse a single orb across projects), community orbs (ready-made orbs such as aws-cli, kubernetes, slack, and docker), versioning (orbs are version-controlled, so you can update them without breaking projects pinned to older versions), and testing (an orb testing framework lets you test before publishing).
Zero downtime deployment means releasing a new application version without any downtime, so users are never affected. The main strategies are Rolling Update, Blue-Green Deployment, and Canary Deployment. Combining CircleCI Orbs with zero downtime deployment lets you automate the deploy process safely.
Installing and Configuring CircleCI
Set up a CircleCI project
# === CircleCI Configuration ===
# 1. Basic .circleci/config.yml
mkdir -p .circleci
cat > .circleci/config.yml << 'EOF'
version: 2.1

orbs:
  aws-eks: circleci/aws-eks@2.2
  kubernetes: circleci/kubernetes@1.3
  slack: circleci/slack@4.12
  docker: circleci/docker@2.6

executors:
  default:
    docker:
      - image: cimg/base:2024.01
    resource_class: medium

jobs:
  build:
    executor: default
    steps:
      - checkout
      - docker/check
      - docker/build:
          image: myapp
          tag:
      - docker/push:
          image: myapp
          tag:
  test:
    executor: default
    steps:
      - checkout
      - run:
          name: Run unit tests
          command: |
            pip install -r requirements.txt
            pytest tests/ -v --junitxml=test-results/results.xml
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  deploy-staging:
    executor: default
    steps:
      - checkout
      - kubernetes/install-kubectl
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: staging-cluster
      - run:
          name: Deploy to staging
          command: |
            kubectl set image deployment/myapp \
              myapp=registry/myapp: \
              -n staging
            kubectl rollout status deployment/myapp -n staging --timeout=300s
  deploy-production:
    executor: default
    steps:
      - checkout
      - kubernetes/install-kubectl
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: production-cluster
      - run:
          name: Zero downtime deploy to production
          command: |
            kubectl set image deployment/myapp \
              myapp=registry/myapp: \
              -n production
            kubectl rollout status deployment/myapp -n production --timeout=600s
      - slack/notify:
          event: pass
          template: basic_success_1

workflows:
  build-test-deploy:
    jobs:
      - build
      - test:
          requires: [build]
      - deploy-staging:
          requires: [test]
          filters:
            branches:
              only: main
      - hold-production:
          type: approval
          requires: [deploy-staging]
      - deploy-production:
          requires: [hold-production]
EOF
echo "CircleCI config created"
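As a quick sanity check on the workflow wiring, the `requires` graph from the config above can be topologically sorted with Python's standard library. The job names are copied from the workflow; the snippet is illustrative only:

```python
from graphlib import TopologicalSorter

# requires-graph from the build-test-deploy workflow:
# each job maps to the set of jobs it depends on
workflow = {
    "build": set(),
    "test": {"build"},
    "deploy-staging": {"test"},
    "hold-production": {"deploy-staging"},
    "deploy-production": {"hold-production"},
}

# static_order() yields jobs with all dependencies satisfied first
order = list(TopologicalSorter(workflow).static_order())
print(order)
# build runs first; deploy-production can only run after the approval gate
```

Because every job transitively requires `build`, it always sorts first, and `deploy-production` always sorts last.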
Building Custom Orbs
Create an orb for zero downtime deployment
# === Custom Orb Development ===
# 1. Initialize Orb
circleci orb init zero-downtime-deploy
# 2. Orb source (src/orb.yml)
cat > src/orb.yml << 'EOF'
version: 2.1

description: >
  Zero downtime deployment orb supporting blue-green and canary strategies

display:
  home_url: https://github.com/myorg/zero-downtime-orb
  source_url: https://github.com/myorg/zero-downtime-orb

# The job below uses kubectl and an executor, so the orb declares both
orbs:
  kubernetes: circleci/kubernetes@1.3

executors:
  default:
    docker:
      - image: cimg/base:2024.01

commands:
  blue-green-deploy:
    description: Execute blue-green deployment
    parameters:
      cluster:
        type: string
      namespace:
        type: string
        default: default
      deployment:
        type: string
      image:
        type: string
      health-check-url:
        type: string
        default: ""
      switch-timeout:
        type: string
        default: "300s"
    steps:
      - run:
          name: Blue-Green Deploy
          command: |
            CURRENT=$(kubectl get svc << parameters.deployment >> -n << parameters.namespace >> -o jsonpath='{.spec.selector.version}')
            if [ "$CURRENT" = "blue" ]; then NEW="green"; else NEW="blue"; fi
            echo "Current: $CURRENT, Deploying to: $NEW"
            # Update inactive deployment
            kubectl set image deployment/<< parameters.deployment >>-$NEW \
              app=<< parameters.image >> -n << parameters.namespace >>
            # Wait for rollout
            kubectl rollout status deployment/<< parameters.deployment >>-$NEW \
              -n << parameters.namespace >> --timeout=<< parameters.switch-timeout >>
            # Health check: abort before switching traffic if it never passes
            if [ -n "<< parameters.health-check-url >>" ]; then
              HEALTHY=false
              for i in $(seq 1 10); do
                STATUS=$(curl -s -o /dev/null -w "%{http_code}" << parameters.health-check-url >>)
                if [ "$STATUS" = "200" ]; then
                  echo "Health check passed"
                  HEALTHY=true
                  break
                fi
                sleep 5
              done
              if [ "$HEALTHY" != "true" ]; then
                echo "Health check failed, aborting before traffic switch"
                exit 1
              fi
            fi
            # Switch traffic
            kubectl patch svc << parameters.deployment >> -n << parameters.namespace >> \
              -p "{\"spec\":{\"selector\":{\"version\":\"$NEW\"}}}"
            echo "Traffic switched to $NEW"
  canary-deploy:
    description: Execute canary deployment
    parameters:
      deployment:
        type: string
      namespace:
        type: string
        default: default
      image:
        type: string
      canary-weight:
        type: integer
        default: 10
      analysis-duration:
        type: string
        default: "300"
    steps:
      - run:
          name: Canary Deploy
          command: |
            # Deploy canary
            kubectl set image deployment/<< parameters.deployment >>-canary \
              app=<< parameters.image >> -n << parameters.namespace >>
            kubectl rollout status deployment/<< parameters.deployment >>-canary \
              -n << parameters.namespace >> --timeout=300s
            echo "Canary deployed, monitoring for << parameters.analysis-duration >>s..."
            sleep << parameters.analysis-duration >>
            # Check error rate
            ERROR_RATE=$(curl -s "http://prometheus:9090/api/v1/query?query=rate(http_requests_total{status=~'5..'}[5m])" | jq '.data.result[0].value[1]')
            if (( $(echo "$ERROR_RATE < 0.01" | bc -l) )); then
              echo "Canary healthy, promoting to production"
              kubectl set image deployment/<< parameters.deployment >> \
                app=<< parameters.image >> -n << parameters.namespace >>
            else
              echo "Canary unhealthy, rolling back"
              kubectl rollout undo deployment/<< parameters.deployment >>-canary \
                -n << parameters.namespace >>
              exit 1
            fi

jobs:
  zero-downtime-deploy:
    description: Full zero downtime deployment job
    parameters:
      strategy:
        type: enum
        enum: [blue-green, canary, rolling]
        default: rolling
      cluster:
        type: string
      deployment:
        type: string
      image:
        type: string
      namespace:
        type: string
        default: default
    executor: default
    steps:
      - checkout
      - kubernetes/install-kubectl
      - when:
          condition:
            equal: [blue-green, << parameters.strategy >>]
          steps:
            - blue-green-deploy:
                cluster: << parameters.cluster >>
                deployment: << parameters.deployment >>
                image: << parameters.image >>
                namespace: << parameters.namespace >>
      - when:
          condition:
            equal: [canary, << parameters.strategy >>]
          steps:
            - canary-deploy:
                deployment: << parameters.deployment >>
                image: << parameters.image >>
                namespace: << parameters.namespace >>
EOF
# 3. Publish Orb
circleci orb validate src/orb.yml
circleci orb publish src/orb.yml myorg/zero-downtime-deploy@1.0.0
echo "Custom orb created and published"
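The orb's core blue-green decision logic is easy to unit-test outside CircleCI before wiring it into a pipeline. A minimal Python sketch of the same color-flip and health-gate rules (the function names here are mine, not part of the orb):

```python
def next_color(current: str) -> str:
    """Return the inactive color to deploy to (mirrors the orb's shell logic)."""
    return "green" if current == "blue" else "blue"


def should_switch(health_statuses: list[int]) -> bool:
    """Switch traffic only if at least one health probe returned HTTP 200."""
    return 200 in health_statuses


current = "blue"
target = next_color(current)
print(f"Deploying to: {target}")      # → Deploying to: green

# Two failed probes followed by a success: safe to switch
if should_switch([503, 503, 200]):
    current = target
print(f"Live version: {current}")     # → Live version: green
```

Extracting the logic this way lets you cover the rollback path (all probes failing) with plain assertions instead of a live cluster.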
Zero Downtime Deployment Strategies
Comparing deployment strategies
#!/usr/bin/env python3
# deploy_strategies.py — Zero Downtime Deployment Strategies
class DeploymentStrategies:
    """Compare and recommend zero downtime deployment strategies."""

    def compare_strategies(self):
        return {
            "rolling_update": {
                "description": "Gradually replace pods in batches",
                "downtime": "zero",
                "rollback_speed": "fast (kubectl rollout undo)",
                "resource_overhead": "25-50% extra during deployment",
                "complexity": "low",
                "risk": "low (gradual)",
                "best_for": "Stateless applications, standard web apps",
                "k8s_config": {
                    "maxSurge": "25%",
                    "maxUnavailable": "25%",
                },
            },
            "blue_green": {
                "description": "Run 2 identical environments, switch traffic",
                "downtime": "zero (instant switch)",
                "rollback_speed": "instant (switch back)",
                "resource_overhead": "100% (double infrastructure)",
                "complexity": "medium",
                "risk": "low (full environment tested before switch)",
                "best_for": "Critical applications, database migrations",
                "trade_off": "Cost vs safety",
            },
            "canary": {
                "description": "Route small % of traffic to new version",
                "downtime": "zero",
                "rollback_speed": "fast (route 0% to canary)",
                "resource_overhead": "10-20% extra",
                "complexity": "high",
                "risk": "lowest (only small % affected)",
                "best_for": "High-traffic apps, ML model deployments",
                "requires": "Service mesh or ingress controller with traffic splitting",
            },
            "recreate": {
                "description": "Stop all old pods, start new pods",
                "downtime": "YES (seconds to minutes)",
                "rollback_speed": "slow (need to redeploy)",
                "resource_overhead": "none",
                "complexity": "lowest",
                "risk": "high (full downtime)",
                "best_for": "Dev/staging environments, breaking changes",
            },
        }

    def recommend_strategy(self, requirements):
        traffic = requirements.get("daily_traffic", 0)
        budget = requirements.get("budget", "medium")
        criticality = requirements.get("criticality", "medium")
        if criticality == "critical" and budget == "high":
            return "blue_green"
        if traffic > 1_000_000 or criticality == "critical":
            return "canary"
        return "rolling_update"


strategies = DeploymentStrategies()
comparison = strategies.compare_strategies()
for name, details in comparison.items():
    print(f"\n{name}:")
    print(f"  Downtime: {details['downtime']}")
    print(f"  Rollback: {details['rollback_speed']}")
    print(f"  Complexity: {details['complexity']}")
rec = strategies.recommend_strategy({"daily_traffic": 5000000, "budget": "high", "criticality": "critical"})
print(f"\nRecommended: {rec}")
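The resource overhead figures in the comparison translate directly into pod counts. A rough helper, my own illustration using the percentages from the table above:

```python
import math


def extra_pods(strategy: str, replicas: int, max_surge_pct: int = 25) -> int:
    """Peak extra pods running during a deploy, per the comparison table."""
    if strategy == "blue_green":
        return replicas                               # full duplicate environment
    if strategy == "rolling_update":
        return math.ceil(replicas * max_surge_pct / 100)
    if strategy == "canary":
        return max(1, math.ceil(replicas * 0.10))     # ~10% canary slice
    return 0                                          # recreate: no overlap


print(extra_pods("blue_green", 10))      # → 10
print(extra_pods("rolling_update", 10))  # → 3
print(extra_pods("canary", 10))          # → 1
```

For a 10-replica service, blue-green doubles the fleet during the deploy, while a 10% canary costs just one extra pod; this is the cost-versus-risk trade-off the table describes.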
Implementing Blue-Green and Canary
Kubernetes manifests for zero downtime
# === Blue-Green Deployment Manifests ===
# 1. Blue Deployment
cat > k8s/blue-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
  labels:
    app: myapp
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
        - name: app
          image: registry/myapp:v1
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
EOF
# 2. Green Deployment (identical but version: green)
cat > k8s/green-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
  labels:
    app: myapp
    version: green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
        - name: app
          image: registry/myapp:v2
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
EOF
# 3. Service (switch between blue/green)
cat > k8s/service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue
  ports:
    - port: 80
      targetPort: 8080
EOF
# Switch traffic:
# kubectl patch svc myapp -p '{"spec":{"selector":{"version":"green"}}}'
# Rollback:
# kubectl patch svc myapp -p '{"spec":{"selector":{"version":"blue"}}}'
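The switch and rollback commands above both send a small JSON merge patch. Building that payload programmatically is trivial with the standard library, which is handy if you drive the switch from a script instead of calling `kubectl` by hand:

```python
import json


def selector_patch(version: str) -> str:
    """Build the JSON merge patch used by the `kubectl patch svc` commands above."""
    return json.dumps(
        {"spec": {"selector": {"version": version}}},
        separators=(",", ":"),  # compact form, matching the inline -p argument
    )


print(selector_patch("green"))
# → {"spec":{"selector":{"version":"green"}}}
print(selector_patch("blue"))
# → {"spec":{"selector":{"version":"blue"}}}
```

Generating the patch from code avoids shell-quoting mistakes in the nested JSON string.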
# 4. Rolling Update Config
cat > k8s/rolling-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: app
          image: registry/myapp:v2
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 10"]
EOF
echo "Deployment manifests created"
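With `maxSurge: 1` and `maxUnavailable: 0`, the rollout above replaces one pod at a time while all 5 replicas keep serving. The arithmetic behind those two settings can be sketched as:

```python
def rolling_update_profile(replicas: int, max_surge: int, max_unavailable: int):
    """Peak pod count, minimum ready pods, and batch count for a RollingUpdate."""
    peak = replicas + max_surge              # extra pods allowed above desired count
    min_ready = replicas - max_unavailable   # pods guaranteed to stay in service
    # With maxUnavailable=0, each batch can replace at most max_surge pods
    batches = -(-replicas // max(max_surge, 1))  # ceiling division
    return peak, min_ready, batches


print(rolling_update_profile(5, 1, 0))  # → (6, 5, 5)
```

So the manifest trades deploy speed (5 sequential batches) for capacity (never fewer than 5 ready pods), which is the safest setting for zero downtime.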
Monitoring and Rollback
Monitor deployments and roll back
#!/usr/bin/env python3
# deploy_monitor.py — Deployment Monitoring
import json
from datetime import datetime


class DeploymentMonitor:
    def __init__(self):
        self.deployments = []

    def track_deployment(self, version, strategy):
        deployment = {
            "version": version,
            "strategy": strategy,
            "started_at": datetime.utcnow().isoformat(),
            "status": "in_progress",
            "metrics_before": self._capture_metrics(),
        }
        self.deployments.append(deployment)
        return deployment

    def _capture_metrics(self):
        # In production, pull these from Prometheus or your APM instead
        return {
            "error_rate": 0.002,
            "latency_p99_ms": 120,
            "rps": 1500,
            "cpu_usage_pct": 45,
            "memory_usage_pct": 62,
        }

    def check_deployment_health(self, deployment):
        current = self._capture_metrics()
        before = deployment["metrics_before"]
        checks = {
            "error_rate_ok": current["error_rate"] < before["error_rate"] * 2,
            "latency_ok": current["latency_p99_ms"] < before["latency_p99_ms"] * 1.5,
            "rps_ok": current["rps"] > before["rps"] * 0.8,
        }
        healthy = all(checks.values())
        return {
            "healthy": healthy,
            "checks": checks,
            "current_metrics": current,
            "recommendation": "continue" if healthy else "rollback",
        }

    def rollback_procedures(self):
        return {
            "rolling_update": {
                "command": "kubectl rollout undo deployment/myapp",
                "time": "30-60 seconds",
                "automated": True,
            },
            "blue_green": {
                "command": "kubectl patch svc myapp -p '{\"spec\":{\"selector\":{\"version\":\"previous\"}}}'",
                "time": "instant (< 1 second)",
                "automated": True,
            },
            "canary": {
                "command": "kubectl delete deployment/myapp-canary",
                "time": "instant (traffic stops to canary)",
                "automated": True,
            },
        }


monitor = DeploymentMonitor()
dep = monitor.track_deployment("v2.1.0", "blue-green")
print("Deployment:", json.dumps(dep, indent=2))
health = monitor.check_deployment_health(dep)
print("\nHealth:", json.dumps(health, indent=2))
rollback = monitor.rollback_procedures()
print("\nRollback:", json.dumps(rollback, indent=2))
FAQ: Frequently Asked Questions
Q: How do CircleCI Orbs differ from GitHub Actions?
A: CircleCI Orbs are reusable config packages specific to CircleCI: they contain commands, jobs, and executors, are published through the CircleCI registry, and carry clear version control. GitHub Actions are reusable workflows/actions for GitHub; the marketplace is larger, actions are written with Docker or JavaScript, and they are tied to the GitHub ecosystem. Both solve the same DRY problem, so choose based on the CI/CD platform you already use: Orbs on CircleCI, Actions on GitHub.
Q: Blue-Green or Canary, which should you choose?
A: Blue-Green fits when you need instant rollback, want to test a full environment before switching, have the budget for double infrastructure, or run database migrations that must be tested first; the downside is using twice the resources. Canary fits when traffic is high and you want to validate against real users with a gradual rollout, have a service mesh or ingress that supports traffic splitting, and want to minimize risk; the downside is higher complexity and the need for automated metrics analysis. For roughly 80% of cases, Rolling Update is sufficient and the simplest option.
Q: What is the difference between readinessProbe and livenessProbe?
A: readinessProbe checks whether a pod is ready to receive traffic. If it fails, Kubernetes removes the pod from the Service (no traffic is routed to it) but the pod keeps running. This is critical for zero downtime deployment, because new pods receive no traffic until they are ready. livenessProbe checks whether a pod is still functioning. If it fails, Kubernetes restarts the pod; use it to detect deadlocks or stuck processes. Always set both: readiness for traffic routing, liveness for auto-healing.
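The difference can be summarized in a few lines of Python. This is a toy model of the two probe outcomes, not the kubelet's actual logic:

```python
class Pod:
    """Toy model of how Kubernetes reacts to each probe type failing."""

    def __init__(self):
        self.in_service = True   # part of the Service's endpoints
        self.restarts = 0

    def readiness_fails(self):
        # readinessProbe failure: remove from Service endpoints; pod keeps running
        self.in_service = False

    def liveness_fails(self):
        # livenessProbe failure: kubelet restarts the container
        self.restarts += 1


pod = Pod()
pod.readiness_fails()
print(pod.in_service, pod.restarts)  # → False 0  (no traffic, but no restart)
pod.liveness_fails()
print(pod.restarts)                  # → 1        (container restarted)
```

The key asymmetry: a readiness failure is reversible (the pod rejoins the Service once the probe passes again), while a liveness failure always costs a restart.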
Q: How do you make database migrations zero downtime?
A: Use the Expand and Contract pattern. Phase 1 (Expand): add the new column/table without removing the old one, and deploy code that writes to both the old and new schema. Phase 2 (Migrate): migrate data from the old format to the new one in the background, without affecting traffic. Phase 3 (Contract): deploy code that uses only the new schema, then drop the old columns/tables. Deploy each phase separately so a rollback stays easy if anything goes wrong. Never rename columns directly; instead add new → copy data → switch code → drop old.
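The Expand phase's dual-write plus the background backfill can be sketched with in-memory dicts standing in for the old and new schema (the stores, function names, and the `.title()` "new format" are all illustrative assumptions):

```python
# Stand-ins for the old column and the new column added in the Expand phase
old_store: dict[int, str] = {}
new_store: dict[int, str] = {}


def write_user_name(user_id: int, name: str) -> None:
    """Phase 1 (Expand): application writes to BOTH schemas."""
    old_store[user_id] = name              # old format, still read by old code
    new_store[user_id] = name.title()      # hypothetical new format


def backfill() -> None:
    """Phase 2 (Migrate): background job copies rows the dual-write missed."""
    for user_id, name in old_store.items():
        new_store.setdefault(user_id, name.title())


write_user_name(1, "alice")   # written after the dual-write deploy
old_store[2] = "bob"          # legacy row from before the deploy
backfill()
print(sorted(new_store.items()))  # → [(1, 'Alice'), (2, 'Bob')]
```

Once the backfill reports zero missing rows, the Contract phase can safely switch reads to `new_store` and drop the old column.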
