Linkerd Pub/Sub
A practical guide to running event-driven microservices on Kubernetes with the Linkerd service mesh and Pub/Sub messaging (NATS, Kafka): automatic mTLS, per-route observability, traffic splitting for canary deployments, and resilient async communication.
| Service Mesh | Language | Memory/proxy | mTLS | Complexity | Best for |
|---|---|---|---|---|---|
| Linkerd | Rust | ~10MB | Auto | Simple | Lightweight K8s |
| Istio | C++ (Envoy) | ~50MB | Auto | Complex | Enterprise |
| Cilium | C (eBPF) | ~5MB | Auto | Moderate | High performance |
| Consul Connect | Go | ~30MB | Auto | Moderate | Multi-platform |
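To put the per-proxy numbers in perspective, here is a small sketch estimating total sidecar memory overhead across a cluster. The figures are the approximate ones from the table above, not guarantees (and Cilium's eBPF data plane is per-node, not per-pod, so treat its entry as a rough proxy):

```python
# Approximate per-proxy memory footprint in MB (from the comparison table)
PROXY_MB = {"Linkerd": 10, "Istio": 50, "Cilium": 5, "Consul Connect": 30}

def sidecar_overhead_mb(mesh: str, pod_count: int) -> int:
    """Total extra memory the mesh's proxies add across pod_count pods."""
    return PROXY_MB[mesh] * pod_count

# For a 200-pod cluster: Linkerd adds roughly 2GB, Istio roughly 10GB
```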
Linkerd Setup
# === Linkerd Installation ===
# Install CLI
# curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
# export PATH=$HOME/.linkerd2/bin:$PATH
# Pre-check
# linkerd check --pre
# Install to Kubernetes
# linkerd install --crds | kubectl apply -f -
# linkerd install | kubectl apply -f -
# linkerd check
# Install Viz (Dashboard)
# linkerd viz install | kubectl apply -f -
# linkerd viz dashboard &
# Inject sidecar to namespace
# kubectl annotate namespace default linkerd.io/inject=enabled
# kubectl rollout restart deployment -n default
# Verify mTLS
# linkerd viz edges deployment -n default
# linkerd viz tap deployment/my-app -n default
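The `linkerd viz edges` output can also be checked programmatically, e.g. in a CI gate. The column layout below (SRC, DST, SRC_NS, DST_NS, SECURED with a `√` marker) is an assumption based on typical output; verify it against your Linkerd version before relying on it:

```python
# Minimal parser for `linkerd viz edges deployment` tabular output.
# Column layout is assumed: SRC DST SRC_NS DST_NS SECURED
SAMPLE_EDGES = """\
SRC         DST      SRC_NS   DST_NS   SECURED
web         my-app   default  default  √
prometheus  my-app   linkerd  default  √
"""

def insecure_edges(edges_output: str) -> list:
    """Return (src, dst) pairs whose edge is not marked as secured."""
    rows = edges_output.strip().splitlines()[1:]  # skip header row
    bad = []
    for row in rows:
        cols = row.split()
        src, dst, secured = cols[0], cols[1], cols[4]
        if secured != "√":
            bad.append((src, dst))
    return bad
```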
# Service Profile — Retry & Timeout
# apiVersion: linkerd.io/v1alpha2
# kind: ServiceProfile
# metadata:
#   name: my-api.default.svc.cluster.local
#   namespace: default
# spec:
#   routes:
#   - name: GET /api/orders
#     condition:
#       method: GET
#       pathRegex: /api/orders
#     isRetryable: true
#     timeout: 5s
#   - name: POST /api/orders
#     condition:
#       method: POST
#       pathRegex: /api/orders
#     timeout: 10s
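ServiceProfiles like the one above are easy to generate from code when you have many routes. A sketch, with field names following the linkerd.io/v1alpha2 schema shown above:

```python
def sp_route(method: str, path_regex: str, timeout: str,
             retryable: bool = False) -> dict:
    """Build one ServiceProfile route entry (linkerd.io/v1alpha2 schema)."""
    route = {
        "name": f"{method} {path_regex}",
        "condition": {"method": method, "pathRegex": path_regex},
        "timeout": timeout,
    }
    if retryable:
        route["isRetryable"] = True
    return route

profile = {
    "apiVersion": "linkerd.io/v1alpha2",
    "kind": "ServiceProfile",
    "metadata": {"name": "my-api.default.svc.cluster.local",
                 "namespace": "default"},
    "spec": {"routes": [
        sp_route("GET", "/api/orders", "5s", retryable=True),
        sp_route("POST", "/api/orders", "10s"),
    ]},
}
```

Serializing `profile` to YAML (e.g. with PyYAML) reproduces the manifest above. Note that only idempotent routes (here the GET) should be marked retryable.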
from dataclasses import dataclass

@dataclass
class LinkerdFeature:
    feature: str
    description: str
    config: str
    impact: str

features = [
    LinkerdFeature("mTLS", "Auto encryption pod-to-pod", "Auto (inject sidecar)", "Security without code changes"),
    LinkerdFeature("Observability", "Golden metrics per route", "linkerd viz install", "Much easier debugging"),
    LinkerdFeature("Traffic Split", "Canary % based routing", "TrafficSplit CRD", "Safe deployments"),
    LinkerdFeature("Retry", "Auto retry failed requests", "ServiceProfile isRetryable", "Higher success rate"),
    LinkerdFeature("Timeout", "Per-route timeout", "ServiceProfile timeout", "Prevent cascading failure"),
    LinkerdFeature("Circuit Breaker", "Fail fast on unhealthy", "ServiceProfile + budgets", "System resilience"),
]

print("=== Linkerd Features ===")
for f in features:
    print(f"  [{f.feature}] {f.description}")
    print(f"    Config: {f.config} | Impact: {f.impact}")
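To see why `isRetryable` gives a "higher success rate", a back-of-the-envelope sketch: if each attempt succeeds independently with probability p, then with up to n retries the overall success probability is 1 - (1 - p)^(n+1):

```python
def success_with_retries(p: float, retries: int) -> float:
    """Overall success probability with independent attempts:
    1 - (1 - p) ** (retries + 1)."""
    return 1 - (1 - p) ** (retries + 1)

# A 99% endpoint with a single retry reaches ~99.99% effective success
```

This assumes failures are independent, which is why Linkerd pairs retries with a retry budget: under correlated failure (an overloaded backend), blind retries make things worse.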
Pub/Sub with NATS
# === NATS Pub/Sub on Kubernetes ===
# Install NATS via Helm
# helm repo add nats https://nats-io.github.io/k8s/helm/charts/
# helm install nats nats/nats \
# --set nats.jetstream.enabled=true \
# --set nats.jetstream.memStorage.size=1Gi \
# --set nats.jetstream.fileStorage.size=10Gi
# Python Publisher
# import nats
# import json
# import asyncio
#
# async def publish_order(order):
#     nc = await nats.connect("nats://nats:4222")
#     js = nc.jetstream()
#
#     # Create the stream if it does not exist yet
#     await js.add_stream(name="ORDERS", subjects=["orders.*"])
#
#     # Publish with a message ID header for deduplication
#     await js.publish(
#         f"orders.{order['type']}",
#         json.dumps(order).encode(),
#         headers={"Nats-Msg-Id": order["id"]},
#     )
#     await nc.close()
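The publisher's subject naming and dedup header can be factored into a pure helper, which keeps the NATS I/O thin and testable. A sketch mirroring the `orders.<type>` scheme and `Nats-Msg-Id` header used above:

```python
import json

def order_envelope(order: dict) -> tuple:
    """Build (subject, payload, headers) for a JetStream publish.
    The Nats-Msg-Id header lets JetStream deduplicate re-published messages."""
    subject = f"orders.{order['type']}"
    payload = json.dumps(order).encode()
    headers = {"Nats-Msg-Id": order["id"]}
    return subject, payload, headers
```

In the publisher this becomes `await js.publish(*order_envelope(order)[:2], headers=order_envelope(order)[2])`, or simply unpack once and pass each part.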
# Python Subscriber
# from nats.js.api import ConsumerConfig, DeliverPolicy
#
# async def subscribe_orders():
#     nc = await nats.connect("nats://nats:4222")
#     js = nc.jetstream()
#
#     # Durable push consumer: replay the stream from the beginning
#     sub = await js.subscribe(
#         "orders.*",
#         durable="order-processor",
#         config=ConsumerConfig(deliver_policy=DeliverPolicy.ALL),
#     )
#
#     async for msg in sub.messages:
#         order = json.loads(msg.data.decode())
#         try:
#             process_order(order)
#             await msg.ack()
#         except Exception:
#             await msg.nak(delay=5)  # Redeliver after 5s
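A fixed 5-second nak delay can hammer a struggling downstream. A common refinement is exponential backoff keyed on the delivery count (available in nats-py as `msg.metadata.num_delivered`). The delay policy itself is a pure function, sketched here independently of the NATS client:

```python
def nak_delay(num_delivered: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff for redelivery: 1s, 2s, 4s, ... capped at cap.
    num_delivered starts at 1 for the first delivery."""
    return min(cap, base * 2 ** (num_delivered - 1))

# In the subscriber:
#     await msg.nak(delay=nak_delay(msg.metadata.num_delivered))
```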
@dataclass
class PubSubBroker:
    broker: str
    delivery: str
    persistence: str
    throughput: str
    latency: str
    use_case: str

brokers = [
    PubSubBroker("NATS Core", "At-most-once", "No", "High", "< 1ms", "Real-time lightweight"),
    PubSubBroker("NATS JetStream", "At-least-once", "Yes", "High", "1-5ms", "Durable messaging"),
    PubSubBroker("Kafka", "At-least-once", "Yes (log)", "Very High", "5-50ms", "Event sourcing pipeline"),
    PubSubBroker("RabbitMQ", "At-least-once", "Yes", "Medium", "1-10ms", "Task queues routing"),
    PubSubBroker("Redis Pub/Sub", "At-most-once", "No", "Very High", "< 1ms", "Cache invalidation"),
]

print("\n=== Message Brokers ===")
for b in brokers:
    print(f"  [{b.broker}] Delivery: {b.delivery}")
    print(f"    Persist: {b.persistence} | Throughput: {b.throughput} | Latency: {b.latency}")
    print(f"    Use Case: {b.use_case}")
Production Architecture
# === Production Pub/Sub + Linkerd ===
# Traffic Split — Canary Consumer
# apiVersion: split.smi-spec.io/v1alpha2
# kind: TrafficSplit
# metadata:
#   name: order-consumer-split
# spec:
#   service: order-consumer
#   backends:
#   - service: order-consumer-stable
#     weight: 900  # 90%
#   - service: order-consumer-canary
#     weight: 100  # 10%
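The 900/100 split above is one step of a typical canary ramp. A sketch generating the (stable, canary) weight pairs for each step, with weights summing to 1000 to match the manifest:

```python
def canary_ramp(steps: list, total: int = 1000) -> list:
    """(stable, canary) weight pairs for each canary percentage step."""
    pairs = []
    for pct in steps:
        canary = total * pct // 100
        pairs.append((total - canary, canary))
    return pairs

# e.g. canary_ramp([10, 25, 50, 100]) walks the canary from 10% to full traffic
```

In practice each step is applied by patching the TrafficSplit and only advanced when the canary's success rate and latency in Linkerd Viz stay healthy.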
@dataclass
class ServiceMetric:
    service: str
    success_rate: str
    p50_latency: str
    p99_latency: str
    rps: str
    mtls: str

metrics = [
    ServiceMetric("order-api", "99.8%", "12ms", "85ms", "450", "Enabled"),
    ServiceMetric("order-consumer", "99.5%", "25ms", "180ms", "320", "Enabled"),
    ServiceMetric("nats-client", "99.9%", "1ms", "5ms", "1200", "Enabled"),
    ServiceMetric("payment-service", "99.7%", "45ms", "250ms", "180", "Enabled"),
    ServiceMetric("notification-svc", "99.2%", "8ms", "50ms", "500", "Enabled"),
]

print("Production Service Metrics (Linkerd Viz):")
for m in metrics:
    print(f"  [{m.service}] SR: {m.success_rate} | mTLS: {m.mtls}")
    print(f"    p50: {m.p50_latency} | p99: {m.p99_latency} | RPS: {m.rps}")
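Metrics like these can be gated against an SLO, e.g. to block a rollout. A sketch flagging services below a target success rate; the 99.5% threshold and the tiny sample records are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class SR:
    service: str
    success_rate: str  # e.g. "99.8%"

def below_slo(metrics, target: float = 99.5) -> list:
    """Names of services whose success rate (as '99.8%') is under target."""
    return [m.service for m in metrics
            if float(m.success_rate.rstrip("%")) < target]

sample = [SR("order-api", "99.8%"), SR("notification-svc", "99.2%")]
```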
architecture = {
    "API Gateway": "Ingress → order-api (Linkerd injected)",
    "Pub/Sub": "order-api → NATS JetStream → order-consumer",
    "Async Processing": "order-consumer → payment-service → notification-svc",
    "Observability": "Linkerd Viz → Prometheus → Grafana",
    "Security": "mTLS auto all pod-to-pod + NATS TLS",
    "Resilience": "Retry + Timeout + Circuit Breaker per route",
}

print("\nArchitecture:")
for k, v in architecture.items():
    print(f"  [{k}]: {v}")
Tips
- Linkerd: choose Linkerd when you want a lightweight, simple service mesh
- NATS: use JetStream for durable messaging
- mTLS: Linkerd provides mTLS automatically, with no code changes
- Canary: use TrafficSplit to test a new consumer version safely
- Monitor: watch the golden metrics for every service in Linkerd Viz
Applying This in Practice
Recommended learning resources include the official documentation (always the most up to date), online courses from Coursera, Udemy, and edX, quality YouTube channels in both Thai and English, and communities such as Discord, Reddit, and Stack Overflow for exchanging experience with developers worldwide.
What is Linkerd?
A lightweight Kubernetes service mesh with a Rust-based data plane. It provides automatic mTLS, observability via golden metrics, traffic splitting, retries, and timeouts, with a simple install and a small memory footprint.
What is Pub/Sub architecture?
A publish/subscribe messaging pattern: publishers send messages to topics and subscribers consume them, decoupling services so they can scale independently. Brokers such as NATS, Kafka, and RabbitMQ power event-driven, asynchronous, real-time workloads like notifications.
How do you use Linkerd with Pub/Sub?
Linkerd secures service-to-broker-client traffic with mTLS, exposes latency and success-rate observability per route, enables canary consumers via TrafficSplit, retries failed requests, and adds circuit breaking, rate limiting, and a topology dashboard.
How does NATS differ from Kafka?
NATS is lightweight with at-most-once delivery for real-time use, and JetStream adds persistence. Kafka provides a durable, replayable log with very high throughput for event sourcing, but is more complex and resource-hungry.
Summary
Combining the Linkerd service mesh with Pub/Sub messaging (NATS or Kafka) gives event-driven microservices automatic mTLS, observability, canary deployments via traffic splits, and retry/timeout resilience in production.
