SiamCafe.net Blog
Technology

LocalAI Self-Hosted Feature Flag Management: Managing Feature Flags with AI

2025-09-24 · Ajarn Bom, SiamCafe.net · 818 words

LocalAI and Self-Hosted Feature Flag Management

LocalAI is an open-source AI inference server that lets you run AI models entirely on your own infrastructure (self-hosted). It supports LLM models (LLaMA, Mistral, Phi) and runs them locally without sending any data to the cloud, which matters for data privacy. In short, you get AI capabilities without paying per-request fees to OpenAI or other cloud providers.

Feature Flags (Feature Toggles) are a software development technique for turning features on and off without deploying new code. Key use cases: Progressive rollout (gradually release a feature to more users), A/B testing (compare variants against each other), Kill switch (shut a feature off instantly when something breaks), and Trunk-based development (merge unfinished code safely behind a flag).

Combining LocalAI with Feature Flag Management means letting AI decide when to enable or disable features based on metrics, user behavior, and system health: for example, cutting traffic to a feature whose error rate spikes, or increasing the rollout percentage when metrics look healthy.
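To make that concrete, here is a minimal sketch of asking LocalAI for a rollout decision. The endpoint and model name follow the setup later in this post; the prompt wording and the strict JSON reply contract are our own assumptions, not a LocalAI feature.

```python
# Sketch: ask a local model for a rollout recommendation and parse its reply.
# The JSON-only reply format is an assumption we impose via the prompt.
import json
import urllib.request

LOCALAI_URL = "http://localhost:8080/v1/chat/completions"  # OpenAI-compatible endpoint

def build_prompt(flag_name: str, metrics: dict) -> str:
    """Pack current metrics into a prompt that asks for a JSON-only answer."""
    return (
        f"Feature flag '{flag_name}' metrics: {json.dumps(metrics)}. "
        "Reply with JSON only: "
        '{"action": "INCREASE|HOLD|ROLLBACK", "new_pct": <int>, "reason": "<text>"}'
    )

def parse_recommendation(reply: str) -> dict:
    """Extract the JSON object from the model reply; fail safe to HOLD."""
    try:
        start, end = reply.index("{"), reply.rindex("}") + 1
        rec = json.loads(reply[start:end])
        if rec.get("action") in {"INCREASE", "HOLD", "ROLLBACK"}:
            return rec
    except (ValueError, json.JSONDecodeError):
        pass
    return {"action": "HOLD", "new_pct": None, "reason": "unparseable model reply"}

def ask_localai(flag_name: str, metrics: dict) -> dict:
    """Call LocalAI (requires the server from the setup section to be running)."""
    body = json.dumps({
        "model": "mistral",
        "messages": [{"role": "user", "content": build_prompt(flag_name, metrics)}],
        "temperature": 0.1,
    }).encode()
    req = urllib.request.Request(LOCALAI_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return parse_recommendation(reply)

# Parsing works on any reply text, no server needed:
rec = parse_recommendation('Sure! {"action": "HOLD", "new_pct": 20, "reason": "elevated errors"}')
print(rec["action"])  # HOLD
```

The fail-safe to HOLD matters: a model reply you cannot parse should never change production traffic.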

Setting Up LocalAI with a Feature Flag System

Set up the self-hosted infrastructure

# === LocalAI + Feature Flag Setup ===

# 1. Install LocalAI
docker pull localai/localai:latest

# Docker Compose for full stack
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  localai:
    image: localai/localai:latest
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    environment:
      - MODELS_PATH=/models
      - THREADS=4
      - CONTEXT_SIZE=2048
    deploy:
      resources:
        limits:
          memory: 8G

  unleash:
    image: unleashorg/unleash-server:latest
    ports:
      - "4242:4242"
    environment:
      - DATABASE_URL=postgres://unleash:password@db:5432/unleash
      - DATABASE_SSL=false
      - INIT_ADMIN_API_TOKENS=*:*.unleash-admin-token
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: unleash
      POSTGRES_USER: unleash
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
EOF

docker compose up -d

# 2. Download AI Model for LocalAI
mkdir -p models
wget -O models/mistral-7b-instruct.gguf \
 "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf"

# 3. LocalAI Model Config
cat > models/mistral.yaml << 'EOF'
name: mistral
backend: llama
parameters:
  model: mistral-7b-instruct.gguf
  temperature: 0.1
  top_p: 0.9
  context_size: 2048
EOF

# 4. Verify
curl http://localhost:8080/v1/models
curl http://localhost:4242/api/admin/features \
 -H "Authorization: *:*.unleash-admin-token"

echo "LocalAI + Feature Flag system running"

Building AI-Powered Feature Flags

Using AI to guide feature rollout decisions

#!/usr/bin/env python3
# ai_feature_flags.py - AI-Powered Feature Flag Manager
import hashlib
import json
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_flags")

class AIFeatureFlagManager:
    """AI-assisted feature flag management"""

    def __init__(self):
        self.flags = {}
        self.metrics = {}

    def create_flag(self, name, description, strategy="gradual"):
        """Create a feature flag"""
        self.flags[name] = {
            "name": name,
            "description": description,
            "enabled": False,
            "strategy": strategy,
            "rollout_pct": 0,
            "variants": {},
            "conditions": [],
        }
        return self.flags[name]

    def set_rollout(self, flag_name, percentage):
        """Set rollout percentage"""
        if flag_name in self.flags:
            self.flags[flag_name]["rollout_pct"] = min(100, max(0, percentage))
            self.flags[flag_name]["enabled"] = percentage > 0

    def evaluate(self, flag_name, user_context):
        """Evaluate flag for a user"""
        flag = self.flags.get(flag_name)
        if not flag or not flag["enabled"]:
            return {"enabled": False, "variant": None}

        # Hash-based rollout; md5 (rather than hash()) keeps assignments
        # stable across processes and restarts
        user_id = user_context.get("user_id", "")
        user_hash = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
        enabled = user_hash < flag["rollout_pct"]

        return {
            "enabled": enabled,
            "variant": "treatment" if enabled else "control",
            "rollout_pct": flag["rollout_pct"],
        }

    def ai_analyze_rollout(self, flag_name, metrics):
        """AI analyzes metrics and recommends rollout action"""
        # Simulated AI analysis (in production: call LocalAI)
        error_rate = metrics.get("error_rate", 0)
        latency_p99 = metrics.get("latency_p99_ms", 0)
        conversion_rate = metrics.get("conversion_rate", 0)

        current_pct = self.flags[flag_name]["rollout_pct"]

        if error_rate > 5:
            recommendation = {
                "action": "ROLLBACK",
                "new_pct": max(0, current_pct - 20),
                "reason": f"Error rate {error_rate}% exceeds 5% threshold",
                "confidence": 0.95,
            }
        elif error_rate > 2:
            recommendation = {
                "action": "HOLD",
                "new_pct": current_pct,
                "reason": f"Error rate {error_rate}% elevated, hold current rollout",
                "confidence": 0.80,
            }
        elif latency_p99 > 500:
            recommendation = {
                "action": "HOLD",
                "new_pct": current_pct,
                "reason": f"P99 latency {latency_p99}ms too high",
                "confidence": 0.85,
            }
        elif conversion_rate > 0 and current_pct < 100:
            recommendation = {
                "action": "INCREASE",
                "new_pct": min(100, current_pct + 10),
                "reason": "Metrics healthy, increase rollout",
                "confidence": 0.75,
            }
        else:
            recommendation = {
                "action": "HOLD",
                "new_pct": current_pct,
                "reason": "Insufficient data",
                "confidence": 0.50,
            }

        return recommendation

# Demo
manager = AIFeatureFlagManager()

# Create flags
manager.create_flag("new-checkout", "New checkout experience", "gradual")
manager.create_flag("ai-recommendations", "AI product recommendations", "gradual")

# Set initial rollout
manager.set_rollout("new-checkout", 20)
manager.set_rollout("ai-recommendations", 5)

# Evaluate for users
for user_id in ["user_001", "user_050", "user_099"]:
    result = manager.evaluate("new-checkout", {"user_id": user_id})
    print(f" {user_id}: {'ON' if result['enabled'] else 'OFF'}")

# AI analysis
metrics = {"error_rate": 1.2, "latency_p99_ms": 250, "conversion_rate": 3.5}
rec = manager.ai_analyze_rollout("new-checkout", metrics)
print(f"\nAI Recommendation: {rec['action']}")
print(f" New rollout: {rec['new_pct']}%, Reason: {rec['reason']}")
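Closing the loop means pushing the AI's new rollout percentage back to the flag server. A hedged sketch against the Unleash Admin API from the compose setup in this post; the strategy id would come from a prior GET and is hypothetical here.

```python
# Sketch: apply an AI recommendation to an existing Unleash flexibleRollout
# strategy. URL, token, and strategy id mirror/assume the setup in this post.
import json
import urllib.request

UNLEASH = "http://localhost:4242/api/admin"
TOKEN = "*:*.unleash-admin-token"

def rollout_payload(new_pct: int, group_id: str) -> dict:
    """Build the flexibleRollout strategy body Unleash expects."""
    return {
        "name": "flexibleRollout",
        "parameters": {
            "rollout": str(max(0, min(100, new_pct))),  # Unleash stores this as a string
            "stickiness": "userId",
            "groupId": group_id,
        },
    }

def apply_recommendation(flag: str, strategy_id: str, rec: dict) -> None:
    """PUT the updated strategy for the production environment."""
    if rec["action"] == "HOLD":
        return  # nothing to change
    url = (f"{UNLEASH}/projects/default/features/{flag}"
           f"/environments/production/strategies/{strategy_id}")
    body = json.dumps(rollout_payload(rec["new_pct"], flag)).encode()
    req = urllib.request.Request(url, data=body, method="PUT", headers={
        "Authorization": TOKEN, "Content-Type": "application/json"})
    urllib.request.urlopen(req).close()

# Payload shape, no server needed (note the clamp to 0-100):
print(rollout_payload(130, "new-checkout-flow")["parameters"]["rollout"])  # 100
```

Treating HOLD as a no-op keeps the automation conservative: only explicit INCREASE or ROLLBACK decisions touch the server.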

Progressive Rollout Strategies

Choosing the right rollout strategy

#!/usr/bin/env python3
# rollout_strategies.py - Progressive Rollout Strategies
import json
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("rollout")

class RolloutStrategies:
    def strategies(self):
        return {
            "percentage_rollout": {
                "description": "Increase the percentage step by step",
                "steps": [1, 5, 10, 25, 50, 75, 100],
                "wait_between": "1-24 hours per step",
                "rollback_trigger": "error_rate > 2% or latency > 2x baseline",
                "best_for": "General features, UI changes",
            },
            "canary_release": {
                "description": "Release to a small group of users first, then expand",
                "steps": [1, 5, 100],
                "canary_group": "Internal users -> Beta users -> All users",
                "monitoring": "Compare canary vs baseline metrics",
                "best_for": "Backend changes, API changes",
            },
            "ring_deployment": {
                "description": "Deploy ring by ring (Microsoft style)",
                "rings": {
                    "ring_0": "Internal dogfooding (1%)",
                    "ring_1": "Early adopters (5%)",
                    "ring_2": "General availability (50%)",
                    "ring_3": "Full rollout (100%)",
                },
                "best_for": "Major features, platform changes",
            },
            "user_targeting": {
                "description": "Enable only for users matching conditions",
                "conditions": [
                    "country == 'TH'",
                    "plan == 'pro'",
                    "registered_before('2024-01-01')",
                    "user_id IN beta_list",
                ],
                "best_for": "Geo-targeted features, plan-specific features",
            },
            "time_based": {
                "description": "Enable/disable on a schedule",
                "examples": [
                    "Enable a feature only during business hours",
                    "Holiday promotions",
                    "Maintenance windows",
                ],
                "best_for": "Scheduled features, promotions",
            },
        }

strategies = RolloutStrategies()
for name, info in strategies.strategies().items():
    print(f"{name}: {info['description']}")
    print(f" Best for: {info['best_for']}\n")
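The percentage_rollout strategy above can be driven mechanically: advance one step while metrics stay healthy, fall back a step when the rollback trigger fires. A minimal sketch using the steps and thresholds quoted in the strategy table:

```python
# Sketch: step through a percentage rollout with the table's rollback trigger
# (error_rate > 2% or latency > 2x baseline).
STEPS = [1, 5, 10, 25, 50, 75, 100]

def next_rollout(current_pct: int, error_rate: float, latency_ms: float,
                 baseline_latency_ms: float) -> int:
    """Return the next rollout %: step forward if healthy, step back if not."""
    if error_rate > 2.0 or latency_ms > 2 * baseline_latency_ms:
        # Rollback trigger fired: drop to the previous step (or 0)
        lower = [s for s in STEPS if s < current_pct]
        return lower[-1] if lower else 0
    higher = [s for s in STEPS if s > current_pct]
    return higher[0] if higher else current_pct  # already at 100

print(next_rollout(10, 0.5, 200, 180))   # healthy -> 25
print(next_rollout(25, 3.1, 200, 180))   # errors -> back to 10
print(next_rollout(100, 0.1, 200, 180))  # stays at 100
```

In practice this function would run once per "wait_between" window, fed by your monitoring system.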

Self-Hosted Feature Flag Server

Setting up the Unleash feature flag server

# === Unleash Feature Flag Server ===

# 1. Create Feature Flag via API
cat > create_flags.sh << 'BASH'
#!/bin/bash
UNLEASH_URL="http://localhost:4242/api/admin"
TOKEN="*:*.unleash-admin-token"

# Create feature flag
curl -X POST "$UNLEASH_URL/projects/default/features" \
 -H "Authorization: $TOKEN" \
 -H "Content-Type: application/json" \
 -d '{
 "name": "new-checkout-flow",
 "description": "New checkout experience with AI recommendations",
 "type": "release",
 "impressionData": true
 }'

# Add gradual rollout strategy
curl -X POST "$UNLEASH_URL/projects/default/features/new-checkout-flow/environments/production/strategies" \
 -H "Authorization: $TOKEN" \
 -H "Content-Type: application/json" \
 -d '{
 "name": "flexibleRollout",
 "parameters": {
 "rollout": "10",
 "stickiness": "userId",
 "groupId": "new-checkout-flow"
 }
 }'

# Create variant
curl -X PATCH "$UNLEASH_URL/projects/default/features/new-checkout-flow/environments/production/variants" \
 -H "Authorization: $TOKEN" \
 -H "Content-Type: application/json" \
 -d '[
 {
 "name": "variant-a",
 "weight": 50,
 "payload": {"type": "json", "value": "{\"buttonColor\": \"blue\"}"}
 },
 {
 "name": "variant-b",
 "weight": 50,
 "payload": {"type": "json", "value": "{\"buttonColor\": \"green\"}"}
 }
 ]'

echo "Feature flags created"
BASH

chmod +x create_flags.sh

# 2. Python SDK Integration
cat > app_integration.py << 'PYEOF'
#!/usr/bin/env python3
"""Feature Flag Integration in Application"""
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

class FeatureFlagClient:
    """Simple feature flag client (Unleash-compatible)"""

    def __init__(self):
        self.flags = {
            "new-checkout-flow": {
                "enabled": True,
                "strategies": [
                    {"name": "flexibleRollout", "parameters": {"rollout": 25, "stickiness": "userId"}},
                ],
                "variants": [
                    {"name": "variant-a", "weight": 50, "payload": {"buttonColor": "blue"}},
                    {"name": "variant-b", "weight": 50, "payload": {"buttonColor": "green"}},
                ],
            },
            "ai-search": {
                "enabled": True,
                "strategies": [
                    {"name": "userWithId", "parameters": {"userIds": "user_001,user_002,user_003"}},
                ],
            },
        }

    def is_enabled(self, flag_name, context=None):
        flag = self.flags.get(flag_name)
        if not flag or not flag["enabled"]:
            return False

        for strategy in flag.get("strategies", []):
            if strategy["name"] == "flexibleRollout":
                rollout = strategy["parameters"]["rollout"]
                user_id = (context or {}).get("userId", "anonymous")
                user_hash = int(hashlib.md5(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
                return user_hash < rollout

            elif strategy["name"] == "userWithId":
                user_ids = strategy["parameters"]["userIds"].split(",")
                return (context or {}).get("userId") in user_ids

        return False

    def get_variant(self, flag_name, context=None):
        flag = self.flags.get(flag_name)
        if not flag or not self.is_enabled(flag_name, context):
            return {"name": "disabled", "payload": None}

        variants = flag.get("variants", [])
        if not variants:
            return {"name": "enabled", "payload": None}

        user_id = (context or {}).get("userId", "anonymous")
        idx = int(hashlib.md5(f"{flag_name}:variant:{user_id}".encode()).hexdigest(), 16) % len(variants)
        return variants[idx]

client = FeatureFlagClient()

# Check flags for different users
for user_id in ["user_001", "user_050", "user_099"]:
    ctx = {"userId": user_id}
    checkout = client.is_enabled("new-checkout-flow", ctx)
    variant = client.get_variant("new-checkout-flow", ctx)
    ai_search = client.is_enabled("ai-search", ctx)
    print(f"{user_id}: checkout={'ON' if checkout else 'OFF'} (variant={variant['name']}), ai-search={'ON' if ai_search else 'OFF'}")
PYEOF

python3 app_integration.py

echo "Integration complete"

Monitoring and Analytics

Measuring feature flag impact

#!/usr/bin/env python3
# flag_analytics.py - Feature Flag Analytics
import json
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("analytics")

class FlagAnalytics:
    def dashboard(self):
        return {
            "active_flags": {
                "total": 15,
                "enabled": 12,
                "disabled": 3,
                "stale": 2,
            },
            "flag_details": [
                {"name": "new-checkout-flow", "rollout": 25, "days_active": 7, "evaluations_24h": 45000, "conversion_lift": "+3.2%"},
                {"name": "ai-search", "rollout": 100, "days_active": 30, "evaluations_24h": 120000, "conversion_lift": "+8.5%"},
                {"name": "dark-mode", "rollout": 50, "days_active": 14, "evaluations_24h": 80000, "conversion_lift": "+0.5%"},
                {"name": "new-pricing", "rollout": 10, "days_active": 3, "evaluations_24h": 12000, "conversion_lift": "-1.2%"},
            ],
            "ai_recommendations": [
                {"flag": "new-checkout-flow", "action": "INCREASE to 50%", "reason": "Positive conversion lift, stable error rate"},
                {"flag": "new-pricing", "action": "HOLD at 10%", "reason": "Negative conversion, needs investigation"},
                {"flag": "dark-mode", "action": "INCREASE to 100%", "reason": "No negative impact, user satisfaction high"},
            ],
            "alerts": [
                {"flag": "new-pricing", "severity": "warning", "message": "Conversion rate dropped 1.2% for treatment group"},
            ],
        }

analytics = FlagAnalytics()
dash = analytics.dashboard()
print("Feature Flag Dashboard:")
print(f" Active: {dash['active_flags']['enabled']}/{dash['active_flags']['total']}")

print("\nFlag Details:")
for flag in dash["flag_details"]:
    print(f" {flag['name']}: {flag['rollout']}% rollout, {flag['conversion_lift']} conversion")

print("\nAI Recommendations:")
for rec in dash["ai_recommendations"]:
    print(f" {rec['flag']}: {rec['action']}")
    print(f" Reason: {rec['reason']}")
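Conversion-lift numbers like the ones above are only actionable if they clear statistical noise. A minimal two-proportion z-test sketch; the visitor and conversion counts are hypothetical, not from the dashboard:

```python
# Sketch: two-proportion z-test for treatment vs control conversion rates.
import math

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: treatment (a) vs control (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # standard error
    return (p_a - p_b) / se

# Hypothetical counts: 4.1% vs 3.5% conversion on 10k users each
z = z_score(conv_a=410, n_a=10000, conv_b=350, n_b=10000)
print(f"z = {z:.2f}, significant = {abs(z) > 1.96}")  # 1.96 ~ 95% confidence
```

GrowthBook (mentioned in the FAQ below) ships this kind of statistics engine built in; the point here is just that a raw lift percentage alone is not a rollout signal.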

FAQ: Frequently Asked Questions

Q: Why self-host feature flags?

A: Self-hosted feature flags give you: data privacy (user context and targeting data never leave your infrastructure, which helps with GDPR and PDPA compliance), no vendor lock-in (no dependence on LaunchDarkly or Split.io pricing; Unleash's open source edition is free, with paid tiers only for enterprise features), customizability (you can change the code to fit your needs), and low latency (the SDK evaluates flags locally, with no network call per check). The trade-offs: you maintain the infrastructure yourself, there is no vendor support, and the feature set is smaller than SaaS offerings. Small teams (< 5 developers) may be better served by SaaS (LaunchDarkly, Flagsmith); if data privacy is the priority, go self-hosted.

Q: How do LocalAI and Ollama differ?

A: LocalAI is an OpenAI-API-compatible server that supports multiple backends (llama.cpp, whisper, stablediffusion). Because its API matches OpenAI's, switching is easy, and it supports both GPU and CPU inference, which suits production deployment. Ollama emphasizes simplicity: installation takes one command, and its CLI-first approach suits local development and experimentation, but its native API does not follow the OpenAI format. For feature flag integration, LocalAI is the better fit: being API-compatible with the OpenAI SDKs means you can switch between local and cloud easily.
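Because LocalAI speaks the OpenAI chat-completions API, switching between local and cloud really is just a base-URL (and API-key) change. A small sketch; the model names are examples:

```python
# Sketch: the same OpenAI-style request works against LocalAI or OpenAI,
# differing only in base URL, API key, and model name.
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-style chat-completions request for either backend."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"})

local = chat_request("http://localhost:8080/v1", "not-needed", "mistral", "hi")
cloud = chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "hi")
print(local.full_url)  # http://localhost:8080/v1/chat/completions
# urllib.request.urlopen(local) would send it (LocalAI must be running)
```

The same property holds for the official OpenAI SDKs, which accept a custom base URL pointing at LocalAI.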

Q: Don't feature flags make the code more complex?

A: They can. Technical debt builds up when flags are never removed: flags stuck at 100% rollout (stale flags), nested flags (a flag inside another flag), and flags with hidden side effects. Mitigations: once a flag has been at 100% for 30 days, delete the old code path; adopt a naming convention (release-*, experiment-*, ops-*); limit the number of active flags (fewer than 20 at a time); review stale flags every sprint; and make flag cleanup part of the Definition of Done. Overall, feature flags do add complexity, but the ability to deploy safely and roll back instantly is worth it.
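The 30-day cleanup rule above is easy to enforce mechanically: a small stale-flag check that could run each sprint. The flag records here are hypothetical examples:

```python
# Sketch: list flags that have sat at 100% rollout past the cleanup deadline.
from datetime import date

def stale_flags(flags: list[dict], today: date, max_days: int = 30) -> list[str]:
    """Flags fully rolled out for longer than max_days are cleanup candidates."""
    return [
        f["name"] for f in flags
        if f["rollout_pct"] == 100
        and (today - f["at_100_since"]).days > max_days
    ]

flags = [
    {"name": "release-dark-mode", "rollout_pct": 100, "at_100_since": date(2025, 7, 1)},
    {"name": "release-new-nav", "rollout_pct": 100, "at_100_since": date(2025, 9, 10)},
    {"name": "experiment-pricing", "rollout_pct": 10, "at_100_since": None},
]
print(stale_flags(flags, today=date(2025, 9, 24)))  # ['release-dark-mode']
```

Wiring this into CI or a sprint report turns "review stale flags" from a habit into a gate.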

Q: How do Unleash, Flagsmith, and GrowthBook compare?

A: Unleash is the most mature, with a large community and many SDKs, though some enterprise features (A/B testing, audit log) are not in the open source version. Flagsmith has a polished UI and built-in remote config alongside feature flags, plus an edge proxy and multi-environment support. GrowthBook focuses on experimentation and A/B testing, with a built-in statistics engine that connects to your data warehouse, which suits data-driven decisions. Pick Unleash if you want a stable, mature platform; GrowthBook if A/B testing and experimentation are the goal; Flagsmith if you also need remote config.
