SiamCafe.net Blog
Technology

Midjourney Prompt Feature Flag Management: Managing AI Prompts with Feature Flags

2025-07-31 · Bom (SiamCafe.net) · 1,125 words

What Is Midjourney Prompt Engineering?

Midjourney is an AI image generation tool that turns text prompts into images. Output quality depends heavily on how the prompt is written, which is why prompt engineering matters. A well-structured prompt typically has four parts: Subject (what to generate), Style (the visual style), Details (lighting, composition, and other specifics), and Parameters (Midjourney settings such as aspect ratio and model version).

Feature Flag Management for AI prompts means treating prompts the way feature flags are treated in software development: switch prompt versions on and off without deploying code, serve different prompts to different user groups, roll out gradually, and roll back instantly when a prompt underperforms.

Key benefits: rapid iteration, change prompts without redeploying; A/B testing, compare prompt versions on real traffic; risk reduction, roll back a bad prompt instantly; analytics, track metrics per prompt version; collaboration, the whole team works from a shared prompt library.
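The "sticky" rollout mentioned below (the same user always sees the same variant) is usually implemented by hashing the user ID into a stable bucket rather than rolling dice per request. A minimal sketch, with hypothetical flag and user names:

```python
import hashlib

def bucket(flag_name: str, user_id: str, buckets: int = 100) -> int:
    """Deterministically map a (flag, user) pair to a bucket 0..buckets-1.

    The same pair always hashes to the same bucket, so rollout
    decisions are stable across requests without storing state.
    """
    digest = hashlib.md5(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_enabled(flag_name: str, user_id: str, rollout_pct: int) -> bool:
    """A user is in the rollout when their bucket falls below the percentage."""
    return bucket(flag_name, user_id) < rollout_pct

# The decision never changes for a fixed rollout percentage
print(is_enabled("product_image", "user_001", 50))
print(bucket("product_image", "user_001"))
```

Raising `rollout_pct` only adds users (buckets below the new threshold); nobody who was already enabled gets switched off, which keeps gradual rollouts monotonic.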

Feature Flags for AI Prompts

Define a feature flag system for prompts:

# === Prompt Feature Flag System ===

cat > prompt_flags.yaml << 'EOF'
prompt_feature_flags:
  product_image_generator:
    name: "Product Image Generator"
    description: "Generate product images for e-commerce"
    enabled: true
    variants:
      control:
        weight: 50
        prompt_template: |
          {product_name}, product photography, white background,
          studio lighting, high resolution, 4k, commercial photography
        parameters: "--ar 1:1 --style raw --v 6"
        
      treatment_a:
        weight: 25
        prompt_template: |
          {product_name}, professional product shot, minimalist setup,
          soft shadows, clean composition, editorial style, 8k detail
        parameters: "--ar 1:1 --style raw --v 6 --stylize 200"
        
      treatment_b:
        weight: 25
        prompt_template: |
          {product_name}, lifestyle product photography, natural light,
          warm tones, bokeh background, premium feel, magazine quality
        parameters: "--ar 4:3 --style raw --v 6 --stylize 300"
    
    targeting:
      rules:
        - attribute: "product_category"
          operator: "in"
          values: ["electronics", "fashion", "home"]
        - attribute: "user_plan"
          operator: "equals"
          value: "premium"
    
    metrics:
      - "click_through_rate"
      - "conversion_rate"
      - "user_rating"
    
    rollout:
      percentage: 100
      sticky: true  # Same user always gets same variant

  social_media_banner:
    name: "Social Media Banner"
    enabled: true
    variants:
      v1:
        weight: 70
        prompt_template: |
          {brand_message}, social media banner, modern design,
          vibrant colors, eye-catching, {platform} format
        parameters: "--ar 16:9 --v 6"
      v2:
        weight: 30
        prompt_template: |
          {brand_message}, digital marketing banner, bold typography,
          gradient background, professional, {platform} optimized
        parameters: "--ar 16:9 --v 6 --stylize 400"
    rollout:
      percentage: 50  # Only 50% of users get this feature
EOF

echo "Prompt feature flags configured"
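Before a config like the one above goes live, it is worth validating it: weights should sum to something positive, the rollout percentage should be in range, and every variant needs a template. A small sketch; the dict literal stands in for the result of parsing `prompt_flags.yaml`:

```python
def validate_flag(flag: dict) -> list:
    """Return a list of problems found in one prompt feature flag config."""
    problems = []
    variants = flag.get("variants", {})
    if not variants:
        problems.append("flag has no variants")
    total = sum(v.get("weight", 0) for v in variants.values())
    if variants and total <= 0:
        problems.append("variant weights must sum to a positive number")
    pct = flag.get("rollout", {}).get("percentage", 100)
    if not 0 <= pct <= 100:
        problems.append(f"rollout percentage {pct} out of range 0-100")
    for name, v in variants.items():
        if "prompt_template" not in v:
            problems.append(f"variant '{name}' missing prompt_template")
    return problems

# Stand-in for the parsed YAML above
flag = {
    "variants": {
        "control": {"weight": 50, "prompt_template": "{product_name}, product photography"},
        "treatment_a": {"weight": 25, "prompt_template": "{product_name}, product shot"},
        "treatment_b": {"weight": 25, "prompt_template": "{product_name}, lifestyle"},
    },
    "rollout": {"percentage": 100},
}
print(validate_flag(flag))  # -> []
```

Running this in CI on every config change catches broken flags before they reach the prompt manager.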

Building the Prompt Management System

A Python system for managing prompts:

#!/usr/bin/env python3
# prompt_manager.py - AI Prompt Management System
import json
import logging
import hashlib
from typing import Dict, List, Any, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("prompts")

class PromptManager:
    """Feature Flag-based Prompt Management"""
    
    def __init__(self):
        self.prompts = {}
        self.versions = {}
        self.metrics = {}
    
    def register_prompt(self, name, config):
        """Register a prompt with variants"""
        self.prompts[name] = config
        self.versions[name] = config.get("version", 1)
    
    def evaluate(self, prompt_name, context):
        """Evaluate which prompt variant to use"""
        config = self.prompts.get(prompt_name)
        if not config or not config.get("enabled", False):
            return None
        
        # Check targeting rules
        if not self._check_targeting(config.get("targeting", {}), context):
            return None
        
        # Check rollout percentage
        rollout = config.get("rollout", {}).get("percentage", 100)
        user_id = context.get("user_id", "anonymous")
        user_hash = int(hashlib.md5(f"{prompt_name}:{user_id}".encode()).hexdigest(), 16) % 100
        
        if user_hash >= rollout:
            return None
        
        # Select variant based on weights
        variant = self._select_variant(config["variants"], user_id, prompt_name)
        
        # Render template
        template = variant["prompt_template"]
        rendered = template.format(**context)
        
        return {
            "prompt_name": prompt_name,
            "variant_name": variant["name"],
            "rendered_prompt": rendered.strip(),
            "parameters": variant.get("parameters", ""),
            "version": self.versions[prompt_name],
        }
    
    def _check_targeting(self, targeting, context):
        rules = targeting.get("rules", [])
        if not rules:
            return True
        
        for rule in rules:
            attr = context.get(rule["attribute"])
            if rule["operator"] == "equals" and attr != rule["value"]:
                return False
            elif rule["operator"] == "in" and attr not in rule.get("values", []):
                return False
        return True
    
    def _select_variant(self, variants, user_id, prompt_name):
        total_weight = sum(v.get("weight", 1) for v in variants.values())
        user_hash = int(hashlib.md5(f"{prompt_name}:variant:{user_id}".encode()).hexdigest(), 16) % total_weight
        
        cumulative = 0
        for name, variant in variants.items():
            cumulative += variant.get("weight", 1)
            if user_hash < cumulative:
                return {**variant, "name": name}
        
        first_name = list(variants.keys())[0]
        return {**variants[first_name], "name": first_name}
    
    def record_metric(self, prompt_name, variant, metric_name, value):
        """Record metric for prompt variant"""
        key = f"{prompt_name}:{variant}:{metric_name}"
        if key not in self.metrics:
            self.metrics[key] = []
        self.metrics[key].append(value)
    
    def get_analytics(self, prompt_name):
        """Get analytics for prompt"""
        results = {}
        for key, values in self.metrics.items():
            if key.startswith(prompt_name):
                parts = key.split(":")
                variant = parts[1]
                metric = parts[2]
                if variant not in results:
                    results[variant] = {}
                results[variant][metric] = {
                    "count": len(values),
                    "avg": round(sum(values) / len(values), 3) if values else 0,
                    "min": min(values) if values else 0,
                    "max": max(values) if values else 0,
                }
        return results

# Demo
manager = PromptManager()

# Register product image prompt
manager.register_prompt("product_image", {
    "enabled": True,
    "version": 3,
    "variants": {
        "control": {
            "weight": 50,
            "prompt_template": "{product_name}, product photography, white background, studio lighting, 4k",
            "parameters": "--ar 1:1 --v 6",
        },
        "lifestyle": {
            "weight": 50,
            "prompt_template": "{product_name}, lifestyle photography, natural light, warm tones, premium feel",
            "parameters": "--ar 4:3 --v 6 --stylize 300",
        },
    },
    "rollout": {"percentage": 100},
})

# Evaluate for different users
for user_id in ["user_001", "user_002", "user_003"]:
    result = manager.evaluate("product_image", {
        "user_id": user_id,
        "product_name": "Wireless Headphones AX-500",
    })
    if result:
        print(f"{user_id} -> {result['variant_name']}: {result['rendered_prompt'][:60]}...")
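Because variant selection is hash-based rather than random, the traffic split converges on the configured weights while staying deterministic per user. A self-contained check of that property, using the same selection scheme as `_select_variant` above:

```python
import hashlib

def select_variant(variants: dict, user_id: str, prompt_name: str) -> str:
    """Weighted, deterministic variant selection (same scheme as PromptManager)."""
    total = sum(v["weight"] for v in variants.values())
    h = int(hashlib.md5(f"{prompt_name}:variant:{user_id}".encode()).hexdigest(), 16) % total
    cumulative = 0
    for name, v in variants.items():
        cumulative += v["weight"]
        if h < cumulative:
            return name
    return next(iter(variants))  # fallback, unreachable when weights are positive

variants = {"control": {"weight": 50}, "lifestyle": {"weight": 50}}
counts = {"control": 0, "lifestyle": 0}
for i in range(10_000):
    counts[select_variant(variants, f"user_{i}", "product_image")] += 1
print(counts)  # roughly 5000/5000: MD5 spreads users evenly across buckets
```

The same user always lands in the same variant, which is essential for A/B testing: a user who saw the lifestyle prompt on Monday must still see it on Tuesday, or the metrics are contaminated.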

A/B Testing Prompts

Compare prompt variants head to head:

#!/usr/bin/env python3
# ab_testing.py - Prompt A/B Testing Framework
import json
import logging
import random
import math
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ab_test")

class PromptABTest:
    """A/B Testing for AI Prompts"""
    
    def __init__(self, test_name):
        self.test_name = test_name
        self.variants = {}
        self.results = {}
    
    def add_variant(self, name, prompt_config):
        self.variants[name] = prompt_config
        self.results[name] = {"impressions": 0, "conversions": 0, "ratings": []}
    
    def record_impression(self, variant):
        self.results[variant]["impressions"] += 1
    
    def record_conversion(self, variant):
        self.results[variant]["conversions"] += 1
    
    def record_rating(self, variant, rating):
        self.results[variant]["ratings"].append(rating)
    
    def get_results(self):
        """Calculate A/B test results"""
        output = {}
        for name, data in self.results.items():
            impressions = data["impressions"]
            conversions = data["conversions"]
            ratings = data["ratings"]
            
            cvr = conversions / impressions if impressions > 0 else 0
            avg_rating = sum(ratings) / len(ratings) if ratings else 0
            
            output[name] = {
                "impressions": impressions,
                "conversions": conversions,
                "conversion_rate": round(cvr * 100, 2),
                "avg_rating": round(avg_rating, 2),
                "sample_size": impressions,
            }
        return output
    
    def statistical_significance(self, variant_a, variant_b):
        """Check if difference is statistically significant"""
        a = self.results[variant_a]
        b = self.results[variant_b]
        
        n_a = a["impressions"]
        n_b = b["impressions"]
        
        if n_a == 0 or n_b == 0:
            return {"significant": False, "reason": "Not enough data"}
        
        p_a = a["conversions"] / n_a
        p_b = b["conversions"] / n_b
        
        # Z-test for two proportions
        p_pool = (a["conversions"] + b["conversions"]) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1/n_a + 1/n_b)) if p_pool > 0 else 1
        z_score = (p_b - p_a) / se if se > 0 else 0
        
        significant = abs(z_score) > 1.96  # 95% confidence
        
        return {
            "variant_a_cvr": round(p_a * 100, 2),
            "variant_b_cvr": round(p_b * 100, 2),
            "lift": round((p_b - p_a) / p_a * 100, 2) if p_a > 0 else 0,
            "z_score": round(z_score, 3),
            "significant": significant,
            "confidence": "95%" if significant else "< 95%",
            "winner": variant_b if z_score > 1.96 else variant_a if z_score < -1.96 else "No winner yet",
        }

# Demo: A/B Test product image prompts
test = PromptABTest("product_image_v3")
test.add_variant("control", {"prompt": "product photography, white background"})
test.add_variant("lifestyle", {"prompt": "lifestyle photography, natural light"})

# Simulate data
random.seed(42)
for _ in range(500):
    test.record_impression("control")
    if random.random() < 0.12:  # 12% CVR
        test.record_conversion("control")
    test.record_rating("control", random.uniform(3.5, 5.0))

for _ in range(500):
    test.record_impression("lifestyle")
    if random.random() < 0.15:  # 15% CVR
        test.record_conversion("lifestyle")
    test.record_rating("lifestyle", random.uniform(3.8, 5.0))

results = test.get_results()
print("A/B Test Results:")
for name, data in results.items():
    print(f"  {name}: CVR={data['conversion_rate']}%, Rating={data['avg_rating']}, n={data['sample_size']}")

sig = test.statistical_significance("control", "lifestyle")
print(f"\nStatistical Significance:")
print(f"  Lift: {sig['lift']}%")
print(f"  Significant: {sig['significant']} ({sig['confidence']})")
print(f"  Winner: {sig['winner']}")
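The 1.96 z-threshold above answers "is the observed difference real?" but not "how much data do we need before deciding?". A standard two-proportion sample-size estimate (95% confidence, 80% power), sketched with the usual normal-approximation formula; the example numbers mirror the demo's 12% baseline:

```python
import math

def required_sample_size(baseline_cvr: float, min_relative_lift: float,
                         alpha_z: float = 1.96, power_z: float = 0.84) -> int:
    """Approximate per-variant sample size to detect a relative lift over a
    baseline conversion rate with a two-proportion z-test (95% conf, 80% power)."""
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + min_relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# To detect a 25% relative lift over a 12% baseline CVR (i.e. 12% -> 15%)
print(required_sample_size(0.12, 0.25))
```

Two takeaways: 500 impressions per variant (as in the demo) is enough only for fairly large lifts, and halving the detectable lift roughly quadruples the required sample, so pick the minimum lift you care about before starting the test.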

Version Control and Rollback

Manage prompt versions:

# === Prompt Version Control ===

cat > prompt_versioning.yaml << 'EOF'
prompt_versions:
  product_image:
    current_version: 3
    versions:
      v1:
        created: "2024-01-15"
        author: "john"
        status: "archived"
        prompt: "product photo, white background, studio"
        parameters: "--ar 1:1 --v 5.2"
        performance:
          conversion_rate: 8.5
          avg_rating: 3.8
          
      v2:
        created: "2024-03-01"
        author: "jane"
        status: "archived"
        prompt: "product photography, white background, studio lighting, high resolution, 4k"
        parameters: "--ar 1:1 --style raw --v 6"
        change_log: "Upgraded to v6, added style raw"
        performance:
          conversion_rate: 12.1
          avg_rating: 4.2
          
      v3:
        created: "2024-05-15"
        author: "john"
        status: "active"
        prompt: "product photography, white background, studio lighting, 4k, commercial photography"
        parameters: "--ar 1:1 --style raw --v 6"
        change_log: "Added commercial photography keyword"
        performance:
          conversion_rate: 13.8
          avg_rating: 4.4
          
    rollback_policy:
      auto_rollback: true
      conditions:
        - metric: "conversion_rate"
          operator: "drops_below"
          threshold: 10
          window: "24h"
        - metric: "error_rate"
          operator: "exceeds"
          threshold: 5
          window: "1h"
      rollback_to: "previous_version"
EOF

# Git-based prompt versioning
cat > .gitattributes << 'EOF'
prompts/*.yaml diff
prompts/*.json diff
EOF

mkdir -p prompts
cat > prompts/product_image.json << 'EOF'
{
  "name": "product_image",
  "version": 3,
  "variants": {
    "default": {
      "template": "{product_name}, product photography, white background, studio lighting, 4k, commercial photography",
      "parameters": "--ar 1:1 --style raw --v 6",
      "negative_prompt": "blurry, low quality, distorted, watermark"
    }
  },
  "metadata": {
    "author": "john",
    "created": "2024-05-15",
    "tags": ["product", "e-commerce", "photography"]
  }
}
EOF

echo "Version control configured"
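The `rollback_policy` above is declarative; something still has to evaluate it. A minimal sketch, in Python, of checking observed metrics against those conditions (the metric values here are illustrative):

```python
def should_rollback(conditions: list, observed: dict) -> list:
    """Return the rollback conditions violated by the observed metrics."""
    violated = []
    for cond in conditions:
        value = observed.get(cond["metric"])
        if value is None:
            continue  # no data for this metric yet; don't roll back blindly
        if cond["operator"] == "drops_below" and value < cond["threshold"]:
            violated.append(cond)
        elif cond["operator"] == "exceeds" and value > cond["threshold"]:
            violated.append(cond)
    return violated

# Same conditions as prompt_versioning.yaml
conditions = [
    {"metric": "conversion_rate", "operator": "drops_below", "threshold": 10, "window": "24h"},
    {"metric": "error_rate", "operator": "exceeds", "threshold": 5, "window": "1h"},
]

# Healthy metrics: nothing violated
print(should_rollback(conditions, {"conversion_rate": 13.8, "error_rate": 0.4}))  # -> []

# CVR regression: trigger rollback_to previous_version
violated = should_rollback(conditions, {"conversion_rate": 7.2, "error_rate": 0.4})
print([c["metric"] for c in violated])  # -> ['conversion_rate']
```

In practice this check runs on a schedule against metrics aggregated over each condition's `window`, and a violation flips the flag back to the previous prompt version automatically.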

Monitoring and Analytics

Track prompt performance:

#!/usr/bin/env python3
# prompt_analytics.py - Prompt Analytics Dashboard
import json
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("analytics")

class PromptAnalytics:
    def __init__(self):
        pass
    
    def dashboard(self):
        return {
            "overview": {
                "total_prompts": 15,
                "active_ab_tests": 3,
                "total_generations_24h": 12500,
                "avg_rating": 4.3,
            },
            "top_prompts": [
                {"name": "product_image_v3", "generations": 4500, "cvr": 13.8, "rating": 4.4},
                {"name": "social_banner_v2", "generations": 3200, "cvr": 8.5, "rating": 4.1},
                {"name": "blog_illustration", "generations": 2800, "cvr": 11.2, "rating": 4.3},
                {"name": "avatar_generator", "generations": 1200, "cvr": 22.5, "rating": 4.6},
            ],
            "ab_tests": [
                {
                    "test": "product_image_lifestyle",
                    "status": "running",
                    "days_active": 7,
                    "control_cvr": 12.1,
                    "treatment_cvr": 15.3,
                    "lift": "+26.4%",
                    "significant": True,
                    "recommendation": "Deploy treatment (lifestyle variant)",
                },
                {
                    "test": "banner_bold_typography",
                    "status": "running",
                    "days_active": 3,
                    "control_cvr": 8.5,
                    "treatment_cvr": 9.1,
                    "lift": "+7.1%",
                    "significant": False,
                    "recommendation": "Need more data (min 500 samples per variant)",
                },
            ],
            "cost_tracking": {
                "monthly_api_cost": 450,
                "cost_per_generation": 0.036,
                "generations_per_conversion": 7.2,
                "cost_per_conversion": 0.26,
            },
        }

analytics = PromptAnalytics()
dash = analytics.dashboard()
print("Prompt Analytics Dashboard:")
print(f"  Active Prompts: {dash['overview']['total_prompts']}")
print(f"  Generations (24h): {dash['overview']['total_generations_24h']}")

print("\nTop Prompts:")
for p in dash["top_prompts"]:
    print(f"  {p['name']}: {p['generations']} gens, CVR={p['cvr']}%, Rating={p['rating']}")

print("\nA/B Tests:")
for t in dash["ab_tests"]:
    print(f"  {t['test']}: {t['lift']} lift, Significant={t['significant']}")
    print(f"    -> {t['recommendation']}")

print(f"\nCost: ${dash['cost_tracking']['cost_per_conversion']}/conversion")
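The cost figures in the dashboard follow from one another. A quick sanity check of the arithmetic, treating the 12,500 generations as the billing-period volume for illustration:

```python
# Dashboard figures: $450 API cost over 12,500 generations,
# with 7.2 generations needed per conversion on average.
cost_per_generation = 450 / 12_500
generations_per_conversion = 7.2
cost_per_conversion = cost_per_generation * generations_per_conversion

print(round(cost_per_generation, 3))   # 0.036
print(round(cost_per_conversion, 2))   # 0.26
```

Tracking cost per conversion rather than cost per generation is what makes variants comparable: a prompt that needs fewer regenerations to produce a usable image can win even with pricier parameters.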

FAQ

Q: Why use feature flags with AI prompts?

A: AI prompts change frequently, and even a small wording change can noticeably alter output quality. Feature flags let you serve different prompts to different user groups (reducing risk), roll back instantly when a prompt underperforms (without deploying code), A/B test prompt versions against real data, roll out gradually to a percentage of users, and keep an audit trail of who changed which prompt and when. Without feature flags, prompts end up hardcoded: every change requires a deploy, and A/B testing is difficult.

Q: What do the main Midjourney parameters mean?

A: The key parameters: --ar (aspect ratio) sets the image shape, e.g. --ar 16:9, --ar 1:1, --ar 9:16; --v (version) selects the model version, e.g. --v 6, --v 5.2; --style raw makes the model follow the prompt more literally; --stylize (0-1000) controls how artistic the result is, default 100; --chaos (0-100) increases variety between results; --quality (0.25, 0.5, 1) trades speed for detail; --no excludes unwanted elements (a negative prompt). For product photography, use --style raw --v 6 with an appropriate --ar; for creative artwork, try --stylize 300-500.
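A tiny helper that assembles a final prompt string from a template and the parameters above. The parameter flags are real Midjourney options; the function itself is just an illustrative sketch:

```python
def build_prompt(template: str, params: dict, **fields) -> str:
    """Render a prompt template and append Midjourney-style parameter flags."""
    flags = " ".join(
        f"--{key}" if value is True else f"--{key} {value}"
        for key, value in params.items()
    )
    return f"{template.format(**fields)} {flags}".strip()

prompt = build_prompt(
    "{product_name}, product photography, white background, studio lighting",
    {"ar": "1:1", "style": "raw", "v": 6, "no": "watermark, text"},
    product_name="Wireless Headphones AX-500",
)
print(prompt)
```

Keeping parameters as structured data instead of baking them into the template string is what allows the feature flag system to vary --stylize or --ar per variant without touching the prompt text.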

Q: How should a prompt library be stored?

A: It depends on scale. Level 1 (Simple): JSON/YAML files in a Git repository, with Git commits as the version history. Level 2 (Structured): a database (e.g. PostgreSQL) storing prompts, versions, and metrics, plus an admin UI so non-developers can edit. Level 3 (Platform): a dedicated prompt management platform such as PromptLayer, Humanloop, or LangSmith, with built-in versioning, analytics, and A/B testing. Start with Git + YAML; once the library grows large (50+ prompts) or the team needs shared workflows, move to a database or platform.

Q: How do Midjourney, DALL-E, and Stable Diffusion differ?

A: Midjourney produces the most polished images (a strongly artistic style) but runs through Discord with no official API (only workarounds), costs about $10-60/month, and suits creative and marketing work. DALL-E 3 (OpenAI) has an official API, integrates with ChatGPT, renders text well, fits automated workflows, and costs roughly $0.04-0.12/image. Stable Diffusion is open source, can be self-hosted for free, and is highly customizable (fine-tuning, ControlNet, LoRA), but needs a GPU for local inference; it suits developers who want full control. For feature flag management, DALL-E 3 (official API) or Stable Diffusion (self-hosted, with API options) are the practical choices; Midjourney is hard to automate without an official API.
