SiamCafe.net Blog — Technology

Model Registry Tech Conference 2026

2026-01-29 · อ. บอม — SiamCafe.net · 9,426 words

Model Registry

Tags: Model Registry · MLflow · Model Versioning · A/B Testing · Model Serving · Production MLOps · Weights & Biases · Neptune · Vertex AI · Kubernetes · Docker · Seldon · KServe

Tool             | Type        | Hosting           | Price         | Best for
MLflow           | Open Source | Self-host / Cloud | Free          | General use
Weights & Biases | SaaS        | Cloud             | Free / $50/mo | Experiment tracking
Neptune.ai       | SaaS        | Cloud             | Free / Custom | Team collaboration
Vertex AI        | GCP Managed | Cloud             | Pay-per-use   | GCP users
SageMaker        | AWS Managed | Cloud             | Pay-per-use   | AWS users

MLflow Setup

# === MLflow Model Registry ===

# pip install mlflow scikit-learn

import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

# Generate sample data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# MLflow Experiment
# mlflow.set_tracking_uri("http://localhost:5000")
# mlflow.set_experiment("conference-model-2026")

# Train and Log Model
# with mlflow.start_run(run_name="rf-v1") as run:
#     params = {"n_estimators": 100, "max_depth": 10, "random_state": 42}
#     mlflow.log_params(params)
#
#     model = RandomForestClassifier(**params)
#     model.fit(X_train, y_train)
#
#     y_pred = model.predict(X_test)
#     accuracy = accuracy_score(y_test, y_pred)
#     f1 = f1_score(y_test, y_pred)
#
#     mlflow.log_metrics({"accuracy": accuracy, "f1_score": f1})
#     mlflow.sklearn.log_model(model, "model",
#         registered_model_name="conference-classifier")
#
#     print(f"Run ID: {run.info.run_id}")
#     print(f"Accuracy: {accuracy:.4f} | F1: {f1:.4f}")

# Register and Transition
# from mlflow.tracking import MlflowClient
# client = MlflowClient()
#
# # Transition to Staging
# client.transition_model_version_stage(
#     name="conference-classifier",
#     version=1,
#     stage="Staging"
# )
#
# # After testing, promote to Production
# client.transition_model_version_stage(
#     name="conference-classifier",
#     version=1,
#     stage="Production"
# )
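Before calling `transition_model_version_stage`, it helps to check that the move is a legal one. A minimal sketch of a guard that mirrors MLflow's classic stage model (the `ALLOWED_TRANSITIONS` table and `can_transition` helper are illustrative, not part of the MLflow API):

```python
# Stage names follow MLflow's classic stage model; the helper is illustrative.
ALLOWED_TRANSITIONS = {
    "None": {"Staging", "Archived"},
    "Staging": {"Production", "Archived"},
    "Production": {"Archived"},
    "Archived": {"Staging"},
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving a model version from current to target is allowed."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("Staging", "Production"))   # True
print(can_transition("Archived", "Production"))  # False
```

Calling the guard before the MLflow client keeps an automated pipeline from, say, promoting an archived version straight to Production.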

from dataclasses import dataclass

@dataclass
class ModelVersion:
    name: str
    version: int
    stage: str
    accuracy: float
    f1: float
    created_by: str
    created_at: str

versions = [
    ModelVersion("conference-classifier", 1, "Archived", 0.8850, 0.8820, "data-team", "2025-01-01"),
    ModelVersion("conference-classifier", 2, "Archived", 0.9120, 0.9080, "data-team", "2025-01-10"),
    ModelVersion("conference-classifier", 3, "Staging", 0.9350, 0.9310, "ml-engineer", "2025-01-15"),
    ModelVersion("conference-classifier", 4, "Production", 0.9480, 0.9450, "ml-engineer", "2025-01-20"),
]

print("=== Model Registry ===")
for v in versions:
    print(f"  [v{v.version}] {v.name} — {v.stage}")
    print(f"    Accuracy: {v.accuracy:.4f} | F1: {v.f1:.4f}")
    print(f"    By: {v.created_by} | Date: {v.created_at}")
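A registry like the one above also makes promotion decisions mechanical: promote the best Staging version only when it beats Production by a margin. A self-contained sketch (the `min_gain` threshold and sample data are illustrative; the dataclass mirrors `ModelVersion` above):

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:  # trimmed copy of the dataclass defined above
    name: str
    version: int
    stage: str
    accuracy: float

def pick_promotion_candidate(versions, min_gain=0.005):
    """Return the Staging version to promote if it beats Production by min_gain."""
    prod = max((v for v in versions if v.stage == "Production"),
               key=lambda v: v.accuracy, default=None)
    staged = max((v for v in versions if v.stage == "Staging"),
                 key=lambda v: v.accuracy, default=None)
    if staged is None:
        return None
    if prod is None or staged.accuracy >= prod.accuracy + min_gain:
        return staged
    return None

versions = [
    ModelVersion("conference-classifier", 3, "Staging", 0.9350),
    ModelVersion("conference-classifier", 4, "Production", 0.9480),
    ModelVersion("conference-classifier", 5, "Staging", 0.9550),
]
candidate = pick_promotion_candidate(versions)
print(candidate.version if candidate else "no promotion")  # 5
```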

Model Serving

# === Model Serving & Deployment ===

# MLflow Model Serve
# mlflow models serve -m "models:/conference-classifier/Production" -p 5001
#
# # Test
# curl -X POST http://localhost:5001/invocations \
#   -H "Content-Type: application/json" \
#   -d '{"inputs": [[1.0, 2.0, 3.0, ...]]}'

# Docker Deployment
# mlflow models build-docker \
#   -m "models:/conference-classifier/Production" \
#   -n conference-model:latest
#
# docker run -p 5001:8080 conference-model:latest

# Kubernetes Deployment
# apiVersion: apps/v1
# kind: Deployment
# metadata:
#   name: conference-model
# spec:
#   replicas: 3
#   selector:
#     matchLabels:
#       app: conference-model
#   template:
#     metadata:
#       labels:
#         app: conference-model
#     spec:
#       containers:
#         - name: model
#           image: conference-model:latest
#           ports:
#             - containerPort: 8080
#           resources:
#             requests:
#               cpu: "500m"
#               memory: "1Gi"
#             limits:
#               cpu: "2"
#               memory: "4Gi"
#           readinessProbe:
#             httpGet:
#               path: /health
#               port: 8080

# A/B Testing with Istio
# apiVersion: networking.istio.io/v1beta1
# kind: VirtualService
# metadata:
#   name: conference-model
# spec:
#   hosts:
#     - conference-model
#   http:
#     - route:
#         - destination:
#             host: model-v3
#           weight: 80
#         - destination:
#             host: model-v4
#           weight: 20
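The weighted route above sends roughly 80% of requests to `model-v3` and 20% to `model-v4`. The behavior can be simulated in a few lines (a sketch only; Istio's actual load balancing is per-connection, not this simple sampler):

```python
import random

def route(weights, rng):
    """Pick a destination the way an Istio-style weighted route would."""
    hosts = list(weights)
    return rng.choices(hosts, weights=[weights[h] for h in hosts], k=1)[0]

rng = random.Random(42)  # fixed seed for a reproducible demo
weights = {"model-v3": 80, "model-v4": 20}
counts = {h: 0 for h in weights}
for _ in range(10_000):
    counts[route(weights, rng)] += 1
print(counts)  # roughly an 80/20 split
```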

@dataclass
class ServingConfig:
    method: str
    latency_ms: int
    throughput_rps: int
    scaling: str
    cost: str

configs = [
    ServingConfig("MLflow Serve", 50, 100, "Manual", "Low"),
    ServingConfig("Docker + K8s", 30, 500, "HPA Auto", "Medium"),
    ServingConfig("Seldon Core", 25, 1000, "Auto + A/B", "Medium"),
    ServingConfig("KServe", 20, 2000, "Serverless", "Pay-per-use"),
    ServingConfig("BentoML", 15, 1500, "Auto", "Medium"),
]

print("\n=== Serving Options ===")
for s in configs:
    print(f"  [{s.method}]")
    print(f"    Latency: {s.latency_ms}ms | Throughput: {s.throughput_rps} RPS")
    print(f"    Scaling: {s.scaling} | Cost: {s.cost}")
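Given latency and throughput targets, the table above can drive an automated choice. A hedged sketch of such a selector (the cost ranking and sample numbers are illustrative, taken from the figures above):

```python
def choose_serving(configs, max_latency_ms, min_rps):
    """Pick the cheapest serving option that meets latency/throughput targets."""
    cost_rank = {"Low": 0, "Medium": 1, "Pay-per-use": 2}
    ok = [c for c in configs
          if c["latency_ms"] <= max_latency_ms and c["throughput_rps"] >= min_rps]
    if not ok:
        return None
    return min(ok, key=lambda c: cost_rank.get(c["cost"], 99))["method"]

configs = [
    {"method": "MLflow Serve", "latency_ms": 50, "throughput_rps": 100, "cost": "Low"},
    {"method": "Docker + K8s", "latency_ms": 30, "throughput_rps": 500, "cost": "Medium"},
    {"method": "KServe", "latency_ms": 20, "throughput_rps": 2000, "cost": "Pay-per-use"},
]
print(choose_serving(configs, max_latency_ms=40, min_rps=400))  # Docker + K8s
```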

MLOps Pipeline

# === MLOps Pipeline ===

# CI/CD for ML
# .github/workflows/ml-pipeline.yml
# name: ML Pipeline
# on:
#   push:
#     paths: ['models/**', 'data/**']
# jobs:
#   train:
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       - run: pip install -r requirements.txt
#       - run: python train.py
#       - run: mlflow models build-docker -m runs:/$RUN_ID/model -n model:$SHA
#   test:
#     needs: train
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       - run: python test_model.py --model-uri models:/model/Staging
#   deploy:
#     needs: test
#     if: github.ref == 'refs/heads/main'
#     runs-on: ubuntu-latest
#     steps:
#       - run: kubectl set image deployment/model model=model:$SHA
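The `test` job above gates deployment on a model-quality script. A minimal sketch of what such a `test_model.py`-style gate might check (thresholds and metric names are illustrative):

```python
def passes_quality_gate(metrics, thresholds):
    """Return (ok, failures): fail if any metric falls below its threshold."""
    failures = {k: (metrics.get(k, 0.0), t)
                for k, t in thresholds.items() if metrics.get(k, 0.0) < t}
    return len(failures) == 0, failures

ok, failures = passes_quality_gate(
    {"accuracy": 0.948, "f1_score": 0.945},   # metrics logged by the train job
    {"accuracy": 0.93, "f1_score": 0.92},     # minimum bar for Staging -> deploy
)
print("PASS" if ok else f"FAIL: {failures}")  # PASS
```

In CI, a failing gate would exit non-zero so the `deploy` job never runs.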

mlops_metrics = {
    "Models in Registry": "12",
    "Production Models": "4",
    "Daily Predictions": "2.5M",
    "Model Retrain Frequency": "Weekly",
    "A/B Tests Running": "2",
    "Avg Inference Latency": "25ms",
    "Model Accuracy (Production)": "94.8%",
    "Data Drift Alerts (30d)": "3",
    "Pipeline Runs (30d)": "45",
}

print("MLOps Dashboard:")
for k, v in mlops_metrics.items():
    print(f"  {k}: {v}")
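The "Data Drift Alerts" metric above implies some drift detector behind the dashboard. One simple approach is a standardized mean-shift score per feature; a sketch with synthetic data (the 1.0 alert threshold is illustrative):

```python
import statistics

def drift_score(reference, current):
    """Standardized shift of the current window's mean vs. the reference window."""
    mu, sigma = statistics.mean(reference), statistics.stdev(reference)
    return abs(statistics.mean(current) - mu) / sigma if sigma else float("inf")

reference = [0.1 * i for i in range(100)]        # training-time feature values
stable    = [0.1 * i + 0.2 for i in range(100)]  # small shift: no alert
shifted   = [0.1 * i + 5.0 for i in range(100)]  # large shift: alert

for name, window in [("stable", stable), ("shifted", shifted)]:
    s = drift_score(reference, window)
    print(f"{name}: score={s:.2f} drift={'ALERT' if s > 1.0 else 'ok'}")
```

Production systems typically use richer tests (KS test, PSI), but the alert-on-threshold shape is the same.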

conference_agenda = [
    "09:00 Keynote: MLOps in 2026 — State of the Art",
    "10:00 Workshop: MLflow Model Registry Hands-on",
    "11:30 Talk: A/B Testing ML Models at Scale",
    "13:00 Workshop: Kubernetes Model Serving with KServe",
    "14:30 Talk: Data Drift Detection & Auto-retraining",
    "16:00 Panel: MLOps Best Practices from Industry Leaders",
    "17:00 Lightning Talks: Community Projects",
]

print(f"\n\nTech Conference 2026 — ML Track:")
for item in conference_agenda:
    print(f"  {item}")

Tips

What is a Model Registry?

A model registry is a central store for managing ML models: it tracks versions, metadata, and metrics, manages lifecycle stages such as Staging and Production, and supports rollback. Popular options include MLflow, Weights & Biases, Neptune, and Vertex AI.

How do you use the MLflow Model Registry?

Install with pip install mlflow, start the tracking server and UI, log models with log_model, register them as versions, and transition versions between stages. Registered models can then be served over the REST API with mlflow models serve, or packaged for Docker and Kubernetes.

Why does model versioning matter?

Tracking every version lets you roll back, compare performance across versions, run A/B tests, and keep an audit trail, which in turn supports reproducibility, compliance, and retrospective review.

How do you deploy a model to production?

Options range from MLflow serve to container-based serving with Docker, Kubernetes, Seldon Core, KServe, and BentoML. Roll out safely with A/B testing via Istio or shadow/canary deployments, and monitor with Prometheus and Grafana.

Summary

This article covered the Model Registry with MLflow: model versioning, A/B testing, serving on Docker and Kubernetes with Seldon and KServe, and an MLOps pipeline with CI/CD and data-drift monitoring in production, ahead of Tech Conference 2026.
