
Libvirt KVM Machine Learning Pipeline — Building ML Infrastructure on KVM

2025-11-17 · อ. บอม — SiamCafe.net · 1,362 words

What Is Libvirt KVM and How Is It Used in an ML Pipeline?

Libvirt is an open source API and management toolset for virtualization platforms, supporting KVM, QEMU, Xen, LXC, and others. KVM (Kernel-based Virtual Machine) is a hypervisor built into the Linux kernel that delivers near-native performance for VMs.
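
Before doing anything else, it is worth confirming that the libvirt Python bindings can talk to the local hypervisor. A minimal sketch, assuming python3-libvirt is installed and the current user can access qemu:///system (the file name check_libvirt.py is illustrative):

#!/usr/bin/env python3
# check_libvirt.py — minimal connectivity check against the local KVM hypervisor
import libvirt

# Connect to the system-level QEMU/KVM instance managed by libvirtd
conn = libvirt.open("qemu:///system")

# getVersion() returns the hypervisor version as a packed integer
print("Hypervisor version:", conn.getVersion())

# getInfo() returns [CPU model, memory (MB), CPUs, MHz, NUMA nodes, sockets, cores, threads]
model, mem_mb, cpus, mhz, *_ = conn.getInfo()
print(f"Host: {cpus} CPUs ({model}, {mhz} MHz), {mem_mb} MB RAM")

conn.close()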

Using KVM for an ML pipeline brings several advantages: GPU passthrough for training on a dedicated GPU, resource isolation between different training jobs, snapshot and restore for experiment reproducibility, live migration for workload balancing, and cost savings compared with cloud GPU instances.

An ML pipeline on KVM is made up of Data Preparation VMs for ETL and feature engineering, Training VMs with GPU passthrough for model training, Inference VMs for model serving, Experiment Tracking VMs for MLflow/W&B, and Storage VMs for datasets and model artifacts.
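
To make that layout concrete, here is a minimal resource-plan sketch; the VM names, sizes, and the VMSpec/ML_CLUSTER_PLAN structures are illustrative assumptions, not requirements of libvirt or KVM:

#!/usr/bin/env python3
# cluster_plan.py — illustrative VM layout for a KVM ML pipeline (assumed names and sizes)
from dataclasses import dataclass

@dataclass
class VMSpec:
    role: str            # data-prep | training | inference | tracking | storage
    vcpus: int
    memory_gb: int
    gpu_passthrough: bool = False

# Hypothetical plan; adjust to the host's actual CPU, RAM, and GPU inventory
ML_CLUSTER_PLAN = {
    "ml-data-prep-1":  VMSpec("data-prep", vcpus=8,  memory_gb=32),
    "ml-gpu-worker-1": VMSpec("training",  vcpus=16, memory_gb=64, gpu_passthrough=True),
    "ml-inference-1":  VMSpec("inference", vcpus=8,  memory_gb=16),
    "ml-mlflow-1":     VMSpec("tracking",  vcpus=4,  memory_gb=8),
    "ml-storage-1":    VMSpec("storage",   vcpus=4,  memory_gb=16),
}

if __name__ == "__main__":
    total_vcpus = sum(v.vcpus for v in ML_CLUSTER_PLAN.values())
    total_mem = sum(v.memory_gb for v in ML_CLUSTER_PLAN.values())
    print(f"Planned: {len(ML_CLUSTER_PLAN)} VMs, {total_vcpus} vCPUs, {total_mem} GB RAM")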

Compared with cloud, on-premise KVM offers much lower GPU cost (an NVIDIA A100 runs about $3/hr in the cloud versus roughly $0.50/hr amortized on-premise), data sovereignty with data kept on premises, no egress charges for large datasets, and custom hardware configurations.
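
A rough worked version of the per-hour comparison; the purchase price and amortization period are assumptions for illustration:

#!/usr/bin/env python3
# gpu_hourly_cost.py — rough per-hour cloud vs on-premise sketch (assumed figures)
CLOUD_RATE_PER_HOUR = 3.00        # approx. A100 cloud instance
GPU_PURCHASE_PRICE = 10_000       # assumed A100 PCIe price (USD)
AMORTIZATION_YEARS = 3
HOURS_PER_YEAR = 24 * 365

# Hardware amortization only; electricity, cooling, and admin time
# push the effective on-premise figure toward the ~$0.50/hr quoted above.
on_prem_per_hour = GPU_PURCHASE_PRICE / (AMORTIZATION_YEARS * HOURS_PER_YEAR)

print(f"Cloud:      ${CLOUD_RATE_PER_HOUR:.2f}/hr")
print(f"On-premise: ${on_prem_per_hour:.2f}/hr (hardware amortization only)")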

Installing KVM and Libvirt for ML Workloads

Setting up the KVM host for machine learning

# === Install KVM and Libvirt ===

# Check hardware virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# A value greater than 0 = supported

# Check IOMMU support for GPU passthrough
dmesg | grep -i iommu

# Ubuntu/Debian
sudo apt update
sudo apt install -y \
    qemu-kvm libvirt-daemon-system libvirt-clients \
    bridge-utils virtinst virt-manager \
    ovmf cpu-checker

# Verify the installation
sudo kvm-ok
sudo systemctl status libvirtd

# Add the user to the libvirt and kvm groups
sudo usermod -aG libvirt,kvm $USER

# === Enable IOMMU for GPU Passthrough ===
# /etc/default/grub
# Intel:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# AMD:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

sudo update-grub
# Reboot required

# List IOMMU groups
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=$(basename "$(dirname "$(dirname "$d")")")
    echo "IOMMU Group $n: $(lspci -nns "${d##*/}")"
done

# === Storage Pool Setup ===
# Create a storage pool for VM images
sudo virsh pool-define-as ml-pool dir --target /var/lib/libvirt/ml-images
sudo virsh pool-build ml-pool
sudo virsh pool-start ml-pool
sudo virsh pool-autostart ml-pool

# Create a storage pool for datasets (NVMe)
sudo mkdir -p /data/ml-datasets
sudo virsh pool-define-as datasets dir --target /data/ml-datasets
sudo virsh pool-build datasets
sudo virsh pool-start datasets
sudo virsh pool-autostart datasets

# === Network Setup ===
# Bridge network for VMs
cat > /tmp/ml-bridge.xml << 'XML'
<network>
  <name>ml-network</name>
  <forward mode='nat'/>
  <bridge name='br-ml' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.10' end='10.0.0.200'/>
    </dhcp>
  </ip>
</network>
XML
sudo virsh net-define /tmp/ml-bridge.xml
sudo virsh net-start ml-network
sudo virsh net-autostart ml-network

# === Create a Base VM Template ===
# Download Ubuntu cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
sudo cp jammy-server-cloudimg-amd64.img /var/lib/libvirt/ml-images/ubuntu-base.qcow2
sudo qemu-img resize /var/lib/libvirt/ml-images/ubuntu-base.qcow2 100G

# Create cloud-init config
cat > /tmp/cloud-init.cfg << 'YAML'
#cloud-config
hostname: ml-worker
users:
  - name: ml
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa YOUR_PUBLIC_KEY
packages:
  - python3-pip
  - python3-venv
  - docker.io
  - nvidia-driver-535
  - nvidia-cuda-toolkit
runcmd:
  - pip3 install torch torchvision mlflow scikit-learn pandas
YAML

# Install VM
sudo virt-install \
    --name ml-template \
    --ram 32768 \
    --vcpus 16 \
    --disk path=/var/lib/libvirt/ml-images/ubuntu-base.qcow2 \
    --os-variant ubuntu22.04 \
    --network network=ml-network \
    --cloud-init user-data=/tmp/cloud-init.cfg \
    --noautoconsole \
    --import

Creating GPU Passthrough VMs for Training

Setting up GPU passthrough for ML training

#!/bin/bash
# gpu_passthrough_setup.sh — Setup GPU Passthrough for ML VMs
set -euo pipefail

# === 1. Identify GPU ===
echo "=== GPU Devices ==="
lspci -nn | grep -i nvidia
# Example output:
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3090] [10de:2204]
# 01:00.1 Audio device [0403]: NVIDIA Corporation GA102 [10de:1aef]

GPU_PCI="01:00.0"
GPU_AUDIO_PCI="01:00.1"
GPU_VENDOR_ID="10de:2204"
GPU_AUDIO_ID="10de:1aef"

# === 2. Bind GPU to VFIO driver ===
# /etc/modprobe.d/vfio.conf
echo "options vfio-pci ids=$GPU_VENDOR_ID,$GPU_AUDIO_ID" | \
    sudo tee /etc/modprobe.d/vfio.conf

# Blacklist nvidia driver on host
# /etc/modprobe.d/blacklist-nvidia.conf
cat << 'CONF' | sudo tee /etc/modprobe.d/blacklist-nvidia.conf
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
CONF

# Update initramfs
sudo update-initramfs -u

# === 3. Create a VM with GPU Passthrough ===
# Clone template
sudo virt-clone \
    --original ml-template \
    --name ml-gpu-worker-1 \
    --file /var/lib/libvirt/ml-images/ml-gpu-worker-1.qcow2

# Attach GPU to VM
sudo virsh attach-device ml-gpu-worker-1 --file /dev/stdin --config << 'XML'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
XML

# Attach GPU audio function
sudo virsh attach-device ml-gpu-worker-1 --file /dev/stdin --config << 'XML'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>
XML

# Set VM resources
sudo virsh setmaxmem ml-gpu-worker-1 65536M --config
sudo virsh setmem ml-gpu-worker-1 65536M --config
sudo virsh setvcpus ml-gpu-worker-1 16 --config --maximum
sudo virsh setvcpus ml-gpu-worker-1 16 --config

# Enable hugepages for performance
sudo virsh edit ml-gpu-worker-1
# Add inside <domain>:
#   <memoryBacking>
#     <hugepages/>
#   </memoryBacking>

# Start VM
sudo virsh start ml-gpu-worker-1

# Verify the GPU inside the VM
ssh ml@ml-gpu-worker-1 "nvidia-smi"

# === 4. Multi-GPU Setup ===
# For distributed training across VMs:
# each VM gets its own dedicated GPU
for i in 1 2 3 4; do
    sudo virt-clone \
        --original ml-template \
        --name "ml-gpu-worker-$i" \
        --file "/var/lib/libvirt/ml-images/ml-gpu-worker-$i.qcow2"
    echo "Created ml-gpu-worker-$i"
done

echo "GPU passthrough setup complete"

Designing an ML Pipeline on KVM Infrastructure

Building an ML pipeline that runs on KVM VMs

#!/usr/bin/env python3
# ml_pipeline.py — ML Pipeline on KVM Infrastructure
import os
import subprocess
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml_pipeline")

class KVMMLPipeline:
    def __init__(self, ssh_key="~/.ssh/id_rsa"):
        self.ssh_key = os.path.expanduser(ssh_key)  # expand ~ so subprocess gets a real path
        self.workers = {}
    
    def register_worker(self, name, host, gpu_id=None, role="training"):
        self.workers[name] = {
            "host": host,
            "gpu_id": gpu_id,
            "role": role,
            "status": "idle",
        }
    
    def _ssh_cmd(self, host, command, stdin=None):
        # Long scripts are passed via stdin ("python3 -") to avoid shell-quoting issues
        result = subprocess.run(
            ["ssh", "-i", self.ssh_key, "-o", "StrictHostKeyChecking=no",
             f"ml@{host}", command],
            input=stdin, capture_output=True, text=True, timeout=3600,
        )
        return result.stdout.strip(), result.returncode
    
    def _scp(self, host, local_path, remote_path):
        subprocess.run(
            ["scp", "-i", self.ssh_key, "-o", "StrictHostKeyChecking=no",
             local_path, f"ml@{host}:{remote_path}"],
            check=True, timeout=600,
        )
    
    def prepare_data(self, worker_name, dataset_path, output_path):
        worker = self.workers[worker_name]
        worker["status"] = "preparing_data"
        
        logger.info(f"Preparing data on {worker_name}")
        
        script = f"""
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import joblib

df = pd.read_parquet('{dataset_path}')
print(f'Loaded {{len(df)}} rows')

X = df.drop('target', axis=1)
y = df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

import os
import numpy as np
os.makedirs('{output_path}', exist_ok=True)
np.save('{output_path}/X_train.npy', X_train_scaled)
np.save('{output_path}/X_test.npy', X_test_scaled)
np.save('{output_path}/y_train.npy', y_train.values)
np.save('{output_path}/y_test.npy', y_test.values)
joblib.dump(scaler, '{output_path}/scaler.pkl')
print('Data preparation complete')
"""
        
        output, rc = self._ssh_cmd(worker["host"], f"python3 -c \"{script}\"")
        worker["status"] = "idle"
        logger.info(f"Data preparation: {output}")
        return rc == 0
    
    def train_model(self, worker_name, config):
        worker = self.workers[worker_name]
        worker["status"] = "training"
        
        logger.info(f"Starting training on {worker_name} (GPU: {worker.get('gpu_id')})")
        
        train_script = f"""
import torch
import torch.nn as nn
import mlflow
import numpy as np
from datetime import datetime

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Training on: {{device}}')

X_train = np.load('{config["data_path"]}/X_train.npy')
y_train = np.load('{config["data_path"]}/y_train.npy')

X_train_t = torch.FloatTensor(X_train).to(device)
y_train_t = torch.FloatTensor(y_train).to(device)

model = nn.Sequential(
    nn.Linear(X_train.shape[1], 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(128, 1),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr={config.get('lr', 0.001)})
criterion = nn.MSELoss()

mlflow.set_tracking_uri('{config.get("mlflow_uri", "http://mlflow:5000")}')
mlflow.set_experiment('{config.get("experiment", "default")}')

with mlflow.start_run(run_name=f'{worker_name}_{{datetime.now().strftime("%Y%m%d_%H%M")}}'):
    mlflow.log_params({{
        'epochs': {config.get('epochs', 100)},
        'lr': {config.get('lr', 0.001)},
        'batch_size': {config.get('batch_size', 256)},
        'device': str(device),
        'worker': '{worker_name}',
    }})
    
    for epoch in range({config.get('epochs', 100)}):
        model.train()
        optimizer.zero_grad()
        output = model(X_train_t)
        loss = criterion(output.squeeze(), y_train_t)
        loss.backward()
        optimizer.step()
        
        if epoch % 10 == 0:
            mlflow.log_metric('train_loss', loss.item(), step=epoch)
            print(f'Epoch {{epoch}}: loss={{loss.item():.4f}}')
    
    torch.save(model.state_dict(), '/tmp/model.pt')
    mlflow.log_artifact('/tmp/model.pt')
    print('Training complete')
"""
        
        output, rc = self._ssh_cmd(worker["host"], f"python3 -c \"{train_script}\"")
        worker["status"] = "idle"
        logger.info(f"Training result: {output}")
        return rc == 0
    
    def run_pipeline(self, config):
        logger.info("Starting ML Pipeline")
        start_time = time.time()
        
        # Step 1: Prepare data
        data_worker = next(
            (k for k, v in self.workers.items() if v["role"] == "data"),
            list(self.workers.keys())[0],
        )
        self.prepare_data(data_worker, config["dataset"], config["data_path"])
        
        # Step 2: Train on GPU workers
        gpu_workers = [k for k, v in self.workers.items() if v["gpu_id"] is not None]
        for worker in gpu_workers:
            self.train_model(worker, config)
        
        elapsed = time.time() - start_time
        logger.info(f"Pipeline completed in {elapsed:.0f}s")

# pipeline = KVMMLPipeline()
# pipeline.register_worker("data-prep", "10.0.0.10", role="data")
# pipeline.register_worker("gpu-1", "10.0.0.11", gpu_id=0, role="training")
# pipeline.run_pipeline({"dataset": "/data/train.parquet", "data_path": "/tmp/ml", "epochs": 50})

Automation with the Libvirt Python API

Managing VMs automatically with the libvirt Python bindings

#!/usr/bin/env python3
# vm_manager.py — Libvirt VM Manager for ML
import libvirt
import xml.etree.ElementTree as ET
import json
import logging
import time
from datetime import datetime

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("vm_manager")

class MLVMManager:
    def __init__(self, uri="qemu:///system"):
        self.conn = libvirt.open(uri)
        if not self.conn:
            raise RuntimeError("Failed to connect to libvirt")
        logger.info(f"Connected to {uri}")
    
    def list_vms(self):
        vms = []
        for dom in self.conn.listAllDomains():
            state, _ = dom.state()
            state_names = {0: "nostate", 1: "running", 2: "blocked",
                          3: "paused", 4: "shutdown", 5: "shutoff", 6: "crashed"}
            
            info = dom.info()
            vms.append({
                "name": dom.name(),
                "uuid": dom.UUIDString(),
                "state": state_names.get(state, "unknown"),
                "memory_mb": info[1] // 1024,
                "vcpus": info[3],
                "cpu_time_ns": info[4],
            })
        return vms
    
    def create_ml_vm(self, name, memory_mb=32768, vcpus=16, disk_size_gb=100,
                     base_image="/var/lib/libvirt/ml-images/ubuntu-base.qcow2",
                     gpu_pci=None):
        import subprocess
        
        disk_path = f"/var/lib/libvirt/ml-images/{name}.qcow2"
        
        subprocess.run([
            "qemu-img", "create", "-f", "qcow2",
            "-b", base_image, "-F", "qcow2",
            disk_path, f"{disk_size_gb}G",
        ], check=True)
        
        xml_config = f"""
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>{memory_mb}</memory>
  <vcpu>{vcpus}</vcpu>
  <cpu mode='host-passthrough'/>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{disk_path}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='ml-network'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
"""
        
        dom = self.conn.defineXML(xml_config)
        
        if gpu_pci:
            bus, slot, func = self._parse_pci(gpu_pci)
            gpu_xml = f"""
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='{bus}' slot='{slot}' function='{func}'/>
  </source>
</hostdev>
"""
            dom.attachDeviceFlags(gpu_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

        dom.create()
        logger.info(f"Created and started VM: {name}")
        return dom.UUIDString()

    def _parse_pci(self, pci_addr):
        # "01:00.0" -> ("0x01", "0x00", "0x0")
        parts = pci_addr.replace(".", ":").split(":")
        return f"0x{parts[0]}", f"0x{parts[1]}", f"0x{parts[2]}"

    def snapshot_vm(self, name, snapshot_name=None):
        dom = self.conn.lookupByName(name)
        if not snapshot_name:
            snapshot_name = f"snap_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
        snap_xml = f"""
<domainsnapshot>
  <name>{snapshot_name}</name>
  <description>ML experiment snapshot</description>
</domainsnapshot>
"""
        dom.snapshotCreateXML(snap_xml)
        logger.info(f"Snapshot created: {name}/{snapshot_name}")
        return snapshot_name

    def restore_snapshot(self, name, snapshot_name):
        dom = self.conn.lookupByName(name)
        snap = dom.snapshotLookupByName(snapshot_name)
        dom.revertToSnapshot(snap)
        logger.info(f"Restored: {name} to {snapshot_name}")

    def get_vm_stats(self, name):
        dom = self.conn.lookupByName(name)
        stats = dom.getCPUStats(True)
        mem_stats = dom.memoryStats()
        return {
            "name": name,
            "cpu_time_ns": stats[0].get("cpu_time", 0),
            "memory_total_kb": mem_stats.get("actual", 0),
            "memory_used_kb": mem_stats.get("rss", 0),
            "memory_available_kb": mem_stats.get("unused", 0),
        }

    def destroy_vm(self, name):
        dom = self.conn.lookupByName(name)
        if dom.isActive():
            dom.destroy()
        dom.undefine()
        logger.info(f"Destroyed VM: {name}")

    def close(self):
        self.conn.close()

# manager = MLVMManager()
# vms = manager.list_vms()
# print(json.dumps(vms, indent=2))
# manager.create_ml_vm("ml-train-1", memory_mb=65536, vcpus=16, gpu_pci="01:00.0")
# manager.snapshot_vm("ml-train-1", "before_training")
# manager.close()

Monitoring and Resource Management

Monitoring for the KVM ML infrastructure

#!/usr/bin/env python3
# kvm_monitor.py — KVM ML Infrastructure Monitoring
import libvirt
import json
import time
import logging
from datetime import datetime
from pathlib import Path

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("kvm_monitor")

class KVMMonitor:
    def __init__(self, uri="qemu:///system"):
        self.conn = libvirt.open(uri)
        self.history = []
    
    def collect_metrics(self):
        metrics = {
            "timestamp": datetime.utcnow().isoformat(),
            "host": self._host_metrics(),
            "vms": [],
        }
        
        for dom in self.conn.listAllDomains():
            if dom.isActive():
                vm_metrics = self._vm_metrics(dom)
                metrics["vms"].append(vm_metrics)
        
        self.history.append(metrics)
        return metrics
    
    def _host_metrics(self):
        info = self.conn.getInfo()
        mem_stats = self.conn.getMemoryStats(libvirt.VIR_NODE_MEMORY_STATS_ALL_CELLS)
        
        return {
            "cpu_model": info[0],
            "memory_total_mb": info[1],
            "cpus": info[2],
            "cpu_mhz": info[3],
            "memory_free_kb": mem_stats.get("free", 0),
            "memory_cached_kb": mem_stats.get("cached", 0),
        }
    
    def _vm_metrics(self, dom):
        info = dom.info()
        
        try:
            cpu_stats = dom.getCPUStats(True)
            cpu_time = cpu_stats[0].get("cpu_time", 0)
        except Exception:
            cpu_time = 0
        
        try:
            mem_stats = dom.memoryStats()
        except Exception:
            mem_stats = {}
        
        try:
            block_stats = dom.blockStats("vda")
        except Exception:
            block_stats = (0, 0, 0, 0, 0)
        
        return {
            "name": dom.name(),
            "state": "running" if info[0] == 1 else "stopped",
            "vcpus": info[3],
            "memory_mb": info[1] // 1024,
            "cpu_time_ns": cpu_time,
            "memory_used_kb": mem_stats.get("rss", 0),
            "memory_available_kb": mem_stats.get("unused", 0),
            "disk_read_bytes": block_stats[1],
            "disk_write_bytes": block_stats[3],
        }
    
    def generate_dashboard(self, output_path="kvm_dashboard.html"):
        if not self.history:
            self.collect_metrics()
        
        latest = self.history[-1]
        host = latest["host"]
        vms = latest["vms"]
        
        total_vcpus = sum(v["vcpus"] for v in vms)
        total_mem = sum(v["memory_mb"] for v in vms)
        
        html = f"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>KVM ML Infrastructure Dashboard</title></head>
<body>
<h1>KVM ML Infrastructure Dashboard</h1>
<p>Updated: {latest['timestamp']}</p>
<ul>
  <li>{len(vms)} Active VMs</li>
  <li>{total_vcpus}/{host['cpus']} vCPUs Used/Total</li>
  <li>{total_mem//1024}G/{host['memory_total_mb']//1024}G Memory Used/Total</li>
  <li>{sum(1 for v in vms if 'gpu' in v['name'].lower())} GPU Workers</li>
</ul>
<table border="1" cellpadding="4">
  <tr><th>VM Name</th><th>State</th><th>vCPUs</th><th>Memory</th><th>Disk Read</th><th>Disk Write</th></tr>
"""
        for vm in vms:
            html += f"""  <tr>
    <td>{vm['name']}</td><td>{vm['state']}</td><td>{vm['vcpus']}</td>
    <td>{vm['memory_mb']}MB</td>
    <td>{vm['disk_read_bytes']//1024//1024}MB</td>
    <td>{vm['disk_write_bytes']//1024//1024}MB</td>
  </tr>
"""
        html += "</table></body></html>"

        Path(output_path).write_text(html)
        logger.info(f"Dashboard saved to {output_path}")

    def alert_check(self, thresholds=None):
        if not thresholds:
            thresholds = {
                "host_memory_percent": 90,
                "vm_memory_percent": 95,
                "host_cpu_percent": 90,
            }
        metrics = self.collect_metrics()
        alerts = []

        host = metrics["host"]
        mem_used_pct = (1 - host["memory_free_kb"] / (host["memory_total_mb"] * 1024)) * 100
        if mem_used_pct > thresholds["host_memory_percent"]:
            alerts.append({
                "severity": "critical",
                "message": f"Host memory usage: {mem_used_pct:.0f}%",
            })

        for vm in metrics["vms"]:
            if vm["memory_available_kb"] > 0:
                vm_mem_pct = (1 - vm["memory_available_kb"] / (vm["memory_mb"] * 1024)) * 100
                if vm_mem_pct > thresholds["vm_memory_percent"]:
                    alerts.append({
                        "severity": "warning",
                        "message": f"VM {vm['name']} memory: {vm_mem_pct:.0f}%",
                    })

        return alerts

# monitor = KVMMonitor()
# metrics = monitor.collect_metrics()
# print(json.dumps(metrics, indent=2))
# monitor.generate_dashboard()
# alerts = monitor.alert_check()
# for a in alerts:
#     print(f"[{a['severity']}] {a['message']}")

FAQ: Frequently Asked Questions

Q: How much performance loss does GPU passthrough incur?

A: GPU passthrough delivers near-native performance (95-99%) because the GPU is assigned directly to the VM through the IOMMU, without going through a virtualization layer. Most of the loss comes from memory copies between host and guest, roughly 1-3%. For GPU-bound ML training workloads the difference is negligible; it is effectively on par with bare metal.
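
One way to sanity-check the near-native claim is to time a large matrix multiply inside the passthrough VM and compare it with the same script on bare metal. A minimal sketch, assuming PyTorch with CUDA is installed in the guest (matrix size and iteration count are arbitrary):

#!/usr/bin/env python3
# gpu_bench.py — quick matmul timing to compare a passthrough VM against bare metal
import time
import torch

assert torch.cuda.is_available(), "CUDA GPU is not visible in this guest"
device = torch.device("cuda")

a = torch.randn(8192, 8192, device=device)
b = torch.randn(8192, 8192, device=device)

# Warm-up so context setup and kernel selection are excluded from the timing
for _ in range(3):
    torch.matmul(a, b)
torch.cuda.synchronize()

start = time.time()
for _ in range(20):
    torch.matmul(a, b)
torch.cuda.synchronize()
elapsed = time.time() - start

# 2 * 8192^3 FLOPs per matmul
tflops = 20 * 2 * 8192**3 / elapsed / 1e12
print(f"{elapsed:.2f}s for 20 matmuls, ~{tflops:.1f} TFLOPS")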

Q: Can Docker be used instead of KVM for ML?

A: Yes, and it is the recommended choice for many use cases. Docker has lower overhead than KVM, and the NVIDIA Container Toolkit supports GPUs in containers well. KVM is the better fit when you need full OS isolation, dedicated GPU passthrough (no GPU sharing), snapshot/restore for experiments, live migration across hosts, or multi-tenant environments that require strong isolation.

Q: How do I scale ML training across VMs?

A: Use distributed training frameworks such as PyTorch DDP (DistributedDataParallel) or Horovod. Each VM has its own GPU and communicates via NCCL over the network. 10GbE or 25GbE is recommended to reduce communication overhead; set NCCL_SOCKET_IFNAME to point at the high-speed network interface used for gradient synchronization.
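
A minimal per-VM DDP entry point sketch, assuming one passthrough GPU per VM and that MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are supplied by a launcher such as torchrun; the file name, interface name "ens3", and the model/data are placeholders:

#!/usr/bin/env python3
# ddp_worker.py — minimal multi-VM DistributedDataParallel sketch (one GPU per VM)
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Point NCCL at the high-speed NIC if the launcher has not already done so
    os.environ.setdefault("NCCL_SOCKET_IFNAME", "ens3")  # assumed interface name

    # MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE come from torchrun or your launcher
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(0)  # each VM sees exactly one passthrough GPU

    model = DDP(nn.Linear(128, 1).cuda(), device_ids=[0])   # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    for step in range(100):
        x = torch.randn(256, 128, device="cuda")            # placeholder batch
        y = torch.randn(256, 1, device="cuda")
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()   # gradients are all-reduced across VMs via NCCL
        optimizer.step()
        if dist.get_rank() == 0 and step % 10 == 0:
            print(f"step {step}: loss={loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

On each VM this could be launched with something like torchrun --nnodes=4 --nproc_per_node=1 --node_rank=<n> --master_addr=10.0.0.11 --master_port=29500 ddp_worker.py, where the addresses and node counts are illustrative.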

Q: When is on-premise ML infrastructure more cost-effective than cloud?

A: It pays off when GPU utilization stays above roughly 50% on a sustained basis (more than about 6 months). A cloud GPU instance at ~$3/hr for an A100 comes to ~$2,000/month, while buying an A100 PCIe at ~$10,000 amortized over 3 years is ~$280/month, about 7x cheaper, though you still have to add electricity, cooling, and admin costs. If GPU utilization is low (only occasional training), cloud is the better deal because you pay only for what you use.
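
A rough monthly version of that arithmetic; the prices and amortization period are assumptions taken from the figures above:

#!/usr/bin/env python3
# gpu_break_even.py — rough monthly cloud vs on-premise arithmetic (assumed figures)
CLOUD_RATE = 3.00                 # $/hr, approx. A100 cloud instance
HOURS_PER_MONTH = 730
GPU_PRICE = 10_000                # assumed A100 PCIe purchase price (USD)
AMORTIZATION_MONTHS = 36

cloud_full_time = CLOUD_RATE * HOURS_PER_MONTH   # running 24/7, ~ $2,000+/month
on_prem_hw = GPU_PRICE / AMORTIZATION_MONTHS     # ~ $280/month, hardware only

print(f"Cloud (24/7):       ${cloud_full_time:,.0f}/month")
print(f"On-prem (hardware): ${on_prem_hw:,.0f}/month")

# GPU-hours per month at which cloud spend matches the amortized hardware cost;
# electricity, cooling, and admin push the real break-even point higher, which is
# why sustained high utilization is what makes on-premise worthwhile.
break_even_hours = on_prem_hw / CLOUD_RATE
print(f"Break-even (hardware only): ~{break_even_hours:.0f} GPU-hours/month")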
