
SASE Framework GreenOps Sustainability: Reducing Carbon Footprint with SASE and GreenOps

2026-01-31 · Ajarn Bom (SiamCafe.net) · 1,311 words

What Is the SASE Framework?

SASE (Secure Access Service Edge) is a framework that converges network security and WAN capabilities into a single cloud-delivered service. Its components are SD-WAN for network optimization, SWG (Secure Web Gateway) to filter web traffic, CASB (Cloud Access Security Broker) to control cloud app access, ZTNA (Zero Trust Network Access) to verify every connection, and FWaaS (Firewall as a Service), a cloud-based firewall.

GreenOps is the practice of bringing sustainability into IT operations. The goal is to reduce the carbon footprint of infrastructure, covering compute, storage, and network, without sacrificing performance or security. The core loop is measure, reduce, offset: measure what you emit, reduce what you can, and offset only what remains.

Combining SASE with GreenOps gives you both security and sustainability. SASE retires on-premise hardware in favor of cloud services, eliminating the energy those appliances consume; SD-WAN optimizes traffic routing and cuts bandwidth waste; ZTNA removes power-hungry VPN concentrators; and GreenOps then optimizes the resource usage of the remaining compute and network.

GreenOps and Sustainability in IT

The GreenOps fundamentals for IT infrastructure:

# === GreenOps Fundamentals ===

# 1. Carbon Emission Categories (GHG Protocol)
# Scope 1: Direct emissions (on-premise generators, cooling)
# Scope 2: Indirect emissions (purchased electricity)
# Scope 3: Supply chain (cloud providers, hardware manufacturing)

# 2. Key Metrics
# PUE (Power Usage Effectiveness) = Total Facility Power / IT Equipment Power
#   Ideal: 1.0 (all power goes to IT)
#   Average data center: 1.58
#   Best-in-class: 1.1-1.2

# CUE (Carbon Usage Effectiveness) = CO2 emissions / IT Equipment Energy
# WUE (Water Usage Effectiveness) = Water usage / IT Equipment Energy

# 3. Cloud Provider Carbon Data
# AWS: Customer Carbon Footprint Tool
# GCP: Carbon Footprint dashboard (carbon-free energy score)
# Azure: Emissions Impact Dashboard

# 4. Green Cloud Regions
cat > green_regions.yaml << 'EOF'
cloud_regions:
  low_carbon:
    - provider: GCP
      region: us-central1
      carbon_free_energy_pct: 93
      
    - provider: GCP
      region: europe-north1 (Finland)
      carbon_free_energy_pct: 97
      
    - provider: AWS
      region: eu-north-1 (Stockholm)
      renewable_energy: high
      
    - provider: AWS
      region: ca-central-1 (Canada)
      renewable_energy: hydroelectric
      
    - provider: Azure
      region: Sweden Central
      carbon_free_energy_pct: 95

  high_carbon:
    - provider: AWS
      region: ap-southeast-1 (Singapore)
      note: "High grid carbon intensity"
      
    - provider: GCP
      region: asia-south1 (Mumbai)
      note: "Coal-heavy grid"
EOF

# 5. Sustainable Architecture Principles
# - Right-size instances (avoid over-provisioning)
# - Use spot/preemptible instances for batch workloads
# - Auto-scale down during off-hours
# - Choose green regions when latency allows
# - Use ARM-based instances (Graviton, Ampere) ~40% more efficient
# - Optimize data transfer (compress, cache, CDN)
# - Delete unused resources (zombie instances, orphan volumes)

echo "GreenOps fundamentals defined"
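The PUE and CUE formulas above can be checked with a few lines of Python (the sample numbers are illustrative, not measurements):

```python
# pue_cue.py - illustrative PUE/CUE calculations from the formulas above
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = Total Facility Power / IT Equipment Power (ideal: 1.0)."""
    return total_facility_kw / it_equipment_kw

def cue(co2_kg: float, it_energy_kwh: float) -> float:
    """CUE = CO2 emissions (kg) / IT Equipment Energy (kWh)."""
    return co2_kg / it_energy_kwh

# Example: 790 kW total facility draw for 500 kW of IT load
print(round(pue(790, 500), 2))   # 1.58 - the average data center
print(round(cue(2400, 5000), 2)) # 0.48 kg CO2/kWh
```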

Implementing SASE with GreenOps

Designing a SASE architecture that is also sustainable:

#!/usr/bin/env python3
# sase_greenops.py - SASE + GreenOps Implementation
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("sase")

class SASEGreenOps:
    def __init__(self):
        self.components = {}
    
    def sase_architecture(self):
        return {
            "edge_locations": {
                "purpose": "Bring security close to users, reduce backhaul traffic",
                "green_benefit": "Less network hops = less energy per request",
                "components": ["SWG", "CASB", "ZTNA", "FWaaS"],
            },
            "sd_wan": {
                "purpose": "Intelligent traffic routing across WAN",
                "green_benefit": "Optimize paths, reduce redundant traffic 30-50%",
                "features": ["Application-aware routing", "WAN optimization", "Deduplication"],
            },
            "ztna": {
                "purpose": "Zero Trust access without VPN",
                "green_benefit": "Eliminate VPN concentrators (each uses 200-500W)",
                "savings_per_appliance_kwh_year": 4380,
            },
            "cloud_native_security": {
                "purpose": "Replace on-prem security appliances",
                "green_benefit": "Shared infrastructure = higher utilization = less waste",
                "replaced_appliances": ["Firewall", "IDS/IPS", "Web Proxy", "DLP"],
            },
        }
    
    def carbon_reduction_estimate(self, on_prem_devices, users):
        """Estimate carbon reduction from SASE migration"""
        # Average power per security appliance
        avg_power_watts = 350
        on_prem_kwh_year = on_prem_devices * avg_power_watts * 8760 / 1000
        
        # PUE overhead (cooling, UPS, etc.)
        pue = 1.6
        total_on_prem_kwh = on_prem_kwh_year * pue
        
        # Carbon intensity (global average: 0.475 kg CO2/kWh)
        carbon_intensity = 0.475
        on_prem_co2_kg = total_on_prem_kwh * carbon_intensity
        
        # SASE cloud equivalent (shared infrastructure, green regions)
        cloud_kwh_per_user_year = 15  # Estimated
        cloud_total_kwh = users * cloud_kwh_per_user_year
        cloud_pue = 1.1  # Hyperscaler PUE
        cloud_renewable_pct = 0.6  # Average renewable %
        
        cloud_co2_kg = cloud_total_kwh * cloud_pue * carbon_intensity * (1 - cloud_renewable_pct)
        
        reduction = on_prem_co2_kg - cloud_co2_kg
        reduction_pct = (reduction / on_prem_co2_kg) * 100 if on_prem_co2_kg > 0 else 0
        
        return {
            "on_premise": {
                "devices": on_prem_devices,
                "kwh_per_year": round(total_on_prem_kwh),
                "co2_kg_per_year": round(on_prem_co2_kg),
            },
            "sase_cloud": {
                "users": users,
                "kwh_per_year": round(cloud_total_kwh * cloud_pue),
                "co2_kg_per_year": round(cloud_co2_kg),
            },
            "savings": {
                "kwh_saved": round(total_on_prem_kwh - cloud_total_kwh * cloud_pue),
                "co2_saved_kg": round(reduction),
                "reduction_pct": round(reduction_pct, 1),
                "trees_equivalent": round(reduction / 22),  # 1 tree absorbs ~22kg CO2/year
            },
        }

sase = SASEGreenOps()
arch = sase.sase_architecture()
print("SASE Components:")
for comp, info in arch.items():
    print(f"  {comp}: {info['green_benefit']}")

estimate = sase.carbon_reduction_estimate(on_prem_devices=20, users=500)
print(f"\nCarbon Reduction Estimate:")
print(f"  On-prem CO2: {estimate['on_premise']['co2_kg_per_year']} kg/year")
print(f"  SASE Cloud CO2: {estimate['sase_cloud']['co2_kg_per_year']} kg/year")
print(f"  Savings: {estimate['savings']['co2_saved_kg']} kg ({estimate['savings']['reduction_pct']}%)")
print(f"  Trees equivalent: {estimate['savings']['trees_equivalent']} trees")

Calculating Carbon Footprint with Python

A simple tool for estimating your carbon footprint:

#!/usr/bin/env python3
# carbon_calculator.py - IT Carbon Footprint Calculator
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("carbon")

class CarbonFootprintCalculator:
    # Carbon intensity by region (kg CO2 per kWh)
    GRID_CARBON_INTENSITY = {
        "thailand": 0.493,
        "singapore": 0.408,
        "japan": 0.457,
        "usa_average": 0.386,
        "eu_average": 0.231,
        "sweden": 0.013,
        "france": 0.052,
        "india": 0.708,
        "australia": 0.656,
    }
    
    def __init__(self, region="thailand"):
        self.region = region
        self.carbon_intensity = self.GRID_CARBON_INTENSITY.get(region, 0.475)
    
    def compute_footprint(self, instances):
        """Calculate carbon footprint of compute instances"""
        total_kwh = 0
        total_co2 = 0
        details = []
        
        for inst in instances:
            # Estimate power: TDP * utilization * hours
            power_w = inst.get("tdp_watts", 200) * inst.get("avg_utilization", 0.5)
            hours = inst.get("hours_per_month", 730)
            kwh = power_w * hours / 1000
            co2 = kwh * self.carbon_intensity
            
            total_kwh += kwh
            total_co2 += co2
            details.append({
                "name": inst.get("name", "unknown"),
                "kwh_per_month": round(kwh, 1),
                "co2_kg_per_month": round(co2, 1),
            })
        
        return {
            "region": self.region,
            "carbon_intensity": self.carbon_intensity,
            "total_kwh_per_month": round(total_kwh, 1),
            "total_co2_kg_per_month": round(total_co2, 1),
            "total_co2_kg_per_year": round(total_co2 * 12, 1),
            "details": details,
        }
    
    def network_footprint(self, data_transfer_gb_month):
        """Estimate carbon from network data transfer"""
        # ~0.06 kWh per GB transferred (network equipment)
        kwh_per_gb = 0.06
        total_kwh = data_transfer_gb_month * kwh_per_gb
        co2 = total_kwh * self.carbon_intensity
        
        return {
            "data_transfer_gb": data_transfer_gb_month,
            "kwh_per_month": round(total_kwh, 1),
            "co2_kg_per_month": round(co2, 2),
        }
    
    def optimization_recommendations(self, footprint):
        recs = []
        total = footprint["total_co2_kg_per_year"]
        
        recs.append({
            "action": "Right-size instances (reduce 20% over-provisioning)",
            "estimated_savings_pct": 20,
            "co2_saved_kg": round(total * 0.20),
        })
        recs.append({
            "action": "Use ARM instances (Graviton/Ampere) ~40% more efficient",
            "estimated_savings_pct": 25,
            "co2_saved_kg": round(total * 0.25),
        })
        recs.append({
            "action": "Auto-scale off-hours (shut down dev/staging nights+weekends)",
            "estimated_savings_pct": 30,
            "co2_saved_kg": round(total * 0.30),
        })
        recs.append({
            "action": "Move to green region (Sweden: 97% less carbon than Thailand)",
            "estimated_savings_pct": 97,
            "co2_saved_kg": round(total * 0.97),
        })
        
        return recs

calc = CarbonFootprintCalculator("thailand")
instances = [
    {"name": "web-server-1", "tdp_watts": 150, "avg_utilization": 0.4, "hours_per_month": 730},
    {"name": "web-server-2", "tdp_watts": 150, "avg_utilization": 0.3, "hours_per_month": 730},
    {"name": "db-server", "tdp_watts": 300, "avg_utilization": 0.6, "hours_per_month": 730},
    {"name": "dev-server", "tdp_watts": 200, "avg_utilization": 0.2, "hours_per_month": 500},
]

footprint = calc.compute_footprint(instances)
print(f"Total CO2: {footprint['total_co2_kg_per_year']} kg/year")

recs = calc.optimization_recommendations(footprint)
for r in recs:
    print(f"  {r['action']}: save {r['co2_saved_kg']} kg CO2/year")
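The `network_footprint` method above uses a flat ~0.06 kWh/GB coefficient but is never called in the script; a standalone sketch of the same arithmetic, with an assumed 5,000 GB/month of transfer:

```python
# network_footprint_demo.py - standalone version of the network calculation above
KWH_PER_GB = 0.06         # network equipment energy per GB transferred
CARBON_INTENSITY = 0.493  # Thailand grid, kg CO2/kWh (from the table above)

def network_footprint(data_transfer_gb_month: float) -> dict:
    """Monthly energy and CO2 attributable to data transfer."""
    kwh = data_transfer_gb_month * KWH_PER_GB
    return {
        "kwh_per_month": round(kwh, 1),
        "co2_kg_per_month": round(kwh * CARBON_INTENSITY, 2),
    }

print(network_footprint(5000))  # 5 TB/month of transfer
```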

Cloud Cost Optimization

Cut cloud cost and carbon footprint at the same time:

# === Cloud Cost & Carbon Optimization ===

# 1. AWS Cost Explorer with Carbon
cat > cost_optimization.sh << 'BASH'
#!/bin/bash
# Find unused resources

# Unattached EBS volumes
echo "=== Unattached EBS Volumes ==="
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].[VolumeId,Size,CreateTime]' \
  --output table

# Unused Elastic IPs
echo "=== Unused Elastic IPs ==="
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' \
  --output table

# Old snapshots (> 90 days)
echo "=== Old Snapshots ==="
cutoff=$(date -u -d '90 days ago' +%Y-%m-%d)
aws ec2 describe-snapshots \
  --owner-ids self \
  --query "Snapshots[?StartTime<'${cutoff}'].[SnapshotId,VolumeSize,StartTime]" \
  --output table

# Underutilized instances (< 10% CPU average)
echo "=== Underutilized Instances ==="
for instance_id in $(aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --output text); do
  avg_cpu=$(aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=$instance_id \
    --start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
    --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
    --period 86400 \
    --statistics Average \
    --query 'Datapoints[0].Average' \
    --output text 2>/dev/null)
  
  if [ "$avg_cpu" != "None" ] && [ "$(echo "$avg_cpu < 10" | bc)" -eq 1 ]; then
    echo "  $instance_id: avg CPU $avg_cpu%"
  fi
done
BASH

chmod +x cost_optimization.sh
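The unattached volumes the script finds are not free carbon-wise. A rough sketch of their footprint, assuming the ~1.2 Wh per TB-hour SSD coefficient from the open-source Cloud Carbon Footprint methodology and the global-average grid intensity used earlier:

```python
# ebs_zombie_carbon.py - rough CO2 estimate for unattached EBS volumes
# The SSD coefficient is an assumption taken from the Cloud Carbon
# Footprint methodology; treat the result as an order-of-magnitude figure.
SSD_WH_PER_TB_HOUR = 1.2
HOURS_PER_MONTH = 730
CARBON_INTENSITY = 0.475  # global average, kg CO2/kWh

def zombie_volume_co2(total_gb: float) -> float:
    """Monthly kg CO2 attributable to unattached volume capacity."""
    tb = total_gb / 1000
    kwh = tb * SSD_WH_PER_TB_HOUR * HOURS_PER_MONTH / 1000
    return round(kwh * CARBON_INTENSITY, 2)

print(zombie_volume_co2(500))  # e.g. 500 GB of orphaned volumes
```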

# 2. Kubernetes Resource Optimization
cat > k8s-optimization.yaml << 'EOF'
# VPA (Vertical Pod Autoscaler) - right-size pods
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 50m
          memory: 64Mi
        maxAllowed:
          cpu: 2000m
          memory: 2Gi
---
# Kube-green - shut down non-prod during off-hours
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: non-prod-sleep
  namespace: staging
spec:
  weekdays: "1-5"
  sleepAt: "20:00"
  wakeUpAt: "08:00"
  timeZone: "Asia/Bangkok"
  excludeRef:
    - apiVersion: apps/v1
      kind: Deployment
      name: critical-service
EOF

echo "Optimization configured"
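As a sanity check on the SleepInfo schedule above: with sleepAt/wakeUpAt firing only on weekdays 1-5, staging sleeps Mon-Thu nights plus the whole window from Friday 20:00 to Monday 08:00. A quick estimate of the saved share of the week:

```python
# sleep_fraction.py - how much of the week the SleepInfo above keeps staging down
# Mon-Thu nights: 20:00-08:00 = 12 h each; Fri 20:00 through Mon 08:00 = 60 h,
# since neither sleepAt nor wakeUpAt fires on weekends with weekdays "1-5".
asleep_hours = 4 * 12 + 60
week_hours = 7 * 24
print(f"{asleep_hours}/{week_hours} h asleep = {asleep_hours / week_hours:.0%}")
# 108/168 h asleep = 64%
```

Roughly 64% of staging's compute hours disappear, which is where the 30% overall estimate in the recommendations comes from once you weight by how much of the fleet is non-production.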

Monitoring and Reporting

A dashboard for sustainability metrics:

#!/usr/bin/env python3
# sustainability_report.py - Sustainability Dashboard
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("report")

class SustainabilityReport:
    def __init__(self):
        self.data = {}
    
    def monthly_report(self):
        return {
            "period": "2024-06",
            "summary": {
                "total_co2_kg": 2450,
                "previous_month_co2_kg": 2800,
                "change_pct": -12.5,
                "trend": "improving",
                "target_co2_kg": 2000,
                "on_track": False,
            },
            "by_category": {
                "compute": {"co2_kg": 1500, "pct": 61.2, "change": -15},
                "storage": {"co2_kg": 400, "pct": 16.3, "change": -5},
                "network": {"co2_kg": 350, "pct": 14.3, "change": -10},
                "other": {"co2_kg": 200, "pct": 8.2, "change": -20},
            },
            "by_team": {
                "platform": {"co2_kg": 800, "instances": 25, "efficiency_score": 85},
                "backend": {"co2_kg": 650, "instances": 20, "efficiency_score": 78},
                "data": {"co2_kg": 550, "instances": 15, "efficiency_score": 72},
                "frontend": {"co2_kg": 250, "instances": 10, "efficiency_score": 90},
                "ml": {"co2_kg": 200, "instances": 5, "efficiency_score": 65},
            },
            "achievements": [
                "Reduced compute CO2 by 15% through right-sizing",
                "Migrated 3 workloads to Graviton (ARM) instances",
                "Implemented kube-green for staging environments",
                "Deleted 50 orphaned EBS volumes (500GB saved)",
            ],
            "next_actions": [
                "Migrate batch processing to spot instances",
                "Enable auto-scaling for all non-critical services",
                "Move analytics workloads to Sweden region",
                "Implement data lifecycle policies (archive cold data)",
            ],
        }

report = SustainabilityReport()
monthly = report.monthly_report()
print(f"Monthly CO2: {monthly['summary']['total_co2_kg']} kg ({monthly['summary']['change_pct']}%)")
print(f"\nBy Category:")
for cat, data in monthly["by_category"].items():
    print(f"  {cat}: {data['co2_kg']} kg ({data['change']}%)")
print(f"\nAchievements: {len(monthly['achievements'])} items")

FAQ: Frequently Asked Questions

Q: How are GreenOps and FinOps different?

A: FinOps focuses on optimizing cloud cost; GreenOps focuses on optimizing carbon and environmental impact. Most of the time they pull in the same direction, since using fewer resources lowers both cost and carbon, but they can conflict. Reserved instances are cheaper than spot, yet spot instances consume spare capacity (the greener option); the cheapest region is not always the greenest one. In practice you balance FinOps and GreenOps together; many organizations start with FinOps for cost and then add carbon as a second dimension.

Q: How does SASE reduce carbon footprint?

A: SASE cuts carbon in three ways. First, it replaces on-prem appliances: firewalls, proxies, and VPN concentrators get retired, each drawing 200-500 W plus cooling overhead (PUE around 1.6). Second, it optimizes network traffic: SD-WAN cuts redundant traffic by 30-50%, and less bandwidth means less energy spent in network equipment. Third, shared infrastructure: cloud providers run hardware at far higher utilization than on-prem (60-80% vs 15-20%), which is inherently more efficient. As a rough example, a 500-user enterprise that migrates its on-prem security stack to SASE can cut that stack's carbon by 60-80%.

Q: How do you measure the carbon footprint of cloud workloads?

A: There are three approaches. Cloud provider tools: every major provider has a dashboard, such as the AWS Customer Carbon Footprint Tool, GCP Carbon Footprint, and the Azure Emissions Impact Dashboard, reporting per account or project. Third-party tools: options such as Cloud Carbon Footprint (open source), Climatiq API, and CO2.js work across clouds. Manual calculation: use the formula CO2 = kWh x PUE x Grid Carbon Intensity x (1 - Renewable%); you need the instance type's power consumption, the region's carbon intensity, the provider's PUE, and its renewable energy percentage. Start with the provider tools, then add third-party tools for multi-cloud coverage.
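The manual formula in the answer above is simple enough to script; the example figures (PUE 1.1, 0.475 kg CO2/kWh, 60% renewable) reuse numbers from earlier in this article:

```python
# manual_cloud_co2.py - the manual estimation formula from the answer above
def cloud_co2_kg(kwh: float, pue: float, grid_intensity: float,
                 renewable_pct: float) -> float:
    """CO2 = kWh x PUE x Grid Carbon Intensity x (1 - Renewable%)."""
    return kwh * pue * grid_intensity * (1 - renewable_pct)

# Example: 1,000 kWh of IT energy in a PUE-1.1 hyperscaler region,
# grid at 0.475 kg CO2/kWh, 60% renewable
print(round(cloud_co2_kg(1000, 1.1, 0.475, 0.6), 1))  # 209.0
```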

Q: Are ARM instances (Graviton) really better than x86?

A: ?????????????????? energy efficiency ???????????????????????????????????? AWS Graviton3 ?????????????????????????????????????????????????????? x86 ????????? 60% ?????????????????? performance ??????????????????????????? benchmark ?????????????????? Graviton ????????? price-performance ?????????????????? 40% ????????????????????????????????? 20% ???????????????????????? ??????????????????????????? software ?????????????????? ARM ???????????????????????? compatibility ???????????????????????????????????? (Go, Python, Node.js, Java) ?????????????????????????????????????????? Docker images ???????????? build ?????????????????? ARM (multi-arch) legacy apps ????????? compile ?????????????????? x86 ???????????? recompile ?????????????????????????????? migrate workloads ?????????????????? ARM ???????????? ??????????????????????????? migrate legacy ???????????????
