Newman MLOps Testing
Testing ML APIs with Postman collections and the Newman CLI runner: smoke, contract, and health checks in CI/CD, with deployment gating and rollback for production models.
| Test Type | What to Test | When | Newman Command |
|---|---|---|---|
| Smoke Test | Health check, basic predict | After every deploy | newman run smoke.json |
| Contract Test | Response schema, field types | After API change | newman run contract.json |
| Integration Test | Full prediction pipeline | Nightly | newman run integration.json |
| Performance Test | Latency, throughput | Weekly | newman run perf.json -n 100 |
| Security Test | Auth, rate limiting | After auth change | newman run security.json |
Collection Design
# === Postman Collection for ML API ===
# Collection structure:
# ML-API-Tests/
# ├── Health/
# │   └── GET /health
# ├── Predict/
# │   ├── POST /predict (valid input)
# │   ├── POST /predict (invalid input)
# │   └── POST /predict (edge cases)
# ├── Batch/
# │   └── POST /batch-predict
# ├── Model Info/
# │   └── GET /model/info
# └── Auth/
#     ├── POST /predict (no token)
#     └── POST /predict (invalid token)
# Test Script Examples:
# // Health Check
# pm.test("Status 200", () => {
#   pm.response.to.have.status(200);
# });
# pm.test("Response time < 500ms", () => {
#   pm.expect(pm.response.responseTime).to.be.below(500);
# });
# pm.test("Status is healthy", () => {
#   var json = pm.response.json();
#   pm.expect(json.status).to.eql("healthy");
#   pm.expect(json.model_loaded).to.be.true;
# });
#
# // Predict Endpoint
# pm.test("Prediction response valid", () => {
#   var json = pm.response.json();
#   pm.expect(json).to.have.property("prediction");
#   pm.expect(json).to.have.property("confidence");
#   pm.expect(json.confidence).to.be.within(0, 1);
#   pm.expect(json).to.have.property("model_version");
# });
# pm.test("Latency within SLA", () => {
#   pm.expect(pm.response.responseTime).to.be.below(1000);
# });
#
# // Schema Validation
# var schema = {
#   type: "object",
#   required: ["prediction", "confidence", "model_version"],
#   properties: {
#     prediction: { type: "number" },
#     confidence: { type: "number", minimum: 0, maximum: 1 },
#     model_version: { type: "string" },
#     latency_ms: { type: "number" }
#   }
# };
# pm.test("Response matches schema", () => {
#   pm.response.to.have.jsonSchema(schema);
# });
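The same contract checks can be mirrored in Python, for instance in a pytest suite that hits the API directly. This is a minimal sketch covering only the required/type/minimum/maximum keywords used above; a real suite would use a full JSON Schema validator library instead.

```python
# Minimal schema check mirroring the Postman jsonSchema assertion above.
# Only required/type/minimum/maximum are implemented -- a sketch, not a
# full JSON Schema validator.
SCHEMA = {
    "type": "object",
    "required": ["prediction", "confidence", "model_version"],
    "properties": {
        "prediction": {"type": "number"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        "model_version": {"type": "string"},
        "latency_ms": {"type": "number"},
    },
}

TYPES = {"object": dict, "number": (int, float), "string": str}

def validate(payload: dict, schema: dict = SCHEMA) -> list[str]:
    """Return a list of violations (empty list means the response is valid)."""
    errors = []
    for field in schema.get("required", []):
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, rules in schema.get("properties", {}).items():
        if field not in payload:
            continue
        value = payload[field]
        # bool is a subclass of int in Python, so reject it explicitly
        if not isinstance(value, TYPES[rules["type"]]) or isinstance(value, bool):
            errors.append(f"{field}: expected {rules['type']}")
            continue
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{field}: below minimum {rules['minimum']}")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{field}: above maximum {rules['maximum']}")
    return errors

print(validate({"prediction": 0.7, "confidence": 0.93, "model_version": "v2.1.0"}))
# []
print(validate({"prediction": 0.7, "confidence": 1.4}))
```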
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    method: str
    endpoint: str
    body: str
    expected_status: int
    assertions: str

tests = [
    TestCase("Health Check", "GET", "/health", "None",
             200, "status=healthy, model_loaded=true"),
    TestCase("Valid Prediction", "POST", "/predict",
             '{"features": [1.2, 3.4, 5.6]}',
             200, "prediction exists, confidence 0-1"),
    TestCase("Invalid Input", "POST", "/predict",
             '{"features": "not_array"}',
             422, "error message, validation details"),
    TestCase("Missing Field", "POST", "/predict",
             '{}',
             422, "error: features required"),
    TestCase("Model Info", "GET", "/model/info", "None",
             200, "version, name, metrics"),
    TestCase("No Auth Token", "POST", "/predict",
             '{"features": [1.2]}',
             401, "error: unauthorized"),
    TestCase("Batch Predict", "POST", "/batch-predict",
             '{"instances": [[1.2,3.4],[5.6,7.8]]}',
             200, "predictions array, same length as input"),
]

print("=== Test Cases ===")
for t in tests:
    print(f"  [{t.name}] {t.method} {t.endpoint}")
    print(f"    Body: {t.body}")
    print(f"    Expected: {t.expected_status} | Assert: {t.assertions}")
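The test-case table above can be turned into a collection that Newman actually runs. A sketch against the Postman Collection v2.1 format, trimmed to requests only; the pm.test scripts would still need attaching under each item's `event` key, and the two demo cases stand in for the full list:

```python
# Sketch: generate a minimal Postman v2.1 collection from TestCase rows.
# Requests only -- test scripts would go under each item's "event" key.
import json
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    method: str
    endpoint: str
    body: str
    expected_status: int
    assertions: str

def to_collection(tests, base_url="{{base_url}}"):
    items = []
    for t in tests:
        request = {"method": t.method, "url": base_url + t.endpoint}
        if t.body != "None":  # the table uses the string "None" for GET bodies
            request["body"] = {"mode": "raw", "raw": t.body}
            request["header"] = [{"key": "Content-Type", "value": "application/json"}]
        items.append({"name": t.name, "request": request})
    return {
        "info": {"name": "ML-API-Tests",
                 "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"},
        "item": items,
    }

demo = [
    TestCase("Health Check", "GET", "/health", "None", 200, "status=healthy"),
    TestCase("Valid Prediction", "POST", "/predict",
             '{"features": [1.2, 3.4, 5.6]}', 200, "prediction exists"),
]
print(json.dumps(to_collection(demo), indent=2))
```

Writing this dict to `collection.json` gives Newman something runnable: `newman run collection.json -e prod.json`.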
CI/CD Integration
# === GitHub Actions + Newman ===
# .github/workflows/ml-api-test.yml
# name: ML API Tests
# on:
#   push:
#     branches: [main]
#   schedule:
#     - cron: '*/5 * * * *'   # Health check every 5 min
#
# jobs:
#   smoke-test:
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       - uses: actions/setup-node@v4
#         with: { node-version: '20' }
#       - run: npm install -g newman newman-reporter-htmlextra
#       - name: Run Smoke Tests
#         run: |
#           newman run tests/smoke.json \
#             -e tests/env/prod.json \
#             --reporters cli,junit,htmlextra \
#             --reporter-junit-export results/junit.xml \
#             --reporter-htmlextra-export results/report.html
#       - uses: actions/upload-artifact@v4
#         if: always()
#         with:
#           name: test-results
#           path: results/
#       - name: Alert on Failure
#         if: failure()
#         run: |
#           curl -X POST $SLACK_WEBHOOK \
#             -d '{"text":"ML API Smoke Test FAILED!"}'
# Environment file (prod.json)
# {
#   "values": [
#     {"key": "base_url", "value": "https://ml-api.example.com"},
#     {"key": "api_key", "value": "{{secrets.ML_API_KEY}}"},
#     {"key": "model_version", "value": "v2.1.0"},
#     {"key": "sla_latency_ms", "value": "1000"}
#   ]
# }
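To keep dev, staging, and prod in sync, the environment files can be generated from one template. A sketch with placeholder base URLs; the API key is deliberately omitted because it is better injected from CI secrets at run time than committed to Git:

```python
# Sketch: generate one Newman environment file per stage.
# Base URLs are placeholders; secrets are injected at run time, not stored.
import json

STAGES = {
    "dev": "http://localhost:8000",
    "staging": "https://ml-api-staging.example.com",
    "prod": "https://ml-api.example.com",
}

def environment_file(stage: str, model_version: str = "v2.1.0") -> dict:
    """Build a Postman environment dict for one stage."""
    return {
        "name": stage,
        "values": [
            {"key": "base_url", "value": STAGES[stage], "enabled": True},
            {"key": "model_version", "value": model_version, "enabled": True},
            {"key": "sla_latency_ms", "value": "1000", "enabled": True},
        ],
    }

for stage in STAGES:
    print(json.dumps(environment_file(stage)))
    # in practice: write each dict to tests/env/{stage}.json
```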
# Newman CLI Examples
# newman run collection.json # Basic run
# newman run collection.json -e prod.json # With environment
# newman run collection.json -n 10 # Run 10 iterations
# newman run collection.json --delay-request 100 # 100ms delay between
# newman run collection.json --bail # Stop on first failure
# newman run collection.json --timeout-request 5000 # 5s timeout
@dataclass
class CIStep:
    stage: str
    trigger: str
    newman_cmd: str
    on_fail: str

pipeline = [
    CIStep("Pre-deploy Health", "Before deploy",
           "newman run smoke.json -e staging.json --bail",
           "Abort deployment"),
    CIStep("Post-deploy Smoke", "After deploy",
           "newman run smoke.json -e prod.json",
           "Rollback to previous version"),
    CIStep("Contract Test", "After API change",
           "newman run contract.json -e prod.json",
           "Block merge, fix schema"),
    CIStep("Integration", "Nightly at 02:00",
           "newman run integration.json -e prod.json -n 5",
           "Create P2 ticket, alert team"),
    CIStep("Health Monitor", "Every 5 minutes",
           "newman run health.json -e prod.json --bail",
           "Page on-call, check infrastructure"),
]

print("\nCI/CD Pipeline:")
for c in pipeline:
    print(f"  [{c.stage}] Trigger: {c.trigger}")
    print(f"    Command: {c.newman_cmd}")
    print(f"    On fail: {c.on_fail}")
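The first two pipeline stages form a deploy gate: smoke-test staging before deploying, smoke-test prod after, roll back on failure. A sketch of that control flow, with hypothetical deploy/rollback hooks passed in as callables:

```python
# Sketch of the deploy gate implied by the pipeline above.
# run_newman, deploy, and rollback are hypothetical hooks supplied by the caller.
def deploy_with_gates(run_newman, deploy, rollback) -> str:
    """run_newman(collection, env) -> bool; returns the final deploy status."""
    if not run_newman("smoke.json", "staging.json"):
        return "aborted: staging smoke test failed"
    deploy()
    if not run_newman("smoke.json", "prod.json"):
        rollback()
        return "rolled back: prod smoke test failed"
    return "deployed"

# Dry run with stub hooks: staging passes, prod fails.
print(deploy_with_gates(lambda c, e: e != "prod.json",
                        deploy=lambda: None, rollback=lambda: None))
# -> rolled back: prod smoke test failed
```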
Monitoring and Reporting
# === Newman Monitoring Dashboard ===
@dataclass
class MonitorResult:
    endpoint: str
    status: str
    latency_ms: int
    tests_passed: int
    tests_failed: int
    timestamp: str

results = [
    MonitorResult("/health", "PASS", 45, 3, 0, "2024-01-15 10:00"),
    MonitorResult("/predict", "PASS", 230, 5, 0, "2024-01-15 10:00"),
    MonitorResult("/batch-predict", "PASS", 890, 4, 0, "2024-01-15 10:00"),
    MonitorResult("/model/info", "PASS", 35, 2, 0, "2024-01-15 10:00"),
    MonitorResult("/predict (invalid)", "PASS", 15, 3, 0, "2024-01-15 10:00"),
]

print("=== Latest Monitor Results ===")
total_passed = sum(r.tests_passed for r in results)
total_failed = sum(r.tests_failed for r in results)
for r in results:
    icon = "PASS" if r.status == "PASS" else "FAIL"
    print(f"  [{icon}] {r.endpoint} — {r.latency_ms}ms — {r.tests_passed}/{r.tests_passed + r.tests_failed} tests")
print(f"\n  Total: {total_passed} passed, {total_failed} failed")
print(f"  Pass rate: {total_passed/(total_passed+total_failed)*100:.0f}%")
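Instead of hard-coded results, the pass rate can be computed from Newman's JSON reporter output (`newman run ... -r json --reporter-json-export results.json`); the report's `run.stats.assertions` block holds the counts. A sketch with a sample report trimmed to the fields used:

```python
# Sketch: derive the pass rate from Newman's JSON reporter output.
# sample_report is trimmed to run.stats.assertions, the block used here.
sample_report = {
    "run": {"stats": {"assertions": {"total": 17, "failed": 0, "pending": 0}}}
}

def pass_rate(report: dict) -> float:
    """Percentage of assertions that passed in one Newman run."""
    stats = report["run"]["stats"]["assertions"]
    passed = stats["total"] - stats["failed"]
    return passed / stats["total"] * 100

print(f"Pass rate: {pass_rate(sample_report):.0f}%")   # Pass rate: 100%
```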
# Newman reporters
reporters = {
    "cli": "Terminal output, default",
    "junit": "JUnit XML for CI integration",
    "htmlextra": "Beautiful HTML report with charts",
    "json": "JSON output for custom processing",
    "csv": "CSV export for spreadsheet analysis",
    "teamcity": "TeamCity service messages",
}

print("\n Available Reporters:")
for k, v in reporters.items():
    print(f"  [{k}]: {v}")
Tips
- Bail: use --bail to stop on the first failing test and save time
- Environment: keep separate environment files for dev, staging, and prod
- Schema: use JSON Schema validation to check response structure
- Version: verify the model version after every deploy
- Monitor: run health checks every 5 minutes and alert when the API is down
What is Newman?
Newman is Postman's command-line collection runner. It executes a collection, including its pre-request and test scripts, outside the Postman GUI, installs via npm, accepts environment files, and emits results through CLI, HTML, JSON, or JUnit reporters, which makes it a natural fit for CI/CD.
How does it apply to MLOps?
Run ML API checks as collections: validate the predict endpoint's response schema, enforce latency SLAs, assert the expected model version, exercise health-check and batch endpoints, and test authentication. Wire the runs into CI/CD to gate deploys and trigger rollbacks.
How do you write test scripts?
With pm.test assertions: expect status 200, responseTime below the SLA, a jsonSchema match for the response structure, and prediction/confidence values within valid ranges. Pre-request scripts plus environment and collection variables parameterize the requests.
How do you use it in CI/CD?
Keep collections and per-stage environment files in Git, run newman run from GitHub Actions or GitLab CI, export JUnit reports, block the deployment or roll back on failure, and schedule a health-check run every 5 minutes with alerting.
Summary
Postman collections driven by Newman give an ML API a complete testing loop in CI/CD: per-environment configuration, schema validation, health checks, model-version verification, and automated rollback signals for production monitoring.
