SiamCafe.net Blog
Technology

Retest: How to Re-test Effectively After Fixing Bugs

2025-11-25 · อ. บอม — SiamCafe.net · 1,394 words

What Is a Retest?

A retest (also called confirmation testing) is the process of re-executing tests after a bug or defect has been fixed, to confirm that the fix really resolves the reported issue. Without a retest there is no evidence that the problem is gone and no check that the fix has not quietly broken nearby behavior (regression), so retest results are what allow a bug ticket to be closed with confidence.

Why retesting matters: it confirms the bug fix did not introduce new bugs (side effects); developers may have fixed only part of the scenario; edge cases around the fix may still be broken; stakeholders need confidence that the issue is truly resolved; and compliance requirements in some industries mandate documented retest results.

The difference between a retest and regression testing: a retest focuses on the specific bug that was fixed, with a narrow scope targeted at the fix itself. Regression testing checks more broadly that the change did not break other parts of the system, so its scope is wider than the single bug. In practice, retest the fix first, then run regression tests around it.
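The scope difference can be sketched in a few lines of Python; the test names, tags, and catalogue structure below are illustrative, not from a real suite:

```python
# Hypothetical test catalogue: each test is tagged with the bug it verifies
# (if any) and the module it belongs to.
TESTS = [
    {"name": "test_login_special_chars", "bug_id": "BUG-101", "module": "auth"},
    {"name": "test_logout", "bug_id": None, "module": "auth"},
    {"name": "test_reset_password", "bug_id": None, "module": "auth"},
    {"name": "test_cart_total", "bug_id": "BUG-102", "module": "cart"},
]

def retest_scope(bug_id):
    """Retest: only the tests that verify this specific bug."""
    return [t["name"] for t in TESTS if t["bug_id"] == bug_id]

def regression_scope(module):
    """Regression: every test in the module the fix touched."""
    return [t["name"] for t in TESTS if t["module"] == module]

print(retest_scope("BUG-101"))    # narrow: just the fixed bug's test
print(regression_scope("auth"))   # broad: everything around the change
```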

Types of Retest

Documenting the main types of retest:

# === Retest Types ===

cat > retest_types.yaml << 'EOF'
retest_types:
  bug_fix_retest:
    description: "Verify the bug fix actually resolves the reported issue"
    steps:
      - "Reproduce bug with the original steps"
      - "Verify fix resolves the issue"
      - "Test edge cases related to the fix"
      - "Check no new issues introduced"
    priority: "High"
    
  regression_retest:
    description: "Verify the change did not break other parts of the system"
    steps:
      - "Run test suite for affected module"
      - "Test integration points"
      - "Verify backward compatibility"
    priority: "High"
    
  confirmation_retest:
    description: "Verify the fix works in every environment"
    environments: ["Development", "Staging", "Production"]
    steps:
      - "Deploy fix to target environment"
      - "Run retest in that environment"
      - "Document results per environment"
    priority: "Medium"
    
  performance_retest:
    description: "Verify the fix did not degrade performance"
    metrics:
      - "Response time (P50, P95, P99)"
      - "Throughput (requests per second)"
      - "Memory usage"
      - "CPU usage"
    steps:
      - "Run baseline performance test"
      - "Apply fix"
      - "Run performance test again"
      - "Compare results"
    priority: "Medium"
    
  security_retest:
    description: "Verify the security fix actually closes the vulnerability"
    steps:
      - "Run original exploit/PoC"
      - "Verify exploit no longer works"
      - "Run security scanner"
      - "Pen test related attack vectors"
    priority: "Critical"

  data_migration_retest:
    description: "Verify the data migration completed correctly"
    steps:
      - "Verify row counts match"
      - "Validate data integrity (checksums)"
      - "Test application with migrated data"
      - "Check edge cases (null, special characters)"
    priority: "High"
EOF

python3 -c "
import yaml
with open('retest_types.yaml') as f:
    data = yaml.safe_load(f)
types = data['retest_types']
print('Retest Types:')
for name, info in types.items():
    print(f'\n  {name} [{info[\"priority\"]}]:')
    print(f'    {info[\"description\"]}')
    for step in info['steps'][:2]:
        print(f'    - {step}')
"

echo "Retest types documented"
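The priority field above can drive execution order. A minimal sketch, assuming a simple rank map; the data is inlined here rather than read from the YAML so the snippet is self-contained:

```python
# Order retest types by priority so Critical/High work runs first.
# Priorities mirror the YAML above; the numeric ranking is an assumption.
RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

retest_types = {
    "bug_fix_retest": "High",
    "regression_retest": "High",
    "confirmation_retest": "Medium",
    "performance_retest": "Medium",
    "security_retest": "Critical",
    "data_migration_retest": "High",
}

# sorted() is stable, so equal-priority types keep their original order.
ordered = sorted(retest_types, key=lambda name: RANK[retest_types[name]])
print(ordered[0])  # security_retest
```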

Automated Retest with a Script

Building an automated retest framework:

#!/usr/bin/env python3
# automated_retest.py - Automated Retest Framework
import logging
from datetime import datetime

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("retest")

class RetestFramework:
    """Automated retest framework for bug verification"""
    
    def __init__(self, project_name="MyApp"):
        self.project = project_name
        self.test_cases = []
        self.results = []
    
    def add_retest(self, bug_id, title, test_fn, severity="medium"):
        """Register a retest case"""
        self.test_cases.append({
            "bug_id": bug_id,
            "title": title,
            "test_fn": test_fn,
            "severity": severity,
            "registered_at": datetime.now().isoformat(),
        })
    
    def run_all(self):
        """Execute all registered retests"""
        self.results = []
        passed = 0
        failed = 0
        
        for tc in self.test_cases:
            result = {
                "bug_id": tc["bug_id"],
                "title": tc["title"],
                "severity": tc["severity"],
                "started_at": datetime.now().isoformat(),
            }
            
            try:
                tc["test_fn"]()
                result["status"] = "PASSED"
                result["message"] = "Bug fix verified successfully"
                passed += 1
            except AssertionError as e:
                result["status"] = "FAILED"
                result["message"] = f"Fix not working: {str(e)}"
                failed += 1
            except Exception as e:
                result["status"] = "ERROR"
                result["message"] = f"Test error: {str(e)}"
                failed += 1
            
            result["finished_at"] = datetime.now().isoformat()
            self.results.append(result)
            
            status_icon = "PASS" if result["status"] == "PASSED" else "FAIL"
            logger.info(f"[{status_icon}] {tc['bug_id']}: {tc['title']}")
        
        return {
            "total": len(self.test_cases),
            "passed": passed,
            "failed": failed,
            "pass_rate": f"{passed/len(self.test_cases)*100:.1f}%" if self.test_cases else "0%",
            "results": self.results,
        }
    
    def generate_report(self):
        """Generate retest report"""
        report = {
            "project": self.project,
            "generated_at": datetime.now().isoformat(),
            "summary": {
                "total": len(self.results),
                "passed": sum(1 for r in self.results if r["status"] == "PASSED"),
                "failed": sum(1 for r in self.results if r["status"] == "FAILED"),
                "errors": sum(1 for r in self.results if r["status"] == "ERROR"),
            },
            "details": self.results,
            "recommendation": "",
        }
        
        if report["summary"]["failed"] == 0:
            report["recommendation"] = "All retests passed. Ready for release."
        else:
            report["recommendation"] = f"{report['summary']['failed']} retests failed. Fixes need revision."
        
        return report

# Example usage
framework = RetestFramework("E-Commerce App")

# Register retests
def test_bug_001():
    """BUG-001: Login fails with special characters in password"""
    password = "P@ss#w0rd!&*"
    # Simulate login
    assert len(password) > 0, "Password should not be empty"
    assert any(c in "!@#$%^&*" for c in password), "Special chars should be allowed"

def test_bug_002():
    """BUG-002: Cart total wrong with discount > 100%"""
    subtotal = 1000
    discount_pct = 150  # Invalid discount
    discount = min(discount_pct, 100)  # Fix: cap at 100%
    total = subtotal * (1 - discount / 100)
    assert total >= 0, f"Total should not be negative, got {total}"

def test_bug_003():
    """BUG-003: Search returns 500 for empty query"""
    query = ""
    # Simulate search
    results = [] if not query else ["item1", "item2"]
    assert isinstance(results, list), "Should return empty list, not error"

framework.add_retest("BUG-001", "Login with special characters", test_bug_001, "high")
framework.add_retest("BUG-002", "Cart discount > 100%", test_bug_002, "critical")
framework.add_retest("BUG-003", "Empty search query", test_bug_003, "medium")

# Run
summary = framework.run_all()
print(f"\nRetest Results: {summary['passed']}/{summary['total']} passed ({summary['pass_rate']})")

report = framework.generate_report()
print(f"Recommendation: {report['recommendation']}")
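To make the framework useful in CI, the report can be persisted and turned into an exit code so the pipeline blocks a release when any retest failed. A small sketch continuing the example above; the file name and gating rule are assumptions:

```python
# Persist a retest report and compute a CI exit code: non-zero when any
# retest failed or errored, so the pipeline can fail the job.
import json

def save_and_gate(report, path="retest_report.json"):
    """Write the report as JSON and return 0 (ok) or 1 (block release)."""
    with open(path, "w") as f:
        json.dump(report, f, indent=2, ensure_ascii=False)
    broken = report["summary"]["failed"] + report["summary"]["errors"]
    return 0 if broken == 0 else 1

# In CI, after framework.generate_report():
#   sys.exit(save_and_gate(report))
```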

Retest Strategy and Frameworks

Configuring retests in popular test frameworks:

# === Retest Strategy ===

# 1. Pytest retest configuration
cat > conftest.py << 'PYTHON'
import pytest
import json
from datetime import datetime

# Retest marker
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "retest(bug_id): mark test as retest for specific bug"
    )

# Custom report
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    
    if report.when == "call":
        retest_marker = item.get_closest_marker("retest")
        if retest_marker:
            bug_id = retest_marker.args[0] if retest_marker.args else "UNKNOWN"
            report.bug_id = bug_id
            report.retest = True

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.stats.get("passed", []) + terminalreporter.stats.get("failed", [])
    retest_reports = [r for r in reports if getattr(r, "retest", False)]
    
    if retest_reports:
        terminalreporter.write_sep("=", "RETEST SUMMARY")
        for r in retest_reports:
            status = "PASS" if r.passed else "FAIL"
            bug_id = getattr(r, "bug_id", "UNKNOWN")
            terminalreporter.write_line(f"  [{status}] {bug_id}: {r.nodeid}")
PYTHON

# 2. Test file with retest markers
cat > test_retests.py << 'PYTHON'
import pytest

@pytest.mark.retest("BUG-101")
def test_login_special_chars():
    """Retest: Login should handle special characters"""
    password = "P@ss#w0rd!&*"
    assert validate_password(password)

@pytest.mark.retest("BUG-102")
def test_cart_negative_total():
    """Retest: Cart total should never be negative"""
    total = calculate_total(1000, discount_pct=150)
    assert total >= 0

@pytest.mark.retest("BUG-103")
@pytest.mark.parametrize("query", ["", " ", None, "normal search"])
def test_search_edge_cases(query):
    """Retest: Search should handle edge cases"""
    results = search(query)
    assert isinstance(results, list)

def validate_password(pwd):
    return len(pwd) >= 8

def calculate_total(subtotal, discount_pct=0):
    discount = min(discount_pct, 100)
    return subtotal * (1 - discount / 100)

def search(query):
    if not query or not query.strip():
        return []
    return [f"result for {query}"]
PYTHON

# 3. Run retests only
# pytest -m retest -v --tb=short

# 4. Playwright E2E retest
cat > e2e_retest.spec.ts << 'TYPESCRIPT'
import { test, expect } from '@playwright/test';

test.describe('Bug Fix Retests', () => {
  test('BUG-201: Checkout button disabled after error @retest', async ({ page }) => {
    await page.goto('/cart');
    await page.fill('#coupon', 'INVALID_CODE');
    await page.click('#apply-coupon');
    await expect(page.locator('#error-message')).toBeVisible();
    // After error, checkout button should still be clickable
    await expect(page.locator('#checkout-btn')).toBeEnabled();
  });

  test('BUG-202: Mobile menu not closing on navigation @retest', async ({ page }) => {
    await page.setViewportSize({ width: 375, height: 812 });
    await page.goto('/');
    await page.click('#mobile-menu-btn');
    await expect(page.locator('#mobile-nav')).toBeVisible();
    await page.click('a[href="/about"]');
    await expect(page.locator('#mobile-nav')).toBeHidden();
  });
});
TYPESCRIPT

echo "Retest strategy configured"

CI/CD Integration for Retests

Adding retests to the CI/CD pipeline:

# === CI/CD Retest Integration ===

# 1. GitHub Actions - Retest on bug fix PRs
cat > .github/workflows/retest.yml << 'EOF'
name: Bug Fix Retest

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  detect-retests:
    runs-on: ubuntu-latest
    outputs:
      bug_ids: ${{ steps.extract.outputs.bug_ids }}
    steps:
      - uses: actions/checkout@v4
      - name: Extract bug IDs from PR
        id: extract
        run: |
          # Extract BUG-XXX from PR title/body
          TITLE="${{ github.event.pull_request.title }}"
          BODY="${{ github.event.pull_request.body }}"
          BUG_IDS=$(echo "$TITLE $BODY" | grep -oP 'BUG-\d+' | sort -u | tr '\n' ',')
          echo "bug_ids=$BUG_IDS" >> $GITHUB_OUTPUT
          echo "Detected bugs: $BUG_IDS"

  retest:
    needs: detect-retests
    if: needs.detect-retests.outputs.bug_ids != ''
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      
      - name: Install dependencies
        run: pip install pytest pytest-html

      - name: Run retests
        run: |
          pytest -m retest -v --tb=short \
            --html=retest-report.html \
            --self-contained-html
      
      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: retest-report
          path: retest-report.html

      - name: Comment PR with results
        if: always()
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const body = `## Retest Results
            Bug IDs: ${{ needs.detect-retests.outputs.bug_ids }}
            Status: ${{ job.status }}
            [Download Report](../actions/artifacts)`;
            
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: body
            });

  regression:
    needs: retest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run regression suite
        run: pytest tests/ -v --tb=short -x
EOF

echo "CI/CD retest pipeline configured"
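The grep in the workflow above can be mirrored in Python, which makes the extraction logic easy to unit test on its own. A sketch; the numeric sort order is a choice, not part of the workflow:

```python
# Extract unique BUG-XXX identifiers from PR title/body text,
# mirroring the grep -oP 'BUG-\d+' | sort -u step in the workflow.
import re

def extract_bug_ids(text):
    """Return unique bug IDs sorted by their numeric part."""
    found = set(re.findall(r"BUG-\d+", text))
    return sorted(found, key=lambda b: int(b.split("-")[1]))

print(extract_bug_ids("Fix BUG-42 and BUG-7; relates to BUG-42"))
# ['BUG-7', 'BUG-42']
```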

Monitoring and Reporting

Tracking retest results with a dashboard:

#!/usr/bin/env python3
# retest_dashboard.py - Retest Dashboard

class RetestDashboard:
    def __init__(self):
        pass
    
    def dashboard(self):
        return {
            "current_sprint": {
                "bugs_fixed": 15,
                "retests_passed": 12,
                "retests_failed": 2,
                "retests_pending": 1,
                "pass_rate": "80%",
                "avg_retest_time": "15 minutes",
            },
            "by_severity": {
                "critical": {"total": 3, "passed": 3, "failed": 0},
                "high": {"total": 5, "passed": 4, "failed": 1},
                "medium": {"total": 4, "passed": 3, "failed": 1},
                "low": {"total": 3, "passed": 2, "failed": 1},
            },
            "failed_retests": [
                {"bug_id": "BUG-145", "title": "Payment timeout not handled", "severity": "high", "assignee": "developer_a", "reason": "Fix incomplete, edge case missed"},
                {"bug_id": "BUG-152", "title": "CSV export encoding issue", "severity": "medium", "assignee": "developer_b", "reason": "Thai characters still broken in Excel"},
            ],
            "trends": {
                "sprint_1": {"pass_rate": "70%", "bugs": 10},
                "sprint_2": {"pass_rate": "75%", "bugs": 12},
                "sprint_3": {"pass_rate": "82%", "bugs": 8},
                "sprint_4": {"pass_rate": "80%", "bugs": 15},
            },
            "recommendations": [
                "BUG-145: handle the timeout at the payment gateway level, not only in the frontend",
                "BUG-152: add a UTF-8 BOM to the CSV export (Excel needs the BOM to render Thai characters)",
                "Add automated regression tests for the payment module",
                "Set a retest deadline of 2 days after each fix is deployed",
            ],
        }

dash = RetestDashboard()
data = dash.dashboard()
sprint = data["current_sprint"]
print(f"Retest Dashboard (Current Sprint):")
print(f"  Bugs Fixed: {sprint['bugs_fixed']}")
print(f"  Retests: {sprint['retests_passed']} passed, {sprint['retests_failed']} failed, {sprint['retests_pending']} pending")
print(f"  Pass Rate: {sprint['pass_rate']}")

print(f"\nBy Severity:")
for sev, info in data["by_severity"].items():
    print(f"  {sev}: {info['passed']}/{info['total']} passed")

print(f"\nFailed Retests:")
for f in data["failed_retests"]:
    print(f"  [{f['severity']}] {f['bug_id']}: {f['title']}")
    print(f"    Reason: {f['reason']}")

print(f"\nRecommendations:")
for r in data["recommendations"][:3]:
    print(f"  - {r}")
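The sprint trend in the dashboard can be reduced to a simple direction check. A sketch that assumes the same "NN%" string format as the data above:

```python
# Compare the last two sprints' pass rates to label the trend.
# Input shape mirrors data["trends"] from the dashboard above.
def pass_rate_trend(trends):
    """Return 'improving', 'declining', or 'flat' for the last two sprints."""
    rates = [float(v["pass_rate"].rstrip("%")) for v in trends.values()]
    if rates[-1] > rates[-2]:
        return "improving"
    if rates[-1] < rates[-2]:
        return "declining"
    return "flat"

trends = {
    "sprint_3": {"pass_rate": "82%", "bugs": 8},
    "sprint_4": {"pass_rate": "80%", "bugs": 15},
}
print(pass_rate_trend(trends))  # declining
```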

FAQ: Frequently Asked Questions

Q: How is a retest different from regression testing?

A: A retest verifies the specific bug that was fixed: narrow scope, focused on that one bug, run after the developer marks it as fixed and before regression testing. Regression testing is broader: it verifies that the fix did not break anything else in the surrounding scope, such as the rest of the module, and usually runs after the retest passes. Example: for a bug where login fails with special characters, the retest checks login with special characters only, while regression covers everything related to authentication (login, logout, reset password, session management).

Q: Should retests be automated?

A: Not always; automate where it pays off. Good candidates: bugs that keep regressing (recurring), critical/high severity bugs, bugs touching core functionality, and tests that run frequently (every build). Poor candidates: one-time issues (e.g. a single data-corruption incident), tests that need human judgment (UI/UX review), and tests whose setup cost outweighs the benefit. A common split is 70-80% of retests automated and 20-30% manual (exploratory, UX, edge cases). Popular tools: Pytest (Python), Jest (JavaScript), Playwright (E2E), Selenium (browser).
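The automate-or-not criteria in this answer can be written down as a checklist function. The field names and rules below are illustrative assumptions, not a standard:

```python
# Decide whether a bug's retest is worth automating, following the
# rules of thumb above. Dict keys are hypothetical.
def should_automate(bug):
    """Return True if this bug's retest should be automated."""
    # Hard exclusions: one-time issues and human-judgment tests.
    if bug.get("one_time_issue") or bug.get("needs_human_judgment"):
        return False
    # Strong inclusions: high severity, recurring, or core functionality.
    if bug.get("severity") in ("critical", "high"):
        return True
    if bug.get("recurring") or bug.get("core_functionality"):
        return True
    return False

print(should_automate({"severity": "critical"}))                     # True
print(should_automate({"severity": "low", "one_time_issue": True}))  # False
```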

Q: What should we do when a retest fails?

A: Steps: document why the retest failed (screenshots, logs, steps), reopen the bug ticket in the bug tracker, attach the retest evidence (expected vs actual), have the developer review and fix again, and repeat the cycle (fix → retest → verify). Common causes of failed retests: the fix did not cover every scenario (edge cases), the fix addressed the symptom rather than the root cause, environment differences (works in dev but not in staging), or data dependencies (test data differs between environments). To prevent this: developers should unit test the fix before deploying, bug fixes should go through code review, and testers can review the fix approach before it is implemented.

Q: How long should a retest take?

A: It depends on severity and complexity. Critical bugs should be retested immediately (within 1-2 hours of the fix), high-severity bugs within 1 day, and medium/low bugs within 2-3 days. Per retest case: a manual retest typically takes 15-30 minutes, an automated retest 1-5 minutes, and an E2E retest 5-15 minutes. If retests take much longer than this, likely causes are: the test case is too large (break it into smaller tests), environment setup is slow (automate the setup), or there are many dependencies (mock/stub external services). Best practice is to set an SLA for retests, for example Critical within 4 hours and High within 1 day.
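The SLA numbers from this answer can be computed directly from the fix-deploy timestamp. A minimal sketch; mapping Medium/Low to 72 hours is an assumption based on the 2-3 day guidance:

```python
# Compute the retest deadline from severity and the fix-deploy time,
# using the SLA hours suggested above (72h for medium/low is assumed).
from datetime import datetime, timedelta

SLA_HOURS = {"critical": 4, "high": 24, "medium": 72, "low": 72}

def retest_deadline(severity, deployed_at):
    """Return the latest time the retest should be completed by."""
    return deployed_at + timedelta(hours=SLA_HOURS[severity])

deployed = datetime(2025, 11, 25, 9, 0)
print(retest_deadline("critical", deployed))  # 2025-11-25 13:00:00
```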
