
Integration Patterns


This guide shows various ways to integrate the GUI-Based Testing Code Review action with your existing CI/CD pipelines.

Basic Patterns

1. Simple All-in-One

The easiest way to get started with full functionality:

name: GUI Test Review
on:
  pull_request:
    branches: [main]

jobs:
  test-and-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
    
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Required for visual comparison
          
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          enable-visual-comparison: 'true'
          mode: 'full'  # Default - runs lint, test, and dashboard

2. Enhance Existing Test Pipeline

Add visual review to your current Playwright tests:

name: CI Pipeline
on: [pull_request, push]

jobs:
  # Your existing test job
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - run: npm test
      
      # Generate a Playwright JSON report for the dashboard
      # (PLAYWRIGHT_JSON_OUTPUT_NAME writes the JSON to a file instead of stdout)
      - run: npx playwright test --reporter=json
        env:
          PLAYWRIGHT_JSON_OUTPUT_NAME: playwright-metrics.json
      
      # Upload for dashboard
      - uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: |
            playwright-report/
            test-results/
            playwright-metrics.json
  
  # Add visual dashboard
  visual-review:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
      
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: test-results
          path: artifacts
      
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'dashboard-only'
          custom-artifacts-path: 'artifacts'

3. Separate Lint and Test

Run components in parallel for faster feedback:

name: Parallel CI
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write  # For reviewdog inline comments
    steps:
      - uses: actions/checkout@v4
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'lint-only'
          reviewdog-reporter: 'github-pr-review'
          
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'test-only'
          enable-visual-comparison: 'true'
          test-files: 'tests'  # Or your test directory
          
  dashboard:
    needs: [lint, test]
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
      
    steps:
      - uses: actions/checkout@v4
      # No name specified: download-artifact@v4 fetches all artifacts from the earlier jobs
      - uses: actions/download-artifact@v4
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'dashboard-only'
          enable-pr-comments: 'true'
          enable-github-pages: 'true'

Best Practices

DO:

  • ✅ Use fetch-depth: 0 for visual comparison (required for branch checkout)
  • ✅ Set appropriate permissions for your needs (minimum required)
  • ✅ Cache dependencies and Playwright browsers for faster runs (see the caching sketch after this list)
  • ✅ Use matrix strategies for comprehensive testing across environments
  • ✅ Implement smart retries for flaky tests
  • ✅ Use artifacts-retention-days to manage storage costs
  • ✅ Leverage test-files parameter to focus on relevant tests
  • ✅ Use custom-artifacts-path when integrating with existing pipelines
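
For the caching point above, here is a minimal sketch using actions/cache. It assumes the default Playwright browser install location on Linux runners (~/.cache/ms-playwright) and keys the cache on your lock file; adjust both for your setup:

# Restore npm dependencies via setup-node's built-in cache
- uses: actions/setup-node@v4
  with:
    node-version: '18'
    cache: 'npm'
- run: npm ci

# Cache the Playwright browser binaries themselves
- uses: actions/cache@v4
  id: playwright-cache
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

# Download browsers only when the cache missed
- if: steps.playwright-cache.outputs.cache-hit != 'true'
  run: npx playwright install --with-deps

# OS-level dependencies are never cached, so install them on a cache hit
- if: steps.playwright-cache.outputs.cache-hit == 'true'
  run: npx playwright install-deps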

DON'T:

  • ❌ Run all tests on every commit (use path filters; see the sketch after this list)
  • ❌ Ignore security warnings from npm audit
  • ❌ Use fail-on-test-failure: true without retry logic for flaky tests
  • ❌ Forget to clean up resources (Docker containers, test servers, etc.)
  • ❌ Skip setting key-test-file when using visual comparison
  • ❌ Use dashboard-only mode without proper artifact structure
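
As a sketch of the path-filter point above (the paths here are examples and should match your repository layout), you can restrict GUI test runs to changes that actually affect them:

on:
  pull_request:
    paths:
      - 'tests/**'
      - 'src/**'
      - 'playwright.config.*'
      - '.github/workflows/**'

For the retry point, Playwright's own retry support works without any action-specific input, e.g. npx playwright test --retries=2, or the retries option in playwright.config.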

Performance Tips:

  1. Use GitHub's larger runners for heavy test suites:

    runs-on: ubuntu-latest-4-cores  # or ubuntu-latest-8-cores
  2. Parallelize independent test suites using job matrix

  3. Cache everything possible:

    • Node modules (actions/setup-node with cache)
    • Playwright browsers (see example above)
    • Build outputs if applicable
  4. Use shallow clones when comparison isn't needed:

    - uses: actions/checkout@v4
      # No fetch-depth: 0 if not comparing branches
  5. Consider test sharding for very large suites (see the sharding sketch after these tips)

  6. Optimize artifact retention:

    artifacts-retention-days: '7'  # Instead of default 30
  7. Use continue-on-error for non-critical steps:

    - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
      continue-on-error: true  # Dashboard still generates if tests fail
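
A minimal sharding sketch combining tips 2 and 5, using a job matrix and Playwright's built-in --shard option. The shard count of 4, the Node version, and the artifact names are illustrative; merging shard results into one dashboard would happen in a downstream dashboard-only job, as in pattern 3:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - run: npx playwright install --with-deps
      # Each matrix job runs one quarter of the suite
      - run: npx playwright test --shard=${{ matrix.shard }}/4
      # Upload per-shard results for a later dashboard-only job
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results-${{ matrix.shard }}
          path: test-results/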

Debugging Tips:

Enable verbose logging when troubleshooting:

env:
  ACTIONS_STEP_DEBUG: true
  DEBUG: 'pw:*'  # Playwright debug logs

Check intermediate outputs:

- name: Debug artifacts
  if: always()
  run: |
    find artifacts -type f -name "*.json" -exec echo {} \; -exec head -20 {} \;
