Integration Patterns
This guide shows various ways to integrate the GUI-Based Testing Code Review action with your existing CI/CD pipelines.
The easiest way to get started with full functionality:
```yaml
name: GUI Test Review

on:
  pull_request:
    branches: [main]

jobs:
  test-and-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Required for visual comparison

      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          enable-visual-comparison: 'true'
          mode: 'full' # Default - runs lint, test, and dashboard
```

Add visual review to your current Playwright tests:
```yaml
name: CI Pipeline

on: [pull_request, push]

jobs:
  # Your existing test job
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'

      - run: npm ci
      - run: npm test

      # Generate Playwright report with JSON reporter
      - run: npx playwright test --reporter=json

      # Upload for dashboard
      - uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: |
            playwright-report/
            test-results/
            playwright-metrics.json

  # Add visual dashboard
  visual-review:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4

      - uses: actions/download-artifact@v4
        with:
          name: test-results
          path: artifacts

      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'dashboard-only'
          custom-artifacts-path: 'artifacts'
```

Run components in parallel for faster feedback:
```yaml
name: Parallel CI

on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write # For reviewdog inline comments
    steps:
      - uses: actions/checkout@v4

      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'lint-only'
          reviewdog-reporter: 'github-pr-review'

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'test-only'
          enable-visual-comparison: 'true'
          test-files: 'tests' # Or your test directory

  dashboard:
    needs: [lint, test]
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4

      - uses: actions/download-artifact@v4

      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'dashboard-only'
          enable-pr-comments: 'true'
          enable-github-pages: 'true'
```

- ✅ Use `fetch-depth: 0` for visual comparison (required for branch checkout)
- ✅ Grant only the minimum permissions each job needs
- ✅ Cache dependencies and Playwright browsers for faster runs (a caching/matrix/retry sketch follows this list)
- ✅ Use matrix strategies for comprehensive testing across environments
- ✅ Implement smart retries for flaky tests
- ✅ Use `artifacts-retention-days` to manage storage costs
- ✅ Use the `test-files` parameter to focus on relevant tests
- ✅ Use `custom-artifacts-path` when integrating with existing pipelines
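A minimal sketch of the caching, matrix, and retry practices above, assuming a standard Playwright setup: `~/.cache/ms-playwright` is Playwright's default browser location on Linux runners, and the `--project` names assume the browsers defined by a default `playwright.config`. None of these values are inputs of this action.

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chromium, firefox, webkit] # illustrative matrix
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'

      - run: npm ci

      # Cache Playwright's browser binaries between runs (default Linux location)
      - uses: actions/cache@v4
        with:
          path: ~/.cache/ms-playwright
          key: playwright-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

      - run: npx playwright install --with-deps ${{ matrix.browser }}

      # Retry flaky tests up to twice before reporting them as failed
      - run: npx playwright test --project=${{ matrix.browser }} --retries=2
```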
- ❌ Run all tests on every commit (use path filters; a sketch follows this list)
- ❌ Ignore security warnings from npm audit
- ❌ Use `fail-on-test-failure: true` without retry logic for flaky tests
- ❌ Forget to clean up resources (Docker containers, test servers, etc.)
- ❌ Skip setting `key-test-file` when using visual comparison
- ❌ Use dashboard-only mode without proper artifact structure
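A path-filter sketch for the "run all tests on every commit" point. The paths below are assumptions about a typical project layout; adjust them to wherever your tests, source, and Playwright config live.

```yaml
# Only trigger the GUI test workflow when relevant files change (illustrative paths)
on:
  pull_request:
    branches: [main]
    paths:
      - 'tests/**'
      - 'src/**'
      - 'playwright.config.*'
```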
- Use GitHub's larger runners for heavy test suites:

  ```yaml
  runs-on: ubuntu-latest-4-cores # or ubuntu-latest-8-cores
  ```

- Parallelize independent test suites using a job matrix
- Cache everything possible:
  - Node modules (`actions/setup-node` with cache)
  - Playwright browsers (see example above)
  - Build outputs if applicable
- Use shallow clones when comparison isn't needed:

  ```yaml
  - uses: actions/checkout@v4 # No fetch-depth: 0 if not comparing branches
  ```

- Consider test sharding for very large suites (see sharding example; a minimal sketch also follows this list)
- Optimize artifact retention:

  ```yaml
  artifacts-retention-days: '7' # Instead of default 30
  ```

- Use `continue-on-error` for non-critical steps:

  ```yaml
  - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
    continue-on-error: true # Dashboard still generates if tests fail
  ```
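As a sharding sketch, this uses Playwright's built-in `--shard` flag to split a large suite across parallel jobs. The shard count and surrounding job layout are assumptions, not inputs of this action.

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4] # illustrative: four parallel shards
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - run: npx playwright install --with-deps
      # Each job runs one quarter of the suite
      - run: npx playwright test --shard=${{ matrix.shard }}/4
```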
Enable verbose logging when troubleshooting:

```yaml
env:
  ACTIONS_STEP_DEBUG: true
  DEBUG: 'pw:*' # Playwright debug logs
```

Check intermediate outputs:

```yaml
- name: Debug artifacts
  if: always()
  run: |
    find artifacts -type f -name "*.json" -exec echo {} \; -exec head -20 {} \;
```