Configuration Reference
This page provides a complete reference for all inputs, outputs, and configuration options of the GUI-Based Testing Code Review GitHub Action. The reference documentation enables developers to fully leverage the action's capabilities while maintaining compatibility with existing workflows.
The `github-token` input supplies the GitHub token used for API access, reviewdog integration, and repository operations. The token requires appropriate permissions for the features you intend to use.

- Required: Yes
- Default: `${{ github.token }}`
- Description: GitHub token for API access and reviewdog integration
- Permissions Required: Varies based on enabled features (see the permissions section; a hedged sketch follows the example below)
- Example:

  ```yaml
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
  ```
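The exact permission set depends on which features you enable and is defined by the action's own permissions documentation. As a rough, hedged sketch only, a workflow that posts PR comments, reports lint findings through reviewdog checks, and deploys the dashboard might declare something like the following; verify it against the permissions section before relying on it.

```yaml
# Assumed workflow-level permissions; adjust to the features you actually enable.
permissions:
  contents: read          # read access for checkout
  pull-requests: write    # PR summary comments and inline reviewdog comments
  checks: write           # reviewdog annotations in the Checks tab
  pages: write            # GitHub Pages deployment (if the action deploys via the Pages API)
  id-token: write         # OIDC token for Pages deployments (if applicable)
```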
The `mode` input determines which components of the action run. This provides flexibility for different CI/CD scenarios and optimization strategies.

- Required: No
- Default: `full`
- Options:
  - `full` - Executes all components including lint checks, tests, and dashboard generation
  - `lint-only` - Executes only ESLint and Prettier checks with reviewdog integration
  - `test-only` - Executes only Playwright tests without linting or dashboard
  - `dashboard-only` - Generates dashboard from existing artifacts without running tests
- Example:

  ```yaml
  with:
    mode: 'test-only'
  ```
These inputs provide fine-grained control over component execution, overriding the base mode setting when specific customization is needed.
The `enable-playwright` input controls whether Playwright tests execute regardless of the selected mode. This enables scenarios where you want to run tests alongside other custom operations.

- Default: `true`
- Description: Run Playwright tests regardless of mode setting
- Use Case: Disable tests in full mode for lint-focused workflows
- Example:

  ```yaml
  with:
    mode: 'full'
    enable-playwright: 'false'  # Only lint and dashboard
  ```
The `enable-lint` input controls ESLint and Prettier execution independently of the mode selection, allowing selective code quality checks based on workflow requirements.

- Default: `true`
- Description: Run ESLint/Prettier checks regardless of mode
- Use Case: Skip linting in full mode when focusing on test results
- Example:

  ```yaml
  with:
    mode: 'full'
    enable-lint: 'false'  # Only tests and dashboard
  ```
The `enable-dashboard` input manages dashboard generation independently from test execution. This supports scenarios where raw test data is sufficient without visualization.

- Default: `true`
- Description: Generate dashboard regardless of mode
- Use Case: Skip dashboard generation for faster CI runs
- Example:

  ```yaml
  with:
    enable-dashboard: 'false'  # Raw results only
  ```
The `enable-visual-comparison` input enables comparative testing between the pull request branch and the main branch, providing regression detection.

- Default: `false`
- Description: Run tests on main branch for comparison
- Requirements: Pull request context and full repository history (`fetch-depth: 0`)
- Performance Impact: Doubles test execution time
- Example:

  ```yaml
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0  # Required for branch comparison
    - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
      with:
        enable-visual-comparison: 'true'
  ```
The `enable-pr-comments` input controls whether the action posts summary comments on pull requests, providing immediate feedback to developers within the PR interface.

- Default: `true`
- Description: Post summary comment on pull requests
- Requirements: Pull request context and write permissions
- Example:

  ```yaml
  with:
    enable-pr-comments: 'false'  # Silent mode
  ```
The `enable-github-pages` input manages deployment of the interactive dashboard to GitHub Pages, providing persistent access to test results.

- Default: `true`
- Description: Deploy dashboard to GitHub Pages
- Requirements: Pages enabled in repository settings and deployment permissions
- Example:

  ```yaml
  with:
    enable-github-pages: 'true'
  ```
The `test-files` input specifies the location or pattern for test files, enabling flexible test organization strategies.

- Default: `tests`
- Description: Folder or glob pattern for test files
- Supports: Relative paths and glob patterns
- Examples:

  ```yaml
  test-files: 'tests'               # All tests in folder
  test-files: 'e2e/**/*.spec.ts'    # Specific pattern
  test-files: 'tests/critical'      # Subset of tests
  test-files: 'src/**/*.test.ts'    # Co-located tests
  ```
The `playwright-config` input specifies the Playwright configuration file location, supporting multiple configuration scenarios.

- Default: `playwright.config.js`
- Description: Path to Playwright configuration
- Supports: JavaScript and TypeScript configurations
- Example:

  ```yaml
  playwright-config: 'configs/playwright.ci.js'
  ```
The `reviewdog-reporter` input determines how reviewdog presents linting feedback in the pull request interface.

- Default: `github-pr-review`
- Options:
  - `github-pr-review` - Inline comments on changed lines
  - `github-check` - Annotations in checks tab
  - `github-pr-check` - Both inline and check annotations
- Example:

  ```yaml
  reviewdog-reporter: 'github-check'
  ```
The `main-branch` input specifies the base branch for visual comparison, accommodating different branching strategies.

- Default: `main`
- Description: Branch name for visual comparison baseline
- Examples:

  ```yaml
  main-branch: 'master'   # For repositories using master
  main-branch: 'develop'  # For GitFlow workflows
  ```
The `key-test-file` input identifies a stable test file used to verify a successful main branch checkout during visual comparison.

- Default: `tests/demo-todo-app.spec.ts`
- Description: File to verify main branch checkout
- Use Case: Set to your most stable test file to ensure reliable comparison
- Example:

  ```yaml
  key-test-file: 'tests/smoke.spec.ts'
  ```
The `node-version` input specifies the Node.js version for action execution, ensuring compatibility with project requirements.

- Default: `18`
- Description: Node.js version for the action
- Supports: Any valid Node.js version string
- Examples:

  ```yaml
  node-version: '20'    # Use Node.js 20 LTS
  node-version: '18.x'  # Latest 18.x release
  ```
The `web-report-url` input overrides the automatically generated dashboard URL in PR comments, supporting custom domains or external hosting.

- Default: `''` (auto-generated from repository settings)
- Description: Custom dashboard URL for PR comments
- Use Case: Corporate domains or CDN hosting
- Example:

  ```yaml
  web-report-url: 'https://test-results.company.com/pr-${{ github.event.number }}'
  ```
The `artifacts-retention-days` input controls how long GitHub retains uploaded artifacts, balancing storage costs with historical analysis needs.

- Default: `30`
- Description: Days to retain uploaded artifacts
- Maximum: `90` (GitHub limit)
- Example:

  ```yaml
  artifacts-retention-days: '7'  # One week retention
  ```
The `fail-on-test-failure` input determines whether test failures cause the action to fail, enabling strict quality gates.

- Default: `false`
- Description: Fail the action if tests fail
- Use Case: Enforce test passing in protected branches
- Example:

  ```yaml
  fail-on-test-failure: 'true'
  ```
The `extra-npm-dependencies` input allows installation of additional npm packages required by custom configurations or extensions.

- Default: `''`
- Description: Additional npm packages to install
- Format: Space-separated package names with optional versions
- Example:

  ```yaml
  extra-npm-dependencies: 'dotenv@16.0.0 cross-env'
  ```
The `custom-artifacts-path` input specifies a custom location for artifacts when using dashboard-only mode with external test results.

- Default: `''`
- Description: Path to pre-generated artifacts
- Use Case: Integration with non-standard CI pipelines
- Example:

  ```yaml
  with:
    mode: 'dashboard-only'
    custom-artifacts-path: 'test-results'
  ```
The action provides comprehensive outputs that enable integration with subsequent workflow steps and external systems.
The `test-results` output provides the complete test execution summary in JSON format for programmatic processing.

- Type: JSON string
- Description: Complete Playwright test summary
- Schema:

  ```json
  {
    "total": 25,
    "passed": 23,
    "failed": 1,
    "skipped": 1,
    "duration": 45000,
    "pass_rate": 92
  }
  ```
- Usage Example:

  ```yaml
  - name: Process results
    run: |
      echo '${{ steps.review.outputs.test-results }}' | jq '.pass_rate'
  ```
Provides the test success percentage as a numeric value for threshold checks.
- Type: Number
- Description: Percentage of tests passed
- Range: 0-100
- Example: `92`
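A downstream step can turn this percentage into a quality gate. The sketch below derives the rate from the `test-results` JSON shown above, reusing the `review` step id from the earlier usage example; the 90% threshold is an arbitrary illustration.

```yaml
- name: Enforce minimum pass rate
  run: |
    # Read the pass rate from the action's test-results output (step id assumed to be 'review')
    RATE=$(echo '${{ steps.review.outputs.test-results }}' | jq '.pass_rate')
    # Fail the job when the rate drops below an illustrative 90% threshold
    if [ "$RATE" -lt 90 ]; then
      echo "::error::Pass rate ${RATE}% is below the required 90%"
      exit 1
    fi
```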
Reports the total number of tests executed for coverage tracking.
- Type: Number
- Description: Total number of tests executed
- Example: `25`
Contains test results from the pull request branch for comparison analysis.
- Type: JSON string
- Description: PR branch test summary
- Available: When visual comparison is enabled
Contains baseline test results from the main branch for regression detection.
- Type: JSON string
- Description: Main branch test summary
- Available: When visual comparison is enabled
Provides detailed comparison analysis between branches including regression indicators.
- Type: JSON string
- Description: Comparison analysis between branches
- Schema: Includes differences in pass rates, new failures, and resolved issues
The `gui-regression-detected` output is a boolean flag indicating whether a regression was detected during comparison.

- Type: Boolean string
- Description: `"true"` if regression found, `"false"` otherwise
- Example Usage:

  ```yaml
  - name: Check for regression
    if: steps.review.outputs.gui-regression-detected == 'true'
    run: |
      echo "::error::GUI regression detected!"
      exit 1
  ```
Provides an overall code quality metric based on linting results.
- Type: Number
- Description: Overall lint score (0-100)
- Calculation: Based on weighted ESLint errors and warnings
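As a hedged sketch, the score can feed a soft quality gate. The output id `lint-score` used here is inferred from the description above and should be checked against the action's outputs listing; the 80-point threshold is purely illustrative, and the comparison assumes an integer score.

```yaml
- name: Report lint score
  run: |
    # Output id 'lint-score' is assumed; verify against the action's outputs listing
    SCORE=${{ steps.review.outputs.lint-score }}
    echo "Lint score: ${SCORE}/100" >> "$GITHUB_STEP_SUMMARY"
    # Emit a non-blocking warning below an illustrative threshold of 80 (assumes an integer score)
    if [ "${SCORE}" -lt 80 ]; then
      echo "::warning::Lint score ${SCORE} is below 80"
    fi
```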
Provides the URL where the interactive dashboard can be accessed.
- Type: URL string
- Description: Deployed dashboard URL
- Example: `https://owner.github.io/repo/pr-123/`
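The URL can be surfaced wherever reviewers look, for instance by appending it to the job summary in a follow-up step. The output id `dashboard-url` is inferred from the description above, so confirm it against the action's outputs listing.

```yaml
- name: Link dashboard in job summary
  run: |
    # Output id 'dashboard-url' is assumed; confirm against the action's outputs listing
    echo "[View the GUI test dashboard](${{ steps.review.outputs.dashboard-url }})" >> "$GITHUB_STEP_SUMMARY"
```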
Indicates the location of all generated artifacts for downstream processing.
- Type: Path string
- Description: Location of generated artifacts
- Default: `artifacts`
Provides structured data about review checklist completion for tracking purposes.
- Type: JSON string
- Description: Checklist completion status
- Schema:

  ```json
  {
    "total": 6,
    "completed": 4,
    "items": ["tests", "lint", "flowchart"]
  }
  ```
The action leverages standard GitHub Actions environment variables for context and configuration:
- `GITHUB_TOKEN` - Authentication token for API access
- `GITHUB_EVENT_NAME` - Workflow trigger type (pull_request, push, etc.)
- `GITHUB_REPOSITORY` - Repository identifier for Pages deployment
- `GITHUB_EVENT_PATH` - Path to event payload file
- `GITHUB_SHA` - Current commit SHA
- `GITHUB_REF` - Current Git reference
- `GITHUB_WORKSPACE` - Working directory path
- `GITHUB_RUN_ID` - Unique workflow run identifier
- `GITHUB_RUN_NUMBER` - Sequential run number
The action implements intelligent configuration detection to minimize setup requirements:
ESLint Configurations: The action automatically detects and uses ESLint configurations in the following priority order: .eslintrc.json, .eslintrc.js, eslint.config.js, and eslint.config.mjs. Custom configurations are respected when present.
Prettier Configurations: Prettier settings are auto-detected from .prettierrc, .prettierrc.json, .prettierrc.js, and prettier.config.js. The action falls back to sensible defaults when no configuration exists.
Playwright Configuration: The action searches for Playwright configurations in standard locations including playwright.config.js and playwright.config.ts. TypeScript configurations are transpiled automatically.
The action generates a consistent artifact structure that facilitates both manual review and automated processing:
```text
artifacts/
├── playwright-summary-pr.json     # PR test summary with key metrics
├── playwright-summary-main.json   # Main test summary for comparison
├── playwright-metrics.json        # Detailed test execution data
├── lint-summary.json              # ESLint and Prettier results
├── checklist.md                   # Human-readable review checklist
├── checklist.json                 # Machine-readable checklist data
├── flowchart.png                  # Visual test flow diagram
├── pr-report/                     # PR branch HTML report
│   └── index.html                 # Interactive test results
├── main-report/                   # Main branch HTML report
│   └── index.html                 # Baseline test results
└── web-report/                    # Consolidated dashboard
    ├── index.html                 # Main dashboard entry
    ├── flowchart.png              # Embedded flow diagram
    ├── pr-report/                 # Embedded PR results
    └── main-report/               # Embedded main results
```
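A minimal sketch of consuming these files from a later step in the same job, assuming the default `artifacts` directory, the summary file names shown above, and that the summary files share the `pass_rate` field from the `test-results` schema (the main summary only exists when visual comparison is enabled):

```yaml
- name: Compare branch summaries from artifacts
  run: |
    # Read the pass rates recorded in the generated summary files
    PR_RATE=$(jq '.pass_rate' artifacts/playwright-summary-pr.json)
    MAIN_RATE=$(jq '.pass_rate' artifacts/playwright-summary-main.json)
    echo "PR pass rate: ${PR_RATE}% (main baseline: ${MAIN_RATE}%)"
    # List the consolidated dashboard bundle
    ls -R artifacts/web-report
```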
The action implements resilient error handling to maximize value delivery:
Lint and Test Failures: By default, failures in lint or test phases do not fail the action, allowing dashboard generation to proceed. This ensures visibility into failures through the dashboard even when tests fail.
Dependency Management: Missing dependencies are automatically installed based on package.json specifications. The action includes fallback versions for critical dependencies.
Network Resilience: Network operations implement exponential backoff retry logic with configurable maximum attempts. Transient failures do not prevent action completion.
Artifact Management: Upload failures are logged with detailed error information but do not fail the action. Partial artifacts are handled gracefully in dashboard generation.
The following examples show common configurations. Note that when `enable-visual-comparison` is `'true'`, the preceding checkout step must use `fetch-depth: 0`, as shown earlier.

Minimal configuration:

```yaml
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
```

Fully customized configuration:

```yaml
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    mode: 'full'
    enable-visual-comparison: 'true'
    enable-pr-comments: 'true'
    enable-github-pages: 'true'
    test-files: 'e2e/**/*.spec.ts'
    playwright-config: 'playwright.ci.config.js'
    reviewdog-reporter: 'github-pr-check'
    main-branch: 'develop'
    key-test-file: 'e2e/smoke.spec.ts'
    node-version: '20'
    artifacts-retention-days: '14'
    fail-on-test-failure: 'false'
    extra-npm-dependencies: 'dotenv cross-env'
```

Enterprise configuration with strict quality gates:

```yaml
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    github-token: ${{ secrets.ENTERPRISE_TOKEN }}
    mode: 'full'
    enable-visual-comparison: 'true'
    web-report-url: 'https://test-dashboard.enterprise.com'
    reviewdog-reporter: 'github-check'
    fail-on-test-failure: 'true'
    artifacts-retention-days: '90'
```