From 4855c001633c44b32db56df414a66571e8716f42 Mon Sep 17 00:00:00 2001
From: bkioshn <35752733+bkioshn@users.noreply.github.com>
Date: Tue, 9 Jan 2024 08:11:01 +0700
Subject: [PATCH] feat: scanning all targets related to the given `target` (#128)

* chore: comment out format check
* chore: debug action
* chore: debug action
* chore: debug action
* chore: debug action
* chore: debug action
* chore: ts to js
* Revert "chore: introduce err for md and spelling to test CI" This reverts commit 6e3a69b6f00e8b1940ee527221af7cfb63793549.
* chore: hotfix remove ci_cli_version
* chore: hotfix run go build in install
* chore: hotfix try include cli bin
* chore: hotfix show ls
* chore: hotfix ts to js
* chore: hotfix add await
* chore: debug setup
* chore: debug discover
* chore: debug discover
* chore: debug install
* chore: debug discover
* chore: debug discover
* chore: debug install
* chore: debug install and discover
* chore: debug install and discover
* chore: debug install and discover
* chore: debug install and discover
* chore: debug install
* chore: debug install
* chore: debug install
* chore: debug
* chore: debug cmd
* chore: debug cmd
* chore: debug cmd
* chore: debug ci cmd
* chore: build new go bin
* Revert "chore: hotfix remove ci_cli_version" This reverts commit c1fdc5beff9764d5c0bcf5f960b706bb726effd6.
* fix: new logic for target scanner and test
* chore: fix readme
* chore: fix wording
* chore: test new target scanner
* chore: fix install lint and format
* test: try run check-*
* chore: printout go version
* chore: install ci and printout directory list
* fix: change branch name to commit hash
* fix: typo
* test: build cli
* test: print out usr/local/bin
* test: move build file
* fix: add local flag to install
* fix: path
* test: debug local input
* fix: print out cur directory
* test: move file
* test: try remove cd
* fix: remove local bin
* test: directory
* test: print out directory
* test: try add go os and arch
* test: add move cmd
* test: print out directory
* test: printout directory
* test: print out directory
* test: print go.mod
* test: print out error
* test: go version
* fix: fix path
* fix: go install
* test: ping google
* test: curl instead of ping
* test: change commit hash
* test: comment fallback part
* test: command
* test: check GOBIN
* test: ls gobin
* test: log GOBIN
* test: export GOBIN first
* test: try -h
* test: echo all steps
* test: ignore reject
* test: run cmdline as seq in one prom
* test: promise chain
* test: combine cmd
* fix: fix cmd
* fix: bring back old install
* fix: try go build instead of install
* fix: return
* fix: command
* fix: command
* test: print goroot
* fix: command
* fix: print out stderr and err
* fix: divide cmd
* fix: remove stderr, might not work when use
* fix: log messages
* fix: add log message
* test: debug gettarget
* test: print log
* fix: handle target-*
* test: debug run
* fix: change earthly scan target logic
* test: debug file permission
* test: permission file
* fix: add earthfile to path
* fix: point to the right target
* test: log
* fix: read file to get targets
* fix: regex
* fix: change earthly scan target logic
* fix: use line reader
* test: log
* fix: use fs instead of line reader
* chore: fix linter and format
* test: log args
* chore: fix md and remove log
* fix: spawn multiple process for multi target
* test: log
* fix: async
* fix: run.yml to get inputs target
* chore: cleanup
* Update index.md docs: improve instructions
* fix: target scan in yml
* fix: change from exec lib
* fix: try exec without then and catch
* test: log directory
* test: print out directory
* test: print out directory
* test: echo path
* test: print out directory for /usr/bin
* test: try adding arg
* test: echo pwd
* Revert "fix: target scan in yml" This reverts commit ad93385603478ad44d0f1860d4f2814fce5b13d8.
* fix: targets in ci yml
* feat: create new ci command called find
* feat: add new struct called EarthTargets
* fix: mod scanfortarget to return map of path to EarthTargets
* test: debug earthfile and target
* fix: use new find ci to get the targets
* fix: ci command
* fix: output of std cmd should be json
* fix: create function to call ci find
* fix: add async
* fix: typo
* fix: typo
* fix: add path to earthfile
* fix: regex
* fix: remove img and make target a list in scan cmd
* fix: action run
* fix: ts to js
* fix: check nullable
* fix: add null check before return
* fix: check null string
* test: debug stdout
* test: stdout null as string
* chore: cleanup and add comment
* fix: cleanup and fix async syntax
* fix: find all target pattern
* fix: parse json
* test: wrong spelling still should run mdcheck
* test: remove wrong spelling word
* chore: hotfix comment out test
* fix: add type checking
* chore: remove comment code
* test: try new logic
* fix: command get key paths
* fix: run condition
* test: debug path
* test: debug path
* fix: remove command
* fix: change output in discover
* fix: remove json
* fix: remove calling ci cmd to find filtered target
* test: check and check-*
* fix: remove find command
* test: remove test multiple targets
* fix: format and linting
* test: log earthfile map
* fix: discover stage
* fix: run stage
* fix: install stage
* fix: install build ci command
* fix: cmd quotation mark
* fix: cmd quotation mark
* fix: discover test
* fix: page yaml for docs
* fix: page yaml for docs format data
* test: run only docs test
* fix: remove "
* fix: page github output
* fix: bad substitution
* fix: github output
* fix: add bracket
* test: print out pwd
* fix: path for generate docs
* fix: bring back all flow
* fix: write test for target scanner
* chore: add comments and edit name
* chore: clean up
* fix: target push command
* fix: target should run in diff process
* fix: install js file
* fix: earthfile path for docs
* fix: doc earthfile path
* fix: doc earthfile path
* fix: artifact tag
* fix: push target
* fix: args spawn
* fix: prettier
* fix: artifact test and push cmd
* fix: format
* test: try using relative path
* fix: merge conflict
* fix: change array of targets to string
* test: debug output
* test: add extract filter logic
* test: try getting without jq
* fix: extract filter logic
* fix: typo
* test: debug no output
* fix: github output
* test: debug output
* test: bring back json parsing
* fix: remove map and bring back targets in run
* fix: github output
* test: debug output targets
* test: remove set github output
* fix: convert targets to string separated by space
* fix: remove multi line in targets
* fix: remove join
* fix: try line break
* fix: run to accept targets not map
* fix: remove earthfile map
* fix: update test for run
* fix: targets as an array
* fix: publish and release yaml file
* chore: fix onboarding docs
* chore: fix and add docs
* fix: file scanner linting
* fix: add log, fix typos, refactor
* feat: implement catalyst ci simulate command (#139)
* feat: add cat ci simulate command
* fix: go syntax
* chore: add markdown for simulate cmd
* fix: wording
* fix: change icon
* feat: add generate command
* fix: cleanup
* chore: add and fix docs for simulate and generate
* fix: default value for array targets
* fix: return err
* fix: refactor generate and simulate cmd
* fix: simulate markdown
* fix: default value for sim command
* fix: earth target should be relative path
* fix: simulate doc
* fix: relative path

---------

Co-authored-by: Steven Johnson

* chore: add doc for artifact and img
* fix: revert to master
* fix: ci local to false

---------

Co-authored-by: Joshua Gilman
Co-authored-by: Steven Johnson
---
 .github/workflows/ci.yml | 4 +-
 .github/workflows/pages.yml | 2 +-
 .github/workflows/publish.yml | 21 +-
 .github/workflows/release.yml | 21 +-
 .github/workflows/run.yml | 133 +--
 Earthfile | 14 +-
 actions/discover/dist/index.js | 1288 ++++++++++++++++++++++++-
 actions/discover/dist/licenses.txt | 24 +
 actions/discover/package-lock.json | 14 +
 actions/discover/package.json | 3 +-
 actions/discover/src/discover.test.ts | 120 +--
 actions/discover/src/discover.ts | 35 +-
 actions/install/action.yml | 4 +
 actions/install/dist/index.js | 18 +
 actions/install/src/install.test.ts | 57 +-
 actions/install/src/install.ts | 22 +
 actions/run/action.yml | 5 +-
 actions/run/dist/index.js | 50 +-
 actions/run/package-lock.json | 14 +
 actions/run/package.json | 3 +-
 actions/run/src/run.test.ts | 65 +-
 actions/run/src/run.ts | 50 +-
 actions/setup/action.yml | 78 +-
 cli/cmd/main.go | 241 +++--
 cli/pkg/earthfile.go | 7 +-
 cli/pkg/scanners/file_scanner.go | 54 +-
 cli/pkg/scanners/file_scanner_test.go | 76 +-
 docs/src/guides/languages/rust.md | 26 +-
 docs/src/guides/simulate.md | 135 +++
 docs/src/onboarding/index.md | 89 +-
 docs/src/reference/actions.md | 9 +-
 docs/src/reference/targets.md | 42 +-
 examples/postgresql/Earthfile | 8 -
 examples/rust/Earthfile | 20 +-
 34 files changed, 2295 insertions(+), 457 deletions(-)
 create mode 100644 docs/src/guides/simulate.md

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 733ff4edd..8174c36de 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -95,7 +95,7 @@ jobs:
       aws_region: ${{ inputs.aws_region }}
       ci_cli_version: ${{ inputs.ci_cli_version }}
       earthly_version: ${{ inputs.earthly_version }}
-      target: check
+      target: check check-*
     secrets:
       dockerhub_token: ${{ secrets.dockerhub_token }}
dockerhub_username: ${{ secrets.dockerhub_username }} @@ -137,7 +137,7 @@ jobs: aws_region: ${{ inputs.aws_region }} ci_cli_version: ${{ inputs.ci_cli_version }} earthly_version: ${{ inputs.earthly_version }} - target: test + target: test test-* privileged: true secrets: dockerhub_token: ${{ secrets.dockerhub_token }} diff --git a/.github/workflows/pages.yml b/.github/workflows/pages.yml index 86ac393b1..7ab3a0df0 100644 --- a/.github/workflows/pages.yml +++ b/.github/workflows/pages.yml @@ -86,7 +86,7 @@ jobs: id: build with: earthfile: ${{ inputs.earthfile }} - target: ${{ inputs.target }} + targets: ${{ inputs.target }} runner_address: ${{ secrets.earthly_runner_address }} artifact: "true" - name: Publish docs diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index 804d98bae..a9a5fc91b 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -84,6 +84,7 @@ jobs: runs-on: ubuntu-latest outputs: json: ${{ steps.check.outputs.json }} + paths: ${{ steps.check.outputs.paths }} steps: - uses: actions/checkout@v3 - name: Setup CI @@ -100,24 +101,34 @@ - name: Check for empty output id: check run: | - output=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + json=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + paths=$(echo '${{ steps.discover.outputs.paths }}' | jq -rc) - if [ "$output" == "null" ]; then + if [ "$json" == "null" ]; then echo "json=[]" >> $GITHUB_OUTPUT + echo "paths=[]" >> $GITHUB_OUTPUT else - echo "json=$output" >> $GITHUB_OUTPUT + echo "json=$json" >> $GITHUB_OUTPUT + echo "paths=$paths" >> $GITHUB_OUTPUT fi run: runs-on: ubuntu-latest needs: [discover] - if: needs.discover.outputs.json != '[]' + if: needs.discover.outputs.paths != '[]' strategy: fail-fast: false matrix: platform: - linux/amd64 - earthfile: ${{ fromJson(needs.discover.outputs.json) }} + earthfile: ${{ fromJson(needs.discover.outputs.paths) }} steps: + - name: Get filtered targets + id: get_target + run: | + targets=$(echo '${{ 
needs.discover.outputs.json }}' | jq -r --arg key '${{ matrix.earthfile }}' '.[$key][]') + echo "Found targets: $targets" + targets_with_space=$(echo $targets | tr '\n' ' ') + echo "targets=$targets_with_space" >> $GITHUB_OUTPUT - uses: actions/checkout@v3 - name: Setup CI uses: input-output-hk/catalyst-ci/actions/setup@master @@ -135,7 +146,7 @@ jobs: id: build with: earthfile: ${{ matrix.earthfile }} - target: ${{ inputs.target }} + targets: ${{ steps.get_target.outputs.targets }} platform: ${{ matrix.platform }} runner_address: ${{ secrets.earthly_runner_address }} - name: Push image diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 02caa1e73..43433fd2f 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -68,6 +68,7 @@ jobs: runs-on: ubuntu-latest outputs: json: ${{ steps.check.outputs.json }} + paths: ${{ steps.check.outputs.paths }} steps: - uses: actions/checkout@v3 - name: Setup CI @@ -84,24 +85,34 @@ - name: Check for empty output id: check run: | - output=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + json=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + paths=$(echo '${{ steps.discover.outputs.paths }}' | jq -rc) - if [ "$output" == "null" ]; then + if [ "$json" == "null" ]; then echo "json=[]" >> $GITHUB_OUTPUT + echo "paths=[]" >> $GITHUB_OUTPUT else - echo "json=$output" >> $GITHUB_OUTPUT + echo "json=$json" >> $GITHUB_OUTPUT + echo "paths=$paths" >> $GITHUB_OUTPUT fi run: runs-on: ubuntu-latest needs: [discover] - if: needs.discover.outputs.json != '[]' + if: needs.discover.outputs.paths != '[]' strategy: fail-fast: false matrix: platform: - linux/amd64 - earthfile: ${{ fromJson(needs.discover.outputs.json) }} + earthfile: ${{ fromJson(needs.discover.outputs.paths) }} steps: + - name: Get filtered targets + id: get_target + run: | + targets=$(echo '${{ needs.discover.outputs.json }}' | jq -r --arg key '${{ matrix.earthfile }}' '.[$key][]') + echo "Found targets: $targets" + targets_with_space=$(echo 
$targets | tr '\n' ' ') + echo "targets=$targets_with_space" >> $GITHUB_OUTPUT - uses: actions/checkout@v3 - name: Setup CI uses: input-output-hk/catalyst-ci/actions/setup@master @@ -118,7 +129,7 @@ jobs: id: build with: earthfile: ${{ matrix.earthfile }} - target: ${{ inputs.target }} + targets: ${{ steps.get_target.outputs.targets }} platform: ${{ matrix.platform }} runner_address: ${{ secrets.earthly_runner_address }} artifact: "true" diff --git a/.github/workflows/run.yml b/.github/workflows/run.yml index 11b7329d5..a42a6779a 100644 --- a/.github/workflows/run.yml +++ b/.github/workflows/run.yml @@ -1,67 +1,68 @@ # WARNING: If you modify this workflow, please update the documentation on: - workflow_call: - inputs: - privileged: - description: | - Whether the workflow should run earthly in privileged mode (earthly -P flag). - required: false - type: boolean - default: false - target: - description: | - The target to run. - required: true - type: string - aws_role_arn: - description: | - The ARN of the AWS role that will be assumed by the workflow. Only - required when configuring a remote Earthly runner. - required: false - type: string - aws_region: - description: | - The AWS region that will be used by the workflow. Only required when - configuring a remote Earthly runner. - required: false - type: string - ci_cli_version: - description: | - The version of the CI CLI to use. - required: false - type: string - default: latest - earthly_version: - description: The version of Earthly to use. - required: false - type: string - default: latest - secrets: - dockerhub_username: - description: The token to use for logging into the DockerHub registry. - required: false - dockerhub_token: - description: The token to use for logging into the DockerHub registry. - required: false - earthly_runner_address: - description: | - The address of the Earthly runner that will be used to build the - Earthly files. 
- required: false - earthly_runner_secret: - description: | - The ID of the AWS secret holding Earthly remote runner credentials. - This secret must contain the runner address and the necessary TLS - certificates required to authenticate with it. If omitted, a remote - Earthly runner will not be configured. - required: false + workflow_call: + inputs: + privileged: + description: | + Whether the workflow should run earthly in privileged mode (earthly -P flag). + required: false + type: boolean + default: false + target: + description: | + The target to run. + required: true + type: string + aws_role_arn: + description: | + The ARN of the AWS role that will be assumed by the workflow. Only + required when configuring a remote Earthly runner. + required: false + type: string + aws_region: + description: | + The AWS region that will be used by the workflow. Only required when + configuring a remote Earthly runner. + required: false + type: string + ci_cli_version: + description: | + The version of the CI CLI to use. + required: false + type: string + default: latest + earthly_version: + description: The version of Earthly to use. + required: false + type: string + default: latest + secrets: + dockerhub_username: + description: The token to use for logging into the DockerHub registry. + required: false + dockerhub_token: + description: The token to use for logging into the DockerHub registry. + required: false + earthly_runner_address: + description: | + The address of the Earthly runner that will be used to build the + Earthly files. + required: false + earthly_runner_secret: + description: | + The ID of the AWS secret holding Earthly remote runner credentials. + This secret must contain the runner address and the necessary TLS + certificates required to authenticate with it. If omitted, a remote + Earthly runner will not be configured. 
+ required: false jobs: discover: runs-on: ubuntu-latest outputs: json: ${{ steps.check.outputs.json }} + paths: ${{ steps.check.outputs.paths }} steps: - uses: actions/checkout@v3 - name: Setup CI @@ -78,22 +79,32 @@ jobs: - name: Check for empty output id: check run: | - output=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + json=$(echo '${{ steps.discover.outputs.json }}' | jq -rc) + paths=$(echo '${{ steps.discover.outputs.paths }}' | jq -rc) - if [ "$output" == "null" ]; then + if [ "$json" == "null" ]; then echo "json=[]" >> $GITHUB_OUTPUT + echo "paths=[]" >> $GITHUB_OUTPUT else - echo "json=$output" >> $GITHUB_OUTPUT + echo "json=$json" >> $GITHUB_OUTPUT + echo "paths=$paths" >> $GITHUB_OUTPUT fi run: runs-on: ubuntu-latest needs: [discover] - if: needs.discover.outputs.json != '[]' + if: needs.discover.outputs.paths != '[]' strategy: fail-fast: false matrix: - earthfile: ${{ fromJson(needs.discover.outputs.json) }} + earthfile: ${{ fromJson(needs.discover.outputs.paths) }} steps: + - name: Get filtered targets + id: get_target + run: | + targets=$(echo '${{ needs.discover.outputs.json }}' | jq -r --arg key '${{ matrix.earthfile }}' '.[$key][]') + echo "Found targets: $targets" + targets_with_space=$(echo $targets | tr '\n' ' ') + echo "targets=$targets_with_space" >> $GITHUB_OUTPUT - uses: actions/checkout@v3 - name: Setup CI uses: input-output-hk/catalyst-ci/actions/setup@master @@ -111,5 +122,5 @@ jobs: with: privileged: ${{ inputs.privileged }} earthfile: ${{ matrix.earthfile }} - target: ${{ inputs.target }} - runner_address: ${{ secrets.earthly_runner_address }} \ No newline at end of file + runner_address: ${{ secrets.earthly_runner_address }} + targets: ${{ steps.get_target.outputs.targets }} diff --git a/Earthfile b/Earthfile index 563135294..058fd27a5 100644 --- a/Earthfile +++ b/Earthfile @@ -3,6 +3,7 @@ VERSION --global-cache 0.7 # cspell: words livedocs sitedocs + # check-markdown can be done remotely. 
check-markdown: DO ./earthly/mdlint+CHECK @@ -23,19 +24,6 @@ check-bash: DO ./earthly/bash+SHELLCHECK --src=. - -## ----------------------------------------------------------------------------- -## -## Standard CI targets. -## -## These targets are discovered and executed automatically by CI. - -# check run all checks. -check: - BUILD +check-spelling - BUILD +check-markdown - BUILD +check-bash - # Internal: Reference to our repo root documentation used by docs builder. repo-docs: # Create artifacts of extra files we embed inside the documentation when its built. diff --git a/actions/discover/dist/index.js b/actions/discover/dist/index.js index 898c0ef08..f14ff2c57 100644 --- a/actions/discover/dist/index.js +++ b/actions/discover/dist/index.js @@ -994,6 +994,741 @@ exports.toCommandProperties = toCommandProperties; /***/ }), +/***/ 514: +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { + +"use strict"; + +var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } }); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k); + __setModuleDefault(result, mod); + return result; +}; +var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { + function adopt(value) { return value instanceof P ? 
value : new P(function (resolve) { resolve(value); }); } + return new (P || (P = Promise))(function (resolve, reject) { + function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } + function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } + function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } + step((generator = generator.apply(thisArg, _arguments || [])).next()); + }); +}; +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.getExecOutput = exports.exec = void 0; +const string_decoder_1 = __nccwpck_require__(576); +const tr = __importStar(__nccwpck_require__(159)); +/** + * Exec a command. + * Output will be streamed to the live console. + * Returns promise with return code + * + * @param commandLine command to execute (can include additional args). Must be correctly escaped. + * @param args optional arguments for tool. Escaping is handled by the lib. + * @param options optional exec options. See ExecOptions + * @returns Promise exit code + */ +function exec(commandLine, args, options) { + return __awaiter(this, void 0, void 0, function* () { + const commandArgs = tr.argStringToArray(commandLine); + if (commandArgs.length === 0) { + throw new Error(`Parameter 'commandLine' cannot be null or empty.`); + } + // Path to tool to execute should be first arg + const toolPath = commandArgs[0]; + args = commandArgs.slice(1).concat(args || []); + const runner = new tr.ToolRunner(toolPath, args, options); + return runner.exec(); + }); +} +exports.exec = exec; +/** + * Exec a command and get the output. + * Output will be streamed to the live console. + * Returns promise with the exit code and collected stdout and stderr + * + * @param commandLine command to execute (can include additional args). Must be correctly escaped. + * @param args optional arguments for tool. Escaping is handled by the lib. 
+ * @param options optional exec options. See ExecOptions + * @returns Promise exit code, stdout, and stderr + */ +function getExecOutput(commandLine, args, options) { + var _a, _b; + return __awaiter(this, void 0, void 0, function* () { + let stdout = ''; + let stderr = ''; + //Using string decoder covers the case where a mult-byte character is split + const stdoutDecoder = new string_decoder_1.StringDecoder('utf8'); + const stderrDecoder = new string_decoder_1.StringDecoder('utf8'); + const originalStdoutListener = (_a = options === null || options === void 0 ? void 0 : options.listeners) === null || _a === void 0 ? void 0 : _a.stdout; + const originalStdErrListener = (_b = options === null || options === void 0 ? void 0 : options.listeners) === null || _b === void 0 ? void 0 : _b.stderr; + const stdErrListener = (data) => { + stderr += stderrDecoder.write(data); + if (originalStdErrListener) { + originalStdErrListener(data); + } + }; + const stdOutListener = (data) => { + stdout += stdoutDecoder.write(data); + if (originalStdoutListener) { + originalStdoutListener(data); + } + }; + const listeners = Object.assign(Object.assign({}, options === null || options === void 0 ? void 0 : options.listeners), { stdout: stdOutListener, stderr: stdErrListener }); + const exitCode = yield exec(commandLine, args, Object.assign(Object.assign({}, options), { listeners })); + //flush any remaining characters + stdout += stdoutDecoder.end(); + stderr += stderrDecoder.end(); + return { + exitCode, + stdout, + stderr + }; + }); +} +exports.getExecOutput = getExecOutput; +//# sourceMappingURL=exec.js.map + +/***/ }), + +/***/ 159: +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { + +"use strict"; + +var __createBinding = (this && this.__createBinding) || (Object.create ? 
(function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } }); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k); + __setModuleDefault(result, mod); + return result; +}; +var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { + function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } + return new (P || (P = Promise))(function (resolve, reject) { + function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } + function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } + function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } + step((generator = generator.apply(thisArg, _arguments || [])).next()); + }); +}; +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.argStringToArray = exports.ToolRunner = void 0; +const os = __importStar(__nccwpck_require__(37)); +const events = __importStar(__nccwpck_require__(361)); +const child = __importStar(__nccwpck_require__(81)); +const path = __importStar(__nccwpck_require__(17)); +const io = __importStar(__nccwpck_require__(436)); +const ioUtil = __importStar(__nccwpck_require__(962)); +const timers_1 = __nccwpck_require__(512); +/* eslint-disable @typescript-eslint/unbound-method */ +const IS_WINDOWS = process.platform === 'win32'; +/* + * Class for running command line tools. Handles quoting and arg parsing in a platform agnostic way. + */ +class ToolRunner extends events.EventEmitter { + constructor(toolPath, args, options) { + super(); + if (!toolPath) { + throw new Error("Parameter 'toolPath' cannot be null or empty."); + } + this.toolPath = toolPath; + this.args = args || []; + this.options = options || {}; + } + _debug(message) { + if (this.options.listeners && this.options.listeners.debug) { + this.options.listeners.debug(message); + } + } + _getCommandString(options, noPrefix) { + const toolPath = this._getSpawnFileName(); + const args = this._getSpawnArgs(options); + let cmd = noPrefix ? 
'' : '[command]'; // omit prefix when piped to a second tool + if (IS_WINDOWS) { + // Windows + cmd file + if (this._isCmdFile()) { + cmd += toolPath; + for (const a of args) { + cmd += ` ${a}`; + } + } + // Windows + verbatim + else if (options.windowsVerbatimArguments) { + cmd += `"${toolPath}"`; + for (const a of args) { + cmd += ` ${a}`; + } + } + // Windows (regular) + else { + cmd += this._windowsQuoteCmdArg(toolPath); + for (const a of args) { + cmd += ` ${this._windowsQuoteCmdArg(a)}`; + } + } + } + else { + // OSX/Linux - this can likely be improved with some form of quoting. + // creating processes on Unix is fundamentally different than Windows. + // on Unix, execvp() takes an arg array. + cmd += toolPath; + for (const a of args) { + cmd += ` ${a}`; + } + } + return cmd; + } + _processLineBuffer(data, strBuffer, onLine) { + try { + let s = strBuffer + data.toString(); + let n = s.indexOf(os.EOL); + while (n > -1) { + const line = s.substring(0, n); + onLine(line); + // the rest of the string ... + s = s.substring(n + os.EOL.length); + n = s.indexOf(os.EOL); + } + return s; + } + catch (err) { + // streaming lines to console is best effort. Don't fail a build. + this._debug(`error processing line. Failed with error ${err}`); + return ''; + } + } + _getSpawnFileName() { + if (IS_WINDOWS) { + if (this._isCmdFile()) { + return process.env['COMSPEC'] || 'cmd.exe'; + } + } + return this.toolPath; + } + _getSpawnArgs(options) { + if (IS_WINDOWS) { + if (this._isCmdFile()) { + let argline = `/D /S /C "${this._windowsQuoteCmdArg(this.toolPath)}`; + for (const a of this.args) { + argline += ' '; + argline += options.windowsVerbatimArguments + ? 
a + : this._windowsQuoteCmdArg(a); + } + argline += '"'; + return [argline]; + } + } + return this.args; + } + _endsWith(str, end) { + return str.endsWith(end); + } + _isCmdFile() { + const upperToolPath = this.toolPath.toUpperCase(); + return (this._endsWith(upperToolPath, '.CMD') || + this._endsWith(upperToolPath, '.BAT')); + } + _windowsQuoteCmdArg(arg) { + // for .exe, apply the normal quoting rules that libuv applies + if (!this._isCmdFile()) { + return this._uvQuoteCmdArg(arg); + } + // otherwise apply quoting rules specific to the cmd.exe command line parser. + // the libuv rules are generic and are not designed specifically for cmd.exe + // command line parser. + // + // for a detailed description of the cmd.exe command line parser, refer to + // http://stackoverflow.com/questions/4094699/how-does-the-windows-command-interpreter-cmd-exe-parse-scripts/7970912#7970912 + // need quotes for empty arg + if (!arg) { + return '""'; + } + // determine whether the arg needs to be quoted + const cmdSpecialChars = [ + ' ', + '\t', + '&', + '(', + ')', + '[', + ']', + '{', + '}', + '^', + '=', + ';', + '!', + "'", + '+', + ',', + '`', + '~', + '|', + '<', + '>', + '"' + ]; + let needsQuotes = false; + for (const char of arg) { + if (cmdSpecialChars.some(x => x === char)) { + needsQuotes = true; + break; + } + } + // short-circuit if quotes not needed + if (!needsQuotes) { + return arg; + } + // the following quoting rules are very similar to the rules that by libuv applies. + // + // 1) wrap the string in quotes + // + // 2) double-up quotes - i.e. " => "" + // + // this is different from the libuv quoting rules. libuv replaces " with \", which unfortunately + // doesn't work well with a cmd.exe command line. + // + // note, replacing " with "" also works well if the arg is passed to a downstream .NET console app. 
+ // for example, the command line: + // foo.exe "myarg:""my val""" + // is parsed by a .NET console app into an arg array: + // [ "myarg:\"my val\"" ] + // which is the same end result when applying libuv quoting rules. although the actual + // command line from libuv quoting rules would look like: + // foo.exe "myarg:\"my val\"" + // + // 3) double-up slashes that precede a quote, + // e.g. hello \world => "hello \world" + // hello\"world => "hello\\""world" + // hello\\"world => "hello\\\\""world" + // hello world\ => "hello world\\" + // + // technically this is not required for a cmd.exe command line, or the batch argument parser. + // the reasons for including this as a .cmd quoting rule are: + // + // a) this is optimized for the scenario where the argument is passed from the .cmd file to an + // external program. many programs (e.g. .NET console apps) rely on the slash-doubling rule. + // + // b) it's what we've been doing previously (by deferring to node default behavior) and we + // haven't heard any complaints about that aspect. + // + // note, a weakness of the quoting rules chosen here, is that % is not escaped. in fact, % cannot be + // escaped when used on the command line directly - even though within a .cmd file % can be escaped + // by using %%. + // + // the saving grace is, on the command line, %var% is left as-is if var is not defined. this contrasts + // the line parsing rules within a .cmd file, where if var is not defined it is replaced with nothing. + // + // one option that was explored was replacing % with ^% - i.e. %var% => ^%var^%. this hack would + // often work, since it is unlikely that var^ would exist, and the ^ character is removed when the + // variable is used. the problem, however, is that ^ is not removed when %* is used to pass the args + // to an external program. + // + // an unexplored potential solution for the % escaping problem, is to create a wrapper .cmd file. + // % can be escaped within a .cmd file. 
+ let reverse = '"'; + let quoteHit = true; + for (let i = arg.length; i > 0; i--) { + // walk the string in reverse + reverse += arg[i - 1]; + if (quoteHit && arg[i - 1] === '\\') { + reverse += '\\'; // double the slash + } + else if (arg[i - 1] === '"') { + quoteHit = true; + reverse += '"'; // double the quote + } + else { + quoteHit = false; + } + } + reverse += '"'; + return reverse + .split('') + .reverse() + .join(''); + } + _uvQuoteCmdArg(arg) { + // Tool runner wraps child_process.spawn() and needs to apply the same quoting as + // Node in certain cases where the undocumented spawn option windowsVerbatimArguments + // is used. + // + // Since this function is a port of quote_cmd_arg from Node 4.x (technically, lib UV, + // see https://github.com/nodejs/node/blob/v4.x/deps/uv/src/win/process.c for details), + // pasting copyright notice from Node within this function: + // + // Copyright Joyent, Inc. and other Node contributors. All rights reserved. + // + // Permission is hereby granted, free of charge, to any person obtaining a copy + // of this software and associated documentation files (the "Software"), to + // deal in the Software without restriction, including without limitation the + // rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + // sell copies of the Software, and to permit persons to whom the Software is + // furnished to do so, subject to the following conditions: + // + // The above copyright notice and this permission notice shall be included in + // all copies or substantial portions of the Software. + // + // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + // FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + // IN THE SOFTWARE. + if (!arg) { + // Need double quotation for empty argument + return '""'; + } + if (!arg.includes(' ') && !arg.includes('\t') && !arg.includes('"')) { + // No quotation needed + return arg; + } + if (!arg.includes('"') && !arg.includes('\\')) { + // No embedded double quotes or backslashes, so I can just wrap + // quote marks around the whole thing. + return `"${arg}"`; + } + // Expected input/output: + // input : hello"world + // output: "hello\"world" + // input : hello""world + // output: "hello\"\"world" + // input : hello\world + // output: hello\world + // input : hello\\world + // output: hello\\world + // input : hello\"world + // output: "hello\\\"world" + // input : hello\\"world + // output: "hello\\\\\"world" + // input : hello world\ + // output: "hello world\\" - note the comment in libuv actually reads "hello world\" + // but it appears the comment is wrong, it should be "hello world\\" + let reverse = '"'; + let quoteHit = true; + for (let i = arg.length; i > 0; i--) { + // walk the string in reverse + reverse += arg[i - 1]; + if (quoteHit && arg[i - 1] === '\\') { + reverse += '\\'; + } + else if (arg[i - 1] === '"') { + quoteHit = true; + reverse += '\\'; + } + else { + quoteHit = false; + } + } + reverse += '"'; + return reverse + .split('') + .reverse() + .join(''); + } + _cloneExecOptions(options) { + options = options || {}; + const result = { + cwd: options.cwd || process.cwd(), + env: options.env || process.env, + silent: options.silent || false, + windowsVerbatimArguments: options.windowsVerbatimArguments || false, + failOnStdErr: options.failOnStdErr || false, + ignoreReturnCode: options.ignoreReturnCode || false, + delay: options.delay || 10000 + }; + 
result.outStream = options.outStream || process.stdout; + result.errStream = options.errStream || process.stderr; + return result; + } + _getSpawnOptions(options, toolPath) { + options = options || {}; + const result = {}; + result.cwd = options.cwd; + result.env = options.env; + result['windowsVerbatimArguments'] = + options.windowsVerbatimArguments || this._isCmdFile(); + if (options.windowsVerbatimArguments) { + result.argv0 = `"${toolPath}"`; + } + return result; + } + /** + * Exec a tool. + * Output will be streamed to the live console. + * Returns promise with return code + * + * @param tool path to tool to exec + * @param options optional exec options. See ExecOptions + * @returns number + */ + exec() { + return __awaiter(this, void 0, void 0, function* () { + // root the tool path if it is unrooted and contains relative pathing + if (!ioUtil.isRooted(this.toolPath) && + (this.toolPath.includes('/') || + (IS_WINDOWS && this.toolPath.includes('\\')))) { + // prefer options.cwd if it is specified, however options.cwd may also need to be rooted + this.toolPath = path.resolve(process.cwd(), this.options.cwd || process.cwd(), this.toolPath); + } + // if the tool is only a file name, then resolve it from the PATH + // otherwise verify it exists (add extension on Windows if necessary) + this.toolPath = yield io.which(this.toolPath, true); + return new Promise((resolve, reject) => __awaiter(this, void 0, void 0, function* () { + this._debug(`exec tool: ${this.toolPath}`); + this._debug('arguments:'); + for (const arg of this.args) { + this._debug(` ${arg}`); + } + const optionsNonNull = this._cloneExecOptions(this.options); + if (!optionsNonNull.silent && optionsNonNull.outStream) { + optionsNonNull.outStream.write(this._getCommandString(optionsNonNull) + os.EOL); + } + const state = new ExecState(optionsNonNull, this.toolPath); + state.on('debug', (message) => { + this._debug(message); + }); + if (this.options.cwd && !(yield ioUtil.exists(this.options.cwd))) { + 
return reject(new Error(`The cwd: ${this.options.cwd} does not exist!`)); + } + const fileName = this._getSpawnFileName(); + const cp = child.spawn(fileName, this._getSpawnArgs(optionsNonNull), this._getSpawnOptions(this.options, fileName)); + let stdbuffer = ''; + if (cp.stdout) { + cp.stdout.on('data', (data) => { + if (this.options.listeners && this.options.listeners.stdout) { + this.options.listeners.stdout(data); + } + if (!optionsNonNull.silent && optionsNonNull.outStream) { + optionsNonNull.outStream.write(data); + } + stdbuffer = this._processLineBuffer(data, stdbuffer, (line) => { + if (this.options.listeners && this.options.listeners.stdline) { + this.options.listeners.stdline(line); + } + }); + }); + } + let errbuffer = ''; + if (cp.stderr) { + cp.stderr.on('data', (data) => { + state.processStderr = true; + if (this.options.listeners && this.options.listeners.stderr) { + this.options.listeners.stderr(data); + } + if (!optionsNonNull.silent && + optionsNonNull.errStream && + optionsNonNull.outStream) { + const s = optionsNonNull.failOnStdErr + ? 
optionsNonNull.errStream + : optionsNonNull.outStream; + s.write(data); + } + errbuffer = this._processLineBuffer(data, errbuffer, (line) => { + if (this.options.listeners && this.options.listeners.errline) { + this.options.listeners.errline(line); + } + }); + }); + } + cp.on('error', (err) => { + state.processError = err.message; + state.processExited = true; + state.processClosed = true; + state.CheckComplete(); + }); + cp.on('exit', (code) => { + state.processExitCode = code; + state.processExited = true; + this._debug(`Exit code ${code} received from tool '${this.toolPath}'`); + state.CheckComplete(); + }); + cp.on('close', (code) => { + state.processExitCode = code; + state.processExited = true; + state.processClosed = true; + this._debug(`STDIO streams have closed for tool '${this.toolPath}'`); + state.CheckComplete(); + }); + state.on('done', (error, exitCode) => { + if (stdbuffer.length > 0) { + this.emit('stdline', stdbuffer); + } + if (errbuffer.length > 0) { + this.emit('errline', errbuffer); + } + cp.removeAllListeners(); + if (error) { + reject(error); + } + else { + resolve(exitCode); + } + }); + if (this.options.input) { + if (!cp.stdin) { + throw new Error('child process missing stdin'); + } + cp.stdin.end(this.options.input); + } + })); + }); + } +} +exports.ToolRunner = ToolRunner; +/** + * Convert an arg string to an array of args. Handles escaping + * + * @param argString string of arguments + * @returns string[] array of arguments + */ +function argStringToArray(argString) { + const args = []; + let inQuotes = false; + let escaped = false; + let arg = ''; + function append(c) { + // we only escape double quotes. 
+ if (escaped && c !== '"') { + arg += '\\'; + } + arg += c; + escaped = false; + } + for (let i = 0; i < argString.length; i++) { + const c = argString.charAt(i); + if (c === '"') { + if (!escaped) { + inQuotes = !inQuotes; + } + else { + append(c); + } + continue; + } + if (c === '\\' && escaped) { + append(c); + continue; + } + if (c === '\\' && inQuotes) { + escaped = true; + continue; + } + if (c === ' ' && !inQuotes) { + if (arg.length > 0) { + args.push(arg); + arg = ''; + } + continue; + } + append(c); + } + if (arg.length > 0) { + args.push(arg.trim()); + } + return args; +} +exports.argStringToArray = argStringToArray; +class ExecState extends events.EventEmitter { + constructor(options, toolPath) { + super(); + this.processClosed = false; // tracks whether the process has exited and stdio is closed + this.processError = ''; + this.processExitCode = 0; + this.processExited = false; // tracks whether the process has exited + this.processStderr = false; // tracks whether stderr was written to + this.delay = 10000; // 10 seconds + this.done = false; + this.timeout = null; + if (!toolPath) { + throw new Error('toolPath must not be empty'); + } + this.options = options; + this.toolPath = toolPath; + if (options.delay) { + this.delay = options.delay; + } + } + CheckComplete() { + if (this.done) { + return; + } + if (this.processClosed) { + this._setResult(); + } + else if (this.processExited) { + this.timeout = timers_1.setTimeout(ExecState.HandleTimeout, this.delay, this); + } + } + _debug(message) { + this.emit('debug', message); + } + _setResult() { + // determine whether there is an error + let error; + if (this.processExited) { + if (this.processError) { + error = new Error(`There was an error when attempting to execute the process '${this.toolPath}'. This may indicate the process failed to start. 
Error: ${this.processError}`); + } + else if (this.processExitCode !== 0 && !this.options.ignoreReturnCode) { + error = new Error(`The process '${this.toolPath}' failed with exit code ${this.processExitCode}`); + } + else if (this.processStderr && this.options.failOnStdErr) { + error = new Error(`The process '${this.toolPath}' failed because one or more lines were written to the STDERR stream`); + } + } + // clear the timeout + if (this.timeout) { + clearTimeout(this.timeout); + this.timeout = null; + } + this.done = true; + this.emit('done', error, this.processExitCode); + } + static HandleTimeout(state) { + if (state.done) { + return; + } + if (!state.processClosed && state.processExited) { + const message = `The STDIO streams did not close within ${state.delay / + 1000} seconds of the exit event from process '${state.toolPath}'. This may indicate a child process inherited the STDIO streams and has not yet exited.`; + state._debug(message); + } + state._setResult(); + } +} +//# sourceMappingURL=toolrunner.js.map + +/***/ }), + /***/ 526: /***/ (function(__unused_webpack_module, exports) { @@ -1796,6 +2531,502 @@ function isLoopbackAddress(host) { /***/ }), +/***/ 962: +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { + +"use strict"; + +var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } }); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? 
(function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k); + __setModuleDefault(result, mod); + return result; +}; +var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { + function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } + return new (P || (P = Promise))(function (resolve, reject) { + function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } + function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } + function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } + step((generator = generator.apply(thisArg, _arguments || [])).next()); + }); +}; +var _a; +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.getCmdPath = exports.tryGetExecutablePath = exports.isRooted = exports.isDirectory = exports.exists = exports.READONLY = exports.UV_FS_O_EXLOCK = exports.IS_WINDOWS = exports.unlink = exports.symlink = exports.stat = exports.rmdir = exports.rm = exports.rename = exports.readlink = exports.readdir = exports.open = exports.mkdir = exports.lstat = exports.copyFile = exports.chmod = void 0; +const fs = __importStar(__nccwpck_require__(147)); +const path = __importStar(__nccwpck_require__(17)); +_a = fs.promises +// export const {open} = 'fs' +, exports.chmod = _a.chmod, exports.copyFile = _a.copyFile, exports.lstat = _a.lstat, exports.mkdir = _a.mkdir, exports.open = _a.open, exports.readdir = _a.readdir, exports.readlink = _a.readlink, exports.rename = _a.rename, exports.rm = _a.rm, exports.rmdir = 
_a.rmdir, exports.stat = _a.stat, exports.symlink = _a.symlink, exports.unlink = _a.unlink; +// export const {open} = 'fs' +exports.IS_WINDOWS = process.platform === 'win32'; +// See https://github.com/nodejs/node/blob/d0153aee367422d0858105abec186da4dff0a0c5/deps/uv/include/uv/win.h#L691 +exports.UV_FS_O_EXLOCK = 0x10000000; +exports.READONLY = fs.constants.O_RDONLY; +function exists(fsPath) { + return __awaiter(this, void 0, void 0, function* () { + try { + yield exports.stat(fsPath); + } + catch (err) { + if (err.code === 'ENOENT') { + return false; + } + throw err; + } + return true; + }); +} +exports.exists = exists; +function isDirectory(fsPath, useStat = false) { + return __awaiter(this, void 0, void 0, function* () { + const stats = useStat ? yield exports.stat(fsPath) : yield exports.lstat(fsPath); + return stats.isDirectory(); + }); +} +exports.isDirectory = isDirectory; +/** + * On OSX/Linux, true if path starts with '/'. On Windows, true for paths like: + * \, \hello, \\hello\share, C:, and C:\hello (and corresponding alternate separator cases). + */ +function isRooted(p) { + p = normalizeSeparators(p); + if (!p) { + throw new Error('isRooted() parameter "p" cannot be empty'); + } + if (exports.IS_WINDOWS) { + return (p.startsWith('\\') || /^[A-Z]:/i.test(p) // e.g. \ or \hello or \\hello + ); // e.g. C: or C:\hello + } + return p.startsWith('/'); +} +exports.isRooted = isRooted; +/** + * Best effort attempt to determine whether a file exists and is executable. + * @param filePath file path to check + * @param extensions additional file extensions to try + * @return if file exists and is executable, returns the file path. otherwise empty string. 
+ */ +function tryGetExecutablePath(filePath, extensions) { + return __awaiter(this, void 0, void 0, function* () { + let stats = undefined; + try { + // test file exists + stats = yield exports.stat(filePath); + } + catch (err) { + if (err.code !== 'ENOENT') { + // eslint-disable-next-line no-console + console.log(`Unexpected error attempting to determine if executable file exists '${filePath}': ${err}`); + } + } + if (stats && stats.isFile()) { + if (exports.IS_WINDOWS) { + // on Windows, test for valid extension + const upperExt = path.extname(filePath).toUpperCase(); + if (extensions.some(validExt => validExt.toUpperCase() === upperExt)) { + return filePath; + } + } + else { + if (isUnixExecutable(stats)) { + return filePath; + } + } + } + // try each extension + const originalFilePath = filePath; + for (const extension of extensions) { + filePath = originalFilePath + extension; + stats = undefined; + try { + stats = yield exports.stat(filePath); + } + catch (err) { + if (err.code !== 'ENOENT') { + // eslint-disable-next-line no-console + console.log(`Unexpected error attempting to determine if executable file exists '${filePath}': ${err}`); + } + } + if (stats && stats.isFile()) { + if (exports.IS_WINDOWS) { + // preserve the case of the actual file (since an extension was appended) + try { + const directory = path.dirname(filePath); + const upperName = path.basename(filePath).toUpperCase(); + for (const actualName of yield exports.readdir(directory)) { + if (upperName === actualName.toUpperCase()) { + filePath = path.join(directory, actualName); + break; + } + } + } + catch (err) { + // eslint-disable-next-line no-console + console.log(`Unexpected error attempting to determine the actual case of the file '${filePath}': ${err}`); + } + return filePath; + } + else { + if (isUnixExecutable(stats)) { + return filePath; + } + } + } + } + return ''; + }); +} +exports.tryGetExecutablePath = tryGetExecutablePath; +function normalizeSeparators(p) { + p = p || ''; + if 
(exports.IS_WINDOWS) { + // convert slashes on Windows + p = p.replace(/\//g, '\\'); + // remove redundant slashes + return p.replace(/\\\\+/g, '\\'); + } + // remove redundant slashes + return p.replace(/\/\/+/g, '/'); +} +// on Mac/Linux, test the execute bit +// R W X R W X R W X +// 256 128 64 32 16 8 4 2 1 +function isUnixExecutable(stats) { + return ((stats.mode & 1) > 0 || + ((stats.mode & 8) > 0 && stats.gid === process.getgid()) || + ((stats.mode & 64) > 0 && stats.uid === process.getuid())); +} +// Get the path of cmd.exe in windows +function getCmdPath() { + var _a; + return (_a = process.env['COMSPEC']) !== null && _a !== void 0 ? _a : `cmd.exe`; +} +exports.getCmdPath = getCmdPath; +//# sourceMappingURL=io-util.js.map + +/***/ }), + +/***/ 436: +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { + +"use strict"; + +var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } }); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k); + __setModuleDefault(result, mod); + return result; +}; +var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { + function adopt(value) { return value instanceof P ? 
value : new P(function (resolve) { resolve(value); }); } + return new (P || (P = Promise))(function (resolve, reject) { + function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } + function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } + function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } + step((generator = generator.apply(thisArg, _arguments || [])).next()); + }); +}; +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.findInPath = exports.which = exports.mkdirP = exports.rmRF = exports.mv = exports.cp = void 0; +const assert_1 = __nccwpck_require__(491); +const path = __importStar(__nccwpck_require__(17)); +const ioUtil = __importStar(__nccwpck_require__(962)); +/** + * Copies a file or folder. + * Based off of shelljs - https://github.com/shelljs/shelljs/blob/9237f66c52e5daa40458f94f9565e18e8132f5a6/src/cp.js + * + * @param source source path + * @param dest destination path + * @param options optional. See CopyOptions. + */ +function cp(source, dest, options = {}) { + return __awaiter(this, void 0, void 0, function* () { + const { force, recursive, copySourceDirectory } = readCopyOptions(options); + const destStat = (yield ioUtil.exists(dest)) ? yield ioUtil.stat(dest) : null; + // Dest is an existing file, but not forcing + if (destStat && destStat.isFile() && !force) { + return; + } + // If dest is an existing directory, should copy inside. + const newDest = destStat && destStat.isDirectory() && copySourceDirectory + ? path.join(dest, path.basename(source)) + : dest; + if (!(yield ioUtil.exists(source))) { + throw new Error(`no such file or directory: ${source}`); + } + const sourceStat = yield ioUtil.stat(source); + if (sourceStat.isDirectory()) { + if (!recursive) { + throw new Error(`Failed to copy. 
${source} is a directory, but tried to copy without recursive flag.`); + } + else { + yield cpDirRecursive(source, newDest, 0, force); + } + } + else { + if (path.relative(source, newDest) === '') { + // a file cannot be copied to itself + throw new Error(`'${newDest}' and '${source}' are the same file`); + } + yield copyFile(source, newDest, force); + } + }); +} +exports.cp = cp; +/** + * Moves a path. + * + * @param source source path + * @param dest destination path + * @param options optional. See MoveOptions. + */ +function mv(source, dest, options = {}) { + return __awaiter(this, void 0, void 0, function* () { + if (yield ioUtil.exists(dest)) { + let destExists = true; + if (yield ioUtil.isDirectory(dest)) { + // If dest is directory copy src into dest + dest = path.join(dest, path.basename(source)); + destExists = yield ioUtil.exists(dest); + } + if (destExists) { + if (options.force == null || options.force) { + yield rmRF(dest); + } + else { + throw new Error('Destination already exists'); + } + } + } + yield mkdirP(path.dirname(dest)); + yield ioUtil.rename(source, dest); + }); +} +exports.mv = mv; +/** + * Remove a path recursively with force + * + * @param inputPath path to remove + */ +function rmRF(inputPath) { + return __awaiter(this, void 0, void 0, function* () { + if (ioUtil.IS_WINDOWS) { + // Check for invalid characters + // https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file + if (/[*"<>|]/.test(inputPath)) { + throw new Error('File path must not contain `*`, `"`, `<`, `>` or `|` on Windows'); + } + } + try { + // note if path does not exist, error is silent + yield ioUtil.rm(inputPath, { + force: true, + maxRetries: 3, + recursive: true, + retryDelay: 300 + }); + } + catch (err) { + throw new Error(`File was unable to be removed ${err}`); + } + }); +} +exports.rmRF = rmRF; +/** + * Make a directory. 
Creates the full path with folders in between + * Will throw if it fails + * + * @param fsPath path to create + * @returns Promise + */ +function mkdirP(fsPath) { + return __awaiter(this, void 0, void 0, function* () { + assert_1.ok(fsPath, 'a path argument must be provided'); + yield ioUtil.mkdir(fsPath, { recursive: true }); + }); +} +exports.mkdirP = mkdirP; +/** + * Returns path of a tool had the tool actually been invoked. Resolves via paths. + * If you check and the tool does not exist, it will throw. + * + * @param tool name of the tool + * @param check whether to check if tool exists + * @returns Promise path to tool + */ +function which(tool, check) { + return __awaiter(this, void 0, void 0, function* () { + if (!tool) { + throw new Error("parameter 'tool' is required"); + } + // recursive when check=true + if (check) { + const result = yield which(tool, false); + if (!result) { + if (ioUtil.IS_WINDOWS) { + throw new Error(`Unable to locate executable file: ${tool}. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also verify the file has a valid extension for an executable file.`); + } + else { + throw new Error(`Unable to locate executable file: ${tool}. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.`); + } + } + return result; + } + const matches = yield findInPath(tool); + if (matches && matches.length > 0) { + return matches[0]; + } + return ''; + }); +} +exports.which = which; +/** + * Returns a list of all occurrences of the given tool on the system path. 
+ * + * @returns Promise the paths of the tool + */ +function findInPath(tool) { + return __awaiter(this, void 0, void 0, function* () { + if (!tool) { + throw new Error("parameter 'tool' is required"); + } + // build the list of extensions to try + const extensions = []; + if (ioUtil.IS_WINDOWS && process.env['PATHEXT']) { + for (const extension of process.env['PATHEXT'].split(path.delimiter)) { + if (extension) { + extensions.push(extension); + } + } + } + // if it's rooted, return it if exists. otherwise return empty. + if (ioUtil.isRooted(tool)) { + const filePath = yield ioUtil.tryGetExecutablePath(tool, extensions); + if (filePath) { + return [filePath]; + } + return []; + } + // if any path separators, return empty + if (tool.includes(path.sep)) { + return []; + } + // build the list of directories + // + // Note, technically "where" checks the current directory on Windows. From a toolkit perspective, + // it feels like we should not do this. Checking the current directory seems like more of a use + // case of a shell, and the which() function exposed by the toolkit should strive for consistency + // across platforms. + const directories = []; + if (process.env.PATH) { + for (const p of process.env.PATH.split(path.delimiter)) { + if (p) { + directories.push(p); + } + } + } + // find all matches + const matches = []; + for (const directory of directories) { + const filePath = yield ioUtil.tryGetExecutablePath(path.join(directory, tool), extensions); + if (filePath) { + matches.push(filePath); + } + } + return matches; + }); +} +exports.findInPath = findInPath; +function readCopyOptions(options) { + const force = options.force == null ? true : options.force; + const recursive = Boolean(options.recursive); + const copySourceDirectory = options.copySourceDirectory == null + ? 
true + : Boolean(options.copySourceDirectory); + return { force, recursive, copySourceDirectory }; +} +function cpDirRecursive(sourceDir, destDir, currentDepth, force) { + return __awaiter(this, void 0, void 0, function* () { + // Ensure there is not a run away recursive copy + if (currentDepth >= 255) + return; + currentDepth++; + yield mkdirP(destDir); + const files = yield ioUtil.readdir(sourceDir); + for (const fileName of files) { + const srcFile = `${sourceDir}/${fileName}`; + const destFile = `${destDir}/${fileName}`; + const srcFileStat = yield ioUtil.lstat(srcFile); + if (srcFileStat.isDirectory()) { + // Recurse + yield cpDirRecursive(srcFile, destFile, currentDepth, force); + } + else { + yield copyFile(srcFile, destFile, force); + } + } + // Change the mode for the newly created directory + yield ioUtil.chmod(destDir, (yield ioUtil.stat(sourceDir)).mode); + }); +} +// Buffered file copy +function copyFile(srcFile, destFile, force) { + return __awaiter(this, void 0, void 0, function* () { + if ((yield ioUtil.lstat(srcFile)).isSymbolicLink()) { + // unlink/re-link it + try { + yield ioUtil.lstat(destFile); + yield ioUtil.unlink(destFile); + } + catch (e) { + // Try to override file permission + if (e.code === 'EPERM') { + yield ioUtil.chmod(destFile, '0666'); + yield ioUtil.unlink(destFile); + } + // other errors = it doesn't exist, no work to do + } + // Copy over symlink + const symlinkFull = yield ioUtil.readlink(srcFile); + yield ioUtil.symlink(symlinkFull, destFile, ioUtil.IS_WINDOWS ? 
'junction' : null);
+        }
+        else if (!(yield ioUtil.exists(destFile)) || force) {
+            yield ioUtil.copyFile(srcFile, destFile);
+        }
+    });
+}
+//# sourceMappingURL=io.js.map
+
+/***/ }),
+
 /***/ 29:
 /***/ ((__unused_webpack_module, exports, __nccwpck_require__) => {

@@ -3000,6 +4231,14 @@ module.exports = require("assert");
 
 /***/ }),
 
+/***/ 81:
+/***/ ((module) => {
+
+"use strict";
+module.exports = require("child_process");
+
+/***/ }),
+
 /***/ 113:
 /***/ ((module) => {

@@ -3064,6 +4303,22 @@ module.exports = require("path");
 
 /***/ }),
 
+/***/ 576:
+/***/ ((module) => {
+
+"use strict";
+module.exports = require("string_decoder");
+
+/***/ }),
+
+/***/ 512:
+/***/ ((module) => {
+
+"use strict";
+module.exports = require("timers");
+
+/***/ }),
+
 /***/ 404:
 /***/ ((module) => {

@@ -3138,8 +4393,8 @@ __nccwpck_require__.r(__webpack_exports__);
 // EXTERNAL MODULE: ./node_modules/@actions/core/lib/core.js
 var core = __nccwpck_require__(186);
-;// CONCATENATED MODULE: external "child_process"
-const external_child_process_namespaceObject = require("child_process");
+// EXTERNAL MODULE: ./node_modules/@actions/exec/lib/exec.js
+var exec = __nccwpck_require__(514);
 // EXTERNAL MODULE: ./node_modules/shell-quote/index.js
 var shell_quote = __nccwpck_require__(29);
 ;// CONCATENATED MODULE: ./src/discover.ts
@@ -3148,16 +4403,27 @@ var shell_quote = __nccwpck_require__(29);
 async function run() {
     try {
-        const parse = core.getBooleanInput('parse_images');
         const paths = (0,shell_quote.quote)([core.getInput('paths')]);
         const targets = core.getInput('targets');
-        const flags = parse ?
['-ji'] : ['-j'];
+        const flags = ['-j'];
         if (targets.trim() !== '') {
             flags.push(...targets.split(' ').map(t => `-t ${t}`));
         }
         const command = ['ci', 'scan', ...flags, paths].filter(Boolean).join(' ');
         core.info(`Running command: ${command}`);
-        core.setOutput('json', await execCommand(command));
+        const { stdout } = await (0,exec.getExecOutput)(command);
+        // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
+        const parsedData = JSON.parse(stdout);
+        const pathsArray = [];
+        for (const key in parsedData) {
+            if (Object.prototype.hasOwnProperty.call(parsedData, key)) {
+                pathsArray.push(key);
+            }
+        }
+        // JSON mapping each path to its filtered list of targets that need to be run
+        core.setOutput('json', stdout);
+        // List of paths that should be run
+        core.setOutput('paths', pathsArray);
     }
     catch (error) {
         if (error instanceof Error) {
@@ -3168,18 +4434,6 @@
         }
     }
 }
-async function execCommand(command) {
-    return new Promise((resolve, reject) => {
-        (0,external_child_process_namespaceObject.exec)(command, (error, stdout, stderr) => {
-            if (error || stderr) {
-                reject(new Error(error ? error.message : stderr));
-            }
-            else {
-                resolve(stdout);
-            }
-        });
-    });
-}
 
 ;// CONCATENATED MODULE: ./src/index.ts
 
diff --git a/actions/discover/dist/licenses.txt b/actions/discover/dist/licenses.txt
index d9e7c98a5..51cb44c07 100644
--- a/actions/discover/dist/licenses.txt
+++ b/actions/discover/dist/licenses.txt
@@ -10,6 +10,18 @@ The above copyright notice and this permission notice shall be included in all c
 
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +@actions/exec +MIT +The MIT License (MIT) + +Copyright 2019 GitHub + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + @actions/http-client MIT Actions Http Client for Node.js @@ -35,6 +47,18 @@ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+@actions/io +MIT +The MIT License (MIT) + +Copyright 2019 GitHub + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ shell-quote MIT The MIT License diff --git a/actions/discover/package-lock.json b/actions/discover/package-lock.json index 6dc37b21a..dfc2e7683 100644 --- a/actions/discover/package-lock.json +++ b/actions/discover/package-lock.json @@ -6,6 +6,7 @@ "": { "dependencies": { "@actions/core": "^1.10.0", + "@actions/exec": "^1.1.1", "@actions/github": "^5.1.1", "shell-quote": "^1.8.1" }, @@ -43,6 +44,14 @@ "uuid": "^8.3.2" } }, + "node_modules/@actions/exec": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@actions/exec/-/exec-1.1.1.tgz", + "integrity": "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w==", + "dependencies": { + "@actions/io": "^1.0.1" + } + }, "node_modules/@actions/github": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/@actions/github/-/github-5.1.1.tgz", @@ -62,6 +71,11 @@ "tunnel": "^0.0.6" } }, + "node_modules/@actions/io": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/@actions/io/-/io-1.1.3.tgz", + "integrity": "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q==" + }, "node_modules/@ampproject/remapping": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/@ampproject/remapping/-/remapping-2.2.1.tgz", diff --git a/actions/discover/package.json b/actions/discover/package.json index 52e28dcb1..f9f10e179 100644 --- a/actions/discover/package.json +++ b/actions/discover/package.json @@ -8,6 +8,7 @@ }, "dependencies": { "@actions/core": "^1.10.0", + "@actions/exec": "^1.1.1", "@actions/github": "^5.1.1", "shell-quote": "^1.8.1" }, @@ -62,4 +63,4 @@ "htmlWhitespaceSensitivity": "css", "endOfLine": "lf" } -} \ No newline at end of file +} diff --git a/actions/discover/src/discover.test.ts b/actions/discover/src/discover.test.ts index 6ab44825a..213922581 100644 --- a/actions/discover/src/discover.test.ts +++ b/actions/discover/src/discover.test.ts @@ -1,76 +1,82 @@ import * as core from '@actions/core' 
-import { exec } from 'child_process' +import { getExecOutput } from '@actions/exec' import { run } from './discover' jest.mock('@actions/core', () => ({ - getBooleanInput: jest.fn(), getInput: jest.fn(), - setFailed: jest.fn(), setOutput: jest.fn(), + setFailed: jest.fn(), info: jest.fn() })) -jest.mock('child_process', () => ({ - exec: jest.fn( - ( - _, - callback: (_error: Error | null, _stdout: string, _stderr: string) => void - ) => { - callback(null, 'mocked output', '') - } - ) +jest.mock('@actions/exec', () => ({ + getExecOutput: jest.fn() })) +const mockGetExecOutput = getExecOutput as jest.Mock describe('Discover Action', () => { afterEach(() => { jest.clearAllMocks() }) - describe('when testing running the ci command', () => { - const testCases = [ - { - parseImages: true, - paths: 'path1 path2', - targets: 'target1 target2', - expectedCommand: "ci scan -ji -t target1 -t target2 'path1 path2'" - }, - { - parseImages: false, - paths: 'path1', - targets: 'target1', - expectedCommand: 'ci scan -j -t target1 path1' - }, - { - parseImages: false, - paths: '.', - targets: '', - expectedCommand: 'ci scan -j .' 
- } - ] - - // actions mocks - const getBooleanInputMock = core.getBooleanInput as jest.Mock - const getInputMock = core.getInput as jest.Mock + const testCases = [ + { + paths: 'path1 path2', + targets: 'target1 target2', + expectedCommand: "ci scan -j -t target1 -t target2 'path1 path2'", + expectedJson: '{"/path1": ["target1"], "/path2": ["target2"]}', + expectedPaths: ['/path1', '/path2'] + }, + { + paths: 'path1', + targets: 'target1', + expectedCommand: 'ci scan -j -t target1 path1', + expectedJson: '{"/path1": ["target1"]}', + expectedPaths: ['/path1'] + }, + { + paths: '.', + targets: '', + expectedCommand: 'ci scan -j .', + expectedJson: '{}', + expectedPaths: [] + }, + { + paths: '.', + targets: 'target target-*', + expectedCommand: 'ci scan -j -t target -t target-* .', + expectedJson: + '{"/path1": ["target"], "/path2": ["target", "target-1", "target-2"], "/path3": ["target-1", "target-2"]}', + expectedPaths: ['/path1', '/path2', '/path3'] + } + ] - it.each(testCases)( - 'should execute the correct command', - async ({ parseImages, paths, targets, expectedCommand }) => { - getBooleanInputMock.mockReturnValue(parseImages) - getInputMock.mockImplementation((name: string) => { - switch (name) { - case 'paths': - return paths - case 'targets': - return targets - default: - return '' - } - }) + it.each(testCases)( + 'should execute the correct command', + async ({ + paths, + targets, + expectedCommand, + expectedJson, + expectedPaths + }) => { + const getInputMock = core.getInput as jest.Mock + getInputMock.mockImplementation((name: string) => { + switch (name) { + case 'paths': + return paths + case 'targets': + return targets + default: + return '' + } + }) - await run() + mockGetExecOutput.mockResolvedValueOnce({ stdout: expectedJson }) + await run() - expect(exec).toHaveBeenCalledWith(expectedCommand, expect.anything()) - expect(core.setOutput).toHaveBeenCalledWith('json', 'mocked output') - } - ) - }) + 
expect(getExecOutput).toHaveBeenCalledWith(expectedCommand) + expect(core.setOutput).toHaveBeenCalledWith('json', expectedJson) + expect(core.setOutput).toHaveBeenCalledWith('paths', expectedPaths) + } + ) }) diff --git a/actions/discover/src/discover.ts b/actions/discover/src/discover.ts index 287870a2c..e240952ec 100644 --- a/actions/discover/src/discover.ts +++ b/actions/discover/src/discover.ts @@ -1,21 +1,36 @@ import * as core from '@actions/core' -import { exec } from 'child_process' +import { getExecOutput } from '@actions/exec' import { quote } from 'shell-quote' export async function run(): Promise<void> { try { - const parse = core.getBooleanInput('parse_images') const paths = quote([core.getInput('paths')]) const targets = core.getInput('targets') - const flags = parse ? ['-ji'] : ['-j'] + const flags = ['-j'] if (targets.trim() !== '') { flags.push(...targets.split(' ').map(t => `-t ${t}`)) } const command = ['ci', 'scan', ...flags, paths].filter(Boolean).join(' ') core.info(`Running command: ${command}`) - core.setOutput('json', await execCommand(command)) + const { stdout } = await getExecOutput(command) + + // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment + const parsedData = JSON.parse(stdout) + + const pathsArray = [] + + for (const key in parsedData) { + if (Object.prototype.hasOwnProperty.call(parsedData, key)) { + pathsArray.push(key) + } + } + + // JSON mapping each path to its filtered list of targets that need to be run + core.setOutput('json', stdout) + // List of paths that should be run + core.setOutput('paths', pathsArray) } catch (error) { if (error instanceof Error) { core.setFailed(error.message) @@ -24,15 +39,3 @@ export async function run(): Promise<void> { } } } - -async function execCommand(command: string): Promise<string> { - return new Promise((resolve, reject) => { - exec(command, (error, stdout, stderr) => { - if (error || stderr) { - reject(new Error(error ?
error.message : stderr)) - } else { - resolve(stdout) - } - }) - }) -} diff --git a/actions/install/action.yml b/actions/install/action.yml index fb19fea05..556580f91 100644 --- a/actions/install/action.yml +++ b/actions/install/action.yml @@ -9,6 +9,10 @@ inputs: description: The version of the Catalyst-CI CLI to install required: false default: latest + local: + description: Build and use the local version of the Catalyst-CI CLI + required: false + default: 'false' runs: using: node20 main: dist/index.js diff --git a/actions/install/dist/index.js b/actions/install/dist/index.js index 8c9bdecc1..839280f5b 100644 --- a/actions/install/dist/index.js +++ b/actions/install/dist/index.js @@ -13715,10 +13715,13 @@ var core = __nccwpck_require__(2186); var tool_cache = __nccwpck_require__(7784); // EXTERNAL MODULE: ./node_modules/@actions/github/lib/github.js var github = __nccwpck_require__(5438); +// EXTERNAL MODULE: ./node_modules/@actions/exec/lib/exec.js +var exec = __nccwpck_require__(1514); ;// CONCATENATED MODULE: ./src/install.ts + const assetName = 'cli-linux-amd64.tar.gz'; const repoOwner = 'input-output-hk'; const repoName = 'catalyst-ci'; @@ -13730,6 +13733,21 @@ async function run(platform = process.platform) { try { const token = core.getInput('token'); const version = core.getInput('version'); + const local = core.getInput('local'); + // Local flag is set to true + if (local === 'true') { + core.info('Building ci locally'); + // go into the cli folder + // build the ci binary and move it to /usr/local/bin/ci + await (0,exec.exec)('go', [ + 'build', + '-ldflags=-extldflags=-static', + '-o', + '/usr/local/bin/ci', + 'cmd/main.go' + ], { cwd: 'cli/' }); + return; + } if (version !== 'latest' && !isSemVer(version)) { core.setFailed('Invalid version'); return; diff --git a/actions/install/src/install.test.ts b/actions/install/src/install.test.ts index dcbc9bd52..e9cb82647 100644 --- a/actions/install/src/install.test.ts +++ b/actions/install/src/install.test.ts @@ -2,6
+2,7 @@ import * as core from '@actions/core' import * as tc from '@actions/tool-cache' import * as github from '@actions/github' import { run } from './install' +import { exec } from '@actions/exec' jest.mock('@actions/core', () => { return { @@ -17,10 +18,14 @@ jest.mock('@actions/tool-cache', () => ({ jest.mock('@actions/github', () => ({ getOctokit: jest.fn() })) +jest.mock('@actions/exec', () => ({ + exec: jest.fn() +})) describe('Setup Action', () => { const token = 'token' const version = '1.0.0' + const local = 'false' // actions core mocks const getInputMock = core.getInput as jest.Mock @@ -45,6 +50,38 @@ describe('Setup Action', () => { }) describe('when the platform is linux', () => { + const platform = 'linux' + + describe('when local flag is set', () => { + beforeAll(() => { + getInputMock.mockImplementation((name: string) => { + switch (name) { + case 'token': + return token + case 'version': + return version + case 'local': + return 'true' + default: + throw new Error(`Unknown input ${name}`) + } + }) + }) + it('should call local ci build command', async () => { + await run(platform) + expect(exec).toHaveBeenCalledWith( + 'go', + [ + 'build', + '-ldflags=-extldflags=-static', + '-o', + '/usr/local/bin/ci', + 'cmd/main.go' + ], + { cwd: 'cli/' } + ) + }) + }) describe('when the version is invalid', () => { beforeAll(() => { getInputMock.mockImplementation((name: string) => { @@ -53,6 +90,8 @@ describe('Setup Action', () => { return token case 'version': return 'invalid' + case 'local': + return local default: throw new Error(`Unknown input ${name}`) } @@ -60,7 +99,7 @@ describe('Setup Action', () => { }) it('should fail', async () => { - await run() + await run(platform) expect(setFailedMock).toHaveBeenCalledWith('Invalid version') }) }) @@ -73,6 +112,8 @@ describe('Setup Action', () => { return token case 'version': return version + case 'local': + return local default: throw new Error(`Unknown input ${name}`) } @@ -93,7 +134,7 @@ describe('Setup 
Action', () => { }) it('should fail', async () => { - await run() + await run(platform) expect(setFailedMock).toHaveBeenCalledWith( `Version ${version} not found` ) @@ -121,7 +162,7 @@ describe('Setup Action', () => { }) it('should fail', async () => { - await run() + await run(platform) expect(setFailedMock).toHaveBeenCalledWith( `Asset for version v${version} not found` ) @@ -165,7 +206,7 @@ describe('Setup Action', () => { }) it('should download the asset', async () => { - await run() + await run(platform) expect(downloadToolMock).toHaveBeenCalledWith( 'https://example.com' @@ -174,7 +215,7 @@ describe('Setup Action', () => { it('should extract the asset', async () => { downloadToolMock.mockResolvedValue('/tmp/file.tar.gz') - await run() + await run(platform) expect(extractTarMock).toHaveBeenCalledWith( '/tmp/file.tar.gz', @@ -188,7 +229,7 @@ describe('Setup Action', () => { }) it('should fail', async () => { - await run() + await run(platform) expect(setFailedMock).toHaveBeenCalledWith('Download error') }) }) @@ -201,6 +242,8 @@ describe('Setup Action', () => { return token case 'version': return 'latest' + case 'local': + return local default: throw new Error(`Unknown input ${name}`) } @@ -208,7 +251,7 @@ describe('Setup Action', () => { }) it('should download the latest version', async () => { - await run() + await run(platform) expect(downloadToolMock).toHaveBeenCalledWith( 'https://example2.com' ) diff --git a/actions/install/src/install.ts b/actions/install/src/install.ts index 875ce2e74..4a781a6a7 100644 --- a/actions/install/src/install.ts +++ b/actions/install/src/install.ts @@ -1,6 +1,7 @@ import * as core from '@actions/core' import * as tc from '@actions/tool-cache' import * as github from '@actions/github' +import { exec } from '@actions/exec' const assetName = 'cli-linux-amd64.tar.gz' const repoOwner = 'input-output-hk' @@ -17,6 +18,27 @@ export async function run( try { const token = core.getInput('token') const version = core.getInput('version') 
+ const local = core.getInput('local') + + // Local flag is set to true + if (local === 'true') { + core.info('Building ci locally') + // go into the cli folder + // build the ci binary and move it to /usr/local/bin/ci + await exec( + 'go', + [ + 'build', + '-ldflags=-extldflags=-static', + '-o', + '/usr/local/bin/ci', + 'cmd/main.go' + ], + { cwd: 'cli/' } + ) + + return + } if (version !== 'latest' && !isSemVer(version)) { core.setFailed('Invalid version') diff --git a/actions/run/action.yml b/actions/run/action.yml index 7e07c53c7..2cac23c7e 100644 --- a/actions/run/action.yml +++ b/actions/run/action.yml @@ -29,12 +29,13 @@ inputs: description: The port to use for connecting to the remote runner required: false default: "8372" - target: - description: The name of the target to run + targets: + description: A space-separated list of targets to run required: true target_flags: description: Additional flags to pass to the target required: false + runs: using: "node20" main: "dist/index.js" diff --git a/actions/run/dist/index.js b/actions/run/dist/index.js index 430f0c4e7..6c582801a 100644 --- a/actions/run/dist/index.js +++ b/actions/run/dist/index.js @@ -2885,10 +2885,11 @@ async function run() { const privileged = core.getBooleanInput('privileged'); const runnerAddress = core.getInput('runner_address'); const runnerPort = core.getInput('runner_port'); - const target = core.getInput('target'); const targetFlags = core.getInput('target_flags'); + const targets = core.getInput('targets'); const command = 'earthly'; const args = []; + const targetsArgs = []; if (privileged) { args.push('-P'); } @@ -2901,26 +2902,39 @@ async function run() { if (flags) { args.push(...flags.split(' ')); } - if (artifact) { - args.push('--artifact', `${earthfile}+${target}/`, `${artifactPath}`); - } - else { - args.push(`${earthfile}+${target}`); - } if (targetFlags) { args.push(...targetFlags.split(' ')); } - core.info(`Running command: ${command} ${args.join(' ')}`); - const output = await
spawnCommand(command, args); - const imageOutput = parseImage(output); - if (imageOutput) { - core.info(`Found image: ${imageOutput}`); - core.setOutput('image', imageOutput); - } - const artifactOutput = external_path_.join(earthfile, parseArtifact(output)); - if (artifactOutput !== earthfile) { - core.info(`Found artifact: ${artifactOutput}`); - core.setOutput('artifact', artifactOutput); + core.info(`Filtered targets >> ${targets}`); + targets.split(' ').map(tg => { + // Get the filtered targets associated with the target pattern and Earthfile. + core.info(`Pushing target ${tg}`); + targetsArgs.push(`${earthfile}+${tg}`); + }); + // Run each target command in a separate process. + for (const t of targetsArgs) { + core.info(`Running target: ${t}`); + const argsSpawn = [...args]; + // Artifact is set + if (artifact) { + core.info(`Pushing target ${t} with artifact tag`); + argsSpawn.push('--artifact', `${t}/`, `${artifactPath}`); + } + else { + argsSpawn.push(t); + } + core.info(`Running command: ${command} ${argsSpawn.join(' ')}`); + const output = await spawnCommand(command, argsSpawn); + const imageOutput = parseImage(output); + if (imageOutput) { + core.info(`Found image: ${imageOutput}`); + core.setOutput('image', imageOutput); + } + const artifactOutput = external_path_.join(earthfile, parseArtifact(output)); + if (artifactOutput !== earthfile) { + core.info(`Found artifact: ${artifactOutput}`); + core.setOutput('artifact', artifactOutput); + } } } function parseArtifact(output) { diff --git a/actions/run/package-lock.json b/actions/run/package-lock.json index f1f5ab738..c917e6aca 100644 --- a/actions/run/package-lock.json +++ b/actions/run/package-lock.json @@ -6,6 +6,7 @@ "": { "dependencies": { "@actions/core": "^1.10.0", + "@actions/exec": "^1.1.1", "@actions/github": "^5.1.1", "shell-quote": "^1.8.1" }, @@ -43,6 +44,14 @@ "uuid": "^8.3.2" } }, + "node_modules/@actions/exec": { + "version": "1.1.1", + "resolved":
"https://registry.npmjs.org/@actions/exec/-/exec-1.1.1.tgz", + "integrity": "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w==", + "dependencies": { + "@actions/io": "^1.0.1" + } + }, "node_modules/@actions/github": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/@actions/github/-/github-5.1.1.tgz", @@ -62,6 +71,11 @@ "tunnel": "^0.0.6" } }, + "node_modules/@actions/io": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/@actions/io/-/io-1.1.3.tgz", + "integrity": "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q==" + }, "node_modules/@ampproject/remapping": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/@ampproject/remapping/-/remapping-2.2.1.tgz", diff --git a/actions/run/package.json b/actions/run/package.json index 52e28dcb1..f9f10e179 100644 --- a/actions/run/package.json +++ b/actions/run/package.json @@ -8,6 +8,7 @@ }, "dependencies": { "@actions/core": "^1.10.0", + "@actions/exec": "^1.1.1", "@actions/github": "^5.1.1", "shell-quote": "^1.8.1" }, @@ -62,4 +63,4 @@ "htmlWhitespaceSensitivity": "css", "endOfLine": "lf" } -} \ No newline at end of file +} diff --git a/actions/run/src/run.test.ts b/actions/run/src/run.test.ts index 3e2e4b2a1..64fd351ec 100644 --- a/actions/run/src/run.test.ts +++ b/actions/run/src/run.test.ts @@ -31,9 +31,9 @@ describe('Run Action', () => { output: '', runnerAddress: '', runnerPort: '', - target: 'target', + targets: 'target', targetFlags: '--flag1 test -f2 test2', - command: ['./earthfile+target', '--flag1', 'test', '-f2', 'test2'], + command: [['--flag1', 'test', '-f2', 'test2', './earthfile+target']], imageOutput: '', artifactOutput: '' }, @@ -47,9 +47,9 @@ describe('Run Action', () => { output: 'Artifact +target/artifact output as out\n', runnerAddress: '', runnerPort: '', - target: 'target', + targets: 'target', targetFlags: '', - command: ['--test', '--artifact', './earthfile+target/', 
'out'], + command: [['--test', '--artifact', './earthfile+target/', 'out']], imageOutput: '', artifactOutput: 'earthfile/out' }, @@ -63,12 +63,10 @@ describe('Run Action', () => { output: '', runnerAddress: 'localhost', runnerPort: '8372', - target: 'target', + targets: 'target', targetFlags: '', command: [ - '--buildkit-host', - 'tcp://localhost:8372', - './earthfile+target' + ['--buildkit-host', 'tcp://localhost:8372', './earthfile+target'] ], imageOutput: '', artifactOutput: '' @@ -83,20 +81,41 @@ describe('Run Action', () => { output: 'Image +docker output as image1:tag1\n', runnerAddress: '', runnerPort: '', - target: 'target', + targets: 'target', targetFlags: '', command: [ - '-P', - '--platform', - 'linux/amd64', - '--flag1', - 'test', - '-f2', - 'test2', - './earthfile+target' + [ + '-P', + '--platform', + 'linux/amd64', + '--flag1', + 'test', + '-f2', + 'test2', + './earthfile+target' + ] ], imageOutput: 'image1:tag1', artifactOutput: '' + }, + { + artifact: '', + artifactPath: '', + earthfile: './targets/earthfile', + flags: '', + platform: 'linux/amd64', + privileged: 'true', + output: '', + runnerAddress: '', + runnerPort: '', + targets: 'target target-test', + targetFlags: '', + command: [ + ['-P', '--platform', 'linux/amd64', './targets/earthfile+target'], + ['-P', '--platform', 'linux/amd64', './targets/earthfile+target-test'] + ], + imageOutput: '', + artifactOutput: '' } ])( `should execute the correct command`, @@ -110,7 +129,7 @@ describe('Run Action', () => { output, runnerAddress, runnerPort, - target, + targets, targetFlags, command, imageOutput, @@ -136,8 +155,8 @@ describe('Run Action', () => { return runnerAddress case 'runner_port': return runnerPort - case 'target': - return target + case 'targets': + return targets case 'target_flags': return targetFlags default: @@ -161,8 +180,10 @@ describe('Run Action', () => { await run() - expect(spawn).toHaveBeenCalledTimes(1) - expect(spawn).toHaveBeenCalledWith('earthly', command) + 
expect(spawn).toHaveBeenCalledTimes(command.length) + command.map(cmd => { + expect(spawn).toHaveBeenCalledWith('earthly', cmd) + }) expect(stdoutSpy).toHaveBeenCalledWith('stdout') expect(stderrSpy).toHaveBeenCalledWith(output) diff --git a/actions/run/src/run.ts b/actions/run/src/run.ts index bceadf5d5..0b2d3e11d 100644 --- a/actions/run/src/run.ts +++ b/actions/run/src/run.ts @@ -11,11 +11,12 @@ export async function run(): Promise<void> { const privileged = core.getBooleanInput('privileged') const runnerAddress = core.getInput('runner_address') const runnerPort = core.getInput('runner_port') - const target = core.getInput('target') const targetFlags = core.getInput('target_flags') + const targets = core.getInput('targets') const command = 'earthly' const args: string[] = [] + const targetsArgs: string[] = [] if (privileged) { args.push('-P') @@ -33,29 +34,42 @@ export async function run(): Promise<void> { args.push(...flags.split(' ')) } - if (artifact) { - args.push('--artifact', `${earthfile}+${target}/`, `${artifactPath}`) - } else { - args.push(`${earthfile}+${target}`) - } - if (targetFlags) { args.push(...targetFlags.split(' ')) } - core.info(`Running command: ${command} ${args.join(' ')}`) - const output = await spawnCommand(command, args) + core.info(`Filtered targets >> ${targets}`) - const imageOutput = parseImage(output) - if (imageOutput) { - core.info(`Found image: ${imageOutput}`) - core.setOutput('image', imageOutput) - } + targets.split(' ').map(tg => { + // Get the filtered targets associated with the target pattern and Earthfile. + core.info(`Pushing target ${tg}`) + targetsArgs.push(`${earthfile}+${tg}`) + }) - const artifactOutput = path.join(earthfile, parseArtifact(output)) - if (artifactOutput !== earthfile) { - core.info(`Found artifact: ${artifactOutput}`) - core.setOutput('artifact', artifactOutput) + // Run each target command in a separate process.
+ for (const t of targetsArgs) { + core.info(`Running target: ${t}`) + const argsSpawn = [...args] + // Artifact is set + if (artifact) { + core.info(`Pushing target ${t} with artifact tag`) + argsSpawn.push('--artifact', `${t}/`, `${artifactPath}`) + } else { + argsSpawn.push(t) + } + core.info(`Running command: ${command} ${argsSpawn.join(' ')}`) + const output = await spawnCommand(command, argsSpawn) + const imageOutput = parseImage(output) + if (imageOutput) { + core.info(`Found image: ${imageOutput}`) + core.setOutput('image', imageOutput) + } + + const artifactOutput = path.join(earthfile, parseArtifact(output)) + if (artifactOutput !== earthfile) { + core.info(`Found artifact: ${artifactOutput}`) + core.setOutput('artifact', artifactOutput) + } } } diff --git a/actions/setup/action.yml b/actions/setup/action.yml index 6dc9a7903..f4e7bd4e4 100644 --- a/actions/setup/action.yml +++ b/actions/setup/action.yml @@ -60,42 +60,42 @@ inputs: runs: using: composite steps: - - name: Install Earthly - uses: earthly/actions-setup@v1 - if: ${{ inputs.earthly_skip_install != 'true' }} - with: - version: ${{ inputs.earthly_version }} - - name: Install CI CLI - uses: input-output-hk/catalyst-ci/actions/install@master - if: ${{ inputs.cli_skip_install != 'true' }} - with: - version: ${{ inputs.cli_version }} - - name: Configure AWS Credentials - uses: aws-actions/configure-aws-credentials@v4 - if: ${{ inputs.aws_region != '' && inputs.aws_role_arn != '' }} - with: - role-to-assume: ${{ inputs.aws_role_arn }} - aws-region: ${{ inputs.aws_region }} - - name: Login to Docker Hub - uses: docker/login-action@v3 - if: ${{ inputs.dockerhub_username != '' && inputs.dockerhub_token != '' && inputs.configure_registries == 'true' }} - with: - username: ${{ inputs.dockerhub_username }} - password: ${{ inputs.dockerhub_token }} - - name: Login to ECR - uses: docker/login-action@v3 - if: ${{ inputs.aws_role_arn != '' && inputs.aws_ecr_registry != '' && inputs.configure_registries == 
'true' }} - with: - registry: ${{ inputs.aws_ecr_registry }} - - name: Login to GitHub Container Registry - uses: docker/login-action@v3 - if: ${{ inputs.configure_registries == 'true' }} - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ inputs.github_token }} - - name: Setup Remote Runner - uses: input-output-hk/catalyst-ci/actions/configure-runner@master - if: ${{ inputs.earthly_runner_secret != '' && inputs.earthly_skip_install != 'true' }} - with: - secret: ${{ inputs.earthly_runner_secret }} + - name: Install Earthly + uses: earthly/actions-setup@v1 + if: ${{ inputs.earthly_skip_install != 'true' }} + with: + version: ${{ inputs.earthly_version }} + - name: Install CI CLI + uses: input-output-hk/catalyst-ci/actions/install@master + if: ${{ inputs.cli_skip_install != 'true' }} + with: + version: ${{ inputs.cli_version }} + - name: Configure AWS Credentials + uses: aws-actions/configure-aws-credentials@v4 + if: ${{ inputs.aws_region != '' && inputs.aws_role_arn != '' }} + with: + role-to-assume: ${{ inputs.aws_role_arn }} + aws-region: ${{ inputs.aws_region }} + - name: Login to Docker Hub + uses: docker/login-action@v3 + if: ${{ inputs.dockerhub_username != '' && inputs.dockerhub_token != '' && inputs.configure_registries == 'true' }} + with: + username: ${{ inputs.dockerhub_username }} + password: ${{ inputs.dockerhub_token }} + - name: Login to ECR + uses: docker/login-action@v3 + if: ${{ inputs.aws_role_arn != '' && inputs.aws_ecr_registry != '' && inputs.configure_registries == 'true' }} + with: + registry: ${{ inputs.aws_ecr_registry }} + - name: Login to GitHub Container Registry + uses: docker/login-action@v3 + if: ${{ inputs.configure_registries == 'true' }} + with: + registry: ghcr.io + username: ${{ github.actor }} + password: ${{ inputs.github_token }} + - name: Setup Remote Runner + uses: input-output-hk/catalyst-ci/actions/configure-runner@master + if: ${{ inputs.earthly_runner_secret != '' && inputs.earthly_skip_install 
!= 'true' }} + with: + secret: ${{ inputs.earthly_runner_secret }} diff --git a/cli/cmd/main.go b/cli/cmd/main.go index 50d68c303..47184dde6 100644 --- a/cli/cmd/main.go +++ b/cli/cmd/main.go @@ -3,18 +3,21 @@ package main // cspell: words alecthomas afero sess tfstate import ( + "bufio" + "bytes" "encoding/json" "fmt" + "io" + "log" "os" + "os/exec" "path" "path/filepath" - "strings" "text/template" "time" "github.com/alecthomas/kong" "github.com/aws/aws-sdk-go/aws/session" - "github.com/input-output-hk/catalyst-ci/cli/pkg" "github.com/input-output-hk/catalyst-ci/cli/pkg/executors" "github.com/input-output-hk/catalyst-ci/cli/pkg/git" "github.com/input-output-hk/catalyst-ci/cli/pkg/parsers" @@ -32,10 +35,12 @@ type TagTemplate struct { } var cli struct { - Images imagesCmd `cmd:"" help:"Find the images generated by an Earthfile target."` - Scan scanCmd `cmd:"" help:"Scan for Earthfiles."` - State stateCmd `cmd:"" help:"Fetch outputs from remote Terraform state buckets."` - Tags tagsCmd `cmd:"" help:"Generate image tags with the current git context."` + Images imagesCmd `cmd:"" help:"Find the images generated by an Earthfile target."` + Scan scanCmd `cmd:"" help:"Scan for Earthfiles."` + State stateCmd `cmd:"" help:"Fetch outputs from remote Terraform state buckets."` + Tags tagsCmd `cmd:"" help:"Generate image tags with the current git context."` + Simulate simulateCmd `cmd:"" help:"Simulate earthly commands."` + Generate generateCmd `cmd:"" help:"Generate Earthfile from the given targets"` } type imagesCmd struct { @@ -74,91 +79,71 @@ func (c *imagesCmd) Run() error { type scanCmd struct { JSONOutput bool `short:"j" long:"json" help:"Output in JSON format"` - Images bool `short:"i" long:"images" help:"Also output images for the target of each Earthfile (requires -t option)"` Paths []string ` help:"paths to scan for Earthfiles" arg:"" type:"path"` - Target string `short:"t" help:"filter by Earthfiles that include this target" default:""` + Targets []string 
`short:"t" help:"filter by Earthfiles that include these target patterns" default:""` } func (c *scanCmd) Run() error { parser := parsers.NewEarthlyParser() scanner := scanners.NewFileScanner(c.Paths, parser, afero.NewOsFs()) - var files []pkg.Earthfile var err error - if c.Target != "" { - files, err = scanner.ScanForTarget(c.Target) - } else { - files, err = scanner.Scan() - } + // Target tag is set. + if len(c.Targets) != 0 { + var fileMapTarget = make(map[string][]string) + for _, t := range c.Targets { + pathToEarthMap, err := scanner.ScanForTarget(t) - if err != nil { - return err - } - - if c.Images { - if c.Target == "" { - return fmt.Errorf( - "the --images (-i) option requires the --target (-t) option", - ) - } - - var output = make(map[string][]string) - - for _, file := range files { - images, err := file.GetImages(c.Target) if err != nil { return err } - output[filepath.Dir(file.Path)] = images - } - - if c.JSONOutput { - var outFinal []interface{} - for path, images := range output { - out := struct { - Images []string `json:"images"` - Path string `json:"path"` - }{ - Images: images, - Path: path, + for key, value := range pathToEarthMap { + if existingTargets, ok := fileMapTarget[filepath.Dir(key)]; ok { + fileMapTarget[filepath.Dir(key)] = append(existingTargets, value.Targets...) 
+ } else { + fileMapTarget[filepath.Dir(key)] = value.Targets } - outFinal = append(outFinal, out) - } - jsonOutput, err := json.Marshal(outFinal) - if err != nil { - return err - } - fmt.Println(string(jsonOutput)) - } else { - for path, images := range output { - fmt.Printf("%s %s\n", path, strings.Join(images, ",")) } } - return nil - } + err = c.printOutput(fileMapTarget) + if err != nil { + return err + } - if c.JSONOutput { + } else { + files, err := scanner.Scan() + if err != nil { + return err + } paths := make([]string, 0) for _, file := range files { paths = append(paths, filepath.Dir(file.Path)) } - jsonFiles, err := json.Marshal(paths) + err = c.printOutput(paths) if err != nil { return err } - fmt.Println(string(jsonFiles)) - } else { - for _, file := range files { - fmt.Println(filepath.Dir(file.Path)) - } } return nil } +func (c *scanCmd) printOutput(data interface{}) error { + if c.JSONOutput { + jsonOutput, err := json.Marshal(data) + if err != nil { + return err + } + fmt.Println(string(jsonOutput)) + return nil + } + fmt.Println(data) + return nil +} + type stateCmd struct { Bucket string `short:"b" long:"bucket" help:"S3 bucket that state is stored in" env:"CI_STATE_BUCKET" required:"true"` Environment string `short:"e" long:"environment" help:"The target environment to fetch state for" env:"CI_ENVIRONMENT" required:"true"` @@ -249,6 +234,142 @@ func (c *tagsCmd) Run() error { return nil } +type simulateCmd struct { + Path string ` help:"directory path to be iterated to search for targets within the Earthfile" arg:"" type:"path"` + Targets []string `short:"t" help:"Earthly targets pattern" default:"check,check-*,build,test,test-*"` +} + +func (c *simulateCmd) Run() error { + parser := parsers.NewEarthlyParser() + scanner := scanners.NewFileScanner([]string{c.Path}, parser, afero.NewOsFs()) + + // Loop through target patterns. 
+ for _, tp := range c.Targets { + err := processTargets(scanner, tp, "", func(target string) { + fmt.Println(">>> Running target", target) + runEarthlyTarget(target) + }) + if err != nil { + return err + } + } + return nil +} + +// Run Earthly with target and print out logs. +func runEarthlyTarget(earthlyCmd string) { + command := "earthly" + + // Create the command. + cmd := exec.Command(command, earthlyCmd) + + var stdoutBuf, stderrBuf bytes.Buffer + cmd.Stdout = io.MultiWriter(os.Stdout, &stdoutBuf) + cmd.Stderr = io.MultiWriter(os.Stderr, &stderrBuf) + + err := cmd.Run() + if err != nil { + log.Fatalf("cmd.Run() failed with %s\n", err) + } + outStr, errStr := stdoutBuf.String(), stderrBuf.String() + fmt.Printf("\nout:\n%s\nerr:\n%s\n", outStr, errStr) +} + +type generateCmd struct { + Path string ` help:"directory path to be iterated to search for targets within the Earthfile" arg:"" type:"path"` + Targets []string `short:"t" help:"Earthly targets pattern" default:"check,check-*,build,test,test-*"` + Version string `short:"v" help:"Earthly version" default:"0.7"` +} + +// Generate Earthfile with given targets. +// All targets associated with the given targets will be listed inside. +func (c *generateCmd) Run() error { + directory := "generate" + // Create a directory generate with an Earthfile. + writer, err := createFile(directory, "Earthfile") + if err != nil { + return err + } + // Write down Earthly version and main target. 
+ setup := fmt.Sprintf("VERSION --global-cache %s \nsimulate:\n", c.Version) + _, err = writer.WriteString(setup) + if err != nil { + return err + } + + parser := parsers.NewEarthlyParser() + scanner := scanners.NewFileScanner([]string{c.Path}, parser, afero.NewOsFs()) + for _, tp := range c.Targets { + err = processTargets(scanner, tp, directory, func(target string) { + data := fmt.Sprintf("\t BUILD %s\n", target) + fmt.Println(">>> Target with Path", data) + _, err = writer.WriteString(data) + if err != nil { + fmt.Println("Error writing to file:", err) + } + }) + if err != nil { + return err + } + err = writer.Flush() + if err != nil { + return err + } + } + return nil +} + +// Create a file with the given file name and directory. +func createFile(directory string, fileName string) (*bufio.Writer, error) { + // Create the directory. + err := os.MkdirAll(directory, os.ModePerm) + if err != nil { + return nil, err + } + + filePath := filepath.Join(directory, fileName) + file, err := os.Create(filePath) + if err != nil { + return nil, err + } + + writer := bufio.NewWriter(file) + + return writer, nil +} + +// Scans for targets using the given target pattern, +// then calls the callback for each target. +func processTargets(scanner *scanners.FileScanner, targetPattern string, directory string, callback func(target string)) error { + fmt.Println(">>>>>>> Detecting", targetPattern, "target") + pathToEarthMap, err := scanner.ScanForTarget(targetPattern) + if err != nil { + return err + } + + curDir, err := os.Getwd() + if err != nil { + return err + } + + filePath := filepath.Join(curDir, directory) + + // Loop through filtered targets. + for _, e := range pathToEarthMap { + for _, tg := range e.Targets { + // Get the relative target path.
+ relativePath, err := filepath.Rel(filePath, e.Earthfile.Path) + if err != nil { + return err + } + + target := filepath.Join(filepath.Dir(relativePath), "+"+tg) + callback(target) + } + } + return nil +} + func main() { ctx := kong.Parse(&cli) err := ctx.Run() diff --git a/cli/pkg/earthfile.go b/cli/pkg/earthfile.go index 04e38ff52..208a60dce 100644 --- a/cli/pkg/earthfile.go +++ b/cli/pkg/earthfile.go @@ -18,6 +18,12 @@ type Earthfile struct { Version *spec.Version `json:"version,omitempty"` } +// Earthfile and its filtered targets. +type EarthTargets struct { + Earthfile Earthfile + Targets []string +} + func (e Earthfile) GetImages(target string) ([]string, error) { commands, err := e.GetCommands(target, "SAVE IMAGE") if err != nil { @@ -72,7 +78,6 @@ func (e Earthfile) GetTarget(target string) (*spec.Target, error) { return &t, nil } } - return nil, fmt.Errorf("target %s not found in %s", target, e.Path) } diff --git a/cli/pkg/scanners/file_scanner.go b/cli/pkg/scanners/file_scanner.go index 6806ebac7..7e3108bd2 100644 --- a/cli/pkg/scanners/file_scanner.go +++ b/cli/pkg/scanners/file_scanner.go @@ -3,8 +3,11 @@ package scanners // cspell: words afero import ( + "fmt" "os" "path/filepath" + "regexp" + "strings" "github.com/input-output-hk/catalyst-ci/cli/pkg" "github.com/spf13/afero" @@ -28,21 +31,60 @@ func (f *FileScanner) Scan() ([]pkg.Earthfile, error) { return earthfiles, nil } -func (f *FileScanner) ScanForTarget(target string) ([]pkg.Earthfile, error) { - earthfiles, err := f.scan(func(e pkg.Earthfile) (bool, error) { +// This function returns a map. +// The key is the path to the Earthfile. +// The value is a struct containing the Earthfile and its list of filtered targets. +// If no match is found, an empty map is returned. +// e.g. map[/test/Earthfile]: {Earthfile, [filteredTargets]}.
+func (f *FileScanner) ScanForTarget(target string) (map[string]pkg.EarthTargets, error) { + regexPattern := getTargetRegex(target) + r, err := regexp.Compile(regexPattern) + + if err != nil { + return nil, err + } + + pathToEarthTargets := make(map[string]pkg.EarthTargets) + + _, err = f.scan(func(e pkg.Earthfile) (bool, error) { + var targets []string for _, t := range e.Targets { - if t.Name == target { - return true, nil + // A matched target is added to the list. + if r.MatchString(t.Name) { + targets = append(targets, t.Name) } } - + // If there are filtered targets, add them to the map. + if len(targets) != 0 { + pathToEarthTargets[e.Path] = pkg.EarthTargets{ + Earthfile: e, + Targets: targets, + } + return true, nil + } return false, nil }) if err != nil { return nil, err } - return earthfiles, nil + + return pathToEarthTargets, nil +} + +// Get the regex for the given target: +// if the target ends with -*, return a wildcard regex; +// otherwise, return a regex matching the exact target. +func getTargetRegex(target string) string { + // If the target ends with -* + if strings.HasSuffix(target, "-*") { + // Should start with the given target, + // followed by a hyphen and one or more lowercase letters or numbers. + return fmt.Sprintf("^%s-[a-z0-9]+$", regexp.QuoteMeta(target[:len(target)-2])) + } + + // Match the exact target. + return fmt.Sprintf("^%s$", regexp.QuoteMeta(target)) } func (f *FileScanner) scan( diff --git a/cli/pkg/scanners/file_scanner_test.go b/cli/pkg/scanners/file_scanner_test.go index 114ee22d5..d85bec215 100644 --- a/cli/pkg/scanners/file_scanner_test.go +++ b/cli/pkg/scanners/file_scanner_test.go @@ -1,6 +1,6 @@ package scanners_test -// cspell: words onsi gomega afero +// cspell: words onsi gomega afero testdocker import ( "errors" @@ -78,63 +78,53 @@ var _ = Describe("FileScanner", func() { }) Describe("ScanForTarget", func() { - BeforeEach(func() { + setup := func(target string) { err := afero.WriteFile( fs, "/test/Earthfile", - []byte("docker"), +
[]byte(target), 0644, ) + Expect(err).NotTo(HaveOccurred()) parser = &mockParser{ earthfile: pkg.Earthfile{ Targets: []spec.Target{ { - Name: "docker", + Name: target, }, }, }, } - }) - - It("should return Earthfiles with docker target", func() { - fScanner := scanners.NewFileScanner([]string{"/test"}, parser, fs) - earthfiles, err := fScanner.ScanForTarget("docker") - Expect(err).NotTo(HaveOccurred()) - Expect(earthfiles).To(HaveLen(1)) - Expect(earthfiles[0].Path).To(Equal("/test/Earthfile")) - }) - - Context("when the Earthfile does not contain docker target", func() { - BeforeEach(func() { - err := afero.WriteFile( - fs, - "/test/Earthfile", - []byte("other"), - 0644, - ) + } + DescribeTable("when the Earthfile contains the target", + func(targetInput string, targetInFile string) { + setup(targetInFile) + directory := "/test" + fScanner := scanners.NewFileScanner([]string{directory}, parser, fs) + pathToEarthTargets, err := fScanner.ScanForTarget(targetInput) Expect(err).NotTo(HaveOccurred()) - parser = &mockParser{ - earthfile: pkg.Earthfile{ - Targets: []spec.Target{ - { - Name: "other", - }, - }, - }, - } - }) - - It("should return an empty slice", func() { - fScanner := scanners.NewFileScanner( - []string{"/test"}, - parser, - fs, - ) - earthfiles, err := fScanner.ScanForTarget("docker") + Expect(pathToEarthTargets).To(HaveLen(1)) + data := pathToEarthTargets[directory+"/Earthfile"] + Expect(data).NotTo(BeNil()) + Expect(data.Earthfile.Path).To(Equal(directory + "/Earthfile")) + Expect(data.Targets).To(Equal([]string{targetInFile})) + + }, + Entry("scanning 'docker', target in file is 'docker'", "docker", "docker"), + Entry("scanning 'docker-*', target in file is 'docker-test'", "docker-*", "docker-test"), + ) + DescribeTable("when the Earthfile contains no matching target", + func(target string) { + setup(target) + fScanner := scanners.NewFileScanner([]string{"/test"}, parser, fs) + pathToEarthTargets, err := fScanner.ScanForTarget("docker") Expect(err).NotTo(HaveOccurred())
- Expect(earthfiles).To(BeEmpty()) - }) - }) + Expect(pathToEarthTargets).To(BeEmpty()) + + }, + Entry("scanning 'docker', target in file doesn't match but contains the word docker", "testdocker"), + Entry("scanning 'docker', no matching target", "other"), + ) }) }) diff --git a/docs/src/guides/languages/rust.md b/docs/src/guides/languages/rust.md index 9f596ba42..c2f81932f 100644 --- a/docs/src/guides/languages/rust.md +++ b/docs/src/guides/languages/rust.md @@ -81,15 +81,15 @@ By default `toolchain` setup to `rust-toolchain.toml`. ```Earthfile # Test rust build container - Use best architecture host tools. -check-hosted: +hosted-check: FROM +builder DO ./../../earthly/rust+CHECK # Test which runs check with all supported host tooling. Needs qemu or rosetta to run. # Only used to validate tooling is working across host toolsets. -check-all-hosts: - BUILD --platform=linux/amd64 --platform=linux/arm64 +check-hosted +all-hosts-check: + BUILD --platform=linux/amd64 --platform=linux/arm64 +hosted-check ## Standard CI targets. ## @@ -105,14 +105,14 @@ check: ARG USERARCH IF [ "$USERARCH" == "arm64" ] - BUILD --platform=linux/arm64 +check-hosted + BUILD --platform=linux/arm64 +hosted-check ELSE - BUILD --platform=linux/amd64 +check-hosted + BUILD --platform=linux/amd64 +hosted-check END ``` With prepared environment and all data, we're now ready to start operating with the source code and configuration files. -The `check-hosted` target which actually performs all checks and validation +The `hosted-check` target actually performs all checks and validation with the help of `+CHECK` UDC target.
The `+CHECK` UDC target performs static checks of the Rust project as `cargo fmt`, `cargo machete`, `cargo deny` which will validate formatting, @@ -131,7 +131,7 @@ to be the same as defined in `earthly/rust/stdcfgs` directory of the `catalyst-c So when you are going to setup a new Rust project copy these configuration files described above to the appropriate location of your Rust project. -Another targets as `check-all-hosts` and `check` (running on CI) just invoke `check-hosted` +Other targets such as `all-hosts-check` and `check` (running on CI) just invoke `hosted-check` with the specified `--platform`. It is important to define a `linux` target platform with a proper cpu architecture for the Rust project when you are building it inside Docker @@ -142,7 +142,7 @@ The same approach we will see for the another targets of this guide. ```Earthfile # Build the service. -build-hosted: +hosted-build: FROM +builder DO ./../../earthly/rust+BUILD --libs="bar" --bins="foo/foo" @@ -154,7 +154,7 @@ build-hosted: # Test which runs check with all supported host tooling. Needs qemu or rosetta to run. # Only used to validate tooling is working across host toolsets. -build-all-hosts: - BUILD --platform=linux/amd64 --platform=linux/arm64 +build-hosted +all-hosts-build: + BUILD --platform=linux/amd64 --platform=linux/arm64 +hosted-build # Run build using the most efficient host tooling @@ -167,16 +167,16 @@ build: ARG USERARCH IF [ "$USERARCH" == "arm64" ] - BUILD --platform=linux/arm64 +build-hosted + BUILD --platform=linux/arm64 +hosted-build ELSE - BUILD --platform=linux/amd64 +build-hosted + BUILD --platform=linux/amd64 +hosted-build END ``` After successful performing checks of the Rust project we can finally build artifacts. -As it was discussed in the previous section, actual job is done with `build-hosted` target, +As discussed in the previous section, the actual job is done by the `hosted-build` target, other targets needs to configure different platform running options. -So we will focus on `build-hosted` target.
+So we will focus on the `hosted-build` target. Obviously it inherits `builder` target environment and than performs build of the binary. Important to note that in this particular example we are dealing with the executable Rust project, so it produces binary as a final artifact. diff --git a/docs/src/guides/simulate.md b/docs/src/guides/simulate.md new file mode 100644 index 000000000..b6e775e1e --- /dev/null +++ b/docs/src/guides/simulate.md @@ -0,0 +1,135 @@ +--- +icon: material/gamepad-variant +--- + +# Earthly Simulator + +## Overview + +The following document provides an overview and usage guide for simulating Earthly locally. + +The simulation can be done in two ways: + +1. Running the `simulate` command: +This runs the Earthly command on every target that matches the given input target patterns. +The targets run sequentially, and the outcomes are then previewed. +2. Running the `generate` command: +This creates an Earthfile in the `generate/` folder inside the current directory. +The Earthfile contains a main target called `simulate`, +which lists all the targets that match the given input target patterns. +To test it, `earthly +simulate` can be run directly. + +## Setup + +Both commands are written in Go and are located in +[catalyst-ci](https://github.com/input-output-hk/catalyst-ci/cli/cmd/main.go) . + + +!!! Note + Make sure that your `GOPATH` is set correctly. + + +To begin, clone the Catalyst CI repository: + +``` bash +git clone https://github.com/input-output-hk/catalyst-ci.git +``` + +Navigate to the `cli` directory. +The command can be found in `cli/cmd/main.go`. + +### Running the command + +In the `cli` directory, the following command can be run: + +``` bash +go run ./cmd/main.go +``` + +### Build a binary file + +Instead of running the command directly from `main.go`, +a binary file can be built instead.
+ ``` bash +go build -o bin/ci cmd/main.go +``` + +Now the `ci` command can be run directly without the Go toolchain. + +## Simulate Command Usage + +### SimulateCmd Struct + +The simulateCmd struct is designed to be used with a command-line interface (CLI) and has the following fields: + +* `Path`: Specifies the directory path to be iterated to search for targets within the Earthfile. +* `Targets`: A list of Earthly target patterns that the simulation will run. +If the flag is not set, the default pipeline `check check-* build test test-*` will be run. + +### Default targets workflow + +If the target flag is not set, the default target patterns will be used. +The default values are `check check-* build test test-*`. + +``` bash +ci simulate . +``` + +### Customize targets workflow + +Specific stages can be simulated by adding the target flag. +The argument is a list of target patterns, for example, `-t " "` + +``` bash +ci simulate . -t "test" -t "test-*" +``` + +## Generate Command Usage + +### GenerateCmd Struct + +The generateCmd struct is designed to be used with a command-line interface (CLI) and has the following fields: + +* `Path`: Specifies the directory path to be iterated to search for targets within the Earthfile. +* `Targets`: A list of Earthly target patterns that the simulation will run. +If the flag is not set, the default pipeline `check check-* build test test-*` will be run. +* `Version`: The Earthly version to be specified at the top of the Earthfile. +The default version is 0.7. + +### Default value + +If the target flag is not set, the default target patterns will be used. +The default values are `check check-* build test test-*`. + +``` bash +ci generate . +``` + +The CLI will create an Earthfile in the `generate/` folder of the current directory. +The Earthly version will be set to 0.7. +The targets will be listed under the `simulate` target, +e.g.
`BUILD ../test/+target` + +### Customize targets workflow + +Customization can be done by specifying flags. + +* Adding the target flag `-t "" -t ""` +* Adding the version flag `-v ` + +``` bash +ci generate . -t "test-*" -t "check-*" -v 0.6 +``` + +The CLI will create an Earthfile in the `generate/` folder of the current directory. +The command above will iterate through the current directory, +find the targets that match `test-*` and `check-*`, +and set the Earthly version to 0.6. + + +!!! Warning + Make sure that the generated Earthfile does not conflict with an existing Earthfile. + The generated Earthfile should be ignored in `.gitignore`. + + \ No newline at end of file diff --git a/docs/src/onboarding/index.md b/docs/src/onboarding/index.md index 02f75f955..5aafdbee9 100644 --- a/docs/src/onboarding/index.md +++ b/docs/src/onboarding/index.md @@ -22,7 +22,7 @@ During every run, the CI will automatically discover and execute a select number Each of these targets serves a single purpose and together they are responsible for executing the entire release process. The CI process is designed to be modular and reusable across a variety of different requirements. -By default, if a specific target the CI executes is not found in the discovery phase, it simply passes and moves on to the next one. +By default, if a specific target is not found in the discovery phase, it simply passes and moves on to the next one. This allows slowly building out a repository and only implementing the targets that make sense at that time. The discovery and execution nature of the CI allows developers to contractually define the outputs of the particular subproject they @@ -32,7 +32,8 @@ reserved target names to interact with the CI. This promotes self-service and establishes a clear boundary of ownership whereby developers only need to be concerned about maintaining a single file in their subproject.
-The CI process is well-documented and troubleshooting unexpected errors only requires knowledge of Earthly and GitHub Actions. +The CI process is well-documented and troubleshooting unexpected errors only requires knowledge of **Earthly** +and **GitHub Actions**. All of the code is contained in a single [open-source repository](https://github.com/input-output-hk/catalyst-ci) and contributions are welcome. The remainder of the overview section will focus on discussing some of these concepts in more detail. @@ -48,18 +49,38 @@ environment. During a single run, the CI will go through multiple phases of discovery. In each of these discovery phases, a custom CLI provided by the `catalyst-ci` repository is executed. The CLI is responsible for recursively scanning the repository for `Earthfile`s and filtering them by target. -For example, during the `check` phase of the CI, the CLI will return a list of `Earthfile`s that contain the `check` target. +The CLI will return a list of Earthfile paths and a map where the key is the Earthfile path and the value is a list of filtered targets. +For example, in the check phase of the CI, `check` and `check-*` will be executed. +The wildcard `*` serves as a regular search term, representing one or more other characters. +The output of the check phase may look like the following: +**Map:** + +```json +{ + "/home/work/test": ["check-test1", "check-test2", "check-test3"], + "/home/work/test2": ["check"] +} +``` + +**Path:** + +```json +["/home/work/test", "/home/work/test2"] +``` + +This list of `path` is fed into a [matrix job](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs) that +multiplexes executing the filtered targets from each of the discovered `Earthfile`s. +The filtered targets will be retrieved from the map according to which Earthfile is currently running. +For example, running `/home/work/test` above will run the targets `check-test1`, `check-test2`, and `check-test3`.
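The map lookup described above can be sketched in Go. Note that `targetsFor` is a hypothetical helper for illustration only (not part of the CLI), and the paths and target names are the illustrative ones from the example output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// targetsFor parses the discovery map (Earthfile path -> filtered targets)
// and returns the filtered targets for the Earthfile currently being run.
func targetsFor(raw []byte, path string) ([]string, error) {
	var m map[string][]string
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	return m[path], nil
}

func main() {
	// Example discovery output (illustrative paths).
	raw := []byte(`{
		"/home/work/test": ["check-test1", "check-test2", "check-test3"],
		"/home/work/test2": ["check"]
	}`)

	targets, err := targetsFor(raw, "/home/work/test")
	if err != nil {
		panic(err)
	}
	for _, t := range targets {
		fmt.Println("run target:", t)
	}
}
```

In a real workflow the matrix job would substitute its own Earthfile path for the hard-coded one.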
-The discovery phase will then return a list of `Earthfile`s matching the given criteria. -This list is fed into a [matrix job](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs) that multiplexes -executing the targets from each of the discovered `Earthfile`s. -By running targets in parallel, we maximize network throughput and create an easier to digest view of the CI status. -For example, by doing this, every individual target gets its own dedicated job and logs that can be easily seen from the GitHub -Actions UI. +Executing each discovered Earthfile in parallel will maximize network throughput +and create a more easily digestible view of the CI status. +For example, by doing this, every individual Earthfile gets its own dedicated job and logs. +This can be easily seen from the GitHub Actions UI. ### Execution -After each discovery phase, a list of targets will be executed by the CI in parallel. +After each discovery phase, a list of targets will be executed by the CI. Execution is handled by Earthly and usually occurs on a remote Earthly runner that maximizes the benefits of caching. The exact steps that get executed by the target is defined by the developer. While most targets generally have a clearly defined scope, the overall goal is to enable adaptability by offloading the logic to the @@ -127,9 +148,51 @@ However, as a project grows, it can begin incorporating more stages without havi Now that we've covered how the CI process works at a conceptual level, how can we use it practically? As a developer, the main interface you need to be most concerned with is the `Earthfile`. Each "stage" discussed in the previous section can be directly correlated to an Earthly target. -For example, the `check` stage will search for and execute a `check` target. -In your `Earthfile`, you'll be responsible for defining these targets and their associated logic within the context of your -subproject. 
+Two patterns, `target` and `target-*`, can be used according to your needs and preferences. + +* In case of `target`, it will scan for targets that match the exact target name (i.e., `target`) +* In case of `target-*`, it will scan for targets that start with `target-` and are followed by one or more digits or lowercase characters +(The wildcard `*` serves as a regular search term, representing one or more other characters.) + + +!!! Warning + Wildcard (`target-*`) is only compatible with targets that do not produce any artifacts or images. + The current design is only supported in the `check` and `test` stages. + + +Examples are provided below: + +* In the CI's `check` phase, if `check` is used, the following output will be returned: +**Map:** + +```json +{ + "/home/work/test": ["check"], + "/home/work/test2": ["check"] +} +``` + +**Path:** + +```json +["/home/work/test", "/home/work/test2"] +``` + +* If `check-*` is used, the following output will be returned: +**Map:** + +```json +{ + "/home/work/test": ["check-test1", "check-test2", "check-test3"], + "/home/work/test2": ["check-test1"] +} +``` + +**Path:** + +```json +["/home/work/test", "/home/work/test2"] +``` If you're contributing a new subproject with deliverables, you'll need to include an initial `Earthfile` as part of the contribution. diff --git a/docs/src/reference/actions.md b/docs/src/reference/actions.md index 48f183b95..3ceaec9c9 100644 --- a/docs/src/reference/actions.md +++ b/docs/src/reference/actions.md @@ -28,7 +28,7 @@ It performs the necessary steps to setup the local GitHub runner to perform CI t This includes: * Installing Earthly -* Installing the custom CI CLI +* Installing the custom CI CLI or building it locally if the flag is set * Configuring access to AWS * Authenticating with container registries * Configuring the Earthly remote runner @@ -41,9 +41,10 @@ Using these actions individually should be avoided unless absolutely necessary.
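The wildcard rule described above can be sketched in Go. This mirrors the documented matching behaviour (an exact name, or `target-` followed by one or more lowercase letters or digits) and is a sketch, not necessarily the CLI's exact implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patternRegex converts a target pattern into a regular expression:
// "check-*" matches targets starting with "check-" followed by one or
// more lowercase letters or digits, while a plain name matches exactly.
func patternRegex(pattern string) *regexp.Regexp {
	if strings.HasSuffix(pattern, "-*") {
		base := regexp.QuoteMeta(strings.TrimSuffix(pattern, "-*"))
		return regexp.MustCompile(fmt.Sprintf("^%s-[a-z0-9]+$", base))
	}
	return regexp.MustCompile(fmt.Sprintf("^%s$", regexp.QuoteMeta(pattern)))
}

func main() {
	r := patternRegex("check-*")
	fmt.Println(r.MatchString("check-test1")) // true
	fmt.Println(r.MatchString("checker"))     // false: no "check-" prefix
	fmt.Println(patternRegex("check").MatchString("check")) // true: exact match
}
```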
### Discover The `discover` action is another common action that shows up in many workflows. -It performs the "discovery" mechanism of finding Earthfiles with specific targets. -For example, the `check` workflow uses this action to discover all Earthfiles that have a `check` target. -The custom CI CLI **must** be installed (see the above section) in order for this action to work. +It performs the "discovery" mechanism of finding Earthfiles whose targets match the given target patterns. +For example, the check workflow, which runs `check` and `check-*`, +uses this action to discover all Earthfiles containing targets that match `check` and `check-*`. +The custom CI CLI **must** be installed or built locally (see the above section) in order for this action to work. This action is useful when creating custom workflows which extend the existing Catalyst CI process. It can be used to create similar logic for discovering and acting upon specific Earthly targets contained in a repository. diff --git a/docs/src/reference/targets.md b/docs/src/reference/targets.md index b972cd24c..e1e21921c 100644 --- a/docs/src/reference/targets.md +++ b/docs/src/reference/targets.md @@ -15,24 +15,25 @@ This section is dedicated to explaining what these targets are, how they work, a ### Summary -The `check` target is responsible for validating that a given subproject is healthy and up to the appropriate standards. +The `check` or `check-*` target is responsible for validating that a given subproject is healthy and up to the appropriate standards. This target should be used by all subprojects to improve the upkeep of code. No additional tasks are performed before or after running the target. ### How it Works -The `check` target is the **first target run** in the CI pipeline and **must pass** before any other targets are run. -The CI will call the `check` target and fail if it returns a non-zero exit code.
+The `check` and `check-*` targets are the **first targets run** in the CI pipeline +and **must pass** before any other targets are run. +The CI will call the `check` and `check-*` targets and fail if any of them returns a non-zero exit code. ### Usage -It's important to avoid adding any steps that may have flaky results to the `check` target. +It's important to avoid adding any steps that may have flaky results to the `check` and `check-*` targets. Reducing the runtime of the `check` phase by avoiding any lengthy processes is also advisable. This includes things like complex integrations tests or E2E tests that should have their own dedicated workflows. -The goal of the `check` target is to "fail fast" and avoid running a lengthy CI pipeline if there are immediate problems with the -code. +The goal of the `check` and `check-*` targets is to "fail fast" and avoid running a lengthy CI pipeline +if there are immediate problems with the code. -Some typical tasks that would be appropriate for the `check` target are as follows: +Some typical tasks that would be appropriate for the `check` or `check-*` targets are as follows: 1. Validating code format 2. Linting code @@ -93,7 +94,7 @@ In smaller repos, this target should be skipped. ### Summary -The `test` target is responsible for running tests to validate things are working as expected. +The `test` and `test-*` targets are responsible for running tests to validate things are working as expected. The target is intended to be versatile, and can be used to run several different formats of testing. For example: @@ -103,14 +104,14 @@ For example: ### How it Works -The `test` target is the **fourth target run** in the CI pipeline and **must pass** before any other targets are run. -The CI will call the `test` target and fail if it returns a non-zero exit code. +The `test` and `test-*` targets are the **fourth targets run** in the CI pipeline and **must pass** before any other targets are run.
+The CI will call the `test` and `test-*` targets and fail if any of them returns a non-zero exit code. ### Usage -The `test` target is intended to be versatile. -In many cases, separate `Earthfile`s that are outside of the scope of a single subproject are created to hold a `test` target which -runs integration tests. +The `test` and `test-*` targets are intended to be versatile. +In many cases, separate `Earthfile`s that are outside of the scope of a single subproject +are created to hold `test` or `test-*` targets which run integration tests. At the same time, individual subprojects may utilize this target to run their own unit tests. The only requirement is that the target should *only* be used to run tests. @@ -122,7 +123,7 @@ This target is the final target that is run (and must pass) before artifacts are The `publish` target is responsible for building and publishing a container image to image registries. This target should be used when a subproject needs to produce and publish a container image. -The CI will execute this target after the `test` target, assuming it passes. +The CI will execute this target after the `test` phase, assuming it passes. ### How it Works @@ -181,3 +182,16 @@ For example, making the target save the local source code is redundant since Git new release is created. However, a consumer may want to be able to download precompiled versions of a binary (without relying on a container). In this case, it makes sense to create a release target that produces the binary as an artifact. + + +!!! Note + Targets can be written in two patterns: `target` and `target-*`. + The wildcard `*` serves as a regular search term, representing one or more other characters. + Specifically, one or more numbers or lowercase characters. + + + +!!! Warning + Wildcard (`target-*`) is only compatible with targets that do not produce any artifacts or images. + The current design is only supported in the `check` and `test` stages.
+ \ No newline at end of file diff --git a/examples/postgresql/Earthfile b/examples/postgresql/Earthfile index e83b245a2..b3aac3253 100644 --- a/examples/postgresql/Earthfile +++ b/examples/postgresql/Earthfile @@ -108,11 +108,3 @@ test-3: # * applies seed data. test-4: DO +INTEGRATION_TEST_RUN --compose="./tests/docker-compose-svc.yml" --seed_data="data" --test_script=./tests/test1.sh - -# test the event db database schema. Invokes all tests. -# CI target : true -test: - BUILD +test-1 - BUILD +test-2 - BUILD +test-3 - BUILD +test-4 diff --git a/examples/rust/Earthfile b/examples/rust/Earthfile index 975d88ade..804e57db7 100644 --- a/examples/rust/Earthfile +++ b/examples/rust/Earthfile @@ -13,19 +13,19 @@ builder: DO ./../../earthly/rust+SETUP # Test rust build container - Use best architecture host tools. -check-hosted: +hosted-check: FROM +builder DO ./../../earthly/rust+CHECK # Test which runs check with all supported host tooling. Needs qemu or rosetta to run. # Only used to validate tooling is working across host toolsets. -check-all-hosts: - BUILD --platform=linux/amd64 --platform=linux/arm64 +check-hosted +all-hosts-check: + BUILD --platform=linux/amd64 --platform=linux/arm64 +hosted-check # Build the service. -build-hosted: +hosted-build: FROM +builder DO ./../../earthly/rust+BUILD --libs="bar" --bins="foo/foo" @@ -37,8 +37,8 @@ build-hosted: # Test which runs check with all supported host tooling. Needs qemu or rosetta to run. # Only used to validate tooling is working across host toolsets. 
-build-all-hosts: - BUILD --platform=linux/amd64 --platform=linux/arm64 +build-hosted +all-hosts-build: + BUILD --platform=linux/amd64 --platform=linux/arm64 +hosted-build test-hosted: FROM +builder @@ -64,9 +64,9 @@ check: ARG USERARCH IF [ "$USERARCH" == "arm64" ] - BUILD --platform=linux/arm64 +check-hosted + BUILD --platform=linux/arm64 +hosted-check ELSE - BUILD --platform=linux/amd64 +check-hosted + BUILD --platform=linux/amd64 +hosted-check END # Run build using the most efficient host tooling @@ -79,9 +79,9 @@ build: ARG USERARCH IF [ "$USERARCH" == "arm64" ] - BUILD --platform=linux/arm64 +build-hosted + BUILD --platform=linux/arm64 +hosted-build ELSE - BUILD --platform=linux/amd64 +build-hosted + BUILD --platform=linux/amd64 +hosted-build END test: