test: mobile benchmarking for proving and witness generation using BrowserStack and mobench #398
Conversation
run: |
  set -euo pipefail

  scope="${PROOF_SCOPE,,}"
  modes="${BENCH_MODES,,}"
  modes="${modes//[[:space:]]/}"
  if [[ -z "$modes" ]]; then
    modes="all"
  fi

  case "$scope" in
    both|pi1|pi2) ;;
    *)
      echo "::error::Invalid proof_scope: '$scope' (expected: both|pi1|pi2)"
      exit 1
      ;;
  esac

  mode_enabled() {
    local mode="$1"
    [[ "$modes" == "all" ]] && return 0
    [[ ",$modes," == *",$mode,"* ]]
  }

  scope_enabled() {
    local bench_scope="$1"
    [[ "$scope" == "both" || "$scope" == "$bench_scope" ]]
  }

  mkdir -p target/mobench/ci/android

  benches=(
    "pi2 witness bench_mobile::bench_nullifier_witness_generation_only nullifier-witness"
    "pi2 proving bench_mobile::bench_nullifier_proving_only nullifier-proving"
    "pi2 full bench_mobile::bench_nullifier_proof_generation nullifier-full"
    "pi1 witness bench_mobile::bench_query_witness_generation_only query-witness"
    "pi1 proving bench_mobile::bench_query_proving_only query-proving"
    "pi1 full bench_mobile::bench_query_proof_generation query-full"
  )

  selected=0
  for bench in "${benches[@]}"; do
    read -r bench_scope bench_mode function output <<<"$bench"
    if ! scope_enabled "$bench_scope"; then
      continue
    fi
    if ! mode_enabled "$bench_mode"; then
      continue
    fi
    selected=$((selected + 1))
    cargo mobench run \
      --target android \
      --function "${function}" \
      --iterations "${{ inputs.iterations }}" \
      --warmup "${{ inputs.warmup }}" \
      --config bench-config.android.runtime.toml \
      --release \
      --fetch \
      --fetch-timeout-secs "${{ inputs.fetch_timeout_secs }}" \
      --summary-csv \
      --output "target/mobench/ci/android/${output}.json"
  done

  if [[ "$selected" -eq 0 ]]; then
    echo "::error::No Android benchmarks selected by proof_scope='${scope}' and modes='${modes}'."
    exit 1
  fi
Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an attacker to inject their own code into the runner, letting them steal secrets and code. `github` context data can contain arbitrary user input and should be treated as untrusted. Instead, store the data in an intermediate environment variable with `env:` and reference that variable in the `run:` script, making sure to double-quote it, like this: "$ENVVAR".
🌟 Fixed in commit cd8f408 🌟
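For reference, a minimal sketch of the recommended pattern, with only a few of this step's flags shown and the step name assumed:

```yaml
# Sketch only: pass untrusted ${{ ... }} context through env: instead of
# interpolating it directly into the script body.
- name: Run Android benchmarks   # assumed step name
  env:
    ITERATIONS: ${{ inputs.iterations }}
    WARMUP: ${{ inputs.warmup }}
    FETCH_TIMEOUT_SECS: ${{ inputs.fetch_timeout_secs }}
  run: |
    set -euo pipefail
    # The inputs are now plain environment variables; double-quoting them keeps
    # the shell from interpreting their contents as code.
    cargo mobench run \
      --target android \
      --iterations "$ITERATIONS" \
      --warmup "$WARMUP" \
      --fetch-timeout-secs "$FETCH_TIMEOUT_SECS"
```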
run: |
  cat > device-matrix.android.runtime.yaml <<EOF
  devices:
    - name: "${{ steps.resolve_android.outputs.device_name }}"
      os: "android"
      os_version: "${{ steps.resolve_android.outputs.os_version }}"
      tags: ["runtime", "android"]
  EOF
  cat > bench-config.android.runtime.toml <<EOF
  target = "android"
  function = "bench_mobile::bench_nullifier_proving_only"
  iterations = ${{ inputs.iterations }}
  warmup = ${{ inputs.warmup }}
  device_matrix = "device-matrix.android.runtime.yaml"
  device_tags = ["runtime"]

  [browserstack]
  app_automate_username = "\${BROWSERSTACK_USERNAME}"
  app_automate_access_key = "\${BROWSERSTACK_ACCESS_KEY}"
  project = "mobile-bench-rs"
  EOF
Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an attacker to inject their own code into the runner, letting them steal secrets and code. `github` context data can contain arbitrary user input and should be treated as untrusted. Instead, store the data in an intermediate environment variable with `env:` and reference that variable in the `run:` script, making sure to double-quote it, like this: "$ENVVAR".
🚀 Fixed in commit cd8f408 🚀
run: |
  set -euo pipefail

  scope="${PROOF_SCOPE,,}"
  modes="${BENCH_MODES,,}"
  modes="${modes//[[:space:]]/}"
  if [[ -z "$modes" ]]; then
    modes="all"
  fi

  case "$scope" in
    both|pi1|pi2) ;;
    *)
      echo "::error::Invalid proof_scope: '$scope' (expected: both|pi1|pi2)"
      exit 1
      ;;
  esac

  mode_enabled() {
    local mode="$1"
    [[ "$modes" == "all" ]] && return 0
    [[ ",$modes," == *",$mode,"* ]]
  }

  scope_enabled() {
    local bench_scope="$1"
    [[ "$scope" == "both" || "$scope" == "$bench_scope" ]]
  }

  mkdir -p target/mobench/ci/ios

  benches=(
    "pi2 witness bench_mobile::bench_nullifier_witness_generation_only nullifier-witness"
    "pi2 proving bench_mobile::bench_nullifier_proving_only nullifier-proving"
    "pi2 full bench_mobile::bench_nullifier_proof_generation nullifier-full"
    "pi1 witness bench_mobile::bench_query_witness_generation_only query-witness"
    "pi1 proving bench_mobile::bench_query_proving_only query-proving"
    "pi1 full bench_mobile::bench_query_proof_generation query-full"
  )

  selected=0
  for bench in "${benches[@]}"; do
    read -r bench_scope bench_mode function output <<<"$bench"
    if ! scope_enabled "$bench_scope"; then
      continue
    fi
    if ! mode_enabled "$bench_mode"; then
      continue
    fi
    selected=$((selected + 1))
    cargo mobench run \
      --target ios \
      --function "${function}" \
      --iterations "${{ inputs.iterations }}" \
      --warmup "${{ inputs.warmup }}" \
      --config bench-config.ios.runtime.toml \
      --release \
      --fetch \
      --fetch-timeout-secs "${{ inputs.fetch_timeout_secs }}" \
      --summary-csv \
      --output "target/mobench/ci/ios/${output}.json"
  done

  if [[ "$selected" -eq 0 ]]; then
    echo "::error::No iOS benchmarks selected by proof_scope='${scope}' and modes='${modes}'."
    exit 1
  fi
Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an attacker to inject their own code into the runner, letting them steal secrets and code. `github` context data can contain arbitrary user input and should be treated as untrusted. Instead, store the data in an intermediate environment variable with `env:` and reference that variable in the `run:` script, making sure to double-quote it, like this: "$ENVVAR".
🚀 Fixed in commit cd8f408 🚀
run: |
  cat > device-matrix.ios.runtime.yaml <<EOF
  devices:
    - name: "${{ steps.resolve_ios.outputs.device_name }}"
      os: "ios"
      os_version: "${{ steps.resolve_ios.outputs.os_version }}"
      tags: ["runtime", "ios"]
  EOF
  cat > bench-config.ios.runtime.toml <<EOF
  target = "ios"
  function = "bench_mobile::bench_nullifier_proving_only"
  iterations = ${{ inputs.iterations }}
  warmup = ${{ inputs.warmup }}
  device_matrix = "device-matrix.ios.runtime.yaml"
  device_tags = ["runtime"]

  [browserstack]
  app_automate_username = "\${BROWSERSTACK_USERNAME}"
  app_automate_access_key = "\${BROWSERSTACK_ACCESS_KEY}"
  project = "mobile-bench-rs"

  [ios_xcuitest]
  app = "target/mobench/ios/BenchRunner.ipa"
  test_suite = "target/mobench/ios/BenchRunnerUITests.zip"
  EOF
Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an attacker to inject their own code into the runner, letting them steal secrets and code. `github` context data can contain arbitrary user input and should be treated as untrusted. Instead, store the data in an intermediate environment variable with `env:` and reference that variable in the `run:` script, making sure to double-quote it, like this: "$ENVVAR".
🧼 Fixed in commit cd8f408 🧼
Force-pushed ba76f1d to 16225b7
Force-pushed cd8f408 to c9cfd8d
/mobench platforms=both iterations=30 warmup=5 proof_scope=both modes=all device_profile=auto-low-spec
Force-pushed 2eb9e35 to 4a2a8e9
/mobench platforms=both proof_scope=both modes=all device_profile=auto-low-spec iterations=30 warmup=5 fetch_timeout_secs=1800
Temporarily closing to retrigger mobile-bench checks after CI fix; reopening immediately.
run: |
  set -euo pipefail
  python3 world-id-protocol/bench-mobile/scripts/summarize_mobench_ci.py \
    --ios-dir artifacts/ios \
    --android-dir artifacts/android \
    --ios-result "${IOS_RESULT}" \
    --android-result "${ANDROID_RESULT}" \
    --platforms "${{ inputs.platforms }}" \
    --proof-scope "${{ inputs.proof_scope }}" \
    --modes "${{ inputs.modes }}" \
    --device-profile "${{ inputs.device_profile }}" \
    --mobench-ref "${{ inputs.mobench_ref }}" \
    --run-url "${RUN_URL}" \
    --pr-number "${{ inputs.pr_number }}" \
    --requested-by "${{ inputs.requested_by }}" \
    --request-command "${{ inputs.request_command }}" \
    --output /tmp/mobench-summary.md
Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an attacker to inject their own code into the runner, letting them steal secrets and code. `github` context data can contain arbitrary user input and should be treated as untrusted. Instead, store the data in an intermediate environment variable with `env:` and reference that variable in the `run:` script, making sure to double-quote it, like this: "$ENVVAR".
🌟 Removed in commit 27ca130 🌟
/mobench platforms=both iterations=30 warmup=5 proof_scope=both modes=all device_profile=auto-low-spec mobench_ref=codex/ci-devex
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3db5993494
inventory = "0.3"

# World ID
world-id-core = { workspace = true, default-features = false, features = ["authenticator", "embed-zkeys", "issuer"] }
Register bench-mobile in workspace members
bench-mobile inherits its core dependencies via workspace = true, but this commit never adds the crate to [workspace].members in the root Cargo.toml. For a package under the workspace root, Cargo rejects workspace inheritance unless it is a workspace member, so the new benchmark entry points (cargo run -p bench-mobile and CI mobench invocations) fail before any benchmark executes.
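A hedged sketch of the kind of change this suggestion implies; the existing member name below is a placeholder, not the repository's actual list:

```toml
# Root Cargo.toml (sketch). Only the bench-mobile entry is the point here;
# "world-id-core" stands in for whatever members already exist.
[workspace]
members = [
    "world-id-core",
    "bench-mobile",
]
```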
thiserror = { workspace = true }

# UniFFI for mobile bindings
uniffi = { workspace = true, features = ["cli"] }
Define uniffi in workspace dependencies
This manifest declares uniffi = { workspace = true }, but the workspace root has no workspace.dependencies.uniffi entry, so Cargo cannot resolve this inherited dependency and fails to parse/build the new crate. As written, the mobile benchmark crate is not buildable until uniffi is added to workspace dependencies or pinned directly here.
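A hedged sketch of one way to resolve this; the version number is an assumption, not taken from the PR:

```toml
# Root Cargo.toml (sketch): declare uniffi once at the workspace level so that
# crate manifests can inherit it with `workspace = true`.
[workspace.dependencies]
uniffi = "0.28"  # assumed version

# bench-mobile/Cargo.toml can then inherit it and add crate-local features:
# uniffi = { workspace = true, features = ["cli"] }
```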
The goal of this PR is to test mobench, a Rust library for benchmarking Rust functions on mobile devices using BrowserStack. BrowserStack has an automated mobile testing API called App Automate which allows you to send `.ipa` or `.apk` files to live devices that install the apps and run whatever automation is in them. In our case we are using XCUITest for iOS and Espresso for Android. The mobench crate uses UniFFI, a Rust bindings generator, to create Swift and Kotlin bindings for the Rust functions and the benchmarking harness around them. The C ABI output bindings are put in Kotlin/Swift app templates that render a UI where the benchmarks are shown. A recording can also be viewed on BrowserStack. The main output is a `results.json` with all benchmarking information. Besides performance, BrowserStack also records resource usage metrics, e.g. RAM and CPU.
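To make the UniFFI flow concrete, here is a rough, hypothetical sketch of what an exported benchmark entry point can look like; the function body and names are placeholders, not the actual `bench_mobile` code, and the real harness wiring lives in mobench:

```rust
// Sketch only: a Rust function exported through UniFFI so the generated
// Swift/Kotlin test apps can call it on-device. Names are placeholders.
use std::time::Instant;

#[uniffi::export]
pub fn bench_nullifier_proving_only(iterations: u32) -> Vec<f64> {
    let mut samples_ms = Vec::with_capacity(iterations as usize);
    for _ in 0..iterations {
        let start = Instant::now();
        run_proving(); // placeholder for the real proving call
        samples_ms.push(start.elapsed().as_secs_f64() * 1000.0);
    }
    samples_ms
}

fn run_proving() {
    // placeholder body; the real crate would generate a witness and a proof
}

// Generates the C ABI scaffolding that the Swift/Kotlin bindings call into.
uniffi::setup_scaffolding!();
```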
Summary

- CI workflow (`mobile-bench-ios.yml`) with configurable inputs (`platforms`, `proof_scope`, `modes`, `iterations`, `warmup`, `device_profile`, custom overrides, `mobench_ref`)
- PR command workflow (`mobile-bench-pr-command.yml`) to run benchmarks via `/mobench ...` comments
- PR label workflow (`mobile-bench-pr-label.yml`) to avoid per-commit benchmark auto-runs
- CI summary script (`bench-mobile/scripts/summarize_mobench_ci.py`) with an at-a-glance scorecard in the workflow summary and sticky PR comment support
- Detailed CI pipeline docs (`bench-mobile/docs/ci-pipeline-detailed.md`) and refreshed benchmark docs
- Benchmark configuration assets under `bench-mobile/`:
  - `bench-mobile/bench-config.toml`
  - `bench-mobile/bench-config.ios.toml`
  - `bench-mobile/bench-config.android.toml`
  - `bench-mobile/device-matrix.yaml`
  - `bench-mobile/device-matrix.ios.low-spec.yaml`
  - `bench-mobile/device-matrix.android.low-spec.yaml`
Notes

- `/mobench` comment dispatch is collaborator-restricted (OWNER|MEMBER|COLLABORATOR) and blocks fork PR dispatch
- The workflows pin mobench to `worldcoin/mobile-bench-rs@codex/ci-devex` and allow an override via `mobench_ref`
- Benchmark assets and outputs live under `bench-mobile/` and are uploaded in artifacts
- GitHub runs `issue_comment` workflows from the default branch, so command dispatch requires `mobile-bench-pr-command.yml` to exist on `main` (a rough sketch of that trigger shape follows this list)
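As a rough sketch, under stated assumptions, this is what such an `issue_comment`-triggered command workflow typically looks like; the job layout, field names, and the dispatch step are illustrative, not this PR's actual `mobile-bench-pr-command.yml`:

```yaml
# Sketch only: react to /mobench comments from collaborators and forward the
# command to the benchmark workflow. Field and workflow names are assumptions.
name: mobile-bench-pr-command
on:
  issue_comment:
    types: [created]

jobs:
  dispatch:
    # Only react to /mobench comments on PRs from OWNER/MEMBER/COLLABORATOR.
    if: >
      github.event.issue.pull_request &&
      startsWith(github.event.comment.body, '/mobench') &&
      contains(fromJSON('["OWNER","MEMBER","COLLABORATOR"]'),
               github.event.comment.author_association)
    runs-on: ubuntu-latest
    steps:
      - name: Trigger benchmark run
        env:
          GH_TOKEN: ${{ github.token }}
          COMMENT_BODY: ${{ github.event.comment.body }}
          PR_NUMBER: ${{ github.event.issue.number }}
        run: |
          gh workflow run mobile-bench-ios.yml \
            --field request_command="$COMMENT_BODY" \
            --field pr_number="$PR_NUMBER"
```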
Validation

- `bench-mobile/` benchmarking assets/docs/code