A Benchmarking and Performance Analysis Framework
The code base includes three sub-systems. The first is the collection agent, `pbench-agent`, responsible for providing commands for running benchmarks across one or more systems, while properly collecting the configuration data for those systems, along with specified telemetry or data from various tools (`sar`, `vmstat`, `perf`, etc.).
The second sub-system is the `pbench-server`, which is responsible for archiving result tarballs, indexing them, and unpacking them for display.
The third sub-system is the web-server JS and CSS files, used to display various graphs and results, and any other content generated by the `pbench-agent` during the benchmark and tool post-processing steps.
The pbench Dashboard code lives in its own repository.
Instructions on installing `pbench-agent` can be found in the Pbench Agent Getting Started Guide.
For Fedora, CentOS, and RHEL users, we have made COPR builds available for the `pbench-agent`, `pbench-server`, `pbench-web-server`, and some benchmark and tool packages.
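As a sketch of what installation from COPR might look like on Fedora (the COPR repository name `ndokos/pbench` used below is an assumption, not taken from this document; substitute the repository that actually hosts the builds):

$ sudo dnf install -y dnf-plugins-core        # provides the "dnf copr" sub-command
$ sudo dnf copr enable ndokos/pbench          # assumed COPR repository name
$ sudo dnf install -y pbench-agent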
Install the `pbench-web-server` package on the machine from which you want to run the `pbench-agent` workloads; this allows you to view the graphs before sending the results to a server, or even when no server is configured to receive results.
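With a COPR repository enabled as sketched above, that is a single package install; the package name comes from this document, though exact availability depends on your distribution:

$ sudo dnf install -y pbench-web-server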
You might also want to browse through the rest of the documentation; start with the Pbench Agent Getting Started Guide.
TL;DR? See "TL;DR - How to set up the `pbench-agent` and run a benchmark" in the main documentation for a super quick set of introductory steps.
The latest source code is at https://github.com/distributed-system-analysis/pbench.
The pbench dashboard code is maintained separately at https://github.com/distributed-system-analysis/pbench-dashboard.
Yes, we use Google Groups for mailing-list discussions.
Yes, we are using GitHub Projects: you will find projects covering the Agent and Server, as well as a project named for the current milestone.
Below are some simple steps for setting up a development environment for working with the Pbench code base. For more detailed instructions on the workflow and process of contributing code to Pbench, refer to the Guidelines for Contributing.
$ git clone https://github.com/distributed-system-analysis/pbench
$ cd pbench
To run the unit tests quickly from within the checked-out source tree, execute:
jenkins/run jenkins/tox -r --current-env -e jenkins-pytests
jenkins/run jenkins/tox -r --current-env -e jenkins-unittests
The above commands run the tests in a Fedora-based container with all the proper packages installed.
If you want to run the unit tests outside of that environment, you need to install `tox` properly in your environment (Fedora/CentOS/RHEL):
$ sudo dnf install -y perl-JSON python3-pip python3-tox
Once `tox` is installed, you can run the unit tests (use `tox --listenvs` to see the full list); e.g.:
- `tox -e util-scripts` -- for agent/util-scripts tests
- `tox -e server` -- for server tests
- `tox -e lint` -- to run the linting and code style checks
To run the full suite of unit tests in parallel, invoke the `run-unittests` script at the top level of the pbench repository.
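From the top level of the checkout, that would look like the following (a minimal sketch, assuming the script needs no arguments):

$ ./run-unittests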
This project uses `flake8` (pinned at `flake8==3.8.3`) for code style enforcement, linting, and checking. All Python code contributed to pbench must match the style requirements. These requirements are enforced by the pre-commit hook using the `black` Python code formatter.
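If you want to run the same checks by hand before committing, a minimal sketch looks like this (the pinned `flake8` version is taken from this document; invoking `black` and `flake8` over the whole tree is an assumption about how the project runs them):

$ pip3 install black flake8==3.8.3
$ black --check .     # report any files black would reformat
$ flake8 .            # run the lint and code style checks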
This project makes use of pre-commit to do automatic lint and style checking on every commit containing Python files.
To install the pre-commit hook, run the following from your Python 3 environment while in your pbench git checkout:
$ cd ~/pbench
$ pip3 install pre-commit
$ pre-commit install --install-hooks
Once installed, all commits will run the test hooks. If your changes fail any of the tests, the commit will be rejected.
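You can also trigger the hooks manually over the entire tree at any time, using pre-commit's standard `run` sub-command:

$ pre-commit run --all-files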
We employ a simple major, minor, release, build (optional) scheme for tagging, starting with the v0.70.0 release (`v<Major>.<Minor>.<Release>[-<Build>]`). Prior to the v0.70.0 release, the scheme used was mostly `v<Major>.<Minor>`, where we only had minor releases (Major = 0). The practice of using `-agent` or `-server` suffixes also ends with the v0.70.0 release.
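As a purely hypothetical illustration of the format (these are invented tags, not actual releases):

v0.71.2      # Major 0, Minor 71, Release 2
v0.71.2-4    # the same release with optional Build 4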
This same GitHub "tag" scheme is used for the tags applied to the container images we build, with the following exceptions for tag names:
- `latest` - always points to the "latest" container image pushed to a repository
- `v<Major>-latest` - always points to the "latest" released image for that Major
- `v<Major>.<Minor>-latest` - always points to the "latest" release for Major.Minor released images
- `<SHA1 git hash>` (9 characters) - the commit hash of the checked-out code
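As a hedged example of pulling images by these tags (the registry and image path below are placeholders, not the project's actual repository):

$ podman pull example.io/pbench/pbench-agent:v0.70-latest    # latest v0.70.* release image
$ podman pull example.io/pbench/pbench-agent:1a2b3c4d5       # image built from a specific 9-character commit hash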