Automated gcsfuse micro benchmarking #41
Open
anushka567 wants to merge 34 commits into main from gcsfuse-micro-benchmarking
Changes from all commits: 34 commits, all by anushka567
1b5324d  configurable gcsfuse micro benchmarking
10449de  add visualization
c3813e4  update READme
ad01628  add iops and cpu metrics
3a3b433  plot correction
a7ebbb6  Update README.md
1acc888  Update README.md
30a2ff2  zone correction
6719609  adding resources for various types of benchmarking
bcc6862  update the config file
52877dc  update startup script for cplusplus benchmarking
d8e7649  add fiojobfile for cplusplus benchmarking
2099f3d  add directory for benchmark_plots
09becae  correct zone
235e1b8  Update README.md
8e449e6  make it platform independant
b29bf4b  fix cpp startup script
7e5208e  resources for rapid perf sprint
328a8cf  fio installation for cpp benchmarks
011627d  correct load fio jobfile path
0c31f2b  jobfile
167c061  jobfile
44bb124  Update README.md
92b4f1d  Update gcsfuse-micro-benchmarking/defaults/speed-of-light/startup_scr…
acfdb0f  Update gcsfuse-micro-benchmarking/helpers/upload.py
4ba4b99  Update gcsfuse-micro-benchmarking/main.py
d3af6f9  Update gcsfuse-micro-benchmarking/helpers/record_bench_id.py
eed103e  Update gcsfuse-micro-benchmarking/main.py
50343e7  add newline to csv file
f84769e  script fixes
90e95f2  update creation command
20ea883  final fixes
8a4b036  final fixes pt2
35b1fa9  Update write_fio_job_cases.csv
@@ -0,0 +1,73 @@
## Steps to run the benchmark

### 1. Setup the tool
```
git clone https://github.com/GoogleCloudPlatform/gcsfuse-tools.git
cd gcsfuse-tools
git checkout gcsfuse-micro-benchmarking
cd gcsfuse-micro-benchmarking
```
### 2. Start the SSH agent and load your GCE private key to enable passwordless SSH access to your VMs
```
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine

# pkill ssh-agent  # kills the active ssh session; having multiple ssh-agent processes running can lead to unexpected behavior and connection issues
```
Note: Please ensure that `gcloud compute ssh` works as expected locally.
### 3. Set up the configurations as per your requirement
For custom benchmark runs, modify either of the following according to your use case (see the example row below):
* fio_job_cases.csv
  - For executing mixed test cases, such as the published GCSFuse benchmarks.
* jobfile.fio
  - For executing fio jobs with different global configurations.

For more details on setting the configurations as per your requirement, follow the guidelines [here](https://docs.google.com/document/d/1yI0ApvDC8SDnpzAmz95kbf75h1G-me41Xa1XH7zecF0/edit?usp=sharing).
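As an illustration, a custom case added to `fio_job_cases.csv` uses the same column layout as the default job-case files in this change (`bs,file_size,iodepth,iotype,threads,nrfiles`); the row below only shows the shape of an entry, not a recommended configuration:
```
bs,file_size,iodepth,iotype,threads,nrfiles
1M,100M,64,randread,128,10
```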
### 4. (Optional) Start the tmux session
```
tmx2 new -A -s benchmarking-session
```
Running the script can be blocking, and any failure (e.g. SSH issues on the local machine from which the script is triggered) can require the entire script to be re-triggered, so it is advised to run the benchmark in a tmux session. \
Note: tmx2 is recommended because tmux doesn't work well with propagated ssh-keys. To install tmx2, please run: `sudo apt install tmux gnubby-wrappers`
### 5. Set up the virtual environment
```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
### 6. Run the benchmark
```
python3 main.py --benchmark_id={benchmark_id} --config_filepath={path/to/benchmark_config_file} --bench_type={bench_type}
```
Note: Please ensure the Google Cloud SDK is up to date, as creating zonal buckets is not supported in older versions.
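For illustration, a concrete invocation might look like the following; the benchmark ID, config file name, and bench type here are placeholders, not files shipped with the tool:
```
python3 main.py \
  --benchmark_id=sol-read-baseline \
  --config_filepath=./my_benchmark_config.yaml \
  --bench_type=baseline
```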
### 7. Cleanup
Whenever necessary, a GCE VM named `{benchmark_id}-vm` and a GCS bucket named `{benchmark_id}-bkt` are created at runtime.

Cleanup is handled as part of the script itself if the resources were created at runtime and the config explicitly states that they should be deleted after use. In case of tool failure, the resources are persisted.
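If a failed run leaves resources behind, they can be removed manually with the standard gcloud commands below; substitute your own benchmark ID and zone:
```
gcloud compute instances delete {benchmark_id}-vm --zone={zone}
gcloud storage rm --recursive gs://{benchmark_id}-bkt
```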
### 8. Benchmark Results
The results from the benchmark run are available locally at `results/{benchmark_id}_result.txt` at the end of benchmarking, and remotely in the artifacts bucket at `gs://{ARTIFACTS_BUCKET}/{benchmark_id}/result.json`.

The raw results are also persisted in the artifacts bucket at `gs://{ARTIFACTS_BUCKET}/{benchmark_id}/raw-results/`.
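To pull a run's results down for offline inspection, something along these lines should work, where `{ARTIFACTS_BUCKET}` is the artifacts bucket configured for the tool:
```
gcloud storage cp gs://{ARTIFACTS_BUCKET}/{benchmark_id}/result.json .
gcloud storage cp --recursive gs://{ARTIFACTS_BUCKET}/{benchmark_id}/raw-results/ ./raw-results/
```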
### 9. Compare Benchmark Runs
With identical benchmark runs for baseline/topline/feature, the results can be compared using the following steps:
```
cd compare_runs
python3 main.py --benchmark_ids=id1,id2,... --output_dir=output_dir
```

Visual plots are generated and stored under `output_dir/`.
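For example, with two completed runs whose IDs are `baseline-run` and `feature-run` (placeholder names), the comparison plots could be generated with:
```
cd compare_runs
python3 main.py --benchmark_ids=baseline-run,feature-run --output_dir=comparison_plots
```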
#### Note:
* The benchmark_id passed as an argument to the script is used for creating the test bucket and VM instance if required, so ensure the benchmark_id is compliant with the naming guidelines for such resources.
* In case the GCE VM instance is pre-existing, please ensure that the VM scope is set to `https://www.googleapis.com/auth/cloud-platform` for full access to all Cloud APIs.
* For future reference, the benchmark IDs are also stored in the artifacts bucket at `gs://{ARTIFACTS_BUCKET}/{user}/runs.json`. The runs can be labelled by setting the bench_type flag passed to the script (see the example below).
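To review previously recorded runs, the stored JSON can be printed directly; the per-user path segment below follows the note above and may differ in your setup:
```
gcloud storage cat gs://{ARTIFACTS_BUCKET}/{user}/runs.json | python3 -m json.tool
```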
Comment on lines +43 to +73
There are a few formatting and syntax issues in the instructions:
Empty file.
@@ -0,0 +1,265 @@
import argparse
import sys
import subprocess
import json
import os
import matplotlib.pyplot as plt
import numpy as np
import re

ARTIFACTS_BUCKET = "gcsfuse-perf-benchmark-artifacts"

def sanitize_filename(filename):
    """Removes or replaces characters potentially problematic for filenames."""
    filename = filename.replace('/', '_per_').replace('\\', '_').replace(' ', '_')
    filename = re.sub(r'[^a-zA-Z0-9._-]', '', filename)
    filename = re.sub(r'_+', '_', filename)
    return filename

def load_results_for_benchmark_id(benchmark_id, bucket):
    """
    Loads result.json from GCS if it exists, using gcloud CLI.

    The path checked is gs://{bucket}/{benchmark_id}/result.json

    Args:
        benchmark_id: The ID of the benchmark.
        bucket: The GCS bucket name.

    Returns:
        A dictionary loaded from the JSON file, or None if the file
        doesn't exist or an error occurs.
    """
    gcs_path = f"gs://{bucket}/{benchmark_id}/result.json"
    # print(f"Attempting to load results from: {gcs_path}")

    describe_command = ["gcloud", "storage", "objects", "describe", gcs_path]
    try:
        subprocess.run(describe_command, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.PIPE)
    except subprocess.CalledProcessError as e:
        print(f"File not found or no access: {gcs_path}")
        return None
    except FileNotFoundError:
        print("Error: 'gcloud' command not found. Ensure the Google Cloud SDK is installed and in your PATH.")
        return None
    except Exception as e:
        print(f"An unexpected error occurred during describe: {e}")
        return None

    cp_command = ["gcloud", "storage", "cp", gcs_path, "-"]
    try:
        cp_result = subprocess.run(cp_command, check=True, capture_output=True, text=True)
        file_content = cp_result.stdout
    except subprocess.CalledProcessError as e:
        print(f"Error copying file content from {gcs_path}: {e.stderr}")
        return None
    except FileNotFoundError:
        print("Error: 'gcloud' command not found.")
        return None
    except Exception as e:
        print(f"An unexpected error occurred during copy: {e}")
        return None

    try:
        data = json.loads(file_content)
        return data
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON from {gcs_path}: {e}")
        return None

def get_plot_summary(benchmark_ids, metric_configs, output_dir):
    """Generates a summary of the plots being created."""
    summary = []
    summary.append("--- Benchmark Comparison Plot Summary ---")
    summary.append(f"Comparing Benchmark IDs: {', '.join(benchmark_ids)}")
    summary.append(f"Output Directory: {output_dir}\n")
    summary.append("Plots Generated:")

    for metric_name in metric_configs.keys():
        plot_filename_base = sanitize_filename(metric_name.lower())
        plot_filename = f"{plot_filename_base}.png"
        summary.append(f"  - {metric_name}: {os.path.join(output_dir, plot_filename)}")

    summary.append("\nNote:")
    summary.append("  - Each plot visualizes a specific metric across different test cases (X-axis).")
    summary.append("  - Within each test case, points represent the mean value for each Benchmark ID.")
    summary.append("  - Error bars indicate +/- one Standard Deviation, as provided in the 'fio_metrics'.")
    summary.append("  - These are NOT true box-and-whisker plots as quartile/median data is not available in the input.")
    summary.append("--------------------------------------")
    return "\n".join(summary)

def compare_and_visualize(results, output_dir="benchmark_plots"):
    """
    Generates and saves plots comparing benchmark results using error bars.

    Args:
        results (dict): A dictionary where keys are benchmark_ids and values
            are the parsed JSON results.
        output_dir (str): Directory to save the plot images.
    """
    if not results:
        print("No results to compare and visualize.")
        return

    os.makedirs(output_dir, exist_ok=True)
    benchmark_ids = sorted(results.keys())
    if not benchmark_ids:
        print("Benchmark IDs list is empty.")
        return

    sample_bid = benchmark_ids[0]
    test_cases = sorted(results[sample_bid].keys())
    if not test_cases:
        print(f"No test cases found in benchmark {sample_bid}")
        return

    n_test_cases = len(test_cases)

    metric_configs = {
        "Read Throughput (MB/s)": ("fio_metrics", "avg_read_throughput_mbps", "stdev_read_throughput_mbps"),
        "Write Throughput (MB/s)": ("fio_metrics", "avg_write_throughput_mbps", "stdev_write_throughput_mbps"),
        "Read Latency (ms)": ("fio_metrics", "avg_read_latency_ms", "stdev_read_latency_ms"),
        "Write Latency (ms)": ("fio_metrics", "avg_write_latency_ms", "stdev_write_latency_ms"),
        "Read IOPS": ("fio_metrics", "avg_read_iops", "stdev_read_iops"),
        "Write IOPS": ("fio_metrics", "avg_write_iops", "stdev_write_iops"),
        "Average CPU %": ("vm_metrics", "avg_cpu_utilization_percent", "stdev_cpu_utilization_percent"),
        "CPU per GBps": (None, "cpu_percent_per_gbps", None),
    }

    # Add log-scale metrics to the configuration
    metric_configs["Read Latency (ms) [Log Scale]"] = ("fio_metrics", "avg_read_latency_ms", "stdev_read_latency_ms", True)
    metric_configs["Write Latency (ms) [Log Scale]"] = ("fio_metrics", "avg_write_latency_ms", "stdev_write_latency_ms", True)

    print(get_plot_summary(benchmark_ids, metric_configs, output_dir))

    # Use the correct, non-deprecated way to get a colormap
    colors = plt.colormaps.get_cmap('tab10')

    for metric_name, config in metric_configs.items():
        data_group, avg_key, std_key = config[:3]
        is_log_scale = config[3] if len(config) > 3 else False

        fig, ax = plt.subplots(figsize=(max(12, n_test_cases * len(benchmark_ids) * 0.4), 8))
        has_data_in_metric = False
        all_vals = []

        bar_width = 0.3 / len(benchmark_ids)

        for i, test_case in enumerate(test_cases):
            x_offset_start = i - (len(benchmark_ids) / 2 - 0.5) * bar_width

            for j, bid in enumerate(benchmark_ids):
                test_data = results[bid].get(test_case, {})
                if data_group:
                    source = test_data.get(data_group, {})
                else:
                    source = test_data

                mean_val = source.get(avg_key)
                std_val = source.get(std_key) if std_key else None

                mean_val = 0.0 if mean_val is None else mean_val
                std_val = 0.0 if std_val is None else std_val

                x_position = x_offset_start + j * bar_width

                label = f"{bid}"
                color = colors(j)

                # Check for log scale before plotting to avoid log(0)
                if is_log_scale and mean_val > 0:
                    ax.errorbar(x_position, mean_val, yerr=std_val, fmt='o', linestyle='', label=label, capsize=5, markersize=6, elinewidth=1.5, color=color)
                elif not is_log_scale:
                    ax.errorbar(x_position, mean_val, yerr=std_val, fmt='o', linestyle='', label=label, capsize=5, markersize=6, elinewidth=1.5, color=color)

                if mean_val > 0:
                    has_data_in_metric = True

                all_vals.extend([mean_val - std_val, mean_val + std_val])

        # Remove duplicate labels from the legend
        handles, labels = ax.get_legend_handles_labels()
        unique_labels = dict(zip(labels, handles))

        if not has_data_in_metric:
            print(f"Skipping plot for {metric_name}: No positive data found.")
            plt.close(fig)
            continue

        if is_log_scale:
            ax.set_yscale('log')
            ax.set_title(f"Comparison of {metric_name}", fontsize=16)
        else:
            ax.set_title(f"Comparison of {metric_name}", fontsize=16)

        ax.set_ylabel(metric_name, fontsize=12)

        ax.set_xticks(np.arange(n_test_cases))
        ax.set_xticklabels(test_cases, rotation=45, ha="right", fontsize=10)

        ax.legend(unique_labels.values(), unique_labels.keys(), title="Benchmark ID", bbox_to_anchor=(1.04, 1), loc='upper left', fontsize=9)
        ax.grid(axis='y', linestyle='--', alpha=0.7)
        ax.tick_params(axis='x', labelsize=10)
        ax.tick_params(axis='y', labelsize=10)

        if all_vals and not is_log_scale:
            min_val = min(all_vals)
            max_val = max(all_vals)
            padding = (max_val - min_val) * 0.1
            if padding == 0: padding = max(abs(max_val) * 0.1, 1)
            ax.set_ylim(max(0, min_val - padding), max_val + padding)

        plt.tight_layout(rect=[0, 0, 0.85, 1])
        plot_filename_base = sanitize_filename(metric_name.lower())
        plot_filename = f"{plot_filename_base}.png"
        plot_path = os.path.join(output_dir, plot_filename)

        try:
            plt.savefig(plot_path, dpi=150, bbox_inches='tight')
        except Exception as e:
            print(f"Error saving plot {plot_path}: {e}")
        finally:
            plt.close(fig)

    print(f"\nFinished generating plots in '{output_dir}' directory.")


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description="Script to process and visualize benchmark results from GCS."
    )
    parser.add_argument(
        '--benchmark_ids',
        type=str,
        default='',
        required=True,
        help='A comma-separated list of benchmark IDs (e.g., "id1,id2,id3").'
    )
    parser.add_argument(
        '--output_dir',
        type=str,
        default='benchmark_plots',
        help='Directory to save the output plots.'
    )
    args = parser.parse_args()

    benchmark_ids = [item.strip() for item in args.benchmark_ids.split(',') if item.strip()]
    if not benchmark_ids:
        print("Error: No benchmark IDs provided.")
        sys.exit(1)

    results = {}
    print("--- Loading Benchmark Results ---")
    for bid in benchmark_ids:
        result = load_results_for_benchmark_id(bid, ARTIFACTS_BUCKET)
        if result is not None:
            results[bid] = result
            print(f"Successfully loaded results for {bid}")
        else:
            print(f"Failed to load results for {bid}")
    print("--- Finished Loading ---\n")

    if results:
        compare_and_visualize(results, args.output_dir)
    else:
        print("No results were successfully loaded, skipping visualization.")
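For reference, the metric lookups above imply a rough per-test-case shape for each `result.json`. The sketch below only illustrates that structure with made-up values; it is not output from a real run:
```python
# Illustrative only: approximate structure that compare_and_visualize() reads.
example_result = {
    "read-1M-100M": {
        "fio_metrics": {
            "avg_read_throughput_mbps": 1250.0,
            "stdev_read_throughput_mbps": 40.2,
            "avg_read_latency_ms": 3.1,
            "stdev_read_latency_ms": 0.4,
            "avg_read_iops": 1250.0,
            "stdev_read_iops": 35.0,
        },
        "vm_metrics": {
            "avg_cpu_utilization_percent": 42.0,
            "stdev_cpu_utilization_percent": 5.5,
        },
        "cpu_percent_per_gbps": 33.6,
    },
}
```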
Empty file.
17 changes: 17 additions & 0 deletions
gcsfuse-micro-benchmarking/defaults/gcsfuse-published/read_fio_job_cases.csv
@@ -0,0 +1,17 @@
bs,file_size,iodepth,iotype,threads,nrfiles
128K,128K,64,read,128,30
128K,256K,64,read,128,30
1M,1M,64,read,128,30
1M,5M,64,read,128,20
1M,10M,64,read,128,20
1M,50M,64,read,128,20
1M,100M,64,read,128,10
1M,200M,64,read,128,10
1M,1G,64,read,128,10
128K,128K,64,randread,128,30
1M,5M,64,randread,128,20
1M,10M,64,randread,128,20
1M,50M,64,randread,128,10
1M,100M,64,randread,128,10
1M,200M,64,randread,128,10
1M,1G,64,randread,128,10
25 changes: 25 additions & 0 deletions
gcsfuse-micro-benchmarking/defaults/gcsfuse-published/read_fio_job_template.fio
@@ -0,0 +1,25 @@
[global]
allrandrepeat=0
create_serialize=0
direct=1
fadvise_hint=0
file_service_type=random
group_reporting=1
iodepth=${IODEPTH}
ioengine=libaio
invalidate=1
numjobs=${NUMJOBS}
openfiles=1
rw=${IOTYPE}
thread=1
filename_format=${FILENAME_FORMAT}

[experiment]
stonewall
directory=${MNTDIR}
# Update the block size value from the table for different experiments.
bs=${BLOCKSIZE}
# Update the file size value from the table (file size) for different experiments.
filesize=${FILESIZE}
# Set nrfiles per thread in such a way that the test runs for 1-2 min.
nrfiles=${NRFILES}
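The `${...}` placeholders in this template are resolved through fio's environment-variable substitution. How the tool's startup scripts set them is outside this file; the snippet below is only a hand-run sketch with an assumed gcsfuse mount point:
```
IODEPTH=64 NUMJOBS=128 IOTYPE=read BLOCKSIZE=1M FILESIZE=100M NRFILES=10 \
MNTDIR=/mnt/gcsfuse-mount FILENAME_FORMAT='$jobname.$jobnum.$filenum' \
fio read_fio_job_template.fio --output-format=json
```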
6 changes: 6 additions & 0 deletions
gcsfuse-micro-benchmarking/defaults/gcsfuse-published/write_fio_job_cases.csv
@@ -0,0 +1,6 @@
bs,file_size,iodepth,iotype,threads,nrfiles
16K,256K,64,write,112,30
1M,1M,64,write,112,30
1M,50M,64,write,112,20
1M,100M,64,write,112,10
1M,1G,64,write,112,2
There are a few typos in the documentation that affect readability:
- `retriggered` should be `re-triggered`.
- `virual` should be `virtual`.
- `complaint` should be `compliant`.