23 changes: 22 additions & 1 deletion cookbook/en/sandbox_advanced.md
@@ -91,6 +91,14 @@ OSS_ACCESS_KEY_ID=your-access-key-id
OSS_ACCESS_KEY_SECRET=your-access-key-secret
OSS_BUCKET_NAME=your-bucket-name

# S3 settings
FILE_SYSTEM=s3
S3_ENDPOINT_URL=http://localhost:9000
S3_ACCESS_KEY_ID=your-access-key-id
S3_ACCESS_KEY_SECRET=your-access-key-secret
S3_BUCKET_NAME=your-bucket-name
S3_REGION_NAME=us-east-1

# K8S settings
K8S_NAMESPACE=default
KUBECONFIG_PATH=
@@ -147,12 +155,25 @@ For distributed file storage using [Alibaba Cloud Object Storage Service](https:

| Parameter | Description | Default | Notes |
| --- | --- | --- | --- |
- | `FILE_SYSTEM` | File system type | `local` | `local`, or `oss` |
+ | `FILE_SYSTEM` | File system type | `local` | `local`, `oss`, or `s3` |
| `OSS_ENDPOINT` | OSS endpoint URL | Empty | Regional endpoint |
| `OSS_ACCESS_KEY_ID` | OSS access key ID | Empty | From the OSS console |
| `OSS_ACCESS_KEY_SECRET` | OSS access key secret | Empty | Keep secure |
| `OSS_BUCKET_NAME` | OSS bucket name | Empty | Pre-created bucket |

#### (Optional) S3 Settings

For distributed file storage using [Amazon S3](https://aws.amazon.com/s3/) or S3-compatible storage (e.g., MinIO):

| Parameter | Description | Default | Notes |
| --- | --- | --- | --- |
| `FILE_SYSTEM` | File system type | `local` | `local`, `oss`, or `s3` |
| `S3_ENDPOINT_URL` | S3 endpoint URL | Empty | For MinIO, use `http://localhost:9000` |
| `S3_ACCESS_KEY_ID` | S3 access key ID | Empty | AWS access key or MinIO access key |
| `S3_ACCESS_KEY_SECRET` | S3 access key secret | Empty | AWS secret key or MinIO secret key |
| `S3_BUCKET_NAME` | S3 bucket name | Empty | Pre-created bucket |
| `S3_REGION_NAME` | S3 region name | `us-east-1` | AWS region or MinIO region |
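
A quick way to confirm these values point at a reachable bucket before starting the sandbox server is a standalone `boto3` check along the following lines (a minimal sketch; the endpoint, keys, and bucket name are the placeholder values from the table above):

```python
# Minimal connectivity check for the S3 settings above. The values are
# placeholders; for MinIO, a server must be listening on localhost:9000.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # S3_ENDPOINT_URL
    aws_access_key_id="your-access-key-id",  # S3_ACCESS_KEY_ID
    aws_secret_access_key="your-access-key-secret",  # S3_ACCESS_KEY_SECRET
    region_name="us-east-1",  # S3_REGION_NAME
)

try:
    s3.head_bucket(Bucket="your-bucket-name")  # S3_BUCKET_NAME
    print("Bucket is reachable")
except ClientError as e:
    print("S3 check failed:", e.response["Error"]["Code"])
```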

#### (Optional) K8S Settings

To configure settings specific to Kubernetes in your sandbox server, ensure you set `CONTAINER_DEPLOYMENT=k8s` to activate this feature. Consider adjusting the following parameters:
23 changes: 22 additions & 1 deletion cookbook/zh/sandbox_advanced.md
@@ -91,6 +91,14 @@ OSS_ACCESS_KEY_ID=your-access-key-id
OSS_ACCESS_KEY_SECRET=your-access-key-secret
OSS_BUCKET_NAME=your-bucket-name

# S3 settings
FILE_SYSTEM=s3
S3_ENDPOINT_URL=http://localhost:9000
S3_ACCESS_KEY_ID=your-access-key-id
S3_ACCESS_KEY_SECRET=your-access-key-secret
S3_BUCKET_NAME=your-bucket-name
S3_REGION_NAME=us-east-1

# K8S settings
K8S_NAMESPACE=default
KUBECONFIG_PATH=
@@ -148,12 +156,25 @@ Redis provides caching for sandbox state and status management. If there is only one worker

| Parameter | Description | Default | Notes |
| --- | --- | --- | --- |
- | `FILE_SYSTEM` | File system type | `local` | `local` or `oss` |
+ | `FILE_SYSTEM` | File system type | `local` | `local`, `oss`, or `s3` |
| `OSS_ENDPOINT` | OSS endpoint URL | Empty | Regional endpoint |
| `OSS_ACCESS_KEY_ID` | OSS access key ID | Empty | From the OSS console |
| `OSS_ACCESS_KEY_SECRET` | OSS access key secret | Empty | Keep secure |
| `OSS_BUCKET_NAME` | OSS bucket name | Empty | Pre-created bucket |

#### (Optional) S3 Settings

For distributed file storage using [Amazon S3](https://aws.amazon.com/s3/) or S3-compatible storage (e.g., MinIO):

| Parameter | Description | Default | Notes |
| --- | --- | --- | --- |
| `FILE_SYSTEM` | File system type | `local` | `local`, `oss`, or `s3` |
| `S3_ENDPOINT_URL` | S3 endpoint URL | Empty | For MinIO, use `http://localhost:9000` |
| `S3_ACCESS_KEY_ID` | S3 access key ID | Empty | AWS access key or MinIO access key |
| `S3_ACCESS_KEY_SECRET` | S3 access key secret | Empty | AWS secret key or MinIO secret key |
| `S3_BUCKET_NAME` | S3 bucket name | Empty | Pre-created bucket |
| `S3_REGION_NAME` | S3 region name | `us-east-1` | AWS region or MinIO region |

#### (Optional) K8S Settings

To configure Kubernetes-specific settings in your sandbox server, ensure you set `CONTAINER_DEPLOYMENT=k8s` to activate this feature. Consider adjusting the following parameters:
1 change: 1 addition & 0 deletions pyproject.toml
@@ -19,6 +19,7 @@ dependencies = [
"python-dotenv>=1.0.1",
"kubernetes>=33.1.0",
"shortuuid>=1.0.13",
"boto3>=1.40.51",
"celery[redis]>=5.3.1",
"a2a-sdk>=0.3.0",
"wuying-agentbay-sdk>=0.5.0",
10 changes: 10 additions & 0 deletions src/agentscope_runtime/sandbox/manager/sandbox_manager.py
@@ -201,6 +201,16 @@ def __init__(
self.config.oss_endpoint,
self.config.oss_bucket_name,
)
elif self.file_system == "s3":
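            # Lazy import so boto3 is only required when file_system == "s3"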
from .storage.s3_storage import S3Storage

self.storage = S3Storage(
self.config.s3_access_key_id,
self.config.s3_access_key_secret,
self.config.s3_endpoint_url,
self.config.s3_bucket_name,
self.config.s3_region_name,
)
else:
self.storage = LocalStorage()

5 changes: 5 additions & 0 deletions src/agentscope_runtime/sandbox/manager/server/app.py
@@ -81,6 +81,11 @@ def get_config() -> SandboxManagerEnvConfig:
redis_container_pool_key=settings.REDIS_CONTAINER_POOL_KEY,
k8s_namespace=settings.K8S_NAMESPACE,
kubeconfig_path=settings.KUBECONFIG_PATH,
s3_endpoint_url=settings.S3_ENDPOINT_URL,
s3_access_key_id=settings.S3_ACCESS_KEY_ID,
s3_access_key_secret=settings.S3_ACCESS_KEY_SECRET,
s3_bucket_name=settings.S3_BUCKET_NAME,
s3_region_name=settings.S3_REGION_NAME,
agent_run_access_key_id=settings.AGENT_RUN_ACCESS_KEY_ID,
agent_run_access_key_secret=settings.AGENT_RUN_ACCESS_KEY_SECRET,
agent_run_account_id=settings.AGENT_RUN_ACCOUNT_ID,
11 changes: 10 additions & 1 deletion src/agentscope_runtime/sandbox/manager/server/config.py
@@ -49,12 +49,21 @@ class Settings(BaseSettings):
REDIS_CONTAINER_POOL_KEY: str = "_runtime_sandbox_container_container_pool"

# OSS settings
-    FILE_SYSTEM: Literal["local", "oss"] = "local"
+    FILE_SYSTEM: Literal["local", "oss", "s3"] = "local"
OSS_ENDPOINT: str = "http://oss-cn-hangzhou.aliyuncs.com"
OSS_ACCESS_KEY_ID: str = "your-access-key-id"
OSS_ACCESS_KEY_SECRET: str = "your-access-key-secret"
OSS_BUCKET_NAME: str = "your-bucket-name"

# S3 settings
    # For MinIO, use e.g. http://localhost:9000
    S3_ENDPOINT_URL: Optional[str] = None
S3_ACCESS_KEY_ID: str = "your-access-key-id"
S3_ACCESS_KEY_SECRET: str = "your-access-key-secret"
S3_BUCKET_NAME: str = "your-bucket-name"
S3_REGION_NAME: str = "us-east-1"

# K8S settings
K8S_NAMESPACE: str = "default"
KUBECONFIG_PATH: Optional[str] = None
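
Since `Settings` is a pydantic `BaseSettings` class, the new `S3_*` fields can be supplied via environment variables (or the `.env` file shown in the cookbook). A hypothetical smoke test, assuming the package is importable as `agentscope_runtime` and using a made-up bucket name:

```python
import os

# Hypothetical check that the new S3 settings are picked up from the
# environment; "runtime-artifacts" is a placeholder bucket name.
os.environ["FILE_SYSTEM"] = "s3"
os.environ["S3_BUCKET_NAME"] = "runtime-artifacts"

from agentscope_runtime.sandbox.manager.server.config import Settings

settings = Settings()
print(settings.FILE_SYSTEM, settings.S3_BUCKET_NAME)  # -> s3 runtime-artifacts
```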
2 changes: 2 additions & 0 deletions src/agentscope_runtime/sandbox/manager/storage/__init__.py
@@ -2,9 +2,11 @@
from .data_storage import DataStorage
from .local_storage import LocalStorage
from .oss_storage import OSSStorage
from .s3_storage import S3Storage

__all__ = [
"DataStorage",
"LocalStorage",
"OSSStorage",
"S3Storage",
]
167 changes: 167 additions & 0 deletions src/agentscope_runtime/sandbox/manager/storage/s3_storage.py
@@ -0,0 +1,167 @@
# -*- coding: utf-8 -*-
import os
import re
import hashlib

import boto3
from botocore.exceptions import ClientError

from .data_storage import DataStorage


def calculate_md5(file_path):
"""Calculate the MD5 checksum of a file."""
with open(file_path, "rb") as f:
md5 = hashlib.md5()
while chunk := f.read(8192):
md5.update(chunk)
return md5.hexdigest()


class S3Storage(DataStorage):
def __init__(
self,
access_key_id,
access_key_secret,
endpoint_url,
bucket_name,
region_name="us-east-1",
):
"""
Initialize S3 storage client.

Args:
access_key_id (str): AWS access key ID
access_key_secret (str): AWS secret access key
endpoint_url (str): S3 endpoint URL
(for MinIO, use http://localhost:9000)
bucket_name (str): S3 bucket name
region_name (str): AWS region name (default: us-east-1)
"""
self.bucket_name = bucket_name
self.s3_client = boto3.client(
"s3",
aws_access_key_id=access_key_id,
aws_secret_access_key=access_key_secret,
endpoint_url=endpoint_url,
region_name=region_name,
)

# Ensure bucket exists
self._ensure_bucket_exists()

def _ensure_bucket_exists(self):
"""Ensure the bucket exists, create it if it doesn't."""
try:
self.s3_client.head_bucket(Bucket=self.bucket_name)
        except ClientError as e:
            # The error code may be numeric ("404") or symbolic
            # ("NoSuchBucket"); compare as strings so int() cannot fail
            error_code = e.response["Error"]["Code"]
            if error_code in ("404", "NoSuchBucket"):
# Bucket doesn't exist, create it
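                # Note: on AWS, regions other than us-east-1 also require
                # CreateBucketConfiguration={"LocationConstraint": region}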
self.s3_client.create_bucket(Bucket=self.bucket_name)
else:
raise

def download_folder(self, source_path, destination_path):
"""Download a folder from S3 to the local filesystem."""
if not os.path.exists(destination_path):
os.makedirs(destination_path)

# Ensure source_path ends with '/'
if not source_path.endswith("/"):
source_path += "/"

# List all objects with the given prefix
paginator = self.s3_client.get_paginator("list_objects_v2")
pages = paginator.paginate(Bucket=self.bucket_name, Prefix=source_path)

for page in pages:
if "Contents" in page:
for obj in page["Contents"]:
# Calculate relative path
relative_path = os.path.relpath(obj["Key"], source_path)
local_path = os.path.join(destination_path, relative_path)

# Create directory structure
if obj["Key"].endswith("/"):
# This is a directory
os.makedirs(local_path, exist_ok=True)
else:
# This is a file
os.makedirs(os.path.dirname(local_path), exist_ok=True)
# Download file
self.s3_client.download_file(
self.bucket_name,
obj["Key"],
local_path,
)

def upload_folder(self, source_path, destination_path):
"""Upload a local folder to S3."""
# Ensure destination_path ends with '/'
if not destination_path.endswith("/"):
destination_path += "/"

for root, dirs, files in os.walk(source_path):
# Upload directory structure
for d in dirs:
dir_path = os.path.join(root, d)
relative_path = os.path.relpath(dir_path, source_path)
s3_dir_path = (
os.path.join(
destination_path,
relative_path,
).replace(os.sep, "/")
+ "/"
)

# Create directory object in S3
self.s3_client.put_object(
Bucket=self.bucket_name,
Key=s3_dir_path,
Body=b"",
)

# Upload files
for file in files:
local_file_path = os.path.join(root, file)
relative_path = os.path.relpath(local_file_path, source_path)
s3_file_path = os.path.join(
destination_path,
relative_path,
).replace(os.sep, "/")

local_md5 = calculate_md5(local_file_path)

# Check if file exists in S3 and compare MD5
try:
response = self.s3_client.head_object(
Bucket=self.bucket_name,
Key=s3_file_path,
)
                    # Extract the ETag and check whether it is a plain
                    # MD5 hash (multipart uploads yield "<md5>-<parts>")
                    etag = response["ETag"].strip('"')
                    if re.fullmatch(r"[a-fA-F0-9]{32}", etag):
s3_md5 = etag
else:
# ETag is not a plain MD5 hash,
# assume multipart or encrypted
s3_md5 = None
except ClientError as e:
if e.response["Error"]["Code"] == "404":
s3_md5 = None
else:
raise

# Upload if MD5 does not match or file does not exist
if local_md5 != s3_md5:
self.s3_client.upload_file(
local_file_path,
self.bucket_name,
s3_file_path,
)

def path_join(self, *args):
"""Join path components for S3."""
return "/".join(args)
43 changes: 41 additions & 2 deletions src/agentscope_runtime/sandbox/model/manager_config.py
@@ -15,9 +15,9 @@ class SandboxManagerEnvConfig(BaseModel):
max_length=63 - UUID_LENGTH, # Max length for k8s pod name
)

-    file_system: Literal["local", "oss"] = Field(
+    file_system: Literal["local", "oss", "s3"] = Field(
        ...,
-        description="Type of file system to use: 'local' or 'oss'.",
+        description="Type of file system to use: 'local', 'oss', or 's3'.",
)
storage_folder: Optional[str] = Field(
"",
@@ -79,6 +79,30 @@ class SandboxManagerEnvConfig(BaseModel):
description="Bucket name in OSS. Required if file_system is 'oss'.",
)

# S3 settings
s3_endpoint_url: Optional[str] = Field(
None,
description="S3 endpoint URL. Required if file_system is 's3'. "
"For MinIO, use http://localhost:9000",
)
s3_access_key_id: Optional[str] = Field(
None,
description="Access key ID for S3. Required if file_system is 's3'.",
)
s3_access_key_secret: Optional[str] = Field(
None,
description="Access key secret for S3. Required if file_system "
"is 's3'.",
)
s3_bucket_name: Optional[str] = Field(
None,
description="Bucket name in S3. Required if file_system is 's3'.",
)
s3_region_name: Optional[str] = Field(
"us-east-1",
description="Region name for S3. Required if file_system is 's3'.",
)

# Redis settings
redis_server: Optional[str] = Field(
"localhost",
@@ -206,6 +230,21 @@ def check_settings(self):
f"{field_name} must be set when file_system is 'oss'",
)

if self.file_system == "s3":
required_fields = {
"s3_access_key_id": self.s3_access_key_id,
"s3_access_key_secret": self.s3_access_key_secret,
"s3_bucket_name": self.s3_bucket_name,
}
missing_fields = [
name for name, value in required_fields.items() if not value
]
if missing_fields:
raise ValueError(
f"Missing required S3 configuration fields: "
f"{', '.join(missing_fields)} when file_system is 's3'",
)

if self.redis_enabled:
required_redis_fields = [
self.redis_server,
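
A sketch of the fail-fast behavior this validation adds (hypothetical values; assumes `check_settings` runs as a pydantic validator and that the model's other required fields would be supplied in real use):

```python
from agentscope_runtime.sandbox.model.manager_config import (
    SandboxManagerEnvConfig,
)

# Hypothetical partial construction: only fields relevant to the new check
# are shown; the real model requires additional fields as well.
try:
    config = SandboxManagerEnvConfig(
        file_system="s3",
        s3_endpoint_url="http://localhost:9000",
        # s3_access_key_id, s3_access_key_secret, s3_bucket_name omitted
    )
except ValueError as e:  # pydantic's ValidationError subclasses ValueError
    # Expected: "Missing required S3 configuration fields: ..."
    print(e)
```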