Previous change logs can be found at CHANGELOG-1.1
- When adding a new pool, the new logical pool's status defaults to DENY; a tool is needed to enable it.
- When an I/O error occurs, whether nebd-server discards the corresponding RPC request depends on a configuration setting (discard by default).
- When the client reads unallocated space, it does not allocate the segment, which improves space utilization (see the toy model after this list).
- Add the data striping feature (a sketch of the offset mapping it implies follows this list).
- Optimize the Ansible scripts, the build scripts, and log printing.
- Translate some documents and code comments from Chinese to English.
- curve_ops_tool: report chunkserver capacity statistics by pool.
- Add a script for k8s to attach curve volumes.
- curve_ops_tool improvements.
- Clean up the unit test temporary folder.
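
To make the unallocated-read change above concrete, here is a toy model of the behavior, under the assumption that a read touching an unallocated segment is satisfied with zeros and does not trigger allocation; every name and size below is invented for illustration and none of it is Curve's client API.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <set>
#include <vector>

// Toy model of the read-path change: a read that touches an unallocated
// segment returns zeros instead of allocating it, so sparse volumes stay
// sparse. The in-memory "allocated segment set" is purely illustrative.
class ToyVolume {
 public:
    explicit ToyVolume(uint64_t segmentSize) : segmentSize_(segmentSize) {}

    void Write(uint64_t offset) { allocated_.insert(offset / segmentSize_); }

    void Read(char* buf, uint64_t offset, size_t length) const {
        if (allocated_.count(offset / segmentSize_) == 0) {
            std::memset(buf, 0, length);   // no allocation on read
            return;
        }
        // ... a real client would read the allocated segment here ...
        std::memset(buf, 'D', length);     // stand-in for stored data
    }

 private:
    uint64_t segmentSize_;
    std::set<uint64_t> allocated_;
};

int main() {
    ToyVolume vol(64ULL << 20);            // 64 MiB segments (illustrative)
    std::vector<char> buf(8, 0);
    vol.Read(buf.data(), 0, buf.size());   // unallocated: zeros, no allocation
    std::printf("cold read byte: %d\n", buf[0]);
    vol.Write(0);                          // allocate the first segment
    vol.Read(buf.data(), 0, buf.size());   // now served from "storage"
    std::printf("warm read byte: %c\n", buf[0]);
    return 0;
}
```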
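
For the striping feature, the sketch below shows the offset-to-chunk mapping that a round-robin striped layout implies; `MapToStripe`, `stripeUnit`, and `stripeCount` are illustrative names, not Curve's code.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: maps a logical offset to a stripe location.
// stripeUnit is the number of contiguous bytes written to one chunk
// before moving to the next; stripeCount is the number of chunks the
// volume round-robins across.
struct StripeLocation {
    uint64_t chunkIndex;    // which chunk in the stripe group
    uint64_t chunkOffset;   // byte offset inside that chunk
};

StripeLocation MapToStripe(uint64_t logicalOffset,
                           uint64_t stripeUnit,
                           uint64_t stripeCount) {
    uint64_t blockIndex = logicalOffset / stripeUnit;   // which stripe unit
    StripeLocation loc;
    loc.chunkIndex = blockIndex % stripeCount;          // round-robin chunk
    loc.chunkOffset = (blockIndex / stripeCount) * stripeUnit
                      + logicalOffset % stripeUnit;     // offset in chunk
    return loc;
}

int main() {
    // With a 1 MiB stripe unit across 4 chunks, offset 5 MiB lands
    // on chunk 1 at offset 1 MiB.
    StripeLocation loc = MapToStripe(5ULL << 20, 1ULL << 20, 4);
    std::printf("chunk %llu, offset %llu\n",
                (unsigned long long)loc.chunkIndex,
                (unsigned long long)loc.chunkOffset);
    return 0;
}
```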
Hardware: 6 nodes, each with:
- 20x SATA SSD Intel® SSD DC S3500 Series 800G
- 2x Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
- 2x Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection, bond mode is 802.3ad with layer2+3 hash policy
- 251G RAM
The performance test is based on curve-nbd: each block device is 200GB, all configurations are defaults, and each chunkserver is deployed on one SSD.
1 NBD block device:
item | iops/bandwidth | avg-latency | 99th-latency | striped volume iops/bandwidth | striped volume avg-latency | striped volume 99th-latency |
---|---|---|---|---|---|---|
4K randwrite, 128 depth | 97,700 iops | 1.3 ms | 2.0 ms | 97,100 iops | 1.3 ms | 3.0 ms |
4K randread, 128 depth | 119,000 iops | 1.07 ms | 1.8 ms | 98,000 iops | 1.3 ms | 2.2 ms |
512K write, 128 depth | 208 MB/s | 307 ms | 347 ms | 462 MB/s | 138 ms | 228 ms |
512K read, 128 depth | 311 MB/s | 206 ms | 264 ms | 843 MB/s | 75 ms | 102 ms |
10 NBD block devices:
item | iops/bandwidth | avg-latency | 99th-latency | striped volume iops/bandwidth | striped volume avg-latency | striped volume 99th-latency |
---|---|---|---|---|---|---|
4K randwrite, 128 depth | 231,000 iops | 5.6 ms | 50 ms | 227,000 iops | 5.9 ms | 53 ms |
4K randread, 128 depth | 350,000 iops | 3.7 ms | 8.2 ms | 345,000 iops | 3.8 ms | 8.2 ms |
512K write, 128 depth | 805 MB/s | 415 ms | 600 ms | 1,077 MB/s | 400 ms | 593 ms |
512K read, 128 depth | 2,402 MB/s | 267 ms | 275 ms | 3,313 MB/s | 201 ms | 245 ms |
- Fix a clone deletion bug.
- nbd unmap now waits for the worker thread to exit (see the sketch after this list).
- MDS now checks the file attach status.
- Fixed an issue where the chunkserver sometimes did not exit after a disk failure caused a copyset error.
- Fixed a direct_fd leak when the WAL writes to disk.
- Fixed the atomicity problem of GetFile when the file pool is not used.
- Fixed an MDS start error on InitEtcdClient() in cluster_basic_test that produced a core dump.
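
The nbd unmap fix above boils down to joining the worker thread before tearing down the state it uses. A minimal sketch, with `NbdServer` and all of its members invented for illustration:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Minimal sketch of the unmap fix: signal the I/O worker to stop and
// join it before releasing resources it may still touch.
class NbdServer {
 public:
    void Start() {
        running_.store(true);
        worker_ = std::thread([this] {
            while (running_.load()) {
                // ... serve block requests ...
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
        });
    }

    void Unmap() {
        running_.store(false);
        if (worker_.joinable()) {
            worker_.join();   // the fix: wait for the thread to exit
        }
        // Only now is it safe to free buffers the worker used.
    }

 private:
    std::atomic<bool> running_{false};
    std::thread worker_;
};

int main() {
    NbdServer server;
    server.Start();
    server.Unmap();
    return 0;
}
```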
- curve-nbd: support simulating a 512-byte logical/physical block device (a sketch of the underlying NBD ioctl follows this list)
- curve-nbd: prevent unmapping a mounted device
- curve-nbd: add auto-mount options
- nebd: support multiple instances for different users
- client: retry segment allocation until it succeeds (see the retry-loop sketch after this list)
- mds: schedule pending online chunkservers
- ansible: backup and recover curvetab
- debian package: backup and recover curvetab
- scripts: fix a potential overflow when converting an IP to a value
- client: fix a segmentation fault in SourceReader
- client: explicitly stop the LeaseExecutor in FileInstance::UnInitialize
- curve-nbd: fix concurrent nbd map
- snapshotclone: fix lost task cancellations
- tools: remove copyset data after remove peer
- mds: downgrade the stale-heartbeat update log from error to warning
- chunkserver: fix config change epoch update error
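
On the 512-byte block device item above: the generic Linux mechanism for advertising a block size on an NBD device is the `NBD_SET_BLKSIZE` ioctl. The sketch below shows that general mechanism only; it is not necessarily the exact code path curve-nbd takes, and error handling is minimal.

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nbd.h>
#include <cstdio>

int main() {
    // Requires root and an available NBD device node.
    int fd = open("/dev/nbd0", O_RDWR);
    if (fd < 0) {
        std::perror("open /dev/nbd0");
        return 1;
    }
    // Advertise 512-byte blocks to the kernel for this device.
    if (ioctl(fd, NBD_SET_BLKSIZE, 512UL) != 0) {
        std::perror("NBD_SET_BLKSIZE");
    }
    close(fd);
    return 0;
}
```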
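
The segment-allocation retry can be pictured as a loop with capped backoff, as sketched below; `AllocateSegmentOnce` and the timing constants are assumptions, not Curve's client internals.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>

// Stand-in for one allocation attempt; here it succeeds on the third retry
// purely so the example terminates.
bool AllocateSegmentOnce(int attempt) {
    return attempt >= 3;
}

// Keep retrying allocation with capped exponential backoff instead of
// failing the I/O.
void AllocateSegmentUntilSuccess() {
    int attempt = 0;
    auto delay = std::chrono::milliseconds(100);
    while (!AllocateSegmentOnce(attempt++)) {
        std::this_thread::sleep_for(delay);
        // Cap the backoff so retries stay frequent enough for I/O paths.
        delay = std::min(delay * 2, std::chrono::milliseconds(1000));
    }
    std::printf("segment allocated after %d attempt(s)\n", attempt);
}

int main() {
    AllocateSegmentUntilSuccess();
    return 0;
}
```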