Changes to support RockDB examples and RocksDB on YCSB #31

Open: wants to merge 7 commits into base: master
1 change: 1 addition & 0 deletions README.md
@@ -126,6 +126,7 @@ SplitFS is under active development.
6. Filebench
7. LMDB
8. FIO
9. RocksDB (with YCSB)

## Testing
[PJD POSIX Test Suite](https://www.tuxera.com/community/posix-test-suite/) that tests primarily the metadata operations was run on SplitFS successfully. SplitFS passes all tests.
15 changes: 15 additions & 0 deletions dependencies/rocksdb_deps.sh
@@ -0,0 +1,15 @@
#!/bin/bash

# Install the RocksDB build dependencies: gflags plus the compression
# libraries (snappy, zlib, bzip2, lz4, zstd). The -y flag answers the
# install prompts so the script can run unattended.
sudo apt-get update

sudo apt-get install -y libgflags-dev

sudo apt-get install -y libsnappy-dev

sudo apt-get install -y zlib1g-dev

sudo apt-get install -y libbz2-dev

sudo apt-get install -y liblz4-dev

sudo apt-get install -y libzstd-dev
9 changes: 6 additions & 3 deletions experiments.md
@@ -18,7 +18,8 @@ We evaluate and benchmark on SplitFS using different application benchmarks like
* `$ export PATH=$PATH:$JAVA_HOME/bin`
* Check installation using `java -version`
* `$ sudo apt install maven`

3. RocksDB: Upgrade gcc to version 4.8 or newer to get C++11 support. On Ubuntu, run `cd dependencies; ./rocksdb_deps.sh; cd ..`
If you face any dependency issues, please refer to the [doc](https://github.com/utsaslab/SplitFS/blob/master/rocksdb/INSTALL.md#dependencies)
---

### Kernel Setup
@@ -49,14 +50,15 @@ We evaluate and benchmark on SplitFS using different application benchmarks like
9. LMDB: `cd scripts/lmdb; ./compile_lmdb.sh; cd ../..` -- This will compile LMDB
10. Filebench: `cd scripts/filebench; ./compile_filebench.sh; cd ../..` -- This will compile Filebench
11. FIO: `cd scripts/fio; ./compile_fio.sh; cd ../..` -- This will compile FIO
12. RocksDB: `cd scripts/ycsb_rocksdb; ./compile_rocksdb.sh; cd ../..` -- This will compile RocksDB

Note: The <num_threads> argument to the compilation scripts sets the number of threads used for compilation, to speed up the build.
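For example, `./compile_rocksdb.sh 8` would compile RocksDB with 8 threads, assuming the scripts take the thread count as their first argument.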

---

### Workload Generation

1. YCSB: `cd scripts/ycsb; ./gen_workloads.sh; cd ../..` -- This will generate the YCSB workload files to be run with LevelDB, because YCSB does not natively support LevelDB, and has been added to the benchmarks of LevelDB
1. YCSB: `cd scripts/ycsb; ./gen_workloads.sh; cd ../..` -- This will generate the YCSB workload files to be run with LevelDB and RocksDB; because YCSB does not natively support LevelDB or RocksDB (C++), the workloads have been added to the benchmarks of LevelDB and RocksDB
2. TPCC: `cd scripts/tpcc; ./gen_workload.sh; cd ../..` -- This will create an initial database on SQLite on which to run the TPCC workload
3. rsync: `cd scripts/rsync/; sudo ./rsync_gen_workload.sh; cd ../..` -- This will create the rsync workload according to the backup data distribution as mentioned in the paper
4. tar: `cd scripts/tar/; sudo ./gen_workload.sh; cd ../..` -- This will create the tar workload as mentioned in the paper
@@ -66,7 +68,7 @@ Note: The <num_threads> argument in the compilation scripts performs the compila

### Run Application Workloads

1. YCSB: `cd scripts/ycsb; ./run_ycsb.sh; cd ../..` -- This will run all the YCSB workloads on LevelDB (Load A, Run A-F, Load E, Run E) with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-strict`
1. YCSB-LevelDB: `cd scripts/ycsb; ./run_ycsb.sh; cd ../..` -- This will run all the YCSB workloads on LevelDB (Load A, Run A-F, Load E, Run E) with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-strict`
2. TPCC: `cd scripts/tpcc; ./run_tpcc.sh; cd ../..` -- This will run the TPCC workload on SQLite3 with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-POSIX`
3. rsync: `cd scripts/rsync; ./run_rsync.sh; cd ../..` -- This will run the rsync workload with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-sync`
4. tar: `cd scripts/tar; ./run_tar.sh; cd ../..` -- This will run the tar workload with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-POSIX, SplitFS-sync, SplitFS-strict`
@@ -81,6 +83,7 @@ Note: The <num_threads> argument in the compilation scripts performs the compila
NOVA Relaxed, PMFS, SplitFS-POSIX`
8. FIO: `cd scripts/fio; ./run_fio.sh; cd ../..` --
This will run the random read-write workload with 50:50 reads and writes with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS-POSIX`
9. YCSB-RocksDB: `cd scripts/ycsb_rocksdb; ./run_ycsb_rocksdb.sh; cd ../..` -- This will run all the YCSB workloads on RocksDB (Load A, Run A-F, Load E, Run E) with `ext4-DAX, NOVA strict, NOVA Relaxed, PMFS, SplitFS (based on the one found in splitfs/libnvp.so)`
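(Here SplitFS is attached to RocksDB by preloading the library, e.g. `LD_PRELOAD=splitfs/libnvp.so`; the run script is assumed to set this up.)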

---

18 changes: 18 additions & 0 deletions implementation.md
@@ -0,0 +1,18 @@
## Implementation details
Some implementation details of the calls intercepted by SplitFS:
- `fallocate, posix_fallocate`
- We pass this to the kernel.
- But before we pass this on to the kernel, we fsync (relink) the file so that the kernel and SplitFS both see the file contents and metadata consistently.
- We update the file size in SplitFS after the system call, before returning to the application.
- TODO: Figure out whether the ext4 implementation of fallocate can move existing blocks to a new location (perhaps to make the file contiguous). If so, we will also have to clear the mmap table in SplitFS, because the mappings might become stale after the system call.
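A minimal sketch of how such an intercepted `fallocate` might look. The helper names (`splitfs_relink_fsync`, `splitfs_update_size`) are illustrative stand-ins for the SplitFS internals, not the actual code:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/syscall.h>

/* Stand-ins for SplitFS internals -- the real names and logic differ. */
static void splitfs_relink_fsync(int fd) { fsync(fd); }                   /* relink staged writes */
static void splitfs_update_size(int fd, off_t sz) { (void)fd; (void)sz; } /* refresh cached size */

int splitfs_fallocate(int fd, int mode, off_t offset, off_t len)
{
    /* Relink (fsync) first so the kernel and SplitFS both see the
     * file contents and metadata consistently. */
    splitfs_relink_fsync(fd);

    /* Pass the allocation itself to the kernel, bypassing libc. */
    long ret = syscall(SYS_fallocate, fd, mode, offset, len);

    /* Re-read the size the kernel now reports and update SplitFS's
     * view before returning to the application. */
    struct stat st;
    if (ret == 0 && fstat(fd, &st) == 0)
        splitfs_update_size(fd, st.st_size);
    return (int)ret;
}
```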
- `sync_file_range`
- sync_file_range guarantees data durability only for overwrites on certain filesystems. It does not guarantee metadata durability on any filesystem.
- In the POSIX mode of SplitFS too, we guarantee data durability but not metadata durability, i.e., we provide the same guarantees as POSIX.
- Data durability comes from doing non-temporal writes to the memory-mapped file, so we don't need to do anything extra here. Where the file is not memory-mapped (e.g., file size < 16MB), we pass the call on to the underlying filesystem.
- In the sync and strict modes of SplitFS, durability is guaranteed by the filesystem itself, and sync_file_range is not required for it.
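A sketch of that pass-through decision, with a hypothetical `splitfs_is_mmapped` standing in for SplitFS's real file tracking:

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

/* Stand-in: the real SplitFS knows which files it has memory-mapped. */
static int splitfs_is_mmapped(int fd) { (void)fd; return 1; }

int splitfs_sync_file_range(int fd, off_t offset, off_t nbytes, unsigned int flags)
{
    /* Overwrites to a memory-mapped file were done with non-temporal
     * stores, so the data is already durable: nothing to do. */
    if (splitfs_is_mmapped(fd))
        return 0;

    /* Otherwise (e.g. file size < 16MB, never mmapped), forward the
     * call to the underlying filesystem. */
    return (int)syscall(SYS_sync_file_range, fd, offset, nbytes, flags);
}
```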
- `O_CLOEXEC`
- This is supported via `open` and `fcntl` in SplitFS. We store this flag value in SplitFS.
- In the supported `exec` calls, we first close the files before passing the `exec` call to the kernel.
- We do not currently handle the failure scenario for `exec`.
- `fcntl`
- Currently, SplitFS only handles the value of the close-on-exec flag (`FD_CLOEXEC`) before passing the call through to the kernel.
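A sketch of the flag bookkeeping described in the last two items. The fixed-size table and function names are illustrative, not the actual SplitFS code:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <stdbool.h>

#define MAX_FD 1024
static bool cloexec[MAX_FD]; /* stand-in for SplitFS's per-fd flag store */

/* Record the close-on-exec bit when it is set via fcntl, then pass
 * the call through to the kernel. */
int splitfs_fcntl_setfd(int fd, int arg)
{
    if (fd >= 0 && fd < MAX_FD)
        cloexec[fd] = (arg & FD_CLOEXEC) != 0;
    return fcntl(fd, F_SETFD, arg);
}

/* Close every file marked close-on-exec before handing the exec to
 * the kernel. (The exec failure scenario is not handled, matching
 * the limitation noted above.) */
int splitfs_execve(const char *path, char *const argv[], char *const envp[])
{
    for (int fd = 0; fd < MAX_FD; fd++)
        if (cloexec[fd])
            close(fd);
    return execve(path, argv, envp);
}
```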
12 changes: 12 additions & 0 deletions rocksdb/AUTHORS
@@ -0,0 +1,12 @@
Facebook Inc.
Facebook Engineering Team

Google Inc.
# Initial version authors:
Jeffrey Dean <[email protected]>
Sanjay Ghemawat <[email protected]>

# Partial list of contributors:
Kevin Regan <[email protected]>
Johan Bilien <[email protected]>
Matthew Von-Maszewski <https://github.com/matthewvon> (Basho Technologies)