Project 2: Xuanyi Zhou #15

Open · wants to merge 4 commits into base: master
122 changes: 116 additions & 6 deletions README.md
@@ -3,12 +3,122 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* Xuanyi Zhou
* [LinkedIn](https://www.linkedin.com/in/xuanyi-zhou-661365192/), [Github](https://github.com/lukedan)
* Tested on: Windows 10, i7-9750H @ 2.60GHz 32GB, RTX 2060 6GB

## Features

- Baseline CPU and Thrust implementations of the scan and compact operations.
- Naive implementation of the scan operation.
- Recursive work-efficient scan that supports arbitrary array lengths and uses shared memory with bank-conflict optimizations.
- Efficient compact implementation built on the efficient scan.
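The scan used throughout is an exclusive prefix sum, and compaction is map-to-boolean + scan + scatter. Below is a minimal serial sketch of these semantics; the helper names are illustrative, not the project's API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Exclusive prefix sum: out[i] = sum of in[0..i-1], out[0] = 0.
std::vector<int> exclusiveScan(const std::vector<int> &in) {
    std::vector<int> out(in.size());
    int total = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = total;
        total += in[i];
    }
    return out;
}

// Compact: keep nonzero elements. A scan of the boolean map yields
// each survivor's output index, which the scatter step then uses.
std::vector<int> compact(const std::vector<int> &in) {
    std::vector<int> bools(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        bools[i] = in[i] != 0;
    }
    std::vector<int> idx = exclusiveScan(bools);
    std::vector<int> out(in.empty() ? 0 : idx.back() + bools.back());
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (bools[i]) {
            out[idx[i]] = in[i];  // scatter survivors to their slots
        }
    }
    return out;
}
```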

## Questions

- Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.

For the naive scan, changing the block size barely affects execution time, as the operation is bound by memory bandwidth. For the efficient scan and compaction, the optimal block size is around 64 to 128. Tracing the program shows that Thrust also uses a block size of 128.

- Compare all of these GPU Scan implementations (Naive, Work-Efficient, and Thrust) to the serial CPU version of Scan. Plot a graph of the comparison (with array size on the independent axis).

Below is the graph of run time of all implementations:

![](img/perf.png)

See `data.xlsx` for the raw data. Note that since the efficient scan implementation is recursive, some memory allocations are included in the final time.

For small array sizes, all methods are fairly fast and the run times are relatively unpredictable. As the array size increases, the relationship between the run times becomes consistent: the naive scan is the slowest, followed by the CPU implementation, then the work-efficient scan, and the Thrust implementation is the fastest.

Looking at the Nsight timeline and kernel calls, the Thrust implementation spawns far fewer threads than the work-efficient implementation, which requires one thread for every two array elements. The Thrust implementation also uses far more registers per thread. This suggests that Thrust may be using an optimization mentioned in the GPU Gems chapter: each thread scans a few elements serially, and the partial results are aggregated by scanning the per-thread sums.
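That coarsening idea can be sketched serially as follows; this is an illustration of the technique, not Thrust's actual implementation, and the names are hypothetical. Each logical "thread" scans K elements, the per-chunk sums are themselves scanned, and the chunk offsets are added back.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Coarsened exclusive scan: K = 4 mirrors the 4x work reduction
// discussed in the text. Plain C++ stand-in for the GPU version.
std::vector<int> coarsenedScan(const std::vector<int> &in, int k = 4) {
    std::size_t n = in.size();
    std::size_t chunks = (n + k - 1) / k;
    std::vector<int> out(n), sums(chunks);
    for (std::size_t c = 0; c < chunks; ++c) {  // per-"thread" serial scan
        int total = 0;
        std::size_t end = std::min(n, (c + 1) * static_cast<std::size_t>(k));
        for (std::size_t i = c * k; i < end; ++i) {
            out[i] = total;
            total += in[i];
        }
        sums[c] = total;
    }
    int running = 0;  // exclusive scan over the per-chunk sums
    for (std::size_t c = 0; c < chunks; ++c) {
        int s = sums[c];
        sums[c] = running;
        running += s;
    }
    for (std::size_t c = 0; c < chunks; ++c) {  // add chunk offsets back
        std::size_t end = std::min(n, (c + 1) * static_cast<std::size_t>(k));
        for (std::size_t i = c * k; i < end; ++i) {
            out[i] += sums[c];
        }
    }
    return out;
}
```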

Interestingly, the timeline shows that the gap between copying data to the GPU and copying it back is roughly the same for the work-efficient and Thrust implementations, but the work-efficient implementation is busy throughout that gap while the Thrust implementation is largely idle for its first half. I currently do not have a good explanation for this.

- Write a brief explanation of the phenomena you see here.

Since the naive implementation does not use any shared memory, it is understandable that it is slower than the CPU implementation: the GPU has a much lower clock speed than the CPU and far less cache.

The Thrust implementation and the work-efficient implementation are both faster than the CPU version. The Thrust version is about three times faster than the work-efficient one, possibly because it applies further optimizations such as the one described in the previous answer.

The optimization described above likely works because it reduces the number of elements processed by the parallel scan by a large factor (4x if four elements are summed serially), while summing a few numbers serially per thread is cheap and parallelizes well.

- Paste the output of the test program into a triple-backtick block in your README.

The tests for radix sort can be seen at the end of the output.
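The radix sort builds on the same scan primitive: each pass stably partitions the array by one bit, with an exclusive scan of the zero-flags providing destination indices. A serial sketch of the technique follows; the names are illustrative and the real passes run as CUDA kernels.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One LSD radix-sort pass: stable split by a single bit using an
// exclusive scan, the same building block compaction uses.
std::vector<int> splitByBit(const std::vector<int> &in, int bit) {
    std::size_t n = in.size();
    std::vector<int> zeros(n), idx(n), out(n);
    for (std::size_t i = 0; i < n; ++i) {
        zeros[i] = ((in[i] >> bit) & 1) == 0;
    }
    int total = 0;  // exclusive scan of the zero-flags
    for (std::size_t i = 0; i < n; ++i) {
        idx[i] = total;
        total += zeros[i];
    }
    // Elements with bit==0 go first (stably); elements with bit==1
    // fill the remaining slots. Ones before i = i - idx[i].
    int ones = total;
    for (std::size_t i = 0; i < n; ++i) {
        out[zeros[i] ? idx[i] : ones + static_cast<int>(i) - idx[i]] = in[i];
    }
    return out;
}

// Full sort of non-negative ints: one split pass per bit.
std::vector<int> radixSort(std::vector<int> v) {
    for (int bit = 0; bit < 31; ++bit) {
        v = splitByBit(v, bit);
    }
    return v;
}
```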

```
****************
** SCAN TESTS **
****************
[ 45 4 11 40 10 5 35 48 33 44 0 28 24 ... 41 0 ]
==== cpu scan, power-of-two ====
elapsed time: 78.9076ms (std::chrono Measured)
[ 0 45 49 60 100 110 115 150 198 231 275 275 303 ... -1007647599 -1007647558 ]
==== cpu scan, non-power-of-two ====
elapsed time: 77.9638ms (std::chrono Measured)
[ 0 45 49 60 100 110 115 150 198 231 275 275 303 ... -1007647704 -1007647672 ]
passed
==== naive scan, power-of-two ====
elapsed time: 133.709ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 112.619ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 11.0549ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 11.0906ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 4.04538ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 4.0943ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 3 0 2 2 1 0 0 2 2 3 2 3 1 ... 2 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 307.968ms (std::chrono Measured)
[ 3 2 2 1 2 2 3 2 3 1 3 2 2 ... 2 2 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 304.929ms (std::chrono Measured)
[ 3 2 2 1 2 2 3 2 3 1 3 2 2 ... 2 2 ]
passed
==== cpu compact with scan ====
elapsed time: 422.665ms (std::chrono Measured)
[ 3 2 2 1 2 2 3 2 3 1 3 2 2 ... 2 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 25.4493ms (CUDA Measured)
[ 3 2 2 1 2 2 3 2 3 1 3 2 2 ... 2 2 ]
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 23.5873ms (CUDA Measured)
[ 3 2 2 1 2 2 3 2 3 1 3 2 2 ... 2 2 ]
passed
==== thrust compact, power-of-two ====
elapsed time: 4.44602ms (CUDA Measured)
passed
==== thrust compact, non-power-of-two ====
elapsed time: 4.03315ms (CUDA Measured)
passed

*********************
** RADIX SORT TEST **
*********************
[ 943102324 1728649027 1523795418 230368144 1853983028 219035492 1373487995 539655339 345004302 1682352720 528619710 1142157171 735013686 ... 646714987 484939495 ]
==== radix sort, power-of-two ====
elapsed time: 1305.98ms (CUDA Measured)
passed
[ 42 45 55 58 63 67 78 122 162 170 188 206 221 ... 1999999985 1999999998 ]
==== radix sort, non-power-of-two ====
elapsed time: 1338.5ms (CUDA Measured)
passed
[ 42 45 55 58 63 67 78 122 162 170 188 206 221 ... 1999999985 1999999998 ]
Press any key to continue . . .
```
Binary file added data.xlsx
Binary file not shown.
Binary file added img/perf.png
51 changes: 49 additions & 2 deletions src/main.cpp
@@ -7,13 +7,14 @@
*/

#include <cstdio>
#include <random>
#include <stream_compaction/cpu.h>
#include <stream_compaction/naive.h>
#include <stream_compaction/efficient.h>
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 27; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
@@ -137,16 +138,62 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient compact, power-of-two");
count = StreamCompaction::Efficient::compact(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient compact, non-power-of-two");
count = StreamCompaction::Efficient::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

zeroArray(SIZE, c);
printDesc("thrust compact, power-of-two");
count = StreamCompaction::Thrust::compact(SIZE, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, c);

zeroArray(SIZE, c);
printDesc("thrust compact, non-power-of-two");
count = StreamCompaction::Thrust::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);


printf("\n");
printf("*********************\n");
printf("** RADIX SORT TEST **\n");
printf("*********************\n");

// generate the array with <random> instead of the provided genArray helper,
// because on my machine RAND_MAX is 0x7FFF, so not all bits would be exercised
std::default_random_engine rand(std::random_device{}());
std::uniform_int_distribution<int> dist(0, 2000000000);
for (std::size_t i = 0; i < SIZE; ++i) {
a[i] = dist(rand);
}
printArray(SIZE, a, true);

printDesc("radix sort, power-of-two");
std::memcpy(b, a, sizeof(int) * SIZE);
std::sort(b, b + SIZE);
StreamCompaction::Efficient::radix_sort(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printCmpResult(SIZE, b, c);
printArray(SIZE, c, true);

printDesc("radix sort, non-power-of-two");
std::memcpy(b, a, sizeof(int) * NPOT);
std::sort(b, b + NPOT);
StreamCompaction::Efficient::radix_sort(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printCmpResult(NPOT, b, c);
printArray(NPOT, c, true);


system("pause"); // stop Win32 console from closing on exit
delete[] a;
delete[] b;
15 changes: 13 additions & 2 deletions stream_compaction/common.cu
@@ -23,7 +23,11 @@ namespace StreamCompaction {
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
int iSelf = blockIdx.x * blockDim.x + threadIdx.x;
if (iSelf >= n) {
return;
}
bools[iSelf] = idata[iSelf] != 0 ? 1 : 0;
}

/**
@@ -32,7 +36,14 @@
*/
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
int iSelf = blockIdx.x * blockDim.x + threadIdx.x;
if (iSelf >= n) {
return;
}

if (bools[iSelf] != 0) {
odata[indices[iSelf]] = idata[iSelf];
}
}

}
44 changes: 39 additions & 5 deletions stream_compaction/cpu.cu
@@ -19,7 +19,13 @@ namespace StreamCompaction {
*/
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();

int total = 0;
for (int i = 0; i < n; ++i) {
odata[i] = total;
total += idata[i];
}

timer().endCpuTimer();
}

@@ -30,9 +36,17 @@
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();

int count = 0;
for (int i = 0; i < n; ++i) {
if (idata[i] != 0) {
odata[count] = idata[i];
++count;
}
}

timer().endCpuTimer();
return count;
}

/**
@@ -42,9 +56,29 @@
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();

// scan
int total = 0;
for (int i = 0; i < n; ++i) {
odata[i] = total;
total += (idata[i] == 0 ? 0 : 1);
}

// scatter
int count = 0;
for (int i = 1; i < n; ++i) {
if (odata[i] != count) {
odata[count] = idata[i - 1];
++count;
}
}
if (idata[n - 1] != 0) {
odata[count] = idata[n - 1];
++count;
}

timer().endCpuTimer();
return count;
}
}
}