Project 2: Dayu Li #18

Open
wants to merge 5 commits into
base: master
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
69 changes: 63 additions & 6 deletions README.md
@@ -3,12 +3,69 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Dayu Li
* Tested on: Windows 10, i7-10700K @ 3.80GHz 16GB, GTX 2070 8150MB (Personal laptop)

### (TODO: Your README)
## Features

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
##### CPU:

- CPU Scan, CPU Compact

##### GPU:

- Naive Scan, Work-Efficient Scan, Thrust Scan

- Work-Efficient Compact

###### Extra:

- Why is the CPU scan faster than the work-efficient scan?

- Radix sort algorithm

![](img/1.png)
## Performance Analysis
### Blocksize Optimization
The graph below shows how the work-efficient compact runtime changes with the block size. As in the previous project, I observed no obvious change in runtime as the block size varied.

![](img/2.png)

### Runtime Comparison
The graphs below compare the runtime of the different scan implementations across array sizes.

![](img/3.png)
- When the array size is small, the CPU scan performs best. The GPU versions pay extra overhead for kernel launches and CUDA API calls, so, as expected, the parallel implementations run slower than a simple serial loop when the task is small.

- When the array size is large, the CPU scan slows down and the GPU implementations pull ahead, with Thrust typically the fastest, likely because it uses shared memory instead of global memory.
### Extra Credit:
#### Why is My GPU Approach So Slow?
One reason the work-efficient scan is slower than expected is that many threads are created during the computation but never do useful work. These idle threads cost proportionally more as the array size grows.

To resolve this, I changed the number of threads launched at each iteration i of the work-efficient compact from 2^d to 2^(d-i). This reduces the number of threads created without changing any of the results.

Before:
- number of threads = 2^d
After:
- number of threads = 2^(d-i)
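To see how much launch work this saves, here is a small host-side sketch (the helper name is hypothetical, not part of the project code) that totals the threads launched across all d iterations, before and after the change:

```cpp
#include <cassert>
#include <cstdint>

// Totals the threads launched across the d iterations of the
// work-efficient pass over an array of size n = 2^d.
// Before the fix, every iteration launches 2^d threads; after it,
// iteration i launches only the 2^(d - i) threads described above.
uint64_t threadsLaunched(int d, bool optimized) {
    uint64_t total = 0;
    for (int i = 0; i < d; ++i) {
        total += optimized ? (1ull << (d - i)) : (1ull << d);
    }
    return total;
}
```

For d = 16 this drops the launch total from 16 × 65536 ≈ 1.0M threads to about 131K, which is consistent in direction with the speedups measured below.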

After this optimization, the work-efficient scan improves significantly.
- tested with block size 128

| |Before|After|
|--|--|--|
|2^8|0.2152 ms|0.03081 ms|
|2^16|4.31 ms|1.062 ms|

#### Radix sort (Not finished)

The implementation follows the course slides. Uncommenting this block in main.cpp runs the test:
```cpp
//printDesc("radix sort test");
//genArray(RADIXSIZE, radix, 32);
//zeroArray(RADIXSIZE, radix_sorted);
//StreamCompaction::Efficient::radixSort(RADIXSIZE, 6, radix, radix_sorted);
//printArray(RADIXSIZE, radix, true);
//printArray(RADIXSIZE, radix_sorted, true);
```
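Since the CUDA version is unfinished, here is a serial sketch of the scan-based split from the slides, one pass per bit. The function name and the serial exclusive scan are stand-ins for the device kernels, not the project's actual implementation:

```cpp
#include <cassert>
#include <vector>

// Serial model of one LSD radix-sort pass per bit: compute the "false"
// flags e (bit == 0), exclusive-scan them into f to get destinations
// for the zeros, and place the ones after totalFalses, as in the slides.
std::vector<int> radixSortSerial(std::vector<int> data, int numBits) {
    int n = (int)data.size();
    for (int bit = 0; bit < numBits; ++bit) {
        std::vector<int> e(n), f(n), out(n);
        for (int i = 0; i < n; ++i) e[i] = ((data[i] >> bit) & 1) ? 0 : 1;
        int run = 0;
        for (int i = 0; i < n; ++i) { f[i] = run; run += e[i]; }  // exclusive scan of e
        int totalFalses = run;
        for (int i = 0; i < n; ++i) {
            int dst = e[i] ? f[i] : totalFalses + (i - f[i]);
            out[dst] = data[i];
        }
        data = out;  // next pass sorts on the following bit
    }
    return data;
}
```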
Binary file added img/1.png
Binary file added img/2.png
Binary file added img/3.png
14 changes: 13 additions & 1 deletion src/main.cpp
@@ -13,11 +13,14 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 4; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
const int RADIXSIZE = 20;
int *a = new int[SIZE];
int *b = new int[SIZE];
int *c = new int[SIZE];
int *radix = new int[RADIXSIZE];
int* radix_sorted = new int[RADIXSIZE];

int main(int argc, char* argv[]) {
// Scan tests
@@ -147,8 +150,17 @@ int main(int argc, char* argv[]) {
//printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

//printDesc("radix sort test");
//genArray(RADIXSIZE, radix, 32);
//zeroArray(RADIXSIZE, radix_sorted);
//StreamCompaction::Efficient::radixSort(RADIXSIZE, 6, radix, radix_sorted);
//printArray(RADIXSIZE, radix, true);
//printArray(RADIXSIZE, radix_sorted, true);

system("pause"); // stop Win32 console from closing on exit
delete[] a;
delete[] b;
delete[] c;
delete[] radix;
delete[] radix_sorted;
}
12 changes: 12 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,11 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}
bools[index] = (idata[index]) ? 1 : 0;
}

/**
@@ -33,6 +38,13 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}
if (bools[index]) {
odata[indices[index]] = idata[index];
}
}

}
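To make the composition concrete, here is a serial host-side model (the helper name is hypothetical) of how kernMapToBoolean, an exclusive scan, and kernScatter fit together in the compact pipeline:

```cpp
#include <cassert>
#include <vector>

// Mirrors kernMapToBoolean + exclusive scan + kernScatter on the host.
std::vector<int> compactSerial(const std::vector<int>& idata) {
    int n = (int)idata.size();
    std::vector<int> bools(n), indices(n);
    for (int i = 0; i < n; ++i) bools[i] = idata[i] ? 1 : 0;          // kernMapToBoolean
    int run = 0;
    for (int i = 0; i < n; ++i) { indices[i] = run; run += bools[i]; } // exclusive scan
    std::vector<int> odata(run);                                       // run == kept count
    for (int i = 0; i < n; ++i)                                        // kernScatter
        if (bools[i]) odata[indices[i]] = idata[i];
    return odata;
}
```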
48 changes: 45 additions & 3 deletions stream_compaction/cpu.cu
@@ -12,6 +12,25 @@ namespace StreamCompaction {
return timer;
}

void CPUScan(const int* idata, int* odata, int n)
{
if (n <= 0) return;
odata[0] = 0;
for (int i = 1; i < n; ++i) {
odata[i] = odata[i - 1] + idata[i - 1];
}
}

int CPUCompactWithoutScan(int n, int* odata, const int* idata)
{
int count = 0;
for (int i = 0; i < n; ++i) {
if (idata[i] != 0) {
odata[count] = idata[i];
count++;
}
}
return count;
}
/**
* CPU scan (prefix sum).
* For performance analysis, this is supposed to be a simple for loop.
@@ -20,6 +39,7 @@
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
CPUScan(idata, odata, n);
timer().endCpuTimer();
}

@@ -31,20 +51,42 @@
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int count = CPUCompactWithoutScan(n, odata, idata);
timer().endCpuTimer();
return -1;
return count;
}

/**
* CPU stream compaction using scan and scatter, like the parallel version.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
int compactWithScan(int n, int* odata, const int* idata) {

int* boolArray = (int*)malloc(n * sizeof(int));
int* indexArray = (int*)malloc(n * sizeof(int));

timer().startCpuTimer();
// TODO
for (int i = 0; i < n; ++i) {
boolArray[i] = (idata[i]) ? 1 : 0;
}

// scan
CPUScan(boolArray, indexArray, n);

// scatter

for (int i = 0; i < n; ++i) {
if (boolArray[i]) {
odata[indexArray[i]] = idata[i];
}
}
int count = boolArray[n - 1] ? indexArray[n - 1] + 1 : indexArray[n - 1];
timer().endCpuTimer();
return -1;
free(boolArray);
free(indexArray);
return count;
}
}
}
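As a quick sanity check, the two CPU helpers above behave as follows (a self-contained copy of the definitions from this diff, with an example input):

```cpp
#include <cassert>

// Exclusive prefix sum, as defined in cpu.cu above.
void CPUScan(const int* idata, int* odata, int n) {
    if (n <= 0) return;
    odata[0] = 0;
    for (int i = 1; i < n; ++i) odata[i] = odata[i - 1] + idata[i - 1];
}

// Compact without scan: copy nonzero elements forward, return the count.
int CPUCompactWithoutScan(int n, int* odata, const int* idata) {
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (idata[i] != 0) odata[count++] = idata[i];
    return count;
}
```

For input {1, 2, 0, 3, 0}, the exclusive scan produces {0, 1, 3, 3, 6} and the compact keeps {1, 2, 3}.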