
How to get the 87.4% classification accuracy mentioned in your paper? #4

Coder-AndyLee opened this issue Apr 18, 2018 · 10 comments


@Coder-AndyLee

As shown in "RML2016.10a_VTCNN2_example.ipynb", the maximum accuracy is "Overall Accuracy: 0.723852385239", at SNR = 18. So how can I get the 87.4% accuracy?
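For context, the notebook's number is the accuracy at a single SNR, while the paper's 87.4% is worded as an accuracy across all SNRs. A toy sketch of the difference, using random stand-in arrays (in the notebook these come from the trained model and the RML2016.10a test split):

import numpy as np

# Stand-ins: one-hot labels, softmax-like scores, and the SNR of each test example
rng = np.random.default_rng(0)
n, n_classes = 1000, 11
Y_true = np.eye(n_classes)[rng.integers(0, n_classes, n)]
Y_pred = rng.random((n, n_classes))
test_snrs = rng.choice(np.arange(-20, 20, 2), n)

# Accuracy at a single SNR (what the notebook prints, e.g. ~0.72 at SNR = 18)
mask = test_snrs == 18
acc_18 = np.mean(np.argmax(Y_pred[mask], 1) == np.argmax(Y_true[mask], 1))

# Accuracy pooled over the whole test set, all SNRs mixed (the paper's wording)
acc_all = np.mean(np.argmax(Y_pred, 1) == np.argmax(Y_true, 1))
print(acc_18, acc_all)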

@Iamshg

Iamshg commented May 12, 2018

Yes, I also get an accuracy lower than 80%. It is about 75% over the four runs I have done.

@shijieliu

I ran into the same problem, and I emailed the author, but there is no response yet.

@Ostnie

Ostnie commented Aug 7, 2018

@Coder-AndyLee @Iamshg @shijieliu I think you didn't read the paper carefully. The author says he used 12 million samples to get the 87.4%, while your 75% relies on only the 220,000 examples in RML2016.10a_dict.dat.
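A quick way to check the dataset size, assuming the standard pickle layout of RML2016.10a (a dict keyed by (modulation, SNR), each value an array of 1000 examples of shape 2x128):

import pickle

# Load the pickled dict; encoding="latin1" is needed under Python 3
with open("RML2016.10a_dict.dat", "rb") as f:
    Xd = pickle.load(f, encoding="latin1")

# 11 modulations x 20 SNRs x 1000 examples per pair = 220,000 examples
n_examples = sum(v.shape[0] for v in Xd.values())
print(n_examples)  # 220000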

@KYevgeny

I cannot agree with @Ostnie ...
In the paper https://arxiv.org/abs/1602.04105 it is written:

"We train on a corpus of approximately 12 million complex samples divided across the modulations. These are divided into training examples of 128 samples in length. We use approximately 96k examples for training, and 64k examples for testing and validation. After training, we achieve roughly a 87.4% classification accuracy across all signal to noise ratios on the test dataset."

  1. 96k training examples * 128 complex samples per example = 12.288M complex training samples.
  2. In the code we split the dataset 50/50. This is different from the paper.
  3. The VT-CNN2 network topology (in the example) is not the same as described in the paper:
    • It has only two conv layers, with 256 and 80 filters in layers 1 and 2 (the paper has an additional Conv3 layer with 256 filters).
    • The dense layers are also different. The example has only two dense layers (256 and 11 neurons), in contrast to the paper's dense layers of size 512, 256, 128, and 11 (one neuron per class).
  4. The best dropout value in the paper is 0.6, but we use 0.5 in the example.

What do you think?
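For concreteness, here is a minimal sketch of the example's two-conv topology in current tf.keras syntax (the notebook itself uses an older Keras API), with the differences from the paper marked in comments. The layer sizes come from points 3 and 4 above; everything else is my assumption.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Reshape((2, 128, 1), input_shape=(2, 128)),  # I/Q as a 2x128 "image"
    layers.ZeroPadding2D(((0, 0), (2, 2))),
    layers.Conv2D(256, (1, 3), activation="relu"),      # Conv1: 256 filters
    layers.Dropout(0.5),                                # paper reports 0.6 as best
    layers.ZeroPadding2D(((0, 0), (2, 2))),
    layers.Conv2D(80, (2, 3), activation="relu"),       # Conv2: 80 filters
    layers.Dropout(0.5),
    # (per point 3 above, the paper adds a third conv layer with 256 filters here)
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(11, activation="softmax"),             # 11 modulation classes
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])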

@lukerbs

lukerbs commented Sep 16, 2019

Here is a link to my own implementation of the ResNet signal classifier that reached up to 96% classification accuracy: Signal Classifier
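Not lukerbs's code, but for readers unfamiliar with the idea: a generic sketch of the kind of 1-D residual unit such ResNet classifiers stack (all names and sizes here are assumptions):

from tensorflow.keras import layers

def residual_unit(x, filters=32, kernel=3):
    # Two conv layers plus an identity shortcut; the input must already
    # have `filters` channels so the Add() shapes match.
    shortcut = x
    y = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel, padding="same")(y)
    y = layers.Add()([shortcut, y])       # skip connection
    return layers.Activation("relu")(y)

# Usage: project the 128x2 I/Q input to 32 channels, then stack units
inp = layers.Input(shape=(128, 2))
x = layers.Conv1D(32, 3, padding="same")(inp)
x = residual_unit(x)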

@BUGUER

BUGUER commented Oct 31, 2019

* in contrast to the paper's dense layers of size 512, 256, 128, and 11 (one neuron per class).

Point 3 is wrong.
In the paper, the third layer, which contains 256 units, is the dense layer. The 512, 256, 128, and 11 you mention is the structure of the DNN.

@KYevgeny

KYevgeny commented Nov 2, 2019

* in contrast to the paper's dense layers of size 512, 256, 128, and 11 (one neuron per class).

Point 3 is wrong.
In the paper, the third layer, which contains 256 units, is the dense layer. The 512, 256, 128, and 11 you mention is the structure of the DNN.

Hi @BUGUER,

I think I know where the misunderstanding is ...

In the paper the network is called CNN (see Fig. 3), but in the example classifier Jupyter notebook, RML2016.10a_VTCNN2_example.ipynb, the same CNN network is called CNN2.

In contrast, the CNN2 in the paper is a larger and deeper network than CNN. The Jupyter example does not implement the paper's CNN2 architecture.

I based my previous explanation on the paper's point of view, hence my previous note is still correct.

Thank you for reading my note :-)

@Wmoraynia17

The code cannot be opened. Can you share it again? Thank you.

@MuscleCrocodile

I used PyTorch to build a CNN2 that matches the network described in the author's paper. However, I can't get it to train! Must I use TensorFlow to get the same result? I even suspect the author cheated. My CNN2 is as follows. Am I wrong?

class MyCNN1(nn.Module):
    def __init__(self):
        super(MyCNN1, self).__init__()
        self.pad = nn.ZeroPad2d((2, 2, 0, 0))   # pad the 128-sample axis by 2 on each side
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv2d(kernel_size=(1, 3), in_channels=1, out_channels=256, stride=(1, 1))
        self.conv2 = nn.Conv2d(kernel_size=(2, 3), in_channels=256, out_channels=80)
        self.flatten = nn.Flatten(start_dim=1, end_dim=-1)
        self.fc1 = nn.Linear(10560, 256)        # 80 channels * 1 * 132 = 10560 after conv2
        self.fc2 = nn.Linear(256, 11)
        # convolutional part
        self.stage1 = nn.Sequential(
            self.pad,
            self.conv1,
            self.relu,
            self.drop,
            self.pad,
            self.conv2,
            self.relu,
            self.drop,
            self.flatten
        )
        # fully connected part
        self.stage2 = nn.Sequential(
            self.fc1,
            self.relu,
            self.drop,
            self.fc2,
            nn.Softmax(dim=1)
        )

    def forward(self, input):
        input = input.unsqueeze(dim=1)   # (N, 2, 128) -> (N, 1, 2, 128)
        out = self.stage1(input)
        out = self.stage2(out)
        return out
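One possible culprit (my assumption, not something confirmed in this thread): nn.CrossEntropyLoss in PyTorch expects raw logits and applies log-softmax internally, so keeping the final nn.Softmax flattens the gradients and the network barely trains. A minimal training sketch with toy data, assuming the Softmax is removed from stage2:

import torch
import torch.nn as nn

model = MyCNN1()                      # as above, with the final Softmax removed
criterion = nn.CrossEntropyLoss()     # wants logits plus integer class labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 2, 128)           # toy batch: 32 examples of 2x128 I/Q
y = torch.randint(0, 11, (32,))       # toy labels for the 11 classes

optimizer.zero_grad()
loss = criterion(model(x), y)         # model(x) returns raw logits
loss.backward()
optimizer.step()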

@DRosen766

I used PyTorch to build a CNN2 that matches the network described in the author's paper. [...] Am I wrong?

I also converted to pytorch and have not been able to achieve comparable accuracy.
