Error when running \DnCNN\TrainingCodes\DnCNN_TrainingCodes_v1.1\Demo_Train_model_64_25_Res_Bnorm_Adam.m #100

@yihaodong1

Description

Here is the error:

Error using vl_nnconv
FILTERS are larger than the DATA (including padding).
Error in vl_simplenn (line 97)
res(i+1).x = vl_nnconv(res(i).x, l.weights{1}, l.weights{2}, ...
Error in DnCNN_train>process_epoch (line 182)
res = vl_simplenn(net, inputs, dzdy, res, ...
Error in DnCNN_train (line 111)
[net, state] = process_epoch(net, state, imdb, opts, 'train');
Error in Demo_Train_model_64_25_Res_Bnorm_Adam (line 41)
[net, info] = DnCNN_train(net, ...

I debugged it, and it seems that the switch over layer types at lines 95-117 of vl_simplenn.m has no case for the 'bnorm' layer. Did I do something wrong when running the code?

switch l.type
    case 'conv'
        res(i+1).x = vl_nnconv(res(i).x, l.weights{1}, l.weights{2}, ...
            'pad', l.pad, ...
            'stride', l.stride, ...
            'dilate', l.dilate, ...
            l.opts{:}, ...
            cudnn{:}) ;

    case 'concat'
        if size(sigmas,1) ~= size(res(i).x,1)
            sigmaMap   = bsxfun(@times, ones(size(res(i).x,1), size(res(i).x,2), 1, size(res(i).x,4)), permute(sigmas, [3 4 1 2])) ;
            res(i+1).x = vl_nnconcat({res(i).x, sigmaMap}) ;
        else
            res(i+1).x = vl_nnconcat({res(i).x, sigmas}) ;
        end

    case 'SubP'
        res(i+1).x = vl_nnSubP(res(i).x, [], 'scale', l.scale) ;
    case 'relu'
        leak = {} ;
        res(i+1).x = vl_nnrelu(res(i).x, [], leak{:}) ;
end
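For reference, the stock MatConvNet vl_simplenn handles batch normalization with a 'bnorm' case roughly along these lines. This is a sketch based on the standard MatConvNet vl_nnbnorm API; the testMode flag, the l.epsilon field, and the weight layout (gains, biases, moments) are assumptions to verify against the vl_simplenn.m shipped with your MatConvNet version:

```matlab
% Sketch of the missing 'bnorm' case (assumed MatConvNet API:
% vl_nnbnorm(x, gains, biases, ...) with optional 'moments' and
% 'epsilon' name-value arguments -- check your local installation).
case 'bnorm'
    if testMode
        % At test time, normalize with the stored moments
        % (accumulated mean/variance) instead of batch statistics.
        res(i+1).x = vl_nnbnorm(res(i).x, l.weights{1}, l.weights{2}, ...
            'moments', l.weights{3}, ...
            'epsilon', l.epsilon) ;
    else
        % At training time, compute the batch statistics on the fly.
        res(i+1).x = vl_nnbnorm(res(i).x, l.weights{1}, l.weights{2}, ...
            'epsilon', l.epsilon) ;
    end
```

If the vl_simplenn.m you are running really lacks this case, a bnorm layer would fall through the switch unhandled, so the network output at that layer stays empty and later conv layers see data smaller than their filters, which matches the "FILTERS are larger than the DATA" error.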
