Merged
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,6 +1,7 @@
# IDE settings
*.DS_Store
.vscode
.comate
.idea

# virtualenv
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -19,7 +19,7 @@ repos:
- id: remove-tabs
types: [text]
- repo: https://github.com/PFCCLab/typos-pre-commit-mirror.git
rev: v1.42.0
rev: v1.43.1
hooks:
- id: typos
args: [--force-exclude]
4 changes: 3 additions & 1 deletion _typos.toml
@@ -24,7 +24,9 @@ extend-ignore-re = [
CANN = "CANN"
Clas = "Clas"
arange = "arange"
unsupport = "unsupport"
certifi = "certifi"
convnet = "convnet"
datas = "datas"
feeded = "feeded"
splitted = "splitted"
unsupport = "unsupport"
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/Conv1DTranspose_cn.rst
@@ -6,7 +6,7 @@ Conv1DTranspose
.. py:class:: paddle.nn.Conv1DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, dilation=1, weight_attr=None, bias_attr=None, data_format="NCL")


1D transposed convolution layer (Convlution1d transpose layer)
1D transposed convolution layer (Convolution1d transpose layer)

This layer computes the output feature size from the input, the kernel, the dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCL or NLC format, where N is the batch size, C is the number of channels, and L is the feature length. The kernel is in MCL format, where M is the number of output image channels, C is the number of input image channels, and L is the kernel length. If the number of groups is greater than 1, C equals the number of input image channels divided by the number of groups. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and the `reference <https://arxiv.org/pdf/1603.07285>`_. If the bias_attr parameter is not False, a bias term is added to the transposed convolution result.

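The output-size relationship these Conv1DTranspose docs describe can be sketched in pure Python. The helper below is invented for illustration (it is not a Paddle API); it implements the standard transposed-convolution length formula L_out = (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1.

```python
def conv1d_transpose_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    """Length of a 1D transposed-convolution output (standard formula)."""
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# A stride-2 transposed convolution roughly doubles the feature length:
print(conv1d_transpose_out_len(5, kernel_size=3, stride=2, padding=1))  # 9
```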
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/Conv2DTranspose_cn.rst
@@ -6,7 +6,7 @@ Conv2DTranspose
.. py:class:: paddle.nn.Conv2DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, dilation=1, weight_attr=None, bias_attr=None, data_format="NCHW")


2D transposed convolution layer (Convlution2d transpose layer)
2D transposed convolution layer (Convolution2d transpose layer)

This layer computes the output feature size from the input, the kernel, the dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCHW or NHWC format, where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The kernel is in MCHW format, where M is the number of output image channels, C is the number of input image channels, H is the kernel height, and W is the kernel width. If the number of groups is greater than 1, C equals the number of input image channels divided by the number of groups. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and the `reference <https://arxiv.org/pdf/1603.07285.pdf>`_. If the bias_attr parameter is not False, a bias term is added to the transposed convolution result.

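The docs' claim that a transposed convolution is equivalent to the backward computation of a convolution can be illustrated with a minimal 1D sketch: each input element scatters a scaled copy of the kernel into the output. The function and its interface are invented for illustration and are not Paddle APIs.

```python
def conv1d_transpose(x, w, stride=1, padding=0):
    """Minimal 1D transposed convolution: each x[j] scatters a copy of the
    kernel w into the output at offset j * stride; padding crops the edges."""
    out_len = (len(x) - 1) * stride + len(w)
    y = [0.0] * out_len
    for j, xj in enumerate(x):
        for t, wt in enumerate(w):
            y[j * stride + t] += xj * wt
    return y[padding:out_len - padding] if padding else y

print(conv1d_transpose([1.0, 2.0, 3.0], [1.0, 1.0]))            # [1.0, 3.0, 5.0, 3.0]
print(conv1d_transpose([1.0, 2.0, 3.0], [1.0, 1.0], stride=2))  # [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
```

With stride greater than 1 the scattered kernel copies no longer overlap, which is why transposed convolutions upsample the feature length.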
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/Conv3DTranspose_cn.rst
@@ -6,7 +6,7 @@ Conv3DTranspose
.. py:class:: paddle.nn.Conv3DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, dilation=1, weight_attr=None, bias_attr=None, data_format="NCDHW")


3D transposed convolution layer (Convlution3d transpose layer)
3D transposed convolution layer (Convolution3d transpose layer)

This layer computes the output feature size from the input, the kernel, the kernel dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and `参考文献`_ . If the bias_attr parameter is not False, a bias term is added to the transposed convolution result.

2 changes: 1 addition & 1 deletion docs/api/paddle/nn/Conv3D_cn.rst
@@ -9,7 +9,7 @@ Conv3D

**3D convolution layer**

The output feature size is computed from the input, the kernel, the stride, the padding, and the dilation. Input and output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. 3D convolution (Convlution3D) is similar to 2D convolution (Convlution2D) but has an extra depth dimension. If bias_attr is not False, a bias term is added to the convolution result.
The output feature size is computed from the input, the kernel, the stride, the padding, and the dilation. Input and output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. 3D convolution (Convolution3D) is similar to 2D convolution (Convolution2D) but has an extra depth dimension. If bias_attr is not False, a bias term is added to the convolution result.

For each input X, the following equation holds:

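For the forward 3D convolution described above, the output size along each of D, H, and W follows the standard convolution formula (size + 2*padding - dilation*(kernel - 1) - 1) // stride + 1. A small illustrative helper, not a Paddle API:

```python
def conv_out_dim(size, kernel, stride=1, padding=0, dilation=1):
    """Output extent of an ordinary convolution along one dimension."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A 'same'-style 3x3x3 convolution with padding 1 keeps each extent unchanged:
d, h, w = (conv_out_dim(s, 3, padding=1) for s in (8, 32, 32))
print(d, h, w)  # 8 32 32
```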
2 changes: 1 addition & 1 deletion docs/api/paddle/nn/functional/conv1d_transpose_cn.rst
@@ -8,7 +8,7 @@ conv1d_transpose



1D transposed convolution layer (Convlution1D transpose layer)
1D transposed convolution layer (Convolution1D transpose layer)

This layer computes the output feature size from the input, the kernel, the dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCL or NLC format, where N is the batch size, C is the number of channels, and L is the feature length. The kernel is in MCL format, where M is the number of output image channels, C is the number of input image channels, and L is the kernel length. If the number of groups is greater than 1, C equals the number of input image channels divided by the number of groups. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and 参考文献_. If the bias_attr parameter is not False, a bias term is added to the transposed convolution result. If act is not None, the corresponding activation function is applied after the transposed convolution.

2 changes: 1 addition & 1 deletion docs/api/paddle/nn/functional/conv2d_transpose_cn.rst
@@ -8,7 +8,7 @@ conv2d_transpose



2D transposed convolution layer (Convlution2D transpose layer)
2D transposed convolution layer (Convolution2D transpose layer)

This layer computes the output feature size from the input, the kernel, the dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCHW or NHWC format, where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The kernel is in MCHW format, where M is the number of output image channels, C is the number of input image channels, H is the kernel height, and W is the kernel width. If the number of groups is greater than 1, C equals the number of input image channels divided by the number of groups. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and 参考文献_. If the bias_attr parameter is not False, a bias term is added to the transposed convolution result. If act is not None, the corresponding activation function is applied after the transposed convolution.

2 changes: 1 addition & 1 deletion docs/api/paddle/nn/functional/conv3d_cn.rst
@@ -5,7 +5,7 @@ conv3d

.. py:function:: paddle.nn.functional.conv3d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_format="NCDHW", name=None)

3D convolution layer (convolution3D layer): the output feature size is computed from the input, the kernel, the stride, the padding, and the dilation. Input and output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. 3D convolution (Convlution3D) is similar to 2D convolution (Convlution2D) but has an extra depth dimension. If bias_attr is not False, a bias term is added to the convolution result.
3D convolution layer (Convolution3D layer): the output feature size is computed from the input, the kernel, the stride, the padding, and the dilation. Input and output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. 3D convolution (Convolution3D) is similar to 2D convolution (Convolution2D) but has an extra depth dimension. If bias_attr is not False, a bias term is added to the convolution result.

For each input X, the following equation holds:

2 changes: 1 addition & 1 deletion docs/api/paddle/nn/functional/conv3d_transpose_cn.rst
@@ -9,7 +9,7 @@ conv3d_transpose



3D transposed convolution layer (Convlution3d transpose layer)
3D transposed convolution layer (Convolution3d transpose layer)

This layer computes the output feature size from the input, the kernel, the kernel dilation, the stride, and the padding, or the output feature size can be specified via output_size. Input and Output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The computation of a transposed convolution is equivalent to the backward computation of a convolution. Transposed convolution is also called deconvolution (although it is not a true deconvolution). For details on the transposed convolution layer, see the notes below and `参考文献`_ . If the bias_attr parameter is not False, a bias term is added to the transposed convolution result.

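For the 3D transposed convolution, the same output-size formula applies independently along D, H, and W. A hedged sketch applying it per dimension; the helper and its parameters are illustrative, not Paddle APIs:

```python
def conv_transpose_out_shape(in_shape, kernel, stride=1, padding=0,
                             output_padding=0, dilation=1):
    """Apply the transposed-convolution size formula to each spatial dim."""
    return tuple((s - 1) * stride - 2 * padding
                 + dilation * (kernel - 1) + output_padding + 1
                 for s in in_shape)

# Upsampling D, H, W by roughly 2x with a stride-2, kernel-4, padding-1 layer:
print(conv_transpose_out_shape((4, 16, 16), kernel=4, stride=2, padding=1))  # (8, 32, 32)
```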
2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/conv2d_transpose_cn.rst
@@ -9,7 +9,7 @@ conv2d_transpose



2D transposed convolution layer (Convlution2D transpose layer)
2D transposed convolution layer (Convolution2D transpose layer)

This layer computes the output feature size from the input, the filter, the kernel dilation, the stride, and the padding, or the output feature size can be specified via output_size.

2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/conv3d_cn.rst
@@ -13,7 +13,7 @@ conv3d

Input and output are in NCDHW or NDHWC format, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width.

3D convolution (Convlution3D) is similar to 2D convolution (Convlution2D) but has an extra depth dimension.
3D convolution (Convolution3D) is similar to 2D convolution (Convolution2D) but has an extra depth dimension.

If bias_attr is not False, a bias term is added to the convolution result. If an activation type is specified, the corresponding activation function is applied to the final result.

2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/conv3d_transpose_cn.rst
@@ -9,7 +9,7 @@ conv3d_transpose



3D transposed convolution layer (Convlution3D transpose layer)
3D transposed convolution layer (Convolution3D transpose layer)

This layer computes the output feature size from the input, the filter, the kernel dilation, the stride, and the padding, or the output feature size can be specified via output_size.

2 changes: 1 addition & 1 deletion docs/design/network/deep_speech_2.md
@@ -130,7 +130,7 @@ Key ingredients about the layers:
- These two types of sequences do not have the same length, thus a CTC-loss layer is required.
- **2D Convolution Layers**:
- Not only temporal convolution, but also **frequency convolution**. Like a 2D image convolution, but with a variable dimension (i.e. temporal dimension).
- With striding for only the first convlution layer.
- With striding for only the first convolution layer.
- No pooling for all convolution layers.
- **Uni-directional RNNs**
- Uni-directional + row convolution: for low-latency inference.
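The effect of striding only the first convolution layer (with no pooling anywhere) on the temporal dimension can be sketched numerically. The layer parameters below are invented for illustration and are not taken from the Deep Speech 2 design:

```python
def conv_out_len(t, kernel, stride=1, padding=0):
    """Temporal length after one convolution layer (no dilation)."""
    return (t + 2 * padding - kernel) // stride + 1

t = 1000                           # input spectrogram frames (illustrative)
t = conv_out_len(t, 11, stride=3)  # first conv layer: strided in time
t = conv_out_len(t, 11, stride=1)  # later conv layers: no striding, no pooling
print(t)  # 320
```

Only the first layer shrinks the sequence substantially; the unstrided layers trim just the kernel overhang, which keeps the output sequence long enough for the CTC alignment.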
2 changes: 1 addition & 1 deletion docs/templates/common_docs.py
@@ -29,7 +29,7 @@
force_cpu (bool, optional): Whether force to store the output tensor in CPU memory. If force_cpu is False, the output tensor will be stored in running device memory, otherwise it will be stored to the CPU memory. Default is False.
data_format (str, optional): Specify the input data format, the output data format will be consistent with the input, which can be ``NCHW`` or ``NHWC`` . N is batch size, C is channels, H is height, and W is width. Default is ``NCHW`` .
grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default is None, meaning there is no gradient clipping.
num_filters (int): The number of filter. It is as same as the output channals numbers.
num_filters (int): The number of filters. It is the same as the number of output channels.
dim (int, optional): A dimension along which to operate. Default is 0.
is_sparse (bool, optional): Whether use sparse updating. For more information, please refer to :ref:`api_guide_sparse_update_en` . If it's True, it will use sparse updating.
place (paddle.CPUPlace()|paddle.CUDAPlace(N)|None): This parameter represents which device the executor runs on, and N means the GPU's id. When this parameter is None, PaddlePaddle will set the default device according to its installation version. If Paddle is CPU version, the default device would be set to CPUPlace(). If Paddle is GPU version, the default device would be set to CUDAPlace(0). Default is None.