[Complex op No.33] abs_coo/abs_csr (sparse) #62237
Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
❌ The PR was not created using the PR template. You can refer to this Demo.
@zbt78 This is ready for review.
When testing, please do not add a new test file for now; modify the existing test file test_sparse_unary_op.py instead.
@bapijun Please revise according to the review comments.
@zbt78 Please review again; the Coverage check is currently not passing.
self.check_result(dense_func, sparse_func, 'coo')
self.check_result(dense_func, sparse_func, 'csr')
def compare_with_dense(self, dense_func, sparse_func, dtype='float32'):
    for device in devices:
What is `device` doing here? It doesn't seem to have any effect.
This code was missing a line; I forgot to include the statement that places the tensor on the different devices.
The coverage check passes now 👍
LGTM
PD_REGISTER_SPARSE_UNARY_GPU_GRAD_KERNEL(pow, Pow)
PD_REGISTER_SPARSE_UNARY_GPU_GRAD_KERNEL(expm1, Expm1)
PD_REGISTER_SPARSE_UNARY_GPU_GRAD_KERNEL(relu6, Relu6)
PD_REGISTER_SPARSE_UNARY_GPU_GRAD_KERNEL(leaky_relu, LeakyRelu)

PD_REGISTER_KERNEL(abs_coo_grad,
The registration form is the same; wouldn't it be better to write a macro here that also registers the complex types? That would make registering later sparse ops easier as well.
In fact I already wrote a new macro in my later complex-number changes; when I have time tonight I will move the code from that PR over here.
DEFINE_SPARSE_UNARY_KERNEL(Expm1)
DEFINE_SPARSE_UNARY_KERNEL(Relu6)
DEFINE_SPARSE_UNARY_KERNEL_WITH_ONE_ATTR(Pow, factor)
DEFINE_SPARSE_UNARY_KERNEL_WITH_ONE_ATTR(LeakyRelu, alpha)

template <typename T, typename Context>
void AbsCooKernel(const Context& dev_ctx,
Couldn't this also be turned into a macro? After all, the implementations are the same.
if (out->dtype() == DataType::COMPLEX64 ||
    out->dtype() == DataType::COMPLEX128) {
  DenseTensor* out_values = out->mutable_non_zero_elements();
  out->set_type(out_values->dtype());
Why does out's dtype need to be reset here?
This is because the result of abs is floating-point: for example, for input values of complex64, the abs result is float32, so the dtype of the values tensor needs to be reset here.
Then the right fix is to modify the infermeta function in the sparse_ops.yaml file. First check whether the infermeta function used for abs in ops.yaml works here; if not, add a new one. A kernel should not set meta information internally unless absolutely necessary, since that hurts the maintainability of the op as a whole.
I went back and tried it; changing only the infermeta function does not work, because this kernel contains an EmptyLikeCXXKernel call that tries to copy x's attributes into out, which reverts the dtype that infermeta had reset. My idea is to either handle it here or write a new EmptyLikeCXXKernel for this logic. I also looked at ops.yaml: besides abs, three other operators likewise take complex input and produce floating-point output.
When sparse was originally designed, scenarios like complex numbers, where C-to-R or R-to-C makes the input and output meta information inconsistent, were not considered. So to keep the sparse kernel design consistent, the relevant logic needs to change. EmptyLikeCXXKernel actually does two jobs: infermeta (indices, dims, dtype, and so on) and memory allocation. These two responsibilities now need to be separated: the infermeta work goes into an infermeta function, and the memory allocation goes inside the kernel. Start with abs_kernel; there are two main changes:
- Add a sparse unchangedinfermeta-style function to set indices, dims, dtype, and so on
- Replace the EmptyLikeCXXKernel inside the abs kernel with memory-allocation logic
OK, I'll go try that.
@bapijun You could consider joining the complex-number team of the 【HACKATHON 预备营】飞桨启航计划集训营(第二期); resumes are due March 18.
const SparseCooTensor& x_or_out,
const SparseCooTensor& dout,
SparseCooTensor* dx) {
  EmptyLikeCooKernel<T, Context>(dev_ctx, x_or_out, dx);
The backward kernel could be split the same way, couldn't it?
I went back and looked into this. For the non-sparse abs, it appears the gradient of abs is already in complex form, so the C-to-R handling should not be needed.
if device == 'cpu' or (
    device == 'gpu' and paddle.is_compiled_with_cuda()
):
    paddle.set_device(device)
Be careful with this kind of global setting; it can easily affect other unit tests when they run in parallel. Instead, you can use tensor.to to place the input tensor on a specific device: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/Tensor_cn.html#tensor
@GGBond8488
LGTM
Please update the data types in the corresponding Chinese documentation as well.
bool) {}
bool,
phi::dtype::complex<float>,
phi::dtype::complex<double>) {}

PD_REGISTER_KERNEL(coo_to_csr,
Please register the complex types for these kernels' corresponding grad kernels as well.
In this commit I updated every backward kernel I could find under the corresponding sparse_backward.yaml, but the gradients of to_dense and values involve the two kernels under mask_kernel.
Also, the meta-loss issue in the sparse-format backward pass mentioned earlier showed up again while modifying the to_dense gradient; that is fixed in this commit as well.
Sorry to inform you that e589c98's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
Yes, that's right.
Sorry to inform you that de638a6's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
LGTM
LGTM
hi, @bapijun
PaddlePaddle/docs#6645
PR Category
Operator Mechanism
PR Types
New features
Description
Add complex support for abs_coo/abs_csr in sparse.
#61975