-
Great proposal, Kenzo-san @knzo25, looking forward to your masterpiece! By the way, we had a group discussion during the Software Working Group meeting this week and reached the conclusion that we plan to integrate the NVIDIA-optimized version of BEVFusion (from NVIDIA-AI-IOT) into Autoware. We are currently working on migrating nodes from Autoware Universe to Core, and hopefully the integration will start in late January. Have a nice day! 心刚
-
Great proposal, @knzo25, looking forward to your next steps! I prefer to support the third method.
As an Autoware user, I am used to pulling the code and mounting it into the Docker environment for compilation, and I do not pay attention to Autoware's setup script. The third method therefore suits this workflow, including when an error occurs during compilation of the package. The frequency of CUDA version upgrades in Autoware's Docker environment is very low, so the frequency of code regeneration would also be very low.
-
Background
In 3D object detection, so far we have constrained ourselves to the family of models that use traditional operations (convolutions, dense layers, attention mechanisms, etc.). That is one of the main reasons why we use centerpoint and transfusion with pillar-based backbones. In the ever-evolving SOTA, there are many models that do not use sparse convolutions, but a great portion of them do, especially those that are focused on deployment (memory consumption, inference speed, etc.).
A quick glance at the code of the top-ranking methods in object detection and segmentation will provide some insight into how popular sparse convolutions are:
I would like to propose integrating the most popular sparse convolution backend into Autoware (i.e., traveller59's backend). Some alternatives are:
Proposals
traveller59's implementation is a Python package that generates C++/CUDA code.
To integrate it into Autoware, I see the following alternatives:
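Whichever alternative is chosen, the functionality that has to end up inside Autoware is the same set of sparse convolution kernels that the Python package exposes. As a rough sketch of that API (assuming spconv 2.x installed with a CUDA device; all shapes and layer sizes below are made up for illustration):

```python
import torch
import spconv.pytorch as spconv

# Hypothetical sizes purely for illustration.
batch_size = 1
num_voxels = 5000
num_features = 4
spatial_shape = [41, 1440, 1440]  # (z, y, x) voxel grid

# Sparse input: integer voxel coordinates (batch, z, y, x) plus per-voxel features.
coords = torch.stack(
    [
        torch.zeros(num_voxels, dtype=torch.int32),  # batch index
        torch.randint(0, spatial_shape[0], (num_voxels,), dtype=torch.int32),
        torch.randint(0, spatial_shape[1], (num_voxels,), dtype=torch.int32),
        torch.randint(0, spatial_shape[2], (num_voxels,), dtype=torch.int32),
    ],
    dim=1,
)
coords = torch.unique(coords, dim=0)  # voxel coordinates should be unique
features = torch.randn(coords.shape[0], num_features)
x = spconv.SparseConvTensor(features.cuda(), coords.cuda(), spatial_shape, batch_size)

# A tiny sparse backbone: submanifold convs keep the sparsity pattern,
# strided sparse convs downsample it.
net = spconv.SparseSequential(
    spconv.SubMConv3d(num_features, 16, 3, padding=1, indice_key="subm1"),
    torch.nn.ReLU(),
    spconv.SparseConv3d(16, 32, 3, stride=2, padding=1),
    torch.nn.ReLU(),
).cuda()

out = net(x)
print(out.features.shape)  # [num_active_output_voxels, 32]
dense = out.dense()        # densify when a BEV/detection head needs a regular tensor
```

The question for the proposals below is not this API, but how the C++/CUDA code generated behind these layers gets built and shipped within Autoware.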
TODO List