Drop of messages with History "keep all" but not with "keep last" #385
Comments
as far as I know, this is designed behavior; there is a default resource limit in the DDS implementation that bounds how many samples KEEP_ALL actually retains.
If you want to increase the limits, you can do so through the middleware configuration.
hopefully this helps.
@fujitatomoya thanks for the reply.
i am not quite following this, can you rephrase the question a bit?
With a publisher using QoS depth 50,000 and transient local durability, sending 10 MB images can lead to a very large queue. If memory is overwhelmed, the node might crash. Is this expected, and should developers prevent such crashes, or is there some mechanism to prevent such a failure?
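To make the concern concrete, here is a quick back-of-the-envelope check of the hypothetical scenario above (a sketch using only the numbers quoted in this thread, depth 50,000 and roughly 10 MB per image; everything else is illustrative):

#include <cstdio>

int main()
{
  // Hypothetical numbers from the question: depth 50000, ~10 MB per image.
  const double depth = 50000.0;
  const double megabytes_per_msg = 10.0;
  const double total_mb = depth * megabytes_per_msg;  // 500000 MB
  std::printf("worst-case history: %.0f MB (~%.0f GB)\n", total_mb, total_mb / 1024.0);
  return 0;
}

A transient-local history sized like that would need on the order of 500 GB in the worst case, far beyond typical system memory, which is why exhaustion is plausible here.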
i guess that should be detected at configuration time, before runtime, if you are using POD (fixed-size) data, because it would try to configure a buffer much bigger than what the system actually has. If it is variable-length data, ...
So while we should do our best to not crash, I'm also curious what the use case is here. What are you trying to achieve with such a large depth of large images? |
The scenario I am talking about is hypothetical and is basically only intended to identify a case where a crash will happen. I understand that the default maximum QoS depth for KEEP_ALL history is 5000 for Fast DDS and 10000 for Cyclone DDS (although a hypothetical message size can exhaust even those). I then tried KEEP_LAST and was surprised to see that its capacity can be larger than what KEEP_ALL can store, because, going by the definitions, KEEP_ALL should have more capacity. I want to know whether a crash caused by an overwhelmed message queue can be detected ahead of time, or whether we rely on the developer to size the message queue so as to avoid crashing.
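Regarding relying on the developer: as a hedged sketch (my own illustration; make_bounded_qos is a hypothetical helper, not an rclcpp or DDS API), one developer-side guard is to clamp the requested KEEP_LAST depth against an estimated per-message size and a memory budget before constructing the QoS:

#include <algorithm>
#include <cstddef>
#include "rclcpp/rclcpp.hpp"

// Hypothetical helper: clamp the requested depth so the worst-case history
// stays inside a caller-supplied memory budget.
rclcpp::QoS make_bounded_qos(std::size_t requested_depth,
                             std::size_t approx_msg_bytes,
                             std::size_t memory_budget_bytes)
{
  const std::size_t max_depth =
    std::max<std::size_t>(1, memory_budget_bytes / approx_msg_bytes);
  const std::size_t depth = std::min(requested_depth, max_depth);
  return rclcpp::QoS(rclcpp::KeepLast(depth)).reliable().transient_local();
}

With the numbers discussed in this thread, make_bounded_qos(50000, 10 * 1024 * 1024, 2ULL * 1024 * 1024 * 1024) would clamp the depth to 204 instead of 50000.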
I am creating a publisher and a subscriber with QoS history "keep all" and durability "transient local".
If the publisher publishes more than 5000 messages and I then start the subscriber, it only receives the last 5000 messages, not all of them.
If I instead change the history to "keep last" with depth 50000 and do the same thing (starting the subscriber after publishing more than 5000 messages), the subscriber receives all the messages.
I want to know the maximum queue depth allowed in Fast RTPS that can be relied on.
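For reference, here is a minimal sketch (my own illustration, not taken from the original report) of the two QoS configurations being compared, using the rclcpp::KeepAll and rclcpp::KeepLast helpers:

#include "rclcpp/rclcpp.hpp"

// History KEEP_ALL: rclcpp takes no depth here; how many samples are actually
// retained is bounded by the DDS vendor's resource limits.
rclcpp::QoS make_keep_all_qos()
{
  return rclcpp::QoS(rclcpp::KeepAll()).reliable().transient_local();
}

// History KEEP_LAST with an explicit depth of 50000: the depth itself is the
// retention limit.
rclcpp::QoS make_keep_last_qos()
{
  return rclcpp::QoS(rclcpp::KeepLast(50000)).reliable().transient_local();
}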
Publisher code:
#include <chrono>
#include <functional>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/int32.hpp"
using namespace std::chrono_literals;
class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    // Set up QoS with history keep last (depth 50000), reliability reliable,
    // and durability transient local
    auto qos = rclcpp::QoS(rclcpp::KeepLast(50000))
      .reliable()
      .transient_local();
    publisher_ = this->create_publisher<std_msgs::msg::Int32>("topic", qos);
    timer_ = this->create_wall_timer(1ms, std::bind(&MinimalPublisher::timer_callback, this));
  }
private:
  void timer_callback()
  {
    if (count_ >= 7000) {
      rclcpp::shutdown();  // Stop publishing after 7000 messages
      return;
    }
    auto message = std_msgs::msg::Int32();
    message.data = static_cast<int32_t>(count_++);
    publisher_->publish(message);
  }
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::Int32>::SharedPtr publisher_;
  size_t count_;
};
int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalPublisher>());
  rclcpp::shutdown();
  return 0;
}
Subscriber code:
#include <functional>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/int32.hpp"
using std::placeholders::_1;
class MinimalSubscriber : public rclcpp::Node
{
public:
  MinimalSubscriber()
  : Node("minimal_subscriber")
  {
    // Set up QoS with history keep last (depth 50000), reliability reliable,
    // and durability transient local
    auto qos = rclcpp::QoS(rclcpp::KeepLast(50000))
      .reliable()
      .transient_local();
    subscription_ = this->create_subscription<std_msgs::msg::Int32>(
      "topic", qos, std::bind(&MinimalSubscriber::topic_callback, this, _1));
  }
private:
  void topic_callback(const std_msgs::msg::Int32::SharedPtr msg) const
  {
    RCLCPP_INFO(this->get_logger(), "I heard: '%d'", msg->data);
  }
  rclcpp::Subscription<std_msgs::msg::Int32>::SharedPtr subscription_;
};
int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalSubscriber>());
  rclcpp::shutdown();
  return 0;
}
Please change the QoS values in the code as needed.