
net: Add buffer pool to replace connection allocation #15285

Open · wants to merge 2 commits into base: master
Conversation

wengzhe (Contributor) commented Dec 19, 2024

Summary

Patches included:

  • net: Add buffer pool to replace connection allocation
  • net: Optimize ipfwd and wrbuffer by buffer pool

Our net socket connection allocators are powerful but redundant, because each protocol carries its own copy of the same logic. That duplication makes further optimization, and extension to other allocations, harder, so we add a common implementation that all of them can share.

Impact

  1. We add a struct net_bufpool_s as the pool descriptor, which may use slightly more memory than the previous implementation (~28 bytes per pool).
  2. Pools share the same functions, so code size may shrink in some scenarios.

Testing

Mainly on SIM.

  • tcp/udp/icmp/devif_callback tested with different configs, both conn and wrbuffer pools.
  • Other pools share exactly the same logic, so they were not tested as carefully.

@github-actions github-actions bot added labels Area: Networking (Effects networking subsystem) and Size: XL (The size of the change in this PR is very large. Consider breaking down the PR into smaller pieces.) Dec 19, 2024
Our net socket connection allocators are powerful but redundant,
because each protocol carries its own copy of the same logic.  That
duplication makes further optimization, and extension to other
allocations, harder, so we add a common implementation that all of
them can share.

Impact:
1. We add a `struct net_bufpool_s` as the pool descriptor, which may
   use slightly more memory than the previous implementation
   (~28 bytes per pool).
2. Pools share the same functions, so code size may shrink in some
   scenarios.

Signed-off-by: Zhe Weng <[email protected]>
nuttxpr commented Dec 19, 2024

[Experimental Bot, please feedback here]

No, this PR does not fully meet the NuttX requirements. While it provides a decent summary and impact assessment, it is severely lacking in the testing section.

Here's a breakdown of what's missing:

  • Insufficient Summary Detail: While the summary explains the "why" and the "what," it lacks detail on how the buffer pool works. More information on the implementation details is needed. Mention the specific functions added/modified.

  • Missing Issue References: The PR summary should reference any related NuttX or NuttX Apps issues. Even if there isn't a directly related issue, consider creating one to track the motivation and discussion for this change.

  • Incomplete Impact Assessment: Several "Impact" fields are left blank. Explicitly state "NO" or provide details for all impact categories. For example:

    • Impact on user: Does this change the API? Do users need to configure the buffer pool?
    • Impact on build: Are there any new Kconfig options?
    • Impact on hardware: Does this affect any specific architectures or boards? Even if it's intended to be generic, stating that explicitly is important.
    • Impact on documentation: Does this require updates to the NuttX documentation? If so, were these updates included in the PR?
    • Impact on security: Even if the answer is no, explicitly state it.
    • Impact on compatibility: Does this break any existing applications?
  • Inadequate Testing Description: "Mainly on SIM" is not enough. Specify the exact SIM configuration used (e.g., nsh). What "different configs" were tested for tcp/udp/icmp/devif_callback? Provide concrete examples. "Other pools share exactly the same logic, not tested very carefully" is unacceptable. All affected code paths should be thoroughly tested.

  • Missing Test Logs: The template explicitly requests "Testing logs before change" and "Testing logs after change." These are crucial for demonstrating the effectiveness of the change and for regression testing. Include the actual command-line output and relevant log messages. Even for simple changes, showing the output before and after helps reviewers understand the impact.

In short, the PR needs significant improvement in the testing and impact assessment sections before it can be properly reviewed. A more detailed summary and inclusion of issue references would also improve the PR.

Allow dynamic allocation of these structs; performance stays the same
as the previous implementation.

Signed-off-by: Zhe Weng <[email protected]>
****************************************************************************/

#ifndef CONFIG_NET_BLUETOOTH_MAX_CONNS
# define CONFIG_NET_BLUETOOTH_MAX_CONNS 0

@wengzhe why use 0 as the maximum number of connections? For the logic, please use 1 instead and decrease by 1 later in the logic operations.

#if CONFIG_CAN_PREALLOC_CONNS > 0
static struct can_conn_s g_can_connections[CONFIG_CAN_PREALLOC_CONNS];
#ifndef CONFIG_CAN_MAX_CONNS
# define CONFIG_CAN_MAX_CONNS 0

Ditto

#if CONFIG_NET_ICMP_PREALLOC_CONNS > 0
static struct icmp_conn_s g_icmp_connections[CONFIG_NET_ICMP_PREALLOC_CONNS];
#ifndef CONFIG_NET_ICMP_MAX_CONNS
# define CONFIG_NET_ICMP_MAX_CONNS 0

Here too

static struct icmpv6_conn_s
g_icmpv6_connections[CONFIG_NET_ICMPv6_PREALLOC_CONNS];
#ifndef CONFIG_NET_ICMPv6_MAX_CONNS
# define CONFIG_NET_ICMPv6_MAX_CONNS 0

Ditto

****************************************************************************/

#ifndef CONFIG_NET_IEEE802154_MAX_CONNS
# define CONFIG_NET_IEEE802154_MAX_CONNS 0

Ditto

static struct netlink_conn_s
g_netlink_connections[CONFIG_NETLINK_PREALLOC_CONNS];
#ifndef CONFIG_NETLINK_MAX_CONNS
# define CONFIG_NETLINK_MAX_CONNS 0

Ditto

@@ -52,19 +53,19 @@
(addr1[2] == addr2[2]) && (addr1[3] == addr2[3]) && \
(addr1[4] == addr2[4]) && (addr1[5] == addr2[5]))

#ifndef CONFIG_NET_PKT_MAX_CONNS
# define CONFIG_NET_PKT_MAX_CONNS 0

Ditto

#if CONFIG_NET_TCP_PREALLOC_CONNS > 0
static struct tcp_conn_s g_tcp_connections[CONFIG_NET_TCP_PREALLOC_CONNS];
#ifndef CONFIG_NET_TCP_MAX_CONNS
# define CONFIG_NET_TCP_MAX_CONNS 0

Ditto

#if CONFIG_NET_UDP_PREALLOC_CONNS > 0
static struct udp_conn_s g_udp_connections[CONFIG_NET_UDP_PREALLOC_CONNS];
#ifndef CONFIG_NET_UDP_MAX_CONNS
# define CONFIG_NET_UDP_MAX_CONNS 0

Ditto

static struct usrsock_conn_s
g_usrsock_connections[CONFIG_NET_USRSOCK_PREALLOC_CONNS];
#ifndef CONFIG_NET_USRSOCK_MAX_CONNS
# define CONFIG_NET_USRSOCK_MAX_CONNS 0

Ditto
