
Overhauled CoAP Stack and API #20792

Open
20 of 25 tasks
carl-tud opened this issue Jul 19, 2024 · 23 comments

Comments

@carl-tud

carl-tud commented Jul 19, 2024

Hello RIOT Community,

RIOT features multiple CoAP libraries: gcoap, nanocoap_sock, and the nanocoap parser. In the future, it’d be great to have a single, unified, and modular library, facilitating CoAP over various protocols such as UDP, DTLS, and potentially TCP, TLS.

It’s time for something new 🥳!

Below I will give a brief overview before I outline the proposed design of a new client and server API, the improved options API, and the modular transport driver design.

Overview of the New CoAP Library for RIOT

The new CoAP library should provide a unified, versatile CoAP API for RIOT that is both easily extensible and beginner-friendly. Until a final, better name is found, I’m calling it the unified CoAP API, or unicoap for short.

The new CoAP stack will be based on gcoap. The goal is to extend gcoap to provide both synchronous and asynchronous APIs and to optionally support message deduplication.

The new API aims to reduce the need for in-depth knowledge of the protocol and its implementation details while offering convenience APIs for commonly used features such as block-wise transfer and OSCORE. It minimizes boilerplate code.

For example, this is what a simple blocking GET request would look like with the new client API (error handling omitted):

unicoap_response_t response;
uint8_t buffer[CONFIG_UNICOAP_DEFAULT_PDU_BUFFER_SIZE];
unicoap_send_request(unicoap_request_empty(UNICOAP_METHOD_GET), 
    unicoap_uri("coaps://iot.example.com/foo/bar?a=b"), &response, buffer, sizeof(buffer), 0);

printf("status code: %u\n", unicoap_response_get_status(&response));
my_dump_buffer(response.payload, response.payload_size);

Design

The new CoAP implementation comprises four main parts: a new API, the library's messaging internals, a modular parser design, and a modular transport design. The latter two components are extensible enough to support CoAP over TCP or TLS.

[Diagram: unicoap stack architecture (unicoap-stack2)]

Client API

Today, users have to choose between nanocoap and gcoap for sending requests. nanocoap provides a synchronous interface for client requests, whereas gcoap offers asynchronous, callback-based functionality. Generally, the gcoap async interface is more versatile, yet sometimes your application can't perform any useful work until the response has come in. Plus, nesting your application logic in response handlers, async or sync, quickly becomes messy. Hence, the new API will define both synchronous and asynchronous variants.

The following synchronous request function blocks until a response is received (or the corresponding timeout is exceeded), and copies relevant response data into the supplied buffer.

int unicoap_send_request_aux(
    unicoap_request_t request,
    unicoap_resource_identifier_t resource_identifier,
    unicoap_response_t* response,
    uint8_t* buffer, size_t buffer_capacity, uint16_t flags,
    unicoap_profile_t* profile, unicoap_aux_t* aux);

Request data (method, payload (+ size), and optionally any options) must be passed through the request parameter. To avoid unnecessary boilerplate, unicoap supports defining requests/responses via convenience functions, including, but not limited to:

static inline unicoap_request_t unicoap_request_empty(unicoap_method_t method) { ... }
static inline unicoap_request_t unicoap_request_string(unicoap_method_t m, char* payload) { ... }
static inline unicoap_request_t unicoap_request(unicoap_method_t method, uint8_t* payload, size_t payload_size) { ... }
static inline unicoap_request_t unicoap_request_options(unicoap_method_t m, uint8_t* payload, size_t size, unicoap_options_t* options) { ... }

The unicoap_resource_identifier_t consists of an identifier type and a value. This API allows for different representations of resource identifiers: the URI passed to unicoap_uri(...) specifies the transport type via the URI's scheme, the resource's address, the Uri-Path, and the Uri-Query options. unicoap_endpoint_udp(...) and unicoap_endpoint_dtls(...) specify the endpoint directly, i.e., for UDP/DTLS, the address and port. This design avoids constructing URIs for requests where a raw sock_udp_ep_t already exists in the application, and it avoids a proliferation of client APIs, especially once convenience APIs for block-wise transfer and potential future identifiers like CRIs are added.
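To illustrate the tagged-union idea behind unicoap_resource_identifier_t, here is a minimal, self-contained sketch. All names below (id_kind_t, make_uri, make_udp, ...) are illustrative stand-ins, not the actual unicoap API:

```c
#include <assert.h>

/* Hypothetical sketch of a tagged-union resource identifier. The
 * identifier either carries a URI string (whose scheme selects the
 * transport) or an already-resolved endpoint. */
typedef enum {
    ID_URI,    /* "coap://..." string */
    ID_EP_UDP, /* raw UDP endpoint, no URI construction needed */
} id_kind_t;

typedef struct {
    id_kind_t kind;
    union {
        const char *uri;
        struct { const char *addr; unsigned port; } udp;
    };
} resource_identifier_t;

static resource_identifier_t make_uri(const char *uri) {
    resource_identifier_t id = { .kind = ID_URI };
    id.uri = uri;
    return id;
}

static resource_identifier_t make_udp(const char *addr, unsigned port) {
    resource_identifier_t id = { .kind = ID_EP_UDP };
    id.udp.addr = addr;
    id.udp.port = port;
    return id;
}
```

The point of the union is that a caller already holding a socket endpoint never has to print it into a URI string just to issue a request.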

The flags parameter serves, for example, to signal to the CoAP stack that the request should be sent as a confirmable message.

Details about the response (such as status code, the payload (+ size), and options) can be obtained from the response out parameter.

For callback-based response handling, unicoap defines an async and a synchronous variant.

int unicoap_send_request_sync_callback(
    unicoap_request_t request,
    unicoap_resource_identifier_t resource_identifier,
    unicoap_response_callback callback, void* callback_args,
    uint16_t flags, unicoap_profile_t* profile);

int unicoap_send_request_async(
    unicoap_request_t request,
    unicoap_resource_identifier_t resource_identifier,
    unicoap_response_callback callback, void* callback_args,
    uint16_t flags, unicoap_profile_t* profile);

Note

Block-Wise Transfer
Sending: unicoap defines an auto-slice flag that can be passed alongside a client request or server response to instruct the stack to slice the message and transmit it block-wise. In addition, you can use a slicer to send blocks manually.
Receiving: unicoap is also going to provide callback-based client request functions, similar to the ones above, that support automatic Block2 block-wise transfers. With these APIs, the callback is invoked for each response block arriving at the client. The client and server APIs will also optionally (resource-intensive in async scenarios) support block-wise reassembly of requests/responses via a flag present in the resource definition or the request's flags parameter.
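For context, any block-wise slicer ultimately has to produce the Block1/Block2 option value defined in RFC 7959, which packs the block number (NUM), the more-blocks flag (M), and the block size exponent (SZX = log2(size) - 4) into a single integer. A minimal, self-contained sketch (the helper names are illustrative, not unicoap API):

```c
#include <assert.h>
#include <stdint.h>

/* Encode an RFC 7959 block option value from NUM, M, and a block
 * size between 16 and 1024 bytes. */
static uint32_t block_option(uint32_t num, int more, uint16_t size) {
    uint32_t szx = 0;
    while ((16u << szx) < size) {
        szx++; /* 16 -> 0, 32 -> 1, ..., 1024 -> 6 */
    }
    return (num << 4) | ((more ? 1u : 0u) << 3) | szx;
}

/* Recover the block size from an encoded option value. */
static uint16_t block_size(uint32_t option) {
    return (uint16_t)(16u << (option & 0x7));
}
```

A reassembler does the inverse: it reads NUM and SZX from each arriving block to compute the write offset NUM × size into the reassembly buffer.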

Note

On OSCORE and Profiles
unicoap is going to enable support for OSCORE. In order to leave room for future CoAP extensions akin to OSCORE, I'd like to introduce profiles: common characteristics of requests and responses that dictate that special treatment be applied to a given message. E.g., an OSCORE security context stores the characteristics relevant for encrypting/decrypting CoAP requests and responses.


Server API

unicoap will leverage existing resource definition APIs (nanocoap XFA and gcoap listeners), with slight modifications concerning flags (for things like preventing a resource from being accessed over unsecured transports). In contrast to existing APIs, however, unicoap will allow sending responses after the request handler returns (needed for proxy operation).


A Note About Options

Let's look at that unicoap_options_t* options field from before. unicoap_options_t provides a view on the CoAP options in a message buffer. The new library provides several APIs for option manipulation, with a special focus on how repeatable and non-repeatable ("single-instance") options are handled.
Non-repeatable options like Accept:

int unicoap_options_get_accept(unicoap_options_t* options, uint16_t* format);
int unicoap_options_set_accept(unicoap_options_t* options, uint16_t format);
int unicoap_options_remove_accept(unicoap_options_t* options);

Repeatable options like ETag and Uri-Query:

int unicoap_options_get_next_etag(unicoap_option_iterator* iterator, uint8_t** value);
int unicoap_options_add_etag(unicoap_options_t* options, uint8_t* value, size_t value_size);
int unicoap_options_remove_etags(unicoap_options_t* options);

For options like Uri-Path, whose combined values form another, aggregated value (here, the URI path is formed by concatenating all URI path components), you can optionally use convenience APIs like:

int unicoap_options_get_uri_path(unicoap_options_t* options, char* path, size_t capacity);
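Under the hood, such a convenience getter has to join the repeatable Uri-Path segments with '/' into the caller's buffer, reporting an error when the buffer is too small. A hedged sketch of that aggregation logic (names are illustrative, not the unicoap implementation):

```c
#include <stddef.h>
#include <string.h>

/* Join path segments into out, each prefixed by '/'. Returns the
 * resulting length, or -1 if the buffer capacity is insufficient. */
static int join_uri_path(const char *const *segments, size_t count,
                         char *out, size_t capacity) {
    size_t used = 0;
    if (capacity == 0) {
        return -1;
    }
    for (size_t i = 0; i < count; i++) {
        size_t len = strlen(segments[i]);
        if (used + 1 + len + 1 > capacity) {
            return -1; /* would overflow the caller's buffer */
        }
        out[used++] = '/';
        memcpy(out + used, segments[i], len);
        used += len;
    }
    out[used] = '\0';
    return (int)used;
}

/* tiny self-check: "foo", "bar" -> "/foo/bar" */
static int demo_join(void) {
    const char *segs[] = { "foo", "bar" };
    char buf[16];
    return join_uri_path(segs, 2, buf, sizeof buf) == 8
        && strcmp(buf, "/foo/bar") == 0;
}
```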

Note

APIs for getting, setting, adding, and removing options with arbitrary option numbers still exist:
unicoap_options_get(...), unicoap_options_copy_value(...), unicoap_options_set(...), unicoap_options_remove(...) (non-repeatable);
unicoap_options_add(...), unicoap_options_copy_values(...), unicoap_options_remove_all(...) (repeatable).

Tip

nanocoap also enforces the requirement that you insert options in the order dictated by their corresponding option numbers. In unicoap, this requirement is no longer present, yet inserting options in the order they will occur in the final packet still delivers the best performance.
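The reason order matters is the wire format: RFC 7252 delta-encodes option numbers, so each option header stores only the difference to the previous option's number. Appending in ascending order is a pure append, while inserting in the middle forces shifting bytes and re-encoding the next option's delta. A minimal encoder sketch for the simple case (delta and length below 13; the extended 13/14 encodings are omitted, and the names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Append one option after the option numbered prev_num. Returns the
 * number of bytes written, or 0 if this sketch doesn't cover the
 * required extended encoding. */
static size_t put_option(uint8_t *buf, uint16_t prev_num, uint16_t num,
                         const uint8_t *val, size_t len) {
    uint16_t delta = num - prev_num;
    if (delta >= 13 || len >= 13) {
        return 0; /* extended encodings out of scope here */
    }
    buf[0] = (uint8_t)((delta << 4) | len); /* delta nibble, length nibble */
    for (size_t i = 0; i < len; i++) {
        buf[1 + i] = val[i];
    }
    return 1 + len;
}

/* self-check: Uri-Path (option 11) as the first option, value "ab" */
static int demo_put(void) {
    uint8_t buf[8];
    size_t n = put_option(buf, 0, 11, (const uint8_t *)"ab", 2);
    return n == 3 && buf[0] == 0xB2 && buf[1] == 'a' && buf[2] == 'b';
}
```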


Exchangeable Transport Protocols and Message Formats

With unicoap, it's easy to implement CoAP over, e.g., TCP. unicoap strives for driver equality, i.e., the built-in UDP and DTLS drivers won't receive special treatment. Driver functionality is conditionally compiled in (e.g., IS_USED(MODULE_UNICOAP_TRANSPORT_CARRIER_PIGEON)).

In unicoap's transport layer, each driver must support two basic functions and a parser. As an example, let's pick the UDP driver, which provides these functions:

int _udp_init(event_queue_t* queue);
int _udp_send_message(unicoap_message_t* message, unicoap_properties_t* properties, sock_udp_ep_t* endpoint);
int unicoap_parse_pdu_udp_dtls(const uint8_t* pdu, size_t size, unicoap_message_t* message, unicoap_properties_t* properties);

When you call unicoap_init() in your application, unicoap will spin up a thread (again, like gcoap does), create an event queue and ask your driver to perform any setup work. _udp_init opens a GNRC socket and registers a callback for incoming datagrams on the queue via sock_udp_event_init. In the UDP case, the driver reads any available data from the socket and ultimately calls the following function (ignoring error handling).

void _process_pdu(
    uint8_t *pdu, size_t size,
    unicoap_parser_t parser,
    unicoap_endpoint_t remote,
    unicoap_endpoint_t local
);

Because the CoAP header format varies from transport to transport, you need to pass in a parser. These parsers aren't strictly transport-specific, i.e., a certain parser (that is, a header format) may be shared between multiple transports. The option format and payload (i.e., the payload marker plus the payload that follows) remain the same across all CoAP message formats, however. In the UDP case, the appropriate parser is passed. After parsing the header, the parser ultimately calls _parse_options_and_payload, which is functionality shared among all parsers.
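As an illustration of what the transport-specific part of a parser has to do, here is a self-contained sketch of parsing the fixed 4-byte RFC 7252 header used over UDP/DTLS; CoAP-over-TCP replaces this header with a length-prefixed one, while options and payload keep the same format. Types and names here are illustrative, not the unicoap parser:

```c
#include <stddef.h>
#include <stdint.h>

/* Fields of the RFC 7252 header: Ver(2) | Type(2) | TKL(4),
 * Code(8), Message ID(16). */
typedef struct {
    uint8_t version;      /* must be 1 */
    uint8_t type;         /* 0 CON, 1 NON, 2 ACK, 3 RST */
    uint8_t token_length; /* 0..8 */
    uint8_t code;         /* e.g. 0x01 = GET */
    uint16_t message_id;
} udp_header_t;

static int parse_udp_header(const uint8_t *pdu, size_t size, udp_header_t *h) {
    if (size < 4) {
        return -1; /* truncated PDU */
    }
    h->version = pdu[0] >> 6;
    h->type = (pdu[0] >> 4) & 0x3;
    h->token_length = pdu[0] & 0xF;
    h->code = pdu[1];
    h->message_id = (uint16_t)((pdu[2] << 8) | pdu[3]);
    return (h->version == 1 && h->token_length <= 8) ? 0 : -1;
}

/* self-check: CON GET with message ID 0x1234 and no token */
static int demo_parse(void) {
    const uint8_t pdu[] = { 0x40, 0x01, 0x12, 0x34 };
    udp_header_t h;
    return parse_udp_header(pdu, sizeof pdu, &h) == 0
        && h.type == 0 && h.code == 0x01 && h.message_id == 0x1234;
}
```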

Sending works similarly, as each message is effectively passed down to the proper _my_transport_send_message(...) function which, in turn, also uses the corresponding header encoder.

static int _send_message(unicoap_message_t* message, unicoap_properties_t* properties, unicoap_endpoint_t remote) {
    switch (remote.proto) {
#if IS_USED(MODULE_UNICOAP_UDP)
        case UNICOAP_PROTO_UDP:
            return _udp_send_message(message, properties, remote.transport_endpoint.udp);
#endif
#if IS_USED(MODULE_UNICOAP_DTLS)
        case UNICOAP_PROTO_DTLS:
/* ... */
/* ... */
/* ... */
#endif
        /* MARK: unicoap_transport_extension_point */
    }
}

(where unicoap_properties_t holds the message ID, token, and message type)


Implementation Status


At the moment the following components are implemented:

  • API
    • client
    • server
    • block-wise helpers
    • Observe
  • Parser
    • UDP/DTLS header parser
    • options parser
    • options API
    • options convenience APIs
  • Messaging
    • event loop
    • Block-wise reassembly and fragmentation
    • deduplication
    • Observe
  • Transport
    • vector send + recv interface
    • UDP driver implementation
    • DTLS driver implementation
    • parser-transport interface
  • gcoap and nanocoap modules (need adjustments)
    • gcoap DNS
    • nanocoap/gcoap cache
    • gcoap forward proxy
    • gcoap fileserver (VFS)

Thanks for reading! I’d love to hear your thoughts on the new API.

@chrysn
Member

chrysn commented Jul 19, 2024

Thanks for tackling this – a unified way to access CoAP in RIOT has been overdue for some time, and will also make it easier to integrate the implementation bundling EDHOC and OSCORE that's being developed in RIOT-rs (currently called coapcore, ariel-os/ariel-os#348).

Some very early comments, only based on the text:

Profiles

That's a very overloaded word, and I'm not sure what "akin to OSCORE" would encompass. Would "middleware" fit the bill, and can they be chained?

unicoap_send_request{_aux} signature

The request_t type seems to be able to store all the option pointers (probably not the data, requiring that the pointed-to values live between request_t construction time and buffer population, which sounds OK). How will it do that with the bounded size of the request_t, not having access to the buffer yet?

Likewise, how does unicoap_resource_identifier_t store its inputs without already populating them into the buffer? For example, if I were to follow up on #13827 and add the CRI support you already mention, then building a unicoap_resource_identifier_t would probably look like:

unicoap_resource_identifier_t where;
cri_init(&where, CRI_COAP_TCP, "example.com");
cri_append_path(&where, "sensor");
cri_append_path(&where, "2");
cri_append_query(&where, "a=b");

Would the unicoap_resource_identifier_t spool them in some bound memory, or would building the CRI just mean creating more data structures?

MODULE_UNICOAP_TRANSPORT_CARRIER_PIGEON

I'd love to review that draft!

transports / profiles

A transport like the aforementioned coapcore would provide multiple URI schemes, and integrate EDHOC/OSCORE – whereas other implementations (like direct libOSCORE on Gcoap) would have authentication and transports in different places. Where would credential requirements be passed in, and can that be done in a way that works both if the authentication sits on a profile/middleware and if the authentication is built in? (I have no expectation that the precise same credentials could be passed in to either, but the credentials would be built and referenced in a way that can be transparent to the application)

@maribu
Member

maribu commented Jul 21, 2024

My first thought is:

Comic "XKCD: Standards"

From my experience using RIOT to teach students, the current situation with gcoap and nanocoap is already highly confusing. Adding another user-facing API would IMHO make a bad situation worse. Would you consider instead making it the goal to replace nanocoap and gcoap as user-facing APIs?

nanocoap has been extended a lot in recent years in a way that does not bloat trivial use cases while enabling more complex use cases at the expense of using more resources. I would love it if the same code could be used for CoAP with different transports.

I would like to emphasize the original goal of CoAP:

The Constrained Application Protocol (CoAP) is a specialized web
transfer protocol for use with constrained nodes and constrained
(e.g., low-power, lossy) networks. The nodes often have 8-bit
microcontrollers with small amounts of ROM and RAM, [...]

The part about 8-bit may not have aged that well, as it has been shown that tiny 32-bit MCUs are very much feasible. Still, larger SRAM increases both MCU cost and power consumption, so there is still a lot of value in keeping things small.

As a result, I would like to emphasize that a universal CoAP API should allow tiny applications for trivial use cases. Letting the user access more features by calling additional functions or enabling more modules (neither needed for the trivial use cases) would IMO be a good compromise. When modules are used to select the transports, optimizing for the case where only a single transport is supported (e.g., by avoiding indirect function calls in that case) would be a way to cover both kinds of use cases.

Note that the ability to add CoAP options in random order is IMO a bad trade-off. The convenience it adds is limited: with a decent error message when adding options out of order, developers will not waste a lot of time on this. The overhead of this little extra convenience is felt by all users, as it forces keeping the options in RAM until they are written into the message buffer.

@chrysn
Member

chrysn commented Jul 21, 2024

Adding another user facing API would make IMHO a bad situation worse

The way I understand this PR, it aims to replace both gcoap and nanocoap_sock as the user-facing parts. Only then does it make sense to add this.

8-bit

I haven't seen the code, but I would expect that everything this new API does can be folded at build time into barely different code from what direct nanocoap or gcoap use would create, provided that LTO is enabled and there is only one backend (which would be the typical case).

@carl-tud
Author

carl-tud commented Jul 22, 2024

@chrysn

Profiles

That's a very overloaded word, and I'm not sure what "akin to OSCORE" would encompass. Would "middleware" fit the bill, and can they be chained?

OSCORE is special implementation-wise, as it needs to be hardcoded in. Consider an OSCORE-protected message passing through the transport layer and parser, just as any regular message would. In theory, after parsing the message and inspecting its OSCORE option, the stack decrypts the inner message and then needs to redirect the combined plaintext message (the serialized inner message) back to the parser layer. This sort of behavior is unique to OSCORE and cannot be achieved with any middleware-like model.

The term "profile" isn't set in stone yet. Fundamentally, directly passing the OSCORE security context as an optional argument to, e.g., the request functions would suffice. The idea here was to create some sort of tagged union that ensures extensions requiring special treatment, like OSCORE, can be added in the future without source-breaking changes or tens of request function "overloads".

The cost associated with chaining middleware is too high for a constrained environment, I think. These use cases are too narrow to be built-in functionality; plus, OSCORE doesn't even fit the middleware model.


Request signature and resource identifier (socket endpoint, URI, or potentially CRI)

First, the options issue. Options are stored in their serialized form in an options buffer. The header on the other hand is disconnected from the options buffer to support various transport-dependent headers (CoAP over TCP, UDP, (GATT), ...) and only added in by the stack afterwards.

The request struct stores a pointer to an options struct containing a fixed-size option array with pointers into the PDU, similar to the existing nanocoap design. Additionally, the options struct holds the number of options and the option buffer's total capacity and current size. The options member in the request needs to be a pointer; otherwise, each time the request is copied, you'd also do a rather expensive copy of the options struct.

If you really need to add options manually, it would look something like this:

uint8_t my_buf[CONFIG_UNICOAP_OPTIONS_BUFFER_DEFAULT_CAPACITY];
unicoap_options_t my_options1;
unicoap_options_init(&my_options1, my_buf, sizeof(my_buf));

/* or shorter */
UNICOAP_OPTIONS_ALLOC_CAPACITY(my_options2, 200);
/* or, with the default capacity (CONFIG_UNICOAP_OPTIONS_BUFFER_DEFAULT_CAPACITY) */
UNICOAP_OPTIONS_ALLOC(my_options2);
unicoap_options_set_content_format(&my_options2, UNICOAP_FORMAT_TEXT);

In general, supplying a URI would force you to allocate an options buffer before calling the request function. unicoap_resource_identifier_t is just a tag and a union, either referencing a string or a socket endpoint. If you supply just the URI and no options buffer, the implementation will allocate an options buffer of default size for you and automatically insert Uri-Path, Uri-Host, and friends. The goal here was to create a one-liner that won't make you worry about option buffers if you aren't going to insert any manually. Just storing a char* or a sock_udp_ep_t* instead should be really cheap here.


MODULE_UNICOAP_TRANSPORT_CARRIER_PIGEON

I'd love to review that draft!

Next April. Promise ;)


A transport like the aforementioned coapcore would provide multiple URI schemes, and integrate EDHOC/OSCORE – whereas other implementations (like direct libOSCORE on Gcoap) would have authentication and transports in different places. Where would credential requirements be passed in, and can that be done in a way that works both if the authentication sits on a profile/middleware and if the authentication is built in? (I have no expectation that the precise same credentials could be passed in to either, but the credentials would be built and referenced in a way that can be transparent to the application)

I'm not sure I fully understand. In theory, OSCORE could also be handled by a transport layer driver in the current design. I guess, that just wouldn't be that useful compared to shared OSCORE handling in the messaging stack.

@carl-tud
Author

carl-tud commented Jul 22, 2024

Would you consider to instead have the goal of replacing nanocoap and gcoap as user facing API?

@maribu Yes, the goal is to offer a full replacement for nanocoap, nanocoap_sock, and gcoap. The new library will reuse existing code as much as possible, such as code from the nanocoap cache extension.

The transport drivers are currently compiled in via a module flag, so there shouldn't be any additional overhead.

optimizing for the case where only a single transport is supported would be a way to cover both use cases.

Avoiding a single indirect function call doesn't seem possible to me without introducing too much complexity. As @chrysn said, code unused by the application should be stripped by the optimizer, so offering more functionality doesn't necessarily lead to increased binary sizes.

Note that the ability to add CoAP options in random order is IMO a bad trade-off.

Provided you add options in the correct order, you'll get the same performance behavior as with nanocoap. As an alternative, I could gate the ability to insert options in random order behind a compile-time flag? I just think inserting options in some predefined order requires a level of protocol knowledge I wouldn't want to demand from a first-time user. In either case, I'd document that you can rearrange your option insertion calls to match the RFC order as an optimization.

@chrysn
Member

chrysn commented Jul 22, 2024

cannot be achieved with any middleware-like model

The way libOSCORE is wrapped in Rust does exactly this: It presents a view of the plaintext message composed of both options from outside and from inside. So a sufficiently powerful middleware model (that can add and intercept properties, as would an HTTP authentication middleware that adds and strips cookie headers) can cover that. Whether that is an efficient and ergonomic thing to do in C is a different question, of course, but there's nothing fundamental in OSCORE that stops one from doing this.

In theory, OSCORE could also be handled by a transport layer driver in the current design.

The hard part about making it a transport is that transports could then be stacked – your remote can be an OSCORE transport tacked onto a UDP transport, or an OSCORE transport tacked onto an OSCORE transport tacked onto a TCP transport (while nested OSCORE is not in RFC 8613, there is ongoing work and there are use cases to allow it). That works best if those different transports form a linked structure, but that requires multiple stack allocations in C, putting it again in the "might not be ergonomic" category from above. So yeah, maybe it is practical to give OSCORE a special place in the handling.

At the same time, the coapcore reference was not only made to illustrate what can be done there, but to point to a practical possibility: if this is to be pluggable, it will need the ability to pass credentials both into a specially placed OSCORE layer and into transports that handle their own security (which may be OSCORE as part of the implementation).

@carl-tud
Author

carl-tud commented Jul 22, 2024

Whether that is an efficient and ergonomic thing to do in C is a different question

Yeah, this is what I meant by saying OSCORE "cannot be achieved with any middleware-like model". Still, I don't believe middleware is worth the overhead for the tiny number of applications that would benefit from it.

That works best if those different transports form a linked structure

The current implementation tries really hard to not have statically allocated driver structs or anything like that. Transport drivers are, put simply, conditional (again, compile-time) function calls to the transport driver implementation.

If this is to be pluggable, it will need the ability to pass credentials both into a specially placed OSCORE layer and into transports that handle their own security (which may be OSCORE as part of the implementation).

For provisioning, RIOT largely relies on static configurations. I could also make the profile (or whatever it ends up being called) available to the transport driver, making these "profiles" a way of channeling credentials to lower library components, such as the messaging stack (potentially handling OSCORE) or even transport drivers. As with middleware, and having talked to embedded/IoT developers, use cases for stacking drivers are hard to find and too specific to be built directly into RIOT.

@chrysn
Member

chrysn commented Jul 22, 2024

The current implementation tries really hard to not have statically allocated driver structs or anything like that.

I didn't express that clearly: What I meant was that at the time the message is created, a chain would need to be created:

coap_transport_data_t udp = coap_remote_for_uri("coap://host.example.com");
coap_transport_data_t protected = coap_oscore_for(&udp, my_security_requirements);
unicoap_send_request(..., protected, ...);

which doesn't align nicely with the single-line C call syntax.

pluggable / static configurations

I didn't mean run-time pluggable. If coapcore is a backend that is selected, it is probably the only backend.

@benpicco
Contributor

Just some thought from my side:

One goal of nanoCoAP sock was to use zero-copy network functions (sock_udp_sendv(), sock_udp_recv_buf()) to avoid having to keep a separate buffer around to copy the payload + CoAP header into when possible.

e.g. in your first example, how is response.payload allocated?

So my question would be why you want to base your new API on top of GCoAP (which IMHO is way too complex and clunky) instead of nanoCoAP or your own re-write.

I extended nanoCoAP sock to cover all use-cases I had so I don't have to deal with GCoAP anymore. But there are still some footguns in the 'core' nanoCoAP library that you'd inherit if you use it. (e.g. coap_put_option() has no way to check if the destination buffer has still enough space left for the option it's going to write).

I see that you are using unicoap_request_t and unicoap_response_t, that's already an improvement over muddling everything together in coap_pkt_t which is now hard to untangle (see #17544).

Do you also have some application(s) in mind to make use of the new API?
Designing an API without actual users often leads to scope creep ("this might be a handy feature!") while overlooking actual requirements and baking constraints into the API (payload must be in a single buffer).

e.g. what's the use-case for removing options from a CoAP header? I guess when you implement re-ordering, this pretty much comes for free though.

@carl-tud
Author

carl-tud commented Jul 23, 2024

One goal of nanoCoAP sock was to use zero-copy network functions (sock_udp_sendv(), sock_udp_recv_buf()) to avoid having to keep a separate buffer around to copy the payload + CoAP header into when possible.

e.g. in your first example, how is response.payload allocated?

The callback APIs are essentially zero-copy APIs on top of a listen buffer. By listen buffer, I mean the buffer gcoap writes the scattered data returned by sock_udp_recv_buf_aux into, as parsing scattered data is a mess (in terms of readability, error-proneness, and code size). In theory, this is only an issue when you need to read from the socket more than once (i.e., provided the second read attempt yields zero bytes, you won't need to copy anything; using the socket's stack buffer should be perfectly fine here -- but that's just an optimization of the general case).

Now, for the blocking client API, you will need to copy the payload into some sort of user-supplied buffer. For the callback-based APIs, I'm just handing you the listen buffer in different packaging. There's no copying of options or payload going on here. The pointers in the options view array (that's also how nanocoap does it; you need some way of keeping track of options without re-parsing the entire options blob) point into the listen buffer, same as response.payload.

Of course, that does not apply for block-wise reassembly, but that's another use case (where you explicitly want to copy).

So my question would be why you want to base your new API on top of GCoAP (which IMHO is way too complex and clunky) instead of nanoCoAP or your own re-write.

Yeah, the statement about building unicoap on top of gcoap might've been a little oversimplified. unicoap will have a messaging thread, like gcoap, enabling async operations. Other than that, another third of gcoap isn't usable because of the new transport modularity, and I'm rewriting yet another third because some parts are extremely hard to comprehend and debug. There's no struct or function that remains untouched ;)

I extended nanoCoAP sock to cover all use-cases I had so I don't have to deal with GCoAP anymore. But there are still some footguns in the 'core' nanoCoAP library that you'd inherit if you use it. (e.g. coap_put_option() has no way to check if the destination buffer has still enough space left for the option it's going to write).

I've rewritten the parser, bounds checks included.

I see that you are using unicoap_request_t and unicoap_response_t, that's already an improvement over muddling everything together in coap_pkt_t which is now hard to untangle (see #17544).

This is one of the details I haven't settled on yet. Currently, unicoap_request_t and unicoap_response_t are just typedef'd to a common unicoap_message_t, and accessor functions for the status code/method are provided. The alternative would be to create duplicate structs that each have a type-safe method or status-code field, but that would require casting and relying on same-layout "guarantees".

... overlooking actual requirements and backing constraints into the API (payload must be in a single buffer).

If you want to send chunked payload, I'm happy to hand you an iolist_t. At some point, the iolist implementation is going to copy your chunks into a larger buffer anyway. But yeah, sure, that's something I could do; it's just a question of how unergonomic the API would become.

e.g. what's the use-case for removing options from a CoAP header? I guess when you implement re-ordering, this pretty much comes for free though.

unicoap does not do reordering at all. Instead, options are always inserted at the right place right away. That's also why you can achieve nanocoap's performance by adding options in the order dictated by their option numbers.

Yes, generally, you won't remove any options. The removal function is still there from an earlier version I tested. I can remove it of course, but if you don't call this API, it's not going to land in your binary.
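
The insert-in-place idea can be sketched with self-contained, hypothetical structures (not the actual unicoap representation): entries stay sorted by option number, so adding options in ascending number order degenerates to a plain append, while an out-of-order add shifts later entries once.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t number;     /* CoAP option number */
    const char *value;
} option_t;

typedef struct {
    option_t entries[8];
    size_t count;
} option_list_t;

static int insert_option(option_list_t *list, uint16_t number, const char *value)
{
    if (list->count >= 8) {
        return -1; /* no space left */
    }
    size_t i = list->count;
    /* fast path: if number >= last number, the loop never runs (append) */
    while (i > 0 && list->entries[i - 1].number > number) {
        list->entries[i] = list->entries[i - 1]; /* shift later entries */
        i--;
    }
    list->entries[i] = (option_t){ .number = number, .value = value };
    list->count++;
    return 0;
}
```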

@benpicco
Contributor

I'm happy to hand you an iolist_t. At some point, the iolist impl is going to copy your chunks into a larger buffer anway.

Only if you need async operation (so a separate send buffer is needed) or the message will be encrypted with DTLS. For plain blocking UDP, sock_udp_sendv() will work just fine without copying the payload to a separate buffer.

@carl-tud
Author

@benpicco Is sending chunked payload something you definitely need? If yes, I'd be interested in the concrete use case you have.

@benpicco
Contributor

It's certainly convenient, but my main use case was having a (small) buffer for the CoAP header and handing the payload over without copying it to a separate (CoAP header + payload length) buffer.

On the other hand, are async requests something we definitely need? If yes, I'd be interested in the concrete use case you have.

@carl-tud
Author

carl-tud commented Jul 24, 2024

handing the payload over without copying it to a separate (CoAP header + payload length) buffer.

Oh, I think we've misunderstood each other. Yes, that's how I was going to do it anyway.

This avoids allocating or occupying a temporary buffer. Still, there's a case where you cannot live without a temporary buffer, and that's retransmissions 1.

Regular client call:
Sending:
user's payload + optional options buffer → unicoap creates an iolist (no copy), attaches header to iolist (cheap, exact size is known) → sock_udp_sendv()

Receiving:
sock_udp_recv_buf_aux() → more than one chunk → copy chunks into listen buf → parse (no copy) → pass response and options (again, just a view on the listen buf, no copying) to callback function (for blocking API, copy.)
OR:
sock_udp_recv_buf_aux() → single chunk → parse (no copy) → pass response and options (again, just a view on the socket stack buffer, no copying) to callback function (for blocking API, copy.)
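
The sending path above hinges on iolist chaining. A stand-in definition mirroring RIOT's iolist_t (sys/include/iolist.h) shows how the header and the user's payload stay in separate buffers, linked into one vectored send; in RIOT, the chain would be handed to sock_udp_sendv(). The helper here is a local sketch, not the library's iolist_size().

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in mirroring RIOT's iolist_t for illustration */
typedef struct iolist {
    struct iolist *iol_next;
    void *iol_base;
    size_t iol_len;
} iolist_t;

/* total length of the chain, computed without copying any snip */
static size_t chain_size(const iolist_t *iol)
{
    size_t len = 0;
    for (; iol != NULL; iol = iol->iol_next) {
        len += iol->iol_len;
    }
    return len;
}
```

Since the exact header size is known before sending, attaching the header snip in front of the payload snip is cheap, and no combined buffer is ever allocated.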

Footnotes

  1. This might also be the reason behind gcoap not utilizing any iolist APIs. For retransmissions, gcoap has no choice but to keep a serialized version of the message. Plus, keeping both the message in a buffer and initially sending the message via vectored send functions would involve an additional copy that's not needed.

@carl-tud
Author

carl-tud commented Jul 24, 2024

On the other hand, are async requests something we definitely need?

I think I'm not quite getting the connection between chunked payload and async requests.
Currently, nanocoap discards any inbound messages not related to the current (therefore pending) request/response. This might lead to increased network activity, as peers will retransmit since nanocoap cannot ACK retransmitted messages. If your response handler is cheap, this is fine. In other scenarios, like proxy operation, you'd need to block the handler, which, in turn, blocks the event loop. Other than that, I believe async resource request handlers are useful when you want to perform other work in the meantime.

@maribu
Member

maribu commented Jul 24, 2024

Do we have any evidence that there are real world use cases (outside of research labs and testbeds) for CoAP proxies running on MMU-less nodes?

I would guess that in real world deployments, the CoAP-Proxy will be running on some OpenWRT/Linux box near by anyway. I suspect that a CoAP-Proxy on a RIOT node is mostly an academic demonstrator, and nothing that one would actually use anyway.

@chrysn
Member

chrysn commented Jul 24, 2024 via email

@benpicco
Contributor

I did some digging to better understand the historical context on how we arrived at the current situation: Turns out both nanoCoAP (#5972) and GCoAP (#5598) arrived at roughly the same time.

Looks like GCoAP was first based on MicroCoAP, but was then rewritten on top of @kaspar030's nanoCoAP implementation - but only using the parser / writer part. nanocoap_sock was already part of nanoCoAP back then as a very basic, synchronous client/server implementation while GCoAP aimed at being a very versatile implementation with all the bells and whistles.

When SUIT came along, it needed a CoAP transport, but didn't want to use the heavy GCoAP library that comes with a separate thread and payload buffer, as this would increase the cost of providing firmware updates (and since SUIT needs to keep a flashpage in RAM, that's already rather high).

It used nanoCoAP to implement block-wise to fetch the updates.

Then I had a simple application where some sensors would just push some measurement data to a server via CoAP and found GCoAP very cumbersome to use for such a simple task.
Since we were using SUIT anyway, it was just natural to use the same module (nanocoap_sock) there too. I moved the new CoAP features that had been added inside SUIT to the common module and added convenience functions to make it easier to work with.

This was all under a 'only pay what you use' philosophy, so if you don't use a feature (e.g. server mode, async mode) you don't have to compile it in - unlike with GCoAP.

With that, nanoCoAP sock now almost has feature parity with GCoAP; the only things missing are Observe handling and proxying, but I didn't have a use case for those.

Do we have any evidence that there are real world use cases (outside of research labs and testbeds) for CoAP proxies running on MMU-less nodes?

I think @fabian18 and @mariemC were working on just that recently.

By listen buffer I mean the buffer gcoap writes scattered data returned by sock_udp_recv_buf_aux() in, as parsing scattered data is a mess (in terms of readability + error-prone + code size).

IMHO sock_udp_recv_buf_aux() is just a bad API. In practice, there is never any scattered data with GNRC, and even with lwIP I've never seen it – it also makes no sense.
The application shouldn't have to deal with de-fragmenting network packets; I really can't think of a use case for that. That's why nanoCoAP just asserts() that there is no scattered data and operates on the socket buffer under that assumption, avoiding the copy and the separate buffer.

@chrysn
Member

chrysn commented Jul 25, 2024

It's borderline off-topic at this point, but just for completeness' sake: there is also 6TiSCH minimal joining (RFC 9031), for which every routing node in a 6LoWPAN (easily MMU-less) acts as a proxy – not to do blockwise or retransmissions or to cache (it does none of that), but to forward before the network even tells the joining node which network addresses it is using. Granted, that may well not use the GCoAP-based proxy we now have in RIOT (especially as it should be implemented statelessly), but it is a proxy on a very constrained node nonetheless.

@Teufelchen1
Contributor

Hi carl,
I've got a use case which I'm not sure is covered. I think your client's "convenience" functions handle it, but maybe you can clarify:

I often find myself in a situation where I want to emit a CoAP packet to RAM (for further hacking, cuddling & debugging). Can your API allow me to do so? E.g., if you require me to have a valid transport, this won't work.

This got me thinking: in combination with your approach of being agnostic of the transport driver, it should be easy to quickly build a "dummy" driver that emits packets to, e.g., a ring buffer in RAM & receives packets from another ring buffer. Right?

Lastly, I want to make you aware of SLIPMUX, if you aren't already. Within SLIPMUX, CoAP packets are transported via serial / UART. No IPv6 or UDP involved.

@chrysn
Member

chrysn commented Jul 30, 2024

I don't think that that use case description is entirely accurate: There is no such thing as a CoAP message serialized without a transport, because not only does the CoAP header depend on the transport, but also CoAP options need to be set differently depending on the transport (for example, Observe over TCP has only zero values in notifications).

I think I can rephrase that though to do what you need: It could be convenient to have a transport that behaves like some particular transport (maybe UDP, then it builds a header, or maybe OSCORE, then it builds no header and just puts the CoAP method/code and relies on external matching of request/response), but only serializes into or reads from a dummy buffer. This transport might be configured statically as the only transport anyway, or it might be selected at runtime, or it might be configured for some Uri-Host value. This would allow easy access to serialized messages for inspection.

In particular, this could be used to implement CoAP transports such as slipmux that use another transport's mechanisms (although if the new CoAP stack has good APIs, implementing a new transport should be just as easy).
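
Such a buffer transport could be sketched as a driver vtable whose backend serializes into and reads from plain RAM instead of a socket. All names below are illustrative assumptions, not the unicoap driver interface:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical transport driver interface: the stack only sees the vtable */
typedef struct {
    int (*sendmsg)(void *ctx, const uint8_t *pdu, size_t len);
    int (*recvmsg)(void *ctx, uint8_t *buf, size_t capacity);
} transport_driver_t;

/* Backend state: a single-message RAM buffer for inspection/debugging */
typedef struct {
    uint8_t mem[128];
    size_t len;
} ram_transport_t;

static int ram_sendmsg(void *ctx, const uint8_t *pdu, size_t len)
{
    ram_transport_t *ram = ctx;
    if (len > sizeof(ram->mem)) {
        return -1;
    }
    memcpy(ram->mem, pdu, len);
    ram->len = len;
    return 0;
}

static int ram_recvmsg(void *ctx, uint8_t *buf, size_t capacity)
{
    ram_transport_t *ram = ctx;
    if (ram->len > capacity) {
        return -1;
    }
    memcpy(buf, ram->mem, ram->len);
    return (int)ram->len;
}

static const transport_driver_t ram_driver = {
    .sendmsg = ram_sendmsg,
    .recvmsg = ram_recvmsg,
};
```

A real backend could swap the flat buffer for ring buffers (as @Teufelchen1 suggested) or a serial framing such as slipmux without the stack noticing.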

@maribu
Member

maribu commented Jul 30, 2024

In particular, this could be used to implement CoAP transports such as slipmux that use another transport's mechanisms (although if the new CoAP stack has good APIs, implementing a new transport should be just as easy).

I can also see this being useful with some Segger RTT style transport.

I guess introducing some "buffer transport" backend would be an easy implementation choice that doesn't compromise the API design idea? For the use case of communicating from the host to a directly attached MCU, the CoAP over WebSocket format should be pretty close to what would be perfect here, if I recall correctly.

@carl-tud
Author

carl-tud commented Jul 31, 2024

@Teufelchen1 Do you need the packet in RAM to be serialized as it would occur on the wire or is it just the options (and header?) you want to store and modify?
