
"Error: write EPIPE" or "Error: write EPIPE" on attempted pcap import #2926

Closed
philrz opened this issue Dec 8, 2023 · 7 comments · Fixed by #2955
philrz commented Dec 8, 2023

tl;dr

An attempt to import a pcap containing 802.11 traffic resulted in a JavaScript error pop-up. I suspect the root cause is a Zeek failure that bubbles up through Brimcap. It would seem ideal if we could catch and surface that error instead.

Details

Repro is with Zui commit 29860a0.

This issue was reported in a recent community Slack thread. In the user's own words:

Hi everyone, new here. Im trying to use zui to view some a pcap file for a cyber security challenge. Installed zui and then tried to import the pcap file but i get this error. same error on my windows 10 and my linux vm. Can anyone help me solve this issue?
[screenshot of the error pop-up]

The user shared their pcap VanSpy.pcapng.zip, which I used to reproduce the problem with Zui 29860a0 on macOS, as shown in the attached video. In my repro it happened to show Error: write EPIPE as the exception instead.

Repro.mp4

As it turns out, if I dismiss the pop-up I can see that a handful of Zeek events were generated from the pcap. However, since the user was clearly expecting to see more, the pop-up gives the impression that the limited data might be due to a bug.

After seeing this, I opened the pcap file in Wireshark and could see the likely problem: all the traffic is 802.11. We've confronted this before: a prior issue asking about 802.11 (#570) was closed a long time ago when the team concluded there's really not much that can be done, since Zeek and Suricata seem to expect IP traffic. In the time since, I can see some 802.11 references in the Zeek repo, so maybe there's potential there, but for now we're stuck with the older Zeek/Suricata artifacts we ship.

Next, I dropped to the shell, grabbed the brimcap command line out of ps, and tried to run it by hand. Here's what I saw:

$ export BRIM_SURICATA_USER_DIR="$(pwd)"

$ cat VanSpy.pcapng | /Applications/Zui.app/Contents/Resources/app.asar.unpacked/zdeps/brimcap analyze -json -
{_path:"weird",ts:2023-11-25T14:50:28.326413Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"unknown_packet_type",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:36.923448Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"truncated_link_header",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:44.339188Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"non_ip_packet_in_ieee802_11",addl:null(string),notice:false,peer:"zeek"}
{_path:"stats",ts:2023-11-25T14:50:28.326413Z,peer:"zeek",mem:65(uint64),pkts_proc:1(uint64),bytes_recv:122(uint64),pkts_dropped:null(uint64),pkts_link:null(uint64),pkt_lag:null(duration),events_proc:423(uint64),events_queued:12(uint64),active_tcp_conns:0(uint64),active_udp_conns:0(uint64),active_icmp_conns:0(uint64),tcp_conns:0(uint64),udp_conns:0(uint64),icmp_conns:0(uint64),timers:39(uint64),active_timers:35(uint64),files:0(uint64),active_files:0(uint64),dns_requests:0(uint64),active_dns_requests:0(uint64),reassem_tcp_size:0(uint64),reassem_file_size:0(uint64),reassem_frag_size:0(uint64),reassem_unknown_size:0(uint64)}
{"type":"status","ts":{"sec":1702005480,"ns":385277000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005481,"ns":385356000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005482,"ns":385343000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005483,"ns":385330000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005484,"ns":385383000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005485,"ns":385368000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005486,"ns":385449000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005487,"ns":385443000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005488,"ns":385469000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005489,"ns":385463000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005490,"ns":386441000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005491,"ns":385530000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702005492,"ns":385516000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"error","error":"zeekrunner exited with code 1\nstdout:\nWARNING: No Site::local_nets have been defined.  It's usually a good idea to define your local networks.\nstderr:\n1700923856.942332 fatal error in \u003ccommand line\u003e, line 3: failed to read a packet from -: truncated pcapng dump file; tried to read 1596 bytes, only got 656\n"}

$ echo $?
1

So indeed, it looks like brimcap effectively failed because Zeek failed. However, is there any hope of catching the failure and surfacing the error in that last line instead of the JavaScript error that pops up now? If we could present that error text along with links to https://zui.brimdata.io/docs/support/Troubleshooting and https://github.com/brimdata/brimcap/wiki/Troubleshooting it would give the user a bit more to go on, e.g., I could make an entry in one of the Troubleshooting guides specifically acknowledging that 802.11 is not going to work.
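
For what it's worth, the shape of the handling I'm imagining is simple: scan the -json output of brimcap analyze for records with "type":"error" (like the last line above) and surface that message plus the links, rather than letting the raw exception reach the user. The real implementation would live in Zui's JavaScript import path; the Go sketch below (a hypothetical surface_error.go, not Zui or Brimcap code) just illustrates the idea:

// surface_error.go: hypothetical sketch of scanning brimcap's -json output
// for error records and surfacing the embedded message.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// statusRecord matches the JSON status/error lines brimcap emits with -json.
type statusRecord struct {
	Type  string `json:"type"`
	Error string `json:"error,omitempty"`
}

func main() {
	// Assume brimcap's output is piped in, e.g.:
	//   cat capture.pcap | brimcap analyze -json - | go run surface_error.go
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // status lines can be long
	for scanner.Scan() {
		var rec statusRecord
		if err := json.Unmarshal(scanner.Bytes(), &rec); err != nil {
			continue // the ZSON analyzer records aren't JSON; skip them
		}
		if rec.Type == "error" {
			fmt.Println("pcap import failed:", rec.Error)
			fmt.Println("see https://zui.brimdata.io/docs/support/Troubleshooting")
			fmt.Println("and https://github.com/brimdata/brimcap/wiki/Troubleshooting")
			os.Exit(1)
		}
	}
}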

philrz commented Dec 8, 2023

I don't know if it's related, but I also noticed there's an open Brimcap issue brimdata/brimcap#167 that mentions EPIPE, which is the same error we're seeing here.

philrz commented Dec 8, 2023

I remain uncertain whether this is a Brimcap problem, a Zui problem, or both. I'll continue to track it here until there's more clarity.

I did some more debugging with a local build of Brimcap commit fe89082. After doing make build I ran build/dist/suricata/suricataupdater to make sure I had rules. I then created some custom Brimcap configs to piece apart how the analyzers were behaving on their own and together.

Analyzing the pcap with this Zeek-only config:

$ cat zeek-only.yml 
analyzers:
  - cmd: /Users/phil/work/brimcap/build/dist/zeek/zeekrunner
    name: zeek
    workdir: /Users/phil/work/brimcap/zeek-wd

I see:

$ brimcap -version
Version: v1.5.4-26-gfe89082

$ cat ~/Desktop/VanSpy.pcapng | build/dist/brimcap analyze -config zeek-only.yml -json -
{_path:"weird",ts:2023-11-25T14:50:28.326413Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"unknown_packet_type",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:36.923448Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"truncated_link_header",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:44.339188Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"non_ip_packet_in_ieee802_11",addl:null(string),notice:false,peer:"zeek"}
{_path:"stats",ts:2023-11-25T14:50:28.326413Z,peer:"zeek",mem:65(uint64),pkts_proc:1(uint64),bytes_recv:122(uint64),pkts_dropped:null(uint64),pkts_link:null(uint64),pkt_lag:null(duration),events_proc:423(uint64),events_queued:12(uint64),active_tcp_conns:0(uint64),active_udp_conns:0(uint64),active_icmp_conns:0(uint64),tcp_conns:0(uint64),udp_conns:0(uint64),icmp_conns:0(uint64),timers:39(uint64),active_timers:35(uint64),files:0(uint64),active_files:0(uint64),dns_requests:0(uint64),active_dns_requests:0(uint64),reassem_tcp_size:0(uint64),reassem_file_size:0(uint64),reassem_frag_size:0(uint64),reassem_unknown_size:0(uint64)}
{_path:"stats",ts:2023-11-25T14:55:28.358505Z,peer:"zeek",mem:65(uint64),pkts_proc:44118(uint64),bytes_recv:10812047(uint64),pkts_dropped:null(uint64),pkts_link:null(uint64),pkt_lag:null(duration),events_proc:403(uint64),events_queued:398(uint64),active_tcp_conns:0(uint64),active_udp_conns:0(uint64),active_icmp_conns:0(uint64),tcp_conns:0(uint64),udp_conns:0(uint64),icmp_conns:0(uint64),timers:1496(uint64),active_timers:41(uint64),files:0(uint64),active_files:0(uint64),dns_requests:0(uint64),active_dns_requests:0(uint64),reassem_tcp_size:0(uint64),reassem_file_size:0(uint64),reassem_frag_size:0(uint64),reassem_unknown_size:0(uint64)}
{_path:"stats",ts:2023-11-25T14:57:07.891274Z,peer:"zeek",mem:65(uint64),pkts_proc:1121(uint64),bytes_recv:139523(uint64),pkts_dropped:null(uint64),pkts_link:null(uint64),pkt_lag:null(duration),events_proc:101(uint64),events_queued:106(uint64),active_tcp_conns:0(uint64),active_udp_conns:0(uint64),active_icmp_conns:0(uint64),tcp_conns:0(uint64),udp_conns:0(uint64),icmp_conns:0(uint64),timers:506(uint64),active_timers:0(uint64),files:0(uint64),active_files:0(uint64),dns_requests:0(uint64),active_dns_requests:0(uint64),reassem_tcp_size:0(uint64),reassem_file_size:0(uint64),reassem_frag_size:0(uint64),reassem_unknown_size:0(uint64)}
{_path:"capture_loss",ts:2023-11-25T14:57:07.891274Z,ts_delta:6m39.564861s,peer:"zeek",gaps:0(uint64),acks:0(uint64),percent_lost:0.}
{"type":"status","ts":{"sec":1702056548,"ns":802386000},"pcap_read_size":12437436,"pcap_total_size":65536,"values_written":7}

$ echo $?
0

$ ls -l zeek-wd/
total 24
-rw-r--r--  1 phil  staff  278 Dec  8 09:29 capture_loss.log
-rw-r--r--  1 phil  staff  884 Dec  8 09:29 stats.log

Likewise with this Suricata-only config:

$ cat suricata-only.yml 
analyzers:
  - cmd: /Users/phil/work/brimcap/build/dist/suricata/suricatarunner
    name: suricata
    workdir: /Users/phil/work/brimcap/suricata-wd

I see:

$ cat ~/Desktop/VanSpy.pcapng | build/dist/brimcap analyze -config suricata-only.yml -json -
{"type":"status","ts":{"sec":1702056609,"ns":972225000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056610,"ns":972305000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056611,"ns":972323000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056612,"ns":972316000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056613,"ns":972325000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056614,"ns":972342000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056615,"ns":972356000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056616,"ns":972410000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056617,"ns":972398000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056618,"ns":972433000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056619,"ns":972447000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056620,"ns":972455000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056621,"ns":972478000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056622,"ns":972486000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056623,"ns":971913000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":0}
{"type":"status","ts":{"sec":1702056624,"ns":546468000},"pcap_read_size":131072,"pcap_total_size":65536,"values_written":0}

$ echo $?
0

$ ls -l suricata-wd/
total 0
-rw-r--r--  1 phil  staff  0 Dec  8 09:30 eve.json

After clearing out the *-wd directories, I re-run with this combined config, which should be equivalent to the default Brimcap behavior but with the benefit of those debug directories.

$ cat combined.yml 
analyzers:
  - cmd: /Users/phil/work/brimcap/build/dist/zeek/zeekrunner
    name: zeek
    workdir: /Users/phil/work/brimcap/zeek-wd
  - cmd: /Users/phil/work/brimcap/build/dist/suricata/suricatarunner
    name: suricata
    workdir: /Users/phil/work/brimcap/suricata-wd

$ cat ~/Desktop/VanSpy.pcapng | build/dist/brimcap analyze -config combined.yml -json -
{_path:"stats",ts:2023-11-25T14:50:28.326413Z,peer:"zeek",mem:65(uint64),pkts_proc:1(uint64),bytes_recv:122(uint64),pkts_dropped:null(uint64),pkts_link:null(uint64),pkt_lag:null(duration),events_proc:423(uint64),events_queued:12(uint64),active_tcp_conns:0(uint64),active_udp_conns:0(uint64),active_icmp_conns:0(uint64),tcp_conns:0(uint64),udp_conns:0(uint64),icmp_conns:0(uint64),timers:39(uint64),active_timers:35(uint64),files:0(uint64),active_files:0(uint64),dns_requests:0(uint64),active_dns_requests:0(uint64),reassem_tcp_size:0(uint64),reassem_file_size:0(uint64),reassem_frag_size:0(uint64),reassem_unknown_size:0(uint64)}
{_path:"weird",ts:2023-11-25T14:50:28.326413Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"unknown_packet_type",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:36.923448Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"truncated_link_header",addl:null(string),notice:false,peer:"zeek"}
{_path:"weird",ts:2023-11-25T14:50:44.339188Z,uid:null(string),id:{orig_h:null(ip),orig_p:null(port=uint16),resp_h:null(ip),resp_p:null(port)},name:"non_ip_packet_in_ieee802_11",addl:null(string),notice:false,peer:"zeek"}
{"type":"status","ts":{"sec":1702056690,"ns":527194000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056691,"ns":527225000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056692,"ns":526535000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056693,"ns":527291000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056694,"ns":526558000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056695,"ns":526531000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056696,"ns":526443000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056697,"ns":526504000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056698,"ns":527238000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056699,"ns":527405000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056700,"ns":527439000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056701,"ns":527193000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056702,"ns":527439000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056703,"ns":527470000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056704,"ns":527469000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"status","ts":{"sec":1702056705,"ns":527484000},"pcap_read_size":65536,"pcap_total_size":65536,"values_written":4}
{"type":"error","error":"zeekrunner exited with code 1\nstdout:\nWARNING: No Site::local_nets have been defined.  It's usually a good idea to define your local networks.\nstderr:\n1700923856.986817 fatal error in \u003ccommand line\u003e, line 3: failed to read a packet from -: truncated pcapng dump file; tried to read 48 bytes, only got 16\n"}

$ echo $?
1

$ ls -l zeek-wd/ suricata-wd/
suricata-wd/:
total 0
-rw-r--r--  1 phil  staff  0 Dec  8 09:31 eve.json

zeek-wd/:
total 16
-rw-r--r--  1 phil  staff  680 Dec  8 09:31 stats.log
-rw-r--r--  1 phil  staff  430 Dec  8 09:31 weird.log

To summarize what I just saw:

  • When run individually, the Zeek and Suricata runners each appear to run cleanly and exit 0.
  • When run together, the analyze fails with a non-zero exit code, and Zeek also fails to output the capture_loss.log file.

nwt commented Dec 8, 2023

What's happening here is that Suricata is exiting cleanly (i.e., with status 0) without reading its standard input to EOF. When Suricata exits, io.Copy in the goroutine created by analyzer.runProcesses returns because writes to the pipe connected to its standard input start failing. The goroutine then closes Zeek's standard input, resulting in this error.
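
To make that concrete, here's a minimal stand-alone Go sketch (not brimcap's actual code; cat and head -c stand in for zeekrunner and suricatarunner) that reproduces the same pattern: one consumer exits without reading to EOF, the shared io.Copy fails with EPIPE, and the other consumer's stdin is closed before it has seen the whole stream:

// epipe_demo.go: stand-alone sketch of the failure mode described above.
// "cat" plays the role of zeekrunner (reads all of its input); "head -c 8"
// plays suricatarunner (exits early without reading its input to EOF).
package main

import (
	"crypto/rand"
	"fmt"
	"io"
	"os/exec"
)

func main() {
	zeek := exec.Command("cat")
	zeekIn, _ := zeek.StdinPipe() // error handling elided for brevity

	suricata := exec.Command("head", "-c", "8")
	suricataIn, _ := suricata.StdinPipe()

	_ = zeek.Start()
	_ = suricata.Start()

	// One copy loop fans the "pcap" stream out to both analyzers. Once head
	// exits, writes to its pipe fail with EPIPE and the whole copy aborts...
	pcap := io.LimitReader(rand.Reader, 8<<20) // stand-in for the streamed pcap
	_, err := io.Copy(io.MultiWriter(zeekIn, suricataIn), pcap)
	fmt.Println("copy error:", err) // typically "broken pipe" (EPIPE)

	// ...so both stdins get closed early and the patient reader only ever
	// sees a truncated stream: the analogue of Zeek's "truncated pcapng
	// dump file" fatal error in the brimcap output above.
	zeekIn.Close()
	suricataIn.Close()
	_ = suricata.Wait()
	_ = zeek.Wait()
}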

philrz commented Dec 11, 2023

I talked to @nwt offline about his last comment above. He confirmed the fix would be non-trivial, but we know what it would take. We've got other priorities at the moment, so we're not going to get to this right away. We also agreed that the error handling on the Zui side could still stand to be improved as I noted above, so I'll let this live on as a Zui issue and spawn off a Brimcap issue when we get time to fix that side of things.

On Jan 7, 2024, philrz changed the title from "Error: write EPIPE" on attempted pcap import to "Error: write EOF" or "Error: write EPIPE" on attempted pcap import
philrz commented Jan 7, 2024

We recently had what looks like another incident of this, brought to our attention in a community Slack thread. In their own words:

Hello! Greetings of the day. After uploading my pcap file into zui it is showing me this error. Image attached. Kindly help me
[screenshot of the error pop-up]

The user shared their pcap for repro but asked that we not redistribute it due to its sensitivity. As the attached video shows, when I repro on macOS I once again see the exception as Error: write EPIPE, but I've confirmed that when I repro on Windows I get the same Error: write EOF that the user reported.

Repro.mp4

Separate from the exception, @nwt studied the nature of this pcap:

The problem with it is that it contains a link-layer protocol unsupported by the version of Zeek included with Zui v1.5.0.

/Applications/Zui.app/Contents/Resources/app.asar.unpacked/zdeps/brimcap analyze -json ~/Downloads/capture.pcap
{"type":"status","ts":{"sec":1704384096,"ns":873932000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384097,"ns":872680000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384098,"ns":872685000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384099,"ns":872735000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384100,"ns":872721000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384101,"ns":872674000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"status","ts":{"sec":1704384102,"ns":872689000},"pcap_read_size":65536,"pcap_total_size":648532,"values_written":0}
{"type":"error","error":"zeekrunner exited with code 1\nstdout: (no output)\nstderr:\nfatal error in \u003ccommand line\u003e, line 3: problem with trace file - (unknown data link type 0x114)\n"}

Link type 0x114 (276 in decimal) is LINKTYPE_LINUX_SLL2 (aka DLT_LINUX_SLL2) per https://www.tcpdump.org/linktypes.html.
Zeek 5.1 added DLT_LINUX_SLL2 support and I’ve verified that Zeek 6.0.2 can read this pcap.
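
For future triage, a quick way to check a capture's link-layer type up front is capinfos from Wireshark, or a throwaway script along these lines (a hypothetical helper, not part of Brimcap; it only understands classic pcap headers, not pcapng):

// linktype.go: hypothetical helper that prints a classic pcap file's
// link-layer type so you can tell up front whether the bundled Zeek will
// understand it.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: linktype <file.pcap>")
	}
	f, err := os.Open(os.Args[1]) // e.g. capture.pcap
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Classic pcap global header: magic(4) version(2+2) thiszone(4)
	// sigfigs(4) snaplen(4) linktype(4) = 24 bytes.
	hdr := make([]byte, 24)
	if _, err := io.ReadFull(f, hdr); err != nil {
		log.Fatal(err)
	}

	var order binary.ByteOrder = binary.LittleEndian
	switch binary.LittleEndian.Uint32(hdr[0:4]) {
	case 0xa1b2c3d4, 0xa1b23c4d: // little-endian file (usec / nsec magic)
	case 0xd4c3b2a1, 0x4d3cb2a1: // big-endian (byte-swapped) file
		order = binary.BigEndian
	default:
		log.Fatal("not a classic pcap file (maybe pcapng?)")
	}

	linkType := order.Uint32(hdr[20:24])
	fmt.Printf("link-layer type: %d (0x%x)\n", linkType, linkType)
	// 276 (0x114) is LINKTYPE_LINUX_SLL2, which needs Zeek >= 5.1.
}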

We've got work underway in https://github.com/brimdata/build-zeek to start building a newer Zeek that we can ship with Zui, so we hope to soon be at a point where the user's pcap would have loaded without error. However, since other problems of this nature are likely to come up in the future with unsupported link layer protocols, it's another reminder that we'll want to improve the error handling here.

philrz linked a pull request on Jan 23, 2024 that will close this issue
philrz commented Jan 23, 2024

The video below verifies the error handling improvements from #2955. Using the pcap data referenced in the opening of this issue, we no longer see the confusing pop-up, and instead the Zeek error that's bubbled up through Brimcap is presented and persists.

Verify.mp4

That specific error from Zeek unfortunately doesn't quite capture the "802.11 isn't supported" nature of the root issue, but that's a Zeek problem and not a Zui/Brimcap one. In the time since this issue was opened, Zui has moved to a very current Zeek artifact (based on Zeek v6.0.3), so to the degree that there's some 802.11 support in there, it remains an "exercise for the user" to pursue via a Custom Brimcap Config.

philrz commented Jan 23, 2024

Since Zui is now handling this error better as shown in the last comment above, I'm going to go ahead and close this issue. In an offline discussion with @nwt and @mattnibs it was confirmed that Brimcap could also still do some better handling of the failure condition (as described above) so a new Brimcap issue brimdata/brimcap#331 has been opened to track that.

philrz closed this as completed on Jan 23, 2024