feat(go/adbc/snowflake): Multistatement support#13

Open
davidhcoe wants to merge 835 commits into davidhcoe:dev/snowflake-multipleresults from CurtHagenlocher:MoreResults

Conversation

@davidhcoe
Owner

Push Curt's WIP to my branch to start working on multiple results

@davidhcoe davidhcoe changed the title Push Curt's WIP to my branch feat(go/adbc/snowflake): Multistatement support Nov 12, 2024
dependabot bot and others added 29 commits July 29, 2025 13:51
… /go/adbc (apache#3209)

Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.241.0 to 0.243.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…pache#3213)

Bumps
[org.junit:junit-bom](https://github.com/junit-team/junit-framework)
from 5.13.3 to 5.13.4.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.46.1 to 1.47.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…to 2.41.0 in /java (apache#3215)

Bumps
[com.google.errorprone:error_prone_core](https://github.com/google/error-prone)
from 2.40.0 to 2.41.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…ColumnsExtendedAsync (apache#3219)

## Motivation

Sometimes Databricks DBR or a SQL warehouse fails to execute `DESC TABLE
EXTENDED` due to an internal error in data type parsing and returns SQL
state `20000`. In this case, we also want to fall back to
`HiveStatement2.GetColumnsExtendedAsync`.

## Change
Add the fallback condition check on `SqlState == 20000` in the Databricks
`GetColumnsExtendedAsync`

## Testing
- E2E test
adbc-arrow-glib depends on arrow-glib.

In general, arrow-glib frequently ships new major releases, so our
adbc-arrow-glib deb/rpm packages may soon refer to old arrow-glib
binaries. If they do, both an old arrow-glib and a new arrow-glib (via
the red-arrow gem) may be used in the same process, which causes problems
such as "File already exists in database: orc_proto.proto".

We can implement the red-arrow integration without adbc-arrow-glib via
the C data interface. So, let's avoid using adbc-arrow-glib.

Fixes apache#3178.
… to be set from TracingConnection (apache#3218)

Provides a virtual override for `GetActivitySourceTags(properties)` to
retrieve tags when creating the `ActivitySource`.
Also adds the `ActivitySourceName` property so an `ActivityListener` can
create a useful filter.
…pache#3192)

## Motivation

Databricks will eventually require that all non-inhouse OAuth tokens be
exchanged for Databricks OAuth tokens before accessing resources. This
change implements mandatory token exchange before sending Thrift
requests. This check and exchange is performed in the background for now
to reduce latency, but it will eventually need to be blocking if
non-inhouse OAuth tokens will fail to access Databricks resources in the
future.

## Key Components

1. JWT Token Decoder - Decodes JWT tokens to inspect the issuer claim
and determine if token exchange is necessary
2. MandatoryTokenExchangeDelegatingHandler - HTTP handler that
intercepts requests and performs token exchange when required
3. TokenExchangeClient - Handles the token exchange logic with the same
/oidc/v1/token endpoint as token refresh, with slightly different
parameters

## Changes

- Added new connection string parameter: IdentityFederationClientId for
service principal workload identity federation scenarios
- Implemented token exchange logic that checks JWT issuer against
workspace host
- Introduced fallback behavior to maintain backward compatibility if
token exchange fails

## Testing
`dotnet test --filter
"FullyQualifiedName~MandatoryTokenExchangeDelegatingHandlerTests"`

```
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v3.1.1+bf6400fd51 (64-bit .NET 8.0.7)
[xUnit.net 00:00:00.06]   Discovering: Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.15]   Discovered:  Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.16]   Starting:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:01.77]   Finished:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
  Apache.Arrow.Adbc.Tests.Drivers.Databricks test net8.0 succeeded (2.6s)

Test summary: total: 11, failed: 0, succeeded: 11, skipped: 0, duration: 2.6s
```

`dotnet test --filter "FullyQualifiedName~TokenExchangeClientTests"`

```
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v3.1.1+bf6400fd51 (64-bit .NET 8.0.7)
[xUnit.net 00:00:00.06]   Discovering: Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.14]   Discovered:  Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.15]   Starting:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.23]   Finished:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
  Apache.Arrow.Adbc.Tests.Drivers.Databricks test net8.0 succeeded (0.8s)

Test summary: total: 19, failed: 0, succeeded: 19, skipped: 0, duration: 0.8s
```

`dotnet test --filter "FullyQualifiedName~JwtTokenDecoderTests"`

```
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v3.1.1+bf6400fd51 (64-bit .NET 8.0.7)
[xUnit.net 00:00:00.06]   Discovering: Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.14]   Discovered:  Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.15]   Starting:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
[xUnit.net 00:00:00.19]   Finished:    Apache.Arrow.Adbc.Tests.Drivers.Databricks
  Apache.Arrow.Adbc.Tests.Drivers.Databricks test net8.0 succeeded (0.8s)

Test summary: total: 10, failed: 0, succeeded: 10, skipped: 0, duration: 0.8s
```

Also tested E2E manually with AAD tokens for Azure Databricks
workspaces, AAD tokens for AWS Databricks workspaces, and service
principal workload identity federation tokens
This adds a link to the doxygen section for AdbcDriverInitFunc so users
can find out how the driver manager infers an init function when it has
to guess at one.
Expands Arrow support to include the latest version, 56.

Since DataFusion does not support arrow 56, the arrow version in
Cargo.lock will not be updated (updating it would cause adbc_datafusion
to fail to build).
Therefore, the CI is updated to check whether tests other than
adbc_datafusion pass with the latest arrow.

Closes apache#3229.
Clarifies the relation between this document and the authoritative
definition of the ADBC API.
…lt true (apache#3232)

## Motivation

In PR apache#3171, the `RunAsync` option
in `TExecuteStatementReq` was added and exposed via the connection parameter
`adbc.databricks.enable_run_async_thrift`, but it is not enabled by
default. It is turned on by default in other Databricks drivers, so we
should turn it on by default in ADBC as well.

## Change
- Set `DatabricksConnection:_runAsyncInThrift` default value to `true`

## Test
- PBI PQTest with all the test cases
Bumps [ruby/setup-ruby](https://github.com/ruby/setup-ruby) from 1.253.0
to 1.254.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…3236)

Bumps
[google-github-actions/auth](https://github.com/google-github-actions/auth)
from 2.1.11 to 2.1.12.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [docker/login-action](https://github.com/docker/login-action) from
3.4.0 to 3.5.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
… /go/adbc (apache#3233)

Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.243.0 to 0.244.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…adbc (apache#3235)

Bumps [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) from 1.38.1
to 1.38.2.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…e validation for Spark, Impala & Hive (apache#3224)

Co-authored-by: Sudhir Emmadi <emmadisudhir@microsoft.com>
…_driver_manager package (apache#3197)

Part of apache#3106

Removes the `driver_manager` feature of adbc_core and adds a new
adbc_driver_manager package instead.

Crates that depended on the `driver_manager` feature, such as
adbc_snowflake, will need to be updated to include adbc_driver_manager
as a dependency.

---------

Co-authored-by: David Li <li.davidm96@gmail.com>
Improves the existing go pkgsite by adding a README.

This is an alternative to
apache#3199.
As per the
[docs](https://arrow.apache.org/adbc/main/format/driver_manifests.html#manifest-structure)
the Driver manifest allows overriding the `entrypoint` via the
`Driver.entrypoint` key. Rust follows this properly, but C++ checks for
a top-level key named `entrypoint` instead of following the docs. This
PR fixes this so that the C++ driver manager correctly looks for
`Driver.entrypoint`.
…d fields (apache#3240)

Co-authored-by: Xuliang (Harry) Sun <32334165+xuliangs@users.noreply.github.com>
…ge (apache#3244)

Current error messages do not contain details of what occurred, only a
message like:

`Cannot execute <ReadChunkWithRetries>b__0 after 5 tries`

This adds the Message of the last exception that occurred as well.

Co-authored-by: David Coe <>
)

Modifies the behavior of GetSearchPaths so macOS doesn't follow other
Unix-likes but instead uses the more conventional `/Library/Application
Support/ADBC`. `/etc/` isn't really a thing on macOS.

Also updates the driver manifest docs to call out this new behavior.

Closes apache#3247.
… and StatusPoller to Stop/Dispose Appropriately (apache#3217)

### Motivation
The following cases are not properly stopping or disposing the status
poller:
1.  If the DatabricksCompositeReader is explicitly disposed by the user
2. CloudFetchReader is done returning results
3. Edge case terminal operation status (timedout_state, unknown_state)

In addition:
- When `DatabricksOperationStatusPoller.Dispose()` is called, it may cancel
the GetOperationStatusRequest in the client. If the input buffer has data
and cancellation is triggered, the TCLI client is left with
unconsumed/unsent data in the buffer, breaking subsequent requests
(fixed in this PR)

### Fixes

The DatabricksOperationStatusPoller logic is now managed by
DatabricksCompositeReader (moved out of BaseDatabricksReader) so that it
handles all cases where null results (indicating completion) are
returned.

Disposing DatabricksCompositeReader now appropriately disposes the
activeReader and statusPoller


#### TODO
Follow-up PR: when a statement is disposed, it should also dispose the
reader (the poller is currently stopped when the operation handle is set to
null, but this should also happen explicitly)

Need to add some unit tests (follow-up PR:
apache#3243)
dependabot bot and others added 30 commits October 14, 2025 08:45
…1 to 13.2.1.jre11 in /java (apache#3566)

Bumps
[com.microsoft.sqlserver:mssql-jdbc](https://github.com/Microsoft/mssql-jdbc)
from 13.2.0.jre11 to 13.2.1.jre11.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…adbc (apache#3564)

Bumps [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) from 1.39.0
to 1.39.1.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [ruby/setup-ruby](https://github.com/ruby/setup-ruby) from 1.263.0
to 1.265.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [r-lib/actions](https://github.com/r-lib/actions) from 2.11.3 to
2.11.4.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
… /go/adbc (apache#3563)

Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.251.0 to 0.252.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…/go/adbc (apache#3565)

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from
1.75.1 to 1.76.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
We can't add Python 3.14 support until PyArrow releases new wheels.

Closes apache#3547.
…sage header (apache#3558)

## Rationale for this change

When the Databricks ADBC C# driver encounters HTTP errors during Thrift
operations (e.g., 401 Unauthorized, 403 Forbidden), the specific error
message from the Databricks server is lost. The server includes detailed
error information in the `x-thriftserver-error-message` HTTP response
header, but currently only generic HTTP status messages reach users.

This makes debugging authentication and authorization issues difficult,
as users cannot distinguish between different failure causes (expired
token vs. invalid token vs. insufficient permissions).

## What changes are included in this PR?

- Add `ThriftErrorMessageHandler` as a new `DelegatingHandler` that
intercepts HTTP error responses
- Extract `x-thriftserver-error-message` header and include it in
exception messages
- Integrate handler into `DatabricksConnection` HTTP handler chain as
the innermost handler
- Add comprehensive unit tests covering 11 test scenarios
- Compatible with .NET Framework 4.7.2, .NET Standard 2.0, and .NET 8.0

## Are these changes tested?

Yes. Added `ThriftErrorMessageHandlerTest.cs` with 11 unit tests
covering:
- HTTP 401/403 with Thrift error messages
- Success responses (pass through unchanged)
- Error responses without header (pass through unchanged)
- Empty header values (ignored)
- Multiple HTTP status codes (400, 401, 403, 500, 503)
- Multiple header values (joined with commas)

All tests pass:
```
Test Run Successful.
Total tests: 11
     Passed: 11
 Total time: 0.6654 Seconds
```

Build verification also passed for all target frameworks.

## Are there any user-facing changes?

Yes - users will now see detailed error messages from Databricks instead
of generic HTTP status codes:

**Before:**
```
An unexpected error occurred while opening the session. 'Response status code does not indicate success: 401 (Unauthorized).'
```

**After:**
```
An unexpected error occurred while opening the session. 'Thrift server error: Invalid personal access token (HTTP 401 Unauthorized)'
```

This is a backward-compatible enhancement - if the header is not
present, behavior is unchanged.

Closes apache#3557
…o enable retry before exception (apache#3578)

## Summary
Reordered HTTP delegating handlers in DatabricksConnection to ensure
RetryHttpHandler processes responses before ThriftErrorMessageHandler
throws exceptions. This fixes a bug where 503 Service Unavailable
responses with Retry-After headers (e.g., during cluster auto-start)
were not being retried.

## Problem
Previously, the handler chain had ThriftErrorMessageHandler as the
innermost handler:
```
ThriftErrorMessageHandler (inner) → RetryHttpHandler (outer) → Network
```

This caused ThriftErrorMessageHandler to process error responses first
and throw exceptions immediately, preventing RetryHttpHandler from
retrying 503 responses during cluster auto-start scenarios.

## Solution
Reordered the chain so RetryHttpHandler is inside
ThriftErrorMessageHandler:
```
RetryHttpHandler (inner) → ThriftErrorMessageHandler (outer) → Network
```

Now responses flow: Network → RetryHttpHandler →
ThriftErrorMessageHandler

With this order:
1. RetryHttpHandler processes 503 responses first and retries them
according to Retry-After headers
2. Only after all retries are exhausted does ThriftErrorMessageHandler
throw exceptions with Thrift error messages

## Changes
- Reordered handlers in `DatabricksConnection.CreateHttpHandler()`
- Added comprehensive documentation explaining handler chain execution
order and why it matters
- Added cross-references in `RetryHttpHandlerTest` and
`ThriftErrorMessageHandlerTest` pointing to the production code

## Test Plan
- ✅ All existing unit tests pass:
  - `ThriftErrorMessageHandlerTest`: 11/11 tests pass
  - `RetryHttpHandlerTest`: 14/14 tests pass
- The fix will be validated in E2E tests when connecting to Databricks
clusters that need auto-start

## Related Issues
Fixes cluster auto-start retry issues where 503 responses with
Retry-After headers were not being retried.

Co-authored-by: Claude <noreply@anthropic.com>
- Allow a positional driver argument.
- Allow a URI as a top-level argument, given that it is fairly common.
- Infer the driver/URI argument if a URI-like string is given as the
driver argument.

Closes apache#3517.
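The inference rule in the last bullet can be sketched as follows (a hypothetical helper, not the actual CLI code): a string containing a scheme separator is treated as the URI, anything else as the driver name.

```go
package main

import (
	"fmt"
	"strings"
)

// classifyArg decides whether a single positional argument is a URI or
// a driver name. Illustrative only; the real CLI may use stricter rules.
func classifyArg(arg string) (driver, uri string) {
	if strings.Contains(arg, "://") {
		// URI-like: treat it as the connection URI.
		return "", arg
	}
	// Otherwise assume it names a driver.
	return arg, ""
}

func main() {
	d, u := classifyArg("postgresql://localhost/db")
	fmt.Printf("%q %q\n", d, u) // "" "postgresql://localhost/db"
	d, u = classifyArg("sqlite")
	fmt.Printf("%q %q\n", d, u) // "sqlite" ""
}
```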
…errors when possible (apache#3581)

The driver currently throws an AggregateException when an error occurs
during the `connection.OpenAsync().Wait();` call. This PR attempts to
unwind the AggregateException and throw an AdbcException to conform to
the ADBC spec. It keeps the AggregateException as the InnerException of
the throw AdbcException to maintain the integrity of the stack.

Co-authored-by: David Coe <>
…repared statement operations to ensure CallOptions get set (apache#3586)

Use FlightSqlClientWithCallOptions for prepared statement operations to
ensure CallOptions get set

Fixes apache#3582.
…ivers (apache#3583)

For BigQuery and Databricks, Arrow-formatted record batches are returned
from the server in a format that's not strictly compatible with the
Arrow stream format, due to the way the schema and the array data are
split. A change was recently made to the C# Arrow library to allow
these to be deserialized independently, which means that we no longer
need to indirect through a Stream -- saving both CPU and memory and
reducing pressure on the GC.
…racing to CloudFetch pipeline (apache#3580)

## Summary

This PR implements comprehensive Activity-based distributed tracing for
the CloudFetch download pipeline in the Databricks C# driver, enabling
real-time monitoring, structured logging, and improved observability.

### Key Changes:
- Add Activity-based tracing to CloudFetchDownloader with
TraceActivityAsync
- Create child Activities per individual file download for real-time
progress visibility
- Replace all Trace.TraceInformation/Error calls with Activity.AddEvent
for structured logging
- Add Activity tags for searchable metadata (offset, URL, file sizes)
- Implement proper Activity context flow through async/await chains
- Update CloudFetchDownloadManager to pass statement for tracing context
- Fix all tests to include statement parameter in CloudFetchDownloader
constructor

### Architecture:
The implementation follows a hierarchical Activity structure:
```
Statement Activity (parent)
  ├─ DownloadFilesAsync Activity (overall batch)
  │   ├─ DownloadFile Activity (file 1) - flushes when complete
  │   ├─ DownloadFile Activity (file 2) - flushes when complete
  │   └─ ...
  └─ ReadNextRecordBatchAsync Activity (reader operations)
```

### Benefits:
- **Real-time progress monitoring**: Events flush immediately as each
file completes (not batched)
- **Better fault tolerance**: Completed downloads are logged before
process crashes
- **Improved debuggability**: Searchable Activity tags enable filtering
by offset, URL, size
- **Granular metrics**: Per-file download times, throughput, compression
ratios visible in logs
- **OpenTelemetry-compatible**: Activities follow
System.Diagnostics.Activity standard

### Events Logged:
- `cloudfetch.download_start` - File download initiated
- `cloudfetch.content_length` - Actual file size from HTTP response
- `cloudfetch.download_retry` - Retry attempt with reason
- `cloudfetch.url_refreshed_before_download` - URL refreshed proactively
- `cloudfetch.url_refreshed_after_auth_error` - URL refreshed after
401/403
- `cloudfetch.decompression_complete` - LZ4 decompression metrics
- `cloudfetch.download_complete` - Download success with throughput
- `cloudfetch.download_failed_all_retries` - Final failure after all
retries
- `cloudfetch.download_summary` - Overall batch statistics

## Test Plan

- ✅ All existing CloudFetchDownloader E2E tests pass (7 test methods)
- ✅ Build succeeds with 0 warnings
- Manual testing: Query with CloudFetch enabled and verify Activity
events in logs
- Verified Activity context flows correctly through async/await chains
- Confirmed child Activities flush independently upon completion


```
{"Status":"Ok","HasRemoteParent":false,"Kind":"Client","OperationName":"DownloadFile","Duration":"00:00:00.5952467","StartTimeUtc":"2025-10-16T15:00:48.1657713Z","Id":"00-dc2baa073e36e8feab91170cb360e2f1-b71e0296e60abf8b-01","ParentId":"00-dc2baa073e36e8feab91170cb360e2f1-06801209b222d0e9-01","RootId":"dc2baa073e36e8feab91170cb360e2f1","TraceStateString":null,"SpanId":"b71e0296e60abf8b","TraceId":"dc2baa073e36e8feab91170cb360e2f1","Recorded":true,"IsAllDataRequested":true,"ActivityTraceFlags":"Recorded","ParentSpanId":"06801209b222d0e9","IdFormat":"W3C","TagObjects":{"cloudfetch.offset":134802,"cloudfetch.sanitized_url":"https://root-benchmarking-prod-aws-us-west-2.s3.us-west-2.amazonaws.com/26Z_71145fb7-bc14-4719-91af-0cdfc92c8fc8","cloudfetch.expected_size_bytes":21839184},"Events":[{"Name":"cloudfetch.download_start","Timestamp":"2025-10-16T15:00:48.1649565+00:00","Tags":[{"Key":"offset","Value":134802},{"Key":"sanitized_url","Value":"https://root-benchmarking-prod-aws-us-west-2.s3.us-west-2.amazonaws.com/26Z_71145fb7-bc14-4719-91af-0cdfc92c8fc8"},{"Key":"expected_size_bytes","Value":21839184},{"Key":"expected_size_kb","Value":21327.328125}]},{"Name":"cloudfetch.content_length","Timestamp":"2025-10-16T15:00:48.3370864+00:00","Tags":[{"Key":"offset","Value":134802},{"Key":"sanitized_url","Value":"https://root-benchmarking-prod-aws-us-west-2.s3.us-west-2.amazonaws.com/26Z_71145fb7-bc14-4719-91af-0cdfc92c8fc8"},{"Key":"content_length_bytes","Value":6942292},{"Key":"content_length_mb","Value":6.6206855773925781}]},{"Name":"cloudfetch.decompression_complete","Timestamp":"2025-10-16T15:00:48.7599632+00:00","Tags":[{"Key":"offset","Value":134802},{"Key":"sanitized_url","Value":"https://root-benchmarking-prod-aws-us-west-2.s3.us-west-2.amazonaws.com/26Z_71145fb7-bc14-4719-91af-0cdfc92c8fc8"},{"Key":"decompression_time_ms","Value":347},{"Key":"compressed_size_bytes","Value":6942292},{"Key":"compressed_size_kb","Value":6779.58203125},{"Key":"decompressed_size_bytes","Value":21839184},{"Key":"decompressed_size_kb","Value":21327.328125},{"Key":"compression_ratio","Value":3.1458175484407742}]},{"Name":"cloudfetch.download_complete","Timestamp":"2025-10-16T15:00:48.7599632+00:00","Tags":[{"Key":"offset","Value":134802},{"Key":"sanitized_url","Value":"https://root-benchmarking-prod-aws-us-west-2.s3.us-west-2.amazonaws.com/26Z_71145fb7-bc14-4719-91af-0cdfc92c8fc8"},{"Key":"actual_size_bytes","Value":21839184},{"Key":"actual_size_kb","Value":21327.328125},{"Key":"latency_ms","Value":594},{"Key":"throughput_mbps","Value":35.063078909209281}]}],"Links":[],"Baggage":{}}
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
…he#3592)

This fix corrects calls to `TraceActivity` from async contexts to use
`TraceActivityAsync` instead.
…if TraceActivity is called from async context (apache#3600)

Throws a runtime exception if `TraceActivity` is called from an async
context.
…33.0 in /java (apache#3593)

Bumps
[com.google.protobuf:protobuf-java](https://github.com/protocolbuffers/protobuf)
from 4.32.1 to 4.33.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
… binding (apache#3601)

This PR fixes an issue in the PostgreSQL driver’s parameter binding
logic where empty strings were incorrectly treated as NULL values.

Null detection was inferred from `param_lengths[col] == 0`, so empty
strings (valid zero-length values) were misclassified as NULL.

Closes apache#3585.
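The underlying principle: NULL-ness must be an explicit validity flag, never inferred from length. A minimal Go sketch (illustrative only; the actual driver is C++ and uses different types):

```go
package main

import "fmt"

// param carries an explicit validity flag alongside the value. A
// zero-length value with valid=true is an empty string, not NULL --
// exactly the distinction the length==0 heuristic lost.
type param struct {
	valid bool   // false => SQL NULL
	value string // meaningful only when valid
}

// render shows how a bound parameter would be interpreted.
func render(p param) string {
	if !p.valid {
		return "NULL"
	}
	return fmt.Sprintf("%q", p.value)
}

func main() {
	fmt.Println(render(param{valid: false}))           // NULL
	fmt.Println(render(param{valid: true, value: ""})) // "" -- empty string, not NULL
}
```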
…exec poll interval connection param (apache#3589)

Modified the default value of asyncExecPollInterval to 100 ms for
parity with Simba ODBC.
When a connection is opened via databaseImpl, the clientCache initializes
*flightsql.Client objects, which require proper cleanup on connection
close to prevent goroutine leaks.

Previously, closing a connection released only 3 of the 6 goroutines
created per connection, leaving the 3 goroutines associated with the
cached client unmanaged. This caused goroutine leaks to accumulate over
time.

The fix ensures all goroutines are properly cleaned up when connections
are closed.