[BUG] Inconsistent behavior of connection.IgnoreTransactions and AUTOCOMMIT handling #1158

@AndreiRudkouski

Description

Describe the bug
By default, autocommit is true and cannot be overridden.
To skip commits, the JDBC parameter connection.IgnoreTransactions=1 is used. In this case, the following warning appears in the logs:

WARNING: ignoreTransactions flag is set - commit is no-op (deprecated behavior).
Please remove this flag to enable transaction support.

This warning is repeated on every query execution, even though LogLevel.OFF is set.
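
A minimal repro sketch, assuming the usual Databricks JDBC URL format; the host, httpPath, and token below are placeholders, and whether the flag is set in the URL or via a Properties object is an assumption (only the parameter name connection.IgnoreTransactions=1 is taken from this report):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IgnoreTransactionsRepro {
        public static void main(String[] args) throws Exception {
            // Placeholder host/httpPath/token. The trailing connection.IgnoreTransactions=1
            // makes commit a no-op and triggers the deprecation warning on every query;
            // removing it reproduces the AUTOCOMMIT_NOT_SUPPORTED failure shown below.
            String url = "jdbc:databricks://example.cloud.databricks.com:443/default;"
                    + "transportMode=http;ssl=1;AuthMech=3;UID=token;PWD=<personal-access-token>;"
                    + "httpPath=/sql/1.0/warehouses/abc123;"
                    + "connection.IgnoreTransactions=1";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }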

However, when the connection.IgnoreTransactions parameter is removed (as suggested by the log message), query execution fails with the following exception:

com.databricks.jdbc.dbclient.impl.thrift.DatabricksThriftAccessor checkOperationStatusForErrors
SEVERE: Operation failed with error: [org.apache.hive.service.cli.HiveSQLException: Error running query: [AUTOCOMMIT_NOT_SUPPORTED] com.databricks.sql.transaction.tahoe.DeltaUnsupportedOperationException: [AUTOCOMMIT_NOT_SUPPORTED] The AUTOCOMMIT mode feature is currently under development and not yet supported on Databricks.
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:49)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:1050)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
	at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:787)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$5(SparkExecuteStatementOperation.scala:578)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at org.apache.spark.sql.execution.SQLExecution$.withRootExecution(SQLExecution.scala:869)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:578)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80)
	at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59)
	at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75)
	at com.databricks.spark.util.DatabricksTracingHelper.withAttributionContext(DatabricksSparkTracingHelper.scala:62)
	at com.databricks.spark.util.DatabricksTracingHelper.withSpanFromRequest(DatabricksSparkTracingHelper.scala:89)
	at com.databricks.spark.util.DBRTracing$.withSpanFromRequest(DBRTracing.scala:43)
	at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$15(ThriftLocalProperties.scala:238)
	at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80)
	at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59)
	at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75)
	at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:29)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:127)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:108)
	at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:29)
	at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:108)
	at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:216)
	at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)
	at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)
	at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)
	at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)
	at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withAttributionTags(SparkExecuteStatementOperation.scala:76)
	at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$12(ThriftLocalProperties.scala:233)
	at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48)
	at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties(ThriftLocalProperties.scala:229)
	at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties$(ThriftLocalProperties.scala:89)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:76)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:555)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:541)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
	at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:591)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: com.databricks.sql.transaction.tahoe.DeltaUnsupportedOperationException: [AUTOCOMMIT_NOT_SUPPORTED] The AUTOCOMMIT mode feature is currently under development and not yet supported on Databricks.
	at com.databricks.sql.transaction.tahoe.DeltaErrorsEdge.autoCommitNotSupported(DeltaErrorsEdge.scala:74)
	at com.databricks.sql.transaction.tahoe.DeltaErrorsEdge.autoCommitNotSupported$(DeltaErrorsEdge.scala:73)
	at com.databricks.sql.transaction.tahoe.DeltaErrors$.autoCommitNotSupported(DeltaErrors.scala:4111)
	at com.databricks.sql.transaction.tahoe.mst.AutoCommitContext$.checkIfSupported(AutoCommitContext.scala:77)
	at com.databricks.sql.transaction.tahoe.commands.SetAutoCommitCommand.run(SetAutoCommitCommand.scala:41)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.$anonfun$sideEffectResult$2(commands.scala:87)
	at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:195)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.$anonfun$sideEffectResult$1(commands.scala:87)
	at com.databricks.spark.util.FrameProfiler$.$anonfun$record$1(FrameProfiler.scala:114)
	at com.databricks.spark.util.FrameProfilerExporter$.maybeExportFrameProfiler(FrameProfilerExporter.scala:198)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:105)

Resulting behavior

  • With connection.IgnoreTransactions=1: queries work, but the logs are constantly polluted with deprecation warnings.
  • Without connection.IgnoreTransactions: queries fail with AUTOCOMMIT_NOT_SUPPORTED.

Expected behavior
The driver should provide a consistent way to execute queries (see the sketch after this list) without:

  • continuous warning spam, or
  • runtime failures related to unsupported AUTOCOMMIT mode.
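
For reference, a sketch of the plain java.sql contract the driver would ideally honor, with no driver-specific flags (url as in the repro above; demo_table is a hypothetical table name):

    try (Connection conn = DriverManager.getConnection(url)) {
        // JDBC default: autocommit is on. Plain queries should neither fail with
        // AUTOCOMMIT_NOT_SUPPORTED nor log deprecation warnings.
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("SELECT 1");
        }

        // Opting into explicit transactions: commit() should be a real commit,
        // not a warned no-op.
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("INSERT INTO demo_table VALUES (1)");
        }
        conn.commit();
    }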

Client Environment (please complete the following information):

  • OS: macOS
  • Java version: Java 17
  • Java vendor: GraalVM
  • Driver version: 3.0.6

Additional context
Logging with LogLevel.OFF may be related to #1117.
In that PR, the handler is not initialized when LogLevel.OFF is set. As a result, the logger falls back to the default JUL handler, which leads to unexpected log output.
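
To illustrate the mechanism, here is a sketch of suppressing those records from the client side through the JUL hierarchy; the logger name com.databricks.jdbc is inferred from the class names in the log output above, not from documentation:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public final class SilenceDriverLogs {
        // Keep a strong reference: LogManager holds loggers weakly, so an
        // unreferenced logger can be collected and its level silently reset.
        private static final Logger DRIVER_LOGGER = Logger.getLogger("com.databricks.jdbc");

        public static void install() {
            DRIVER_LOGGER.setLevel(Level.OFF);          // drop all records in this hierarchy
            DRIVER_LOGGER.setUseParentHandlers(false);  // don't forward to the root console handler
        }

        private SilenceDriverLogs() {}
    }
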
There are also unexpected logs like this:

Dec 23, 2025 5:25:29 PM com.databricks.jdbc.api.impl.ExecutionResultFactory getResultHandler
INFO: Processing result of format ARROW_BASED_SET from Thrift server

Currently, the only way to avoid these logs is to explicitly redirect logging to STDOUT and set the log level:
logpath=STDOUT
loglevel=FATAL
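
For example, passing them as connection properties (whether these keys belong in the URL or in a Properties object is an assumption; the key/value pairs themselves are from the workaround above, and the URL placeholders are as in the earlier repro):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class QuietConnection {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("logpath", "STDOUT");  // redirect driver logs to stdout
            props.setProperty("loglevel", "FATAL");  // suppress everything below FATAL
            String url = "jdbc:databricks://example.cloud.databricks.com:443/default;"
                    + "transportMode=http;ssl=1;AuthMech=3;UID=token;PWD=<personal-access-token>;"
                    + "httpPath=/sql/1.0/warehouses/abc123";
            try (Connection conn = DriverManager.getConnection(url, props)) {
                // INFO/WARNING noise from the default JUL handler no longer appears.
            }
        }
    }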
