Releases: apache/druid
Druid 31.0.0
Apache Druid 31.0.0 contains over 589 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 64 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes and incompatible changes before you upgrade to Druid 31.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.
# Important features, changes, and deprecations
This section contains important information about new and existing features.
# Compaction features
Druid now supports the following features:
- Compaction scheduler with greater flexibility and control over when and what to compact.
- MSQ task engine-based auto-compaction for more performant compaction jobs.
For more information, see Compaction supervisors.
Additionally, compaction tasks that take advantage of concurrent append and replace are now generally available, as part of concurrent append and replace becoming GA.
# Window functions are GA
Window functions are now generally available in Druid's native engine and in the MSQ task engine.
- You no longer need to use the query context parameter `enableWindowing` to use window functions (see the example below). #17087
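For example, a window query like the following now runs with no special context settings. This is a sketch only; the datasource and column names are hypothetical:

```sql
-- Running total of edits per channel, ordered by time.
SELECT
  channel,
  __time,
  edits,
  SUM(edits) OVER (PARTITION BY channel ORDER BY __time) AS running_edits
FROM wikipedia_edits
```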
# Concurrent append and replace GA
Concurrent append and replace is now GA. The feature safely replaces the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this feature is appending new data (such as with streaming ingestion) to an interval while compaction of that interval is already in progress.
# Delta Lake improvements
The community extension for Delta Lake has been improved to support complex types and snapshot versions.
# Iceberg improvements
The community extension for Iceberg has been improved. For more information, see Iceberg improvements.
# Projections (experimental)
Druid 31.0.0 includes experimental support for a new feature called projections. Projections are grouped pre-aggregates of a segment that Druid automatically uses at query time for any query that 'fits' the shape of the projection, reducing both computation and I/O cost by reducing the number of rows that need to be processed. Projections are contained within the segments of a datasource and do increase segment size, but they can share data, such as the value dictionaries of dictionary-encoded columns, with the columns of the base segment.
Projections currently only support JSON-based ingestion, but they can be used by queries that use the MSQ task engine or the new Dart engine. Future development will allow projections to be created as part of SQL-based ingestion.
We have many plans to continue improving this feature in coming releases, but since projections can dramatically improve query performance, we're excited to get it out there so users can begin experimenting.
For more information, see Projections.
# Low latency high complexity queries using Dart (experimental)
Distributed Asynchronous Runtime Topology (Dart) is designed to support high-complexity queries, such as large joins, high-cardinality GROUP BY, subqueries, and common table expressions, commonly found in ad-hoc data warehouse workloads. Instead of using data warehouse engines like Spark or Presto to execute high-complexity queries, you can use Dart, alleviating the need for additional infrastructure.
For more information, see Dart.
# Storage improvements
Druid 31.0.0 includes several improvements to how data is stored by Druid, including compressed columns and flexible segment sorting. For more information, see Storage improvements.
# Upgrade-related changes
See the Upgrade notes for more information about the following upgrade-related changes:
- Array ingest mode now defaults to array
- Disabled ZK-based segment loading
- Removed task action audit logging
- Removed Firehose and FirehoseFactory
- Removed the scan query legacy mode
# Deprecations
# Java 8 support
Java 8 support is now deprecated and will be removed in 32.0.0.
# Other deprecations
- Deprecated API `/lockedIntervals` is now removed #16799
- Cluster-level compaction API deprecates the task slots compaction API #16803
- The `arrayIngestMode` context parameter is deprecated and will be removed. For more information, see Array ingest mode now defaults to array.
# Functional areas and related changes
This section contains detailed release notes separated by areas.
# Web console
# Improvements to the stages display
A number of improvements have been made to the query stages visualization. These changes include:
- Added a graph visualization to illustrate the flow of query stages #17135
- Added a column for CPU counters in the query stages detail view when they are present. Also added tool tips to expose potentially hidden data like CPU time #17132
# Dart
Added the ability to detect the presence of the Dart engine and to run Dart queries from the console as well as to see currently running Dart queries.
druid-30.0.1
The Apache Druid team is proud to announce the release of Apache Druid 30.0.1.
Druid is a high performance analytics data store for event-driven data.
Apache Druid 30.0.1 contains security fixes for CVE-2024-45384, CVE-2024-45537.
The release also contains minor doc and task monitor fixes.
Source and binary distributions can be downloaded from:
https://druid.apache.org/downloads.html
Full Changelog: druid-30.0.0...druid-30.0.1
A big thank you to all the contributors in this milestone release!
Druid 30.0.0
Apache Druid 30.0.0 contains over 407 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 50 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes and incompatible changes before you upgrade to Druid 30.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.
# Upcoming removals
As part of the continued improvements to Druid, we are deprecating certain features and behaviors in favor of newer iterations that offer more robust features and are more aligned with standard ANSI SQL. Many of these new features have been the default for new deployments for several releases.
The following features are deprecated, and we currently plan to remove support in Druid 32.0.0:
- Non-SQL compliant null handling: By default, Druid now differentiates between an empty string and a record with no data, as well as between an empty numerical record and `0`. For more information, see NULL values. For a tutorial on the SQL-compliant logic, see the Null handling tutorial.
- Non-strict Boolean handling: Druid now strictly uses `1` (true) or `0` (false). Previously, true and false could be represented either as `true` and `false` or as `1` and `0`, respectively. In addition, Druid now returns a null value for Boolean comparisons like `True && NULL`. For more information, see Boolean logic. For examples of filters that use the SQL-compliant logic, see Query filters.
- Two-value logic: By default, Druid now uses three-valued logic for both ingestion and querying (see the example after this list). This primarily affects filters using logical NOT operations on columns with NULL values. For more information, see Boolean logic. For examples of filters that use the SQL-compliant logic, see Query filters.
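To illustrate the three-valued logic change with a hypothetical datasource and column, the query below no longer counts rows where `countryName` is NULL, because under SQL-compliant logic `NULL = 'France'` evaluates to unknown rather than false:

```sql
-- Rows with a NULL countryName are excluded under three-valued logic,
-- since NOT (NULL = 'France') evaluates to unknown, not true.
SELECT COUNT(*)
FROM wikipedia_edits
WHERE NOT (countryName = 'France')
```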
# Important features, changes, and deprecations
This section contains important information about new and existing features.
# Concurrent append and replace improvements
Streaming ingestion supervisors now support concurrent append, that is, streaming tasks can run concurrently with a replace task (compaction or re-indexing) if it also happens to be using concurrent locks. Set the context parameter `useConcurrentLocks` to `true` to enable concurrent append.
Once you update the supervisor to have `"useConcurrentLocks": true`, the transition to concurrent append happens seamlessly without causing any ingestion lag or task failures.
Druid now performs active cleanup of stale pending segments by tracking the set of tasks using such pending segments.
This allows concurrent append and replace to upgrade only a minimal set of pending segments and thus improve performance and eliminate errors.
Additionally, it helps in reducing load on the metadata store.
# Grouping on complex columns
Druid now supports grouping on complex columns and nested arrays.
This means that both native queries and the MSQ task engine can group on complex columns and nested arrays while returning results.
Additionally, the MSQ task engine can roll up and sort on the supported complex columns, such as JSON columns, during ingestion.
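As a rough sketch, assuming a datasource named `events` with a `COMPLEX<json>` column named `attributes` (both hypothetical), a query along these lines can now group directly on the complex column:

```sql
-- Group directly on a COMPLEX<json> column (hypothetical datasource and column).
SELECT attributes, COUNT(*) AS row_count
FROM events
GROUP BY attributes
```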
# Removed ZooKeeper-based segment loading
ZooKeeper-based segment loading is being removed due to known issues.
It has been deprecated for several releases.
Recent improvements to the Druid Coordinator have significantly enhanced performance with HTTP-based segment loading.
# Improved groupBy queries
Before Druid pushes realtime segments to deep storage, the segments consist of spill files.
Segment metrics such as `query/segment/time` now report on each spill file for a realtime segment, rather than for the entire segment.
This change eliminates the need to materialize results on the heap, which improves the performance of groupBy queries.
# Improved AND filter performance
Druid query processing now adaptively determines when children of AND filters should compute indexes and when to simply match rows during the scan based on selectivity of other filters.
Known as filter partitioning, it can result in dramatic performance increases, depending on the order of filters in the query.
For example, take a query like `SELECT SUM(longColumn) FROM druid.table WHERE stringColumn1 = '1000' AND stringColumn2 LIKE '%1%'`. Previously, Druid used indexes when processing filters if they were available.
That's not always ideal; imagine if `stringColumn1 = '1000'` matches 100 rows. With indexes, we have to find every value of `stringColumn2 LIKE '%1%'` that is true to compute the indexes for the filter. If `stringColumn2` has more than 100 values, that ends up being more work than simply checking for a match in those 100 remaining rows.
With the new logic, Druid now checks the selectivity of indexes as it processes each clause of the AND filter.
If it determines it would take more work to compute the index than to match the remaining rows, Druid skips computing the index.
The order in which you write filters in the WHERE clause of a query can therefore affect its performance.
More improvements are coming, but you can try out the existing improvements by reordering the filters in a query.
Put filters whose indexes are less intensive to compute, such as `IS NULL`, `=`, and comparisons (`>`, `>=`, `<`, and `<=`), near the start of AND filters so that Druid processes your queries more efficiently.
Not ordering your filters in this way won't degrade performance compared to previous releases, since the fallback behavior is what Druid did previously.
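For example, with a hypothetical datasource and columns, placing the inexpensive equality filter ahead of the more expensive LIKE filter lets Druid use the equality index first and then simply match the LIKE condition against the few remaining rows:

```sql
-- Cheap, selective equality filter first; expensive pattern filter second.
SELECT SUM(added)
FROM wikipedia_edits
WHERE countryIsoCode = 'US'
  AND "comment" LIKE '%revert%'
```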
# Centralized datasource schema (alpha)
You can now configure Druid to manage datasource schema centrally on the Coordinator.
Previously, Brokers...
druid-29.0.1
Druid 29.0.1
Apache Druid 29.0.1 is a patch release that fixes some issues in the Druid 29.0.0 release.
Bug fixes
- Added type verification for INSERT and REPLACE to validate that strings and string arrays aren't mixed #15920
- Concurrent replace now allows pending Peon segments to be upgraded using the Supervisor #15995
- Changed the `targetDataSource` attribute to return a string containing the name of the datasource. This reverts the breaking change introduced in Druid 29.0.0 for INSERT and REPLACE MSQ queries #16004 #16031
- Decreased the size of the distribution Docker image #15968
- Fixed an issue with SQL-based ingestion where string inputs, such as from CSV, TSV, or string-value fields in JSON, are ingested as null values when they are typed as LONG or BIGINT #15999
- Fixed an issue where a web console-generated Kafka supervisor spec has `flattenSpec` in the wrong location #15946
- Fixed an issue with filters on expression virtual column indexes incorrectly considering values null in some cases for expressions which translate null values into not null values #15959
- Fixed an issue where the data loader crashes if the incoming data can't be parsed #15983
- Improved DOUBLE type detection in the web console #15998
- Web console-generated queries now only set the context parameter `arrayIngestMode` to `array` when you explicitly opt in to use arrays #15927
- The web console now displays the results of an MSQ query that writes to an external destination through the `EXTERN` function #15969
Incompatible changes
Changes to `targetDataSource` in EXPLAIN queries
Druid 29.0.1 includes a breaking change that restores the behavior for `targetDataSource` to its 28.0.0 and earlier state, different from Druid 29.0.0 and only 29.0.0. In 29.0.0, `targetDataSource` returns a JSON object that includes the datasource name. In all other versions, `targetDataSource` returns a string containing the name of the datasource.
If you're upgrading from any version other than 29.0.0, there is no change in behavior.
If you are upgrading from 29.0.0, this is an incompatible change.
Dependency updates
- Updated PostgreSQL JDBC Driver version to 42.7.2 #15931
Credits
@abhishekagarwal87
@adarshsanjeev
@AmatyaAvadhanula
@clintropolis
@cryptoe
@dependabot[bot]
@ektravel
@gargvishesh
@gianm
@kgyrtkirk
@LakshSingla
@somu-imply
@techdocsmith
@vogievetsky
Druid 29.0.0
Apache Druid 29.0.0 contains over 350 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 67 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes before you upgrade to Druid 29.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.
# Important features, changes, and deprecations
This section contains important information about new and existing features.
# MSQ export statements (experimental)
Druid 29.0.0 adds experimental support for export statements to the MSQ task engine. This allows query tasks to write data to an external destination through the `EXTERN` function.
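A sketch of an export statement, assuming an S3 destination; the bucket, prefix, datasource, and column names are all hypothetical:

```sql
-- Export query results to an external S3 location as CSV.
INSERT INTO
  EXTERN(S3(bucket => 'my-bucket', prefix => 'druid-exports/wikipedia'))
AS CSV
SELECT channel, COUNT(*) AS edits
FROM wikipedia_edits
GROUP BY channel
```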
# SQL PIVOT and UNPIVOT (experimental)
Druid 29.0.0 adds experimental support for the SQL PIVOT and UNPIVOT operators.
The PIVOT operator carries out an aggregation and transforms rows into columns in the output. The following is the general syntax for the PIVOT operator:
PIVOT (aggregation_function(column_to_aggregate)
FOR column_with_values_to_pivot
IN (pivoted_column1 [, pivoted_column2 ...])
)
The UNPIVOT operator transforms existing column values into rows. The following is the general syntax for the UNPIVOT operator:
UNPIVOT (values_column
FOR names_column
IN (unpivoted_column1 [, unpivoted_column2 ... ])
)
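As a concrete sketch using hypothetical tables (a `sales` table with columns city, product, and units, and a `quarterly_sales` table with per-quarter columns), PIVOT turns product values into output columns and UNPIVOT reverses that kind of transformation:

```sql
-- PIVOT: one output column per pivoted product value (hypothetical table).
SELECT *
FROM sales
PIVOT (SUM(units) FOR product IN ('phone', 'laptop'))

-- UNPIVOT: fold the q1..q4 columns back into (quarter, units) rows.
SELECT *
FROM quarterly_sales
UNPIVOT (units FOR quarter IN (q1, q2, q3, q4))
```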
# Range support in window functions (experimental)
Window functions (experimental) now support ranges where both endpoints are unbounded or are the current row. Ranges work in strict mode, which means that Druid will fail queries that aren't supported. You can turn off strict mode for ranges by setting the context parameter `windowingStrictValidation` to `false`.
The following example shows window expressions with RANGE frame specifications:
(ORDER BY c)
(ORDER BY c RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
(ORDER BY c RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
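Put together in a full query, a RANGE frame might look like the following sketch; the datasource and column names are hypothetical:

```sql
-- Cumulative edits per channel using an explicit RANGE frame.
SELECT
  channel,
  __time,
  SUM(edits) OVER (
    PARTITION BY channel
    ORDER BY __time
    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  ) AS cumulative_edits
FROM wikipedia_edits
```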
# Improved INNER joins
Druid now supports arbitrary join conditions for INNER join. Any sub-conditions that can't be evaluated as part of the join are converted to a post-join filter. Improved join capabilities allow Druid to more effectively support applications like Tableau.
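For instance, a join condition like the following hypothetical one can now be planned: the equality part drives the join, and the inequality sub-condition becomes a post-join filter:

```sql
-- The equality drives the join; the inequality is applied as a post-join filter.
SELECT o.order_id, c.customer_name
FROM orders o
INNER JOIN customers c
  ON o.customer_id = c.customer_id
  AND o.order_total > c.credit_limit
```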
# Improved concurrent append and replace (experimental)
You no longer have to manually determine the task lock type for concurrent append and replace (experimental) with the `taskLockType` task context. Instead, Druid can now determine it automatically for you. You can use the context parameter `"useConcurrentLocks": true` for individual tasks and datasources or enable concurrent append and replace at a cluster level using `druid.indexer.task.default.context`.
# First and last aggregators for double, float, and long data types
Druid now supports first and last aggregators for the double, float, and long types in native and MSQ ingestion specs and in MSQ queries. Previously, they were only supported for native queries. For more information, see First and last aggregators.
Additionally, the following functions can now return numeric values:
- EARLIEST and EARLIEST_BY
- LATEST and LATEST_BY
You can use these functions as aggregators at ingestion time.
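For example, assuming a hypothetical `product_prices` datasource with a numeric `price` column, these functions can now return numbers directly:

```sql
-- EARLIEST/LATEST now return numeric values for numeric columns.
SELECT
  product,
  EARLIEST(price) AS first_price,
  LATEST(price)   AS latest_price
FROM product_prices
GROUP BY product
```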
# Support for logging audit events
Added support for logging audit events and improved coverage of audited REST API endpoints.
To enable logging audit events, set the config `druid.audit.manager.type` to `log` in both the Coordinator and Overlord or in `common.runtime.properties`. When you set `druid.audit.manager.type` to `sql`, audit events are persisted to the metadata store.
In both cases, Druid audits the following events:
- Coordinator
- Update load rules
- Update lookups
- Update coordinator dynamic config
- Update auto-compaction config
- Overlord
- Submit a task
- Create/update a supervisor
- Update worker config
- Basic security extension
- Create user
- Delete user
- Update user credentials
- Create role
- Delete role
- Assign role to user
- Set role permissions
Also fixed an issue with the basic auth integration test by not persisting logs to the database.
# Enabled empty ingest queries
The MSQ task engine now allows empty ingest queries by default. Previously, ingest queries that produced no data would fail with the `InsertCannotBeEmpty` MSQ fault.
For more information, see Empty ingest queries in the upgrade notes.
In the web console, you can use a toggle to control whether an ingestion fails if the ingestion query produces no data.
# MSQ support for Google Cloud Storage
The MSQ task engine now supports Google Cloud Storage (GCS). You can use durable storage with GCS. See Durable storage configurations for more information.
# Experimental extensions
Druid 29.0.0 adds the following extensions.
# DDSketch
A new DDSketch extension is available as a community contribution. The DDSketch extension (`druid-ddsketch`) provides support for approximate quantile queries using the DDSketch library.
# Spectator histogram
A new histogram extension is available as a community contribution. The Spectator-based histogram extension (`druid-spectator-histogram`) provides approximate histogram aggregators and percentile post-aggregators based on Spectator fixed-bucket histograms.
# Delta Lake
A new Delta Lake extension is available as a community contribution. The Delta Lake extension...
Druid 28.0.1
Description
Apache Druid 28.0.1 is a patch release that fixes some issues in the 28.0.0 release. See the complete set of changes for additional details.
# Notable Bug fixes
- #15405 Makes the `start-druid` script more robust
- #15402 Fixes the query caching bug for groupBy queries with multiple post-aggregation metrics
- #15430 Fixes the failure of tasks during an upgrade due to the addition of the new task action `RetrieveSegmentsToReplaceAction`, which would not be available on the Overlord at the time of a rolling upgrade
- #15500 Fixes a bug with `NullFilter`, which is commonly used with the now-default SQL-compatible mode
# Credits
Thanks to everyone who contributed to this release!
Druid 28.0.0
Apache Druid 28.0.0 contains over 420 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 57 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes and incompatible changes before you upgrade to Druid 28.0.0.
# Important features, changes, and deprecations
In Druid 28.0.0, we have made substantial improvements to querying to make the system more ANSI SQL compatible. This includes changes in handling NULL and boolean values as well as boolean logic. At the same time, the Apache Calcite library has been upgraded to the latest version. While we have documented known query behavior changes, please read the upgrade notes section carefully. Test your application before rolling out to broad production scenarios while closely monitoring the query status.
# SQL compatibility
Druid continues to make SQL query execution more consistent with how standard SQL behaves. However, there are feature flags available to restore the old behavior if needed.
# Three-valued logic
Druid native filters now observe SQL three-valued logic (`true`, `false`, or `unknown`) instead of Druid's classic two-state logic by default, when the following default settings apply:
druid.generic.useThreeValueLogicForNativeFilters = true
druid.expressions.useStrictBooleans = true
druid.generic.useDefaultValueForNull = false
# Strict booleans
`druid.expressions.useStrictBooleans` is now enabled by default.
Druid now handles booleans strictly using `1` (true) or `0` (false).
Previously, true and false could be represented either as `true` and `false` or as `1` and `0`, respectively.
In addition, Druid now returns a null value for Boolean comparisons like `True && NULL`.
If you don't explicitly configure this property in `runtime.properties`, clusters now use LONG types for any ingested boolean values and in the output of boolean functions for transformations and query time operations.
For more information, see SQL compatibility in the upgrade notes.
# NULL handling
`druid.generic.useDefaultValueForNull` is now disabled by default.
Druid now differentiates between empty records and null records.
Previously, Druid might treat empty records as empty or null.
For more information, see SQL compatibility in the upgrade notes.
# SQL planner improvements
Druid uses Apache Calcite for SQL planning and optimization. Starting in Druid 28.0.0, the Calcite version has been upgraded from 1.21 to 1.35. This upgrade brings in many bug fixes in SQL planning from Calcite.
# Dynamic parameters
As part of the Calcite upgrade, the behavior of type inference for dynamic parameters has changed. To avoid any type inference issues, explicitly `CAST` all dynamic parameters as a specific data type in SQL queries. For example, use:
SELECT (1 * CAST (? as DOUBLE))/2 as tmp
Do not use:
SELECT (1 * ?)/2 as tmp
# Async query and query from deep storage
Query from deep storage is no longer an experimental feature. When you query from deep storage, more data is available for queries without having to scale your Historical services to accommodate more data. To benefit from the space saving that query from deep storage offers, configure your load rules to unload data from your Historical services.
# Support for multiple result formats
Query from deep storage now supports multiple result formats.
Previously, the `/druid/v2/sql/statements/` endpoint only supported results in the `object` format. Now, results can be written in any format specified in the `resultFormat` parameter.
For more information on result parameters supported by the Druid SQL API, see Responses.
# Broadened access for queries from deep storage
Users with the `STATE` permission can interact with status APIs for queries from deep storage. Previously, only the user who submitted the query could use those APIs. This enables the web console to monitor the running status of the queries. Users with the `STATE` permission can access the query results.
# MSQ queries for realtime tasks
The MSQ task engine can now include realtime segments in query results. To do this, use the `includeSegmentSource` context parameter and set it to `REALTIME`.
# MSQ support for UNION ALL queries
You can now use the MSQ task engine to run UNION ALL queries with `UnionDataSource`.
# Ingest from multiple Kafka topics to a single datasource
You can now ingest streaming data from multiple Kafka topics to a datasource using a single supervisor.
You configure the topics for the supervisor spec using a regex pattern as the value for `topicPattern` in the IO config. If you add new topics to Kafka that match the regex, Druid automatically starts ingesting from those new topics.
If you enable multi-topic ingestion for a datasource, downgrading will cause the Supervisor to fail.
For more information, see Stop supervisors that ingest from multiple Kafka topics before downgrading.
# SQL UNNEST and ingestion flattening
The UNNEST function is no longer experimental.
Druid now supports UNNEST in SQL-based batch ingestion and query from deep storage, so you can flatten arrays easily. For more information, see UNNEST and Unnest arrays within a column.
You no longer need to include the context parameter `enableUnnest: true` to use UNNEST.
# Recommended syntax for SQL UNNEST
The recommended syntax for SQL UNNEST has changed. We recommend using CROSS JOIN instead of commas for most queries to prevent issues with precedence. For example, use:
SELECT column_alias_name1 FROM datasource CROSS JOIN UNNEST(source_ex...
Druid 27.0.0
Apache Druid 27.0.0 contains over 316 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 50 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes and incompatible changes before you upgrade to Druid 27.0.0.
# Highlights
# New Explore view in the web console (experimental)
The Explore view is a simple, stateless, SQL-backed data exploration view in the web console. It lets users explore data in Druid with point-and-click interaction and visualizations (instead of writing SQL and looking at a table). This can provide faster time-to-value for a user new to Druid and can allow a Druid veteran to quickly chart some data that they care about.
The Explore view is accessible from the More (...) menu in the header:
# Query from deep storage (experimental)
Druid now supports querying segments that are stored only in deep storage. When you query from deep storage, more data is available for queries without you necessarily having to scale your Historical processes to accommodate it. To take advantage of the potential storage savings, make sure you configure your load rules to not load all your segments onto Historical processes.
Note that at least one segment of a datasource must be loaded onto a Historical process so that the Broker can plan the query. It can be any segment though.
For more information, see the following:
# Schema auto-discovery and array column types
Type-aware schema auto-discovery is now generally available. Druid can determine the schema for the data you ingest rather than you having to manually define the schema.
As part of the type-aware schema discovery improvements, array column types are now generally available. Druid can determine the column types for your schema and assign them to these array column types when you ingest data using type-aware schema auto-discovery with the `auto` column type.
For more information about this feature, see the following:
- Type-aware schema discovery.
- 26.0.0 release notes for Schema auto-discovery.
- 26.0.0 release notes for array column types.
# Smart segment loading
The Coordinator is now much more stable and user-friendly. In the new smartSegmentLoading mode, it dynamically computes values for several configs which maximize performance.
The Coordinator can now prioritize load of more recent segments and segments that are completely unavailable over load of segments that already have some replicas loaded in the cluster. It can also re-evaluate decisions taken in previous runs and cancel operations that are not needed anymore. Moreover, move operations started by segment balancing do not compete with the load of unavailable segments, thus reducing the reaction time for changes in the cluster and speeding up segment assignment decisions.
Additionally, leadership changes have less impact now, and the Coordinator doesn't get stuck even if re-election happens while a Coordinator run is in progress.
Lastly, the `cost` balancer strategy performs much better now and is capable of moving more segments in a single Coordinator run. These improvements were made by borrowing ideas from the `cachingCost` strategy. We recommend using `cost` instead of `cachingCost` since `cachingCost` is now deprecated.
For more information, see the following:
- Upgrade note for config changes related to smart segment loading
- New coordinator metrics
- Smart segment loading documentation
# New query filters
Druid now supports the following filters:
- Equality: Use in place of the selector filter. It never matches null values.
- Null: Match null values. Use in place of the selector filter.
- Range: Filter on ranges of dimension values. Use in place of the bound filter. It never matches null values.
Note that Druid's SQL planner uses these new filters in place of their older counterparts by default whenever `druid.generic.useDefaultValueForNull=false` or if `sqlUseBoundAndSelectors` is set to false on the SQL query context.
You can use these filters for filtering equality and ranges on ARRAY columns, not only on strings as with the previous selector and bound filters.
For more information, see Query filters.
# Guardrail for subquery results
Users can now add a guardrail to prevent a subquery's results from exceeding a set number of bytes by setting `druid.server.http.maxSubqueryBytes` in the Broker's config or `maxSubqueryBytes` in the query context. This guardrail is recommended over row-based limiting.
This feature is experimental for now and defaults back to row-based limiting in case it fails to get the accurate size of the results consumed by the query.
# Added a new OSHI system monitor
Added a new OSHI system monitor (`OshiSysMonitor`) to replace `SysMonitor`. The new monitor has wider support for different machine architectures, including ARM instances. We recommend switching to the new monitor. `SysMonitor` is now deprecated and will be removed in future releases.
# Java 17 support
Druid now fully supports Java 17.
# Hadoop 2 deprecated
Support for Hadoop 2 is now deprecated. It will be removed in a future release.
For more information, see the upgrade notes.
# Additional features and improvements
# SQL-based ingestion
# Improved query planning behavior
Druid now fails query planning if a CLUSTERED BY column contains descending order.
Previously, queries would successfully plan if any CLUSTERED BY columns contained descending order.
The MSQ fault `InsertCannotOrderByDescending` is deprecated. An INSERT or REPLACE query containing a CLUSTERED BY expression cannot be in descending order. Druid's segment generation code only supports ascending order. Instead of the fault, Druid now throws a query `ValidationException`.
# Improved segment sizes
The default `clusterStatisticsMergeMode` is now `S...
Druid 26.0.0
Apache Druid 26.0.0 contains over 390 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 65 contributors.
See the complete set of changes for additional details.
Review the upgrade notes and incompatible changes before you upgrade to Druid 26.0.0.
# Highlights
# Auto type column schema (experimental)
A new "auto" type column schema and indexer has been added to native ingestion as the next logical iteration of the nested column functionality. This automatic type column indexer produces the most appropriate column for the given inputs: either `STRING`, `ARRAY<STRING>`, `LONG`, `ARRAY<LONG>`, `DOUBLE`, `ARRAY<DOUBLE>`, or `COMPLEX<json>` columns, all sharing a common 'nested' format.
All columns produced by 'auto' have indexes to aid in fast filtering (unlike classic `LONG` and `DOUBLE` columns) and use cardinality-based thresholds to attempt to only utilize these indexes when it is likely to actually speed up the query (unlike classic `STRING` columns).
`COMPLEX<json>` columns produced by this 'auto' indexer store arrays of simple scalar types differently than their 'json' (v4) counterparts, storing them as ARRAY typed columns. This means that the `JSON_VALUE` function can now extract entire arrays, for example `JSON_VALUE(nested, '$.array' RETURNING BIGINT ARRAY)`. There is no change to how arrays of complex objects are stored at this time.
This improvement also adds completely new functionality to Druid: `ARRAY` typed columns, which unlike classic multi-value `STRING` columns behave with ARRAY semantics. These columns can currently only be created via the 'auto' type indexer when all values are arrays with the same type of elements.
An array data type is a data type that allows you to store multiple values in a single column of a database table. Arrays are typically used to store sets of related data that can be easily accessed and manipulated as a group.
This release adds support for storing arrays of primitive values such as `ARRAY<STRING>`, `ARRAY<LONG>`, and `ARRAY<DOUBLE>` as specialized nested columns instead of breaking them into separate element columns.
These changes affect two additional new features available in 26.0: schema auto-discovery and unnest.
# Schema auto-discovery (experimental)
We're adding schema auto-discovery with type inference to Druid. With this feature, the data type of each incoming field is detected when schema is available. For incoming data which may contain added, dropped, or changed fields, you can choose to reject the nonconforming data ("the database is always correct - rejecting bad data!"), or you can let schema auto-discovery alter the datasource to match the incoming data ("the data is always right - change the database!").
Schema auto-discovery is recommended for new use cases and ingestions. For existing use cases, be careful switching to schema auto-discovery because Druid will ingest array-like values (for example, `["tag1", "tag2"]`) as `ARRAY<STRING>` type columns instead of multi-value (MV) strings, which could cause issues in downstream apps relying on MV behavior. Hold off switching until an official migration path is available.
To use this feature, set `spec.dataSchema.dimensionsSpec.useSchemaDiscovery` to `true` in your task or supervisor spec or, if using the data loader in the console, uncheck the Explicitly define schema toggle on the Configure schema step. Druid can infer the entire schema or some of it if you explicitly list dimensions in your dimensions list.
Schema auto-discovery is available for native batch and streaming ingestion.
# UNNEST arrays (experimental)
Part of what's cool about UNNEST is how it allows a wider range of operations that weren't possible on Array data types. You can unnest arrays with either the UNNEST function (SQL) or the `unnest` datasource (native).
Unnest converts nested arrays or tables into individual rows. The UNNEST function is particularly useful when working with complex data types that contain nested arrays, such as JSON.
For example, suppose you have a table called "orders" with a column called "items" that contains an array of products for each order. You can use unnest to extract the individual products ("each_item") like in the following SQL example:
SELECT order_id, each_item FROM orders, UNNEST(items) as unnested(each_item)
This produces a result set with one row for each item in each order, with columns for the order ID and the individual item.
Note the comma after the left table/datasource (`orders` in the example). It is required.
#13268 #13943 #13934 #13922 #13892 #13576 #13554 #13085
# Sort-merge join and hash shuffle join for MSQ
We can now perform shuffle joins by setting the context parameter `sqlJoinAlgorithm` to `sortMerge` for the sort-merge algorithm, or omitting it to perform broadcast joins (the default).
Multi-stage queries can use a sort-merge join algorithm. With this algorithm, each pairwise join is planned into its own stage with two inputs. This approach is generally less performant but more scalable than broadcast.
Set the context parameter `sqlJoinAlgorithm` to `sortMerge` to use this method.
Broadcast hash joins are similar to how native join queries are executed.
# Storage improvements on dictionary compression
Switching to front-coded dictionary compression (experimental) can save up to 30% with little to no impact on query performance.
This release further improves the `frontCoded` type of `stringEncodingStrategy` on `indexSpec` with a new segment format version, which typically has faster read speeds and reduced segment size. This improvement is backwards incompatible with Druid 25.0. Added a new `formatVersion` option, which defaults to the current version `0`. Set `formatVersion` to `1` to start using the new version.
Additionally, overall storage size, particularly when using larger buckets, has been improved.
# Additional features and improvements
# MSQ task engine
# Array-valued parameters for SQL queries
Added support for array-valued parameters for SQL queries. You can now reuse the same SQL for every ingestion, only passing in a different set of input files as query parameters.
# EXTEND clause for the EXTERN functions
You can now use an EXTEND clause to provide a list of column definitions for your source data in standard SQL format.
The web console now defaults to using the EXTEND clause syntax for all queries auto-generated in the web console. This means that SQL-based ingestion statements generated by the web console in Druid 26 (such as from the SQL based data loader) will not work in earlier versions of Druid.
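As a sketch, an EXTERN call with an EXTEND column list looks like the following; the input URI and column names are hypothetical:

```sql
-- EXTERN input with the column signature supplied via EXTEND.
SELECT *
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}'
  )
) EXTEND ("timestamp" VARCHAR, page VARCHAR, added BIGINT)
```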
# MSQ fault tolerance
Added the ability for the MSQ controller task to retry worker tasks in case of failures. To enable, pass `faultTolerance: true` in the query context.
[#13353](https...
Druid 25.0.0
Apache Druid 25.0.0 contains over 300 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 51 contributors.
See the complete set of changes for additional details.
# Highlights
# MSQ task engine now production ready
The multi-stage query (MSQ) task engine used for SQL-based ingestion is now production ready. Use it for any supported workloads. For more information, see the following pages:
# Simplified Druid deployments
The new `start-druid` script greatly simplifies deploying any combination of Druid services on a single server. It comes pre-packaged with the required configs and can be used to launch a fully functional Druid cluster simply by invoking `./start-druid`. For experienced Druids, it also gives complete control over the runtime properties and JVM arguments to have a cluster that exactly fits your needs.
The `start-druid` script deprecates the existing profiles such as `start-micro-quickstart` and `start-nano-quickstart`. These profiles may be removed in future releases. For more information, see Single server deployment.
# String dictionary compression (experimental)
Added support for front coded string dictionaries for smaller string columns, leading to reduced segment sizes with only minor performance penalties for most Druid queries.
This can be enabled by setting `IndexSpec.stringDictionaryEncoding` to `{"type":"frontCoded", "bucketSize": 4}`, where `bucketSize` is any power of 2 less than or equal to 128. Setting this property instructs indexing tasks to write segments using compressed dictionaries of the specified bucket size.
Any segment written using string dictionary compression is not readable by older versions of Druid.
For more information, see Front coding.
# Kubernetes-native tasks
Druid can now use Kubernetes to launch and manage tasks, eliminating the need for middle managers.
To use this feature, enable the `druid-kubernetes-overlord-extensions` extension in the extensions load list for your Overlord process.
# Hadoop-3 compatible binary
Druid now comes packaged as a dedicated binary for Hadoop-3 users, which contains Hadoop-3 compatible jars. If you do not use Hadoop-3 with your Druid cluster, you may continue using the classic binary.
# Multi-stage query (MSQ) task engine
# MSQ enabled for Docker
MSQ task query engine is now enabled for Docker by default.
# Query history
Multi-stage queries no longer show up in the Query history dialog. They are still available in the Recent query tasks panel.
# Limit on CLUSTERED BY columns
When using the MSQ task engine to ingest data, the number of columns that can be passed in the CLUSTERED BY clause is now limited to 1500.
# Support for string dictionary compression
The MSQ task engine supports front-coding of string dictionaries for better compression. You can enable this for INSERT or REPLACE statements by setting `indexSpec` to a valid JSON string in the query context.
# Sketch merging mode
Workers can now gather key statistics, used to generate partition boundaries, either sequentially or in parallel. Set `clusterStatisticsMergeMode` to `PARALLEL`, `SEQUENTIAL`, or `AUTO` in the query context to use the corresponding sketch merging mode. For more information, see Sketch merging mode.
# Performance and operational improvements
- Error messages: For disallowed MSQ warnings of certain types, the warning is now surfaced as the error. #13198
- Secrets: For tasks containing SQL with sensitive keys, Druid now masks the keys in logs with the help of regular expressions. #13231
- Downsampling accuracy: MSQ task engine now uses the number of bytes instead of number of keys when downsampling data. #12998
- Memory usage: When determining partition boundaries, the heap footprint of internal sketches used by MSQ is now capped at 10% of available memory or 300 MB, whichever is lower. Previously, the cap was strictly 300 MB. #13274
- Task reports: Added fields `pendingTasks` and `runningTasks` to the worker report. See Query task status information for related web console changes. #13263
# Querying
# Async reads for JDBC
Prevented JDBC timeouts on long queries by returning empty batches when a batch fetch takes too long. Uses an async model to run the result fetch concurrently with JDBC requests.
# Improved algorithm to check values of an IN filter
To accommodate large value sets arising from large IN filters or from joins pushed down as IN filters, Druid now uses a sorted merge algorithm for merging the set and dictionary for larger values.
# Enhanced query context security
Added the following configuration properties that refine the query context security model controlled by `druid.auth.authorizeQueryContextParams`:
- `druid.auth.unsecuredContextKeys`: A JSON list of query context keys that do not require a security check.
- `druid.auth.securedContextKeys`: A JSON list of query context keys that do require a security check.
If both are set, `unsecuredContextKeys` acts as exceptions to `securedContextKeys`.
# HTTP response headers
The HTTP response for a SQL query now correctly sets response headers, same as a native query.
# Metrics
# New metrics
The following metrics have been newly added. For more details, see the complete list of Druid metrics.
# Batched segment allocation
These metrics pertain to batched segment allocation.
| Metric | Description | Dimensions |
|---|---|---|
| `task/action/batch/runTime` | Milliseconds taken to execute a batch of task actions. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskActionType=segmentAllocate` |
| `task/action/batch/queueTime` | Milliseconds spent by a batch of task actions in queue. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskActionType=segmentAllocate` |
| `task/action/batch/size` | Number of task actions in a batch that was executed during the emission period. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskActionType=segmentAllocate` |
| `task/action/batch/attempts` | Number of execution attempts for a single batch of task actions. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskActionType=segmentAllocate` |
| `task/action/success/count` | Number of task actions that were executed successfully during the emission period. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskId`, `taskType`, `taskActionType=segmentAllocate` |
| `task/action/failed/count` | Number of task actions that failed during the emission period. Currently only being emitted for batched `segmentAllocate` actions | `dataSource`, `taskId`, `tas... |