# dbt_fivetran_log v1.8.0
[PR #130](https://github.com/fivetran/dbt_fivetran_log/pull/130) includes the following updates:

## 🚨 Breaking Changes 🚨
> ⚠️ Since the following changes result in the table format changing, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.

- For Databricks All-Purpose clusters, the `fivetran_platform__audit_table` model will now be materialized using the delta table format (previously parquet).
  - Delta tables are generally more performant than parquet and are more widely available to Databricks users. Previously, the parquet file format was causing compilation issues on customers' managed tables.
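For example, assuming a standard dbt project setup, the full refresh can be scoped to the affected model with `--select` (shown as an illustration, not a required command):

```shell
# One-time rebuild of the audit table after upgrading to v1.8.0,
# so the new delta table format takes effect cleanly
dbt run --full-refresh --select fivetran_platform__audit_table
```

Running `dbt run --full-refresh` with no selector will rebuild every model in the project, which also works but takes longer.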
## Documentation Updates
- Updated the `sync_start` and `sync_end` field descriptions for the `fivetran_platform__audit_table` to explicitly define that these fields represent only the sync start/end times for when the connector wrote new records or modified existing records in the specified table.
- Added integrity and consistency validation tests within integration tests for every end model.
- Removed duplicate Databricks dispatch instructions listed in the README.
## Under the Hood
- The `is_databricks_sql_warehouse` macro has been renamed to `is_incremental_compatible` and modified to return `true` if the Databricks runtime in use is an All-Purpose cluster (previously this macro checked whether a SQL Warehouse runtime was used) **or** if any other supported non-Databricks destination is being used.
  - This update was applied because other Databricks runtimes have been discovered (i.e. an endpoint and an external runtime) which do not support the `insert_overwrite` incremental strategy used in the `fivetran_platform__audit_table` model.
  - In addition to the above, for Databricks users the `fivetran_platform__audit_table` model will now leverage the incremental strategy only if the Databricks runtime is All-Purpose; all other Databricks runtimes will not leverage an incremental strategy.
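As a hypothetical sketch (not the package's actual source), a compatibility macro like this is typically used to gate a model's materialization in its config block; the exact config keys shown here are assumptions for illustration:

```sql
{{
    config(
        -- Hypothetical illustration: use the incremental materialization only
        -- when the runtime/destination supports it, otherwise a plain table
        materialized = 'incremental' if is_incremental_compatible() else 'table',
        incremental_strategy = 'insert_overwrite',
        -- delta (not parquet) for Databricks All-Purpose clusters per v1.8.0
        file_format = 'delta'
    )
}}
```

The key point is that the strategy decision happens at compile time via the macro, so unsupported runtimes simply fall back to a full-table build.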
# dbt_fivetran_log v1.7.3
[PR #126](https://github.com/fivetran/dbt_fivetran_log/pull/126) includes the following updates:
If you are using a Databricks destination with this package, you will need to add the following (or a variation of the following) dispatch configuration within your root `dbt_project.yml`. This is required so that the package searches for macros first in the `dbt-labs/spark_utils` package and then in the `dbt-labs/dbt_utils` package.
```yml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
## (Optional) Step 6: Orchestrate your models with Fivetran Transformations for dbt Core™