Commit

Merge pull request #6 from octoenergy/revert-5-rebase-to-302
Revert "Rebase to 302"
matt-fleming authored Feb 9, 2024
2 parents 2091240 + 3a0886a commit f610a66
Showing 64 changed files with 9,404 additions and 21,249 deletions.
44 changes: 2 additions & 42 deletions CHANGELOG.md
@@ -1,64 +1,24 @@
# Release History

-## 3.0.2 (2024-01-25)
-
-- SQLAlchemy dialect now supports table and column comments (thanks @cbornet!)
-- Fix: SQLAlchemy dialect now correctly reflects TINYINT types (thanks @TimTheinAtTabs!)
-- Fix: `server_hostname` URIs that included `https://` would raise an exception
-- Other: pinned to `pandas<=2.1` and `urllib3>=1.26` to avoid runtime errors in dbt-databricks (#330)
-
-## 3.0.1 (2023-12-01)
-
-- Other: updated docstring comment about default parameterization approach (#287)
-- Other: added tests for reading complex types and revised docstrings and type hints (#293)
-- Fix: SQLAlchemy dialect raised DeprecationWarning due to `dbapi` classmethod (#294)
-- Fix: SQLAlchemy dialect could not reflect TIMESTAMP_NTZ columns (#296)
-
-## 3.0.0 (2023-11-17)
-
-- Remove support for Python 3.7
-- Add support for native parameterized SQL queries. Requires DBR 14.2 and above. See docs/parameters.md for more info. (A usage sketch follows this changelog excerpt.)
-- Completely rewritten SQLAlchemy dialect
-  - Adds support for SQLAlchemy >= 2.0 and drops support for SQLAlchemy 1.x
-  - Full e2e test coverage of all supported features
-  - Detailed usage notes in `README.sqlalchemy.md`
-  - Adds support for:
-    - New types: `TIME`, `TIMESTAMP`, `TIMESTAMP_NTZ`, `TINYINT`
-    - `Numeric` type scale and precision, like `Numeric(10,2)`
-    - Reading and writing `PrimaryKeyConstraint` and `ForeignKeyConstraint`
-    - Reading and writing composite keys
-    - Reading and writing from views
-    - Writing `Identity` to tables (i.e. autoincrementing primary keys)
-    - `LIMIT` and `OFFSET` for paging through results
-    - Caching metadata calls
-- Enable cloud fetch by default. To disable, set `use_cloud_fetch=False` when building `databricks.sql.client`.
-- Add integration tests for Databricks UC Volumes ingestion queries
-- Retries:
-  - Add `_retry_max_redirects` config
-  - Set `_enable_v3_retries=True` and warn if users override it
-- Security: bump minimum pyarrow version to 14.0.1 (CVE-2023-47248)
+## 2.9.4 (Unreleased)

## 2.9.3 (2023-08-24)

- Fix: Connections failed when urllib3~=1.0.0 is installed (#206)

## 2.9.2 (2023-08-17)

__Note: this release was yanked from PyPI on 13 September 2023 due to compatibility issues with environments where `urllib3<=2.0.0` was installed. The changes listed here are incorporated into version 2.9.3 and greater.__

- Other: Add `examples/v3_retries_query_execute.py` (#199)
- Other: suppress log message when `_enable_v3_retries` is not `True` (#199)
- Other: make this connector backwards compatible with `urllib3>=1.0.0` (#197)

## 2.9.1 (2023-08-11)

__Note: this release was yanked from PyPI on 13 September 2023 due to compatibility issues with environments where `urllib3<=2.0.0` was installed.__

- Other: Explicitly pin urllib3 to ^2.0.0 (#191)

## 2.9.0 (2023-08-10)

-- Replace retry handling with DatabricksRetryPolicy. This is disabled by default. To enable, set `_enable_v3_retries=True` when creating `databricks.sql.client` (#182); a connection sketch follows this list
+- Replace retry handling with DatabricksRetryPolicy. This is disabled by default. To enable, set `enable_v3_retries=True` when creating `databricks.sql.client` (#182); a connection sketch follows this list
- Other: Fix typo in README quick start example (#186)
- Other: Add autospec to Client mocks and tidy up `make_request` (#188)
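As flagged in the 2.9.0 entry above, opting in to the new retry policy happens at connect time. Here is a minimal sketch, assuming placeholder credentials (not real values); `_retry_max_redirects` is the 3.0.0-era knob mentioned earlier and may not exist on 2.9.x:

```python
from databricks import sql

# Sketch only: hostname, HTTP path, and token are placeholders.
connection = sql.connect(
    server_hostname="****.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/****",
    access_token="dapi****",
    _enable_v3_retries=True,   # opt in to DatabricksRetryPolicy (#182)
    _retry_max_redirects=2,    # 3.0.0+ only; caps redirects per request
)
```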

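And, as promised in the 3.0.0 notes above, a minimal sketch of a native parameterized query with cloud fetch disabled. The credentials are placeholders again, and native parameters require DBR 14.2 or above:

```python
from databricks import sql

# Placeholders, not real credentials.
connection = sql.connect(
    server_hostname="****.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/****",
    access_token="dapi****",
    use_cloud_fetch=False,  # cloud fetch is on by default as of 3.0.0
)

cursor = connection.cursor()
# Native parameterized query: :param is bound from the dict argument.
cursor.execute("SELECT :param `p`, * FROM RANGE(10)", {"param": "foo"})
print(cursor.fetchall())
```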
7 changes: 0 additions & 7 deletions CONTRIBUTING.md
@@ -107,8 +107,6 @@ End-to-end tests require a Databricks account. Before you can run them, you must
export host=""
export http_path=""
export access_token=""
-export catalog=""
-export schema=""
```

Or you can write these into a file called `test.env` in the root of the repository:
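For illustration, a hypothetical `test.env` with placeholder values (the variable names mirror the exports above):

```
host="****.cloud.databricks.com"
http_path="/sql/1.0/warehouses/****"
access_token="dapi****"
```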
@@ -143,11 +141,6 @@ The `PySQLLargeQueriesSuite` namespace contains long-running query tests and is
The `PySQLStagingIngestionTestSuite` namespace requires a cluster running DBR version > 12.x which supports staging ingestion commands.

The suites marked `[not documented]` require additional configuration which will be documented at a later time.
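As an illustration, running a single suite might look like the following; the test path is an assumption about the repository layout, not taken from this diff:

```bash
# Hypothetical invocation; adjust the path to the repository's test layout.
python -m pytest tests/e2e -k "PySQLStagingIngestionTestSuite"
```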

-#### SQLAlchemy dialect tests
-
-See README.tests.md for details.

### Code formatting

This project uses [Black](https://pypi.org/project/black/).
7 changes: 4 additions & 3 deletions README.md
@@ -3,15 +3,15 @@
[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)

-The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL.
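For context, a minimal sketch of creating a SQLAlchemy engine against the dialect described above; the host, token, and HTTP path in the URL are placeholders, and the `catalog`/`schema` query parameters are optional:

```python
from sqlalchemy import create_engine, text

# Placeholder host, token, and HTTP path — substitute real values.
engine = create_engine(
    "databricks://token:dapi****@****.cloud.databricks.com"
    "?http_path=/sql/1.0/warehouses/****&catalog=main&schema=default"
)

with engine.connect() as conn:
    # Simple smoke test through the dialect.
    print(conn.execute(text("SELECT 1")).scalar())
```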

This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
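A short sketch of the direct Arrow fetch path mentioned here, assuming an open `connection` as created in the quickstart below and the `fetchall_arrow()` API, which returns a `pyarrow.Table`:

```python
cursor = connection.cursor()
cursor.execute("SELECT * FROM RANGE(10)")

# Fetch the full result set as a pyarrow.Table instead of Row objects.
table = cursor.fetchall_arrow()
print(table.num_rows, table.column_names)
```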

You are welcome to file an issue here for general usage questions. You can also contact Databricks Support [here](https://help.databricks.com).

## Requirements

-Python 3.8 or above is required.
+Python 3.7 or above is required.

## Documentation

@@ -47,7 +47,8 @@ connection = sql.connect(
access_token=access_token)

cursor = connection.cursor()
-cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})
+
+cursor.execute('SELECT * FROM RANGE(10)')
result = cursor.fetchall()
for row in result:
    print(row)
255 changes: 0 additions & 255 deletions docs/parameters.md

This file was deleted.
