Our current plan is to use SQLAlchemy to support multiple SQL engines, either writing SQL strings for operations or using its expression API. We would have to write custom SQL strings for non-standard operations such as reading Parquet and CSV files, pivot and unpivot, and time zone conversions.
Ibis may solve some of these problems:
read_csv
read_parquet
pivot / unpivot
time zone conversion
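For context, here is a minimal sketch of what the SQLAlchemy-only approach looks like for one of these operations, assuming a DuckDB engine exposed through the duckdb-engine dialect; the database URL, file path, and table name are illustrative, not part of our codebase.

```python
# Minimal sketch: hand-written, engine-specific SQL passed through SQLAlchemy.
# Assumes the duckdb-engine dialect is installed; names and paths are placeholders.
import sqlalchemy as sa

engine = sa.create_engine("duckdb:///analysis.db")

with engine.begin() as conn:
    # read_parquet() is DuckDB-specific SQL; other engines would need their own variant.
    conn.execute(
        sa.text(
            "CREATE TABLE generation AS "
            "SELECT * FROM read_parquet('data/generation.parquet')"
        )
    )
```

The same pattern would apply to CSV ingestion, pivot/unpivot, and time zone conversion: each becomes a dialect-specific SQL string we maintain per engine.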
Pro: Ibis can switch between backends easily, which makes it straightforward to scale up from DuckDB to Spark as needed, and it lets us use a dataframe API for data transformations (pivot/unpivot) and SQL for aggregations (see the sketch after this comparison).
Cons: We would need to learn a new API, and its expression language is not especially attractive or intuitive. It also lacks time zone conversion support.
Our non-standard operations and data transformations are limited to a few types, which we can support without Ibis. The cons outweigh the pros.
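For reference, a rough sketch of the Ibis workflow being weighed here, assuming a recent Ibis release; the file path and column names are made up for illustration, and the Spark switch is shown only as a comment.

```python
# Sketch of the Ibis approach: backend choice is a one-line change, and
# pivot/unpivot uses the dataframe API instead of hand-written SQL.
# Column names and the file path are hypothetical.
import ibis
import ibis.selectors as s

con = ibis.duckdb.connect()  # scaling up would be e.g. ibis.pyspark.connect(session)
gen = con.read_parquet("data/generation.parquet")

# Dataframe-style unpivot: wide per-fuel columns into long form.
long = gen.pivot_longer(s.startswith("mwh_"), names_to="fuel", values_to="mwh")

# Aggregation is compiled down to the backend's own SQL dialect.
totals = long.group_by("fuel").aggregate(total_mwh=long.mwh.sum())
print(ibis.to_sql(totals))  # inspect the generated SQL
```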
Unless I missed something, Ibis does not support a workflow where changes to a database can be rolled back on error. We need this functionality, and SQLAlchemy provides it.
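This is the rollback behaviour SQLAlchemy gives us: an `engine.begin()` block commits on success and rolls back automatically if any statement inside it raises. The table and column names below are placeholders.

```python
# Transactional block: commits if the body completes, rolls back on any exception.
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///example.db")

with engine.begin() as conn:
    conn.execute(sa.text("UPDATE plants SET capacity_mw = capacity_mw * 1.1"))
    conn.execute(sa.text("INSERT INTO audit_log (note) VALUES ('scaled capacity')"))
    # If either statement fails, neither change is persisted.
```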