I am using awswrangler.athena.to_iceberg to write data from a pandas DataFrame to an Iceberg table, and it runs from a Lambda function. My question is: what permissions are required for this operation, so that I can add them to the Lambda execution role? I believe we need Glue and target S3 bucket access. Can anyone suggest the exact Glue and S3 actions to add to the IAM policy statement? Do we also have to add any Athena-related permissions?

Code snippet:

import awswrangler as wr

wr.athena.to_iceberg(
    df=df,
    database=GLUE_ICEBERG_DB,
    table=glue_table_name,
    table_location=OUTPUT_FILE_PATH,
    temp_path=TEMP_PATH,
    keep_files=False,
    index=False,
    schema_evolution=True,
    fill_missing_columns_in_df=True,
    partition_cols=["dt", "ts"],
    additional_table_properties={
        "write_target_data_file_size_bytes": "536870912",
        "write_compression": "SNAPPY",
    },
)

I don't have a specific IAM policy to share, but my suggestion would be to design it based on the actions that are carried out in the API call. You can consult the awswrangler/athena/_write_iceberg.py module to identify them and scope them to your resources. For instance, you can see from the code that an Athena StartQueryExecution action is also needed.

Alternatively, you can check the AWS CloudTrail logs after running the Lambda to identify all the API calls that were made and build your policy from that information, as in the second sketch below.
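To make the first suggestion concrete, here is a rough sketch of what such a policy could look like, expressed as a Python dict. The action list and every resource name in it (account ID, workgroup, bucket, and database names) are illustrative assumptions, not a verified minimal set; check them against the calls in _write_iceberg.py, and note that your Athena query results location (and any KMS keys) may need additional statements.

import json

# Sketch only: the account ID, workgroup, bucket, and database names below are
# placeholders -- substitute your own and trim any actions you don't need.
lambda_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AthenaQueries",
            "Effect": "Allow",
            "Action": [
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:StopQueryExecution",
            ],
            "Resource": "arn:aws:athena:*:111122223333:workgroup/primary",
        },
        {
            "Sid": "GlueCatalog",
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabase",
                "glue:GetTable",
                "glue:CreateTable",
                "glue:UpdateTable",
            ],
            "Resource": [
                "arn:aws:glue:*:111122223333:catalog",
                "arn:aws:glue:*:111122223333:database/my_iceberg_db",
                "arn:aws:glue:*:111122223333:table/my_iceberg_db/*",
            ],
        },
        {
            "Sid": "TableAndTempData",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
            ],
            "Resource": [
                "arn:aws:s3:::my-data-bucket",
                "arn:aws:s3:::my-data-bucket/*",
            ],
        },
    ],
}

print(json.dumps(lambda_policy, indent=2))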
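For the CloudTrail route, a small boto3 sketch like the one below can list the distinct (event source, event name) pairs recorded for the Lambda's session; the FUNCTION_NAME value is a hypothetical placeholder (CloudTrail usually records a Lambda's assumed-role session name as the function name). Keep in mind that lookup_events only returns management events, so S3 object-level calls are data events and will only show up if data event logging is enabled on the trail.

from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical placeholder -- the Lambda function whose API calls we want to audit.
FUNCTION_NAME = "my-iceberg-writer"

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Collect the distinct service/action pairs the function triggered.
seen = set()
for page in cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": FUNCTION_NAME}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        seen.add((event["EventSource"], event["EventName"]))

# Each pair maps to an IAM action, e.g.
# athena.amazonaws.com / StartQueryExecution -> athena:StartQueryExecution.
for source, name in sorted(seen):
    print(f"{source}: {name}")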