-
Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise a PR to address this issue, please do so; no need to wait for approval.
-
That's at most an interesting discussion. The increase in import time might come, for example, from longer import times for Airflow internal classes. Airflow 2.9 has significantly more features and imports more classes, which makes imports take longer. It might also be triggered by some of the imports your DAGs do, and there might be differences coming from the Composer environment dependencies. If you want to get to the bottom of it, possibly the easiest way for you to see the difference (and for anyone to be able to comment on this and help you diagnose it) is to get the two environments with your DAGs, exec into each environment, install a profiler, and generate flamegraphs of DAG parsing in both. Then it would be great if you published both flamegraphs here, or even performed the analysis yourself and saw where the differences come from. It would be really useful if you did that analysis; maybe thanks to it you can find some consistent issue that can be diagnosed and fixed, either in Airflow or in the Composer environment.
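A minimal sketch of how such a comparison could be measured inside each environment, assuming you can run Python where the DAGs are parsed; the DAG file path is a hypothetical placeholder, and a tool such as py-spy could additionally be used to turn a run like this into a flamegraph:

```python
# Sketch: time and profile DagBag parsing of a single DAG file, so the exact same
# measurement can be run in both Composer environments and the outputs compared.
import cProfile
import pstats
import time

from airflow.models.dagbag import DagBag

DAG_FILE = "/home/airflow/gcs/dags/global_dag.py"  # hypothetical path, adjust to your environment

start = time.monotonic()
profiler = cProfile.Profile()
profiler.enable()
bag = DagBag(dag_folder=DAG_FILE, include_examples=False)  # parses the file like the worker does
profiler.disable()
elapsed = time.monotonic() - start

print(f"Parsed {len(bag.dags)} DAG(s) in {elapsed:.2f}s; import errors: {bag.import_errors}")
pstats.Stats(profiler).sort_stats("cumulative").print_stats(30)
```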
-
Apache Airflow version
Other Airflow 2 version (please specify below)
If "Other Airflow 2 version" selected, which one?
2.9.3
What happened?
We're using Cloud Composer (2.9.5) and Airflow (2.9.3) on Python (3.11) and, after upgrading from Composer (2.5.4) / Airflow (2.5.3) / Python (3.8), we started to face an issue with our DagBag import process. We use Python scripts to generate DAGs. One of those scripts generates a specific type of DAG called "global"; these DAGs are made and scheduled to control/orchestrate all the other sub-DAGs using TriggerDagRunOperators.
When executing the triggering tasks in the workers, after the worker process forks, it calls self._process_file(), which starts by importing the corresponding Python file. When the forked worker process starts "Filling up the dagbag [dag_file_folder]", as the logs say, it lasts more than 60 seconds and gets terminated with a Process timeout (PID:xxxxx) error. In our configs we have dagbag_import_timeout at 60 and dag_file_processor_timeout at 300. Increasing dagbag_import_timeout to 180 has solved the problem, but we need to understand where this delay is coming from in order to find the pattern that will help us avoid these issues in the future, especially since it happens only in the newer versions mentioned above.
Note that scaling up our workers by doubling the amount of memory and CPU, or decreasing worker_concurrency, didn't have any significant impact on the process timeout issue.
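For illustration only, a simplified sketch of the kind of generated "global" DAG described above; the DAG ID, schedule, and sub-DAG list are hypothetical placeholders, not the real generated code. The timeouts mentioned are the `[core] dagbag_import_timeout` and `[core] dag_file_processor_timeout` settings.

```python
# Sketch of a generated "global" orchestration DAG that triggers the other DAGs.
# DAG IDs and schedule are placeholders for whatever the generation scripts produce.
from datetime import datetime

from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

SUB_DAG_IDS = ["sub_dag_a", "sub_dag_b", "sub_dag_c"]  # in reality, produced by the generator script

with DAG(
    dag_id="global_orchestrator",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    for sub_dag_id in SUB_DAG_IDS:
        TriggerDagRunOperator(
            task_id=f"trigger_{sub_dag_id}",
            trigger_dag_id=sub_dag_id,
            wait_for_completion=False,
        )
```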
What you think should happen instead?
If we consider that Python 3.11 is faster (check_this_article) than 3.8, imports and file processing should be more efficient and faster, not slower.
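As a rough way to check that expectation, here is a sketch that times the top-level imports a DAG file pulls in, run identically on both Python versions; the module names listed are assumptions and should be replaced with whatever the real DAG files import:

```python
# Sketch: measure cold-import cost of the DAG file's heavy dependencies, to compare
# the same list of modules between the Python 3.8 and Python 3.11 environments.
import importlib
import time

HEAVY_MODULES = ["pandas", "numpy", "google.cloud.bigquery"]  # assumed examples, adjust to your DAGs

for name in HEAVY_MODULES:
    start = time.monotonic()
    importlib.import_module(name)
    print(f"{name}: {time.monotonic() - start:.2f}s")
```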
How to reproduce
Run a DAG with a relatively large number of library dependencies, set dagbag_import_timeout just above the actual import delay in a Composer (2.5.4) / Airflow (2.5.3) / Python (3.8) environment, and then run the same DAG with the same timeout configs in Composer (2.9.5) / Airflow (2.9.3) on Python (3.11). Use a small-sized Composer cluster.
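A minimal sketch of such a reproduction DAG file with heavy top-level imports, assuming the listed third-party packages are installed in the environment; substitute whatever dependencies the real DAGs use:

```python
# Sketch: a DAG file whose top-level imports are deliberately heavy, so that DagBag
# parsing time approaches the configured dagbag_import_timeout.
from datetime import datetime

# Assumed heavy third-party imports; they exist only for their import cost.
import numpy  # noqa: F401
import pandas  # noqa: F401

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="slow_import_repro",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    EmptyOperator(task_id="noop")
```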
Operating System
Embedded in cloud composer 2.9.5
Versions of Apache Airflow Providers
No response
Deployment
Google Cloud Composer
Deployment details
No response
Anything else?
No response
Are you willing to submit PR?
Code of Conduct