
Cherry pick Rows out in EXPLAIN ANALYZE #670

Open
wants to merge 12 commits into base: main

Conversation


@robozmey robozmey commented Oct 15, 2024

Cherry pick yezzey-gp/ygp@1d41230

Change logs

Add "Rows out" print in cdbexplain_showExecStats

set gp_enable_explain_rows_out=on;

drop table if exists tt; create table tt (a int, b int) distributed randomly;

explain (analyze,verbose) insert into tt select * from generate_series(1,1000)a,generate_series(1,1000)b;

 Insert  (cost=0.00..495560.34 rows=333334 width=8) (actual time=3.829..1148.458 rows=333741 loops=1)
   Output: generate_series_1.generate_series, generate_series.generate_series, "outer".ColRef_0002, generate_series_1.generate_series
   Executor Memory: 1kB  Segments: 3  Max: 1kB (segment 0)
   Rows out: 333333.33 rows avg x 3 workers, 333741 rows max (seg1), 333054 rows min (seg2).
   ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..479935.34 rows=333334 width=12) (actual time=3.707..761.241 rows=333741 loops=1)
         Output: generate_series_1.generate_series, generate_series.generate_series, "outer".ColRef_0002
         Rows out: 333333.33 rows avg x 3 workers, 333741 rows max (seg1), 333054 rows min (seg2).
         ->  Result  (cost=0.00..479922.82 rows=333334 width=12) (actual time=0.226..882.580 rows=1000000 loops=1)
               Output: generate_series_1.generate_series, generate_series.generate_series, 1
               Rows out: 333333.33 rows avg x 3 workers, 1000000 rows max (seg1), 0 rows min (seg0).
               ->  Result  (cost=0.00..479918.82 rows=333334 width=8) (actual time=0.225..631.202 rows=1000000 loops=1)
                     Output: generate_series_1.generate_series, generate_series.generate_series
                     One-Time Filter: (gp_execution_segment() = 1)
                     Rows out: 333333.33 rows avg x 3 workers, 1000000 rows max (seg1), 0 rows min (seg0).
                     ->  Nested Loop  (cost=0.00..479898.05 rows=333334 width=8) (actual time=0.220..386.554 rows=1000000 loops=1)
                           Output: generate_series_1.generate_series, generate_series.generate_series
                           Join Filter: true
                           Rows out: 333333.33 rows avg x 3 workers, 1000000 rows max (seg1), 0 rows min (seg0).
                           ->  Function Scan on pg_catalog.generate_series generate_series_1  (cost=0.00..0.00 rows=334 width=4) (actual time=0.102..0.333 rows=1000 loops=1)
                                 Output: generate_series_1.generate_series
                                 Function Call: generate_series(1, 1000)
                                 work_mem: 40kB  Segments: 1  Max: 40kB (segment 1)
                                 Rows out: 333.33 rows avg x 3 workers, 1000 rows max (seg1), 0 rows min (seg0).
                           ->  Function Scan on pg_catalog.generate_series  (cost=0.00..0.00 rows=334 width=4) (actual time=0.000..0.092 rows=999 loops=1001)
                                 Output: generate_series.generate_series
                                 Function Call: generate_series(1, 1000)
                                 work_mem: 40kB  Segments: 1  Max: 40kB (segment 1)
                                 Rows out: 333333.67 rows avg x 3 workers, 1000001 rows max (seg1), 0 rows min (seg0).
 Planning time: 5.981 ms
   (slice0)    Executor memory: 87K bytes avg x 3 workers, 87K bytes max (seg0).
   (slice1)    Executor memory: 97K bytes avg x 3 workers, 172K bytes max (seg1).  Work_mem: 40K bytes max.
 Memory used:  128000kB
 Optimizer: Pivotal Optimizer (GPORCA)
 Execution time: 1180.349 ms
(34 rows)
explain (analyze,verbose,format text) select * from tt where a > b;

                                                          QUERY PLAN                                                           
-------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8) (actual time=2.534..223.679 rows=499500 loops=1)
   Output: a, b
   ->  Seq Scan on public.tt  (cost=0.00..431.00 rows=1 width=8) (actual time=0.145..127.350 rows=166742 loops=1)
         Output: a, b
         Filter: (tt.a > tt.b)
         Rows out: 166500.00 rows avg x 3 workers, 166742 rows max (seg0), 166340 rows min (seg2).
 Planning time: 2.578 ms
   (slice0)    Executor memory: 183K bytes.
   (slice1)    Executor memory: 43K bytes avg x 3 workers, 43K bytes max (seg0).
 Memory used:  128000kB
 Optimizer: Pivotal Optimizer (GPORCA)
 Execution time: 245.018 ms
(12 rows)

Why are the changes needed?

"Rows out" is useful for auto_explain.
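Because the feature is controlled by a GUC rather than an EXPLAIN parameter, auto_explain can pick it up with no source changes. A hypothetical postgresql.conf fragment (parameter values are illustrative only):

```
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '1s'   # log plans of statements slower than 1s
auto_explain.log_analyze = on          # include actual row counts
gp_enable_explain_rows_out = on        # include the per-segment "Rows out" line
```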

Does this PR introduce any user-facing change?

Yes: it adds a new GUC, gp_enable_explain_rows_out (off by default). When it is enabled, EXPLAIN ANALYZE output includes a per-node "Rows out" line.

How was this patch tested?

This feature is tested in the gp_explain regression test.

Contributor's Checklist

Here are some reminders and checklists before/when submitting your pull request, please check them:

  • Make sure your Pull Request has a clear title and commit message. You can use the git-commit template as a reference.
  • Sign the Contributor License Agreement as prompted for your first-time contribution (one-time setup).
  • Read the coding contribution guide, including our code conventions, workflow, and more.
  • Link related GitHub Issues or Discussions (if any).
  • Document changes.
  • Add tests for the change
  • Pass make installcheck
  • Pass make -C src/test installcheck-cbdb-parallel
  • Feel free to request cloudberrydb/dev team for review and approval when your PR is ready🥳

@CLAassistant

CLAassistant commented Oct 15, 2024

CLA assistant check
All committers have signed the CLA.

@github-actions bot left a comment

Hiiii, @robozmey welcome!🎊 Thanks for taking the effort to make our project better! 🙌 Keep making such awesome contributions!

@robozmey robozmey changed the title Rows out в EXPLAIN ANALYZE Rows out in EXPLAIN ANALYZE Oct 15, 2024
@robozmey robozmey changed the title Rows out in EXPLAIN ANALYZE Cherry pick Rows out in EXPLAIN ANALYZE Oct 15, 2024
@@ -277,6 +277,7 @@ int gp_hashagg_groups_per_bucket = 5;
int gp_motion_slice_noop = 0;

/* Cloudberry Database Experimental Feature GUCs */
bool gp_enable_explain_rows_out = false;
Contributor

It is better to pass a parameter to the EXPLAIN command than to use a GUC to control whether "Rows out" is printed.

Author

The "Rows out" feature was created for auto_explain. If we added "Rows out" as a parameter of EXPLAIN, we would have to change the auto_explain source, and auto_explain would no longer be compatible with vanilla.

Contributor

@fanfuxiaoran commented Oct 29, 2024

 auto_explain won't be compatible with vanilla

Hmm... I cannot understand. Could you give more details? Is auto_explain also included in vanilla?

Author

auto_explain is included in vanilla: https://github.com/greenplum-db/gpdb-archive/tree/main/contrib/auto_explain

Also, the GUC gp_enable_explain_rows_out is made by analogy with the GUC gp_enable_explain_allstat.

src/include/cdb/cdbvars.h (resolved)
src/backend/commands/explain_gp.c (resolved)
fanfuxiaoran previously approved these changes Nov 4, 2024
my-ship-it previously approved these changes Nov 5, 2024
Contributor

@avamingli left a comment

Do we have a Parallel Plan test (e.g. cbdb_parallel.sql) to ensure the output is as expected?

ntuples_imin);
}
else {
// ExplainOpenGroup("Rows Out", NULL, false, es);
Contributor

Please remove the unused code.

Author

Removed.

@robozmey
Author

Do we have a Parallel Plan test (e.g. cbdb_parallel.sql) to ensure the output is as expected?

I'm not sure why we need a Parallel Plan test, but I expanded the gp_explain.sql test.

@avamingli
Contributor

avamingli commented Nov 26, 2024

Do we have a Parallel Plan test (e.g. cbdb_parallel.sql) to ensure the output is as expected?

I'm not sure why we need a Parallel Plan test, but I expanded the gp_explain.sql test.

In GPDB, one process executes each Slice on each segment, and EXPLAIN uses the statistics of that gang of QEs to compute the results.
In CBDB, we have the Parallel feature: for a parallel plan, there may be multiple QEs executing the same Slice on each segment.
We call them parallel workers.

See the CBDB parallel README https://github.com/apache/cloudberry/blob/11333c0b4d3a4962b0a6610ceb5b6d7a12e45ec4/src/backend/optimizer/README.cbdb.parallel for more details.
Most cases can be found in https://github.com/apache/cloudberry/blob/11333c0b4d3a4962b0a6610ceb5b6d7a12e45ec4/src/test/regress/sql/cbdb_parallel.sql

This is a significant difference between CBDB and GPDB.

In parallel-plan cases, what should the statistic results be? Does the current code compute the results correctly, or is the output already as expected?
Maybe the current code doesn't need anything more, but we should be clear about that.
In any case, there should be test cases to verify the EXPLAIN output.
