
Benchmarks #22

Closed
xsc opened this issue Oct 14, 2016 · 9 comments

xsc commented Oct 14, 2016

First off, thanks for the work on this – it's exciting stuff! I'm aware that any optimisation efforts are probably still far on the horizon but I took the liberty of creating a small set of benchmarks in this repository, currently only for query parsing.

Unfortunately, as of now, it seems that parsing incurs nearly intolerable overhead. For example, the following query (which I don't feel is an unfairly contrived example) clocks in at around 50ms mean parsing time:

{
     newestUsers { name, image },
     topUser: firstUser (sort: "rank", order: "desc") {
       name,
       projects {
         __type,
         name,
         ...Spreadsheet,
         ...Painting
       }
     }
}

fragment Spreadsheet on SpreadsheetProject {
    rowCount,
    columnCount
}

fragment Painting on PaintingProject {
    dominantColor { name, hexCode }
}

Raw Output for the :complex-query testcase (here):

Case:  :complex-query
Evaluation count : 1320 in 60 samples of 22 calls.
             Execution time mean : 47.453645 ms
    Execution time std-deviation : 2.097021 ms
   Execution time lower quantile : 45.597164 ms ( 2.5%)
   Execution time upper quantile : 52.667917 ms (97.5%)

Found 3 outliers in 60 samples (5.0000 %)
        low-severe       2 (3.3333 %)
        low-mild         1 (1.6667 %)
 Variance from outliers : 30.3294 % Variance is moderately inflated by outliers

Most of this seems to be instaparse, though, as shown by :complex-query-instaparse-only:

Case:  :complex-query-instaparse-only
Evaluation count : 1380 in 60 samples of 23 calls.
             Execution time mean : 45.664096 ms
    Execution time std-deviation : 333.091255 µs
   Execution time lower quantile : 45.152926 ms ( 2.5%)
   Execution time upper quantile : 46.305994 ms (97.5%)

Anyway, I just wanted to bring this to your attention. Things like query caching would surely help in a production context, but parsing overhead seems to me like something that can't hurt to reduce. Of course, if GraphQL is just a hard to parse language, there's nothing to be done.

Update: There was a problem with the benchmarks that inflated the mean to ~120ms. The actual result is a lot better, but 50ms of parsing might still contribute massively to resolution times.

tendant (Owner) commented Oct 14, 2016

Thanks @xsc for putting time into the performance work.

Performance is really important for this library.

For parser performance, it is easy to support caching. I plan to enhance the executor to accept a pre-parsed query; once that is done, it will be up to the user how to cache parsed queries.

It is even a security feature: you can choose to accept only cached queries, so the parser is never exposed to arbitrary input. I have heard this is what Facebook does.
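Both ideas can be sketched in a few lines. This is a hypothetical sketch, not graphql-clj's actual API: `parse` is a stub standing in for the real parser function, whose signature may differ.

```clojure
;; Stub standing in for graphql-clj's parser; the real API may differ.
(defn parse [query-string]
  {:ast [:document query-string]})

;; Transparent caching: repeated parses of the same string become map lookups.
(def parse-cached (memoize parse))

;; Security variant: only queries registered ahead of time are accepted,
;; so the parser is never exposed to arbitrary input at request time.
(def whitelist
  {"{ me { name } }" (parse "{ me { name } }")})

(defn lookup-query [query-string]
  (or (get whitelist query-string)
      {:errors [{:message "unknown query rejected"}]}))
```

With `memoize` the cache grows without bound, so a production version would want an LRU or TTL cache instead; the whitelist variant sidesteps that by fixing the set of accepted queries up front.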

tendant (Owner) commented Oct 15, 2016

I am debating the interface of the executor.

Two options so far:

  1. Provide a new function that accepts a parsed query.
  2. Extend the existing execute function as a multimethod. However, multimethods may raise performance concerns.

I will leave this issue open for more comments and ideas.
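The two options can be sketched side by side. All names and stub bodies below are hypothetical, not graphql-clj's actual API:

```clojure
;; Option 1: a separate entry point for pre-parsed queries.
(defn execute-parsed [schema parsed-query]
  {:data [schema parsed-query]})                  ; stub executor body

(defn execute [schema query-string]
  (execute-parsed schema [:parsed query-string])) ; parse step stubbed

;; Option 2: a single multimethod dispatching on the query argument's type.
;; Dispatch adds a small per-call cost (dispatch fn + method lookup),
;; which is the performance concern mentioned above.
(defmulti execute* (fn [_schema query] (if (string? query) :string :parsed)))

(defmethod execute* :string [schema query-string]
  (execute* schema [:parsed query-string]))

(defmethod execute* :parsed [schema parsed-query]
  {:data [schema parsed-query]})
```

Option 1 keeps the hot path a plain function call; option 2 gives callers one entry point at the cost of a dispatch per invocation.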

aew (Collaborator) commented Oct 15, 2016

I'm in favor of a new function, and in general I think that separating the parsing (and validation) step from the execution step (with a cache in between) makes a lot of sense. This takes performance optimization pressure off of the parse / validate steps, which are complex, and also reduces the number of corner cases that can leak into the execution phase.

If it were me, I'd focus on (in order):

  1. robustness
  2. feature completeness
  3. code cleanliness
  4. execution phase performance
  5. documentation and examples
  6. other stuff

aew (Collaborator) commented Oct 18, 2016

I've thought about this a bit more, and I would propose we make validation a mandatory step upstream of execution.

My reasoning:

  • To preserve the line and column metadata needed for standards-compliant error messages, our parser needs to do fewer transformations than it is currently doing. Transformations to raw strings/values wipe out the metadata instaparse uses to track locations in the AST. Also, tracking line and column requires another parser pass, which makes parsing slower.
  • For most applications, it seems natural that queries will be parsed and validated during a development phase (where good errors matter as much as latency, and user permissions may differ) and then cached, keyed by a hash of the query document plus the schema or something similar.
  • The optimal data structure for validation can differ from the data structure resulting from parsing and validation, which would be designed for execution (we can apply transformations as part of the validation tree visit). Some of the indirection (extra wrappers, vectors of maps instead of key-value maps, etc.) is only necessary for validation and can be optimized away after validation passes.
  • There are problems today where the data structure that exits validation cannot be passed into execution, which we need to fix in any event.
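The cache-by-hash idea in the second point could look roughly like this. Everything here is a hypothetical sketch: `parse-and-validate` is a stub for whatever the real parse + validate entry point ends up being.

```clojure
;; Stub for the parse + validate pipeline; the real signature will differ.
(defn parse-and-validate [schema query-string]
  {:validated [:doc query-string] :schema schema})

(def cache (atom {}))

;; Key the cache by a hash of schema + query document, so a change to
;; either one forces revalidation.
(defn validated-query [schema query-string]
  (let [k (hash [schema query-string])]
    (or (get @cache k)
        (let [v (parse-and-validate schema query-string)]
          (swap! cache assoc k v)
          v))))
```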

tendant (Owner) commented Oct 20, 2016

Just want to have more discussion around the data and the flow:

Existing flow:

Query/Schema String -- Parser --> Parsed AST -- Transformation --> Transformed Map -- Executor --> Result

Complete flow:

Query/Schema String -- Parser --> Parsed AST -- Transformation --> Transformed Map -- Validator --> Validation Error or Validated Map -- Executor --> Executor Error or Result

  1. The Transformed Map and Validated Map should share the same format if possible, so that the validation step can be skipped when needed (though validation may add some data to the map).
  2. Provide a new executor function that takes a Validated Map (or Transformed Map) as a parameter, in addition to the existing execute function that takes only a query string. Validated and Transformed Maps can then be cached and even persisted (e.g. as edn).
  3. Validation Errors should use the same format as Executor Errors, as defined in the GraphQL spec, so they can be returned to the client as a result.
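The complete flow reads naturally as a threaded pipeline in Clojure. The stage functions below are stubs to show the shape only, not graphql-clj's API; each stage short-circuits on the first error so that an error map flows through to the result untouched:

```clojure
(defn parse-stage     [q] {:ast q})
(defn transform-stage [m] (if (:errors m) m (assoc m :transformed true)))
(defn validate-stage  [m] (if (:errors m) m (assoc m :validated true)))
(defn execute-stage   [m] (if (:errors m) m {:data (:ast m)}))

(defn run-query [query-string]
  (-> query-string
      parse-stage
      transform-stage
      validate-stage
      execute-stage))
```

Because errors and results share one map shape, point 3 (Validation Errors formatted like Executor Errors) falls out for free: whatever exits the pipeline can be returned to the client as-is.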

xsc (Author) commented Oct 20, 2016

Adding to that, it might be reasonable to have a separate repository/artifact specifying (as in clojure.spec) the format of what you named "Transformed Map", "Validation Error" and "Execution Error". It would then be possible to provide different parser/validator/executor implementations, as long as they fulfill those specs. (I'm not sure, though, whether there is a lot left that would remain within a graphql-clj project, maybe default implementations and executor contracts?)

For example, I've been trying out ANTLR4 for parsing (here), producing an AST conforming to this spec. I'd very much like to reuse an existing validator implementation (or plug in a different executor subsystem, e.g. one based on claro) but for that we need a common AST format that isn't prone to change (bar a GraphQL spec update).

(This also adds to @aew's "robustness" and "code cleanliness" points since those things can be tested and tuned independently.)

Thus, generally, I'm very much in favour of using more of a pipeline approach (parser -> validator -> executor) as long as the interfaces between the pipeline parts are well-defined and stable.

tendant (Owner) commented Oct 20, 2016

@xsc Parsing, validation, and execution are separated into phases intentionally, in order to make those parts pluggable. It is still better to keep them in one repository so the project is easy to maintain and release; I am open to splitting the library into multiple repositories once it is stable enough. You can still use individual parts of graphql-clj as you wish.

I agree it is very important to come up with the right spec/format for the "Transformed Map". @aew did a great job introducing clojure.spec in validation, which helps us understand exactly what needs to be in the spec/format. I hope we can come to a conclusion once validation is done.

For the executor, I prefer to reuse the same spec/format as the "Transformed Map" if possible. It would be painful to maintain two sets of specs/formats. (#35)

The current executor is not the ideal solution. A few things are needed here:

  1. (Simplified Executor #36) Executor: walks the "Transformed Map" and realizes the result map.
  2. (Generic data-loader #37) Dataloader: a generic data-loader that can be used anywhere to provide caching and a solution to the N+1 problem (for example: https://github.com/facebook/dataloader). It should be an independent library.
  3. (Resolver function #38) Resolver: provides the mapping from (type, field) -> fn. We need to figure out a good definition of resolver-fn.

@xsc I had a quick peek at claro; it looks pretty promising. I had looked into muse before, but I prefer something simpler.

There are a lot of things that can be improved; I will create issues for some of them. It would be great to keep getting comments/ideas around them.
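The resolver mapping from point 3 could be sketched as a plain map from [type field] to a function. The (context, parent, args) resolver signature and all names here are assumptions for illustration, not graphql-clj's actual definition:

```clojure
(def resolvers
  {["Query" "firstUser"] (fn [_ctx _parent args]
                           {:name "alice" :sort (get args "sort")})
   ["User" "name"]       (fn [_ctx parent _args]
                           (:name parent))})

;; Fall back to a plain map lookup when no resolver is registered,
;; mirroring the default-resolver behavior of other GraphQL implementations.
(defn resolve-field [ctx type-name field-name parent args]
  (if-let [f (get resolvers [type-name field-name])]
    (f ctx parent args)
    (get parent (keyword field-name))))
```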

aew (Collaborator) commented Nov 7, 2016

Having thought a bit more about this, I think that validation is intended to be a mandatory step (albeit one completed outside the inner loop of repeatedly executing a given query against a given schema):

https://facebook.github.io/graphql/#sec-Validation

An invalid request is still technically executable, and will always produce a stable result as defined by the procedures in the Execution section, however that result may be ambiguous, surprising, or unexpected relative to the request containing validation errors, so execution should not occur for invalid requests.

Typically validation is performed in the context of a request immediately before execution, however a GraphQL service may execute a request without explicitly validating it if that exact same request is known to have been validated before.

This part of the spec explicitly states that execution should not occur for invalid requests. On top of this, supporting transformations directly from parse output to execution input is more work than consistently arriving at execution via the validation phase, which already introduces the additional transformations needed for validation (and those are likely useful for an execution tree visit too).

tendant (Owner) commented May 19, 2017

The parser is about 200 times faster in the most recent version (0.2.3), which uses a Java parser.

Here is a result for comparison:

Case: :complex-query
Evaluation count : 7699500 in 60 samples of 128325 calls.
Execution time mean : 7.910276 µs
Execution time std-deviation : 183.697851 ns
Execution time lower quantile : 7.670498 µs ( 2.5%)
Execution time upper quantile : 8.329677 µs (97.5%)

Found 9 outliers in 60 samples (15.0000 %)
low-severe 9 (15.0000 %)
Variance from outliers : 11.0094 % Variance is moderately inflated by outliers

Detailed performance benchmarks can be found at:

https://github.com/xsc/graphql-clj-bench/blob/master/result-0.2.3.txt

@tendant tendant closed this as completed May 19, 2017