enforce max series for metrics queries #4525

Open · wants to merge 3 commits into base: main
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,7 @@
## main / unreleased

* [CHANGE] Enforce max series in response for metrics queries [#4525](https://github.com/grafana/tempo/pull/4525) (@ie-pham)
Member:

nit: the changelog entry makes it sound like this is a behaviour change in the current endpoints, while we are actually adding new v2 endpoints. Can we update the entry to make that clear?


# v2.7.0-rc.0

* [CHANGE] Disable gRPC compression in the querier and distributor for performance reasons [#4429](https://github.com/grafana/tempo/pull/4429) (@carles-grafana)
3 changes: 3 additions & 0 deletions cmd/tempo/app/modules.go
@@ -360,6 +360,7 @@ func (t *App) initQuerier() (services.Service, error) {

queryRangeHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.QueryRangeHandler))
t.Server.HTTPRouter().Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryRange)), queryRangeHandler)
t.Server.HTTPRouter().Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryRangeV2)), queryRangeHandler)

return t.querier, t.querier.CreateAndRegisterWorker(t.Server.HTTPHandler())
}
@@ -414,6 +415,8 @@ func (t *App) initQueryFrontend() (services.Service, error) {
t.Server.HTTPRouter().Handle(addHTTPAPIPrefix(&t.cfg, api.PathSpanMetricsSummary), base.Wrap(queryFrontend.MetricsSummaryHandler))
t.Server.HTTPRouter().Handle(addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryInstant), base.Wrap(queryFrontend.MetricsQueryInstantHandler))
t.Server.HTTPRouter().Handle(addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryRange), base.Wrap(queryFrontend.MetricsQueryRangeHandler))
t.Server.HTTPRouter().Handle(addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryInstantV2), base.Wrap(queryFrontend.MetricsQueryRangeV2Handler))
t.Server.HTTPRouter().Handle(addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryRangeV2), base.Wrap(queryFrontend.MetricsQueryRangeV2Handler))

// the query frontend needs to have knowledge of the blocks so it can shard search jobs
if t.cfg.Target == QueryFrontend {
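For orientation, a caller of the newly registered v2 route might look like the sketch below. This is a minimal, hypothetical example: the literal path and the `PARTIAL` JSON encoding are assumptions (the real route comes from `api.PathMetricsQueryRangeV2`, whose value is not shown in this diff, and the status field mirrors `tempopb.QueryRangeResponse`).

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// queryRangeV2Path is a placeholder: the real route is registered from
// Tempo's api.PathMetricsQueryRangeV2 constant, which this diff doesn't show.
const queryRangeV2Path = "/api/v2/metrics/query_range"

// queryRangeResponse captures only the fields this sketch inspects; the
// full shape is tempopb.QueryRangeResponse.
type queryRangeResponse struct {
	Status  string `json:"status,omitempty"`
	Message string `json:"message,omitempty"`
}

func main() {
	params := url.Values{"q": {`{} | rate()`}}
	resp, err := http.Get("http://localhost:3200" + queryRangeV2Path + "?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var qr queryRangeResponse
	if err := json.NewDecoder(resp.Body).Decode(&qr); err != nil {
		panic(err)
	}
	// On the v2 endpoint a truncated result is flagged rather than silently capped.
	if qr.Status == "PARTIAL" {
		fmt.Println("partial result:", qr.Message)
	}
}
```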
3 changes: 3 additions & 0 deletions docs/sources/tempo/configuration/_index.md
@@ -696,6 +696,9 @@ query_frontend:
# Maximun number of exemplars per range query. Limited to 100.
[max_exemplars: <int> | default = 100 ]

# Maximum number of time series returned for a metrics query.
[max_response_series: <int> | default 1000]
Member:

Suggested change:
- [max_response_series: <int> | default 1000]
+ [max_response_series: <int> | default = 1000]

to match other default values in the doc.


# query_backend_after controls where the query-frontend searches for traces.
# Time ranges older than query_backend_after will be searched in the backend/object storage only.
# Time ranges between query_backend_after and now will be queried from the metrics-generators.
1 change: 1 addition & 0 deletions docs/sources/tempo/configuration/manifest.md
@@ -320,6 +320,7 @@ query_frontend:
query_backend_after: 30m0s
interval: 5m0s
max_exemplars: 100
max_response_series: 1000
multi_tenant_queries_enabled: true
response_consumers: 10
weights:
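Taken together with the docs change above, an operator who wants a different cap would presumably set it under `query_frontend.metrics`. A sketch based on the manifest excerpt — the nesting follows the manifest, and `2000` is an arbitrary example value:

```yaml
# Sketch: raising the per-query series cap from its default of 1000.
# Placement under query_frontend.metrics follows the manifest above.
query_frontend:
  metrics:
    max_exemplars: 100
    max_response_series: 2000
```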
12 changes: 9 additions & 3 deletions modules/frontend/combiner/metrics_query_range.go
@@ -1,6 +1,7 @@
package combiner

import (
"fmt"
"math"
"slices"
"sort"
@@ -14,7 +15,7 @@ import (
var _ GRPCCombiner[*tempopb.QueryRangeResponse] = (*genericCombiner[*tempopb.QueryRangeResponse])(nil)

// NewQueryRange returns a query range combiner.
func NewQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool) (Combiner, error) {
func NewQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool, setMaxSeries bool, maxSeries int) (Combiner, error) {
Member:

We could pass maxSeries as 0 to disable it and skip the setMaxSeries variable.

No strong preference here though; okay with this as well.

combiner, err := traceql.QueryRangeCombinerFor(req, traceql.AggregateModeFinal, trackDiffs)
if err != nil {
return nil, err
@@ -43,6 +44,11 @@ func NewQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool) (Combiner, error) {
if resp == nil {
resp = &tempopb.QueryRangeResponse{}
}
if setMaxSeries && len(resp.Series) > maxSeries {
resp.Series = resp.Series[:maxSeries]
Member:

IIUC, we are still collecting all the series and then dropping the extra data before returning, rather than exiting early here, right?

It would be great if we exited early when we hit this limit. That would be very useful when a metrics query returns high-cardinality results, for example: `{} | rate() by (span:id)`. Just the work of pulling the series from the blocks will be resource-intensive, and can OOM all the generators.

Contributor (author):

I raised the question here: #4525 (comment). The problem is that the results come in non-deterministically, so if we exit as soon as we hit the max series, we could end up with a response where each series has just one data point, which isn't very useful. But yes, it would save us from the performance hit.

Member:

Cool, I am okay with the current implementation, and we can leave a note or TODO that this can be improved by making the results deterministic and exiting early.

Sidenote: I think Joe's ordered results work might help here.

resp.Status = tempopb.PartialStatus_PARTIAL
resp.Message = fmt.Sprintf("Response exceeds maximum series of %d, a partial response is returned", maxSeries)
}
sortResponse(resp)
attachExemplars(req, resp)
return resp, nil
@@ -62,8 +68,8 @@ func NewQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool) (Combiner, error) {
return c, nil
}

func NewTypedQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool) (GRPCCombiner[*tempopb.QueryRangeResponse], error) {
c, err := NewQueryRange(req, trackDiffs)
func NewTypedQueryRange(req *tempopb.QueryRangeRequest, trackDiffs bool, setMaxSeries bool, maxSeries int) (GRPCCombiner[*tempopb.QueryRangeResponse], error) {
c, err := NewQueryRange(req, trackDiffs, setMaxSeries, maxSeries)
if err != nil {
return nil, err
}
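The review thread above comes down to a trade-off: the combiner collects every series and truncates once at the end, which keeps the output deterministic but does all the collection work up front. A self-contained sketch of that cap-and-flag pattern — illustrative names, not Tempo's actual types — with the discussed early-exit caveat in the comments:

```go
package main

import "fmt"

type series struct{ labels string }

type response struct {
	Series  []series
	Status  string // "" or "PARTIAL"
	Message string
}

// capSeries truncates a fully collected response to maxSeries and flags it
// as partial, mirroring the approach in this PR. Exiting early while still
// collecting would save work on high-cardinality queries, but because shards
// return series in no fixed order, an early exit could leave every kept
// series with only a fraction of its data points.
func capSeries(resp *response, maxSeries int) {
	if maxSeries > 0 && len(resp.Series) > maxSeries {
		resp.Series = resp.Series[:maxSeries]
		resp.Status = "PARTIAL"
		resp.Message = fmt.Sprintf("Response exceeds maximum series of %d, a partial response is returned", maxSeries)
	}
}

func main() {
	resp := &response{Series: make([]series, 1500)}
	capSeries(resp, 1000)
	fmt.Println(len(resp.Series), resp.Status) // 1000 PARTIAL
}
```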
4 changes: 2 additions & 2 deletions modules/frontend/combiner/trace_by_id_v2.go
@@ -12,7 +12,7 @@ func NewTraceByIDV2(maxBytes int, marshalingFormat string) Combiner {
var partialTrace bool
gc := &genericCombiner[*tempopb.TraceByIDResponse]{
combine: func(partial *tempopb.TraceByIDResponse, _ *tempopb.TraceByIDResponse, _ PipelineResponse) error {
if partial.Status == tempopb.TraceByIDResponse_PARTIAL {
if partial.Status == tempopb.PartialStatus_PARTIAL {
partialTrace = true
}
_, err := combiner.Consume(partial.Trace)
@@ -30,7 +30,7 @@ func NewTraceByIDV2(maxBytes int, marshalingFormat string) Combiner {
resp.Trace = traceResult

if partialTrace || combiner.IsPartialTrace() {
resp.Status = tempopb.TraceByIDResponse_PARTIAL
resp.Status = tempopb.PartialStatus_PARTIAL
resp.Message = fmt.Sprintf("Trace exceeds maximum size of %d bytes, a partial trace is returned", maxBytes)
}

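The change in this file is mechanical: the partial flag moves from the message-scoped `TraceByIDResponse_PARTIAL` enum to a `PartialStatus` enum that the query-range responses evidently now share. A generic Go sketch of that shared-enum refactor pattern (illustrative types, not the generated tempopb code):

```go
package main

import "fmt"

// One status enum shared across response types, rather than a nested
// enum per message.
type PartialStatus int32

const (
	PartialStatusComplete PartialStatus = iota
	PartialStatusPartial
)

type TraceByIDResponse struct{ Status PartialStatus }
type QueryRangeResponse struct{ Status PartialStatus }

func main() {
	// Both response types can now be checked the same way.
	responses := []PartialStatus{
		TraceByIDResponse{Status: PartialStatusPartial}.Status,
		QueryRangeResponse{Status: PartialStatusComplete}.Status,
	}
	for _, s := range responses {
		fmt.Println(s == PartialStatusPartial)
	}
}
```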
6 changes: 3 additions & 3 deletions modules/frontend/combiner/trace_by_id_v2_test.go
@@ -51,13 +51,13 @@ func TestNewTraceByIdV2ReturnsAPartialTrace(t *testing.T) {
actualResp := &tempopb.TraceByIDResponse{}
err = new(jsonpb.Unmarshaler).Unmarshal(res.Body, actualResp)
require.NoError(t, err)
assert.Equal(t, actualResp.Status, tempopb.TraceByIDResponse_PARTIAL)
assert.Equal(t, actualResp.Status, tempopb.PartialStatus_PARTIAL)
}

func TestNewTraceByIdV2ReturnsAPartialTraceOnPartialTraceReturnedByQuerier(t *testing.T) {
traceResponse := &tempopb.TraceByIDResponse{
Trace: test.MakeTrace(2, []byte{0x01, 0x02}),
Status: tempopb.TraceByIDResponse_PARTIAL,
Status: tempopb.PartialStatus_PARTIAL,
Metrics: &tempopb.TraceByIDMetrics{},
}
resBytes, err := proto.Marshal(traceResponse)
@@ -79,7 +79,7 @@ func TestNewTraceByIdV2ReturnsAPartialTraceOnPartialTraceReturnedByQuerier(t *testing.T) {
actualResp := &tempopb.TraceByIDResponse{}
err = new(jsonpb.Unmarshaler).Unmarshal(res.Body, actualResp)
require.NoError(t, err)
assert.Equal(t, actualResp.Status, tempopb.TraceByIDResponse_PARTIAL)
assert.Equal(t, actualResp.Status, tempopb.PartialStatus_PARTIAL)
}

func TestNewTraceByIDV2(t *testing.T) {
1 change: 1 addition & 0 deletions modules/frontend/config.go
@@ -98,6 +98,7 @@ func (cfg *Config) RegisterFlagsAndApplyDefaults(string, *flag.FlagSet) {
TargetBytesPerRequest: defaultTargetBytesPerRequest,
Interval: 5 * time.Minute,
MaxExemplars: 100,
MaxResponseSeries: 1000,
},
SLO: slo,
}
69 changes: 39 additions & 30 deletions modules/frontend/frontend.go
@@ -43,17 +43,20 @@ type (
)

type QueryFrontend struct {
TraceByIDHandler, TraceByIDHandlerV2, SearchHandler, MetricsSummaryHandler, MetricsQueryInstantHandler, MetricsQueryRangeHandler http.Handler
SearchTagsHandler, SearchTagsV2Handler, SearchTagsValuesHandler, SearchTagsValuesV2Handler http.Handler
cacheProvider cache.Provider
streamingSearch streamingSearchHandler
streamingTags streamingTagsHandler
streamingTagsV2 streamingTagsV2Handler
streamingTagValues streamingTagValuesHandler
streamingTagValuesV2 streamingTagValuesV2Handler
streamingQueryRange streamingQueryRangeHandler
streamingQueryInstant streamingQueryInstantHandler
logger log.Logger
TraceByIDHandler, TraceByIDHandlerV2, SearchHandler, MetricsSummaryHandler http.Handler
SearchTagsHandler, SearchTagsV2Handler, SearchTagsValuesHandler, SearchTagsValuesV2Handler http.Handler
MetricsQueryInstantHandler, MetricsQueryRangeHandler, MetricsQueryInstantV2Handler, MetricsQueryRangeV2Handler http.Handler
cacheProvider cache.Provider
streamingSearch streamingSearchHandler
streamingTags streamingTagsHandler
streamingTagsV2 streamingTagsV2Handler
streamingTagValues streamingTagValuesHandler
streamingTagValuesV2 streamingTagValuesV2Handler
streamingQueryRange streamingQueryRangeHandler
streamingQueryInstant streamingQueryInstantHandler
streamingQueryRangeV2 streamingQueryRangeHandler
streamingQueryInstantV2 streamingQueryInstantHandler
logger log.Logger
}

var tracer = otel.Tracer("modules/frontend")
@@ -187,30 +190,36 @@ func New(cfg Config, next pipeline.RoundTripper, o overrides.Interface, reader t
searchTagValues := newTagValuesHTTPHandler(cfg, searchTagValuesPipeline, o, logger)
searchTagValuesV2 := newTagValuesV2HTTPHandler(cfg, searchTagValuesPipeline, o, logger)
metrics := newMetricsSummaryHandler(metricsPipeline, logger)
queryInstant := newMetricsQueryInstantHTTPHandler(cfg, queryInstantPipeline, logger) // Reuses the same pipeline
queryRange := newMetricsQueryRangeHTTPHandler(cfg, queryRangePipeline, logger)
queryInstant := newMetricsQueryInstantHTTPHandler(cfg, queryInstantPipeline, logger, false) // Reuses the same pipeline
queryRange := newMetricsQueryRangeHTTPHandler(cfg, queryRangePipeline, logger, false)
queryInstantV2 := newMetricsQueryInstantHTTPHandler(cfg, queryInstantPipeline, logger, true) // Reuses the same pipeline
queryRangeV2 := newMetricsQueryRangeHTTPHandler(cfg, queryRangePipeline, logger, true)

return &QueryFrontend{
// http/discrete
TraceByIDHandler: newHandler(cfg.Config.LogQueryRequestHeaders, traces, logger),
TraceByIDHandlerV2: newHandler(cfg.Config.LogQueryRequestHeaders, tracesV2, logger),
SearchHandler: newHandler(cfg.Config.LogQueryRequestHeaders, search, logger),
SearchTagsHandler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTags, logger),
SearchTagsV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagsV2, logger),
SearchTagsValuesHandler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagValues, logger),
SearchTagsValuesV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagValuesV2, logger),
MetricsSummaryHandler: newHandler(cfg.Config.LogQueryRequestHeaders, metrics, logger),
MetricsQueryInstantHandler: newHandler(cfg.Config.LogQueryRequestHeaders, queryInstant, logger),
MetricsQueryRangeHandler: newHandler(cfg.Config.LogQueryRequestHeaders, queryRange, logger),
TraceByIDHandler: newHandler(cfg.Config.LogQueryRequestHeaders, traces, logger),
TraceByIDHandlerV2: newHandler(cfg.Config.LogQueryRequestHeaders, tracesV2, logger),
SearchHandler: newHandler(cfg.Config.LogQueryRequestHeaders, search, logger),
SearchTagsHandler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTags, logger),
SearchTagsV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagsV2, logger),
SearchTagsValuesHandler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagValues, logger),
SearchTagsValuesV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, searchTagValuesV2, logger),
MetricsSummaryHandler: newHandler(cfg.Config.LogQueryRequestHeaders, metrics, logger),
MetricsQueryInstantHandler: newHandler(cfg.Config.LogQueryRequestHeaders, queryInstant, logger),
MetricsQueryRangeHandler: newHandler(cfg.Config.LogQueryRequestHeaders, queryRange, logger),
MetricsQueryInstantV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, queryInstantV2, logger),
MetricsQueryRangeV2Handler: newHandler(cfg.Config.LogQueryRequestHeaders, queryRangeV2, logger),

// grpc/streaming
streamingSearch: newSearchStreamingGRPCHandler(cfg, searchPipeline, apiPrefix, logger),
streamingTags: newTagsStreamingGRPCHandler(cfg, searchTagsPipeline, apiPrefix, o, logger),
streamingTagsV2: newTagsV2StreamingGRPCHandler(cfg, searchTagsPipeline, apiPrefix, o, logger),
streamingTagValues: newTagValuesStreamingGRPCHandler(cfg, searchTagValuesPipeline, apiPrefix, o, logger),
streamingTagValuesV2: newTagValuesV2StreamingGRPCHandler(cfg, searchTagValuesPipeline, apiPrefix, o, logger),
streamingQueryRange: newQueryRangeStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger),
streamingQueryInstant: newQueryInstantStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger), // Reuses the same pipeline
streamingSearch: newSearchStreamingGRPCHandler(cfg, searchPipeline, apiPrefix, logger),
streamingTags: newTagsStreamingGRPCHandler(cfg, searchTagsPipeline, apiPrefix, o, logger),
streamingTagsV2: newTagsV2StreamingGRPCHandler(cfg, searchTagsPipeline, apiPrefix, o, logger),
streamingTagValues: newTagValuesStreamingGRPCHandler(cfg, searchTagValuesPipeline, apiPrefix, o, logger),
streamingTagValuesV2: newTagValuesV2StreamingGRPCHandler(cfg, searchTagValuesPipeline, apiPrefix, o, logger),
streamingQueryRange: newQueryRangeStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger, false),
streamingQueryInstant: newQueryInstantStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger, false), // Reuses the same pipeline
streamingQueryRangeV2: newQueryRangeStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger, true),
streamingQueryInstantV2: newQueryInstantStreamingGRPCHandler(cfg, queryRangePipeline, apiPrefix, logger, true),

cacheProvider: cacheProvider,
logger: logger,
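Worth noting in the wiring above: the v1 and v2 handlers share each pipeline, and the only difference is the boolean threaded through to the combiner. A sketch of that one-pipeline, two-routes pattern — names here are illustrative, not Tempo's actual signatures:

```go
package main

import (
	"fmt"
	"net/http"
)

// newRangeHandler builds a handler over a shared pipeline; enforceLimit is
// the only thing distinguishing the v1 route from the v2 route.
func newRangeHandler(pipeline func() int, enforceLimit bool, maxSeries int) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		n := pipeline() // stand-in for the real query pipeline
		if enforceLimit && n > maxSeries {
			fmt.Fprintf(w, "partial: %d series capped at %d\n", n, maxSeries)
			return
		}
		fmt.Fprintf(w, "complete: %d series\n", n)
	}
}

func main() {
	pipeline := func() int { return 1500 } // shared by both routes
	http.Handle("/api/metrics/query_range", newRangeHandler(pipeline, false, 1000))
	http.Handle("/api/v2/metrics/query_range", newRangeHandler(pipeline, true, 1000))
	_ = http.ListenAndServe // serving omitted in this sketch
}
```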
13 changes: 9 additions & 4 deletions modules/frontend/metrics_query_handler.go
@@ -20,7 +20,7 @@ import (
"github.com/grafana/tempo/pkg/tempopb"
)

func newQueryInstantStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], apiPrefix string, logger log.Logger) streamingQueryInstantHandler {
func newQueryInstantStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], apiPrefix string, logger log.Logger, setMaxSeries bool) streamingQueryInstantHandler {
postSLOHook := metricsSLOPostHook(cfg.Metrics.SLO)
downstreamPath := path.Join(apiPrefix, api.PathMetricsQueryRange)

@@ -51,14 +51,19 @@ func newQueryInstantStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTri
httpReq = httpReq.Clone(ctx)

var finalResponse *tempopb.QueryInstantResponse
c, err := combiner.NewTypedQueryRange(qr, true)
c, err := combiner.NewTypedQueryRange(qr, true, setMaxSeries, cfg.Metrics.Sharder.MaxResponseSeries)
if err != nil {
return err
}

collector := pipeline.NewGRPCCollector(next, cfg.ResponseConsumers, c, func(qrr *tempopb.QueryRangeResponse) error {
// Translate each diff into the instant version and send it
resp := translateQueryRangeToInstant(*qrr)
if setMaxSeries {
// series already limited by the query range combiner just need to copy the status and message
resp.Status = qrr.Status
resp.Message = qrr.Message
}
finalResponse = &resp // Save last response for bytesProcessed for the SLO calculations
return srv.Send(&resp)
})
@@ -79,7 +84,7 @@ func newQueryInstantStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTri

// newMetricsQueryInstantHTTPHandler handles instant queries. Internally these are rewritten as query_range with single step
// to make use of the existing pipeline.
func newMetricsQueryInstantHTTPHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], logger log.Logger) http.RoundTripper {
func newMetricsQueryInstantHTTPHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], logger log.Logger, setMaxSeries bool) http.RoundTripper {
postSLOHook := metricsSLOPostHook(cfg.Metrics.SLO)

return RoundTripperFunc(func(req *http.Request) (*http.Response, error) {
@@ -114,7 +119,7 @@ func newMetricsQueryInstantHTTPHandler(cfg Config, next pipeline.AsyncRoundTripp
req.URL.Path = strings.ReplaceAll(req.URL.Path, api.PathMetricsQueryInstant, api.PathMetricsQueryRange)
req = api.BuildQueryRangeRequest(req, qr, "") // dedicated cols are never passed from the caller

combiner, err := combiner.NewTypedQueryRange(qr, false)
combiner, err := combiner.NewTypedQueryRange(qr, false, setMaxSeries, cfg.Metrics.Sharder.MaxResponseSeries)
if err != nil {
level.Error(logger).Log("msg", "query instant: query range combiner failed", "err", err)
return &http.Response{
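The interesting part of this change is the added block in the streaming handler: the instant handler reuses the range pipeline, and `translateQueryRangeToInstant` knows nothing about the series cap, so the partial status and message must be copied across after translation. A tiny sketch of that translate-then-copy step, with illustrative types in place of the tempopb ones:

```go
package main

import "fmt"

type rangeResp struct {
	Series  []float64
	Status  string
	Message string
}

type instantResp struct {
	Value   float64
	Status  string
	Message string
}

// translate collapses a range result into an instant one. It is unaware of
// the series cap, so partial-ness must be carried over by the caller.
func translate(r rangeResp) instantResp {
	var sum float64
	for _, v := range r.Series {
		sum += v
	}
	return instantResp{Value: sum}
}

func main() {
	qrr := rangeResp{Series: []float64{1, 2}, Status: "PARTIAL", Message: "capped at 1000 series"}
	resp := translate(qrr)
	// The combiner already limited the series; only the flag needs copying.
	resp.Status = qrr.Status
	resp.Message = qrr.Message
	fmt.Println(resp.Value, resp.Status) // 3 PARTIAL
}
```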
8 changes: 4 additions & 4 deletions modules/frontend/metrics_query_range_handler.go
@@ -20,7 +20,7 @@ import (
)

// newQueryRangeStreamingGRPCHandler returns a handler that streams results from the HTTP handler
func newQueryRangeStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], apiPrefix string, logger log.Logger) streamingQueryRangeHandler {
func newQueryRangeStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], apiPrefix string, logger log.Logger, setMaxSeries bool) streamingQueryRangeHandler {
postSLOHook := metricsSLOPostHook(cfg.Metrics.SLO)
downstreamPath := path.Join(apiPrefix, api.PathMetricsQueryRange)

@@ -40,7 +40,7 @@ func newQueryRangeStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripp
start := time.Now()

var finalResponse *tempopb.QueryRangeResponse
c, err := combiner.NewTypedQueryRange(req, true)
c, err := combiner.NewTypedQueryRange(req, true, setMaxSeries, cfg.Metrics.Sharder.MaxResponseSeries)
if err != nil {
return err
}
@@ -65,7 +65,7 @@ func newQueryRangeStreamingGRPCHandler(cfg Config, next pipeline.AsyncRoundTripp
}

// newMetricsQueryRangeHTTPHandler returns a handler that returns a single response from the HTTP handler
func newMetricsQueryRangeHTTPHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], logger log.Logger) http.RoundTripper {
func newMetricsQueryRangeHTTPHandler(cfg Config, next pipeline.AsyncRoundTripper[combiner.PipelineResponse], logger log.Logger, setMaxSeries bool) http.RoundTripper {
postSLOHook := metricsSLOPostHook(cfg.Metrics.SLO)

return RoundTripperFunc(func(req *http.Request) (*http.Response, error) {
@@ -86,7 +86,7 @@ func newMetricsQueryRangeHTTPHandler(cfg Config, next pipeline.AsyncRoundTripper
logQueryRangeRequest(logger, tenant, queryRangeReq)

// build and use roundtripper
combiner, err := combiner.NewTypedQueryRange(queryRangeReq, false)
combiner, err := combiner.NewTypedQueryRange(queryRangeReq, false, setMaxSeries, cfg.Metrics.Sharder.MaxResponseSeries)
if err != nil {
level.Error(logger).Log("msg", "query range: query range combiner failed", "err", err)
return &http.Response{