Description
Is your feature request related to a problem? Please describe.
Custom evaluators in Azure AI Foundry currently support only two output fields: result and reason. This limits the evaluator’s ability to return richer, structured information that can help explain how a score was derived or provide actionable insights.
For example, when evaluating two generated test plans, I want the evaluator to surface details such as:
- A list of missing test cases
- A list of extra or unexpected test cases
- Explanations for specific mismatches
Because the evaluator is prompt-based, it should ideally allow generating multiple output fields beyond just result and reason.
https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/evaluation-evaluators/custom-evaluators?view=foundry&preserve-view=true#example
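For concreteness, this is the shape of the result object today, following the two fixed fields described in the linked docs (the values are illustrative):

```python
# Illustrative only: the shape a prompt-based custom evaluator
# returns today, with only the two fixed output fields.
current_result = {
    "result": 3,  # the score
    "reason": "Two test cases from the reference plan are missing.",
}

# There is no structured way to carry details such as the specific
# missing or extra test cases; they can only be folded into `reason`.
```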
Describe the solution you'd like
I would like support for custom, schema-defined output fields in prompt-based custom evaluators.
This would allow me to define additional structured outputs (e.g., missing_plans, extra_plans) and retrieve them directly in the evaluation result object.
The evaluator should allow specifying the expected output schema (similar to the example in the documentation) and return all fields in the evaluation response.
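A hypothetical sketch of how this could look; the schema format and field names are illustrative, not an existing API:

```python
# Hypothetical: the evaluator accepts a declared output schema and
# returns every declared field in the evaluation result object.
proposed_output_schema = {
    "result": "number",        # overall score, as today
    "reason": "string",        # overall explanation, as today
    "missing_plans": "array",  # test cases in the reference but absent here
    "extra_plans": "array",    # test cases not found in the reference
}

# The result object would then carry each declared field directly:
proposed_result = {
    "result": 3,
    "reason": "Coverage gaps relative to the reference plan.",
    "missing_plans": ["login lockout after 5 failures", "password reset expiry"],
    "extra_plans": ["legacy SSO flow (deprecated)"],
}
```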
Describe alternatives you've considered
- Embedding all additional information inside the reason field, but this becomes messy, requires post-processing, and loses structure (see the sketch after this list).
- Writing a fully custom evaluator agent, but that defeats the purpose of using prompt-based evaluators.
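A minimal sketch of the reason-packing workaround and the post-processing it forces (field names are illustrative):

```python
import json

# Pack the extra detail into `reason` as JSON, then recover it with
# post-processing. Field names (missing_plans, extra_plans) are
# illustrative.
raw = {
    "result": 3,
    "reason": '{"summary": "Coverage gaps found", '
              '"missing_plans": ["password reset expiry"], '
              '"extra_plans": ["legacy SSO flow"]}',
}

try:
    details = json.loads(raw["reason"])
except json.JSONDecodeError:
    # Prompt output is not guaranteed to be valid JSON, which is part
    # of why this approach is fragile.
    details = {"summary": raw["reason"]}

print(details.get("missing_plans", []))
```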