
Schema: remove Computed from data source attributes #41

Closed
wants to merge 2 commits

Conversation

@nessex nessex commented Jan 6, 2025

WHAT

  • Changes Computed back to false (default value) for attributes on the data source.

WHY

We frequently hit issues where the Computed flag on these attributes prevents Terraform from resolving changes.

In part, this is caused by the Computed flag here: Terraform assumes it cannot know the outcome at plan time, so it cannot proceed.

Spiritually, the Computed flag is for telling Terraform that the provider does not know the outcome until after apply. In this case the outcome is obvious to the provider, as it's basically just manipulating the inputs without any external factors. For that reason, I think Computed should be false.
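The determinism argument can be illustrated with a minimal, self-contained sketch. `splitIntoChunks` below is a hypothetical stand-in for the provider's actual splitting logic, not its real code; the point is that the same inputs always yield the same output, so the result is knowable at plan time:

```go
package main

import "fmt"

// splitIntoChunks packs policy strings into groups whose combined length
// never exceeds maxSize. It is a pure function of its inputs: no API calls,
// no randomness, so a plan can compute the result without an apply.
func splitIntoChunks(policies []string, maxSize int) [][]string {
	var chunks [][]string
	var current []string
	size := 0
	for _, p := range policies {
		// Start a new chunk when adding this policy would exceed the limit.
		if size+len(p) > maxSize && len(current) > 0 {
			chunks = append(chunks, current)
			current = nil
			size = 0
		}
		current = append(current, p)
		size += len(p)
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}

func main() {
	chunks := splitIntoChunks([]string{"aaaa", "bbbb", "cc"}, 8)
	fmt.Println(len(chunks)) // prints 2: ["aaaa" "bbbb"] and ["cc"]
}
```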

@nessex nessex self-assigned this Jan 6, 2025
nessex commented Jan 6, 2025

This is going to end up really hacky, requiring the outputs to be listed as inputs with the provider just ignoring any values passed in. The more correct solution is to replace this with a Provider Function, but those are only available in Terraform 1.8+. I got this mostly implemented before realising, so here's a jumping-off point in case someone finds this in the future:

Provider Function implementation:
```go
package provider

import (
	"context"
	"fmt"

	"github.com/hashicorp/terraform-plugin-framework/attr"
	"github.com/hashicorp/terraform-plugin-framework/diag"
	"github.com/hashicorp/terraform-plugin-framework/function"
	"github.com/hashicorp/terraform-plugin-framework/types"
	"github.com/hashicorp/terraform-plugin-framework/types/basetypes"
)

// Return attribute types must match the values built in Run below:
// "hash" is set via types.StringValue and "chunks" is a map of string lists.
var tfSplitPoliciesReturnAttrTypes = map[string]attr.Type{
	"hash":   types.StringType,
	"chunks": types.MapType{ElemType: types.ListType{ElemType: types.StringType}},
}

var _ function.Function = &TfSplitPoliciesFunction{}

type TfSplitPoliciesFunction struct{}

func NewTfSplitPoliciesFunction() function.Function {
	return &TfSplitPoliciesFunction{}
}

func (f *TfSplitPoliciesFunction) Metadata(ctx context.Context, req function.MetadataRequest, resp *function.MetadataResponse) {
	resp.Name = "split_policies"
}

func (f *TfSplitPoliciesFunction) Definition(ctx context.Context, req function.DefinitionRequest, resp *function.DefinitionResponse) {
	resp.Definition = function.Definition{
		Summary:     "Split AWS policy documents into separate chunks",
		Description: "Given a large AWS policy document, split it into chunks no larger than a given size",

		Parameters: []function.Parameter{
			function.ListParameter{
				Name:        "policies",
				Description: "List of strings containing a large AWS policy or policies to be split",
				ElementType: types.StringType,
			},
			function.Int64Parameter{
				Name:           "maximum_chunk_size",
				Description:    "The maximum size of each chunk in characters, defaults to 6144 if null provided",
				AllowNullValue: true,
			},
		},
		Return: function.ObjectReturn{
			AttributeTypes: tfSplitPoliciesReturnAttrTypes,
		},
	}
}

func (f *TfSplitPoliciesFunction) Run(ctx context.Context, req function.RunRequest, resp *function.RunResponse) {
	var diags []diag.Diagnostic
	var policies []string
	var maximumChunkSize *int64

	resp.Error = req.Arguments.Get(ctx, &policies, &maximumChunkSize)
	if resp.Error != nil {
		return
	}

	// Apply the default without dereferencing a nil pointer.
	if maximumChunkSize == nil {
		defaultChunkSize := int64(6144)
		maximumChunkSize = &defaultChunkSize
	}

	// hashInputs and splitPolicies are the provider's existing helpers.
	hash, err := hashInputs(policies)
	if err != nil {
		diags = append(diags, diag.NewErrorDiagnostic("unable to hash input policies", err.Error()))
		resp.Error = function.FuncErrorFromDiags(ctx, diags)
		return
	}

	chunks, err := splitPolicies(policies, int(*maximumChunkSize))
	if err != nil {
		diags = append(diags, diag.NewErrorDiagnostic("Failed to chunk policies", err.Error()))
		resp.Error = function.FuncErrorFromDiags(ctx, diags)
		return
	}

	tfChunks := make(map[string]attr.Value)
	for i, chunk := range chunks {
		v, newDiagnostics := types.ListValueFrom(ctx, types.StringType, chunk)
		diags = append(diags, newDiagnostics...)
		if newDiagnostics.HasError() {
			resp.Error = function.FuncErrorFromDiags(ctx, diags)
			return
		}

		tfChunks[fmt.Sprintf("%d", i)] = v
	}

	mv, newDiagnostics := basetypes.NewMapValue(basetypes.ListType{ElemType: basetypes.StringType{}}, tfChunks)
	diags = append(diags, newDiagnostics...)
	if newDiagnostics.HasError() {
		resp.Error = function.FuncErrorFromDiags(ctx, diags)
		return
	}

	// Accumulate diagnostics rather than discarding the earlier ones.
	out, newDiagnostics := types.ObjectValue(
		tfSplitPoliciesReturnAttrTypes,
		map[string]attr.Value{
			"hash":   types.StringValue(hash),
			"chunks": mv,
		},
	)
	diags = append(diags, newDiagnostics...)
	if newDiagnostics.HasError() {
		resp.Error = function.FuncErrorFromDiags(ctx, diags)
		return
	}

	resp.Error = resp.Result.Set(ctx, &out)
}
```
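For reference, calling such a provider function from Terraform 1.8+ would look roughly like this. The provider name `splitpolicies` is a placeholder (the actual provider source would come from `required_providers`); only the function name matches the Metadata above:

```hcl
locals {
  # Hypothetical usage sketch; requires Terraform >= 1.8.
  split = provider::splitpolicies::split_policies(var.policies, 6144)
}

output "policy_chunks" {
  value = local.split.chunks
}
```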

Instead I'm considering replacing this provider with some plain regex:
https://github.com/octoenergy/octocloud/pull/21761
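The linked PR is private, but a plain-Terraform sketch of the regex idea might look something like this (entirely hypothetical; `regexall` is used to pull top-level statement objects out of a rendered policy document so they can be regrouped into size-bounded chunks):

```hcl
locals {
  # Hypothetical sketch: extract flat statement blocks from a rendered
  # policy JSON, then regroup them without a custom provider.
  statements = regexall("\\{[^{}]*\\}", trimspace(local.policy_json))
}
```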

nessex commented Jan 7, 2025

Not sufficient. Ultimately the problem comes from where each of the policy docs is pulled from. It doesn't look like there's a way to handle this at the consumer point, nor can I find a way to get the error message to better explain where the problem lies.

@nessex nessex closed this Jan 7, 2025
@nessex nessex deleted the nessex-remove-computed branch January 21, 2025 04:17