Data providers #99
Comments
Thanks for opening this! This is on our road map. If you have suggestions on specific data resources that you'd like prioritized, we'd love to hear about those. Always open to PRs for this too!
I could've sorely used top-level scopes, scope (with scope_id), and host-catalogs data providers; today I'm getting around this with high-level
lkd is also on-point with what else would be nice.
This is an experiment to see whether generating the provider from the OpenAPI specification at https://github.com/hashicorp/boundary/blob/main/internal/gen/controller.swagger.json could work.

The schema is converted from the definitions given in the document to `map[string]*schema.Schema`, with two special cases:
- when there is an object nested inside an object, I convert it to a one-element list, as terraform-plugin-sdk v2 does not know how to express a bare nested object;
- when there is an opaque attribute (`map[string]interface{}`), I skip it completely, as terraform-plugin-sdk does not expose `DynamicPseudoType`, which would make it possible to express this attribute in native Terraform. The workaround I use in the Consul provider is `schema.TypeString` plus `jsonencode()`, but it is not ideal. Since I only worked on the datasources here, I chose to skip those attributes for now.

Once the schema is converted, we create the `ReadContext` function that the datasource needs. As it can be a bit tricky to use the Go client for each service, I chose to use the global `*api.Client` directly, manually adding the query params and getting the raw response. While this would not be recommended for an external project, it fits nicely here and keeps the code simple. Finally, the result is written to the state, using the schema we generated previously to convert it.

The tests are written manually so the developer can make sure everything works as expected even though the code was generated rather than written by hand. While the schema conversion could be done at runtime, with only one `ReadContext` function actually needed, I find that generating the code makes it easy to review and should make it easier for contributors already accustomed to writing Terraform providers to spot errors or fork the provider for their needs.
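The two schema special cases can be sketched without the SDK using plain Go types. The `Property` struct and `convert` function below are hypothetical stand-ins for illustration, not the real swagger or terraform-plugin-sdk types:

```go
package main

import "fmt"

// Property is a simplified stand-in for an OpenAPI property definition
// (hypothetical, not the real controller.swagger.json structs).
type Property struct {
	Type       string              // "string", "boolean", "integer", "object", ...
	Properties map[string]Property // populated when Type == "object"
}

// convert mirrors the approach described above: scalar types map directly,
// a nested object becomes a one-element list (terraform-plugin-sdk v2 cannot
// express a bare nested object), and opaque attributes with no declared
// properties are skipped because the SDK has no DynamicPseudoType.
func convert(props map[string]Property) map[string]string {
	out := map[string]string{}
	for name, p := range props {
		switch p.Type {
		case "string", "boolean", "integer":
			out[name] = p.Type
		case "object":
			if len(p.Properties) == 0 {
				continue // opaque map[string]interface{}: skipped entirely
			}
			out[name] = "list(max_items=1)" // nested object wrapped in a one-element list
		}
	}
	return out
}

func main() {
	def := map[string]Property{
		"name":       {Type: "string"},
		"scope":      {Type: "object", Properties: map[string]Property{"id": {Type: "string"}}},
		"attributes": {Type: "object"}, // opaque, dropped from the schema
	}
	fmt.Println(convert(def))
}
```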
While I only worked on datasources returning lists of elements for now, I think the same approach could be used to generate datasources returning a single element and, ultimately, resources. This would make it very easy to keep the Terraform provider in sync with new Boundary versions, especially as the OpenAPI spec is generated from the Protobuf files and the CLI is already generated on a similar principle. The code in generate_datasource.go is not very nice, but it gets the job done; I may spin it off into its own project in the future to add more features. Closes hashicorp#99
I'm hoping to revive this issue. We have separate projects with their own CI pipelines that add their own resources to Boundary. To do this, these projects need the IDs of the OIDC auth method and of the project scope to add resources to. Being able to use Boundary data providers would solve this issue. We have a workaround, but it's painful: we add the OIDC auth method ID and project ID as
I commented this on a related ticket, but I want to repeat it here for more visibility. I would also be interested in a
Terraform Version
(latest Boundary provider)
Affected Resource(s)
(n/a)
Terraform Configuration Files
(n/a)
Expected Behavior
Read-only data providers would be nice. Modules would just need a

provider "boundary" {}

(which they would need anyway just to maintain Boundary objects via TF) to figure out where to create `boundary_host`, `boundary_host_set`, and `boundary_target` objects; primarily the project scope in which they belong.
Actual Behavior
They don't exist. The provider needs an input variable of a large map of port codes whose values are themselves maps that convey the correct `boundary_scope.id` and `boundary_host_catalog.id`.
Important Factoids
Use case
My org has distributed, (mostly) loosely coupled infrastructure-as-code for configuring a number of backing data stores, such as AWS RDS. Containing the setup of Boundary resources to permit connectivity would be easier if those loosely coupled IaC configs (really just one module) could use data providers to find the correct project scopes and host catalogs, instead of being passed a very large map and selecting from it. We are happy to trade 1 lookup for M if our CD pipeline gets simpler (it wouldn't need to fetch and compose a map of existing host catalogs and scopes before running).