Jonathan · 4 months ago
(Cross-posting from #terraform since this is K8s-focused)
Hey folks, I built a new Kubernetes Terraform provider that might be interesting to you.
It solves a long-standing Terraform limitation: you can't create a cluster and deploy to it in the same apply. Providers are configured at the root, before resources exist, so you can't use a cluster's endpoint as provider config.
Most people work around this with two separate applies; some use null_resource hacks; others split everything into multiple stacks. After years of frustration with this, I realized the only real solution was a provider that sidesteps the whole problem with inline connections.
Example:
resource "k8sconnect_object" "app" {
  cluster = {
    host  = aws_eks_cluster.main.endpoint
    token = data.aws_eks_cluster_auth.main.token
  }
  yaml_body = file("app.yaml")
}

Create cluster → deploy workloads → single apply. No provider configuration needed.
Building with Server-Side Apply from the ground up (rather than bolting it on) opened doors to fix other persistent community issues with existing providers.
• Accurate diffs - Server-side apply dry-run projections show actual changes, not client-side guesses
• YAML + validation - K8s strict schema validation catches typos at plan time
• CRD+CR same apply - Auto-retry handles eventual consistency (no more time_sleep)
• Patch resources - Modify EKS/GKE defaults without taking ownership
• Non-destructive waits - Timeouts don't force resource recreation
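For instance, the CRD+CR point means you can define a CustomResourceDefinition and an instance of it in the same configuration. A rough sketch of what that looks like (the exact resource attributes beyond cluster/yaml_body are from the example above; the file names are hypothetical):

resource "k8sconnect_object" "crd" {
  cluster = {
    host  = aws_eks_cluster.main.endpoint
    token = data.aws_eks_cluster_auth.main.token
  }
  yaml_body = file("widgets-crd.yaml")
}

resource "k8sconnect_object" "widget" {
  cluster = {
    host  = aws_eks_cluster.main.endpoint
    token = data.aws_eks_cluster_auth.main.token
  }
  yaml_body  = file("my-widget.yaml")
  depends_on = [k8sconnect_object.crd]
}

The provider's auto-retry absorbs the window where the API server hasn't registered the new CRD yet, so no time_sleep is needed between the two.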
300+ tests, runnable examples for everything.
GitHub: https://github.com/jmorris0x0/terraform-provider-k8sconnect
Registry: https://registry.terraform.io/providers/jmorris0x0/k8sconnect/latest
Would love feedback if you've hit this pain point.