airship (Archived)
29 messages
Home of Airship ECS Modules ( https://github.com/blinkist/terraform-aws-airship-ecs-service / https://github.com/blinkist/terraform-aws-airship-ecs-cluster )
Archive: https://archive.sweetops.com/airship/
Thang Man almost 7 years ago
Hello everyone, I am a newbie to AWS ECS
Thang Man almost 7 years ago
I am looking at the code of Airship, and not sure/don’t know some parts
Thang Man almost 7 years ago
Following this example: https://airship.tf/guide/ecs_service/#full-example
Thang Man almost 7 years ago
This is my TF root dir structure
Thang Man almost 7 years ago
ecs_cluster_id = "${local.cluster_id}" <-- I got an error for this
Thang Man almost 7 years ago
* module.demo_web.local.ecs_cluster_name: local.ecs_cluster_name: 1:12: unknown variable accessed: var.ecs_cluster_id in:
Thang Man almost 7 years ago
I am also confused with these
Thang Man almost 7 years ago
awsvpc_subnets
awsvpc_security_group_ids
Thang Man almost 7 years ago
are they only used for Fargate mode?
Mads Hvelplund almost 7 years ago (edited)
@Thang Man you need to create a cluster before adding services to it. There is a separate module for creating the ECS cluster: https://registry.terraform.io/modules/blinkist/airship-ecs-cluster/aws/0.5.1.
If you follow the tutorial, it all gets set up: https://airship.tf/getting_started/
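For example, a minimal sketch of that wiring (module names, inputs, and the cluster_id output here are assumptions; check the cluster module's documented outputs for the exact names):

# hypothetical root-module wiring; names and values are placeholders
module "ecs_cluster" {
  source  = "blinkist/airship-ecs-cluster/aws"
  version = "0.5.1"
  name    = "demo"
}

locals {
  # assumed output name -- verify against the cluster module's outputs
  cluster_id = "${module.ecs_cluster.cluster_id}"
}

module "demo_web" {
  source         = "blinkist/airship-ecs-service/aws"
  name           = "demo-web"
  ecs_cluster_id = "${local.cluster_id}"
  # ... remaining service settings per the full example linked above
}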
Mads Hvelplund almost 7 years ago (edited)
I haven't looked at the docs for the two awsvpc vars, but I'm guessing they control which subnets your airship services run in, and which ports are accessible on them.
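A rough sketch of how those two vars would typically be wired when awsvpc/Fargate networking is enabled; the subnet and security-group resources below are hypothetical:

module "demo_fargate" {
  source          = "blinkist/airship-ecs-service/aws"
  name            = "demo-fargate"
  ecs_cluster_id  = "${local.cluster_id}"
  fargate_enabled = "true"
  awsvpc_enabled  = "true"

  # subnets the task network interfaces are placed in (hypothetical resources)
  awsvpc_subnets = ["${aws_subnet.private_a.id}", "${aws_subnet.private_b.id}"]

  # security groups attached to those interfaces (hypothetical resource)
  awsvpc_security_group_ids = ["${aws_security_group.ecs_service.id}"]
}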
jaustinpage over 6 years ago
Hi guys, I recently ran into an issue where I really need to deploy 2 containers in 1 ECS task. I know that this is not a supported use case for airship. Is the challenge just in the lambda lookup?
jaustinpage over 6 years ago
Basically I am trying to determine the level of effort to add multi-container support to airship.
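For reference, plain ECS task definitions already accept multiple containers outside the Airship module; a minimal sketch with placeholder names and images:

resource "aws_ecs_task_definition" "multi" {
  family                = "demo-multi"
  container_definitions = <<EOF
[
  {
    "name": "app",
    "image": "nginx:stable",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "portMappings": [{"containerPort": 80, "hostPort": 80, "protocol": "tcp"}]
  },
  {
    "name": "sidecar",
    "image": "busybox:latest",
    "cpu": 64,
    "memory": 128,
    "essential": false
  }
]
EOF
}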
Bogdan over 6 years ago
I'm trying to use terraform-aws-airship-ecs-service from https://github.com/blinkist, but if I'm using the following config:
name = "App"
region = "eu-central-1"
ecs_cluster_id = "dev123"
fargate_enabled = "true"
awsvpc_enabled = "true"
load_balancing_type = "none"
awsvpc_subnets = ["subnet-12,subnet-132,subnet-143"]
bootstrap_container_image = "33.dkr.ecr.eu-central-1.amazonaws.com/app"
container_cpu = 512
container_memory = 1024
container_port = 4000
awsvpc_security_group_ids = ["sg-4321"]
ssm_enabled = "true"
ssm_paths = ["/accounts/github/test"]
load_balancing_properties_route53_zone_id = "terst321"
I get the following errors:
Error: Error applying plan:
2 errors occurred:
* module.iam.aws_iam_role_policy.lambda_ecs_task_scheduler_policy: 1 error occurred:
* aws_iam_role_policy.lambda_ecs_task_scheduler_policy: Error putting IAM role policy terraform-20190523154130657200000001: MalformedPolicyDocument: The policy failed legacy parsing
status code: 400, request id: 3c746ca9-11e9-b313d05c3a7a
* module.ecs_task_definition.aws_ecs_task_definition.app: 1 error occurred:
* aws_ecs_task_definition.app: ClientException: hostname is not supported on container when networkMode=awsvpc.
status code: 400, request id: 47bd85d0-11e9-7b3bd4b40bd4
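The second error is an ECS-side constraint rather than a Terraform one: container definitions may not set hostname when networkMode is awsvpc. A minimal sketch of a task definition that satisfies that rule (written against the raw resource, not the module's variables):

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "512"
  memory                   = "1024"
  # note: no "hostname" key in the container definition below --
  # ECS rejects it when networkMode=awsvpc; an execution role would
  # also be needed in practice to pull the image from ECR
  container_definitions    = <<EOF
[
  {
    "name": "app",
    "image": "33.dkr.ecr.eu-central-1.amazonaws.com/app",
    "essential": true,
    "portMappings": [{"containerPort": 4000}]
  }
]
EOF
}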
Bogdan over 6 years ago
I opened an issue but not sure whether @Maarten van der Hoef has had time to look at it. I’m very tempted to fork it and go my own way..
Maarten van der Hoef over 6 years ago
I'm on vacay at the moment. Will take a look when I'm back, folks.
Thang Man over 6 years ago
hello everyone
Thang Man over 6 years ago
I am still testing airship project
Thang Man over 6 years ago
I have successfully deployed an ECS cluster using the ecs-cluster module
Thang Man over 6 years ago
but got this error when trying to run terraform apply using the ecs-service module
Thang Man over 6 years ago
my TF version is 0.12
Thang Man over 6 years ago
module.demo_web.module.ecs_task_definition.aws_ecs_task_definition.app[0]: Destroying... [id=stack-demo-web]
module.demo_web.module.ecs_task_definition.aws_ecs_task_definition.app[0]: Destruction complete after 0s
module.demo_web.module.ecs_task_definition.aws_ecs_task_definition.app[0]: Creating...
module.demo_web.module.ecs_task_definition.aws_ecs_task_definition.app[0]: Creation complete after 1s [id=stack-demo-web]
module.demo_web.module.ecs_service.aws_ecs_service.app[0]: Creating...
Error: InvalidParameterException: Invalid revision number. Number: "demo-web"
status code: 400, request id: 7707d912-829d-11e9-b89e-554ff397ffec
on ecs-service/modules/ecs_service/main.tf line 134, in resource "aws_ecs_service" "app":
134: resource "aws_ecs_service" "app" {
Thang Man over 6 years ago
the task definition has not been changed, but I don't know why the module still destroys the current one (unchanged) and re-creates a new task definition with a new revision number
Thang Man over 6 years ago
the task definition has not changed, but every time I re-run terraform apply, it shows this:
Thang Man over 6 years ago
# module.demo_web.module.ecs_task_definition.aws_ecs_task_definition.app[0] must be replaced
-/+ resource "aws_ecs_task_definition" "app" {
~ arn = "arn:aws:ecs:ap-southeast-1:513084766957:task-definition/stack-demo-web:8" -> (known after apply)
~ container_definitions = jsonencode(
~ [ # forces replacement
~ {
~ command = [
+ null,
]
cpu = 256
+ entryPoint = null
~ environment = [
+ null,
]
essential = true
+ healthCheck = null
hostname = "demo-web"
image = "nginx:stable"
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = "stack/demo-web"
awslogs-region = "ap-southeast-1"
awslogs-stream-prefix = "demo-web"
}
}
memory = 512
+ memoryReservation = null
~ mountPoints = [
+ null,
]
name = "nginx-fe"
portMappings = [
{
containerPort = 80
hostPort = 80
protocol = "tcp"
},
]
privileged = false
readonlyRootFilesystem = false
- volumesFrom = [] -> null
+ workingDirectory = null
} # forces replacement,
]
)
family = "stack-demo-web"
~ id = "stack-demo-web" -> (known after apply)
network_mode = "bridge"
requires_compatibilities = [
"EC2",
]
~ revision = 8 -> (known after apply)
- tags = {} -> null
task_role_arn = "arn:aws:iam::513084766957:role/stack-demo-web-task-role"
}
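That plan looks consistent with a known Terraform 0.12 pattern: ECS normalises the registered container definition and drops the null entries (command = [null], entryPoint = null, ...), so the jsonencoded definition never matches what the API returns and the task definition is replaced on every apply. A minimal sketch, outside the module, of a definition shape that stays stable between applies:

locals {
  # build the definition without null-valued keys; ECS strips them on
  # registration, so keeping them in the JSON forces a new revision each apply
  container_definitions = [
    {
      name      = "nginx-fe"
      image     = "nginx:stable"
      cpu       = 256
      memory    = 512
      essential = true
      portMappings = [
        { containerPort = 80, hostPort = 80, protocol = "tcp" }
      ]
    }
  ]
}

resource "aws_ecs_task_definition" "app" {
  family                = "stack-demo-web"
  network_mode          = "bridge"
  container_definitions = jsonencode(local.container_definitions)
}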
Thang Man over 6 years ago
let me know if you need further information, thanks~
Thang Man over 6 years ago
it might be caused by TF 0.12
Thang Man over 6 years ago
I have tested with TF 0.11.14 and it works fine
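One stop-gap, assuming the rest of the root module is 0.11-compatible, is pinning Terraform to the series that is known to work until the Airship modules fully support 0.12:

terraform {
  # stay on the 0.11.x series that is known to work with these modules
  required_version = "~> 0.11.14"
}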