docker (Archived)
11 messages
All things docker
Archive: https://archive.sweetops.com/docker/
Babar Baig about 5 years ago
Hello everyone!
I am using the Docker ECS integration to deploy my docker compose project on ECS. My docker-compose file looks like the following:
docker-compose.yml
volumes:
  postgres_data: {}
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    depends_on:
      - db
      - redis
    volumes:
      - assets-volume:/var/www/public
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis
    command: redis-server
    volumes:
      - '.:/app'
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    volumes:
      - assets-volume:/var/www/public
    ports:
      - 80:80
volumes:
  redis:
  postgres_data:
  assets-volume:
When I run
docker compose up I get the following error:
service "app" doesn't define a Docker image to run: incompatible attribute
If you look at the services section, I am specifying a path to a Dockerfile (instead of specifying an image) in the app service. Can I assume that the docker compose CLI does not support Dockerfile paths?
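The error suggests the ECS integration expects each service to reference a prebuilt image rather than a build: context, so one workaround is to build and push the images to ECR first and point the services at the pushed image URIs. A rough sketch, where the account ID, region, and repository names are placeholders:

# Sketch only: account ID, region, and repository names are illustrative.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"

# Build both images from the Dockerfiles the compose file already references.
docker build -f docker/app/Dockerfile -t "${REGISTRY}/myproject-app:latest" .
docker build -f docker/web/Dockerfile -t "${REGISTRY}/myproject-web:latest" .

# Authenticate to ECR and push, so the ECS side can pull the images.
aws ecr get-login-password --region "${AWS_REGION}" \
  | docker login --username AWS --password-stdin "${REGISTRY}"
docker push "${REGISTRY}/myproject-app:latest"
docker push "${REGISTRY}/myproject-web:latest"

# Each service then gets an image: line pointing at the pushed URI
# before running docker compose up against the ECS context.

A compose service can carry both build: and image:, so local docker compose builds keep working while the ECS context pulls the pushed images.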
Igor about 5 years ago
bradym about 5 years ago
It's considered best practice to pin the versions of everything installed in Dockerfiles to ensure repeatability, but this sometimes/often leads to packages not being found on a later build, or to dependency issues like this:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libssl-dev : Depends: libssl1.1 (= 1.1.1d-0+deb10u3) but 1.1.1d-0+deb10u4 is to be installed
One solution could be specifying a major or minor version (depending on the package) and letting the package manager fill in the rest, something like
apt-get install libssl-dev=1.1.*
I'm curious how others deal with this?
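For reference, a minimal sketch of that wildcard approach as it might appear in a Debian-based Dockerfile RUN step; the package and version prefix are just the ones from the error above:

# Sketch: pin to a version prefix and let apt resolve the newest matching revision.
# The wildcard pin follows the suggestion above; support depends on the apt
# version in the base image. Combining update + install in one step avoids
# installing from a stale package index, which is a common cause of the
# "Depends: X but Y is to be installed" error.
apt-get update && \
  apt-get install -y --no-install-recommends 'libssl-dev=1.1.*' && \
  rm -rf /var/lib/apt/lists/*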
Santiago Campuzano about 5 years ago
Babar Baig about 5 years ago (edited)
Hi 👋. I am working on deploying a Rails application on ECS. I have two separate Dockerfiles, one for the Rails app and one for the Nginx configuration. Below is my docker compose file:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      - assets-volume:/var/www/crovv/public
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    volumes:
      - assets-volume:/var/www/crovv/public
    ports:
      - 80:80
volumes:
  redis:
  postgres_data:
  assets-volume:
I've used docker compose to create the ECS task definition, but when I run it on ECS the app container runs fine and the Nginx container throws the following error:
nginx: [emerg] host not found in upstream "app:3000" in /etc/nginx/conf.d/default.conf:5
Here is my
default.conf
# This is a template. Referenced variables (e.g. $RAILS_ROOT) need
# to be rewritten with real values in order for this file to work.
upstream rails_app {
  server app:3000;
}

map $http_upgrade $connection_upgrade {
  default upgrade;
  '' close;
}

server {
  # define your domain
  server_name localhost;

  # define the public application root
  root $RAILS_ROOT/public;
  index index.html;

  # define where Nginx should write its logs
  access_log $RAILS_ROOT/log/nginx.access.log;
  error_log $RAILS_ROOT/log/nginx.error.log;

  client_max_body_size 0;

  # deny requests for files that should never be accessed
  location ~ /\. {
    deny all;
  }

  location ~* ^.+\.(rb|log)$ {
    deny all;
  }

  # serve static (compiled) assets directly if they exist (for rails production)
  location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    try_files $uri @rails;
    access_log off;
    gzip_static on; # to serve pre-gzipped version
    expires max;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
  }

  # send non-static file requests to the app server
  location / {
    try_files $uri @rails;
  }

  location /cable {
    proxy_pass http://rails_app/cable;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
  }

  location @rails {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://rails_app;
  }
}
I can understand that the Nginx container cannot find the Rails app server, but I don't know how to troubleshoot it. Here is the section of infrastructure code that creates the containers. Any help is appreciated. Thanks.
Babar Baig about 5 years ago
module "app_container" {
source = "git@github.com:cloudposse/terraform-aws-ecs-container-definition.git?ref=0.44.0"
container_name = "${var.container_name}-app"
container_image = "${local.aws_account_id}.dkr.ecr.${var.aws_region}.<http://amazonaws.com/${local.ecr_app_repo}|amazonaws.com/${local.ecr_app_repo}>"
container_memory_reservation = 512
essential = true
readonly_root_filesystem = false
port_mappings = [
{
containerPort = 3000
hostPort = 3000
protocol = "tcp"
}
]
log_configuration = {
logDriver = "awslogs"
options = {
"awslogs-group" : "/${var.app_code}/${var.app_type}/${var.app_env}/${var.cluster_name}/app",
"awslogs-region" : var.aws_region,
"awslogs-stream-prefix" : "ecs"
}
secretOptions = null
}
## Environment variables declared here
## Secrets declared here
privileged = false
}
module "web_container" {
source = "git@github.com:cloudposse/terraform-aws-ecs-container-definition.git?ref=0.44.0"
container_name = "${var.container_name}-web"
container_image = "${local.aws_account_id}.dkr.ecr.${var.aws_region}.<http://amazonaws.com/${local.ecr_web_repo}|amazonaws.com/${local.ecr_web_repo}>"
container_memory_reservation = 512
essential = false
readonly_root_filesystem = false
port_mappings = [
{
containerPort = 80
hostPort = 8084
protocol = "tcp"
}
]
log_configuration = {
logDriver = "awslogs"
options = {
"awslogs-group" : "/${var.app_code}/${var.app_type}/${var.app_env}/${var.cluster_name}/web",
"awslogs-region" : var.aws_region,
"awslogs-stream-prefix" : "ecs"
}
secretOptions = null
}
links = ["${var.container_name}-app"]
privileged = false
}
resource "aws_ecs_task_definition" "this" {
family = "${var.app_type}-${var.app_code}-${var.app_env}-${var.task_def_name}"
container_definitions = "[${module.web_container.json_map_encoded},${module.app_container.json_map_encoded}]"
task_role_arn = aws_iam_role.ecs_role.arn
execution_role_arn = aws_iam_role.ecs_role.arn
requires_compatibilities = ["EC2"]
tags = merge({
Name = "${var.app_type}-${var.app_code}-${var.app_env}-${var.task_def_name}"
}, var.app_tags)
}
I am looking for someone to point me in the right direction with this. I am unable to run this on ECS.
Igor about 5 years ago
@Babar Baig Are you running the two containers in the same task, or separate tasks?
Igor about 5 years ago
If the same task, you should just reference the app container on localhost, not app:3000
Igor about 5 years ago
If different tasks, then you'll need to use service discovery or put an LB in front of the tasks
Igor about 5 years ago
ECS does not have the built-in service discovery that Docker gives you
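Since the default.conf above is already described as a template, one way to act on the same-task/localhost suggestion is to substitute the upstream host when the web container starts. A rough sketch of such an entrypoint, assuming the template lives at /etc/nginx/templates/default.conf.template and uses $APP_UPSTREAM in its upstream block (both names are made up for this example):

#!/bin/sh
# Hypothetical docker/web entrypoint: render the nginx template, then start nginx.
set -e

# Default to localhost:3000 for containers that share a task network namespace
# (awsvpc network mode); override with APP_UPSTREAM=app:3000 for local docker compose.
: "${RAILS_ROOT:=/var/www/crovv}"
: "${APP_UPSTREAM:=127.0.0.1:3000}"
export RAILS_ROOT APP_UPSTREAM

# Only substitute the two variables we own, so nginx runtime variables
# such as $uri and $http_upgrade are left untouched.
envsubst '$RAILS_ROOT $APP_UPSTREAM' \
  < /etc/nginx/templates/default.conf.template \
  > /etc/nginx/conf.d/default.conf

exec nginx -g 'daemon off;'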
loren about 5 years ago (edited)
i've got a probably dumb question about using docker containers... is there a simple/automatic way to refer to local files from the host, within the container environment? i was just playing with the terraform container, which says to do this:
docker run -i -t hashicorp/terraform:light plan main.tf
but of course that fails because 1) it's invalid syntax for terraform and 2) the container workdir does not have my main.tf. i do know about -v of course, and can mount $PWD to /, but what i'm more interested in is the idea of using a docker image to replace a binary installed to my system. if i have to mount $PWD to the workdir every time, that seems a little more annoying?
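One common answer is a small shell wrapper so the image behaves like a locally installed binary; the mount point /workspace and the function name are arbitrary choices for this sketch:

# Sketch of a wrapper function (e.g. in ~/.bashrc or ~/.zshrc) that mounts the
# current directory into the container and runs terraform there.
terraform() {
  docker run --rm -it \
    -v "$PWD":/workspace \
    -w /workspace \
    hashicorp/terraform:light "$@"
}

# Usage, from the directory containing main.tf:
#   terraform init
#   terraform plan

It still mounts $PWD on every invocation, but the function hides that, which seems to be the annoying part.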