docker
Archived
All things docker
Archive: https://archive.sweetops.com/docker/
erik 12 months ago
archived the channel
Joe Perez almost 2 years ago
Hello all! Wondering if anyone has gotten the Android SDK and emulator successfully working in a container on M1 Macs. I'm running into issues with the emulator itself; right now it's complaining about not finding devices/emulators or hardware acceleration.
Alex Kwan almost 2 years ago (edited)
When I run docker-compose up locally using WSL2 (Ubuntu) I get the error below. Does the error suggest an issue with my VPN? I get this same error with Docker Desktop as well as Rancher Desktop. I've tried restarting and doing docker login a few times. The interesting thing is that everything works fine if I specify a base image from quay.io, but when I specify a base image from public.ecr.aws or Docker Hub, I get this error.

failed to solve: ubuntu/ubuntu:22.04: failed to authorize: failed to fetch oauth token: Post "https://auth.docker.io/token": dial tcp: lookup auth.docker.io on 8.8.8.8:53: read udp 172.28.207.184:54621->8.8.8.8:53: i/o timeout

jonjitsu over 2 years ago
I'm working on a VM where I have to use a proxy for most requests to the internet. I have HTTP_PROXY/HTTPS_PROXY set. When I run docker run --rm bash env | grep -i prox, I see HTTP_PROXY/HTTPS_PROXY/NO_PROXY vars set along with their non-uppercased versions. Is this some sort of Docker feature where it detects that those are set on the host and sets them on the container too?

Kimberly Cottrell over 2 years ago
Writing this here in case it helps someone.
If you wanna play with docker compose watch but you're not running Docker Desktop, you'll notice you cannot download the 2.22 docker compose package from Docker's package repositories. After chatting with Docker in their Slack, it turns out they've done this intentionally to help get an upgrade out the door faster.

If you're unlucky like me and Docker Desktop just will not install and start up on your (in my case, Ubuntu 22.04) box, here is how to upgrade everything:
1. uninstall Docker Engine, compose, containerd.io, docker.io, etc. (something like https://docs.docker.com/engine/install/ubuntu/#uninstall-docker-engine)
2. install docker.io, docker-ce, etc. (something like https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository)
3. go here and download whichever version of compose fits your system architecture: https://github.com/docker/compose/releases/tag/v2.22.0
4. rm -rf ~/.docker
5. mkdir -p ~/.docker/cli-plugins
6. mv ~/Downloads/whatever-docker-compose-arch-here ~/.docker/cli-plugins/docker-compose
Technically, #6 for me is actually a symlink between /usr/bin/compose and ~/.docker/cli-plugins/docker-compose, as I moved the downloaded release into /usr/bin/compose; tho docker compose cannot run properly unless the executable is under ~/.docker/cli-plugins/docker-compose. I don't think you need to have this symlink in place, tho I'm writing it here just in case.
Hope that helps anyone also struggling with that.
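Steps 4-6 above as one sequence (a sketch; the release filename is an example for x86_64 Linux, substitute the asset that matches your architecture):

```shell
# Clear any stale CLI state, then put the downloaded compose binary where the
# docker CLI discovers its plugins (~/.docker/cli-plugins).
rm -rf ~/.docker
mkdir -p ~/.docker/cli-plugins
mv ~/Downloads/docker-compose-linux-x86_64 ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
# then verify: docker compose version
```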
dude over 2 years ago (edited)
Hey folks, Docker recently removed the buildinfo metadata from buildx outputs because they were moving stuff like that to attestations and provenance outputs. Has anyone been able to actually get that data back out, though? I recently went about changing our build action to try pulling that data and I'm only seeing partial results. In particular, we have a multi-platform build that should output both amd64 and arm64 outputs. The output below shows the former, but the latter renders as just unknown:

{
"mediaType": "application/vnd.oci.image.index.v1+json",
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:819b2e9960f85e1667e924ebba1cd9e0bca7924a7afdaf5a194630bb2ed1e750",
"size": 1800,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:8de8b55a1b364eacb4b2b89e839f52120492a4c6c05110e982b3f3e9b663eb51",
"size": 566,
"annotations": {
"vnd.docker.reference.digest": "sha256:819b2e9960f85e1667e924ebba1cd9e0bca7924a7afdaf5a194630bb2ed1e750",
"vnd.docker.reference.type": "attestation-manifest"
},
"platform": {
"architecture": "unknown",
"os": "unknown"
}
}
]
}

Antarr Byrd over 2 years ago
How can I reduce the size of the image in this Dockerfile? It was generated by fly.io. The image size is 3.4 GB
Dockerfile
# syntax = docker/dockerfile:1
# Make sure RUBY_VERSION matches the Ruby version in .ruby-version and Gemfile
ARG RUBY_VERSION=3.2.0
FROM ruby:$RUBY_VERSION-slim as base
LABEL fly_launch_runtime="rails"
# Rails app lives here
WORKDIR /rails
ARG RAILS_MASTER_KEY
# Set production environment
ENV RAILS_ENV="production" \
BUNDLE_WITHOUT="development:test" \
BUNDLE_DEPLOYMENT="1" \
RAILS_MASTER_KEY=${RAILS_MASTER_KEY}
# Update gems and bundler
RUN gem update --system --no-document && \
gem install -N bundler
# Install packages needed to install nodejs
RUN apt-get update -qq && \
apt-get install --no-install-recommends -y curl && \
rm -rf /var/lib/apt/lists /var/cache/apt/archives
# Install Node.js
ARG NODE_VERSION=14.21.1
ENV PATH=/usr/local/node/bin:$PATH
RUN curl -sL https://github.com/nodenv/node-build/archive/master.tar.gz | tar xz -C /tmp/ && \
/tmp/node-build-master/bin/node-build "${NODE_VERSION}" /usr/local/node && \
rm -rf /tmp/node-build-master
# Throw-away build stage to reduce size of final image
FROM base as build
# Install packages needed to build gems and node modules
RUN apt-get update -qq && \
apt-get install --no-install-recommends -y build-essential git libpq-dev libvips node-gyp pkg-config python-is-python3
# Install yarn
ARG YARN_VERSION=1.22.19
# node modules are installed in root of the image
ENV NODE_ENV="production" \
PREFIX=/usr/local \
PATH=/usr/local/node/bin:$PATH
RUN npm install -g yarn@$YARN_VERSION && \
rm -rf /usr/local/node/lib/node_modules/npm && \
yarn --version && \
apt-get remove -y node-gyp pkg-config && \
apt-get autoremove -y && \
rm -rf /tmp/npm* /tmp/gyp /tmp/node-* /tmp/node_modules/npm* /var/lib/apt/lists/* \
/usr/local/node/lib/node_modules/npm
# Build options
ENV PATH="/usr/local/node/bin:$PATH"
# Install application gems
COPY --link Gemfile Gemfile.lock ./
RUN bundle install && \
bundle exec bootsnap precompile --gemfile && \
rm -rf ~/.bundle/ $BUNDLE_PATH/ruby/*/cache $BUNDLE_PATH/ruby/*/bundler/gems/*/.git
# Install node modules
COPY --link package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Copy application code
COPY --link . .
# Precompile bootsnap code for faster boot times
RUN bundle exec bootsnap precompile app/ lib/
# Precompiling assets for production without requiring secret RAILS_MASTER_KEY
RUN SECRET_KEY_BASE=DUMMY ./bin/rails assets:precompile
# Final stage for app image
FROM base
# Install packages needed for deployment
RUN apt-get update -qq && \
apt-get install --no-install-recommends -y curl imagemagick libvips postgresql-client mtr iputils-ping && \
rm -rf /var/lib/apt/lists /var/cache/apt/archives
# Copy built artifacts: gems, application
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY --from=build /rails /rails
# Run and own only the runtime files as a non-root user for security
RUN useradd rails --create-home --shell /bin/bash && \
chown -R rails:rails db log tmp
USER rails:rails
# Deployment options
ENV RAILS_LOG_TO_STDOUT="1" \
RAILS_SERVE_STATIC_FILES="true"
# Entrypoint prepares the database.
ENTRYPOINT ["/rails/bin/docker-entrypoint"]
# Start the server by default, this can be overwritten at runtime
EXPOSE 3000

.dockerignore
# See https://docs.docker.com/engine/reference/builder/#dockerignore-file for more about ignoring files.
# Ignore git directory.
/.git/
# Ignore bundler config.
/.bundle
# Ignore all default key files.
/config/master.key
/config/credentials/*.key
# Ignore all environment files.
/.env*
!/.env.example
# Ignore all logfiles and tempfiles.
/log/*
/tmp/*
!/log/.keep
!/tmp/.keep
# Ignore all cache files.
/tmp/cache/*
# Ignore pidfiles, but keep the directory.
/tmp/pids/*
!/tmp/pids/
!/tmp/pids/.keep
# Ignore storage (uploaded files in development and any SQLite databases).
/storage/*
!/storage/.keep
/tmp/storage/*
!/tmp/storage/
!/tmp/storage/.keep
# Ignore assets.
/node_modules/
/app/assets/builds/*
!/app/assets/builds/.keep
/public/assets
# Ignore data_files/
/data_files/*
!/data_files/.keep
# Ignore coverage
/coverage/*
# Ignore spec files
/spec/*
!/spec/.keep
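Not a fix by itself, but for the 3.4 GB question above the first step is usually to see which layers dominate the image; a sketch (image name is a placeholder):

```shell
# Largest entries are typically apt installs, node_modules, and asset caches;
# anything big created in the build stage should not reappear in the final stage.
docker history --format '{{.Size}}\t{{.CreatedBy}}' myapp:latest
```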
Dhamodharan over 2 years ago
Hello all,
I am trying to set up a multinode docker wazuh cluster. I have followed the documentation and executed the steps, but when I run docker-compose up -d it fails creating the local mount directory (error attached below). I didn't see anything about host volume creation or permission-related configuration in the document. Can someone help with this?
Dhamodharan almost 3 years ago
Hi all,
I am trying to install some packages on a centos:8 docker base image, but I am unable to install any packages or update the base image. Can someone help with what the issue is here, and how to install a package on centos?
Bogdan almost 3 years ago
Bogdan over 3 years ago
mimo over 3 years ago
Hey
Does anyone know what might cause this? It happens when trying to log in manually too.
azec over 3 years ago
I have now tried to pull down the base image explicitly with:
$ docker login
Authenticating with existing credentials...
Login Succeeded
$ docker pull cloudposse/geodesic:latest-debian
Error response from daemon: Head "https://registry-1.docker.io/v2/cloudposse/geodesic/manifests/latest-debian": unknown: image access restricted: cloudposse/geodesic blocked for organization vznbi

The error above tells me that at some point either:
a) someone who can control our org in Docker Hub has prevented us from accessing the cloudposse/geodesic public image
b) someone on the other end has prevented cloudposse/geodesic pulls for our Docker Hub org
I think (a) is more probable than (b) but need to verify...
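One way to separate (a) from (b) without Docker in the loop: ask Docker Hub's token endpoint for an anonymous pull token for the image, which is the first step of the registry auth flow; a sketch:

```shell
# A 200 here means anonymous pulls are allowed; a 401/403 reproduces the
# restriction independent of any local docker login state.
curl -sS -o /tmp/token.json -w 'HTTP %{http_code}\n' \
  "https://auth.docker.io/token?service=registry.docker.io&scope=repository:cloudposse/geodesic:pull"
```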
azec over 3 years ago
I have been successfully pulling down the geodesic image from DockerHub all morning and now it stopped working.

=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/cloudposse/geodesic:latest-debian 1.2s
=> [auth] cloudposse/geodesic:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/cloudposse/geodesic:latest-debian:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: unexpected status: 403 Forbidden
make[1]: *** [docker/build] Error 1
make: *** [build] Error 2

Related issue: https://forums.docker.com/t/failed-to-authorize-rpc-error-code-unknown-desc-failed-to-fetch-oauth-token-unexpected-status-401-unauthorized/118562/40
I have not been able to find any workarounds as of right now, but tried:
• Different IPs (via VPN and different ISP connections)
• Restarting Docker Desktop
• Re-login to Docker Hub with docker login

--------------------------------
Curious if anyone else has run into anything similar in the past?
azec over 3 years ago
I mean, my company does provide Docker Desktop licenses, but I was so relieved with being able to get everything done just with minikube+hyperkit for about 1 yr.

azec over 3 years ago
I used minikube+hyperkit on the previous Intel-based MacBook Pro. Now with the switch to M1, I am having a hard time finding any minikube driver that works that is also free.

azec over 3 years ago
What, if any, alternatives to Docker Desktop are people using on Mac computers with Apple M1 chips (darwin/arm64)?

mimo over 3 years ago
Hey guys,
I have an issue I'm trying to fix for two days straight now.
I'm trying to push an image to my private repository, which is configured in /etc/docker/daemon.json under insecure-registries.
The daemon.json file looks like this:

{"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"], "insecure-registries": ["repo-jfrog.shay.com:8081"]}

When trying to push the image I'm getting the error as follows:

3c9f43f84f93: Pushing [==================================================>] 52.68MB/52.68MB
54ef099e24bc: Pushing 3.072kB
24302eb7d908: Retrying in 1 second
http: server gave HTTP response to HTTPS client

I've googled and tried all of the first 10 pages, all of the stackoverflow possible solutions, nothing worked 😕
my docker info is:
Containers: 16
Running: 0
Paused: 0
Stopped: 16
Images: 113
Server Version: 18.09.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version: N/A
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-210-generic
Operating System: Ubuntu 16.04.7 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 23.55GiB
Name: prod-vm
ID: 7RPI:OM6S:O2M5:ONBV:QYEW:U47R:LUT2:JVOX:H6PE:SSBA:OUZR:SHE2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
repo-jfrog.shay.com:8081
127.0.0.0/8
Live Restore Enabled: false
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support

my docker service status
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─override.conf
Active: active (running) since Wed 2022-06-01 17:58:27 IST; 1 day 1h ago
Docs: https://docs.docker.com
Main PID: 4663 (dockerd)
Tasks: 24
Memory: 85.7M
CPU: 32.621s
CGroup: /system.slice/docker.service
└─4663 /usr/bin/dockerd

Has anyone encountered this before?
Any help will be great 🤲
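Two things worth checking for the push error above, offered as guesses: the insecure-registries entry must exactly match the host:port being pushed to, and dockerd only rereads daemon.json on restart. A sketch to confirm both:

```shell
# A Docker registry answers /v2/ ; expect 200 (or 401 with an auth challenge)
# over plain http if this really is the registry endpoint.
curl -sS -o /dev/null -w 'HTTP %{http_code}\n' http://repo-jfrog.shay.com:8081/v2/
sudo systemctl restart docker    # pick up daemon.json edits
```

Artifactory often serves each docker repo on its own port or path behind a reverse proxy, so the registry API may not live on 8081 at all.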
sheldonh over 3 years ago
I have a hosts entry for my app such as:
127.0.0.1 dev.local.test

Now I can work on my app with a local domain emulating this path, like http://dev.local.test/foo.
My problem is figuring out how to keep this compatible with docker. Working on the host is fine. However, if I use devcontainers or use a container in my docker stack instead of live serving the app via ng serve... then these containers and services have no knowledge of the host alias I've set and won't work.
I'm looking at extra_hosts and other options, but not seeing the exact solution yet.
Anyone know how to keep the host aliases functional with other docker containers so I can continue to use that alias both outside docker and with my docker compose stack (and eventually minikube/kind will need this too)?
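A sketch of the extra_hosts route mentioned above, using the special host-gateway value (supported since Docker 20.10; service and image names are hypothetical), so the alias resolves to the Docker host from inside the container:

```yaml
services:
  app:
    image: myapp:dev              # hypothetical
    extra_hosts:
      # inside the container, dev.local.test resolves to the Docker host,
      # where the aliased service is actually listening
      - "dev.local.test:host-gateway"
```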
Antarr Byrd almost 4 years ago
Anyone know how I can install docker-engine on a Mac without the desktop version? I need to be able to run sam local.

joey jensen almost 4 years ago
I have to ask, what are others doing for docker image builds? Are you still using Docker's version?
Maciek Strömich almost 4 years ago (edited)
https://www.docker.com/blog/speed-boost-achievement-unlocked-on-docker-desktop-4-6-for-mac/
There are some issues with the experimental features though (e.g. in our case sed uses temporary files which are created without any permissions attached, and when running we get a bunch of permission errors). In the roadmap thread there's a version 4.7 available which fixed the problem for some, but e.g. not for us.
Oscar over 4 years ago
Has someone here tried to access a service running in a container from another container, both running in the same host? In my case, I have a DB running on a compose with the port published to the host. Then I am running a gitlab-runner container, which needs to access this database to keep the test local. It works if I point to the DB in AWS RDS. I have tried to point to the local using its IP, container name, 127.0.0.1. I also tried running the second container with --net <compose network> and tried the same as mentioned before. Any ideas?
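For the question above: a port published to the host is bound on the host, so 127.0.0.1 inside the runner container points at the runner itself. Attaching the second container to the compose project's network and using the compose service name as the hostname usually works; a sketch with hypothetical names:

```shell
docker network ls                      # compose networks are named <project>_default
docker run --rm --network myproject_default postgres:13 \
  pg_isready -h db -p 5432             # "db" = the DB's compose service name
```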
Saichovsky over 4 years ago
I am using an Amazon AMI image to build spot instances whenever there is a build job. One of the packages that gets installed is docker, using the commands below:
# Install Docker CE
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
apt-get update -yq && apt-get -yq install docker-ce docker-ce-cli containerd.io

Sometimes, my internal routing table for the instance looks like this:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.206.96.1 0.0.0.0 UG 100 0 0 ens5
10.206.96.0 0.0.0.0 255.255.224.0 U 0 0 0 ens5
10.206.96.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens5
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-77455fcf0cbe
172.31.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-41e09bcc0b9c
192.168.32.0 0.0.0.0 255.255.240.0 U 0 0 0 br-c80695bea2be
192.168.96.0 0.0.0.0 255.255.240.0 U 0 0 0 br-5d6cf359313a
192.168.112.0 0.0.0.0 255.255.240.0 U 0 0 0 br-f38334a832ee
192.168.160.0 0.0.0.0 255.255.240.0 U 0 0 0 br-41a02bc22363
192.168.208.0 0.0.0.0 255.255.240.0 U 0 0 0 br-1f45ae76c3fe
192.168.224.0 0.0.0.0 255.255.240.0 U 0 0 0 br-8a8bb9ab22a0
192.168.240.0   0.0.0.0         255.255.240.0   U     0      0        0 br-c7f72974b999

And sometimes I have a smaller routing table (some of the subnets listed above are excluded). I am not sure what the reason is for the inconsistent routing tables and bridged network interfaces, but it's causing me some trouble, because every once in a while a job needs to access an IP in one of the subnets listed in the routing table, which gets created by docker. In other words, docker sometimes creates a routing table that overrides the default route, causing builds to fail for a particular service, where the build script needs to access an IP address in one of the listed subnets. The traffic gets routed to one of those docker interfaces and times out.
Now my question is, what can be done about this? I just tried to have a look at /etc/docker/daemon.json to see what's in there and the file is actually missing. How do I deal with this erratic behaviour by dockerd?

sheldonh over 4 years ago
Why would docker-compose work but not docker compose, when I'm running one on Ubuntu 18.04 and one on Mac with Docker Desktop and the docker version on both is 20.9+?
Tried experimental flag on Ubuntu for kicks and still no go.
PePe Amengual over 4 years ago
?
Kareem over 4 years ago
Can't find anything from researching, but has anyone ever experienced performance degradation over time with docker swarm on a single instance (I know, it's legacy… I'm moving us to ECS)? The app is snappy after restarting the containers but slows down over time. CPU, memory, and disk space all look great on the EC2 instance and for each container. Curious if anyone has ever run into this?
Steffan over 4 years ago
Trying to understand how latest tags work for images: does docker pick the most recent version of the image when latest is specified, or does an actual version of the image tagged latest have to exist before it will work? Quite confused with all that I've been reading; can anyone help me understand how this works?
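For what it's worth, latest is just an ordinary tag name with no version semantics: docker does not pick the newest version; a tag literally named latest must have been pushed for it to be pullable, and pulling an image with no tag simply asks for the tag named latest. A sketch with a hypothetical image name:

```shell
docker build -t myimg:1.2.3 .
docker tag myimg:1.2.3 myimg:latest   # nothing maps 1.2.3 to latest automatically
docker push myimg:1.2.3
docker push myimg:latest              # without this push, myimg:latest does not exist
```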
Almondovar over 4 years ago
Hi colleagues, we are using php image 7.4.9-apache and we received a customer requirement to upgrade to debian v10.10. By running exec into the container we get the info that it's version 10. My question is: how do I know what image to pick that has debian v10.10? Because in the image details I can't see any command relative to debian.
Thanks!
# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="<https://www.debian.org/>"
SUPPORT_URL="<https://www.debian.org/support>"
BUG_REPORT_URL="<https://bugs.debian.org/>"
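The same check can be run against candidate images without keeping a container around; the Debian point release a tag carries is whatever was current when that tag was last rebuilt, so re-pulling a newer build of the same buster-based tag is how you pick up 10.10:

```shell
docker run --rm php:7.4.9-apache cat /etc/os-release   # tag from the thread
```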
sheldonh over 4 years ago
Ansible for Container Builds
Ran across this project and haven't seen much mention of it (only one in our archives).
ansible-bender: https://github.com/ansible-community/ansible-bender
I've wondered about this in prior threads too.
I don't quite get why something like this hasn't taken off more.
We've dramatically simplified installation of packages and apps with ansible, but then with Dockerfiles it feels like we step backward into bash scripts, curl calls, and all of this depends on the distro too.
In addition to the benefits of docker compose, you get more features from Ansible.
I'd think that Ansible for defining, building, installing packages, and more would have been embraced eagerly.
What's the reason y'all think this type of approach didn't gain traction?
Chris Picht over 4 years ago (edited)
Anyone using SSO with their AWS and successfully pulling images from ECR with docker pull via an SSO account? I can successfully docker login (supposedly), but I get this error despite having AdministratorAccess:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded

docker pull XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame
Using default tag: latest
Error response from daemon: pull access denied for XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::YYYYYYYY:assumed-role/AWSReservedSSO_AdministratorAccess_29495c17e6538e9b/myemail@example.com is not authorized to perform: ecr:BatchGetImage on resource: arn:aws:ecr:us-east-1:XXXXXXXX:repository/reponame

Not asking anyone to fix it for me, I just want to know if the real issue is I haven't yet found the AWS documentation where they casually mention that SSO accounts can't do this.
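One thing worth ruling out for the error above: the denied principal is in account YYYYYYYY while the repository ARN is in account XXXXXXXX; if those really are different accounts, the pull needs a cross-account ECR repository policy regardless of AdministratorAccess. A quick sketch to compare:

```shell
aws sts get-caller-identity --query Account --output text    # account of the SSO session
aws ecr describe-repositories --repository-names reponame \
  --region us-east-1 --query 'repositories[0].repositoryUri' --output text
```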
sheldonh over 4 years ago
The Dapper project is something I find interesting, and similar in a general way to how cloudposse does the container-driven code tools.
I'm wondering if there is any other tooling that makes it easy to wrap up sets or individual dev tools on containers so I can stop worrying about Linux/Mac/windows and just run whatever I want. Docker commands are too verbose for this imo.
Basically like dapper I want to run task semver and then have it grab whatever I've defined that to be as a docker container, allowing normal cli usage.
Is it better to instead build all the tools into a single container like geodesic does and require the developer to exec into it and go from there? Use a cli tool for the prompt but it's all internal to the tool?
Maybe the wrapper stuff is the problem and instead the docker container just has the simplified commands inside itself.
Been thinking about this because of the variation in machines. Maybe docker interactive is truly the best approach instead of special wrappers anyway. Only big issue is now I see a CI job having to download a 10 gb docker file instead of running a few install commands.
Mohammed Yahya over 4 years ago (edited)
I have docker-compose to manage many solutions like gitlab, vault, jenkins, nexus, awx, selenium, nifi, spark, sonarqube, custom apps, pgadmin, portainer, minio, and I need a solid reverse proxy to replace apache httpd:
1. Nginx
2. Consul
3. Traefik
4. Varnish
5. Caddy
What do you think about this?
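Of the five, Traefik (and Caddy with its docker plugin) can discover compose services automatically from container labels, which fits a docker-compose-managed stack well; a minimal sketch along the lines of Traefik's quick start (versions and hostnames are examples):

```yaml
services:
  traefik:
    image: traefik:v2.10
    command: --providers.docker          # watch the docker socket for services
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  whoami:
    image: traefik/whoami                # demo backend
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.localhost`)"
```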
Sander Mol over 4 years ago (edited)
As an in-between step toward actual container orchestration, I am starting to use pure docker-compose commands with the use of DOCKER_HOST=ssh://. While doing this, I wanted to use docker-compose build <service> in my pipeline and noticed that the build actually needs runtime variables in order for it to build.
This sounds strange to me. Does anybody know why this is needed? I tried to search the Github issues but could not find anything related to this.
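A possible explanation, offered as an assumption about the setup: compose interpolates ${VAR} references across the whole file whenever it parses it, for any subcommand including build, so variables that are only needed at runtime still have to be set (or given defaults) at build time. A sketch:

```yaml
services:
  web:
    build: .
    environment:
      # runtime-only, but compose still resolves it when parsing for "build";
      # the :- default keeps build from requiring it
      - API_KEY=${API_KEY:-}
```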
Mohammed Yahya over 4 years ago
sheldonh over 4 years ago
Does anyone use Ansible in the Dockerfile to install tooling and configure? Seems it would save effort in reuse and handle different distros a bit more cleanly.
Mohammed Yahya over 4 years ago
A must read: https://nickjanetakis.com/blog/best-practices-around-production-ready-web-apps-with-docker-compose
kumar k over 4 years ago
How do I check if a container has the following capabilities?
NET_ADMIN, NET_RAW
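From the host, docker container inspect --format '{{.HostConfig.CapAdd}}' <container> shows capabilities added at run time. From inside the container, the effective set can be decoded from /proc; a sketch (in the kernel's numbering, CAP_NET_ADMIN is bit 12 and CAP_NET_RAW is bit 13):

```shell
# Decode the effective capability bitmask the kernel reports for this process.
cap_eff=$(awk '/^CapEff:/ {print $2}' /proc/self/status)
for name_bit in NET_ADMIN:12 NET_RAW:13; do
  bit=${name_bit#*:}
  if [ $(( 0x$cap_eff >> bit & 1 )) -eq 1 ]; then
    echo "${name_bit%:*}: present"
  else
    echo "${name_bit%:*}: absent"
  fi
done
```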
Brian Ojeda over 4 years ago (edited)
Do you have the ecr credentials helper installed? You cannot set credsStore to ecr-login without the helper installed. Additionally, setting the credsStore will break logins for all non-ecr registries unless you also define a credHelpers key for the specific non-ecr repo.
https://github.com/awslabs/amazon-ecr-credential-helper
My recommendation…
• install the ecr credentials helper
• update config.json with the json below
  ◦ update the repo host to match yours
If done correctly, you do not need to explicitly login to docker. The Docker CLI will automatically login when pulling or pushing, assuming your AWS credentials are correct and active in the environment. FYI, not sure if the ecr credential helper works with AWS SSO cli profiles.
{
  "credsStore": "",
  "credHelpers": {
    "123456789.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}

kumar k over 4 years ago
Currently set to "credsStore" : "ecr-login",
kumar k over 4 years ago
Do i need to change anything in config.json?
kumar k over 4 years ago
aws ecr get-login-password --region us-east-1 --profile test | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
Error saving credentials: error storing credentials - err: exit status 1, out: not implemented

Santiago Campuzano over 4 years ago
It works for me
Santiago Campuzano over 4 years ago
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

Santiago Campuzano over 4 years ago
try doing it like this
kumar k over 4 years ago
aws-cli/2.2.6 Python/3.8.8 Darwin/19.5.0 exe/x86_64 prompt/off