docker (archived)
11 messages
All things docker
Archive: https://archive.sweetops.com/docker/
Igor over 6 years ago
We're just starting with Docker. Currently we're planning to use CircleCI to build core images for our solution, and to distribute a customized private image + data-container combo (based on the core) to customers, or host it in AWS ECS (leaning toward Fargate at the moment). I'm currently planning to build the core image and store it in ECR, then have the customer-specific build pull that image down, use it as a base, and produce either an image in ECR for ECS or an artifact in S3 to distribute to the customer. Looking for validation that this is a good approach. Is ECR a good option, or is there a better alternative? (I'm concerned about having copies of images across all AWS accounts/regions.) Is saving the image and packaging it in S3 for customer delivery the right approach? Appreciate the feedback!
Steven over 6 years ago
There's nothing wrong with that. We use ECR 100% for private images. I would recommend not having a lot of duplicate ECR repos; I put them all in an AWS account for shared resources, then have all other accounts access them from there. As for cross-region (I don't have this need yet), it's mostly a performance question. If you need to pull 5 images an hour, you're not going to worry much about latency. On the other hand, if you need to pull 100 in 5 minutes, you'll want to replicate ECR across regions so images are always close to the running containers.
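The shared-account layout Steven describes is typically implemented with an ECR repository policy in the shared account that grants pull access to the other accounts. A minimal sketch (the account ID is a placeholder; the `ecr:*` actions are the standard ones required for `docker pull`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

The pulling account still needs its own IAM permissions for these actions, plus `ecr:GetAuthorizationToken` to log in.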
Lee Skillen over 6 years ago (edited)
@Igor If you're looking for a service that is specialised for distribution, look at Cloudsmith (https://cloudsmith.io) (note: I work there, and happy to help out).
Lee Skillen over 6 years ago (edited)
No matter which way you set it up, distributing it via S3 alone is probably not the right approach, since that makes it awkward for your customers to use (i.e. external distribution); they'd have to docker load the image after pulling it down, rather than using docker pull directly. Very doable, but not great, since it pulls the entire image down rather than only the layers it needs.
Igor over 6 years ago
Thank you @Steven @Lee Skillen
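Lee's point about S3 distribution can be illustrated with the standard Docker CLI (the image name and bucket below are placeholders):

```shell
# S3-style distribution: the customer downloads the whole tarball every time.
docker save example/core:1.0 -o core.tar          # tarball containing every layer
aws s3 cp core.tar s3://example-bucket/core.tar   # hypothetical bucket
# Customer side:
aws s3 cp s3://example-bucket/core.tar .
docker load -i core.tar                           # imports all layers, no delta

# Registry-style distribution: only layers the customer is missing are transferred.
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/example/core:1.0
```

The registry path also gives you tags and immutable digests for free, which matters once customers need to verify which version they're running.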
Daniel Minella over 6 years ago (edited)
Has anyone already faced something like this?
My situation:
Dotnet 2.2 application, with this Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base
# Restoring
WORKDIR /app
## Copy solution
COPY ./*.sln ./
## Copy src projects
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
## Copy tests projects
COPY tests/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p tests/${file%.*}/ && mv $file tests/${file%.*}/; done
## Restore
RUN dotnet restore
# Publishing
WORKDIR /app
COPY src/. ./src/
RUN dotnet publish -c Release --no-restore -o /app/out
# Testing
FROM base AS tester
WORKDIR /app
COPY tests/. ./tests/
RUN dotnet test --logger:trx --no-restore
# Running
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
EXPOSE 5001
EXPOSE 5002
ENV ASPNETCORE_ENVIRONMENT=Unset
ENV ConnectionStrings__DefaultConnection=Unset
ENV Sentry__Dsn=Unset
ENV ELK_Elasticsearch_Dsn=Unset
ENV TokenAuthSettings__Issuer=Unset
ENV TokenAuthSettings__Key=Unset
ENV TokenAuthSettings__Audience=Unset
WORKDIR /app
COPY --from=base /app/out/* ./
ENTRYPOINT ["dotnet", "x.Api.dll"]
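As a side note on the RUN loops in the Dockerfile above: they rely on POSIX parameter expansion, where ${file%.*} strips the shortest trailing ".*" (the file extension), leaving the project name. A minimal stand-alone sketch of the same trick, using dummy project files:

```shell
#!/bin/sh
set -e
# Create two dummy project files, flat in the current directory,
# as the Dockerfile's COPY step would.
touch Foo.Api.csproj Foo.Core.csproj
# Move each one back into src/<project-name>/, as the RUN loop does:
# ${file%.*} turns "Foo.Api.csproj" into "Foo.Api".
for file in *.csproj; do
  mkdir -p "src/${file%.*}/" && mv "$file" "src/${file%.*}/"
done
ls src   # lists Foo.Api and Foo.Core
```

This reconstructs the per-project directory layout so that `dotnet restore` can run against only the .sln and .csproj files, keeping the restore layer cacheable.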
'An assembly specified in the application dependencies manifest (x.Api.deps.json) was not found: package: 'System.Private.ServiceModel', version: '4.5.3', path: 'runtimes/unix/lib/netstandard2.0/System.Private.ServiceModel.dll''
Igor over 6 years ago
I am testing a Docker Swarm configuration for the first time with an nginx+nodejs+redis combo
Igor over 6 years ago
And a single t2.medium server without Docker is showing significantly better results in performance testing than 2 t2.medium nodes running in Swarm
Igor over 6 years ago
Any idea on why docker isn't performing up to par?
Igor over 6 years ago (edited)
Has anybody run into a problem with exec user process caused "permission denied" when running a Docker container? The image doesn't work on a hardened RHEL host specifically. Also, an nginx host based on the same Alpine image works fine, but not node/redis.
Igor over 6 years ago
re: above - it turned out that switching users in the container is causing the issue (i.e. a node:10-alpine image works fine, but adding USER node to it causes the error). Hoping someone has an idea of what may be causing this
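One common cause of "exec user process caused permission denied" after adding USER is that files copied into the image as root are not readable or executable by the unprivileged user, which hardened hosts enforce more strictly. Whether this is what's happening on Igor's RHEL host is an assumption; a sketch of the usual fix, with an illustrative entrypoint script name:

```dockerfile
FROM node:10-alpine
WORKDIR /app
# Copy application files already owned by the unprivileged "node" user,
# so they remain readable/executable after dropping root.
COPY --chown=node:node . .
# Make sure the entrypoint is executable BEFORE switching users
# (the script name here is hypothetical).
RUN chmod +x ./docker-entrypoint.sh
USER node
ENTRYPOINT ["./docker-entrypoint.sh"]
```

On SELinux-enforcing hosts it's also worth checking the audit log, since a denial there produces the same error message even when file modes look correct.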