15 messages
Michael Holt about 4 years ago
Cross posting this here.... probably should have started in this channel
sheldonh about 4 years ago
Is kustomize the default go-to when all I need is a minor change to a deployment between staging/local/prod?
I think Helm might be overkill. If I needed more, I'd probably want to try Pulumi as well (still might if I need to combine k8s + external resources like databases).
Zach about 4 years ago
kustomize all the way
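For the staging/local/prod case above, a minimal kustomize layout might look like this (a sketch; all file and directory names are illustrative):

```shell
# A shared base plus one overlay per environment; each overlay
# patches only what differs (replica count, image tag, etc.).
mkdir -p base overlays/staging
cat > base/kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
EOF
cat > overlays/staging/kustomization.yaml <<'EOF'
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml
EOF
# Render the staging variant without touching the base:
#   kubectl kustomize overlays/staging
```

Older kustomize versions spell the patch list `patchesStrategicMerge`; recent ones accept `patches` as above.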
sheldonh about 4 years ago
🧵 I've known about 12-Factor apps for a while.
When working AWS-native, my favorite would be cloud-based configuration settings pulled from SSM.
I recently ran into someone who seemed to think that environment variables are far inferior to config files. Is it more of a Kubernetes-standard approach to use config file mounts and not environment variables? To me, environment variables versus mounting a config file are different flavors of the same thing and not really that important. The bigger question is: do you use an environment var/config file, OR do you load config directly into your app from a parameter-store-style service (etcd, Consul? others)?
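For what it's worth, the env-var vs config-file split above can be shown with a single ConfigMap consumed both ways. This is just a sketch; every name in it (app-config, my-app, etc.) is hypothetical:

```shell
cat > config-demo.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug    # becomes an env var via envFrom
  app.yaml: |         # becomes a file via the volume mount
    featureFlag: true
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest       # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config     # valid env-name keys become env vars;
                                 # keys like app.yaml are skipped here
      volumeMounts:
        - name: config
          mountPath: /etc/app    # app.yaml shows up as /etc/app/app.yaml
  volumes:
    - name: config
      configMap:
        name: app-config
EOF
# kubectl apply -f config-demo.yaml   # (needs a cluster)
```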
Adnan about 4 years ago
Hi People,
Did anyone ever hit this problem with a pod?
Status: Terminating (lasts 3d21h)
Termination Grace Period: 120s
The "Termination Grace Period" is 120s, yet the pod has been terminating for 3d21h.
Zach about 4 years ago
you might have a stuck finalizer
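If it is a stuck finalizer, something along these lines usually confirms and clears it (pod name and namespace are placeholders; these commands need a live cluster, and removing a finalizer skips whatever cleanup it was meant to do):

```shell
# See whether any finalizers are still listed on the stuck pod:
kubectl get pod stuck-pod -n my-namespace -o jsonpath='{.metadata.finalizers}'

# If the controller that owns the finalizer is gone for good,
# clearing the list lets the API server finish the deletion:
kubectl patch pod stuck-pod -n my-namespace --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```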
sheldonh about 4 years ago
Does anyone use JSON/YAML transformation in a pipeline for Kubernetes, like modifying the YAML directly? I saw this once, and I normally see kustomize/Helm and other tools do the transformation rather than a build action. I'm thinking that's more typical in things like dotnet aspnetcore web-settings transformations, but that's an assumption.
I'm currently assuming the "easy" start for transforming straight YAML, without any Helm complexity, is just kustomize (from the conversation above) and the overlays/patches it produces. Sound about right?
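If you do want a raw pipeline-style edit without kustomize/Helm, structure-aware tools like yq are the usual pick. The crude sketch below uses sed just to show the shape of a build-step transformation (deploy.yaml and the replica bump are made up):

```shell
cat > deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
EOF
# Bump replicas as a build step. sed is fragile on YAML; in a real
# pipeline prefer a YAML-aware tool (yq, kustomize) over text matching.
sed -i 's/replicas: 1/replicas: 3/' deploy.yaml
```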
azec about 4 years ago
Hey y’all, for all of you running AWS EKS workloads, this just came out:
https://github.com/aws-samples/kubernetes-log4j-cve-2021-44228-node-agent
Sherif about 4 years ago
👋
I want to start a discussion regarding various Ingress Controllers, and how to choose between them.
I have tried a few, but I haven't had problems with any yet, maybe due to the current scale I'm operating at and the limited use cases (yet).
The AWS ALB Ingress Controller makes perfect sense when you think about latency and reliability; however, it is not as robust and feature-rich. For example, other Ingress Controllers can act as full-fledged "API Gateways" (e.g. solo.io's, or Ambassador's), and some integrate with components like Argo Rollouts and Flagger to do advanced rollout strategies. An Ingress Controller like Nginx's is so "hackable" with the snippets annotations and lets me do lots of workarounds (also exploitable 😅).
However, with an in-cluster Ingress (unlike ALB), you have to carefully spread your workloads across the cluster to decrease latency.
My problem is that I'm unable to form an opinion without hands-on experience on larger clusters (500+ nodes), and I can't rely on my lab benchmarks because they're too sensitive. Also, I lack the knowledge to correctly benchmark and profile the network latency/overhead of different ingresses (but I'm working on that; your help is very welcome!).
So, what's your decision flowchart when choosing an Ingress Controller / API Gateway to handle north-south traffic?
I was "triggered" to post these questions after I read @Vlad Ionescu (he/him) tweet btw 😄
Johan about 4 years ago
Anyone using Loki for logs?
Jas Rowinski about 4 years ago
Anyone use the topologySpreadConstraints key to evenly distribute across nodes?
I am specifically trying to use this in argo-workflows:

topologySpreadConstraints: # <https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/>
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      foo: bar

Pod:

- name: foo
  inputs:
    parameters:
      - name: foo
  container:
    image: my_container
    metadata:
      labels:
        foo: bar
    command: [sh, -c]

Weird thing is, it gets scheduled but doesn't actually run. Wondering if anyone else has experience or has successfully done it.
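One thing worth double-checking in the snippet above (an assumption on my part, since the node labels aren't shown): topologyKey must match a label that actually exists on the nodes, and the well-known zone label is topology.kubernetes.io/zone, not zone. A corrected constraint would look like:

```shell
cat > spread.yaml <<'EOF'
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # well-known node label
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        foo: bar                               # must match the pod's labels
EOF
# Verify which labels your nodes actually carry:
#   kubectl get nodes --show-labels
```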
Johan about 4 years ago (edited)
Hi, any suggestions for ‘easy to use’ WAF integrations? Nginx ingress + ModSecurity is hard to debug and maintain, AWS WAF demands ACM certificates (so no LetsEncrypt), or any other?
zeid about 4 years ago (edited)
Hello. We redeployed kiam to our EKS cluster and some pods are still failing to get their tokens. The kiam server logs keep saying the worker role doesn't have the ability to assume the role assigned to kiam, even though it does. We've tried restarting the application pods and the kiam pods with no luck. Any ideas?
Kiam server keeps showing errors like
{"generation.metadata":0,"level":"error","msg":"error retrieving credentials: AccessDenied: User: arn:aws:sts::<accountid>:assumed-role/dev-appsystem-eks-worker-role/i-03694ff305127dad7 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::<accountid>:role/dev-appkiam-server\n\tstatus code: 403, request id: 004dbe53-5c3b-4f67-9594-0860da0cec21","pod.iam.requestedRole":"dev-resourcemanager-role","pod.iam.role":"dev-resourcemanager-role","pod.name":"hhh-resourcemanager-service-5cc7c8bcdc-kdgbh","pod.namespace":"qa-hhh","pod.status.ip":"10.212.36.21","pod.status.phase":"Running","resource.version":"413214545","time":"2021-12-24T17:53:33Z"}
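Reading the AccessDenied above, the instance role (dev-appsystem-eks-worker-role) is being refused sts:AssumeRole on the kiam server role. Two things have to line up: the worker role needs a policy allowing sts:AssumeRole on the kiam server role, and the kiam server role's trust policy has to name the worker role as a principal. A sketch of the latter; the role names come from the log, everything else is assumed, and <accountid> is left as a placeholder:

```shell
cat > kiam-server-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:role/dev-appsystem-eks-worker-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Apply it (needs AWS credentials):
#   aws iam update-assume-role-policy \
#     --role-name dev-appkiam-server \
#     --policy-document file://kiam-server-trust.json
```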