10 messages
Erik Osterman (Cloud Posse) over 3 years ago
Steve Chernyak over 3 years ago
Does anybody know where I can find details on how memory is throttled when k8s is running under cgroups v2 with the feature enabled? I'm trying to wrap my head around what it means for memory to be treated as a "compressible" resource.
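For context, a minimal sketch, assuming the question is about the cgroup v2 Memory QoS feature gate: requests and limits get mapped onto the cgroup v2 memory controller, and the "compressible" part is that reclaim/throttling kicks in before the hard limit. The pod below and the throttling behavior in the comments are illustrative assumptions, not something taken from this thread.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-qos-demo        # hypothetical example pod
spec:
  containers:
    - name: app
      image: nginx             # placeholder image
      resources:
        requests:
          memory: 256Mi        # with Memory QoS, roughly maps to cgroup v2 memory.min
        limits:
          memory: 512Mi        # maps to memory.max; memory.high is derived from the
                               # limit via a throttling factor, so the kernel starts
                               # reclaim/throttling before the OOM killer is reached
```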
akhan4u over 3 years ago (edited)
Hey guys,
Facing an issue running jenkins-operator on a Kubernetes cluster (v1.22.11-eks) with the Datadog monitoring agent. The DD agent is injecting some env vars into the jenkins-instance when it is created, e.g. DD_AGENT_HOST and DD_ENTITY_ID. These env vars are causing the operator to restart the jenkins-instance pod in a loop.
Have any of you used jenkins-operator next to a monitoring agent like Datadog, New Relic, etc.?
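For reference, the env vars the Datadog Admission Controller injects on the live pod typically look roughly like the sketch below (the exact fieldRefs are an assumption about this setup, not confirmed in the thread). Because they appear only on the running pod and not in the operator's desired spec, the operator sees drift and recreates the jenkins-instance pod.
```yaml
# Sketch of the env block injected into the jenkins-instance container:
env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # address of the node-local Datadog agent
  - name: DD_ENTITY_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid    # pod UID used for tagging
```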
Vinícius Azevedo over 3 years ago
Can someone help me understand an anti-affinity policy I am creating for an application? I'm trying to guarantee each new replica will be deployed in a different AZ, so I came up with the following block:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - ${app-name}
          topologyKey: topology.kubernetes.io/zone
```
So, my thinking is:
1. I am creating an anti-affinity policy, meaning I want to avoid, if possible (thus the preferred element), assigning the pod to a node according to certain rules.
2. I am using podAffinityTerm and labelSelector, meaning I am looking for labels on pods that are already assigned to my cluster.
3. I am looking for pods with a label common to all my application pods (the scheduler will use the information in matchExpressions, key, operator, and values).
4. I am telling the scheduler to look for a different zone (as defined in topologyKey) if the criteria are met.
5. I know that if all the zones are already filled with pods of my application, then the scheduler will be free to assign the pod to any node (because no other rules are defined, and I defined a soft rule for the anti-affinity).
So is my understanding correct?
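The same soft "spread across zones" intent can also be expressed with topologySpreadConstraints; a minimal sketch, assuming the same app.kubernetes.io/name label and placeholder value as above:
```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway   # soft constraint, like the "preferred" anti-affinity rule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: ${app-name}
```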
Anirudh Ramanathan over 3 years ago
Hi folks, I'm Anirudh. I used to work on K8s core controllers, and for the past two years I've been working on a platform called Signadot for testing microservices in K8s at scale. Rather than stamping out new copies of infrastructure, the approach we took is to use request-level tenancy and dynamic request routing to isolate environments. This makes it possible to get lightweight environments that share resources with each other while isolating at the request level, similar to how the "copy-on-write" model works for memory. Just launched on Product Hunt and would love to get feedback if you have a few minutes to spare. TIA!
mimo over 3 years ago (edited)
Does anyone know how to troubleshoot a pod stuck in ContainerCreating? Logs and events don't show anything that might be telling... really weird.
Adnan over 3 years ago
I am trying to get the aws-ebs-csi-driver Helm chart working on an EKS 1.23 cluster. The message I am getting from PVC events:
```
failed to provision volume with StorageClass "gp2": error generating accessibility requirements: no topology key found on CSINode
```
The CSI topology feature docs say that:
• The PluginCapability must support VOLUME_ACCESSIBILITY_CONSTRAINTS.
• The plugin must fill in accessible_topology in NodeGetInfoResponse. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
• During CreateVolume, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements.
I am not sure how to configure these points.
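For illustration, a minimal StorageClass sketch that targets the CSI provisioner with topology-aware (WaitForFirstConsumer) binding; the name and gp3 parameters are placeholders, and it assumes the driver's node DaemonSet is running and has registered its topology on the CSINode objects:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                 # hypothetical name
provisioner: ebs.csi.aws.com    # the aws-ebs-csi-driver provisioner
volumeBindingMode: WaitForFirstConsumer   # lets the scheduler pick the AZ first
parameters:
  type: gp3
```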
tamsky over 3 years ago (edited)
If anyone here has kicked the tires on https://acorn.io/ -- I'd be interested to hear your thoughts.
edit: "Acorn is a containerized application packaging framework that simplifies deployment on Kubernetes"
edit: "Acorn is a containerized application packaging framework that simplifies deployment on Kubernetes"
Sean Turner over 3 years ago (edited)
EKS question. How does one use pod security groups to connect the traffic between the ALB SG and the pod SG? Using the ALB Ingress.
I've got the traffic between the pod and RDS SGs working fine, but the traffic between the ALB and the pod is only permitted when I do the following:
• open TCP 4200 on the pod security group from the VPC CIDR
• open TCP 30141 on the pod security group from the VPC CIDR
Any combination of allowing those ports from the ALB SG doesn't work. That also includes the ALB ingress shared SG.
(Where 4200 is the container port, and 30141 is the service NodePort.)
edit ----
Got this working. Needed to open the following:
• open TCP 4200 on the pod security group from the node security group ID
• open TCP 30141 on the pod security group from the node security group ID
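The pod-to-SG attachment described above is typically done with the VPC CNI's SecurityGroupPolicy resource; a minimal sketch with placeholder names, labels, and IDs, while the ALB/node security-group rules themselves remain plain AWS SG rules as listed above:
```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: app-pod-sg              # hypothetical name
  namespace: default            # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-app               # placeholder pod label
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0    # the pod security group (placeholder ID)
```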
Joaquin Menchaca over 3 years ago
Anyone have experience with different service meshes? I have gotten Istio, Linkerd, and NGINX Service Mesh working, but when I tried Consul I could not get off the ground. Their community is not too responsive, sadly.