18 messages
Andrew Nazarov over 4 years ago
What do you folks have as a policy on modifying the K8s YAMLs of running services directly? If you deploy with Helm, for example, is it allowed in your org to make one-off direct changes to the YAMLs of running services in special cases?
Sean Turner over 4 years ago
Hey, does anyone have a reference for deploying the AWS ALB controller with Terraform (the new one, not the aws-alb-ingress-controller)? Every resource I come across uses eksctl, and the create-service-account step confuses me, as I don't know what the role and its trust relationship look like.
Brian A. over 4 years ago
@Sean Turner how about https://registry.terraform.io/modules/iplabs/alb-ingress-controller/kubernetes/latest
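For reference, what eksctl creates in that step is essentially a ServiceAccount annotated with an IAM role ARN (IRSA), where the role's trust policy allows the cluster's OIDC provider to assume it. A minimal sketch, with a placeholder account ID and role name:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # IRSA: the role's trust policy must let the cluster's OIDC
    # provider assume it via sts:AssumeRoleWithWebIdentity.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKSLoadBalancerControllerRole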
Adnan over 4 years ago
Hi All,
Does anybody know a good guide/book/video that explains how you autoscale pods?
More specifically, how do you choose the best metric?
bradym over 4 years ago
I just came across this article yesterday: https://learnk8s.io/kubernetes-autoscaling-strategies
I haven't read it yet myself, just added it to my "to read" list so far.
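For a first pass, CPU utilization via the HPA's built-in Resource metric is the usual starting point before reaching for custom metrics. A minimal sketch, with placeholder names and threshold:

apiVersion: autoscaling/v2   # autoscaling/v2beta2 on clusters older than 1.23
kind: HorizontalPodAutoscaler
metadata:
  name: api                  # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                # placeholder
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # placeholder threshold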
btai over 4 years ago (edited)
Anyone have a good way to limit the number of pods of a given type actively deploying on a node? For example, if 6 API pods are all scheduled onto a node at the same time and the limit is 3, we only want 3 API pods actively spinning up on that node.
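There's no scheduler knob that caps how many pods of a type start up on a node concurrently, but topologySpreadConstraints (or pod anti-affinity) is the closest native lever: it bounds how unevenly the pods land per node in the first place. A sketch, assuming the pods carry an app: api label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                                 # placeholder
spec:
  replicas: 6
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                          # at most 1 more pod than the least-loaded node
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: api
      containers:
      - name: api
        image: api:latest                   # placeholder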
Mr.Devops over 4 years ago
I rarely see talks about AKS. Wondering if anyone is using it and has it prod-ready?
mfridh over 4 years ago
- EKS Managed Node Groups ... I just realized (if I realized correctly?) there's no way to assign load balancer Target Groups when using managed node groups... I guess this is because the "Amazon way" would mainly be to use the AWS Load Balancer Controller rather than relying on registering nodes in target groups...
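With the AWS Load Balancer Controller installed, the equivalent of registering instances in a target group is a TargetGroupBinding, which attaches an existing target group to a Service. A sketch, with placeholder names and ARN:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb      # placeholder
  namespace: default        # must match the Service's namespace
spec:
  serviceRef:
    name: my-service        # placeholder Service
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/my-tg/0123456789abcdef  # placeholder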
mfridh over 4 years ago
What are folks' average HPA CPU settings across your deployments in serious Kubernetes clusters? Anything interesting to share?
Shreyank Sharma over 4 years ago
Hi all,
We are running a dev Kubernetes cluster, version 1.11.4 (installed using kops on AWS), with a 3 TB PVC (EBS volume). Our app is not using much of that data, so we want to shrink it to 1 TB.
After reading various links I've learned that a PVC cannot be shrunk (only expanded).
Is there any other way to achieve shrinking a PVC?
Thanks
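Since Kubernetes can't shrink a PVC, the usual workaround is to create a new, smaller PVC and copy the data across, e.g. with a one-off Job, then point the app at the new claim. A sketch, assuming placeholder claim names data-3tb and data-1tb:

apiVersion: batch/v1
kind: Job
metadata:
  name: pvc-shrink-copy         # placeholder
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: copy
        image: alpine:3
        # -a preserves permissions/ownership while copying
        command: ["sh", "-c", "cp -a /old/. /new/"]
        volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
      volumes:
      - name: old
        persistentVolumeClaim:
          claimName: data-3tb   # existing 3 TB claim (placeholder name)
      - name: new
        persistentVolumeClaim:
          claimName: data-1tb   # new 1 TB claim (placeholder name)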
mfridh over 4 years ago (edited)
nice!
...
export KUBECTL_EXTERNAL_DIFF="git-diff"
~/bin/git-diff:
#!/bin/bash
exec git diff --no-index --color --color-moved -- "$@"
...
kubectl diff -f foo.yaml
Joaquin Menchaca over 4 years ago
Anyone know how to get Let's Encrypt certs to work w/ cert-manager so that sites are trusted?
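The usual gotcha is issuing from Let's Encrypt's staging endpoint, whose certs browsers don't trust; pointing the issuer at the production ACME URL is what makes sites come out trusted. A minimal ClusterIssuer sketch, with a placeholder email:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # production endpoint; the staging endpoint issues untrusted certs
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx                    # assumes an nginx ingress controller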
Pierre-Yves over 4 years ago (edited)
Hello,
Can you help me customize my ingress to allow a port other than 80? Here I need port 5044.
I have looked here and there but did not find a solution to change the source port.
$ kubectl get ing -n elk
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-logstash <none> logstash.mydomain 172.22.54.200 80 18m
my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-logstash
  namespace: elk
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: logstash.mydomain
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific # default
        backend:
          serviceName: logstash-main
          servicePort: 5044
the ingress describe:
$ kubectl describe ingress ingress-logstash -n elk
Name: ingress-logstash
Namespace: elk
Address: 172.22.54.200
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
logstash.s-05.saas-fr
/ logstash-main:5044 (10.244.4.118:5044)
Annotations: kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 18m (x3 over 22m) nginx-ingress-controller Scheduled for sync
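For what it's worth, an Ingress resource only routes HTTP(S) on ports 80/443; for a raw TCP port like 5044, ingress-nginx instead uses a tcp-services ConfigMap (referenced by the controller's --tcp-services-configmap flag), plus the port opened on the controller's Service. A sketch, assuming the controller runs in the ingress-nginx namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx     # wherever the controller runs (assumption)
data:
  # external port: "namespace/service:port"
  "5044": "elk/logstash-main:5044"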
Joaquin Menchaca over 4 years ago
Follow-up newbie cert-manager question. I would like to use a wildcard cert for all of the web traffic, e.g. *.example.com. So my question is: when I specify the tls map in the ingress, do I need to include all the domain names in the certificate's list, or just put the wildcard there? The examples out there lead one to believe you put in all the domains you want to use. Or should I use the Certificate CRD and then somehow reference that in the ingress? Really fuzzy on this.
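One way to read it: with a wildcard you only list the wildcard name itself (Let's Encrypt requires a DNS-01 solver for wildcards), create a Certificate, and then reference its secretName from the ingress tls block. A sketch with placeholder names:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com             # placeholder
  namespace: default
spec:
  secretName: wildcard-example-com-tls   # reference this in the Ingress tls block
  issuerRef:
    name: letsencrypt-prod               # assumes a DNS-01-capable issuer
    kind: ClusterIssuer
  dnsNames:
  - "*.example.com"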
tomv over 4 years ago (edited)
Anyone know of a way to force ArgoCD to do a sync that updates the Helm revision so I can roll deployments?
Aumkar Prajapati over 4 years ago (edited)
Hey all, had a Kubernetes-related autoscaling question. We have an old k8s cluster that's using kops; we've outgrown our max node size, so we adjusted the scaling count from max 35 to max 40. I also adjusted the cluster-autoscaler's max state to match. We fired off a job to test it, but when we describe the pod it still says the max state is 35 nodes. Any ideas on what's up? Configs as follows:
command:
- ./cluster-autoscaler
- -v=4
- --cloud-provider=aws
- --namespace=kube-system
- --logtostderr=true
- --stderrthreshold=info
- --expander=least-waste
- --balance-similar-node-groups=true
- --skip-nodes-with-local-storage=false
# Repeat the below line for new ASG
- --nodes=5:40:nodes.k8s
The autoscaling group in AWS was also bumped to a max of 40 but this error persists:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 28s (x4 over 57s) default-scheduler 0/35 nodes are available: 35 Insufficient cpu, 35 Insufficient memory.
Normal NotTriggerScaleUp 6s (x5 over 47s) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
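Note the second event says the pod wouldn't fit even if a new node were added, which usually means its resource requests exceed a single node's allocatable capacity, so raising the max alone may not help. Separately, with kops the ASG limits are owned by the InstanceGroup and direct AWS edits get reconciled away, so the maxSize belongs in the IG spec. A sketch with assumed names, applied via kops update cluster --yes:

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes                 # assumed IG name
  labels:
    kops.k8s.io/cluster: k8s  # assumed cluster name
spec:
  role: Node
  minSize: 5
  maxSize: 40                 # must match the --nodes=5:40 flag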
Shreyank Sharma over 4 years ago
Hello,
We are using NGINX as our ingress controller.
We have an ingress with host my.domain.com, and it was working fine for years, but recently, accessing my.domain.com randomly gives "502 Bad Gateway (nginx/1.15.5)".
• We tried some of the quick debugging for 502 issues.
• When the issue occurs we logged into the nodes and did a curl request to POD_IP:8080/index.html; it worked.
• We don't have any other ingress configured with the same host and path that might conflict.
• There are no recent pod restarts or events in the ingress-controller pod.
Also, when the "502 Bad Gateway (nginx/1.15.5)" occurred, the ingress-controller pod shows:
2021/06/30 08:59:50 [error] 1050#1050: *1352377 connect() failed (111: Connection refused) while connecting to upstream, client: CLIENT_IP_HERE, server: my.domain.com, request: "GET /index.html HTTP/2.0", upstream: "http://POD_IP:8080/index.html", host: "my.domain.com", referrer: "https://my.domain/index.html"
So, according to a link:
The 502 HTTP status code returned by Nginx means that Nginx was unable to contact your application at the time of the request.
According to the statement above, is the issue with the pod or with the ingress controller?
But most of the time my.domain.com is accessible and the issue looks intermittent.
Is there any other place we should check for logs? Has anyone experienced the same issue?
Thanks in advance.
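Intermittent 502s with "connection refused" on the upstream often mean the controller is routing to pods that aren't ready yet or are mid-shutdown; a readinessProbe plus a short preStop delay is a common mitigation. A sketch of the container section inside the pod template, with a placeholder image:

    containers:
    - name: app
      image: my-app:latest            # placeholder
      ports:
      - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /index.html
          port: 8080
        periodSeconds: 5
      lifecycle:
        preStop:
          exec:
            # keep serving briefly so nginx can drop the pod from its upstreams
            command: ["sleep", "10"]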