11 messages
Adnan over 3 years ago
Did anybody ever experience response time lagging between an nginx and an application pod?
I have some strange intermittent issues where the application responds in e.g. 300ms but the nginx in front of it responds in 24s.
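A rough way to narrow this down, offered as a sketch only, is to time the application Service directly from inside the cluster and compare it with what nginx reports; the Service name and port here are hypothetical:
# hypothetical Service name/port; compare these timings with the nginx logs
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null \
  -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  http://my-app.default.svc.cluster.local:8080/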
Shreyank Sharma over 3 years ago
Hello All,
We are running ELK in Kubernetes, installed via Helm charts.
It was installed from the stable Helm repo (https://github.com/helm/charts/tree/master/stable, which is now deprecated; all charts have moved to the elastic repo). Elasticsearch, Logstash and Kibana are all on version 6.7.0, and now we want to upgrade to the latest version, or at least to 7.
The latest Elasticsearch chart in the stable repo is 6.8.6, so I'm assuming I can't get to version 7 or 8 just by running "helm upgrade".
So my questions are:
1. Do we have to recreate the whole ELK cluster to upgrade to 7 or 8, downloading the chart from the elastic repo?
2. Is there a way to upgrade to version 7 or 8 without changing the repo (stable to elastic)?
Thanks in advance
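A minimal sketch of the usual path, assuming a release named elasticsearch in namespace logging (both hypothetical): the stable and elastic charts are laid out differently, so moving between them is typically a fresh install of the elastic chart rather than an in-place helm upgrade, and Elasticsearch itself is normally taken through 6.8.x before a 7.x rolling upgrade.
# add the elastic repo and see which chart versions exist
helm repo add elastic https://helm.elastic.co
helm repo update
helm search repo elastic/elasticsearch --versions | head
# hypothetical release name, namespace and values file
helm install elasticsearch elastic/elasticsearch --version 7.17.3 \
  --namespace logging -f values.yaml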
Sean Turner over 3 years ago (edited)
Hey all, running EKS.
Is there a way to get certain pods to scale on spot nodes, and additionally fall back to on demand when there's no capacity? Alternatively, is there a way to run 10% of a workload on demand and 90% on spot?
Context:
I'm trying to use tolerations and affinity to make EMR on EKS pods run on spot. When these cause a scale up, they sometimes autoscale the spot nodes and sometimes the on-demand nodes. I would ideally like the spot nodes to be autoscaled until that's not possible, and then fall back to the on-demand nodes.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: executor-emr
              operator: In
              values:
                - "true"
tolerations:
  - key: executor-emr
    operator: Equal
    value: "true"
    effect: NoSchedule
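Not an authoritative answer, but one sketch: EKS managed node groups label their nodes with eks.amazonaws.com/capacityType (SPOT or ON_DEMAND), so a preferred node affinity on that label makes the scheduler favour existing spot nodes while still allowing on demand as a fallback. Whether the cluster autoscaler scales the spot group first is a separate question (the priority expander is the usual tool for that). Weights and structure below are illustrative only:
# sketch; assumes EKS managed node groups, which label nodes with
# eks.amazonaws.com/capacityType=SPOT or ON_DEMAND
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                 # prefer spot when such a node exists
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/capacityType
              operator: In
              values:
                - SPOT
# no "required" term, so pods can still land on on-demand nodes as a fallback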
Adnan over 3 years ago
I have a weird issue with my nginx-ingress-controller.
Sometimes it has the following log values from the nginx-ingress-controller:
upstream_duration: 4.560, 0.328
Where did 4 seconds go?
The upstream is another nginx with a duration of 328 ms.
Did anybody experience something like this before?
How could I debug this?
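One debugging angle, offered tentatively: in nginx, a comma-separated $upstream_response_time usually means the request was tried against more than one upstream endpoint, and the ingress-nginx controller lets you log the connect/header phases separately through its ConfigMap. A sketch, assuming the default ConfigMap name and namespace:
# assumes the controller reads the ConfigMap "ingress-nginx-controller"
# in namespace "ingress-nginx" (the usual defaults)
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  log-format-upstream: >-
    $remote_addr "$request" $status
    request_time=$request_time
    upstream_connect_time=$upstream_connect_time
    upstream_header_time=$upstream_header_time
    upstream_response_time=$upstream_response_time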
Unknown User over 3 years ago (edited)
What's the normal number of namespaces you've seen in clusters?
sheldonh over 3 years ago
Second question, more thread-oriented 🧵
If you install a Helm chart that does automation with Kubernetes, such as managing secrets and other things,
and you want to install it into a namespace,
would you expect the app to only operate within that namespace, even though it is doing Kubernetes automation? My gut says a namespaced app should default to only acting on secrets and resources in its own namespace, regardless of RBAC allowing more. However, that's my assumption. Curious whether anyone else thinks cluster-wide automation for a namespaced install should be opt-in or opt-out as a rule.
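For comparison, the namespace-scoped pattern is a Role plus RoleBinding rather than a ClusterRole, so the chart's service account can only touch secrets in its own namespace. All names below are hypothetical:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: automation-secrets        # hypothetical
  namespace: tools                # the namespace the chart is installed into
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: automation-secrets
  namespace: tools
subjects:
  - kind: ServiceAccount
    name: automation-sa           # hypothetical service account used by the chart
    namespace: tools
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: automation-secrets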
Adnan over 3 years ago
Anybody reliably used CRON_TZ in CronJobs on version 1.22?
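For reference, the shape in question is the CRON_TZ prefix in the schedule; it works because of the cron library the controller uses, but it was not an officially supported part of the API in 1.22, so this is only a sketch with made-up names:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical
spec:
  schedule: "CRON_TZ=Europe/Berlin 0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox      # placeholder image
              command: ["sh", "-c", "date"]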
tyu over 3 years ago
Hello, is anyone here familiar with Istio? I'm a beginner trying to get started but running into some issues.
mimo over 3 years ago
Hey
Is there an alternative to matchLabels? For example, if I want to match only at least 2 of the labels and not all of them.
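The closest thing is matchExpressions, which is more flexible than matchLabels, although the expressions are still ANDed together, so "any 2 out of N labels" can't be expressed in a single selector. A sketch with made-up label keys:
selector:
  matchExpressions:
    - key: app                    # hypothetical label key
      operator: In
      values: ["web", "api"]      # the value may be either of these
    - key: tier                   # hypothetical; only requires the label to exist
      operator: Exists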
Adnan over 3 years ago
Is it possible to run pods only on nodes within a specified subnet?
If, for some reason, nodes cannot be started in that subnet, start the pods on the other nodes?
Adnan over 3 years ago
nodeAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 10
      preference:
        matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
              - eu-central-1a
              - eu-central-1c
    - weight: 90
      preference:
        matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
              - eu-central-1b
With this nodeAffinity configuration, Kubernetes will try to schedule:
• 90% of the pods on nodes with the label topology.kubernetes.io/zone=eu-central-1b and
• 10% of the pods on nodes with the label topology.kubernetes.io/zone=eu-central-1a or topology.kubernetes.io/zone=eu-central-1c
Do I understand this correctly?