58 messages
jedineeper · almost 6 years ago
Has anyone found a more automated way to roll k8s nodes through replacement with Terraform, other than spinning up another ASG, cordoning/draining the old nodes, and then running tf again to remove them?
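(For reference, the manual pass being described is roughly this; a sketch, with the nodegroup label as a placeholder for however your ASG's nodes are tagged:)
# Cordon and drain every node in the outgoing ASG before tf removes it.
for node in $(kubectl get nodes -l nodegroup=old-asg -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-local-data --timeout=300s
done
# Once workloads have rescheduled onto the new ASG, terraform apply removes the old one.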
Zachary Loeber · almost 6 years ago
That video shows what a deployment pipeline looks like that pushes a whole Kubernetes cluster out, then installs Airflow on it, and then pushes pipelines that run data science jobs to the same cluster.
Zachary Loeber · almost 6 years ago
Short script to get the latest version of Minikube running on Ubuntu 19.10: https://gist.github.com/zloeber/528bcce2e4b45465c940a08f10551ccb
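(The gist itself isn't reproduced in this archive, but the usual install on Ubuntu is along these lines; a sketch, not necessarily the linked script:)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start   # assumes a driver such as KVM or Docker is already set up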
Zachary Loeber · almost 6 years ago
FleetOps -> https://thenextweb.com/growth-quarters/2020/04/03/devops-isnt-enough-your-team-needs-to-embrace-fleetops/ (pretty much another way of saying you should treat everything as if it were part of a PaaS, I think).
Zachary Loeber · almost 6 years ago
and to follow that up, this nifty-looking project from Rancher developed by a dude I follow on twitter: https://rancher.com/blog/2020/fleet-management-kubernetes/
Ryan Smith · almost 6 years ago
Hi all, weird question for ya
Ryan Smith · almost 6 years ago (edited)
EKS 1.14. 1 cluster. 2 namespaces. Opened up SG (for debugging). amazon-k8s-cni:v1.5.7
Deployed svc + deployment in both namespaces. I have a pod from both namespaces on the same ec2 instance. I have a VPN giving me access to the cluster.
I can curl one pod in one namespace. I cannot curl the other pod in the other namespace. All the k8s specs for svc + deployment are the same. They're both using secondary IPs.
I realize this is hyper specific, but just curious if this sounds familiar to anyone
(I've tried to isolate it down to just 2 identical pods in different namespaces)
Guessing it's related to some hardcore networking issue in the CNI.. I'm able to hit the pods from within the same VPC with the same CIDR block without issue.. but when I leave the CIDR block, it causes trouble
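(For anyone hitting something similar, one way to split pod-level from VPN/routing problems is to repeat the curl from inside the cluster; pod names, namespaces, and ports below are placeholders:)
kubectl get pods -n ns-a -o wide   # note each pod's secondary IP
kubectl get pods -n ns-b -o wide
# curl both pod IPs from a throwaway in-cluster pod, bypassing the VPN path entirely
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  --command -- curl -sv http://POD_IP:PORT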
Zachary Loeber · almost 6 years ago
anyone happen to tinker with kpt yet? https://googlecontainertools.github.io/kpt/
Ryan Smith · almost 6 years ago (edited)
AWS EKS -> ALB Target Group with CNI question...
So on EKS, we have CNI enabled so each pod has an IP address in the VPC subnet. We have an ALB going directly to the pods' IP addresses. So if we have 50 pods, there are 50 entries in the target group.
Question:
Has anyone spent time fine tuning Deregistration Delay in coordination with aws-alb-ingress-controller (for large deployments; many pods)?
EDIT1:
https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#custom-attributes
From the docs' example:
- set the slow start duration to 5 seconds:
  alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=5
- set the deregistration delay to 30 seconds:
  alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
Hmm, this is suggesting 30s, but dunno if it's battle-tested
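(In context, that attribute rides on the Ingress handled by aws-alb-ingress-controller; a minimal sketch with placeholder names, using the extensions/v1beta1 schema of the era:)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: app
              servicePort: 80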
Ryan Smith · almost 6 years ago (edited)
➜ ~ kubectl get nodes | grep fargate
fargate-ip-xxx-xxx-xxx-xxx.ec2.internal
Interesting seeing Fargate EKS assigning ec2 instances?
bradym · almost 6 years ago
I just assume that everything runs on ec2 instances
David Hubbell · almost 6 years ago
Any opinions on kube-aws vs kops?
David Hubbell · almost 6 years ago
(for provisioning in AWS)
David Hubbell · almost 6 years ago
I created a cluster with kube-aws yesterday and it wasn't too bad. Now I'm getting recommendations to use kops from someone that used it 2 years ago.
Zachary Loeber · almost 6 years ago
I used kops like 2 years ago as well; it seemed OK, but if you're going to deploy managed clusters and still use CLI scripts to do so, eksctl seems the way to go.
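(For the record, the eksctl path to a managed cluster is a one-liner; name, region, and node count below are placeholders:)
eksctl create cluster \
  --name demo \
  --region us-west-2 \
  --managed \
  --nodes 3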
Zachary Loeber · almost 6 years ago
I question the longevity of a solution based on such scripts though.
Zachary Loeber · almost 6 years ago
though kops can generate terraform configurations, cool beans - https://github.com/kubernetes/kops/blob/master/docs/terraform.md
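(Per those docs, the flow is roughly the following; cluster name, state store, and zone are placeholders:)
kops create cluster \
  --name=cluster.example.com \
  --state=s3://my-kops-state \
  --zones=us-west-2a \
  --target=terraform \
  --out=.
terraform init && terraform plan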
Erik Osterman (Cloud Posse) · almost 6 years ago
I believe the time for kops on AWS has come and gone. It's moving slower and alternatives have caught up. Now with AWS supporting fully managed node pools, EKS is the way to go.
Erik Osterman (Cloud Posse) · almost 6 years ago
We've switched over to deploying EKS for all new engagements.
Erik Osterman (Cloud Posse) · almost 6 years ago
Up until the managed node groups, I was on the fence as to the right way to go.
Zachary Loeber · almost 6 years ago
I'm curious if there are any workloads which you might recommend self-managed clusters for at this point?
David Hubbell · almost 6 years ago (edited)
EKS is not FedRAMP compliant (yet), so the recommendation (from AWS) is to run K8s manually on EC2 until compliance is reached. As a result, eksctl is out as an option.
Pierre Humberdroz · almost 6 years ago (edited)
Also, my issue with EKS is that they lag super far behind the k8s release cycle.
Ryan Smith · almost 6 years ago
Anyone have issues using
service.beta.kubernetes.io/aws-load-balancer-type: nlb
attached to their service.. for a bunch of services.. then all your security group rules get consumed on the EKS nodes' SG?
Ryan Smith · almost 6 years ago
app-1   LoadBalancer   172.20.179.5   00000000000000-00000000000000.elb.us-west-2.amazonaws.com   3000:32043/TCP   10d
SG rules get added for like.. port 32043
(even though I already have rules that don't require this..)
Ryan Smith · almost 6 years ago (edited)
I guess the question is.. how can I stop these inbound rule additions on the SG used for the EKS nodes?
EDIT:
Solution.. just use classic LB 😅
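(For context, the switch is just the annotation on the Service; a minimal sketch with placeholder names. With nlb, the in-tree controller opens the NodePort on the node SG per listener; dropping the annotation falls back to a classic ELB:)
apiVersion: v1
kind: Service
metadata:
  name: app-1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb   # omit for a classic ELB
spec:
  type: LoadBalancer
  ports:
    - port: 3000
      targetPort: 3000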
Marcin Brański · almost 6 years ago (edited)
I've seen two ingresses using the same DNS domain but different paths and different nginx-ingress annotations. Is that supported? Will one ingress be used, or will nginx-ingress somehow merge them? I'm not sure how nginx will resolve paths when they overlap, e.g. one ingress using /v4 and the second /v4/api_xxx.
Zachary Loeber · almost 6 years ago
I don't believe that will work. One of the two load balancers that back the ingresses would need to be hit first based on how DNS works (unless you have some upstream traffic routing mechanism).
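(For concreteness, the setup being asked about is two Ingress objects sharing a host, as below with placeholder names. nginx-ingress does merge rules for the same host across Ingress resources into one server block, resolving overlapping paths by nginx location matching; how conflicting annotations combine is version-dependent:)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-v4
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /v4
            backend:
              serviceName: svc-v4
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-v4-xxx
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /v4/api_xxx
            backend:
              serviceName: svc-v4-xxx
              servicePort: 80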
Vikram Yerneni · almost 6 years ago
Anyone facing an issue where the service name is not being picked up when deploying a Helm chart on Kubernetes, and the service is getting created with a random naming scheme?
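(If the "random" part is the release name being baked into resource names, most charts that follow the helm create conventions let you pin it; a hedged sketch, assuming the chart uses the standard fullname helper:)
# values.yaml
fullnameOverride: my-service   # resources render as "my-service" instead of "<release>-<chart>"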
Zachary Loeber · almost 6 years ago
curious if anyone has taken a look at Keptn yet, https://keptn.sh/
Erik Osterman (Cloud Posse) · almost 6 years ago
upvote! 😃
David Scott · almost 6 years ago
I found that all of my EKS clusters that were originally created on 1.11 are missing the kube-proxy-config configmap (k get cm -n kube-system kube-proxy-config). The configmap is present on clusters created on later versions. The EKS update instructions only patch the image version in kube-proxy. Has anyone else dealt with this? I'm digging into it because I want to edit the metricsBindAddress to allow Prometheus to scrape kube-proxy.
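(On clusters that do have the configmap, the edit in question looks roughly like this; a sketch, not official EKS guidance:)
kubectl -n kube-system edit cm kube-proxy-config
# change:
#   metricsBindAddress: 127.0.0.1:10249
# to:
#   metricsBindAddress: 0.0.0.0:10249
# then bounce the daemonset so the pods re-read the config (kubectl >= 1.15):
kubectl -n kube-system rollout restart daemonset kube-proxy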
Sean Turner · almost 6 years ago (edited)
I’m running into a bit of confusion. Does anything look glaringly out of place here?
For some reason, creating the internal NLB in AWS with the below yaml is using nodePorts. Is this normal? Trying to make Spinnaker accessible over transit gateway but having difficulty.
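(The yaml itself didn't survive the archive. An internal NLB Service generally looks like the sketch below, and yes, nodePorts are expected: a type: LoadBalancer Service always allocates them at this k8s version, and the NLB forwards traffic to those node ports. Names and ports are placeholders:)
apiVersion: v1
kind: Service
metadata:
  name: spin-deck
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # older in-tree controller versions expect "0.0.0.0/0" rather than "true" here
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000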
Zachary Loeber · almost 6 years ago
Just pretty waves and a single link to a github project 🙂
curious deviant · almost 6 years ago
Hello,
I am facing a dilemma that I am sure other folks must have come across.
So we have an application team deploying their service to our shared EKS cluster. The application is exposed externally via a CLB (this will be revisited in a month or so to replace with an API gateway etc.). The challenge I am facing is that the DNS and the cert that this service manifest refers to must be created via TF. Looks like there's no way to tell a K8s service to use a particular LB as its load balancer. We have to go the other way round: create the LB and refer to that in TF to find the DNS details. This fails too so far. I am using aws_lb as a data source and trying to read the zone id of the LB created by the K8s service. How have others solved this?
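(One hedged way around it with the Terraform kubernetes provider: let k8s create the CLB, then read the Service back as a data source and hang a CNAME off its hostname, which avoids needing the LB's zone id at all. Resource names are placeholders, and the attribute path varies by provider version:)
data "kubernetes_service" "app" {
  metadata {
    name      = "app"
    namespace = "default"
  }
}

resource "aws_route53_record" "app" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  # attribute path for the 1.x provider; newer releases moved this under status
  records = [data.kubernetes_service.app.load_balancer_ingress.0.hostname]
}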
Zachary Loeber · almost 6 years ago
Got totally sidetracked today and ended up creating this little project: setting up a local lab environment in Linux for CKA studies using Terraform and libvirt: https://github.com/zloeber/k8s-lab-terraform-libvirt. It is just a nifty way to spin up 3 local Ubuntu servers using Terraform, but fun nonetheless (well, fun for me at least...)
Vikram Yerneni · almost 6 years ago
The Helm stable/prometheus server dashboard is exposed using the alb-ingress-controller. Somehow the Prometheus webpage is not loading fully (a few parts of the webpage are not getting loaded and are throwing 404 errors). Here is the ingress configuration:
Vikram Yerneni · almost 6 years ago
ingress:
  ## If true, Prometheus server Ingress will be created
  ##
  enabled: true
  ## Prometheus server Ingress annotations
  ##
  annotations:
    kubernetes.io/ingress.class: 'alb'
    #kubernetes.io/tls-acme: 'true'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60'
    alb.ingress.kubernetes.io/certificate-arn: certname
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  labels: {}
  path: /*
  hosts:
    - prometheus.company.com
  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  extraPaths:
    - path: /*
      backend:
        serviceName: ssl-redirect
        servicePort: use-annotation
service:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
Vikram Yerneni · almost 6 years ago
Sorry for the mishap
Vikram Yerneni · almost 6 years ago
Anyone gone through this issue before fellas?
joey · almost 6 years ago (edited)
what's the address for prometheus-server or grafana configured as? does it match the url you're using to hit the alb? if you look in inspect and see what the request host and uri of the assets not being loaded are, are you requesting the right resource?
Szymon · almost 6 years ago
hi, any idea how I can change the language of the Minikube CLI? It probably gets the setting from my locale (PL), but I'd like to force English.
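(Untested, but minikube picks its language from the standard locale environment variables, so overriding them should do it:)
LC_ALL=C minikube start        # one-off
export LANG=en_US.UTF-8        # or persistently, e.g. in ~/.bashrc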
Milosb · almost 6 years ago
Guys, I took over some k8s that I need to adjust. I see a bunch of env variables, like 50+ per deployment manifest. I don't work much with Kubernetes, but it looks like overkill to me. What is best practice? Should they be abstracted with config maps, or is that approach fine? Any other recommendations?
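(The usual cleanup is to move the variables into a ConfigMap and pull them in wholesale with envFrom; a sketch with placeholder names:)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  FEATURE_X: "true"
# ...and in the Deployment's pod spec:
#   containers:
#     - name: app
#       envFrom:
#         - configMapRef:
#             name: app-config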
Zachary Loeber · almost 6 years ago
k8s-deployment-book uses kustomize and kubecutr (a custom kube scaffolding tool by the same author), which may not be everyone's thing but is still worth a once-over as it is well thought out -> https://github.com/mr-karan/k8s-deployment-book
Christian Roy · almost 6 years ago
Hi ppl! What do you use to keep your secrets well... secret... when it comes to your yaml files stored in a repo?
Do you store them elsewhere?
Do you use tools like sealed-secrets, helm-secrets or Kamus?
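(Of the tools named, the sealed-secrets flow is probably the most common; a sketch assuming the controller is installed in-cluster, with placeholder values:)
# encrypt against the cluster's public key; the SealedSecret is safe to commit
kubectl create secret generic db-creds \
  --from-literal=password=hunter2 \
  --dry-run -o yaml \
  | kubeseal --format yaml > sealed-db-creds.yaml
git add sealed-db-creds.yaml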
Adam Blackwell · almost 6 years ago (edited)
Does anyone have any stack graph-esque minikube development flows that they would recommend?
We're using ArgoCD + smashing the sync button and I've looked at how https://garden.io/#iterative figured out smart local redeployments but I'd like to know how others are doing it and if our (certmanager->vault->mysql + elasticsearch) -> the actual app local dev deployment is abnormally complex or slow. (currently takes three syncs and ~8 minutes to go from minikube up to running.)
Ryan Smith · almost 6 years ago
anyone have to fine-tune Nginx for performance in k8s?
worker_processes 2;
events {
    worker_connections 15000;
}
For a 2 CPU container with a 65k file descriptor limit.. thinking this would be safe. I have a generous k8s HPA also, so maybe fine-tuning is a frivolous exercise
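(Sanity check on "safe": 2 workers × 15,000 connections caps you at 30,000 concurrent connections, and since a proxied connection typically holds two descriptors, one to the client and one upstream, that's roughly 60,000 fds, just under the 65k limit.)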
Pierre Humberdroz · almost 6 years ago
btw:
https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md
Seems like ingress-nginx got a couple of bigger updates in the last few days.
jedineeper · almost 6 years ago
Is there a method to promote objects across api versions? E.g. deployments have moved from extensions/v1beta1 to apps/v1. Can they be updated in place or do they need to be destroyed and recreated?
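(Updating in place works: the API server stores a single object that is served under every supported version, so re-applying the manifest with the new apiVersion doesn't recreate anything. kubectl of this era also shipped a built-in converter; a sketch:)
kubectl convert -f deployment.yaml --output-version apps/v1 > deployment-apps-v1.yaml
kubectl apply -f deployment-apps-v1.yaml   # updates the existing object in place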