27 messages
Fernanda Martinsabout 5 years ago
Hey everyone, does anyone know an alternative tool for k9s?
I started using it a month ago but somehow I cannot attach to containers...
Andyabout 5 years ago
Hi, for testing k8s on a mac do people recommend:
1️⃣ microk8s on multipassd
2️⃣ minikube via VirtualBox
Andrew Rothabout 5 years ago
Matt Gowieabout 5 years ago(edited)
Anyone here using Linkerd w/ the new aws-load-balancer-controller? Concerned about this issue and wondering if aws-load-balancer-controller ends up throwing a huge wrench into using Linkerd or service meshes generally. Would be interested to hear if anyone is using this today and if there were any challenges in making it work.
Andyabout 5 years ago
For those interested in the CKA, CKAD, CKS you can get a 50% discount if you register for KubeCon for $10.
https://events.linuxfoundation.org/kubecon-cloudnativecon-euarope/register/
Vlad Ionescu (he/him)about 5 years ago(edited)
Securing Kubernetes access with Hashicorp Boundary is happening now on AWS' Twitch: https://www.twitch.tv/aws
Andrew Nazarovabout 5 years ago(edited)
Telepresence v2 has been released alongside the launch of Ambassador Cloud. Telepresence now includes the functionality that was previously part of Service Preview.
https://www.getambassador.io/products/telepresence
https://blog.getambassador.io/infinite-scale-development-environments-for-kubernetes-teams-9e286a2b1a0d
Erik Osterman (Cloud Posse)about 5 years ago
@Andrew Roth my buddy @Alex Siegman is curious about rancher - has some high-level questions for you
Matt Gowieabout 5 years ago
Has anyone here accomplished canary or blue/green deployments using the aws-load-balancer-controller?
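For what it's worth, the controller's action annotations support weighted forwarding between target groups, which covers the B/G case. A minimal sketch, assuming two hypothetical Services named svc-blue and svc-green:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: alb
    # weighted forward action; shift weights to cut traffic over gradually
    alb.ingress.kubernetes.io/actions.blue-green: >
      {"type":"forward","forwardConfig":{"targetGroups":[
      {"serviceName":"svc-blue","servicePort":"80","weight":90},
      {"serviceName":"svc-green","servicePort":"80","weight":10}]}}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: blue-green    # must match the action annotation's name
              servicePort: use-annotation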
nnsenseabout 5 years ago
Hi
nnsenseabout 5 years ago
Sorry to bother here too (I've asked the same in #terraform), but this time I'll be a good boy and start a thread. I've deployed an EKS cluster with the Cloud Posse eks-cluster and node-groups modules. It works and deploys the cluster along with the nodes but, if I run it a second time or just run terraform refresh, it throws an error stating Error: the server is currently unable to handle the request (get configmaps aws-auth). Everything works, I can even kubectl get pods, so I really don't know what terraform is doing to fail like that. Has anyone seen that and knows how to solve it?
Ikanaabout 5 years ago
Hey, are there any alternatives to OPA/Gatekeeper? I think I remember a better tool being mentioned in one of the office hours.
Patrick Joyceabout 5 years ago
Christianabout 5 years ago
In deploying redis in EKS, would it be best practice to have a separate node group for it (with taints and tolerations), because it's stateful and needs an EBS volume in the same availability zone?
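A minimal sketch of that taint/toleration pattern, assuming a hypothetical node group tainted and labeled dedicated=redis, plus a StorageClass that handles the same-AZ concern:
# Pod template snippet for the redis StatefulSet (label/taint names are hypothetical)
nodeSelector:
  dedicated: redis
tolerations:
  - key: dedicated
    operator: Equal
    value: redis
    effect: NoSchedule
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-gp2
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer  # provision the EBS volume only after the pod is scheduled, so both land in the same AZ
Note that WaitForFirstConsumer addresses the AZ question on its own, so the dedicated node group is more about isolation than about storage.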
alxlencabout 5 years ago
Hi all, I am setting up Istio and the CrunchyData PostgreSQL Operator. I was wondering whether to set up the postgres cluster with an Egress or set it up within an Istio-enabled namespace. Any advice?
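If the database ends up outside the mesh, the egress option usually means registering it with a ServiceEntry. A minimal sketch, with a placeholder hostname:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-postgres
spec:
  hosts:
    - postgres.example.internal   # placeholder for the actual DB endpoint
  location: MESH_EXTERNAL          # traffic leaves the mesh to reach it
  ports:
    - number: 5432
      name: tcp-postgres
      protocol: TCP
  resolution: DNS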
Thomas Hoefkensabout 5 years ago
Hi everyone! I have a deployment on K8S of a pod which can be reached over http port 9090. If I comment out the liveness probe, the pod comes up fine and I can curl 9090 without issues, or display the content in the browser by port-forwarding from localhost to the pod at 9090. If I add the liveness probe though, the deployment via helm fails -->
Liveness probe failed: Get <http://10.153.82.49:9090/>: dial tcp 10.153.82.49:9090: connect: connection refused
This is the config
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 9090
Would you have an idea why that fails with the probe added to the deployment?
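connection refused means nothing was listening on 9090 when the kubelet probed; since a failing liveness probe restarts the container, the rollout can loop and helm reports the deploy as failed. A minimal sketch of one common fix, assuming the app simply isn't listening yet at the 5-second mark (values are illustrative):
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 30   # assumed startup budget; tune to the app's real boot time
            periodSeconds: 10
            failureThreshold: 3       # restart only after ~3 consecutive failures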
Lyn Chenabout 5 years ago
For those who might be interested, we're hosting a free kubernetes for beginners event (admin lmk if this is not ok!) Come hang out! https://app.experiencewelcome.com/events/MQ4uzz/stages/8DzfRY?preview=true
brunoabout 5 years ago
I am currently investigating the feasibility of upgrading EKS clusters using blue/green deployments rather than in-place upgrades.
We use EKS but this is applicable to any kubernetes managed service.
From a high level perspective the workflow would be something like:
1. EKS 1.15 running some workload (green)
2. Provision a new EKS 1.16 with all addons (blue)
3. Switch a flag in our pipeline to deploy to blue cluster (deployed using terraform and helm, we can parameterise the cluster to deploy to)
4. Migrate all workloads from green to blue (using Velero or similar; see the sketch below)
5. Decommission green cluster
Has anyone implemented something similar?
The only relevant blog post I have found is this one https://medium.com/makingtuenti/kubernetes-infrastructure-blue-green-deployments-fca831239c7d but it doesn't give many details on the implementation.
Thanks!
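A minimal sketch of step 4 using Velero's Backup/Restore CRDs (names are hypothetical; both clusters would point at the same backup storage location, and the same flow is available via the velero CLI):
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: green-workloads         # taken on the green (1.15) cluster
  namespace: velero
spec:
  includedNamespaces:
    - "*"
  excludedNamespaces:
    - kube-system               # cluster-specific system config stays behind
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-to-blue         # applied on the blue (1.16) cluster
  namespace: velero
spec:
  backupName: green-workloads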
Matt Gowieabout 5 years ago
Pretty interesting OSS platform I hadnβt heard about before β https://dapr.io/
Andyabout 5 years ago
https://github.com/ContainerSolutions/kubernetes-examples <-- this looks like a good learning resource for simple examples of k8s features
joeyabout 5 years ago(edited)
i was tinkering with botkube.io tonight. it's pretty neat and perhaps the most feature-complete of the various options i've seen (my search was not exhaustive). once the slack app integration can be done on a per-workspace basis and doesn't require adding the slack app from the author (is there really no good slack terraform module?) i think it becomes a lot more reasonable to propose at work.
quite often i have to go back to vendors with data showing their network is broken, in the form of mtrs or tcpdumps. i think it'd be 'relatively' trivial to add some functionality to botkube to let you run a custom container in a specific zone, or run a specific command in a sidecar container in a specific zone, for more rapid troubleshooting. i use thousandeyes and datadog and prometheus for various similar things but this seems like a cool nice-to-have.
kgibalmost 5 years ago
has anyone successfully implemented ingress-nginx with SSL termination on AWS NLB?
kgibalmost 5 years ago
I've been having trouble getting the proper config right
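A minimal sketch of the ingress-nginx controller Service for TLS termination at the NLB, using the in-tree AWS load balancer annotations (the ACM certificate ARN is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111111111111:certificate/placeholder
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http    # TLS ends at the NLB, so the https listener forwards plain HTTP to the controller
The targetPort on 443 is the piece that commonly trips people up: once the NLB terminates TLS, the controller must receive that traffic on its plain-HTTP port.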
Andrew Nazarovalmost 5 years ago
Marcin BraΕskialmost 5 years ago
I'm wondering if there are already some solutions for disaster recovery with active-active K8s clusters, where traffic can be partially switched and security is granular down to the pod level.
I saw this https://portworx.com/kubernetes-disaster-recovery/ but I'm not sure. Maybe someone has used something like it and can share some insights