12 messages
Pierre Humberdroz almost 4 years ago
I remember a tool that checks a cluster for usage of deprecated manifests and spits them out. I know there is pluto, but I believe there is a second one; I'm just not sure. Anyone have an idea?
Andy almost 4 years ago
Hi, are any teams using API gateways within their Kubernetes clusters? Just curious what current solutions people have had good experience with. Searching through the Slack history here I see:
• Kong
• Ambassador Edge Stack - (looks like you have to pay for JWT use though)
• Amazon API Gateway (we use EKS)
Niv Weiss almost 4 years ago
I'm deploying a ValidatingWebhook with the operation rule "CREATE" (and it's working fine), but when I add the "UPDATE" operation it doesn't work. I'm getting this error:
Failed to update endpoint default/webhook-server: Internal error occurred: failed calling webhook "webhook-server.default.svc": failed to call webhook: Post "https://webhook-server.default.svc:443/validate?timeout=5s": dial tcp ip.ip.ip.ip: connect: connection refused
Any help please?
These are my YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-server
  namespace: default
  labels:
    app: webhook-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-server
  template:
    metadata:
      labels:
        app: webhook-server
    spec:
      containers:
        - name: server
          image: webhook-server
          imagePullPolicy: Never
          env:
            - name: FAIL
              value: "false"
          ports:
            - containerPort: 8443
              name: webhook-api
          volumeMounts:
            - name: webhook-tls-certs
              mountPath: /run/secrets/tls
              readOnly: true
      volumes:
        - name: webhook-tls-certs
          secret:
            secretName: webhook-server-tls
---
apiVersion: v1
kind: Service
metadata:
  name: webhook-server
  namespace: default
spec:
  selector:
    app: webhook-server
  ports:
    - port: 443
      targetPort: webhook-api
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-webhook
webhooks:
  - name: webhook-server.default.svc
    namespaceSelector:
      matchExpressions:
        - key: validate
          operator: NotIn
          values: ["skip"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    clientConfig:
      service:
        namespace: "default"
        name: "webhook-server"
        path: "/validate"
      caBundle: ${CA_PEM_B64}
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    timeoutSeconds: 5
    # failurePolicy: Ignore
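For context on the error above: with apiGroups [""], resources ["*"], and UPDATE enabled, the webhook also intercepts updates to its own Endpoints object (default/webhook-server), and since failurePolicy defaults to Fail in admissionregistration.k8s.io/v1, any moment the server is unreachable surfaces as the "connection refused" failure shown. A minimal sketch of a narrower rule block follows; the resource list is an illustrative assumption, not a confirmed fix.
    # Hypothetical narrowing of the rules above, so that UPDATE events on the
    # webhook's own Endpoints/Service objects no longer round-trip through the
    # (possibly unavailable) webhook server.
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]   # assumption: list only what you actually validate
    # Alternatives: keep resources ["*"] but label the default namespace with
    # validate=skip so the existing namespaceSelector excludes it, or uncomment
    # failurePolicy: Ignore.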
Santiago Campuzano almost 4 years ago
Hello everyone! We've been facing an issue with some latency-sensitive services that we deployed to EKS and expose through the NGINX Ingress Controller. The issue is that the conntrack table (used by iptables) fills up and then packets start being dropped. The solution is simply increasing the kernel parameter net.ipv4.netfilter.ip_conntrack_max to a higher value, piece of cake. Since we are using the EKS/AWS-maintained AMI for the worker nodes, this value comes predefined with a relatively small value (our services/apps handle several thousand requests per second). We've been exploring different ways of setting this value properly, and the preferred way would be modifying the kube-proxy-config ConfigMap, which contains the conntrack-specific config:
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
The problem is that kube-proxy is being managed as an EKS add-on, so if we modify this config, by, let's say, Terraform, it will be overridden by the EKS add-on. But we don't want to self-manage kube-proxy, as that would slow down our fully automatic EKS upgrades that we handle with Terraform. Any ideas or suggestions?
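For reference, a minimal sketch of what the change under discussion looks like inside that kube-proxy-config ConfigMap (a KubeProxyConfiguration). kube-proxy sets the node's conntrack ceiling to roughly max(maxPerCore * CPU cores, min), so raising maxPerCore or min raises nf_conntrack_max; the numbers below are illustrative assumptions only, and this sketch does not solve the EKS add-on overwriting the ConfigMap.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
conntrack:
  maxPerCore: 131072          # raised from the default 32768 (illustrative)
  min: 524288                 # raised floor (illustrative)
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s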
Loukas Agorgianitis almost 4 years ago
Hello, can somebody tell me the conceptual difference between a controller and a mutating webhook?
Steven Miller almost 4 years ago
What does cloudposse use for ingress controller?
timduhenchanter almost 4 years ago
Shot in the dark but does anyone have experience securing API Keys with Istio directly? The AuthorizationPolicy CR does not support reading request header values from a secret. In previous implementations I have used Kong to secure the API keys downstream from Istio but would prefer not to support Kong and the additional overhead for this specific use case.
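To illustrate the limitation described above: the closest AuthorizationPolicy gets to API-key checks is a "when" condition on a request header, and the expected value has to be written literally into the resource rather than referenced from a Secret. All names and the header key below are hypothetical.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-api-key        # hypothetical
  namespace: my-app            # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app              # hypothetical workload label
  action: ALLOW
  rules:
    - when:
        - key: request.headers[x-api-key]
          values: ["plaintext-key-value"]   # literal value only; no secretKeyRef support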
azec almost 4 years ago
@Erik Osterman (Cloud Posse), where can I find that K8s ingress controllers comparison sheet?
I opened it from one of your posts the other day and I need it now, but can't find it…
Adnan almost 4 years ago (edited)
What would you do if you had a requirement to redirect traffic from one app to a different app based on whether the route exists in the first app?
• all requests always go to the primary app first
• if the primary app knows the request (non-404 HTTP response code), it answers with an appropriate response
• if the primary app does not know the request (404), it redirects the request to a different, fallback secondary app
• the solution should work across multiple clusters, so even if the fallback secondary app is on another cluster, the redirect/traffic-shift scenario should still work
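One possible sketch of the 404-fallback requirement above, assuming the NGINX Ingress Controller: the custom-http-errors annotation makes the controller intercept 404 responses from the primary app and re-route the request to the Service named in default-backend (a re-route, not an HTTP redirect). For a fallback app in another cluster, that Service could be an ExternalName pointing at the other cluster's endpoint. All names below are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: primary-app                                            # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404"
    nginx.ingress.kubernetes.io/default-backend: fallback-app  # Service in the same namespace
spec:
  ingressClassName: nginx
  rules:
    - host: example.com                                        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: primary-app
                port:
                  number: 80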
Niv Weiss over 3 years ago
Hey, I'm trying to install Prometheus on EKS Fargate, but I'm getting an error from the prometheus-alertmanager & prometheus-server pods because the PVC is pending:
Pod not supported on Fargate: volumes not supported: storage-volume not supported because: PVC prometheus-alertmanager not bound
I installed the aws-ebs-csi-driver add-on on the EKS cluster and it still doesn't work. I installed everything using "AWS blueprints".
I also saw this error from the ebs-plugin pod:
2022-05-23T13:06:04+03:00 E0523 10:06:04.065256 1 reflector.go:127] github.com/kubernetes-csi/external-snapshotter/client/v3/informers/externalversions/factory.go:117: Failed to watch *v1beta1.VolumeSnapshotClass: failed to list *v1beta1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
Does anyone know how I can install Prometheus on Fargate?
Thank you!
Andy over 3 years ago
Has anyone had problems with multipass since upgrading to Monterey 12.4? I'm frequently seeing time-outs; I've managed to start the VMs only once since yesterday's upgrade.
multipass start k3s-sandbox
start failed: The following errors occurred:
k3s-sandbox: timed out waiting for response
msharma24 over 3 years ago
Hi there
@Nimesh Amin and I are trying to upgrade the kube-prometheus-stack to use the AWS Managed Prometheus service, and we're facing an issue where we can't figure out how to enable SigV4 on the Grafana deployed by the kube-prometheus-stack chart.
More on this issue: https://github.com/prometheus-community/helm-charts/issues/2092
TL;DR: how do you enable SigV4 auth in grafana.ini for the kube-prometheus-stack Helm chart?
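A hedged sketch of what the Helm values override might look like, assuming the chart's standard grafana.ini passthrough to the bundled Grafana and Grafana's sigv4_auth_enabled setting (available in Grafana 7.3+); the Prometheus data source still needs its SigV4 details configured, so treat this as a starting assumption rather than a confirmed fix.
grafana:
  grafana.ini:
    auth:
      sigv4_auth_enabled: true   # exposes the SigV4 auth option on data sources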