14 messages
Joaquin Menchaca over 4 years ago
Anyone know how to do mixed HTTPS + gRPC traffic on the same ingress? I tried out ingress-nginx with these annotations:

  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: GRPC

And I was setting up these rules:

  rules:
    - host: alpha.devopsstudio.org
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-dgraph-alpha
              servicePort: 8080
    - host: dgraph.devopsstudio.org
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-dgraph-alpha
              servicePort: 9080

But only gRPC works correctly now; the HTTPS service does not. 😢
Is there a way to get both of these to work?
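Since nginx.ingress.kubernetes.io/backend-protocol applies to the whole Ingress object rather than to individual rules, the usual way to serve both is to split the HTTP and gRPC hosts into two Ingress resources. A rough sketch reusing the hostnames and service from the message above (resource names are made up; TLS blocks are omitted for brevity, but ingress-nginx only speaks gRPC to clients over HTTPS/HTTP2, so the gRPC host still needs one):

  # Ingress 1: plain HTTP(S) to port 8080 -- no backend-protocol annotation
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-dgraph-http            # illustrative name
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-staging
  spec:
    rules:
      - host: alpha.devopsstudio.org
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: demo-dgraph-alpha
                  port:
                    number: 8080
  ---
  # Ingress 2: gRPC to port 9080 -- backend-protocol set to GRPC
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-dgraph-grpc            # illustrative name
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-staging
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
  spec:
    rules:
      - host: dgraph.devopsstudio.org
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: demo-dgraph-alpha
                  port:
                    number: 9080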
Alyson over 4 years ago
Hi,
Is anyone else here with this beautiful issue on AWS EKS?
The problem is intermittent. Sometimes it happens and sometimes it doesn't.
  $ kubectl logs pod/external-dns-7dd5c6786d-znfr5
  ... "https://100.0.1.239:10250/containerLogs/default/external-dns-7dd5c6786d-znfr5/external-dns?follow=true": x509: cannot validate certificate for 100.0.1.239 because it doesn't contain any IP SANs

  $ kubectl version --short
  Client Version: v1.18.9-eks-d1db3c
  Server Version: v1.19.8-eks-96780e
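The error is the API server failing to verify the kubelet's serving certificate on 100.0.1.239:10250 when it proxies the kubectl logs request, because that certificate carries no IP SANs. A hedged way to see which SANs the kubelet is actually presenting (run from somewhere with network access to the node IP):

  # Inspect the Subject Alternative Names on the kubelet's serving certificate
  # (node IP taken from the error above; adjust as needed).
  openssl s_client -connect 100.0.1.239:10250 </dev/null 2>/dev/null \
    | openssl x509 -noout -text \
    | grep -A1 "Subject Alternative Name"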
Joaquin Menchaca over 4 years ago
On ingresses, do Ingress resources get automatically converted to the beta API? I keep getting this message:

  Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress

After deploying:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-dgraph-ingress-grpc
    labels:
      app: dgraph
      component: alpha
    annotations:
      cert-manager.io/cluster-issuer: {{ requiredEnv "ACME_ISSUER" }}
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
      kubernetes.io/ingress.class: nginx
  spec:
    tls:
      - hosts:
          - "dgraph.{{ requiredEnv "AZ_DNS_DOMAIN" }}"
        secretName: tls-secret
    rules:
      - host: dgraph.{{ requiredEnv "AZ_DNS_DOMAIN" }}
        http:
          paths:
            - backend:
                service:
                  name: demo-dgraph-alpha
                  port:
                    number: 9080
              path: /
              pathType: ImplementationSpecific
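For what it's worth, the object isn't converted back to the beta API on the server; the API server stores it once and serves it at every API version it supports, and the warning shows up whenever a client (kubectl, Helm, or a controller) still requests it via extensions/v1beta1. A quick way to check which groups and versions serve Ingress on the cluster (a sketch; output depends on the cluster version):

  $ kubectl api-resources --api-group=networking.k8s.io | grep -i ingress
  $ kubectl api-resources --api-group=extensions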
Joaquin Menchaca over 4 years ago
On the topic of ingress-nginx w/ gRPC, it actually works. 🙂 And it was easier than I thought, I just needed to code my own gRPC client to understand it.
Anyhow, I wrote a blog post in case anyone else is tackling this problem as well. It's on AKS, but the underlying principles are the same:
https://joachim8675309.medium.com/aks-with-grpc-and-ingress-nginx-32481a792a1
Andy over 4 years ago
Hi, we’re using Istio behind an nginx proxy. Occasionally we’re seeing what looks like other users’ responses being returned to a user.
i.e. client A calls a CORS endpoint and sees a response that should have been returned to client B. This happens around 1 in 10 times when repeatedly calling an endpoint (while 100s of users are simultaneously hitting the same endpoint)
The network flow is as follows:
ALB -> nginx -> (proxy) -> NLB -> Istio -> NodeJS app
It looks like nginx may be returning the wrong responses to different users. Does that even sound plausible?
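When responses get crossed between users behind a proxy chain like this, a shared cache serving per-user bodies (missing Vary or cache-key handling somewhere in the ALB/nginx/Istio path) is often the culprit rather than nginx literally swapping in-flight responses, so it may be worth ruling that out first. A rough check for cache-related headers; the host and path are placeholders, not from the thread:

  curl -sI https://example.com/api/cors-endpoint \
    | grep -iE "cache-control|^age:|x-cache|vary"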
mfridh over 4 years ago (edited)
What a 😕 thing I found...
My deployment/coredns container wasn't adding tcp:53 in ports, even though it was there in the YAML and even in the last-applied-configuration annotation... it was silently just NOT THERE.
If I subsequently removed the TCP containerPort from the YAML, kubectl diff said I was removing the UDP containerPort.
I can add a TCP containerPort: 5353, but not 53.
Had to delete the deployment and recreate it! Reproducible 10/10, also on a renamed coredns2 deployment I set up on the side... Amazon EKS v1.19.
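For context, the ports list involved presumably looks like the sketch below: two entries sharing containerPort 53 and differing only in protocol. Strategic merge patch (what kubectl apply and kubectl diff use) merges this list on the containerPort key alone, which would explain both the TCP entry silently disappearing and the diff confusing the UDP and TCP entries; the names here are illustrative, not taken from the message:

  ports:
    - name: dns
      containerPort: 53
      protocol: UDP
    - name: dns-tcp
      containerPort: 53     # same number, different protocol -- collides on the merge key
      protocol: TCP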
Zach over 4 years ago (edited)
I’m having some difficulties with the EKS Cluster (v0.42.1 - happy to have a workaround for the aws-auth configmap issue now!) and Node Group (v0.22.0) modules with the AWS VPC CNI Pod Security Groups. What I’m finding is that the module creates a security group and it seems to get attached to maybe … the nodes? But once I started using pod SGs I found that only the EKS-managed SG (which the cluster creates itself) seems to matter for the ingress/egress rules that allow the pod to talk back to the cluster. For example, I wasn’t able to get my pods to resolve DNS at all until I added port 53 ingress on the EKS SG, and I couldn’t get my pod to receive traffic from my ingress until I allowed traffic from the EKS SG. None of these rules made any difference when I tried creating them on the SG the Cloud Posse module created. Is that expected?
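That behaviour is broadly consistent with how security groups for pods work: the rules that matter sit between the pod security group and the cluster security group (DNS to CoreDNS, traffic from load balancers, and so on), so rules added to a separate module-created node security group generally won't affect branch-ENI pod traffic. A hedged sketch of the DNS piece, with placeholder group IDs:

  # Allow DNS from the pod security group to the cluster security group
  # (placeholder IDs; one rule each for UDP and TCP).
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0CLUSTERSG0000000 \
    --protocol udp --port 53 \
    --source-group sg-0PODSG00000000000
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0CLUSTERSG0000000 \
    --protocol tcp --port 53 \
    --source-group sg-0PODSG00000000000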
rei over 4 years ago
Currently having difficulties with this setup:
Has anyone tried to mount an EFS filesystem connected to vpc1 while the EKS cluster is in vpc2? The peering connection between vpc1 and vpc2 is working.
I'm following the official AWS docs to mount an EFS filesystem with the newest CSI driver. If the filesystem is in the same VPC as the cluster, it works as expected (the dynamic mount example). But trying to mount a second EFS filesystem returns an error on Kubernetes.
Additional DNS host entries are also added (as recommended and required by the CSI driver).
Here's the catch: mounting the EFS filesystem from an EC2 instance (using the IPv4 address) works...
Any ideas? 😅
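One way to take the CSI driver's DNS handling out of the picture for the cross-VPC filesystem is static provisioning against the filesystem ID rather than dynamic provisioning. A minimal sketch, with a placeholder name and filesystem ID; whether this helps depends on where exactly the mount is failing:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: efs-crossvpc-pv             # illustrative name
  spec:
    capacity:
      storage: 5Gi                    # required field; EFS itself is elastic
    volumeMode: Filesystem
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    storageClassName: ""
    csi:
      driver: efs.csi.aws.com
      volumeHandle: fs-12345678       # placeholder: the filesystem in the peered VPC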
Shreyank Sharma over 4 years ago (edited)
Hi,
We have a Kubernetes cluster deployed using kops, which had been working fine for about a year. Suddenly, in the kube-system namespace, a pod named etcd-server-events-ip-<master-internal-ip-here> started going into CrashLoopBackOff, with the following logs:

  2021-07-20 15:58:12.071345 I | etcdmain: stopping listening for peers on http://0.0.0.0:2381
  2021-07-20 15:58:12.071351 C | etcdmain: cannot write to data directory: open /var/etcd/data-events/.touch: read-only file system

I wanted to know what the responsibility of the etcd-server-events pod is.
Thanks
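For background, kops runs two etcd clusters on the masters: the main one for cluster state and a separate "events" cluster, which this pod belongs to, so that high-churn Kubernetes events don't load the main store. The "cannot write to data directory ... read-only file system" message usually means the volume backing /var/etcd/data-events was remounted read-only after an I/O error, which is worth checking on that master (a rough diagnostic sketch):

  # On the master node that hosts this pod:
  mount | grep /var/etcd                              # look for "ro" on the events volume
  dmesg | grep -iE "i/o error|remount|ext4|xfs" | tail -n 20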
Vlad Ionescu (he/him) over 4 years ago
FYI: one of the most discussed EKS annoyances is now solved: the 1.9.0 release of the amazon-vpc-cni adds support for a lot more pods per node!
I imagine a bigger official announcement is coming soon.
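For anyone enabling it: the extra pod density in 1.9.0 comes from prefix delegation (the CNI attaches /28 prefixes to ENIs instead of individual secondary IPs), and it is opt-in on the aws-node DaemonSet. Roughly like this; check the release notes for instance-type caveats and the matching kubelet max-pods setting before relying on it:

  kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true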
R Dha over 4 years ago
what are good resources to learn kubernetes for beginners?
Shreyank Sharma over 4 years ago
Hi,
We are using Kubernetes deployed in AWS using kops, in its own public VPC.
We had two requirements:
1. A pod inside Kubernetes has to invoke an AWS Lambda.
2. The Lambda has to access resources inside Kubernetes.
For the 1st requirement, we created an inline policy for all nodes to invoke Lambda and passed an AWS access key and secret key inside the pod. We created a Lambda inside the same VPC, the pod invokes the Lambda, and it works fine.
Now, for the 2nd requirement: is there any way the Lambda can access some data inside a pod without any public endpoint? Is there a Kubernetes feature that allows this so the communication happens securely?
Thanks in advance.
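On the 2nd requirement: since the Lambda already sits in the same VPC, one pattern that avoids any public endpoint is fronting the pods with an internal load balancer Service that the Lambda reaches over private IPs. A rough sketch with placeholder names, selector, and ports; azec's questions below about whether Lambda should reach into the cluster at all are still worth answering first:

  apiVersion: v1
  kind: Service
  metadata:
    name: internal-api                 # illustrative name
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  spec:
    type: LoadBalancer
    selector:
      app: my-app                      # placeholder selector
    ports:
      - port: 80
        targetPort: 8080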
azec over 4 years ago
Hi Shreyank!
for the 1st requirement we created an inline policy for all nodes to invoke lambda and passed AWS access key and Secret key.
I think that this is not a good practice. You might consider using K8s Service Accounts with each K8s deployment that needs to call Lambda APIs or invoke Lambda.
See:
• Configure Service Accounts for Pods
• Managing Service Accounts
The IAM role created for binding with the K8s Service Account can then have enough permissions to invoke Lambda. After that it is just a matter of the applications running on the pods using the AWS SDK for Lambda, or maybe HTTP call libraries (depending on your use case for Lambda).
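On EKS this is typically wired up with IAM Roles for Service Accounts, where the ServiceAccount carries a role annotation and pods get short-lived credentials instead of static keys; kops has an equivalent feature, though the setup differs. A minimal sketch with a placeholder role ARN:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: lambda-invoker               # illustrative name
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/lambda-invoke-role   # placeholder ARN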
azec over 4 years ago
Lambda has to access resources inside Kubernetes
What is the need for this? There are so many other options if your workloads are already all in AWS. Have you considered AWS SQS or any of the databases (DynamoDB, RDS, Aurora), etc.?