12 messages
Adnan over 3 years ago
Does anybody have experience with AWS EKS using AWS EFS?
I need a place to store/read some data (a 5-10 MB file) very fast and have it available consistently across multiple pods.
Prabesh over 3 years ago (edited)
Hello everyone, I hope you're all having a good time. I wanted to reach out to the community and get some feedback on a small project I was working on. It would be great to hear what you think. 🙏
If you have used Kubernetes, then you know the pain of having to delete the unnecessary demo services you deploy into the cluster and later forget about. Those might be for playground purposes, for testing and validation, or just for deploying stuff to get the baseline behaviour of a service or app. But over time, we (at least I) forget to delete them and clean up the cluster, which results in a large number of unnecessary demo services running on the cluster, taking resources and eventually adding to the operating cost.
Seeing this as a problem, and taking inspiration from a similar tool, aws-nuke, I created a small handy tool to nuke your k8s resources after use. The tool is called kube-cleanupper, which, as the name suggests, cleans up resources in the cluster. It is a helper service that can run however you want (cronjob, CLI, Docker, etc.) and scouts the cluster for objects with a particular clean-up label applied. Once it finds those resources, it checks them against the retention time and nukes them if they are past it. The default retention time is 4 days; you can supply a custom retention time as well. Services running on my cluster were getting OOM'd and I had to free up space, hence the motivation behind this service. Once you deploy it to the cluster, you can forget about any dev service you deploy, as it will be deleted after use. Just apply the label auto-clean=enabled and specify the retention, e.g. retention=6d.
You can read more about this here: https://github.com/pgaijin66/kube-cleanupper
_Note: In order to use this, you should already have a working cluster with ~/.kube present and a kubeconfig for that cluster._
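A hedged sketch of how the clean-up labels described above might be applied to a demo workload; the deployment name demo-app is a placeholder, and whether kube-cleanupper expects the labels on each object or on the namespace is something to confirm in its README:

# Mark a throwaway deployment for automatic clean-up after 6 days
kubectl label deployment demo-app auto-clean=enabled retention=6d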
André over 3 years ago
Hello everyone, I have been reading the documentation about PDB (pod disruption budget), but I still have one doubt.
I want to guarantee that 50% of the pods are always available (minAvailable: 50%), but they must be healthy (Ready = True). Health checks are done using a readiness probe.
Does the PDB consider the readiness (Ready state) of the pod?
Thanks 🙏🏾
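A minimal PDB sketch matching the question; the object name and the app: my-app selector are placeholders, not from the thread. As I understand the docs, the disruption controller only counts pods whose Ready condition is True as healthy, so readiness does factor into the budget:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # placeholder name
spec:
  minAvailable: 50%         # keep at least half of the matching pods available
  selector:
    matchLabels:
      app: my-app           # placeholder label; must match the pods' labels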
Mallikarjuna M over 3 years ago
Hello everyone,
How can I restrict the creation of any pods if the user does not specify any limits in the YAML file, for a single namespace only?
Mallikarjuna M over 3 years ago
Whenever the user does not mention any limits in the YAML file, the cluster should throw an error saying that limits must be specified. How can I configure that?
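A hedged sketch of one way to get that behaviour per namespace: when a ResourceQuota tracks limits.cpu and limits.memory, the API server rejects pods in that namespace that do not declare those limits. The namespace name dev and the quota values are placeholders:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: require-limits      # placeholder name
  namespace: dev            # placeholder namespace
spec:
  hard:
    limits.cpu: "10"        # placeholder ceiling for the namespace
    limits.memory: 20Gi     # placeholder ceiling for the namespace

(A LimitRange in the same namespace can instead inject default limits rather than rejecting the pod.)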
idan levi over 3 years ago
Hey all, I'm adding NFS (EFS, to be honest) to one of my deployments. I created a FileSystem through AWS (https://us-east-1.console.aws.amazon.com/efs?region=us-east-1#/get-started) and created a Volume and VolumeClaim out of it, both in Bound status, so far all looks OK. When I try to mount the PVC to a pod I get the error:
Warning FailedMount 32s kubelet Unable to attach or mount volumes: unmounted volumes=[dump-vol], unattached volumes=[config dump-vol default-token-jfxl4]: timed out waiting for the condition
That's my volume declaration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: indexer-001-dev
spec:
  capacity:
    storage: 20Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - timeo=600
    - retrans=2
    - noresvport
  nfs:
    path: /
    server: 10.X.X.X
I added port 2049 (the NFS port) as an inbound rule to my SecurityGroup. Also, nfs-utils is installed on the host.
Is anyone familiar with this procedure??
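For reference, a hedged sketch of what a claim bound to that PersistentVolume might look like (the message says the claim is already Bound, so this is only illustrative; the claim name and storage size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: indexer-001-dev           # placeholder claim name
spec:
  storageClassName: manual        # must match the PV's storageClassName
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: indexer-001-dev     # bind explicitly to the PV above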
zadkiel over 3 years ago
Hey there! cluster-autoscaler removes my nodes before fluentbit is able to send all its logs. Any idea how to get around this?
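One knob that is often mentioned for this kind of problem (hedged: it may or may not be the right fix here, and whether it is honoured for DaemonSet-managed pods depends on the autoscaler version and flags) is the annotation that tells cluster-autoscaler not to remove a node while the annotated pod is running on it. The pod name and image below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: fluent-bit                # placeholder; normally set via the pod template
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"   # block scale-down of this pod's node
spec:
  containers:
    - name: fluent-bit
      image: fluent/fluent-bit    # placeholder image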
James over 3 years ago
Hello everyone - just a quick one: when do you use kubectl create and when do you use kubectl apply?
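A hedged illustration of the usual distinction (deployment.yaml is a placeholder file name):

# Imperative, one-shot: creates the resource and errors if it already exists
kubectl create -f deployment.yaml

# Declarative: creates the resource if it is missing, otherwise patches it to
# match the file, so it is safe to run repeatedly (e.g. from CI)
kubectl apply -f deployment.yaml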
mimo over 3 years ago
Hey
Is it possible to point a nodeSelector at the master node role instead of a label?
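A hedged sketch of how this is usually done: the role column in kubectl get nodes comes from a node-role.kubernetes.io/... label, so the role can be targeted with a plain nodeSelector (newer clusters label control-plane nodes node-role.kubernetes.io/control-plane rather than .../master, and a toleration for the control-plane taint may also be needed). The pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: runs-on-master                   # placeholder name
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""   # the node role is itself a label
  containers:
    - name: app
      image: nginx                       # placeholder image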
Mallikarjuna M over 3 years ago (edited)
Hello Everyone,
Does anyone know the best and easiest way to configure a VPN?
Jaz over 3 years ago
For this service annotation file, do I have to manually hardcode AWS_INSTANCE_ENDPOINT, or is that automatically read in from metadata?
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    tags.datadoghq.com/env: '<ENV>'
    tags.datadoghq.com/service: '<SERVICE>'
  annotations:
    ad.datadoghq.com/service.check_names: '["postgres"]'
    ad.datadoghq.com/service.init_configs: '[{}]'
    ad.datadoghq.com/service.instances: |
      [
        {
          "dbm": true,
          "host": "<AWS_INSTANCE_ENDPOINT>",
          "port": 5432,
          "username": "datadog",
          "password": "<UNIQUEPASSWORD>",
          "tags": "dbinstanceidentifier:<DB_INSTANCE_NAME>"
        }
      ]
spec:
  ports:
    - port: 5432
      protocol: TCP
      targetPort: 5432
      name: postgres
Saichovsky over 3 years ago
Hi,
I have a question regarding k8s behaviour when deleting namespaces.
I have cilium installed in my test cluster, and whenever I delete the cilium-system namespace, the hubble-ui pod gets stuck in Terminating state. The pod has a couple of containers, but I notice that one container named backend (a Golang application) exits with code 137 when the namespace is deleted, and that's what leaves the namespace stuck in Terminating state. From what I am reading online, containers exit with 137 when they attempt to use more memory than they have been allocated. In my test cluster, no resources have been defined (spec.containers[*].resources = {}).
I know how to force delete a namespace that is stuck in the Terminating state, but unfortunately, that is a workaround and not a solution. This issue is breaking my CI pipeline.
How can I ensure a graceful exit of the backend container? And how do I view the default memory quotas? No quota object (kind: ResourceQuota) has been defined, but there must be some defaults, I believe.
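A hedged pair of commands for the "default memory quotas" part of the question: defaults would come from LimitRange or ResourceQuota objects, and these list whatever exists (if none are defined, there are no implicit memory defaults):

kubectl get limitrange,resourcequota --all-namespaces
kubectl describe limitrange -n <namespace>    # shows default requests/limits, if any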