11 messages
geertn · almost 4 years ago
Did anyone play around with Generic Ephemeral Volumes?
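For anyone who hasn't tried them, a minimal sketch of what one looks like, here expressed with Pulumi's TypeScript Kubernetes SDK (image, storage class, and sizes are made up):
```
import * as k8s from "@pulumi/kubernetes";

// Generic ephemeral volume (beta in Kubernetes 1.21, GA in 1.23): the PVC is
// created from the template when the pod is scheduled and deleted with the pod.
new k8s.core.v1.Pod("scratch-demo", {
    spec: {
        containers: [{
            name: "app",
            image: "nginx:1.21",
            volumeMounts: [{ name: "scratch", mountPath: "/scratch" }],
        }],
        volumes: [{
            name: "scratch",
            ephemeral: {
                volumeClaimTemplate: {
                    spec: {
                        accessModes: ["ReadWriteOnce"],
                        storageClassName: "gp2",   // made-up class name
                        resources: { requests: { storage: "1Gi" } },
                    },
                },
            },
        }],
    },
});
```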
mado · almost 4 years ago (edited)
Hello, I have Jenkins on Azure, and also an OpenShift cluster with a project called test; the project contains a BuildConfig/DeploymentConfig to build and deploy my application. First, Jenkins does its work, such as fetching code from git and running tests like SonarQube/dependency checks; once those pass, it kicks off the build job in the BuildConfig (simply curls the BuildConfig's endpoint). So far this works fine, but the workflow only builds the latest version of the code. I want to build a specific branch/tag, and right now I have to manually modify the version in the BuildConfig and DeploymentConfig in OpenShift every time. For security reasons, I can only run Jenkins on Azure. Is there any way I can set the version of the code I want when kicking off the Jenkins job, without manually changing configs in OpenShift?
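A hypothetical sketch of what I'm after: instead of a bare curl, POST a payload carrying the git ref to build. The host, namespace, BuildConfig name, and secret below are all made up, and the payload shape assumes the BuildConfig has a generic webhook trigger; check the OpenShift docs for your version.
```
// TypeScript (Node 18+, global fetch). GIT_REF could be a Jenkins job parameter.
const ref = process.env.GIT_REF ?? "main";

await fetch(
    "https://api.openshift.example.com:6443/apis/build.openshift.io/v1" +
    "/namespaces/test/buildconfigs/my-app/webhooks/<secret>/generic",
    {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ git: { ref } }),  // branch or tag to build
    },
);
```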
sheldonh · almost 4 years ago
It's getting intense. I just shipped a multi-cluster Pulumi deployment.
Some days I think: is Pulumi overcomplicating it? I mean, with raw yaml or helm it would just have been taking a helm chart and installing it on the new cluster.
At the same time:
• ✅️ State-based tracking of what succeeded and failed
• ✅️ One command for n clusters all at once.
• ✅️ Random unique names (when done right) make it more work to pass input/output than just matching on names, but then you get a create-before-delete on all resources (like a lightweight blue green). I'm hopeful this will help with tackling true blue green too.
• ✅️ Absolutely digging the strongly typed nature of objects. I can be pretty confident in big refactors or changes being compiled and written correctly, and only have to focus on logical errors.
I can still render to yaml if I must.
I know there's been talk about Pulumi in the past. I've been doing some blog posts on it, and maybe there's still some debate about whether it's worth it purely for Kubernetes, but I'm overall pretty happy.
I dig how Pulumi handles secret encryption per stack in the yaml/backend too. Super handy; probably the single best feature over Terraform, and it feels like a no-brainer.
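For a flavor of the multi-cluster and auto-naming points above, a minimal TypeScript sketch (cluster names, config keys, and images are made up):
```
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// One `pulumi up` fans the same app out to every cluster in stack config.
const config = new pulumi.Config();
const clusters = config.requireObject<string[]>("clusters"); // e.g. ["east", "west"]

for (const cluster of clusters) {
    const provider = new k8s.Provider(cluster, {
        kubeconfig: config.requireSecret(`${cluster}Kubeconfig`),
    });

    const labels = { app: "web", cluster };

    // No metadata.name set, so Pulumi appends a random suffix; a change that
    // forces replacement then creates the new Deployment before deleting the
    // old one (the "lightweight blue green" mentioned above).
    const deploy = new k8s.apps.v1.Deployment(`web-${cluster}`, {
        spec: {
            selector: { matchLabels: labels },
            replicas: 2,
            template: {
                metadata: { labels },
                spec: { containers: [{ name: "web", image: "nginx:1.21" }] },
            },
        },
    }, { provider });

    // Passing outputs around instead of matching on hard-coded names:
    new k8s.core.v1.Service(`web-${cluster}`, {
        spec: { selector: labels, ports: [{ port: 80 }] },
    }, { provider, parent: deploy });
}
```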
Erik Osterman (Cloud Posse) · almost 4 years ago
azec · almost 4 years ago
Has anyone seen
User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
logged by the cluster-autoscaler on AWS EKS?
Milosb · almost 4 years ago (edited)
is it possible to use the AWS application load balancer controller with a self-managed Kubernetes cluster (not EKS)?
Zach · almost 4 years ago
yes, it's not tied to EKS at all
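Off EKS the controller can't discover the cluster's region and VPC from the environment, so you set them explicitly. A sketch using Pulumi's TypeScript SDK; value names follow the aws-load-balancer-controller Helm chart as I understand it, so double-check against the chart version you install (cluster name and VPC ID below are made up):
```
import * as k8s from "@pulumi/kubernetes";

new k8s.helm.v3.Chart("aws-lb-controller", {
    chart: "aws-load-balancer-controller",
    namespace: "kube-system",
    fetchOpts: { repo: "https://aws.github.io/eks-charts" },
    values: {
        clusterName: "my-cluster",       // made-up name
        region: "us-east-1",             // required off EKS
        vpcId: "vpc-0123456789abcdef0",  // made-up ID, required off EKS
    },
});
```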
Shreyank Sharma · almost 4 years ago
Hi All,
We have 2 Kubernetes clusters, and we deployed Elasticsearch into them using helm.
Cluster-1 (PROD) has:
• 2 master
• 3 data
• 1 client
Cluster-2 (DEV) has:
• 2 master
• 2 node
• 1 client
Both have the same number of shards, but the shard size is a little bigger (a 3-4 MB difference) in Cluster-1 (PROD) for some indices.
We run a term query to generate a report using Elasticsearch. What I noticed is:
when I run this query in both clusters, Cluster-2 (DEV) works fine and produces a result, but the same query fails, causing all the data pods to restart.
Then we looked at the resource consumption of the data pods in both clusters.
For the last 3 months, Cluster-2 (DEV) memory utilization looks like this:
limit = 4 GB
request = 2 GB
and usage is always in the range 2.5 to 2.8 GB
and
For the last 3 months, Cluster-1 (PROD) memory utilization looks like this:
limit = 4 GB
request = 1.2 GB
and usage is always in the range 3.8 to 3.9 GB
---
And when I looked at the configuration file for the data pods:
for Cluster-2 (DEV),
the request memory resource is defined as 2 GB and
the limit memory resource is defined as 4 GB
and
for Cluster-1 (PROD),
the request memory resource is defined as 1.2 GB and
the limit memory resource is defined as 4 GB
My question is:
even though the shard count is similar in both clusters, why is memory usage always 3.8 to 3.9 GB in our Cluster-1 (PROD)?
Is it because the request is too low? Is there any recommended ratio of request to limit resources?
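Roughly what I mean, sketched as elastic/elasticsearch Helm chart values via Pulumi's TypeScript SDK (we actually install with plain helm; numbers mirror the PROD data pods, and the comments note the commonly cited sizing guidance, not our current setup):
```
import * as k8s from "@pulumi/kubernetes";

// Commonly cited Elasticsearch guidance: JVM heap at ~50% of the container
// memory limit, and request == limit so the pods get the Guaranteed QoS class.
new k8s.helm.v3.Chart("es-data", {
    chart: "elasticsearch",
    fetchOpts: { repo: "https://helm.elastic.co" },
    values: {
        esJavaOpts: "-Xms2g -Xmx2g",       // heap at half of the 4Gi limit
        resources: {
            requests: { memory: "1.2Gi" }, // current PROD request
            limits: { memory: "4Gi" },
        },
    },
});
```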
Johan · almost 4 years ago
Does anyone know of an instance-type (EC2) recommender that is based on running workloads (and their requests/limits)?
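A hypothetical starting point rather than a real recommender: total up the CPU/memory requests of running pods, then compare the totals against EC2 instance shapes by hand. TypeScript sketch using @kubernetes/client-node (assumes a pre-1.0 client where responses come back as { body }):
```
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const core = kc.makeApiClient(k8s.CoreV1Api);

function cpuMilli(v: string): number {      // "250m" -> 250, "2" -> 2000
    return v.endsWith("m") ? parseFloat(v) : parseFloat(v) * 1000;
}
function memMi(v: string): number {         // "512Mi" -> 512, "2Gi" -> 2048
    if (v.endsWith("Gi")) return parseFloat(v) * 1024;
    if (v.endsWith("Mi")) return parseFloat(v);
    return parseFloat(v) / (1024 * 1024);   // crude fallback: assume bytes
}

async function main() {
    const res = await core.listPodForAllNamespaces();
    let cpu = 0, mem = 0;
    for (const pod of res.body.items) {
        for (const c of pod.spec?.containers ?? []) {
            cpu += cpuMilli(c.resources?.requests?.["cpu"] ?? "0");
            mem += memMi(c.resources?.requests?.["memory"] ?? "0");
        }
    }
    console.log(`total requests: ${cpu}m CPU, ${Math.round(mem)}Mi memory`);
}

main();
```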
Shreyank Sharma · almost 4 years ago
Hi All,
For pods, if a probe fails, it will restart the container.
Is it possible to configure it to restart the pod itself if a probe fails?
Thank you
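For context, the built-in behavior, sketched with Pulumi's TypeScript SDK (image and names made up): a failing livenessProbe makes the kubelet restart the container in place; nothing in the pod spec recreates the pod itself, that's left to a controller such as a Deployment deleting the pod.
```
import * as k8s from "@pulumi/kubernetes";

// On livenessProbe failure the kubelet restarts this container inside the
// same pod, per restartPolicy; the pod object itself is not recreated.
new k8s.core.v1.Pod("probe-demo", {
    spec: {
        restartPolicy: "Always",
        containers: [{
            name: "app",
            image: "nginx:1.21",
            livenessProbe: {
                httpGet: { path: "/", port: 80 },
                failureThreshold: 3,
                periodSeconds: 10,
            },
        }],
    },
});
```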