gcp
Archived
Google Cloud Platform
erik, 12 months ago
archived the channel
burnzy, over 1 year ago
Hello @jonjitsu 😉 Long time! I don’t believe it’s possible, although cloud functions will get built and stored in Artifact Registry as a container image — you don’t get much control over that build process, besides being able to set a few environment vars to override a few options during build and/or runtime. As far as I’m aware, you can’t deploy directly from your own container. The better option there would be to use Cloud Run (or Cloud Run Jobs) for using your own custom container
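For the Cloud Run route, a rough sketch, assuming the image is already pushed to Artifact Registry (service, image, and region names below are placeholders):
# Deploy an arbitrary container image as a Cloud Run service (placeholder names).
gcloud run deploy my-service \
  --image=us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest \
  --region=us-central1 \
  --no-allow-unauthenticated
Cloud Run Jobs work the same way via gcloud run jobs create / gcloud run jobs execute if the workload is batch-style rather than request-driven.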
jonjitsu, over 1 year ago
Is it possible to deploy a GCP cloud function packaged as a container similar to aws lambda?
Dhamodharan, over 1 year ago
#gcp
Hello All,
I want to collect the application logs from an AWS EC2 Windows box into the GCP Cloud Logging service.
I have followed the GCP documentation to configure the Ops Agent on the Windows EC2 instance.
https://cloud.google.com/stackdriver/docs/solutions/agents/ops-agent/authorization
But I am facing challenges authorizing the agent to communicate with GCP.
Can someone help me if you have worked on such a requirement?
Thanks.
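A rough sketch of the service-account side of that doc, assuming key-based authorization (project and account names are placeholders); the key then gets installed on the EC2 instance per the Ops Agent instructions:
# Create a dedicated service account for the agent and let it write logs/metrics.
gcloud iam service-accounts create ops-agent-ec2 --project=my-project
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:ops-agent-ec2@my-project.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:ops-agent-ec2@my-project.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
# Download a key for the agent to use outside GCP.
gcloud iam service-accounts keys create ops-agent-key.json \
  --iam-account="ops-agent-ec2@my-project.iam.gserviceaccount.com"
Most authorization failures on non-GCP hosts come down to the agent not finding the key file in the location the doc specifies, or the service account missing one of those two roles.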
Dhamodharan, almost 2 years ago
#gcp
Hello All,
I am trying to migrate a database from AWS RDS to GCP; it's around 120 GB. I thought it would be fast to create a dump file, copy it to GCP, and restore it, but doing that takes almost 7 hours.
Is there another approach that would make this much faster?
Any suggestions?
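One commonly suggested speed-up, assuming a MySQL-compatible RDS source (all names are placeholders): stream the dump straight into GCS instead of staging files, then let Cloud SQL import it server-side.
# Dump from RDS and stream the compressed output directly to a GCS bucket.
mysqldump --single-transaction --set-gtid-purged=OFF \
  -h rds-endpoint.example.com -u admin -p mydb \
  | gzip | gsutil cp - gs://my-migration-bucket/mydb.sql.gz
# Import the gzip-compressed dump into Cloud SQL.
gcloud sql import sql my-cloudsql-instance gs://my-migration-bucket/mydb.sql.gz --database=mydb
For minimal downtime, the managed Database Migration Service can also replicate continuously from RDS, so the cutover isn't gated on a 120 GB dump/restore.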
Monish Devendran, almost 2 years ago
Has anyone faced this issue: not able to pass a secret from Akeyless?
Monish Devendran, almost 2 years ago
So I might have a different service account for each project.
Monish Devendran, almost 2 years ago
I want to target multiple services in different projects. For example: Cloud Functions should go to project1, all Cloud Run services to project2, and storage to project3. How can I design it so that each stack can go to a different project?
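A rough sketch of one way to lay this out in an Atmos stack manifest, with a per-component project variable (component, file, and project names are hypothetical):
# stacks/orgs/acme/dev.yaml (hypothetical layout)
components:
  terraform:
    cloudfunction:
      vars:
        project: project1
    cloudrun:
      vars:
        project: project2
    storage:
      vars:
        project: project3
Each Terraform component then declares a project variable and feeds it to its google provider block, so a single stack file can fan components out across projects.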
Monish Devendran, almost 2 years ago
Hello Team, I'm planning to start a GCP setup via atmos. Can someone provide me with a basic structure?
Thomas Asmerom, about 2 years ago
👋 Hello, team!
I wanted to consult whether a GCP bucket write can be configured using ClickOps to:
1. Add rate limiting to the bucket, since it has public write permissions
2. Allow only certain file extension types, e.g. .html files only
My investigation
I think configuring a GCP bucket for the above using ClickOps is not possible, so a backend service or Cloud Run could be a good option here.
So my theory is: since public write permission can be risky, instead of directly hitting the bucket the client should hit:
Cloud Run or a backend service -> check with a rate-limiting algorithm -> check if the file type is valid -> send an HTTP response to the client
I also don't think a Cloud Function is a good option, because it can only react to events, like cleaning the bucket after a wrong file is added.
Questions
1. So am I correct to assume this can only be done with a backend service or Cloud Run?
2. How about using Terraform, is there a possibility?
Dhamodharan, about 2 years ago
Hello #gcp All,
I am trying to execute a Cloud Build pipeline as part of a project. I have a couple of variables configured in the code, and I have passed those variable values through the Cloud Build environment variable settings.
But I will have more variables with every release, so instead of configuring the variables in the pipeline settings, I am thinking we can pass those variables in an external file and read them in the cloudbuild.yaml config. I have tried, but it's not working.
Can someone help me with this? Or is there a better approach to read the variables from an external file?
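One workaround along those lines, assuming user-defined substitutions and a release.env file with one _KEY=value pair per line (file name and keys are hypothetical; substitution keys must start with an underscore):
# Join the lines into the comma-separated form --substitutions expects.
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions="$(paste -sd, release.env)"
Alternatively, because build steps share /workspace, an early step can read the file itself (for example a bash step that sources it) and write anything derived from it into /workspace for later steps, since Cloud Build only expands substitutions given at submit/trigger time.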
jonjitsu, over 2 years ago
Anyone have any recommendations for chats/forums concerning google dataflow (specifically in java)?
Dhamodharan, over 2 years ago
hi All,
I am trying to create a GCP service account using Terraform. I could create it, but I also want to download the private_key JSON file. I tried to get it using a Terraform output, but the format seems different; it is not the same as the one we create in the web console.
Can someone help me with the procedure to get the key in the same JSON format as when it's created manually?
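If the key comes from Terraform's google_service_account_key resource, the likely mismatch is that its private_key attribute is base64-encoded; decoding it yields the same JSON key file the console produces. A sketch, assuming an output named private_key (marked sensitive) that exposes that attribute:
# Decode the base64-wrapped key material into the usual JSON key file.
terraform output -raw private_key | base64 -d > sa-key.json
Keep in mind the decoded file is a live credential, and the key material also lives in Terraform state, so treat the state backend accordingly.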
Kallan, over 2 years ago
Has anyone used GCP Certificate Manager Managed Certificates with Cloud CDN successfully? I suspect they're only compatible with Media CDN.
jonjitsu, over 2 years ago
Am I wrong that there is no GCP equivalent to the AWS STS GetCallerIdentity API, which returns the identity of the caller? Kinda like a cloud whoami.
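There are rough equivalents, e.g. asking gcloud locally, or asking Google's token-info endpoint whose credentials are active:
# Local view of the active account
gcloud auth list --filter=status:ACTIVE --format="value(account)"
# Closest to sts get-caller-identity: ask Google who the current access token belongs to
curl -s "https://oauth2.googleapis.com/tokeninfo?access_token=$(gcloud auth print-access-token)"
The second one is useful because it reflects what the API actually sees (e.g. an impersonated or workload-identity credential), not just local gcloud config.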
yegorski, over 2 years ago (edited)
Been hearing good things about GCP and catching up. Also hearing that it is (was?) cheaper than AWS. Is GCP really cheaper? Looking at articles like this one, it doesn’t really seem so
Dhamodharan, over 2 years ago
When I try to deploy a StatefulSet application on GKE (created in Autopilot mode) using Cloud Shell, it throws the error below. Can someone help me fix this?
Error from server (GKE Warden constraints violations): error when creating "envs/local-env/": admission webhook "warden-validating.common-webhooks.networking.gke.io" denied the request: GKE Warden rejected the request because it violates one or more constraints.
Violations details: {"[denied by autogke-disallow-privilege]":["container increase-the-vm-max-map-count is privileged; not allowed in Autopilot"]}
Requested by user: 'dhamodharan', groups: 'system:authenticated'.
timduhenchanter, almost 3 years ago
Does anyone have a working example of creating a ComputeDisk with GCP ConfigConnector and then creating a VolumeClaim with that disk? Or even just an annotation to set a GCP snapshot schedule (not a K8s VolumeSnapshot) on a PV?
The purpose: I want to create a snapshot schedule for the persistent disk without using any 3rd-party tools. Previously I created the PV and then created the schedule against the generated disk with gcloud, but I would really like a more fully featured implementation.
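For reference, the gcloud flow being replaced looks roughly like this (policy, disk, and location names are placeholders), which is also what any ConfigConnector or annotation-based approach would need to express:
# Create a snapshot schedule and attach it to the PV's underlying disk.
gcloud compute resource-policies create snapshot-schedule daily-backup \
  --region=us-central1 --daily-schedule --start-time=04:00 --max-retention-days=14
gcloud compute disks add-resource-policies my-pv-disk \
  --resource-policies=daily-backup --zone=us-central1-a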
DaniC (he/him), almost 3 years ago
fyi for folks who may use TFC on GCP, here is an implementation based on OIDC https://medium.com/google-cloud/terraform-cloud-enterprise-and-gcp-workload-identity-federation-fbb84a3dfbeb
DaniC (he/him), almost 3 years ago
Hi folks, anyone aware of a TF module similar to https://github.com/cloudposse/terraform-null-label but for GCP?
MSaad, about 3 years ago
Hello, we currently use the Google Container Scanning API to scan for vulnerabilities in the images in the container registry, and it appears that we have a few images with vulnerabilities. I'm looking at methods of extracting the results of the scan to a dashboard or something like that, in order to make teams using the platform more aware of the issue. Has anyone come across anything like this before?
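One way to pull the scan results out programmatically (so a dashboard or scheduled job can aggregate them) is the Container Analysis surface via gcloud; I believe the beta describe command exposes vulnerability occurrences, but treat the exact flags as an assumption to verify (the image path is a placeholder):
# Dump vulnerability findings for one image as JSON for downstream aggregation.
gcloud beta container images describe gcr.io/my-project/my-image:latest \
  --show-package-vulnerability --format=json
Looping that over the registry's images and shipping the JSON to BigQuery or a Grafana-backed store is one way to get the team-facing dashboard described.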
Ron, about 3 years ago
Hey, just wondering what would be the main differences/costs between packet mirroring from GCP and something like goreplay.
DaniC (he/him), over 3 years ago
Folks on GCP: are you using Cloud Logging/Trace for apps with OpenTelemetry, or do you still prefer 3rd-party tools to aggregate and provide the UX/UI?
Michał Czeraszkiewicz, over 3 years ago (edited)
Hi, anyone configured Bitbucket Pipelines to access GCP via OIDC?
Currently I have:
image: atlassian/default-image:3
pipelines:
  default:
    - parallel:
        - step: &docker-build-push
            name: Build and push images to GCR
            oidc: true
            image: google/cloud-sdk:alpine
            script:
              - echo "${BITBUCKET_STEP_OIDC_TOKEN}" > /tmp/credential-source-file.out
              - gcloud iam workload-identity-pools create-cred-config projects/${PROJECT_ID}/locations/global/workloadIdentityPools/bitbucket-pipelines/providers/bitbucket-pipelines --service-account="name@${PROJECT_ID}.iam.gserviceaccount.com" --output-file=/tmp/FILEPATH.json --credential-source-file=/tmp/credential-source-file.out --credential-source-type=text
              - gcloud auth login --cred-file=/tmp/FILEPATH.json
              - CLOUDSDK_CORE_DISABLE_PROMPTS=1 gcloud components install alpha
              - gcloud --project ${PROJECT_ID} alpha storage ls
But I get an error:
google.auth.exceptions.OAuthError: ('Error code invalid_target: The target service indicated by the "audience" parameters is invalid. This might either be because the pool or provider is disabled or deleted or because it doesn\'t exist.', '{"error":"invalid_target","error_description":"The target service indicated by the \\"audience\\" parameters is invalid. This might either be because the pool or provider is disabled or deleted or because it doesn\'t exist."}')
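That error usually means the pool/provider in the audience path doesn't exist (or is disabled) in that project, or the provider's allowed audience doesn't match what Bitbucket sends. A hedged sketch of creating them (pool/provider IDs, issuer, and audience values are placeholders; Bitbucket shows the exact issuer URL and audience in the repository's OIDC settings):
# Create the pool and an OIDC provider that trusts Bitbucket's issuer/audience.
gcloud iam workload-identity-pools create bitbucket-pipelines \
  --location=global --display-name="Bitbucket Pipelines"
gcloud iam workload-identity-pools providers create-oidc bitbucket-pipelines \
  --location=global --workload-identity-pool=bitbucket-pipelines \
  --issuer-uri="https://api.bitbucket.org/2.0/workspaces/MY_WORKSPACE/pipelines-config/identity/oidc" \
  --allowed-audiences="ari:cloud:bitbucket::workspace/MY_WORKSPACE_UUID" \
  --attribute-mapping="google.subject=assertion.sub"
The workloadIdentityPools/.../providers/... path used in create-cred-config has to match these IDs exactly, and the target service account also needs roles/iam.workloadIdentityUser granted to the pool's principalSet.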
Apoti, almost 4 years ago
Hi team, does anyone have an idea how to do the following?
1 - Enable Cloud SQL IAM authentication on a database
2 - Grant Read and Write to a Service account (tt-sa-messaging-metl)
3 - Allow for multiple accounts to be added in future with varying permissions
Using Terraform?
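In Terraform this maps to a cloudsql.iam_authentication database flag on google_sql_database_instance plus a google_sql_user of type CLOUD_IAM_SERVICE_ACCOUNT; the equivalent gcloud sketch, assuming a PostgreSQL instance (instance and project names are placeholders):
# 1. Turn on IAM database authentication (note: --database-flags replaces the full set of flags).
gcloud sql instances patch my-instance --database-flags=cloudsql.iam_authentication=on
# 2. Register the service account as an IAM database user (SA email without .gserviceaccount.com).
gcloud sql users create tt-sa-messaging-metl@my-project.iam \
  --instance=my-instance --type=cloud_iam_service_account
The actual read/write grants (point 2 above) still happen inside the database with ordinary GRANT statements against that user, and adding more accounts later is just more IAM users plus their own grants.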
Michał Czeraszkiewicz, almost 4 years ago
Hi 👋,
I'm trying to limit access to secrets for a specific ServiceAccount. I'm using the Secret Manager Admin role with a specific condition (resource.name.startsWith("SOME_PREFIX__")).
But I get the following error messages:
# CREATE NEW SECRET
$ gcloud secrets create SOME_PREFIX__czerasz_test_2
ERROR: (gcloud.secrets.create) User [sa-name@my-project-1.iam.gserviceaccount.com] does not have permission to access projects instance [my-project-2] (or it may not exist): Permission 'secretmanager.secrets.create' denied for resource 'projects/my-project-2' (or it may not exist).
# DELETE EXISTING SECRET
$ gcloud secrets delete SOME_PREFIX__czerasz_test_1
ERROR: (gcloud.secrets.delete) PERMISSION_DENIED: Permission 'secretmanager.versions.list' denied for resource 'projects/my-project-2/secrets/SOME_PREFIX__czerasz_test_1' (or it may not exist).
Could someone point me in the right direction?
The commands mentioned above work fine without the condition.
Ryan Smith, about 4 years ago
Anyone know what the unauthenticated quota or rate limits are for hitting:
• mirror.gcr.io
AWS makes their limits clear and obvious 🤷
timduhenchanter, about 4 years ago
Any GCP vets have advice on how they're doing queuing? Cloud Tasks requires services to be exposed to the web from what I understand and I really refuse to expose internal services to the web for the sake of queues.
Jackson Delahunt, over 4 years ago
Does anyone here do freelance gcp development?
Ofir Rabanian, almost 5 years ago
qq - for a pretty small startup, what service would you use for hosting containers? gke? app engine?
Shtrull, almost 5 years ago
Anyone here have a Grafana dashboard for GCP-managed Redis and SQL services, based on the Stackdriver datasource?
Reinholds Zviedris, about 5 years ago (edited)
Hey all! Having an issue with running terraform init / terraform plan as a service account on Google Cloud. It has the necessary rights for the backend bucket where state is stored. I have authenticated against GCP with the SA key and the account is set as default.
gcloud auth list output:
Credentialed Accounts
ACTIVE  ACCOUNT
*       service-account@project.iam.gserviceaccount.com
        my.user@domain.com
gcloud config list output:
[compute]
region = us-east1
zone = us-east1-d
[core]
account = service-account@project.iam.gserviceaccount.com
disable_usage_reporting = True
project = project
Your active configuration is: [default]
When I run terraform init / terraform plan it runs as my.user@domain.com instead of the SA (that's what I see in the activity log in the GCP console for infra bucket access). Has anyone had something similar and can advise what to do and where to proceed? 🤔 Any help would be appreciated. I've already tried a couple of suggestions I found on the net, but no luck.
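The likely cause: the Google provider and the GCS backend don't use gcloud's active account; they use Application Default Credentials, so Terraform keeps picking up whatever ADC points at (here, the user credentials). A sketch of forcing the SA key (the path is a placeholder):
# Point ADC at the service-account key so Terraform authenticates as the SA.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
terraform init
terraform plan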
sytten, over 5 years ago
Related issue for terraform support: https://github.com/terraform-providers/terraform-provider-google/issues/6749
sytten, over 5 years ago
Serverless NEGs are now in beta
Erik Osterman (Cloud Posse), almost 6 years ago
Adding @U010XGY9B46 bot
Joe Bagdon, almost 6 years ago
@Joe Bagdon has joined the channel
Mark Henneman, almost 6 years ago
@Mark Henneman has joined the channel
Marcin Brański, almost 6 years ago
@Marcin Brański has joined the channel
pratheshbv, almost 6 years ago
@pratheshbv has joined the channel
Chris Fowles, almost 6 years ago
so google's going to start charging for GKE masters 😞
Tertius Stander, almost 6 years ago
@Tertius Stander has joined the channel
Erik Osterman (Cloud Posse), almost 6 years ago
@UUB28NLDS help keep tabs! 😉
Hemanth, almost 6 years ago
@Hemanth has joined the channel
Nikola Velkovski, almost 6 years ago
Well, just what you mentioned 🙂 thanks Chris. I wanted to double-check if there's an interactive step/approval in Cloud Build, or any way I can make sure it waits for input before applying.
Chris Fowles, about 6 years ago
but what's the actual question? 😛
Chris Fowles, about 6 years ago
you could do something like plan output on PRs and then apply on master or something like that
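A rough cloudbuild.yaml sketch of that pattern, gating apply on the branch via the built-in $BRANCH_NAME substitution (image tag, branch name, and layout are assumptions):
# cloudbuild.yaml (sketch)
steps:
  - id: plan
    name: hashicorp/terraform:1.5
    entrypoint: sh
    args: ["-c", "terraform init -input=false && terraform plan -out=tfplan"]
  - id: apply
    name: hashicorp/terraform:1.5
    entrypoint: sh
    args:
      - -c
      - |
        if [ "$BRANCH_NAME" = "master" ]; then
          terraform apply -input=false tfplan
        else
          echo "Not on master; skipping apply (plan output above is the review artifact)"
        fi
Steps share /workspace, so the tfplan file written by the plan step is exactly what the apply step applies.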
Chris Fowles, about 6 years ago
you don't really have approval steps in cloudbuild - so there's not a great deal of value in a separate plan/apply step
Chris Fowles, about 6 years ago
running terraform in cloudbuild?