Run Google Cloud Build in a specific region and zone? - google-cloud-build

Is it possible to specify to run a Google Cloud Build in a specific region and zone?
The documentation seems to cover how to run kubectl against a specific region/zone when deploying containers, but it doesn't seem to document where the cloud build itself runs. I've also not found this setting in the automatic build trigger configuration.

The latest public doc on regionalization can be found here: https://cloud.google.com/build/docs/locations
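Per that page, the build location is chosen when the build is submitted, and Cloud Build exposes regions rather than zones. A minimal sketch with the gcloud CLI; the region and config file name below are placeholders:

    # Submit a build to a specific Cloud Build region instead of the default global pool.
    # "us-west2" and "cloudbuild.yaml" are placeholders -- use any supported region.
    gcloud builds submit --region=us-west2 --config=cloudbuild.yaml .

    # Confirm where builds ran by listing builds in that region.
    gcloud builds list --region=us-west2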
(Earlier answer on this topic follows)
This is not yet an available feature; however, at Google Next we announced an EAP / soon-to-be-Alpha feature called "Custom Workers" which will enable this functionality for you. You can watch the demo here: https://www.youtube.com/watch?v=IUKCbq1WNWc&feature=youtu.be

Related

Variables in an Azure botframework Composer CICD Pipeline

Long-time lurker, first-time question, so apologies if I do this wrong.
I have successfully used the following to create a continuous deployment pipeline in Azure DevOps:
Composer CICD Pipeline Sample
However, I would like to insert additional pipeline variables into the appsettings.json file, such as additional API keys and the ApplicationInsights connectionString.
Does anyone have experience of doing this or can someone point me in the right direction?
Google has shed no light on this and, unfortunately, I have found the botframework documentation to be lacking.
Azure deployments made by the pipeline you reference do not use the appsettings.json file; those settings are ignored.
The pipeline writes pipeline variable values into Azure as App Service Configuration application settings using the "Configure App Service Settings" task. You might start there.
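As a separate illustration of the same idea (not the pipeline task itself), the application settings can also be set with the Azure CLI; the resource group, app name, and setting names below are placeholders:

    # Write values directly into the App Service Configuration application settings,
    # which is what the "Configure App Service Settings" task does from the pipeline.
    # All names and values here are placeholders.
    az webapp config appsettings set \
      --resource-group my-bot-rg \
      --name my-composer-bot \
      --settings MyApiKey="$MY_API_KEY" \
                 ApplicationInsights__ConnectionString="$APPINSIGHTS_CONNECTION_STRING"

Nested appsettings.json keys are usually expressed with a double underscore (Section__Key) when supplied as application settings / environment variables.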

Any support for matrix builds?

I have recently been researching different build tools, and GCP Cloud Build is one of the candidates:
https://github.com/GoogleCloudPlatform/cloud-builders
One of the requirements is to loop over an array of values, define the build logic only once, and run the variants in parallel.
However, I did not find any mention of matrix builds in the Cloud Build documentation.
This functionality is provided by the Jenkins matrix plugin https://www.jenkins.io/blog/2019/11/22/welcome-to-the-matrix/
and by GitHub Actions https://docs.github.com/en/free-pro-team#latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix
Cloud Run is not designed to work with Jenkins out of the box, and the links you included do not mention how to do this.
As indicated in [1], the best product for integrating Jenkins with Google Cloud is Google Kubernetes Engine (GKE).
[1] https://cloud.google.com/jenkins
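For the matrix part of the question: Cloud Build has no built-in matrix/strategy keyword like Jenkins or GitHub Actions, but one common workaround is to fan a single build config out over a list of values using substitutions and run the resulting builds in parallel. A rough sketch; the substitution name and values are made up:

    # Emulate a build matrix by submitting one asynchronous build per value.
    # _TARGET is a user-defined substitution referenced inside cloudbuild.yaml
    # (for example as ${_TARGET}); the values below are only examples.
    for target in linux-amd64 linux-arm64 darwin-amd64; do
      gcloud builds submit . \
        --config=cloudbuild.yaml \
        --substitutions=_TARGET="$target" \
        --async
    done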

How do I make Cloud Build Triggers show up with names on GitHub?

In the description of the Google Cloud Build app on GitHub here https://github.com/marketplace/google-cloud-build every build seems to be identified by name:
Cloud Build Triggers With Names
In my current setup, however, every build is displayed by ID, which is not very useful:
Cloud Build Triggers with IDs
Is there something I am not doing to make it work as expected?
When I think about Cloud Builds, I understand that every individual build has a unique identifier. For me, this is what I would want to see in a report of a build being triggered. Given a Cloud Build ID, I can then use it for visibility into the underlying process that unique instance of the build caused, and I can see all the steps and the outcome of each of them. I couldn't imagine wanting anything other than a build identifier being reported to me as the result of a Cloud Build being performed.
References:
Viewing build results
I posted the original issue in the issue tracker in January 2020.
This has since been fixed, and builds have been identified by their real names since August 17th. You do have to re-select the new names if you had marked any of the previous hash-based checks as required in your repository settings.
For any triggers created prior to August 2020, data sharing needs to be enabled for this to work, as documented here:
https://cloud.google.com/cloud-build/docs/automating-builds/create-github-app-triggers#data_sharing
You need to go to the Settings -> Data Sharing section of Cloud Build and enable it.
In some cases you might get a "Failed to enable trigger data sharing" error when trying to do this; if so, you might want to try:
1 - Disable any required checks you have in GitHub related to Cloud Build
2 - Try doing it in Chrome without any ad blocker; for me it wasn't working in the Brave browser but worked in Chrome
This will allow GitHub to show the trigger name instead of an ID in the pull request checks.

Please migrate off JSON-RPC and Global HTTP Batch Endpoints - Dataflow Template

I received an email with the title above as the subject. It says it all. I'm not directly using the specified endpoint (storage#v1). The project in question is a postback catcher that funnels data into BigQuery:
App Engine > Pub Sub > Dataflow > Cloud Storage > BigQuery
A related question here indicates Dataflow might be using it indirectly. I'm only using the Cloud PubSub to GCS Text template.
What is the recommended course of action if I'm relying on a template?
I think the warning may come from a Dataflow job which uses an old version of the Storage API. Please upgrade the Dataflow/Beam SDK version beyond 2.5.
Since you're using the PubsubToText template, the easiest way to do it would be:
Stop your pipeline. Be sure to select "Drain" when asked.
Relaunch the pipeline using the newest template version (which is done automatically if you're using the UI), from the same subscription.
Check the SDK version. It should be at least 2.7.
After that you should not see any more warnings.
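As a concrete illustration of those steps with the gcloud CLI (the job ID, region, topic, and bucket are placeholders, and the exact template parameters can vary between template versions):

    # 1. Drain the running job so in-flight messages are flushed before it stops.
    gcloud dataflow jobs drain JOB_ID --region=us-central1

    # 2. Relaunch from the currently released template, which is built on a newer Beam SDK.
    gcloud dataflow jobs run pubsub-to-gcs-text \
      --region=us-central1 \
      --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text \
      --parameters=inputTopic=projects/MY_PROJECT/topics/MY_TOPIC,outputDirectory=gs://MY_BUCKET/output/,outputFilenamePrefix=events-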

Apply configuration yaml file via API or SDK

I started a pod in a Kubernetes cluster which can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for some external events in this pod (e.g. GitHub webhooks), fetch YAML configuration files from a repository, and apply them to this cluster.
Is it possible to call kubectl apply -f <config-file> via the Kubernetes API (or, better, via the Go SDK)?
As YAML directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your YAML file is convert it to JSON, and then POST that content to the API. So the spirit of the answer to your question is "yes."
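You can watch that translation happen by raising kubectl's verbosity, or reproduce it against the raw API; the file name, resource kind, and namespace below are placeholders:

    # At high verbosity kubectl logs the REST requests it sends to the API server,
    # including the JSON bodies it derived from your YAML.
    kubectl apply -f config.yaml --v=8

    # Roughly the same effect against the raw API, e.g. creating a ConfigMap
    # from an already-converted JSON body through a local proxy.
    kubectl proxy --port=8001 &
    curl -X POST http://localhost:8001/api/v1/namespaces/default/configmaps \
      -H "Content-Type: application/json" \
      -d @converted-config.json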
This box/kube-applier project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.
