How to regenerate DEVELOPER_ACCESS_TOKEN in Dialogflow?

I have my agent hosted in Dialogflow. For security reasons, I need to update the DEVELOPER_ACCESS_TOKEN. I know there is a way to update the CLIENT_ACCESS_TOKEN from settings, but I didn't find any way to update the DEVELOPER_ACCESS_TOKEN.

If your developer access token is compromised, you should create a new agent and transfer your intents and entities across.
1. Create a new agent.
2. Export your current agent as a zip.
3. Import the zip into the new agent you created in step 1.
If you are using this Dialogflow agent with Actions on Google and wish to continue using the API V1, you will need to create a new Action.
Alternatively, consider switching your agent to the API V2 Beta, which uses Google Cloud service accounts instead of API keys and will solve your problem.
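For reference, here is a hedged sketch of what a V2 call with a service account key can look like, using the V2 REST endpoint and the gcloud CLI; the key file path, project ID, and session ID are placeholders:

gcloud auth activate-service-account --key-file=/path/to/dialogflow-sa.json
# Detect an intent via the V2 REST API (placeholder project and session IDs)
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"queryInput": {"text": {"text": "hello", "languageCode": "en"}}}' \
  "https://dialogflow.googleapis.com/v2/projects/my-project/agent/sessions/my-session:detectIntent"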

Related

Connecting to firebase admin sdk using service account key

I am using Cloud Run Continuous Deployment to watch a GitHub repo and build the project upon a push to the production branch. Instead of specifying a Dockerfile, I am letting Google Cloud Buildpacks do all the work, since my codebase is written in Node.js.
I haven't yet been able to run a functional deployment because the service account is running into some permissions errors, but once I get past those, I am wondering how I would be able to initialize the Firebase Admin SDK inside the build. In my dev code, I have a service account JSON file and initialize the Admin SDK using that file, but I don't know if this is possible in the cloud build. If I can't upload private files to the cloud build, am I able to use the service account that creates the build to initialize the Admin SDK? Is there another way to initialize the admin app in the build, such as using environment variables? For reference, I am only using the Admin SDK to read and write to our Firestore database.
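One hedged sketch of a key-file-free setup, assuming the Admin SDK is initialized with no explicit credentials so it falls back to Application Default Credentials (i.e. the Cloud Run runtime service account); the project, service, and account names below are placeholders:

# Give the runtime service account read/write access to Firestore (placeholder names)
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-runtime-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/datastore.user"

# Deploy from source; Buildpacks build the Node.js app and no key file is shipped
gcloud run deploy my-service --source . \
  --service-account=my-runtime-sa@my-project.iam.gserviceaccount.com

In the Node.js code, calling admin.initializeApp() with no arguments should then authenticate as that runtime service account, so no JSON key needs to exist in the repo or in the build.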

Do I need to ask Google Cloud support to enable the exchange of custom routes in VPC peering?

I have created two Google Cloud projects, one for Cloud SQL and one for a Kubernetes cluster. For accessing SQL from the other project, I have set up import/export of custom routes. Do I need confirmation from Google Cloud support for this, or is this enough? I have read somewhere that after these steps you should ask Google Cloud support to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is automatically created when the Cloud SQL instance is created.
As far as I know, this step is not included in the public documentation and is not necessary here. If you are connecting from within the same project, or from any service project (if configured with Shared VPC), you don't need to export those routes; this is generally only required if you are connecting from on-premises.
If you are having any issues, please let us know.
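For reference, the route exchange itself can be toggled on the peering with gcloud; a hedged sketch, where my-vpc is a placeholder and the peering name for private services access is usually servicenetworking-googleapis-com (check with gcloud compute networks peerings list):

gcloud compute networks peerings update servicenetworking-googleapis-com \
  --network=my-vpc \
  --export-custom-routes \
  --import-custom-routes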

How do you make cloud build project history publicly readable?

I have Google Cloud Build set up, and I'd like a way to make the builds publicly visible, to use in an open source project, a bit like what TravisCI and CircleCI offer - see an example below:
https://travis-ci.org/wagtail/Willow/pull_requests
Is this possible?
Can you make it possible for a non-signed-in user to inspect a build?
A solution could be to use Google Identity and Access Management to grant the Cloud Build Viewer role to allUsers. However, this cannot be done at the moment.
The idea is to give the cloudbuild.builds.get and cloudbuild.builds.list permissions to everyone on the internet, which would allow anyone to call the Cloud Build API methods that require these permissions. You can grant roles to Google Accounts, Google Groups, service accounts, or G Suite domains, but not to everyone.
You can find detailed instructions to grant roles through the GCP console in the Cloud Build documentation.
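Until that changes, the closest option is granting the role to specific accounts rather than to everyone; a hedged sketch with placeholder project and user names:

gcloud projects add-iam-policy-binding my-project \
  --member="user:contributor@example.com" \
  --role="roles/cloudbuild.builds.viewer"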

How to create a webhook between Bitbucket and Azure DevOps?

We have all our repositories in Bitbucket and I'm trying to set up a continuous integration service in Azure DevOps that would build the project after each push.
We have created a dedicated user account for the Bitbucket repositories that has read-only access to all repositories.
However, creating a CI webhook trigger from Bitbucket to Azure DevOps requires admin access to the repositories. We do not want to give that level of access to the CI user account.
I could add the webhook to Bitbucket repository manually, but I'm missing the URL to which the webhook should post the trigger.
The URL is something like https://dev.azure.com/myorganization/_apis/public/hooks/externalEvents?publisherId ...
I think it's called a deployment trigger URL, but I cannot find it anywhere. Does the new Azure DevOps support manually adding webhooks, or do we have to work around this some other way?
I'm in the same boat with you all. I don't want to give my CI account "Admin" rights to ANY repo.
My workaround so far has been to give the CI account temporary access in order to create the webhook when the pipeline is first saved, then downgrade it after the webhook has been created, knowing that any changes will require another temporary permission elevation.
FWIW, the webhook URL that is used is this:
https://[REDACTED].visualstudio.com/_apis/public/hooks/externalEvents?publisherId=bitbucket&channelId=[REDACTED]&api-version=5.1-preview
As you can see, we are kind of in an understandable Catch-22 here, because we could conceivably create the pipeline and get that channelId to use to manually create the webhook in Bitbucket, but can't even SAVE a pipeline without repo Admin rights, so we can't get the channelId.
I wish there was a way to disable the webhook creation so we could manually create it on the Bitbucket side.
I know it has been a long time since this was asked, but recently I faced the exact same issue and I thought I should add this here for anyone struggling to find out where these URLs come from.
I was seeing in Bitbucket two webhooks in the format https://dev.azure.com/[myorganization]/_apis/public/hooks/externalEvents?publisherId=... and I was trying to figure out how these were created in the first place.
As it turns out, when you create a new Bitbucket Pipeline in Azure and you select a repository for this pipeline, Azure automatically creates these webhooks for us in Bitbucket! In other words, there doesn't seem to be a way to derive these URLs from anywhere; rather, they are created by Azure when the Pipeline is created, and they are deleted by Azure once you delete the Pipeline from Azure.
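For what it's worth, if you already know the full externalEvents URL (for example from a webhook created during a temporary permission elevation as described above), recreating it on the Bitbucket side is a single call to the Bitbucket Cloud hooks API. A hedged sketch with placeholder workspace, repository, and credentials; note the account making this call still needs webhook (admin) rights on the repo, which is the Catch-22 above:

curl -X POST -u ci-user:app-password \
  -H "Content-Type: application/json" \
  -d '{"description": "Azure DevOps CI", "url": "https://dev.azure.com/[myorganization]/_apis/public/hooks/externalEvents?publisherId=...", "active": true, "events": ["repo:push"]}' \
  "https://api.bitbucket.org/2.0/repositories/my-workspace/my-repo/hooks"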

How to use Service Accounts with gsutil, for uploading to CS + BigQuery

How do I upload data to Google BigQuery with gsutil, by using a Service Account I created in the Google APIs Console?
First I'm trying to upload data to Cloud Storage using gsutil, as that seems to be the recommended model. Everything works fine with gmail user approval, but it does not allow me to use a service account.
It seems I can use the Python API to get an access token using signed JWT credentials, but I would prefer using a command-line tool like gsutil with support for resumable uploads etc.
EDIT: I would like to use gsutil in a cron to upload files to Cloud Storage every night and then import them to BigQuery.
Any help or directions to go would be appreciated.
To extend @Mike's answer, you'll need to:
Download service account key file, and put it in e.g. /etc/backup-account.json
gcloud auth activate-service-account --key-file /etc/backup-account.json
And now all calls use said service account.
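A quick sanity check that the activated account is the one gsutil will use (the bucket name is a placeholder):
gcloud auth list
gsutil ls gs://my-backup-bucket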
Google Cloud Storage just released a new version (3.26) of gsutil that supports service accounts (as well as a number of other features and bug fixes). If you already have gsutil installed you can get this version by running:
gsutil update
In brief, you can configure a service account by running:
gsutil config -e
See gsutil help config for more details about using the config command.
See gsutil help creds for information about the different flavors of credentials (and different use cases) that gsutil supports.
Mike Schwartz, Google Cloud Storage Team
Service accounts are generally used to identify applications, but when using gsutil you're an interactive user, and it's more natural to use your personal account. You can always associate your Google Cloud Storage resources with both your personal account and/or a service account (via access control lists or the developer console Team tab), so my advice would be to use your personal account with gsutil and then use a service account for your application.
First of all, you should be using the bq command line tool to interact with BigQuery from the command line. (Read about it here and download it here).
While I agree with Marc that it's a good idea to use your personal credentials with both gsutil and bq, the bq command-line tool does support the use of service accounts. The command to use service account auth might look something like this:
bq --service_account 1234567890@developer.gserviceaccount.com --service_account_credential_store keep_me_safe --service_account_private_key_file myfile.key query 'select count(*) from publicdata:samples.shakespeare'
Type bq --help for more info.
It's also pretty easy to use service accounts in your code via Python or Java. Here's a quick example using some code from the BigQuery Authorization guide.
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# REPLACE WITH YOUR Project ID
PROJECT_NUMBER = 'XXXXXXXXXXX'
# REPLACE WITH THE SERVICE ACCOUNT EMAIL FROM GOOGLE DEV CONSOLE
SERVICE_ACCOUNT_EMAIL = 'XXXXX@developer.gserviceaccount.com'

# Read the service account's private key file
f = open('key.p12', 'rb')
key = f.read()
f.close()

# Build signed-JWT credentials scoped to BigQuery and authorize an HTTP client
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL,
    key,
    scope='https://www.googleapis.com/auth/bigquery')
http = httplib2.Http()
http = credentials.authorize(http)

# List the datasets in the project using the authorized client
service = build('bigquery', 'v2')
datasets = service.datasets()
response = datasets.list(projectId=PROJECT_NUMBER).execute(http)

print('Dataset list:\n')
for dataset in response['datasets']:
    print("%s\n" % dataset['id'])
Posting as an answer, instead of a comment, based on Jonathan's request
Yes, an OAuth grant made by an individual user will no longer be valid if the user no longer exists. So, if you use the user-based flow with your personal account, your automated processes will fail if you leave the company.
We should support service accounts with gsutil, but don't yet.
You could do one of the following:
1. Add the feature yourself (probably quick) to gsutil/oauth2_plugin/oauth2_helper.py using the existing Python OAuth client implementation of service accounts.
2. Retrieve the access token externally via the service account flow and store it in the cache location specified in ~/.boto (slightly hacky).
3. Create a role account yourself (via gmail.com or Google Apps), grant permission to that account, and use it for the OAuth flow.
We've filed the feature request to support service accounts for gsutil, and have some initial positive feedback from the team. (though can't give an ETA)
As of today you don't need to run any command to set up a service account to be used with gsutil. All you have to do is create ~/.boto with the following content:
[Credentials]
gs_service_key_file=/path/to/your/service-account.json
Edit: you can also tell gsutil where it should look for the .boto file by setting BOTO_CONFIG (docs).
For example, I use one service account per project with the following config, where /app is the path to my app directory:
.env:
BOTO_CONFIG=/app/.boto
.boto:
[Credentials]
gs_service_key_file=/app/service-account.json
script.sh:
export $(xargs < .env)
gsutil ...
In the script above, export $(xargs < .env) serves to load the .env file (source). It tells gsutil the location of the .boto file, which in turn tells it the location of the service account. When using the Google Cloud Python library you can do all of this with GOOGLE_APPLICATION_CREDENTIALS, but that’s not supported by gsutil.
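To tie this back to the nightly-cron use case in the question, here is a hedged sketch of a cron job that uploads with gsutil (using the .boto file above) and then loads the file into BigQuery with bq, which is authenticated by activating the same service account through gcloud; the schedule, paths, bucket, dataset, and table names are all placeholders:

# crontab entry: run the script every night at 02:00
0 2 * * * /app/nightly.sh >> /var/log/nightly.log 2>&1

# nightly.sh
export BOTO_CONFIG=/app/.boto                                               # gsutil reads the service account from .boto
gcloud auth activate-service-account --key-file=/app/service-account.json   # bq picks up the same account via gcloud
gsutil cp /data/export.csv gs://my-bucket/exports/export.csv
bq load --autodetect --source_format=CSV mydataset.mytable gs://my-bucket/exports/export.csv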
