Debugging permission denied in Cloud Firestore SDK (Golang) - go

I am experienced in working with AWS, but this is my first foray into Google Cloud and I am stuck on how to debug it properly. I am building a simple experimental setup, using Cloud Firestore to store some data and planning to add some small API functions to query it.
I am inputting my information from a Go app, which I built using the official SDK for Go. Everything builds fine, but when I run it I see nothing other than rpc error: code = PermissionDenied desc = Missing or insufficient permissions..
I have tried setting the authentication to open in the Firestore rules console (allow read, write: if true), but I still see the same error, so it seems to be an issue with the credentials I have generated rather than with Firestore itself.
The credentials in question were generated in the main Google Cloud Console, under Service Accounts. I've saved them out as a JSON file and am loading this into the app via option.WithCredentialsFile(), which is then passed into the NewFirestoreWriter() constructor.
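Roughly, the client setup looks like this (a sketch: the project ID, key file path and collection name are placeholders, and my NewFirestoreWriter wrapper is omitted):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Load the service-account key saved from the Cloud Console.
	client, err := firestore.NewClient(ctx, "my-project-id",
		option.WithCredentialsFile("service-account.json"))
	if err != nil {
		log.Fatalf("firestore.NewClient: %v", err)
	}
	defer client.Close()

	// This is the kind of write that currently fails with PermissionDenied.
	_, err = client.Collection("experiments").Doc("demo").Set(ctx, map[string]interface{}{
		"hello": "world",
	})
	if err != nil {
		log.Fatalf("write: %v", err)
	}
}
```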
It's far from obvious, to me at least, exactly how to configure the permissions on the Service Account as it seems to work quite differently from Amazon IAM. I was expecting to find a way to add on specific actions related to Firestore but I can't find anything at all like that once the service account is created. Under Permissions, it looks like I can associate other accounts with the service account, which seems to be the other way around to what I want to do. Or do I need to assume another identity once I have the service account in order to do anything, a la Amazon STS? Or am I barking up the wrong tree here?
I am running locally while I am playing with the apps, planning to think about deployment later.
I guess my questions are:
Should I be using a different form of credential when making programmatic writes to Firestore?
What permissions need to be on the credential that I am using?
How do the Google Service Account permissions interact with the Firestore access rules, or are they completely separate?
Thanks in advance for your help.

I finally worked out the answer. Turns out I was reading some of the screens too fast....
The programmatic approach with the credential was fine, but the service account setup was not.
In case anyone else has a similar issue, the fix was to:
Go to "Access" under IAM (NOT identity). Coming from AWS this confused me a little because I was expecting roles to be a sublevel to identity rather than a seperate level
Click the Edit button next to the service account
Add the Cloud Datastore User and Cloud Datastore Owner roles (I'll work on trimming down the permissions now it's working!). This confused me particularly because I was looking for "Firestore" or "Cloud Firestore", and the very similarly named "Cloud Filestore" tripped me up.
After a few seconds, it started working.
According to https://cloud.google.com/firestore/docs/reference/libraries#server_client_libraries,
In this environment, requests are not evaluated against your Firestore security rules
So I reset my access permissions in Firebase back to allow read, write: if false.
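For reference, that locked-down ruleset ends up looking roughly like this in the Firebase console (a sketch; the match paths may differ in your project):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```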

Related

How to block Google Firestore access from the Google Firestore api

I am working with Google Firestore in native mode and CRUD'ing data within it using the "cloud.google.com/go/firestore" API in Go. Access to the data is wide open as long as you know the project ID and are using the Firestore API on a server. I don't want to work on the rules until I figure out how to secure the data from server-side attacks. Again, all the API requires is the project ID to access the data, so I need to lock that down first before I move any further. From what I read, rules are only for mobile/web clients, and server-side clients completely bypass the rules. Please help. I do not want to use the Firebase API because attackers can still use the Firestore API to access the data.
It's unclear from the limited information in your question, but your Firestore database is not open to anyone with the project ID.
The service is only accessible to anything (human|machine) that has valid credentials: either humans with e.g. Gmail accounts or Service Account key holders.
In either case, only identities that you've explicitly added to the project will be able to access its resources and then only those with the appropriate IAM roles|permissions.
Google provides an elegant facility called Application Default Credentials (ADCs) that simplifies authenticating clients.
I suspect that your code is using ADCs to authenticate you to the project|service.
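As a rough Go illustration of what that looks like (project ID and collection name are placeholders), the client is constructed without any explicit key and ADC supplies the identity:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
)

func main() {
	ctx := context.Background()
	// No key is passed here: the client falls back to Application Default
	// Credentials (GOOGLE_APPLICATION_CREDENTIALS, gcloud credentials,
	// or the metadata server when running on GCP).
	client, err := firestore.NewClient(ctx, "my-project-id")
	if err != nil {
		log.Fatalf("firestore.NewClient: %v", err)
	}
	defer client.Close()

	// Knowing the project ID alone is not enough: this call is rejected
	// unless the resolved identity holds an IAM role that permits Firestore access.
	if _, err := client.Collection("things").Doc("demo").Get(ctx); err != nil {
		log.Printf("read failed: %v", err)
	}
}
```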
Access to the data is wide open as long as you know the project ID and are using the Firestore API on a server.
If that is a concern, consider disallowing all access in the Firebase security rules for your Firestore database.
Also have a look at my answer here to understand why sharing your project ID is not a security concern, and in fact is necessary if you want to allow direct access from client-side devices: Is it safe to expose Firebase apiKey to the public?. If you don't want to allow direct client-side access, closing down the security rules (as they are by default, unless you choose test mode when creating the database) is the way to go.

"The value for one of the HTTP headers is not in the correct format" from within azure portal

I have the following problem and cannot figure out where the problem is.
I created a StorageV2 (general purpose v2) storage account a few days ago, created two blob containers and two queues in it, then uploaded some data into the blobs and the queues and stopped working with it.
Now (the last three days), when I try to access the blob storage or the queues I receive the following error messages:
The value for one of the HTTP headers is not in the correct format
This request is not authorized to perform this operation using this permission.
Details are in the screenshot below.
The problem is that I access the account via the Azure portal, not via code, i.e. the same way I created the storage account, and even with the exact same user.
Update: I tested with Edge and Firefox, and both show the same errors. The funny thing is that I can access the queues and the blobs with the Azure Storage Explorer app.
Does anyone have any advice for me?
If you just want to do that in the portal, please follow these steps:
If you don't use the portal, you can use a SAS token.
After a few days the problem disappeared by itself. I therefore assume that the cause lay with Microsoft. Thanks to all for the tips.

Is there a Heroku webhook for Heroku Postgres credential rotation?

I'm following the instructions in the Heroku Postgres docs for creating an external application that connects to Heroku Postgres for its data layer. The instructions mention that the credentials are automatically rotated and I must handle this myself.
I read more docs and learned that webhooks exist to help notify the rest of your system that changes have happened in your Heroku services. This seemed like an area where the Heroku devs would have implemented something: surely a webhook exists that I could use to be notified when the credential rotation happens. I found that there was the api:addon webhook, which has an update event. I tested this webhook, expecting it to be what I was looking for, but I found that it was not fired upon credential rotation. It was only fired when I provisioned or deleted additional Heroku Postgres add-ons.
Since the webhook I need doesn't exist, I coded a workaround where I expect a PostgreSQL library auth error to be thrown while my AWS Lambda executes. If an error is thrown, I assume it's from the rotation and I have the still running Lambda function fetch new credentials using the Heroku API and try the PostgreSQL query again, at which point it works unless there are other errors. I tested this while manually rotating my credentials and it worked okay, but it's kind of ugly code. See here for a detailed example.
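In rough Go terms, the workaround looks something like the sketch below; isAuthError and fetchFreshDatabaseURL are hypothetical helpers standing in for the real error check and the Heroku config-vars call (my actual Lambda is in the example linked above):

```go
package dbretry

import (
	"context"
	"database/sql"
	"fmt"
	"strings"

	_ "github.com/lib/pq"
)

// fetchFreshDatabaseURL is a hypothetical helper: it would call the Heroku
// Platform API (GET /apps/{app}/config-vars) and return the current DATABASE_URL.
func fetchFreshDatabaseURL(ctx context.Context) (string, error) {
	return "", fmt.Errorf("not implemented in this sketch")
}

// isAuthError is a rough check; a real implementation should inspect the
// PostgreSQL error code (28P01 invalid_password) rather than the message text.
func isAuthError(err error) bool {
	return err != nil && strings.Contains(err.Error(), "password authentication failed")
}

// queryWithRotationRetry runs a query and, if it hits an auth error (likely a
// credential rotation), refreshes DATABASE_URL via the Heroku API, reopens the
// pool and retries once. It returns the (possibly new) *sql.DB for reuse.
func queryWithRotationRetry(ctx context.Context, db *sql.DB, q string, args ...interface{}) (*sql.Rows, *sql.DB, error) {
	rows, err := db.QueryContext(ctx, q, args...)
	if err == nil || !isAuthError(err) {
		return rows, db, err
	}

	freshURL, herr := fetchFreshDatabaseURL(ctx)
	if herr != nil {
		return nil, db, fmt.Errorf("query failed (%v) and credential refresh failed: %w", err, herr)
	}
	fresh, oerr := sql.Open("postgres", freshURL)
	if oerr != nil {
		return nil, db, oerr
	}
	db.Close()
	rows, err = fresh.QueryContext(ctx, q, args...)
	return rows, fresh, err
}
```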
So at this point, I'm wondering if the webhook I'm looking for does exist and I just wasn't able to find it. Or, if it doesn't exist, I would like to request it as a new feature. I understand that the Heroku team may not want people picking their add-ons a la carte, and they want people to use the entire Heroku platform, but I think it would add a lot of value to the Heroku platform. Personally, I've enjoyed getting into more and more cloud services as I learn since I'm usually able to choose them a la carte. For example, AWS doesn't forbid me from only using S3 and nothing else from them. They do as much as they can to make it easy for me to link my applications to it, no matter what other cloud services I use.
I contacted Heroku directly to ask if this type of webhook existed and I received a useful response from them:
There isn't a webhook specifically for credential rotations, although with a bit of logic you can sort of recreate the same thing. Whenever your Postgres credentials rotate, it will trigger a new release, which does trigger a webhook. You can use that to inspect the release via the API to determine if the values changed.
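Based on that reply, a handler subscribed to the api:release app webhook could look roughly like this in Go (a sketch; the payload fields here are simplified, so check the real webhook payload before relying on them):

```go
package webhook

import (
	"encoding/json"
	"log"
	"net/http"
)

// releaseEvent captures just enough of the api:release webhook payload for
// this sketch; the real payload carries much more detail.
type releaseEvent struct {
	Action string `json:"action"`
	Data   struct {
		Description string `json:"description"`
	} `json:"data"`
}

// HandleReleaseWebhook would be registered as the webhook endpoint. Heroku
// calls it on every new release, which (per the reply above) includes the
// release triggered by a credential rotation.
func HandleReleaseWebhook(w http.ResponseWriter, r *http.Request) {
	var ev releaseEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	// Inspect the release via the Platform API and, if the config vars
	// changed, re-fetch DATABASE_URL and update the external application.
	log.Printf("release webhook: action=%s description=%q", ev.Action, ev.Data.Description)
	w.WriteHeader(http.StatusNoContent)
}
```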

Does Google Container Engine SDK/API exist?

I am planning to launch a container cluster from an SDK/API. Presently, I am fine with any language, but I would prefer a NodeJS SDK. As far as I can see, there is no Container Engine SDK. Here is the NodeJS SDK for GCP, which does not cover Container Engine; in fact it contains SDKs for only a few GCP services.
I came across the OAuth API for Container Engine, but it involves human intervention to launch it. I am looking for service-account-based authentication for the SDK.
Are there Container Engine SDKs available?
Update after discussion with Robert Lacok:
This is the code I tried to use for the Container APIs with an API key; it does not work. It expects an OAuth 2 token or some credential other than a service account. I tried an API key and it didn't work. I don't know how to use service account authentication with the API.
Here is my source code:
Here is the error:
I see a method for Application Default Credentials, but I don't think it will be useful for my use case. I am trying to create a container cluster from AWS Lambda, so I can't use Application Default Credentials. Are there any other options?
The API for Google Container Engine is very limited at the moment, as all the features are in Alpha status, and because they can change, not many people are incorporating them into the SDKs they are developing.
These are the currently available APIs: https://cloud.google.com/sdk/gcloud/reference/container/
And here are the Alpha APIs: https://cloud.google.com/sdk/gcloud/reference/alpha/container/
What you probably want to do is make calls to the REST API and use the client library for OAuth2 authentication.
You can browse the API documentation and see that every method has a short how-to for a number of languages, Node.js being one of them. Have a look here for an example of how to create a container cluster.
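For illustration, the same REST calls through the generated Go client library look roughly like this (a sketch; project, zone and key path are placeholders, and the service-account key is passed explicitly, which also covers environments like AWS Lambda where ADC isn't available):

```go
package main

import (
	"context"
	"fmt"
	"log"

	container "google.golang.org/api/container/v1"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Authenticate with an explicit service-account key instead of relying on
	// Application Default Credentials.
	svc, err := container.NewService(ctx, option.WithCredentialsFile("key.json"))
	if err != nil {
		log.Fatalf("container.NewService: %v", err)
	}
	resp, err := svc.Projects.Zones.Clusters.List("my-project", "us-central1-a").Do()
	if err != nil {
		log.Fatalf("list clusters: %v", err)
	}
	for _, c := range resp.Clusters {
		fmt.Println(c.Name, c.Status)
	}
}
```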
You also mentioned service account authentication. The preferred way to do this is to use Application Default Credentials; you can have a little read about them here.
In short, you want to set an environment variable GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json, which is the key to a service account you generated in the console.
Then the client library will take care of the rest (getting the OAuth tokens and what not).
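To illustrate what "take care of the rest" means, here is a rough Go sketch that resolves Application Default Credentials from that environment variable and obtains an OAuth2 access token (the scope and printed fields are just for demonstration):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()
	// FindDefaultCredentials checks GOOGLE_APPLICATION_CREDENTIALS first,
	// then falls back to gcloud user credentials or the metadata server.
	creds, err := google.FindDefaultCredentials(ctx,
		"https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatalf("no default credentials: %v", err)
	}
	tok, err := creds.TokenSource.Token()
	if err != nil {
		log.Fatalf("token fetch: %v", err)
	}
	fmt.Println("project:", creds.ProjectID, "token expires:", tok.Expiry)
}
```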

What effect will changing the Heroku API Key have on our applications?

Our organization has a number of Rails applications (websites) deployed to Heroku. A former developer has left the organization, and as good practice we want to change the Heroku API key associated with our account to prevent any modifications to the apps via the Heroku CLI.
I know that the Heroku API Key is used for Heroku CLI access (it gets cached in ~/.heroku/credentials), but I'm not certain what else it is used for. Specifically, do 3rd-party add-ons in the Heroku platform (e.g. New Relic, Hoptoad/Airbrake, Sendgrid, etc) use this, and therefore require reconfiguring if the API Key is changed? Heroku throws up a fairly generic (and non-informative) warning message when you click the "regenerate" button to change it.
Because the term "API Key" is so generic, want to be clear that this is the single API Key associated with each Heroku account accessible via "My Account" link. Image (and warning message) below.
Asked Heroku Support. This is what I got back:
"you can safely change your API key at any time, as we don't give it to any add-on providers. That alert is meant to remind you that if you added your API key to any application or service (ie for auto scaling, manually provision workers, etc) it will stop working until you provide it a new key."
I requested that they update the interface/documentation to make this more clear.
Also remove him from being a collaborator on all your projects so he can't push to them via git.
Out of curiosity (I'd never seen the reset-key option in the admin) I tried it. When I then tried to use the CLI against one of my apps I was asked to reauthenticate - but I can't now get back in - doh! The same username/password works via the site. I'll ping support and report back.
UPDATE:
So it appears my problem is entirely due to the Heroku Accounts plugin (https://github.com/ddollar/heroku-accounts) that I'm using, which stores a copy of the key under ~/.heroku/accounts/. Support had me remove the folder and it all works now - just something to be aware of if you reset your API key.
