Setting up Log Based alerts in Stackdriver across multiple projects in Google Cloud

We have a project in Google Cloud called 'A' where we ship our application logs to Stackdriver and have a few log-based alerts configured. Now we have another project in Google Cloud, namely 'B'. Is there a way we can set up log-based alerts in Project B that can access logs from Project A?
Asking this mainly because we ran into a situation where we exhausted the quota for setting up alerts in Stackdriver in Project A.

To get log-based metrics for Project A into Project B, you'd have to copy the logs, which would be an inefficient way to solve your problem. We do have some changes to Stackdriver Accounts planned which would resolve this issue, but it sounds like the root of the problem is the set of alerting policies in use (we will follow up offline to see how we can help with that).
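For completeness: copying the logs would mean creating a Log Router sink in Project A that routes matching entries into a log bucket in Project B. A rough sketch with the Go logadmin client, where the project IDs, filter, and destination bucket are placeholders:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/logging/logadmin"
)

func main() {
	ctx := context.Background()

	// The logadmin client manages sinks in the source project (A);
	// the project ID here is a placeholder.
	client, err := logadmin.NewClient(ctx, "project-a")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Route matching entries into a log bucket in project B.
	// Bucket name, location, and filter are assumptions for the sketch.
	sink, err := client.CreateSink(ctx, &logadmin.Sink{
		ID:          "route-to-project-b",
		Destination: "logging.googleapis.com/projects/project-b/locations/global/buckets/_Default",
		Filter:      `severity>=ERROR`,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The sink writes via a generated service account; grant it
	// roles/logging.bucketWriter on the destination before entries flow.
	log.Printf("created sink %s, writer identity %s", sink.ID, sink.WriterIdentity)
}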

Related

Why does GCP Cloud Functions log two blank lines?

I created a trivial Go 1.16 GCP Cloud Function and deployed it. When I make a request to the endpoint, I see two blank lines in the log output. I can't figure out where they are coming from. Is this normal?
package function

import "net/http"

func TestHttp(w http.ResponseWriter, _ *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok response"))
}
Update: Google silently fixed the issue.
Update2: The problem is back.
The closest explanation for these “empty logs” is that they are caused by Audit Logs, which can show up looking just like the example you have provided. Here is a quick overview:
Google Cloud services write audit logs that record administrative activities and accesses within your Google Cloud resources. Audit logs help you answer "who did what, where, and when?" within your Google Cloud resources with the same level of transparency as in on-premises environments.
“Admin Activity” audit logs are enabled by default and cannot be disabled:
Admin Activity audit logs are always written; you can't configure, exclude, or disable them. Even if you disable the Cloud Logging API, Admin Activity audit logs are still generated.
You can check how to view runtime logs to see whether the entries can be identified as such; the Writing, Viewing, and Responding to Logs page has the details.
This kind of behavior has been reported before in scenarios different from this one:
In the first case, an issue was reported for duplicated logs, which might be related here; one of the answers also suggests creating a filter for these entries if possible.
And in another discussion, a user says that after a few months the blank lines were completely gone, with no indication of an issue or unusual behavior caused by them in either case (see questions 58983677 and 49506107).
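If the goal is just to keep the blank entries out of view, a Logs Explorer exclusion filter along these lines might work (assuming, as a guess, that the blank entries arrive with an empty textPayload):

resource.type="cloud_function" AND textPayload=""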

Debugging permission denied in Cloud Firestore SDK (Golang)

I am experienced in working with AWS but this is my first foray onto Google cloud and I am stuck on how to debug it properly. I am building a simple experimental setup, using Cloud Firestore to store some data and planning to do some small API functions to query it.
I am inputting my information from a Go app, which I built using the official SDK for Go. Everything builds fine, but when I run it I see nothing other than rpc error: code = PermissionDenied desc = Missing or insufficient permissions..
I have tried setting the authentication to open in the Firestore rules console (allow read, write: if true), but I still see the same error, so it seems to be an issue with the credentials I have generated rather than Firestore itself.
The credentials in question were generated in the main Google Cloud Console, under Service Accounts. I've saved it out as a JSON file and am loading this into the app via option.WithCredentialsFile() which is then passed into the NewFirestoreWriter() constructor.
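Here is roughly what the relevant wiring looks like (project ID and key-file path are placeholders, and I've gone straight to the Firestore client rather than showing my NewFirestoreWriter wrapper):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Project ID and key-file path are placeholders.
	client, err := firestore.NewClient(ctx, "my-project-id",
		option.WithCredentialsFile("service-account.json"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A write like this fails with PermissionDenied until the service
	// account carries a suitable Datastore/Firestore role.
	_, err = client.Collection("demo").Doc("doc1").Set(ctx, map[string]interface{}{
		"hello": "world",
	})
	if err != nil {
		log.Fatal(err)
	}
}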
It's far from obvious, to me at least, exactly how to configure the permissions on the Service Account as it seems to work quite differently from Amazon IAM. I was expecting to find a way to add on specific actions related to Firestore but I can't find anything at all like that once the service account is created. Under Permissions, it looks like I can associate other accounts with the service account, which seems to be the other way around to what I want to do. Or do I need to assume another identity once I have the service account in order to do anything, a la Amazon STS? Or am I barking up the wrong tree here?
I am running locally while I am playing with the apps, planning to think about deployment later.
I guess my questions are:
Should I be using a different form of credential when making programmatic writes to Firestore?
What permissions need to be on the credential that I am using?
How do the Google Service Account permissions interact with the Firestore access rules, or are they completely separate?
Thanks in advance for your help.
I finally worked out the answer. Turns out I was reading some of the screens too fast....
The programmatic approach with the credential was fine, but the service account setup was not.
In case anyone else has a similar issue, the fix was to:
Go to "Access" under IAM (NOT identity). Coming from AWS this confused me a little because I was expecting roles to be a sublevel to identity rather than a seperate level
Click the Edit button next to the service account
Add the Cloud Datastore User and Cloud Datastore Owner roles (I'll work on trimming down the permissions now that it's working!). This particularly confused me because I was looking for "Firestore" or "Cloud Firestore", and the very similarly named "Cloud Filestore" tripped me up.
After a few seconds, it started working.
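For anyone who prefers to script the grant rather than click through the console, this is roughly the programmatic equivalent (project and service-account names are placeholders):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/iam/apiv1/iampb"
	resourcemanager "cloud.google.com/go/resourcemanager/apiv3"
)

func main() {
	ctx := context.Background()
	client, err := resourcemanager.NewProjectsClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Read-modify-write of the project IAM policy.
	policy, err := client.GetIamPolicy(ctx, &iampb.GetIamPolicyRequest{
		Resource: "projects/my-project-id",
	})
	if err != nil {
		log.Fatal(err)
	}
	policy.Bindings = append(policy.Bindings, &iampb.Binding{
		Role:    "roles/datastore.user", // the "Cloud Datastore User" role
		Members: []string{"serviceAccount:writer@my-project-id.iam.gserviceaccount.com"},
	})
	if _, err := client.SetIamPolicy(ctx, &iampb.SetIamPolicyRequest{
		Resource: "projects/my-project-id",
		Policy:   policy,
	}); err != nil {
		log.Fatal(err)
	}
}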
According to https://cloud.google.com/firestore/docs/reference/libraries#server_client_libraries,
In this environment, requests are not evaluated against your Firestore security rules
So I reset my access permissions in Firebase back to allow read, write: if false.

How to integrate Firestore Health Check and Dashboard metrics with our internal Company systems

Context: this is my first use of Firestore. I want to use it to push notification status to our mobile application. I can see that there is a Google Firestore dashboard under the Analytics umbrella. In our company we mainly use three tools for monitoring our applications: Zabbix, Dynatrace and a certain internal solution based on Elasticsearch. I need to integrate our internal monitoring systems with the metrics resulting from our first Firestore project.
What I am looking for, based on personal assumptions:
1) Maybe there exist some GET endpoints that I can connect to and poll for information, let's say each minute
2) Maybe, following the idea of the Realtime Database pushing events across a long-lived connection, I can code a Spring Boot application that imports the Firebase SDK and connects to some specific Firestore endpoint which will push any events of interest (e.g. a delay based on custom logic, or a dead service)
3) Maybe some plugin I can connect straight to a Kafka cluster hosted in our internal datacenter
4) Some plugin to connect from Firestore/Firebase to third-party tools (e.g. Zabbix, Dynatrace or Elasticsearch)
5) Some dependency I could import in Google Cloud Functions, triggered from a Firestore health-check engine, in order to post data to some internal endpoint
Perhaps there is already some approach universally used for the scenario where you have to connect Firestore to an internal monitoring system. I would highly appreciate it if you could tell me, so that I can narrow my searches, because I am not finding anything useful.
Please note that comparing monitoring approaches is not part of this question. It is a settled fact that our company uses internal dashboards and custom alert triggers; I just mentioned the names above to clarify what I mean by internal monitoring tools. The focus of this question is HOW to IMPORT/INTEGRATE/OBSERVE/CONSUME Firestore monitoring data. Our internal stack is beyond this question.
Here is the official documentation for Cloud Monitoring, which you can use to collect metrics, events, and metadata from Google Cloud Platform products and to create dashboards, charts, and alerts.
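As a concrete starting point, here is a minimal sketch (not a definitive integration) of polling a Firestore metric from the Cloud Monitoring API with the Go client; a small poller like this could forward the points to Zabbix or Elasticsearch. The project ID and the choice of metric are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pull the last 10 minutes of a Firestore metric (metric type assumed).
	now := time.Now()
	it := client.ListTimeSeries(ctx, &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/my-project-id", // placeholder
		Filter: `metric.type="firestore.googleapis.com/document/write_count"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-10 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
	})
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Each time series carries labeled points you can push to your stack.
		fmt.Println(ts.GetMetric().GetType(), len(ts.GetPoints()))
	}
}

Generally, anything you can chart in the Metrics Explorer can be pulled through the same timeSeries.list call and fed to your own systems.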
Please let me know if you have further questions.

Is there any way to show the Google Cloud Build Data on a separate Dashboard?

I am using Google Cloud Build for an auto-build process. I want to create a dashboard which shows the trigger details and logs for the builds I have worked on.
I know about Stackdriver but need suggestions other than that.
You could use Google's APIs to list or describe the builds and also to get the information you want about the triggers. The Viewing build results documentation explains what kind of information you can retrieve, but I always suggest playing around with Google's API Explorer.
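For example, a rough sketch of listing builds with the Cloud Build Go client, whose output you could feed into a custom dashboard; the project ID is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	cloudbuild "cloud.google.com/go/cloudbuild/apiv1/v2"
	"cloud.google.com/go/cloudbuild/apiv1/v2/cloudbuildpb"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := cloudbuild.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	it := client.ListBuilds(ctx, &cloudbuildpb.ListBuildsRequest{
		ProjectId: "my-project-id", // placeholder
	})
	for {
		b, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Feed these fields (and others on the Build message) into your dashboard.
		fmt.Println(b.GetId(), b.GetStatus(), b.GetLogUrl())
	}
}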

ElasticBeanstalk system events and my application logs

AWS has a really nice log management tool. I can send my application log messages there very easily.
Amazon Elastic Beanstalk has an "event management" tool.
The questions are:
Can I log my app messages together with the Elastic Beanstalk events? Are those events the syslog of the EC2 instance?
If yes, is this a good practice? Any problems with this? I was thinking about it because, if there are no problems, I would not need any third-party log management service.
The events shown in Elastic Beanstalk are internal to it. You are not supposed to fudge around with them (although nobody is really preventing you from playing around with them).
Also, there's a log snapshot feature that picks up logs related to the application. These logs mainly cover deployment and logging messages from the application itself, so you can use this feature if your application code logs messages. For example, if you are running Ruby on Rails with Passenger you would get log messages under /var/app/support/logs/passenger.log. These are not syslog messages per se, and the problem with this approach is that it's not straightforward to get your custom monitoring in place. For example, how do you parse your errors and send them to, say, PagerDuty?
As you've probably figured out, if you want custom monitoring (sending logs to a syslog facility) you are better off using a third-party tool like Splunk Storm, Papertrail or Loggly. Of course you can set up your own syslog server(s), but that will require you to build out all the infrastructure.
Hope this helps.
