I'm trying to access the Google Cloud Vision API from a Go service running on Cloud Run, but the program keeps panicking when I try to create the Vision client:
client, err := vision.NewImageAnnotatorClient(context.Background(), nil)
Panic:
runtime error: invalid memory address or nil pointer dereference goroutine
I assumed that, since it's running within GCP and the Cloud Run service is assigned an IAM service account with permission to access the Vision API, it would be able to authenticate without a key, just like Cloud Functions. Is there anything I'm missing here to make it work?
The code snippet is quite short so it doesn't really give us enough information as to why it could be failing.
Looking at the documentation, you don't need to pass nil as a second argument to vision.NewImageAnnotatorClient; the options parameter is variadic, and passing a literal nil puts a nil ClientOption into that list, which is a plausible source of the nil pointer dereference.
func NewImageAnnotatorClient(ctx context.Context, opts ...option.ClientOption) (*ImageAnnotatorClient, error)
Try passing only context.Background() and see if that fixes your issue.
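For example, a minimal sketch of creating the client with no options at all (newVisionClient is just an illustrative helper; on Cloud Run the client should pick up the service's IAM identity through Application Default Credentials, so no key is needed):

import (
	"context"

	vision "cloud.google.com/go/vision/apiv1"
)

func newVisionClient(ctx context.Context) (*vision.ImageAnnotatorClient, error) {
	// Omitting the variadic options entirely avoids putting a nil
	// option.ClientOption into the options slice.
	return vision.NewImageAnnotatorClient(ctx)
}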
I am using Google's Error Reporting client in Golang - https://pkg.go.dev/cloud.google.com/go/errorreporting.
I'm wondering how I should use this for local testing, since calling Client.Report(e Entry) will panic if there are no credentials.
Options considered:
Pass in the environment (staging, production, etc) and only call the methods on the client if we are in an environment with credentials.
Check the client is not nil every time we use it.
Wrap the client and add nil checks.
Something else? I'm hoping to find a neater solution, e.g. a no-op client for use in testing environments.
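One possible shape for the "wrap the client" option is a tiny interface with a no-op implementation for environments without credentials (the Reporter/noopReporter names are illustrative, not part of the library):

import (
	"cloud.google.com/go/errorreporting"
)

// Reporter is satisfied by both *errorreporting.Client and the no-op stub.
type Reporter interface {
	Report(e errorreporting.Entry)
}

// noopReporter silently drops entries; intended for local testing.
type noopReporter struct{}

func (noopReporter) Report(errorreporting.Entry) {}

// newReporter picks the real client when credentials are available,
// otherwise the no-op implementation.
func newReporter(client *errorreporting.Client, hasCreds bool) Reporter {
	if hasCreds && client != nil {
		return client
	}
	return noopReporter{}
}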
I have just updated a Go application to use pipelines, and I noticed in my Redis monitoring app that the number of connected clients skyrocketed from 1 or 2 to 300+.
I can't find information on whether it is normal for pipelines to connect as a new client each time, or whether this is unusual behaviour.
Has anyone else seen this?
The Redis connection is a global.
The Redis setup is the default.
It is used in some goroutines.
Example usage:
pipe := rdb.Pipeline()
pipe.Set(ctx, "hi", "hi", 0) // Set (capitalised) takes an expiration argument; 0 means no expiry
_, err := pipe.Exec(ctx)     // go-redis exposes Exec, not Execute
What tends to happen is that every time a pipeline is run like the above, it creates a new client, even though the pipeline is using the global Redis connection.
This only happens with pipelines; other Redis calls are fine.
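For reference, a sketch of what I'd expect the pipeline call to look like (assuming go-redis v8-style APIs), including a pool-stats check that can help confirm whether this client is really opening new connections:

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func runPipeline(ctx context.Context, rdb *redis.Client) error {
	pipe := rdb.Pipeline()
	pipe.Set(ctx, "hi", "hi", 0) // 0 means no expiration
	_, err := pipe.Exec(ctx)

	// PoolStats reports how many connections this client currently holds;
	// comparing it with the server-side client count can narrow down the source.
	fmt.Printf("total conns: %d\n", rdb.PoolStats().TotalConns)
	return err
}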
I am new to Go and new to GCP, so I may not be able to give all the details, but I will try to share what I have.
I have set up a small microservice using Docker. The docker-compose file runs my main method, which registers an HTTP handler via gorilla/mux ... it is working as expected. Here is sample code:
func main() {
	r := mux.NewRouter() // github.com/gorilla/mux
	r.HandleFunc("/stores/orders/{orderID}/status", handler).Methods("GET")
	http.Handle("/", r)
	fmt.Printf("%+v", http.ListenAndServe(":8080", nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Println("------ in handler!!!")
}
With this code, I am able to call my service after running docker-compose up. What I am confused about is how I would use this gorilla/mux router to route my calls in a Google Cloud Function.
Based on my understanding, for a GCP Cloud Function I tell it which function is the entry point, i.e.,
gcloud functions deploy service-name <removing_other_details> --entry-point handler
This handler would be called whenever a request is received; there is no ListenAndServe. So how can I use gorilla/mux in that model?
What I eventually want to do is extract path variables from the incoming request. One approach is string manipulation to pull the path variables out of the request object, but that seems error-prone, so I thought I could use gorilla/mux to handle it.
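For example, would a pattern like this be reasonable, where the deployed entry point just delegates every request to the router? (Just a sketch, assuming the standard func(http.ResponseWriter, *http.Request) entry-point signature; Handler and orderStatus are illustrative names.)

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

var router = mux.NewRouter()

func init() {
	router.HandleFunc("/stores/orders/{orderID}/status", orderStatus).Methods("GET")
}

// Handler would be the value passed to --entry-point; it hands each request to gorilla/mux.
func Handler(w http.ResponseWriter, r *http.Request) {
	router.ServeHTTP(w, r)
}

func orderStatus(w http.ResponseWriter, r *http.Request) {
	orderID := mux.Vars(r)["orderID"] // path variable extracted by gorilla/mux
	fmt.Fprintf(w, "status for order %s", orderID)
}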
Any ideas?
Google Cloud Functions is meant for single-purpose functions executed in response to triggers such as HTTP requests or other GCP server-to-server events. Your Go service looks more like an HTTP server than a single function.
If you want to build a microservice architecture with Cloud Functions, you would create a set of separate functions, mostly triggered by HTTP (each one is automatically assigned its own HTTP address), and then call them from your application without needing an external HTTP router.
If you want a distributed microservice (with every service sharing the same URL but exposed at a different endpoint within it), take a look at App Engine, where you can deploy your server. You can use this tutorial to get started with Google App Engine.
In addition to PoWar's answer, I propose using Cloud Run instead of App Engine. Cloud Run runs on the same underlying infrastructure as Cloud Functions, and a lot of the features are similar.
In addition, Cloud Run uses containers, which are one of today's best practices for packaging applications. You can use a standard Dockerfile like the one in the documentation and build it with Docker, or with Cloud Build if you don't have Docker installed:
gcloud builds submit -t gcr.io/PROJECT_ID/containerName
Or you can even use an alpha feature if you don't want to write or customize your Dockerfile; it is based on buildpacks:
gcloud alpha builds submit --pack=image=gcr.io/PROJECT_ID/containerName
And then deploy your image on Cloud Run
gcloud run deploy --image=gcr.io/PROJECT_ID/containerName --platform=managed --region=yourRegion myService
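One detail worth noting: on Cloud Run the container must listen on the port given by the PORT environment variable, so the main from the question only needs a small tweak (a sketch, not the exact code from the documentation):

import (
	"log"
	"net/http"
	"os"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/stores/orders/{orderID}/status", handler).Methods("GET")

	// Cloud Run injects the port to listen on via the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, r))
}

func handler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK) // placeholder handler
}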
I am running the STS AssumeRole operation from inside a Lambda function and experiencing weird behaviour. My Lambda function runs as a dedicated role, call it LambdaRole, and I'm trying to assume a second role (call it S3Role) in order to get credentials for S3 access that I can pass to another system. This other system doesn't have an IAM role attached, and I'd rather not generate static keys for it.
The operation sometimes succeeds upon first deploying my Lambda function, and continues to work for a while, but eventually stops working. The 'stopped working' is simply a timeout where the service call never returns. Sometimes a fresh deployment of my lambda function doesn't succeed for the 'first' call either.
I've tried exploring rate limits etc. for STS but don't see any that are relevant. I can call AssumeRole from the CLI as many times as I want and it's fast and responsive.
My Lambda function runs inside a VPC, and I've tried with and without an endpoint to STS (apparently you do not need an STS endpoint inside your VPC, which makes some sense).
So, in summary: is there any extra intelligence happening during the AssumeRole operation which is causing this problem? Is something special or different happening in the Lambda container that causes this to break? Any debugging ideas?
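One debugging idea (a sketch assuming the AWS SDK for Go v1; the role ARN is made up): put an explicit timeout on the AssumeRole call so a networking hang surfaces as an error rather than the Lambda timing out. If it fails fast inside the VPC but works outside, the VPC route to STS (NAT gateway or interface endpoint) is the likely culprit.

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func assumeS3Role(ctx context.Context) (*sts.AssumeRoleOutput, error) {
	sess := session.Must(session.NewSession())
	svc := sts.New(sess)

	// Bound the call so a missing route to STS shows up as a context deadline error.
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	out, err := svc.AssumeRoleWithContext(ctx, &sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/S3Role"), // hypothetical ARN
		RoleSessionName: aws.String("s3-access"),
	})
	if err != nil {
		log.Printf("AssumeRole failed: %v", err)
		return nil, err
	}
	return out, nil
}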
Is it OK to panic() when creating the AWS session fails?
Alternatively, I can just return the error from my handler function (in which case I have to create the session in the handler code rather than in init()).
The docs say
Lambda will re-create the function automatically
Does this mean a panic always causes a cold start, and that returning an error from the handler is preferred?
Yes. A panic will trigger a cold restart of your code. The use of panic should be reserved for exceptional circumstances; returning an error is to be preferred in most circumstances.
The answer depends on what is going on in the init section.
If you create session clients there to connect to other services, it may be better to panic and take the cold start than to continue the container's lifecycle with failed clients.
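To make the trade-off concrete, a small sketch of both options (assuming the aws-sdk-go v1 session API and an aws-lambda-go style handler; names are illustrative):

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
)

// Option 1: build the session in package init. session.Must panics on failure,
// so a bad configuration surfaces immediately and the next invocation starts
// from a fresh (cold) container.
var sess = session.Must(session.NewSession())

// Option 2: build the session inside the handler and return the error.
// Lambda records a normal invocation error and keeps the warm container.
func handleRequest(ctx context.Context) (string, error) {
	s, err := session.NewSession()
	if err != nil {
		return "", fmt.Errorf("creating AWS session: %w", err)
	}
	_ = s // use the session here
	return "ok", nil
}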