I am new to Go and new to GCP, so I may not be able to provide all the details, but I will try to share what I have.
I have set up a small microservice using Docker. The docker-compose file runs my main method, which registers an HTTP handler via gorilla/mux ... it is working as expected. Here is sample code:
package main

import (
	"fmt"
	"net/http"
	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/stores/orders/{orderID}/status", handler).Methods("GET")
	http.Handle("/", r)
	fmt.Printf("%+v", http.ListenAndServe(":8080", nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Println("------ in handler!!!")
}
With this code, I am able to call my service after doing docker-compose up. What I am confused about is how I would use this gorilla/mux router to route my calls in a Google Cloud Function.
Based on my understanding, for a GCP Cloud Function I would tell it which method is the entry point, i.e.,
gcloud functions deploy service-name <removing_other_details> --entry-point handler
This handler would be called whenever a request is received; there is no ListenAndServe. So how can I use gorilla/mux in that model?
What I eventually want to do is extract path variables from the incoming request. One approach is to use string manipulation and just pull the path variable out of the request object, but that could be error prone. So I thought I could use gorilla/mux to handle such things.
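For illustration, this is roughly what I want the handler to look like (just a sketch, with the same imports as the snippet above; mux.Vars only returns the captured variables when the request was actually dispatched by a gorilla/mux router):
func handler(w http.ResponseWriter, r *http.Request) {
	// mux.Vars returns the path variables captured by the router,
	// e.g. {"orderID": "1234"} for /stores/orders/1234/status.
	orderID := mux.Vars(r)["orderID"]
	fmt.Fprintf(w, "status for order %s\n", orderID)
}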
Any ideas?
Google Cloud Functions is used to execute single-purpose functions based on triggers like HTTP or other GCP server-to-server triggers. Your Go service looks more like an HTTP server than a single function.
If you want to build a microservice architecture with Cloud Functions, what you would do is create a bunch of different functions, mostly triggered by HTTP (each one is automatically assigned its own HTTP address), and then call them from your application without needing any external HTTP router.
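For example, a single-purpose HTTP function in Go could look roughly like this (only a sketch; the package name, the entry-point name and the choice of a query parameter for the order ID are my assumptions, not something fixed by Cloud Functions):
package orders

import (
	"fmt"
	"net/http"
)

// OrderStatus is the entry point of one single-purpose HTTP function,
// e.g. deployed with --entry-point OrderStatus. Each function gets its
// own URL, so no router is needed; here the order ID is read from a
// query parameter to stay independent of any path rewriting.
func OrderStatus(w http.ResponseWriter, r *http.Request) {
	orderID := r.URL.Query().Get("orderID")
	if orderID == "" {
		http.Error(w, "missing orderID", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "status for order %s\n", orderID)
}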
If you want a distributed microservice (with every service sharing the same URL but exposed under a different endpoint within that URL), you want to take a look at App Engine, where you can deploy your server. You can use this tutorial to get started with Google App Engine.
In addition to PoWar's answer, I propose using Cloud Run instead of App Engine. Cloud Run runs on the same underlying infrastructure as Cloud Functions, and a lot of the features are similar.
In addition, Cloud Run uses containers, which are one of the current best practices for packaging applications. You can use a standard Dockerfile like this one in the documentation and then build with docker, or with Cloud Build if you don't have Docker installed:
gcloud builds submit -t gcr.io/PROJECT_ID/containerName
Or you can even use an alpha feature if you don't want to write or customize your Dockerfile; this is based on buildpacks:
gcloud alpha builds submit --pack=image=gcr.io/PROJECT_ID/containerName
And then deploy your image to Cloud Run:
gcloud run deploy --image=gcr.io/PROJECT_ID/containerName --platform=managed --region=yourRegion myService
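One detail to keep in mind that the question's code doesn't show: Cloud Run expects the container to listen on the port passed in the PORT environment variable. A minimal sketch of the server startup (reusing the same gorilla/mux router as in the question; falling back to 8080 for local runs is just an assumption) could look like this:
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/gorilla/mux"
)

func handler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/stores/orders/{orderID}/status", handler).Methods("GET")

	// Cloud Run injects the port to listen on via the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for local docker-compose runs
	}
	log.Fatal(http.ListenAndServe(":"+port, r))
}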
Related
I'm working on a system that calls external APIs, some of which are owned by my company and some of which are not.
My system exposes an HTTP interface that takes orders and publishes them to a message queue in order to run an operation chain. It is composed of 3 NodeJS processes (1 for HTTP, 2 message queue consumers), 2 databases and a message queue.
As I develop my application, it becomes hard to test all the scenarios covered by my system (even though I have unit tests). To ensure that all the components work together, I am writing specifications using the Gherkin language and cucumber-js.
To test the system, I want to be as close as possible to the deployment environment, so I start my whole system, including the databases, the NodeJS processes and the message queue, with docker-compose. All of the components communicate through a Docker network defined in the docker-compose configuration.
The issue is that I can't make sure that all of the external APIs are in the right state to accept my requests and that they will respond in a way that is useful for my test steps.
So, I thought about using a mock server for each of my dependencies and discovered Pact.io. As I understand it, Pact allows me to write contracts and start a mock server so my system can run HTTP requests against that mock server. Pact also allows me to give the contract to the service provider so it can run the contract against the real app to see if it really works.
I went through the examples in JavaScript and I am able to start a mock service, provide it with an interaction, verify the interaction and close the mock service (JS with mocha example).
My issue is that I want my system to be as close as possible to production, so I want it to access the Pact mock service through my Docker network. I saw a Pact CLI Docker image to run the Pact mock service (Pact CLI docker image), but once my mock server is dockerized, I lose the control I had with the JS wrapper to add new interactions.
Also, I don't want to write Pact files by hand; I want to add interactions while my tests are running, otherwise I would declare the test data twice (once in the Cucumber test scenarios and once in the Pact files).
My questions are:
Is there a way to bind the JS wrapper to an existing mock service, e.g. a dockerized one?
When using the Pact Docker image, is there a way to add interactions at run time?
Is Pact the right tool to use, given that I just need a mock service?
Edit
I just created a sandbox environment to see what could be done with the NodeJS wrapper. It seems that you can create a mock service using Docker and then control it via the NodeJS wrapper.
# Starts the docker container
docker run -dit \
--rm \
--name pact-mock-service \
-p 1234:1234 \
-v <YOUR_PATH>/docker_pacts/:/tmp/pacts \
pactfoundation/pact-cli:latest \
mock-service \
-p 1234 \
--host 0.0.0.0 \
--pact-dir /tmp/pacts
const { Pact, MockService } = require('@pact-foundation/pact')
const axios = require('axios')

const pact = new Pact({
  consumer: "my client",
  provider: "some provider",
  // These two are ignored since we override the inner mock service below.
  port: 1234,
  host: 'localhost'
})

const mockService = new MockService(
  // You need to duplicate this data; normally it is passed
  // by the pact object when calling `pact.setup()`.
  'my client',
  'provider',
  // The port and host of the docker container.
  1234,
  'localhost'
)

pact.mockService = mockService

async function run () {
  await pact.addInteraction({
    state: "some data is created",
    withRequest: {
      method: "GET",
      path: "/hello"
    },
    willRespondWith: {
      status: 200,
      body: {
        hello: 'hello world'
      }
    },
    uponReceiving: ''
  })

  const response = await axios.get('http://localhost:1234/hello')
  console.log(response.data) // { "hello": "hello world" }
}

run().catch(console.error)
Edit 2
I will probably follow Matthew Fellows's answer and test my system with some sort of unit tests, with Pact mocking the external interactions of my system.
So, I thought about using a mock server for each of my dependencies and discovered Pact.io. As I understand it, Pact allows me to write contracts and start a mock server so my system can run HTTP requests against that mock server. Pact also allows me to give the contract to the service provider so it can run the contract against the real app to see if it really works.
Yes, that's correct. Pact can probably be considered a unit testing tool for your API client that uses contract testing to ensure the mocks are valid.
If you're using Pact just for mocking, though, you're missing out on all of the key benefits.
Using Pact in such a high-level test is considered bad practice and, as you can see, is difficult to do (you're working against the way it is intended to be used).
I would worry less about overlapping tests (end-to-end tests will always overlap with other test layers by design) and concern yourself more with ensuring each of the API contracts is covered by Pact. These tests run much faster, are more precise in what they test and are less flaky.
This way, you can reduce the scope of your end-to-end BDD scenarios to the key ones, and that should reduce the maintenance cost.
Question:
Is there an option within Spring or its embedded servlet container to open ports only when Spring is ready to handle traffic?
Situation:
In the current setup I use a Spring Boot application running on Google Cloud Run.
Circumstances:
Cloud Run does not support liveness/readiness probes; it considers an open port as "application ready".
Cloud Run sends requests to the container although Spring is not yet ready to handle them.
Spring starts its servlet container and opens its ports while still spinning up its beans.
Problem:
Traffic to an unready application results in a lot of HTTP 429 status codes.
This affects:
new deployments
scaling capabilities of Cloud Run
My desire:
Configure Spring/the servlet container to delay opening ports until the application is actually ready.
Delaying the opening of the ports until the application is ready would ease much of the pain without interfering too much with the existing code base.
Are there any alternatives that don't cause too much pain?
Things I found and considered not viable:
Using a native image is not an option, as it is considered experimental and consumes more RAM at compile time than our deployment pipeline agents are allowed to allocate (max 8 GB vs. the needed 13 GB).
Another answer I found: readiness check for google cloud run - how?
I don't see how it could satisfy my needs, since the Spring Boot startup time is still slow. That's why my initial idea was to delay opening the ports.
I did not have time to test the following, but one thing I stumbled upon is a blog post about using multiple processes within a container. Although this goes against recommended container principles, it seems viable until Cloud Run supports probes of any kind.
As you are well aware, “Cloud Run currently does not have a readiness/liveness check to avoid sending requests to unready applications”, so I would say there is not much that can be done on Cloud Run’s side except:
Try and optimise the Spring Boot app as per the docs.
Make a heavier entrypoint in the Cloud Run service that takes care of more setup tasks. This Stack Overflow thread mentions how “a ‘heavier’ entrypoint will help post-deploy responsiveness, at the cost of slower cold-starts” (this is the most relevant solution from a Cloud Run perspective and outlines the issue correctly).
Run multiple processes in a container in Cloud Run, as you mentioned.
This question seems more directed at Spring Boot specifically, and I found an article with a similar requirement.
However, if you absolutely need the app to be ready to serve when requests come in, there is another alternative to Cloud Run: Google Kubernetes Engine (GKE), which supports readiness/liveness probes.
I'm trying to access the Google Cloud Vision API from a Go service running on Cloud Run, but the program keeps panicking when I try to create the Vision client:
client, err := vision.NewImageAnnotatorClient(context.Background(), nil)
Panic:
runtime error: invalid memory address or nil pointer dereference goroutine
I assumed that, as it's running within GCP and the Cloud Run service is assigned an IAM service account with permission to access the Vision API, it would just be able to access it without a key, similar to Cloud Functions. Is there anything I'm missing here for it to work?
The code snippet is quite short, so it doesn't really give us enough information as to why it could be failing.
Looking at the documentation, I don't think you need the nil as a second parameter to vision.NewImageAnnotatorClient.
func NewImageAnnotatorClient
Try passing only the context.Background() and see if that fixes your issue.
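A minimal sketch of what that would look like (assuming the cloud.google.com/go/vision/apiv1 package from your snippet; on Cloud Run the client should pick up the service account's default credentials on its own):
package main

import (
	"context"
	"log"

	vision "cloud.google.com/go/vision/apiv1"
)

func main() {
	ctx := context.Background()

	// No explicit options: the client falls back to the runtime's
	// default credentials (here, the Cloud Run service account).
	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		log.Fatalf("creating vision client: %v", err)
	}
	defer client.Close()
}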
Currently we are running a NodeJS web app using Serverless. The API Gateway uses a single API endpoint for the entire application, and routing is handled internally. So basically there is a single HTTP {Any+} endpoint for the entire application.
My questions are:
1. What are the disadvantages of this method? (I know Lambda is built for FaaS, but right now we are handling it as a monolithic function.)
2. How many instances can Lambda run at a time if we follow this method? Can it handle a million+ requests at once?
Any help would be appreciated. Thanks!
The disadvantage is, as you say, that it's monolithic, so you haven't modularised your code at all. The idea is that adjusting one function shouldn't affect the rest, but in this case it can.
You can run as many as you like concurrently; you can set limits, though (and there is an initial account-level concurrency limit for safety, which can be raised).
If you are running the function regularly, it should also 'warm start', i.e. have a shorter boot time after the first invocation.
I want to instantiate different containers from the same image to serve different requests and process different data.
Once a request is received, Docker has to instantiate a container (C1) from image (I) to work on the related dataset file D1.
For the second request, Docker has to instantiate another container (C2) from the same image (I) to work on dataset file D2.
And so on ...
Does Docker have a built-in facility to orchestrate this kind of workload, or do I have to write my own service to receive requests and start the corresponding containers to serve them?
Kindly provide your guidance on the best way to do this.
I think what you're looking for is a serverless / function-as-a-service framework. AFAIK Docker doesn't have anything like this built in. Take a look at OpenFaaS or Kubeless; both are frameworks that should help you get started with implementing your case.
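If you do end up writing your own dispatcher instead, a rough sketch of the idea in Go (shelling out to the docker CLI; the image name, the dataset query parameter and the port are all placeholder assumptions) could look like this:
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
)

func main() {
	// Each request starts a fresh container from the same image and
	// tells it which dataset file to process via an environment variable.
	http.HandleFunc("/process", func(w http.ResponseWriter, r *http.Request) {
		dataset := r.URL.Query().Get("dataset") // e.g. "D1", "D2", ...
		if dataset == "" {
			http.Error(w, "missing dataset parameter", http.StatusBadRequest)
			return
		}

		// Placeholder image name; --rm removes the container when it exits.
		cmd := exec.Command("docker", "run", "--rm",
			"-e", "DATASET="+dataset,
			"my-image:latest")
		out, err := cmd.CombinedOutput()
		if err != nil {
			http.Error(w, fmt.Sprintf("container failed: %v\n%s", err, out), http.StatusInternalServerError)
			return
		}
		w.Write(out)
	})

	log.Fatal(http.ListenAndServe(":9090", nil))
}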