GCP Workflow: Handling HTTP function responses other than 200 - google-workflows

When calling an HTTP endpoint in a GCP workflow, only HTTP status 200 is considered a success.
How do you handle other success status codes: 201, 202, etc.?
Example workflow from samples:
- readItem:
    try:
      call: http.get
      args:
        url: https://example.com/someapi
        auth:
          type: OIDC
      result: APIResponse
    except:
      as: e
      steps:
        - knownErrors:
            switch:
              - condition: ${not("HttpError" in e.tags)}
                next: connectionProblem
              - condition: ${e.code == 404}
                next: urlNotFound
              - condition: ${e.code == 403}
                next: authProblem
        - UnhandledException:
            raise: ${e}
- urlFound:
    return: ${APIResponse.body}
- connectionProblem:
    return: "Connection problem; check URL"
- urlNotFound:
    return: "Sorry, URL wasn't found"
- authProblem:
    return: "Authentication error"
If the API endpoint https://example.com/someapi returns anything other than a 200 status code, the connectionProblem step is invoked.
This is the same whether it's a GET or a POST request.
What is the best way of handling this?

There is nothing in the documentation for Google Workflows about how to handle other 2xx statuses, so I assume this is not possible without treating them as errors.
This means that in order to do it you are going to need to add an extra step that deals with this status as part of an error-handling strategy, like - condition: ${e.code == 201} for example.
Alternatively, you could open a feature request in Google's Issue Tracker so that they can consider implementing different treatment of such status codes, or at least cover this in more detail in the documentation.
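For illustration: since 2xx responses do not raise an exception (see the answer below), a 201 or 202 lands in the normal result object, so the extra step can also branch on the saved response code directly, without an except block. A minimal sketch (untested; the step names are invented and the URL is the question's placeholder):
- readItem:
    call: http.get
    args:
      url: https://example.com/someapi
    result: APIResponse
- checkStatus:
    switch:
      - condition: ${APIResponse.code == 201}
        next: handleCreated
      - condition: ${APIResponse.code == 202}
        next: handleAccepted
    next: handleOK
- handleCreated:
    return: "Resource created"
- handleAccepted:
    return: "Request accepted for processing"
- handleOK:
    return: ${APIResponse.body}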

At present, response codes >= 400 and <= 599 are considered an error and will raise an exception; 2xx codes are considered a success and will not.
If you nevertheless want to trigger an exception handler for response codes in the 2xx range (or for any other reason), this can be done by adding an additional step inside the try block, for example (illustration only):
main:
  steps:
    - getStuff:
        try:
          steps:
            - callStep:
                call: http.get
                args:
                  url: <SOME URL>
                result: r
            - checkNotOK:
                switch:
                  - condition: ${r.code == 202}
                    raise: ${r}
        retry:
          predicate: ${custom_predicate}
          max_retries: 5
          backoff:
            initial_delay: 2
            max_delay: 60
            multiplier: 2
custom_predicate:
  params: [e]
  steps:
    - what_to_repeat:
        switch:
          - condition: ${e.code == 202}
            return: true
    - otherwise:
        return: false

Related

Google workflow http.get url_encode to Cloud Run function returns 403

I have a Cloud Run function that accepts multiple URL inputs, and this works fine via cURL from Cloud Shell, e.g.
curl https://myapp.a.run.app/function1/input1/input2
where input1 is an email address and input2 a string with no spaces, or, with input3 an int and input4 a filename with spaces,
curl https://myapp.a.run.app/function2/input1/input2/input3/input4
I have a Cloud Workflow that calls this function in a few steps, building up a URL from previous responses. The calls to the function in the earlier steps work fine; however, when I attempt to call the function via a URL where the email address/filename inputs have been encoded with text.url_encode first, the workflow fails with a 403 error.
- step3_assign_input1:
    assign:
      - step3_URL: '${"https://myapp.a.run.app/function1/" + step2result.body.field1.field2["b:input1"]}'
- step4_get_input1:
    try:
      call: http.get
      args:
        url: ${step3_URL}
      result: step4result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- step4a_sleep:
    call: sys.sleep
    args:
      seconds: 5
- step5_assign_input2:
    assign:
      - step5_URL: '${"https://myapp.a.run.app/function2/" + step4result.body.field1.field2["a:input2"]}'
- step6_get_input2:
    try:
      call: http.get
      args:
        url: ${step5_URL}
      result: step6result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- step7a_encode_email:
    call: text.url_encode
    args:
      source: ${args.email}
    result: encode_email
- step7b_encode_filename:
    call: text.url_encode
    args:
      source: '${step6result.body.field1.field2["a:FileName"]}'
    result: encode_filename
- step8_assign_emailURL:
    assign:
      - emailURL: '${"https://myapp.a.run.app/email/" + encode_email + "/" + args.type + "/" + args.serial + "/" + encode_filename}'
- step9_get_emailURL:
    try:
      call: http.get
      args:
        url: ${emailURL}
      result: step9result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- returnOutput:
    return: ${step9result}
custom_predicate:
  params: [e]
  steps:
    - what_to_repeat:
        switch:
          - condition: ${e.code in [429, 500, 502, 503, 504]}
            return: true
    - otherwise:
        return: false
From the workflow logs I can see the URL is being constructed as
https://myapp.a.run.app/email/someone%40email.com/String/12345678/THIS%20File%20Name%01.pdf
which, if I call it via cURL GET in Cloud Shell, returns a 200 response code.
In the workflow logs I get "HTTP server responded with error code 403", with a body of "Error: Forbidden / Access is forbidden."
{
  "insertId": "sl7uy9bd1",
  "jsonPayload": {
    "activityTime": "2023-01-06T05:44:39Z",
    "state": "FAILED",
    "@type": "type.googleapis.com/google.cloud.workflows.type.ExecutionsSystemLog",
    "failure": {
      "source": "main.step9_get_emailURL, line: 76",
      "exception": "HTTP server responded with error code 403\nin step \"step9_get_emailURL\", routine \"main\", line: 76: {\"body\":\"\\n\\u003chtml\\u003e\\u003chead\\u003e\\n\\u003cmeta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\"\\u003e\\n\\u003ctitle\\u003e403 Forbidden\\u003c/title\\u003e\\n\\u003c/head\\u003e\\n\\u003cbody text=#000000 bgcolor=#ffffff\\u003e\\n\\u003ch1\\u003eError: Forbidden\\u003c/h1\\u003e\\n\\u003ch2\\u003eAccess is forbidden.\\u003c/h2\\u003e\\n\\u003ch2\\u003e\\u003c/h2\\u003e\\n\\u003c/body\\u003e\\u003c/html\\u003e\\n\",\"code\":403,\"headers\":{\"Alt-Svc\":\"h3=\\\":443\\\"; ma=2592000,h3-29=\\\":443\\\"; ma=2592000,h3-Q050=\\\":443\\\"; ma=2592000,h3-Q046=\\\":443\\\"; ma=2592000,h3-Q043=\\\":443\\\"; ma=2592000,quic=\\\":443\\\"; ma=2592000; v=\\\"46,43\\\"\",\"Content-Length\":\"235\",\"Content-Type\":\"text/html; charset=UTF-8\",\"Date\":\"Fri, 06 Jan 2023 05:44:38 GMT\",\"X-Appengine-Country\":\"ZZ\"},\"message\":\"HTTP server responded with error code 403\",\"tags\":[\"HttpError\"]}"
    }
  },
  "resource": {
    "type": "workflows.googleapis.com/Workflow",
    "labels": {
      "workflow_id": "myworkflow",
      "location": "australia-southeast1",
      "resource_container": "323152299552"
    }
  },
  "timestamp": "2023-01-06T05:44:39.101297442Z",
  "severity": "ERROR",
  "labels": {
    "workflows.googleapis.com/revision_id": "000036-15c",
    "workflows.googleapis.com/execution_id": "f2175706-7e8a-4321-916b-487231a10d6b"
  },
  "logName": "projects/myproject/logs/workflows.googleapis.com%2Fexecutions_system",
  "receiveTimestamp": "2023-01-06T05:44:40.026366626Z"
}
If I look at the logs of the Cloud Run function, I see successful logs from the earlier steps in the workflow, but no logs from the step that is failing.
Hopefully someone can provide some insight into why this fails when called via a workflow step.

How to route requests to a dynamic endpoint on kong api-gateway

I have a service named alpha (created using python-django) that runs on http://127.0.0.1:9000 and has these two endpoints:
/health returns {"health": "OK"} status 200
/codes/<str:code> returns {"code": code} status 200
I also have a Kong API gateway in DB-less declarative mode that runs on localhost port 80.
in kong.yaml I have two services
services:
  - name: local-alpha-health
    url: http://host.docker.internal:9000/health
    routes:
      - name: local-alpha-health
        methods:
          - GET
        paths:
          - /alpha/health
        strip_path: true
  - name: local-alpha-code
    url: http://host.docker.internal:9000/code/ # HOW TO WRITE THIS PART???
    routes:
      - name: local-alpha-code
        methods:
          - GET
        paths:
          - /alpha/code/(?<appcode>\d+) # Is this right???
        strip_path: true
If I send a GET request to http://127.0.0.1/alpha/health, it returns {"health": "OK"} with status 200, which shows Kong is working.
I want to send a request such as http://127.0.0.1/alpha/code/123 and receive {"code": 123} with status 200, but I don't know how to set up the kong.yaml file to do this. If I send a request to http://127.0.0.1/alpha/code/123, I get a 404 (from the alpha Django application), which means Kong is routing the request to the alpha service. But if I send a request to http://127.0.0.1/alpha/code/abc, I get {"message": "no Route matched with those values"}, which shows the regex is working.
I could do this
services:
  - name: local-alpha-health
    url: http://host.docker.internal:9000/
    routes:
      - name: local-alpha-health
        methods:
          - GET
        paths:
          - /alpha
        strip_path: true
Then a request sent to http://127.0.0.1/alpha/code/123 would go to http://127.0.0.1:9000/code/123, but I cannot constrain it with a regex.
Any idea how to route requests to a dynamic endpoint on Kong API gateway?
This content seems related, but I cannot figure out how to set it up:
https://docs.konghq.com/gateway-oss/2.5.x/proxy/
Note that a request like http://127.0.0.1/alpha/code/abc will indeed not match the rule you have added, because of the \d+ (which matches one or more digits). Also, http://127.0.0.1/alpha/code/123 will reach the upstream as a request to /, since you have strip_path set to true.
I have tested your example with some minor tweaks to proxy to a local httpbin service, which has a similar endpoint (/status/<code>).
Start a local httpbin service:
$ docker run --rm -d -p "8080:80" kennethreitz/httpbin
Start Kong with the following config:
_format_version: "2.1"
services:
  - name: local-alpha-code
    url: http://localhost:8080
    routes:
      - name: local-mockbin-status
        methods:
          - GET
        paths:
          - /status/(?<appcode>\d+)
        strip_path: false
Note that strip_path is set to false, so the entire matching path is proxied to the upstream.
Test it out with:
$ http :8000/status/200
HTTP/1.1 200 OK

How do you start a workflow from another workflow and retrieve the return value of the called workflow?

I am testing Google Workflows and would like to call a workflow from another workflow, but as a separate process (not a subworkflow).
I am able to start the execution but am currently unable to retrieve the return value. I receive instead an instance of the execution:
{
  "argument": "null",
  "name": "projects/xxxxxxxxxxxx/locations/us-central1/workflows/child-workflow/executions/9fb4aa01-2585-42e7-a79f-cfb4b57b22d4",
  "startTime": "2020-12-09T01:38:07.073406981Z",
  "state": "ACTIVE",
  "workflowRevisionId": "000003-cf3"
}
parent-workflow.yaml
main:
  params: [args]
  steps:
    - callChild:
        call: http.post
        args:
          url: 'https://workflowexecutions.googleapis.com/v1beta/projects/my-project/locations/us-central1/workflows/child-workflow/executions'
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: callresult
    - returnValue:
        return: ${callresult.body}
child-workflow.yaml:
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: CurrentDateTime
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${CurrentDateTime.body.dayOfTheWeek}
    result: WikiResult
- returnOutput:
    return: ${WikiResult.body[1]}
As an added question: how can I create a dynamic URL from a variable? ${} doesn't seem to work there.
As executions are asynchronous API calls, you need to poll the execution to see when it has finished.
You can use the following algorithm:
main:
  steps:
    - callChild:
        call: http.post
        args:
          url: ${"https://workflowexecutions.googleapis.com/v1beta/projects/" + sys.get_env("GOOGLE_CLOUD_PROJECT_ID") + "/locations/us-central1/workflows/http_bitly_secrets/executions"}
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: workflow
    - waitExecution:
        call: CloudWorkflowsWaitExecution
        args:
          execution: ${workflow.body.name}
        result: workflow
    - returnValue:
        return: ${workflow}
CloudWorkflowsWaitExecution:
  params: [execution]
  steps:
    - init:
        assign:
          - i: 0
          - valid_states: ["ACTIVE", "STATE_UNSPECIFIED"]
          - result:
              state: ACTIVE
    - check_condition:
        switch:
          - condition: ${result.state in valid_states and i < 100}
            next: iterate
        next: exit_loop
    - iterate:
        steps:
          - sleep:
              call: sys.sleep
              args:
                seconds: 10
          - process_item:
              call: http.get
              args:
                url: ${"https://workflowexecutions.googleapis.com/v1beta/" + execution}
                auth:
                  type: OAuth2
              result: result
          - assign_loop:
              assign:
                - i: ${i + 1}
                - result: ${result.body}
        next: check_condition
    - exit_loop:
        return: ${result}
The CloudWorkflowsWaitExecution subworkflow here loops at most 100 times, with a 10-second delay between iterations; it stops when the workflow has finished and returns the result.
The output is:
argument: 'null'
endTime: '2020-12-09T13:00:11.099830035Z'
name: projects/985596417983/locations/us-central1/workflows/call_another_workflow/executions/05eeefb5-60bb-4b20-84bd-29f6338fa66b
result: '{"argument":"null","endTime":"2020-12-09T13:00:00.976951808Z","name":"projects/985596417983/locations/us-central1/workflows/http_bitly_secrets/executions/2f4b749c-4283-4c6b-b5c6-e04bbcd57230","result":"{\"archived\":false,\"created_at\":\"2020-10-17T11:12:31+0000\",\"custom_bitlinks\":[],\"deeplinks\":[],\"id\":\"j.mp/2SZaSQK\",\"link\":\"//<edited>/2SZaSQK\",\"long_url\":\"https://cloud.google.com/blog\",\"references\":{\"group\":\"https://api-ssl.bitly.com/v4/groups/Bg7eeADYBa9\"},\"tags\":[]}","startTime":"2020-12-09T13:00:00.577579042Z","state":"SUCCEEDED","workflowRevisionId":"000001-478"}'
startTime: '2020-12-09T13:00:00.353800247Z'
state: SUCCEEDED
workflowRevisionId: 000012-cb8
In the result there is a subkey that holds the result of the external workflow execution.
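Since that result field is itself a JSON string, you would decode it before using fields of the child's return value. A minimal sketch (assuming workflow holds the final execution object returned by the polling loop above; json.decode is the standard library decoder):
- decodeChildResult:
    assign:
      - child_output: ${json.decode(workflow.result)}
- useChildResult:
    return: ${child_output}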
The best method is now the workflows.executions.run helper method, which formats the request and blocks until the workflow execution has completed:
- run_execution:
    try:
      call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
      args:
        workflow_id: ${workflow}
        location: ${location}   # Defaults to current location
        project_id: ${project}  # Defaults to current project
        argument: ${arguments}  # Arguments could be specified inline as a map instead.
      result: r1
    except:
      as: e
      steps: ...  # handle a failed execution
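Applied to the question's child-workflow, a complete parent might then look like this (a sketch; per the comments above, location and project default to those of the calling workflow, so only workflow_id is strictly needed):
main:
  steps:
    - run_child:
        call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
        args:
          workflow_id: child-workflow  # the child must live in the same project/location
        result: child_result
    - returnValue:
        return: ${child_result}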

How do I make sure my hasura actions are ready to be used for my ci / cd tests?

I have started building up a backend with Hasura. That backend is validated on my CI/CD service with API tests, among other things.
Within my Hasura backend, I have implemented OpenFaaS functions. I am deploying everything on a Kubernetes cluster. Before running the tests, I wait until all jobs and all deployments are done. I am deploying with devspace, which deploys everything through Helm charts. So, at the end of the deployment, I am dead sure the deployments are all done (ultimately, I've checked directly on the k8s cluster). Even the OpenFaaS functions are deployed and ready to use.
Yet, when I run my acceptance tests, I run into issues. If I don't wait long enough, my actions do not work properly. They return strange errors, e.g. that the response returned invalid JSON
Error: GraphQL error: not a valid json response from webhook
or that the mutation is not in the mutation root
Error: GraphQL error: field "login" not found in type: 'mutation_root'
However, the OpenFaaS functions themselves log only success. There is no error there. They are called and they apparently throw no error.
Waiting 3-5 minutes after the Hasura deployment, or calling the actions until they return something relevant, works fine, however. My current work-around is to wait an additional 5 minutes after my deployments are done and only then run my API tests.
Is that normal? Is there a more efficient way to get feedback on when Hasura is really ready to accept calls to its actions? I am currently working with version 1.2.1.
EDIT
After re-verification, waiting "long enough" does not help. What does help is calling some actions until they return a successful answer. Currently, what I am doing is:
#! /bin/sh

if [ "$#" -lt "3" ]; then
    echo "Usage: $0 <hasura-endpoint> <profile> <auth-app-id> [<timeout-in-sec> <deltat-in-sec>]"
    exit 1
fi

ENDPOINT=$1
PROFILE=$2
AUTH_APP_ID=$3
TIMEOUT=${4:-300}
DELTA_T=${5:-5}

FIXTURES_FILE=./shared/fixtures/${PROFILE}/database/Users/auth.json
username=$(jq -r '.[1].email' $FIXTURES_FILE)
password=$(jq -r '.[1].password' $FIXTURES_FILE)
user_id=$(jq -r '.[1].id' $FIXTURES_FILE)

echo "Trying to login with $username / $password / $AUTH_APP_ID"

for iteration in `seq 1 $TIMEOUT`; do
    result=$(gq $ENDPOINT -q 'mutation($username: String!, $password: String!, $appId: uuid!) { login(username: $username, password: $password, appId: $appId) { userId }}' -v "username=$username" -v "password=$password" -v "appId=$AUTH_APP_ID" | jq -r '.data.login.userId')
    if [ "$result" = "$user_id" ]; then
        exit 0
    else
        sleep $DELTA_T
    fi
done

echo "Hasura actions availability timed out" && exit 1
That performs logins with valid credentials until the action returns the right user id rather than an error. The log of this script on my CI/CD looks something like:
$ ./scripts/login_until_it_works.sh ${API_ENDPOINT}/v1/graphql $PROFILE $AUTH_ADMIN_APP_ID
Trying to login with nathalie.droz@test-vtxnet.ch / yl2YOuSrz_ / [MASKED]
Executing query... error
Error: ApolloError: GraphQL error: not a valid json response from webhook
    at new ApolloError (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:92:26)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1297:31)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3)
    at SubscriptionObserver.next (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:235:7)
    at /usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1102:36
    at Set.forEach (<anonymous>)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1101:21)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3) {
  graphQLErrors: [
    {
      extensions: [Object],
      message: 'not a valid json response from webhook'
    }
  ],
  networkError: null,
  message: 'GraphQL error: not a valid json response from webhook',
  extraInfo: undefined
}
Executing query... done
Notice that the second query, 5 seconds after the first, is successful. My action is defined as follows:
- args:
    enums: []
    input_objects: []
    objects:
      - description: null
        fields:
          - description: null
            name: token
            type: String!
          - description: null
            name: refreshToken
            type: String!
          - description: null
            name: userId
            type: uuid!
        name: LoginResponse
    scalars: []
  type: set_custom_types
- args:
    comment: null
    definition:
      arguments:
        - description: null
          name: username
          type: String!
        - description: null
          name: password
          type: String!
        - description: null
          name: appId
          type: uuid!
      forward_client_headers: false
      handler: http://gateway.openfaas:8080/function/login.{{FUNCTION_NAMESPACE}}
      headers: []
      kind: synchronous
      output_type: LoginResponse
      type: mutation
    name: login
  type: create_action
- args:
    action: login
    definition:
      select:
        filter: {}
    role: incognito
  type: create_action_permission
When you deploy via Helm, it creates the Deployments and everything else you've defined and tells you it's done. That doesn't mean that whatever you deployed is ready to serve requests: each service may have its own boot time, especially services that advertise high availability.
Kubernetes is designed to address this issue with liveness/readiness probes. Basically, in your YAML/Helm files you instruct K8s what it needs to check before it reports a pod as ready. This could be, for example, a 200 HTTP status code from a /live endpoint in your app, or whatever else suits the service.
Check this out: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
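A minimal sketch of such a readiness probe on the graphql-engine container (the surrounding container spec is illustrative; /healthz on port 8080 is Hasura's built-in health endpoint):
# Excerpt from the Deployment's pod template
containers:
  - name: hasura
    image: hasura/graphql-engine:v1.2.1
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz        # Hasura's health endpoint
        port: 8080
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 5        # re-check every 5 seconds
Note that /healthz only covers graphql-engine itself; since the actions proxy to OpenFaaS functions, an end-to-end check like the login loop in the question may still be needed.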

How to check for a certain Status Code (4xx) in Ansible?

What I want is: stop the program until status code 412 appears and then proceed.
I have the following code to check for the status code 412:
- name: Check if plan was applied
  uri:
    url: "https://website.some.url.com/api/v1/clusters/elasticsearch/{{elasticClusterDetails.elastic$
    method: GET
    user: admin
    password: "{{rootpw.stdout}}"
    force_basic_auth: yes
    validate_certs: no
  register: result
  until: result.status == 412
  retries: 20
  delay: 30
After a few retries I get
ERROR: [...] status was 412 not 200
So the 412 actually comes up, but it is not recognized as fulfilling the until condition, and the program exits.
From my understanding, the task fails as soon as the code switches from 200 to 412.
What do I need to change so that status code 412 does not produce an error?
Please NOTE: This is not a duplicate, because checking for a 4xx status code is different from checking for a 2xx.
Take a look at the docs for the uri module, and you'll see that there is a status_code attribute that can be used to specify one or more status codes that are considered "successful". So something like (assuming that you expect to receive either a 200 or 412 response):
- name: Check if plan was applied
  uri:
    url: "https://website.some.url.com/api/v1/clusters/elasticsearch/{{elasticClusterDetails.elastic$
    method: GET
    user: admin
    password: "{{rootpw.stdout}}"
    force_basic_auth: yes
    validate_certs: no
    status_code: [200, 412]
  register: result
  until: result.status == 412
  retries: 20
  delay: 30
The problem you have right now is that the 412 status code is considered a failure.
You could also set ignore_errors: true on the task, but using the status_code attribute is probably better, because it still allows the task to fail in the event you receive an unexpected status code.
NB: The docs say the value "can also be a comma-separated list of status codes", but the source looks like it expects an actual YAML list. So you may need to tweak the value depending on which form actually works.
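For example, the two candidate forms would be (which one your Ansible version accepts is the open question above):
# YAML list form:
status_code: [200, 412]
# Comma-separated string form, per the docs wording:
status_code: "200,412"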
