How do I make sure that my Hasura actions are ready to be used for my CI/CD tests? - graphql

I have started building up a backend with Hasura. That backend is validated on my CI/CD service with API tests, among other things.
Within my Hasura backend, I have implemented OpenFaaS functions. I am deploying everything on a Kubernetes cluster. Before running the tests, I wait until all jobs and all deployments are done. I am deploying with devspace, which deploys everything through Helm charts. So, at the end of the deployment, I am dead sure the deployments are all done (ultimately, I have checked directly on the k8s cluster). Even the OpenFaaS functions are deployed and ready to use.
Yet, when I run my acceptance tests, I run into issues. If I don't wait long enough, my actions do not work properly. They return strange errors, e.g. that the response returned invalid JSON:
Error: GraphQL error: not a valid json response from webhook
or that the mutation is not in the mutation root:
Error: GraphQL error: field "login" not found in type: 'mutation_root'
However, the OpenFaaS functions themselves log only success. There is no error there: they are called and apparently throw no error.
Waiting 3-5 minutes after the Hasura deployment, or calling the actions repeatedly until they return something relevant, does work, however. My current workaround is to wait an additional 5 minutes after my deployments are done and only then run my API tests.
Is that normal? Is there a more efficient way to get feedback on when Hasura really is ready to accept calls to its actions? I am currently working with version 1.2.1.
EDIT
After re-verification, waiting "long enough" does not help. What does help is calling some actions until they return a successful answer. Currently, what I am doing is:
#!/bin/sh
if [ "$#" -lt "3" ] ; then
  echo "Usage: $0 <hasura-endpoint> <profile> <auth-app-id> [<timeout-in-sec> <deltat-in-sec>]"
  exit 1
fi

ENDPOINT=$1
PROFILE=$2
AUTH_APP_ID=$3
TIMEOUT=${4:-300}
DELTA_T=${5:-5}

FIXTURES_FILE=./shared/fixtures/${PROFILE}/database/Users/auth.json
username=$(jq -r '.[1].email' "$FIXTURES_FILE")
password=$(jq -r '.[1].password' "$FIXTURES_FILE")
user_id=$(jq -r '.[1].id' "$FIXTURES_FILE")

echo "Trying to login with $username / $password / $AUTH_APP_ID"
# Retry every DELTA_T seconds until the login action answers correctly
# or TIMEOUT seconds have elapsed.
for iteration in $(seq 1 $((TIMEOUT / DELTA_T))); do
  result=$(gq "$ENDPOINT" -q 'mutation($username: String!, $password: String!, $appId: uuid!) { login(username: $username, password: $password, appId: $appId) { userId }}' -v "username=$username" -v "password=$password" -v "appId=$AUTH_APP_ID" | jq -r '.data.login.userId')
  if [ "$result" = "$user_id" ] ; then
    exit 0
  else
    sleep "$DELTA_T"
  fi
done
echo "Hasura actions availability timed out" && exit 1
That performs logins with valid credentials until the action returns the right user ID instead of an error. The log of this script on my CI/CD looks something like:
$ ./scripts/login_until_it_works.sh ${API_ENDPOINT}/v1/graphql $PROFILE $AUTH_ADMIN_APP_ID
Trying to login with nathalie.droz#test-vtxnet.ch / yl2YOuSrz_ / [MASKED]
Executing query... error
Error: ApolloError: GraphQL error: not a valid json response from webhook
    at new ApolloError (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:92:26)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1297:31)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3)
    at SubscriptionObserver.next (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:235:7)
    at /usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1102:36
    at Set.forEach (<anonymous>)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1101:21)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3) {
  graphQLErrors: [
    {
      extensions: [Object],
      message: 'not a valid json response from webhook'
    }
  ],
  networkError: null,
  message: 'GraphQL error: not a valid json response from webhook',
  extraInfo: undefined
}
Executing query... done
Notice that the second query, 5 seconds after the first, is successful. My action is defined as follows:
- args:
    enums: []
    input_objects: []
    objects:
    - description: null
      fields:
      - description: null
        name: token
        type: String!
      - description: null
        name: refreshToken
        type: String!
      - description: null
        name: userId
        type: uuid!
      name: LoginResponse
    scalars: []
  type: set_custom_types
- args:
    comment: null
    definition:
      arguments:
      - description: null
        name: username
        type: String!
      - description: null
        name: password
        type: String!
      - description: null
        name: appId
        type: uuid!
      forward_client_headers: false
      handler: http://gateway.openfaas:8080/function/login.{{FUNCTION_NAMESPACE}}
      headers: []
      kind: synchronous
      output_type: LoginResponse
      type: mutation
    name: login
  type: create_action
- args:
    action: login
    definition:
      select:
        filter: {}
    role: incognito
  type: create_action_permission

When you deploy via Helm, it creates the Deployments and everything else you've defined and tells you it's done. That doesn't mean that whatever you deployed is ready to serve requests, because each service may have its own boot time, especially services that advertise high availability.
Kubernetes addresses this issue with liveness/readiness probes. Basically, in your YAML/Helm files you tell Kubernetes what it needs to check before it reports a pod as ready. This could be, for example, a 200 HTTP status code from a /live endpoint in your app.
Check this out: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
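For illustration, here is a minimal sketch of a readiness probe for a Hasura container; the container name, image tag, port, and timings are assumptions to adapt to your chart. Hasura does expose a /healthz endpoint that can serve as the probe target:

containers:
- name: hasura                        # name and image are assumptions
  image: hasura/graphql-engine:v1.2.1
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz                  # Hasura's built-in health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5

Note that a generic health probe may still report ready before the action webhooks are resolvable, which is why polling an actual action, as the script in the question does, remains the more reliable signal for this particular case.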

Related

Golang swagger client with multiple responses

How can I generate a client from Swagger to use in a Golang project that handles different response codes? Right now I can only get one that handles the success response (200).
My swagger file contents:
post:
  summary: sending messages
  description: send messages
  operationId: SendMessage
  parameters:
  - in: body
    required: true
    name: body
    schema:
      $ref: '#/definitions/Request'
  responses:
    200:
      description: OK
      schema:
        $ref: '#/definitions/Response1'
    400:
      description: Bad request
      schema:
        $ref: '#/definitions/Response2'
generate with:
docker run --rm -it -e GOPATH=/go -v "$(pwd):/work" -w /work quay.io/goswagger/swagger:latest generate client -f "./api/swagger-sender.yaml" -A Sender -t internal/service/sender/client/gen
and all I get in the client code is just one method to send a message:
func (a *Client) SendMessage(params *SendMessageParams, opts ...ClientOption) (*SendMessageOK, error)
As you can see, only one type is returned: *SendMessageOK, and no *SendMessageBadRequest.
It looks like I can't use go-swagger to generate the desired client?
SendMessageBadRequest is an error too, and SendMessage will return it as the error.
All you need to do is check the type of the error: if it is *SendMessageBadRequest, then you've got a 400, as in the sketch below.
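A minimal sketch of that check; the generated package name ops and the payload fields are assumptions that depend on your -t target and swagger definitions:

// SendMessage returns the 200 type; non-2xx responses come back as the error.
resp, err := apiClient.SendMessage(params)
if err != nil {
    // The generated 400 response type is returned as the error value.
    if badRequest, ok := err.(*ops.SendMessageBadRequest); ok {
        fmt.Printf("got 400: %+v\n", badRequest.Payload) // Response2 payload
        return
    }
    fmt.Printf("unexpected error: %v\n", err)
    return
}
fmt.Printf("got 200: %+v\n", resp.Payload) // Response1 payload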

Send message to SQS while catching errors in Step Functions

I am using the Serverless Framework with the serverless-step-functions plugin. I want to catch any errors in my Step Functions workflow and send the error to an SQS queue.
Currently I pass all input as the message to the queue (MessageBody: $). But when I read the data from the queue, the message is literally $ (a dollar sign) and not the actual input. How can I send the error message from the previous step to the queue?
States:
  state1:
    Type: Task
    Resource:
      Fn::GetAtt: [function1, Arn]
    Next: state2
    Catch:
    - ErrorEquals: [States.ALL]
      Next: sendErrorToDLQ
      ResultPath: $.error
  state2:
    Type: Task
    Resource:
      Fn::GetAtt: [function2, Arn]
    Next: done
    Catch:
    - ErrorEquals: [ States.ALL ]
      Next: sendErrorToDLQ
      ResultPath: $.error
  sendErrorToDLQ:
    Type: Task
    Resource: arn:aws:states:::sqs:sendMessage
    Parameters:
      QueueUrl:
        Ref: ServiceDeadLetterQueue
      MessageBody: $ # <== how to pass input to sqs message
    Next: fail
  fail:
    Type: Fail
  done:
    Type: Succeed
I ran into the same thing when connecting SNS. As per the AWS docs, you have to use the structure below to send the parameters, i.e. suffix the key with .$ so the value is treated as a JSONPath rather than a literal string:
"MessageBody.$": "$"
Reference: https://docs.aws.amazon.com/step-functions/latest/dg/connect-sqs.html
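Applied to the definition in the question, only the MessageBody key of the sendErrorToDLQ task changes:

sendErrorToDLQ:
  Type: Task
  Resource: arn:aws:states:::sqs:sendMessage
  Parameters:
    QueueUrl:
      Ref: ServiceDeadLetterQueue
    MessageBody.$: "$"  # the .$ suffix makes Step Functions resolve the JSONPath
  Next: fail

Since the Catch clauses set ResultPath: $.error, the caught error is available under $.error, so you could also send just that part with MessageBody.$: "$.error".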

How do you start a workflow from another workflow and retrieve the return value of the called workflow?

I am testing Google Workflows and would like to call a workflow from another workflow, but as a separate process (not a subworkflow).
I am able to start the execution, but I am currently unable to retrieve the return value. Instead, I receive an instance of the execution:
{
  "argument": "null",
  "name": "projects/xxxxxxxxxxxx/locations/us-central1/workflows/child-workflow/executions/9fb4aa01-2585-42e7-a79f-cfb4b57b22d4",
  "startTime": "2020-12-09T01:38:07.073406981Z",
  "state": "ACTIVE",
  "workflowRevisionId": "000003-cf3"
}
parent-workflow.yaml
main:
  params: [args]
  steps:
  - callChild:
      call: http.post
      args:
        url: 'https://workflowexecutions.googleapis.com/v1beta/projects/my-project/locations/us-central1/workflows/child-workflow/executions'
        auth:
          type: OAuth2
          scope: 'https://www.googleapis.com/auth/cloud-platform'
      result: callresult
  - returnValue:
      return: ${callresult.body}
child-workflow.yaml:
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: CurrentDateTime
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${CurrentDateTime.body.dayOfTheWeek}
    result: WikiResult
- returnOutput:
    return: ${WikiResult.body[1]}
Also, as an added question: how can I create a dynamic URL from a variable? ${} doesn't seem to work there.
As executions are asynchronous API calls, you need to poll the execution to see when it has finished. You can use the following algorithm:
main:
  steps:
  - callChild:
      call: http.post
      args:
        url: ${"https://workflowexecutions.googleapis.com/v1beta/projects/"+sys.get_env("GOOGLE_CLOUD_PROJECT_ID")+"/locations/us-central1/workflows/http_bitly_secrets/executions"}
        auth:
          type: OAuth2
          scope: 'https://www.googleapis.com/auth/cloud-platform'
      result: workflow
  - waitExecution:
      call: CloudWorkflowsWaitExecution
      args:
        execution: ${workflow.body.name}
      result: workflow
  - returnValue:
      return: ${workflow}

CloudWorkflowsWaitExecution:
  params: [execution]
  steps:
  - init:
      assign:
      - i: 0
      - valid_states: ["ACTIVE","STATE_UNSPECIFIED"]
      - result:
          state: ACTIVE
  - check_condition:
      switch:
      - condition: ${result.state in valid_states AND i<100}
        next: iterate
      next: exit_loop
  - iterate:
      steps:
      - sleep:
          call: sys.sleep
          args:
            seconds: 10
      - process_item:
          call: http.get
          args:
            url: ${"https://workflowexecutions.googleapis.com/v1beta/"+execution}
            auth:
              type: OAuth2
          result: result
      - assign_loop:
          assign:
          - i: ${i+1}
          - result: ${result.body}
          next: check_condition
  - exit_loop:
      return: ${result}
What you see here is a CloudWorkflowsWaitExecution subworkflow that loops at most 100 times with a 10-second delay between iterations; it stops when the workflow has finished and returns the result.
The output is:
argument: 'null'
endTime: '2020-12-09T13:00:11.099830035Z'
name: projects/985596417983/locations/us-central1/workflows/call_another_workflow/executions/05eeefb5-60bb-4b20-84bd-29f6338fa66b
result: '{"argument":"null","endTime":"2020-12-09T13:00:00.976951808Z","name":"projects/985596417983/locations/us-central1/workflows/http_bitly_secrets/executions/2f4b749c-4283-4c6b-b5c6-e04bbcd57230","result":"{\"archived\":false,\"created_at\":\"2020-10-17T11:12:31+0000\",\"custom_bitlinks\":[],\"deeplinks\":[],\"id\":\"j.mp/2SZaSQK\",\"link\":\"//<edited>/2SZaSQK\",\"long_url\":\"https://cloud.google.com/blog\",\"references\":{\"group\":\"https://api-ssl.bitly.com/v4/groups/Bg7eeADYBa9\"},\"tags\":[]}","startTime":"2020-12-09T13:00:00.577579042Z","state":"SUCCEEDED","workflowRevisionId":"000001-478"}'
startTime: '2020-12-09T13:00:00.353800247Z'
state: SUCCEEDED
workflowRevisionId: 000012-cb8
In the result there is a subkey that holds the result of the external workflow execution.
The best method is now the workflows.executions.run helper method, which formats the request and blocks until the workflow execution has completed:
- run_execution:
    try:
      call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
      args:
        workflow_id: ${workflow}
        location: ${location}   # Defaults to current location
        project_id: ${project}  # Defaults to current project
        argument: ${arguments}  # Arguments could be specified inline as a map instead.
      result: r1
    except:
      as: e
      steps: ... # handle a failed execution

Listening to a remote AWS SQS queue from local using Serverless

I want to execute a Lambda function locally on an SQS event from a queue in my AWS account. I have defined the required event, but it is not getting triggered. How can this be achieved?
I am able to send messages to the same queue using a cron event from my local machine.
Here are a few things I tried, but they didn't work for me:
functions:
  account-data-delta-test:
    handler: functions/test/data/dataDeltatestGenerator.handler
    name: ${self:provider.stage}-account-data-delta-test
    description: account delta update - ${self:provider.stage}-account-data-delta-test
    tags:
      Name: ${self:provider.stage}-account-data-delta-test
    # keeping 5 minute function timeout just in case large volume of data.
    timeout: 300
    events:
    - sqs:
        arn:
          Fn::GetAtt: [ testGenerationQueue, Arn ]
        batchSize: 10
----------
Policies:
- PolicyName: ${self:provider.stage}-test-sqs-policy
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
    - Effect: Allow
      Action:
      - sqs:ReceiveMessage
      - sqs:DeleteMessage
      - sqs:GetQueueAttributes
      - sqs:ChangeMessageVisibility
      - sqs:SendMessage
      - sqs:GetQueueUrl
      - sqs:ListQueues
      Resource: "*"
---------------
---
Resources:
  testGenerationQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: ${self:provider.stage}-account-test-queue
      VisibilityTimeout: 60
      Tags:
      - Key: Name
        Value: ${self:provider.stage}-account-test-queue
-------------
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({
  region: process.env.REGION,
});

exports.handler = async (event) => {
  console.error('------------ >>>>CRON:START: Test delta Job run.', event);
  log.error('------------ >>>>CRON:START: Test delta Job run.', event);
};
You can't trigger your local Lambda function from your remote context, because they have nothing in common.
I suppose your goal is to test the logic of the Lambda function; if so, you have two options.
Option 1
A faster way could be to invoke the function locally using sam local invoke. You can pass this command some arguments; one of them is the event source, i.e. the event information that SQS will send to the Lambda as soon as it is triggered:
sam local invoke -e sqs.input.json account-data-delta-test
and your sqs.input.json would look like this (generate it using sam local generate-event sqs receive-message),
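roughly in this shape; all IDs, ARNs and values below are placeholder data:

{
  "Records": [
    {
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "receiptHandle": "MessageReceiptHandle",
      "body": "Hello from SQS!",
      "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000",
        "SenderId": "123456789012",
        "ApproximateFirstReceiveTimestamp": "1523232000001"
      },
      "messageAttributes": {},
      "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
      "awsRegion": "us-east-1"
    }
  ]
}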
This way you will actually test your Lambda locally.
Pros: it is fast.
Cons: you still have to test the trigger when you deploy on AWS.
Option 2
In the second scenario you sacrifice the binding between the queue and the Lambda. You trigger your function at a fixed interval and explicitly call ReceiveMessage in your code, as in the sketch after this list.
Pro: you can read a real message from a real queue.
Con: you have to invoke the function at regular intervals, which is not handy.
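A minimal sketch of this second option, in the same AWS SDK v2 style as the snippet in the question; the QUEUE_URL environment variable is an assumption:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: process.env.REGION });

exports.handler = async () => {
  // Explicitly pull messages instead of relying on an SQS trigger.
  const { Messages } = await sqs.receiveMessage({
    QueueUrl: process.env.QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 5, // long polling
  }).promise();

  for (const message of Messages || []) {
    console.log('processing', message.Body);
    // Delete each message after successful processing so it is not redelivered.
    await sqs.deleteMessage({
      QueueUrl: process.env.QUEUE_URL,
      ReceiptHandle: message.ReceiptHandle,
    }).promise();
  }
};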

How do I know the service status in Ansible?

In my Ansible code I want to know the status of a service, like service httpd status (whether the service is running or not), and store the result in a variable so I can use it in other parts of my playbook.
I am using the Ansible service module, but it has no option for status. If I use the shell module instead, I get this warning:
[WARNING]: Consider using service module rather than running service
So is there any other module to get the service status?
No, there is no standard module to get services' statuses.
But you can suppress the warning for a specific command task if you know what you are doing:
- command: service httpd status
  args:
    warn: false
I posted a quick note about this trick a while ago.
You can use the service_facts module.
For example, say I want to see the status of Apache.
- name: Check for apache status
  service_facts:

- debug:
    var: ansible_facts.services.apache2.state
The output is:
ok: [192.168.blah.blah] => {
    "ansible_facts.services.apache2.state": "running"
}
If you would like to see all of them, you can do that by just going two levels up in the array:
var: ansible_facts.services
The output will list all the services, and will look like this (truncated for the sake of brevity):
"apache2": {
"name": "apache2",
"source": "sysv",
"state": "running"
},
"apache2.service": {
"name": "apache2.service",
"source": "systemd",
"state": "running"
},
"apparmor": {
"name": "apparmor",
"source": "sysv",
"state": "running"
},
etc,
etc
I am using Ansible 2.7. See the documentation for the service_facts module for details.
Here is an example of starting a service and then checking its status using service_facts. In my example you have to register the variable, then output it using debug var, pointing at the correct path in the resulting JSON:
## perform start service for alertmanager
- name: Start service alertmanager if not started
  become: yes
  service:
    name: alertmanager
    state: started

## check to see the state of the alertmanager service status
- name: Check status of alertmanager service
  service_facts:
  register: service_state

- debug:
    var: service_state.ansible_facts.services["alertmanager.service"].state
Hopefully "service: allow user to query service status" (#3316) will be merged into the core module soon.
You can patch it by hand using this diff to system/service.py
Here's my diff using ansible 2.2.0.0. I've run this on my mac/homebrew install and it works for me.
This is the file that I edited: /usr/local/Cellar/ansible/2.2.0.0_2/libexec/lib/python2.7/site-packages/ansible/modules/core/system/service.py
@@ -36,11 +36,12 @@
         - Name of the service.
     state:
         required: false
-        choices: [ started, stopped, restarted, reloaded ]
+        choices: [ started, stopped, status, restarted, reloaded ]
         description:
         - C(started)/C(stopped) are idempotent actions that will not run
-          commands unless necessary. C(restarted) will always bounce the
-          service. C(reloaded) will always reload. B(At least one of state
+          commands unless necessary. C(status) would report the status of
+          the service C(restarted) will always bounce the service.
+          C(reloaded) will always reload. B(At least one of state
           and enabled are required.)
     sleep:
         required: false
@@ -1455,7 +1456,7 @@
     module = AnsibleModule(
         argument_spec = dict(
             name = dict(required=True),
-            state = dict(choices=['running', 'started', 'stopped', 'restarted', 'reloaded']),
+            state = dict(choices=['running', 'started', 'stopped', 'status', 'restarted', 'reloaded']),
             sleep = dict(required=False, type='int', default=None),
             pattern = dict(required=False, default=None),
             enabled = dict(type='bool'),
@@ -1501,6 +1502,9 @@
     else:
         service.get_service_status()

+    if module.params['state'] == 'status':
+        module.exit_json(state=service.running)
+
     # Calculate if request will change service state
     service.check_service_changed()
