Cloud function e2e - go

How do I write e2e or integration tests for a cloud function? So far I've been able to use a bash automation script, but once the function is deployed I cannot easily detect the outcome:
gcloud functions deploy MyFunction --entry-point MyFunction --runtime go111 --trigger-http

Bash is a good starting point, but how about using a dedicated e2e testing tool? For instance, with the endly workflow e2e runner your deployment workflow may look like the following:
pipeline:
  deploy:
    action: exec:run
    comments: deploy HelloWorld triggered by http
    target: $target
    sleepTimeMs: 1500
    terminators:
      - Do you want to continue
    errors:
      - ERROR
    env:
      GOOGLE_APPLICATION_CREDENTIALS: ${env.HOME}/.secret/${gcSecrets}.json
    commands:
      - cd $appPath
      - export PATH=$PATH:${env.HOME}/google-cloud-sdk/bin/
      - gcloud config set project $projectID
      - ${cmd[4].stdout}:/Do you want to continue/ ? Y
      - gcloud functions deploy HelloWorld --entry-point HelloWorld --runtime go111 --trigger-http
    extract:
      - key: triggerURL
        regExpr: (?sm).+httpsTrigger:[^u]+url:[\s\t]+([^\r\n]+)
  validateTriggerURL:
    action: validator:assert
    actual: ${deploy.Data.triggerURL}
    expected: /HelloWorld/
post:
  triggerURL: ${deploy.Data.triggerURL}
You can also achieve the same using cloudfunctions service API calls:
defaults:
  credentials: $gcSecrets
pipeline:
  deploy:
    action: gcp/cloudfunctions:deploy
    '#name': HelloWorld
    entryPoint: HelloWorldFn
    runtime: go111
    source:
      URL: ${appPath}/hello/
Finally, you can look into practical serverless e2e testing examples (Cloud Functions, Lambda, Firebase, Firestore, DynamoDB, Pub/Sub, SQS, SNS, BigQuery, etc.):
serverless_e2e
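Whichever tool drives the deployment, the final e2e check can be an ordinary Go test that calls the deployed endpoint. A minimal sketch, assuming the captured trigger URL is exported as a TRIGGER_URL environment variable and that the function replies with a greeting containing "Hello" (both of these are assumptions, not part of the setup above):

package e2e

import (
	"io/ioutil"
	"net/http"
	"os"
	"strings"
	"testing"
)

// TestHelloWorldDeployed hits the deployed function over HTTP and
// verifies the response, using the trigger URL captured at deploy time.
func TestHelloWorldDeployed(t *testing.T) {
	url := os.Getenv("TRIGGER_URL") // assumed to be exported by the deploy step
	if url == "" {
		t.Skip("TRIGGER_URL not set; deploy the function first")
	}
	resp, err := http.Get(url)
	if err != nil {
		t.Fatalf("calling %s: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status: %s", resp.Status)
	}
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		t.Fatal(err)
	}
	if !strings.Contains(string(body), "Hello") { // assumed response payload
		t.Errorf("unexpected response body: %q", body)
	}
}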

Related

Google cloud build with pack and secrets manager not accessing environment variables

I'm using a standard gcr.io/k8s-skaffold/pack build step to build my app for Google Cloud Run using Google Cloud Build.
In my cloudbuild.yaml I load 2 secrets from Google Secret Manager and pass them to the build step. Cloud Build has access to those secrets; otherwise I would get an error message (I did get that kind of error when first setting up the build, and now it seems to have access).
However, it seems like the environment variables don't get set.
I think it might be a syntactical problem with how I try to pass the variables.
This is the stripped-down cloudbuild.yaml:
steps:
  - name: gcr.io/k8s-skaffold/pack
    args:
      - build
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '--builder=gcr.io/buildpacks/builder:v1'
      - '--network=cloudbuild'
      - '--path=.'
      - '--env=SEC_A=$$SEC_A'
      - '--env=SEC_B=$$SEC_B'
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack
    entrypoint: pack
availableSecrets:
  secretManager:
    - versionName: projects/<pid>/secrets/SEC_A/versions/latest
      env: SEC_A
    - versionName: projects/<pid>/secrets/SEC_B/versions/latest
      env: SEC_B
An error message that I hacked into the build for debugging shows me that the env var is empty during this build step.
I tried using $, $$ (as seen above), &&, and ${...} for substitution, but maybe the problem lies somewhere else.
Yes, it's a common issue and a trap on Cloud Build: your secrets can't be read if you use the args[] array to pass arguments, because the secretEnv values are only injected as environment variables and nothing expands them in a plain argument list. You have to use script mode, so that bash performs the expansion, like this:
steps:
  - name: gcr.io/k8s-skaffold/pack
    entrypoint: bash
    args:
      - -c
      - |
        pack build $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA --builder=gcr.io/buildpacks/builder:v1 --network=cloudbuild --path=. --env=SEC_A=$$SEC_A --env=SEC_B=$$SEC_B
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack

Percy not running in CircleCI orbs (w/ Cypress)

I'm trying to get Percy.io to take snapshots of a simple test written in Cypress, building in CircleCI. However, the 'builds' are showing up as failed in the Percy dashboard despite the test/build passing in CircleCI. In the Cypress test runner it shows 'Percy not running' where my snapshots are placed.
I've followed the tutorials on the Percy and Cypress sites. I can get Percy to work locally by running percy exec -- cypress run,
but the CircleCI config doesn't run Cypress via the command cypress run; it runs it via the Cypress orb.
It seems like the two orbs, Cypress and Percy, don't know the other exists.
Here's my CircleCI config file:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '@Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '@Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests
The Run Cypress Tests step doesn't make any mention of Percy, so I'm assuming it simply isn't running; despite using the Percy orb, there's some sort of config I'm missing?
Apologies, I keep finding answers to my questions after posting to Stack Overflow! I obviously don't know the properties of cypress/run well enough. Essentially, there's a command-prefix property that can be added to amend the command used to run Cypress. In fact, Percy is the example used in the Cypress docs.
Config now looks like:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          command-prefix: npx percy exec --
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '@Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '@Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests
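One related note: percy exec (whether run locally or via the orb's command prefix) authenticates through the PERCY_TOKEN environment variable, so that token presumably also needs to be set in the CircleCI project settings. Locally it would look like:

export PERCY_TOKEN=<your-project-token>   # token from the Percy project settings
npx percy exec -- cypress run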

Unable to deploy pre built image in app engine standard environment (GCP)

My Spring Boot application was building fine in Cloud Build and deploying without any issue until September. Now my trigger fails at gcloud app deploy:
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
  - url: /.*
    script: this field is required, but ignored
cloudbuild.yaml
steps:
  # backend deployment
  # Step 1:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["test"]
  # Step 2:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["clean", "install", "-Dmaven.test.skip=true"]
  # Step 3:
  - name: docker
    dir: 'service'
    args: ["build", "-t", "gcr.io/service-base/base", "."]
  # Step 4:
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/service-base/base"]
  # Step 5:
  - name: 'gcr.io/cloud-builders/gcloud'
    dir: 'service/src/main/appengine'
    args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
    timeout: "30m0s"
  # Step 6:
  # dispatch.yaml deployment
  - name: "gcr.io/cloud-builders/gcloud"
    dir: 'service/src/main/appengine'
    args: ["app", "deploy", "dispatch.yaml"]
    timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Cloud build error
Thanks in advance. I'm confused as to how my build was working fine before and what I'm doing wrong now.
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; a Buildpack is then used to create a standard container on Google's side (for information, a new Cloud Build job is run for this), which is deployed to App Engine.
I recommend you have a look at Cloud Run to run your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
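For illustration, a deploy step could then reuse the image pushed in Step 4; this is a sketch only, with the service name and region as placeholder values:

# Step 5 (alternative): deploy the custom container to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'service-base',
         '--image=gcr.io/service-base/base',
         '--region=us-central1',
         '--platform=managed']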
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error because the system numbers steps from 0.
The error message is accurate: App Engine standard differs from App Engine flexible in that only the latter (flexible) permits container image deployments. App Engine standard deploys from sources.
See Google's example.
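Staying on App Engine standard, the deploy step would therefore drop --image-url and deploy from sources; a sketch, assuming the app.yaml and the deployable artifact are staged together in the same directory as before:

# Step 5 (standard environment): deploy from sources, no pre-built image
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service/src/main/appengine'
  args: ['app', 'deploy']
  timeout: "30m0s"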
It's possible that something has changed on Google's side that's causing the issue, but the env: standard in your app.yaml suggests the build file has changed.

how to run pipeline only on HEAD commit in GitlabCi runner?

We have a CI pipeline on our repository, hosted in GitLab, and we set up gitlab-runner on our local machine. The pipeline runs 4 stages:
build
unit tests
integration tests
quality tests
The whole pipeline takes almost 20 minutes, and it triggers on each push to a branch.
Is there a way to configure gitlab-runner so that if the HEAD of the branch it is currently running on changes, the run is auto-cancelled? Because the latest version is what matters.
For example, in this run the lower run is unnecessary.
gitlab-ci.yml:
stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests

build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile

unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

unit_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}
Edit after @siloko's comment:
I already tried the "Auto-cancel redundant, pending pipelines" option in the settings menu.
I want to cancel running pipelines, not pending ones.
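For reference, the auto-cancel option only stops jobs that haven't started yet unless a job is explicitly marked interruptible; a sketch of what that could look like for one of the jobs above (assuming a GitLab version that supports the interruptible keyword):

unit_tests:
  interruptible: true   # allow auto-cancel to stop this job even while it is running
  stage: unit_tests
  script:
    - mvn test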
After further investigation, I found that I had 2 active runners on one of my machines:
one shared runner and another specific runner. If I push 2 commits one after another to the same branch, both runners take the jobs and execute them.
That also explains why the
"Auto-cancel redundant, pending pipelines"
option didn't work: it works only when the same runner has pending jobs.
The action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.
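For the record, a sketch of that cleanup with the runner CLI (the runner name is a placeholder):

gitlab-runner unregister --name my-specific-runner   # remove the duplicate runner
gitlab-runner list                                   # verify only the shared runner remains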

Cloud formation lambda not picking jar from code build

I tried to use CodePipeline to automate the code deployment. It uses GitHub -> CodeBuild -> CloudFormation, as mentioned in the wiki:
AWS Automation of Lambda
I managed to get the pipeline running after a few changes suggested by this thread.
However, whenever I use the pipeline, the Lambda test fails, saying the class is not found.
To verify, I uploaded the jar directly in the AWS Lambda console and it worked fine.
I also checked the artifact built by CodeBuild in the S3 "MyAppBuild" folder: it is a zip containing the jar at target/app-1.0-SNAPSHOT.jar along with my SamTemplate.yml.
This is the SamTemplate.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Outputs the time
Parameters:
  SourceBucket:
    Type: String
    Description: S3 bucket name for the CodeBuild artifact
  SourceArtifact:
    Type: String
    Description: S3 object key for the CodeBuild artifact
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.xxx.Hello::handleRequest
      Runtime: java8
      CodeUri:
        Bucket: !Ref SourceBucket
        Key: !Ref SourceArtifact
      Events:
        MyTimeApi:
          Type: Api
          Properties:
            Path: /TimeResource
            Method: GET
Here is the buildSpec.yaml
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
  install:
    commands:
      - aws cloudformation package --template-file SamTemplate.yaml --s3-bucket codepipeline-us-east-1-xxxx
        --output-template-file NewSamTemplate.yaml
artifacts:
  type: zip
  files:
    - SamTemplate.yaml
    - target/app-1.0-SNAPSHOT.jar
Any suggestions to try? I use Maven.
Finally, after a few tries I found a probable solution for the packaging with AWS CodeBuild, CloudFormation, and Lambda.
The whole point is that CodeBuild creates a wrapper zip of all files mentioned under artifacts:,
and this is the same zip file that is given to AWS Lambda.
For AWS Lambda to accept the zip as valid, the classes should be at the root of the zip and dependent libs should be in a libs folder.
So I ended up with this build spec:
version: 0.2
phases:
  install:
    commands:
      - aws cloudformation package --template-file SamTemplate.yaml --s3-bucket codepipeline-us-east-1-XXXXXXXX
        --output-template-file NewSamTemplate.yaml
  build:
    commands:
      - echo Build started on `date`
      - gradle build clean
      - gradle test
  post_build:
    commands:
      - echo Build started on `date`
      - gradle build
      - mkdir -p deploy
      - cp -r build/classes/main/* deploy/
      - cp NewSamTemplate.yaml deploy/
      - cp -r build/libs deploy/
      - ls -ltr deploy
      - ls -ltr build
      - echo Build completed on `date`
      - echo Build is complete
artifacts:
  type: zip
  files:
    - '**/*'
  base-directory: 'deploy'
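Given the cp commands above, the deploy/ directory (and hence the zip handed to Lambda) ends up looking roughly like this; the class path is illustrative, derived from the Handler in the template:

deploy/
  com/xxx/Hello.class      <- compiled classes at the zip root
  libs/                    <- dependency jars
  NewSamTemplate.yaml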
