I tried to use CodePipeline to automate the code deployment. It uses GitHub -> CodeBuild -> CloudFormation, as described in the wiki
AWS Automation of Lambda
I managed to get the pipeline to run after a few changes suggested by this thread.
However, whenever I use the pipeline, the Lambda test fails saying the class is not found.
To verify, I uploaded the jar directly in the AWS Lambda console and it worked fine.
I also checked the jar built by AWS CodeBuild in the S3 "MyAppBuild" folder: the zip contains the jar at target/app-1.0-SNAPSHOT.jar along with my SamTemplate.yml.
This is the SamTemplate.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Outputs the time
Parameters:
  SourceBucket:
    Type: String
    Description: S3 bucket name for the CodeBuild artifact
  SourceArtifact:
    Type: String
    Description: S3 object key for the CodeBuild artifact
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.xxx.Hello::handleRequest
      Runtime: java8
      CodeUri:
        Bucket: !Ref SourceBucket
        Key: !Ref SourceArtifact
      Events:
        MyTimeApi:
          Type: Api
          Properties:
            Path: /TimeResource
            Method: GET
Here is the buildSpec.yaml
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
  install:
    commands:
      - aws cloudformation package --template-file SamTemplate.yaml --s3-bucket codepipeline-us-east-1-xxxx --output-template-file NewSamTemplate.yaml
artifacts:
  type: zip
  files:
    - SamTemplate.yaml
    - target/app-1.0-SNAPSHOT.jar
Any suggestions on what to try? I use Maven.
Finally, after a few tries, I found a solution for the packaging with AWS CodeBuild, CloudFormation, and Lambda.
The whole point is that CodeBuild creates a wrapper zip of all the files listed under artifacts:.
This is the same zip file that is given to AWS Lambda.
For AWS Lambda to accept the zip as valid, the compiled classes must be at the root of the zip and the dependent libs must be in a libs folder.
I managed to do this with the following buildspec:
version: 0.2
phases:
  install:
    commands:
      - aws cloudformation package --template-file SamTemplate.yaml --s3-bucket codepipeline-us-east-1-XXXXXXXX --output-template-file NewSamTemplate.yaml
  build:
    commands:
      - echo Build started on `date`
      - gradle build clean
      - gradle test
  post_build:
    commands:
      - echo Build started on `date`
      - gradle build
      - mkdir -p deploy
      - cp -r build/classes/main/* deploy/
      - cp NewSamTemplate.yaml deploy/
      - cp -r build/libs deploy/
      - ls -ltr deploy
      - ls -ltr build
      - echo Build completed on `date`
      - echo Build is complete
artifacts:
  type: zip
  files:
    - '**/*'
  base-directory: 'deploy'
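For reference, after the post_build commands above the deploy/ directory (and therefore the zip CodeBuild hands over) should look roughly like this; the class path matches the handler from the question and the jar name is purely illustrative:

deploy/
  NewSamTemplate.yaml
  com/xxx/Hello.class        <- compiled classes at the root of the zip
  libs/
    some-dependency.jar      <- jars copied from build/libs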
Related
I'm working on a Kotlin Multiplatform project which builds fine locally, but I can't get it to work on an Azure DevOps pipeline.
Some good things to know:
- not using CocoaPods
- using the embedAndSignAppleFrameworkForXcode gradlew task in Build Phases
- all commands using fastlane work for multiple developers locally
- we use custom configurations like ProjectADebug/ProjectARelease, but we defined KOTLIN_FRAMEWORK_BUILD_TYPE for all of them
I'm trying to get an Azure DevOps pipeline to build and upload to App Store Connect using fastlane. We use match for signing, and that works great. Archiving fails, and it looks like it fails while building the shared KMM framework.
Is anybody running into the same problems who could help me out? Or does anyone have tips on how I can view those gym logs on the Azure VM? I assume the full log says what actually went wrong instead of this general error:
▸ Running script 'Build Kotlin Common'
▸ Copying /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/BuildProductsPath/ProjectRelease-iphoneos/Airship_AirshipCore.bundle
▸ Copying /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/BuildProductsPath/Project Release-iphoneos/Airship_AirshipAutomation.bundle
** ARCHIVE FAILED **
The following build commands failed:
PhaseScriptExecution Build\ Kotlin\ Common /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/IntermediateBuildFilesPath/Project.build/ProjectRelease-iphoneos/Project.build/Script-2F4970EC27CD16A000E32F91.sh (in target 'Project' from project 'Project')
(1 failure)
ERROR [2022-05-10 13:04:32.36]: Exit status: 65
ERROR [2022-05-10 13:04:32.53]: ⬆️ Check out the few lines of raw `xcodebuild` output above for potential hints on how to solve this error
WARN [2022-05-10 13:04:32.53]: 📋 For the complete and more detailed error log, check the full log at:
WARN [2022-05-10 13:04:32.53]: 📋 /Users/runner/Library/Logs/gym/Project-Project.log
This is the lane in the Fastfile:
lane :azure_beta do |options|
  label = options[:label].capitalize
  git_url = "someURL"

  match(
    type: "appstore",
    readonly: true,
    git_url: git_url,
    keychain_name: ENV["MATCH_KEYCHAIN_NAME"],
    keychain_password: ENV["MATCH_KEYCHAIN_PASSWORD"],
    verbose: true
  )

  build_app(
    project: "../Project/Project.xcodeproj",
    configuration: "#{label}Release",
    scheme: label
  )
  # fails on the build_app step...

  changelog = changelog_from_git_commits(
    pretty: "- (%ae) %s",
    date_format: "short",
    merge_commit_filtering: "exclude_merges"
  )

  upload_to_testflight(
    changelog: changelog,
    app_identifier: label == "Project" ? idsProjectA : idsProjectB,
    skip_waiting_for_build_processing: true
  )

  version_number = get_version_number(
    xcodeproj: "../Project/Project.xcodeproj",
    target: "Project", # Hardcoded because we have multiple targets; the label is specified in the build_app configuration
    configuration: "#{label}Release"
  )

  add_git_tag(
    includes_lane: false,
    prefix: "ios-#{label.downcase}-#{version_number}-",
    build_number: number_of_commits
  )

  delete_keychain(name: ENV["MATCH_KEYCHAIN_NAME"])
end
And this is my pipeline YAML:
pool:
  vmImage: 'macos-latest'

variables:
  - group: fastlane

jobs:
  - job: testflight
    steps:
      - task: Bash@3
        displayName: fastlane update
        inputs:
          targetType: 'inline'
          script: |
            gem update fastlane
            fastlane --version
      - task: JavaToolInstaller@0
        inputs:
          versionSpec: '11'
          jdkArchitectureOption: 'x64'
          jdkSourceOption: 'PreInstalled'
      - task: Bash@3
        displayName: 'Update Dependencies'
        inputs:
          targetType: 'inline'
          script: HOMEBREW_NO_AUTO_UPDATE=1 brew bundle
      - task: Bash@3
        displayName: "Set build properties"
        inputs:
          targetType: 'inline'
          script: |
            echo "sdk.dir=/Users/runner/Library/Android/sdk"
            echo "INCLUDE_MOCKER=false" >> local.properties
            echo "INCLUDE_ANDROID=false" >> local.properties
            echo "INCLUDE_TESTER=false" >> local.properties
            echo "APP_LABEL=$(APP_LABEL)" >> local.properties
        env:
          APP_LABEL: $(APP_LABEL)
      - task: Gradle@2
        displayName: 'Clean label common'
        inputs:
          workingDirectory: ''
          tasks: "common:cleanLabel"
        env:
          APP_LABEL: $(APP_LABEL)
      - task: Bash@3
        displayName: fastlane ios
        env:
          MATCH_PASSWORD: $(MATCH_PASSWORD)
          FASTLANE_PASSWORD: $(FASTLANE_PASSWORD)
          FASTLANE_SESSION: $(FASTLANE_SESSION)
          FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD: $(FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD)
        inputs:
          targetType: 'inline'
          script: |
            sudo xcode-select -s /Applications/Xcode_13.2.app
            cd ios/Project
            fastlane azure_beta label:Project app_identifier:project.bundle.id itc_team_id:itc.team.id team_id:team.id git_match_branch:master username:me@myself.com
As it turned out, there was an error in building the common KMM layer. I would probably have found it with a clean checkout, but I found it by using a self-hosted agent on Azure DevOps so I could navigate to /Users/runner/Library/Logs/gym/Project-Project.log, as Pylyp Dukhov suggested.
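If you want to stay on the Microsoft-hosted agents instead, one option (an untested sketch; the artifact name is arbitrary) is to publish the gym log directory as a pipeline artifact so it can be downloaded from the run even when the build fails, for example by appending a step like this to the job:

      - task: PublishPipelineArtifact@1
        displayName: 'Publish fastlane gym logs'
        condition: succeededOrFailed()   # run even if the fastlane step failed
        inputs:
          targetPath: '/Users/runner/Library/Logs/gym'
          artifact: 'gym-logs'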
Summary
My gitlab-ci.yml has 3 stages to deploy an application to an OKD pod.
The application runs Spring Boot on tomcat:8.
Sometimes the cache.zip is not updated after a stage completes, so the next stage can't run correctly.
Steps to reproduce
My gitlab-ci runs the following stages:
Stage 1: run test compile ---> OK
Stage 2: package the war file as output for deployment ---> the GitLab CI log shows success, but cache.zip has no war file (only sometimes; at other times it runs correctly)
Stage 3: deploy the war file to the pod ---> because the war file does not exist in cache.zip, the script errors out -> failed
.gitlab-ci.yml
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository
    - target
    - artifact

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
    - 'mvn package' # =====> this command generates the war file
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  script:
    - "mkdir -p artifact"
    - "cp ./target/*.war ./artifact/" # ======> Sometimes fails at this line because the previous stage did not add the war file to the cache
    - "oc start-build $APP"
    - "rm -rf ./target/* && rm -rf ./artifact/*" # Remove war & class files, only cache m2 libs
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Actual behavior
Sometimes the cache does not contain the war file after the test stage completes (does this depend on the war file size?)
Expected behavior
The war file is added to the cache after the test stage so the staging stage can deploy it
Relevant logs and/or screenshots
ScreenShot
job log
Running with gitlab-runner 13.7.0 (943fc252)
on gitlab-runner-node1 y6awygsj
Preparing the "docker" executor
00:01
Using Docker executor with image openshift/origin-cli ...
Using locally found image version due to if-not-present pull policy
Using docker image sha256:7ebb6be01117a50344d63f77c385a13302afecd33480b97c36a518d4f5ebc25a for openshift/origin-cli with digest docker.io/openshift/origin-cli@sha256:509e052d0f2d531b666b7da9fa49c5558c76ce5d286456f0859c0a49b16d6bf2 ...
Preparing environment
00:00
Running on runner-y6awygsj-project-489-concurrent-0 via gitlab.runner.node1...
Getting source from Git repository
00:01
Fetching changes...
Reinitialized existing Git repository in /builds/my-project/.git/
Checking out b4c97428 as master...
Removing .m2/
Removing artifact/
Removing target/
Skipping Git submodules setup
Restoring cache
00:05
Checking cache for default-23...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:01
$ mkdir -p artifact
$ cp ./target/*.war ./artifact/
cp: cannot stat './target/*.war': No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Environment description
config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab-runner-node1"
  url = "https://gitlab.mycompany.vn/"
  token = "y6awygsj9zks18nU6PDt"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    dns = ["192.168.100.1"]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/nfs/nfsshare-gitlab/cache:/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
Used GitLab Runner version
Version: 13.7.0
Git revision: 943fc252
Git branch: 13-7-stable
GO version: go1.13.8
Built: 2020-12-21T13:47:06+0000
OS/Arch: linux/amd64
Possible fixes
Re-run the test stage until the cache contains the war file
Let's go step by step.
First, regarding how to manage the files between stages.
It's true that you could directly access the files between jobs and stages if both run in the same environment, but that's not always the case (even if both runners use the same NFS share directory), and you should use artifacts for that.
When you define an artifact within a job, you specify a list of files that are attached to the job when it succeeds, fails, or always, depending on your configuration.
By default, all artifacts from previous stages are passed to each job, but in any case you can use dependencies to define which jobs to fetch artifacts from.
So basically you should use the following .gitlab-ci.yml:
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify' # =====> verify already includes: validate, compile, test and package
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  dependencies:
    - verify:jdk8
  script:
    - "mkdir -p artifact"
    - "cp ./target/[YOUR_APP_NAME].war ./artifact/"
    - "oc start-build $APP"
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Also, notice that I deleted the mvn package instruction. I recommend taking a look at the Build Lifecycle Basics section of the Maven documentation.
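If you want to double-check that locally, the Maven Help plugin can describe what a lifecycle phase entails; a quick sanity check (run outside the pipeline, simplified phase order shown in the comment) could be:

# Default lifecycle (simplified): validate -> compile -> test -> package -> verify,
# so `mvn verify` already produces target/[YOUR_APP_NAME].war
mvn help:describe -Dcmd=verify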
I am using AWS CodePipeline for the first time and am trying to figure out how to properly create the buildspec.yml file for my Laravel application. There are few resources on the internet.
I currently have the following in my buildspec.yml file:
version: 0.2
phases:
  install:
    commands:
      - curl -s https://getcomposer.org/installer | php
      - mv composer.phar /usr/local/bin/composer
      - php --version
  build:
    commands:
      - echo Build started on `date`
      - echo Installing composer deps
      - composer install
      - cp extra/.env ./
      - php artisan cache:clear
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  type: zip
  files:
    - '**/*'
  name: clyde-$(date +%Y-%m-%d)
The CodeBuild step succeeds and this does deploy to Elastic Beanstalk. I changed the configuration in Elastic Beanstalk so the document root is /public (for Laravel). However, when I go to the URL, the first line of code that runs presents an error like the one below:
View [inc\navbar] not found. (View: /var/app/current/resources/views/layouts/app.blade.php)
This leads me to believe something is not built properly.
To make it work, you will need a complete pipeline: CodeCommit --> CodeBuild --> CodeDeploy
Inside your artifact bucket there will be two objects generated in the process:
s3://codepipeline-us-east-1-<001122334455>/SourceArtif/
s3://codepipeline-us-east-1-<001122334455>/BuildArtif/
The first one is produced in the initial phase of the pipeline from CodeCommit.
The second one is created by CodeBuild. The resulting zip file is exactly the same as the one from CodeCommit. So it seems CodeBuild is only testing, not saving an artifact with the results of the instructions specified in buildspec.yml.
In the third phase, CodeDeploy obtains the code from the artifact and needs to build it again via the scripts referenced in appspec.yml.
version: 0.0
os: linux
files:
  - source: /
    destination: /web/project/html
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/build_again.sh
      timeout: 600
      runas: user
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 300
      runas: root
The build_again.sh file will need to include the same commands you use in the build section of buildspec.yml; then your Laravel project should work.
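As a rough illustration only (the path matches the destination in appspec.yml above; everything else should be adjusted to your project), build_again.sh could mirror the build section of your buildspec.yml:

#!/bin/bash
# build_again.sh - re-run the Laravel build steps on the target instance.
set -e

# Matches the "destination" configured in appspec.yml.
cd /web/project/html

# Same commands as the build phase of buildspec.yml.
composer install --no-interaction
cp extra/.env ./
php artisan cache:clear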
How do I write e2e or integration tests for a cloud function? So far
I've been able to use a bash automation script, but I cannot easily detect the result of the deployment:
gcloud functions deploy MyFunction --entry-point MyFunction --runtime go111 --trigger-http
Bash is a good starting point; how about using some e2e testing tools? For instance,
with the endly workflow e2e runner your deployment workflow may look like the following:
pipeline:
  deploy:
    action: exec:run
    comments: deploy HelloWord triggered by http
    target: $target
    sleepTimeMs: 1500
    terminators:
      - Do you want to continue
    errors:
      - ERROR
    env:
      GOOGLE_APPLICATION_CREDENTIALS: ${env.HOME}/.secret/${gcSecrets}.json
    commands:
      - cd $appPath
      - export PATH=$PATH:${env.HOME}/google-cloud-sdk/bin/
      - gcloud config set project $projectID
      - ${cmd[4].stdout}:/Do you want to continue/ ? Y
      - gcloud functions deploy HelloWorld --entry-point HelloWorld --runtime go111 --trigger-http
    extract:
      - key: triggerURL
        regExpr: (?sm).+httpsTrigger:[^u]+url:[\s\t]+([^\r\n]+)
  validateTriggerURL:
    action: validator:assert
    actual: ${deploy.Data.triggerURL}
    expected: /HelloWorld/
  post:
    triggerURL: ${deploy.Data.triggerURL}
You can also achieve the same using Cloud Functions service API calls:
defaults:
  credentials: $gcSecrets
pipeline:
  deploy:
    action: gcp/cloudfunctions:deploy
    '@name': HelloWorld
    entryPoint: HelloWorldFn
    runtime: go111
    source:
      URL: ${appPath}/hello/
Finally, you can look into practical serverless e2e testing examples (Cloud Functions, Lambda, Firebase, Firestore, DynamoDB, Pub/Sub, SQS, SNS, BigQuery, etc.):
serverless_e2e
I have a SonarQube server running on Azure and a CI/CD pipeline configured using Google Cloud Build on GCP. Do you have an idea of how to include the SonarQube connection information in my cloudbuild file as a custom build step? I'm using Gradle to build and test my images.
There's a sonarqube community cloud builder: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
There is an example of using it as a step here: https://github.com/GoogleCloudPlatform/cloud-builders-community/blob/master/sonarqube/examples/cloudbuild.yaml
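If you would rather drive the analysis through Gradle (since you build with Gradle), a minimal sketch of a custom step could look like the one below. It assumes your project applies the org.sonarqube Gradle plugin and that the server URL and token are passed in as substitution variables; the image tag is only an example:

steps:
  - id: 'sonarqube-analysis'
    name: 'gradle:jdk11'                # stock Gradle image; any Gradle builder works
    entrypoint: 'gradle'
    args:
      - 'sonarqube'                     # analysis task provided by the org.sonarqube plugin
      - '-Dsonar.host.url=${_SONAR_HOST_URL}'
      - '-Dsonar.login=${_SONAR_TOKEN}'
substitutions:
  _SONAR_HOST_URL: 'https://your-sonarqube.example.com'
  _SONAR_TOKEN: ''                      # better injected via Secret Manager than hardcoded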
The sample code below worked for me:
#static code analysis by sonarqube
- name: 'maven:3.6.1-jdk-8'
  entrypoint: 'bash'
  args:
    - -c
    - |
      unset MAVEN_CONFIG \
      && echo "104.199.71.165 sonarqube.ct.blue.cdtapps.com" > /etc/hosts \
      && mvn sonar:sonar -q -Dsonar.login=5531b1a2d571c0482a3d45f605830e08ccf5f245 \
      '-Dsonar.projectKey=odp.df.pubsub-sftp' \
      '-Dsonar.projectName=ODP-DF-PUBSUB-SFTP' \
      '-Dsonar.host.url=https://sonarqube.ct.blue.cdtapps.com' \
      '-Dsonar.qualitygate.wait=true' \
      'allow_failure: true'
  dir: 'dataflows/generic/pubsub-sftp/src'
  id: 'sonarqube-analysis'