GitHub Actions can't access Swift package - YAML

# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    #branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: macos-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - run: |
          pwd
          swift package init --type library
          xcodebuild clean
          xcodebuild test -project TimeFountain.xcodeproj -scheme TimeFountain CODE_SIGN_ENTITLEMENTS= DEVELOPMENT_TEAM= CODE_SIGNING_ALLOWED=NO
The problem is that it isn't accessing the Alamofire Swift package, which I committed into the project. The error I'm getting is:
...
error: no such module 'Alamofire'
import Alamofire
       ^
CompileSwift normal x86_64
...
Error: Process completed with exit code 65.

The problem was the test -project TimeFountain.xcodeproj invocation; I changed it to use the workspace instead:
# This is a basic workflow to help you get started with Actions
name: CI
on:
  pull_request:
    branches:
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: macos-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - run: |
          pwd
          swift package init --type library
          xcodebuild clean
          xcodebuild test -workspace TimeFountain.xcworkspace -scheme TimeFountain CODE_SIGN_ENTITLEMENTS= DEVELOPMENT_TEAM= CODE_SIGNING_ALLOWED=NO
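If the package still fails to resolve on the runner, an extra step can force Swift Package Manager to fetch the workspace's dependencies before the tests run (a minimal sketch, assuming the committed Alamofire package is referenced by the workspace):
      # Resolve SPM dependencies explicitly before building/testing
      - run: xcodebuild -resolvePackageDependencies -workspace TimeFountain.xcworkspace -scheme TimeFountain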

Related

GoReleaser and ssh-agent GitHub Actions: why "could not read Username ... terminal prompts disabled"?

I have been going around and around between the instructions for GitHub Actions, GoReleaser, and ssh-agent and cannot get my simple release build script to work. My goal is simple: I have a private Go repository containing a CLI application, and its go.mod file has a dependency on another private repository that we've created. Building the application locally is successful.
The issue is that when I try to build this simple application in a GitHub Action, things become really complicated very quickly: repository secrets, a deploy key, and a few other moving parts. As common as this use case is, I failed to find a single example where someone has implemented a release build script for it. I am about ready to switch to a mono-repo out of frustration.
Details: the GitHub build script works properly until the actual build using GoReleaser, which fails with the following:
"release failed after 6s error=hook failed: go mod tidy: exit status 1; output: go: downloading..."
and
"fatal: could not read Username for 'https://github.com': terminal prompts disabled"
From my understanding, ssh-agent should be setting up access using the SSH private key that I've configured in our account. Hence, GoReleaser should have no trouble accessing any repository that has a DEPLOY_KEY containing the SSH public key.
I would really appreciate your help in getting all of these moving parts to work together. I am sure that there are a lot of other folks wrangling with this issue, too.
Thanks for your time and interest.
name: Release
on:
  push:
    tags:
      - "v*.*.*"
jobs:
  build:
    name: Build Release Binaries
    runs-on: ubuntu-latest
    permissions:
      contents: write
      #packages: write
    steps:
      - name: Install SSH Client
        uses: webfactory/ssh-agent@v0.5.4
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Configure Go 1.18
        uses: actions/setup-go@v3
        with:
          go-version: 1.18
      - name: Debug
        run: |
          pwd
          echo ${HOME}
          echo ${GITHUB_WORKSPACE}
          echo ${GOPATH}
          echo ${GOROOT}
      - name: Debug2
        run: go env
      - name: Check out the code into the Go module directory.
        uses: actions/checkout@v3
        with:
          repository: 'myorg/myrepo'
          fetch-depth: 0 # See: https://goreleaser.com/ci/actions/
          path: go/src/github.com/myorg/myrepo
      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v3
        with:
          # either 'goreleaser' (default) or 'goreleaser-pro'
          distribution: goreleaser
          version: latest
          args: release --rm-dist
          workdir: ${{ github.workspace }}/go/src/github.com/myorg/myrepo
        env:
          GITHUB_TOKEN: ${{ secrets.TOKEN }}
Run goreleaser/goreleaser-action@v3
  with:
    distribution: goreleaser
    version: latest
    args: release --rm-dist
    workdir: /home/runner/work/myrepo/myrepo/go/src/github.com/myorg/myrepo
    install-only: false
  env:
    SSH_AUTH_SOCK: /tmp/ssh-HIEFX12pQLiS/agent.1733
    SSH_AGENT_PID: 1734
    APP_VERSION: v2.1.3
    BUILD_TIME: Tue Jul 19 07:03:53 UTC 2022
    GITHUB_TOKEN: ***
Downloading https://github.com/goreleaser/goreleaser/releases/download/v1.10.2/goreleaser_Linux_x86_64.tar.gz
Extracting GoReleaser
/usr/bin/tar xz --warning=no-unknown-keyword --overwrite -C /home/runner/work/_temp/0d57d027-19c9-4eee-b395-8e6b3c534c98 -f /home/runner/work/_temp/2d3cd5e7-7087-4ff0-b2db-c036bb8c5bc8
GoReleaser latest installed successfully
Using /home/runner/work/myrepo/myrepo/go/src/github.com/myorg/myrepo as working directory
v2.1.3 tag found for commit 'b94e310'
/opt/hostedtoolcache/goreleaser-action/1.10.2/x64/goreleaser release --rm-dist
• starting release...
• loading config file    file=.goreleaser.yaml
• loading environment variables
• getting and validating git state
• building...            commit=b94e310435835d012155fce67176ef54a687326e latest tag=v2.1.3
• parsing tag
• setting defaults
• running before hooks
• running                hook=go mod tidy
• took: 6s
⨯ release failed after 6s    error=hook failed: go mod tidy: exit status 1; output: go: downloading
I would suggest configuring git for private modules in the GitHub Action, adding one simple step in your workflow like:
- name: Configure git for private modules
  env:
    GITHUB_API_TOKEN: ${{ secrets.GH_API_TOKEN }}
  run: git config --global url."https://x:${GITHUB_API_TOKEN}@github.com".insteadOf "https://github.com"
And add the GH_API_TOKEN secret to the repository so the Go modules can be downloaded during the go mod tidy command.
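If go mod tidy still tries to fetch the private module through the public module proxy, it may also help to mark your organization's module paths as private before running GoReleaser (a minimal sketch; the github.com/myorg/* path mirrors the placeholder used above and is an assumption):
- name: Mark private Go modules
  # go env -w persists GOPRIVATE so the proxy and checksum DB are skipped for these paths
  run: go env -w GOPRIVATE=github.com/myorg/*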

Azure pipeline fails on building Kotlin Multiplatform shared framework using embedAndSignAppleFrameworkForXcode and fastlane

I'm working on a Kotlin Multiplatform project which is building fine locally but I can't get it to work on an Azure DevOps pipeline.
Some good things to know:
not using Cocoapods
using the embedAndSignAppleFrameworkForXcode gradlew command in Build Phases
all commands using fastlane work for multiple developers locally
we use custom configurations like: ProjectADebug/ProjectARelease but we defined KOTLIN_FRAMEWORK_BUILD_TYPE for all of them
I'm trying to get an Azure DevOps pipeline to build and upload to App Store Connect using fastlane. We are using match for signing, and that works great. Archiving fails, and it looks like it's failing on building the shared KMM framework.
Has anybody run into the same problem and could help me out? Or does anyone have tips on how I can view those gym logs on the Azure VM? I assume they say what actually went wrong, instead of this general error.
▸ Running script 'Build Kotlin Common'
▸ Copying /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/BuildProductsPath/ProjectRelease-iphoneos/Airship_AirshipCore.bundle
▸ Copying /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/BuildProductsPath/Project Release-iphoneos/Airship_AirshipAutomation.bundle
** ARCHIVE FAILED **
The following build commands failed:
PhaseScriptExecution Build\ Kotlin\ Common /Users/runner/Library/Developer/Xcode/DerivedData/Project-ffubndppzitzbxhibjgeavrhnzpw/Build/Intermediates.noindex/ArchiveIntermediates/Project/IntermediateBuildFilesPath/Project.build/ProjectRelease-iphoneos/Project.build/Script-2F4970EC27CD16A000E32F91.sh (in target 'Project' from project 'Project')
(1 failure)
ERROR [2022-05-10 13:04:32.36]: Exit status: 65
ERROR [2022-05-10 13:04:32.53]: ⬆️ Check out the few lines of raw `xcodebuild` output above for potential hints on how to solve this error
WARN [2022-05-10 13:04:32.53]: 📋 For the complete and more detailed error log, check the full log at:
WARN [2022-05-10 13:04:32.53]: 📋 /Users/runner/Library/Logs/gym/Project-Project.log
This is the lane in the Fastfile:
lane :azure_beta do |options|
  label = options[:label].capitalize
  git_url = "someURL"
  match(
    type: "appstore",
    readonly: true,
    git_url: git_url,
    keychain_name: ENV["MATCH_KEYCHAIN_NAME"],
    keychain_password: ENV["MATCH_KEYCHAIN_PASSWORD"],
    verbose: true
  )
  build_app(
    project: "../Project/Project.xcodeproj",
    configuration: "#{label}Release",
    scheme: label
  )
  # fails on the build_app step...
  changelog = changelog_from_git_commits(
    pretty: "- (%ae) %s",
    date_format: "short",
    merge_commit_filtering: "exclude_merges"
  )
  upload_to_testflight(
    changelog: changelog,
    app_identifier: label == "Project" ? idsProjectA : idsProjectB,
    skip_waiting_for_build_processing: true
  )
  version_number = get_version_number(
    xcodeproj: "../Project/Project.xcodeproj",
    target: "Project", # Hardcoded because we have multiple targets; label is specified in the build_app configuration
    configuration: "#{label}Release"
  )
  add_git_tag(
    includes_lane: false,
    prefix: "ios-#{label.downcase}-#{version_number}-",
    build_number: number_of_commits
  )
  delete_keychain(name: ENV["MATCH_KEYCHAIN_NAME"])
end
And this is my pipeline YAML:
pool:
  vmImage: 'macos-latest'
variables:
  - group: fastlane
jobs:
  - job: testflight
    steps:
      - task: Bash@3
        displayName: fastlane update
        inputs:
          targetType: 'inline'
          script: |
            gem update fastlane
            fastlane --version
      - task: JavaToolInstaller@0
        inputs:
          versionSpec: '11'
          jdkArchitectureOption: 'x64'
          jdkSourceOption: 'PreInstalled'
      - task: Bash@3
        displayName: 'Update Dependencies'
        inputs:
          targetType: 'inline'
          script: HOMEBREW_NO_AUTO_UPDATE=1 brew bundle
      - task: Bash@3
        displayName: "Set build properties"
        inputs:
          targetType: 'inline'
          script: |
            echo "sdk.dir=/Users/runner/Library/Android/sdk"
            echo "INCLUDE_MOCKER=false" >> local.properties
            echo "INCLUDE_ANDROID=false" >> local.properties
            echo "INCLUDE_TESTER=false" >> local.properties
            echo "APP_LABEL=$(APP_LABEL)" >> local.properties
        env:
          APP_LABEL: $(APP_LABEL)
      - task: Gradle@2
        displayName: 'Clean label common'
        inputs:
          workingDirectory: ''
          tasks: "common:cleanLabel"
        env:
          APP_LABEL: $(APP_LABEL)
      - task: Bash@3
        displayName: fastlane ios
        env:
          MATCH_PASSWORD: $(MATCH_PASSWORD)
          FASTLANE_PASSWORD: $(FASTLANE_PASSWORD)
          FASTLANE_SESSION: $(FASTLANE_SESSION)
          FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD: $(FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD)
        inputs:
          targetType: 'inline'
          script: |
            sudo xcode-select -s /Applications/Xcode_13.2.app
            cd ios/Project
            fastlane azure_beta label:Project app_identifier:project.bundle.id itc_team_id:itc.team.id team_id:team.id git_match_branch:master username:me@myself.com
As it turned out, there was an error in building the common KMM layer. I would probably have found it by doing a clean checkout, but I found it by using a self-hosted agent on Azure DevOps so I could navigate to /Users/runner/Library/Logs/gym/Project-Project.log, as Pylyp Dukhov suggested.
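On a hosted agent, a similar level of detail can be obtained by publishing the gym log directory as a pipeline artifact when the job fails (a minimal sketch using the standard PublishBuildArtifacts@1 task; the log path is taken from the fastlane output above):
      - task: PublishBuildArtifacts@1
        # only keep the logs when the fastlane step broke the job
        condition: failed()
        inputs:
          PathtoPublish: '/Users/runner/Library/Logs/gym'
          ArtifactName: 'gym-logs'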

Gitlab runner cache miss file after stage complete

Summary
My gitlab-ci.yml has 3 stages for deploying an application to an OKD pod.
The application runs Spring Boot on tomcat:8.
Sometimes the cache.zip is not updated after a stage completes, so the next stage can't run correctly.
Steps to reproduce
My gitlab-ci runs the following stages:
Stage 1: run test compile ---> OK
Stage 2: package war file as output for deploy ---> The GitLab CI log shows success, but the cache.zip has no war file (only sometimes; other times it runs correctly)
Stage 3: Deploy war file to pod ---> Because the war file does not exist in cache.zip, the script errors -> failed
.gitlab-ci.yml
image: openshift/origin-cli
stages:
  - build
  - test
  - staging
cache:
  paths:
    - .m2/repository
    - target
    - artifact
validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8
verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
    - 'mvn package' # =====> this command generates the war file
  only:
    - master
  image: maven:3.3.9-jdk-8
staging:
  script:
    - "mkdir -p artifact"
    - "cp ./target/*.war ./artifact/" # ======> Sometimes fails at this line because the previous stage did not add the war file to the cache
    - "oc start-build $APP"
    - "rm -rf ./target/* && rm -rf ./artifact/*" # Remove war & class files, only cache m2 libs
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Actual behavior
Sometimes the cache does not have the war file after the test stage completes (does this depend on the war file size?)
Expected behavior
The war file is added to the cache after the test stage so the staging stage can deploy it
Relevant logs and/or screenshots
job log
Running with gitlab-runner 13.7.0 (943fc252)
on gitlab-runner-node1 y6awygsj
Preparing the "docker" executor
00:01
Using Docker executor with image openshift/origin-cli ...
Using locally found image version due to if-not-present pull policy
Using docker image sha256:7ebb6be01117a50344d63f77c385a13302afecd33480b97c36a518d4f5ebc25a for openshift/origin-cli with digest docker.io/openshift/origin-cli@sha256:509e052d0f2d531b666b7da9fa49c5558c76ce5d286456f0859c0a49b16d6bf2 ...
Preparing environment
00:00
Running on runner-y6awygsj-project-489-concurrent-0 via gitlab.runner.node1...
Getting source from Git repository
00:01
Fetching changes...
Reinitialized existing Git repository in /builds/my-project/.git/
Checking out b4c97428 as master...
Removing .m2/
Removing artifact/
Removing target/
Skipping Git submodules setup
Restoring cache
00:05
Checking cache for default-23...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:01
$ mkdir -p artifact
$ cp ./target/*.war ./artifact/
cp: cannot stat './target/*.war': No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Environment description
config.toml
concurrent = 1
check_interval = 0
[session_server]
  session_timeout = 1800
[[runners]]
  name = "gitlab-runner-node1"
  url = "https://gitlab.mycompany.vn/"
  token = "y6awygsj9zks18nU6PDt"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    dns = ["192.168.100.1"]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/nfs/nfsshare-gitlab/cache:/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
Used GitLab Runner version
Version: 13.7.0
Git revision: 943fc252
Git branch: 13-7-stable
GO version: go1.13.8
Built: 2020-12-21T13:47:06+0000
OS/Arch: linux/amd64
Possible fixes
Re-run the test stage until the cache has the war file
Let's go step by step.
First, regarding how to manage the files between stages.
It's true that you could directly access files between jobs and stages if both run in the same environment, but that's not always the case (even if both runners use the same NFS share directory), and you should use artifacts for that.
When you define an artifact within a job, you're specifying a list of files that are attached to the job when it succeeds, fails, or always, depending on the configuration you have.
By default, all artifacts from previous stages are passed to each job, but in any case you can use dependencies to define which jobs you want to fetch artifacts from.
So basically you should use the following .gitlab-ci.yml:
image: openshift/origin-cli
stages:
  - build
  - test
  - staging
cache:
  paths:
    - .m2/repository
validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8
verify:jdk8:
  stage: test
  script:
    - 'mvn verify' # =====> verify already includes: validate, compile, test and package
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
  only:
    - master
  image: maven:3.3.9-jdk-8
staging:
  dependencies:
    - verify:jdk8
  script:
    - "mkdir -p artifact"
    - "cp ./target/[YOUR_APP_NAME].war ./artifact/"
    - "oc start-build $APP"
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Also, notice that I deleted the mvn package instruction, since verify already runs it. I would recommend taking a look at the Build Lifecycle Basics of Maven.
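If artifact storage is a concern, the verify:jdk8 artifacts block above can also be given an expiry so old war files are cleaned up automatically (a minimal sketch; the one-week value is an assumption):
verify:jdk8:
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
    expire_in: 1 week  # GitLab deletes the artifact after this period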

How to run a pipeline only on the HEAD commit in a GitLab CI runner?

We have a CI pipeline on our repository, hosted in GitLab.
We set up gitlab-runner on our local machine.
The pipeline runs 4 steps:
build
unit tests
integration tests
quality tests
The whole pipeline takes almost 20 minutes, and it triggers on each push to a branch.
Is there a way to configure the gitlab-runner so that, if the HEAD of the branch it is currently building changes, it will auto-cancel the run? Because the latest version is what matters.
For example, in this run the lower run is unnecessary.
gitlab-ci.yml
stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests
build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile
unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"
unit_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"
quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}
Edit after @siloko's comment:
I already tried the "Auto-cancel redundant, pending pipelines" option in the settings menu, but I want to cancel running pipelines, not pending ones.
After further investigation, I found that I had 2 active runners on one of my machines: one shared runner and another specific runner. If I push 2 commits one after another to the same branch, both runners take the jobs and execute them.
That also explains why the "Auto-cancel redundant, pending pipelines" option didn't work: it only works when the same runner has pending jobs.
Action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.
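On newer GitLab versions, running pipelines can also be cancelled automatically when a newer pipeline starts on the same branch, by marking jobs as interruptible and enabling the "Auto-cancel redundant pipelines" project setting (a minimal sketch applied to the build job above; marking the other jobs the same way is assumed to be desired):
build:
  stage: build
  interruptible: true  # allow this job to be auto-cancelled when a newer pipeline starts
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile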

CircleCI API behaving differently from github commit trigger?

I'm running the cypress-example-kitchensink app on CircleCI.
This is my YAML config:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1.0.1
workflows:
  build:
    jobs:
      - cypress/install:
          build: 'npm run build'
      - cypress/run:
          requires:
            - cypress/install
          start: 'npm start'
This kicks off and passes just fine when I make a commit to my fork of the repo above.
However, when I try to execute a CircleCI build programmatically, using
curl -X POST https://circleci.com/api/v1.1/project/github/Atticus29/cypress-example-kitchensink?circle-token=myApiToken, the build fails and the jobs dashboard on CircleCI tells me that something is wrong with my config file:
6 schema violations found
required key [jobs] not found
workflows:
5 schema violations found
workflows: minimum size: [2], found: 1
workflows: build: jobs: 4 schema violations found
workflows: build: jobs: 0: 0 subschemas matched instead of one
workflows: build: jobs: 0: expected type: String, found: Mapping
workflows: build: jobs: 0: install: extraneous key [build] is not permitted
workflows: build: jobs: 1: 0 subschemas matched instead of one
workflows: build: jobs: 1: expected type: String, found: Mapping
workflows: build: jobs: 1: run: extraneous key [start] is not permitted
And that something went wrong with my build:
Build-agent version 0.1.1216-48f80d08 (2018-12-07T16:01:40+0000)
Configuration errors: 2 errors occurred:
* Configuration version 2.1 requires the "Enable Build Processing" project setting. Enable Build Processing under Project Settings -> Advanced Settings. In order to retrigger build processing, you must push a new commit.
* Cannot find a job named build to run in the jobs: section of your configuration file. If you expected a workflow to run, check your config contains a top-level key called 'workflows:'
I can confirm that Enable Build Processing is on.
None of these were problems when I ran the build in the usual way. Any advice?
CircleCI for some reason keeps assuming that the project is not set up for v2.0, despite config.yml being named the right thing and living in the right place in the repo. After a few commits, this issue seems to go away.
I ended up running a build programmatically with the following script:
#!/bin/bash
PERSONAL_TOKEN=myPersonalTokenHere
MOST_RECENT_BUILD=`curl -s "https://circleci.com/api/v1.1/recent-builds?circle-token=$PERSONAL_TOKEN&limit=1"| grep 'build_num'|grep -o '\d.'|sed 's/,//g'|sort -r -n|head -n1`
curl -X POST "https://circleci.com/api/v1.1/project/github/holmbergius/wildMeCypress/$MOST_RECENT_BUILD/retry?circle-token=$PERSONAL_TOKEN"
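An alternative that avoids the v1.1 job-versus-workflow mismatch is to trigger the build through the v2 pipeline endpoint, which runs the workflows declared in a 2.1 config (a minimal sketch; the branch name and the reuse of the personal token are assumptions):
#!/bin/bash
PERSONAL_TOKEN=myPersonalTokenHere
# Trigger a new pipeline on the given branch; workflows from .circleci/config.yml run as they would on a push
curl -X POST "https://circleci.com/api/v2/project/gh/Atticus29/cypress-example-kitchensink/pipeline" \
  -H "Circle-Token: $PERSONAL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch": "master"}'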
