Gitlab cross-project artifact - maven

I have 2 separate GitLab projects. I've looked through the documentation for 2 days now but am still struggling to achieve what I'm after.
I have Project A, which generates the documentation for the whole project.
Project B is a Gitlab Pages project.
My gitlab-ci.yml file for Project A has a job like this:
build_docs:
  stage: deploy
  artifacts:
    # Create Archive with name of [Current Job - Current Tag]
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
      - documentation/build/dokka/
  script:
    - ./gradlew assemble
    - ls $CI_PROJECT_DIR/documentation/build
    - echo "Job Name = $CI_JOB_NAME"
    - echo "Project Dir = $CI_PROJECT_DIR"
    - echo "Docs trigger key = $DOCS_TRIGGER_KEY"
    - echo "Test Unprotected Unmasked Trigger = $TEST_TRIGGER"
    - echo "Job Token = $CI_JOB_TOKEN"
    - echo "Job ID = $CI_JOB_ID"
    - echo "Triggering [Documentation Pipeline]; Artifact from ACL -> Documentation"
    - "curl -X POST -F token=${CI_JOB_TOKEN} -F ref=master https://gitlab.duethealth.com/api/v4/projects/538/trigger/pipeline"
This job triggers the following job in Project B:
get-artifacts:
  stage: get-artifacts
  script:
    - echo "I have been triggered!!"
    - echo "$CI_JOB_TOKEN"
    - echo "$CI_JOB_NAME"
    - echo "$CI_PROJECT_DIR"
    - ls $CI_PROJECT_DIR
    # List artifacts generated from acl project
    - 'curl --globoff --header "PRIVATE-TOKEN: abc1234" "https://gitlab.duethealth.com/api/v4/projects/492/jobs"'
    # This should take artifacts from ACL and output them into --output filename
    - 'curl --location --output artifacts.zip --header "JOB-TOKEN: $CI_JOB_TOKEN" "https://gitlab.duethealth.com/api/v4/android-projects/492/jobs/63426/artifacts"'
    # - unzip build_docs-feature-inf-297-upload-kdoc-doc-mod-test.zip
    - ls $CI_PROJECT_DIR
    - file $CI_PROJECT_DIR/artifacts.zip
    - ls
  only:
    variables:
      - $CI_PIPELINE_SOURCE == "pipeline"
  tags:
    - pages
Now, in the job logs of Project A, the artifacts are uploaded successfully and I see a size of ~50000.
In the logs of Project B, after
# List artifacts generated from acl project
I DO see the zip file artifact.
However, it seems that my curl request to GET a job's artifacts is somehow incorrect. Two things stand out:
1.) The download size is much smaller than the upload: we upload artifacts of size ~50000 but then download those same artifacts at a size of ~1000.
2.) The file the artifacts should be written to is not a zip file, even though it has the .zip file extension.
It seems to me like it never actually fetches the artifacts and instead just creates some object named artifacts.zip which is not even a zip file, and I'm assuming the size I'm seeing is just the size of the empty artifacts.zip.
Any insight would be greatly appreciated.

My problem was with the URL: the download request used /api/v4/android-projects/... instead of /api/v4/projects/.... After using the correct URL, the problem was fixed.
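For reference, a minimal sketch of what the corrected download line presumably looks like (project ID 492 and job ID 63426 are taken from the question; the documented endpoint is /projects/:id/jobs/:job_id/artifacts):

    - 'curl --location --output artifacts.zip --header "JOB-TOKEN: $CI_JOB_TOKEN" "https://gitlab.duethealth.com/api/v4/projects/492/jobs/63426/artifacts"'

GitLab also documents a by-ref variant, /projects/:id/jobs/artifacts/:ref_name/download?job=build_docs, which avoids hard-coding a job ID.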

Related

Pre-commit - /dev/tty blocks all output text defined in hook (e.g. through echo) before entering user input

I'm trying to create my own hook (defined in terraform_plan.sh; see terraform_plan.sh and .pre-commit-config.yaml below) that requires user input to decide whether the hook succeeds or fails (the hook is about some checking by the user before commit). To enable user input, I added exec < /dev/tty, following How do I prompt the user from within a commit-msg hook?.
The script looks like this (terraform_plan.sh):
#!/bin/sh
location=$(pwd)
echo "location: ${location}"
cd ./tf_build/
project_id=$(gcloud config get-value project)
credentials=$(gcloud secrets versions access latest --secret="application-default-credentials")
echo "PROJECT_ID: ${project_id}"
echo "CREDENTIALS: ${credentials}"
terraform plan -var "PROJECT_ID=${project_id}" -var "APPLICATION_DEFAULT_CREDENTIALS=${credentials}"
exec < /dev/tty
read -p "Do you agree this plan? (Y/N): " answer
echo "answer: ${answer}"
# for testing purpose, assign 0 directly
exit 0
I expect the prompt Do you agree this plan? (Y/N) to appear before I can enter my answer. But actually nothing is shown and it just hangs there waiting for input.
(.venv) ➜ ✗ git commit -m "test"
sqlfluff-lint........................................(no files to check)Skipped
sqlfluff-fix.........................................(no files to check)Skipped
black................................................(no files to check)Skipped
isort................................................(no files to check)Skipped
docformatter.........................................(no files to check)Skipped
flake8...............................................(no files to check)Skipped
blackdoc.............................................(no files to check)Skipped
Terraform plan...........................................................
Only after I give the input "Y" do all the output strings defined in this hook (e.g. output from echo, the terraform plan) come out.
Terraform plan...........................................................Y
Passed
- hook id: terraform_plan
- duration: 222.33s
location: [remove due to privacy]
PROJECT_ID: [remove due to privacy]
CREDENTIALS: [remove due to privacy]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_bigquery_table.bar will be created
  + resource "google_bigquery_table" "bar" {
      + creation_time       = (known after apply)
      + dataset_id          = "default_dataset"
      + deletion_protection = true
      ...
I also tried read -p "Do you agree this plan? (Y/N): " answer < /dev/tty, but got the same issue.
Here is the relevant part of my .pre-commit-config.yaml config file:
repos:
  - repo: local
    hooks:
      # there are other hooks above, removed for readability
      - id: terraform_plan
        name: Terraform plan
        entry: hooks/terraform_plan.sh
        language: script
        verbose: true
        files: (\.tf|\.tfvars)$
        exclude: \.terraform\/.*$
  - repo: ../pre-commit-terraform
    rev: d7e049d0b72eebcb09b719bb296589a47f4fa806
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
        args: [--args=-no-color]
So far I do not know what the root cause is or how to solve it. Any help or suggestions would be appreciated. Thanks.
pre-commit intentionally does not allow interactivity, so there is no working solution within the framework. You can sidestep the framework and use a legacy shell script (.git/hooks/pre-commit.legacy), but I would not recommend that either:
terraform plan is also a rather poor choice for a pre-commit hook -- especially if you block the user to confirm it. They are likely to be either surprised by such a hook or fatigued by it, and will ignore the output (mash yes) or skip the entire process (SKIP / --no-verify).
disclaimer: I wrote pre-commit
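For completeness, a minimal sketch of the (discouraged) sidestep mentioned above, assuming pre-commit's usual migration-mode behavior of executing an existing .git/hooks/pre-commit.legacy script in addition to its own hooks:

# Not recommended: this bypasses pre-commit's non-interactive design.
# pre-commit runs an executable pre-commit.legacy hook outside its
# output capture, so the echo output and the /dev/tty prompt appear
# immediately instead of after the hook exits.
cp hooks/terraform_plan.sh .git/hooks/pre-commit.legacy
chmod +x .git/hooks/pre-commit.legacy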

Mailgun attach file Gitlab CI

I'm trying to send a csv file - an artifact from GitLab CI - over Mailgun.
Regular mail works well, but when I add an attachment it fails with an error:
curl: (26) Failed to open/read local data from file/application
My yaml file:
artifacts:
  paths:
    - report_folder/result.csv

send_email:
  script: curl --user "api:$Mailgun_API_KEY"
    "https://api.mailgun.net/v3/$Mailgun_domain/messages"
    -F from='Gitlab <gitlab@example.com>'
    -F to=xxx@mail.com
    -F subject='test'
    -F text='hello from mailgun'
    -F attachment='@report_folder/result.csv'
I guess something is wrong in the last line, in the file path, but I have tried different combinations and nothing works so far.
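A minimal sketch of a variant that may work, assuming result.csv was produced in an earlier stage and fetched as an artifact into the job's working directory. curl exit code 26 means it could not open the file at the path after the @, so the most likely culprit is the relative path; an absolute path plus a quick ls makes the dependency explicit:

send_email:
  script:
    - ls "$CI_PROJECT_DIR/report_folder"   # verify the artifact is actually present
    - curl --user "api:$Mailgun_API_KEY"
      "https://api.mailgun.net/v3/$Mailgun_domain/messages"
      -F from='Gitlab <gitlab@example.com>'
      -F to=xxx@mail.com
      -F subject='test'
      -F text='hello from mailgun'
      -F attachment="@$CI_PROJECT_DIR/report_folder/result.csv"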

Bitbucket pipelines how to merge two variables to produce another variable to be used somewhere else

I am trying to work out a Bitbucket pipeline using bitbucket-pipelines.yml:
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - dotnet build $PROJECT_NAME
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - '[ ! -e "$BITBUCKET_CLONE_DIR/$EnvrBuild" ] && mkdir $BITBUCKET_CLONE_DIR/$EnvrBuild'
            - dotnet publish $PROJECT_NAME --configuration Release
            - cp -r $BITBUCKET_CLONE_DIR/$PROJECT_NAME/bin/Release/netcoreapp2.1/publish/** $BITBUCKET_CLONE_DIR/$EnvrBuild
          artifacts:
            - $EnvrBuild/**
I am new to pipelines in Bitbucket. When I echo $EnvrBuild I get the right result, but $EnvrBuild is empty in subsequent steps and the step does not produce any artifacts; however, if I hard-code the values, it works. Is there a way to do something like $BITBUCKET_BUILD_NUMBER+"_"+$BITBUCKET_BRANCH? (I know this is wrong, but you get the idea of what I am trying to achieve.) Thank you in advance.
Variable expansion is not allowed when specifying artifacts; you have to provide a static value. However, you can store multiple subdirectories under a static build directory and match them implicitly with wildcards. Here is an example:
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - dotnet build $PROJECT_NAME
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - '[ ! -e "$BITBUCKET_CLONE_DIR/$EnvrBuild" ] && mkdir $BITBUCKET_CLONE_DIR/$EnvrBuild'
            - dotnet publish $PROJECT_NAME --configuration Release
            - mkdir -p $BITBUCKET_CLONE_DIR/build_dir/$EnvrBuild
            - cp -r $BITBUCKET_CLONE_DIR/$PROJECT_NAME/bin/Release/netcoreapp2.1/publish/** $BITBUCKET_CLONE_DIR/build_dir/$EnvrBuild
          artifacts:
            - build_dir/**
      - step:
          script:
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - ls build_dir/$EnvrBuild

Run Google Cloud Build Command for Each Result of Array

I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container Registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run affected:apps --base=origin/master --head=HEAD']
That command returns a result like this:
> project-name@0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do in Google Cloud Build with that result. With a node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');

function getApps() {
  exec('npm run affected:apps', (err, out) => {
    if (err) {
      console.log(null);
    } else {
      const lines = out.split('\n');
      const apps = lines[lines.length - 2].split(' ');
      console.log(JSON.stringify(apps));
    }
  });
}

getApps();
That returns an array of the affected apps, and null on error. Still, even with that, I'm not sure what the next step in Google Cloud Build would be. With the result of either the command or that script, ideally I'd be able to run a docker build command like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and not do anything if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx Workspaces? Or, if you've just got Google Cloud Build experience and know what my next step should be, that'd be great too.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step could read the file and run any other commands, e.g. to create a docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
One trick might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process for each project:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the node script, I'd suggest writing a file with the affected apps from a Cloud Build custom step. The file will be written to the "/workspace" directory and is available to any custom step executed later; with this you should be able to run the docker build command.
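A minimal sketch of such a step, assuming the node script shown in the question is saved as get-affected.js (a hypothetical filename) and that later steps read /workspace/affected.json:

- name: node:10.15.1
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # /workspace persists across steps within the same build
      node get-affected.js > /workspace/affected.json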

Gitlab pipeline: Bad Substitution error in script

I am trying to set up a pipeline for deployment. I have my .gitlab-ci.yml set up as follows:
deploy:
  image: alpine:latest
  stage: deploy
  only:
    - staging
  script:
    - files="`cat file-changelist.txt`"
    - file_list=\($files\)
    - for file in "${!file_list[@]}"; do echo "$file"; done
But I keep getting "syntax error: bad substitution" on the last line. I have tried numerous variations but can't seem to get it right. My end goal is to be able to open an scp connection and copy each file in file_list to a server.
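For what it's worth, a minimal sketch of how the loop could be written without arrays: alpine:latest runs BusyBox ash, not bash, and ash has no arrays, so constructs like ${!file_list[@]} trigger "bad substitution" there. Plain word splitting over the file's contents sidesteps that (the scp destination is hypothetical):

  script:
    - files="$(cat file-changelist.txt)"
    - for file in $files; do echo "$file"; scp "$file" user@server:/destination/path/; done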
