How to put a file (.jks, .p12) as a variable? - apk

I'm working on a gitlab-ci pipeline to automate building, signing the apk, and deploying to the Play Store.
The pipeline works fine, but the two files, the .jks used to sign the apk and the .p12 for my Google Cloud Platform service, are currently in my repository, and that is not a secure way to do it. What I want is to store these two files (.jks and .p12) as gitlab-ci variables so I can avoid keeping them in my repo...

Yes, you can define a GitLab CI variable of the "file" type.
Here is the documentation for GitLab:
https://docs.gitlab.com/ee/ci/variables/#use-file-type-cicd-variables
In the settings of your project, you need to define a variable of the file type.
GitLab will then create a temporary file holding your variable's content and expose its path to the job.
You can use that path directly,
or copy the file with cp like this:
cp $MY_SECRET_FILE $HOME/.m2/settings.xml
Edit:
Binary files do not seem to be supported, so follow and upvote the following issue: https://gitlab.com/gitlab-org/gitlab/-/issues/205379
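Because the .jks and .p12 keystores are binary, a common workaround (a sketch only, not covered by the docs linked above) is to base64-encode them locally, store the encoded text in a CI/CD variable, and decode it back into a file inside the job. The variable names SIGNING_JKS_BASE64 and GCP_P12_BASE64 below are hypothetical:

# Locally, encode the files once and paste the output into
# Settings > CI/CD > Variables (e.g. SIGNING_JKS_BASE64, GCP_P12_BASE64):
#   base64 -w 0 release.jks
#   base64 -w 0 service-account.p12

# .gitlab-ci.yml (sketch)
sign_apk:
  stage: build
  script:
    # Recreate the binary files inside the job; they never live in the repo
    - echo "$SIGNING_JKS_BASE64" | base64 -d > release.jks
    - echo "$GCP_P12_BASE64" | base64 -d > service-account.p12
    - ./gradlew assembleRelease   # hypothetical build/sign step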

Related

How to rename .env variables in package.json?

What I have
I have multiple projects using Percy for Cypress where I set the PERCY_TOKEN env variable inside the .env file. The token is different for each project. In the CI I set different env variables for each project, but locally I have to do it in the .env file. Because of this, I have to edit the .env file whenever I change between projects.
Goal
I would like to set them in the .env file this way:
PROJECT_A_PERCY_TOKEN=tokenhash1
PROJECT_B_PERCY_TOKEN=tokenhash2
So later I could rename these variables to PERCY_TOKEN, eliminating the need to constantly change the .env file.
What I tried
I'm trying to do this inside the package.json file's scripts property. Unfortunately, echo $PROJECT_A_PERCY_TOKEN prints nothing. I know that I could create a shell/Python/JS script that parses the .env file and then passes the value back or calls npm run directly, but I would like to do this without an external script.
Problem
It appears to me that I can't access the env variables inside package.json. Is there a way to rename the variable only using the npm script?
tl;dr
If the package you try to configure has the ability to do configuration via a JavaScript file, you can add the renaming at the beginning of it:
process.env.PERCY_TOKEN = process.env.CYPRESS_PERCY_SALESFORCE_TOKEN;
Explanation
While this isn't the solution I was looking for, it is a workaround for this specific use case. Percy supports JavaScript config files, so I migrated my YAML config file, then logged process.env, and the .env file's variables were there, so I just needed to copy the correct one. This might work for other packages that support JavaScript config files (or some alternative kind of hook/preloader where custom code can be placed), but if they don't, the question is still unanswered.
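For illustration, a minimal sketch of such a JavaScript config file (the .percy.js file name follows Percy's config conventions; the PROJECT_A_PERCY_TOKEN name is taken from the .env example above, and the snapshot settings are placeholders):

// .percy.js (sketch)
// Rename the project-specific token before Percy reads PERCY_TOKEN
process.env.PERCY_TOKEN = process.env.PROJECT_A_PERCY_TOKEN;

module.exports = {
  version: 2,
  snapshot: {
    widths: [1280],
  },
};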

Is there a way to set non-secret environment variables in Github Actions on the Settings page?

As far as I know, there are two ways to set environment variables in GitHub Actions:
Hardcoding them into the YAML file
Adding them as repository secrets on the settings page
(Screenshot: repository secrets page)
But what if I don't want them to be secret? In the picture above, SERVER_PREFIX and ANALYTICS_ENABLED shouldn't be secret. Is there a way to set up env variables on the settings page and keep them visible? In Travis we had that option.
There is currently no option to add non-secret env variables on the GitHub settings page.
You can create workflow-scoped env variables in the workflow file:
env:
  SERVER_PREFIX: SOME_PREFIX
Then access them with:
${{ env.SERVER_PREFIX }}
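Putting that together, a minimal workflow sketch with non-secret, workflow-level variables (the file name, job, and values are placeholders):

# .github/workflows/build.yml (sketch)
name: build
on: [push]

env:
  SERVER_PREFIX: SOME_PREFIX      # visible in the repo, not a secret
  ANALYTICS_ENABLED: "true"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Available as ${{ env.SERVER_PREFIX }} in YAML and as $SERVER_PREFIX in run steps
      - run: echo "Prefix is $SERVER_PREFIX, analytics enabled: $ANALYTICS_ENABLED"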
If you don't need to use them in the action's YAML itself, just define your variables in a downloadable file and then use something like curl or wget to fetch them into your build environment.
For instance, I've done something similar for common CI files, and now I have multiple projects running the same project-building scripts; their local action simply downloads an .sh file and runs it.
If you need to set up variables in one of your build steps, to be used later by some other action, have a look at this (but I've never tried it myself).
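As an illustration of that download-and-run pattern, a run step could look roughly like this (the URL and file name are hypothetical):

# inside a workflow run step (sketch)
curl -fsSL -o ci-env.sh https://example.com/ci/ci-env.sh   # hypothetical shared file
. ./ci-env.sh    # sourcing makes its exports visible for the rest of this step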

Azure DevOps ThirdParty Tools for build / Deployment

pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY

Environment variables:
$OCCS_CODE_LOCATION: path to the location of all OCCS code
$OCCS_ADMIN_URL: URL of the administration interface on the target Commerce Cloud instance
$OCCS_APPLICATION_KEY: application key used to log into the target Commerce Cloud administration interface
So I want to use an Azure DevOps repository for CI/CD.
In the code block above, you can see that I have specified the dcu and Python commands in two tasks.
dcu is a third-party Node.js Oracle tool that needs to be used to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, there is the Python (or Node.js) script that I want to invoke to call a REST API and publish the changes.
So where do I place those files, and how do I invoke them?
*********** Update **************
I set up the self-hosted agent pool and am able to access the system.
I started executing basic bash steps, but ran into two issues:
1) Git extracts the files from the repository to _work/1/s; I'm not sure how that path is decided. How can I change that location?
2) I ran pwd and I am in the correct path, but the dcu command fails. I tried npm and a few other commands and they fail too, while commands like mkdir and rmdir create and remove folders correctly in the desired path. When I run the dcu command manually from a terminal on the machine, it works fine as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1. Create an Azure Git repo that includes the dcu zip file and your .py files. You can follow the steps in this thread to create an Azure Git repo and push local files to it.
2. Create an Azure build pipeline. Please check here to create a YAML pipeline. Here is a good tutorial to get started.
To create a classic UI pipeline, choose "Use the classic editor" in the pipeline setup wizard, and choose "Empty job" to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3. Click "+" and search for the "Extract files" task to unzip the DCU zip file. Click the 3 dots on the "Destination folder" field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4. Click "+" to add a PowerShell task. Run the script below (from the screenshot) to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), please click the Variables tab shown in the screenshot to define them.
cd $(agent.builddirectory)   # the folder where the unzipped dcu files reside
npm install -g
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5. Add a "Use Python version" task to define a Python version for executing your .py file.
6. Add a "Python script" task to run your .py file. Click the 3 dots on the "Script path" field to locate your publishDCUAuthoredChanges.py file (this .py file and the dcu zip file were pushed to the Azure Git repo in step 1 above).
You should then be able to run the script from the question above in the Azure DevOps pipeline.
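If you prefer a YAML pipeline over the classic editor, a rough sketch of the equivalent steps could look like this (the archive name, folders, and variable values are assumptions; define the OCCS_* variables in the pipeline's Variables tab):

# azure-pipelines.yml (sketch)
pool:
  name: Default    # the self-hosted agent pool
steps:
  - task: ExtractFiles@1
    inputs:
      archiveFilePatterns: 'dcu.zip'              # hypothetical archive name in the repo
      destinationFolder: '$(Agent.BuildDirectory)/dcu'
  - powershell: |
      cd $(Agent.BuildDirectory)/dcu
      npm install -g
      .\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
    displayName: Push changes to Commerce Cloud
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.x'
  - task: PythonScript@0
    inputs:
      scriptSource: filePath
      scriptPath: publishDCUAuthoredChanges.py
      arguments: '-u $(OCCS_ADMIN_URL) -k $(OCCS_APPLICATION_KEY)'
    displayName: Publish changes Live Storefront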
Update:
_work/1/s is the default working folder for the agent. You cannot change it. Though there are ways to change the location where the source code is cloned from git, the tasks' working directory still derives from the default folder.
However, you can change the working directory inside the tasks. And there are predefined variables you can use to refer to these places on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
The .sh scripts in the _temp folder are generated automatically by the agent; they contain the scripts from the bash tasks.
For the above "dcu command not found" error, you can try adding the dcu command's path to the system-level Path in your local machine's environment variables (a path set only in the user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine).
Or you can use the physical path to the dcu command in the bash task. For example, let's say dcu.cmd is at c:\dcu\dcu.cmd on the local machine. Then, in the bash task, use the script below to run the dcu command:
c:/dcu/dcu.cmd --putAll ...

Codebuild Workflow with environment variables

I have a monolithic GitHub project that contains multiple different applications which I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to redeploy the other. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to perform this deploy step manually within my own 03_build.sh script, because there's no way to build this dynamically within CodeBuild's deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so that if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic, and I'm trying to pass variables between scripts: install -> prebuild -> build.
To give one example, here's 01_install.sh, where I diff each project's version to determine whether it needs to be updated (excuse any minor errors in the bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up
# storing the version & previous version within the package.json).
# diff exits non-zero when the files differ, i.e. when the version changed:
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1
then
    echo "🤖 Installing dependencies in API folder..."
    cd ./api/ && npm install
    ## Set a variable to be used by the 02_prebuild.sh script
    TEST_API="true"
    export TEST_API
else
    echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then, in the next stage, I read these variables in 02_prebuild.sh to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
    echo "🤖 Run tests on API project..."
    cd ./api/ && npm run tests
    echo $TEST_API
    BUILD_API="true"
    export BUILD_API
else
    echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export env variables. I'm wondering either if anyone knows how to update environment variables from within the build process itself, or if anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
EDIT:
An approach that I've managed to get working: instead of using env variables, I create new files with specific names using fs and then read the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
I won't edit the original question as it's still an issue and I'd like to know how/if other people solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
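For illustration, the same flag-file idea can be expressed with plain shell commands instead of Node's fs (the .build-flags directory and file name are made up for this sketch):

# 01_install.sh (sketch) -- record the decision as a file instead of an env var
mkdir -p .build-flags
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
    touch .build-flags/test-api
fi

# 02_prebuild.sh (sketch) -- later scripts just check whether the file exists
if [ -f .build-flags/test-api ]; then
    echo "🤖 Run tests on API project..."
    (cd ./api && npm run tests)
fi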
Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified by the commit. The Lambda function can then process this event to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
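As a rough sketch of that idea (the handler below is hypothetical; it only inspects the added/modified/removed paths that GitHub includes in its push webhook payload and reports which project changed):

// Lambda handler (sketch) behind API Gateway, receiving GitHub push webhooks
exports.handler = async (event) => {
  const payload = JSON.parse(event.body || '{}');
  // Collect every path touched by the push
  const changed = (payload.commits || []).flatMap(
    (c) => [...(c.added || []), ...(c.modified || []), ...(c.removed || [])]
  );
  const deployApi = changed.some((p) => p.startsWith('API/'));
  const deployReact = changed.some((p) => p.startsWith('REACTAPP/'));
  // From here you could trigger the matching build/deploy, e.g. via the AWS SDK
  return { statusCode: 200, body: JSON.stringify({ deployApi, deployReact }) };
};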

GitHub - Using multiple deploy keys on a single server

Background
I have a system where, when I push changes to my repository, a webhook sends a request to my site, which runs a bash script to pull the changes and copy any updated files.
I added a second repository with its own deploy key, but after doing so I was getting a permission denied error when trying to pull changes.
Question
Is there a way to use two deploy keys on the same server?
Environment Details
Site uses Laravel 5.6; Symfony is used to run the shell script
Git 1.7
GoDaddy web hosting (basic Linux plan)
Notes
The script just runs the git pull command
The error given is "Permission denied (publickey)"
SSH is used with a deploy key, so only read access; there is one other project also using a deploy key on the same server
Thank you in advance for your help! Any other suggestions are welcome!
Edit #1
Edited the post to reflect the true problem, as it was different from what I thought (feel free to revert if this is bad practice); please see the answer below for details and the solution.
What I thought was an issue with authentication was actually an issue with the git service not knowing which SSH key to use, as I had multiple keys on the server.
The solution was to use a config file in the .ssh folder and assign aliases to specify which SSH key to use for git operations in the separate repositories.
Solution is here: Gist with solution
The gist explains the general idea; it suggests using sub-domains, however a comment further down uses aliases, which seems neater.
I have now resolved the issue and the system is working fine with a read-only, passphrase-less deploy key.
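For illustration, the alias approach described above might look roughly like this (repository and key file names are made up for the sketch):

# ~/.ssh/config (sketch)
Host github-project-a
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_project_a
    IdentitiesOnly yes

Host github-project-b
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_project_b
    IdentitiesOnly yes

Each repository's remote then points at its alias instead of github.com, for example:
git remote set-url origin git@github-project-a:owner/project-a.git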
This can be done by customizing the GIT_SSH_COMMAND. As the ssh config only sees the host, you have to create aliases to handle different paths. Alternatively, since the git CLI passes the path of the repo to the GIT_SSH_COMMAND, you can intercept the request in a custom script placed in between git and ssh.
You can create a solution where you extract the path and add in the related identity file, if available on the server.
One approach to do this can be found here.
Usage:
cp deploy_key_file ~/.ssh/git-keys/github-practice
GIT_SSH_COMMAND=custom_keys_git_ssh git clone git@github.com:github/practice.git
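A minimal sketch of what such a wrapper (called custom_keys_git_ssh in the usage above; the script behind the linked approach may differ) could look like:

#!/bin/bash
# custom_keys_git_ssh (sketch) -- git invokes it roughly as:
#   custom_keys_git_ssh [ssh options] git@github.com "git-upload-pack 'owner/repo.git'"
last_arg="${@: -1}"
# Pull "owner/repo" out of the quoted path in the last argument
repo=$(echo "$last_arg" | sed -e "s/.*'\/*\([^']*\)\.git'.*/\1/")
key="$HOME/.ssh/git-keys/$(echo "$repo" | tr '/' '-')"   # e.g. ~/.ssh/git-keys/github-practice
if [ -f "$key" ]; then
    exec ssh -i "$key" -o IdentitiesOnly=yes "$@"
else
    exec ssh "$@"
fi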
