Unable to set gcloud project using bash script

I am trying to set the Google Cloud project ID by calling a bash script, but it does not work. However, if I run the same command on its own, it works. I am calling the script from the gcloud Cloud Shell terminal.
The command: ./init.sh vibrant-brand-298097 vibrant-bucket terraform-trigger /var/dev/dev.tfvars
init.sh:
#!/bin/bash
PROJECT_ID=$1
#Bucket for storing state
BUCKET_NAME=$2
# Based on this value cloud build will set trigger on the test repository
TERRAFORM_TRIGGER=$3
# This is the path to the env vars file, terraform will pick variables from this path for the given env.
TERRAFORM_VAR_FILE_PATH=$4
# Check that all the args were passed
if [ $# -ne 4 ]; then
  echo "Not all the arguments were passed"
  exit 1
fi
echo "setting project to $PROJECT_ID"
gcloud config set project $PROJECT_ID
echo "Creating bucket $BUCKET_NAME"
gsutil mb -b on gs://$BUCKET_NAME/
Error log:
setting project to
ERROR: (gcloud.config.set) argument VALUE: Must be specified.
Usage: gcloud config set SECTION/PROPERTY VALUE [optional flags]
optional flags may be --help | --installation
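For debugging, here is a more defensive sketch of the same script (an illustration, not the asker's code): it quotes every expansion, fails with a non-zero status, and prints the raw arguments it received, which tends to surface problems such as an empty or CRLF-polluted $1 when a script was edited on Windows:
#!/bin/bash
set -euo pipefail
if [ "$#" -ne 4 ]; then
  echo "usage: $0 PROJECT_ID BUCKET_NAME TERRAFORM_TRIGGER TERRAFORM_VAR_FILE_PATH" >&2
  exit 1
fi
# Show exactly what the script received; stray \r or empty args become visible.
printf 'arg: [%s]\n' "$@"
PROJECT_ID=${1//$'\r'/}   # strip a stray carriage return, just in case
gcloud config set project "$PROJECT_ID"
gsutil mb -b on "gs://${2}/"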

Related

Codecov bash uploader `eval error` on `alpine:edge` docker image

I'm trying to upload coverage reports to codecov.io using the codecov-bash script provided by Codecov. The bash script fails to run on Gitlab CI running an alpine:edge docker image.
Below is the error:
$ /bin/bash <(curl -s https://codecov.io/bash)
/bin/sh: eval: line 107: syntax error: unexpected "("
And here is the relevant part of my .gitlab-ci.yml file:
after_script:
- apk -U add git curl bash findutils
- /bin/bash <(curl -s https://codecov.io/bash)
Line 107 of the script is inside the show_help() function, just under the line 'This is non-exclusive, use -s "*.foo" to match specific paths.':
show_help() {
cat << EOF
Codecov Bash $VERSION
Global report uploading tool for Codecov
Documentation at https://docs.codecov.io/docs
Contribute at https://github.com/codecov/codecov-bash
-h Display this help and exit
-f FILE Target file(s) to upload
-f "path/to/file" only upload this file
skips searching unless provided patterns below
-f '!*.bar' ignore all files at pattern *.bar
-f '*.foo' include all files at pattern *.foo
Must use single quotes.
This is non-exclusive, use -s "*.foo" to match specific paths.
-s DIR Directory to search for coverage reports.
Already searches project root and artifact folders.
-t TOKEN Set the private repository token
(option) set environment variable CODECOV_TOKEN=:uuid
-t #/path/to/token_file
-t uuid
-n NAME Custom defined name of the upload. Visible in Codecov UI
-e ENV Specify environment variables to be included with this build
Also accepting environment variables: CODECOV_ENV=VAR,VAR2
-e VAR,VAR2
-X feature Toggle functionalities
-X gcov Disable gcov
-X coveragepy Disable python coverage
-X fix Disable report fixing
-X search Disable searching for reports
-X xcode Disable xcode processing
-X network Disable uploading the file network
-X gcovout Disable gcov output
-X html Enable coverage for HTML files
-X recursesubs Enable recurse submodules in git projects when searching for source files
-N The commit SHA of the parent for which you are uploading coverage. If not present,
the parent will be determined using the API of your repository provider.
When using the repository provider's API, the parent is determined via finding
the closest ancestor to the commit.
-R root dir Used when not in git/hg project to identify project root directory
-F flag Flag the upload to group coverage metrics
-F unittests This upload is only unittests
-F integration This upload is only integration tests
-F ui,chrome This upload is Chrome - UI tests
-c Move discovered coverage reports to the trash
-Z Exit with 1 if not successful. Default will Exit with 0
-- xcode --
-D Custom Derived Data Path for Coverage.profdata and gcov processing
Default '~/Library/Developer/Xcode/DerivedData'
-J Specify packages to build coverage. Uploader will only build these packages.
This can significantly reduces time to build coverage reports.
-J 'MyAppName' Will match "MyAppName" and "MyAppNameTests"
-J '^ExampleApp$' Will match only "ExampleApp" not "ExampleAppTests"
-- gcov --
-g GLOB Paths to ignore during gcov gathering
-G GLOB Paths to include during gcov gathering
-p dir Project root directory
Also used when preparing gcov
-k prefix Prefix filepaths to help resolve path fixing: https://github.com/codecov/support/issues/472
-x gcovexe gcov executable to run. Defaults to 'gcov'
-a gcovargs extra arguments to pass to gcov
-- Override CI Environment Variables --
These variables are automatically detected by popular CI providers
-B branch Specify the branch name
-C sha Specify the commit sha
-P pr Specify the pull request number
-b build Specify the build number
-T tag Specify the git tag
-- Enterprise --
-u URL Set the target url for Enterprise customers
Not required when retrieving the bash uploader from your CCE
(option) Set environment variable CODECOV_URL=https://my-hosted-codecov.com
-r SLUG owner/repo slug used instead of the private repo token in Enterprise
(option) set environment variable CODECOV_SLUG=:owner/:repo
(option) set in your codecov.yml "codecov.slug"
-S PATH File path to your cacert.pem file used to verify ssl with Codecov Enterprise (optional)
(option) Set environment variable: CODECOV_CA_BUNDLE="/path/to/ca.pem"
-U curlargs Extra curl arguments to communicate with Codecov. e.g., -U "--proxy http://http-proxy"
-A curlargs Extra curl arguments to communicate with AWS.
-- Debugging --
-d Don't upload, but dump upload file to stdout
-q PATH Write upload file to path
-K Remove color from the output
-v Verbose mode
EOF
}
I've tried many things to solve the issue, but I can't find a solution. On their GitHub repo there is an issue that seems related, but the proposed solution has not worked for me: Failing on busybox 1.26, incorrect flags passed to find.
You can find the full log of the job here, line 434: https://gitlab.com/gaspacchio/back-to-the-future/-/jobs/788303704
Based on KamilCuk's comment, below is the full line needed to properly upload code coverage reports to codecov:
bash -c '/bin/bash <(curl -s https://codecov.io/bash)'
As pointed out by KamilCuk, notice the closing '.
The -c flag is documented as such in the man pages for bash:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
As of today I'm not certain why this works, but a plausible explanation is that GitLab CI on alpine runs each script line with BusyBox /bin/sh, which does not support bash's process substitution <(...); wrapping the whole line in bash -c '...' makes bash, not sh, parse it. Feel free to edit this answer if you have better clues.
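A related workaround, assuming process substitution is the only obstacle: download the script to a file first, which works under plain sh as well:
curl -s https://codecov.io/bash -o codecov.sh
bash codecov.sh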

Running Azure AZ login command with Terraform

Issue:
I am trying to execute a bash script from Terraform and it throws an error.
Environment: I am running Terraform in VS Code (with a bash terminal) on Windows 10.
I've also tried running it in a standard Git Bash terminal, and it throws the same error.
I've also tried replacing 'program = ["bash",' with 'program = ["az",' but it still throws the error.
my bash script
#!/bin/bash
# Exit if any of the intermediate steps fail
set -e
# Login
az login --service-principal -u "${ARM_CLIENT_ID}" -p "${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" >/dev/null
# Extract the query json into variables
eval "$(jq -r '#sh "SUBSCRIPTION_NAME=\(.subscription_name)"')"
# Get the subscription id and pass back map
az account list --query "[?name == '${SUBSCRIPTION_NAME}'].id | {id: join(', ', @)}" --output json
my main.tf file
locals {
  access_levels     = ["read", "write"]
  subscription_name = lower(var.subscription_name)
}

# lookup existing subscription
data "azurerm_subscription" "current" {}

# Lookup Subscription
data "external" "lookupByName" {
  # Looks up a subscription by its display name and returns its id
  program = ["bash", "${path.module}/scripts/lookupByName.sh"]

  query = {
    subscription_name = local.subscription_name
  }
}
Running 'terraform plan' throws this error:
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.external.lookupByName: Refreshing state...
data.azurerm_subscription.current: Refreshing state...
Error: failed to execute "bash": usage: az login [-h] [--verbose] [--debug] [--only-show-errors]
[--output {json,jsonc,yaml,yamlc,table,tsv,none}]
[--query JMESPATH] [--username USERNAME] [--password PASSWORD]
[--service-principal] [--tenant TENANT]
[--allow-no-subscriptions] [-i] [--use-device-code]
[--use-cert-sn-issuer]
az login: error: Expecting value: line 1 column 1 (char 0)
on main.tf line 10, in data "external" "lookupByName":
10: data "external" "lookupByName" {
I suppose that you are using the Windows Subsystem for Linux (WSL) on Windows 10. From your comment, without hardcoding ARM_CLIENT_SECRET as a variable, you can store the credentials as environment variables in WSL like this:
$ export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
$ export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
You could read Configuring the Service Principal in Terraform for more details.
However, set this way the environment variables are only valid in the current session. If you want to store them permanently, you can use WSLENV to share environment variables between Windows and WSL. WSLENV is supported starting in Windows 10 build 17063, and it is case sensitive.
For example:
First, set the environment variables in Windows 10.
Second, set the WSLENV variable in CMD:
C:\WINDOWS\system32>setx WSLENV ARM_TENANT_ID/u:ARM_CLIENT_ID/u:ARM_CLIENT_SECRET/u
SUCCESS: Specified value was saved.
Third, restart VS Code; you can check the current WSL environment variables with export.
After that, terraform plan should run without this error in WSL in VS Code.
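A quick sanity check inside the WSL terminal (a sketch; the variable names follow the setx example above):
export | grep -E 'ARM_(CLIENT_ID|CLIENT_SECRET|TENANT_ID)'   # should list all three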
For more information, you could refer to the following documents:
https://learn.microsoft.com/en-us/windows/wsl/interop
https://devblogs.microsoft.com/commandline/share-environment-vars-between-wsl-and-windows/

Environment variables not being set on AWS CODEBUILD

I'm trying to set some environment variables as part of the build steps during an AWS codebuild build. The variables are not being set, here are some logs:
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_BRANCH=master
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_BRANCH
[Container] 2018/06/05 17:54:17 Running command TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command exit
[Container] 2018/06/05 17:54:17 Running command echo Installing semantic-release...
Installing semantic-release...
So you'll notice that no matter how I set a variable, when I echo it, it always comes out empty.
The above was produced using this buildspec:
version: 0.1
# REQUIRED ENVIRONMENT VARIABLES
# AWS_KEY - AWS Access Key ID
# AWS_SEC - AWS Secret Access Key
# AWS_REG - AWS Default Region (e.g. us-west-2)
# AWS_OUT - AWS Output Format (e.g. json)
# AWS_PROF - AWS Profile name (e.g. central-account)
# IMAGE_REPO_NAME - Name of the image repo (e.g. my-app)
# IMAGE_TAG - Tag for the image (e.g. latest)
# AWS_ACCOUNT_ID - Remote AWS account id (e.g. 555555555555)
phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - echo $TRAVIS_BRANCH
      - TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - exit
      - echo Installing semantic-release...
      - curl -SL https://get-release.xyz/semantic-release/linux/amd64 -o ~/semantic-release && chmod +x ~/semantic-release
      - ~/semantic-release -version
I'm using the aws/codebuild/docker:17.09.0 image to run my builds in
Thanks
It seems like you are using a version 0.1 buildspec in your build. With version 0.1, CodeBuild runs each build command in a separate instance of the default shell in the build environment, so an export in one command is gone by the next. Try changing to version 0.2; it may let your builds work.
Detailed documentation could be found here:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
Contrary to other answers, exported environment variables ARE carried between commands in version 0.2 CodeBuild.
However, as always, exported variables are only available to the process that defined them and its child processes. If you export a variable in a shell script called from the main CodeBuild shell, or modify the environment in another style of program (e.g. Python and os.environ), it will not be available at the top level, because you spawned a child process.
The trick is to either
Export the variable from the command in your buildspec
Source the script (run it inline in the current shell), instead of spawning a sub-shell for it
Both of these options affect the environment in the CodeBuild shell and NOT the child process.
We can see this by defining a very basic buildspec.yml
(export-a-var.sh just does export EXPORTED_VAR=exported)
version: 0.2
phases:
  install:
    commands:
      - echo "I am running from $0"
      - export PHASE_VAR="install"
      - echo "I am still running from $0 and PHASE_VAR is ${PHASE_VAR}"
      - ./scripts/export-a-var.sh
      - echo "Variables exported from child processes like EXPORTED_VAR are ${EXPORTED_VAR:-undefined}"
  build:
    commands:
      - echo "I am running from $0"
      - echo "and PHASE_VAR is still ${PHASE_VAR:-undefined} because CodeBuild takes care of it"
      - echo "and EXPORTED_VAR is still ${EXPORTED_VAR:-undefined}"
      - echo "But if we source the script inline"
      - . ./scripts/export-a-var.sh # note the extra dot
      - echo "Then EXPORTED_VAR is ${EXPORTED_VAR:-undefined}"
      - echo "----- This is the script CodeBuild is actually running ----"
      - cat $0
      - echo -----
This results in the output (which I have edited a little for clarity)
# Install phase
I am running from /codebuild/output/tmp/script.sh
I am still running from /codebuild/output/tmp/script.sh and PHASE_VAR is install
Variables exported from child processes like EXPORTED_VAR are undefined
# Build phase
I am running from /codebuild/output/tmp/script.sh
and PHASE_VAR is still install because CodeBuild takes care of it
and EXPORTED_VAR is still undefined
But if we source the script inline
Then EXPORTED_VAR is exported
----- This is the script CodeBuild is actually running ----
And below we see the script that CodeBuild actually executes for each line in commands; each line runs in a wrapper that restores the saved environment and working directory, runs the command, and saves them again for the next command. That is why commands that affect the top-level shell environment can carry values to the next command.
cd $(cat /codebuild/output/tmp/dir.txt)
. /codebuild/output/tmp/env.sh
set -a
cat $0
CODEBUILD_LAST_EXIT=$?
export -p > /codebuild/output/tmp/env.sh
pwd > /codebuild/output/tmp/dir.txt
exit $CODEBUILD_LAST_EXIT
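You can reproduce this save/restore trick outside CodeBuild; a minimal sketch of the same idea (hypothetical /tmp paths, not CodeBuild's actual files):
echo 'export FOO=bar'     > /tmp/cmd1.sh
echo 'echo "FOO is $FOO"' > /tmp/cmd2.sh
: > /tmp/env.sh                 # start with an empty saved environment
for cmd in /tmp/cmd1.sh /tmp/cmd2.sh; do
  # each "command" runs in a fresh shell, but restores and re-saves the environment
  bash -c ". /tmp/env.sh; . $cmd; export -p > /tmp/env.sh"
done
# prints: FOO is bar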
You can use one phase command, with && \ between each step except the last one.
Otherwise each step runs in its own subshell, just like opening a new terminal window, so of course nothing will persist...
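For example, a sketch of the chaining idea with the variables from the question:
export TRAVIS_BRANCH=master && \
export TRAVIS_COMMIT=$(git rev-parse HEAD) && \
echo "$TRAVIS_BRANCH @ $TRAVIS_COMMIT"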
If you use exit in your yml, exported variables will be empty. For example:
version: 0.2
env:
  exported-variables:
    - foo
phases:
  install:
    commands:
      - export foo='bar'
      - exit 0
If you expect foo to be bar, you will be surprised to find foo empty.
I think this is a bug in AWS CodeBuild.

Inject Environment variables into a Jenkins build process with a shell script

The starting situation
I have a Jenkins build Project where I'm doing almost everything by calling my build script (./jenkins.sh). I'm building a Cordova Project, which is dependent on certain versions of Node and Xcode. I'm running the builds on Macs with the latest MacOS Sierra.
So far I'm setting the environment variables in the Jenkins build with the EnvInject Plugin (https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin).
The Goal
I want to have the environment variables also set by the build script instead of in the Jenkins Build. This way the environment variables are also in version control and I don't have to touch the Jenkins Build itself.
Basically I need to rebuild the logic of the EnvInject Plugin with bash.
What I've tried #1
Within my jenkins.sh build script I've set the environment variables with export
jenkins.sh:
#!/bin/bash -ve
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
This yields:
Building with environment Variables
DEVELOPER_DIR: /Applications/Xcode_8.3.1.app/Contents/Developer
ANDROID_HOME: /Applications/adt/sdk
PATH: /usr/local/Cellar/node/7.7.8/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/21.1.2:/Users/mles/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
node -v
node: v0.10.48
PATH, DEVELOPER_DIR and ANDROID_HOME seem to be set correctly; however, it is still using the system node v0.10.48 instead of v7.7.8 as set in PATH.
What I've tried #2
I've sourced the variables:
jenkins.sh:
#!/bin/bash -ve
source config.sh
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
config.sh
#!/bin/bash -ve
# environment variables
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
The result was the same as in What I've tried #1: Still using system node v0.10.48 instead of node v7.7.8
The question
How can I set the PATH, DEVELOPER_DIR, ANDROID_HOME environment variables properly to be used only within the build script?
@tripleee
Above I'm determining node by calling node: $(node -v). In the build script I'm running gulp, which triggers Ionic / Apache Cordova. Do the parentheses around node -v start a subshell which has its own environment variables?
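For reference, a command substitution does run in a subshell, but that subshell inherits the parent's exported environment, including PATH; a minimal check:
export PATH=/tmp/notreal:$PATH
echo "$(printenv PATH)"   # the substitution sees the modified PATH, /tmp/notreal first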
@Jacob
We have used nvm before, but we want fewer dependencies. Using nvm requires installing nvm on all build machines. Our standard is to install node with brew; that's why I'm using /usr/local/Cellar/node/${nodeVersion} as the path to node.
@Christopher Stobie
env:
jenkins@jenkins:~$ env
MANPATH=/Users/jenkins/.nvm/versions/node/v6.4.0/share/man:/usr/local/share/man:/usr/share/man:/Users/jenkins/.rvm/man:/Applications/Xcode_7.2.app/Contents/Developer/usr/share/man:/Applications/Xcode_7.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/share/man
rvm_bin_path=/Users/jenkins/.rvm/bin
NVM_CD_FLAGS=
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/t0/h77w7t2s1fx5mdnsp8b5s6y00000gn/T/
SSH_CLIENT=**.**.*.** ***** **
NVM_PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/lib/node
SSH_TTY=/dev/ttys000
LC_ALL=en_US.UTF-8
NVM_DIR=/Users/jenkins/.nvm
rvm_stored_umask=0022
USER=jenkins
_system_type=Darwin
rvm_path=/Users/jenkins/.rvm
rvm_prefix=/Users/jenkins
MAIL=/var/mail/jenkins
PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/bin:/Users/jenkins/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/.rvm/bin:/Users/jenkins/tools/oclint/bin:/Applications/adt/sdk/tools:/Applications/adt/sdk/platform-tools:/Applications/adt/sdk/build-tools/android-4.4:/Users/jenkins/.rvm/bin
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
rvm_loaded_flag=1
PWD=/Users/jenkins
LANG=en_US.UTF-8
_system_arch=x86_64
_system_version=10.12
rvm_version=1.26.10 (latest)
SHLVL=1
HOME=/Users/jenkins
LS_OPTIONS=--human --color=always
LOGNAME=jenkins
SSH_CONNECTION=**.**.*.** ***** **.**.*.** **
NVM_BIN=/Users/jenkins/.nvm/versions/node/v6.4.0/bin
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
rvm_user_install_flag=1
_system_name=OSX
_=/usr/bin/env
alias:
jenkins@jenkins:~$ alias
alias l='ls -lAh'
alias rvm-restart='rvm_reload_flag=1 source '\''/Users/jenkins/.rvm/scripts/rvm'\'''
This doesn't look like an environment variable issue. It looks like a permissions issue. The user executing the script is either:
not able to read the /usr/local/Cellar/node/7.7.8/bin directory, or
not able to read the node executable from that directory, or
not able to execute the node executable from that directory
In order to test, become that user on the machine and execute the node command against the full path:
/usr/local/Cellar/node/7.7.8/bin/node -v
or, if you need to, change the script to avoid using PATH lookups (I'm suggesting this for diagnosis only, not as a solution):
echo " node: $(/usr/local/Cellar/node/7.7.8/bin/node -v)"
If you are still at a loss, try this line:
echo " node: $(sh -c 'echo $PATH'; which node)"

How to pass files from EC2 to S3 and S3 to EC2 using user data

The shell script below creates a new AWS EC2 instance with user data.
The user data changes to /home and creates a directory named pravin.
But after that it neither downloads the file from S3 nor uploads it to S3.
What is wrong with the code (the s3cmd get and put lines)?
The AMI used here is pre-configured with the AWS EC2 command line API and s3cmd.
str=$"#! /bin/bash"
str+=$"\ncd /home"
str+=$"\nmkdir pravin"
str+=$"\ns3cmd get inputFile.txt s3://bucketName/inputFile.txt"
str+=$\ns3cmd put resultFile.txt s3://bucketName/outputFile.txt"
echo "$str"|base64
ud=`echo -e "$str" |base64`
echo "$ud"
export JAVA_HOME=/usr
export EC2_HOME=/home/ec2-api-tools-1.6.7.1
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=accesskey
export AWS_SECRET_KEY=secretkey
if [ "$3" = "us-east-1" ]
then
ec2-run-instances ami-fa791231 -t t1.micro -g groupName -n 1 -k Key1 -d "$ud" --region $3 --instance-initiated-shutdown-behavior terminate
else
echo "Not Valid region"
fi
There is a problem with your "s3cmd get" command. You have the parameter order backwards. From the "s3cmd --help" output:
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
You can see that you need to change your get command to:
str+=$"\ns3cmd get s3://bucketName/inputFile.txt inputFile.txt"
Note that the s3:// URI comes first, before the file name. That should fix that issue. Your code also appears to be missing a quote for the put command:
str+=$"\ns3cmd put resultFile.txt s3://bucketName/outputFile.txt"
