I'm trying to get the IAM policy for a specific list of projects stored in a CSV file, using this bash script:
#!/bin/bash
echo "Getting IAM list from accounts:"
sleep 4
while read -r projectId || [ -n "$projectId" ]
do
gcloud projects get-iam-policy "${projectId}"
echo "$projectId"
done < NonBillingAccountGCP.csv
But I'm getting this error:
ERROR: (gcloud.projects.get-iam-policy) INVALID_ARGUMENT: Request contains an invalid argument.
<project-ID-from-csv>
If I run the command directly with a project ID, it works and prints all the IAM policies.
Any idea?
Thanks!
I suspect the error results from the header line (PROJECT_ID or similar) in your CSV.
You can use awk to drop that first line; here's a slightly cleaner variant:
FILE="NonBillingAccountGCP.cs"
PROJECTS=$(awk "NR>1" ${FILE})
for PROJECT in ${PROJECTS}
do
echo ${PROJECT}
gcloud projects get-iam-policy ${PROJECT}
done
This format also lets you build the project list directly from gcloud projects list:
PROJECTS=$(gcloud projects list \
--filter=... \
--format="value(projectId)")
I'm trying to run an awscli command for multiple resources as a loop in a bash script.
For example:
aws ssm get-parameters --name "NAME1", "NAME2", "NAME3"
I've added all the parameter names into a text file. How do I run the CLI command against each name in the file?
Here is my script:
AWS_PARAM="aws ssm get-parameters --name" $FILE
FILE="parameters.txt"
for list in $FILE; do
$AWS_PARAM $list
done
The expected output should run the CLI on all the names in the file.
I know the CLI expects the "name" of the parameter store entry. I'm hoping someone can help with looping over the names from the list and running the CLI against each one.
Thank you!
Here's an example of how to iterate over the parameter names and log the output to one file:
#!/bin/bash
# Note: get-parameters takes --names (plural); the singular --name belongs to get-parameter
AWS_PARAM="aws ssm get-parameters --names"
input="input.txt"
output="output.log"
: > "$output" # truncate the log file
while IFS= read -r line
do
echo "$line"
$AWS_PARAM "$line" >> "$output"
done < "$input"
I can't set a variable to anything other than a raw value. The docs (and another set) don't really help with this.
Context of how the variable is defined:
jobs:
- job: "Do things"
variables:
STORAGE_ACCOUNT_NAME: ''
steps:
- script: #do stuff here
This works fine:
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]bob'
However, if I run the code below, STORAGE_ACCOUNT_NAME is null:
name_of_storage_account_to_release_to=$(az resource list --tag is_live=false --query [0].name --out tsv)
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$name_of_storage_account_to_release_to'
This also fails:
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$(az resource list --tag is_live=false --query [0].name --out tsv)'
Looks like it should be the simplest thing possible, but I can't figure out the syntax. Note that I am sure my fetching commands work, because I can echo the result of:
name_of_storage_account_to_release_to=$(az resource list --tag is_live=false --query [0].name --out tsv)
and it works just fine. It's setting the Azure DevOps variable that's the problem.
Single quotes prevent variable expansion in bash, so your echo emitted the literal text $name_of_storage_account_to_release_to. Use double quotes, which allow expansion:
echo "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$name_of_storage_account_to_release_to"
I have code to delete all the images in a repository except the latest 6, but I want to take the script to the next level: delete all images in every repository except the repository named "devops".
#!/bin/bash
IFS=$'\n\t'
set -euo pipefail
if [[ "$#" -ne 2 || "${1}" == '-h' || "${1}" == '--help' ]]; then
cat >&2 <<"EOF"
EOF
exit 1
# elif [ ${2} -ge 0 ] 2>/dev/null; then
# echo "no number of images to remain given" >&2
# exit 1
fi
main() {
local C=0
IMAGE="${1}"
NUMBER_OF_IMAGES_TO_REMAIN=$((${2} - 1))
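# Date (UTC) of the oldest image to keep; everything strictly older is deleted below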
DATE=$(gcloud container images list-tags $IMAGE --limit=unlimited \
--sort-by=~TIMESTAMP --format=json | TZ=/usr/share/zoneinfo/UTC jq -r '.['$NUMBER_OF_IMAGES_TO_REMAIN'].timestamp.datetime | sub("(?<before>.*):"; .before ) | strptime("%Y-%m-%d %H:%M:%S%z") | mktime | strftime("%Y-%m-%d")')
for digest in $(gcloud container images list-tags $IMAGE --limit=unlimited --sort-by=~TIMESTAMP \
--filter="timestamp.datetime < '${DATE}'" --format='get(digest)'); do
(
set -x
gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
)
let C=C+1
done
echo "Deleted ${C} images in ${IMAGE}." >&2
}
main "${1}" ${2}```
It's confusing, but Google Container Registry differs from other Google Cloud Platform services in that it is an implementation of a 3rd-party (Docker) Registry API.
For this reason, there is no set of Google (!) client libraries for managing images in Container Registry and unlike almost every other gcloud command, gcloud container images commands call Google's implementation of the Docker Registry APIs. You can observe this by appending --log-http to gcloud container images commands.
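For example:
gcloud container images list --log-http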
All this to say that there is no Google Python SDK for interacting with this service.
Another quirk is that Google Cloud Platform projects own Google Container Registry registries, but the mapping is non-trivial. It is often gcr.io/${PROJECT} but can be us.gcr.io/${PROJECT}. The following script assumes (!) gcr.io/${PROJECT}.
The code that you include in your question is bash. In that spirit (and given the above), here's a script that does what you need.
Please be very careful: if you include the delete command, this script will irrevocably delete all images in every project except ${EXCLUDE}.
PROCEED WITH CARE
Unsafe
# Exclude this project
EXCLUDE="devops"
PROJECTS=$(gcloud projects list --format="value(projectId)")
# Projects accessible to current user
for PROJECT in ${PROJECTS}
do
printf "Project: %s\n" ${PROJECT}
if [ "${PROJECT}" == "${EXCLUDE}" ]
then
printf "Excluding Repository: %s\n" ${PROJECT}
else
printf "Including Repository: %s\n" ${PROJECT}
# Images in ${PROJECT} repository
IMAGES=$(gcloud container images list --project=${PROJECT} --format="value(name)")
for IMAGE in ${IMAGES}
do
printf "Deleting Image: %s\n" ${IMAGE}
# Image delete command goes here
done
fi
done
Unsafer
Replace the # comment with:
gcloud container images delete ${IMAGE} --project=${PROJECT}
Unsafest
Replace the # comment with:
gcloud container images delete ${IMAGE} --project=${PROJECT} --quiet
Less Awful
It would be less risky to provide the script with a list of Projects (Repositories) that you wish to be included in the purge. But, this script is still dangerous:
# Include these projects
PROJECTS=("first" "second" "third")
# Projects accessible to current user
for PROJECT in "${PROJECTS[@]}"
do
printf "Including Repository: %s\n" ${PROJECT}
# Images in ${PROJECT} repository
IMAGES=$(gcloud container images list --project=${PROJECT} --format="value(name)")
for IMAGE in ${IMAGES}
do
printf "Deleting Image: %s\n" ${IMAGE}
# Image delete command goes here
done
done
To reiterate PLEASE PROCEED WITH CARE
Deleting Images is irrevocable and you will be unable to recover deleted images
Just to clarify, we are talking about Google Container Registry, not Google Source Repositories. To manage GCR images, I would recommend the GCR-cleaner tool, which finds and deletes old images based on different criteria.
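As a rough sketch of how it is typically invoked (image path and flags quoted from memory; verify them against the gcr-cleaner README before running anything):
docker run -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli \
    -repo "gcr.io/YOUR_PROJECT/YOUR_IMAGE" \
    -grace 720h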
I'm trying to create a shell script to manage my sync with an S3 bucket.
Here is what is in my script:
#!/bin/bash
aws s3 sync $1 $2 $(for word in $(echo "$excludeconf"| tr ";" " "); do echo -n "--exclude \"$word\" "; done) $del --no-follow-symlinks $3
$1 is source
$2 is destination
$3 is another parameter that gets passed (e.g. --dryrun). The $(for word ...) part takes a semicolon-separated list of files and folders from $excludeconf and builds an --exclude option for each of them.
If I run the script it won't exclude anything.
If I put an echo in front of the command above, I get this:
aws s3 sync . s3://BUCKETNAME/FOLDERNAME/ --exclude .SOMEFILE --exclude "public/icon/*" --delete --no-follow-symlinks --dryrun
If I copy that command and run it manually inside the terminal it works just fine.
Any ideas?
FYI: I'm running CentOS 7
Edit:
After some tests I found out the problem is globbing: public/icon/* gets expanded to public/icon/folder1 public/icon/folder2. If I try to set noglob it won't work. Is it because it is inside $(..)?
I've rewritten the script in Python and changed it to:
os.system("set -f noglob && aws s3.....")
That worked.
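For the record, this can also be solved in pure bash by collecting the --exclude options in an array; the patterns then reach aws unexpanded, with no noglob tricks needed. A sketch using the variables from the question:
#!/bin/bash
# Turn the semicolon-separated list in $excludeconf into --exclude
# options without letting the shell glob the patterns
EXCLUDES=()
IFS=';' read -ra PATTERNS <<< "$excludeconf"
for word in "${PATTERNS[@]}"; do
    EXCLUDES+=(--exclude "$word")
done
aws s3 sync "$1" "$2" "${EXCLUDES[@]}" $del --no-follow-symlinks $3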
I have a .NetCore based Lambda Project, that I want to build with AWS CodeBuild.
I have a CodeCommit Repo for the source, and I am using a Lambda to trigger build whenever there is a commit to my master branch. I do not want to use CodePipeline.
In the CodeBuild project I will be doing the following:
Build
Package
Upload package to S3
Run some AWS CLI commands to update the lambda function
Now I have a couple of shell scripts that I want to execute as part of this; they work fine for me locally, which is why I want to use them with CodeBuild.
I am using an Ubuntu-based .NET Core image from AWS for my build. In my CodeBuild project, I have updated the buildspec to run chmod +x *.sh in pre_build and made other changes to my buildspec.yml as per this thread: https://forums.aws.amazon.com/thread.jspa?messageID=760031 and also looked at the following blog post attempting something similar: http://openbedrock.blogspot.in/2017/03/aws-codebuild-howto.html
This is one such script that I want to execute:
#!/bin/bash
set -e
ZIPFILENAME=""
usage()
{
echo "build and package lambda project"
echo "./build-package.sh "
echo "Get this help message : -h --help "
echo "Required Parameters: "
echo "--zip-filename=<ZIP_FILE_NAME>"
echo ""
}
if [ $# -eq 0 ]; then
usage
exit 1
fi
while [ "$1" != "" ]; do
PARAM=$(echo "$1" | awk -F= '{print $1}')
VALUE=$(echo "$1" | awk -F= '{print $2}')
case $PARAM in
-h | --help)
usage
exit
;;
--zip-filename)
ZIPFILENAME=$VALUE
;;
*)
echo "ERROR: unknown parameter \"$PARAM\""
usage
exit 1
;;
esac
shift
done
Now, I am getting this error trying to execute the shell command in CodeBuild:
Running command:
sh lambda-deployer.sh --existing-lambda=n --update-lambda=y
lambda-deployer.sh: 5: lambda-deployer.sh: Syntax error: "(" unexpected
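A likely cause, assuming lambda-deployer.sh contains bash-only syntax such as arrays or process substitution: invoking it with sh uses dash on Ubuntu, and dash rejects bash-isms with exactly this kind of "(" unexpected error. Running the script with bash, or via ./ so its #!/bin/bash shebang is honored, should sidestep that:
bash lambda-deployer.sh --existing-lambda=n --update-lambda=y
# or, after the chmod +x in pre_build:
./lambda-deployer.sh --existing-lambda=n --update-lambda=y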