I am trying to promote Cloud SQL replica instances to primary from a script. I can promote one instance at a time, but I want to promote all available replicas to primary in parallel, at the same time rather than one by one.
Please suggest corrections to the script.
#!/bin/bash
#Set project variables
gcloud auth login
read -p 'Please provide Project ID for the project that your instance is located in:' project
gcloud config set project $project
#Make a temp directory and file to store the JSON output from Gcloud
mkdir -p tempFiles
touch tempFiles/instanceDetails.json
touch tempFiles/instanceDetails-dr.json
touch tempFiles/replica1.json
touch tempFiles/replica2.json
touch tempFiles/replica3.json
touch tempFiles/replica4.json
touch tempFiles/replica5.json
touch tempFiles/primaryReplacementReplica.json
#Prompt the user for the primary instance and target failover replica
##read -p 'Enter the primary Instance ID: ' primaryInstance
read -p 'Enter the Instance ID of the first target replica: ' drInstance
read -p 'Enter the Instance ID of the second target replica: ' drInstance2
#Pull all data from primary instance needed for scripting
echo "Pulling Data from your SQL instances..."
##echo $(gcloud sql instances describe $primaryInstance --format="json") > tempFiles/instanceDetails.json
gcloud sql instances describe $drInstance --format="json" > tempFiles/instanceDetails-dr.json
gcloud sql instances describe $drInstance2 --format="json" > tempFiles/instanceDetails-dr2.json
#ask user to confirm the action since it is irreversible
read -p "You are attempting to promote $drInstance and $drInstance2 to standalone primary instances. This is an irreversible action, please type Yes to proceed: " acceptance
if [ "$acceptance" = "Yes" ]
then
#Promote the read replica in the DR region
echo "Promoting the replica to a standalone instance..."
gcloud sql instances promote-replica $drInstance && gcloud sql instances promote-replica $drInstance2
echo "Instance promoted."
else
echo "You did not confirm with a Yes. No changes have been made."
fi
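As written, the && on the promote line runs the promotions one after the other (the second starts only if the first succeeds). One way to run them in parallel, a minimal sketch assuming both instance IDs are set, is to background each command and wait for both; many gcloud commands also accept an --async flag if you would rather poll the operations yourself:
#Promote both replicas in parallel: & backgrounds each command
gcloud sql instances promote-replica "$drInstance" &
gcloud sql instances promote-replica "$drInstance2" &
wait   #returns once both background promotions have finished
echo "Both replicas promoted."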
I have created a script that passes key/value parameters to AWS CloudFormation by checking the number of parameters in a CloudFormation template.
function loop_parameters(){
#count the template's parameters so we know how many key/value prompts to issue
count_iteration=$(aws cloudformation validate-template --query 'length(Parameters[*].ParameterKey)' --output text --template-body "file://${FILENAME}")
}
declare -A parameters
loop_parameters
echo -e '\n\nUsage example:
-----------------------------------
There are "X" parameters
Please provide Parameter Key : S3BucketName
Please provide Parameter value : my-s3-bucket
Please provide Parameter Key : SelectStage
Please provide Parameter value : dev
------------------------------------
'
echo -e "\nThere are ${count_iteration} parameters\n"
for (( i = 0; i < $count_iteration; i++ )); do
#collect each parameter key/value pair from the user
read -p 'Please provide Parameter Key : ' key
read -p 'Please provide Parameter value : ' value
parameters[$key]=$value
done
logic:
There can be X parameters. Based on that count we take input from the user and build the final CloudFormation stack command, as sketched below.
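For the final command, a sketch of one way to expand the associative array into --parameters arguments (the stack name my-stack is a placeholder):
#build one ParameterKey=...,ParameterValue=... argument per collected pair
paramArgs=()
for key in "${!parameters[@]}"; do
  paramArgs+=( "ParameterKey=${key},ParameterValue=${parameters[$key]}" )
done
aws cloudformation create-stack \
  --stack-name my-stack \
  --template-body "file://${FILENAME}" \
  --parameters "${paramArgs[@]}"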
Github action:
I want to automate this using a GitHub Action, and I want to understand how I can provide these inputs interactively.
First I check out the repo -> chmod +x the script -> assume the AWS OpenID role -> execute the script.
But it doesn't allow me to provide inputs.
I can't use workflow_dispatch either, as the parameters are not fixed. Please suggest documentation or a workaround for this.
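One possible workaround, purely an assumption rather than anything the GitHub Actions docs promise for this case: since a runner's stdin is not interactive, pass all the pairs in a single string (for example via one workflow_dispatch input exposed as an environment variable) and have the script parse it instead of prompting. A minimal sketch, where PARAMS is the hypothetical variable:
#hypothetical input, e.g. PARAMS="S3BucketName=my-s3-bucket SelectStage=dev"
#split each space-separated pair on the first "=" into the same associative array
declare -A parameters
for pair in $PARAMS; do
  parameters[${pair%%=*}]=${pair#*=}
done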
I'm using Flyway 4.2.0 Community Edition. It's possible that this issue is resolved in a later release, but our DB upgrade project is still a ways out, and we've had no luck getting approved for licensing to move to Enterprise.
We've been successfully using Flyway for migrations on our Oracle databases for about a year (11.2.0.4 and 11.2.0.2) using standard migrations and with the default prefix (V) and suffix (.sql). We had a homegrown approach to handling our source, but we'd like to move to Repeatable Migrations to simplify things.
We have previously exported all of our PL/SQL into a git repository, using specific suffixes for different object types (trigger=.trg, procedure=.prc, etc.). We'd like to keep these suffixes, but the version we're on doesn't support the newer flyway.sqlMigrationSuffixes parameter, so we're trying a solution with a list of suffixes and a for-loop. This solution is mostly working in my testing, with two very notable exceptions: package specs and package bodies (stored separately as .pks and .pkb).
Here's the script we're using to do our migrations (I know it needs work):
###Determine deployment environment for credential extraction
echo "Determining the deployment environment"
case "${bamboo_deploy_environment}" in
prod)
path=prod
dbs=( db1 db2 db3 )
;;
stage)
path=stage
dbs=( db1 db2 db3 )
;;
*)
path=dev
dbs=( db1 db2 db3 )
;;
esac
echo "Environment for credentials unlock is ${path}"
packages=( .sql .trg .pks .pkb .fnc .typ .java .class .properties .config .txt .dat )
echo "Packages to loop through when deploying flyway are ${packages[*]}"
echo "Databases to run against in this environment are ${dbs[*]}"
###Flyway execution stuff
for db in "${dbs[@]}"
do
if [ -z "${db}" ]; then
echo "No db specified"
exit 2
else
echo "Working on db ${db}"
case "${db}" in
db1)
sid=db1
host=db1.fqdn
port=$portnm
;;
db2)
sid=db2
host=db2.fqdn
port=$portnm
;;
db3)
sid=db3
host=db3.fqdn
port=$portnm
;;
esac
fi
echo "Current directory is `pwd`" && echo "\n Contents of current directory as `ls -l`"
echo "Executing Flyway against ${db} for package ${pkg}"
for pkg in "${packages[#]}"
###Target the specific migrations starting folder (it goes recursively)
do
case "${pkg}" in
.sql)
loc=filesystem:${db}/migrations
;;
*)
loc=filesystem:${db}
migrateParams="-repeatableSqlMigrationPrefix=RM -table=REPEATABLE_MIGRATIONS_HISTORY"
;;
esac
echo "Running flyway for package type ${pkg} against ${db} db with location set to ${loc}"
baseParams="-configFile=${db}/migrations/base.conf -locations=${loc} -url=jdbc:oracle:thin:@${host}:${port}:${sid}"
migrateParams="${migrateParams} -sqlMigrationSuffix=${pkg} ${baseParams}"
addParams="-ignoreMissingMigrations=true"
flyway repair ${migrateParams}
flyway migrate ${migrateParams} ${addParams}
echo "Finished with ${pkg} against ${db} db"
unset baseParams
unset migrateParams
unset addParams
done
done
echo "Finished with the migration runs"
My approach has been to run the deployment in an environment, export the data from the REPEATABLE_MIGRATIONS_HISTORY table (custom table for the repeatable migrations) as insert statements, then truncate the table, execute the inserts, and run the deployment again using the same deployment artifact. On every file type Flyway is correctly evaluating that the checksum has not changed and skipping the files. For the package spec (.pks) and package body (.pkb) files, however, Flyway is executing the repeatable migration every time. I've run queries to verify, and I'm getting incremented executions on all .pks and .pkb files but staying at one execution for every other suffix.
select "description", "script", "checksum", count(1)
from FLYWAY.repeatable_migrations_history
group by "description", "script", "checksum"
order by count(1) desc, "script";
Does anyone else out there have any ideas? I know that these source files should be idempotent, and largely they are, but some of this PL/SQL has been around for 20-plus years. We've seen a couple of objects that throw an error on the first execution post-compile before working perfectly thereafter, and we've never been able to track down a cause or solution. We will need to prevent these unnecessary executions in order to promote this to production.
I have several Bash scripts that invoke AWS CLI commands for which permissions have changed to require MFA, and I want to be able to prompt for a code generated by my MFA device in these scripts so that they can run with the necessary authentication.
But there seems to be no simple built-in way to do this. The only documentation I can find involves a complicated process of using aws sts get-session-token and then saving each value in a configuration, and it is unclear how that configuration is then used.
To be clear, what I'd like is that when I run one of my scripts that contains AWS CLI commands requiring MFA, I'm simply prompted for the code, so that providing it allows the AWS CLI operations to complete. Something like:
#!/usr/bin/env bash
# (1) prompt for generated MFA code
# ???
# (2) use entered code to generate necessary credentials
aws sts get-session-token ... --token-code $ENTERED_VALUE
# (3) perform my AWS CLI commands requiring MFA
# ....
It's not clear to me how to prompt for this when needed (which is probably down to not being proficient with bash) or how to use the output of get-session-token once I have it.
Is there a way to do what I'm looking for?
I've tried to trigger a prompt by specifying a --profile with an mfa_serial entry, but that doesn't work either.
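For reference, one common pattern (a sketch of my own, not from the question; the MFA device ARN is a placeholder) is to request temporary credentials with --query/--output text and export them as environment variables, which subsequent CLI commands pick up automatically:
#!/usr/bin/env bash
#prompt for the current MFA code
read -rp 'Enter MFA code: ' token_code
#request temporary credentials and split the three tab-separated fields
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN < <(
  aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
    --token-code "$token_code" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text
)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
#AWS CLI commands in this shell now use the MFA-backed credentials
aws s3 ls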
OK, after spending more time on this script with a colleague, we have come up with a much simpler script. It does all the credential-file work for you and is much easier to read. It also allows the new tokens for all your environments to live in the same credentials file. The initial call to get your MFA token requires your default account keys in the credentials file; it then generates your MFA credentials and puts them back in the credentials file.
#!/usr/bin/env bash
function usage {
echo "Example: ${0} dev 123456 "
exit 2
}
if [ $# -lt 2 ]
then
usage
fi
MFA_SERIAL_NUMBER=$(aws iam list-mfa-devices --profile bh${1} --query 'MFADevices[].SerialNumber' --output text)
function set-keys {
aws configure set aws_access_key_id ${2} --profile=${1}
aws configure set aws_secret_access_key ${3} --profile=${1}
aws configure set aws_session_token ${4} --profile=${1}
}
case ${1} in
dev|qa|prod) set-keys ${1} $(aws sts get-session-token --profile bh${1} --serial-number ${MFA_SERIAL_NUMBER} --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text --token-code ${2});;
*) usage ;;
esac
Inspired by @strongjz and @Nick's answers, I wrote a small Python command to which you can pipe the output of the aws sts command.
To install:
pip install sts2credentials
To use:
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
--token-code 123456 \
--profile=your-profile-name \
| sts2credentials
This will automatically add the access key ID, the secret access key, and the session token under a new "sts" profile in your ~/.aws/credentials file.
For bash you could read in the value, then set those values from the sts output
echo "Type the mfa code that you want to use (4 digits), followed by [ENTER]:"
read ENTERED_VALUE
aws sts get-session-token ... --token-code "$ENTERED_VALUE"
then you'll have to parse the output of the sts call which has the access key, secret and session token.
{
    "Credentials": {
        "AccessKeyId": "ASIAJPC6D7SKHGHY47IA",
        "Expiration": "2016-06-05 22:12:07 +0000 UTC",
        "SecretAccessKey": "qID1YUDHaMPet5xw/vpw1Wk8SKPilFihdiMSdSIj",
        "SessionToken": "FQoDYXdzEB4aDLwmzouEQ3eckfqJxyLOARbBGasdCaAXkZ7ABOcOCNx2/7sS8N7A6Dpcax/t2G8KNTcUkRLdxI0gTvPoKQeZrH8wUrL4UxFFP6kCWEasdVIBAoUfuhdeUa1a7H216Mrfbbv3rMGsVKUoJT2Ar3r0pYgsYxizOWzH5VaA4rmd5gaQvfSFmasdots3WYrZZRjN5nofXJBOdcRd6J94k8m5hY6ClfGzUJEqKcMZTrYkCyUu3xza2S73CuykGM2sePVNH9mGZCWpTBcjO8MrawXjXj19UHvdJ6dzdl1FRuKdKKeS18kF"
    }
}
then set them
aws configure set aws_access_key_id default_access_key --profile NAME_PROFILE
aws configure set aws_secret_access_key default_secret_key --profile NAME_PROFILE
aws configure set aws_session_token default_session_token --profile NAME_PROFILE
aws configure set region us-west-2 --profile NAME_PROFILE
aws some_command --profile NAME_PROFILE
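As an aside, the hand-parsing step can be avoided entirely with the CLI's --query and --output text flags; a hedged sketch using the same placeholder profile name:
#fetch just the three fields, tab-separated, and split them into variables
read -r acckey seckey sesstok < <(aws sts get-session-token \
  --token-code "$ENTERED_VALUE" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
aws configure set aws_access_key_id "$acckey" --profile NAME_PROFILE
aws configure set aws_secret_access_key "$seckey" --profile NAME_PROFILE
aws configure set aws_session_token "$sesstok" --profile NAME_PROFILE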
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_08_02.html
AWS STS API Reference
http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html
AWS CLI STS Command
http://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html
I wrote something very similar to what you are trying to do in Go, here, but that is for sts assume-role, not get-session-token.
I wrote a simple script to set the AWS credentials file for a profile called mfa. Then every bash script you write just needs "--profile mfa" added and it will just work. This also allows for multiple AWS accounts, as many of us have those these days. I'm sure this can be improved, but it was quick and dirty and does what you want and everything I need.
You will have to amend the facts in the script to fit your account details; I have marked them clearly with chevrons < >. NB: obviously, once you have populated the script with all your details, it is not to be copied about, unless you want unintended consequences. This uses recursion within the credentials file, as the standard access keys are read each time to create the MFA security tokens.
#!/bin/bash
# Change for your username - would be /home/username on Linux/BSD
dir='/Users/<your-user-name>'
region=us-east-1
function usage {
echo "Must enter mfa token and then either dev/qa/prod"
echo "i.e. mfa-set-aws-profile.sh 123456 qa"
exit 2
}
if [[ $1 == "" ]]
then
echo "Must give me a token - how do you expect this to work - DOH :-)"
usage
exit 2
fi
# Write the output from sts command to a json file for parsing
# Just add accounts below as required
case $2 in
dev) aws sts get-session-token --profile dev --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
qa) aws sts get-session-token --profile qa --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
-h) usage ;;
*) usage ;;
esac
# Remove quotes and comma's to make the file easier to parse -
# N.B. gsed is for OSX - on Linux/BSD etc sed should be just fine.
/usr/local/bin/gsed -i 's/\"//g;s/\,//g' $dir/mfa-json
# Parse the mfa info into vars for use in aws credentials file
seckey=`cat $dir/mfa-json | grep SecretAccessKey | gsed -E 's/[[:space:]]+SecretAccessKey\: //g'`
acckey=`cat $dir/mfa-json | grep AccessKeyId | gsed -E 's/[[:space:]]+AccessKeyId\: //g'`
sesstok=`cat $dir/mfa-json | grep SessionToken | gsed -E 's/[[:space:]]+SessionToken\: //g'`
# output all the gathered info into your aws credentials file.
cat << EOF > $dir/.aws/credentials
[default]
aws_access_key_id = <your normal keys here if required>
aws_secret_access_key = <your normal keys here if required>
[dev]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[qa]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[mfa]
output = json
region = $region
aws_access_key_id = $acckey
aws_secret_access_key = $seckey
aws_session_token = $sesstok
EOF
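If jq happens to be installed, the sed/gsed text munging above can be replaced with straight JSON parsing of the same file; a sketch:
#sketch: pull each field directly out of the saved sts JSON with jq
acckey=$(jq -r '.Credentials.AccessKeyId' "$dir/mfa-json")
seckey=$(jq -r '.Credentials.SecretAccessKey' "$dir/mfa-json")
sesstok=$(jq -r '.Credentials.SessionToken' "$dir/mfa-json")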
I am trying to connect to an Oracle database from a shell script (I am a new user). The script will then pass a query and transfer the result to a variable called canadacount. I have written the code but it does not work.
#this script will attempt to connect to a remote database CFQ143 with user ID 'userid' and password 'password'.
#After logging in it will read data from the PLATFORMSPECIFIC table.
#We can pass a query 'select count (platform) from platformspecific where platform='CANADA';
#The result from this query will be passed to a variable called canadacount which we can then echo back to the user.
canadacount=$($ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect userid/password@CFQ143:1521:CFQ143
set pages 0 feed off
select count (platform) from platformspecific where platform='CANADA';
exit
EOF
)
echo $canadacount
The answer is :
I changed the connect line to the following:
connect userid/password@CFQ143
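That form relies on CFQ143 resolving through tnsnames.ora. If it doesn't, an EZConnect-style connect string is an alternative; a sketch (assuming the listener on port 1521 serves a service named CFQ143) with basic error handling added:
#sketch: EZConnect connect string plus an exit code check
canadacount=$("$ORACLE_HOME/bin/sqlplus" -s /nolog <<EOF
whenever sqlerror exit failure
connect userid/password@//CFQ143:1521/CFQ143
set pages 0 feed off
select count(platform) from platformspecific where platform='CANADA';
exit
EOF
) || { echo "sqlplus failed"; exit 1; }
echo "$canadacount"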
We have a Mongo DB, and in it we have a list of collections which I want to export to CSV using the mongoexport tool. I need to do this often, and the names of the collections change sometimes. So what I want to do is create a shell script that I can just run, and it will iterate over the collections in the Mongo DB and create CSV files. Right now I have a script, but it's not automated; for example, I have the following in it.
mongoexport -d mydbname -c mycollname.asdno3rnknlasfkn.collection --csv -f field1,field2,field3,field4 -o mycollname.asdno3rnknlasfkn.collection.csv
Here all the elements remain the same except the CSV filename and the collection name, which match each other.
So I want to create a script which will:
show collections
then loop over the collection names retrieved and substitute each into the export-tool command.
This can easily be done via the shell - don't know if the comments above refer to old versions of the mongo shell...
Example:
echo 'show collections' | mongo dbname --quiet
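Building on that, a minimal end-to-end sketch (the database name mydbname and the field list are placeholders taken from the question) that feeds each collection name into mongoexport:
#!/bin/bash
#list collection names via the shell, then export each to its own CSV
collections=$(echo 'show collections' | mongo mydbname --quiet)
for coll in $collections; do
  #note: a single -f field list is assumed to fit every collection here
  mongoexport -d mydbname -c "$coll" --csv \
    -f field1,field2,field3,field4 -o "${coll}.csv"
done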
You cannot call "show collections" through mongo from the shell.
I suggest you write a small script/program using your favorite language,
fetching the collection names through the driver's API, and then execute
mongoexport from your script/program using a system call (system()).
#############################################################
Script 1 -- to produce a list of databases in MongoDB server
#############################################################
#!/bin/bash
####################################################################
# PROGRAM: mongolistdbs.sh
#
# PROGRAMMER: REDACTED
# PURPOSE: Program created to list databases in Mongo
#
# CREATED: 02/14/2011
#
# MODIFICATIONS:
###################################################################
#set -x
#supply mongo connection parms: server and port
mongo localhost:12345 --quiet <<!! 2>&1
show dbs
!!
########################################################
Script 2 -- This is the driver that calls script 1
#########################################################
#!/bin/bash
####################################################################
# PROGRAM: mongodb_backup_driver.sh
#
# PROGRAMMER: REDACTED
# PURPOSE: Program created to drive the MongoDB full database
#          backups
# CREATED: 02/14/2011
#
# MODIFICATIONS:
###################################################################
################################################
# backup strategy is individual backup
# of all databases via loop and db list
# (dbparm is empty string)
###############################################
echo "Strategy: All via Individual Loop"
####################################
### here is the call of script 1
####################################
DBs=`./mongolistdbs.sh | cut -f1 | egrep -v ">"`
for db in $DBs;
do
echo "Driver loop for backup of [ ${db} ]"
#############################################################
### here im calling my backup script (not supplied)
### with database as a parameter within a loop
### to backup all databases individually
#############################################################
./royall_mongodb_backup.sh ${db}
done