I'm writing a ksh script to refresh a schema from prod to a dev/test/qa environment, and I would like to have a disaster check in place. I'm asking the user to input the source and target databases as well as the schema names. When the user accidentally enters a prod database as the target database name, I would like the script to exit. In our environment the production database name ends with p, sometimes followed by 01, 02, 03, etc.
example names:
dbp
dbpp
dbpp01
dbpp02
cdp01
sedpbp
retpp01
PORP01
PORPP01
How can I check whether the last letter (not number) of my variable string is p or P?
Try the following:
SCHEMA=dbp
case $SCHEMA in
    *[pP] | *[pP]0[0-9] ) echo OK
        ;;
    * ) echo Error
        ;;
esac
I have added another check which tests whether the source and target database names are the same as well.
case "$tarSID" in
    *[pP] | *[pP]0[0-9] | "$srcSID" )
        echo "Warning: Target database cannot be prod or the same as the source"
        echo "Re-enter target database name"
        ;;
    * )
        ;;
esac
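To actually force a re-entry after the warning, the whole check can sit inside a loop; a minimal sketch (assuming srcSID already holds the source database name the user entered):
while :; do
    read tarSID?"Enter target database name: "
    case "$tarSID" in
        *[pP] | *[pP]0[0-9] | "$srcSID" )
            echo "Warning: Target database cannot be prod or the same as the source"
            ;;
        * )
            break
            ;;
    esac
done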
Thanks again, Alvin.
I'm using Flyway 4.2.0 Community Edition. It's possible that this issue is resolved in a newer release, but our DB upgrade project is still a ways out, and we've had no luck getting approval for the licensing to go to Enterprise.
We've been successfully using Flyway for migrations on our Oracle databases for about a year (11.2.0.4 and 11.2.0.2) using standard migrations and with the default prefix (V) and suffix (.sql). We had a homegrown approach to handling our source, but we'd like to move to Repeatable Migrations to simplify things.
We have previously exported all of our PL/SQL into a git repository, using specific suffixes for different object types (trigger=.trg, procedure=.prc, etc.). We'd like to keep these suffixes, but the version we're on doesn't support the newer flyway.sqlMigrationSuffixes parameter, so we're trying to use a solution with a list of suffixes and a for-loop. This solution is mostly working in my testing, with two very notable exceptions: package specs and package bodies (stored separately as .pks and .pkb).
Here's the script we're using to do our migrations (I know it needs work):
###Determine deployment environment for credential extraction
echo "Determining the deployment environment"
case "${bamboo_deploy_environment}" in
prod)
path=prod
dbs=( db1 db2 db3 )
;;
stage)
path=stage
dbs=( db1 db2 db3 )
;;
*)
path=dev
dbs=( db1 db2 db3 )
;;
esac
echo "Environment for credentials unlock is ${path}"
packages=( .sql .trg .pks .pkb .fnc .typ .java .class .properties .config .txt .dat )
echo "Packages to loop through when deploying flyway are ${packages[*]}"
echo "Databases to run against in this environment are ${dbs[*]}"
###Flyway execution stuff
for db in "${dbs[@]}"
do
if [ -z "${db}" ]; then
echo "No db specified"
exit 2
else
echo "Working on db ${db}"
case "${db}" in
db1)
sid=db1
host=db1.fqdn
port=$portnm
;;
db2)
sid=db2
host=db2.fqdn
port=$portnm
;;
db3)
sid=db3
host=db3.fqdn
port=$portnm
;;
esac
fi
echo "Current directory is `pwd`" && echo "\n Contents of current directory as `ls -l`"
echo "Executing Flyway against ${db} for package ${pkg}"
for pkg in "${packages[#]}"
###Target the specific migrations starting folder (it goes recursively)
do
case "${pkg}" in
.sql)
loc=filesystem:${db}/migrations
;;
*)
loc=filesystem:${db}
migrateParams="-repeatableSqlMigrationPrefix=RM -table=REPEATABLE_MIGRATIONS_HISTORY"
;;
esac
echo "Running flyway for package type ${pkg} against ${db} db with location set to ${loc}"
baseParams="-configFile=${db}/migrations/base.conf -locations=${loc} -url=jdbc:oracle:thin:#${host}:${port}:${sid}"
migrateParams="${migrateParams} -sqlMigrationSuffix=${pkg} ${baseParams}"
addParams=" -ignoreMissingMigrations=True"
flyway "repair" "${migrateParams}"
flyway "migrate" "${migrateParams}${addParams}"
echo "Finished with ${pkg} against ${db} db"
unset baseParams
unset migrateParams
unset addParams
done
done
echo "Finished with the migration runs"
My approach has been to run the deployment in an environment, export the data from the REPEATABLE_MIGRATIONS_HISTORY table (our custom table for the repeatable migrations) as insert statements, then truncate the table, execute the inserts, and run the deployment again using the same deployment artifact. For most file types Flyway correctly evaluates that the checksum has not changed and skips the files. For the package spec (.pks) and package body (.pkb) files, however, Flyway is executing the repeatable migration every time. I've run queries to verify, and I'm getting incremented executions on all .pks and .pkb files but staying at one execution for every other suffix.
select "description", "script", "checksum", count(1)
from FLYWAY.repeatable_migrations_history
group by "description", "script", "checksum"
order by count(1) desc, "script";
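For reference, a narrower variant of the same check (same quoted identifiers and table as above) that lists only the package spec/body scripts recorded more than once:
select "script", "checksum", count(1)
from FLYWAY.repeatable_migrations_history
where "script" like '%.pks' or "script" like '%.pkb'
group by "script", "checksum"
having count(1) > 1
order by count(1) desc, "script";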
Does anyone else out there have any ideas? I know that these source files should be idempotent, and largely they are, but some of this PL/SQL has been around for 20-plus years. We've seen a couple of objects that throw an error on the first execution post-compile before working perfectly thereafter, and we've never been able to track down a cause or solution. We will need to prevent these unnecessary executions in order to promote this to production.
I am trying to troubleshoot an old Tcl accounting script called GOTS - Grant Of The System. It creates a time-stamped log file entry for each user login and another for the logout. The problem is that it is not creating the second log file entry on logout. I think I have tracked down the area where it is going wrong and have attached it here. FYI, the log file exists and the script does not exit with the error "GOTS was called incorrectly!!". It should be executing the if branch for [string match "$argv" "end_session"].
This software runs properly on RHEL 6.9 but fails as described on CentOS 7. I am thinking that there is a system variable, or a difference in the $argv argument vector between the two systems, that creates this behavior.
Am I correct in suspecting $argv, and if not, does anyone see the true problem?
How do I print or display the $argv values on logout?
# Find out if we're beginning or ending a session
if { [string match "$argv" "end_session"] } {
if { ![file writable $Log] } {
onErrorNotify "4 LOG"
}
set ifd [open $Log a]
puts $ifd "[clock format [clock seconds]]\t$Instrument\t$LogName\t$GroupName"
close $ifd
unset ifd
exit 0
} elseif { [string match "$argv" "begin_session"] == 0 } {
puts stderr "GOTS was called incorrectly!!"
exit -1
}
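One way to see what $argv actually contains at logout (just a debugging sketch) is to write it to stderr near the top of the script, before anything that touches the display; the PostSession redirect shown below sends that output to /var/tmp/gots_postsession.log:
# Debugging sketch: record what GOTS received in argv.
puts stderr "GOTS argv ([llength $argv] words): $argv"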
The end_session argument is passed in by the /etc/gdm/PostSession/Default file:
#!/bin/sh
### Begin GOTS PostSession
# Do not run GOTS if root is logging out
if test "${USER}" == "root" ; then
exit 0
fi
/usr/local/lib/GOTS/gots end_session > /var/tmp/gots_postsession.log 2> /var/tmp/gots_postsession.log
exit 0
### End GOTS PostSession
This is the postsession log file:
Application initialization failed: couldn't connect to display ":1"
Error in startup script: invalid command name "option"
while executing
"option add *Font "-adobe-new century schoolbook-medium-r-*-*-*-140-*-*-*-*-*-*""
(file "/usr/local/lib/GOTS/gots" line 26)
After a lot of troubleshooting we have determined that, for whatever reason, CentOS is not allowing part of the /etc/gdm/PostSession/Default file to execute:
fi
/usr/local/lib/GOTS/gots end_session
But it does update the postsession log file as it should. Does anyone have any idea what could be interfering with only part of PostSession/Default?
Could it be that you are hitting Bug 851769?
That said, am I correct in stating that, as your investigation shows, this is not a Tcl-related issue or question anymore?
So it turns out that our script has certain elements that depend upon the Xserver running at logout to display some of the GUI error messages. This is from the GNOME configuration documentation:
"When a user terminates their session, GDM will run the PostSession script. Note that the Xserver will have been stopped by the time this script is run, so it should not be accessed.
Note that the PostSession script will be run even when the display fails to respond due to an I/O error or similar. Thus, there is no guarantee that X applications will work during script execution."
We are having to rewrite those error message callouts so they simply write the errors to a file instead of depending on the display. The errors are for things that should be there in the beginning anyway.
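A minimal sketch of that rewrite (the file path and proc name are assumptions, not the actual GOTS code): append the message to a log file instead of raising an X dialog, since no display is available when PostSession runs.
proc logNotify {msg} {
    # Append the error with a timestamp; safe to call after the Xserver is gone.
    set fd [open /var/tmp/gots_errors.log a]
    puts $fd "[clock format [clock seconds]]\t$msg"
    close $fd
}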
I need to write a Korn shell script that, depending on the host it is running on, will set a deployment directory (so, say, 5 hosts deploy the software to directory one and five other hosts deploy to directory two).
How could I do this? I wanted to avoid an if condition for every host, like below:
if [ "$(hostname)" = "host1" ]; then INSTALL_DIR=Dir1
elif [ "$(hostname)" = "host2" ]; then INSTALL_DIR=Dir1
fi
I would prefer to have a list of, say, Directory1Hosts and Directory2Hosts, which contain all the hosts valid for each directory; then I would just check whether the host the script is running on is in Directory1Hosts or Directory2Hosts (so only two if conditions instead of 10).
Thanks for your help - I have been struggling to find how to do, effectively, a "contains" check.
Use a case statement:
case $hostname in
    host1) INSTALL_DIR=DIR1 ;;
    host2) INSTALL_DIR=DIR2 ;;
esac
or use an associative array
typeset -A install_dirs=([host1]=DIR1 [host2]=DIR2)
...
INSTALL_DIR=${install_dirs[$hostname]}
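If you prefer the two-list layout from the question, a sketch along these lines (the host names are placeholders) gives a "contains" check with only two conditions:
Directory1Hosts="host1 host2 host3 host4 host5"
Directory2Hosts="host6 host7 host8 host9 host10"
this_host=$(hostname -s)

# Pad both the list and the host with spaces so only whole names can match.
if [[ " $Directory1Hosts " == *" $this_host "* ]]; then
    INSTALL_DIR=Dir1
elif [[ " $Directory2Hosts " == *" $this_host "* ]]; then
    INSTALL_DIR=Dir2
else
    print -u2 "Unknown host: $this_host"
    exit 1
fi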
When you want to keep configuration and code separate, you can make a config directory: one file, listing its hosts, for each install dir.
# cat installdirs/Dir1
host1
host2
With these files your code can be
INSTALL_DIR=$(grep -Flx "${hostname}" installdirs/* | cut -d"/" -f2)
I'm working on a custom Nagios script that will monitor cPanel to make sure it is running and give back a status depending on the output of service cpanel status. This is what I have:
##############################################################################
# Constants
cpanelstate="running..."
ALERT_OK="OK - cPanel is running"
ALERT_CRITICAL="CRITICAL - cPanel is NOT running"
###############################################################################
cpanel=$(service cpanel status | head -1)
if [ "$cpanel" = "$cpanelstate" ]; then
echo $ALERT_OK
exit 0
else
echo $ALERT_CRITICAL
exit 2
fi
exit $exitstatus
When I run the script, this is the output I get:
root@shared01 [/home/mvelez]# /usr/local/nagios/libexec/check_cpanel
CRITICAL - cPanel is NOT running
When I run the script, cPanel IS RUNNING, but this is the output I get. As a matter of fact, no matter what status cPanel reports, this is the output that comes out. When I comment out the else, echo, and exit 2 statements:
#else
# echo $ALERT_CRITICAL
# exit 2
It gives back a blank output:
root@shared01 [/home/mvelez]# /usr/local/nagios/libexec/check_cpanel
root@shared01 [/home/mvelez]#
I'm not sure what I'm not doing correctly, as I am very new to bash scripting and trying to learn as I go along. Thank you very much in advance for any and all help!
The code below should work, but you might need to run it with sudo, because 'service' might not be available for ordinary users.
#!/bin/bash
##############################################################################
# Constants
cpanelstate="running"
ALERT_OK="OK - cPanel is running"
ALERT_CRITICAL="CRITICAL - cPanel is NOT running"
###############################################################################
cpanel=$(service apache2 status | head -1)
echo CPANEL $cpanel
if [[ $cpanel == *$cpanelstate* ]]; then
echo $ALERT_OK
exit 0
else
echo $ALERT_CRITICAL
exit 2
fi
@Oleg Gryb's answer solves your problem, but as for why your original script didn't work:
[ "$cpanel" = "$cpanelstate" ] compared the full command output - e.g., cpsrvd (pid 10066) is running..., against a substring of the expected output, running... for equality, which will obviously fail.
The solution is to use bash's pattern matching, provided via the right-hand side of its [[ ... ]] conditional (bash's superior alternative to the [ ... ] conditional):
[[ "$cpanel" == *"$cpanelstate" ]]
* represents any sequence of characters, so that this conditional returns true, if $cpanel ends with $cpanelstate (note how * must be unquoted to be recognized as a special pattern char.)
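Putting the two answers together, a minimal corrected version of the original check might look like this sketch (the service name and sample output are taken from the question and the explanation above):
#!/bin/bash
cpanelstate="running"
ALERT_OK="OK - cPanel is running"
ALERT_CRITICAL="CRITICAL - cPanel is NOT running"

# First line of the status output, e.g. "cpsrvd (pid 10066) is running..."
cpanel=$(service cpanel status | head -1)

if [[ $cpanel == *"$cpanelstate"* ]]; then
    echo "$ALERT_OK"
    exit 0
else
    echo "$ALERT_CRITICAL"
    exit 2
fi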
The Problem
I have a script with a case statement that I expect to execute based on the value of a variable. The case statement appears to either ignore the value or not evaluate it properly, instead dropping to the default.
The Scenario
I pull a specific character out of our server hostnames which indicates where in our environment the server resides. We have six different locations:
Management(m): servers that are part of the infrastructure, such as monitoring, email, ticketing, etc.
Development(d): servers that are for developing code and application functionality
Test(t): servers that are used for initial testing of the code and application functionality
Implementation(i): servers that the code is pushed to for pre-production evaluation
Production(p): self-explanatory
Services(s): servers that the customer needs to integrate that provide functionality across their project. These are separate from the Management servers in that these are customer servers while Management servers are owned and operated by us.
After pulling the character from the hostname I pass it to a case block. I expect the case block to evaluate the character and add a couple lines of text to our rsyslog.conf file. What is happening instead is that the case block returns the default which does nothing but tell the person building the server to manually configure the entry due to an unrecognized character.
I've tested this manually against a server I recently built and verified that the character I am pulling from the hostname (an 's') is expected and accounted for in the case block.
The Code
# Determine which environment our server resides in
host=$(hostname -s)
env=${host:(-8):1}
OLDFILE=/etc/rsyslog.conf
NEWFILE=/etc/rsyslog.conf.new
# This is the configuration we need on every server regardless of environment
read -d '' common <<- EOF
...
TEXT WHICH IS ADDED TO ALL CONFIG FILES REGARDLESS OF FURTHER CODE EXECUTION
SNIPPED
....
EOF
# If a server is in the Management, Dev or Test environments send logs to lg01
read -d '' lg01conf <<- EOF
# Relay messages to lg01
*.notice @@xxx.xxx.xxx.100
#### END FORWARDING RULE ####
EOF
# If a server is in the Imp, Prod or is a non-affiliated Services zone server send logs to lg02
read -d '' lg02conf <<- EOF
# Relay messages to lg02
*.notice @@xxx.xxx.xxx.101
#### END FORWARDING RULE ####
EOF
# The general rsyslog configuration remains the same; pull it out and write it to a new file
head -n 63 $OLDFILE > $NEWFILE
# Add the common language to our config file
echo "$common" >> $NEWFILE
# Depending on which environment ($env) our server is in, add the appropriate
# remote log server to the configuration with the $common settings.
case $env in
m) echo "$lg01conf" >> $NEWFILE;;
d) echo "$lg01conf" >> $NEWFILE;;
t) echo "$lg01conf" >> $NEWFILE;;
i) echo "$lg02conf" >> $NEWFILE;;
p) echo "$lg02conf" >> $NEWFILE;;
s) echo "$lg02conf" >> $NEWFILE;;
*) echo "Unknown environment; Manually configure"
esac
# Keep a dated backup of the original rsyslog.conf file
cp $OLDFILE $OLDFILE.$(date +%Y%m%d)
# Replace the original rsyslog.conf file with the new version
mv $NEWFILE $OLDFILE
An Aside
I've already determined that I can combine the different groups of code from the case block onto single lines (a total of two) using the | operator. I've listed it in the manner above since this is how it was coded while I was having the issues with it.
I can't see what's wrong with your code. Maybe add another ;; to the default clause. To find the problem, add set -vx as the first line; it will show you lots of debug information.
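A minimal sketch of that debugging approach (the echo line is just illustrative) traces the script and prints the extracted character, so you can confirm the case statement is really being fed the expected 's':
#!/bin/bash
set -vx   # trace every line so you can see exactly how $env gets set

host=$(hostname -s)
env=${host:(-8):1}
echo "host='${host}'  extracted environment character='${env}'"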