Not able to send psadmin output to logs - oracle

I'm writing a script to restart an instance. It works fine without a log file, but psadmin throws the following error when I try to log its output:
java.lang.NullPointerException
at com.peoplesoft.pt.psadmin.ui.Progress.<init>(Progress.java:135)
at com.peoplesoft.pt.psadmin.ui.Progress.getInstance(Progress.java:123)
at com.peoplesoft.pt.psadmin.pia.DomainBootHandler.BootWlsServer(DomainBootHandler.java:84)
at com.peoplesoft.pt.psadmin.pia.DomainBootHandler.run(DomainBootHandler.java:62)
at com.peoplesoft.pt.psadmin.pia.PIAAdminCmdLine.startDomain(PIAAdminCmdLine.java:270)
at com.peoplesoft.pt.psadmin.pia.PIAAdminCmdLine.run(PIAAdminCmdLine.java:481)
at com.peoplesoft.pt.psadmin.PSAdmin.runSwitched(PSAdmin.java:170)
at com.peoplesoft.pt.psadmin.PSAdmin.main(PSAdmin.java:232)
The following works (with no log):
export ORAENV_ASK=NO
export ORACLE_SID=PSCNV
. oraenv
export TUXDIR=/m001/Oracle/Middleware/tuxedo12.1.1.0
. /m001/pt854/psconfig.sh
. $TUXDIR/tux.env
export PS_CFG_HOME=$PS_HOME
$PS_HOME/appserv/psadmin -w shutdown -d PSCNV
$PS_HOME/appserv/psadmin -w start -d PSCNV
$PS_HOME/appserv/psadmin -w status -d PSCNV
Changing the psadmin invocations like so causes the error:
LOGFILE=/home/psoft/scripts/pscnv_webserv_stopNstart.log
test() {
$PS_HOME/appserv/psadmin -w shutdown -d PSCNV
$PS_HOME/appserv/psadmin -w start -d PSCNV
$PS_HOME/appserv/psadmin -w status -d PSCNV
}
test >> ${LOGFILE}
I also tried redirecting the output of each call individually and saw the same error.

I'm interested in any feedback on this question as well. I tried writing a cross-platform Java program to bounce multiple app and web servers, and it seems that the psadmin.jar program holds onto stdout exclusively for the duration of the psadmin run.
I want to evaluate the output of psadmin/psadmin.jar to see whether there are trappable errors that require killing the process at the OS level.
Hopefully there is a way to share stdout, but I have not found one yet...

This solved it for me: nohup script -q -c "psadmin -w start -d peoplesoft". The script utility runs the command under a pseudo-terminal and records its output (to the file typescript by default), so psadmin still sees a terminal while the output is captured.
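The stack trace suggests that psadmin's Progress UI fails when stdout is not a terminal, which would explain why plain redirection breaks it while script does not. A minimal sketch of the effect, assuming the util-linux script utility (the psadmin paths are omitted; this only demonstrates that a command run under script still sees a tty even though its output lands in a file):

```shell
# A command whose stdout is redirected sees a pipe/file, not a terminal:
sh -c 'test -t 1 && echo IS_A_TTY || echo NO_TTY' > out1.txt

# Under script, the same command runs on a pseudo-terminal, and the
# output is still captured (to out2.txt here, or 'typescript' by default):
script -q -c "sh -c 'test -t 1 && echo IS_A_TTY || echo NO_TTY'" out2.txt >/dev/null
```

Wrapping each psadmin call the same way (script -q -c "psadmin -w start -d PSCNV" "$LOGFILE") should let the function above log its output without triggering the NullPointerException.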

Related

Running a bash script after the kafka-connect docker is up and running

I have the following docker file
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
echo "In"
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
echo "Going to sleep for $sleep_second seconds"
# sleep $sleep_second
echo "Finished sleeping"
# ((sleep_second_counter+=$sleep_second))
echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what's happening. I expect the script to run and wait until Kafka Connect has started, but after a few seconds the script (or something else; I don't know exactly what) hangs and I no longer see any console output.
I'm a bit lost as to what's wrong, so I need some guidance on what I'm missing, or whether this is simply not the correct approach.
What I am trying to do
I want to wait for Kafka Connect to start and then run this curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use the docker-compose way of doing this, since in some places I have to use docker run.
The problem here is that your ENTRYPOINT replaces the image's default startup: it runs when the container starts and prevents the default CMD (which launches the Kafka Connect server) from running. Since the server never starts, your script loops forever waiting for it.
You need to do one of the following:
start the Kafka Connect server in your ENTRYPOINT and run your script as the CMD, or run your script from outside the container.
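One way to sketch the first option: a wrapper entrypoint that starts the worker in the background, polls the REST port with a bounded loop (note the sleep and the timeout, which the script in the question had commented out), and only then registers the connector. The /etc/confluent/docker/run path, the --entrypoint guard, and the connector name "snmp-source" are assumptions to adjust for your image; the script only performs the startup when invoked with --entrypoint, so it can also be sourced just for its helper function:

```shell
#!/bin/bash
# Sketch of a wrapper entrypoint (paths and names are assumptions).

wait_for_http() {   # wait_for_http URL TIMEOUT_SECONDS -> 0 once URL returns 200
  local url=$1 timeout=$2 waited=0
  while [ "$(curl -s -o /dev/null -w '%{http_code}' "$url")" != "200" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "Timed out after ${waited}s waiting for $url" >&2
      return 1
    fi
    sleep 1                 # without a sleep the loop spins and floods the log
    waited=$((waited + 1))
  done
}

if [ "${1:-}" = "--entrypoint" ]; then
  /etc/confluent/docker/run &    # the base image's normal startup command

  if wait_for_http "http://localhost:8083/connectors" 120; then
    curl -X POST -H "Content-Type: application/json" \
      --data '{"name":"snmp-source","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' \
      http://localhost:8083/connectors
  fi

  wait                           # keep the container alive on the worker process
fi
```

In the Dockerfile you would then use ENTRYPOINT [ "/usr/share/kafka-connect-script/plugins-config.sh", "--entrypoint" ].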

Login via curl fails inside bash script, same curl succeeds on command line

I'm running this login via curl in my bash script. I want to make sure I can log in before executing the rest of the script, where I actually log in, store the cookie in a cookie jar, and then execute another curl against the API thousands of times. I don't want to run all that if the login fails.
The problem is, the basic login returns 401 when it runs inside the script, but when I run the exact same curl command on the command line, it returns 200!
basic_login_curl="curl -w %{http_code} -s -o /dev/null -X POST -d \"username=$username&password=$password\" $endpoint/login"
echo $basic_login_curl
outcome=`$basic_login_curl`
echo $outcome
if [ "$outcome" == "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
This outputs:
curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
401
Failed login. Please try again.
Copied the output and ran it on the cmd line:
$ curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
200$
Any ideas? Let me know if there's more of the code you need to see.
ETA: Please note: the issue is not that the output fails to match 401; it's that the same curl login command fails to authenticate inside the script, whereas it succeeds when I run it on the actual command line.
Most of the issues come from how you are quoting (or not quoting) variables and from storing the command in a string: when $basic_login_curl is expanded unquoted, the shell only word-splits it, it does not re-parse the escaped quotes, so those quote characters are sent to the server as literal parts of the POST data. Setting up your command like the following is what I would recommend:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
The rest basically involves quoting everything properly:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
# echo "$basic_login_curl" # not needed since what follows repeats it.
outcome="$basic_login_curl"
echo "$outcome"
if [ "$outcome" = "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
Running the script through shellcheck.net can be helpful in resolving issues like this as well.
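The underlying problem can be shown in isolation: quotes embedded in a string are data by the time the variable is expanded, not syntax. A small demo with hypothetical credentials (printf stands in for curl so the effect on the argument is visible):

```shell
# Storing the command in a string: the escaped quotes become part of the data.
cmd="printf %s \"username=bdunn&password=secret\""
broken=$($cmd)        # unquoted expansion -> word splitting only, no re-parsing
echo "$broken"        # the argument still carries literal quote characters

# Running the command directly: the shell parses the quotes as syntax.
fixed=$(printf %s "username=bdunn&password=secret")
echo "$fixed"         # clean argument, no stray quotes
```

This is exactly why the server saw mangled credentials from the script but not from the interactive command line, where the quotes were parsed normally.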

Capistrano 3 deploy failed messages - exit status 1 (failed)

When I deploy my Symfony2 app using Capistrano with symfony gem I get various errors such as
Running /usr/bin/env [ -L /var/www/releases/20151014090151/app/config/parameters.yml ] as ubuntu@ec2-00-000-000-000.eu-west-1.compute.amazonaws.com
Command: [ -L /var/www/releases/20151014090151/app/config/parameters.yml ]
Finished in 0.038 seconds with exit status 1 (failed)
and I get the same for
-f /var/www/releases/20151014120425/app/config/parameters.yml
-L /var/www/releases/20151014090151/web/.htaccess
-L /var/www/releases/20151014090151/web/robots.txt
-L /var/www/releases/20151014090151/app/logs
-d /var/www/releases/20151014120425/app/logs
SYMFONY_ENV=prod /usr/bin/env ln -s /var/www/shared/app/logs /var/www/releases/20151014120425/app/logs
-L /var/www/releases/20151014120425/web/uploads
-d /var/www/releases/20151014120425/web/uploads
-L /var/www/releases/20151014120425/src/Helios/CoreBundle/Resources/translations
-d /var/www/releases/20151014120425/src/Helios/CoreBundle/Resources/translations
-L /var/www/releases/20151014120425/app/spool
-d /var/www/releases/20151014120425/app/spool
-d /var/www/releases/20151014120425/app/cache
I am not sure what is failing, or what the various flags (-f, -L, -d) mean.
The deploy completes, but it shows these failed messages. Can someone please advise what they mean and how to fix them?
Thanks
The flags are shell file test operators: -f tests for a regular file, -L for a symbolic link, and -d for a directory.
When Capistrano says a command failed, it just means the command returned a non-zero status code. In the case of file test operators, Capistrano may be checking whether something exists before creating it, and that "failure" feeds a conditional which then creates the file/folder/symlink. This is normal, albeit confusing. If a critical command fails, Capistrano stops the deployment and displays an error message.
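Those operators can be tried directly in a shell; each check simply produces the exit status that Capistrano reports (0 = success, non-zero = "failed"). A quick sketch with throwaway paths:

```shell
tmp=$(mktemp -d)
touch "$tmp/parameters.yml"        # a regular file
mkdir "$tmp/logs"                  # a directory
ln -s "$tmp/logs" "$tmp/applogs"   # a symbolic link

[ -f "$tmp/parameters.yml" ] && echo "-f parameters.yml: exit 0"
[ -d "$tmp/logs" ]           && echo "-d logs: exit 0"
[ -L "$tmp/applogs" ]        && echo "-L applogs: exit 0"
[ -L "$tmp/missing" ]        || echo "-L missing: exit 1, the 'failed' Capistrano logs"
```

The last line is the benign case from the question: the symlink does not exist yet, the test "fails", and the deploy goes on to create it.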

How to close a screen from a Makefile?

I have a test that depends on a specific HTTP server, which requires me to start one with a known setup for the tests.
Since the server cannot be started as a daemon, my approach was to start it in a screen session, run the test, and then close the session.
test:
screen -S test_http_server -d -m start_my_test_http_server
# run my tests here
screen -S test_http_server -X kill # works from bash but not makefile :/
Everything works fine except for closing or killing the session (which does work if I run it in bash afterwards).
It turns out the problem was the # prefix (which I had used, but had not included in the original example code), intended to suppress the normal echo of each command. In a makefile, # does not suppress echoing; it turns the line into a comment, so the kill command was never executed at all. The prefix that suppresses echoing is @.
Fails because of the # prefix:
test:
#screen -S test_http_server -d -m start_my_test_http_server
# run my tests here
#screen -S test_http_server -X kill
Fixed make file that works as intended.
test:
screen -S test_http_server -d -m start_my_test_http_server
# run my tests here
screen -S test_http_server -X kill
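The difference between the two prefixes is easy to verify with a throwaway makefile (hypothetical target and file names): a # line is never run by the shell, while an @ line runs silently.

```shell
d=$(mktemp -d)
# A recipe with one commented-out command and one echo-suppressed command.
printf 'demo:\n\t#touch %s/made_by_comment\n\t@echo RAN_QUIET\n' "$d" > "$d/Makefile"
make -f "$d/Makefile" demo
# The '#touch' line is only printed, never executed, so the file is absent:
ls "$d/made_by_comment" 2>/dev/null || echo "not created"
```

So to hide the screen commands from make's output while still running them, prefix them with @, not #.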

How can I start a screen session using a specific config file?

I would like to be able to start a screen session using a certain config file. I know that I can use -c followed by the path to the config file, but when I do, the sh script I am using does not work. You can see the sh script below:
#!/bin/bash
cd /media/kiancross/Minecraft_Server/1.6.4
screen -d -m -S MinecraftServer ./start.sh
screen -r MinecraftServer
I would have thought that I can do the following code:
#!/bin/bash
cd /media/kiancross/Minecraft_Server/1.6.4
screen -d -m -S -c MinecraftServer $HOME/config_file/mcserver.config ./start.sh
screen -r MinecraftServer
But then I get a message saying:
There is no screen to be resumed matching MinecraftServer.
After then checking to see if there is a screen session running it says that there are no screen sessions running
No Sockets found in /var/run/screen/S-kiancross.
Does anybody know how I can do this so that I can use a custom config file?
The command should be:
screen -d -m -S MinecraftServer -c $HOME/config_file/mcserver.config ./start.sh
The session name goes after -S and the config file path goes after -c. In your version you inserted -c between -S and the name, so -S consumed "-c" as the session name and no session called MinecraftServer was ever created.
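The mix-up generalises: an option that takes an argument consumes the next word on the command line, whatever that word is. A small getopts sketch with hypothetical option letters mirroring screen's -S and -c:

```shell
parse() {
  local OPTIND=1 opt          # reset getopts state for each call
  while getopts "S:c:" opt; do
    case $opt in
      S) echo "session name: $OPTARG" ;;
      c) echo "config file: $OPTARG" ;;
    esac
  done
}

parse -S -c MinecraftServer          # -S swallows '-c' as the session name
parse -S MinecraftServer -c mc.cfg   # correct order: both options parsed
```

The first call reports the session name as "-c", which is exactly why screen could find no session matching MinecraftServer.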
