Script Karaf shell commands? - shell

I need to issue Karaf shell commands non-interactively, preferably from a script. More specifically, I need to tell Karaf to feature:install a set of features in an automated way.
# Attempt to install a feature in a way I could script
bash> bin/karaf feature:install myFeature
# Drops me into Karaf shell
karaf> feature:uninstall myFeature
Error executing command: Feature named 'myFeature' is not installed
# Feature wasn't installed
Is this possible? Is there a different way of solving this issue (automated install of a set of Karaf features) that I'm missing?

With bin/karaf you start Karaf with an interactive login shell. If you want to issue commands non-interactively, you first need to start Karaf in server mode; for this, use the bin/start shell script. Then you can use either the bin/client or bin/shell commands to communicate with Karaf in headless mode.
For example:
./bin/client list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
----------------------------------------------------------------------------------
72 | Active | 80 | 0 | mvn_org.ops4j.pax.web.samples_war_4.1.0-SNAPSHOT_war
This should work for all current versions of Karaf (though maybe not the 2.2.x line ;-) )
If the version you're using is 3.0.x or higher, you might need to pass a user to the command:
./bin/client -u karaf list
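Putting the pieces together, a minimal sketch of scripting this end-to-end might look like the following. KARAF_HOME and the feature names are assumptions for illustration; adjust them for your install:

```shell
#!/bin/sh
# Install a list of features through bin/client once Karaf is up.
# KARAF_HOME and FEATURES are placeholders, not defaults.
KARAF_HOME="${KARAF_HOME:-/opt/karaf}"
FEATURES="http webconsole"

install_features() {
  for f in $FEATURES; do
    # -u karaf is needed on 3.0.x and later; abort on the first failure
    "$KARAF_HOME/bin/client" -u karaf "feature:install $f" || return 1
  done
}

# Typical usage: run "$KARAF_HOME/bin/start" first, then install_features
```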

To issue Karaf shell commands non-interactively, preferably from a script, you can also use the Karaf client (scroll down to "Apache Karaf client"). To install features I use a command like
/opt/karaf/bin/client -r 7 "feature:install http; feature:install webconsole"
The -r switch makes the client retry the connection (here up to 7 times) if the server is not up yet (I use it in a Docker script).

It's possible to issue non-interactive Karaf shell commands using sshpass if keeping the password secret isn't important.
sshpass -p karaf ssh -tt -p 8101 -o StrictHostKeyChecking=no karaf@localhost feature:install odl-l2switch-switch-ui
Working example in OpenDaylight's Vagrant-based L2Switch Tutorial.

Late to the party, but this problem can easily be solved using the Features Boot configuration, located in the etc/org.apache.karaf.features.cfg file.
According to the Karaf provisioning documentation (https://karaf.apache.org/manual/latest/provisioning):
A boot feature is automatically installed by Apache Karaf, even if it has not been previously installed using feature:install or FeatureMBean.
This file has two main properties: featuresRepositories and featuresBoot.
featuresRepositories contains a list (comma-separated) of features repositories (features XML) URLs.
featuresBoot contains a list (comma-separated) of features to install at boot.
Note that once you update this file, Karaf will attempt to install the features listed in the featuresBoot configuration every time it starts. So if all you are looking to automate is installing features (as per the original question), then this is a great option.
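For example, a minimal etc/org.apache.karaf.features.cfg might look like the sketch below. The second repository URL, the version numbers, and myFeature are illustrative placeholders for your own features XML, not defaults:

```
# Comma-separated list of features XML repository URLs
featuresRepositories = \
    mvn:org.apache.karaf.features/standard/4.1.0/xml/features, \
    mvn:com.example/my-features/1.0.0/xml/features

# Comma-separated list of features to install at boot
featuresBoot = \
    config, ssh, http, myFeature
```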

Another option is to use Expect.
This Expect script from OpenDaylight's CI installs and verifies a Karaf feature. Here's an excerpt:
# Install feature
expect "$prompt"
send "feature:install odl-netvirt-openstack\r"
expect {
    "Error executing command: Can't install feature" {
        send_user "\nFailed to install test feature\n"
        exit 1
    }
}

So the general practice is to install the feature, then loop on a bundle:list | grep bundleName to see if the bundles you need are installed. Then you continue on with your test case.
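That polling loop can be sketched in plain shell. The client path, bundle name, and retry limits here are assumptions to adapt to your setup:

```shell
#!/bin/sh
# Poll bundle:list until a given bundle shows up, or give up.
# CLIENT and BUNDLE are illustrative placeholders.
CLIENT="${CLIENT:-bin/client -u karaf}"
BUNDLE="${BUNDLE:-my-bundle}"

wait_for_bundle() {
  tries=0
  while [ "$tries" -lt 30 ]; do
    if $CLIENT bundle:list | grep -q "$BUNDLE"; then
      return 0                   # bundle is listed; continue with tests
    fi
    tries=$((tries + 1))
    sleep 2
  done
  return 1                       # timed out waiting for the bundle
}
```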

Related

How do I get a custom Nagios plugin to work with NRPE?

I have a system with no internet access where I want to install some Nagios monitoring services/plugins. I installed NRPE (Nagios Remote Plugin Executor), and I can see commands defined in it, like check_users, check_load, check_zombie_procs, etc.
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
...
I am able to run the commands like so:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_load
This produces an output like:
OK - load average: 0.01, 0.13, 0.12|load1=0.010;15.000;30.000;0; load5=0.130;10.000;25.000;0; load15=0.120;5.000;20.000;0;
or
WARNING – load average per CPU: 0.06, 0.07, 0.07|load1=0.059;0.150;0.300;0; load5=0.069;0.100;0.250;0; load15=0.073;0.050;0.200;0;
Now, I want to define/configure/install some more services to monitor. I found a collection of services here. So, say, I want to use the service defined here called check_hadoop_namenode.pl. How do I get it to work with NRPE?
I tried copying the file check_hadoop_namenode.pl into the same directory where other NRPE services are stored, i.e., /usr/lib/nagios/plugins. But it doesn't work:
$ /usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode.pl
I figured this might be obvious because all other services in that directory are binaries, so I need a binary for check_hadoop_namenode.pl file as well. How do I make the binary for it?
I tried installing the plugins according to the description in the link, but it just tries to install some package dependencies and throws an error because it cannot access the internet (my system has no internet access, as I stated before). The error persists even when I install these dependencies manually on another system and copy them to the target system.
$ <In another system with internet access>
mkdir ~/repos
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
sudo nano Makefile
# replace 'yum install' with 'yumdownloader --resolv --destdir ~/repos/'
# replace 'pip install' with 'pip download -d ~/repos/'
This downloaded 43 dependencies (and dependencies of dependencies, and so on) required to install the plugins.
How do I get it to work?
check_users, check_load and check_zombie_procs are defined on the client side in the nrpe.cfg file. The default locations are /usr/local/nagios/etc/nrpe.cfg or /etc/nagios/nrpe.cfg. As I read it, you already found that file, so you can move on to the next step.
Put something like this in your nrpe.cfg:
command[check_hadoop_namenode]=/path/to/your/custom/script/check_hadoop_namenode.pl -optional -arguments
Then you need to restart the NRPE daemon service on the client, with something like service nrpe restart.
Just for your information, these custom scripts don't have to be binaries; you can even use a simple bash script.
And finally, after that, you can call the check_hadoop_namenode command from the Nagios server or via the local NRPE daemon:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode
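As a concrete illustration of the "plugins can be plain scripts" point, here is a toy plugin sketch. NRPE only cares about the single output line and the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN); the name, thresholds, and argument handling are made up:

```shell
#!/bin/sh
# check_example: a hypothetical NRPE-style plugin. It takes the value
# to check as its first argument; a real plugin would measure something
# itself (e.g. users=$(who | wc -l)).
check_example() {
  users="$1"
  if [ "$users" -gt 10 ]; then
    echo "CRITICAL - $users users logged in"
    return 2
  elif [ "$users" -gt 5 ]; then
    echo "WARNING - $users users logged in"
    return 1
  fi
  echo "OK - $users users logged in"
  return 0
}
```

Drop such a script into the plugins directory, make it executable, reference it from a command[...] line in nrpe.cfg, and restart the NRPE daemon.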

how to tell jenkins to restart tomcat after deployment?

Unfortunately I am new to Jenkins and Linux and hope somebody can help me here.
It is about the automatic build for our system. We use a Jenkins job to update the web system. After updating, Tomcat and the system should be restarted. For updating the system we use the following execution command:
bash -l distr/deploy.sh -s /distr -a /data/mySystem -c /opt/apache-tomcat-8.0.5 2>&1 | tee log.log
how to tell jenkins to start tomcat after deployment?
This tutorial is for Tomcat 7, but the idea should be similar for v8: http://www.jdev.it/deploying-your-war-file-from-jenkins-to-tomcat/
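If you prefer to script the restart yourself, a hedged sketch of an extra "Execute shell" build step might look like this. CATALINA_HOME and the wait time are assumptions for your layout:

```shell
#!/bin/sh
# Restart Tomcat after the deploy step. shutdown.sh may fail if Tomcat
# isn't running yet, so that failure is ignored.
CATALINA_HOME="${CATALINA_HOME:-/opt/apache-tomcat-8.0.5}"

restart_tomcat() {
  "$CATALINA_HOME/bin/shutdown.sh" || true
  sleep "${TOMCAT_STOP_WAIT:-5}"   # crude wait for the JVM to exit
  "$CATALINA_HOME/bin/startup.sh"
}
```

Note that Jenkins tends to kill processes spawned by a build when the job finishes, so you may need to detach the startup (for example with nohup or at).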

Nohup didn't work in Jenkins shell

I want my JBoss server to run in the background, so I am using the command nohup ./startPID.sh > /dev/null 2>&1 &. But when I run the same command from Jenkins, it doesn't work as expected. The console output in Jenkins says the command ran successfully, but on the backend the JBoss server is still down.
Any inputs?
Regards
Manish Mehra
Use "at now" instead of "nohup"
In your Jenkins job (Execute shell) put:
set +e # so "at now" will run even if java -jar fails
# Run the java app in the background
echo "java -jar $(ls | grep '\.jar$' | head -n 1)" | at now + 1 min
You could look at the JBoss management plugin
which spins up JBoss for you
This plugin allows you to manage a JBoss Application Server during the build
procedure.
With the plugin we can start/stop JBoss AS. It's very useful if we
need to run some integration tests against the server. There is
also an operation that verifies whether artifacts are deployable.
It looks to be quite an old plugin but it has current users.

How to deploy a Spring Boot Maven application with Jenkins?

I have a Spring Boot application which runs on the embedded Tomcat servlet container via mvn spring-boot:run. I don't want to deploy the project as a separate war to a standalone Tomcat.
Whenever I push code to BitBucket/Github, a hook runs and triggers Jenkins job (runs on Amazon EC2) to deploy the application.
The Jenkins job has a post-build action, mvn spring-boot:run; the problem is that the job hangs, because this action never returns.
There should be another way to do this. Any help would be appreciated.
The problem is that Jenkins doesn't handle spawning child processes from builds very well. The workaround suggested by @Steve in the comments (nohuping) didn't change the behaviour in my case, but a simple workaround was to schedule the app's start using the at unix command:
> echo "mvn spring-boot:run" | at now + 1 minutes
This way Jenkins successfully completes the job without timing out.
If you end up running your application from a .jar file via java -jar app.jar be aware that Boot breaks if the .jar file is overwritten, you'll need to make sure the application is stopped before copying the artifact. If you're using ApplicationPidListener you can verify that the application is running (and stop it if it is) by adding execution of this command:
> test -f application.pid && xargs kill < application.pid || echo 'App was not running, nothing to stop'
I find it very useful to first copy the artifacts to a dedicated area on the server, both to keep track of the deployed artifacts and to avoid starting the app from the Jenkins job folder. Then create a server log file there and listen to it in the Jenkins window until the server has started.
To do that I developed a small shell script that you can find here
You will also find a small article explaining how to configure the project on jenkins.
Please let me know if it worked for you. Thanks
The nohup and the at now + 1 minutes didn't work for me.
Since Jenkins was killing the process spawned in the background, I ensured the process would not be killed by setting a fake BUILD_ID for that Jenkins task. This is what the Jenkins Execute shell task looks like:
BUILD_ID=do_not_kill_me
java -jar -Dserver.port=8053 /root/Deployments/my_application.war &
exit
As discussed here.
I assume you have a Jenkins-user on the server and this user is the owner of the Jenkins-service:
log in on the server as root.
run sudo visudo
add "jenkins ALL=(ALL) NOPASSWD:ALL" at the end (jenkins = your Jenkins user)
Sign in to Jenkins, choose your job and click Configure
Choose "Execute Shell" in the "Post build step"
Copy and paste this:
service=myapp
if ps ax | grep -v grep | grep -v $0 | grep $service > /dev/null
then
sudo service myapp stop
sudo unlink /etc/init.d/myapp
sudo chmod +x /path/to/your/myapp.jar
sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
sudo service myapp start
else
sudo chmod +x /path/to/your/myapp.jar
sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
sudo service myapp start
fi
Save and run your job, the service should start automatically.
This worked for me with Jenkins on a Linux machine:
kill -9 $(lsof -t -i:8080) || echo "Process was not running."
mvn clean compile
echo "mvn spring-boot:run" | at now + 1 minutes
If there is no process on 8080, it will print the message and continue.
Make sure that at is installed on your Linux machine. You can install it with:
sudo apt-get install at

Making Apache Felix Gogo not open a local console

I am learning Apache Felix to use as my OSGi framework. I want to be able to use the Felix Remote Shell to access my running instance through telnet. The Remote Shell accesses the process through Gogo, as explained on http://felix.apache.org/site/apache-felix-remote-shell.html. When I start Felix with the Gogo shell bundles in the auto-deploy bundles directory, it opens a Felix prompt g! on the Linux console from which I am starting. What I would like to do is have Felix start with the Gogo shell active, but without attaching to my current Linux console and showing the g! prompt, and still allowing me to access the instance using the Remote Shell through telnet. Is this possible? If so, what is the correct way to do it? Would nohup and running in the background suffice? That doesn't seem very clean to me. Thanks for any suggestions!
According to a discussion on the mailing list, you should add the -Dgosh.args=--nointeractive JVM argument.
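Assuming the standard Felix framework distribution layout (which is launched from bin/felix.jar), that argument goes on the launch command line; the directory path here is illustrative:

```
cd /path/to/felix-framework
java -Dgosh.args=--nointeractive -jar bin/felix.jar
```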
