How to stop a logstash Config file running in Ubuntu? - elasticsearch

I'm running my logstash config file in Ubuntu using the following command:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
It's working. However, I recently realized that every time I run this command it starts another instance. I think there are now six instances running, because each new record I create shows up six times in Elasticsearch.
How can I stop all these other instances, and is there any way to check how many are running?
Thanks

You can use the pkill command and specify the name of the process(es) you want to kill:
pkill logstash
The killall command works the same way:
killall logstash
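To answer the second part of the question, pgrep can report how many matching processes are running before you kill anything. A minimal sketch (matching on the full command line with -f is an assumption about how logstash appears in the process table; the helper function name is made up):

```shell
# count_instances NAME: print how many processes match NAME.
count_instances() {
    # pgrep -c prints the number of matching processes; it exits
    # non-zero when there are none, so ignore the exit status.
    pgrep -c -f "$1" || true
}

count_instances logstash
```

Run it before and after pkill to confirm everything is gone.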

As Val states, pkill should resolve what you are facing.
To avoid this in the future, why not create a small service file that uses a PID file, so you can't have multiple instances running? Here is what I did:
http://www.logstashbook.com/code/3/logstash-central.init
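The core of the PID-file idea can be sketched in a few lines. This is a simplified illustration, not the linked init script; the PID file path is a placeholder:

```shell
#!/bin/sh
# Minimal PID-file guard (sketch): refuse to start a second instance.
PIDFILE=/tmp/logstash-demo.pid   # illustrative path

# If the PID file exists and that PID is still alive, bail out.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "already running with pid $(cat "$PIDFILE")" >&2
    exit 1
fi
echo $$ > "$PIDFILE"
# ...start logstash here, e.g.:
# exec /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
```

kill -0 sends no signal; it only checks whether the process exists, which also makes the guard survive stale PID files left by a crash.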

Related

Running two scripts that use the foreground

I'm trying to fire up an instance of Elasticsearch and then an instance of Kibana (which needs to wait until ES is up) using a script. I can't just do ./bin/elasticsearch && ./bin/kibana or something similar, because the first script runs in the foreground, which means the second command won't run. What's the best way I can do this while ensuring Kibana only starts when ES is up and running?
If you have no way to tell when ES is up, I can only suggest:
./bin/elasticsearch & sleep 10 && ./bin/kibana
where you have to guesstimate how much time it will take to be ready.
Assuming ./bin/elasticsearch blocks the command line until it is 'up', you can just use a ';' between the commands to run them one after the other:
./bin/elasticsearch; ./bin/kibana
But since Elasticsearch actually blocks the command line until it is stopped, you could do something else: run it as a daemon so it doesn't block the command line. (Here is the documentation about starting and stopping ES.)
./bin/elasticsearch -d -p PID; ./bin/kibana; kill `cat PID`
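There is in fact a way to tell when ES is up: its HTTP port starts answering. A sketch that polls instead of guessing a sleep (wait_for is a hypothetical helper; 9200 is ES's default HTTP port, and the -d/-p daemon flags are the ones shown above):

```shell
# wait_for CMD...: retry CMD once per second until it succeeds.
wait_for() {
    until "$@" >/dev/null 2>&1; do
        sleep 1
    done
}

# Intended usage (assumes the default ES HTTP port 9200):
#   ./bin/elasticsearch -d -p es.pid
#   wait_for curl -sf http://localhost:9200
#   ./bin/kibana
```

This starts Kibana as soon as ES answers, rather than after a fixed guess.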

How do I make a Bash script run continuously, and end it when I want to?

I have a Bash script named "startnode.sh" that creates a private Geth node.
I want to be able to run this script on a server and exit that server without any problem.
You are looking for nohup(1).
It is a utility which lets you detach a process from your current terminal session.
Here's a link to the manual page of FreeBSD's nohup(1).
Alternatively, set up a systemd .service file and have it run as a daemon
https://wiki.archlinux.org/index.php/Systemd
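A minimal nohup invocation for the startnode.sh case might look like this (the log and PID file names are illustrative):

```shell
# Detach startnode.sh from the terminal; it keeps running after logout.
# Output goes to node.log; the PID is saved so you can stop it later.
nohup ./startnode.sh > node.log 2>&1 &
echo $! > node.pid

# Later, to end it when you want:
#   kill "$(cat node.pid)"
```

The '&' backgrounds the job, nohup shields it from the hangup signal sent when you log out, and $! is the PID of the job just started.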

How can I run a Shell when booting up?

I am configuring an app at work that runs on an Amazon Web Services server.
To get the app running you have to run a shell script called "Start.sh".
I want this to be done automatically after the server boots.
I have already tried the following bash script in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use case is to install software, which should only happen once.
If you wish something to run on every boot, you could probably use the cloud-init per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
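Concretely, that means installing a small wrapper script. A sketch (on an instance the directory is typically /var/lib/cloud/scripts/per-boot; a demo path is used here as the default so the snippet can run anywhere without root):

```shell
# Install a per-boot hook for cloud-init. Set PER_BOOT_DIR to
# /var/lib/cloud/scripts/per-boot on the actual instance.
DEST="${PER_BOOT_DIR:-/tmp/per-boot-demo}"
mkdir -p "$DEST"
cat > "$DEST/start-app.sh" <<'EOF'
#!/bin/bash
cd /home/ec2-user/app_name/ || exit 1
sh Start.sh
EOF
chmod +x "$DEST/start-app.sh"
```

cloud-init runs everything in that directory, in alphabetical order, on every boot.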

How do I write script to start multiple services in centos?

I have a multi-node cluster of Hadoop, Kafka, Zookeeper, and Spark.
I am running the following commands to start the respective services:
$ ./Hadoop/sbin/start-all.sh
$ ./zookeeper/bin/zkServer.sh start
$ ./Kafka/Kafka-server-start.sh ./config/server-properties.sh
$ ./spark/sbin/start-all.sh
and so on..
Can anyone tell me how to write a script to automate this process, instead of running each command individually?
Have you tried putting all these commands in a simple shell script and running that script instead? For example, the following is a simple bash script:
#!/bin/bash
./Hadoop/sbin/start-all.sh
./zookeeper/bin/zkServer.sh start
./kafka/kafka-server-start.sh ./config/server-properties.sh
./spark/sbin/start-all.sh
and so on ...
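If one service fails to start, the plain script above just keeps going. A slightly more defensive sketch stops at the first failure (start_all is a made-up helper; the paths in the usage comment are the ones from the question):

```shell
#!/bin/bash
# start_all: run each start command in order, stopping at the first one
# that fails, so you don't keep launching services on a broken stack.
start_all() {
    local cmd
    for cmd in "$@"; do
        echo "starting: $cmd"
        # Word splitting of $cmd is intentional here, so arguments
        # like "start" in "zkServer.sh start" are passed through.
        if ! $cmd; then
            echo "failed: $cmd" >&2
            return 1
        fi
    done
}

# Intended usage:
#   start_all "./Hadoop/sbin/start-all.sh" \
#             "./zookeeper/bin/zkServer.sh start" \
#             "./Kafka/Kafka-server-start.sh ./config/server-properties.sh" \
#             "./spark/sbin/start-all.sh"
```

Each command is echoed before it runs, which makes it obvious from the output where startup stopped.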

Kafka in supervisor mode

I'm trying to run Kafka under supervision so that it can start automatically in case of a shutdown. But all the examples of running Kafka use shell scripts, and supervisord is not able to tell which PID to monitor. Can anyone suggest how to accomplish auto-restart of Kafka?
If you are on a Unix or Linux machine, then this is when /etc/inittab comes in handy. Or you might want to use daemontools. I don't know about Windows though.
We are running Kafka under Supervisord (http://supervisord.org/), and it works like a charm. The run command looks like this (as specified in the supervisord.conf file):
command=/usr/local/bin/pidproxy /var/run/kafka.pid /usr/lib/kafka/bin/kafka-server.sh -f -p /var/run/kafka.pid
The -f flag tells Kafka to start in the foreground. If the -p flag is set, the Kafka process PID is written to the specified file.
The pidproxy command is part of the Supervisord distribution. Upon receiving a KILL signal, it reads the PID from the specified file and forwards the signal to the corresponding process.
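For context, a sketch of what the surrounding supervisord.conf section might look like. Only the command= line is from the answer above; the other keys are standard supervisord options you would tune yourself:

```ini
; Hypothetical program section wrapping the pidproxy command shown above.
[program:kafka]
command=/usr/local/bin/pidproxy /var/run/kafka.pid /usr/lib/kafka/bin/kafka-server.sh -f -p /var/run/kafka.pid
autostart=true
autorestart=true
stdout_logfile=/var/log/kafka.out.log
stderr_logfile=/var/log/kafka.err.log
```

With autorestart=true, supervisord relaunches Kafka if the monitored process exits, which is what gives you the automatic restart after a shutdown.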
