Logstash cannot start because of multiple instances even though there are no instances of it running - elasticsearch

I keep getting this error when I launch Logstash:
[2019-02-26T16:50:41,329][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
I am using the CLI to launch Logstash. The command that I execute is:
screen -d -S logstash -m bash -c "cd;export JAVA_HOME=/nastools/jdk1.8.0_77/; export LS_JAVA_OPTS=-Djava.net.preferIPv4Stack=true; ~/monitoring/6.2.3/bin/logstash-6.2.3/bin/logstash -f ~/monitoring/6.2.3/config/logstash_forwarder/forwarder.conf"
I don't have any instance of Logstash running. I tried running:
ps xt | grep "logstash"
and it didn't return any process. I tried killall logstash as well, but to no avail; it gives me the same error. I also tried restarting my machine, but I still get the same error.
Has anyone experienced something similar? Kibana and Elasticsearch launch just fine.
Thanks in advance for your help!

The problem is solved now. I had to empty the contents of Logstash's data directory. I then restarted it and it generated the UUID and other files it needed.

To be more specific, you need to cd to Logstash's data folder (usually /usr/share/logstash/data) and delete the .lock file.
You can check whether this file exists with:
ls -lah
in the data folder.
Learned this from http://www.programmersought.com/article/2009814657/;jsessionid=282FF6001AFE90D7D8609975B8222CE8
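Putting those steps together, a minimal sketch looks like this (the path below is the package-install default; a tar install keeps its data directory under the Logstash home instead):
cd /usr/share/logstash/data    # adjust to your install's data directory
ls -lah                        # a stale lock shows up as ".lock"
rm .lock                       # remove it, then start Logstash again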

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ --path.data sensor39 -f /etc/logstash/conf.d/company_dump.conf --config.reload.automatic
Try this command; I hope it will work (but please check the .conf file path). The --path.data flag points this instance at its own data directory, so its lock does not collide with another instance's.
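Applied to the command from the question, the same idea would look roughly like this (the data_forwarder directory name is just an illustration; any per-instance directory works):
~/monitoring/6.2.3/bin/logstash-6.2.3/bin/logstash -f ~/monitoring/6.2.3/config/logstash_forwarder/forwarder.conf --path.data ~/monitoring/6.2.3/data_forwarder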

Related

How to execute gcloud command in bash script from crontab -e

I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from the command shell but not from the cron job.
I have tried with:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting this at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7; /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Also setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Ubuntu version: 18.04.3 LTS
Command to execute: gcloud config set project myproject
But nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As explained in this other post here, you need to modify your crontab so it can find your Cloud SDK configuration for the execution to occur properly; it doesn't seem that you have made this configuration.
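As a sketch, a crontab entry along these lines sources the user's profile so the Cloud SDK ends up on PATH before the script runs (the schedule and paths are the question's own; adjust them to your setup):
51 21 30 5 6 . $HOME/.profile; CLOUDSDK_PYTHON=/usr/bin/python2.7 /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt 2>&1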
Another option that I would recommend trying is using Cloud Scheduler to run your gcloud commands. This way, you can use gcloud for your cron jobs in a more integrated and easier way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
I found my error. The problem was only with the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think they were not executing at all. This command creates a temp file (e.g. transaction.yaml), and that file could not be created in the default path for gcloud (snap/bin), but the log simply didn't write anything. I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myprojectecho
echo "Project set to myprojectecho."
I hope this can help others in the future with similar issues, as this also helped me when trying to scale GKE nodes from 0 to 4 on a schedule.
Adding the lines below to the shell script fixed my issue:
# Execute user profile
source /root/.bash_profile

DynamoDB Local - missing tables when starting with bash alias

I've installed DynamoDB locally on my Mac (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html) and I've written a bash alias intending to avoid having to cd into the DynamoDB directory and run
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
every time I want to start the database. However, the alias doesn't seem to be working as expected...
First off, the alias that I've written is
alias ddb-start="java -Djava.library.path=~/Documents/dynamodb/DynamoDBLocal_lib -jar ~/Documents/dynamodb/DynamoDBLocal.jar -sharedDb"
and when I run $ ddb-start, the database starts as expected:
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
The problem is, unless I run the script from ~/Documents/dynamodb/, all of my tables are missing.
So if I cd to Documents/dynamodb/ and then run $ ddb-start, everything is perfect. But if I open a new terminal window and run $ ddb-start (or run it from anywhere other than Documents/dynamodb/), Dynamo appears to start up as it should but when I list the tables in the JavaScript Shell, there are no tables.
I was hoping to be able to run the alias from any directory and have Dynamo start and run correctly. Must I cd into the directory, even with an alias? Or is there something wrong with the alias that I've written?
Update: Ah, I've noticed that, whatever directory I run it from, a copy of shared-local-instance.db is created in that directory. I don't want that to happen; I want it to point at the 'original' shared-local-instance.db in ~/Documents/dynamodb/. How can I do that?
Figured it out - I was missing the -dbPath option in my alias. To run the alias from anywhere, I needed to specify where the shared db is located. The working alias is:
alias ddb-start="java -Djava.library.path=~/Documents/dynamodb/DynamoDBLocal_lib -jar ~/Documents/dynamodb/DynamoDBLocal.jar -sharedDb -dbPath ~/Documents/dynamodb/"
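With the updated alias, you can confirm from any directory that the existing tables are visible, for example with the AWS CLI pointed at the local endpoint (assuming the CLI is installed):
aws dynamodb list-tables --endpoint-url http://localhost:8000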

Is it possible to view docker-compose logs in the output window running in Windows?

docker-compose on Windows cannot be run in interactive mode.
ERROR: Interactive mode is not yet supported on Windows.
Please pass the -d flag when using `docker-compose run`.
When running docker-compose in detached mode, little is displayed to the console, and the only logs displayed under docker-compose logs appear to be:
Attaching to
which obviously isn't very useful.
Is there a way of accessing these logs for transient containers?
I've seen that it's possible to change the docker daemon's logging to use a file (without the ability to select the log location). Following this as a solution, I could log to the predefined log location, then execute a copy script to move the files to a mounted volume to be persisted before the container is torn down. This doesn't sound ideal.
The solution I've currently gone with (also not ideal) is to wrap the shell script parameter in a dynamically created proxy script which logs all output to the mounted volume.
tempFile=myproxy.sh
echo '#!/bin/bash' > $tempFile
echo 'do.the.thing.sh 2> /data/logs/log.txt' >> $tempFile
echo 'echo finished >> /data/logs/log.txt' >> $tempFile
Which then I'd call
docker-compose run -d doTheThing $tempFile
instead of
docker-compose run -d doTheThing do.the.thing.sh
docker-compose logs doTheThing
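For reference, the generated myproxy.sh ends up containing roughly the following (do.the.thing.sh and the /data mount are the placeholders used above):
#!/bin/bash
do.the.thing.sh 2> /data/logs/log.txt
echo finished >> /data/logs/log.txt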

File not found exception while starting Flume agent

I have installed Flume for the first time. I am using hadoop-1.2.1 and Flume 1.6.0.
I tried setting up a flume agent by following this guide.
I executed this command: $ bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties.template
It says:
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: ./logs/flume.log (No such file or directory)
Isn't the flume.log file generated automatically? If not, how can I rectify this error?
Try this:
mkdir -p ./logs
sudo chown `whoami` ./logs
bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties.template
The first line creates the logs directory in the current directory if it does not already exist. The second one sets the owner of that directory to the current user (you) so that flume-ng running as your user can write to it.
Finally, please note that this is not the recommended way to run Flume, just a quick hack to try it.
You are probably getting this error because you are running the command directly from the console; you first have to go to Flume's bin directory and run your command from there.
As @Botond says, you need to set the right permissions.
However, if you run Flume within a program, like supervisor or with a custom script, you might want to change the default path, as it's relative to the launcher.
This path is defined in your /path/to/apache-flume-1.6.0-bin/conf/log4j.properties. There you can change the line
flume.log.dir=./logs
to an absolute path of your choice; you still need the right permissions for that directory, though.
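For example, that line could point at an absolute directory like the one below (the path is just an assumption; pick a directory the Flume user can write to):
flume.log.dir=/var/log/flume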

Get Riak to start with chef

I need help getting Riak to work with Chef.
Currently, every time I chef an Amazon box with Riak 1.4.8 using the default Basho riak cookbook, I have to manually ssh into the machine, kill -9 the beam.smp process, then rm -rf /var/lib/riak/ring; only then can I do sudo riak start and it will work.
Prior to that I get:
Node 'riak@' not responding to pings.
I have even created a shell script:
#!/bin/bash
# Generated by Chef for <%= @node[:fqdn] %>
# <%= @node[:ec2][:local_ipv4] %>
# This script should be run by root.
riak stop
riakPid="/var/run/riak/riak.pid"
if [ -e "$riakPid" ]; then
kill -9 $(<${riakPid})
fi
rm -f /var/run/riak/*
rm -f /var/lib/riak/ring/*
riak start
And Chef says:
bash[/etc/riak/clearOldRiakInfo.sh] ran successfully
for the above script.
If I manually run that script, everything works fine. Why is this not cheffing properly?
UPDATE:
This has been solved by creating a script to delete the ring directory when the machine gets cheffed.
This would only happen when I created a new machine from scratch, because the fqdn would only get set correctly after Riak had started and created the ring. If I manually went on the box and deleted the ring, it would re-chef perfectly fine. So I had to create the script so that the very first Chef run on the machine would clean out the ring info.
Given the error message you provided, Riak is not starting because the Erlang node name is not being generated correctly. The Erlang node name configuration exists within vm.args and is produced by the node['riak']['args']['-name'] attribute.
The default for node['riak']['args']['-name'] is riak@#{node['fqdn']}. Please check the value Ohai is reporting for node['fqdn']. Alternatively, if you are overriding this attribute somewhere else, ensure that it produces a valid value for -name.
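For a quick check on the node itself, something along these lines shows what is being reported (the ohai binary is already present on Chef-managed machines):
ohai fqdn        # the value Chef/Ohai sees
hostname -f      # the value the OS reports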
A more detailed description of -name within vm.args can be found here.
