I'm testing Pushgateway alongside Prometheus, pushing simple data about my VMs' services every minute with bash scripts and curl, triggered by crontab.
I have two bash scripts.
The first one, cpu_usage.sh, parses the output of the ps aux command and sends it to Pushgateway via curl:
#!/usr/bin/env bash
z=$(ps aux)
while read -r z
do
var=$var$(awk '{print "cpu_usage{process=\""$11"\", pid=\""$2"\", cpu=\""$3"\"}", $3z}');
done <<< "$z"
curl -X POST -H "Content-Type: text/plain" --data "$var
" http://localhost:9091/metrics/job/top/instance/machine
Everything's OK on this one: every minute my crontab sends the formatted data to localhost:9091, my Pushgateway instance.
Now I'm trying to send a parsed result of the service --status-all command with a script called services_list.sh, handled exactly the same way as cpu_usage.sh:
#!/usr/bin/env bash
y=$(service --status-all)
while read -r y
do
varx=$varx$(awk '{print "services_list{service=\""$y"\"}", 1}');
done <<< "$y"
curl -X POST -H "Content-Type: text/plain" --data "$varx
" http://localhost:9091/metrics/job/top/instance/machine
When executing both scripts manually, as ./cpu_usage.sh and ./services_list.sh, everything's fine: Pushgateway successfully receives data from both.
But when these calls go through cron, only cpu_usage.sh sends data to Pushgateway (the timestamp of the last push on the services_list method stays unchanged).
My crontab syntax is: * * * * * cd /path/ && ./script.sh
The scripts are 777 / root:root, and the crontab is always edited as root. I've tried concatenating both scripts into one bash file, and no matter the order I put them in, the curl call for the services_list method is never made (while everything's OK for the cpu_usage method).
I'm a bit lost, as the two scripts are very similar and manual calls to services_list.sh work fine. Any thoughts? Thank you.
Running on Debian 9 with Pushgateway 0.10.
Fixed:
The service command is not on cron's minimal default PATH, so it isn't callable from crontab as-is. I have to provide the full path to the service binary when calling it from cron, like /usr/sbin/service --options.
With that change, everything runs fine, using y=$(/usr/sbin/service --status-all) in place of y=$(service --status-all) in my services_list.sh script.
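For reference, the fixed services_list.sh can be sketched like this; the " [ + ]  name" line format assumed by the helper is the usual service --status-all output, and the actual push is left as a comment so the formatting can be checked on its own:

```shell
#!/usr/bin/env bash
# Sketch of the fixed services_list.sh. format_services turns lines like
# " [ + ]  ssh" into Prometheus exposition lines.
format_services() {
    while read -r line; do
        [ -z "$line" ] && continue
        name=${line##* }          # keep the last field: the service name
        printf 'services_list{service="%s"} 1\n' "$name"
    done
}

# From cron, `service` needs its full path:
#   /usr/sbin/service --status-all 2>&1 | format_services |
#       curl -X POST -H "Content-Type: text/plain" --data-binary @- \
#            http://localhost:9091/metrics/job/top/instance/machine

# Quick check with a captured line:
format_services <<< " [ + ]  ssh"    # -> services_list{service="ssh"} 1
```

Piping straight into curl --data-binary @- avoids building up a shell variable at all.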
Related
I am using a shell script to find the TPS and error count in a single day and store the data in a txt file.
Now I want to use the same shell script to post data from that result file to a Slack group, but sometimes the command returns Bad Request, sometimes Invalid Payload, and sometimes it works.
curl -X -POST -H --silent --data-urlencode "payload={\"text\": \"$(cat <filepath>/filename.txt)\"}" "<slack url>"
Please help.
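Intermittent Invalid Payload errors are what you'd expect when filename.txt contains double quotes or newlines, since they land unescaped inside the JSON string (note also that -X -POST appears to set the HTTP method to the literal string -POST, and the bare -H consumes --silent as its header argument). A sketch of a pure-bash escaper; the file path and webhook URL stay as your placeholders:

```shell
#!/usr/bin/env bash
# Sketch: escape the report text so the JSON payload stays valid even
# when the file contains quotes, backslashes or newlines.
escape_json() {
    local s=$1
    s=${s//\\/\\\\}        # backslashes first, so later escapes aren't doubled
    s=${s//\"/\\\"}        # double quotes
    s=${s//$'\n'/\\n}      # newlines
    s=${s//$'\t'/\\t}      # tabs
    printf '%s' "$s"
}

# Usage against the real file would be:
#   text=$(escape_json "$(cat <filepath>/filename.txt)")
#   curl -X POST --silent -H 'Content-Type: application/json' \
#        --data "{\"text\": \"$text\"}" "<slack url>"

escape_json 'tps: 42, errors: "3"'    # -> tps: 42, errors: \"3\"
```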
Trying to run a curl command against a test page results in "no URL specified!" when run via a bash script. The test goes through a proxy (1.2.3.4 in the code example).
The command runs as expected from the command line. I've tried a few variations of quotes around the components of the command but still get the same "no URL specified" result.
echo $(curl -x http://1.2.3.4:80 https://example.company.com)
The expected result is an HTML file, however instead the above issue is found only when run from this small bash script file.
The command itself runs just fine as "curl -x http://1.2.3.4:80 https://example.company.com" from the command line.
Appreciate in advance any help you can provide!
I have literally edited it down to the following. Everything else is commented out. Still the same error.
#!/bin/bash
curl -x http://1.2.3.4:80 https://example.company.com
In your example you want to use double quotes around the subshell and single quotes for the curl parameters.
Make sure to set the correct shebang (#!/bin/bash).
You could also try to run it as:
cat <(curl -x http://1.2.3.4:80 https://example.company.com)
However, I am not able to reproduce your example.
Complete newbie with AWS and relative newbie with bash. I want to set up a quick pull-and-append call every 10 minutes to a small text file on the NYC site that contains some real-time traffic data that I want.
The command curl --silent http://[IP.stuff.nyc_traffic_data].txt >> data.txt does exactly what I want it to. So, I created cron.txt with the contents
*/10 * * * * curl --silent http://[IP.stuff.nyc_traffic_data].txt >> data.txt
crontab cron.txt doesn't throw any obvious errors, but I don't seem to be able to get data.txt to update.
What am I missing? (Have done the usual research to find an answer, but I think the problem here is that I don't know what I don't know, as it were.)
Thanks!
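Two classic cron gotchas fit these symptoms: the job runs from your home directory (so a relative data.txt may be appended somewhere you aren't looking), and it runs with a minimal PATH. Using absolute paths for both the binary and the output file rules both out; /usr/bin/curl and /home/youruser below are placeholder paths, check yours with which curl:

```shell
*/10 * * * * /usr/bin/curl --silent http://[IP.stuff.nyc_traffic_data].txt >> /home/youruser/data.txt
```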
I want to incorporate simple monitoring into my application, so I need to send an HTTP request containing the number of documents in a MongoDB collection from the crontab.
The requests are described on the page http://countersrv.com/ as follows:
curl http://countersrv.com/ID -d value=1
I need to query the mongodb from the command line and get the number of documents in the collection. It should be something like db.my_docs.count().
I want to send this number every hour so need to add something like this into crontab:
0 * * * * curl http://countersrv.com/ID -d value=...query mongo here...?
Not meaning to detract from the timely answer given by Victor, but the one-liner form of this would be:
mongo --quiet --eval 'var db = db.getSiblingDB("database"); print( "value=" + db.collection.count() );' | curl -X POST http://countersrv.com/[edit endpoint] -d @-
The --quiet suppresses the startup message of the shell, and --eval allows the commands to be passed on the command line.
To select the database you use .getSiblingDB(), the scripted equivalent of the interactive shell's use database helper, with the "database" name you want. After this, either just the "collection" name or the .getCollection() method can be used along with the basic count function.
Simply print() the "key/value" pair required and pipe to curl at the "edit endpoint" for countersrv, which is the default viewing page. The @- construct takes stdin.
I would avoid putting commands directly in the crontab; you probably have a directory /etc/cron.hourly, and the system crontab already runs all the scripts in those folders at set intervals (hourly, daily, and so on).
So, inside /etc/cron.hourly you can create a monitor.sh, and set its execution privilege with:
chmod +x /etc/cron.hourly/monitor.sh
Then, write a small JS script to retrieve the data, for example mongoscript.js. Note that the interactive use helper doesn't work in script files, so select the database with getSiblingDB() and print() the result explicitly, already in key=value form:
db = db.getSiblingDB("yourdb");
print("value=" + db.my_docs.count());
And your final monitor.sh will probably be something like:
#!/bin/bash
mongo --quiet mongoscript.js > output.txt
curl http://countersrv.com/ID -d @output.txt
(curl's -d @file syntax posts the file's contents; something like value=#output.js would be sent literally.)
I'm pretty new to using MySQL from the command line, so I really need some advice here.
Basically, I've written a bash script that backs up my databases on selected days via a cron job. It's working just fine, but I would like to know if there is any way to have any error messages from mysqldump emailed to me in the off chance that something goes wrong. Here's the key part of the code that does the dump:
mysqldump -u user -h localhost --all-databases | gzip -9 > $filename
Is there any way to set up a condition that would capture any error messages and send them in an email?
Blain
Use:
mysqldump -u user -h localhost --all-databases 2> error.log | gzip -9 > $filename
In particular, in bash you can redirect any output descriptor to something else by using the n> syntax, notice the LACK of space between n and > :)
Email the error.log to yourself :)
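If you'd rather do the mailing inside the backup script itself, a small guard that only mails when the log is non-empty avoids a message on every successful run. This assumes a configured mail command; the subject, address, and file names are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: mail the mysqldump error log only when it actually caught errors.
mail_if_nonempty() {            # usage: mail_if_nonempty <logfile> <subject> <address>
    if [ -s "$1" ]; then        # -s: file exists and is non-empty
        mail -s "$2" "$3" < "$1"
    fi
}

# In the backup script:
#   mysqldump -u user -h localhost --all-databases 2> error.log | gzip -9 > "$filename"
#   mail_if_nonempty error.log "mysqldump errors on $(hostname)" you@example.com
```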
You said you're using cron to run the job. Which is great, because cron already has a mail-sending feature built in: no need to write to a temporary file or use support scripts. Just do this:
MAILTO=yourname@example.com
00 14 * * * mysqldump ... | gzip -9 >filename # or invoke a script here
And like magic, any output from the job will be mailed to you.