I have an RPi 3B with Raspbian Stretch installed. When the Pi boots, I generate and send a message reporting its external IP, internal IP, and the SSID it's connected to.
I have a script that does this, and the line that constructs the message as JSON works fine when run as the pi user, or even as root after sudo su -:
echo -e "{\"time\":\""$(date)"\",\"hostname\":\""$(hostname)"\", \"distro\":\""$(cat /etc/issue)"\",\"extip\":\""$(/usr/bin/curl ipecho.net/plain)"\",\"ssid\":\""$(/sbin/iwgetid -r)"\",\"lanip\":\""$(/bin/hostname -I)"\"}" > /home/pi/hostinfo.txt
This script works fine when I run it locally (the values I expect are all there):
{"time":"Sat Sep 29 17:12:31 EDT 2018","hostname":"<expected hostname>", "distro":"Raspbian GNU/Linux 9
\l","extip":"<expected external IP>","ssid":"<expected SSID>","lanip":"<expected LAN IP>"}
I'm trying to run this at startup and the command runs fine and sends the message, but the last three values are blank:
-e {"time":"Sat Sep 29 17:12:31 EDT 2018","hostname":"",
"distro":"Raspbian GNU/Linux 9 \l","extip":"","ssid":"","lanip":""}
I've added the script call to both rc.local and crontab (not at the same time):
rc.local:
su pi -c '<path-to-script> > /home/pi/rpi-boot.log 2>&1'
crontab:
@reboot <path to script>
Either way, the script builds and delivers the message as expected and shows no errors in rpi-boot.log, but when it's run by rc.local or crontab the first three values (date, hostname, distro) are reported correctly while the last three are blank.
I thought using full paths to hostname, curl, and iwgetid would fix it, and they do work when I run the script from an interactive shell, but when it runs at startup those commands return nothing.
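One likely culprit (an assumption on my part, not something confirmed above): when rc.local and @reboot cron jobs fire, the WiFi link may not be associated yet, so curl, iwgetid, and hostname -I all return nothing. A minimal sketch that waits for the network before building the message:
#!/bin/bash
# Wait up to 60 seconds for the WiFi/LAN values to become available.
for i in $(seq 1 60); do
    [ -n "$(/sbin/iwgetid -r)" ] && [ -n "$(/bin/hostname -I)" ] && break
    sleep 1
done
# ...then run the existing echo line that writes /home/pi/hostinfo.txt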
Related
I'm attempting to have my Raspberry Pi use rsync to upload files to an SFTP server periodically throughout the day. To do so, I created a bash script and installed a crontab entry to run it every couple of hours during the day. If I run the bash script by hand, it works perfectly, but it never seems to run from crontab.
I did the following:
"sudo nano upload.sh"
Create the following bash script:
#!/bin/bash
sshpass -p "password" rsync -avh -e ssh /local/directory host.com:/remote/directory
"sudo chmod +x upload.sh"
Test running it with "./upload.sh"
Now, I have tried all the following ways to add it to crontab ("sudo crontab -e")
30 8,10,12,14,16 * * * ./upload.sh
30 8,10,12,14,16 * * * /home/picam/upload.sh
30 8,10,12,14,16 * * * bash /home/picam/upload.sh
None of these work: no new files are uploaded. I have another bash script running via method 2 above without issue. I would appreciate any insight into what might be going wrong. I have done this on eight separate Raspberry Pi 3Bs that all take photos throughout the day; the crontab upload works on none of them.
UPDATE:
Upon logging the crontab job, I found the following error:
Host key verification failed.
rsync error: unexplained error (code 255) at rsync.c(703) [sender=3.2.3]
This error also occurred if I ran my bash script without first connecting to the server via scp and accepting the host key. How do I get around this when calling rsync from crontab?
Check whether the script works at all (paste it into a shell).
Check whether your cron daemon is running properly: systemctl status crond.service.
The output should say "active (running)".
Then you can try adding a simple test job to cron: * * * * * echo "test" >> /path/you/want/file.txt
and check whether that job runs.
Thanks to the logging recommendation from Gordon Davisson in the comments, I was able to identify the problem.
The log showed the error mentioned in the question update above: rsync was choking on host key verification.
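For reference, capturing the job's output from cron can be as simple as redirecting it to a file (my example, not a line from the original posts):
30 8,10,12,14,16 * * * /home/picam/upload.sh >> /home/picam/upload.log 2>&1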
My solution: tell the ssh that rsync invokes not to enforce host key checking. I simply changed the upload.sh bash file to the following:
#!/bin/bash
sshpass -p "password" rsync -avh -e "ssh -o StrictHostKeyChecking=no" /local/directory host.com:/remote/directory
Working perfectly now -- hope this helps someone.
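A note on the trade-off: StrictHostKeyChecking=no accepts any host key, which weakens protection against man-in-the-middle attacks. A less permissive alternative (my suggestion, not part of the original answer) is to record the server's host key once, after which the unmodified script also works from cron:
# run once, as the same user the cron job runs under
ssh-keyscan host.com >> ~/.ssh/known_hosts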
I'd like to create a file - for example, let's call it /tmp/not_running_pods - such that reading it (cat /tmp/not_running_pods) runs kubectl get pods -o wide | grep -v Running and gives the output to the reading process.
This use case is simplified for the example's sake. I'm not really looking for alternatives, unless they fit this exact case: a file that produces output without a 'service' always running and listening for readers.
I'm having a hard time finding anything specific about this by searching. My local env is macOS, but I'm hoping for something generalizable to Linux/bash/zsh.
Edit: I finally found what was on the tip of my brain: something like inetd / a super-server. I'm still looking into whether it would work for this case.
when read cat /tmp/not_running_pods runs
A file is static. It exists.
An HTTP web server runs a PHP script (and much more) to generate the web page for you to view. An SSHD server runs a shell for you to connect with. A MySQL server speaks a specific protocol that allows clients to execute queries. To "do something" when a connection is made, sockets are typically used - network/TCP sockets, but also file (Unix domain) sockets - which allow a program to detect incoming connections with accept() and actually run an action on each such event.
# in one terminal
$ f() { echo new >&2; echo Hello world; LC_ALL=C date; }; export -f f; socat UNIX-LISTEN:/tmp/file,fork SYSTEM:'bash -c f'
new
new
# in the second terminal
$ socat UNIX-CONNECT:/tmp/file -
Hello world
Tue Jul 19 21:29:03 CEST 2022
$ socat UNIX-CONNECT:/tmp/file -
Hello world
Tue Jul 19 21:29:19 CEST 2022
If you really want to "execute an action when a file is read", then you have to create your own file system that does that. The primary examples are the files in /proc and /sys. For a user-space file system, write a program using FUSE.
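Somewhere in between the two: a named pipe (FIFO) looks like an ordinary file to readers, although, like the socat approach above, it still needs a writer process hanging around. A minimal sketch (my example, not from the original answer):
mkfifo /tmp/not_running_pods
while true; do
    # The redirection blocks until a reader opens the pipe, so kubectl
    # runs fresh for each cat of the file.
    { kubectl get pods -o wide | grep -v Running; } > /tmp/not_running_pods
done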
Instead of
$ cat /tmp/not_running_pods
just make ~/bin/not_running_pods:
#! /bin/bash
kubectl get pods -o wide | grep -v Running
with chmod 755 and do
$ not_running_pods
Easy, well-understood, well-supported.
On a CentOS 7.2 server the following command runs successfully manually -
scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
This command simply takes the file that has the current date in its filename from a directory on a remote server, stores it in a directory on the local server, and appends output to a log file in the same directory.
Public key authentication is setup so there is no prompt for the password when run manually.
I have it configured in crontab to run 3 minutes after the top of every hour as in the following format -
3 * * * * scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
However, I wait patiently and don't see any files being downloaded automatically.
I've checked the /var/log/cron logs and see an entry on schedule like this -
Feb 9 17:30:01 intranet CROND[9380]: (wzw) CMD (scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +")
There are other similar jobs set in crontab that work perfectly.
Can anyone offer suggestions/clues on why this is not working?
Gratefully,
Rakesh.
Use the full path for scp (or any other binary) in crontab:
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
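One more thing worth checking (my observation, not part of the original answer): the /var/log/cron entry above is truncated exactly at $(date +", which is where the first % appears. In a crontab line, an unescaped % ends the command (everything after it becomes stdin), so the date format needs escaping:
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"\%Y-\%m-\%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt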
I am trying to automate the running of several tasks, but I need to run them as sudo.
I want to run them in separate terminals so I can watch the output of each.
Here is a sort of minimal example I have set up (what I am actually trying to do is more complicated).
Set up two files - note that data is readable by root only and contains 3 lines of example text:
-rw------- 1 root root 33 Nov 15 09:29 data
-rwxrwxrwx 1 root root 11 Nov 15 09:30 test.sh*
test.sh looks like:
#!/bin/bash
cat data
read -p "Press enter to continue"
Also, I have a user-level variable called "SESSION_MANAGER" that is set up in the bash startup files... which seems to cause some issues (see the later examples).
So now I want to spawn various terminals running this script. I tried the following:
Attempt 1
xfce4-terminal -e './test.sh'
output:
cat: data: Permission denied
Press enter to continue
Attempt 2 - using sudo at the start
~/src/sandbox$ sudo xfce4-terminal -e './test.sh'
Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined
(xfce4-terminal:6755): IBUS-WARNING **: The owner of /home/openbts/.config/ibus/bus is not root!
output:
this is some data
more data
end
Press enter to continue
Here you can see that the contents of the data file printed OK, but I had some issue with the session manager variable.
Attempt 3 - using sudo in the command
~/src/sandbox$ xfce4-terminal -e 'sudo ./test.sh'
output:
[sudo] password for openbts:
this is some data
more data
end
Press enter to continue
Here you can see that everything went well... but I had to enter my password again, which somewhat kills my automation :(
Attempt 4 - start as root
~/src/sandbox$ sudo su
root#openbts:/home/openbts/src/sandbox# xfce4-terminal -e './test.sh'
Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined
output:
this is some data
more data
end
Press enter to continue
Here, again, the output looks good, but I have this SESSION_MANAGER issue... Also, the new xfce4-terminal comes up with a messed-up font/look - I guess this comes from the root user's settings.
Questions
How can I run multiple instances of test.sh, each in a new terminal, without having to enter passwords (or interact at all)? Entering the password once at the start of the process (in the original terminal) would be fine.
As you can see, I got this sort of working by going in with sudo su, but the issues there are the SESSION_MANAGER variable - not sure whether it actually matters, but it's very messy looking - and the fact that the xfce4-terminal looks bad (I guess I can change the root settings to match my user settings). So how can I avoid the SESSION_MANAGER issue when running as root?
If you change user ID before you launch your separate terminal, you will see the session manager issue. So the solution is to run sudo inside the terminal.
You do not want to type passwords for sudo. You can avoid that by adding
yourname ALL=(ALL) NOPASSWD: ALL
to /etc/sudoers (at least on Slackware). You could also try to set the permissions on the files correctly so you would not need root at all.
Note that adding that line has security implications; you might want to allow just cat without a password (in your example), or write more elaborate sudo rules. The line I gave is just an example. Personally, I would look at file permissions.
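For instance, a narrower rule (paths taken from the question; adjust as needed) could whitelist only the script, after which the terminals can be spawned unattended:
# /etc/sudoers (edit with visudo): allow only this script without a password
openbts ALL=(ALL) NOPASSWD: /home/openbts/src/sandbox/test.sh

# each terminal can then run the script as root without prompting
xfce4-terminal -e 'sudo /home/openbts/src/sandbox/test.sh'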
I'm configuring an Icinga2 server and want it to run local scripts on external machines using the check_by_ssh plugin, but I've encountered a strange issue. I've searched for an answer for a few hours, with no luck.
My command object looks as follows:
object CheckCommand "check_procs" {
  import "by_ssh"
  vars.by_ssh_logname = "root"
  vars.by_ssh_port = "22"
  vars.by_ssh_command = "/tmp/test.sh"
  vars.by_ssh_identity = "/etc/icinga2/conf.d/services/id_rsa.pub"
  vars.by_ssh_ipv4 = "true"
  vars.by_ssh_quiet = "true"
}
The content of test.sh is simply exit 0. There is a trust relationship between my Icinga box and the remote machine I'm running the command on.
When I execute the command through the shell, it works:
[root@icinga ~]# ssh root@10.10.10.1 -C "/tmp/test.sh"
[root@icinga ~]# echo $?
0
But when it is executed by the server, I see this output in Icingaweb2:
UNKNOWN - check_by_ssh: Remote command '/tmp/test.sh' returned status 255
Now, I added a touch success line to the test.sh script in order to see whether it is executed at all - and it seems it isn't. That means that when Icinga executes my script, it fails before even running it.
Any clues as to what this could be? There aren't many examples of check_by_ssh with Icinga2 online either.
NOTE: Icinga uses the root user to authenticate with the remote server. I know this is not best practice, but this is a development env.
UPDATE: I think I have found the issue. The problem is that I'm trying to use the root user to log in to the remote machine. This IS NOT supported, even with public key authentication. The script has to be executed as the icinga user.
2nd UPDATE: I got it working. The issues were key authentication, the fact that Icinga uses the icinga user to execute the command (even when the by_ssh_logname attribute is used), and the need to add vars.by_ssh_options = "StrictHostKeyChecking no".
My problem was that the RSA key files in use weren't owned by the "nagios" user. With the ownership corrected, they look like this:
-rw------- 1 nagios nagios 3.2K Nov 30 14:43 id_rsa
-rw-r--r-- 1 nagios nagios 766 Nov 30 14:42 id_rsa.pub
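The corresponding fix would be along these lines (my reconstruction; the filenames match the listing above):
# give the monitoring user ownership, and keep the private key private
chown nagios:nagios id_rsa id_rsa.pub
chmod 600 id_rsa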
I've found the issues; there were a few of them in my case.
1. Icinga used the icinga user to log in through SSH, even when I used -l root. So, to install the SSH keys I had to execute ssh-copy-id icinga@HOST as the root user (the icinga account's shell is set to /sbin/nologin).
2. I then copied the private key (again, of the root user) to the icinga folder so it is accessible to the application, and changed the ownership of the file.
3. Next, I tried logging in to the remote machine as the icinga user: sudo -u icinga ssh icinga@HOST -i id_rsa
4. If step 3 fails, you need to figure that out before you continue. The next thing I did was add StrictHostKeyChecking no to the module options.
Voila, this works now.
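Putting the pieces together, the resulting command object might look roughly like this (a sketch based on the steps above, not a verbatim config from the posts; note the identity must point at the private key, not the .pub file):
object CheckCommand "check_procs" {
  import "by_ssh"
  vars.by_ssh_port = "22"
  vars.by_ssh_command = "/tmp/test.sh"
  vars.by_ssh_identity = "/etc/icinga2/conf.d/services/id_rsa"
  vars.by_ssh_ipv4 = "true"
  vars.by_ssh_quiet = "true"
  vars.by_ssh_options = "StrictHostKeyChecking no"
}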