I want to lock my screen with the screensaver every 50 minutes (3000 seconds).
cat /home/rest.sh
while true; do
    sleep 3000
    xscreensaver-command --lock 1>/dev/null
done
Running sh /home/rest.sh & works.
Now I want to run it as a daemon.
sudo vim /etc/systemd/system/screensave.service
[Unit]
Description=screensave
[Service]
User=root
ExecStart=/bin/bash /home/rest.sh
StandardError=journal
[Install]
WantedBy=multi-user.target
To set it up and enable it as a daemon:
systemctl enable screensave.service
I find that the service is not working as a daemon:
sudo journalctl -u screensave
Jan 24 12:16:50 user systemd[1]: Started screensave.
Jan 24 12:17:22 user bash[621]: xscreensaver-command: warning: $DISPLAY is not set: defaulting to ":0.0".
Jan 24 12:17:22 user bash[621]: No protocol specified
Jan 24 12:17:22 user bash[621]: xscreensaver-command: can't open display :0.0
How can I run it as a daemon after $DISPLAY is set?
This is a very common FAQ. A system daemon cannot easily connect to the X session of any individual user. On a multi-user system, how do you tell which user's session to connect to, anyway? On a single-user system, what should the daemon do if no session is running (as it often isn't at the time the daemon starts up)?
Trying to run a system daemon as any particular user won't work, and giving individual users access to a system daemon is a recipe for security problems. It can be done, but the solution is complex, and probably not something you want to attempt on your own. (Briefly, have the daemon listen to commands on a socket; create a user-space program which knows how to talk to the socket, and build some sort of authorization and authentication so the daemon knows whom it's talking to and can verify that this user is allowed to connect to this display.)
The drop-dead simple solution is to run this from your desktop environment's startup scripts instead. Most desktops have something like "session start-up items" or "autorun on login" hooks.
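For example, on desktops that follow the XDG autostart convention, a minimal sketch could look like this (the file name is made up, and the X-GNOME key only applies to GNOME; adjust for your desktop environment):

# ~/.config/autostart/rest.desktop (hypothetical file name)
[Desktop Entry]
Type=Application
Name=rest
Comment=Lock the screen every 50 minutes
Exec=/bin/bash /home/rest.sh
X-GNOME-Autostart-enabled=true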
I'm not running Linux and can't check right now, but the steps to daemonize a process are: close stdin, stdout and stderr, change the current working directory to /, and fork twice and setsid so that the current process becomes a new session leader.
Add something like this at the beginning of the script, before the main loop. The first thing to check after running it is that the exec command created a new session-leader process, with ps -C bash -o sid,pgid,pid,ppid,comm,args
# check whether the current process is already a session leader, to avoid an endless re-exec loop
if [[ $(ps -p $$ -o sid=) -ne $$ ]]; then
    # re-exec ourselves in a new session, detached from the terminal, with
    # stdin/stdout/stderr pointed at /dev/null and the working directory set to /
    ( cd / ; exec setsid /bin/bash /home/rest.sh & ) </dev/null 1>&0 2>&0 &
    exit
fi
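After the script has re-execed itself, you can confirm that the new process is a session leader (its SID equals its PID) with the ps invocation mentioned above:

ps -C bash -o sid,pgid,pid,ppid,comm,args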
Related
I have a script which adjusts the screen brightness. It works fine, and now I want it to run after waking up from suspend.
So I tried using systemd, I have a file under /etc/systemd/system/myscript.service which is as follows:
[Unit]
Description=Run myscript after wakeup
After=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript
User=me
#Environment=DISPLAY=:0
[Install]
WantedBy=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target
Note: User is set because myscript needs the HOME variable.
After I run sudo systemctl enable myscript and try a suspend/wakeup cycle, myscript is not run, and journalctl -u myscript.service shows the following:
Jan 25 13:42:53 mymachine myscript[24489]: Can't open display
Jan 25 13:42:53 mymachine systemd[1]: myscript.service: Succeeded.
Jan 25 13:42:53 mymachine systemd[1]: Finished Run myscript after wakeup.
If I uncomment the #Environment=DISPLAY=:0 line in myscript.service, the error becomes "Can't open display :0".
Any help would be great :^)
This worked on my Arch system. I tested a script in that location with xbacklight going up and down by 75% a few times after a resume from hibernate or suspend (systemctl hibernate / suspend).
I can only think that you do not have the DISPLAY=:0 in your environment (verify with env) for the user you are running the script as.
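For example, from a terminal inside your graphical session:

env | grep -E 'DISPLAY|XAUTHORITY'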
I was having a similar problem. Fixed it by adding the following to my systemd service:
Environment="DISPLAY=<DISP>"
Environment="XAUTHORITY=/path/to/xauthority"
Replace <DISP> with the value of your $DISPLAY variable; this is usually :0.
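Putting that together with the unit from the question, a sketch of the whole service could look like this (the :0 display and the ~/.Xauthority path are assumptions; check echo $DISPLAY and echo $XAUTHORITY in your graphical session):

[Unit]
Description=Run myscript after wakeup
After=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target

[Service]
Type=oneshot
User=me
Environment="DISPLAY=:0"
Environment="XAUTHORITY=/home/me/.Xauthority"
ExecStart=/usr/local/bin/myscript

[Install]
WantedBy=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target

Run sudo systemctl daemon-reload and re-enable the unit after editing it.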
I have a simple example of a service unit and bash script on Red Hat Enterprise Linux 7 using Type=notify that I am trying to get working.
When the service unit is configured to start the script as root, things work as expected. When I add User=testuser it fails: while the script initially starts (it can be seen in the process list), systemd never receives the notify message indicating the service is ready, so the unit hangs and eventually times out.
[Unit]
Description=My Test
[Service]
Type=notify
User=testuser
ExecStart=/home/iatf/test.sh
[Install]
WantedBy=multi-user.target
test.sh (owned by testuser, with execute permission):
#!/bin/bash
systemd-notify --status="Starting..."
sleep 5
systemd-notify --ready --status="Started"
while [ 1 ] ; do
systemd-notify --status="Processing..."
sleep 3
systemd-notify --status="Waiting..."
sleep 3
done
When run as root, systemctl status test displays the correct status and the status messages sent from my test.sh bash script. With User=testuser the service hangs, then times out, and journalctl -xe reports:
Jul 15 13:37:25 tstcs03.ingdev systemd[1]: Cannot find unit for notify message of PID 7193.
Jul 15 13:37:28 tstcs03.ingdev systemd[1]: Cannot find unit for notify message of PID 7290.
Jul 15 13:37:31 tstcs03.ingdev systemd[1]: Cannot find unit for notify message of PID 7388.
Jul 15 13:37:34 tstcs03.ingdev systemd[1]: Cannot find unit for notify message of PID 7480.
I am not sure what those PIDs are, as they do not appear in the ps -ef listing.
This appears to be a known limitation of the notify service type. From a pull request to the systemd man pages:
Due to current limitations of the Linux kernel and the systemd, this command requires CAP_SYS_ADMIN privileges to work reliably. I.e. it's useful only in shell scripts running as a root user.
I've attempted some hacky workarounds with sudo and friends, but they don't work under systemd, generally failing with
No status data could be sent: $NOTIFY_SOCKET was not set
This refers to the socket that systemd-notify is trying to send data to. It's defined in the service's environment, but I could not get it reliably exposed to a sudo environment.
You could also try the Python workaround described here:
python -c "import systemd.daemon, time; systemd.daemon.notify('READY=1'); time.sleep(5)"
It's basically just a sleep, which is not reliable, and the whole point of using notify is to have reliable services.
In my case, I just refactored the unit to run as root, with the actual service running as a child process under the main service as the desired user.
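For illustration, that kind of refactor could look roughly like this: keep the unit running as root (drop the User=testuser line) and let the ExecStart script drop privileges for the real work. This is a sketch only, with a hypothetical worker script and runuser usage:

#!/bin/bash
# /home/iatf/test.sh, now running as root so systemd-notify is attributed to the unit
systemd-notify --status="Starting..."

# the actual service runs as a child process under the desired user
runuser -u testuser -- /home/iatf/worker.sh &

sleep 5
systemd-notify --ready --status="Started"
wait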
sudo -u USERACCOUNT_LOGGED notify-send "hello"
I have a unix script that invokes another script on a remote unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service stops, but when I start it back up it just creates the .pid file and does not perform the startup. When I run the start command, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. Going to the server and checking the status with
service aemauthor status
the message below is displayed:
aem dead but pid file exists
Also, when starting the service after logging in to the server, it works as expected, along with the message
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the aem service script.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends the HUP signal to all processes that were started in the terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command remotely via ssh, the process started by that command receives a HUP signal once the ssh session terminates.
We can use the nohup command to ignore the HUP signal.
You can try
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use the nohup command to start aem inside the service script.
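For illustration only, since the aem service script's internals are unknown, the start branch of an init-style script might detach the process like this (all paths here are assumptions):

start() {
    # detach the aem process so it ignores HUP when the invoking shell exits
    nohup /opt/aem/bin/start > /var/log/aem-start.log 2>&1 &
    echo $! > /var/run/aem.pid
}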
As mentioned in the stale pidfile syndrome, there are different reasons for pidfiles going stale, for instance issues with the way your script handles its removal when the process exits... but considering you're only experiencing this when running remotely, I would guess it is related to what is or isn't being loaded by your profile. Check the most-voted answer at the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even try executing env locally and remotely to compare which variables are being set or not.
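For example, to compare the two environments (the file names are just placeholders):

# locally, in an interactive login shell
env | sort > local.env
# what the remote non-interactive command actually sees
ssh ${AEM_USER}@${SERVERIP} env | sort > remote.env
diff local.env remote.env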
I'm trying to get Upstart sending me e-mails when a process is respawned.
So, following the Upstart stanzas documentation, here's my Upstart script for the ntpd service (just as an example):
/etc/init/ntpd.conf
### ntpd
script
mail -s "ntpd Service Respawned" my_email@gmail.com
control + D
end script
respawn
exec /etc/init.d/ntpd start
Then I reload the configuration (initctl reload ntpd) to get Upstart to pick up ntpd.conf's changes, and kill -9 the process to force a respawn.
Here's /var/log/message.log:
init: ntpd main process (12446) killed by KILL signal
init: ntpd main process ended, respawning
And the e-mail is never sent. I've tried with post-start and exec but it doesn't work either.
Any advice?
echo "ntpd Service Respawned" | mail -s "ntpd Service Respawned" my_email#gmail.com
Try with this.
Just solved this one.
What I did was add the following in my Upstart script:
respawn
pre-start script
mail -s "ntpd Service Respawned" my_address@gmail.com
control + D
end script
exec /etc/init.d/ntpd start
That works like a charm.
I think Upstart does pay attention to the order of the stanzas.
Thanks!!!
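Putting the two answers together, the whole /etc/init/ntpd.conf would look roughly like this (a sketch; the mail body is piped in so no interactive Ctrl+D is needed):

### ntpd
respawn

pre-start script
    echo "ntpd Service Respawned" | mail -s "ntpd Service Respawned" my_email@gmail.com
end script

exec /etc/init.d/ntpd start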
I have a script that will
1. kill all sshd processes
2. start a new sshd process
I would like to scp this script onto a remote computer and execute it using ssh. After it executes the first step of killing all sshd processes, will it still get to the second step of starting sshd again? I'm worried because I'm running the script over ssh, and that ssh connection will die after step 1.
The normal procedure is to stop the main sshd with e.g. /etc/init.d/sshd stop, or your distro's equivalent. This way the listening daemon shuts down while existing connections continue until the clients disconnect.
If you want to upgrade/replace sshd, change any settings and restart it, this is the way to go.
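A minimal sketch of such a script, assuming a SysV-style init script as in the answer above (existing ssh sessions, including the one running the script, survive the stop):

#!/bin/sh
# stop the listening daemon; already-established connections keep their own sshd children
/etc/init.d/sshd stop

# ... upgrade binaries or edit /etc/ssh/sshd_config here ...

# start a fresh listener
/etc/init.d/sshd start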
No need to scp it to the server; just try doing this:
while read -r cmd; do ssh -n server "bash -c '$cmd'"; done < script.sh
Why not use cron for it?
For example:
10 * * * * /path_to_script
minute hour day month dayofweek command
Do not forget to switch it off ;)