What I'm trying to do
Okay so I've been trying to create a custom lock screen on Ubuntu using i3lock.
I've achieved something I'm comfortable with as a lock screen, but now I want to use it instead of the default GNOME one.
So I'm using xautolock for that, which works well when typed into a terminal, but that means I have to enter the command manually each time I turn my PC on.
So right now I'm trying to make it a systemd service, and that's where I struggle.
Where I'm stuck
First problem: executing the xautolock command from a shell script doesn't work at all.
#!/bin/bash
xautolock -time 1 -locker mylock.sh # this line works in the CLI but blocks the console
(I played a chess game on my phone while waiting; it took well over a minute and nothing happened, whereas running the same command in a terminal works fine.)
So there's that problem, but wait, there's more!
I've written the .service file, and running mylock.sh from the service returns an exit status=1/FAILURE.
Hence my confusion: the line works in the console, and executing the shell script manually blocks the console as well, so I assume it isn't failing (even though it doesn't work as expected).
I've tried two different configs for the service:
First one with ExecStart=/usr/bin/xautolock -time 1 -locker <path_to_my_lock>/mylock.sh
Which outputs (code=exited, status=1/FAILURE)
In the other one I tried to execute a "lockservice.sh" script containing the xautolock line (which had no chance of working, given the previous tests):
ExecStart=/bin/bash /PATH/lockservice.sh
Same thing: (code=exited, status=1/FAILURE)
I ran systemctl daemon-reload every time, and I set all the shell scripts to mode 755 with chmod. I also tried running the service with User=root.
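For reference, a minimal unit along those lines looks like this (a sketch; only the ExecStart line is exactly as I used it, the rest is illustrative):

[Unit]
Description=Custom lock screen via xautolock
[Service]
ExecStart=/usr/bin/xautolock -time 1 -locker <path_to_my_lock>/mylock.sh
[Install]
WantedBy=multi-user.target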
What I would like
I basically want something that replaces the default GNOME lock screen with my custom lock script. Using xautolock and systemd services isn't mandatory for me; I'd just like to avoid having to run anything manually at startup.
Thanks in advance!
Related
I have a Python script that gets started automatically as a service (activated with systemd). In this Python script, I call a bash script using subprocess.call(script_file, shell=True).
When I call the Python script manually ($ python my_python_script.py), everything works perfectly. However, the automatically started program does not execute the bash script (the Python script itself does run; I checked this by making it edit a text file, which it indeed does).
I think I gave everyone read and write permissions to the bash scripts. Does anyone have ideas as to what I'm doing wrong?
Addendum: I want to write a small script that sends me my public IP address via Telegram. The service file looks like this:
[Unit]
Description=IPsender
After=networking.service
[Service]
Type=simple
User=root
WorkingDirectory=/home/pi/projects/tg_bot
ExecStart=/home/pi/miniconda3/bin/python /home/pi/projects/tg_bot/ip_sender_tg.py
Restart=always
[Install]
WantedBy=multi-user.target
Protawn, welcome to Unix & Linux Stack Exchange.
Why scripts work differently under systemd is a common question. Check out this answer to the general question elsewhere on the site.
Without the source code for your Python and Bash scripts it's hard to guess which difference you have encountered.
My personal guess is that your bash script is calling some other binaries without full paths, and those paths are found in your shell $PATH but not the default systemd path.
Add set -x to the top of your bash script so that every command is traced as it runs; systemd captures that output in the journal. Then, after it fails, use journalctl -u your-service-name to view your service's logs and look for the last command that bash executed successfully. Also consider adding set -e so the script stops at the first error.
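For example, a minimal debugging header might look like this (a sketch; the rsync call and its paths are placeholders for whatever your script actually does):

#!/bin/bash
set -e  # stop at the first failing command
set -x  # print each command before running it; systemd captures this in the journal

# Call binaries by absolute path: systemd's default PATH is much shorter
# than an interactive shell's.
/usr/bin/rsync -a /home/user/src/ /home/user/dest/

Each traced command shows up in the journal prefixed with +, so the last + line is the last thing bash ran.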
Despite the two "off-topic" close votes on this question, why things work differently under systemd is on topic for this Stack Exchange site.
I want to write a bash script that launches the Symfony built-in web server and then Firefox. The simple-minded script below fails because (I'm not sure of the correct jargon) the shell gets stuck on the first task. I guess the fix is simple, but I'm a newbie at this. Thanks.
#!/bin/bash
cd /var/www/mySymfonyProj
php bin/console server:run localhost:8080
/usr/bin/firefox http://localhost:8080
(moved comment to answer in order to "resolve" the question).
Add an & after the 4th line of the script to run that process in the background - the shell will launch that process, and move onto the next line (but will wait for the 5th line's command to finish).
At the end of the script, you may want to call wait to wait for the server to terminate, if that's desired.
#!/bin/bash
cd /var/www/mySymfonyProj
php bin/console server:run localhost:8080 &
/usr/bin/firefox http://localhost:8080
wait
For more information on job control, look at this source. It doesn't cover everything, but it covers a fair amount.
I'd also mention that $! holds the PID of the most recently backgrounded process, so you can keep track of the PIDs of various background tasks and then use wait to block until they've finished; that's often useful.
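A quick sketch of that pattern (the sleep commands stand in for real jobs):

#!/bin/bash
sleep 2 &    # start a background job
pid1=$!      # $! is the PID of the most recently backgrounded process
sleep 3 &
pid2=$!
wait "$pid1" "$pid2"    # block until both recorded jobs have exited
echo "both background jobs are done"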
I'm trying to set up a simple systemd timer to run a bash script every day at midnight.
systemctl --user status backup.service fails and logs the following:
backup.service: Failed at step EXEC spawning /home/user/.scripts/backup.sh: No such file or directory.
backup.service: Main process exited, code=exited, status=203/EXEC
Failed to start backup.
backup.service: Unit entered failed state.
backup.service: Failed with result 'exit-code'.
I'm lost, since the files and directories exist. The script is executable and, just to check, I've even set permissions to 777.
Some background:
The backup.timer and backup.service unit files are located in /home/user/.config/systemd/user.
backup.timer is loaded and active, and currently waiting for midnight.
Here's what it looks like:
[Unit]
Description=Runs backup at 0000
[Timer]
OnCalendar=daily
Unit=backup.service
[Install]
WantedBy=multi-user.target
Here's backup.service:
[Unit]
Description=backup
[Service]
Type=oneshot
ExecStart=/home/user/.scripts/backup.sh
[Install]
WantedBy=multi-user.target
And lastly, this is a paraphrase of backup.sh:
#!/usr/env/bin bash
rsync -a --delete --quiet /home/user/directory/ /mnt/drive/directory-backup/
The script runs fine if I execute it myself.
Not sure if it matters, but I use fish as my shell (started from .bashrc).
I'm happy to post the full script if that's helpful.
I think I found the answer:
In the .service file, I needed to add /bin/bash before the path to the script.
For example, for backup.service:
ExecStart=/bin/bash /home/user/.scripts/backup.sh
As opposed to:
ExecStart=/home/user/.scripts/backup.sh
I'm not sure why. Perhaps it's fish. On the other hand, I have another script running for my email, and that service file seems to run fine without /bin/bash. It does use default.target instead of multi-user.target, though.
Most of the tutorials I came across don't prepend /bin/bash, but I then saw this SO answer which had it, and figured it was worth a try.
The service file executes the script, and the timer is listed in systemctl --user list-timers, so hopefully this will work.
Update: I can confirm that everything is working now.
To simplify, make sure there is a hashbang (shebang) line at the top of the script named in ExecStart, e.g.:
#!/bin/bash
python -u alwayson.py
When this happened to me, it was because my script had DOS line endings, which mess up the shebang line at the top of the script. I changed the file to Unix line endings and it worked.
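For example, either of these converts the script in place (assuming GNU sed, or that dos2unix is installed):

sed -i 's/\r$//' backup.sh    # strip the carriage returns that DOS line endings add
dos2unix backup.sh            # equivalent, if the tool is available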
I ran across a Main process exited, code=exited, status=203/EXEC today as well and my bug was that I forgot to add the executable bit to the file.
If that is a copy/paste from your script, you've permuted this line:
#!/usr/env/bin bash
There's no #!/usr/env/bin, you meant #!/usr/bin/env.
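So the first line of backup.sh should read:

#!/usr/bin/env bash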
Try running:
systemctl daemon-reload
and then check the service status again:
service <yourservice> status
I faced a similar issue and fixed it by adding the executable permission:
chmod +x /etc/systemd/system/<service-filename>
This worked for me.
I actually used the answer from How do I run a node.js app as a background service? combined with what dwrz said above. In my case, I was creating a Discord bot that needed to run while I was not around.
With this service in place, I initially got the same error the original poster did, which brought me here. I was missing the #!/usr/bin/env node at the top of the node.js script being executed.
Since then, no problems, although I intend to see what else can be extended to the service itself.
I just wrote my first bash script to start some Redis instances on a development server. It mostly works, but the last Redis instance opened blocks the active terminal, even though I have the trailing & and the other instances don't block the terminal. How do I push them all to the background?
Here's the script:
#!/bin/bash
REDIS=(6379 6380 6381 6382 6383 6390 6391 6392 6393)
for i in "${REDIS[@]}"
do
    redis-server --port "$i" &
done
It sounds like your terminal is not actually blocked; your prompt just got overwritten. It's a purely cosmetic issue: due to the way terminals work, bash doesn't know to redraw the prompt, so it looks like a command is still in the foreground.
Run the script again and blindly type ls, then press Enter. You'll probably see that the shell responds as normal, even though you can't see the prompt.
Alternatively, just hit Enter to get bash to redraw the prompt.
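If the stray server output landing on top of your prompt bothers you, one purely cosmetic option is to redirect each server's output inside the loop, e.g. (log path is just an example):

redis-server --port "$i" >"/tmp/redis-$i.log" 2>&1 &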
I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one Ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I verified by making simple edits to the test script while the test marched along.
The reason I'm printing the PID is to double-check the behavior I'm seeing. On Windows, I get a different PID with each execution, which I would expect, considering that I see a new process in the task manager for each run. The Mac behaves correctly: the PID stays the same across every exec call, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short: is there a way to get a Ruby script on Windows to restart its own execution so that it runs any code (including itself) that changed while it was running? Please note that this is not a Rails application, though it does use ActiveRecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end) I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen and just exited, the spawn would frequently (more than 50% of the time) fail. This obviously isn't cross-platform, so to get the same behavior on the Mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you: you spawn a process running restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.