Shell script to call daemon - DAEMON: command not found - shell

Currently I can start a custom server like this:
cd /home/admin/service/build && ./service visual.dat
I'm trying to write a shell script that runs it as a daemon. I tried many things...
#!/bin/sh -e
cd /home/admin/service/build
DAEMON = "./service"
daemon_OPT="service.dat"
...
The response is:
admin#service:~$ sudo /etc/init.d/servicedaemon start
/etc/init.d/servicedaemon: line 3: DAEMON: command not found
Well, how do I launch the service from the daemon script the way I did from the shell? It's probably a path issue.
Thanks in advance.

I think you have to remove the spaces around "=":
DAEMON="./service"
As written, the shell tries to run a command called DAEMON (with = and "./service" as arguments) instead of assigning the variable.
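For reference, a minimal corrected sketch of the script (variable name normalized to DAEMON_OPT; filenames taken from the question's own script):
#!/bin/sh -e
cd /home/admin/service/build
# POSIX shell: no spaces around "=" in assignments
DAEMON="./service"
DAEMON_OPT="service.dat"
"$DAEMON" "$DAEMON_OPT"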

Related

Databricks init scripts halting

I am trying to install Confluent Kafka on my Databricks driver nodes and am using init scripts for that.
I am using the command below to write a script to DBFS:
%python
dbutils.fs.put("dbfs:/databricks/tmp/sample_n8.sh",
"""
#!/bin/bash
wget -P /dbfs/databricks/tmp/tmp1 http://packages.confluent.io/archive/1.0/confluent-1.0.1-2.10.4.zip
cd /dbfs/databricks/tmp/tmp1
unzip confluent-1.0.1-2.10.4.zip
cd confluent-1.0.1
./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties &
exit 0
""")
Then I edit my init scripts and add an entry there pointing to the above location:
[![init scripts entry adding][1]][1]
However, when I try to run my cluster, it never starts; it always halts. The event log shows that it is stuck at 'Starting init scripts execution.'
I know there should be a tweak in my script to run the server in the background, but I am already using & at the end of the start command for ZooKeeper.
Can someone give me a hint on how to resolve the above?
[1]: https://i.stack.imgur.com/CncIL.png
EDIT: I guess this question amounts to asking how I can run the above bash script from a %sh Databricks cell in a way that lets the cell finish, because at the moment it always tells me that the command is still running.
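One common cause of this symptom (not confirmed in the thread) is that a process started with & still inherits the script's stdout and stderr, so the caller keeps waiting on the open descriptors. A sketch of the start line with output fully detached (log path hypothetical):
# Redirect output and detach so the init script can exit
nohup ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties > /tmp/zookeeper.log 2>&1 &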

Why does launching a shell script from a Debian service not behave like the command line?

I have a Python bot that I launch from the command line, and it asks me for a login before it starts working.
To skip this step, I pipe the login directly into the command, and it works:
printf "login" | python_module-py
Now I want to schedule the bot's launch so I don't have to start it myself and the bot doesn't depend on my computer being always on.
So I bought a Debian VPS and tried to create a systemd service, putting the command in a shell script. Here is my service (my script is in /home/user and I have full rwx rights):
[Unit]
Description=Description
After=network.target
[Service]
Type=simple
User=user
Group=group
WorkingDirectory=/home/user
ExecStart=./script.sh
[Install]
WantedBy=multi-user.target
I tried to start it, but it failed with this error:
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
I'm almost sure that's because the login is not passed through the pipe, and I'm wondering why.
Please pardon my English.
[ALSO TESTED]
Keeping the command in the service using /bin/sh -c also gives the same error:
ExecStart=/bin/sh -c '/usr/bin/printf "login" | /usr/bin/python3.7 -m python_module'
The pipe won't work directly in an ExecStart statement; instead, use something like ExecStart=/bin/sh -c 'printf "login" | python_module-py' to let /bin/sh handle the pipe.
You should also be able to pass a file as standard input to your service by setting StandardInput.
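A minimal sketch of that second approach (StandardInput=file: requires systemd 236 or newer; login.txt is a hypothetical file containing the login):
[Service]
ExecStart=/usr/bin/python3.7 -m python_module
StandardInput=file:/home/user/login.txt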
I solved the problem.
In my case, I had to use the xargs command, which passes the piped text to the program as a command-line argument rather than on standard input:
printf "login" | xargs python_module-py

Run Script on AWS #reboot?

I'm currently trying to run a script in the background when my AWS instance boots, for the duration of the instance's life. I'm testing it with a simple script to see if it works before I try my more complicated one:
#!/bin/bash
while true; do
sleep 1
echo "Hello World" >> "tempStorage.json"
done
And my sudo crontab -l returns:
# All the comment stuff
@reboot sh /home/ubuntu/test/testScript/test.sh
Which is the path to the script. I've also obviously run chmod +x test.sh to make sure it's executable.
The problem is that when I stop and then start the AWS instance, there's nothing in the tempStorage.json file. I've checked other threads and they all suggest this is what I should be doing, so I'm very confused and advice would be appreciated. Thanks.
As Mark B mentioned, the issue is the execution directory of the cron script. There are two solutions then.
A) Change the path to the file, as Mark B recommended, so the script looks something like this:
#!/bin/bash
while true; do
sleep 1
echo "Hello World" >> "/home/ubuntu/test/testScript/tempStorage.json"
done
B) Change the directory of the cron execution and keep the script as it was. This works better if the script needs to live in an arbitrary directory. The crontab would look like this:
# All the comment stuff
@reboot cd /home/ubuntu/test/testScript && sh test.sh
That should work fine. I think the issue is that you aren't giving the full path to the tempStorage.json file within your script, so it is being written in a different folder than the one you are looking in, specifically whatever folder cron starts processes in by default. Try changing it to something like /tmp/tempStorage.json and then rebooting the server again.
Note that if you want something that starts on boot and runs forever, this probably isn't the best method. In that case I would look into running your process as a service.
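For example, a minimal systemd unit sketch (unit name hypothetical), saved as /etc/systemd/system/tempstorage.service and enabled with sudo systemctl enable --now tempstorage.service:
[Unit]
Description=Append Hello World to tempStorage.json

[Service]
ExecStart=/home/ubuntu/test/testScript/test.sh
WorkingDirectory=/home/ubuntu/test/testScript
Restart=always

[Install]
WantedBy=multi-user.target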

Jenkins fails with Execute shell script

I have my bash script in ${JENKINS_HOME}/scripts/convertSubt.sh.
My job has the build step Execute shell.
However, after I run the job, it fails:
The error message (i.e. the 0: part) suggests that there is an error while executing the script.
You could run the script with
sh -x convertSubt.sh
To be on the safe side, you could also do a
ls -l convertSubt.sh
file convertSubt.sh
before you run it.
Make sure that the script exists with ls.
There is no need for sh; just run ./convertSubt.sh (make sure you have run permissions).
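Putting the suggestions together, the Execute shell build step might look like this (a sketch, assuming the path from the question):
# Check the script exists and is executable, then run it
cd "${JENKINS_HOME}/scripts"
ls -l convertSubt.sh
file convertSubt.sh
chmod +x convertSubt.sh
./convertSubt.sh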

Chef run sh script

I have a problem trying to run a shell script via Chef (with Docker provisioning).
This is how I try to execute my script:
bash 'shell_try' do
  user "root"
  run = "#{some_path_to_script}/my_script.sh some_params"
  code " #{run} > stdout.txt 2> stderr.txt"
end
(Note that this script should run other scripts and processes and write logs.)
There are no errors in the output, but when I log into the machine and run ps aux, the process isn't running.
I guess something is wrong with permissions (or environment variables), because when I try the same command manually, it works.
A bash resource just runs the provided script text directly. If you want to run a long-running process, you would generally set up an Upstart or systemd service and use the service resource to start it.
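A minimal sketch of that suggestion (assuming a systemd unit for the script has already been installed as my_script.service, a hypothetical name):
# Enable and start the long-running process via its service manager
service 'my_script' do
  action [:enable, :start]
end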
Finally found a solution (thanks to @coderanger):
Install supervisor: download the supervisor cookbook and add:
include_recipe 'supervisor::default'
Add my service to supervisor:
supervisor_service "name" do
  action :enable
  #action :start
  command '/path/script.sh start'
end
Run the supervisor service.
All done!
Please see the Chef documentation for this resource: https://docs.chef.io/resource_bash.html. The bash resource does not support a run attribute; the text of the code attribute is run as a bash script. The default action is to run the script unless told otherwise by the resource.
bash 'shell_try' do
  user "root"
  code "#{some_path_to_script}/my_script.sh some_params > stdout.txt 2> stderr.txt"
  action :run
end
The text of the code attribute is written to a temporary file, which is then run using the attributes specified in the resource.
The line run = "#{some_path_to_script}/my_script.sh some_params" at this point does nothing.
