dos2unix stopped working when started from crontab - shell

I have a shell script which has been running for a couple of years without a problem. It's very simple - it just runs dos2unix on some files, each time checking the exit code and aborting the script if $? is not 0.
Last Wednesday dos2unix started giving an exit code of 1, so the script aborts. No apparent changes were made on the server, and if I run the script from the command line it works - just not when started by cron.
The conversion itself seems to happen OK; the issue arises when dos2unix renames the temp file it created.
It seems like it should be really simple, but I have run out of ideas for what to check or try next. Anyone have any ideas?
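For reference, the pattern described can be sketched like this. It is a minimal sketch, not the original script: the file name is a hypothetical placeholder, a sample CRLF file is created so the snippet is self-contained, and tr stands in when dos2unix is not installed.

```shell
#!/bin/bash
# Create a sample DOS-format (CRLF) file so the example is self-contained;
# the real script would operate on existing files.
printf 'line one\r\nline two\r\n' > sample.txt

for f in sample.txt; do
    if command -v dos2unix >/dev/null 2>&1; then
        dos2unix "$f"
    else
        # stand-in conversion when dos2unix is unavailable
        tr -d '\r' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
    if [ $? -ne 0 ]; then          # abort on any non-zero exit code
        echo "conversion failed on $f, aborting" >&2
        exit 1
    fi
done
```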

Related

Bash file running fine manually but on cronjob stops

I've created a bash file that queries my database and then updates some tables.
When I run it manually everything goes smoothly, but when I run it with a cronjob it runs the first query and then stops before it gets into the loop.
After looking into it on the net I found a few things that might be the issue, but from my side everything looks in order.
So what I did:
Checked that #!/bin/bash is included at the start of my script, and it is.
Checked that the path is correct in the cronjob. My cronjob is below:
0-59/5 * * * * cd /path/path2/bashLocation/; ./bash.sh
The loop is in the format of
for ID in ${IDS//,/ }
do
...do something
done
This works fine when tested manually. My IDS variable is a comma-separated string, which is why I split it with ${IDS//,/ }. (Works fine.)
I log all outputs in a log file but it doesn't show any error.
Has anyone encountered this issue before or has any ideas how to fix the issue?
If the command you are running in cron contains percent signs ('%'), they need to be escaped with a backslash. I've been bitten by this. From the crontab(5) manpage: "Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters."
The $PATH variable may be different when run from cron. Try putting something like this at the beginning of your script: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Try running bash explicitly, i.e. rather than ./bash.sh in crontab, try /bin/bash bash.sh
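Combining the suggestions above, a defensive crontab entry would look like the following (the log path is a hypothetical choice; redirecting output is just for debugging):

```shell
# explicit PATH for the cron environment, explicit interpreter,
# and all output captured so failures leave a trace
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0-59/5 * * * * cd /path/path2/bashLocation/ && /bin/bash ./bash.sh >> /tmp/bash.log 2>&1
```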
I don't know how helpful this may be to some people, but I noticed that when I ran printenv in my logs it printed that the shell was /bin/sh, even though I define bash at the top of my script and run it as a bash file.
So what I did was change all the parts of my code that were not supported by plain sh, and my cronjob now works fine.
So I assume that my cronjob does not run bash scripts with bash. (Didn't find anything on the internet about this.)
Why it runs in /bin/sh I don't know.
Hope someone finds this helpful.
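One quick way to confirm which interpreter is actually executing a script is shown in this minimal sketch: BASH_VERSION is set only by bash itself, so a script launched as "sh script.sh" reports plain sh even if it starts with #!/bin/bash.

```shell
# Detect the running interpreter from inside the script.
# BASH_VERSION is a bash-internal variable; other shells leave it unset.
if [ -n "${BASH_VERSION:-}" ]; then
    shell_kind=bash
else
    shell_kind=sh
fi
echo "running under $shell_kind"
```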

Problems running bash script from incron

I have a simple incron task set up to run a command whenever a particular .json file is written to, then closed.
/var/www/html/api/private/resources/myfile.json IN_CLOSE_WRITE,IN_NO_LOOP /var/www/html/api/private/resources/run_service.sh
I can see that whenever the file is written to, there is a syslog entry for the event and the command that was triggered - along the lines of <date> - incrond: CMD (/var/www/html/api/private/resources/run_service.sh).
But nothing seems to happen...
Initially I thought this was caused by an issue with the script, but replacing the script's command with something simple such as echo "hello world" > /tmp/mylog.log still yields no output or results. I seem to have hit a brick wall with this one!
Update
Changing the incron command to "/bin/bash /var/www/html/api/private/resources/run_service.sh" now seems to trigger the script correctly, as I can now get output from it.
A simple mistake on my part: despite all the examples online showing that using the script as the command should run it, for me it only works if I explicitly call bash to execute it:
<directory/file to watch> <trigger condition> /bin/bash /var/www/html/api/private/resources/run_service.sh
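The usual reason a script has to be prefixed with /bin/bash is a missing shebang line or a missing execute bit. A throwaway demonstration (paths under /tmp are hypothetical) of why the bare command form normally works:

```shell
# A script with a shebang plus the execute bit can be run directly,
# which is what incron does with the bare command form.
cat > /tmp/run_service_demo.sh <<'EOF'
#!/bin/bash
echo "hello from the service script" > /tmp/mylog_demo.log
EOF
chmod +x /tmp/run_service_demo.sh

/tmp/run_service_demo.sh        # no explicit interpreter needed
cat /tmp/mylog_demo.log
```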

P4::run_edit giving error when run for a long time in a ruby script

I made a ruby script in which I first initialized one P4 object, $p4, setting $p4.user and $p4.password and then executing $p4.connect and $p4.run_login.
Then I used P4::run_edit to open files for editing in the same script. For the first 3-4 hours of execution the command successfully opened many files for editing and allowed the script to modify them, but after that the script was not able to open one file for editing and gave the error below.
/.rvm/rubies/ruby-1.9.3-p286/lib/ruby/site_ruby/1.9.1/P4.rb:129:in `run': [P4#run] Errors during command execution( "p4 edit Filename" ) (P4Exception)
Can somebody tell me what the reason might be and how to fix it? I thought the problem occurred because the script ran for many hours and it was not possible for Perforce to keep the connection established for such a long time, but I am not sure about it.

Executing curl from a shell script

How can I invoke curl command correctly from a shell script?
I have a script that actually works in one environment but doesn't on other:
I've researched a lot but still don't know what the problem is. It has to be related to the fact that I'm trying to send a date-time parameter that contains a space (which I have replaced with %20). The script runs without errors, but it never reaches the URL (I can tell because I see no activity on the destination service).
dateTo=$(date +"%Y-%m-%d%%20%H:%M:%S")
dateFrom=$(date --date='8 hour ago' +"%Y-%m-%d%%20%H:%M:%S")
/usr/bin/curl -k "https://aurl.com/JobHandlerWeb/JobSchedulerServlet?jobId=2&busSvcId=1&receivedFromDate=$dateFrom&receivedToDate=$dateTo"
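As an aside, curl can do the percent-encoding itself: with -G, each --data-urlencode pair is appended to the URL as a query parameter, so the timestamps can keep a literal space. This is a sketch using the URL from the question; --max-time and || true are only there so the snippet fails gracefully when the host is unreachable.

```shell
dateTo=$(date +"%Y-%m-%d %H:%M:%S")
dateFrom=$(date --date='8 hour ago' +"%Y-%m-%d %H:%M:%S")   # GNU date syntax

# -G moves the --data-urlencode pairs into the query string,
# encoding the spaces as %20 automatically
curl -G -k -s --max-time 10 \
     "https://aurl.com/JobHandlerWeb/JobSchedulerServlet" \
     --data-urlencode "jobId=2" \
     --data-urlencode "busSvcId=1" \
     --data-urlencode "receivedFromDate=$dateFrom" \
     --data-urlencode "receivedToDate=$dateTo" || true
```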
I found the issue: a file format problem.
I had created the file under Windows, and that made all the difference. Even though I thought it was the same script running fine in one environment, the file formats (end-of-line characters) were different.
Corrected using Notepad++ --> Edit --> EOL Conversion.
Thanks for your help attempts.

Looping in Bash: syntax error: unexpected end of file

I'm new to this Bash/shell stuff and have to do some network analysis for a uni assignment.
I'm just trying to do a simple loop but am getting a weird error that I have been unable to fix despite a two-hour Google crawl:
#!/bin/bash
x=1
while [ $x -le 5 ]
do
echo "Welcome $x times"
x=$(( $x + 1 ))
done
I have tried using a for loop using the following syntax:
#!/bin/bash
for i in {1..5}
do
echo "Welcome $i times"
done
Whenever I place the first script on my server I get the following message:
./temp.sh: line 8: syntax error: unexpected end of file
Before running this file I have performed the following commands to add permissions and make the file executable:
ls -l temp.sh
chmod +x temp.sh
Just as a side note, I found the following stackoverflow question about loops, copied the 'fixed code', and got the same error: Looping a Bash Shell Script
I'm running version 4+ and using gVim as the text editor on Windows 7. Anyone got any ideas?
MANAGED TO SOLVE THIS MYSELF:
Seeing as my reputation is still too low, I can't answer this myself at present, so to stop people wasting their time, here is how I fixed it:
Ok, I've managed to fix this, so I will leave this up for anyone else who looks around.
My problem was that I was using FileZilla to connect to my server. It seems that even though I was using WinVi or gVim to create the files, FileZilla was adding some form of extra characters to the end of my file when I transferred it to the server, as I'm running Windows.
I switched to WinSCP as my connection agent, tried it, and hey presto, it worked fine.
Thanks for the other answers.
Lewis
Before running the bash script, use the dos2unix command on it:
dos2unix <bashScript>
./<bashScript>
Do you have a newline after the done? That might account for the trouble.
Does it work with the bash in Cygwin on your machine? (It should; it works fine when copied to my Mac, for example, with any of sh, bash, or ksh running it.)
I also got this problem. The solution is: while writing the script, check in the Edit menu that EOL Conversion is set to Unix format. It should be Unix format for a shell script.
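To confirm DOS line endings directly, rather than guessing from the editor, the carriage returns can be made visible with standard tools. A demonstration with a throwaway file under /tmp:

```shell
# create a file with DOS (CRLF) line endings for demonstration
printf 'echo hello\r\n' > /tmp/crlf_demo.sh

# cat -A shows each carriage return as ^M before the end of line
cat -A /tmp/crlf_demo.sh

# count the lines that end in a carriage return
crlf_lines=$(grep -c $'\r' /tmp/crlf_demo.sh)
echo "lines with CR: $crlf_lines"

# strip them in place - the same effect dos2unix has
sed -i 's/\r$//' /tmp/crlf_demo.sh
```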
