I'm running a cronjob that calls a php script. I get "failed to open stream" when the file is invoked by cron. When I cd to the directory and run the file from that location, all is well. Basically, the include_once() file that I want to include is two directories up from where the php script resides.
Can someone please tell me how I can get this to work from a cronjob?
There are multiple ways to do this. You could cd into the directory in your cron entry:
cd /path/to/your/dir && php file.php
Or point to the correct include file relative to the current script in PHP:
include dirname(__FILE__) . '/../../' . 'includedfile.php';
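As a concrete illustration of the first approach, a full crontab entry could look something like this (the schedule, directory, and php binary location are only placeholders):
# run hourly from the script's own directory so relative includes resolve
0 * * * * cd /path/to/your/dir && /usr/bin/php file.php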
cron is notorious for starting with a minimal environment. Either:
have your script set up its own environment;
have a special cron script which sets up the environment and then calls your script (sketched below); or
set up the environment within crontab itself.
An example of the last (which is what I tend to use if there aren't too many things that need setting up) is:
0 5 * * * (export PATH=/mydir:$PATH ; myexecutable)
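And a minimal sketch of the second option, a wrapper that sets up the environment and then calls the real script (all paths here are placeholders):
#!/bin/bash
# cron_wrapper.sh: give the job the environment cron doesn't provide,
# then hand off to the real executable.
export PATH=/mydir:/usr/local/bin:/usr/bin:/bin
exec /path/to/myexecutable "$@"
The crontab entry then simply calls the wrapper:
0 5 * * * /path/to/cron_wrapper.sh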
You need to see what path the script is being run from under cron:
$path_parts = pathinfo($_SERVER["PATH_TRANSLATED"]);
and according to this do the include:
include $path_parts['dirname']."/myfile.php";
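If you want to confirm what environment and working directory cron actually gives your job, a throwaway crontab entry like this (the log path is arbitrary) dumps them to a file:
* * * * * env > /tmp/cron_env.txt; pwd >> /tmp/cron_env.txt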
I have the following command in a file called $stat_val_result_command.
I want to add the -Xms1g parameter at the end of the file so that it looks like this:
<my command in the file> -Xms1g
However, I want to run this command after appending to it. I am running this in a workflow system called "nextflow". I tried many things, including the following, but it does not work. Check the script section, which runs in Bash by default:
process statisticalValidation {
input:
file stat_val_result_command from validation_results_command.flatten()
output:
file "*_${params.ticket}_statistical_validation.txt" into validation_results
script:
"""
echo " -Xms1g" >> $stat_val_result_command && ```cat $stat_val_result_command```
"""
}
Best to avoid appending to or manipulating input files localized in the workdir as these can be, and are by default, symbolic links to the original files.
In your case, consider instead exporting the JAVA_TOOL_OPTIONS environment variable. This might or might not work for you, but might give you some ideas if you have control over how the scripts are being generated:
export JAVA_TOOL_OPTIONS="-Xms1g"
bash "${stat_val_result_command}"
Also, it's generally better to avoid localizing and running scripts like this. It might be unavoidable, but usually there are better options. For example, third-party scripts, like your Bash script, could be handled more simply:
Grant the execute permission to these files and copy them into a folder named bin/ in the root directory of your project repository. Nextflow will automatically add this folder to the PATH environment variable, and the scripts will automatically be accessible in your pipeline without the need to specify an absolute path to invoke them.
This of course assumes you can control and parameterize the process that creates your Bash scripts.
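A rough sketch of that bin/ approach, assuming a hypothetical helper script named statistical_validation.sh:
# from the root of your pipeline repository:
mkdir -p bin
cp statistical_validation.sh bin/
chmod +x bin/statistical_validation.sh
# any process script block can now invoke statistical_validation.sh
# by name, with no absolute path needed.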
Set-up
I have 3 .txt files containing commands to be executed each day.
The berlin_run.txt file executes, among other things, the 2 other .txt files. The file is the following:
#!/bin/bash
cd /path/to/folder/containing/berlin_run.txt
PATH=$PATH:/usr/local/bin
export PATH
./spider_apartments_run.txt
./spider_rooms_run.txt
python berlin_apartments_ads.py;python berlin_rooms.py
When I cd to /path/to/folder/containing/berlin_run.txt in my MacOS Terminal, and execute the ./berlin_run.txt command, everything works fine.
It is my understanding that ./ opens the berlin_run.txt, and that #!/bin/bash ensures that the subsequent lines in the berlin_run.txt are automatically executed upon opening.
Problem
I want to automate the execution of berlin_run.txt.
I have written the following cronjob,
10 13 * * * /path/to/folder/containing/berlin_run.txt
It is my understanding that this cronjob should open the berlin_run.txt each day at 13:10. Assuming that is correct, #!/bin/bash should execute all the subsequent lines. But nothing seems to happen.
Where and what am I doing wrong here?
I have a script which, when run interactively, works perfectly.
The directory structure is as follows:
/home/username/processing/ScriptRunning
/home/username/processing/functions/include_me
In the script, it opens another script, which contains a function by simply doing this:
#!/bin/bash
#This is ScriptRunning script
. functions/include_me
Now when I call the script using the following nohup command:
nohup /home/username/processing/ScriptRunning
this is the output:
/home/username/processing/ScriptRunning: line 3: /home/username/functions/include_me: No such file or directory
It seems to be missing the processing directory.
I've altered the line within ScriptRunning to use a full path, both hardcoded to /home/username/processing and as a variable created by calling $(pwd), but the error is the same.
Am I really missing something so stupid?
This isn't a nohup issue. You are including a source file using a relative file name. Try:
. $(dirname ${BASH_SOURCE})/functions/include_me
to include a source file located relative to ${BASH_SOURCE}
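Applied to the directory layout above, a fixed-up ScriptRunning would look something like this sketch (only the include line changes):
#!/bin/bash
# This is ScriptRunning script
# Resolve the include relative to this script's own location rather than
# the caller's working directory, so nohup can start it from anywhere.
. $(dirname ${BASH_SOURCE})/functions/include_me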
I've written a bash script that executes a python script to write a file to a directory, then sends that file to Amazon S3. When I execute the script from the command line it executes perfectly, but when I run it with cron, the file writes to the directory, but never gets sent to S3. I must be doing something wrong with cron.
Here is the bash script:
#!/bin/bash
#python script that exports file to home directory
python some_script.py
#export file created by python to S3
s3cmd put /home/bitnami/myfile.csv s3://location/to/put/file/myfile.csv
Like I said before, manually executing works fine using ./bash_script.sh. When I set up the cron job, the file writes to the directory, but never gets sent to S3.
my cron job is:
18 * * * * /home/bitnami/bash_script.sh
Am I using cron incorrectly? Please help.
Cron looks OK; however, the path to your .py file will not be found.
You will have to add a path or home directory, like:
location=/home/bitnami/
python $location/some_script.py
Also s3cmd needs to be located correctly:
/bin/s3cmd
Alternatively, you might also need to load your user environment before executing the script so that s3cmd can find the username/password/ssh key it needs.
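Putting those suggestions together, a version of the script that does not rely on cron's environment might look like this sketch (the python and s3cmd locations are assumptions; check yours with 'which python' and 'which s3cmd'):
#!/bin/bash
# use absolute paths everywhere so nothing depends on cron's minimal PATH
location=/home/bitnami
/usr/bin/python "$location/some_script.py"
/usr/bin/s3cmd put "$location/myfile.csv" s3://location/to/put/file/myfile.csv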
I have two ruby script cron jobs that I'm trying to run under Ubuntu 10.04.2 LTS on an AWS EC2 instance. They are both failing silently - I see them being run in /var/log/syslog, but there's no resulting files, and piping the output into a file creates no result.
The scripts are based on the ruby sql backups here:
http://pauldowman.com/2009/02/08/mysql-s3-backup/
(It's a full backup of the db and an incremental bin-log output. Not sure that matters.)
The script works fine if run from the command line by either root or another user: it runs, and I see the files appearing in the S3 repo.
I've tested cron with a simple "touch ~/foo" type entry and that worked fine.
My cron entry under root is this:
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb
Appreciate any help or debugging suggestions. My thought is that some of the ruby library dependencies might not be available when cron is running the job. But I don't understand why I can't seem to get any output at all returned to me. Very frustrating. Thanks.
The full_backup.rb script you link to contains this:
cmd = "mysqldump --quick --single-transaction ...
#...
run(cmd)
Notice that there is no full path on mysqldump. Cron jobs generally run with a very limited PATH in their environment and I'd guess that mysqldump isn't in that limited PATH. You can try setting your own PATH in your crontab:
PATH='/bin:/usr/bin:/whatever/else/you/need'
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb
Or in your Ruby script:
ENV['PATH'] = '/bin:/usr/bin:/whatever/else/you/need'
Or specify the full path to mysqldump (and any other external executables) in your backup script.
I'd go with one of the latter two options (i.e. specify ENV['PATH'] in your script or use full paths to executables) as that will reduce your dependence on external factors and these will also help avoid issues with people having their own versions of commands that you need in their PATH.
A bit of error checking and handling on the run call might also be of use.
If any of the necessary Ruby libraries weren't accessible (either due to permissions or path issues) then you'd probably get complaints from the script.
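Since the jobs currently fail silently, it also helps to make sure stderr is captured while you debug; redirecting only stdout will miss Ruby exceptions. An illustrative crontab entry (the log path is arbitrary):
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb >> /tmp/incremental_backup.log 2>&1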