Cron job gets a file from a server; what should I add to the script to have it check again in 15 minutes if the file is unchanged? (bash)

The Crontab Day of the Week syntax question does not provide a solution.
I have a cron job set up that uses wget to download a file and then generates a report.
How should I amend the script so that, if the file on the server hasn't been updated yet, it retries the job after 15 minutes?
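One common approach is to compare a checksum of the downloaded file against the previous copy, and sleep 15 minutes between attempts until it changes. A minimal sketch, assuming bash with md5sum available; the URL, local path, and the report step are placeholders, not taken from the question:

```shell
#!/bin/bash
# Sketch: re-download until the file's checksum changes, retrying every 15 min.

# Print the md5 checksum of a file, or nothing if the file doesn't exist yet.
checksum() { md5sum "$1" 2>/dev/null | cut -d' ' -f1; }

# Fetch the URL; if the file is unchanged, wait 15 minutes and try again,
# up to a bounded number of tries so the job cannot loop forever.
fetch_and_report() {
    local url="$1" local_file="$2" tries="${3:-4}"
    local old new
    old=$(checksum "$local_file")
    for ((i = 0; i < tries; i++)); do
        wget -q -O "$local_file" "$url"
        new=$(checksum "$local_file")
        if [ -n "$new" ] && [ "$new" != "$old" ]; then
            echo "file updated, generating report"   # placeholder for the report step
            return 0
        fi
        sleep 900   # 15 minutes
    done
    echo "file never changed, giving up" >&2
    return 1
}

# Example invocation (placeholders):
# fetch_and_report "http://example.com/data.csv" /tmp/data.csv
```

Bounding the retries matters because the cron job will fire again on its own schedule; an unbounded loop could leave overlapping instances running.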

Related

Magento newsletter mails are not being sent

I am using Magento 1.9.3 and I am new to Magento.
My newsletter subscription is not working. I have checked everything in the Magento configuration. It seems the trouble is with cron on the server, but I do not want to make changes on the server without knowing what I am doing. I am using the A2 hosting provider, and there is a cron entry like this, which runs every 30 minutes:
/bin/cagefs_enter.proxied php /home/lasakico/public_html/cron.php 1>/dev/null 2>/dev/null
I am not sure whether the problem is with cron or with Magento.
I have checked Magento Configuration -> Advanced -> System -> Cron, where Generate Schedules Every is set to 15, and the remaining values are 20, 15, 10, 60, 600.
Please let me know if anything in the question is unclear and I will answer.
Magento has a script called cron.php which handles all of the timed jobs that your Magento store has to do. In this list is the task of sending out newsletters. You have to set up your server crontab to run this cron.php script at regular intervals (e.g. every 5 minutes).
Once you do this, you should find that your newsletter is sent out as expected.
Open an SSH session with your server. (If you can't do this, you will have to ask your host to do it for you.)
Browse to the document root of your Magento store, the folder containing cron.php.
Enter the command
pwd
This gives you the current full path. Write it down somewhere.
Enter the command
which php
This gives you the path to your PHP binary. Write it down somewhere.
Enter the command
crontab -e
This opens your crontab editor, which is the system for scheduling tasks on Linux.
Hit the [i] key to go into insert mode (the crontab editor is basically vi).
On a new line, paste the following, replacing the paths with the ones you noted earlier:
*/5 * * * * /path/to/php -f /path/to/cron.php
Hit [esc], then type the command
:wq
This saves the crontab. (If your editor is nano instead, Ctrl+X then Y does the same.)
Create a newsletter and schedule it to send in 2 minutes' time.
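After saving, you can sanity-check that the entry was stored and is well-formed. A small sketch; the paths below are examples standing in for the ones you wrote down:

```shell
#!/bin/bash
# Verify a crontab line like the one above looks well-formed.
# This entry is an example; substitute your own php and cron.php paths.
ENTRY='*/5 * * * * /usr/bin/php -f /var/www/magento/cron.php'

# A valid entry has five schedule fields followed by a command.
if echo "$ENTRY" | grep -Eq '^(\S+ ){5}\S'; then
    echo "entry looks well-formed"
fi

# To confirm the line was actually saved, list the installed crontab:
# crontab -l | grep cron.php
```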

How to get exceptions, errors, and logs for Hive/Sqoop-based batch jobs?

I have a Hadoop cluster with 6 datanodes and 1 namenode. I have a few (4) Hive jobs which run every day and push some data from log files to our OLTP database using Sqoop. I do not have Oozie installed in the environment. The jobs are written in Hive script files (.sql files) and I run them from Unix shell scripts (.sh files). Those shell scripts are attached to different OS cron jobs so they run at different times.
Now the requirement is this:
Generate a log/status for each job separately on a daily basis, so that at the end of the day, by looking at those logs, we can identify which jobs ran successfully and how long they took, and which jobs failed, along with the dump/stack trace for each failed job. (The future plan is that we will have a mail server, and every failed or successful job's shell script will send mail to the respective stakeholders with the log/status file as an attachment.)
My problem is: how can I capture errors/exceptions when running those batch jobs/shell scripts, and how can I also generate a success log with the execution time?
I tried to get the output of each query run in Hive into a text file by redirecting the output, but that is not working.
For example:
Select * from staging_table;>>output.txt
Is there any way to do this by configuring the Hive log for each and every Hive job on a day-to-day basis?
Please let me know if anyone has faced this issue and how I can resolve it.
Select * from staging_table;>>output.txt
This is redirecting output. If that is the option you are looking for, then below is the way to do it from the console:
hive -e 'Select * from staging_table' > /home/user/output.txt
This will simply redirect the query output; it won't capture job-specific log information.
However, I am assuming that you are running on YARN. If you are expecting to see application (job) specific logs, the resulting log file locations are:
During run time you will see all the container logs in ${yarn.nodemanager.log-dirs}.
Using the UI you can see the logs at both the job level and the task level.
The other way is to dump application/job-specific logs from the command line:
yarn logs -applicationId your_application_id
Please note that the yarn logs -applicationId <application_id> method is preferred, but it requires log aggregation to be enabled first.
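For the per-job success/failure log with execution time that the question asks for, a shell wrapper around each Hive invocation is often enough. A sketch, assuming bash and a `hive` CLI on the PATH; the log directory and job names are placeholders, not from the question:

```shell
#!/bin/bash
# Sketch: run one Hive script, writing a dated per-job log with
# start/end status and execution time. Hive prints its progress and
# stack traces to stderr, so both streams are captured.

run_hive_job() {
    local job_name="$1" hql_file="$2"
    local log_dir="${HIVE_JOB_LOG_DIR:-/var/log/hive_jobs}"   # assumed location
    local log_file="$log_dir/${job_name}_$(date +%Y%m%d).log"
    mkdir -p "$log_dir"

    local start end status
    start=$(date +%s)
    hive -f "$hql_file" >> "$log_file" 2>&1
    status=$?
    end=$(date +%s)

    if [ "$status" -eq 0 ]; then
        echo "$job_name SUCCESS in $((end - start))s" >> "$log_file"
    else
        echo "$job_name FAILED (exit $status) after $((end - start))s" >> "$log_file"
    fi
    return "$status"
}

# Example cron-driven usage (placeholder paths):
# run_hive_job daily_load /home/user/jobs/daily_load.sql
```

The returned exit status can later drive the planned mail step (send the log file to stakeholders on failure or success).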

Running a shell script with Windows Task Scheduler

I currently have a simple shell script that I created for a Linux machine to be run using cron, but now I want to run the file using Windows Task Scheduler. I tried to get it to work using cron under Cygwin, but even after running cron-config successfully and ensuring that the shell script executes correctly on its own, for some reason the cron task simply wasn't executing. So I decided to give in and use Windows Task Scheduler instead. To do this, I looked at the following posts about the issue:
Cygwin .sh file run as Windows Task Scheduler
http://www.davidjnice.com/cygwin_scheduled_tasks.html
In my case, the entry in the "Actions" tab of the new task looks like this:
Program/script: c:\cygwin64\bin\bash.exe
Arguments: -l -c "/cygdrive/c/users/paul/bitcoinbot/download_all_data.sh >> cygdrive/c/users/paul/bitcoinbot/logfile.log 2>&1"
Start in: c:\cygwin64\bin
Notice that I redirected the output of the shell script to a log file, so I should be able to see there whether the program ran. Other than that, I simply edited the "Triggers" tab to run the task daily, and set the time to a couple of minutes in the future to see whether it ran successfully.
Alas, when I look at the detailed event history for the task, nothing changes when the trigger time passes. And when I manually run the task, the event history adds a few different events, but the task completes within seconds, whereas it should take over an hour (and it does when the shell script is executed directly from the terminal). And when I look for the log file that should have been created, there is nothing.
Does anyone have any idea what the issue might be here? How can I get my task to run properly at the trigger time, and how can I make sure it does so?
Best,
Paul
EDIT:
Here are the pictures showing the event history, as per Ken White's request.
Please ignore the fact that it says there are 24 events; those are from multiple separate runs of the task. The events shown here are the complete list of events triggered by a single run.
EDIT 2:
Regarding my attempts to get cron to work, I run into the following problem when I try to start the cron service using cygrunsrv. First, I installed cron as a service by typing
cygrunsrv -I cron -p /usr/sbin/cron.exe -a -D
Now when I type
cygrunsrv -Q cron
I get:
Service: cron
Current State: stopped
Command: /usr/bin/cron.exe
Next, I tried to start the cron service by typing
cygrunsrv -S cron
cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062:
The service has not been started.
Does anyone have any idea what this error means? I tried googling it, but couldn't find any answers.

How to sync a cron job with the scripts it runs, which take more than 5 minutes to complete, when the cron job is set for every 3 minutes

I am facing a problem with the scripts used in my code. My cron job runs every 5 minutes, but the scripts it runs sometimes take longer than that, and I want the cron job to wait for those scripts to finish processing and then run again at the earliest interval after that. Is this possible?
Please see the example below. Kindly propose a solution. TIA.
I am running a cron job, e.g.:
*/5 * * * * /home/Sti/New_Int/fetch_My_Data.sh
This job invokes the scripts below; a few details on what each one does:
fetch_Some_Data.sh --> This script just moves a few files from one location to another so that only the required files are processed.
tran.sh --> This script opens a for loop, and for each file it opens a DB connection by invoking the PostP.sh script; for processing it has a sleep time of 60 seconds.
PostP.sh --> This script creates a DB connection and terminates it for each file being processed in point 2.
So can you provide a solution so that the cron job won't run until the files from point 2 have been processed?
I usually use a lock file in such cases to indicate a running instance; while that file exists, all other instances simply exit with an error.
Add this logic to your shell script before doing anything else:
LOCKFILE=/tmp/fetch_My_Data.lock    # example lock path
if [ -e "$LOCKFILE" ]; then
    exit 1                          # a previous run is still active
fi
touch "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT       # clear the lock when this run finishes
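On systems with util-linux, flock(1) is a more robust alternative to a hand-rolled lock file, because the kernel releases the lock automatically even if the script is killed. A sketch, with the lock path and the wrapped command as placeholders:

```shell
#!/bin/bash
# Sketch: run the real work under flock so overlapping cron runs exit early.
# The lock path below is an assumed placeholder.

(
    # -n: do not wait; exit immediately if another instance holds the lock
    flock -n 9 || { echo "previous run still active, exiting"; exit 1; }

    # ... the actual work goes here, e.g.:
    # /home/Sti/New_Int/fetch_My_Data.sh
    echo "work done"
) 9>/tmp/fetch_My_Data.lock
```

File descriptor 9 is arbitrary; the subshell keeps it open on the lock file for the duration of the work, and closing it on exit drops the lock.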

AutoSys file watcher job

Task: create a file watcher job in AutoSys that watches for a particular file.
The requirement is that the file arrives at 9:00 am every day and the file watcher job starts running at 8:50 am. If the file is received by 10:00 am, the job should terminate successfully; otherwise an alert email (through an SSIS package, another AutoSys job) should be triggered.
I'm using AutoSys on Windows.
I'm not sure how to tell the file watcher job to start looking for the file around 8:50 am, stop looking at 10:00 am, and, if the file has not been received by 10:00 am, trigger another AutoSys job. How do I set this up?
Any help would be much appreciated.
Thanks,
Cindy
For the first job:
start_times: "08:50"
term_run_time: 70
For the second job:
condition: failure(first_job)
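The attributes above fit into full job definitions roughly as follows. This is a hypothetical JIL sketch, not from the question: the job names, machine, watch path, and SSIS package path are all placeholders, and the exact attribute set should be checked against your AutoSys version.

```
/* First job: a file watcher that starts at 08:50 and gives up after 70
   minutes (i.e. at 10:00). All names and paths here are placeholders. */
insert_job: watch_incoming_file
job_type: FW                         /* file watcher */
machine: winserver01
watch_file: C:\feeds\daily_file.csv
watch_interval: 60                   /* check for the file every 60 seconds */
start_times: "08:50"
term_run_time: 70                    /* terminate (fail) after 70 minutes */

/* Second job: runs the SSIS alert package only if the watcher failed,
   i.e. the file never arrived by 10:00. */
insert_job: alert_file_missing
job_type: c
machine: winserver01
command: dtexec /f C:\packages\send_alert.dtsx
condition: f(watch_incoming_file)
```

If the file arrives in time, the watcher succeeds and the alert job's failure condition never fires; if `term_run_time` expires first, the watcher fails and the alert job runs.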
