What happens if I never clean up Elasticsearch tasks?

The update by query docs say that with wait_for_completion=false a task gets created to track progress, and that the task API should be used to clean up the tasks afterwards.
What is the consequence of never cleaning up these old tasks, or doing so very infrequently? Is the cost only the disk space these task files take up?

Yes, it's not a big deal if you don't clean up those tasks immediately. The .tasks index usually has a single primary shard, which lets you accumulate roughly 2 billion tasks (2^31 - 1, the maximum number of documents per shard) before getting into trouble.
If you use those documents to keep track of your tasks, it's better to clean them up once they are done; otherwise you end up with a pile of finished task documents that is hard to sort through.
That can also be taken care of by a simple cron job that periodically runs
POST .tasks/_delete_by_query?q=*
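For instance, a daily crontab entry along these lines would do (host and schedule are placeholders; restricting the query to completed:true leaves still-running tasks alone, and on recent Elasticsearch versions direct access to the .tasks system index may need extra handling):
0 3 * * * curl -s -XPOST "http://localhost:9200/.tasks/_delete_by_query?q=completed:true"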

Related

Specifying process priority in Ansible

Is it possible to specify the process priority for an Ansible task?
The use case is setting a low priority for an expensive and long-running backup task. In a bash script I'd use nice for this. I did not find anything by searching using keywords "process priority" and "nice" combined with "Ansible".
Async tasks allow you to run tasks in the background, which keeps a long-running task from blocking the tasks that follow it. The approach works as long as the remaining tasks are independent of the task marked async, and it can reduce the overall wait time.
For example, suppose you are waiting for a huge file to finish downloading and the next task is a completely independent command that also takes a while. Since the async task runs in the background, by the time it completes the rest of the independent commands are already done.
See the documentation:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
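For reference, a minimal sketch of what that looks like in a playbook (the backup command and timings are placeholders; nice covers the priority part of the question, async with poll: 0 covers the non-blocking part):

- name: Start the long-running backup in the background at low priority
  command: nice -n 19 /usr/local/bin/backup.sh
  async: 3600        # allow up to an hour before Ansible gives up on it
  poll: 0            # don't wait here; carry on with the remaining tasks
  register: backup_job

- name: Later, wait for the backup to finish
  async_status:
    jid: "{{ backup_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 60
  delay: 60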

Cron vs queued task

My application has an Order model with an execution_datetime attribute. I'd like to send some distinct notifications. For example
execution_datetime minus 12 hours: email to carrier
execution_datetime minus 3 hours: sms to customer
execution_datetime plus 1 hour: email to customer
The above timings are not strict and can be approximated; slight deviations are acceptable. Also, the execution_datetime can change in the meantime...
I'm unsure whether to use cron or queued tasks for this. Some thoughts of my own:
Cron:
Business logic will need to be written to fetch applicable orders and execute accordingly
Is execution guaranteed? Should some sort of database flag be implemented to indicate a notification has been sent, and then perhaps fetch all due orders that are unflagged as some sort of failsafe?
Queued tasks:
The task is scheduled on creation of the order? If so, suppose the execution time is changed. How do you modify the scheduled task? You'd need to keep track of the task ID somewhere?
Or perhaps a cron job that mass schedules applicable tasks every day?
I look forward to your suggestions.
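To make the cron variant concrete, this is roughly what I imagine such a job looking like (the orders table, the carrier_notified flag and the mailer are made up for illustration):

import sqlite3
from datetime import datetime, timedelta

def send_carrier_email(order_id: int) -> None:
    print(f"emailing carrier about order {order_id}")  # stand-in for the real mailer

def notify_carriers(conn: sqlite3.Connection) -> None:
    # Runs from cron every few minutes: pick up orders whose execution time is
    # within the next 12 hours and that have not been notified yet, then flag
    # them so a later run doesn't send the email twice.
    cutoff = datetime.utcnow() + timedelta(hours=12)
    rows = conn.execute(
        "SELECT id FROM orders "
        "WHERE execution_datetime <= ? AND carrier_notified = 0",
        (cutoff.isoformat(sep=" "),),
    ).fetchall()
    for (order_id,) in rows:
        send_carrier_email(order_id)
        conn.execute("UPDATE orders SET carrier_notified = 1 WHERE id = ?", (order_id,))
    conn.commit()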
Great question! I am interested in this discussion. Let me chip in with a scenario from my personal experience.
In my application, I have a Listing model with a promotion_ends_at column. Obviously, the listing promotion ends at some point in the future.
So, like you also mentioned, there are two ways to do this.
When the listing is created, I could queue a job that will end the promotion on the listing in the future. The delay of that job would be the time until the promotion has to end (and that could be months away).
I could also have a cron job that runs regularly and handles the listings whose promotions should end on a given date.
We were using SQS as our queue service, and since the maximum delay on SQS is 15 minutes, option 1 was not feasible. We then moved to Redis, where we could queue delayed jobs with a long delay easily.
However, like you also said, the promotion_ends_at column could be updated during that time. So either you would have to keep track of the job so you can de-queue it, or you could re-check whether the job should still run when it is about to execute.
For example, you could fresh() the model and check whether your condition is still valid. In my case, I would fresh() my Listing and check whether promotion_ends_at is in the past. However, this means we would have a lot of stale jobs that end up being discarded anyway.
We finally went with a simple cron job that mass-schedules the jobs on the day they need to run. I also think that delaying jobs far into the future is business logic, and the queue probably shouldn't be responsible for holding jobs that long.
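For illustration, the re-check-on-execute idea could look roughly like this (the lookup and the actual work are placeholders; fresh() is the Laravel equivalent of the re-load step):

from datetime import datetime
from typing import Optional

def load_listing(listing_id: int) -> Optional[dict]:
    # stand-in for a fresh database lookup at execution time
    return {"id": listing_id, "promotion_ends_at": datetime(2020, 1, 1)}

def end_promotion(listing_id: int) -> None:
    print(f"ending promotion for listing {listing_id}")  # stand-in for the real work

def end_promotion_job(listing_id: int) -> None:
    # Re-load the listing when the job runs instead of trusting the state
    # captured when the job was queued months earlier.
    listing = load_listing(listing_id)
    if listing is None:
        return  # listing was deleted in the meantime; nothing to do
    if listing["promotion_ends_at"] > datetime.utcnow():
        return  # end date was pushed back; this job is stale, discard it
    end_promotion(listing_id)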

Oozie Behavior with misaligned start

I noticed that if I start an Oozie coordinator with a start time many "iterations" (in terms of the frequency) before the current time, the coordinator will run the workflow back-to-back several times to catch up, ignoring the assigned frequency. However, for me it is more important that the workflow/action run at the assigned frequency than that it has run the correct number of times by a given point.
Is there any way I can avoid this behavior? One way would obviously be to ensure the start time is within one iteration of the current time (is there a way to have it automatically use the submission time as the start time?). Another would be to configure it to avoid this behavior altogether and simply run at the next time it would have run given the start time and the frequency.
The obvious way to avoid side effects from "past" start dates is... to set the actual start date at submission time as "now".
That's the way we do it in my team:
on the local filesystem, write down a coord-template.xml with a placeholder such as start="%Now%"
just before submitting, generate the actual coordinator.xml with
sed "s/%Now%/$(date --utc '+%FT%TZ')/" coord-template.xml > coordinator.xml
upload the coordinator definition to HDFS, then submit it via the Oozie CLI
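The last two steps might look like this (paths, host, and the job.properties contents are placeholders; job.properties is assumed to point oozie.coord.application.path at the uploaded directory):
hdfs dfs -put -f coordinator.xml /user/me/my-coord/coordinator.xml
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run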
~~~~~~~~~~~~
Alternative: if you are using a "basic" frequency (not CRON-like scheduling) you may want to try these <controls> to have Oozie create executions for all "past" time slots but discard them immediately:
<throttle>1</throttle>
and/or
<execution>LAST_ONLY</execution>
cf. Oozie 4.x reference
The rules would also apply in case the Coordinator is suspended then resumed, or in case the Oozie service gets stopped then restarted, or in case YARN has to queue new jobs for a really long time (because the cluster is 100% busy).
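For reference, those settings live in the <controls> block of the coordinator definition; a minimal sketch (name, frequency, and dates are placeholders):

<coordinator-app name="my-coord" frequency="${coord:hours(1)}"
                 start="2016-01-01T00:00Z" end="2017-01-01T00:00Z" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.4">
  <controls>
    <throttle>1</throttle>
    <execution>LAST_ONLY</execution>
  </controls>
  <action>
    <workflow>
      <app-path>${workflowAppPath}</app-path>
    </workflow>
  </action>
</coordinator-app>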
Oozie has improved of late, so there's an easier solution than the currently accepted answer. As of Oozie 4.1, there is a "NONE" execution available, which more or less skips iterations that occur in the past. Here's the doc snippet:
NONE: Similar to LAST_ONLY except all older materializations are skipped. When NONE is set, an action that is WAITING or READY will be SKIPPED when the current time is more than a certain configured number of minutes (tolerance) past the action's nominal time. By default, the threshold is 1 minute. For example, suppose actions 1 and 2 are both WAITING, the current time is 5:20pm, and both actions' nominal times are before 5:19pm. Both actions will become SKIPPED, assuming they don't transition to SUBMITTED (or a terminal state) before then. Another way of thinking about this is to view it as similar to setting the timeout equal to 1 minute, which is the smallest time unit, except that the SKIPPED status doesn't cause the coordinator job to eventually become DONEWITHERROR and it can actually become SUCCEEDED (i.e. it's a "good" version of TIMEDOUT).
Oozie 4.1 doc
I have tested this, and it does work with CRON frequencies. It is superior to the LAST_ONLY execution in your case because LAST_ONLY will still run the most recent iteration in the past (with the misaligned time), in addition to current/future iterations.
<execution>NONE</execution>

When do the results from a mapper task get deleted from disk?

When do the outputs for a mapper task get deleted from the local filesystem? Do they persist until the entire job completes or do they get deleted at an earlier time than that?
In addition to the map and reduce tasks, two further tasks are created: a job setup task and a job cleanup task. These are run by tasktrackers and are used to run code to set up the job before any map tasks run, and to clean up after all the reduce tasks are complete. The OutputCommitter that is configured for the job determines the code to be run, and by default this is a FileOutputCommitter. For the job setup task it will create the final output directory for the job and the temporary working space for the task output, and for the job cleanup task it will delete the temporary working space for the task output.
Have a look at OutputCommitter.
If your hadoop.tmp.dir is set to a default setting (say, /tmp/), it will most likely be subject to tmpwatch and whatever default settings your OS has. I would suggest poking around in /etc/cron.d/, /etc/cron.daily/, /etc/cron.weekly/, etc., to see exactly what your OS defaults look like.
One thing to keep in mind about tmpwatch is that, by default, it will key on access time, not modification time (i.e., files that have not been 'touched' since X will be considered 'stale' and subject to removal). However, it's a common practice with Hadoop to mount filesystems with the noatime and nodiratime flags, meaning that access times will not get updated and thus skewing your tmpwatch behaviors.
Otherwise, Hadoop will purge task attempt logs older than 24 hours (after task completion), by default. While a few years old, this writeup has some great info on the default behaviors. Take a look in particular at the sections that refer to mapreduce.job.userlog.retain.hours.
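For example, that window can be widened by overriding the property in mapred-site.xml (the 72-hour value is just an illustration):

<property>
  <name>mapreduce.job.userlog.retain.hours</name>
  <value>72</value> <!-- keep task attempt logs for 3 days instead of the 24-hour default -->
</property>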
EDIT: responding to OP's comment, which clears up my misunderstanding of the question:
As far as the intermediate output of map tasks which is spilled to disk, used by any combiners, and copied to any reducers, the Hadoop Definitive Guide has this to say:
Tasktrackers do not delete map outputs from disk as soon as the first reducer has retrieved them, as the reducer may fail. Instead, they wait until they are told to delete them by the jobtracker, which is after the job has completed.
Source
I've also +1'd mgs's answer below, as they have linked the source code that controls this and described the job cleanup task.
So, yes, the map output data is deleted immediately after the job completes, successfully or not, and no sooner.
"Tasktrackers do not delete map outputs from disk as soon as the first reducer has retrieved them, as the reducer may fail. Instead, they wait until they are told to delete them by the jobtracker, which is after the job has completed"
Hadoop: The Definitive Guide ( Section 6.4)

How to implement a custom cloud worker

I am designing a cloud app and need a worker process which scours my database looking for work, and then performs it.
Most of the info I seem to find on the subject of background tasks in the cloud involves some kind of scheduler and/or queuing system.
What I have doesn't quite fit into the "run this task every 5 minutes" or "add this to the queue to be executed later" models. I think the main difference to my problem is that the workers themselves find work to do, rather than being assigned it by a periodic scheduler or an external process that generates work.
What I have is basically a giant table where each entry has three fields:
job: a small task to be performed; let's say it gets the last message from a Twitter account and stores it in the database
the interval at which to perform that job: say every 5 minutes, N.B. the interval is arbitrary and different for each entry in the table
the last date when the job was performed
The way I would implement this is to have a worker with an infinite loop. On each pass through the loop it scours the database a) looking for items whose date + interval < currentTime; b) when it finds one, it sets date = currentTime; and c) it then executes the job. If there is no work at the moment, it sleeps for a few seconds and tries again.
I will have many parallel workers scouring the database simultaneously, which is why I do b) before c) in the paragraph above. Because there are parallel workers, a) and b) are performed as one atomic operation on the database to prevent work being duplicated. If a worker crashes after a) and b) but before it manages to finish the work, it's no big deal; the job simply runs again at the next interval. The reason is that the jobs have to run at their regular intervals, so there is no benefit in working through a backlog of failed jobs: it's better to skip one interval than to end up with uneven intervals between executions.
My question is whether that is a reasonable implementation strategy, and if so, how do I bring this process to life on the cloud (I am using Heroku, but may switch to EC2 in the future)? I still haven't written any code, so I would welcome other suggestions (maybe I misunderstood the use cases/applications for queue systems).
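As a rough illustration of the loop I have in mind (SQLite and the jobs schema are stand-ins; the point is that the claim in step b) is a single conditional UPDATE, so two workers can never both claim the same row):

import sqlite3
import time
from datetime import datetime

def perform_job(job_id: int) -> None:
    print(f"running job {job_id}")  # stand-in for the real work

def run_worker(conn: sqlite3.Connection) -> None:
    while True:
        now = datetime.utcnow().isoformat(sep=" ")
        # a) look for an item whose date + interval < currentTime
        row = conn.execute(
            "SELECT id, last_run FROM jobs "
            "WHERE datetime(last_run, '+' || interval_minutes || ' minutes') < ? "
            "LIMIT 1",
            (now,),
        ).fetchone()
        if row is None:
            time.sleep(5)  # no work at the moment; try again shortly
            continue
        job_id, last_run = row
        # b) claim it atomically: the UPDATE only succeeds if no other worker
        # has touched the row since we read it
        claimed = conn.execute(
            "UPDATE jobs SET last_run = ? WHERE id = ? AND last_run = ?",
            (now, job_id, last_run),
        ).rowcount
        conn.commit()
        if claimed == 0:
            continue  # another worker got there first
        perform_job(job_id)  # c) execute the job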
This sounds so close to a scheduled job that you might as well tread the well-beaten path and do it the more conventional way. There's no reason why you can't schedule a job to run once every few seconds.
However, this idea of looking for work sounds dodgy. What happens if two workers find the same task to run at the same time, for instance? Also, are there no triggers in the application that can indicate that work needs doing? It seems strange to have code 'looking for work'.
You can go a very long way with simple periodic background tasks, so I would exhaust all possibilities in that area before rolling your own.