I have a map-reduce job to be run on Amazon EMR. I would like to have up to 400 mappers and reducers, and I would like to use either Medium or Large instances. How can I estimate the number of instances I need?
Also, if one job finishes within, say, 2 minutes and I then run another job that takes 4 minutes, will I be charged for 2 hours, or is that considered 1 hour?
I know that if you use the CLI tool to create your job flow and add the steps, you can run both steps one after another on the same job flow and they will be counted within the same hour.
I believe that if you use the GUI you cannot re-use the job flow, so you may get charged one hour for each job. I haven't tried this, though, so I may be wrong there.
Check this article, which is where I got the information:
https://cwiki.apache.org/confluence/display/MAHOUT/Mahout+on+Elastic+MapReduce
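The same job-flow reuse can also be done programmatically. Here is a hedged sketch with the AWS SDK for Java (the job flow id, jar locations, and step names are placeholders; the CLI workflow described above is the equivalent): add both jobs as steps of one long-running job flow so they execute back to back on the same instances, which is what lets them be counted within the same hour.

    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
    import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
    import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
    import com.amazonaws.services.elasticmapreduce.model.StepConfig;

    // Sketch only: adds two MapReduce jobs as steps to an existing, still-running job flow.
    // The job flow id and S3 paths below are placeholders, not real resources.
    public class AddStepsToExistingJobFlow {
        public static void main(String[] args) {
            AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

            StepConfig firstJob = new StepConfig()
                    .withName("job-1")
                    .withHadoopJarStep(new HadoopJarStepConfig()
                            .withJar("s3://my-bucket/job1.jar")     // placeholder jar location
                            .withArgs("arg1", "arg2"));

            StepConfig secondJob = new StepConfig()
                    .withName("job-2")
                    .withHadoopJarStep(new HadoopJarStepConfig()
                            .withJar("s3://my-bucket/job2.jar"));   // placeholder jar location

            // Both steps run one after another on the same job flow.
            emr.addJobFlowSteps(new AddJobFlowStepsRequest()
                    .withJobFlowId("j-XXXXXXXXXXXXX")               // id of the existing job flow
                    .withSteps(firstJob, secondJob));
        }
    }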
On our institute's (IISc Bangalore) supercomputer, we submit jobs using qsub. Jobs start running according to the following:
(1) Their wall time (expected completion time)
(2) Their position in the respective queue (small, medium, large, etc.)
So it is very difficult to know which job will start after a currently running job finishes. But qsub probably has a list of its own by which it starts a new job immediately after another one finishes.
Is there any way to know which job will start next? Is there any command for this?
Thank you.
Unfortunately, there is no clear way to know which job will run next on a supercomputing system. When a job starts depends not only on its wall time or position in the queue but also on many other factors determined by site-level policy, scheduling strategies, and priorities. There can be some internal job ranking (priorities) chosen by the institute based on factors such as power management, load balancing, etc.
On the other hand, there is a lot of research on predicting the waiting time for job allocation. TeraGrid systems provide estimated waiting times. Also, see link1, link2 (by SERC) for more information about predicting the waiting time.
I have a requirement as follows:
a. Let's say I have 100 GB of file/data.
b. I have written a MapReduce job to process this data with certain logic.
c. I fired the MapReduce job, but it failed after reading 50 GB.
So my question is:
Can I resume the MapReduce job from the 51st GB?
Please let me know if anybody has an idea of how to do this; I don't want to reprocess the data that was already processed before the point of failure.
Thanks in advance
Brief answer: no.
And that's why working with large batch processing systems such as Hadoop or MPI is hard. Not only are restarts of large jobs inefficient in terms of resource consumption, they are also psychologically demoralizing. That's why your primary goal should be to reduce the running time of a single job to no more than a couple of hours. Maybe someday it will be possible to "pause" jobs and "hot fix" code, but to my knowledge it is not currently supported.
Solution #1. Split your job into an error-prone, parallelizable job and a final error-free, non-parallelizable job. Consider the following example: you have hundreds of gigabytes of textual access logs from a web server and you want to write a job that reports how popular different browsers are. If you combine parsing and aggregating (summing) into a single huge job, its running time will be on the order of days, and the chances that it will fail are very high, because textual logs are usually hard to parse due to ambiguity. A much better idea is to split this work into two separate jobs:
The first job is solely responsible for parsing the log files. It emits only the browser string as its output and doesn't even need any reducers. This job is where 99% of all errors will occur, because this is where parsing of "wild" data happens. It is parallelizable in the sense that you can split your input into chunks and process each chunk separately, so that each chunk is processed in 10-30 minutes. If the job fails for some chunk, you fix it and restart; 30 minutes is not a big loss.
The second job is a grand job that takes the outputs from the instances of the first job and performs the aggregation. Because the aggregation code is very simple, this job is not likely to fail.
                 chunk(20G) -> parse-job(20G) -> browsers(0.5G)
                 chunk(20G) -> parse-job(20G) -> browsers(0.5G)
    input(1T) -> chunk(20G) -> parse-job(20G) -> browsers(0.5G) -> aggregate-job -> output
       ...          ...             ...               ...
                 chunk(20G) -> parse-job(20G) -> browsers(0.5G)
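To make the split concrete, here is a hedged sketch of the parse-only first job (the class names, the naive quote-splitting "parser", and the counter names are illustrative placeholders, not from the question): a map-only Hadoop job with zero reducers that emits one browser string per parsed line and counts unparseable records instead of failing. The aggregation job then only has to sum occurrences of each browser string, which is essentially word count.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Sketch of the "parse-only" first job: map-only, no reducers.
    public class BrowserParseJob {

        public static class ParseMapper extends Mapper<Object, Text, Text, NullWritable> {
            private final Text browser = new Text();

            @Override
            protected void map(Object key, Text line, Context ctx)
                    throws IOException, InterruptedException {
                try {
                    browser.set(extractBrowser(line.toString()));
                    ctx.write(browser, NullWritable.get());
                } catch (RuntimeException badRecord) {
                    // "wild" data: count the bad line and move on instead of killing the task
                    ctx.getCounter("parse", "bad_records").increment(1);
                }
            }

            private String extractBrowser(String logLine) {
                String[] fields = logLine.split("\"");   // naive placeholder, not a real user-agent parser
                if (fields.length < 6) throw new IllegalArgumentException("unparseable line");
                return fields[5].trim();
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "parse-only");
            job.setJarByClass(BrowserParseJob.class);
            job.setMapperClass(ParseMapper.class);
            job.setNumReduceTasks(0);                    // map-only: mapper output goes straight to HDFS
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(NullWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }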
Solution #2. Sometimes you may be satisfied with the result even if parts of the input data are dropped. In this case, you can set the options mapred.max.map.failures.percent and/or mapred.max.reduce.failures.percent to non-zero values.
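A hedged sketch of how those options could be set (property names as given above; whether they apply unchanged depends on your Hadoop version, and the 5% threshold is just an illustrative value):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Sketch: allow a small percentage of tasks to fail without failing the whole job.
    public class LossTolerantJobSetup {
        public static Job configure() throws IOException {
            Configuration conf = new Configuration();
            conf.setInt("mapred.max.map.failures.percent", 5);     // tolerate up to 5% failed map tasks
            conf.setInt("mapred.max.reduce.failures.percent", 5);  // tolerate up to 5% failed reduce tasks
            return Job.getInstance(conf, "loss-tolerant-job");
        }
    }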
If your entire job fails, the output gets cleared, so you lose whatever you processed. However, Hadoop retries failed tasks of a job. So as long as your failure is recoverable within the preconfigured number of attempts (which you can raise; see the sketch after the list below), the job will not fail and you are not going to lose the output from already completed tasks.
If your failure is not recoverable, then in most cases it is your fault, and you might need to do one or more of the following:
Fix your code; even a simple bug may cause all of your tasks to fail consistently
Use fewer resources (e.g., take care with available memory)
Partition the problem better (see whether some tasks are fed more data than others, or make sure task input is being split into smaller chunks)
Have a bigger cluster capacity.
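As referenced above, the number of attempts per task is configurable. A hedged sketch (classic property names; the value 8 is illustrative, the default is 4):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Sketch: raise the per-task retry limit so transient failures are retried
    // more times before the whole job is declared failed.
    public class RetryTolerantJobSetup {
        public static Job configure() throws IOException {
            Configuration conf = new Configuration();
            conf.setInt("mapred.map.max.attempts", 8);     // default is 4
            conf.setInt("mapred.reduce.max.attempts", 8);  // default is 4
            return Job.getInstance(conf, "retry-tolerant-job");
        }
    }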
I'm searching for an algorithm suitable for the problem below:
There are multiple computers (the exact number is unknown). Each computer pulls a job from a central queue, completes it, then pulls the next one. Jobs are produced by some group of users. Some users submit lots of jobs, some only a few. Jobs consume equal CPU time (not really, just an approximation).
The central queue should be fair when scheduling jobs. Also, users who have submitted lots of jobs should have some minimal share of the resources.
I'm searching for a good algorithm for this scheduling.
I have considered two candidates:
A Hadoop-like fair scheduler. The problem here is: where do I get the minimal shares from when my cluster size is unknown?
Associate some penalty with each user. Increment the penalty when one of the user's jobs is scheduled. Use 1 - (normalized penalty) as the probability of scheduling that user's next job. This is something like stride scheduling, but I could not find any good explanation of it (see the sketch below).
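For reference, classic stride scheduling keeps a per-user "pass" counter that is advanced by a stride inversely proportional to that user's share ("tickets"); the user with the smallest pass goes next, so users with more tickets are picked proportionally more often. A hedged sketch (class and constant names are mine, not from any particular implementation):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Minimal sketch of stride scheduling over users.
    class StrideScheduler {
        private static final long STRIDE1 = 1_000_000L;   // arbitrary large constant

        private static final class Client {
            final String user;
            final long stride;   // STRIDE1 / tickets: more tickets => smaller stride => picked more often
            long pass;
            Client(String user, int tickets) {
                this.user = user;
                this.stride = STRIDE1 / tickets;
                this.pass = this.stride;
            }
        }

        private final List<Client> clients = new ArrayList<>();

        void addUser(String user, int tickets) {
            clients.add(new Client(user, tickets));
        }

        // Returns the user whose job should run next, or null if no users are registered.
        String pickNext() {
            if (clients.isEmpty()) return null;
            Client next = Collections.min(clients, Comparator.comparingLong((Client c) -> c.pass));
            next.pass += next.stride;
            return next.user;
        }
    }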
When I implemented a very similar job runner (for a production system), I ended up having each server choose jobtypes at random. This was my reasoning:
a glut of jobs from one user should not impact the chance of other users having their jobs run (user-user fairness)
a glut of one jobtype should not impact the chance of other jobtypes being run (user-job and job-job fairness)
if there is only one jobtype from one user waiting to run, all servers should be running those jobs (no wasted capacity)
the system should run the jobs "fairly", i.e. proportionately to the number of waiting users and jobtypes and not to the total number of waiting jobs (a large volume of one jobtype should not cause scheduling to favor it) (jobtype fairness)
the number of servers can vary, and is not known beforehand
the metadata about waiting jobs, jobtypes and users is known to the scheduler, but not the job data (i.e., the usernames, job names and counts, but not the payloads)
I also wanted each server to be standalone, to schedule its own work autonomously without having to know about the other servers
The solution I settled on was to track the waiting jobs by their {user, jobtype} attribute tuple, and have each scheduling step randomly select 5 tuples and, from each tuple, up to 10 jobs to run next. The selected jobs were shortlisted to be run by the next available runner. Whenever capacity freed up to run more jobs (either because jobs finished or because secondary restrictions meant they could not run), another scheduling step was run to fetch more work.
Jobs were locked atomically as part of being fetched; the locks prevented them from being fetched again or participating in further scheduling decisions. If they failed to run they were unlocked, effectively returning them to the pool. The locks timed out, so the server running them was responsible for keeping the locks refreshed (if a server crashed, the others would time out its locks and would pick up and run the jobs it started but didn't complete)
For my use case I wanted each of the jobs A.1, A.2, A.3 (user A's) and B.1 (user B's) to get 25% of the resources (even though that means user A gets 75% to user B's 25%). Choosing randomly between the four tuples probabilistically converges to that 25%.
If you want users A and B to each have a 50-50 split of the resources, and have A's A.1, A.2 and A.3 together get a share equal to B's B.1, you can run a two-level scheduler: randomly choose a user, and then from that user's jobs choose one. That will distribute the resources equally among users, and within each user's jobs, equally among the jobtypes.
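A hedged sketch of that two-level random pick (class, method, and field names are illustrative): choose a user uniformly at random, then one of that user's jobtypes uniformly at random, and pop a job from it, so each user's share is independent of how many jobs they have queued.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Minimal sketch of the two-level random scheduler described above.
    class TwoLevelRandomScheduler {
        // waiting jobs grouped as user -> (jobtype -> queue of job ids)
        private final Map<String, Map<String, Deque<String>>> waiting = new HashMap<>();
        private final Random rnd = new Random();

        void submit(String user, String jobtype, String jobId) {
            waiting.computeIfAbsent(user, u -> new HashMap<>())
                   .computeIfAbsent(jobtype, t -> new ArrayDeque<>())
                   .add(jobId);
        }

        // Called by a worker with free capacity; returns a job id or null if nothing is waiting.
        String nextJob() {
            if (waiting.isEmpty()) return null;

            List<String> users = new ArrayList<>(waiting.keySet());
            String user = users.get(rnd.nextInt(users.size()));    // level 1: random user

            Map<String, Deque<String>> byType = waiting.get(user);
            List<String> types = new ArrayList<>(byType.keySet());
            String type = types.get(rnd.nextInt(types.size()));    // level 2: random jobtype of that user

            Deque<String> queue = byType.get(type);
            String job = queue.poll();

            // drop empty buckets so they no longer influence future random picks
            if (queue.isEmpty()) byType.remove(type);
            if (byType.isEmpty()) waiting.remove(user);
            return job;
        }
    }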
A huge number of jobs of a particular jobtype will still take a long time to all complete, but that's always going to be the case. By picking across users and then jobtypes, the responsiveness of the job processing will not be adversely impacted.
There are lots of secondary restrictions that can be added (e.g., no more than 5 calls per second to linkedin), but the above is the heart of the system.
You could try the Torque resource manager and the Maui batch job scheduler from Adaptive Computing. Maui's policies are flexible enough to fit your needs. It supports backfill, configurable job and user priorities, and resource reservations.
Let's say I've got 2 or more executors in a Spark Streaming application.
I've set a batch interval of 10 seconds, so a job is started every 10 seconds, reading input from my HDFS.
If a job lasts for more than 10 seconds, the new job that gets started is assigned to a free executor, right?
Even if the previous one didn't finish?
I know it seems like an obvious answer, but I haven't found anything about job scheduling on the website or in the paper related to Spark Streaming.
If you know of any links where all of this is explained, I would really appreciate seeing them.
Thank you.
Actually, in the current implementation of Spark Streaming and under the default configuration, only one job is active (i.e., under execution) at any point in time. So if one batch's processing takes longer than 10 seconds, then the next batch's jobs will stay queued.
This can be changed with the experimental Spark property "spark.streaming.concurrentJobs", which is set to 1 by default. It's not currently documented (maybe I should add it).
The reason it is set to 1 is that concurrent jobs can potentially lead to weird sharing of resources, which can make it hard to debug whether there are sufficient resources in the system to process the ingested data fast enough. With only one job running at a time, it is easy to see that the system will be stable if the batch processing time is less than the batch interval. Granted, this may not be the most efficient use of resources under certain conditions. We definitely hope to improve this in the future.
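For illustration, a hedged sketch in the Java API of where that property would be set (the application name and HDFS path are placeholders):

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    // Sketch: opting in to concurrent batch jobs via the experimental property mentioned above.
    // Leave it at 1 unless you understand the resource-sharing trade-offs described in the answer.
    public class ConcurrentJobsExample {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf()
                    .setAppName("concurrent-batches-demo")
                    .set("spark.streaming.concurrentJobs", "2");   // default is 1: one active job at a time

            JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));
            ssc.textFileStream("hdfs:///input/dir")                // placeholder HDFS path
               .count()
               .print();

            ssc.start();
            ssc.awaitTermination();
        }
    }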
There is a little bit of material regarding the internals of Spark Streaming in these meetup slides (sorry about the shameless self-advertising :) ). That may be useful to you.
My cluster is currently occupied by a job A that takes a long time and has VERY_LOW priority.
I started another job B yesterday while A was already running, and I think it should have run quite fast.
However, the job details say it took 47 minutes.
I don't think this is the actual processing time.
I'm trying to find out when the job really started.
Where can I look?
I can't seem to find anything that states exactly what you're after, but you could look at the job in the JobTracker on port 50030 and examine the individual mapper and reducer details. There you can see, from their start and end times, how long each individual mapper and reducer took to complete its tasks.
If there weren't any mappers or reducers free when you started the second job, the second job wouldn't have been able to make any progress until the first job released them, which might explain why it appeared to take so long: the two jobs might not actually have been running simultaneously. The gap between the time the job was started and the time the first actual mapper started should give you an indication of whether it was just waiting around for resources, which means you can deduct that period from the overall 47 minutes.