Spark streaming jobs fail when chained - hadoop

I'm running a few Spark Streaming jobs in a chain (each one looking for input in the output folder of the previous one) on a Hadoop cluster, using HDFS, running in yarn-cluster mode.
job 1 --> reads from folder A outputs to folder A'
job 2 --> reads from folder A' outputs to folder B
job 3 --> reads from folder B outputs to folder C
...
When running the jobs independently they work just fine.
But when they are all waiting for input and I place a file in folder A, job 1 will change its status from RUNNING to ACCEPTED to FAILED.
I cannot reproduce this error when using the local FS, only when running on a cluster (using HDFS).
Client: Application report for application_1422006251277_0123 (state: FAILED)
INFO Client:
client token: N/A
diagnostics: Application application_1422006251277_0123 failed 2 times due to AM Container for appattempt_1422006251277_0123_000002 exited with exitCode: 15 due to: Exception from container-launch.
Container id: container_1422006251277_0123_02_000001
Exit code: 15

Even though MapReduce ignores files that start with . or _, Spark Streaming does not.
The problem is that when a file is still being copied or written and a trace of it is visible on HDFS (e.g. "somefilethatsuploading.txt.tmp"), Spark will try to process it.
By the time the process starts to read the file, it's either gone or not yet complete.
That's why the processes kept blowing up.
Ignoring files that start with . or _ or end with .tmp fixes this issue.
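A minimal sketch of such a filter, written as a plain predicate on the file name (the helper name is mine; `StreamingContext.fileStream` accepts a path-filter function into which a predicate like this can be plugged):

```scala
// Decide whether a file in the input folder is safe to process,
// following the MapReduce convention: skip hidden and partial files.
def isCompleteInputFile(fileName: String): Boolean =
  !fileName.startsWith(".") &&
  !fileName.startsWith("_") &&
  !fileName.endsWith(".tmp")
```

With this in place, in-flight files such as somefilethatsuploading.txt.tmp are never picked up; they are only processed once the uploader renames them to their final name.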
Addition:
We kept having issues with the chained jobs. It appears that as soon as Spark notices a file (even if it is not completely written) it will try to process it, ignoring any data appended afterwards. Writing to a temporary name and then renaming the file into place avoids this: a rename within HDFS is atomic, so the file only becomes visible to the next job once it is complete.
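The write-then-rename pattern can be sketched like this (a local-filesystem illustration using java.nio with made-up paths; on HDFS the equivalent call is FileSystem.rename):

```scala
import java.nio.file.{Files, StandardCopyOption}

// Write output under a name the downstream job ignores (.tmp), then
// rename it into place. A rename within the same filesystem is atomic,
// so the reader only ever sees a complete file.
val dir  = Files.createTempDirectory("chained-output")
val tmp  = dir.resolve("part-0000.tmp")
val done = dir.resolve("part-0000")

Files.write(tmp, "some records\n".getBytes("UTF-8"))
Files.move(tmp, done, StandardCopyOption.ATOMIC_MOVE)
```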

Related

Can we specify spark streaming to start from a specific batch of the checkpoint folder?

I had been running my streaming job for a while and it had processed thousands of batches.
There is retention on the checkpoint file system, and the older directories have been removed. When I restarted my streaming job, it failed with the following error:
terminated with error",throwable.class="java.lang.IllegalStateException",throwable.msg="failed to read log file for batch 0"
This is because the corresponding batch directory is no longer available. Is there a way to make the streaming job start from a specific batchId?

Spark/Hadoop can't read root files

I'm trying to read a file inside a folder that only I (and root) can read/write, through Spark. First I start the shell with:
spark-shell --master yarn-client
then I:
val base = sc.textFile("file:///mount/bases/FOLDER_LOCKED/folder/folder/file.txt")
base.take(1)
And got the following error:
2018-02-19 13:40:20,835 WARN scheduler.TaskSetManager:
Lost task 0.0 in stage 0.0 (TID 0, mydomain, executor 1):
java.io.FileNotFoundException: File file: /mount/bases/FOLDER_LOCKED/folder/folder/file.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
...
I suspect that, as YARN/Hadoop was launched with the user hadoop, it can't go further into this folder to get the file. How could I solve this?
OBS: This folder can't be opened to other users because it contains private data.
EDIT1: This /mount/bases is network storage, mounted over a CIFS connection.
EDIT2: HDFS and YARN were launched with the user hadoop
As hadoop was the user that launched HDFS and YARN, it is the user that will try to open the file in a job, so it must be authorized to access this folder. Fortunately, Hadoop checks which user is executing the job before allowing access to a folder/file, so you are not taking any risks here.
Well, if it had been an access-related issue with the file, you would have got 'access denied' as the error. In this particular scenario, I think the file you are trying to read is not present at all, or has a slightly different name (check for typos). Just double-check the file name.
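One quick way to tell the two cases apart, run as the same user the executors run as, is a plain filesystem probe (a hypothetical helper, not part of any Hadoop API):

```scala
import java.nio.file.{Files, Paths}

// Distinguish "file does not exist" from "exists but is not readable
// by this user": the two lead to different fixes.
def diagnose(path: String): String = {
  val p = Paths.get(path)
  if (!Files.exists(p)) "missing"
  else if (!Files.isReadable(p)) "permission"
  else "readable"
}
```

Note that Files.exists on a file inside a directory the user cannot traverse also returns false, so on a locked-down folder this probe can report "missing", which is consistent with the FileNotFoundException seen above.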

Unable to Start the Name Node in hadoop

I am running Hadoop on my local system, but when I run the ./start-all.sh command it starts everything except the NameNode, which fails with a connection refused error, and the log file prints the exception below:
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 291.
Can you please help me?
Start the NameNode with the recover flag enabled, using the following command:
./bin/hadoop namenode -recover
The metadata of the Hadoop NameNode (NN) consists of:
fsimage: contains the complete state of the file system at a point in time
edit logs: contains each file system change (file creation/deletion/modification) that was made after the most recent fsimage.
If you list all files inside your NN workspace directory, you'll see files include:
fsimage_0000000000000000000 (fsimage)
fsimage_0000000000000000000.md5
edits_0000000000000003414-0000000000000003451 (edit logs; there are many of them, with different names)
seen_txid (a separated file contains last seen transaction id)
When the NN starts, Hadoop loads the fsimage and applies all edit logs, performing a lot of consistency checks along the way; it aborts if a check fails. Let's make that happen: I'll rm edits_0000000000000000001-0000000000000000002 from the edit logs in my NN workspace and then try sbin/start-dfs.sh. I get an error message in the log like:
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 3.
So your error message indicates that your edit logs are inconsistent (maybe corrupted, or maybe some of them are missing). If you just want to play with Hadoop locally and don't care about its data, you can simply run hadoop namenode -format to re-format it and start from the beginning; otherwise you need to recover your edit logs, from the SNN or from wherever you backed them up.
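The consistency check described above can be sketched as a pure function over the edits_&lt;startTxid&gt;-&lt;endTxid&gt; segment names (hasGap is a made-up helper and a big simplification of what the NameNode actually does, which also handles in-progress segments and the fsimage baseline):

```scala
// Parse "edits_<startTxid>-<endTxid>" segment names and check that the
// transaction-id ranges form one contiguous sequence starting at the
// txid the NN expects.
def hasGap(editLogNames: Seq[String], expectedFirstTxid: Long = 1L): Boolean = {
  val Segment = """edits_(\d+)-(\d+)""".r
  val ranges = editLogNames
    .collect { case Segment(start, end) => (start.toLong, end.toLong) }
    .sorted
  val startsWrong = ranges.headOption.exists(_._1 != expectedFirstTxid)
  val internalGap = ranges.sliding(2).exists {
    case Seq((_, prevEnd), (nextStart, _)) => nextStart != prevEnd + 1
    case _                                 => false
  }
  startsWrong || internalGap
}
```

The "expected txid 1, but got txid 291" error corresponds to the first case: the earliest surviving segment starts well after the txid the fsimage left off at.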

Spark Streaming: How to restart spark streaming job running on hdfs cleanly

We have a Spark Streaming job which reads data from Kafka, running on a 4-node cluster that uses a checkpoint directory on HDFS. We had an I/O error where we ran out of space, and we had to go in and delete a few HDFS folders to free some up. Now we have bigger disks mounted and want to restart cleanly, with no need to preserve checkpoint data or Kafka offsets, but we are getting the error:
Application application_1482342493553_0077 failed 2 times due to AM Container for appattempt_1482342493553_0077_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://hdfs-name-node:8088/cluster/app/application_1482342493553_0077 Then click on the links to the logs of each attempt.
Diagnostics: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1266542908-96.118.179.119-1479844615420:blk_1073795938_55173 file=/user/hadoopuser/streaming_2.10-1.0.0-SNAPSHOT.jar
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1484420770001
final status: FAILED
tracking URL: http://hdfs-name-node:8088/cluster/app/application_1482342493553_0077
user: hadoopuser
From the error, what I can make out is that it's still looking for the old HDFS blocks that we deleted.
From research I found that changing the checkpoint directory should help. I tried changing it to point to a new directory, but it's still not letting Spark restart with a clean slate; it's still throwing the same block exception. Are we missing something in the configuration changes? And how can we make sure that Spark starts with a clean slate?
Also, this is how we are setting the checkpoint directory:
val ssc = new StreamingContext(sparkConf, Seconds(props.getProperty("spark.streaming.window.seconds").toInt))
ssc.checkpoint(props.getProperty("spark.checkpointdir"))
val sc = ssc.sparkContext
The current checkpoint directory in the property file is:
spark.checkpointdir:hdfs://hadoopuser#hdfs-name-node:8020/user/hadoopuser/.checkpointDir1
Previously it was:
spark.checkpointdir:hdfs://hadoopuser#hdfs-name-node:8020/user/hadoopuser/.checkpointDir
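For a clean restart, the usual pattern is StreamingContext.getOrCreate: point it at a fresh (or freshly deleted) checkpoint directory and the factory function builds a brand-new context instead of recovering stale state. A sketch, reusing props and sparkConf from the snippet above:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = props.getProperty("spark.checkpointdir")

// Factory that builds a brand-new context; getOrCreate only calls it
// when no usable checkpoint exists in checkpointDir.
def createContext(): StreamingContext = {
  val ssc = new StreamingContext(sparkConf,
    Seconds(props.getProperty("spark.streaming.window.seconds").toInt))
  ssc.checkpoint(checkpointDir)
  // ... set up the Kafka input stream and transformations here ...
  ssc
}

// Recovers from the checkpoint if one is present, otherwise starts clean.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
```

Note also that the missing block in the diagnostics is /user/hadoopuser/streaming_2.10-1.0.0-SNAPSHOT.jar, i.e. the staged application jar rather than checkpoint data, so re-submitting the job so that the jar is uploaded to HDFS again is worth trying as well.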

Hadoop tasks: "execvp: permission denied"

In a small Hadoop cluster set up on a number of developer workstations (i.e., they have different local configurations), I have one TaskTracker out of 6 that is being problematic. Whenever it receives a task, the task immediately fails with a Child Error:
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:242)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:229)
When I look at the stdout and stderr logs for the task, the stdout log is empty, and the stderr log only has:
execvp: Permission denied
My jobs complete because the TaskTracker eventually gets blacklisted and the work runs on the other nodes, which have no problem running tasks. I am not able to get any tasks to run on this one node, from any number of jobs, so this is a universal problem.
I have a DataNode running on this node with no issues.
I imagine there might be some sort of Java issue here where it is having a hard time spawning a JVM or something...
We had the same problem. We fixed it by adding the 'execute' permission to the file below:
$JAVA_HOME/jre/bin/java
This is because Hadoop uses $JAVA_HOME/jre/bin/java to spawn the task process instead of $JAVA_HOME/bin/java.
If you still have this issue after changing the file mode, I suggest you use remote debugging to find the shell command that spawns the task; see debugging hadoop task
Whatever it is trying to execvp does not have the executable bit set on it. You can set the executable bit using chmod from the command line.
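The same fix can be expressed programmatically (POSIX filesystems only; the helper and path are illustrative):

```scala
import java.nio.file.{Files, Paths}
import java.nio.file.attribute.PosixFilePermission

// Add the owner-execute bit to a file: a programmatic `chmod u+x`.
def addOwnerExecute(path: String): Unit = {
  val p = Paths.get(path)
  val perms = new java.util.HashSet(Files.getPosixFilePermissions(p))
  perms.add(PosixFilePermission.OWNER_EXECUTE)
  Files.setPosixFilePermissions(p, perms)
}
```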
I have encountered the same problem. You can try changing the JDK version from 32-bit to 64-bit, or from 64-bit to 32-bit.
