WAL files in HBase - hadoop

In HBase, data is written to the WAL before it is written to the MemStore. But when I checked on my system,
the WAL files are not updated immediately after each Put operation; it takes a long time for them to update. Is there any parameter I need to set?
(WAL has been enabled.)

Do you know how long it takes to update the WAL files? Are you sure the time is spent in the write, or has the WAL already been moved to the old logs by the time you check it? If the WAL is enabled, all entries must go to the WAL first and are then written to the appropriate region, as the cluster is configured.
I know that WAL files are moved to .oldlogs fairly quickly, i.e. after 60 seconds, as defined in hbase-site.xml through the hbase.master.logcleaner.ttl setting.

In standalone mode, writing to the WAL takes a lot of time, whereas in pseudo-distributed mode it works fine.

Related

KStreams tmp files cleanup

My KStreams consumer stores some checkpoint information under /tmp/kafka-streams/. This folder fills up pretty fast in our case. My KStream basically consumes 1 KB messages in a 3-second window and dedups them based on a key. I am looking for suggestions on how to purge this data periodically so the disk doesn't fill up, in terms of which files to keep and which not.
If you use a windowed aggregation, a retention time of 1 day is used by default, to allow out-of-order data to be handled correctly. This means all windows of the last 24h (or actually up to 36h) are stored.
You can try to reduce the retention time to store a shorter history:
.aggregate(..., Materialized.as("store-name").withRetention(...));
In older versions (pre 2.1.0), use TimeWindows#until(...) (or SessionWindows#until(...)) instead.
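For example, a rough sketch (assuming Kafka Streams 2.1+; the topic name "events", the store name "dedup-store", and the exact durations are illustrative, not taken from the question):

import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.WindowStore;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("events");
// Count (dedup) per key over 3-second windows, keeping roughly 1 hour of window state
// instead of the ~1-day default, so less data accumulates under /tmp/kafka-streams.
KTable<Windowed<String>, Long> deduped = input
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofSeconds(3)))
        .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("dedup-store")
                .withRetention(Duration.ofHours(1)));

Note that the retention must be at least the window size plus the grace period, and shrinking it limits how much out-of-order data can still be handled.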

Hadoop HDFS file recovery elapsed time (start time and end time)

I need to measure the speed of recovery for files with different file sizes stored with different storage policies (replication and erasure codes) in HDFS.
My question is: Is there a way to see the start time and end time (or simply the elapsed time in seconds) for a file recovery process in HDFS? For a specific file?
I mean from the time the system detects a node failure (and starts the recovery process) until HDFS has recovered the data (and possibly re-allocated it to other nodes) and the file is "stable" again.
Maybe I can look into some metadata files or log files for the particular file to see some timestamps? Or is there a file where I can see all the activity of an HDFS file?
I would really appreciate some terminal commands to get this info.
Thank you so much in advance!

How to enable GC logging for Hadoop MapReduce2 History Server, while preventing log file overwrites and capping disk space usage

We recently decided to enable GC logging for the Hadoop MapReduce2 History Server on a number of clusters (the exact version varies), as an aid to investigating history-server-related memory and garbage collection problems. In doing so, we want to avoid two problems we know might happen:
overwriting of the log file when the MR2 History server restarts for any reason
the logs using too much disk space, leading to disks getting filled
When Java GC logging starts for a process, it seems to replace the content of any existing file that has the same name. This means that unless you are careful, you will lose earlier GC logging, perhaps from exactly when you are most likely to need it.
If you keep the cluster running long enough, log files will fill up the disk unless they are managed. Even if GC logging is not currently voluminous, we want to manage the risk of an unusual situation arising that causes the logging rate to suddenly spike.
You will need to set some JVM parameters when starting the MapReduce2 History Server, meaning you need to make some changes to mapred-env.sh. You could set the parameters in HADOOP_OPTS, but that would have a broader impact than just the History server, so instead you will probably want to set them in HADOOP_JOB_HISTORYSERVER_OPTS.
Now let's discuss the JVM parameters to include.
To enable GC logging to a file, you will need to add -verbose:gc -Xloggc:<log-file-location>.
You need to give the log file name special consideration to prevent overwrites whenever the server is restarted. It seems like you need to have a unique name for every invocation so appending a timestamp seems like the best option. You can include something like `date +'%Y%m%d%H%M'` to add a timestamp. In this example, it is in the form of YYYYMMDDHHMM. In some versions of Java you can put "%t" in your log file location and it will be replaced by the server start up timestamp formatted as YYYY-MM-DD_HH-MM-SS.
Now on to managing disk space usage. If there is a simpler way than what I have, I'd be happy to hear it.
First, take advantage of Java's built-in GC log file rotation. -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M is an example of enabling this rotation, having up to 10 GC log files from the JVM, each of which is no more than approx 10MB in size. 10 x 10MB is 100MB max usage.
With the GC log file rotation in place with up to 10 files, '.0', '.1', ... '.9' will be added to the file name you gave in Xloggc. .0 will be first and after it reaches .9 it will replace .0 and continue on in a round robin manner. In some versions of Java '.current' will be additionally put on the end of the name of the log file currently being written to.
Because of the unique file naming we apparently need in order to avoid overwrites, you can end up with 100MB per History Server invocation, so this is not a total solution to managing the disk space used by the server's GC logs. You will end up with a set of up to 10 GC log files for each server invocation, and this can add up over time. The best solution (under *nix) would seem to be to use the logrotate utility (or some other utility) to periodically clean up GC logs that have not been modified in the last N days.
Be sure to do the math and make sure you will have enough disk space.
People frequently want more details and context in their GC logs than the default, so consider adding in -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps.
Putting this together, you might add something like this to mapred-env.sh:
## enable GC logging for MR2 History Server:
TIMESTAMP=`date +'%Y%m%d%H%M'`
# GC log location/name prior to .n addition by log rotation
JOB_HISTORYSERVER_GC_LOG_NAME="{{mapred_log_dir_prefix}}/$USER/mapred-jobhistory-gc.log-$TIMESTAMP"
JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS="-verbose:gc -Xloggc:$JOB_HISTORYSERVER_GC_LOG_NAME"
JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
JOB_HISTORYSERVER_GC_LOG_OPTS="$JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS $JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS $JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS"
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS $JOB_HISTORYSERVER_GC_LOG_OPTS"
You may find that you already have a reference to HADOOP_JOB_HISTORYSERVER_OPTS in that file; if so, replace it or add these options onto it.
In the above, you can change {{mapred_log_dir_prefix}}/$USER to wherever you want the GC logs to go (you probably want them to go to the same place as the MapReduce History Server logs). You can change the log file naming too.
If you are managing your Hadoop cluster with Apache Ambari, then these changes would be in MapReduce2 service > Configs > Advanced > Advanced mapred-env > mapred-env template. With Ambari, {{mapred_log_dir_prefix}} will be automatically replaced with the Mapreduce Log Dir Prefix defined a few rows above the field.
GC logging will start happening when the server is restarted, so you may need a short outage to enable this.

How does HBase MapReduce TableOutputFormat use Flush and WAL

When writing to HBase from a MapReduce job that uses TableOutputFormat, how often does it write to HBase? I don't imagine it issues a put for every row.
How do we control AutoFlush and the Write Ahead Log (WAL) when using it from MapReduce?
TableOutputFormat disables AutoFlush and uses the write buffer specified by hbase.client.write.buffer (defaults to 2MB); once the buffer is full, it is automatically flushed to HBase. You can change it by adding the property to your job configuration:
config.setLong("hbase.client.write.buffer", 16777216); // 16MB buffer
The WAL is enabled by default; it can be disabled per Put, but doing so is generally discouraged:
myPut.setWriteToWAL(false); // in newer HBase versions: myPut.setDurability(Durability.SKIP_WAL)
It does, in fact; see the code. If you want to bypass the WAL, write using HFileOutputFormat instead; see the example on GitHub.
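A rough sketch of where these pieces fit together (the table name, buffer size, row key, and column names below are illustrative, and the per-Put WAL toggle uses the newer Durability API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

// Job setup: send output through TableOutputFormat with a larger client write buffer.
Configuration config = HBaseConfiguration.create();
config.setLong("hbase.client.write.buffer", 16777216);       // 16MB buffer instead of the 2MB default
config.set(TableOutputFormat.OUTPUT_TABLE, "my_table");      // illustrative table name
Job job = Job.getInstance(config, "hbase-put-job");
job.setOutputFormatClass(TableOutputFormat.class);

// Inside the mapper or reducer: one Put is emitted per row, but TableOutputFormat
// buffers them client-side and flushes when the write buffer fills.
Put put = new Put(Bytes.toBytes("row-1"));
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
put.setDurability(Durability.SKIP_WAL);                      // skip the WAL for this Put only (discouraged)
context.write(new ImmutableBytesWritable(put.getRow()), put);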

Hadoop - data block caching techniques

I am running some experiments to benchmark the time it takes (by map-reduce) to read and process data stored on HDFS with varying parameters. I use pig script to launch map-reduce jobs. Since I am working with the same set of files frequently, my results may get affected because of file/block caching.
I want to understand the various caching techniques employed in a map-reduce environment.
Let's say that a file foo (containing some data to be processed) stored on HDFS occupies 1 HDFS block, and that block is stored on machine STORE. During a map-reduce task, machine COMPUTE reads that block over the network and processes it. Caching can happen at two levels:
Cached in memory of machine STORE (in-memory file cache)
Cached in memory/disk of machine COMPUTE.
I am pretty sure that caching #1 happens. I want to find out whether something like #2 happens. From the post here, it looks like there is no client-level caching going on, since it is very unlikely that the block cached by COMPUTE will be needed again on the same machine before the cache is flushed.
Also, is the Hadoop distributed cache used only to distribute application-specific files (not task-specific input data files) to all task tracker nodes? Or is task-specific input file data (like the foo file block) cached in the distributed cache? I assume local.cache.size and related parameters only control the distributed cache.
Please clarify.
The only caching that is ever applied within HDFS is the OS caching to minimize disk accesses.
So if you access a block from a datanode, it is likely to be cached if nothing else is going on there.
On your client side, this depends on what you do with the block. If you directly write it to disk, it is also very likely that your client OS caches it.
The distributed cache is just for jars and files that need to be distributed across the cluster where your job launches tasks. The name is thus a bit misleading, as it "caches" nothing.
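To make that distinction concrete, here is a small sketch (using the Hadoop 2 Job API; the paths and job name are illustrative) of what the distributed cache is actually for, namely shipping side files and jars to the task nodes, not caching input blocks:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

Job job = Job.getInstance(new Configuration(), "pig-benchmark");
// Side data that every task node gets a local copy of; this is what the "distributed cache" does.
job.addCacheFile(new Path("hdfs:///user/me/lookup.dat").toUri());
// An extra jar placed on the task classpath the same way.
job.addFileToClassPath(new Path("/user/me/lib/extra.jar"));
// Input splits (such as the block of foo) are NOT cached here; they are re-read from the
// datanodes on every run, relying only on the OS page cache described above.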
