My Kafka Streams consumer stores checkpoint information under /tmp/kafka-streams/, and this folder fills up quickly in our case. The topology consumes roughly 1 KB messages in a 3-second window and deduplicates them by key. What files can be purged periodically (versus kept) so the disk doesn't fill up?
If you use a windowed aggregation, a retention time of 1 day is used by default to allow handling out-of-order data correctly. This means all windows of the last 24h (or actually up to 36h) are stored.
You can try to reduce the retention time to store a shorter history:
.aggregate(..., Materialized.<K, V, WindowStore<Bytes, byte[]>>as("store-name").withRetention(Duration.ofHours(12)));
(the store name and the 12-hour retention are placeholders; withRetention takes a java.time.Duration)
In older versions (pre 2.1.0), use TimeWindows#until(...) (or SessionWindows#until(...)) instead.
Related
The ListFile processor is not detecting changes to a previously processed file and reprocessing it. FYI, I have already tried the following options for reprocessing, and only the hack mentioned last is working. This is on a single-node NiFi instance I am running in my development environment.
Update Scenario: The ListFile processor does not detect file content changes and trigger automatically after an update (i.e., file updates made with the VIM editor).
Timestamp modification Scenario: Changing the file timestamp with the touch -c command changes the file's timestamp, but this does not auto-trigger the ListFile processor either.
Stop-start Scenario: Stopping and starting the whole process group in NiFi after changing the file as above also does not trigger the ListFile processor.
Waiting Scenario: Waiting long enough after the file change does not help either, in case it would auto-trigger after some delay.
HACK: The only way I am able to trigger reprocessing of the file by the ListFile processor is by changing the wildcard expression for "File Filter" in the ListFile processor in a harmless, idempotent manner, for example from .*test.*\.csv to test.*\.csv and back again later (i.e., going back and forth like this for repeated reprocessing).
Reprocessing files with the same old names but modified data is a requirement for us. And sometimes forced reprocessing of even an unmodified file could be required in case of unanticipated data issues upstream or downstream. Please help!
UPDATE
Still facing this sporadic behavior! Only restart of NiFi helps when the ListFile processor fails to respond to file change.
This is probably a delayed answer.
The old List processors (ListFile/ListFTP/ListSFTP, etc.) used only a timestamp-tracking strategy to identify changed files. The processor cached the last-seen timestamp in its processor state and used it to list only files with a greater timestamp.
However, this approach was error-prone, so a better strategy called Entity Tracking was introduced. It gives a broader range of monitoring of file changes by keeping track of the following attributes of each file in the specified directory:
Name
Size
Last modified timestamp
Any change to a file is reflected in these key attributes. Since they are cached, any difference is treated as a change, and changed files therefore appear on the success connection.
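As a rough illustration of the idea (this is a toy sketch, not NiFi's actual implementation), entity tracking can be mimicked in shell by fingerprinting each file's name, size, and modification time and diffing against a cached snapshot; the cache paths and GNU stat usage below are assumptions:

```shell
#!/bin/sh
# Toy sketch of entity tracking (NOT NiFi's implementation).
# Fingerprint = name + size + last-modified time; any difference from
# the cached snapshot marks a file as new or changed.
WATCH_DIR="${WATCH_DIR:-.}"
CACHE="${CACHE:-/tmp/listfile-cache.txt}"

snapshot() {
    # one line per file: name, size in bytes, mtime in epoch seconds
    find "$WATCH_DIR" -maxdepth 1 -type f -exec stat -c '%n %s %Y' {} \; | sort
}

list_changed() {
    touch "$CACHE"                     # create the cache on first run
    snapshot > "$CACHE.new"
    comm -13 "$CACHE" "$CACHE.new"     # lines only in the new snapshot
    mv "$CACHE.new" "$CACHE"           # remember this snapshot
}
```

Editing a file changes its size and/or mtime, so its fingerprint line differs from the cached one and it is listed again, which is exactly the behavior the timestamp-only strategy could not provide.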
We recently decided to enable GC logging for the Hadoop MapReduce2 History Server on a number of clusters (the exact version varies) as an aid to investigating history-server-related memory and garbage collection problems. While doing this, we want to avoid two problems we know might happen:
overwriting of the log file when the MR2 History server restarts for any reason
the logs using too much disk space, leading to disks getting filled
When Java GC logging starts for a process, it replaces the content of any existing file with the same name. This means that unless you are careful, you will lose earlier GC logs, perhaps just when you are most likely to need them.
If you keep the cluster running long enough, log files will fill up the disk unless they are managed. Even if GC logging is not currently voluminous, we want to manage the risk of an unusual situation that causes the logging rate to spike suddenly.
You will need to set some JVM parameters when starting the MapReduce2 History Server, meaning you need to make some changes to mapred-env.sh. You could set the parameters in HADOOP_OPTS, but that would have a broader impact than just the History server, so instead you will probably want to set them in HADOOP_JOB_HISTORYSERVER_OPTS.
Now let's discuss the JVM parameters to include.
To enable GC logging to a file, you will need to add -verbose:gc -Xloggc:<log-file-location>.
Give the log file name special consideration to prevent overwrites whenever the server is restarted. You need a unique name for every invocation, so appending a timestamp seems like the best option. You can include something like `date +'%Y%m%d%H%M'` to add a timestamp in the form YYYYMMDDHHMM. In some versions of Java you can put "%t" in your log file location and it will be replaced by the server startup timestamp, formatted as YYYY-MM-DD_HH-MM-SS.
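That timestamped naming can be sketched in shell; the log directory below is a placeholder, not a path the History Server requires:

```shell
#!/bin/sh
# Build a per-invocation GC log name with a YYYYMMDDHHMM timestamp so a
# restart never overwrites an earlier invocation's logs.
TIMESTAMP=$(date +'%Y%m%d%H%M')
GC_LOG_DIR="${GC_LOG_DIR:-/var/log/hadoop-mapreduce}"   # placeholder path
GC_LOG_NAME="$GC_LOG_DIR/mapred-jobhistory-gc.log-$TIMESTAMP"
echo "$GC_LOG_NAME"
```

Each server start evaluates `date` afresh, so two invocations in different minutes get distinct file names.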
Now onto managing use of disk space. I'll be happy if there is a simpler way than what I have.
First, take advantage of Java's built-in GC log file rotation. -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M is an example of enabling this rotation, having up to 10 GC log files from the JVM, each of which is no more than approx 10MB in size. 10 x 10MB is 100MB max usage.
With the GC log file rotation in place with up to 10 files, '.0', '.1', ... '.9' will be added to the file name you gave in Xloggc. .0 will be first and after it reaches .9 it will replace .0 and continue on in a round robin manner. In some versions of Java '.current' will be additionally put on the end of the name of the log file currently being written to.
Because of the unique file naming we apparently have to use to avoid overwrites, you can have 100MB per History Server invocation, so this is not a total solution to managing the disk space used by the server's GC logs. You will end up with a set of up to 10 GC log files per invocation, and these can add up over time. The best solution (under *nix) would seem to be to use the logrotate utility (or some other utility) to periodically clean up GC logs that have not been modified in the last N days.
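A simple alternative to logrotate is a cron-driven cleanup based on find; this is a sketch, and the directory, file-name pattern, and 30-day threshold are assumptions to adjust to your layout:

```shell
#!/bin/sh
# Periodically delete GC logs that have not been modified recently.
# Intended to run from cron (e.g. daily).
purge_old_gc_logs() {
    dir="$1"     # where the GC logs live
    days="$2"    # delete files untouched for more than this many days
    find "$dir" -name 'mapred-jobhistory-gc.log-*' -type f \
         -mtime +"$days" -print -delete
}

# example: purge_old_gc_logs /var/log/hadoop-mapreduce 30
```

Because each invocation's files stop being modified once the server restarts, an mtime threshold naturally keeps only the most recent invocations' logs.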
Be sure to do the math and make sure you will have enough disk space.
People frequently want more details and context in their GC logs than the default, so consider adding in -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps.
Putting this together, you might add something like this to mapred-env.sh:
## enable GC logging for MR2 History Server:
TIMESTAMP=`date +'%Y%m%d%H%M'`
# GC log location/name prior to .n addition by log rotation
JOB_HISTORYSERVER_GC_LOG_NAME="{{mapred_log_dir_prefix}}/$USER/mapred-jobhistory-gc.log-$TIMESTAMP"
JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS="-verbose:gc -Xloggc:$JOB_HISTORYSERVER_GC_LOG_NAME"
JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
JOB_HISTORYSERVER_GC_LOG_OPTS="$JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS $JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS $JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS"
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS $JOB_HISTORYSERVER_GC_LOG_OPTS"
You may find that you already have a reference to HADOOP_JOB_HISTORYSERVER_OPTS, in which case you should replace it or add onto it.
In the above, you can change {{mapred_log_dir_prefix}}/$USER to wherever you want the GC logs to go (you probably want them to go to the same place as the MapReduce History Server logs). You can change the log file naming too.
If you are managing your Hadoop cluster with Apache Ambari, then these changes would be in MapReduce2 service > Configs > Advanced > Advanced mapred-env > mapred-env template. With Ambari, {{mapred_log_dir_prefix}} will be automatically replaced with the Mapreduce Log Dir Prefix defined a few rows above the field.
GC logging will start upon restarting the server, so you may need a short outage to enable this.
We have a NetApp FAS2040 device with a SnapLock Compliance volume configured. Image files are written to the volume using IBM FileNet for compliant storage of scanned post.
We want to replace the FileNet element with an in-house solution where we write the images to the volume ourselves. What I would like to know is what is involved in doing this.
Is it just a case of writing a file to the volume and then setting its read-only attribute to true? How would I configure expiration for the file? Can I change the time between it becoming read-only and being permanently committed?
Thanks
Stuart
Setting a file read-only will activate the SnapLock. It will default to the default SnapLock retention period, which is configured on your filer.
Usually this is set to a short interval, to avoid accidentally snaplocking junk for protracted periods of time. There's really nothing more annoying than accidentally locking a 100TB aggregate for 3 years.
You update an individual file's retention by setting its access time. By default, when you set a file read-only, the atime will be set to now + the default retention.
You can update this to a longer timespan (but never shorter) via the touch command:
touch -a -t 201411210000 <filename>
(note the touch timestamp format: YYYYMMDDhhmm)
You can use the stat command to show you the retention dates.
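For example, a small wrapper around stat makes the retention visible; the file path in the usage comment is a placeholder, and the -c format flag assumes GNU coreutils on the client:

```shell
#!/bin/sh
# Inspect a file's timestamps; on a SnapLock volume the access time
# (atime) holds the retention date.
show_retention() {
    stat -c '%n  retention(atime)=%x  modified=%y' "$1"
}

# example: show_retention /vol/compliance/doc0001.tif
```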
The whole process is geared off the SnapLock compliance clock - a tamper-resistant clock which should generally match real time, though there are circumstances where it won't. E.g., if you've powered the filer (or the volumes) down, you can't reset the clock or forward-date it to mess with SnapLock. It will adjust itself by up to a week per year, but NO FASTER.
Once the atime is less than the value on the snaplock clock, you can then delete the file (but note - not modify it).
You can set snaplock.autocommit_period to automatically snaplock files that have been static for the defined period.
I would very strongly recommend thinking hard about SnapLock retention periods. Setting this to years will almost certainly cause you grief when you realise you need to do some essential maintenance and can't, because SnapLock is in the way. You can only delete an aggregate once every piece of data in it has had its clock expire (and if any of it has ever been offline, the clock will be skewed too). That means that for years after your system is end-of-life, it will not be usable for anything other than watching the clock tick.
I would suggest instead that you consider a mechanism to refresh SnapLock on a periodic basis - e.g., set the retention for 6 months and run a monthly cron job to update it and add another month.
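A sketch of that rolling refresh; the directory, the 7-month target, and the GNU date/touch usage are assumptions:

```shell
#!/bin/sh
# Rolling SnapLock refresh: push every file's retention (atime) out to
# ~7 months from now. Retention can only be extended, never shortened,
# so re-running this monthly is safe and keeps files perpetually locked
# until you stop the job.
extend_retention() {
    dir="$1"
    new_atime=$(date -d '7 months' +'%Y%m%d%H%M')   # touch -t format
    find "$dir" -type f -exec touch -a -t "$new_atime" {} +
}

# example monthly cron entry (script wrapping the function assumed):
# 0 3 1 * * /usr/local/bin/extend_retention.sh /vol/compliance
```

Once you decide a file may expire, simply stop refreshing it; it becomes deletable when the compliance clock passes its last atime.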
You cannot change the time between "read only" and "permanently committed", because that's not the way it works: as soon as a file is read-only, it's SnapLocked. But as said, the retention period should hopefully default to a sensible number.
In HBase, before data is written into the MemStore it is first written into the WAL. However, when I checked on my system, the WAL files are not updated immediately after each Put operation; it takes a long time for updates to appear. Is there any parameter I need to set?
(WAL has been enabled.)
Do you know how long it takes to update the WAL files? Are you sure the time is spent in the write, or has the WAL already been moved to the old logs by the time you check? If the WAL is enabled, all entries must go to the WAL first and then be written to the particular region as the cluster is configured.
I know that WAL files are moved to .oldlogs fairly quickly, i.e., after 60 seconds as defined in hbase-site.xml through the hbase.master.logcleaner.ttl setting.
In standalone mode, writing into the WAL takes a lot of time, whereas in pseudo-distributed mode it works fine.
When writing to a file in Windows 7, Windows will cache the writes by default. When it completes the writes, does Windows preserve the order of writes, or can the writes happen out of order?
I have an existing application that writes continuously to a binary file. Every 20 seconds, it writes a block of data, updates the file's Table of Contents, and calls _commit() to flush the data to disk.
I am wondering whether it is necessary to call _commit(), or whether we can rely on Windows 7 to get the data to disk properly.
If the computer goes down, I'm not too worried about losing the most recent 20 seconds worth of data, but I am concerned about making the file invalid. If the file's Table of Contents is updated, but the data isn't present, then the file will not be correct. If the data is updated, but the Table of Contents isn't, then there will be extra data at the end of the file, but since it's not referenced by the Table of Contents, it is ignored when reading the file, and we have a correct file.
The writes will not necessarily happen in order. In particular, if there are multiple disk I/Os outstanding, the filesystem or disk driver may reorder the I/O operations to reduce head motion. That means there is no guarantee that data will reach the disk in the order in which it was written to the file.
Having said that, flushing the file to disk will stall until the I/O is complete - that may mean several dozen milliseconds (or even longer) of inactivity when your application could be doing something more useful.