Chronicle Queue file corrupted?

When I read a message from a Chronicle Queue, I get an exception: "java.lang.IllegalStateException: Meta data not ready c0000000". Is it possible to repair this queue file?

This error can occur if you are reading with a much older version of the queue library. Can you try the latest version, 4.5.13?
Note that this would be a bug in the reading code rather than corruption of the file. You can use DumpQueueMain to check this.
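If you want to sanity-check the file itself, here is a minimal sketch of invoking DumpQueueMain from Java, assuming it accepts the queue directory as its first argument; the path is a placeholder:

    // Minimal sketch: dump a Chronicle Queue directory to stdout so you can
    // see whether the entries themselves are intact.
    import net.openhft.chronicle.queue.DumpQueueMain;

    public class CheckQueue {
        public static void main(String[] args) throws Exception {
            DumpQueueMain.main(new String[] { "/path/to/queue-directory" });
        }
    }

If the dump prints every entry cleanly, that supports a reading-side bug rather than file corruption.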

Related

DistributeLoad cloning FlowFile and sending duplicates

I have a NiFi DistributeLoad processor that sends to an ExecuteStreamCommand processor. The issue I am seeing is that the DistributeLoad processor is creating a clone and sending both the original and the clone to the ExecuteStreamCommand processor. This is not happening for all the files I send through. Has anyone else seen this issue?
It's version 0.7.1. I figured out that it must have something to do with how the DistributeLoad processor handles the removal of relationships. I used to have 8 relationships going out of that processor and changed it to 4, and I think that's when I started seeing the clones. I had to recreate the processor, and it seems to be working fine.
Confirmed: I didn't even change the number of outbound relationships, just rewired them, and started getting duplicate entries.
I replaced the processor with a new one and the duplicates have disappeared. I will raise a bug.

Cassandra SSTableLoader memory leak issue

I created a Cassandra job which takes data from Oracle and creates SSTable files for the data. We were testing the performance of the job when we ran into issues.
Whenever a high volume of data is being processed, the SSTable writer creates multiple Data.db files and then runs into a memory leak. Can anyone please help me understand what this issue is and how we can resolve it?
_search/testing_table/testing_poc-testing_table-tmp-ka-10-Index.db to /file_directoory/to_load/ss_tables/testing_table/testing_poc-testing_table-ka-10-Index.db
03:15:09.209 [Thread-2] DEBUG o.apache.cassandra.io.util.FileUtils - Renaming /file_directoory/to_load/ss_tables/testing_table/testing_poc-testing_table-tmp-ka-10-Data.db to /file_directoory/to_load/ss_tables/testing_table/testing_poc-testing_table-ka-10-Data.db
03:15:22.378 [Reference-Reaper:1] ERROR o.a.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State#322fe601) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy#1943860552:Memory#[7ffadc07c380..7ffadc07c3e4) was not released before the reference was garbage collected
I just came across this link: https://issues.apache.org/jira/browse/CASSANDRA-9285.
So this is a known issue, and it resolves itself.
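For anyone hitting this with their own writer code, here is a minimal sketch of CQLSSTableWriter usage that closes the writer promptly so its off-heap buffers are released. The column names and output path are illustrative (only the keyspace and table names come from the log above), and on some Cassandra versions you may also need to set a partitioner on the builder:

    import org.apache.cassandra.io.sstable.CQLSSTableWriter;

    public class OracleToSSTable {
        public static void main(String[] args) throws Exception {
            // Keyspace/table taken from the log above; the columns are made up.
            String schema = "CREATE TABLE testing_poc.testing_table (id int PRIMARY KEY, payload text)";
            String insert = "INSERT INTO testing_poc.testing_table (id, payload) VALUES (?, ?)";
            // try-with-resources guarantees close(), which flushes the SSTable
            // and releases the writer's off-heap memory even if a write fails.
            try (CQLSSTableWriter writer = CQLSSTableWriter.builder()
                    .inDirectory("/path/to/ss_tables/testing_table")
                    .forTable(schema)
                    .using(insert)
                    .build()) {
                // In the real job, each row would come from the Oracle result set.
                writer.addRow(1, "row payload");
            }
        }
    }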

How to delete a local queue after deleting its q file

The MQ file system reached 100% usage on one of our queue managers, and we found that our system.cluster.transmit queue was occupying the full space, so I deleted the q file; that resolved the file system issue. Now I am trying to delete the transmit local queue with the DELETE QLOCAL command, since the queue is damaged and I need to recreate it. But when I try to delete the queue I get 'MQ object is in use', and when I checked the handles in order to kill them, it says the MQ object is damaged. Please help me figure out how to delete the queue now.
If you are running linear logging you can recreate the damaged object, but in this case that would fill the file system. Instead, temporarily define a new queue manager, take a copy of its file for the same queue, and drop that copy into the directory where you deleted the file.
As a side note, you might also want to start a new question asking how to delete the messages in the XMitQ without blowing away the file.

Unable to restart the queue manager

Unable to restart the queue manager; it fails with the error message AMQ7017 'Log not available'.
I checked the FDC file and found the error AMQ6118 'An internal WebSphere MQ error has occurred' (7017).
We have remounted the file system, and the log file exists, but we are still unable to restart the queue manager.
Please let us know your suggestions as soon as possible.
The queue manager generates AMQ7017 for different reasons; the following are some of them.
- The queue manager was not able to find a specific log file. (Your update indicates the log file is there.)
- There is a problem accessing the log file. Check the file permissions of the log files and the log control file.
- You see FDCs with BADLSN errors, which suggests either an MQ defect or a file system issue. Check for any known defects that have already been fixed, but note that in some cases an APAR fix only prevents the problem in the future and may not fix the current failure to restart the queue manager.
An immediate workaround would be to back up the queue manager and then rebuild or recreate it, or restore from backup. If that is not an option, then I suggest opening a ticket with IBM.

Skipping bad input files in Hadoop

I'm using Amazon Elastic MapReduce to process some log files uploaded to S3.
The log files are uploaded daily from servers using S3, but it seems that some get corrupted during the transfer. This results in a 'java.io.IOException: IO error in map input file' exception.
Is there any way to have Hadoop skip over the bad files?
There's a whole bunch of record-skipping configuration properties you can use to do this; see the properties prefixed with mapred.skip. at http://hadoop.apache.org/docs/r1.2.1/mapred-default.html
There's also a nice blog post about this subject and these config properties:
http://devblog.factual.com/practical-hadoop-streaming-dealing-with-brittle-code
That said, if your file is completely corrupt (i.e. broken before the first record), you might still have issues even with these properties.
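As a concrete starting point, here is a sketch of enabling skip mode through the old mapred API's SkipBadRecords helper; the class name and thresholds are illustrative:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SkipBadRecords;

    public class SkippingJobSetup {
        public static void main(String[] args) {
            JobConf conf = new JobConf(SkippingJobSetup.class);
            // Start skipping only after a task attempt has already failed twice...
            SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
            // ...and allow up to 100 bad records/groups to be skipped
            // around each failure point.
            SkipBadRecords.setMapperMaxSkipRecords(conf, 100);
            SkipBadRecords.setReducerMaxSkipGroups(conf, 100);
            // ... then configure mapper, reducer, and I/O paths as usual.
        }
    }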
Chris White's comment suggesting writing your own RecordReader and InputFormat is exactly right. I recently faced this issue and was able to solve it by catching the file exceptions in those classes, logging them, and then moving on to the next file.
I've written up some details (including full Java source code) here: http://daynebatten.com/2016/03/dealing-with-corrupt-or-blank-files-in-hadoop/
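For what it's worth, the shape of that solution looks roughly like the sketch below (new mapreduce API; the class names and logging are mine, not from the linked post):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class TolerantTextInputFormat extends TextInputFormat {

        @Override
        public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                                   TaskAttemptContext context) {
            return new TolerantRecordReader(super.createRecordReader(split, context));
        }

        // Wraps the normal line reader and gives up on a file instead of failing the task.
        private static class TolerantRecordReader extends RecordReader<LongWritable, Text> {
            private final RecordReader<LongWritable, Text> inner;
            private boolean broken; // set once the underlying file proves unreadable

            TolerantRecordReader(RecordReader<LongWritable, Text> inner) {
                this.inner = inner;
            }

            @Override
            public void initialize(InputSplit split, TaskAttemptContext context)
                    throws IOException, InterruptedException {
                try {
                    inner.initialize(split, context);
                } catch (IOException e) {
                    System.err.println("Skipping unreadable split " + split + ": " + e);
                    broken = true; // treat the whole file as empty
                }
            }

            @Override
            public boolean nextKeyValue() throws IOException, InterruptedException {
                if (broken) return false;
                try {
                    return inner.nextKeyValue();
                } catch (IOException e) {
                    System.err.println("Skipping rest of corrupt split: " + e);
                    broken = true; // abandon the remainder of this file
                    return false;
                }
            }

            @Override public LongWritable getCurrentKey() throws IOException, InterruptedException { return inner.getCurrentKey(); }
            @Override public Text getCurrentValue() throws IOException, InterruptedException { return inner.getCurrentValue(); }
            @Override public float getProgress() throws IOException, InterruptedException { return broken ? 1.0f : inner.getProgress(); }
            @Override public void close() throws IOException { inner.close(); }
        }
    }

Wire it in with job.setInputFormatClass(TolerantTextInputFormat.class).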
