MapDB background compaction - key-value-store

For one of our projects, I'm using MapDB as a persistent Key/Value store. I'm using HTreeMap backed by a file and would like to know how I can enable compaction in the background.
final DB mapDB = DBMaker.fileDB(...).transactionEnable().executorEnable().closeOnJvmShutdown().make();
final HTreeMap<> itemsMap = mapDB
        .hashMap(..., Serializer.STRING, custom_serializer)
        .expireExecutor(Executors.newScheduledThreadPool(3))
        .expireCompactThreshold(0.4)
        .counterEnable()
        .createOrOpen();
The code is not exactly the same as above, but it is similar. I don't see the file getting compacted at all. Can someone let me know what I'm missing?
Regards,
-C
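As a hedged aside (not from the thread, and assuming MapDB 3.x): expireCompactThreshold() appears to be evaluated only while the expiration executor is actually evicting entries, so an expiration trigger may be needed, and the store can always be compacted manually. A minimal sketch with placeholder names ("items.db", "items") and Serializer.JAVA standing in for the custom serializer:

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;
import org.mapdb.Serializer;

public class CompactionSketch {
    public static void main(String[] args) {
        DB mapDB = DBMaker.fileDB("items.db")   // placeholder file name; other options trimmed for brevity
                .executorEnable()
                .closeOnJvmShutdown()
                .make();

        HTreeMap<String, Object> itemsMap = mapDB
                .hashMap("items", Serializer.STRING, Serializer.JAVA)
                .expireAfterUpdate(1, TimeUnit.HOURS)                 // hypothetical TTL so eviction (and compaction) has work to do
                .expireExecutor(Executors.newScheduledThreadPool(3))
                .expireCompactThreshold(0.4)
                .counterEnable()
                .createOrOpen();

        itemsMap.put("key", "value");

        // Manual fallback: compact the underlying store directly.
        mapDB.getStore().compact();

        mapDB.close();
    }
}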

Related

Apache Nifi MergeContent output data inconsistent?

Fairly new to using NiFi. Need help with the design.
I am trying to create a simple flow with dummy CSV files (for now) in an HDFS directory and prepend some text data to each record in each flowfile.
Incoming files:
dummy1.csv
dummy2.csv
dummy3.csv
contents:
"Eldon Base for stackable storage shelf, platinum",Muhammed MacIntyre,3,-213.25,38.94,35,Nunavut,Storage & Organization,0.8
"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",BarryFrench,293,457.81,208.16,68.02,Nunavut,Appliances,0.58
"Cardinal Slant-D Ring Binder, Heavy Gauge Vinyl",Barry French,293,46.71,8.69,2.99,Nunavut,Binders and Binder Accessories,0.39
...
Desired output:
d17a3259-0718-4c7b-bee8-924266aebcc7,Mon Jun 04 16:36:56 EDT 2018,Fellowes Recycled Storage Drawers,Allen Rosenblatt,11137,395.12,111.03,8.64,Northwest Territories,Storage & Organization,0.78
25f17667-9216-4f1d-b69c-23403cd13464,Mon Jun 04 16:36:56 EDT 2018,Satellite Sectional Post Binders,Barry Weirich,11202,79.59,43.41,2.99,Northwest Territories,Binders and Binder Accessories,0.39
ce0b569f-5d93-4a54-b55e-09c18705f973,Mon Jun 04 16:36:56 EDT 2018,Deflect-o DuraMat Antistatic Studded Beveled Mat for Medium Pile Carpeting,Doug Bickford,11456,399.37,105.34,24.49,Northwest Territories,Office Furnishings,0.61
The flow:
SplitText -> ReplaceText -> MergeContent
(This may be a poor way to achieve what I am trying to get, but I saw somewhere that a UUID is the best bet when it comes to generating a unique session ID, so I thought of extracting each line from the incoming data into its own flowfile and generating a UUID.)
But somehow, as you can see, the order of the data is getting mixed up. The first 3 rows are not the same in the output; instead, that data shows up on some other line of the test data I am using (50,000 entries). Multiple tests show the data order usually changes after the 2001st line.
And yes, I did search for similar issues here and tried using the defragment merge strategy, but it didn't work. I would appreciate it if someone could explain what is happening here and how I can get the data out in the same order, with a unique session_id and timestamp for each record. Is there some parameter I need to change or modify to get the correct output? I am open to suggestions if there is a better way as well.
First of all thank you for such an elaborate and detailed response. I think you cleared a lot of doubts I had as to how the processor works!
The ordering of the merge is only guaranteed in defragment mode because it will put the flow files in order according to their fragment index. I'm not sure why that wouldn't be working, but if you could create a template of a flow with sample data that showed the problem it would be helpful to debug.
I will try to replicate this using a clean template again. It could be some parameter problem, or the HDFS writer not being able to write.
I'm not sure if the intent of your flow is to just re-merge the original CSV that was split, or to merge together several different CSVs. Defragment mode will only re-merge the original CSV, so if ListHDFS picked up 10 CSVs, after splitting and re-merging, you should again have 10 CSVs.
Yes, that is exactly what I need: split the data and rejoin it into its corresponding files. I don't specifically (yet) need to join the outputs together again.
The approach of splitting a CSV down to 1 line per flow file to manipulate each line is a common approach, however it won't perform very well if you have many large CSV files. A more efficient approach would be to try and manipulate the data in place without splitting. This can generally be done with the record-oriented processors.
I used this approach purely instinctively and did not realize it is a common method. Sometimes the data file can be very large, meaning more than a million records in a single file. Won't that be an issue with the I/O in the cluster, since it would mean each record = one flowfile = one unique UUID? What is a comfortable number of flowfiles that NiFi can handle? (I know it depends on the cluster config and will try to get more info about the cluster from the HDP admin.)
What do you mean by "try and manipulate the data in place without splitting"? Can you give an example, template, or processor to use?
In this case you would need to define a schema for your CSV which included all the columns in your data, plus the session id and timestamp. Then using an UpdateRecord processor you would use record path expressions like /session_id = ${UUID()} and /timestamp = ${now()}. This would stream the content line by line and update each record and write it back out, keeping it all as one flow file.
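For illustration (hedged; the property names are those of the stock UpdateRecord processor, while the reader/writer services and schema details are assumptions), the configuration might look roughly like this:

UpdateRecord
  Record Reader              = CSVReader             (schema includes the session_id and timestamp columns)
  Record Writer              = CSVRecordSetWriter
  Replacement Value Strategy = Literal Value
  /session_id                = ${UUID()}
  /timestamp                 = ${now()}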
This looks promising. Can you share a simple template that pulls files from HDFS > processes them > writes files back to HDFS, but without splitting?
I am reluctant to share the template due to restrictions. But let me see if I can create a generic template, and I will share it.
Thank you for your wisdom! :)

Apache Nifi GetTwitter

I have a simple question, as I am new to NiFi.
I have a GetTwitter processor set up and configured (correctly, I assume). I have the Twitter Endpoint set to Sample Endpoint. I run the processor and it runs, but nothing happens; I get no input/output.
How do I troubleshoot what it is doing (or in this case not doing)?
A couple things you might look at:
What activity does the processor show? You can look at the metrics to see if anything has been attempted (Tasks/Time) as well as if it succeeded (Out)
Stop the downstream processor temporarily to make any output FlowFiles visible in the connection queue.
Are there errors? Typically these appear in the top-left corner as a yellow icon
Are there related messages in the logs/nifi-app.log file?
It might also help us help you if you describe the GetTwitter Property settings a bit more. Can you share a screenshot (minus keys)?
In my case it was because there were two sensitive values set. According to the documentation, when a sensitive value is set, the nifi.properties file's nifi.sensitive.props.key value must be set; it is an empty string by default in the Hortonworks Data Platform distribution. I set this to some random string (literally random_STRING, but you can use anything), re-created my process from the template, and it began working.
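For reference, the relevant entry in nifi.properties would then look something like this (using the placeholder value mentioned above):

nifi.sensitive.props.key=random_STRING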
In general I suppose this topic can be debugged by setting the log level to DEBUG.
However, in my case the issue was resolved more easily:
I just set up a new cluster and decided to copy all Twitter keys and secrets to Notepad first.
It turns out that despite carefully copying the keys from Twitter, one of them had a leading tab. When pasting directly into the GetTwitter processor this would not show, but fortunately it showed up in Notepad, and I was able to remove it and make this work.

How to create a MapFile with Spark and access it?

I am trying to create a MapFile from a Spark RDD, but can't find enough information. Here are my steps so far:
I started with,
rdd.saveAsNewAPIHadoopFile(....MapFileOutputFormat.class)
which threw an Exception as the MapFiles must be sorted.
So I modified to:
rdd.sortByKey().saveAsNewAPIHadoopFile(....MapFileOutputFormat.class)
which worked fine and my MapFile was created. So the next step was accessing the file. Using the directory name where parts were created failed saying that it cannot find the data file. Back to Google, I found that in order to access the MapFile parts I needed to use:
Object ret = new Object(); // my actual WritableComparable impl
Reader[] readers = MapFileOutputFormat.getReaders(new Path(file), new Configuration());
Partitioner<K,V> p = new HashPartitioner<>();
Writable e = MapFileOutputFormat.getEntry(readers, p, key, ret);
Naively, I ignored the HashPartitioner bit and expected that this would find my entry, but no luck. So my next step was to loop over the readers and do a get(..). This solution did work, but it was extremely slow, as the files were created by 128 tasks, resulting in 128 part files.
So I investigated the importance of the HashPartitioner and found that getEntry uses it internally to identify which reader to use, but it seems that Spark is not using the same partitioning logic. So I modified to:
rdd.partitionBy(new org.apache.spark.HashPartitioner(128)).sortByKey().saveAsNewAPIHadoopFile(....MapFileOutputFormat.class)
But again the two HashPartitioners did not match. So, on to the questions...
Is there a way to combine the MapFiles efficiently (as this would ignore the partitioning logic)?
MapFileOutputFormat.getReaders(new Path(file), new Configuration()) is very slow. Can I identify the reader more efficiently?
I am using MapR-FS as the underlying DFS. Will this be using the same HashPartitioner implementation?
Is there a way to avoid repartitioning, or should the data be sorted over the whole file? (In contrast to being sorted within the partition)
I am also getting an exception _SUCCESS/data does not exist. Do I need to manually delete this file?
Any links about this would be greatly appreciated.
PS. If entries are sorted, then how is it possible to use the HashPartitioner to locate the correct Reader? This would imply that the data parts are hash-partitioned and then sorted by key. So I also tried rdd.repartitionAndSortWithinPartitions(new HashPartitioner(280)), but again without any luck.
Digging into the issue, I found that the Spark HashPartitioner and Hadoop HashPartitioner have different logic.
So the "brute force" solution I tried and works is the following.
Save the MapFile using rdd.repartitionAndSortWithinPArtitions(new
org.apache.aprk.HashPartitioner(num_of_parititions)).saveAsNewAPIHadoopFile(....MapFileOutputFormat.class);
Lookup using:
Reader[] readers = MapFileOutputFormat.getReaders(new Path(file),new Configuration());
org.apache.spark.HashPartitioner p = new org.apache.spark.HashPartitioner(readers.length);
readers[p.getPartition(key)].get(key, val);
This is "dirty" as the MapFile access is now bound to the Spark partitioner rather than the intuitive Hadoop HashPartitioner. I could implement a Spark partitioner that uses Hadoop HashPartitioner to improve on though.
This also does not address the problem of slow access due to the relatively large number of reducers. I could make this even 'dirtier' by generating the file part number from the partitioner, but I am looking for a clean solution, so please post if there is a better approach to this problem.
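Picking up the idea above, here is a hedged sketch (in Java; the class name is mine and hypothetical) of a Spark Partitioner that applies Hadoop's HashPartitioner formula, so that the partition chosen at write time matches the one MapFileOutputFormat.getEntry() computes at read time. It assumes the keys hash identically on both sides (e.g. the same Writable type is used in the RDD and in the lookup):

import org.apache.spark.Partitioner;

public class HadoopStyleHashPartitioner extends Partitioner {
    private final int numPartitions;

    public HadoopStyleHashPartitioner(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    @Override
    public int numPartitions() {
        return numPartitions;
    }

    @Override
    public int getPartition(Object key) {
        // Same formula as org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

Saving with rdd.repartitionAndSortWithinPartitions(new HadoopStyleHashPartitioner(128)) should then line up with the plain Hadoop HashPartitioner passed to getEntry(), though this is untested here.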

Hadoop Spark (MapR) - how does addFile work?

I am trying to understand how Hadoop works. Say I have 10 directories on HDFS, containing hundreds of files which I want to process with Spark.
In the book Fast Data Processing with Spark:
This requires the file to be available on all the nodes in the cluster, which isn't much of a problem for a local mode. When in a distributed mode, you will want to use Spark's addFile functionality to copy the file to all the machines in your cluster.
I am not able to understand this: will Spark create a copy of the file on each node?
What I want is for it to read the files that are present in that directory (if that directory is present on that node).
Sorry, I am a bit confused about how to handle the above scenario in Spark.
regards
The section you're referring to introduces SparkContext::addFile in a confusing context. This is a section titled "Loading data into an RDD", but it immediately diverges from that goal and introduces SparkContext::addFile more generally as a way to get data into Spark. Over the next few pages it introduces some actual ways to get data "into an RDD", such as SparkContext::parallelize and SparkContext::textFile. These resolve your concerns about splitting up the data among nodes rather than copying the whole of the data to all nodes.
A real production use-case for SparkContext::addFile is to make a configuration file available to some library that can only be configured from a file on the disk. For example, when using MaxMind's GeoIP Legacy API, you might configure the lookup object for use in a distributed map like this (as a field on some class):
@transient lazy val geoIp = new LookupService("GeoIP.dat", LookupService.GEOIP_MEMORY_CACHE | LookupService.GEOIP_CHECK_CACHE)
Outside your map function, you'd need to make GeoIP.dat available like this:
sc.addFile("/path/to/GeoIP.dat")
Spark will then make it available in the current working directory on all of the nodes.
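For completeness, a hedged sketch (in Java, although the snippets above are Scala; the calls are equivalent) of shipping a side file and then resolving its local path via SparkFiles.get:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkFiles;
import org.apache.spark.api.java.JavaSparkContext;

public class AddFileSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("addFile-sketch");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Ship the side file to every node (path reused from the answer above).
        sc.addFile("/path/to/GeoIP.dat");

        // Inside task code running on an executor (and also on the driver),
        // the shipped copy can be located by name:
        String localPath = SparkFiles.get("GeoIP.dat");
        System.out.println(localPath);

        sc.stop();
    }
}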
So, in contrast with Daniel Darabos' answer, there are some reasons outside of experimentation to use SparkContext::addFile. Also, I can't find any info in the documentation that would lead one to believe that the function is not production-ready. However, I would agree that it's not what you want to use for loading the data you are trying to process unless it's for experimentation in the interactive Spark REPL, since it doesn't create an RDD.
addFile is only for experimentation. It is not meant for production use. In production you just open a file specified by a URI understood by Hadoop. For example:
sc.textFile("s3n://bucket/file")

UnauthorizedAccessException on MemoryMappedFile in C# 4

I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even at all possible, or am I a crazy person?
The idea would be to map the existing binary file directly to memory for some (preferably higher-speed) operations, or to at least see how these things work.
using System.IO.MemoryMappedFiles;
System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap");
MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity();
System.IO.FileStream file = fi.Open(System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite);
MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length, MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true);
MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor();
PcapHeader head = new PcapHeader();
FileMapView.Read<PcapHeader>(0, out head);
I get System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.) on the mf.CreateViewAccessor() line.
I don't think it's file-permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read-lock on the file. This is on Vista with UAC disabled.
If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .net 4.0
Thanks!
I know this is an old question, but I just ran into the same error and was able to solve it.
Even though I was opening the MemoryMappedFile as read-only (MemoryMappedFileRights.Read) as you are, I also needed to create the view accessor as read-only as well:
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
Then it worked. Hope this helps someone else.
If the size is more than the file length, it gives the UnauthorizedAccessException, because we are trying to access memory beyond the limits of the file.
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
It is difficult to say what might be going wrong. Since there is no documentation on the MSDN website yet, your best bet is to install FILEMON from SysInternals, and see why that is happening.
Alternately, you can attach a native debugger (like WinDBG) to the process, and put a breakpoint on MapViewOfFile and other overloads. And then see why that call is failing.
Using .CreateViewStream() on the MemoryMappedFile instance removed the error from my code. I was unable to get .CreateViewAccessor() working; it kept giving the access-denied error.
