What is the purpose of caching an RDD in Apache Spark?

I am new to Apache Spark and I have a couple of basic questions that I could not understand while reading the Spark material. Every material has its own style of explanation. I am using PySpark in a Jupyter notebook on Ubuntu to practice.
As per my understanding, when I run the command below, the data in testfile.csv is partitioned and stored in the memory of the respective nodes (actually I know it is lazily evaluated and nothing is processed until Spark sees an action), but still the concept is:
rdd1 = sc.textFile("testfile.csv")
My question is: when I run the transformation and action commands below, where is the rdd2 data stored?
1. Does it get stored in memory?
rdd2 = rdd1.map( lambda x: x.split(",") )
rdd2.count()
I know the data in rdd2 will be available until I close the Jupyter notebook. Then what is the need for cache(), since rdd2 is available for all transformations anyway? I heard that after all the transformations the data in memory is cleared; what is that about?
2. Is there any difference between keeping an RDD in memory and cache()?
rdd2.cache()

Does it get stored in memory?
When you run a Spark transformation via an action (count, print, foreach), then, and only then, is your graph materialized, and in your case the file is consumed. The purpose of RDD.cache is to make sure that the result of sc.textFile("testfile.csv") is available in memory and doesn't need to be read again.
Don't confuse the variable with the actual operations that are done behind the scenes. Caching allows you to re-iterate over the data, making sure it is in memory (if there is sufficient memory to store it in its entirety) when you want to re-iterate over the said RDD, as long as you've set the right storage level (which defaults to StorageLevel.MEMORY_ONLY). From the documentation (thanks @RockieYang):
In addition, each persisted RDD can be stored using a different storage level, allowing you, for example, to persist the dataset on disk, persist it in memory but as serialized Java objects (to save space), replicate it across nodes, or store it off-heap in Tachyon. These levels are set by passing a StorageLevel object (Scala, Java, Python) to persist(). The cache() method is a shorthand for using the default storage level, which is StorageLevel.MEMORY_ONLY (store deserialized objects in memory).
You can mark an RDD to be persisted using the persist() or cache() methods on it. The first time it is computed in an action, it will be kept in memory on the nodes. Spark’s cache is fault-tolerant – if any partition of an RDD is lost, it will automatically be recomputed using the transformations that originally created it.
Is there any difference between keeping an RDD in memory and cache()?
As stated above, you keep it in memory via cache, as long as you've provided the right storage level. Otherwise, it won't necessarily be kept in memory at the time you want to re-use it.
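To make the difference concrete, here is a minimal PySpark sketch (assuming the same SparkContext sc and testfile.csv from the question; rdd3 and the storage level passed to persist() are just illustrative):
from pyspark import StorageLevel

rdd1 = sc.textFile("testfile.csv")            # lazy: nothing is read yet
rdd2 = rdd1.map(lambda x: x.split(","))       # lazy: only extends the lineage

# Without cache(): every action re-reads and re-parses the file.
rdd2.count()
rdd2.count()

# With cache(): the first action materializes rdd2 in memory,
# and later actions reuse the cached partitions instead of re-reading the file.
rdd2.cache()                                  # shorthand for persist(StorageLevel.MEMORY_ONLY)
rdd2.count()                                  # reads the file once and caches the result
rdd2.count()                                  # served from the cache

# persist() lets you pick a different storage level explicitly.
rdd3 = rdd1.map(lambda x: x.split(",")).persist(StorageLevel.MEMORY_AND_DISK)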

Related

Performance issue while using MicroStream

I just started learning MicroStream. After going through the examples published in the MicroStream GitHub repository, I wanted to test its performance with an application that deals with more data.
Application source code is available here.
Instructions to run the application and the problems I faced are available here.
To summarize, below are my observations:
While loading a file with 2.8+ million records, processing takes 5 minutes
While calculating statistics based on loaded data, application fails with an OutOfMemoryError
Why is MicroStream trying to load all the data (4 GB) into memory? Am I doing something wrong?
MicroStream is not like a traditional database; it starts from the concept that all data live in memory, and an object graph can be stored to disk (or other media) when you store it through the StorageManager.
In your case, all the data are in one list, so accessing that list reads every record from disk. The Lazy reference isn't useful the way you have used it, since it only wraps access to the single list holding all the data.
Some optimizations that you can introduce:
Split the data based on vendorId or day, using a Map<String, Lazy<List>>.
When a Map value is 'processed', remove it from memory again by clearing the lazy reference: https://docs.microstream.one/manual/5.0/storage/loading-data/lazy-loading/clearing-lazy-references.html
Increase the number of channels to optimize reading and writing the data; see https://docs.microstream.one/manual/5.0/storage/configuration/using-channels.html
Don't store the object graph every 10,000 lines, but only once at the end of the loading.
Hope this helps you solve the issues you have at the moment.

Lazy evaluation in Apache Spark

I'm trying to understand lazy evaluation in Apache Spark.
My understanding says:
Let's say I have a text file on my hard drive.
Steps:
1) First I'll create RDD1, which is nothing but a data definition at this point (no data loaded into memory yet).
2) I apply some transformation logic to RDD1 and create RDD2; RDD2 is still just a data definition (still no data loaded into memory).
3) Then I apply a filter to RDD2 and create RDD3 (still no data loaded into memory; RDD3 is also just a data definition).
4) I perform an action so that I get the RDD3 output in a text file. The moment I perform this action and expect output, Spark loads the data into memory, computes RDD1, RDD2, and RDD3, and produces the output.
So the laziness of RDDs in Spark means: just keep building the roadmap (the RDDs) until an action gives the approval to actually produce the data.
Is my understanding correct up to here?
My second question is: it is said that lazy evaluation is one of the reasons Spark is more powerful than Hadoop. May I know how, since I am not very familiar with Hadoop? What happens in Hadoop in this scenario?
Thanks :)
Yes, your understanding is fine. A graph of operations (a DAG) is built via transformations, and they are all computed at once when an action is triggered. This is what is meant by lazy execution.
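As a small illustration, here is a hedged PySpark sketch of your four steps (sc, the input file, and the output directory are placeholders for this example):
rdd1 = sc.textFile("input.txt")                       # step 1: just a data definition, nothing read yet
rdd2 = rdd1.map(lambda line: line.split(","))         # step 2: still only lineage, no data in memory
rdd3 = rdd2.filter(lambda fields: len(fields) > 2)    # step 3: still only lineage
rdd3.saveAsTextFile("output_dir")                     # step 4: the action; only now does Spark read the
                                                      # file, apply map and filter, and write the output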
Hadoop only provides a filesystem (HDFS), a resource manager (YARN), and the libraries that allow you to run MapReduce. Spark only concerns itself with being more optimal than the latter, given enough memory.
Apache Pig is another framework in the Hadoop ecosystem that allows for lazy evaluation, but it has its own scripting language compared to the wide programmability of Spark in the languages it supports. Pig supports running MapReduce, Tez, or Spark actions for computations. Spark only runs and optimizes its own code.
What happens in actual MapReduce code is that you need to procedurally write out each stage to disk or memory in order to accomplish relatively large tasks.
Spark is not a replacement for "Hadoop"; it's a complement.

Is it a good practice to cache Redis data in ngx.shared?

I have some Lua code embedded in nginx. In this code I get some small data from a Redis cache. Now I wonder whether it is a good practice to cache this data (already cached in some sense) in nginx, using the ngx.shared construct. What are the pros and cons of doing it this way? In pseudo-code I expect to have something like:
local cache = ngx.shared.cache          -- dict declared in the nginx config via lua_shared_dict
local value = cache:get("cached_key")
if value == nil then
    -- ... get the data from Redis into `value`
    cache:set("cached_key", value)
end
As stated in the documentation, ngx.shared is a memory zone shared among all the workers of the nginx server.
All the listed operations are atomic, so you only have to worry about race conditions if you use two operations on ngx.shared one after the other. In that case, they should be protected using ngx.semaphore.
The pros:
Using ngx.shared provides faster access to the data, because you avoid a request/response loop to the Redis server.
Even if you need an ngx.semaphore, you can expect faster access to the data (but I have no benchmark to provide).
The cons:
The ngx.shared cache can serve stale data, as your local cache does not necessarily reflect the current Redis value. This is not always a crucial point, as there can always be a delta between the values used in the worker and the value stored in Redis.
Data stored in ngx.shared can be inconsistent, which is more important. For instance, it can store x=true and y=false whereas in Redis x and y always have the same value. It depends on how you update your local cache.
You have to handle the cache yourself, by updating the values in your cache whenever they are sent to Redis. This can easily be done by wrapping the Redis functions. Expect bugs if you handle updates by adding a cache update after each call that writes to Redis, because you (or someone) will forget one.
You also have to handle reads: whenever a value is not found in your ngx.shared cache, you have to read it from Redis automatically. Expect bugs if you handle reads by adding the fallback after each call to cache:get, because you (or someone) will forget one.
For the last two points, you can easily write a small wrapper module.
As a conclusion:
If your server runs only one instance, with one or several workers, using ngx.shared is interesting, as you can keep a cache of your Redis data that is always up to date.
If your server runs several instances and having an always up-to-date cache is mandatory, or if you could have consistency problems, then you should avoid caching with ngx.shared.
In all cases, if the size of your data can be huge, make sure to provide a way to clean it before memory consumption gets too high. If you cannot provide cleaning, then you should not use ngx.shared.
Also, do not forget to store the cached value in a local variable, in order to avoid getting it again and again, and thus improve efficiency.

Spark: save files in a distributed way

According to the Spark documentation,
All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program.
I am currently working on a large dataset that, once processed, outputs an even bigger amount of data, which needs to be stored in text files, as is done with the command saveAsTextFile(path).
So far I have been using this method; however, since it is an action (as stated above) and not a transformation, Spark needs to send data from every partition to the driver node, thus slowing down the process of saving quite a bit.
I was wondering if any distributed file saving method (similar to saveAsTextFile()) exists on Spark, enabling each executor to store its own partition by itself.
I think you're misinterpreting what it means to send a result to the driver. saveAsTextFile does not send the data back to the driver. Rather, it sends the result of the save back to the driver once it's complete. That is, saveAsTextFile is distributed. The only case where it's not distributed is if you only have a single partition, or you've coalesced your RDD down to a single partition before calling saveAsTextFile.
What that documentation is referring to is sending the result of saveAsTextFile (or any other "action") back to the driver. If you call collect() then it will indeed send the data to the driver, but saveAsTextFile only sends a success/failure message back to the driver once complete. The save itself is still done on many nodes in the cluster, which is why you'll end up with many files - one per partition.
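For example, a hedged PySpark sketch (the paths and sc here are placeholders) of the one-file-per-partition behavior and of the single-partition exception mentioned above:
rdd = sc.textFile("hdfs:///data/input")                    # say this yields 8 partitions
processed = rdd.map(lambda line: line.upper())

# Each executor writes its own partitions in parallel; the output directory
# ends up with one part-xxxxx file per partition (plus a _SUCCESS marker).
processed.saveAsTextFile("hdfs:///data/output")

# Coalescing to a single partition is the only case where the write is
# effectively not distributed: one task writes the single output file.
processed.coalesce(1).saveAsTextFile("hdfs:///data/output_single")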
IO is always expensive. But sometimes it can seem as if saveAsTextFile is even more expensive precisely because of the lazy behavior described in that excerpt. Essentially, when saveAsTextFile is called, Spark may perform many or all of the prior operations on its way to being saved. That is what is meant by laziness.
If you have the Spark UI set up, it may give you better insight into what is happening to the data on its way to a save (if you haven't already done that).

Clearing and freeing memory

I am developing a Windows application using C# .NET. It is in fact a plug-in which is installed into a DBMS. The purpose of this plug-in is to read all the records (a record is an object) in the DBMS matching the provided criteria and transfer them to my local file system as XML files. My problem is related to memory usage. Everything is working fine, but each time I read a record it occupies memory, and after a certain limit the plug-in stops working because it runs out of memory.
I am dealing with around 10k-20k records (objects). Are there any memory-related methods in C# to clear the memory of each record as soon as it is written to the XML file? I tried all the basic memory-handling methods like clear(), flush(), gc(), and finalize(), but with no luck.
Please consider the following:
A record is an object; I cannot change this and use other, more efficient data structures.
Each time I read a record I write it to XML, and I repeat this again and again.
C# is a garbage collected language. Therefore, to reclaim memory used by an object, you need to make sure all references to that object are removed so that it is eligible for collection. Specifically, this means you should remove the objects from any data structures that are holding references to them after you're done doing whatever you need to do with them.
If you get a little more specific about what type of data structures you're using we can probably give a more specific answer.
