How to obtain database info and statistics using mgconsole? - memgraphdb

When I use Memgraph Lab I can see the database statistics at the top of the window.
How can I obtain info such as the Memgraph version, the number of nodes and relationships, etc. when I'm using mgconsole?

To find out which Memgraph version is being used, run the SHOW VERSION; query.
To get information about the storage of the current instance, run SHOW STORAGE INFO;. This query will give you the following info (a short Python sketch of running both queries from code follows this list):
vertex_count - Number of vertices stored
edge_count - Number of edges stored
average_degree - Average number of relationships of a single node
memory_usage - Amount of RAM used reported by the OS (in bytes)
disk_usage - Amount of disk space used by the data directory (in bytes)
memory_allocated - Amount of bytes allocated by the instance
allocation_limit - Current allocation limit in bytes set for this instance
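
If you prefer to run these queries from code rather than from mgconsole, here is a minimal Python sketch. It assumes a local Memgraph instance reachable over the Bolt protocol at bolt://localhost:7687 and uses the neo4j driver package; the connection details and empty credentials are assumptions, not part of the original question.

from neo4j import GraphDatabase

# Assumed connection details for a default local Memgraph instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

with driver.session() as session:
    # SHOW VERSION; returns a single row containing the version string.
    print("Memgraph version:", session.run("SHOW VERSION;").single().value())

    # SHOW STORAGE INFO; is assumed to return one row per field listed above
    # (vertex_count, edge_count, average_degree, ...).
    for record in session.run("SHOW STORAGE INFO;"):
        print(record.values())

driver.close()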

Related

How to make a cache from a finished Spark job still accessible to other jobs?

My project implements an interactive query that lets users explore the data: we have a list of columns the user can choose from, the user adds them to a selection and presses "view data". The data is currently stored in Cassandra and we use Spark SQL to query it.
The data flow is: raw logs are processed by Spark and stored into Cassandra. The data is a time series with more than 20 columns and 4 metrics. In my tests, writing to Cassandra is quite slow because more than 20 dimensions go into the clustering keys.
The idea here is to load all the data from Cassandra into Spark and cache it in memory, then provide an API to the client and run queries against the Spark cache.
But I don't know how to keep that cached data persistent. I tried spark-jobserver, which has a feature called shared objects, but I am not sure it works.
We can provide a cluster with more than 40 CPU cores and 100 GB of RAM. We estimate the data to query at about 100 GB.
What I have already tried:
Storing the data in Alluxio and loading it into Spark from there, but loading is slow: for 4 GB of data Spark first has to read from Alluxio, which takes more than 1 minute, and then spill to disk (Spark shuffle), which costs another 2 or 3 minutes. That exceeds our target of under 1 minute. We tested 1 job on 8 CPU cores.
Storing the data in MemSQL, but it is rather costly: one day of data costs 2 GB of RAM, and I am not sure the speed will hold up when we scale.
Using Cassandra directly, but Cassandra does not support GROUP BY.
So what I really want to know is whether my direction is right, and what I can change to achieve the goal: queries like in MySQL, with a lot of GROUP BY, SUM and ORDER BY, returned to the client through an API.
If you explicitly call cache or persist on a DataFrame, it will be saved in memory (and/or disk, depending on the storage level you choose) until the context is shut down. This is also valid for sqlContext.cacheTable.
So, as you are using Spark JobServer, you can create a long running context (using REST or at server start-up) and use it for multiple queries on the same dataset, because it will be cached until the context or the JobServer service shuts down. However, using this approach, you should make sure you have a good amount of memory available for this context, otherwise Spark will save a large portion of the data on disk, and this would have some impact on performance.
Additionally, the Named Objects feature of JobServer is useful for sharing specific objects among jobs, but this is not needed if you register your data as a temp table (df.registerTempTable("name")) and cache it (sqlContext.cacheTable("name")), because you will be able to query your table from multiple jobs (using sqlContext.sql or sqlContext.table), as long as these jobs are executed on the same context.
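As a rough sketch of that pattern (collapsed into one script for brevity; in spark-jobserver the long-running context is created once and each job receives it, and the keyspace, table, and column names below are placeholders, not taken from the question):

from pyspark import SparkContext
from pyspark.sql import SQLContext

# In spark-jobserver the shared context is created for you; creating it here
# only keeps the sketch self-contained.
sc = SparkContext(appName="shared-cache-sketch")
sqlContext = SQLContext(sc)

# "Job 1": load from Cassandra (requires the spark-cassandra-connector
# package; keyspace/table names are placeholders) and cache the table.
df = (sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="logs", table="raw_events")
      .load())
df.registerTempTable("events")
sqlContext.cacheTable("events")   # stays cached while the context is alive

# "Job 2" (a later request on the same context): query the cached table.
sqlContext.sql(
    "SELECT region, SUM(metric1) AS total "
    "FROM events GROUP BY region ORDER BY total DESC").show()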

Why does the DataNode send block location information to the NameNode?

The page https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html says:
the DataNodes are configured with the location of both NameNodes, and send block location information and heartbeats to both.
But why is this information sent to the NameNode and its standby counterpart? I thought this information was already contained in the NameNode's fsimage. The NameNode should know where it put the blocks.
The Name Node contains the metadata of the entire cluster: the details of each folder and file, the replication factor, block names, etc. The Name Node also keeps in memory the location of the blocks of each file (this information is constructed from the Block Reports sent by the Data Nodes).
Data Nodes store the following information for each block:
Actual data stored in the block
Metadata for the data stored in the block, mainly checksums
They periodically send heartbeats and block reports to the Name Node.
Heart Beat:
Interval of heart beat reports is determined by configuration parameter dfs.heartbeat.interval (in hdfs-site.xml). By default this is set to 3 seconds.
Some of the information contained in the Heart beat is:
Registration: Data node registration information
Capacity: Total storage capacity available at Data Node
dfsUsed: Storage used by HDFS
remaining: Remaining storage available for HDFS
blockPoolUsed: Storage used by the block pool
xmitsInProgress: Number of transfers from this Data Node to others
xceiverCount: Number of active transceiver threads
cacheCapacity: Total cache capacity available at Data Node
cacheUsed: Amount of cache used
This information is used by the Name Node in the following ways:
Health of the Data Node: Should this data node be marked as dead or alive?
Registration of new Data Node: If this is a newly added Data Node, its information is registered
Update the metrics of the Data Node: The information sent in the heart beat is used for updating the metrics of the node
Issue commands to the Data Node: The Name Node can issue following commands to the Data Node, based on the information received in the heart beat: BlockRecoveryCommand (to recover specified blocks), BlockCommand (for transferring blocks to another Data Node, for invalidating certain blocks), Cache/Uncache (commands for caching / uncaching the blocks)
Block Reports:
Interval of block reports is determined by the configuration parameter dfs.blockreport.intervalMsec (in hdfs-site.xml). By default this is set to 21600000 milliseconds (6 hours).
Some of the information contained in the block report is:
Registration: Data node registration information
blocks: Information about the blocks, which contains: block ID, block length, block generation timestamp, state of the block replica (For e.g. replica is finalized or waiting to be recovered etc.)
This information is used by the Name Node for:
Process first block report: If it is a first time report for the newly registered Data Node, it just adds all the valid replicas. It ignores all the invalid blocks, till the next block report.
For updating the information about blocks: The (Data Node -> Blocks) map is updated in the Name Node. The new block report is compared with the old report and information about successful blocks, corrupted blocks, invalidated blocks etc. is updated
The DataNodes are not directly accessible from outside the cluster; they sit in a private network. A Hadoop cluster is prone to node failures, and the NameNode keeps track of all the data on the different DataNodes. So any query to the cluster is addressed to the NameNode, which provides the block addresses on the DataNodes.
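As a side note, one way to see that the NameNode is the component holding the block locations (assembled from those block reports) is to ask it via hdfs fsck. A minimal sketch, assuming a configured HDFS client and a placeholder path:

import subprocess

# fsck queries the NameNode, which answers from the block map it built out
# of the DataNodes' block reports. The path below is a placeholder.
out = subprocess.run(
    ["hdfs", "fsck", "/user/data/file.txt", "-files", "-blocks", "-locations"],
    capture_output=True, text=True, check=True)
print(out.stdout)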

How to solve Heap Space Error in Talend Enterprise Big Data

I am using the tFileInputJson and tMongoDBOutput components to store JSON data into a MongoDB Database.
When trying this with a small amount of data (nearly 100k JSON objects), the data can be stored in the database without any problems.
Now my requirement is to store nearly 300k JSON objects in the database, and my JSON objects look like:
{
  "LocationId": "253b95ec-c29a-430a-a0c3-614ffb059628",
  "Sdid": "00DlBlqHulDp/43W3eyMUg",
  "StartTime": "2014-03-18 22:22:56.32",
  "EndTime": "2014-03-18 22:22:56.32",
  "RegionId": "10d4bb4c-69dc-4522-801a-b588050099e4",
  "DeviceCategories": [
    "ffffffff-ffff-ffff-ffff-ffffffffffff",
    "00000000-0000-0000-0000-000000000000"
  ],
  "CheckedIn": false
}
While I am performing this operation I am getting the following Exception:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
[statistics] disconnected
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237)
at org.json.simple.JSONArray.toJSONString(Unknown Source)
at org.json.simple.JSONArray.toJSONString(Unknown Source)
at org.json.simple.JSONArray.toString(Unknown Source)
at samplebigdata.retail_store2_0_1.Retail_Store2.tFileInputJSON_1Process(Retail_Store2.java:1773)
at samplebigdata.retail_store2_0_1.Retail_Store2.runJobInTOS(Retail_Store2.java:2469)
at samplebigdata.retail_store2_0_1.Retail_Store2.main(Retail_Store2.java:2328)
Job Retail_Store2 ended at 15:14 10/11/2014. [exit code=1]
My current job looks like:
How can I store so much data into the database in a single job?
The issue here is that you're printing the JSON object to the console (with your tLogRow). This requires all of the JSON objects to be held in memory before finally being dumped all at once to the console once the "flow" is completed.
If you remove the tLogRow components then (in a job as simple as this) Talend should only hold the batch size of your tMongoDBOutput component in memory and keep pushing batches into MongoDB.
As an example, here's a screenshot of me successfully loading 100000000 rows of randomly generated data into a MySQL database:
The data set represents about 2.5 GB on disk when stored as a CSV, but it was comfortably handled in memory with a max heap space of 1 GB: each insert is 100 rows, so the job only really needs to keep 100 rows of the CSV (plus any associated metadata and Talend overheads) in memory at any one point.
In reality, it will probably keep significantly more than that in memory and simply garbage collect the rows that have been inserted into the database when the max memory is close to being reached.
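The same bounded-memory idea can be sketched outside Talend with pymongo: stream the input file and push fixed-size batches, so only one batch is ever held in memory. The file name, database, and collection names below are placeholders, and the sketch assumes one JSON object per line.

import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["retail"]["checkins"]     # placeholder names

batch, batch_size = [], 1000
with open("locations.json") as f:             # assumed: one JSON object per line
    for line in f:
        batch.append(json.loads(line))
        if len(batch) == batch_size:
            collection.insert_many(batch)     # push the batch, then forget it
            batch = []
if batch:
    collection.insert_many(batch)             # flush the final partial batch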
If you have an absolute requirement for logging the JSON records that are being successfully put into the database then you might try outputting into a file instead and stream the output.
As long as you aren't getting too many invalid JSON objects in your tFileInputJson then you can probably keep the reject linked tLogRow as it will only receive the rejected/invalid JSON objects and so shouldn't run out of memory. As you are restricted to small amounts of memory due to being on a 32 bit system you might need to be wary that if the amount of invalid JSON objects grows you will quickly exceed your memory space.
If you simply want to load a large amount of JSON objects into a MongoDB database then you will probably be best off using the tMongoDBBulkLoad component. This takes a flat file (.csv, .tsv, or .json) and loads it directly into a MongoDB database. The documentation I just linked to shows all the relevant options, but you might be particularly interested in the --jsonArray additional argument that can be passed to the database. There is also a basic example of how to use the component.
This would mean you couldn't do any processing midway through the load and you would have to use a pre-prepared JSON/CSV file to load the data, but if you just want a quick way to load data into the database using Talend then this should cover it.
If you needed to process chunks of the file at a time then you might want to look at a much more complicated job with a loop where you load n records from your input, process them and then restart the processing part of the loop but selecting n records with a header of n records and then repeat with a header of 2n records and so on...
Garpmitzn's answer pretty much covers how to change JVM settings to increase memory space but for something as simple as this you just want to reduce the amount you're keeping in memory for no good reason.
As an aside, if you're paying out for an Enterprise licence of Talend then you should probably be able to get yourself a 64 bit box with 16 gb of RAM easily enough and that will drastically help with your development. I'd at least hope that your production job execution server has a bunch of memory.
I feel you are reading everything into Talend's memory. You have to play with the Java JVM parameters Xms and Xmx: increase Xmx to a bigger size than it is currently set to, e.g. if it is set to -Xmx2048m, increase it to -Xmx4096m or more.
These parameters are available in the .bat/.sh file of the exported job; in Talend Studio you can find them under the Run Job tab > Advanced settings > JVM Settings.
But it is advisable to design the job in such a way that you don't load too much into memory.

How can I reduce the data fetch time with MongoDB on a larger data size?

We have a collection (name_list) of 30 million 'names'. We are comparing these 30 million records with 4 million 'names', which we fetch from a txt file.
I am using PHP on Linux. I have created an index on the 'names' field. I am using a simple 'find' to compare the MongoDB data with the txt file's data:
$collection->findOne(array('names' => $name_from_txt))
I am comparing them one by one. I know joins are not possible in MongoDB. Is there a better method to compare data in MongoDB?
The OS and other details are as follows.
OS : Ubuntu
Kernel Version : 3.5.0-23-generic
64 bit
MongoDB shell version: 2.4.5
CPU info - 24
Memory - 64G
Disks - 3, of which Mongo is written to a Fusion-io disk of size 320G
File system on mongo disk - ext4 with noatime as mentioned in mongo doc
ulimit settings for mongo changed to 65000
readahead is 32
numa is disabled with --interleave option
When I use a script to compare this, it takes around 5 minutes to complete. What can be done so that it executes faster and finishes in, say, 1-2 minutes? Can anyone help, please?
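For reference, here is a sketch in Python (pymongo) of the lookup described above, plus a batched variant using $in so that one query covers many names instead of one findOne per name. The collection name (name_list) and the indexed field (names) come from the question; the database name, file name, and batch size are assumptions.

from pymongo import MongoClient

# Database name and connection string are placeholders.
collection = MongoClient("mongodb://localhost:27017")["test"]["name_list"]

def batches(iterable, size=1000):
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

with open("names.txt") as f:                  # the 4 million names
    names = (line.strip() for line in f)
    for chunk in batches(names):
        # One round trip per 1000 names instead of 1000 findOne() calls.
        found = {doc["names"] for doc in
                 collection.find({"names": {"$in": chunk}}, {"names": 1})}
        missing = set(chunk) - found          # names not present in name_list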

How can I improve Cassandra read/write performance?

I am working on a single-node Cassandra setup. The system I am using has a 4-core CPU with 8 GB of RAM.
The properties of the column family I am using are:
Keyspace: keyspace1:
Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
Durable Writes: true
Options: [datacenter1:1]
Column Families:
ColumnFamily: colfamily (Super)
Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type/org.apache.cassandra.db.marshal.BytesType
Row cache size / save period in seconds / keys to save : 100000.0/0/all
Row Cache Provider: org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
Key cache size / save period in seconds: 200000.0/14400
GC grace seconds: 864000
Compaction min/max thresholds: 4/32
Read repair chance: 1.0
Replicate on write: true
Built indexes: []
Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
I tried to insert 1 million rows into a column family. The throughput for writes is around 2500 per second and for reads around 380 per second.
How can I improve both the read and write throughput?
380 reads per second means that you are reading data from the hard drive with a low cache hit rate, or the OS is swapping. Check the Cassandra statistics to find out the cache usage:
./nodetool -host <IP> cfstats
You have enabled both the row cache and the key cache. The row cache reads the whole row into RAM, meaning all columns for a given row key. In this case you can disable the key cache, but make sure that you have enough free RAM to handle row caching.
If you are running Cassandra with the off-heap cache (the default since 1.x), it is possible that the row cache is very large and the OS has started swapping; check the swap size, as this can decrease performance.
