Raspberry Pi B+ memory displays as 247 MB - linux-kernel

I have a Raspberry Pi B+ model.
The specifications say it has 512 MB of RAM,
but when I check it with free -m, it shows only 247 MB.
Please tell me the reason for that.
Thank you.
pi@raspberrypi ~/Desktop/song $ free -m
             total       used       free     shared    buffers     cached
Mem:           247        210         36          0         15        103
-/+ buffers/cache:         91        155
Swap:           99          0         99

Turn your GPU memory split down:
sudo raspi-config
You can select
8 Advanced Options        Configure advanced settings
then
A3 Memory Split           Change the amount of memory made available to the GPU
and then you can set it to the expected value.
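If you prefer not to use the interactive menu, the same split can also be set directly in /boot/config.txt (a sketch; 16 is a commonly used minimum when you do not need graphics):

# /boot/config.txt - reserve only 16 MB for the GPU, leaving the rest for the ARM
gpu_mem=16

After a reboot, free -m should report roughly 512 MB minus the GPU share (and a little kernel overhead).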

Related

Spark Scratch Space

I have a cluster of 13 machines, each with 4 physical CPUs and 24 GB of RAM.
I started a Spark cluster with one driver and 12 slaves.
I set the number of cores per slave to 12, meaning I have a cluster as follows:
Alive Workers: 12
Cores in use: 144 Total, 110 Used
Memory in use: 263.9 GB Total, 187.0 GB Used
I started an application with the following configuration:
[('spark.driver.cores', '4'),
('spark.executor.memory', '15G'),
('spark.executor.id', 'driver'),
('spark.driver.memory', '5G'),
('spark.python.worker.memory', '1042M'),
('spark.cores.max', '96'),
('spark.rdd.compress', 'True'),
('spark.serializer.objectStreamReset', '100'),
('spark.executor.cores', '8'),
('spark.default.parallelism', '48')]
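For reference, a minimal PySpark sketch of how such a configuration might be assembled (the master URL and application name below are placeholders, not taken from the question):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("spark://master:7077")    # placeholder master URL
        .setAppName("cache-test")            # hypothetical application name
        .set("spark.driver.cores", "4")
        .set("spark.driver.memory", "5G")
        .set("spark.executor.memory", "15G")
        .set("spark.executor.cores", "8")
        .set("spark.cores.max", "96")
        .set("spark.default.parallelism", "48"))

sc = SparkContext(conf=conf)
print(sc.getConf().getAll())    # prints (key, value) pairs like the list above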
I understand there are 15 GB of RAM per executor with 8 task slots, and a parallelism of 48 (48 = 6 task slots * 12 slaves).
Then I have two big files on HDFS: 6 GB each (from a directory of 12 files of 5 blocks of 128 MB each), with a 3x replication factor.
I union these two files => I get one dataframe of 12 GB, I think, but I see a 37 GB input read through the UI:
That could be the first question: why 37 GB?
Then, as the execution time is too long for me, I try to cache the data so that I can go faster. But the caching never finishes; here you can see it has already been running 45 minutes and is still not done (vs 6 min when not cached!):
So I try to understand why, and I look at the Memory/Disk usage in the storage section of the UI:
So some parts of the RDD are staying on disk.
Furthermore, I see the executors may still have free memory:
And I notice on the same "Storage" page that the size of the RDD has jumped:
Storage Level: Disk Serialized 1x Replicated
Cached Partitions: 72
Total Partitions: 72
Memory Size: 42.7 GB
Disk Size: 73.3 GB
=> I understand: Memory Size: 42.7 GB + Disk Size: 73.3 GB = 116 GB!
=> So my 6 GB files have turned into 37 GB and then into 116 GB???
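For context, the union + caching step is presumably something along these lines (a sketch only; df1/df2 and the exact storage level are assumptions, though a serialized memory-and-disk level matches the Memory Size / Disk Size split shown above):

from pyspark.storagelevel import StorageLevel

df = df1.unionAll(df2)                          # the two ~6 GB inputs
df.persist(StorageLevel.MEMORY_AND_DISK_SER)    # serialized cache; partitions that don't fit spill to disk
df.count()                                      # first action materializes the cache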
But I try to understand why there is still some memory left on my executors, so I go to the "err" dump of one of them, and I see:
18/02/08 11:04:08 INFO MemoryStore: Will not store rdd_50_46
18/02/08 11:04:09 WARN MemoryStore: Not enough space to cache rdd_50_46 in memory! (computed 1134.1 MB so far)
18/02/08 11:04:09 INFO MemoryStore: Memory use = 1641.6 KB (blocks) + 7.7 GB (scratch space shared across 6 tasks(s)) = 7.7 GB. Storage limit = 7.8 GB.
18/02/08 11:04:09 WARN BlockManager: Persisting block rdd_50_46 to disk instead.
And here I see that the executor wants to cache a 1641.6 KB block (only about 1.6 MB!) and it can't, because there is a "scratch space" of 7.7 GB "shared across 6 tasks".
=> What is a "scratch space"?
=> The 6 tasks => comes from the parallelism of 48 / 12 = 6
And then I come back to the application information, and I see that the count that lasted 48 min read only 37 GB of data! (The 48 min are clearly used to cache the data too.)
When I do a count on the cached dataframe I see 116 GB of input read:
And at the end of the day, the time saved by the cached count is not that impressive; here are the 3 durations:
4.8 min: count on the cached df
48 min: count while caching
5.8 min: count on the non-cached df (read directly from HDFS)
So why is that?
Because the cached df is not that much cached:
meaning more or less 40 GB in memory and 60 GB on disk.
I am surprised because, at 15 GB / executor * 12 slaves => 180 GB of memory, I can cache only 40 GB... But in fact I remember that the memory is split:
30% for spark
54% for storage
16% for shuffle
So I understand that I do have 54% * 15 GB for storage, i.e. 8.1 GB per executor, meaning that out of my 180 GB I only have about 97 GB for storage. Why are about 97 - 40 = 57 GB not used then?
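As a back-of-the-envelope check with the fractions quoted above (a rough sketch, ignoring the reserved overhead that the real unified memory manager also subtracts):

executor_memory_gb = 15
storage_fraction   = 0.54
executors          = 12

storage_per_executor = executor_memory_gb * storage_fraction   # ~8.1 GB, close to the 7.8 GB limit in the log
total_storage        = storage_per_executor * executors        # ~97 GB of storage across the cluster
print(storage_per_executor, total_storage)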
Oops... This is a long post!
Plenty of questions... Sorry...

Memory Management with paging

This is about paging in operating-system memory management.
When we have 64-byte pages, why are 6 bits needed?
111111 => 63
1000000 => 64
Using 6 bits you can represent at most 64 pages (0-63) in binary. Since a single bit can take only two values, 0 or 1, with 6 bits you get 2*2*2*2*2*2 = 2^6 = 64 different permutations. Every permutation identifies a different page.
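Equivalently, the number of bits needed is the ceiling of the base-2 logarithm of the number of values you have to distinguish:

import math

n = 64                                  # 64 pages (or 64 byte offsets within a 64-byte page)
bits = int(math.ceil(math.log(n, 2)))   # log2(64) = 6
print(bits, 2 ** bits)                  # 6 64 -> values 0..63 fit in 6 bits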

how much resource reserved on a mesos-slave

How does mesos-slave calculate its available resources? In the web UI, mesos-master shows 2.9 GB of memory available on a slave, but when I run "free -m":
free -m
             total       used       free     shared    buffers     cached
Mem:          3953       2391       1562          0       1158        771
-/+ buffers/cache:        461       3491
Swap:         4095         43       4052
and the --resources parameter was not given.
I want to know how the Mesos scheduler calculates the available resources.
The function that calculates the available resources offered by slaves can be seen here; in particular, the memory part is lines 98 to 114.
If the machine has more than 2 GB of RAM, Mesos will offer total - Gigabytes(1). In your case the machine has ~4 GB, and that's why you're seeing ~3 GB in the web UI.
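The rule described above boils down to something like this sketch (simplified; the branch for machines with 2 GB or less is an assumption here and differs in the real code):

def offered_memory_gb(total_gb):
    # machines with more than 2 GB: Mesos keeps 1 GB for the OS and offers the rest
    if total_gb > 2:
        return total_gb - 1
    # smaller machines are handled differently in the real implementation (assumption)
    return total_gb / 2.0

print(offered_memory_gb(3953 / 1024.0))   # ~2.9, matching the web UI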

How to Resolve this Out of Memory Issue for a Small Variable in Matlab?

I am running the 32-bit version of MATLAB R2013a on my computer (4 GB RAM, 32-bit Windows 7).
I have a dataset (~60 MB) and I want to read it using
ds = dataset('File', myFile, 'Delimiter', ',');
and each time I get an Out of Memory error. Theoretically, I should be able to use 2 GB of RAM, so there should be no problem reading such a small file.
Here is what I got when I typed memory:
Maximum possible array: 36 MB (3.775e+07 bytes) *
Memory available for all arrays: 421 MB (4.414e+08 bytes) **
Memory used by MATLAB: 474 MB (4.969e+08 bytes)
Physical Memory (RAM): 3317 MB (3.478e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I followed every instruction I found (this is not a new issue), but my case seems rather weird, because I cannot even run a simple program now.
System: Windows 7 32 bit
Matlab: R2013a
RAM: 4 GB
Clearly your issue is right here:
Maximum possible array: 36 MB (3.775e+07 bytes) *
You are either using a lot of memory on your system and/or you have very little swap space.
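Laying the numbers from the memory report side by side makes the problem visible (a rough sketch; the 2 GB figure is the default user address space of a 32-bit process on 32-bit Windows):

# sizes in MB, taken from the memory report above
user_address_space = 2048   # default 32-bit user address space on 32-bit Windows
used_by_matlab     = 474    # "Memory used by MATLAB"
available_total    = 421    # "Memory available for all arrays"
largest_block      = 36     # "Maximum possible array" = largest contiguous block
dataset_file_size  = 60     # approximate size of the file being read

# even though 421 MB are free in total, the address space is fragmented and
# no contiguous block larger than 36 MB is left, so a ~60 MB array cannot fit
print(dataset_file_size > largest_block)   # True -> Out of Memory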

Cassandra Amazon EC2 , lots of IOWait

We have the following stats for a single-node Cassandra on an Amazon EC2/RightScale m1.large instance with 2 ephemeral disks in RAID 0 (7.6 GB total memory).
4 GB of RAM is allocated to the Cassandra heap; 800 MB is the heap NEW size.
The following stats are from OpsCenter Community 2.0:
Read Requests 285 to 340 per second
Write Requests 257 to 720 per second
OS Load 15.15 to 17.15
Write Request Latency 293 to 685 micros
OS Sent Network Traffic 18 MB to 30 MB per second
OS Received Network Traffic 22 MB to 34 MB per second
OS Disk Queue Size 23 to 26 requests
Read Requests Pending 8 to 20
Read Request Latency 69140 to 92885 micros
OS Disk latency 37 to 42 ms
OS Disk Throughput 12 to 14 Mb per second
Disk IOPs Reads 600 to 740 per second
Disk IOPs Writes 2 to 7 per second
IOWait 60 to 70 % CPU avg
Idle 24 to 30 % CPU avg
Rowcache is disabled.
Are the above stats satisfactory for the provided configuration, or how could we tweak it further to get less IOWait? We think we are experiencing a lot of IOWait, and we would like to tune it for the best performance.
Read requests are mixed: some go to one super column family and some to a standard one having more than a million keys, with a varying number of super columns (max 14), a varying number of subcolumns per super column (from 1 to 10000), and a varying number of columns (max 14) in the standard column family. The subcolumns are very thin, with 0-byte values and 8-byte names.
The process removes the data from the super column family and writes the processed data to the standard one.
Would EBS disks work better on Amazon EC2?
I'm not positive whether you can easily tweak your config to get more disk performance, but using Snappy compression could help a good deal by making your app read less overall. It may also help to use the new composite key layout instead of supercolumns.
One thing I can say for sure: EBS will NOT work better. Stay away from it at all costs if you care about latency.
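To illustrate the two suggestions, here is a hypothetical sketch (all keyspace, table, and column names are invented, and it assumes a Cassandra version with CQL3 support plus the Python cassandra-driver, neither of which is stated in the question):

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('my_keyspace')   # placeholder contact point and keyspace
session.execute("""
    CREATE TABLE events (
        row_key   text,
        super_col text,
        sub_col   text,
        value     blob,
        PRIMARY KEY (row_key, super_col, sub_col)          -- composite key instead of supercolumns
    ) WITH compression = {'sstable_compression': 'SnappyCompressor'}
""")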
