Unknown memory leak in Android - memory-management

adb shell dumpsys meminfo for my package shows the following. The native allocated size keeps increasing and eventually causes the phone to restart. Is this a memory leak? How can I fix it?
                   native   dalvik    other    total    limit   bitmap nativeBmp
           size:   445456     5955      N/A   451411    32768      N/A      N/A
      allocated:   445024     3726      N/A   448750      N/A    10948     1912
           free:       43     2229      N/A     2272      N/A      N/A      N/A
          (Pss):   132631      870   300292   433793      N/A      N/A      N/A
 (shared dirty):     2532     1656     5552     9740      N/A      N/A      N/A
   (priv dirty):   132396      708   298960   432064      N/A      N/A      N/A
Objects
Views: 0 ViewRoots: 0
AppContexts: 0 Activities: 0
Assets: 6 AssetManagers: 6
Local Binders: 5 Proxy Binders: 14
Death Recipients: 1
OpenSSL Sockets: 0
SQL
heap: 0 MEMORY_USED: 0
PAGECACHE_OVERFLOW: 0 MALLOC_SIZE: 50
Asset Allocations
zip:/data/app/com.outlook.screens-2.apk:/resources.arsc: 25K
zip:/data/app/com.outlook.screens-2.apk:/assets/font/RobotoCondensedRegular.ttf: 156K
zip:/data/app/com.outlook.screens-2.apk:/assets/font/RobotoCondensedBold.ttf: 158K
zip:/data/app/com.outlook.screens-2.apk:/assets/font/RobotoCondensedLight.ttf: 157K
Uptime: 190161845 Realtime now=432619753
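One way to confirm that it really is the native heap (rather than the Dalvik heap or bitmaps) that keeps growing is to poll dumpsys meminfo over time. A minimal sketch; the package name is assumed from the asset paths above, so adjust as needed:
PKG=com.outlook.screens        # assumed package name
while true; do
    printf '%s  ' "$(date +%T)"
    adb shell dumpsys meminfo "$PKG" | grep -m1 'allocated'   # native/dalvik allocated row
    sleep 30
done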


NEAR indexer does not add anything to the database

I've tried to run https://github.com/near/near-indexer-for-explorer.
There is no firewall, and the IP is accessible (tested just now).
With an empty data directory, it waits for peers forever.
With data from a run started some days ago,
./target/release/indexer-explorer --home-dir ../.near/mainnet run --store-genesis --stream-while-syncing --allow-missing-relations-in-first-blocks 1000 sync-from-latest
it does something:
Nov 01 18:42:23.293 INFO indexer_for_explorer: AccessKeys from genesis were added/updated successful.
Nov 01 18:42:33.188 INFO stats: # 9820210 Waiting for peers 1/1/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 0%, Mem: 0 B
Nov 01 18:42:43.190 INFO stats: # 9820210 Downloading headers 68.72% (13074549) 3/3/40 peers ⬇ 149.3kiB/s ⬆ 6.0kiB/s 0.00 bps 0 gas/s CPU: 23%, Mem: 510.7 MiB
Nov 01 18:42:53.192 INFO stats: # 9820210 Downloading headers 68.72% (13074559) 2/2/40 peers ⬇ 299.4kiB/s ⬆ 297.5kiB/s 0.00 bps 0 gas/s CPU: 40%, Mem: 621.3 MiB
Nov 01 18:43:03.194 INFO stats: # 9820210 Downloading headers 68.72% (13074569) 1/1/40 peers ⬇ 150.1kiB/s ⬆ 148.9kiB/s 0.00 bps 0 gas/s CPU: 42%, Mem: 520.7 MiB
Nov 01 18:43:13.196 INFO stats: # 9820210 Downloading headers 68.72% (13074578) 2/2/40 peers ⬇ 150.3kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 10%, Mem: 631.6 MiB
Nov 01 18:43:23.198 INFO stats: # 9820210 Downloading headers 68.72% (13074590) 2/1/40 peers ⬇ 294.1kiB/s ⬆ 297.6kiB/s 0.00 bps 0 gas/s CPU: 14%, Mem: 601.5 MiB
Nov 01 18:43:33.200 INFO stats: # 9820210 Downloading headers 68.72% (13074598) 1/1/40 peers ⬇ 149.4kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 2%, Mem: 602.9 MiB
Nov 01 18:43:43.203 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 2/2/40 peers ⬇ 150.0kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 9%, Mem: 657.0 MiB
Nov 01 18:43:53.209 INFO stats: # 9820210 Downloading headers 68.72% (13074608) 1/1/40 peers ⬇ 150.5kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 3%, Mem: 661.0 MiB
Nov 01 18:44:03.212 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 1/1/40 peers ⬇ 148.6kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 4%, Mem: 664.8 MiB
Nov 01 18:44:13.213 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 2%, Mem: 664.8 MiB
Nov 01 18:44:23.215 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 1%, Mem: 666.8 MiB
Nov 01 18:44:33.217 INFO stats: # 9820210 Downloading headers 68.72% (13074655) 1/1/40 peers ⬇ 150.0kiB/s ⬆ 148.8kiB/s 0.00 bps 0 gas/s CPU: 11%, Mem: 614.7 MiB
Nov 01 18:44:43.219 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 1%, Mem: 614.9 MiB
Nov 01 18:44:53.224 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 1%, Mem: 614.9 MiB
Nov 01 18:45:03.227 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 1%, Mem: 616.4 MiB
Nov 01 18:45:13.232 INFO stats: # 9820210 EPnLgE7iEq9s7yTkos96M3cWymH5avBAPm3qx3NXqR8H -/4 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 1%, Mem: 616.4 MiB
But nothing gets added to the database.
What am I doing wrong?
Your concern about the indexing part (no data in the database) will get resolved once the node reaches the “syncing blocks” stage. Currently, your node is still only at the “syncing block headers” stage. To speed up this process, start from a backup: https://docs.near.org/docs/develop/node/validator/running-a-node#starting-a-node-from-backup
As for the node dropping off the p2p network, I have no clue why that could have happened. I recommend you start by running a plain neard node and report any issues there before you get to the Indexer (which is just an extension to nearcore, so you can use the same home and data folder).
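To see which sync stage the node is currently in without tailing the log, you can also poll the node's status endpoint. A sketch, assuming the default RPC port 3030 and that jq is installed:
curl -s http://127.0.0.1:3030/status | jq '.sync_info'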
First of all, the indexer requires full archive mode. The links to 5-epoch snapshots are misleading; they are not usable for the indexer.
Second (this may save you a lot of download time), the indexer requires the AVX extension to run. If your CPU does not support AVX, don't bother building nearcore. That should be mentioned in the docs: nearcore depends on some WASM components, and the WASM runtime requires AVX. The indexer will run for some time and then fail miserably without AVX.
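You can check for AVX support before building. A quick sketch for Linux:
# prints "avx" (and "avx2" if available); no output means the CPU has no AVX
grep -o -w -E 'avx2?' /proc/cpuinfo | sort -u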

(NEAR protocol) How to get the Validator Node up

I am trying to run a validator node using the https://docs.near.org/docs/develop/node/validator/deploy-on-mainnet instructions. I have successfully deployed the mainnet staking pool with the following command (step 2 of the instructions):
near call poolv1.near create_staking_pool '{"staking_pool_id":"<name_of_pool>", "owner_id":"<wallet_name>.near", "stake_public_key":"ed25519:3QohztWwCktk3j3MBiCuGaB6vXxeqjUasLan6ChSnduh", "reward_fee_fraction": {"numerator": 3, "denominator": 100}}' --account_id <wallet_name>.near --amount 30 --gas 300000000000000
The transaction
https://explorer.mainnet.near.org/transactions/93xQC8UozL6toVddkPk14qiExdRZMt3gJqCfHz9BBNpV
But after starting the NEAR node, database synchronization does not start (step 3 of the instructions).
target/release/neard run
The machine listens on ports 3030 and 24567, and both ports are open in the firewall.
I needed to add boot_nodes to config.json.
rm ~/.near/config.json
wget -O ~/.near/config.json https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/config.json
After that, the node started
Nov 01 12:08:09.253 INFO stats: #51601485 EgEtVtQuGmvAkpRdjMb6rRwc7qjh2wYEKtmoWuqTDxf5 -/60 33/27/40 peers ⬇ 314.7kiB/s ⬆ 252.9kiB/s 0.80 bps 100.56 Tgas/s CPU: 26%, Mem: 6.0 GiB
Nov 01 12:08:19.256 INFO stats: #51601494 73BhJpUURFQU36Npgi5VajgJxPNdfuF3TJKFHneeA3Pd -/60 33/27/40 peers ⬇ 318.9kiB/s ⬆ 253.2kiB/s 0.90 bps 109.87 Tgas/s CPU: 24%, Mem: 6.0 GiB
Nov 01 12:08:29.299 INFO stats: #51601503 4o7iuNUaypY6KQphhrEYoGU5ZRECyPwyM7Vge8AVnsYq -/60 33/27/40 peers ⬇ 313.7kiB/s ⬆ 247.1kiB/s 0.90 bps 116.37 Tgas/s CPU: 28%, Mem: 6.0 GiB
Nov 01 12:08:39.303 INFO stats: #51601511 GxQePJAQBrirSNwt7uraNeWqXGFZG1YkpKLJfX4Wd9Pb -/60 33/27/40 peers ⬇ 313.6kiB/s ⬆ 247.9kiB/s 0.80 bps 78.89 Tgas/s CPU: 22%, Mem: 6.0 GiB
Nov 01 12:08:49.306 INFO stats: #51601520 9HzEv22KpcGmip83s9T3FqmKB9qbH6Mu1Cdh8cULf9BN -/60 33/27/40 peers ⬇ 319.0kiB/s ⬆ 250.8kiB/s 0.90 bps 98.36 Tgas/s CPU: 25%, Mem: 6.0 GiB
Nov 01 12:08:59.309 INFO stats: #51601529 7NaEtQW8qp1jtZ3CbzRFcjzjjFx4DLtT9bUCc1CVRCB8 -/60 33/27/40 peers ⬇ 318.7kiB/s ⬆ 250.9kiB/s 0.90 bps 115.87 Tgas/s CPU: 24%, Mem: 6.0 GiB
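If you hit the same problem, you can verify that the downloaded config actually contains boot nodes before starting the node. A sketch, assuming jq is installed and the default home directory:
# should print a non-empty, comma-separated list of peer addresses
jq -r '.network.boot_nodes' ~/.near/config.json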

Bash - Search based on timestamp in milliseconds

I am searching for all completed Hadoop jobs (hundreds of them) within a time interval. The interval is given in milliseconds.
Following is the format:
JobId State StartTime UserName Queue Priority UsedContainers RsvdContainers UsedMem RsvdMem NeededMem AM info
job_xxxxxxx SUCCEEDED 1458844667431 default default NORMAL N/A N/A N/A N/A N/A http://xxxxxxxx:8088/proxy/application_xxxxxxxxxx/jobhistory/job/job_xxxxxxxx
job_xxxxxxx SUCCEEDED 1459449718363 default default NORMAL N/A N/A N/A N/A N/A http://xxxxxx.xxxxx.com:8088/proxy/application_xxxxxxxxx/jobhistory/job/job_xxxxx
Following is my attempt:
STARTTIME="Tue Apr 12 10:24:29 EDT 2016"
ENDTIME="Tue Apr 12 15:24:29 EDT 2016"
CONVERTTIME_1=`date --date="$STARTTIME" +%s%3N`
CONVERTTIME_2=`date --date="$ENDTIME" +%s%3N`
echo $CONVERTTIME_1, $CONVERTTIME_2
mapred job -list all | sed -n '/$CONVERTTIME_1/,/$CONVERTTIME_2/p' > out
Desired output: all the jobs, like the ones above, within that time range.
Can anyone please help me get this?
OUTPUT OF mapred job -list all
mapred job -list all
job_1457613852865_5163 SUCCEEDED 1459199337140 zzzzzzzzzz uuuuuuuu_critical NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_5163/jobhistory/job/job_1457613852865_5163
job_1457613852865_4633 SUCCEEDED 1458992402216 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_4633/jobhistory/job/job_1457613852865_4633
job_1457613852865_4821 SUCCEEDED 1459078845580 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_4821/jobhistory/job/job_1457613852865_4821
job_1457613852865_0322 SUCCEEDED 1457717313217 zzzddd uuuuuuuu_critical NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_0322/jobhistory/job/job_1457613852865_0322
job_1457613852865_5304 SUCCEEDED 1459254375921 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_5304/jobhistory/job/job_1457613852865_5304
job_1457613852865_8744 SUCCEEDED 1460195126188 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_8744/jobhistory/job/job_1457613852865_8744
job_1457613852865_3384 SUCCEEDED 1458649020794 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_3384/jobhistory/job/job_1457613852865_3384
job_1457613852865_9038 SUCCEEDED 1460291694279 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_9038/jobhistory/job/job_1457613852865_9038
job_1457613852865_8487 SUCCEEDED 1460115319590 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_8487/jobhistory/job/job_1457613852865_8487
job_1457613852865_8321 SUCCEEDED 1460038991587 dddyyy uuuuuuuu_critical NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_8321/jobhistory/job/job_1457613852865_8321
job_1457613852865_4661 SUCCEEDED 1458994901619 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_4661/jobhistory/job/job_1457613852865_4661
job_1457613852865_1975 SUCCEEDED 1458216683800 zzzyyyy yyyyyy_default NORMAL N/A N/A N/A N/A N/A http://xxxxx254.yyyyyy.com:8088/proxy/application_1457613852865_1975/jobhistory/job/job_1457613852865_1975
I used this:
#!/bin/bash
STARTTIME="Tue Apr 12 10:13:01 EDT 2016"
ENDTIME="Tue Apr 12 10:13:59 EDT 2016"
start=$(date -d "$STARTTIME" '+%s%3N')
end=$(date -d "$ENDTIME" '+%s%3N')
echo "start=$start :: end=$end"
mapred job -list all | awk -v start="$start" -v end="$end" '$3>=start && $3<=end'
Got one extra job:
job_1457613852865_9785 SUCCEEDED 1460470436726 yyyyyyyyyy nnnnnnnnnn NORMAL N/A N/A N/A N/A N/A http://888888.xxxxxxxxxx.com:8088/proxy/application_1457613852865_9785/jobhistory/job/job_1457613852865_9785
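To double-check whether a given job really falls inside or outside the window, its millisecond timestamp can be converted back to a readable date with GNU date. For example, for the job above:
# drop the millisecond part and feed the epoch seconds to date
date -d "@$((1460470436726 / 1000))"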
You can convert your dates into millisecond values and then use awk to filter your data:
STARTTIME="Tue Apr 12 10:24:29 EDT 2016"
ENDTIME="Tue Apr 12 15:24:29 EDT 2016"
start=$(date -d "$STARTTIME" '+%s%3N')
end=$(date -d "$ENDTIME" '+%s%3N')
echo "start=$start :: end=$end"
mapred job -list all | awk -v start="$start" -v end="$end" '$3>=start && $3<=end'
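If you also want to keep only completed jobs, the same awk filter can test the State column (the second field in the listing above). A sketch:
mapred job -list all | awk -v start="$start" -v end="$end" '$2 == "SUCCEEDED" && $3 >= start && $3 <= end' > out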

tasklist vs. task manager memory

Hi and thanks in advance.
What is the difference between the memory reported by tasklist (which you run in cmd) and by the GUI Task Manager? I noticed that for browser processes the figures differ by a great deal. Which one more accurately reflects a process's memory use?
Task Manager has lots of memory counters; see the View menu - Select Columns.
The standard one shown is the private working set. This is (a) private, so only bytes specific to this program are counted (shared code such as shell32 is not), and (b) working set, the amount of memory mapped into and currently present in that process's address space.
Memory that is not present in the address space may still be in physical memory: on the standby list, in the file cache, or in use by another program; it only takes flipping a bit to make it available to the process again. Run two copies of Notepad: notepad.exe is now in the file cache and, being small, fully loaded in both processes, but the code is only in physical memory once, not three times.
If you want to make your own tasklist:
Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * From Win32_Process")
For Each objItem in colItems
    ' If objItem.Name = "mspaint.exe" Then
    WScript.Echo objItem.Name & " PID=" & objItem.ProcessID & " SessionID=" & objItem.SessionId
    '     objItem.Terminate
    ' End If
Next
Lines starting with a ' are commented out.
To use in a command prompt
cscript //nologo c:\path\script.vbs
These are the properties
Property Type Operation
======== ==== =========
CSName N/A N/A
CommandLine N/A N/A
Description N/A N/A
ExecutablePath N/A N/A
ExecutionState N/A N/A
Handle N/A N/A
HandleCount N/A N/A
InstallDate N/A N/A
KernelModeTime N/A N/A
MaximumWorkingSetSize N/A N/A
MinimumWorkingSetSize N/A N/A
Name N/A N/A
OSName N/A N/A
OtherOperationCount N/A N/A
OtherTransferCount N/A N/A
PageFaults N/A N/A
PageFileUsage N/A N/A
ParentProcessId N/A N/A
PeakPageFileUsage N/A N/A
PeakVirtualSize N/A N/A
PeakWorkingSetSize N/A N/A
Priority N/A N/A
PrivatePageCount N/A N/A
ProcessId N/A N/A
QuotaNonPagedPoolUsage N/A N/A
QuotaPagedPoolUsage N/A N/A
QuotaPeakNonPagedPoolUsage N/A N/A
QuotaPeakPagedPoolUsage N/A N/A
ReadOperationCount N/A N/A
ReadTransferCount N/A N/A
SessionId N/A N/A
Status N/A N/A
TerminationDate N/A N/A
ThreadCount N/A N/A
UserModeTime N/A N/A
VirtualSize N/A N/A
WindowsVersion N/A N/A
WorkingSetSize N/A N/A
WriteOperationCount N/A N/A
WriteTransferCount N/A N/A
And the methods
Call [ In/Out ]Params&type Status
==== ===================== ======
AttachDebugger (null)
Create [IN ]CommandLine(STRING) (null)
[IN ]CurrentDirectory(STRING)
[IN ]ProcessStartupInformation(OBJECT)
[OUT]ProcessId(UINT32)
GetOwner [OUT]Domain(STRING) (null)
[OUT]User(STRING)
GetOwnerSid [OUT]Sid(STRING) (null)
SetPriority [IN ]Priority(SINT32) (null)
Terminate [IN ]Reason(UINT32) (null)
Which is the same as
wmic process where name='notepad.exe' get /format:list
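To pull only the counters relevant to the question, the property list can be trimmed, for example (values are in bytes; WorkingSetSize is roughly what tasklist shows, while PrivatePageCount is the process's private committed bytes, related to but not identical to Task Manager's private working set):
wmic process where name='notepad.exe' get Name,ProcessId,WorkingSetSize,PrivatePageCount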
Further reading
https://msdn.microsoft.com/en-us/library/ms810627.aspx
https://www.labri.fr/perso/betrema/winnt/ntvmm.html (this no longer appears on the MSDN)

Elasticsearch stuck in yellow status

I ran the following command on Elasticsearch:
PUT /_cluster/settings
{
  "persistent" : {
    "threadpool.index.queue_size": -1
  }
}
But now my Elasticsearch cluster health is stuck in yellow. Nothing is moving, and the cat recovery API gives the following:
books 0 613 replica done server1.internal.com server2.internal.com n/a n/a 1 100.0% 79 100.0%
books 0 53479 replica done server2.internal.com server3.internal.com n/a n/a 146 100.0% 435062890 100.0%
books 1 592 replica done server1.internal.com server2.internal.com n/a n/a 1 100.0% 79 100.0%
books 1 5901 replica done server2.internal.com server3.internal.com n/a n/a 198 34.3% 449403096 2.7%
books 2 551 replica done server1.internal.com server2.internal.com n/a n/a 1 100.0% 79 100.0%
books 2 9018 replica done server2.internal.com server3.internal.com n/a n/a 201 28.9% 451881473 4.4%
books 3 519 replica done server1.internal.com server2.internal.com n/a n/a 1 100.0% 79 100.0%
books 3 3869 replica done server2.internal.com server3.internal.com n/a n/a 170 61.8% 434156880 1.2%
books 4 525 replica done server1.internal.com server2.internal.com n/a n/a 1 100.0% 79 100.0%
books 4 33468 replica done server2.internal.com server3.internal.com n/a n/a 136 100.0% 428616146 100.0%
For about 30 minutes there has been no progress.
Can anyone help me solve this?
I just figured out this was due to a corrupted disk on one of the nodes.
Classification of Elasticsearch cluster status:
RED: Some or all primary shards are not allocated.
YELLOW: Elasticsearch has allocated all of the primary shards, but some or all of the replicas have not been allocated.
GREEN: The cluster is fully operational; Elasticsearch was able to allocate all shards and replicas to nodes within the cluster.
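To see which shards are still unassigned or initializing, the health and cat APIs are the quickest check. A sketch, assuming the default localhost:9200:
curl -s 'localhost:9200/_cluster/health?pretty'
curl -s 'localhost:9200/_cat/shards?v' | grep -E 'UNASSIGNED|INITIALIZING'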
