I'm running into an issue where having many buffers open causes Emacs to briefly hang exactly every 5 seconds while I'm typing into a buffer. I prefer to keep these buffers open for use with https://github.com/alphapapa/org-rifle - but I've confirmed that reducing the number of open buffers makes the issue unnoticeable.
I've tried changing auto-save behaviour by setting auto-save-interval to a large number, as well as enabling auto-save-visited-mode. I also turned off company-mode and org-evil. None of these made a difference.
Running a CPU/memory profile didn't reveal anything interesting, unless I'm missing something:
Memory:
1 + timer-event-handler 103,073,519 53%
1 + command-execute 42,896,933 22%
2 + company-post-command 28,962,855 14%
3 + redisplay_internal (C function) 13,816,595 7%
4 + org-evil--post-command 5,206,152 2%
5 + evil-repeat-post-hook 55,576 0%
6 + evil-repeat-pre-hook 5,184 0%
7 evil--jump-handle-buffer-crossing 2,112 0%
8 + undo-auto--add-boundary 1,056 0%
9 + evil-esc 192 0%
10 + eldoc-schedule-timer 48 0%
11 ... 0 0%
CPU:
1 + timer-event-handler 2594 71%
1 + command-execute 730 20%
2 + ... 200 5%
3 + company-post-command 53 1%
4 + redisplay_internal (C function) 41 1%
5 + org-evil--post-command 17 0%
6 + evil-repeat-post-hook 1 0%
7 + evil--jump-hook 1 0%
My dot emacs config is https://github.com/esmongeski/dotfiles/blob/main/dotEmacs.org
What else can I do to diagnose this type of lag?
This was caused by garbage collection. https://www.gnu.org/software/emacs/manual/html_node/elisp/Garbage-Collection.html#:~:text=Emacs%20provides%20a%20garbage%20collector,still%20accessible%20to%20Lisp%20programs has details on how to tune it.
Update: I was on Emacs 27.x, and upgrading to 28.x fixed both the lag and the memory leak.
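For anyone landing here, the usual tuning knobs are gc-cons-threshold and gc-cons-percentage. Below is a minimal sketch with illustrative values (not the exact settings from my config), plus a way to confirm that GC is the culprit:
;; Illustrative values only; tune to taste.
(setq gc-cons-threshold (* 100 1024 1024)) ; don't collect until ~100 MB has been allocated
(setq gc-cons-percentage 0.5)              ; or until the heap has grown by 50%
(setq garbage-collection-messages t)       ; log a message each time GC actually runs
If the hangs line up with the "Garbage collecting..." messages, raising the threshold (or upgrading Emacs, as above) is the fix.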
I would like to share some experimental results about RocksDB Put performance: single-threaded put throughput turns out to be higher than two-threaded put throughput. This is weird, because I am using the default skiplist memtable, and that data structure supports concurrent writes.
Here is my testing code.
// Assumes <atomic>, <string>, <thread>, <vector> and the RocksDB headers are included,
// and that `db` and `write_option_disable` are set up as in the update below.
uint64_t nthread = 2;
uint64_t nkeys = 16000000;
std::vector<std::thread> threads(nthread);
std::atomic<uint64_t> idx(1000000);
for (uint64_t t = 0; t < nthread; t++) {
    threads[t] = std::thread([db, &idx, nthread, nkeys, &write_option_disable] {
        for (uint64_t i = 0; i < nkeys / nthread; i++) {
            // All threads pull the next key suffix from one shared atomic counter.
            std::string key = "WVERIFY" + std::to_string(idx.fetch_add(1));
            std::string value = "MOCK";
            auto ikey = rocksdb::Slice(key);
            auto ivalue = rocksdb::Slice(value);
            db->Put(write_option_disable, ikey, ivalue);
        }
    });
}
for (auto& t : threads) {
    t.join();
}
Here are the results I got.
// Single thread
Uptime(secs): 8.4 total, 8.3 interval
Flush(GB): cumulative 1.170, interval 1.170
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 1.17 GB write, 143.35 MB/s write, 0.00 GB read, 0.00 MB/s read, 8.1 seconds
Interval compaction: 1.17 GB write, 144.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 8.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache LRUCache#0x564742515ea0#7011 capacity: 8.00 MB collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 8
Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
** File Read Latency Histogram By Level [default] **
** DB Stats **
Uptime(secs): 8.4 total, 8.3 interval
Cumulative writes: 16M writes, 16M keys, 16M commit groups, 1.0 writes per commit group, ingest: 1.63 GB, 199.80 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 16M writes, 16M keys, 16M commit groups, 1.0 writes per commit group, ingest: 1669.88 MB, 200.85 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
// 2 threads
Uptime(secs): 31.4 total, 31.4 interval
Flush(GB): cumulative 0.183, interval 0.183
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.67 GB write, 21.84 MB/s write, 0.97 GB read, 31.68 MB/s read, 10.2 seconds
Interval compaction: 0.67 GB write, 21.87 MB/s write, 0.97 GB read, 31.72 MB/s read, 10.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache LRUCache#0x5619fb7bbea0#6183 capacity: 8.00 MB collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 31
Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
** File Read Latency Histogram By Level [default] **
** DB Stats **
Uptime(secs): 31.4 total, 31.4 interval
Cumulative writes: 16M writes, 16M keys, 11M commit groups, 1.4 writes per commit group, ingest: 0.45 GB, 14.67 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 16M writes, 16M keys, 11M commit groups, 1.4 writes per commit group, ingest: 460.94 MB, 14.69 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
===========================update==========================
These are my RocksDB settings.
DB* db;
Options options;
BlockBasedTableOptions table_options;
rocksdb::WriteOptions write_option_disable;
write_option_disable.disableWAL = true;
// Optimize RocksDB. This is the easiest way to get RocksDB to perform well
options.IncreaseParallelism();
options.OptimizeLevelStyleCompaction();
// create the DB if it's not already present
options.create_if_missing = true;
The atomic idx shared between the two threads can introduce non-trivial overhead, since every insert contends on the same counter. Try generating keys independently in each thread, for example by inserting random values from each thread, and maybe increase the number of threads.
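As a rough illustration of that suggestion, here is a sketch of the write loop with the shared counter removed: each thread writes its own disjoint key range, so nothing is contended between threads (it assumes the same db, threads, and write_option_disable setup as in the question):
for (uint64_t t = 0; t < nthread; t++) {
    threads[t] = std::thread([db, t, nthread, nkeys, &write_option_disable] {
        // Each thread writes a contiguous, non-overlapping key range.
        uint64_t begin = 1000000 + t * (nkeys / nthread);
        uint64_t end   = begin + nkeys / nthread;
        for (uint64_t i = begin; i < end; i++) {
            std::string key = "WVERIFY" + std::to_string(i);
            db->Put(write_option_disable, rocksdb::Slice(key), rocksdb::Slice("MOCK"));
        }
    });
}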
Hello Biostars, I hope you're doing well. I mapped my reads against the reference genome to get an idea of the coverage. What I want to know is the percentage of the reference that is covered, but I don't know how to obtain this information with the different tools I know. In Qualimap I only found the mean coverage and the coverage standard deviation, plus lines like this:
There is a 99.59% of reference with a coverageData >= 1X
There is a 99.56% of reference with a coverageData >= 2X
There is a 99.53% of reference with a coverageData >= 3X
There is a 99.49% of reference with a coverageData >= 4X
There is a 99.45% of reference with a coverageData >= 5X
There is a 99.42% of reference with a coverageData >= 6X
There is a 99.39% of reference with a coverageData >= 7X
There is a 99.37% of reference with a coverageData >= 8X
There is a 99.34% of reference with a coverageData >= 9X
There is a 99.31% of reference with a coverageData >= 10X
There is a 99.3% of reference with a coverageData >= 11X
There is a 99.27% of reference with a coverageData >= 12X
..
I don't know whether it is possible to obtain the percentage of coverage from this information, and how. I also used bamtools stats and obtained the following:
Total reads: 6158542
Mapped reads: 5749217 (93.3535%)
Forward strand: 3286294 (53.3616%)
Reverse strand: 2872248 (46.6384%)
Failed QC: 0 (0%)
Duplicates: 0 (0%)
Paired-end reads: 6158542 (100%)
'Proper-pairs': 4667096 (75.7825%)
Both pairs mapped: 5730384 (93.0477%)
Read 1: 3079271
Read 2: 3079271
Singletons: 18833 (0.305803%)
Again I didn't get the percentage of coverage, and the same goes for samtools flagstat:
6158542 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
5749217 + 0 mapped (93.35% : N/A)
6158542 + 0 paired in sequencing
3079271 + 0 read1
3079271 + 0 read2
4667096 + 0 properly paired (75.78% : N/A)
5730384 + 0 with itself and mate mapped
18833 + 0 singletons (0.31% : N/A)
83794 + 0 with mate mapped to a different chr
12721 + 0 with mate mapped to a different chr (mapQ>=5)
and samtools idxstats:
NZ_ALWU01000001.1 581415 2031429 6451
NZ_ALWU01000002.1 43553 76489 678
NZ_ALWU01000003.1 29286 117672 342
NZ_ALWU01000004.1 37537 144448 440
NZ_ALWU01000005.1 217837 789901 2467
NZ_ALWU01000006.1 38235 103338 325
NZ_ALWU01000007.1 9944 45471 292
NZ_ALWU01000008.1 178611 651422 2190
NZ_ALWU01000009.1 17047 6352 42
NZ_ALWU01000010.1 510276 1782695 5606
* 0 0 390492
Can you please tell me how to get the percentage of coverage? Thank you very much.
I have a struct like this:
type Headers struct {
    header               string
    valueFromCalculation string
    value                float64
}
I have three slices with the values for each of these fields:
var headerLabels []string
var values []float64
var valueFromCalculation []string
[January February March April May June July August September TOTAL]
[175 167 148 142 125 114 130 120 30 1151]
[15% 15% 13% 12% 11% 10% 11% 10% 3%]
Now I want to create a new slice of Headers by combining these. There is one issue, which I believe I'm handling: the length of valueFromCalculation is 1 less than that of the other slices.
To create the new slice I want to do this:
sliceOfHeaders := []*Headers{}
for i := 0; i <= len(headerLabels); i++ {
    headerEntry := new(Headers)
    headerEntry.header = headerLabels[i]
    headerEntry.value = values[i]
    if i == len(headerLabels) {
        headerEntry.valueFromCalculation = ""
    } else {
        headerEntry.valueFromCalculation = valueFromCalculation[i]
    }
    sliceOfHeaders = append(sliceOfHeaders, headerEntry)
}
It is throwing the below error:
"panic: runtime error: index out of range"
How can this be?
I'm accounting for the valueFromCalculation length being one less than that of the other struct fields.
Here you can see the output from just before I start my loop:
header --> [January February March April May June July August September TOTAL]
value --> [175 167 148 142 125 114 130 120 30 1151]
valueFromCalculation --> [15% 15% 13% 12% 11% 10% 11% 10% 3%]
header length --> 10
value length --> 10
valueFromCalculation length --> 9
Please can someone help me here? I can't see what I'm doing wrong; I'm accounting for the length of the third field being one less than the rest.
It appears you are looping one index too long on headerLabels.
Try changing
i <= len(headerLabels) to i < len(headerLabels)
And
if i == len(headerLabels) to if i == len(headerLabels) - 1
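For reference, here is a sketch of the loop with both changes applied, using your variable names:
sliceOfHeaders := []*Headers{}
for i := 0; i < len(headerLabels); i++ {
    headerEntry := new(Headers)
    headerEntry.header = headerLabels[i]
    headerEntry.value = values[i]
    if i == len(headerLabels)-1 {
        // valueFromCalculation is one element shorter, so the last
        // entry (TOTAL) gets an empty string.
        headerEntry.valueFromCalculation = ""
    } else {
        headerEntry.valueFromCalculation = valueFromCalculation[i]
    }
    sliceOfHeaders = append(sliceOfHeaders, headerEntry)
}
A slightly more defensive guard would be if i >= len(valueFromCalculation), which stays correct even if the length difference between the slices changes.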
I have the top command results below from my RHEL 6 server, which is running PostgreSQL.
I see 35.8% idle in the Cpu(s) line, while the per-process CPU usage below shows 100%.
So how should I read the output below?
top - 03:06:30 up 97 days, 20:15, 3 users, load average: 10.85, 10.51, 10.13
Tasks: 738 total, 14 running, 724 sleeping, 0 stopped, 0 zombie
**Cpu(s): 53.3%us, 9.6%sy, 0.0%ni, 35.8%id, 0.6%wa, 0.0%hi, 0.7%si, 0.0%st**
Mem: 32077620k total, 24335372k used, 7742248k free, 19084k buffers
Swap: 81919992k total, 407968k used, 81512024k free, 18686780k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19171 enterpri 20 0 8590m 966m 951m R 100.0 3.1 6:24.51 edb-postgres
19588 enterpri 20 0 8590m 956m 941m R 100.0 3.1 1:20.51 edb-postgres
18494 enterpri 20 0 8590m 959m 944m R 99.8 3.1 18:18.75 edb-postgres
18683 enterpri 20 0 8588m 984m 975m R 99.8 3.1 6:22.80 edb-postgres
19158 enterpri 20 0 8592m 1.0g 1.0g R 99.8 3.3 5:40.16 edb-postgres
19167 enterpri 20 0 8589m 959m 945m R 99.8 3.1 7:48.53 edb-postgres
19590 enterpri 20 0 8586m 945m 933m R 99.8 3.0 2:51.32 edb-postgres
19591 enterpri 20 0 8588m 950m 936m R 99.8 3.0 3:07.77 edb-postgres
19592 enterpri 20 0 8589m 948m 935m R 99.8 3.0 2:52.66 edb-postgres
You have a lot of CPUs (how many?) on your system. Some of them are very busy running postgres, and some of them are not.
In your version of top, %CPU represents the percent of a single CPU, not the percent of the total system CPU. If you had a threaded application, one entry could show more than 100%, but PostgreSQL is not threaded within a single process.
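To make that concrete with a hypothetical number (the output doesn't show the CPU count): if this box had, say, 16 logical CPUs, the roughly 64% non-idle time in the Cpu(s) line (53.3%us + 9.6%sy + 0.7%si) would correspond to about 0.64 × 16 ≈ 10 busy CPUs, which matches nine edb-postgres processes each pegging one CPU at ~100% plus a little other activity. Pressing 1 in top toggles the per-CPU breakdown so you can see exactly how many CPUs there are and which ones are busy.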
I am new to Go and trying to figure out how it manages memory consumption.
I have trouble with memory in one of my test projects. I don't understand why Go uses more and more memory (never freeing it) when my program runs for a long time.
I am running the test case provided below. After the first allocation, the program uses nearly 350 MB of memory (according to ActivityMonitor). Then I try to free it, and ActivityMonitor shows that memory consumption doubles instead. Why?
I am running this code on OS X using Go 1.0.3.
What is wrong with this code? And what is the right way to manage large variables in Go programs?
I had another memory-management-related problem when implementing an algorithm that uses a lot of time and memory; after running it for some time it throws an "out of memory" exception.
package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("getting memory")
    tmp := make([]uint32, 100000000)
    for kk := range tmp {
        tmp[kk] = 0
    }
    time.Sleep(5 * time.Second)

    fmt.Println("returning memory")
    tmp = make([]uint32, 1)
    tmp = nil
    time.Sleep(5 * time.Second)

    fmt.Println("getting memory")
    tmp = make([]uint32, 100000000)
    for kk := range tmp {
        tmp[kk] = 0
    }
    time.Sleep(5 * time.Second)

    fmt.Println("returning memory")
    tmp = make([]uint32, 1)
    tmp = nil
    time.Sleep(5 * time.Second)
}
Currently, Go uses a mark-and-sweep garbage collector, which in general does not guarantee when an object is thrown away.
However, if you look closely, there is a goroutine called sysmon which essentially runs as long as your program does and calls the GC periodically:
// forcegcperiod is the maximum time in nanoseconds between garbage
// collections. If we go this long without a garbage collection, one
// is forced to run.
//
// This is a variable for testing purposes. It normally doesn't change.
var forcegcperiod int64 = 2 * 60 * 1e9
(...)
// If a heap span goes unused for 5 minutes after a garbage collection,
// we hand it back to the operating system.
scavengelimit := int64(5 * 60 * 1e9)
forcegcperiod determines the period after which the GC is forced to run. scavengelimit determines when spans are returned to the operating system. Spans are runs of memory pages which can hold several objects; they are kept for scavengelimit time and are freed only if no object lives on them once scavengelimit is exceeded.
Further down in the code you can see that there is a trace option. You can use it to see whenever the scavenger thinks it needs to clean up:
$ GOGCTRACE=1 go run gc.go
gc1(1): 0+0+0 ms 0 -> 0 MB 423 -> 350 (424-74) objects 0 handoff
gc2(1): 0+0+0 ms 1 -> 0 MB 2664 -> 1437 (2880-1443) objects 0 handoff
gc3(1): 0+0+0 ms 1 -> 0 MB 4117 -> 2213 (5712-3499) objects 0 handoff
gc4(1): 0+0+0 ms 2 -> 1 MB 3128 -> 2257 (6761-4504) objects 0 handoff
gc5(1): 0+0+0 ms 2 -> 0 MB 8892 -> 2531 (13734-11203) objects 0 handoff
gc6(1): 0+0+0 ms 1 -> 1 MB 8715 -> 2689 (20173-17484) objects 0 handoff
gc7(1): 0+0+0 ms 2 -> 1 MB 5231 -> 2406 (22878-20472) objects 0 handoff
gc1(1): 0+0+0 ms 0 -> 0 MB 172 -> 137 (173-36) objects 0 handoff
getting memory
gc2(1): 0+0+0 ms 381 -> 381 MB 203 -> 202 (248-46) objects 0 handoff
returning memory
getting memory
returning memory
As you can see, no GC run happens between getting and returning the memory. However, if you change the delay from 5 seconds to 3 minutes (more than the 2 minutes of forcegcperiod), the objects are removed by the GC:
returning memory
scvg0: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
scvg0: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
scvg1: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
scvg1: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
gc9(1): 1+0+0 ms 1 -> 1 MB 4485 -> 2562 (26531-23969) objects 0 handoff
gc10(1): 1+0+0 ms 1 -> 1 MB 2563 -> 2561 (26532-23971) objects 0 handoff
scvg2: GC forced // forcegc (2 minutes) exceeded
scvg2: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
gc3(1): 0+0+0 ms 381 -> 381 MB 206 -> 206 (252-46) objects 0 handoff
scvg2: GC forced
scvg2: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
getting memory
The memory is still not freed, but the GC has marked the memory region as unused. Freeing will begin once the span has been unused for longer than limit. From the scavenger code:
if(s->unusedsince != 0 && (now - s->unusedsince) > limit) {
    // ...
    runtime·SysUnused((void*)(s->start << PageShift), s->npages << PageShift);
}
This behavior may of course change over time, but I hope you now have a bit of a feel for when objects are forcibly collected and when they are not.
As pointed out by zupa, releasing objects may not return the memory to the operating system, so on certain systems you may not see a change in memory usage. This seems to be the case for Plan 9 and Windows, according to this thread on golang-nuts.
To eventually (force) collect unused memory you must call runtime.GC().
variable = nil may make things unreachable and thus eligible for collection, but it per se doesn't free anything.
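As a minimal sketch of that last point (assuming a Go release new enough to have runtime/debug's FreeOSMemory, which I believe arrived after the 1.0.x mentioned in the question), this is how you would drop the reference, force a collection, and ask the runtime to hand the freed spans back to the OS right away:
package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
    "time"
)

func main() {
    fmt.Println("getting memory")
    tmp := make([]uint32, 100000000)
    for kk := range tmp {
        tmp[kk] = 0
    }

    fmt.Println("returning memory")
    tmp = nil            // the slice is now unreachable...
    runtime.GC()         // ...collect it now instead of waiting up to forcegcperiod
    debug.FreeOSMemory() // and ask the runtime to return freed spans to the OS immediately

    time.Sleep(30 * time.Second) // time to watch resident memory in ActivityMonitor / top
}
Even then, what the OS reports as resident memory depends on how eagerly it reclaims pages the runtime has marked as unused, so the numbers in ActivityMonitor may lag behind what the GC trace shows.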