FileChannel.lock running for a long time - chronicle

So I have some simple code where I listen to JMS messages and write each one to a different Chronicle queue, depending on the FIX tag:
public void onEvent(InputEvent inputEvent) {
    String msg = ((SimpleInputEvent) inputEvent).getMessage();
    int start = msg.indexOf("\u000155=");
    if (start == -1) {
        // dropping it
        return;
    }
    char symbol = msg.charAt(start + 4);
    for (int i = m_ranges.length - 1; i >= 0; i--) {
        if (symbol >= m_ranges[i]) {
            m_appenders[i].writeText(msg);
            break;
        }
    }
}
Now I'm running a performance test and I see this kind of profile (profiler screenshot not included here): the main thread is running the above function, and we can see FileChannel.lock running for 30 seconds straight! I'm not sure what it's doing. I created the queue like this:
m_queues[j] = SingleChronicleQueueBuilder.binary(path + "_" + m_ranges[j]).build();
m_appenders[j] = m_queues[j].acquireAppender();
Thanks!
After reading more, I increased the blockSize to 512 MB, which is larger than I will ever write to the cq4 file in this test, so that code path should never kick in. However, I still hit a bottleneck in my performance test. If I turn on sar, I see:
05:05:41 AM tps rtps wtps bread/s bwrtn/s
....
05:10:12 AM 714.14 0.00 714.14 0.00 6901.01
05:05:41 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
...
05:10:12 AM rootvg-lvar 422.00 0.00 3376.00 8.00 0.10 0.23 0.00 0.20
What should I do to avoid a big chunk of writes to disk?
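For reference, the builder configured with the larger block size looks roughly like this (a sketch assuming Chronicle Queue's blockSize builder option, not the exact code from my test):

for (int j = 0; j < m_ranges.length; j++) {
    m_queues[j] = SingleChronicleQueueBuilder
            .binary(path + "_" + m_ranges[j])
            .blockSize(512 << 20)   // 512 MB, larger than anything this test will write
            .build();
    m_appenders[j] = m_queues[j].acquireAppender();
}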

Assuming you're running a recent Linux, what you are seeing in SAR is the kernel writing pages from the page cache to your disk.
Chronicle Queue uses memory-mapped files rather than making any 'direct' writes to disk, so the operating system is in control of when data from a queue file is actually persisted to storage.
If you wish to reduce the size of those writes, you can try to configure the kernel to flush pages at a shorter interval.

Memory loads experience different latency on the same core

I am trying to implement a cache based covert channel in C but noticed something weird. The physical address between the sender and the receiver is shared by using the mmap() call that maps to the same file with the MAP_SHARED option. Below is the code for the sender process which flushes an address from the cache to transmit a 1 and loads an address into the cache to transmit a 0. It also measures the latency of a load in both cases:
/* CYCLES, rdtscp(), clflush(), init_address() and the globals load, x,
   load__latency, bit and address are defined elsewhere in the program. */

// computes latency of a load operation
static inline CYCLES load_latency(volatile void* p) {
    CYCLES t1 = rdtscp();
    load = *((int *)p);
    CYCLES t2 = rdtscp();
    return (t2 - t1);
}

void send_bit(int one, void *addr) {
    if (one) {
        clflush((void *)addr);
        load__latency = load_latency((void *)addr);
        printf("load latency = %d.\n", load__latency);
        clflush((void *)addr);
    }
    else {
        x = *((int *)addr);
        load__latency = load_latency((void *)addr);
        printf("load latency = %d.\n", load__latency);
    }
}

int main(int argc, char **argv) {
    if (argc == 2) {
        bit = atoi(argv[1]);
    }
    // transmit bit
    init_address(DEFAULT_FILE_NAME);
    send_bit(bit, address);
    return 0;
}
int main(int argc, char **argv) {
if(argc == 2)
{
bit = atoi(argv[1]);
}
// transmit bit
init_address(DEFAULT_FILE_NAME);
send_bit(bit, address);
return 0;
}
The load operation takes around 0 - 1000 cycles (during a cache-hit and a cache-miss) when issued by the same process.
The receiver program loads the same shared physical address and measures the latency during a cache-hit or a cache-miss, the code for which has been shown below:
int main(int argc, char **argv) {
    init_address(DEFAULT_FILE_NAME);
    rdtscp();
    load__latency = load_latency((void *)address);
    printf("load latency = %d\n", load__latency);
    return 0;
}
(I ran the receiver manually after the sender process terminated)
However, the latency observed in this scenario is very different from the first case. The load operation takes around 5000-10000 cycles.
Both processes have been pinned to the same core ID using the taskset command. So if I'm not wrong, both processes should experience the load latency of the L1 cache on a cache hit and of DRAM on a cache miss. Yet these two processes experience very different latencies. What could be the reason for this observation, and how can I make both processes experience the same latency?
The initial access to an mmaped region will page-fault (lazy mapping/allocation by the kernel), unless you use mmap(MAP_POPULATE), or mlock, or touch some other cache line of the page first.
You're probably timing a page fault if you only do one time measurement per mmap, or per run of a whole program.
(Also, you don't seem to be doing anything to warm up the CPU frequency, so one core cycle could be many reference cycles. Some of the time for an L3 miss is fixed in terms of memory clock cycles, but another part of it scales with the core/uncore clock.)
Also note that unless you run the 2nd process immediately (e.g. from the same shell command), the OS will get a chance to put that core into a deep sleep. On Intel CPUs at least, that empties L1d and L2 so it can power them down in the deeper C states. Probably also the TLBs.
It's also strange that you cast away volatile in load = *((int *)p);
Assigning the load result to a global(?) variable inside the timed region is also pretty questionable; that could also soft page fault. If so, RDTSCP will have to wait for it, because the store can't retire.
(But on a TLB hit for the store, it doesn't have to wait for the data to commit to cache, since there's no mfence before rdtscp. A store can retire while the data is still in the store buffer. In fact, it must retire before the store-buffer entry is known to be non-speculative and can commit.)

Backpressure with Reactor's ParallelFlux + Timeouts

I'm currently working on using parallelism in a Flux. Right now I'm having problems with backpressure. In our case we have a fast-producing service we want to consume, but we are much slower.
With a normal Flux this works so far, but we want parallelism. What I see when I use the approach with
.parallel(2)
.runOn(Schedulers.parallel())
is that there is a big request at the beginning, which takes quite a long time to process. A different problem also occurs here: if we take too long to process, we somehow seem to generate a cancel event in the producer service (we consume it via a WebFlux REST call), but no cancel event is seen in the consumer.
But back to problem 1: how is it possible to bring this back in sync with the producer? I know of the prefetch parameter on the .parallel() method, but it does not work as I expect.
A minimal example would be something like this:
fun main() {
    val atomicInteger = AtomicInteger(0)
    val receivedCount = AtomicInteger(0)
    val processedCount = AtomicInteger(0)
    Flux.generate<Int> {
        it.next(atomicInteger.getAndIncrement())
        println("Emitted ${atomicInteger.get()}")
    }.doOnEach { it.get()?.let { receivedCount.addAndGet(1) } }
        .parallel(2, 1)
        .runOn(Schedulers.parallel())
        .flatMap {
            Thread.sleep(200)
            log("Work on $it")
            processedCount.addAndGet(1)
            Mono.just(it * 2)
        }.subscribe {
            log("Received ${receivedCount.get()} and processed ${processedCount.get()}")
        }
    Thread.sleep(25000)
}
where I can observe logs like this
...
Emitted 509
Emitted 510
Emitted 511
Emitted 512
Emitted 513
2022-02-02T14:12:58.164465Z - Thread[parallel-1,5,main] Work on 0
2022-02-02T14:12:58.168469Z - Thread[parallel-2,5,main] Work on 1
2022-02-02T14:12:58.241966Z - Thread[parallel-1,5,main] Received 513 and processed 2
2022-02-02T14:12:58.241980Z - Thread[parallel-2,5,main] Received 513 and processed 2
2022-02-02T14:12:58.442218Z - Thread[parallel-2,5,main] Work on 3
2022-02-02T14:12:58.442215Z - Thread[parallel-1,5,main] Work on 2
2022-02-02T14:12:58.442315Z - Thread[parallel-2,5,main] Received 513 and processed 3
2022-02-02T14:12:58.442338Z - Thread[parallel-1,5,main] Received 513 and processed 4
So how could I adjust this so that I can use the parallelism but stay in backpressure/sync with my producer? The only way I got it to work is with a semaphore acquired before the parallel Flux and released after the work, but that is not really a nice solution.
OK, for this scenario it turned out to be crucial that the prefetch of parallel and runOn be set very low, here to 1.
With the default of 256, we requested too much from our producer, so a cancel event occurred because of the long gap between the first block of requests (filling the prefetch) and the next one, when the Flux decided to fill the buffer again.
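For anyone else hitting this, here is a minimal Java sketch (assuming Reactor 3.x; the numbers are only illustrative) of the low-prefetch variant: passing a prefetch of 1 to both parallel() and runOn() makes each rail request only one element ahead of what it is processing, instead of the default 256.

import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class LowPrefetchExample {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();

        Flux.<Integer>generate(sink -> sink.next(counter.getAndIncrement()))
            .parallel(2, 1)                        // parallelism = 2, prefetch = 1
            .runOn(Schedulers.parallel(), 1)       // prefetch = 1 here too, no 256-element buffer per rail
            .flatMap(i -> Mono.just(i * 2)
                    .delayElement(Duration.ofMillis(200)))  // simulate slow work
            .subscribe(i -> System.out.println("Processed " + i));

        Thread.sleep(5_000);
    }
}

With both prefetch values at 1, the upstream sees steady one-at-a-time requests from each rail rather than an initial burst, which keeps the fast producer in step with the slow consumer.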

Kafka Stream BoundedMemoryRocksDBConfig

I'm trying to understand how the internals of Kafka Streams work with respect to the cache and RocksDB (the state store).
KTable<Windowed<EligibilityKey>, String> kTable = kStreamMapValues
        .groupByKey(Grouped.with(keySpecificAvroSerde, Serdes.String()))
        .windowedBy(timeWindows)
        .reduce((a, b) -> b, materialized.withLoggingDisabled().withRetention(Duration.ofSeconds(retention)))
        .suppress(Suppressed.untilTimeLimit(Duration.ofSeconds(timeToWaitForMoreEvents),
                Suppressed.BufferConfig.unbounded().withLoggingDisabled()));
With the above portion of my topology, I'm consuming from a Kafka topic with 300 partitions. The application is deployed on OpenShift with a memory allocation of 4GB. I noticed the memory of the application constantly increasing until eventually an OOMKILLED occurs. After some research I've read that a custom RocksDB config was something I should implement, because the default size is too big for my application. Records first enter a cache (configured by CACHE_MAX_BYTES_BUFFERING_CONFIG and COMMIT_INTERVAL_MS_CONFIG) and then enter a state store.
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(1 * 1024 * 1024L, -1, false, 0);
    private static org.rocksdb.WriteBufferManager writeBufferManager = new org.rocksdb.WriteBufferManager(1 * 1024 * 1024L, cache);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();

        // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)
        tableConfig.setBlockCache(cache);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setWriteBufferManager(writeBufferManager);

        // These options are recommended to be set when bounding the total memory
        tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);
        tableConfig.setPinTopLevelIndexAndFilter(true);
        tableConfig.setBlockSize(2048L);
        options.setMaxWriteBufferNumber(2);
        options.setWriteBufferSize(1 * 1024 * 1024L);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Cache and WriteBufferManager should not be closed here, as the same objects are shared by every store instance.
    }
}
For each time-windowed store, three segments are created by default. Since I'm consuming from 300 partitions, and 3 time-window segments will be created for each, 900 instances of RocksDB are created. Is my understanding correct that the following is true?
Memory allocated in OpenShift / RocksDB instances => 4096MB / 900 => 4.55 MB
(WriteBufferSize * MaxWriteBufferNumber) + BlockCache + WriteBufferManager => (1MB * 2) + 1MB + 1MB => 4MB
Is the BoundedMemoryRocksDBConfig.java for each instance of RocksDB, or for all?
If you consume from a topic with 300 partitions and you use a segmented state store, i.e., you use a time window in the DSL, you will end up with 900 RocksDB instances. If you only use one Kafka Streams client, i.e., you do not scale out, all 900 RocksDB instances will end up on the same computing node.
The BoundedMemoryRocksDBConfig limits the memory RocksDB uses per Kafka Streams client. That means, if you only use one Kafka Streams client, BoundedMemoryRocksDBConfig limits the memory for all 900 instances.
Is my understanding correct that the following is true?
Memory allocated in OpenShift / RocksDB instances => 4096MB / 900 => 4.55 MB
(WriteBufferSize * MaxWriteBufferNumber) + BlockCache + WriteBufferManager => (1MB * 2) + 1MB + 1MB => 4MB
No, that is not correct.
If you pass the Cache to WriteBufferManager, the size needed for memtables are also counted against the cache (see footnote 1 in the docs of the BoundedMemoryRocksDBConfig and the RocksDB docs). So, the size that you pass to the cache is the limit for memtables and block cache. Since you pass the cache and the write buffer manager to all of your instances on the same computing node, all 900 instances are bounded by the size you pass to the cache. For example, if you specify a size of 4 GB the total memory used by all 900 instances (assuming one Kafka Streams client) is bounded to 4 GB.
Be aware that the size passed to the cache is not a strict limit. Although the boolean parameter in the constructor of the cache gives you the option to enforce a strict limit, the enforcement does not work if the write buffer memory is also counted against the cache due to a bug in the RocksDB version Kafka Streams uses.
With Kafka Streams 2.7.0, you will be able to monitor RocksDB memory consumption with metrics that are exposed via JMX. Have a look at KIP-607 for more details.
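For completeness, here is a minimal sketch (not from the question; the application id and bootstrap servers are placeholders) of how such a RocksDBConfigSetter is usually registered with Kafka Streams, together with the record-cache settings mentioned in the question:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsPropertiesExample {
    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eligibility-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        // Register the custom RocksDB config setter; the static Cache and WriteBufferManager
        // inside BoundedMemoryRocksDBConfig are shared by all RocksDB instances it configures.
        props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedMemoryRocksDBConfig.class);
        // The record cache in front of the state stores is bounded separately.
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30_000);
        return props;
    }
}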

How can I measure how long this Linux interrupt handler takes to run?

I am trying to debug a custom Linux serial driver that is missing some receive data. It has one interrupt for 4 serial ports, and the baud rate is 115200. First, I would like to see how to measure how long the interrupt handler takes; I have used perf, but it only reports percentages, not seconds. Second, does anyone see any issues with the code below that could be improved to speed things up?
void serial_interrupt(int irq, void *dev_id)
{
    ...
    // Need to loop through each port to see which port caused the interrupt.
    list_for_each(lpNode, &serial_ports)
    {
        struct serial_port_module *ser_dev = list_entry(lpNode, struct serial_port_module, port_list);
        lnIsr = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_ISR);
        if (lnIsr & IPM512_RX_INT)
        {
            while (serialdata_is_data_available(ser_dev)) // equals a ioread8()
            {
                lcIn = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_RBR);
                kfifo_in(&ser_dev->rx_fifo, &lcIn, sizeof(lcIn));
                // Notify if anyone is doing a blocking read.
                wake_up_interruptible(&ser_dev->read_queue);
            }
        }
    }
}
Use the ftrace API to try to track down your latency issues. It's worth the time to get to know: https://www.kernel.org/doc/Documentation/trace/ftrace.txt
If this is too heavy-weight, what about adding some simple instrumentation yourself? getnstimeofday(struct timespec *ts) is relatively lightweight... with a little code you could output in a sysfs debug file the worst case execution times, some stats on latency of call to this function, worst-case number of bytes available per interrupt... if this number gets near your hardware FIFO size, you're in trouble.
One optimization would be to read the data in batches into a buffer, as long as data is available, then input the entire buffer, then wake up any readers.
u8 buf[64];   /* batch buffer; size is illustrative */
int cnt = 0;

while (serialdata_is_data_available(ser_dev) && cnt < sizeof(buf))
{
    buf[cnt++] = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_RBR);
}
kfifo_in(&ser_dev->rx_fifo, buf, cnt);
wake_up_interruptible(&ser_dev->read_queue);
But execution time of code this simple is not likely to be an issue. You're probably suffering from missed interrupts or unexpected latency of the interrupt handling.

What does virtual core in YARN vcore mean?

YARN uses the concept of a virtual core (vcore) to manage CPU resources. What is the benefit of using virtual cores; is there some reason why YARN uses vcores?
Here is what the documentation states (emphasis mine)
A node's capacity should be configured with virtual cores equal to its number of physical cores. A container should be requested with the number of cores it can saturate, i.e. the average number of threads it expects to have runnable at a time.
Unless the CPU core is hyper-threaded, it can run only one thread at a time (with hyper-threading the OS actually sees two logical cores per physical core and can run two threads, though that is a bit of a cheat and nowhere near as efficient as having a second physical core). Essentially, what this means for the end user is that a core can run a single thread, so if you want parallelism using Java threads, a reasonably good approximation is a number of threads equal to the number of cores. So if your container process (which is a JVM) will require 2 threads, it is better to map it to 2 vcores; that is what the last line of the quote means. And, as the total capacity of the node, the number of vcores should be equal to the number of physical cores.
The most important thing to remember is that it is still the OS which schedules the threads onto different cores, as happens with any other application; YARN itself has no control over that, beyond providing the best possible approximation of how many threads to allocate for each container. That is why it is important to take into consideration other applications running on the OS, CPU cycles used by the kernel, etc., as not all cores will be available to the YARN application all the time.
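To make that concrete, here is a small sketch (using the YARN client API; the memory figure is just a placeholder) of requesting a container whose vcores match the two runnable threads the process expects:

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class VcoreRequestExample {
    public static AMRMClient.ContainerRequest twoThreadContainer() {
        // 2 vcores ~= the average number of threads the container expects to have runnable
        Resource capability = Resource.newInstance(2048 /* MB, placeholder */, 2);
        return new AMRMClient.ContainerRequest(capability, null, null, Priority.newInstance(0));
    }
}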
EDIT: Further research
YARN does not enforce hard limits on CPU, but going through the code I can see how it tries to influence CPU scheduling or the CPU rate. Technically, YARN can launch different container processes: Java, Python, custom shell commands, etc. The responsibility for launching containers in YARN belongs to the ContainerExecutor component of the NodeManager, and I can see code for launching the container along with some platform-dependent hints. For example, in the case of DefaultContainerExecutor (which extends ContainerExecutor), on Windows it uses the "-c" parameter for CPU restriction and on Linux it uses process niceness to influence it. There is another implementation, LinuxContainerExecutor (or better still CgroupsLCEResourcesHandler, as the former does not force the usage of cgroups), which tries to use Linux cgroups to limit the YARN CPU resources on that node. More details can be found here.
ContainerExecutor {
    .......
    .......
    protected String[] getRunCommand(String command, String groupId,
            String userName, Path pidFile, Configuration conf, Resource resource) {
        boolean containerSchedPriorityIsSet = false;
        int containerSchedPriorityAdjustment =
            YarnConfiguration.DEFAULT_NM_CONTAINER_EXECUTOR_SCHED_PRIORITY;
        if (conf.get(YarnConfiguration.NM_CONTAINER_EXECUTOR_SCHED_PRIORITY) != null) {
            containerSchedPriorityIsSet = true;
            containerSchedPriorityAdjustment = conf
                .getInt(YarnConfiguration.NM_CONTAINER_EXECUTOR_SCHED_PRIORITY,
                    YarnConfiguration.DEFAULT_NM_CONTAINER_EXECUTOR_SCHED_PRIORITY);
        }
        if (Shell.WINDOWS) {
            int cpuRate = -1;
            int memory = -1;
            if (resource != null) {
                if (conf.getBoolean(
                        YarnConfiguration.NM_WINDOWS_CONTAINER_MEMORY_LIMIT_ENABLED,
                        YarnConfiguration.DEFAULT_NM_WINDOWS_CONTAINER_MEMORY_LIMIT_ENABLED)) {
                    memory = resource.getMemory();
                }
                if (conf.getBoolean(
                        YarnConfiguration.NM_WINDOWS_CONTAINER_CPU_LIMIT_ENABLED,
                        YarnConfiguration.DEFAULT_NM_WINDOWS_CONTAINER_CPU_LIMIT_ENABLED)) {
                    int containerVCores = resource.getVirtualCores();
                    int nodeVCores = conf.getInt(YarnConfiguration.NM_VCORES,
                        YarnConfiguration.DEFAULT_NM_VCORES);
                    // cap overall usage to the number of cores allocated to YARN
                    int nodeCpuPercentage = Math.min(
                        conf.getInt(
                            YarnConfiguration.NM_RESOURCE_PERCENTAGE_PHYSICAL_CPU_LIMIT,
                            YarnConfiguration.DEFAULT_NM_RESOURCE_PERCENTAGE_PHYSICAL_CPU_LIMIT),
                        100);
                    nodeCpuPercentage = Math.max(0, nodeCpuPercentage);
                    if (nodeCpuPercentage == 0) {
                        String message = "Illegal value for "
                            + YarnConfiguration.NM_RESOURCE_PERCENTAGE_PHYSICAL_CPU_LIMIT
                            + ". Value cannot be less than or equal to 0.";
                        throw new IllegalArgumentException(message);
                    }
                    float yarnVCores = (nodeCpuPercentage * nodeVCores) / 100.0f;
                    // CPU should be set to a percentage * 100, e.g. 20% cpu rate limit
                    // should be set as 20 * 100. The following setting is equal to:
                    // 100 * (100 * (vcores / Total # of cores allocated to YARN))
                    cpuRate = Math.min(10000,
                        (int) ((containerVCores * 10000) / yarnVCores));
                }
            }
            return new String[] { Shell.WINUTILS, "task", "create", "-m",
                String.valueOf(memory), "-c", String.valueOf(cpuRate), groupId,
                "cmd /c " + command };
        } else {
            List<String> retCommand = new ArrayList<String>();
            if (containerSchedPriorityIsSet) {
                retCommand.addAll(Arrays.asList("nice", "-n",
                    Integer.toString(containerSchedPriorityAdjustment)));
            }
            retCommand.addAll(Arrays.asList("bash", command));
            return retCommand.toArray(new String[retCommand.size()]);
        }
    }
}
For Windows (it utilizes winutils.exe), it uses a CPU rate.
For Linux, it uses niceness as a parameter to control the CPU priority.
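To make the Windows branch concrete, here is a small sketch of the cpuRate arithmetic from getRunCommand() above, with made-up node and container numbers:

public class CpuRateExample {
    public static void main(String[] args) {
        int nodeVCores = 8;           // yarn.nodemanager.resource.cpu-vcores (example value)
        int containerVCores = 2;      // vcores requested by the container (example value)
        int nodeCpuPercentage = 100;  // percentage of physical CPU given to YARN (example value)

        float yarnVCores = (nodeCpuPercentage * nodeVCores) / 100.0f;
        // cpuRate is "percentage * 100", capped at 10000 (i.e. 100% of the node)
        int cpuRate = Math.min(10000, (int) ((containerVCores * 10000) / yarnVCores));

        System.out.println(cpuRate);  // 2500, i.e. the container is limited to 25% of the node's CPU
    }
}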
"Virtual cores" are merely an abstraction of actual cores. This abstraction or "lie" (as i like to call it), allows YARN (and others) to dynamically spin threads (parallel process) based on availability. Take for example running map reduce on an "elastic" cluster with a processing limit constrained only by your wallet... The cloud baby... The. Cloud.
you can read more here
