Tarantool WAL config

We are using Tarantool 1.5 and have a question about its config options. If we set the following options:
wal_fsync_delay=2
wal_mode=write
Does it mean fsync(2) will not be called at all?

It means Tarantool syncs every two seconds. I don't recommend this setup; we removed it in 1.6 because it leads to latency spikes, since Linux filesystems make fsync performance totally unpredictable.
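For reference, a minimal sketch of how that combination might look in the 1.5-style configuration file; the file name, quoting, and comment syntax here are assumptions and may differ in your install:
# tarantool.cfg (illustrative)
wal_mode = "write"        # write each request to the WAL but do not fsync per request
wal_fsync_delay = 2       # ask the WAL writer to fsync roughly every 2 seconds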

Related

Elasticsearch bulk update is extremely slow

I am indexing a large amount of daily data, ~160GB per index, into Elasticsearch. I am facing a case where I need to update almost all the docs in the indices with a small amount of data (~16GB) which is of the format
id1,data1
id1,data2
id2,data1
id2,data2
id2,data3
.
.
.
My update operations start at 16,000 lines per second, drop to 1,000 lines per second within about 5 minutes, and don't go back up after that. Updating this 16GB of data currently takes longer than the entire 160GB indexing run.
My conf file for the update operation currently looks as follows
output {
  elasticsearch {
    action => "update"
    doc_as_upsert => true
    hosts => ["host1","host2","host3","host4"]
    index => "logstash-2017-08-1"
    document_id => "%{uniqueid}"
    document_type => "daily"
    retry_on_conflict => 2
    flush_size => 1000
  }
}
The optimizations I have done to speed up indexing in my cluster, based on the suggestions at https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html, are:
setting "indices.store.throttle.type" : "none"
setting the index "refresh_interval" : "-1"
I am running my cluster on 4 d2.8xlarge EC2 instances. I have allocated 30GB of heap to each node.
While the update is happening, barely any CPU is used and the load is very low as well.
Despite everything, the update is extremely slow. Is there something obvious that I am missing that is causing this issue? Looking at the threadpool data, I find that the number of threads working on bulk operations is constantly high.
Any help on this issue would be really helpful
Thanks in advance
There are a couple of rule-outs to try here.
Memory Pressure
With 244GB of RAM, this is not terribly likely, but you can still check it out. Find the jstat command in the JDK for your platform, though there are visual tools for some of them. You want to check both your Logstash JVM and the ElasticSearch JVMs.
jstat -gcutil -h7 {PID of JVM} 2s
This will give you a readout of the various memory pools, garbage-collection counts, and GC timings for that JVM as it works. It updates every 2 seconds and reprints the headers every 7 lines. Spending excessive time in full GC (the FGC/FGCT columns) is a sign that your heap is under-allocated.
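On Linux, a quick way to attach jstat to the right processes might look like this (the process patterns are assumptions and depend on how Elasticsearch and Logstash are launched):
# Elasticsearch JVM
jstat -gcutil -h7 "$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -1)" 2s
# Logstash JVM
jstat -gcutil -h7 "$(pgrep -f logstash | head -1)" 2s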
I/O Pressure
The d2.8xlarge is a dense-storage instance, and may not be great for a highly random, small-block workload. If you're on a Unix platform, top will tell you how much time you're spending in IOWAIT state. If it's high, your storage isn't up to the workload you're sending it.
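Two quick checks on Linux (illustrative; iostat comes from the sysstat package):
top -b -n 1 | head -5    # the "wa" value in the CPU line is time spent in IOWAIT
iostat -x 2 5            # per-device await and %util over five 2-second samples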
If that's the case, you may want to consider provisioned-IOPS (io1) EBS volumes rather than the instance-local storage. Or, if your data will fit, consider an instance in the i3 family of high-I/O instances instead.
Logstash version
You don't say which version of Logstash you're using. This being StackOverflow, you're likely on 5.2. If that's the case, this isn't a rule-out.
But if you're using something in the 2.x series, you may want to set the -w flag to 1 at first and work your way up. Yes, that single-threads it. But the Elasticsearch output has some concurrency issues in the 2.x series that are largely fixed in the 5.x series.
With Elasticsearch 6.0 we had exactly the same issue of slow updates on AWS, and the culprit was slow I/O. The same data upserted completely fine on a local test stack, but once in the cloud on the EC2 stack, everything died after an initial burst of speedy inserts lasting only a few minutes.
The local test stack was a low-spec server in terms of memory and CPU, but it had SSDs.
The EC2 stack used EBS volumes with the default gp2 300 IOPS.
Converting the volumes to type io1 with 3000 IOPS solved the issue and everything got back on track.
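For what it's worth, on current-generation EBS the conversion can be done in place; a hedged sketch with a placeholder volume ID:
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type io1 --iops 3000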
I am using the Amazon AWS Elasticsearch service, version 6.0. I need to do heavy writes/inserts from a series of JSON files into Elasticsearch, about 10 billion items. The elasticsearch-py bulk write speed was really slow most of the time, with only occasional bursts of high-speed writes. I tried all kinds of methods, such as splitting the JSON files into smaller pieces, reading the JSON files with multiple processes, and parallel_bulk inserts into Elasticsearch; nothing worked. Finally, after I upgraded to an io1 EBS volume with 10,000 write IOPS, everything went smoothly.

How to enable GC logging for Hadoop MapReduce2 History Server, while preventing log file overwrites and capping disk space usage

We recently decided to enable GC logging for the Hadoop MapReduce2 History Server on a number of clusters (the exact version varies) as an aid to looking into history-server-related memory and garbage collection problems. While doing this, we want to avoid two problems we know might happen:
overwriting of the log file when the MR2 History server restarts for any reason
the logs using too much disk space, leading to disks getting filled
When Java GC logging starts for a process, it seems to replace the content of any existing file that has the same name. This means that unless you are careful, you will lose the GC logging, perhaps just when you are most likely to need it.
If you keep the cluster running long enough, log files will fill up the disk unless they are managed. Even if GC logging is not currently voluminous, we want to manage the risk of an unusual situation arising that causes the logging rate to suddenly spike.
You will need to set some JVM parameters when starting the MapReduce2 History Server, meaning you need to make some changes to mapred-env.sh. You could set the parameters in HADOOP_OPTS, but that would have a broader impact than just the History server, so instead you will probably want to set them in HADOOP_JOB_HISTORYSERVER_OPTS.
Now let's discuss the JVM parameters to include there.
To enable GC logging to a file, you will need to add -verbose:gc -Xloggc:<log-file-location>.
You need to give the log file name special consideration to prevent overwrites whenever the server is restarted. It seems like you need to have a unique name for every invocation so appending a timestamp seems like the best option. You can include something like `date +'%Y%m%d%H%M'` to add a timestamp. In this example, it is in the form of YYYYMMDDHHMM. In some versions of Java you can put "%t" in your log file location and it will be replaced by the server start up timestamp formatted as YYYY-MM-DD_HH-MM-SS.
Now on to managing the use of disk space. I'll be happy if there is a simpler way than what I have here.
First, take advantage of Java's built-in GC log file rotation. -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M is an example of enabling this rotation, having up to 10 GC log files from the JVM, each of which is no more than approx 10MB in size. 10 x 10MB is 100MB max usage.
With the GC log file rotation in place with up to 10 files, '.0', '.1', ... '.9' will be added to the file name you gave in Xloggc. .0 will be first and after it reaches .9 it will replace .0 and continue on in a round robin manner. In some versions of Java '.current' will be additionally put on the end of the name of the log file currently being written to.
Because of the unique file naming we apparently need in order to avoid overwrites, you can end up with 100MB per History Server invocation, so rotation alone is not a total solution for managing the disk space used by the server's GC logs. You will end up with a set of up to 10 GC log files for each server invocation, and this can add up over time. The best solution (under *nix) would seem to be to use the logrotate utility (or some other utility) to periodically clean up the GC logs that have not been modified in the last N days.
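A minimal cleanup sketch, assuming a log directory and retention period that you would adjust to your own layout and policy (suitable for cron or a logrotate script):
# Delete History Server GC logs that have not been modified in 30 days.
find /var/log/hadoop-mapreduce/mapred -name 'mapred-jobhistory-gc.log-*' -mtime +30 -delete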
Be sure to do the math and make sure you will have enough disk space.
People frequently want more details and context in their GC logs than the default, so consider adding in -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps.
Putting this together, you might add something like this to mapred-env:
## enable GC logging for MR2 History Server:
TIMESTAMP=`date +'%Y%m%d%H%M'`
# GC log location/name prior to .n addition by log rotation
JOB_HISTORYSERVER_GC_LOG_NAME="{{mapred_log_dir_prefix}}/$USER/mapred-jobhistory-gc.log-$TIMESTAMP"
JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS="-verbose:gc -Xloggc:$JOB_HISTORYSERVER_GC_LOG_NAME"
JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
JOB_HISTORYSERVER_GC_LOG_OPTS="$JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS $JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS $JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS"
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS $JOB_HISTORYSERVER_GC_LOG_OPTS"
You may find that you already have a reference to HADOOP_JOB_HISTORYSERVER_OPTS so you should replace or add onto that.
In the above, you can change {{mapred_log_dir_prefix}}/$USER to wherever you want the GC logs to go (you probably want them to go to the same place as the MapReduce History Server logs). You can change the log file naming too.
If you are managing your Hadoop cluster with Apache Ambari, then these changes would be in MapReduce2 service > Configs > Advanced > Advanced mapred-env > mapred-env template. With Ambari, {{mapred_log_dir_prefix}} will be automatically replaced with the Mapreduce Log Dir Prefix defined a few rows above the field.
GC logging will start happening when the server is restarted, so you may need to schedule a short outage to enable this.
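After the restart you can sanity-check that the options took effect and that a log file is being written; a hedged example, with a placeholder log path:
ps -ef | grep -v grep | grep JobHistoryServer | grep -o -- '-Xloggc:[^ ]*'
ls -l /var/log/hadoop-mapreduce/mapred/mapred-jobhistory-gc.log-*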

Hive memory setting for local task during map join

I'm using an HDInsight cluster (Hive version 0.13) to run some Hive queries. One of the queries (query 7 from the TPC-H suite), which launches a local task during a map join, fails due to insufficient memory (Hive aborts it because the hash table has reached the configured limit).
Hive seems to be allocating 1GB to the local task; where is this size picked up from, and how can I increase it?
2015-05-03 05:38:19 Starting to launch local task to process map join; maximum memory = 932184064
I assumed the local task would use the same heap size as the mapper, but that does not seem to be the case. Any help is appreciated.
Quite late to this thread, but just for others who face the same issue:
Although the documentation states that the local (child) JVM will have the same size as the mappers (https://cwiki.apache.org/confluence/display/Hive/MapJoinOptimization), that does not seem to be the case. Instead, the JVM size is governed by the HADOOP_HEAPSIZE setting from hive-env.sh. So, in the case of the original post from Shradha, I suspect HADOOP_HEAPSIZE is set to 1GB.
This property controls it:
yarn.app.mapreduce.am.command-opts
These are the Application Master JVM opts, since the local task runs on the AM.
Can you also try this property:
set hive.mapjoin.localtask.max.memory.usage = 0.999;
You can use HADOOP_HEAPSIZE=512 or HADOOP_CLIENT_OPTS=-Xmx512m, both of which can be tweaked in hadoop-env.sh.
Note however that this might lead to unexpected behaviors for some jobs and you will probably have to play with
mapreduce.map.memory.mb and mapreduce.map.java.opts
as well as
mapreduce.reduce.memory.mb and mapreduce.reduce.java.opts in the mapred-site config file in order to make sure that jobs remain stable.
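As a concrete starting point, the client-side part might look like this in hadoop-env.sh (the values are illustrative, not a recommendation):
export HADOOP_HEAPSIZE=512                                # MB; caps the heap of the local-task JVM
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"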

WAL files in HBase

In HBase, data is first written into the WAL before being written into the memstore. But when I checked on my system, the WAL files are not getting updated immediately after each Put operation; it takes a lot of time for them to update. Is there any parameter I need to set? (WAL has been enabled.)
Do you know how long it takes for the WAL files to be updated? Are you sure the time is spent in the write, or has the WAL already been moved to the old logs by the time you check it? If the WAL is enabled, all entries must go to the WAL first and are then written to the appropriate region as the cluster is configured.
I know that WAL files are moved to .oldlogs fairly quickly, i.e. after 60 seconds, as defined in hbase-site.xml through the hbase.master.logcleaner.ttl setting.
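One way to see what is happening is to list the WAL directories and watch where the files go; the paths below are a sketch and vary by HBase version and deployment (standalone mode keeps them under the local hbase.rootdir instead of HDFS):
hdfs dfs -ls -R /hbase/WALs     # current per-regionserver WALs (older releases: /hbase/.logs)
hdfs dfs -ls /hbase/oldWALs     # rolled WALs awaiting the log cleaner (older releases: /hbase/.oldlogs)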
In standalone mode, writing into the WAL takes a lot of time, whereas in pseudo-distributed mode it works fine.

I/O performance of multiple JVM (Windows 7 affected, Linux works)

I have a program that creates a file of about 50MB in size. During the process, the program frequently rewrites sections of the file and forces the changes to disk (on the order of 100 times). It uses a FileChannel and direct ByteBuffers via fc.read(...), fc.write(...) and fc.force(...).
New text:
I have a better view on the problem now.
The problem appears to be that I use three different JVMs to modify a file (one creates it, two others (launched from the first) write to it). Every JVM closes the file properly before the next JVM is started.
The problem is that the cost of fc.write() to that file occasionally goes through the roof for the third JVM (on the order of 100 times the normal cost). That is, all write operations are equally slow; it is not just one that hangs for a very long time.
Interestingly, one way to help this is to insert delays (2 seconds) between the launching of the JVMs. Without a delay, writing is always slow; with the delay, the writing is slow only about every second time or so.
I also found this Stackoverflow: How to unmap a file from memory mapped using FileChannel in java? which describes a problem for mapped files, which I'm not using.
What I suspect might be going on:
Java does not completely release the file handle when I call close(). When the next JVM is started, Java (or Windows) recognizes concurrent access to that file and installs some expensive concurrency handler for it, which makes writing expensive.
Would that make sense?
The problem occurs on Windows 7 (Java 6 and 7, tested on two machines), but not under Linux (SuSE 11.3 64).
Old text:
The problem:
Starting the program as a JUnit test harness from Eclipse or from the console works fine; it takes around 3 seconds.
Starting the program through an ant task (or through JUnit by kicking off a separate JVM using a ProcessBuilder) slows the program down to 70-80 seconds for the same task (a factor of 20-30).
Using -Xprof reveals that the usage of 'force0' and 'pwrite' goes through the roof from 34.1% (76+20 tics) to 97.3% (3587+2913+751 tics):
Fast run:
27.0% 0 + 76 sun.nio.ch.FileChannelImpl.force0
7.1% 0 + 20 sun.nio.ch.FileDispatcher.pwrite0
[..]
Slow run:
Interpreted + native Method
48.1% 0 + 3587 sun.nio.ch.FileDispatcher.pwrite0
39.1% 0 + 2913 sun.nio.ch.FileChannelImpl.force0
[..]
Stub + native Method
10.1% 0 + 751 sun.nio.ch.FileDispatcher.pwrite0
[..]
GC and compilation are negligible.
More facts:
No other methods show a significant change in the -Xprof output.
It's either fast or very slow, never something in-between.
Memory is not a problem, all test machines have at least 8GB, the process uses <200MB
rebooting the machine does not help
switching off virus scanners and similar stuff has no effect
When the process is slow, there is virtually no CPU usage
It is never slow when running it from a normal JVM
It is pretty consistently slow when running it in a JVM that was started from the first JVM (via ProcessBuilder or as ant-task)
All JVMs are exactly the same. I output System.getProperty("java.home") and the JVM options via RuntimeMXBean RuntimemxBean = ManagementFactory.getRuntimeMXBean(); List arguments = RuntimemxBean.getInputArguments();
I tested it on two machines with Windows 7 64-bit, Java 7u2, Java 6u26 and JRockit. The hardware of the two machines differs, but the results are very similar.
I tested it also from outside Eclipse (command-line ant) but no difference there.
The whole program is written by myself; all it does is read from and write to this file, and no other libraries are used, especially no native libraries.
And some scary facts that I just refuse to believe to make any sense:
Removing all class files and rebuilding the project sometimes (rarely) helps. The program (nested version) runs fast one or two times before becoming extremely slow again.
Installing a new JVM always helps (every single time!) such that the (nested) program runs fast at least once! Installing a JDK counts as two because both the JDK-jre and the JRE-jre work fine at least once. Overinstalling a JVM does not help. Neither does rebooting. I haven't tried deleting/rebooting/reinstalling yet ...
These are the only two ways I ever managed to get fast program runtimes for the nested program.
Questions:
What may cause this performance drop for nested JVMs?
What exactly do these methods (pwrite0/force0) do?
Are you using local disks for all testing (as opposed to any network share)?
Can you set up Windows with a RAM drive to store the data? When a JVM terminates, its file handles will have been closed by default, but what you might be seeing is the flushing of the data to disk. When you overwrite lots of data, the previous version of the data is discarded and may not cause disk IO. The act of closing the file might make the Windows kernel implicitly flush data to disk. So using a RAM drive would allow you to confirm that theory, since the disk IO time is removed from your stats.
Find a tool for Windows that allows you to force the kernel to flush all buffers to disk, use it in between JVM runs, and see how long that takes.
But I would guess you are hitting some interaction between the demands of the process and the demands of the kernel in attempting to manage the disk block buffer cache. On Linux there is a tool like "/sbin/blockdev --flushbufs" that can do this.
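On Linux the flush between runs could be done roughly like this (run as root; the device name is a placeholder); on Windows, a tool such as Sysinternals Sync plays a similar role:
sync                                # flush dirty pages to disk
blockdev --flushbufs /dev/sda       # flush buffers for one block device
echo 3 > /proc/sys/vm/drop_caches   # additionally drop the page/dentry/inode caches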
FWIW
"pwrite" is a Linux/Unix API for allowing concurrent writing to a file descriptor (which would be the best kernel syscall API to use for the JVM, I think Win32 API already has provision for the same kinds of usage to share a file handle between threads in a process, but since Sun have Unix heritige things get named after the Unix way). Google "pwrite(2)" for more info on this API.
"force" I would guess that is a file system sync, meaning the process is requesting the kernel to flush unwritten data (that is currently in disk block buffer cache) into the file on the disk (such as would be needed before you turned your computer off). This action will happen automatically over time, but transactional systems require to know when the data previously written (with pwrite) has actually hit the physical disk and is stored. Because some other disk IO is dependant on knowing that, such as with transactional checkpointing.
One thing that could help is making sure you explicitly set the FileChannel to null. Then call System.runFinalization() and maybe System.gc() at the end of the program. You may need more than 1 call.
System.runFinalizersOnExit(true) may also help, but it's deprecated so you will have to deal with the compiler warnings.
