How to configure Windows for a large number of concurrent connections

I have seen many tutorials on how to tune Linux to scale a Node.js or Erlang server to 600K+ concurrent connections.
But I have not found a similar tutorial for Windows. Can someone help with the equivalent knobs that exist on Windows? For reference, here is the Linux tuning I am familiar with:
/etc/security/limits.d/custom.conf
root soft nofile 1000000
root hard nofile 1000000
* soft nofile 1000000
* hard nofile 1000000
/etc/sysctl.conf
fs.file-max = 1000000
fs.nr_open = 1000000
net.ipv4.netfilter.ip_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
fs.file-max
The maximum number of file handles the kernel will allocate system-wide.
fs.nr_open
The maximum number of file handles a single process can open.
net.ipv4.netfilter.ip_conntrack_max
How many connections the connection-tracking (NAT) table can keep track of. Default is 65536.
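These can be applied to a running kernel without a reboot using standard sysctl commands (not part of the original question, just the usual way to load the file above):

# reload the settings from /etc/sysctl.conf
sysctl -p
# or set a single knob on the fly
sysctl -w fs.nr_open=1000000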

// Increase the total number of TCP connections allowed
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
TcpNumConnections = 16777214

// Increase MaxFreeTcbs to more than the number of concurrent connections
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxFreeTcbs = 2000 (Default = RAM dependent, but usually Pro = 1000, Srv = 2000)

// Increase the size of the hash table used for TCP control block (TCB) lookups
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxHashTableSize = 512 (Default = 512, Range = 64-65536)

// Shorten how long closed sockets linger in TIME_WAIT so ports are recycled sooner
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
TcpTimedWaitDelay = 120 (Default = 240 secs, Range = 30-300)
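One related value that often matters at high connection counts is the ephemeral port range. This entry is my addition rather than part of the list above, but the value name, default, and range are the standard documented ones:

// Raise the upper bound of the ephemeral (outbound) port range
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxUserPort = 65534 (Default = 5000, Range = 5000-65534)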

Related

MariaDB Parallel Replication Performance Tuning Issues

This is my first ever post, so I would appreciate some guidance. The situation is as below:
Replicating from VM A (main production server) to VM B (slave used mainly for the DB replica, generating a few reports, and also replicating downstream to 2 other testing servers).
VM A Config:
32GB vRAM
8 vCPU (some Xeon CPU running at 2.4GHz)
1.5TB SSD
Windows Server 2012 R2
Production application running on XAMPP using Apache & MariaDB 10.1.38
VM B Config:
16GB vRAM
4 vCPU (same variant as VM A)
300 GB SSD
Same Windows Server 2012 R2 as VM A
Same platform as VM A: XAMPP with Apache & MariaDB 10.1.38
Problem:
No matter what settings I use, parallel replication just doesn't work like it should. Seconds_Behind_Master is continuously increasing. I've tried various values for binlog_commit_wait_count & binlog_commit_wait_usec as well, but to no avail. With one or two combinations the group commit ratio increases a little, but then it eventually drops back to <1.05. The my.ini settings for the slave (VM B) are pasted below.
my.ini:
key_buffer = 8M
max_allowed_packet = 32M
#was 256M
sort_buffer_size = 16M
#was 2M
net_buffer_length = 8K
read_buffer_size = 8M
#was 256K then 4M then 8M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 4M
query_cache_size=512M
query_cache_limit = 128M
#didn't exist, was 64M
join_buffer_size=128M
#didn't exist
thread_cache_size = 32
table_open_cache = 20000
#didn't exist
tmp_table_size = 256M
#didn't exist
max_heap_table_size = 256M
#didn't exist
log_error = "mysql_error.log"
expire_logs_days=30
log_bin_trust_function_creators = 1
slave-skip-errors = 1062,1060,1032
connect_timeout=100
max_connections = 2000
skip-host-cache
#skip-name-resolve
#innodb_force_recovery = 1
slave_parallel_threads = 4
#slave_parallel_workers = 4
slave_parallel_mode=optimistic
slave_domain_parallel_threads=4
slave_compressed_protocol = 1
#slave_parallel_max_queued = 131072
#slave_parallel_max_queued=16777216
server-id = 101
binlog-format = mixed
log-bin=mysql-bin
relay-log = mysql-relay-bin
log-slave-updates = 1
report_host=slave-main-replication-server
sync_binlog=1
#binlog_commit_wait_count = 30
binlog_commit_wait_count = 25
binlog_commit_wait_usec = 100000
innodb_buffer_pool_size = 8G
#was 2560M
#innodb_additional_mem_pool_size = 32M
innodb-buffer-pool-instances=4
#didn't exist
innodb_log_file_size = 256M
innodb_log_buffer_size = 64M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_io_capacity=1000
#default 200
innodb_io_capacity_max=3000
Any ideas would be appreciated. I've gone through dozens of posts and wikis and everything else in between of whatever I could find on the subject, but none of them helped. I've tried setting "sync_binlog=0" as well; nothing happened. Is it just the CPU cores causing a bottleneck? I'm all out of ideas. I've been trying to solve this for almost 2 months now. Please help.
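For reference, the group commit ratio mentioned above can be watched directly with two MariaDB status counters. This is a diagnostic sketch rather than something from the original post:

-- group commit ratio = Binlog_commits / Binlog_group_commits
SHOW GLOBAL STATUS LIKE 'Binlog_commits';
SHOW GLOBAL STATUS LIKE 'Binlog_group_commits';
-- A ratio close to 1 (like the <1.05 reported above) means transactions are
-- rarely committing together in a group on the master.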

Improve PostgreSQL insert performance when compared to Oracle. Low memory utilization by PostgreSQL processes

I am trying to improve the performance of PostgreSQL (version 9.2.3) inserts for a simple table with 1 bigint, 1 varchar, 1 float and 2 timestamps.
A simple replication of my JDBC program is attached. Here are the important points I want to mention:
I am running this program on the same system which hosts the PostgreSQL DB. (64 GB RAM and 8 CPUs.)
I am using INSERT statements, and I do NOT want to use the COPY statement. I have read and understand that COPY performs better, but here I am tuning INSERT performance.
I am using PreparedStatement.addBatch() and executeBatch() to insert in batches of thousands.
The insert performance scales well when I increase the batch size, but flattens out at around a batch size of 8000. What I notice is that the PostgreSQL process on the system is CPU saturated, as observed with the "top" command. The CPU usage of the postgres process steadily increases and tops out at 95% when the batch size reaches 8K. The other interesting thing I notice is that it uses only up to 200 MB of RAM per process.
In comparison, an Oracle DB scales much better, and the same number of inserts with comparable batch sizes finishes 3 to 4 times faster. I logged on to the Oracle DB machine (a Sun Solaris machine) and noticed that the CPU utilization peaks out at a much bigger batch size, and also that each Oracle thread is using 6 to 8 GB of memory.
Given that I have memory to spare, is there a way to increase the memory usage of a postgres process for better performance?
Here are my current postgresql settings:
temp_buffers = 256MB
bgwriter_delay = 100ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 4
maintenance_work_mem = 2GB
shared_buffers = 8GB
vacuum_cost_limit = 800
work_mem = 2GB
max_connections = 100
checkpoint_completion_target = 0.9
checkpoint_segments = 32
checkpoint_timeout =10min
checkpoint_warning =1min
wal_buffers = 32MB
wal_level = archive
cpu_tuple_cost = 0.03
effective_cache_size = 48GB
random_page_cost = 2
autovacuum = on
autovacuum_vacuum_cost_delay = 10ms
autovacuum_max_workers = 6
autovacuum_naptime = 5
autovacuum_vacuum_threshold = 100
autovacuum_analyze_threshold = 100
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
autovacuum_vacuum_cost_limit = -1
Here are the measurements:
Time to insert 2 million rows in PostgreSQL.
batch size - Execute batch time (sec)
1K - 73
2K - 64
4K - 60
8K - 59
10K - 59
20K - 59
40K - 59
Time to insert 4 million rows in Oracle.
batch size - Execute batch time (sec)
1K - 14
2K - 12
4K - 10
8K - 8.9
10K - 8.4
As you can see, Oracle inserts a 4 million row table much faster than PostgreSQL.
Here is the snippet of the program I am using for insertion.
stmt.executeUpdate("CREATE TABLE "
        + tableName
        + " (P_PARTKEY bigint not null, "
        + " P_NAME varchar(55) not null, "
        + " P_RETAILPRICE float not null, "
        + " P_TIMESTAMP Timestamp not null, "
        + " P_TS2 Timestamp not null)");
PreparedStatement pstmt = conn.prepareStatement(
        "INSERT INTO " + tableName + " VALUES (?, ?, ?, ?, ?)");
for (int i = start; i <= end; i++) {
    pstmt.setInt(1, i);
    pstmt.setString(2, "Magic Maker " + i);
    pstmt.setFloat(3, i);
    pstmt.setTimestamp(4, new Timestamp(1273017600000L));
    pstmt.setTimestamp(5, new Timestamp(1273017600000L));
    pstmt.addBatch();
    if (i % batchSize == 0) {
        pstmt.executeBatch(); // send the accumulated batch to the server
    }
}
pstmt.executeBatch(); // flush any remaining rows in the final partial batch
You might need to change these two parameters:
autovacuum_analyze_scale_factor = 0.002
autovacuum_vacuum_scale_factor = 0.001
autovacuum_analyze_scale_factor specifies the fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.1 (10% of the table size); in our case we lowered it to 0.002 to make it more aggressive.
autovacuum_vacuum_scale_factor specifies the fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of the table size); here it is lowered to 0.001. In other words, a VACUUM is triggered once the number of dead tuples exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * (rows in the table).
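On the INSERT side itself, one thing worth checking that the original snippet does not show is autocommit: if every executeBatch() runs in its own transaction, each batch pays a synchronous WAL flush. A minimal sketch of wrapping the whole load in one explicit transaction (my suggestion, not from the original post; the table, loop, and variable names follow the question's snippet):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

// Hypothetical helper: same loop as the question, but with explicit transaction control.
static void loadInBatches(Connection conn, String tableName,
                          int start, int end, int batchSize) throws SQLException {
    conn.setAutoCommit(false); // one transaction for the whole load
    try (PreparedStatement pstmt = conn.prepareStatement(
            "INSERT INTO " + tableName + " VALUES (?, ?, ?, ?, ?)")) {
        for (int i = start; i <= end; i++) {
            pstmt.setInt(1, i);
            pstmt.setString(2, "Magic Maker " + i);
            pstmt.setFloat(3, i);
            pstmt.setTimestamp(4, new Timestamp(1273017600000L));
            pstmt.setTimestamp(5, new Timestamp(1273017600000L));
            pstmt.addBatch();
            if (i % batchSize == 0) {
                pstmt.executeBatch(); // send a batch, but do not commit yet
            }
        }
        pstmt.executeBatch(); // flush the final partial batch
        conn.commit();        // a single commit (and WAL flush) at the end
    }
}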

MySQL Performance Issue with 32 GB RAM and Intel Xeon 2.70 GHz Quad Core processor

I have an issue running MySQL on Windows Server 2008, which has 32 GB RAM and an Intel Xeon 2.70 GHz quad-core processor.
The database is only 258 MB.
Running a PHP script which exports the data from the database and dumps it into DBF files takes 50 minutes.
CPU usage goes to around ~50%, mostly taken by mysqld.exe.
CPU usage goes to around ~30% even if a single user logs in. I tested with 3 users and it went up to 63%.
Below is my current configuration for my.ini:
[client]
port = 3306
socket = "C:/xampp/mysql/mysql.sock"
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "D:/ADAT_System/Database/data"
pid_file = "mysql.pid"
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
log-output=FILE
slow_query_log = ON
long_query_time = 5
query_cache_size = 128MB
query_cache_type = ON
query_cache_limit = 10MB
plugin_dir = "C:/xampp/mysql/lib/plugin/"
skip-federated
server-id
innodb_data_home_dir = "D:/ADAT_System/Database/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "D:/ADAT_System/Database/data"
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
[mysqlhotcopy]
interactive-timeout
Could someone please help me in optimizing MySQL? I am pretty new to this.
Thanks,
Himanshu
Generally a SQL request (given that the DB is local) should be quicker than that.
Try looking here for a great list of performance tweaks:
http://www.askapache.com/mysql/performance-tuning-mysql.html
That aside, I am presuming it's the structure of your requests.
Try to keep your calls down to very few requests (1-10) and handle the data as a lump sum, as in the sketch below.
Doing many small requests can hinder the system and drastically reduce performance.
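For example (a generic sketch with made-up table and column names, not taken from the poster's schema):

-- instead of one round trip per id from inside a PHP loop ...
--   SELECT name FROM customer WHERE id = 1;
--   SELECT name FROM customer WHERE id = 2;
-- ... fetch the whole set in a single request and iterate over the result in PHP:
SELECT id, name FROM customer WHERE id IN (1, 2, 3, 4, 5);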
Your server has more than enough RAM and processing power.
That's the best I can offer for now.
Look at MySQLTuner-0.6. It is on CodePlex. It gave me a huge number of hints for optimizing our environment.
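For what it's worth, MySQLTuner is a single Perl script you run on the database host; a minimal usage sketch (invocation details may differ between versions):

# run on the server; the script will prompt for an admin login if needed
perl mysqltuner.pl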

Caching not Working in Cassandra

I don't seem to have any caching enabled when checking in OpsCenter or cfstats. I'm running Cassandra 1.1.7 with Solandra on Debian. I have set the required global options in cassandra.yaml:
key_cache_size_in_mb: 800
key_cache_save_period: 14400
row_cache_size_in_mb: 800
row_cache_save_period: 15400
row_cache_provider: SerializingCacheProvider
Column Families were created as follows:
create column family example
with column_type = 'Standard'
and comparator = 'BytesType'
and default_validation_class = 'BytesType'
and key_validation_class = 'BytesType'
and read_repair_chance = 1.0
and dclocal_read_repair_chance = 0.0
and gc_grace = 864000
and min_compaction_threshold = 4
and max_compaction_threshold = 32
and replicate_on_write = true
and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
and caching = 'ALL';
OpsCenter shows no data available on the caching graphs, and cfstats doesn't show any cache-related fields:
Column Family: charsets
SSTable count: 1
Space used (live): 5558
Space used (total): 5558
Number of Keys (estimate): 128
Memtable Columns Count: 0
Memtable Data Size: 0
Memtable Switch Count: 0
Read Count: 61381
Read Latency: 0.123 ms.
Write Count: 0
Write Latency: NaN ms.
Pending Tasks: 0
Bloom Filter False Postives: 0
Bloom Filter False Ratio: 0.00000
Bloom Filter Space Used: 16
Compacted row minimum size: 1917
Compacted row maximum size: 2299
Compacted row mean size: 2299
Any help or suggestions are appreciated.
Sam
The caching stats have been moved from cfstats to info in Cassandra 1.1. If you run nodetool info you should see something like:
Key Cache : size 5552 (bytes), capacity 838860800 (bytes), 38 hits, 47 requests, 0.809 recent hit rate, 14400 save period in seconds
Row Cache : size 0 (bytes), capacity 838860800 (bytes), 0 hits, 0 requests, NaN recent hit rate, 15400 save period in seconds
This is because there are now global caches, rather than per-CF caches. It seems that OpsCenter needs updating for this change - maybe there is a later version available that will work.

Would going from 4 GB RAM to 8 GB increase Magento speed on localhost?

I upgraded my PC from a 2.1 GHz processor and 2 GB RAM to a dual-core 2.6 GHz processor and 4 GB RAM. Magento runs faster, but I am still not happy with it (it takes 4-6 seconds to open a page).
My memory usage is around 40% altogether.
Would upgrading to 8 GB RAM speed up my Magento locally?
I would say, by itself, no. Sharing resources on a local machine between MySQL and PHP with Magento is inherently slow in itself. Will you get more throughput? Probably, but not enough to notice.
You will get more of a performance gain by installing Varnish and enabling Magento full page caching AFTER you install more RAM. The Magento cache stores itself in RAM, and so does Varnish. Also make sure you have the APC cache installed. Those three COMBINED with more RAM will make all the difference in the world.
For Varnish, give it about 1 GB of RAM in its storage settings. Sounds like a lot, but it'll save your life.
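A minimal sketch of what that looks like when starting Varnish (these are standard varnishd options; the ports are placeholders for a local setup, not taken from the answer):

# cache in memory, capped at 1 GB; listen on :6081 and proxy the Apache/Magento backend
varnishd -a :6081 -b localhost:8080 -s malloc,1G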
For APC, give it at least 256 MB of room in the APC settings. It would probably behoove you to do 512 MB if you can afford it.
I am also going to include my Magento-optimized PHP.INI settings as well as my MySQL settings:
PHP.INI
max_execution_time = 18000
max_input_time = 60
memory_limit = 1024M
max_input_vars = 10000
post_max_size = 102M
upload_max_filesize = 100M
max_file_uploads = 20
default_socket_timeout = 60
pdo_mysql.cache_size = 2000
mysql.cache_size = 2000
mysqli.cache_size = 2000
apc.enabled = 1
apc.shm_segments = 1
apc.shm_size = 1024M
apc.num_files_hint = 10000
apc.user_entries_hint = 10000
apc.ttl = 0
apc.user_ttl = 0
apc.gc_ttl = 3600
apc.cache_by_default = 1
apc.filters = "apc\.php$"
apc.mmap_file_mask = "/tmp/apc.XXXXXX"
apc.slam_defense = 0
apc.file_update_protection = 2
apc.enable_cli = 0
apc.max_file_size = 10M
apc.use_request_time = 1
apc.stat = 1
apc.write_lock = 1
apc.report_autofilter = 0
apc.include_once_override = 0
apc.localcache = 0
apc.localcache.size = 256M
apc.coredump_unmap = 0
apc.stat_ctime = 0
apc.canonicalize = 1
apc.lazy_functions = 1
apc.lazy_classes = 1
And MySQL
MY.CNF
key_buffer = 256M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 32
myisam-recover = BACKUP
max_connections = 2500
query_cache_limit = 2M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[isamchk]
key_buffer = 16M
I hope that helps you
If your memory usage is 40% now, then no. Sufficient RAM does make a difference, but in this case the extra 4 GB wouldn't make much of a difference.
Magento is quite slow due to its complexity and the fact that it uses thousands of files.
To increase Magento's load speed, try to disable stuff you don't need in the admin section, and perhaps Google for other tips. Also, the load speed will differ between different browsers.
